Introduction to Neural Networks with Python

Challenge: Creating a Perceptron

Since the goal is to implement a multilayer perceptron, defining a Perceptron class helps organize and initialize the model efficiently. The class will contain a single attribute, layers, which is a list of Layer objects representing the structure of the network:

class Perceptron:
    def __init__(self, layers):
        self.layers = layers
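
Although this challenge only asks for the structure, the layers list is ultimately consumed by chaining each layer's forward() call in order. Here is a minimal sketch of that loop, assuming each layer exposes the forward() method described in the task below (the predict() helper is illustrative, not part of the challenge):

def predict(model, inputs):
    # Pass the data through each layer in order;
    # each layer's output becomes the next layer's input.
    outputs = inputs
    for layer in model.layers:
        outputs = layer.forward(outputs)
    return outputs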

The variables used to initialize the layers are:

  • input_size: the number of input features;
  • hidden_size: the number of neurons in each hidden layer (both hidden layers will have the same number of neurons in this case);
  • output_size: the number of neurons in the output layer.

The structure of the resulting multilayer perceptron will include:

  1. Input layer β†’ receives the data;
  2. Two hidden layers β†’ process the inputs and extract patterns;
  3. Output layer β†’ produces the final prediction.
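
To see how these pieces fit together dimensionally, here is a quick shape check. The sizes (input_size = 2, hidden_size = 4, output_size = 1) are illustrative values, not part of the challenge; each weight matrix follows the (n_neurons, n_inputs) convention used in the task below:

import numpy as np

# Illustrative sizes: input_size = 2, hidden_size = 4, output_size = 1
x = np.random.uniform(-1, 1, (2, 1))   # one sample as a column vector
W1 = np.random.uniform(-1, 1, (4, 2))  # hidden layer 1: (hidden_size, input_size)
W2 = np.random.uniform(-1, 1, (4, 4))  # hidden layer 2: (hidden_size, hidden_size)
W3 = np.random.uniform(-1, 1, (1, 4))  # output layer: (output_size, hidden_size)

# Chaining the dot products reduces (2, 1) -> (4, 1) -> (4, 1) -> (1, 1)
print(np.dot(W3, np.dot(W2, np.dot(W1, x))).shape)  # (1, 1)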
Task


Your goal is to set up the basic structure of a multilayer perceptron (MLP) by implementing the code for its layers.

Follow these steps carefully:

  1. Initialize layer parameters inside the __init__() method:
    • Create the weight matrix with shape (n_neurons, n_inputs);
    • Create the bias vector with shape (n_neurons, 1);
    • Fill both with random values from a uniform distribution in the range [-1, 1) using np.random.uniform().
  2. Implement forward propagation inside the forward() method:
    • Compute the raw output of each neuron using the dot product:
      np.dot(self.weights, self.inputs) + self.biases
      
    • Apply the assigned activation function to this result and return the activated output.
  3. Define the perceptron layers:
    • Create two hidden layers, each containing hidden_size neurons and using the ReLU activation function;
    • Create one output layer with output_size neuron(s) and the sigmoid activation function.

Solution
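
One possible implementation consistent with the steps above (a sketch, not necessarily the course's reference solution; the Layer(n_inputs, n_neurons, activation) argument order and the relu/sigmoid helpers are assumptions, since the task only names the activation functions):

import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class Layer:
    def __init__(self, n_inputs, n_neurons, activation):
        # Weight matrix: one row per neuron, one column per input
        self.weights = np.random.uniform(-1, 1, (n_neurons, n_inputs))
        # Bias vector: one bias per neuron, stored as a column vector
        self.biases = np.random.uniform(-1, 1, (n_neurons, 1))
        self.activation = activation

    def forward(self, inputs):
        self.inputs = inputs
        # Raw neuron output, then the assigned activation function
        raw_output = np.dot(self.weights, self.inputs) + self.biases
        return self.activation(raw_output)

class Perceptron:
    def __init__(self, layers):
        self.layers = layers

# Illustrative sizes; the challenge leaves the actual values to the dataset
input_size, hidden_size, output_size = 2, 4, 1

model = Perceptron([
    Layer(input_size, hidden_size, relu),    # first hidden layer
    Layer(hidden_size, hidden_size, relu),   # second hidden layer
    Layer(hidden_size, output_size, sigmoid) # output layer
])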
