Challenge: Creating a Perceptron
Since the goal is to implement a multilayer perceptron, defining a Perceptron class helps organize and initialize the model efficiently. The class will contain a single attribute, layers, which is a list of Layer objects representing the structure of the network:
class Perceptron:
    def __init__(self, layers):
        self.layers = layers
The variables used to initialize the layers are:
- input_size: the number of input features;
- hidden_size: the number of neurons in each hidden layer (both hidden layers have the same number of neurons in this case);
- output_size: the number of neurons in the output layer.
The structure of the resulting multilayer perceptron will include:
- Input layer: receives the data;
- Two hidden layers: process the inputs and extract patterns;
- Output layer: produces the final prediction.
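With this structure, a forward pass through the whole network simply chains each layer's output into the next layer's input. A minimal sketch of how the Perceptron class could do this (the forward() method below is an assumption for illustration and is not shown in the lesson):

```python
class Perceptron:
    def __init__(self, layers):
        self.layers = layers

    def forward(self, inputs):
        # Chain the layers: each layer's output becomes
        # the next layer's input.
        for layer in self.layers:
            inputs = layer.forward(inputs)
        return inputs
```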
Swipe to start coding
Your goal is to set up the basic structure of a multilayer perceptron (MLP) by implementing the code for its layers.
Follow these steps carefully:
- Initialize layer parameters inside the __init__() method:
  - Create the weight matrix with shape (n_neurons, n_inputs);
  - Create the bias vector with shape (n_neurons, 1);
  - Fill both with random values from a uniform distribution in the range [-1, 1) using np.random.uniform().
- Implement forward propagation inside the forward() method:
  - Compute the raw output of each neuron using the dot product: np.dot(self.weights, self.inputs) + self.biases;
  - Apply the assigned activation function to this result and return the activated output.
- Define the perceptron layers:
  - Create two hidden layers, each containing hidden_size neurons and using the ReLU activation function;
  - Create one output layer with output_size neuron(s) and the sigmoid activation function.
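The steps above can be sketched as follows. This is a minimal reference sketch, not the official solution: the activation helpers and the example sizes (2 input features, 4 hidden neurons) are assumptions for illustration.

```python
import numpy as np

# Assumed activation helpers (ReLU and sigmoid are named in the
# task, but these exact functions are not given in the source).
def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class Layer:
    def __init__(self, n_inputs, n_neurons, activation):
        # Weight matrix: one row per neuron, one column per input,
        # filled with uniform random values in [-1, 1).
        self.weights = np.random.uniform(-1, 1, (n_neurons, n_inputs))
        # Bias column vector: one bias per neuron.
        self.biases = np.random.uniform(-1, 1, (n_neurons, 1))
        self.activation = activation

    def forward(self, inputs):
        self.inputs = inputs
        # Raw neuron outputs, then the layer's activation function.
        raw = np.dot(self.weights, self.inputs) + self.biases
        return self.activation(raw)

# Example sizes (assumed for illustration): 2 input features,
# 4 neurons per hidden layer, 1 output neuron.
input_size, hidden_size, output_size = 2, 4, 1
layers = [
    Layer(input_size, hidden_size, relu),
    Layer(hidden_size, hidden_size, relu),
    Layer(hidden_size, output_size, sigmoid),
]
```

Feeding a column vector of shape (input_size, 1) through these layers in order yields a (1, 1) output squashed into (0, 1) by the sigmoid.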