Challenge: Creating a Perceptron

Since our goal is to implement a multilayer perceptron, creating a Perceptron class will simplify model initialization. Its only attribute, layers, is essentially a list of the Layer objects that define the structure of the network:

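The exact starter snippet may vary, but a rough skeleton of such a class, which takes the three sizes described below and stores the layers in a list, could look like this:

```python
class Perceptron:
    def __init__(self, input_size, hidden_size, output_size):
        # The only attribute: a list of Layer objects defining the network structure
        self.layers = [
            # Layer(...),  # first hidden layer
            # Layer(...),  # second hidden layer
            # Layer(...),  # output layer
        ]
```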

The layers are initialized using the following variables:

  • input_size: the number of input features;
  • hidden_size: the number of neurons in each hidden layer (both hidden layers will have the same number of neurons in this case);
  • output_size: the number of neurons in the output layer.
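For example, with a hypothetical dataset of two input features, four neurons per hidden layer, and a single output neuron (these sizes are illustrative, not prescribed by the course), the model would be created as:

```python
# Hypothetical sizes; pick values that match your dataset
model = Perceptron(input_size=2, hidden_size=4, output_size=1)
```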

The resulting perceptron should therefore have the following structure: the input features feed into the first hidden layer of hidden_size neurons, which feeds into a second hidden layer of the same size, which in turn feeds into the output layer of output_size neurons.

Task


Your goal is to set up the basic structure of the perceptron by implementing its layers; one possible implementation is sketched in the Solution section below:

  1. Initialize the weights (a matrix) and biases (a vector) with random values from a uniform distribution in the range [-1, 1) using NumPy.
  2. Compute the raw output values of the neurons in the forward() method of the Layer class.
  3. Apply the activation function to the raw outputs in the forward() method of the Layer class and return the result.
  4. Define three layers in the Perceptron class: two hidden layers with the same number of neurons and one output layer. Both hidden layers should use the relu activation function, while the output layer should use sigmoid.

Solution
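One possible implementation that satisfies the four steps above is sketched below. The parameter names n_inputs and n_neurons, as well as the column-vector shape convention for inputs and biases, are assumptions and may differ from the course's starter code and reference solution:

```python
import numpy as np

def relu(x):
    # ReLU activation: element-wise max(0, x)
    return np.maximum(0, x)

def sigmoid(x):
    # Sigmoid activation: squashes raw outputs into the range (0, 1)
    return 1 / (1 + np.exp(-x))

class Layer:
    def __init__(self, n_inputs, n_neurons, activation_function):
        # Step 1: weights (matrix) and biases (vector) drawn uniformly from [-1, 1)
        self.weights = np.random.uniform(-1, 1, (n_neurons, n_inputs))
        self.biases = np.random.uniform(-1, 1, (n_neurons, 1))
        self.activation = activation_function

    def forward(self, inputs):
        # Step 2: raw (pre-activation) outputs of the neurons
        raw_output = np.dot(self.weights, inputs) + self.biases
        # Step 3: apply the activation function and return the result
        return self.activation(raw_output)

class Perceptron:
    def __init__(self, input_size, hidden_size, output_size):
        # Step 4: two ReLU hidden layers of equal size followed by a sigmoid output layer
        self.layers = [
            Layer(input_size, hidden_size, relu),
            Layer(hidden_size, hidden_size, relu),
            Layer(hidden_size, output_size, sigmoid),
        ]
```

A quick way to check the shapes is to push a single sample through the layers in order:

```python
model = Perceptron(input_size=2, hidden_size=4, output_size=1)
x = np.random.uniform(-1, 1, (2, 1))  # one sample as a column vector
out = x
for layer in model.layers:
    out = layer.forward(out)
print(out.shape)  # (1, 1): a single sigmoid output in (0, 1)
```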
