Single Neuron Implementation | Neural Network from Scratch
Introduction to Neural Networks
Single Neuron Implementation

The fundamental computational unit of a neural network is the neuron. A neuron can be visualized as a small processing unit that takes multiple inputs, processes them, and produces a single output.

Here's what happens step by step:

  1. Each input is multiplied by a corresponding weight. The weights are learnable parameters that determine the importance of the corresponding input.
  2. All the weighted inputs are summed together.
  3. In our implementation, an additional parameter called the bias is added to the input sum. The bias allows the neuron to shift its output up or down, adding flexibility to its modeling capability.
  4. The resulting sum is passed through an activation function. We are using the sigmoid function, which squashes values into the range (0, 1). A minimal code sketch of these steps is shown below.
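
To make these steps concrete, here is a minimal sketch of the forward computation in Python. The NumPy usage and the example values are assumptions for illustration, not the course's own code:

import numpy as np

def sigmoid(x):
    # Sigmoid activation: 1 / (1 + e^(-x)), squashes any real value into (0, 1).
    return 1 / (1 + np.exp(-x))

inputs = np.array([0.5, -0.2, 0.1])   # example inputs
weights = np.array([0.4, 0.7, -0.3])  # one weight per input (learnable)
bias = 0.1                            # bias term (also learnable)

# Steps 1-2: multiply each input by its weight and sum the results.
# Step 3: add the bias to the weighted sum.
input_sum = np.dot(inputs, weights) + bias

# Step 4: pass the sum through the activation function.
output = sigmoid(input_sum)
print(output)  # a single value strictly between 0 and 1

Because the sigmoid squashes any real number into (0, 1), the neuron's output stays bounded no matter how large or small the weighted sum becomes.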

Note

The bias of the neuron is also a trainable parameter.

Task

Implement the basic structure of a neuron. Complete the missing parts of the neuron class:

  1. Enter the number of inputs of the neuron.
  2. Use the uniform function to generate a random bias for every neuron.
  3. Enter the activation function of the neuron.

Once you've completed this task, click the button below the code to check your solution.
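
If you get stuck, a sketch of one possible Neuron class is shown below. It is not the course's exact template: the names n_inputs and activate, the NumPy import, and the uniform range from -1 to 1 are assumptions made for illustration.

import numpy as np

def sigmoid(x):
    # Sigmoid activation function: squashes values into the range (0, 1).
    return 1 / (1 + np.exp(-x))

class Neuron:
    def __init__(self, n_inputs):
        # One random weight per input and a random bias, all drawn
        # from a uniform distribution (the range is an illustrative choice).
        self.weights = np.random.uniform(-1, 1, n_inputs)
        self.bias = np.random.uniform(-1, 1)

    def activate(self, inputs):
        # Weighted sum of the inputs plus the bias...
        input_sum = np.dot(inputs, self.weights) + self.bias
        # ...passed through the activation function.
        return sigmoid(input_sum)

# Example usage: a neuron with 3 inputs.
neuron = Neuron(3)
print(neuron.activate(np.array([0.5, -0.2, 0.1])))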

