Learn Single Neuron Implementation | Neural Network from Scratch
Single Neuron Implementation

Definition

A neuron is the basic computational unit of a neural network. It processes multiple inputs and generates a single output, enabling the network to learn and make predictions.

For now, we want to build a neural network with a single neuron. As an example, let's say we'll use it for a binary classification task, such as spam detection, where 0 represents a ham (non-spam) email and 1 represents a spam email.

The neuron will take numerical features related to emails as inputs and produce an output between 0 and 1, representing the probability that an email is spam.

Here's what happens step by step:

  1. Each input is multiplied by a corresponding weight. The weights are learnable parameters that determine the importance of each input;

  2. All the weighted inputs are summed together;

  3. An additional parameter called bias is added to the input sum. The bias allows the neuron to shift its output up or down, providing flexibility to the model;

  4. The input sum is then passed through an activation function. Since we have only a single neuron, which directly produces the final output (a probability), we'll use the sigmoid function, which compresses values into the range (0, 1).
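The four steps above can be traced with a small worked example. The feature values, weights, and bias below are made up purely for illustration:

```python
import numpy as np

# Hypothetical feature values for one email, with made-up weights and bias
inputs = np.array([0.5, 0.3, 0.2])
weights = np.array([0.4, -0.6, 0.9])  # learnable parameters (step 1)
bias = 0.1                            # learnable parameter (step 3)

# Steps 1-3: multiply, sum, and add the bias
weighted_sum = np.dot(inputs, weights) + bias

# Step 4: squash the raw value into (0, 1) with the sigmoid
output = 1 / (1 + np.exp(-weighted_sum))
print(output)  # a probability strictly between 0 and 1
```

Whatever the raw weighted sum is, the final output always lands strictly between 0 and 1, which is what lets us read it as a spam probability.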

Note

The bias of the neuron is also a trainable parameter.

Neuron Class

A neuron needs to store its weights and bias, making a class a natural way to group these related properties.

Note

While this class won't be part of the final neural network implementation, it effectively illustrates key principles.

The class stores two attributes:
  • weights: a list of randomly initialized values that determine how important each input (n_inputs is the number of inputs) is to the neuron;

  • bias: a randomly initialized value that helps the neuron make flexible decisions.

Weights and bias should be randomly initialized with small values between -1 and 1, drawn from a uniform distribution, to break symmetry and ensure that different neurons learn different features.

To recap, NumPy provides the random.uniform() function to generate a random number or an array (by specifying the size argument) of random numbers from a uniform distribution within the [low, high) range.
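A minimal sketch of the constructor, assuming the attribute names used in the text (`weights`, `bias`) and initialization with `np.random.uniform()`:

```python
import numpy as np

class Neuron:
    def __init__(self, n_inputs):
        # One weight per input, drawn uniformly from [-1, 1)
        self.weights = np.random.uniform(-1, 1, size=n_inputs)
        # A single bias value, also drawn uniformly from [-1, 1)
        self.bias = np.random.uniform(-1, 1)

neuron = Neuron(3)
print(neuron.weights)  # array of 3 values in [-1, 1)
print(neuron.bias)     # single value in [-1, 1)
```

Passing `size=n_inputs` produces an array of weights in one call, while omitting `size` for the bias yields a single scalar.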


Forward Propagation

Additionally, the Neuron class should include an activate() method, which computes the weighted sum of the inputs and applies the activation function (sigmoid in our case).

In fact, if we have two vectors of equal length (weights and inputs), the weighted sum can be computed using the dot product of these vectors:

\sum_{i=1}^{n} w_i x_i = \mathbf{w} \cdot \mathbf{x}

This allows us to compute the weighted sum in a single line of code using the numpy.dot() function, eliminating the need for a loop. The bias can then be directly added to the result to get input_sum_with_bias. The output is then computed by applying the sigmoid activation function:
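A self-contained sketch of the class with an `activate()` method, assuming the `weights` and `bias` attributes described earlier:

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

class Neuron:
    def __init__(self, n_inputs):
        self.weights = np.random.uniform(-1, 1, size=n_inputs)
        self.bias = np.random.uniform(-1, 1)

    def activate(self, inputs):
        # Weighted sum via the dot product -- no explicit loop needed
        input_sum_with_bias = np.dot(self.weights, inputs) + self.bias
        # Apply the activation function to get the final output
        return sigmoid(input_sum_with_bias)

neuron = Neuron(3)
print(neuron.activate(np.array([0.5, 0.3, 0.2])))  # a value in (0, 1)
```

Because `np.dot()` handles the element-wise multiplication and summation in one call, `activate()` stays a two-line method regardless of how many inputs the neuron has.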


Activation Functions

The formula for the sigmoid function is as follows, given that z represents the weighted sum of inputs with bias added (the raw output value) for this particular neuron:

\sigma(z) = \frac{1}{1 + e^{-z}}

Using this formula, sigmoid can be implemented as a simple function in Python:
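One minimal implementation, using `np.exp()` so it also works element-wise on arrays:

```python
import numpy as np

def sigmoid(z):
    # Maps any real number into the range (0, 1)
    return 1 / (1 + np.exp(-z))

print(sigmoid(0))  # 0.5
```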


The formula for the ReLU function is as follows; it simply sets the output equal to z if z is positive and to 0 otherwise:

\mathrm{ReLU}(z) = \max(0, z)
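A direct NumPy translation of this formula might look like the following:

```python
import numpy as np

def relu(z):
    # Returns z for positive inputs and 0 otherwise
    return np.maximum(0, z)

print(relu(3.5))   # 3.5
print(relu(-2.0))  # 0.0
```

Using `np.maximum()` rather than Python's built-in `max()` lets the function apply element-wise to whole arrays of raw values at once.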

1. What is the role of the bias term in a single neuron?

2. Why do we initialize weights with small random values rather than zeros?



Section 2. Chapter 1
