Introduction to Neural Networks: Summary
Concept of a Neural Network

A neuron is the basic unit of information processing in a neural network.

Weights are coefficients that determine how important each input is to the neuron.

Training a neural network means adjusting the weights of each neuron so that the network's predictions are as accurate as possible.

The activation function converts the weighted sum of a neuron's inputs into its output value.

Examples of activation functions, sketched in code after this list, include:

  1. Sigmoid Function;
  2. ReLU;
  3. Hyperbolic Tangent.
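
A minimal NumPy sketch of these three functions (the function names and test values here are our own, not code from the course):

```python
import numpy as np

def sigmoid(x):
    # Squashes any real number into the range (0, 1)
    return 1 / (1 + np.exp(-x))

def relu(x):
    # Keeps positive values, zeroes out negative ones
    return np.maximum(0, x)

def tanh(x):
    # Squashes any real number into the range (-1, 1)
    return np.tanh(x)

print(sigmoid(0.0), relu(-2.0), tanh(1.0))  # 0.5, 0.0, ~0.76
```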

Forward propagation is the process by which information passes through the neural network from the input layer to the output layer. When the information reaches the output layer, the network makes a prediction, or inference, based on the data it has processed.
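
To make this concrete, the forward step of a single neuron can be sketched as a weighted sum passed through an activation function (all numbers below are made up):

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

inputs = np.array([0.5, -1.0, 2.0])     # values coming into the neuron
weights = np.array([0.4, 0.7, -0.2])    # importance of each input

weighted_sum = np.dot(inputs, weights)  # combine inputs according to their weights
output = sigmoid(weighted_sum)          # activation function produces the neuron's output
print(output)                           # about 0.289
```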

Backpropagation is the process in which the error is propagated backward through the network and used to adjust the weights of the neurons.

Neural Network from Scratch

The bias allows the neuron to shift its output, adding flexibility to its modeling capability. The bias is also a trainable parameter, just like the weights.

A Multilayer Perceptron (MLP) consists of multiple layers:

  1. Input layer: receives the input data;
  2. Hidden layers: process the data and extract patterns;
  3. Output layer: produces the final prediction or classification.

Each layer consists of multiple neurons, and the output from one layer becomes the input for the next layer.
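
A minimal sketch of this layer-to-layer flow, assuming one hidden layer and made-up sizes and values:

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

rng = np.random.default_rng(0)

X = rng.normal(size=(4, 3))        # 4 samples, 3 input features (input layer)

W1 = rng.normal(size=(3, 5))       # weights of the hidden layer (5 neurons)
b1 = np.zeros(5)                   # biases of the hidden layer
W2 = rng.normal(size=(5, 1))       # weights of the output layer (1 neuron)
b2 = np.zeros(1)                   # bias of the output layer

hidden = relu(X @ W1 + b1)         # output of the hidden layer...
output = sigmoid(hidden @ W2 + b2) # ...becomes the input of the output layer
print(output.shape)                # (4, 1): one prediction per sample
```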

We can split the backpropagation algorithm into several steps (sketched in code after the list):

  1. Forward Propagation;
  2. Error Computing;
  3. Calculating the Gradient (Delta);
  4. Modifying Weights and Biases (Taking a Step in Gradient Descent).
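
A hedged sketch of a training loop that follows these four steps, using a single sigmoid neuron, a squared-error loss, and made-up data:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

# Toy data: one sample with 2 features and its target value (made-up numbers)
x = np.array([1.0, 2.0])
y_true = 1.0

w = np.array([0.1, -0.3])   # trainable weights
b = 0.0                     # trainable bias
learning_rate = 0.5

for _ in range(100):
    # 1. Forward propagation
    z = np.dot(x, w) + b
    y_pred = sigmoid(z)

    # 2. Error computing (squared error here)
    error = y_pred - y_true

    # 3. Calculating the gradient (delta) via the chain rule
    delta = error * y_pred * (1 - y_pred)   # d(loss)/dz for squared error + sigmoid
    grad_w = delta * x
    grad_b = delta

    # 4. Modifying weights and biases (a step of gradient descent)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print(sigmoid(np.dot(x, w) + b))  # prediction moves toward y_true = 1.0
```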

The learning rate is an integral component of the gradient descent algorithm and can be thought of as the pace of training. A higher learning rate accelerates the training process; however, an excessively high rate might cause the neural network to overlook valuable insights and patterns within the data.
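
A tiny, made-up example of how the learning rate scales a single gradient descent step:

```python
weight = 2.0
gradient = 0.8   # made-up gradient of the loss with respect to this weight

for learning_rate in (0.01, 0.1, 1.0):
    step = learning_rate * gradient
    print(learning_rate, weight - step)  # larger rates move the weight further per step
```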

There are many different ways to measure the quality of a model (a few are sketched in code after the list):

  1. Accuracy;
  2. Mean Squared Error (MSE);
  3. Cross-Entropy;
  4. And many others...
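
Hedged NumPy sketches of the first three metrics, using made-up labels and probabilities:

```python
import numpy as np

y_true = np.array([1, 0, 1, 1])            # made-up ground-truth labels
y_pred = np.array([1, 0, 0, 1])            # made-up predicted labels
y_prob = np.array([0.9, 0.2, 0.4, 0.7])    # made-up predicted probabilities

# Accuracy: share of correct predictions
accuracy = np.mean(y_true == y_pred)

# Mean Squared Error: average squared difference between targets and predictions
mse = np.mean((y_true - y_prob) ** 2)

# Binary cross-entropy: penalizes confident but wrong probabilities
cross_entropy = -np.mean(y_true * np.log(y_prob) + (1 - y_true) * np.log(1 - y_prob))

print(accuracy, mse, cross_entropy)
```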

A typical workflow with such a model has three steps (sketched in code below):

  1. Creating a model;
  2. Training the model;
  3. Predicting output values.
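
A minimal end-to-end sketch of these three steps, assuming a small from-scratch NumPy network; the class name SimpleNN and all hyperparameters are our own choices, not necessarily the course's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

class SimpleNN:
    """A tiny one-hidden-layer network (hypothetical class, for illustration only)."""

    def __init__(self, n_inputs, n_hidden, n_outputs, learning_rate=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(size=(n_inputs, n_hidden))   # hidden-layer weights
        self.b1 = np.zeros(n_hidden)                      # hidden-layer biases
        self.W2 = rng.normal(size=(n_hidden, n_outputs))  # output-layer weights
        self.b2 = np.zeros(n_outputs)                     # output-layer bias
        self.lr = learning_rate

    def predict(self, X):
        # Forward propagation through both layers
        self.hidden = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(self.hidden @ self.W2 + self.b2)

    def fit(self, X, y, epochs=10000):
        for _ in range(epochs):
            out = self.predict(X)
            # Backpropagation: compute deltas layer by layer (squared-error loss)
            delta_out = (out - y) * out * (1 - out)
            delta_hidden = (delta_out @ self.W2.T) * self.hidden * (1 - self.hidden)
            # Gradient descent step on weights and biases
            self.W2 -= self.lr * self.hidden.T @ delta_out
            self.b2 -= self.lr * delta_out.sum(axis=0)
            self.W1 -= self.lr * X.T @ delta_hidden
            self.b1 -= self.lr * delta_hidden.sum(axis=0)

# Creating a model
model = SimpleNN(n_inputs=2, n_hidden=8, n_outputs=1)

# Training the model on a made-up XOR dataset
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
model.fit(X, y)

# Predicting output values (ideally close to the XOR targets 0, 1, 1, 0)
print(model.predict(X).round(2))
```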

Conclusion

What to look for when choosing between traditional models and neural networks:

  1. Dataset Size;
  2. Complexity of a Problem;
  3. Interpretability;
  4. Resources.

The most commonly used types of neural networks:

  1. Feedforward Neural Networks (FNN) or Multi-layer Perceptrons (MLP);
  2. Convolutional Neural Networks (CNN);
  3. Recurrent Neural Networks (RNN);
  4. Autoencoders (AE);
  5. Generative Adversarial Networks (GAN);
  6. Modular Neural Networks (MNN).

Libraries for Deep Learning (a short PyTorch sketch follows the list):

  1. TensorFlow;
  2. PyTorch.
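
For example, a comparable feedforward model can be defined in a few lines with PyTorch (the layer sizes below are made up):

```python
import torch
from torch import nn

# A small feedforward network: 2 inputs -> 8 hidden units -> 1 output
model = nn.Sequential(
    nn.Linear(2, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
    nn.Sigmoid(),
)

x = torch.rand(4, 2)      # a batch of 4 made-up samples
print(model(x).shape)     # torch.Size([4, 1])
```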