Neural Network Fundamentals

Challenge: Training the Perceptron
Before proceeding with training the perceptron, keep in mind that it uses the binary cross-entropy loss function discussed earlier. The final key concept before implementing backpropagation is the formula for the derivative of this loss function with respect to the output activations, $a^n$. Below are the formulas for the loss function and its derivative:

$$\begin{aligned}
L &= -(y \log(\hat{y}) + (1-y) \log(1 - \hat{y}))\\
da^n &= \frac{\hat{y} - y}{\hat{y}(1 - \hat{y})}
\end{aligned}$$

where $a^n = \hat{y}$.
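To make both formulas concrete, here is a small numerical sketch; the values are made up purely for illustration:

import numpy as np

y = 1.0        # true label
y_hat = 0.8    # model output (activation of the last layer)

# Binary cross-entropy loss for a single example
loss = -(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))   # ≈ 0.223

# Gradient of the loss w.r.t. the output activation a^n = y_hat
da = (y_hat - y) / (y_hat * (1 - y_hat))                     # = -1.25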

To verify that the perceptron is training correctly, the fit() method also prints the average loss at each epoch. This is calculated by averaging the loss over all training examples in that epoch:

for epoch in range(epochs):
    loss = 0

    for i in range(training_data.shape[0]):
        # ... forward pass for example i, producing `output` for its label `target` ...
        loss += -(target * np.log(output) + (1 - target) * np.log(1 - output))

    # Average the accumulated loss over all examples in this epoch
    average_loss = loss[0, 0] / training_data.shape[0]
    print(f'Loss at epoch {epoch + 1}: {average_loss:.3f}')

In formula form, the average loss over an epoch is:

$$L = -\frac{1}{N} \sum_{i=1}^{N} \left(y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i)\right)$$

Finally, the formulas for computing gradients in each layer are as follows:

$$\begin{aligned}
dz^l &= da^l \odot f'^l(z^l)\\
dW^l &= dz^l \cdot (a^{l-1})^T\\
db^l &= dz^l\\
da^{l-1} &= (W^l)^T \cdot dz^l
\end{aligned}$$

Implementation Details to Remember

When translating these formulas into Python code for the backward() method, remember the NumPy operations discussed in the previous chapters:

  • The $\odot$ operator denotes element-wise multiplication, which is done using the standard * operator in Python.
  • The $\cdot$ operator denotes a dot product, implemented using the np.dot() function.
  • The $T$ superscript denotes a matrix transpose, handled by the .T attribute.
  • To compute $f'^l(z^l)$, you can dynamically call the derivative of the layer's activation function using self.activation.derivative(self.outputs).

This makes the general structure of the backward() method look like this:

def backward(self, da, learning_rate):
    dz = ... # using da and self.activation.derivative()
    d_weights = ... # using np.dot() and .T
    d_biases = ...
    da_prev = ...

    self.weights -= learning_rate * d_weights
    self.biases -= learning_rate * d_biases

    return da_prev
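
For reference, one way those placeholders could be filled in is sketched below. It translates the formulas line by line and assumes the layer keeps its input from the forward pass in a self.inputs attribute; that attribute name is an assumption made for the example, not something defined in the text:

def backward(self, da, learning_rate):
    # dz^l = da^l ⊙ f'^l(z^l), using the activation's derivative
    dz = da * self.activation.derivative(self.outputs)
    # dW^l = dz^l · (a^{l-1})^T; self.inputs is assumed to store a^{l-1}
    d_weights = np.dot(dz, self.inputs.T)
    # db^l = dz^l
    d_biases = dz
    # da^{l-1} = (W^l)^T · dz^l
    da_prev = np.dot(self.weights.T, dz)

    # Gradient descent update of the layer parameters
    self.weights -= learning_rate * d_weights
    self.biases -= learning_rate * d_biases

    return da_prev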

Similarly, when putting everything together in the fit() method, remember that you need to iterate through the network backwards to propagate the error. The general structure looks like this:

def fit(self, training_data, labels, epochs, learning_rate):
    # ... (Epoch loop and data shuffling) ...
            # Forward propagation
            output = ...

            # Computing the gradient of the loss function w.r.t. output (da^n)
            da = ...

            # Backward propagation through all layers
            for layer in self.layers[::-1]:
                da = ... # Call the backward() method of the layer
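
For orientation, one possible shape of that inner training step is sketched below. It assumes a forward() method that runs a single example through all layers, and the names example and target are placeholders chosen here for the current sample and its label:

    # Forward propagation for one training example
    output = self.forward(example)

    # da^n: gradient of binary cross-entropy w.r.t. the output activation
    da = (output - target) / (output * (1 - output))

    # Backward propagation through all layers, from last to first
    for layer in self.layers[::-1]:
        da = layer.backward(da, learning_rate)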

The sample training data (X_train) and the corresponding labels (y_train) are stored as NumPy arrays in the utils.py file. Instances of the activation functions are also defined there:

relu = ReLU()
sigmoid = Sigmoid()
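
As a usage illustration only, training could then be launched along these lines; the Layer constructor signature, layer sizes, and epoch count below are assumptions made for the example, not part of the provided code:

from utils import X_train, y_train, relu, sigmoid

# Hypothetical architecture: one hidden ReLU layer and a sigmoid output layer
model = Perceptron([
    Layer(X_train.shape[1], 4, relu),   # input size inferred from the data (assumed)
    Layer(4, 1, sigmoid),
])

model.fit(X_train, y_train, epochs=100, learning_rate=0.01)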
Task


Your goal is to complete the training process for a multilayer perceptron by implementing backpropagation and updating the model parameters.

Follow these steps carefully:

  1. Implement the backward() method in the Layer class:
    • Compute the following gradients:
      • dz: derivative of the loss with respect to the pre-activation values, using the derivative of the activation function;
      • d_weights: gradient of the loss with respect to the weights, calculated as the dot product of dz and the transposed input vector;
      • d_biases: gradient of the loss with respect to the biases, equal to dz;
      • da_prev: gradient of the loss with respect to the activations of the previous layer, obtained by multiplying the transposed weight matrix by dz.
    • Update the weights and biases using the learning rate.
  2. Complete the fit() method in the Perceptron class:
    • Compute the model output by calling the forward() method;
    • Calculate the loss using the cross-entropy formula;
    • Compute $da^n$, the derivative of the loss with respect to the output activations;
    • Loop backward through the layers, performing backpropagation by calling each layer's backward() method.
  3. Check the training behavior:
    • If everything is implemented correctly, the loss should steadily decrease with each epoch when using a learning rate of 0.01.
