Challenge: Fitting a Line with Gradient Descent
A student is exploring how to use gradient descent to fit a straight line to a small dataset. The dataset shows years of experience versus salary (in thousands), and the goal is to find the best-fitting line using an iterative update rule.
Your task is to adjust the slope (m) and intercept (b) so that the line closely follows the data points.
The expression you are trying to minimize is:
$$J(m, b) = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - (m x_i + b)\right)^2$$

The gradient descent update rules for minimizing this function are:

$$m \leftarrow m - \alpha \frac{\partial J}{\partial m}, \qquad b \leftarrow b - \alpha \frac{\partial J}{\partial b}$$

Where:
- α is the learning rate (step size);
- ∂J/∂m is the partial derivative of the loss function with respect to m;
- ∂J/∂b is the partial derivative of the loss function with respect to b.
This loss measures how far off your predicted points are from the actual data; smaller values mean the line fits the data better.
To find the values of m and b, use gradient descent.
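Differentiating the loss with respect to each parameter gives the explicit gradients the update rules need (a standard derivation, stated here for reference since the challenge text leaves it implicit):

$$\frac{\partial J}{\partial m} = -\frac{2}{n}\sum_{i=1}^{n} x_i\left(y_i - (m x_i + b)\right), \qquad \frac{\partial J}{\partial b} = -\frac{2}{n}\sum_{i=1}^{n}\left(y_i - (m x_i + b)\right)$$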
- Complete the Python code below to implement the gradient descent steps.
- Fill in the missing expressions using basic Python operations.
- Track how the values of m and b change as the algorithm runs (a sketch of the full loop follows this list).
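Since the platform's starter code is not reproduced here, below is a minimal sketch of the complete gradient descent loop using only basic Python operations. The dataset values, learning rate `alpha`, and `epochs` count are illustrative assumptions, not the challenge's actual numbers:

```python
# Hypothetical dataset: years of experience vs. salary in thousands
x = [1, 2, 3, 4, 5]
y = [30, 35, 45, 50, 60]

m, b = 0.0, 0.0    # start from a flat line
alpha = 0.01       # learning rate (assumed value)
epochs = 1000      # number of update steps (assumed value)
n = len(x)

for epoch in range(epochs):
    # Gradients of the mean squared error with respect to m and b
    grad_m = (-2 / n) * sum(x[i] * (y[i] - (m * x[i] + b)) for i in range(n))
    grad_b = (-2 / n) * sum(y[i] - (m * x[i] + b) for i in range(n))

    # Update rules: step against the gradient
    m -= alpha * grad_m
    b -= alpha * grad_b

    # Track how m and b change as the algorithm runs
    if epoch % 100 == 0:
        print(f"epoch {epoch}: m = {m:.3f}, b = {b:.3f}")

print(f"final line: y = {m:.3f}x + {b:.3f}")
```

Printing every 100 epochs makes the convergence visible: both parameters move quickly at first and then settle as the gradients shrink toward zero.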