Challenge: Compare Convergence Speed
Task
You will simulate gradient descent on a simple linear regression problem to compare convergence speed with and without feature scaling.
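For reference, for the one-parameter model ŷ = w * X, the MSE loss and its gradient (the quantity computed by the grad line in the function below) are:

    L(w) = \frac{1}{n} \sum_{i=1}^{n} (y_i - w x_i)^2,
    \qquad
    \frac{dL}{dw} = -\frac{2}{n} \sum_{i=1}^{n} x_i (y_i - w x_i)

Gradient descent repeatedly moves the weight against this gradient: w ← w − lr · dL/dw.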
Steps:
- Generate synthetic data X (one feature) and y using the relation y = 3 * X + noise.
- Implement a simple gradient descent function that minimizes MSE loss:
    import numpy as np

    def gradient_descent(X, y, lr, steps):
        w = 0.0
        history = []
        for _ in range(steps):
            # Gradient of the MSE loss mean((y - w * X) ** 2) with respect to w.
            grad = -2 * np.mean(X * (y - w * X))
            w -= lr * grad
            history.append(w)  # record the weight after every update
        return np.array(history)

- Run gradient descent twice:
  - on the original X,
  - and on the standardized X_scaled = (X - mean) / std.
- Plot or print the loss decrease for both to see that scaling accelerates convergence.
- Compute and print final weights and losses for both cases.
Solution
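Below is a minimal sketch of one possible solution. The sample size, noise level, random seed, and the hyperparameters lr = 0.4 and steps = 100 are illustrative assumptions, not values prescribed by the task. The feature is generated on a deliberately small, zero-mean scale, so that standardization changes only its scale: the model has no bias term, and a non-zero feature mean would otherwise change the problem the scaled run is solving.

    import numpy as np

    def gradient_descent(X, y, lr, steps):
        w = 0.0
        history = []
        for _ in range(steps):
            grad = -2 * np.mean(X * (y - w * X))  # gradient of the MSE loss
            w -= lr * grad
            history.append(w)
        return np.array(history)

    def mse(X, y, w):
        return np.mean((y - w * X) ** 2)

    # Synthetic data: one small-scale, zero-mean feature (illustrative choice).
    rng = np.random.default_rng(42)
    X = rng.normal(0.0, 0.05, size=200)
    y = 3 * X + rng.normal(0.0, 0.1, size=200)  # y = 3 * X + noise

    X_scaled = (X - X.mean()) / X.std()  # standardize the feature

    lr, steps = 0.4, 100  # illustrative hyperparameters
    hist_raw = gradient_descent(X, y, lr, steps)
    hist_scaled = gradient_descent(X_scaled, y, lr, steps)

    # Print the loss at regular intervals to compare convergence speed.
    for step in range(0, steps, 20):
        print(f"step {step:3d}  raw loss: {mse(X, y, hist_raw[step]):.5f}  "
              f"scaled loss: {mse(X_scaled, y, hist_scaled[step]):.5f}")

    print("final w (raw):   ", hist_raw[-1], " final loss:", mse(X, y, hist_raw[-1]))
    print("final w (scaled):", hist_scaled[-1], " final loss:", mse(X_scaled, y, hist_scaled[-1]))

With these settings, the standardized run reaches its minimum loss within a handful of steps, while the raw run is still far from the optimum after 100 steps: the same learning rate is far too small relative to the raw feature's scale (mean(X ** 2) ≈ 0.0025 versus 1 after standardization). Note that the two final weights are not directly comparable, since the scaled weight applies to the standardized feature; the equivalent raw-scale slope is hist_scaled[-1] / X.std().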