Gradient Boosting: Theory and Implementation
Gradient Boosting is an ensemble method that trains weak learners (usually decision trees) sequentially, where each new model fits the gradient of the loss function with respect to the current predictions.
Unlike AdaBoost, which adjusts sample weights, Gradient Boosting directly fits weak learners to the residuals of the previous model, minimizing a chosen loss function step by step.
Mathematical Intuition
Each model learns from the residuals of the previous stage:
$$r_i^{(t)} = -\frac{\partial L(y_i, F(x_i))}{\partial F(x_i)}$$

Then, a new model $h_t(x)$ is fitted to these residuals, and the ensemble is updated as:

$$F_{t+1}(x) = F_t(x) + \eta \, h_t(x)$$

where:
- $L$ — the loss function (e.g., MSE or log-loss),
- $\eta$ — the learning rate controlling the contribution of each tree.
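The update rule above can be made concrete with a minimal from-scratch sketch. For squared-error loss, the negative gradient is simply the residual $y_i - F(x_i)$, so each stage fits a small regression tree to the current residuals. The function names below are illustrative, not part of any library API:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_estimators=50, learning_rate=0.1, max_depth=2):
    """Minimal gradient boosting for squared-error loss (illustrative sketch)."""
    init = y.mean()                 # F_0(x): constant initial prediction
    F = np.full(len(y), init)
    trees = []
    for _ in range(n_estimators):
        residuals = y - F           # negative gradient of MSE at current F
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residuals)      # h_t is fitted to the residuals
        F += learning_rate * tree.predict(X)  # F_{t+1} = F_t + eta * h_t
        trees.append(tree)
    return init, trees

def predict_gradient_boosting(init, trees, X, learning_rate=0.1):
    F = np.full(X.shape[0], init)
    for tree in trees:
        F += learning_rate * tree.predict(X)
    return F
```

With enough stages, the training error drops well below that of the constant baseline $F_0$, which is exactly the step-by-step loss minimization described above.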
```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
import matplotlib.pyplot as plt

# Load dataset
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Initialize and train the model
gb = GradientBoostingClassifier(
    n_estimators=100,
    learning_rate=0.1,
    max_depth=3,
    random_state=42
)
gb.fit(X_train, y_train)

# Evaluate
y_pred = gb.predict(X_test)

# Compute staged accuracy manually (works in all sklearn versions)
test_accuracy = []
for y_stage_pred in gb.staged_predict(X_test):
    acc = accuracy_score(y_test, y_stage_pred)
    test_accuracy.append(acc)

# Plot staged accuracy
plt.plot(range(1, len(test_accuracy) + 1), test_accuracy)
plt.xlabel("Number of Trees")
plt.ylabel("Test Accuracy")
plt.title(f"Gradient Boosting Learning Progression "
          f"(Accuracy: {accuracy_score(y_test, y_pred):.3f})")
plt.grid(True)
plt.show()
```
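To see how the learning rate $\eta$ shapes the result, the short sketch below retrains the same classifier on the same dataset with a few values of `learning_rate`. A smaller $\eta$ shrinks each tree's contribution, so it typically needs more trees to reach the same accuracy but tends to generalize more smoothly; the specific values chosen here are illustrative:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42)

# Train identical ensembles with different learning rates
results = {}
for eta in [0.01, 0.1, 0.5]:
    gb = GradientBoostingClassifier(n_estimators=100, learning_rate=eta,
                                    max_depth=3, random_state=42)
    gb.fit(X_train, y_train)
    results[eta] = accuracy_score(y_test, gb.predict(X_test))

for eta, acc in results.items():
    print(f"learning_rate={eta}: test accuracy = {acc:.3f}")
```

In practice, `learning_rate` and `n_estimators` are tuned together (e.g., via cross-validation), since halving one often calls for roughly doubling the other.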