Ensemble Learning
Stacking Classifier
A Stacking Classifier is a stacking ensemble model for classification tasks. It aims to exploit the strengths of several individual models by using their predictions as input to a higher-level model, known as the meta-classifier or second-level model. The meta-classifier learns how to combine the predictions of the base models to make the final classification decision.
How does a Stacking Classifier work?
- Base Models: Several different classification models are trained independently on the training data. These diverse models can utilize various algorithms, architectures, or parameter settings;
- Prediction Generation: After training, the base models are used to make predictions on the training data, typically via cross-validation so that each prediction is made on data the model was not fitted on. These predictions serve as features (meta-features) for the next level of modeling (see the sketch after this list);
- Meta-Classifier: A higher-level classifier (meta-classifier) is trained using the meta-features generated from the base models. The meta-classifier learns to combine the base model predictions to make a final classification decision;
- Final Prediction: At prediction time, the base models generate predictions for the new input data. These predictions are then used as input features for the meta-classifier, which produces the final classification prediction.
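The same workflow can be written out by hand to make the data flow explicit. The sketch below is illustrative rather than part of the example that follows: it assumes two base models (a decision tree and an SVM) and a logistic regression meta-classifier, and uses out-of-fold predicted probabilities from cross_val_predict as the meta-features.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, cross_val_predict
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Step 1: choose base models (illustrative choices, not prescribed by the text)
base_models = [DecisionTreeClassifier(random_state=42), SVC(probability=True, random_state=42)]

# Step 2: build meta-features for the training set from out-of-fold predicted probabilities
train_meta = np.column_stack([
    cross_val_predict(model, X_train, y_train, cv=5, method='predict_proba')[:, 1]
    for model in base_models
])

# Refit each base model on the full training set, then build meta-features for the test set
test_meta = np.column_stack([
    model.fit(X_train, y_train).predict_proba(X_test)[:, 1]
    for model in base_models
])

# Step 3: train the meta-classifier on the meta-features
meta_clf = LogisticRegression().fit(train_meta, y_train)

# Step 4: the final prediction comes from the meta-classifier
print(f'Test accuracy: {meta_clf.score(test_meta, y_test):.4f}')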
Example
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import StackingClassifier
from sklearn.metrics import f1_score
import warnings

warnings.filterwarnings('ignore')

# Load the Breast Cancer dataset
data = load_breast_cancer()
X = data.data
y = data.target

# Split the data into training and testing sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Define base models
base_models = []
for i in range(5):
    # Create 5 different Decision Tree models
    base_models.append(('decision_tree_' + str(i), DecisionTreeClassifier()))
for i in range(3):
    # Create 3 different SVM models
    base_models.append(('svm_' + str(i), SVC(probability=True)))

# Define meta-classifier
meta_classifier = MLPClassifier(hidden_layer_sizes=(100, 50), max_iter=200)

# Create the stacking ensemble
stacking_classifier = StackingClassifier(estimators=base_models, final_estimator=meta_classifier)

# Train the stacking classifier
stacking_classifier.fit(X_train, y_train)

# Make predictions
y_pred = stacking_classifier.predict(X_test)

# Calculate F1 score
f1 = f1_score(y_test, y_pred, average='weighted')
print(f'F1 Score: {f1:.4f}')
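Note that scikit-learn's StackingClassifier handles the prediction-generation step internally: it trains the meta-classifier on cross-validated predictions of the base models (controlled by its cv parameter), so the meta-features are not produced on data the base models were fitted on.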