Challenge: Implementing a Random Forest

In sklearn, the classification version of Random Forest is implemented using the RandomForestClassifier:
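A minimal sketch of creating and training the model (here X and y are assumed to already hold the feature matrix and the target):

```python
from sklearn.ensemble import RandomForestClassifier

# Create the model; random_state makes the results reproducible
random_forest = RandomForestClassifier(random_state=42)

# Train on the feature matrix X and target y (assumed to be defined)
random_forest.fit(X, y)
```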

You will also calculate the cross-validation accuracy using the cross_val_score() function:
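For example, a sketch reusing the X and y assumed above:

```python
from sklearn.model_selection import cross_val_score

# 10-fold cross-validation; returns an array with one accuracy score per fold
cv_scores = cross_val_score(random_forest, X, y, cv=10)
print(cv_scores.mean())
```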

Finally, you'll print the importance of each feature. The feature_importances_ attribute returns an array of importance scores. These scores represent how much each feature contributed to reducing Gini impurity across all the decision nodes where that feature was used. In other words, the more a feature helps split the data in a useful way, the higher its importance.

However, the attribute only gives the scores without feature names. To display both, you can pair them using Python’s zip() function:

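For instance, assuming the features are stored in a DataFrame X so that X.columns gives their names:

```python
# Pair each feature name with its importance score
for feature, importance in zip(X.columns, random_forest.feature_importances_):
    print(feature, importance)
```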

This prints each feature name along with its importance score, making it easier to understand which features the model relied on most.

Task


You are given a Titanic dataset stored as a DataFrame in the df variable.

  • Initialize the Random Forest model, set random_state=42, train it, and store the fitted model in the random_forest variable.
  • Calculate the cross-validation scores for the trained model using 10 folds, and store the resulting scores in the cv_scores variable.

Solution
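One possible solution sketch, assuming the target column of the Titanic DataFrame is 'Survived' and the remaining columns of df are used as features:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Assumed split of df into features and target
X = df.drop('Survived', axis=1)
y = df['Survived']

# Initialize and train the Random Forest
random_forest = RandomForestClassifier(random_state=42)
random_forest.fit(X, y)

# Cross-validation scores with 10 folds
cv_scores = cross_val_score(random_forest, X, y, cv=10)
print(cv_scores)

# Feature importances paired with feature names
for feature, importance in zip(X.columns, random_forest.feature_importances_):
    print(feature, importance)
```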

