Challenge: Implementing a Random Forest | Random Forest Classification with Python

Challenge: Implementing a Random Forest

In sklearn, the classification version of Random Forest is implemented using the RandomForestClassifier:
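A minimal sketch of initializing and fitting the classifier. The synthetic dataset below is a placeholder standing in for the course's Titanic data, and the parameter choices are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Placeholder data; in the challenge you would use the Titanic DataFrame instead
X, y = make_classification(n_samples=200, n_features=5, random_state=42)

# Fix random_state so results are reproducible across runs
model = RandomForestClassifier(random_state=42)
model.fit(X, y)
```

Once fitted, the model exposes `predict()`, `score()`, and the `feature_importances_` attribute discussed below.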

You will also calculate the cross-validation accuracy using the cross_val_score() function:
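A hedged sketch of how `cross_val_score()` is typically called; the synthetic data and the choice of 10 folds mirror the task but are not the course's exact setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=200, n_features=5, random_state=42)
model = RandomForestClassifier(random_state=42)

# 10-fold cross-validation: returns one accuracy score per fold
cv_scores = cross_val_score(model, X, y, cv=10)
print(cv_scores.mean())  # average accuracy across the folds
```

Note that `cross_val_score()` clones and refits the model internally on each fold, so it can be passed an unfitted estimator.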

In the end, you'll print the importance of each feature. The feature_importances_ attribute returns an array of importance scores — these scores represent how much each feature contributed to reducing Gini impurity across all the decision nodes where that feature was used. In other words, the more a feature helps split the data in a useful way, the higher its importance.

However, the attribute only gives the scores without feature names. To display both, you can pair them using Python’s zip() function:

for feature, importance in zip(X.columns, model.feature_importances_):
    print(feature, importance)

This prints each feature name along with its importance score, making it easier to understand which features the model relied on most.

Task


You are given a Titanic dataset stored as a DataFrame in the df variable.

  • Initialize the Random Forest model, set random_state=42, train it, and store the fitted model in the random_forest variable.
  • Calculate the cross-validation scores for the trained model using 10 folds, and store the resulting scores in the cv_scores variable.
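The steps above can be sketched as follows. This is a hypothetical outline, not the official solution: the tiny hand-made DataFrame stands in for the provided Titanic `df`, and the `Survived` target plus the `Pclass`/`Age` features are assumptions about its columns:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder DataFrame; the challenge provides the real Titanic data in `df`
df = pd.DataFrame({
    'Pclass':   [1, 2, 3, 1, 3, 2, 3, 1, 2, 3] * 3,
    'Age':      [22, 38, 26, 35, 28, 40, 19, 31, 45, 27] * 3,
    'Survived': [1, 1, 0, 1, 0, 1, 0, 1, 0, 0] * 3,
})
X = df.drop('Survived', axis=1)
y = df['Survived']

# Step 1: initialize with random_state=42, train, and store the fitted model
random_forest = RandomForestClassifier(random_state=42).fit(X, y)

# Step 2: compute cross-validation scores with 10 folds
cv_scores = cross_val_score(random_forest, X, y, cv=10)
print(cv_scores.mean())
```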

Solution


Section 4. Chapter 3
