Principal Component Analysis

Feature Vector and Principal Components

Once we have computed the principal components, the next step is to build a feature vector. Why do we need it? This is the stage where we decide whether to keep all components or to discard those that carry the least information. The feature vector is simply a matrix whose columns are the eigenvectors of the components we decide to keep, i.e. the most significant ones.

Thus, the creation of the feature vector is exactly the stage at which dataset dimensionality reduction occurs, because if we decide to keep only p principal components out of n, the final dataset will have only p dimensions.
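
The sketch below shows how this selection might look in code. It is only an illustration with made-up data, not part of the original chapter: it assumes a standardized data matrix `X_std` with observations in rows and uses NumPy's eigendecomposition of the covariance matrix; the names `X_std`, `feature_vector`, and `p` are hypothetical.

```python
import numpy as np

# Hypothetical standardized data: 100 observations, n = 3 features
rng = np.random.default_rng(0)
X_std = rng.standard_normal((100, 3))

# Covariance matrix of the standardized data (features in columns)
cov_matrix = np.cov(X_std, rowvar=False)

# Eigenvalues and eigenvectors of the covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov_matrix)

# Sort the components by explained variance, largest first
order = np.argsort(eigenvalues)[::-1]
eigenvectors = eigenvectors[:, order]

# Keep only the p most significant components: their eigenvectors,
# stacked as columns, form the feature vector
p = 2
feature_vector = eigenvectors[:, :p]
print(feature_vector.shape)  # (3, 2): n rows, p columns
```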

For example, a feature vector with 2 components can be reduced to 1 component by keeping only the most significant eigenvector.
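
The short sketch below illustrates this with invented eigenvector values (they are not taken from the course material); keeping only the first column of the eigenvector matrix leaves a single component:

```python
import numpy as np

# Two eigenvectors stored as columns, already sorted so that the first
# column corresponds to the largest eigenvalue (values are made up)
eigenvectors = np.array([
    [0.6779, -0.7352],
    [0.7352,  0.6779],
])

# Keep only the most significant component: 2 components -> 1 component
feature_vector = eigenvectors[:, :1]
print(feature_vector)  # shape (2, 1)
```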

Finally, with the feature vector in hand, we can transform the data, i.e. reorient it from the original axes to the axes represented by the principal components. This is done simply by multiplying the transposed feature vector by the transposed standardized data.
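
Here is a minimal sketch of this last step, again with invented numbers and hypothetical variable names (`X_std` holds standardized data with observations in rows, `feature_vector` keeps one component). The transposed product described above and the more common `X_std @ feature_vector` form give the same projected data, just laid out differently:

```python
import numpy as np

# Hypothetical standardized data (4 observations, 2 features) and a
# feature vector that keeps a single principal component
X_std = np.array([
    [ 0.69,  0.49],
    [-1.31, -1.21],
    [ 0.39,  0.99],
    [ 0.09,  0.29],
])
feature_vector = np.array([[0.6779],
                           [0.7352]])

# Reorient the data: transposed feature vector times transposed data
final_data = feature_vector.T @ X_std.T    # shape (1, 4): one row per component

# Equivalent layout with the observations kept in rows
final_data_rows = X_std @ feature_vector   # shape (4, 1)

print(np.allclose(final_data.T, final_data_rows))  # True
```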

Quiz

From what dimensionality to what dimensionality was the dataset shown in the image reduced?


Choose the correct option.



Section 2. Chapter 4

