Data Compression
Before dealing with the task of compressing data with PCA, it is important to understand the difference between data compression and dimensionality reduction.
Dimensionality reduction is one type of data compression. Compression methods fall into two main classes: lossless, where the processed data can be fully restored, and lossy, where it cannot. Dimensionality reduction belongs to the second class: after transforming the dataset, we cannot recover it exactly. We can reconstruct something close to it, but the result is only an approximation of the original dataset.
PCA is therefore usually applied not to save storage space, but to make expensive computations run faster while still achieving a similar result.
Let's get back to the code. We can choose how much of the initial dataset's variance we want to keep. When the value of the n_components argument is between 0 and 1, it is treated as the fraction of variance to retain: specifying 0.85, for example, keeps 85% of the variance.
from sklearn.decomposition import PCA

# Keep as many principal components as needed to retain 85% of the variance
pca_model = PCA(n_components=0.85)
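To see the lossy nature of this compression in practice, here is a minimal sketch (the synthetic data, noise level, and variable names are illustrative assumptions, not part of the lesson code): the data is compressed with fit_transform and then projected back with inverse_transform, which recovers only an approximation of the original.

import numpy as np
from sklearn.decomposition import PCA

# Illustrative data: 200 samples, 10 features with low-rank structure plus noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(200, 10))

pca_model = PCA(n_components=0.85)             # retain 85% of the variance
X_compressed = pca_model.fit_transform(X)      # fewer columns than X
X_restored = pca_model.inverse_transform(X_compressed)

print(X.shape, X_compressed.shape)             # e.g. (200, 10) (200, 3)
print(np.abs(X - X_restored).max())            # close to X, but not identical

The reconstruction error is the price paid for the smaller representation: the more variance we keep, the closer X_restored gets to the original X.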
Swipe to start coding
Create a PCA model with 90% variance preserved for the iris dataset:
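A possible solution sketch, assuming the iris data is loaded with sklearn's load_iris (the variable names here are illustrative):

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X = load_iris().data

# Keep enough components to preserve 90% of the variance
pca_model = PCA(n_components=0.90)
X_reduced = pca_model.fit_transform(X)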