k-NN with Multiple Features
You now understand how k-NN works with a single feature. Let's move on to a slightly more complex example that uses two features: weight and width.
In this case, we need to find neighbors based on both width and weight. But there's a small issue with that. Let's plot the sweets and see what goes wrong:
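A minimal sketch of such a plot, assuming the sweets sit in a pandas DataFrame called sweets with width and weight columns; the values below are made up to match the ranges described next:

```python
import matplotlib.pyplot as plt
import pandas as pd

# Hypothetical sweets data: width and weight on very different scales
sweets = pd.DataFrame({
    'width':  [5, 6, 7, 8, 9, 11, 12],
    'weight': [12, 18, 27, 35, 43, 51, 64],
})

# Equal axis scaling reveals the problem: the weight range is so much
# larger that the points form an almost vertical strip
plt.scatter(sweets['width'], sweets['weight'])
plt.xlabel('width')
plt.ylabel('weight')
plt.axis('equal')
plt.show()
```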
You can see that the weight ranges from 12 to 64, while the width is only between 5 and 12. Since the width's range is much smaller, the sweets appear almost vertically aligned. If we calculate distances now, they will primarily reflect differences in weight, as if we never considered width.
There is a solution, though: scaling the data.
Now, both weight and width are on the same scale and centered around zero. This can be achieved with the StandardScaler class from sklearn. StandardScaler subtracts each feature's mean and then divides the result by that feature's standard deviation:
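A minimal sketch of the scaling step, continuing with the hypothetical sweets DataFrame from above:

```python
from sklearn.preprocessing import StandardScaler

# fit_transform computes each column's mean and standard deviation,
# then applies (x - mean) / std to every value in that column
scaler = StandardScaler()
X_scaled = scaler.fit_transform(sweets[['weight', 'width']])
```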
StandardScaler centers the data around zero. Centering is not mandatory for k-NN and can feel confusing at first ("how can weight be negative?"), but it is simply a different way of presenting the data to the model. Some models do require centering, so using StandardScaler for scaling by default is advisable.
In fact, you should always scale the data before using k-Nearest Neighbors. With the data scaled, we can now find the neighbors:
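A minimal sketch of the neighbor search on the scaled data, using sklearn's NearestNeighbors together with the hypothetical scaler and X_scaled from above; the new sweet's values are made up:

```python
import pandas as pd
from sklearn.neighbors import NearestNeighbors

# Fit a neighbor search on the scaled features
knn = NearestNeighbors(n_neighbors=3)
knn.fit(X_scaled)

# A new sweet must be scaled with the same scaler before querying
new_sweet = pd.DataFrame({'weight': [40], 'width': [9]})
distances, indices = knn.kneighbors(scaler.transform(new_sweet))
print(indices)  # row positions of the 3 closest sweets in the training data
```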
In the case of two features, k-NN defines a circular neighborhood containing the desired number of neighbors. With three features, this becomes a sphere. In higher dimensions, the neighborhood is a hypersphere that can no longer be visualized, yet the underlying distance calculations remain unchanged.