Introduction to KNN
The basic principle needs little introduction here; see my earlier post on KNN principles and implementation, which covers the theory and uses KNN to classify mnist.
KNN in sklearn
This is how sklearn describes KNN:
The principle behind nearest neighbor methods is to find a predefined number of training samples closest in distance to the new point, and predict the label from these. The number of samples can be a user-defined constant (k-nearest neighbor learning), or vary based on the local density of points (radius-based neighbor learning). The distance can, in general, be any metric measure: standard Euclidean distance is the most common choice. Neighbors-based methods are known as non-generalizing machine learning methods, since they simply “remember” all of its training data (possibly transformed into a fast indexing structure such as a Ball Tree or KD Tree.).
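The Ball Tree / KD Tree structures mentioned there are selected through the algorithm parameter. As a quick illustration (a sketch of my own, not from the docs), all choices return the same neighbors; only the lookup strategy differs:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.random.RandomState(0).rand(1000, 2)

# 'kd_tree' and 'ball_tree' build the corresponding index up front,
# 'brute' does a linear scan; 'auto' (the default) picks for you
for algorithm in ['kd_tree', 'ball_tree', 'brute']:
    nn = NearestNeighbors(n_neighbors=5, algorithm=algorithm).fit(X)
    # distances to and indices of the 5 nearest training points
    dist, ind = nn.kneighbors(X[:3])
    print(algorithm, ind[0])
```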
The interface
sklearn.neighbors
There are two main estimators:
- KNeighborsClassifier (and the radius-based RadiusNeighborsClassifier)
- KNeighborsRegressor (and RadiusNeighborsRegressor)
The module contains a few other classes as well, which I won't go into here.
classifier
Interface definition
```python
KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto',
                     leaf_size=30, p=2, metric='minkowski',
                     metric_params=None, n_jobs=1, **kwargs)
```
The parameters worth paying attention to are (a quick sketch follows this list):
- weights (how each neighbor's vote is weighted)
- metric (the distance metric used)
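To make the two parameters concrete, here is a minimal sketch; the settings below are illustrative choices of mine, not recommendations:

```python
from sklearn import neighbors, datasets

iris = datasets.load_iris()
X, y = iris.data, iris.target

# uniform weights: every neighbor's vote counts equally (the default)
clf_uniform = neighbors.KNeighborsClassifier(5, weights='uniform')

# distance weights: closer neighbors get a larger vote (1 / distance)
clf_distance = neighbors.KNeighborsClassifier(5, weights='distance')

# metric: e.g. manhattan distance instead of the default
# minkowski with p=2 (i.e. euclidean)
clf_manhattan = neighbors.KNeighborsClassifier(5, metric='manhattan')

for clf in (clf_uniform, clf_distance, clf_manhattan):
    clf.fit(X, y)
    print(clf.score(X, y))  # training accuracy, just to show they run
```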
Example
This time I won't redo the mnist classification; the example from the official tutorial below makes the point well enough.
```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.colors import ListedColormap
from sklearn import neighbors, datasets

n_neighbors = 15

# load the iris dataset
iris = datasets.load_iris()
# iris has four features; use only the first two so the
# decision boundary can be plotted
X = iris.data[:, :2]
# iris labels
y = iris.target

h = .02  # step size in the mesh

# colormaps
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
cmap_bold = ListedColormap(['#FF0000', '#00FF00', '#0000FF'])

for weights in ['uniform', 'distance']:
    # KNN classifier
    clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
    # fit
    clf.fit(X, y)

    # Plot the decision boundary. For that, we will assign a color
    # to each point in the mesh [x_min, x_max]x[y_min, y_max].
    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))
    # predict
    Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])

    # Put the result into a color plot
    Z = Z.reshape(xx.shape)
    plt.figure()
    plt.pcolormesh(xx, yy, Z, cmap=cmap_light)

    # Plot also the training points
    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=cmap_bold)
    plt.xlim(xx.min(), xx.max())
    plt.ylim(yy.min(), yy.max())
    plt.title("3-Class classification (k = %i, weights = '%s')"
              % (n_neighbors, weights))

plt.show()
```
It really is very simple; strip away the plotting code and only three lines do the work (a small usage sketch follows the list):
- clf = neighbors.KNeighborsClassifier(n_neighbors, weights=weights)
- clf.fit(X, y)
- Z = clf.predict(np.c_[xx.ravel(), yy.ravel()])
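A fitted classifier also exposes the vote fractions and the neighbors themselves. A small self-contained sketch (the query point is made up):

```python
import numpy as np
from sklearn import neighbors, datasets

iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target

clf = neighbors.KNeighborsClassifier(15)
clf.fit(X, y)

# a made-up new flower, described by the same two features
x_new = np.array([[5.0, 3.5]])
print(clf.predict(x_new))        # predicted class label
print(clf.predict_proba(x_new))  # per-class vote fractions
# distances to and indices of the 15 neighbors behind the prediction
print(clf.kneighbors(x_new))
```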
If your data is not uniformly sampled, you will want RadiusNeighborsClassifier instead; its usage is the same as above.
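Sketching that (radius=0.5 is an arbitrary value for illustration; outlier_label tells the classifier what to return for query points with no neighbors inside the radius):

```python
from sklearn import neighbors, datasets

iris = datasets.load_iris()
X, y = iris.data[:, :2], iris.target

# vote over all training points within a fixed radius instead of a
# fixed count; queries with an empty radius get labeled -1
clf = neighbors.RadiusNeighborsClassifier(radius=0.5, outlier_label=-1)
clf.fit(X, y)
print(clf.predict([[5.0, 3.5]]))
```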
regressor
When people talk about KNN they usually mean the classifier, but KNN can also do regression. The official tutorial puts it this way:
Neighbors-based regression can be used in cases where the data labels are continuous rather than discrete variables. The label assigned to a query point is computed based on the mean of the labels of its nearest neighbors.
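So with uniform weights the prediction is literally the mean of the k nearest targets, which a tiny sketch of mine can verify:

```python
import numpy as np
from sklearn import neighbors

rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(20, 1), axis=0)
y = np.sin(X).ravel()

k = 3
reg = neighbors.KNeighborsRegressor(k, weights='uniform').fit(X, y)

x_query = np.array([[2.5]])
# indices of the k nearest training points to the query
_, ind = reg.kneighbors(x_query)
# the prediction equals the plain mean of their targets
assert np.isclose(reg.predict(x_query)[0], y[ind[0]].mean())
```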
Example
Again, an example from the official docs:
```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn import neighbors

np.random.seed(0)
X = np.sort(5 * np.random.rand(40, 1), axis=0)
T = np.linspace(0, 5, 500)[:, np.newaxis]
y = np.sin(X).ravel()

# Add noise to targets
y[::5] += 1 * (0.5 - np.random.rand(8))

n_neighbors = 5

for i, weights in enumerate(['uniform', 'distance']):
    knn = neighbors.KNeighborsRegressor(n_neighbors, weights=weights)
    y_ = knn.fit(X, y).predict(T)

    plt.subplot(2, 1, i + 1)
    plt.scatter(X, y, c='k', label='data')
    plt.plot(T, y_, c='g', label='prediction')
    plt.axis('tight')
    plt.legend()
    plt.title("KNeighborsRegressor (k = %i, weights = '%s')"
              % (n_neighbors, weights))

plt.show()
```
Simple and easy to follow, so I won't walk through it line by line.
As with the classifier, if your data is not uniformly sampled, RadiusNeighborsRegressor is the better fit; a minimal sketch follows.
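Here radius=0.5 is again an arbitrary illustrative value; as far as I know, a query point with no training samples inside the radius produces an undefined prediction, so pick the radius with the data's density in mind:

```python
import numpy as np
from sklearn import neighbors

np.random.seed(0)
X = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(X).ravel()
T = np.linspace(0, 5, 500)[:, np.newaxis]

# average the targets of all training points within a fixed radius
# instead of a fixed count of neighbors
reg = neighbors.RadiusNeighborsRegressor(radius=0.5, weights='uniform')
y_ = reg.fit(X, y).predict(T)
```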