Implementing K-Nearest Neighbors from Scratch in PyTorch

Introduction

The k-nearest neighbors (KNN) algorithm is one of the simpler machine learning algorithms. It is a lazy learner: there is no training phase, but every prediction must scan the entire training set, so query time is expensive (roughly O(n·d) per query for n training samples of dimension d).
A KNN model has three basic elements:

  1. Choice of K: the smaller K is, the smaller the approximation error but the larger the estimation error, which amounts to overfitting. For example, with k=1 a sample is simply assigned the class of its single nearest neighbor.
  2. Distance metric: the distance reflects how similar two instances are in feature space; Euclidean or Manhattan distance is commonly used.
  3. Classification decision rule: usually majority voting among the K neighbors (see the sketch after this list).
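
A minimal sketch of elements 2 and 3, using toy tensors invented purely for illustration:

import torch

a = torch.tensor([1.0, 2.0])
b = torch.tensor([4.0, 6.0])

# two common distance metrics
euclidean = torch.sqrt(torch.sum((a - b) ** 2))  # sqrt(9 + 16) = 5.0
manhattan = torch.sum(torch.abs(a - b))          # 3 + 4 = 7.0

# majority voting: pick the most frequent label among the k neighbors
neighbor_labels = torch.tensor([1, 0, 1, 1, 2])
predicted = torch.bincount(neighbor_labels).argmax()  # tensor(1)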

PyTorch Implementation: Validation on the MNIST Dataset

I implemented the Euclidean distance computation in two ways: one iterates over every test sample, the other computes all pairwise distances at once with matrix operations. The matrix method is based on expanding the squared Euclidean distance, as checked in the sketch below.
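
The identity behind the matrix method is ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x·y, applied row-wise so that a single matrix product yields all test-train distances. A quick sanity check against torch.cdist, with shapes invented for illustration:

import torch

x = torch.randn(5, 3)   # pretend these are 5 test vectors
y = torch.randn(7, 3)   # and 7 training vectors

xx = (x ** 2).sum(dim=1, keepdim=True)      # shape (5, 1)
yy = (y ** 2).sum(dim=1, keepdim=True).T    # shape (1, 7)
sq_dist = xx + yy - 2 * x.matmul(y.T)       # shape (5, 7) via broadcasting

print(torch.allclose(sq_dist, torch.cdist(x, y) ** 2, atol=1e-5))  # True

The full MNIST script: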

from torchvision import datasets, transforms
import numpy as np
from sklearn.metrics import accuracy_score
import torch
from tqdm import tqdm
import time

# matrix-based implementation
def KNN(train_x, train_y, test_x, test_y, k):

    since = time.time()

    m = test_x.size(0)   # number of test samples
    n = train_x.size(0)  # number of training samples

    # compute the pairwise squared Euclidean distance matrix:
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a·b  (sqrt omitted: it does not change the ranking)
    print("computing distance matrix")
    xx = (test_x ** 2).sum(dim=1, keepdim=True).expand(m, n)
    yy = (train_x ** 2).sum(dim=1, keepdim=True).expand(n, m).transpose(0, 1)

    dist_mat = xx + yy - 2 * test_x.matmul(train_x.transpose(0, 1))
    mink_idxs = dist_mat.argsort(dim=-1)  # each row: training indices sorted by distance

    res = []
    for idxs in mink_idxs:
        # majority vote among the k nearest neighbors
        res.append(np.bincount(np.array([train_y[idx] for idx in idxs[:k]])).argmax())
    
    assert len(res) == len(test_y)
    print("acc", accuracy_score(test_y, res))
    time_elapsed = time.time() - since
    print('KNN (matrix) completed in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))

def cal_distance(x, y):
    # plain Euclidean distance between two flattened image vectors
    return torch.sum((x - y) ** 2) ** 0.5

# iteration-based implementation
def KNN_by_iter(train_x, train_y, test_x, test_y, k):

    since = time.time()

    # for each test sample, compute its distance to every training sample
    res = []
    for x in tqdm(test_x):
        dists = []
        for y in train_x:
            dists.append(cal_distance(x, y).view(1))

        idxs = torch.cat(dists).argsort()[:k]  # indices of the k nearest training samples
        res.append(np.bincount(np.array([train_y[idx] for idx in idxs])).argmax())

    print("acc", accuracy_score(test_y, res))

    time_elapsed = time.time() - since
    print('KNN (iteration) completed in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
        


if __name__ == "__main__":

    train_dataset = datasets.MNIST(root="./data", transform=transforms.ToTensor(), train=True, download=True)
    test_dataset = datasets.MNIST(root="./data", transform=transforms.ToTensor(), train=False, download=True)

    # build train & test tensors (each image flattened to a 784-dim vector)
    train_x = []
    train_y = []
    for i in range(len(train_dataset)):
        img, target = train_dataset[i]
        train_x.append(img.view(-1))
        train_y.append(target)

        # keep roughly the first 5000 training samples
        if i > 5000:
            break

    test_x = [] 
    test_y = []
    for i in range(len(test_dataset)):
        img, target = test_dataset[i]
        test_x.append(img.view(-1))
        test_y.append(target)

        # keep roughly the first 200 test samples
        if i > 200:
            break

    print("classes:" , set(train_y))

    KNN(torch.stack(train_x), train_y, torch.stack(test_x), test_y, 7)
    KNN_by_iter(torch.stack(train_x), train_y, torch.stack(test_x), test_y, 7)
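
As a usage note, the matrix-based prediction can also be written with PyTorch built-ins. A minimal sketch, reusing the imports above (knn_cdist is a name invented here, not part of the original script):

def knn_cdist(train_x, train_y, test_x, k):
    dist_mat = torch.cdist(test_x, train_x)             # full pairwise distance matrix, shape (m, n)
    knn_idxs = dist_mat.topk(k, largest=False).indices  # k nearest training indices per test sample
    return [np.bincount(np.array([train_y[i] for i in row])).argmax()
            for row in knn_idxs]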

Both methods give the same result; on roughly 5000 training samples and 200 test samples:

ACC = 0.94059

Timing comparison of the two methods:

Matrix implementation: <<1 s
Iterative implementation: 28 s

The matrix version is far faster because it replaces the Python double loop with a single batched matrix multiplication executed in optimized native code.