Machine Learning Algorithms: Building a KNN Classifier by Hand (Code and Plots)

How the KNN classifier works

Let $D$ be a dataset of $n$ points $x_i \in \mathbb{R}^d$, where $D_i$ denotes the subset of points with class label $c_i$ and $n_i = |D_i|$.

Given a test point $x_j \in \mathbb{R}^d$ and a number of neighbors $K$ to consider, let $r$ be the distance from $x_j$ to its $K$-th nearest neighbor.

This distance defines a $d$-dimensional hyperball of radius $r$ centered at the test point $x_j$:
$$B_d(x_j, r) = \left\{ x_i \in D \mid \delta(x_j, x_i) \leqslant r \right\}$$
where $\delta(x_j, x_i)$ denotes the distance from the test point $x_j$ to a point $x_i \in D$. Here we use the Euclidean distance, $\delta(x_j, x_i) = \lVert x_j - x_i \rVert_2$.

Let $K_i$ be the number of points among the $K$ nearest neighbors of $x_j$ that are labeled with class $c_i$:
$$K_i = \left| \left\{ x_m \in B_d(x_j, r) \mid y_m = c_i \right\} \right|$$
where $y_m$ is the true class of point $x_m$.

The class-conditional probability density at $x_j$ can then be estimated as
$$\hat{f}(x_j \mid c_i) = \frac{K_i}{n_i V}$$
where $V$ is the volume of the hyperball and $K_i / n_i$ is the fraction of all class-$c_i$ samples that fall inside it.

By Bayes' theorem, the posterior probability is
$$P(c_i \mid x_j) = \frac{\hat{f}(x_j \mid c_i)\, \hat{P}(c_i)}{\sum_{m=1}^{k} \hat{f}(x_j \mid c_m)\, \hat{P}(c_m)}$$
Estimating the prior by the class frequency, $\hat{P}(c_i) = \frac{n_i}{n}$, the numerator becomes
$$\hat{f}(x_j \mid c_i)\, \hat{P}(c_i) = \frac{K_i}{n_i V} \cdot \frac{n_i}{n} = \frac{K_i}{nV}$$
so the posterior simplifies to
$$P(c_i \mid x_j) = \frac{\frac{K_i}{nV}}{\sum_{m=1}^{k} \frac{K_m}{nV}} = \frac{K_i}{K}$$
The predicted class of $x_j$ is therefore
$$\hat{y}_j = \arg\max_{c_i} \left\{ P(c_i \mid x_j) \right\} = \arg\max_{c_i} \left\{ K_i \right\}$$
Since $K$ itself is fixed, the last equality holds: predicting the class of $x_j$ reduces to finding the majority class among its $K$ nearest neighbors.
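As a quick numeric sketch of the $K_i / K$ rule (the neighbor labels below are made up, not taken from any dataset):

```python
from collections import Counter

# Suppose the K = 5 nearest neighbors of a test point carry these labels.
neighbor_labels = ["setosa", "versicolor", "setosa", "setosa", "virginica"]

K = len(neighbor_labels)
counts = Counter(neighbor_labels)                    # K_i for each class c_i
posteriors = {c: k / K for c, k in counts.items()}   # P(c_i | x_j) = K_i / K

print(posteriors)                   # {'setosa': 0.6, 'versicolor': 0.2, 'virginica': 0.2}
print(max(counts, key=counts.get))  # predicted class: 'setosa'
```

Taking the argmax over the raw counts gives the same answer as taking it over the posteriors, which is exactly why the denominator $K$ can be ignored.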

Implementing the classifier

from math import sqrt
from random import seed, randrange
import operator

import numpy as np
import pandas as pd
import plotly.express as px
import plotly.graph_objects as go
from plotly.subplots import make_subplots

def str_column_to_int(dataset, column):
    """
    Map string class labels to integers.
    @dataset: the data
    @column: the column to convert
    """
    class_values = [row[column] for row in dataset]
    unique = set(class_values)
    lookup = dict()
    for i, value in enumerate(unique):
        lookup[value] = i
    for row in dataset:
        row[column] = lookup[row[column]]
    print(lookup)
    return lookup
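One caveat: the function above iterates over an unordered `set`, so the integer assigned to each class can differ between runs (Python randomizes string hashing). A small standalone variant, sorting the labels first so the mapping is deterministic:

```python
def str_column_to_int_sorted(dataset, column):
    # Deterministic variant: sort the unique labels before numbering them.
    class_values = [row[column] for row in dataset]
    lookup = {value: i for i, value in enumerate(sorted(set(class_values)))}
    for row in dataset:
        row[column] = lookup[row[column]]
    return lookup

data = [[1.0, 'dog'], [2.0, 'cat'], [3.0, 'dog']]
mapping = str_column_to_int_sorted(data, 1)
print(mapping)   # {'cat': 0, 'dog': 1}
print(data)      # [[1.0, 1], [2.0, 0], [3.0, 1]]
```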

def cross_validation_split(dataset, n_folds):
    """
    Split the dataset into folds for cross-validation.
    @dataset: the data
    @n_folds: the number of folds
    """
    dataset_split = list()
    dataset_copy = list(dataset)
    fold_size = int(len(dataset) / n_folds)   # size of one fold; any remainder is dropped
    for _ in range(n_folds):
        fold = list()
        while len(fold) < fold_size:
            index = randrange(len(dataset_copy))   # sample without replacement
            fold.append(dataset_copy.pop(index))
        dataset_split.append(fold)
    return dataset_split
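Note that the integer division means `len(dataset) % n_folds` samples never make it into any fold. A quick self-contained check (the function is repeated here so the snippet runs on its own):

```python
from random import seed, randrange

def cross_validation_split(dataset, n_folds):
    # Same splitting logic as above: sample without replacement into equal-sized folds.
    dataset_split = []
    dataset_copy = list(dataset)
    fold_size = int(len(dataset) / n_folds)
    for _ in range(n_folds):
        fold = []
        while len(fold) < fold_size:
            index = randrange(len(dataset_copy))
            fold.append(dataset_copy.pop(index))
        dataset_split.append(fold)
    return dataset_split

seed(1)
folds = cross_validation_split(list(range(10)), 3)
print([len(f) for f in folds])   # [3, 3, 3] -- one sample (10 % 3) is left out
```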

def accuracy_metric(actual, predicted):
    """
    Compute classification accuracy as a percentage.
    @actual: the true labels
    @predicted: the predicted labels
    """
    correct = 0
    for i in range(len(actual)):
        if actual[i] == predicted[i]:
            correct += 1
    return correct / float(len(actual)) * 100.0

def evaluate_algorithm(dataset, algorithm, n_folds, *args):
    """
    Evaluate a classification algorithm with k-fold cross-validation.
    @dataset: the data
    @algorithm: the classification algorithm to use
    @n_folds: the number of folds
    @*args: extra arguments passed through to the algorithm; for KNN, a dict holding k
    """
    folds = cross_validation_split(dataset, n_folds)
    scores = list()
    for i in range(len(folds)):
        # Stack every fold except fold i into the training set.
        train_set = np.delete(folds, i, axis=0)
        test_set = list()
        for row in folds[i]:
            row_copy = list(row)
            test_set.append(row_copy)
            row_copy[-1] = None   # hide the label from the classifier
        predicted = algorithm(train_set, test_set, *args)
        actual = [row[-1] for row in folds[i]]
        accuracy = accuracy_metric(actual, predicted)
        scores.append(accuracy)
    return scores

def calculate_distance(point1, point2, length):
    """
    Compute the Euclidean distance between two points.
    @point1: the first point
    @point2: the second point
    @length: the number of dimensions to use (excludes the label column)
    """
    distance = 0
    for i in range(length):
        distance += (point1[i] - point2[i])**2
    return sqrt(distance)
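Since only the first `length` coordinates enter the sum, passing `len(point) - 1` skips the trailing label column. A quick cross-check against the standard library's `math.dist` (Python 3.8+), on made-up points:

```python
from math import sqrt, dist

def calculate_distance(point1, point2, length):
    # Euclidean distance over the first `length` coordinates only.
    distance = 0
    for i in range(length):
        distance += (point1[i] - point2[i]) ** 2
    return sqrt(distance)

p1 = [0.0, 3.0, 99]   # the last entry plays the role of the class label
p2 = [4.0, 0.0, 42]
# length = len(point) - 1 keeps the label out of the distance
print(calculate_distance(p1, p2, 2))   # 5.0
print(dist(p1[:2], p2[:2]))            # 5.0, stdlib cross-check
```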

def get_neighbors(dataset, testpoint, k):
    """
    Find the K nearest neighbors of a test point.
    @dataset: the dataset
    @testpoint: the target test point
    @k: the number of neighbors to return
    """
    # Flatten the (possibly fold-nested) training data into rows of len(testpoint) columns.
    dataset = np.asarray(dataset).reshape((-1, len(testpoint)))
    distances = []
    for i in range(len(dataset)):
        dist = calculate_distance(testpoint, dataset[i], len(testpoint) - 1)
        distances.append((dataset[i], dist))
    distances.sort(key=operator.itemgetter(1))      # sort by distance, ascending
    neighbors = []

    for i in range(k):
        neighbors.append(distances[i][0])
    return neighbors

def determine_class(neighbors):
    """
    Decide the class of the test point by majority vote among its neighbors.
    @neighbors: the list of neighbor points
    """
    classvotes = {}
    for i in range(len(neighbors)):
        res = neighbors[i][-1]
        if res in classvotes:
            classvotes[res] += 1
        else:
            classvotes[res] = 1
    sortedvotes = sorted(classvotes.items(), key=operator.itemgetter(1), reverse=True)
    return sortedvotes[0][0]    # the class with the most votes
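Because `sorted` is stable, a tie between two classes is resolved in favor of whichever class was counted first, i.e. the one whose neighbor appears earliest (closest) in the list. A self-contained vote, with made-up neighbors:

```python
import operator

def determine_class(neighbors):
    # Majority vote over the last element (the class label) of each neighbor.
    classvotes = {}
    for neighbor in neighbors:
        label = neighbor[-1]
        classvotes[label] = classvotes.get(label, 0) + 1
    sortedvotes = sorted(classvotes.items(), key=operator.itemgetter(1), reverse=True)
    return sortedvotes[0][0]

print(determine_class([[1.0, 'a'], [1.1, 'b'], [0.9, 'a']]))   # 'a', by two votes to one
print(determine_class([[1.0, 'a'], [1.1, 'b']]))               # tie -> 'a', counted first
```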

def KNN(train, test, args):
    """
    The KNN classifier.
    @train: the training set
    @test: the test set
    @args: extra parameters; here a dict holding k
    """
    k = int(args['k'])
    predictions = list()
    for point in test:
        neighbors = get_neighbors(train, point, k)
        output = determine_class(neighbors)
        predictions.append(output)
    return predictions
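Putting the pieces together on a tiny made-up 2-D dataset. These are simplified, list-based variants of the helpers above (no NumPy reshape), so the snippet runs on its own:

```python
from math import sqrt
import operator

def calculate_distance(p1, p2, length):
    # Euclidean distance over the first `length` coordinates.
    return sqrt(sum((p1[i] - p2[i]) ** 2 for i in range(length)))

def get_neighbors(dataset, testpoint, k):
    # Rank every training row by distance to the test point; keep the k closest.
    distances = [(row, calculate_distance(testpoint, row, len(testpoint) - 1))
                 for row in dataset]
    distances.sort(key=operator.itemgetter(1))
    return [row for row, _ in distances[:k]]

def determine_class(neighbors):
    # Majority vote over the neighbors' labels.
    votes = {}
    for n in neighbors:
        votes[n[-1]] = votes.get(n[-1], 0) + 1
    return max(votes.items(), key=operator.itemgetter(1))[0]

train = [[1.0, 1.0, 'red'], [1.2, 0.8, 'red'], [1.1, 1.1, 'red'],
         [5.0, 5.0, 'blue'], [5.2, 4.9, 'blue'], [4.8, 5.1, 'blue']]
test = [[1.05, 0.95, None], [5.1, 5.0, None]]
predictions = [determine_class(get_neighbors(train, p, 3)) for p in test]
print(predictions)   # ['red', 'blue']
```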

Testing on the Iris dataset

seed(1)
filename = 'iris.csv'
dataset = pd.read_csv(filename).values
str_column_to_int(dataset, len(dataset[0]) - 1)
n_folds = 3
k = 5
scores = evaluate_algorithm(dataset, KNN, n_folds, {'k': k})
print('Per-fold accuracy: %s' % scores)
print('Mean accuracy: %.3f%%' % (sum(scores) / float(len(scores))))

The output is

{'Iris-versicolor': 0, 'Iris-setosa': 1, 'Iris-virginica': 2}
Per-fold accuracy: [98.0, 98.0, 94.0]
Mean accuracy: 96.667%

Visualizing the classification results:

def plot_clustering():
    """
    Plot a scatter-plot matrix of the features, predictions, and true classes.
    """
    # Randomly sample 2/3 of the data for training; predict on the remaining 1/3.
    train_index = np.random.choice(range(len(dataset)), int(len(dataset) * 2 / 3), replace=False)
    test_index = np.setdiff1d(np.arange(len(dataset)), train_index)
    train = dataset[train_index]
    test = dataset[test_index]
    prediction = KNN(train, test, {'k': 3})
    result = pd.DataFrame(columns=['trained', 'sepal length', 'sepal width', 'petal length', 'petal width', 'predicted', 'class'], index=range(len(dataset)))
    result.loc[train_index, 'trained'] = 1
    result.loc[test_index, 'trained'] = 0
    result.loc[test_index, 'predicted'] = prediction
    for i in range(len(dataset)):
        result.loc[i, ['sepal length', 'sepal width', 'petal length', 'petal width', 'class']] = dataset[i]
    fig = px.scatter_matrix(result, dimensions=["sepal length", "sepal width", "petal length", "petal width", "predicted", "class"],
                            color="class", symbol="trained")
    fig.update_layout(template='none', width=1200, height=1000,
        margin=dict(l=50, r=50, t=50, b=50))
    fig.show()
plot_clustering()

[Figure: scatter-plot matrix of the Iris features, predicted vs. true classes, with symbols distinguishing training and test points]
How the accuracy varies with the number of cross-validation folds and with the number of neighbors $k$:

fig = make_subplots(rows=1, cols=2, subplot_titles=("Varying the number of folds", "Varying the number of neighbors"))
scores, index, acc = [], [], []
for i in range(2, 22):
    score = evaluate_algorithm(dataset, KNN, i, {'k': 3})
    scores.append(list(score))
    acc.append(sum(score) / float(len(score)))
    index.append([i for j in range(i)])   # repeat the fold count once per per-fold score
fig.add_trace(go.Scatter(x=[i + 2 for i in range(20)], y=acc,
                    mode='lines+markers',
                    name='mean'), row=1, col=1)
fig.add_trace(go.Scatter(x=sum(index, []), y=sum(scores, []),
                    mode='markers',
                    name='each fold'), row=1, col=1)
scores, index, acc = [], [], []
for j in range(1, 11):
    score = evaluate_algorithm(dataset, KNN, 5, {'k': j})
    scores.append(list(score))
    acc.append(sum(score) / float(len(score)))
    index.append(j)
fig.add_trace(go.Scatter(x=[i + 1 for i in range(10)], y=acc,
                    mode='lines+markers',
                    name='mean-acc'), row=1, col=2)
fig.update_layout(height=600, width=1200, template='none')
fig.update_yaxes(title_text="Accuracy", row=1, col=1)
fig.update_yaxes(title_text="Accuracy", row=1, col=2)
fig.update_xaxes(title_text="Number of folds", row=1, col=1)
fig.update_xaxes(title_text="Number of neighbors (k)", row=1, col=2)
fig.show()

[Figure: accuracy as a function of the number of cross-validation folds (left) and the number of neighbors k (right)]
