Learning the TensorFlow Deep Learning Framework (3): Implementing K-Means in TensorFlow

The complete TensorFlow code for K-Means follows:

# Author: 寶蓓
# Title: K-means algorithm
# Outline of the basic K-Means algorithm:
#   1. Fix a constant K, the final number of clusters.
#   2. Pick random initial centroids, compute the similarity of every sample to each
#      centroid (squared Euclidean distance here), and assign each sample to the
#      most similar cluster.
#   3. Recompute each cluster's centroid (the mean of its points).
#   4. Repeat until the centroids stop changing, which fixes each sample's cluster
#      and each cluster's centroid.
# Dataset: iris, which is labeled with 3 classes.

# 1. Import the required libraries.
import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from sklearn import datasets
from scipy.spatial import cKDTree
# PCA maps high-dimensional features to a lower-dimensional space, speeding up learning.
from sklearn.decomposition import PCA
from sklearn.preprocessing import scale

# 2. Create a graph session and load the iris dataset.
sess = tf.Session()
iris = datasets.load_iris()
# num_pts is the total number of data points.
num_pts = len(iris.data)
# num_feats is the number of features per sample; each iris sample has 4.
num_feats = len(iris.data[0])

# 3. Set the number of clusters.
k = 3
# Number of training iterations.
generations = 25
# Create the graph variables: the data points and each point's cluster label.
data_points = tf.Variable(iris.data)
cluster_labels = tf.Variable(tf.zeros([num_pts], dtype=tf.int64))

# 4. Declare and initialize the centroids; 3 randomly chosen data points seed the algorithm.
# The comprehension runs k times, each time drawing one random row of iris.data.
rand_starts = np.array([iris.data[np.random.choice(len(iris.data))] for _ in range(k)])
centroids = tf.Variable(rand_starts)

# 5. Compute the distance from every data point to every centroid.
# Approach: expand the centroids and the data points into two matching
# [num_pts, k, num_feats] tensors, then take their squared Euclidean distance.
# tf.tile(centroids, [num_pts, 1]) repeats the 3 centroid rows 150 times: 450 rows of 4 features.
# The reshape turns that 450 x 4 matrix into 150 x 3 x 4: 150 groups, each holding
# the 3 centroid vectors of 4 values.
centroid_matrix = tf.reshape(tf.tile(centroids, [num_pts, 1]), [num_pts, k, num_feats])
# tf.tile(data_points, [1, k]) repeats each point's columns k=3 times (4 -> 12 columns, 150 x 12);
# reshaping gives 150 x 3 x 4: 150 groups, each holding 3 identical copies of one point.
point_matrix = tf.reshape(tf.tile(data_points, [1, k]), [num_pts, k, num_feats])
# Squared Euclidean distance between the two tensors.
# axis=2 sums over the feature dimension, so distances has shape [150, 3].
distances = tf.reduce_sum(tf.square(centroid_matrix - point_matrix), axis=2)

# 6. Assign each data point to its nearest centroid.
# tf.argmin(input, axis) returns the index of the smallest value along the given axis.
# With axis 1 it yields, for each of the 150 rows of distances, the index (0, 1, or 2)
# of the closest centroid; e.g. 0 means the point belongs to the first cluster.
centroid_group = tf.argmin(distances, 1)
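Steps 5 and 6 can be sanity-checked outside TensorFlow: the same tile/reshape construction in NumPy (on toy data, not iris) must agree with plain broadcasting, and an argmin over axis 1 picks the nearest centroid. A minimal sketch:

```python
import numpy as np

# Toy data: 4 points with 2 features, 2 centroids (not the iris data).
points = np.array([[0., 0.], [1., 0.], [9., 9.], [10., 9.]])
cents = np.array([[0.5, 0.], [9.5, 9.]])
num_pts, num_feats = points.shape
k = len(cents)

# Equivalent of the tile/reshape construction: shape (num_pts, k, num_feats).
centroid_matrix = np.tile(cents, (num_pts, 1)).reshape(num_pts, k, num_feats)
point_matrix = np.tile(points, (1, k)).reshape(num_pts, k, num_feats)
dist_tiled = np.square(centroid_matrix - point_matrix).sum(axis=2)

# The same squared distances via broadcasting, with no explicit tiling.
dist_bcast = np.square(points[:, None, :] - cents[None, :, :]).sum(axis=2)
assert np.allclose(dist_tiled, dist_bcast)

# argmin along axis 1 picks the nearest centroid for each point.
print(dist_tiled.argmin(axis=1))  # -> [0 0 1 1]
```

Broadcasting avoids materializing the tiled copies; the tile/reshape form is shown only because it mirrors the TensorFlow code above.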

# 7. Average the points in each group to get the new centroids.
def data_group_avg(group_ids, data):
    # Sum the rows of data that belong to each of the 3 groups.
    sum_total = tf.unsorted_segment_sum(data, group_ids, 3)
    # tf.ones_like builds an all-ones tensor with the same shape as data,
    # so this counts how many points fall in each group.
    num_total = tf.unsorted_segment_sum(tf.ones_like(data), group_ids, 3)
    avg_by_group = sum_total / num_total
    return avg_by_group
means = data_group_avg(centroid_group, data_points)
update = tf.group(centroids.assign(means), cluster_labels.assign(centroid_group))
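The centroid update can likewise be sketched in NumPy: tf.unsorted_segment_sum applied to the data and to an all-ones tensor gives per-cluster sums and counts, whose ratio is the new centroid. In this toy-data sketch, np.add.at and np.bincount stand in for the segment sums:

```python
import numpy as np

# Toy data: 4 points with 2 features, assigned to k=2 clusters.
data = np.array([[0., 0.], [2., 0.], [10., 10.], [0., 2.]])
group_ids = np.array([0, 0, 1, 0])  # made-up assignment: 3 points in cluster 0
k = 2

# Per-cluster feature sums (equivalent of segment-summing the data).
sums = np.zeros((k, data.shape[1]))
np.add.at(sums, group_ids, data)
# Per-cluster point counts (equivalent of segment-summing a tensor of ones).
counts = np.bincount(group_ids, minlength=k)[:, None]

# New centroid = per-cluster mean.
new_centroids = sums / counts
print(new_centroids)  # cluster 0 -> [2/3, 2/3], cluster 1 -> [10, 10]
```

As in the TensorFlow version, a cluster that receives no points would give a zero count and a division by zero, so this update assumes every cluster stays non-empty.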

# 8. Initialize the model variables.
init = tf.global_variables_initializer()
sess.run(init)

# 9. Train iteratively, updating the centroids each generation.
for i in range(generations):
    print('Calculating gen {},out of {}.'.format(i, generations))
    _, centroid_group_count = sess.run([update, centroid_group])
    group_count = []
    for ix in range(k):
        group_count.append(np.sum(centroid_group_count==ix))
    print('Group counts:{}'.format(group_count))

# 10. Check how many samples' clusters match the true labels.
[centers, assignments] = sess.run([centroids, cluster_labels])
def most_common(my_list):
    return max(set(my_list), key=my_list.count)
label0 = most_common(list(assignments[0:50]))
label1 = most_common(list(assignments[50:100]))
label2 = most_common(list(assignments[100:150]))
group0_count = np.sum(assignments[0:50]==label0)
group1_count = np.sum(assignments[50:100]==label1)
group2_count = np.sum(assignments[100:150]==label2)
accuracy = (group0_count + group1_count + group2_count)/150.
print('Accuracy:{:.2}'.format(accuracy))
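This accuracy step works because K-means cluster ids are arbitrary: each species' 50-sample block is matched to whichever id is most common inside it before counting agreements. A toy illustration with made-up assignments:

```python
import numpy as np

def most_common(my_list):
    # Majority element: the value with the highest count in the list.
    return max(set(my_list), key=my_list.count)

# Made-up cluster ids for two "true" blocks of 5 samples each; the ids
# (2 and 0) need not match the block indices (0 and 1).
assignments = np.array([2, 2, 2, 0, 2, 0, 0, 0, 0, 2])
label0 = most_common(list(assignments[0:5]))   # -> 2
label1 = most_common(list(assignments[5:10]))  # -> 0
acc = (np.sum(assignments[0:5] == label0) + np.sum(assignments[5:10] == label1)) / 10.
print(acc)  # -> 0.8
```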

# Visualize the clustering result.
# PCA(n_components=2) projects the 4-feature vectors down to 2 dimensions for plotting.
pca_model = PCA(n_components=2)
# Fit the PCA model on iris.data and store the 2-D projection in reduced_data.
reduced_data = pca_model.fit_transform(iris.data)
# Project the centroids with the same fitted model into reduced_centers.
reduced_centers = pca_model.transform(centers)
# h is the mesh step size.
h = .02
# The x and y ranges below define the plot axes.
x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1
y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))

xx_pt = list(xx.ravel())
yy_pt = list(yy.ravel())
xy_pts = np.array([[x,y] for x,y in zip(xx_pt, yy_pt)])
# A k-d tree over the centroids labels every grid point with its nearest centroid.
mytree = cKDTree(reduced_centers)
dist, indexes = mytree.query(xy_pts)
indexes = indexes.reshape(xx.shape)
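The cKDTree query assigns every mesh point to its nearest centroid, which is what paints the colored decision regions behind the scatter plot. A minimal sketch with toy centers:

```python
import numpy as np
from scipy.spatial import cKDTree

# Label a small grid of points by their nearest center, as done for the plot background.
centers = np.array([[0., 0.], [4., 4.]])
xx, yy = np.meshgrid(np.arange(0., 5., 1.), np.arange(0., 5., 1.))
xy_pts = np.column_stack([xx.ravel(), yy.ravel()])

tree = cKDTree(centers)
dist, indexes = tree.query(xy_pts)   # nearest-center index for each grid point
regions = indexes.reshape(xx.shape)  # one region label per grid cell
print(regions)
```

Each entry of regions is the index of the closest center, so imshow can color the plane by region.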

# Draw the figure with matplotlib.
plt.clf()
plt.imshow(indexes, interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap=plt.cm.Paired, aspect='auto', origin='lower')
symbols = ['o', '^', 'D']
label_name = ['Setosa', 'Versicolour', 'Virginica']
for i in range(3):
    temp_group = reduced_data[(i*50) : (50)*(i+1)]
    plt.plot(temp_group[:, 0], temp_group[:, 1], symbols[i], markersize=10, label=label_name[i])
plt.scatter(reduced_centers[:, 0], reduced_centers[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10)
plt.title('K-means clustering on Iris Dataset\n' 'Centroids are marked with white cross')
plt.xlim(x_min, x_max)
plt.ylim(y_min, y_max)
plt.legend(loc='lower right')
plt.show()

The final output:

Calculating gen 0,out of 25.
Group counts:[9, 72, 69]
Calculating gen 1,out of 25.
Group counts:[36, 63, 51]
Calculating gen 2,out of 25.
Group counts:[41, 59, 50]
Calculating gen 3,out of 25.
Group counts:[46, 54, 50]
Calculating gen 4,out of 25.
Group counts:[50, 50, 50]
Calculating gen 5,out of 25.
Group counts:[54, 46, 50]
Calculating gen 6,out of 25.
Group counts:[57, 43, 50]
Calculating gen 7,out of 25.
Group counts:[60, 40, 50]
Calculating gen 8,out of 25.
Group counts:[61, 39, 50]
Calculating gen 9,out of 25.
Group counts:[61, 39, 50]
Calculating gen 10,out of 25.
Group counts:[61, 39, 50]
Calculating gen 11,out of 25.
Group counts:[61, 39, 50]
Calculating gen 12,out of 25.
Group counts:[61, 39, 50]
Calculating gen 13,out of 25.
Group counts:[61, 39, 50]
Calculating gen 14,out of 25.
Group counts:[61, 39, 50]
Calculating gen 15,out of 25.
Group counts:[61, 39, 50]
Calculating gen 16,out of 25.
Group counts:[61, 39, 50]
Calculating gen 17,out of 25.
Group counts:[61, 39, 50]
Calculating gen 18,out of 25.
Group counts:[61, 39, 50]
Calculating gen 19,out of 25.
Group counts:[61, 39, 50]
Calculating gen 20,out of 25.
Group counts:[61, 39, 50]
Calculating gen 21,out of 25.
Group counts:[61, 39, 50]
Calculating gen 22,out of 25.
Group counts:[61, 39, 50]
Calculating gen 23,out of 25.
Group counts:[61, 39, 50]
Calculating gen 24,out of 25.
Group counts:[61, 39, 50]
Accuracy:0.89

[Figure: the cluster plot produced by the code above.]
