The usual ML course assignment: the fourth and last linear classification model, the perceptron.
The perceptron is a very simple linear classifier. In essence it is a single neuron: n inputs, one output, with a threshold (step) function as the activation.
The perceptron's loss can be taken as the sum of the output values over all misclassified samples: L(θ) = Σ_i (h_θ(x_i) − y_i) · θᵀx_i.
Here h_θ(x) is the model's prediction, 0 or 1 for binary classification, and y is the true label. When prediction and label agree, the summand is 0. When h_θ = 1 but the true label is 0, the coefficient (h_θ − y) is 1; the model only predicts 1 when θᵀx is non-negative, so the whole term is non-negative. In the opposite case the coefficient is −1, but θᵀx is then negative, so the product is again non-negative.
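This sign argument can be checked numerically. A minimal sketch with made-up numbers (not the assignment data):

```python
import numpy as np

def sample_loss(theta, x, y):
    """One summand of the perceptron loss: (h(x) - y) * (theta . x)."""
    z = float(theta @ x)
    h = np.heaviside(z, 1)        # threshold activation: 0 or 1
    return (h - y) * z

theta = np.array([0.5, -1.0, 2.0])
x = np.array([1.0, 3.0, 0.5])     # theta @ x = -1.5, so the model predicts 0
assert sample_loss(theta, x, 0) == 0.0   # correct prediction: no loss
assert sample_loss(theta, x, 1) == 1.5   # misclassified: positive loss
```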
The loss has a very simple form, so differentiating with respect to the parameters gives the gradient directly.
For SGD, the per-sample update rule is θ ← θ + η (y − h_θ(x)) x: add η·x when a positive sample is predicted 0, subtract η·x when a negative sample is predicted 1.
Since the data is not linearly separable, SGD keeps oscillating. GD gives a stable result instead, but the parameters then grow without bound until they overflow. A GD step is just the sum of all the per-sample SGD updates.
Here is the code:
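That last claim, that one full-batch GD step equals the sum of the per-sample SGD corrections computed at a fixed θ, can be checked on made-up data (my own numbers, not the assignment's):

```python
import numpy as np

def sgd_correction(theta, x, y, lr):
    # single-sample perceptron correction, computed from a FIXED theta
    h = np.heaviside(float(x @ theta), 1)   # thresholded prediction, 0 or 1
    return lr * (y - h) * x

X = np.array([[1.0,  2.0,  1.0],
              [1.0, -1.0,  0.5],
              [1.0,  0.3, -2.0]])
y = np.array([1.0, 0.0, 1.0])
theta = np.array([0.2, -0.5, 0.1])
lr = 0.1

# one GD step: accumulate every per-sample correction, then apply once
gd_step = sum(sgd_correction(theta, xi, yi, lr) for xi, yi in zip(X, y))

# vectorized form of the same sum
h = np.heaviside(X @ theta, 1)
gd_step_vec = lr * X.T @ (y - h)
assert np.allclose(gd_step, gd_step_vec)
```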
import numpy as np
import matplotlib.pyplot as plt
import random
data_x = np.loadtxt("ex4Data/ex4x.dat")
data_y = np.loadtxt("ex4Data/ex4y.dat")
data_x_plt = data_x
plt.axis([15, 65, 40, 90])
plt.xlabel("exam 1 score")
plt.ylabel("exam 2 score")
for i in range(data_y.size):
    if data_y[i] == 1:
        plt.plot(data_x[i][0], data_x[i][1], 'b+')
    else:
        plt.plot(data_x[i][0], data_x[i][1], 'bo')
mean = data_x.mean(axis=0)
variance = data_x.std(axis=0)    # actually the standard deviation
data_x = (data_x-mean)/variance  # normalize the features
data_y = data_y.reshape(-1, 1)   # make y a column for the stacking below
temp = np.ones(data_y.size)
data_x = np.c_[temp, data_x]     # prepend a bias column of ones
data_x = np.mat(data_x)
learn_rate = 1
theda = np.mat(np.zeros([3, 1]))
loss = np.sum(np.multiply(np.heaviside(data_x*theda, 1)-data_y, (data_x*theda)))
old_loss = 0
# plt.ion()
for i in range(1000):
    z = random.randint(0, data_y.size-1)
    pred = np.heaviside(float(data_x[z]*theda), 1)  # thresholded prediction: 0 or 1
    if int(data_y[z]) == 0 and pred == 1:
        theda = theda-learn_rate*data_x[z].T
    if int(data_y[z]) == 1 and pred == 0:
        theda = theda+learn_rate*data_x[z].T
    # print(np.sum(np.multiply(np.heaviside(data_x*theda, 1)-data_y, (data_x*theda))))
    # plt.cla()
    # plt.axis([15, 65, 40, 90])
    # plt.xlabel("exam 1 score")
    # plt.ylabel("exam 2 score")
    # for j in range(data_y.size):
    #     if data_y[j] == 1:
    #         plt.plot(data_x_plt[j][0], data_x_plt[j][1], 'b+')
    #     else:
    #         plt.plot(data_x_plt[j][0], data_x_plt[j][1], 'bo')
    # theta = np.array(theda)
    # plot_y = np.zeros(65 - 16)
    # plot_x = np.arange(16, 65)
    # for k in range(16, 65):  # don't reuse i: it shadows the outer loop variable
    #     plot_y[k - 16] = -(theta[0] + theta[2] * ((k - mean[0]) / variance[0])) / theta[1]
    #     plot_y[k - 16] = plot_y[k - 16] * variance[1] + mean[1]
    # plt.plot(plot_x, plot_y)
    # plt.pause(0.1)
theta = np.array(theda)
plot_y = np.zeros(65-16)
plot_x = np.arange(16, 65)
for i in range(16, 65):
    plot_y[i - 16] = -(theta[0] + theta[2] * ((i - mean[0]) / variance[0])) / theta[1]
    plot_y[i - 16] = plot_y[i - 16] * variance[1] + mean[1]
plt.plot(plot_x, plot_y)
plt.show()
The commented-out code animates the decision line in real time, so you can watch the classifier step towards the neighbourhood of its best position. Since correctly classified samples are drawn often, the model frequently leaves the parameters untouched; a simple optimization would be to sample only from the misclassified points, but I was too lazy to do it.
Here is the code for the GD version:
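For reference, here is a sketch of that misclassified-only sampling idea (toy data and function names are my own, not from the assignment):

```python
import numpy as np
import random

def sgd_misclassified_only(X, y, lr=1.0, steps=1000, seed=0):
    """Perceptron SGD that draws only from the currently misclassified
    samples, so no iteration is wasted on a no-op update."""
    rng = random.Random(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        pred = np.heaviside(X @ theta, 1)
        wrong = np.flatnonzero(pred != y)
        if wrong.size == 0:       # separable case: nothing left to fix
            break
        z = int(rng.choice(wrong.tolist()))
        theta += lr * (y[z] - pred[z]) * X[z]
    return theta

# toy separable set: bias column + one feature
X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, -2.0], [1.0, -1.0]])
y = np.array([1.0, 1.0, 0.0, 0.0])
theta = sgd_misclassified_only(X, y)
assert np.array_equal(np.heaviside(X @ theta, 1), y)
```

On the assignment's (non-separable) data the `wrong` set never empties, so the loop still runs for all `steps` iterations; the gain is only that every draw triggers an actual update.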
import numpy as np
import matplotlib.pyplot as plt
import random
data_x = np.loadtxt("ex4Data/ex4x.dat")
data_y = np.loadtxt("ex4Data/ex4y.dat")
data_x_plt = data_x
plt.axis([15, 65, 40, 90])
plt.xlabel("exam 1 score")
plt.ylabel("exam 2 score")
for i in range(data_y.size):
    if data_y[i] == 1:
        plt.plot(data_x[i][0], data_x[i][1], 'b+')
    else:
        plt.plot(data_x[i][0], data_x[i][1], 'bo')
mean = data_x.mean(axis=0)
variance = data_x.std(axis=0)
data_x = (data_x-mean)/variance
data_y = data_y.reshape(-1, 1)  # make y a column for the stacking below
temp = np.ones(data_y.size)
data_x = np.c_[temp, data_x]
data_x = np.mat(data_x)
learn_rate = 0.18
theda = np.mat(np.zeros([3, 1]))
loss = np.sum(np.multiply(np.heaviside(data_x*theda, 1)-data_y, (data_x*theda)))
old_loss = 0
# plt.ion()
for i in range(100):
    temp = np.mat(np.zeros([3, 1]))
    inference = np.heaviside(data_x*theda, 1)  # thresholded predictions, 0 or 1
    for j in range(data_y.size):
        temp += learn_rate*(float(data_y[j])-float(inference[j]))*data_x[j].T
    theda += temp
    # print(np.sum(np.multiply(np.heaviside(data_x*theda, 1)-data_y, (data_x*theda))))
    # plt.cla()
    # plt.axis([15, 65, 40, 90])
    # plt.xlabel("exam 1 score")
    # plt.ylabel("exam 2 score")
    # for j in range(data_y.size):
    #     if data_y[j] == 1:
    #         plt.plot(data_x_plt[j][0], data_x_plt[j][1], 'b+')
    #     else:
    #         plt.plot(data_x_plt[j][0], data_x_plt[j][1], 'bo')
    # theta = np.array(theda)
    # plot_y = np.zeros(65 - 16)
    # plot_x = np.arange(16, 65)
    # for k in range(16, 65):  # don't reuse i: it shadows the outer loop variable
    #     plot_y[k - 16] = -(theta[0] + theta[2] * ((k - mean[0]) / variance[0])) / theta[1]
    #     plot_y[k - 16] = plot_y[k - 16] * variance[1] + mean[1]
    # plt.plot(plot_x, plot_y)
    # plt.pause(0.1)
theta = np.array(theda)
plot_y = np.zeros(65-16)
plot_x = np.arange(16, 65)
for i in range(16, 65):
    plot_y[i - 16] = -(theta[0] + theta[2] * ((i - mean[0]) / variance[0])) / theta[1]
    plot_y[i - 16] = plot_y[i - 16] * variance[1] + mean[1]
plt.plot(plot_x, plot_y)
plt.show()
The commented-out part again draws the line in real time. As noted above, once the direction stabilizes, the parameters keep growing proportionally, so the loss value also keeps increasing until it exceeds the representable range.
Result plot:
The second model is a multiclass perceptron: one output neuron per class, the class with the largest output wins, and the step function is dropped. Everything else is basically the same as the binary case.
The final result needs two lines drawn, but since this is really a two-class problem the two lines barely differ.
First the code. This time SGD draws 2 samples per step, which makes the oscillation smaller.
import numpy as np
import matplotlib.pyplot as plt
import random
data_x = np.loadtxt("ex4Data/ex4x.dat")
data_y = np.loadtxt("ex4Data/ex4y.dat")
plt.axis([15, 65, 40, 90])
plt.xlabel("exam 1 score")
plt.ylabel("exam 2 score")
for i in range(data_y.size):
    if data_y[i] == 1:
        plt.plot(data_x[i][0], data_x[i][1], 'b+')
    else:
        plt.plot(data_x[i][0], data_x[i][1], 'bo')
mean = data_x.mean(axis=0)
variance = data_x.std(axis=0)
data_x = (data_x-mean)/variance
data_y = data_y.reshape(-1, 1)  # make y a column for the stacking below
temp = np.ones(data_y.size)
data_x = np.c_[temp, data_x]
data_x = np.mat(data_x)
learn_rate = 0.1
theda = np.mat(np.zeros([3, 2]))
scores = np.array(data_x*theda)
# loss: per-sample gap between the largest class score and the true class's score
loss = np.sum(scores.max(axis=1) - scores[np.arange(data_y.size), data_y.astype(int).ravel()])
for i in range(3000):
    temp = theda.T  # view of theda, so updates below apply immediately
    for s in range(2):  # two samples per step
        z = random.randint(0, data_y.size - 1)
        pred = np.argmax(data_x[z]*theda)  # class with the largest output
        for j in range(2):
            if j == pred and j != int(data_y[z]):
                temp[j] -= learn_rate*data_x[z]
            if j != pred and j == int(data_y[z]):
                temp[j] += learn_rate*data_x[z]
    theda = temp.T
    # scores = np.array(data_x*theda)
    # print(np.sum(scores.max(axis=1) - scores[np.arange(data_y.size), data_y.astype(int).ravel()]))
theta = np.array(theda)
plot_y = np.zeros(65-16)
plot_x = np.arange(16, 65)
for i in range(16, 65):
    plot_y[i - 16] = -(theta[0][0] + theta[2][0] * ((i - mean[0]) / variance[0])) / theta[1][0]
    plot_y[i - 16] = plot_y[i - 16] * variance[1] + mean[1]
plt.plot(plot_x, plot_y)
for i in range(16, 65):
    plot_y[i - 16] = -(theta[0][1] + theta[2][1] * ((i - mean[0]) / variance[0])) / theta[1][1]
    plot_y[i - 16] = plot_y[i - 16] * variance[1] + mean[1]
plt.plot(plot_x, plot_y)
plt.show()
The result:
Convergence should not need this many steps, but again I was lazy, and honestly this dataset is getting a bit stale...
I may update this with further optimizations and ideas.
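A simple way to avoid a fixed 3000-step budget is to stop once the misclassification count stops improving. A sketch with my own toy data and a hand-picked patience (not from the assignment):

```python
import numpy as np

def train_until_stable(X, y, lr=0.1, max_steps=3000, patience=50):
    """Multiclass perceptron SGD with a simple stopping rule: quit once the
    misclassification count hasn't improved for `patience` straight steps."""
    n_classes = int(y.max()) + 1
    theta = np.zeros((X.shape[1], n_classes))
    rng = np.random.default_rng(0)
    best, since_best, steps = X.shape[0] + 1, 0, 0
    for steps in range(1, max_steps + 1):
        z = rng.integers(X.shape[0])
        pred = int(np.argmax(X[z] @ theta))
        t = int(y[z])
        if pred != t:                       # update only on mistakes
            theta[:, pred] -= lr * X[z]
            theta[:, t] += lr * X[z]
        errors = int(np.sum(np.argmax(X @ theta, axis=1) != y))
        if errors < best:
            best, since_best = errors, 0
        else:
            since_best += 1
            if since_best >= patience:
                break
    return theta, steps

# toy separable set: bias column + one feature, two classes
X = np.array([[1.0, 2.0], [1.0, 3.0], [1.0, -2.0], [1.0, -1.0]])
y = np.array([1, 1, 0, 0])
theta, steps = train_until_stable(X, y)
```

On non-separable data the error count plateaus above zero, so the patience rule still terminates the run early instead of burning the full budget.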