This is my personal website:
https://endlesslethe.com/perceptron-with-python.html
It has more write-ups, and the latest updates are only published there. The formatting may also look a bit nicer =v=
Preface
I originally meant to write a full summary of the perceptron, but a thorough treatment would involve far too much material, while a shallow one would amount to copying the original text, which defeats the purpose of writing at all.
So instead I am simply posting my own perceptron implementation to help with studying.
I also provide a data generator that produces the data needed to train the model.
The results are visualized briefly; the plotting code is at the GitHub address given at the end of this post. A star would be much appreciated =v=
The Perceptron Model
The perceptron algorithm computes a hyperplane S that separates a linearly separable data set.
We define the function to optimize as a loss function:
L = the total distance of the misclassified points to the hyperplane S
The distance from a single point \(x_i\) to S is
\(d = \frac{1}{{\left\| w \right\|}}\left| {w \cdot {x_i} + b} \right|\)
Dropping the constant factor \(1/\left\| w \right\|\) (it does not change which points are misclassified), the loss over the set \(M\) of misclassified points becomes
\(L = - \sum\limits_{{x_i} \in M} {{y_i}} (w \cdot {x_i} + b)\)
We minimize \(L\) by stochastic gradient descent:
\(\frac{{\partial L}}{{\partial w}} = - \sum\limits_{{x_i} \in M} {{y_i}} {x_i},\qquad \frac{{\partial L}}{{\partial b}} = - \sum\limits_{{x_i} \in M} {{y_i}} \)
so for each misclassified point we update
\(w \leftarrow w + \eta {y_i}{x_i},\qquad b \leftarrow b + \eta {y_i}\)
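The update rule can be traced by hand on one misclassified point (the values below are made up for illustration; they are not from the training runs later in this post):

```python
import numpy as np

eta = 0.1                      # learning rate (illustrative value)
w = np.array([0.0, 0.0])
b = 0.0
x_i = np.array([1.0, 1.0])     # a hypothetical sample
y_i = -1                       # its true label

# with w = 0 and b = 0, y_i * (w.x_i + b) = 0 <= 0, so the point counts
# as misclassified and triggers an update
if y_i * (np.dot(w, x_i) + b) <= 0:
    w = w + eta * y_i * x_i    # w <- w + eta * y_i * x_i
    b = b + eta * y_i          # b <- b + eta * y_i

print(w, b)                    # → [-0.1 -0.1] -0.1
```

After this single step the point is already on the correct side: \(y_i(w \cdot x_i + b) = 0.3 > 0\).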
Algorithm
Input: training set (xi, yi); learning rate η
Output: w, b; the model f(x)=sign(wx+b)
- Choose initial values w0, b0
- Randomly pick a sample (xi, yi)
- If it is misclassified, update w and b as above; repeat
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(1)

def sign(x):
    if x > 0:
        return 1
    return -1

def svg(x, y, w, b, learning_rate):
    # one step of stochastic gradient descent:
    # pick a random sample and update (w, b) if it is misclassified
    i = np.random.randint(0, x.shape[0])
    if y[i] * (np.dot(w, x[i]) + b) <= 0:
        w = w + learning_rate * x[i].T * y[i]
        b = b + learning_rate * y[i]
    params = {'w': w, 'b': b}
    return params
Prediction
Input: x
Output: y=sign(wx+b)
def predict(x, w, b):
    return sign(np.dot(w, x) + b)
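A quick sanity check of predict on a hand-made point (sign is repeated here so the snippet stands alone; the weights are illustrative, not trained):

```python
import numpy as np

def sign(x):
    if x > 0:
        return 1
    return -1

def predict(x, w, b):
    return sign(np.dot(w, x) + b)

w = np.array([[1.0, 1.0]])      # hypothetical weights, shape (1, dim)
x = np.array([[2.0], [0.5]])    # one sample as a (dim, 1) column vector
print(predict(x, w, -1.0))      # 1*2 + 1*0.5 - 1 = 1.5 > 0, so prints 1
```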
Training on a small data set
dim = 2              # number of features
dataSize = 10        # size of the data set
learning_rate = 0.1  # learning rate
ITERATE = 1000       # number of iterations

x_train = np.array([[-1, 1], [-2, 0], [-1, 0], [-0.5, 0.5], [0, 0.5],
                    [1, 3], [2, 3], [1, 1], [1, -0.5], [1, 0]])
x_train = x_train.reshape(dataSize, dim, 1)
y_train = np.array([1, 1, 1, 1, 1, -1, -1, -1, -1, -1])

w = np.zeros((1, dim))
b = 0
assert x_train.shape == (dataSize, dim, 1)
assert x_train[0].shape == (dim, 1)
assert w.shape == (1, dim)
for _ in range(ITERATE):
    params = svg(x_train, y_train, w, b, learning_rate)
    w = params['w']
    b = params['b']
print(w)
print(b)
Training result
Data generator
def getData(rg, dim, size):
    # a random hyperplane could be used instead:
    # w = np.random.rand(1, dim)
    # b = np.random.randint(-rg/2, rg/2)
    w = np.array([1, 1])   # fixed true hyperplane
    b = 2.5
    x = []
    y = []
    for i in range(size):
        x_i = np.random.rand(dim, 1) * rg - rg / 2  # uniform in [-rg/2, rg/2)
        y_i = -1
        if np.dot(w, x_i) + b > 0:
            y_i = 1
        x.append(x_i)
        y.append(y_i)
    x = np.array(x)
    y = np.array(y)
    data = {"x": x, "y": y}
    return data
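Because getData labels every point by the same fixed hyperplane (w = [1, 1], b = 2.5), the generated set is linearly separable by construction, so the perceptron is guaranteed to converge on it. The labeling rule can be checked in isolation (this snippet re-implements the rule for one point rather than calling getData):

```python
import numpy as np
np.random.seed(0)

w_true = np.array([1.0, 1.0])             # the generator's fixed hyperplane
b_true = 2.5
rg = 10

x_i = np.random.rand(2, 1) * rg - rg / 2  # one sample, uniform in [-5, 5)^2
score = float(np.dot(w_true, x_i.ravel()) + b_true)
y_i = 1 if score > 0 else -1

# the label always agrees with the sign of the true decision function,
# so every generated point lies on the correct side of the hyperplane
assert y_i * score > 0
print(y_i)
```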
Testing on a larger data set
rangeOfNumber = 10    # range of the random numbers
dim = 2               # number of features
dataSize = 1000       # size of the training set
testSize = 2000       # size of the test set
learning_rate = 0.05  # learning rate
ITERATE = 1000        # number of iterations

data_train = getData(rangeOfNumber, dim, dataSize)
x_train = data_train["x"]
y_train = data_train["y"]

w = np.zeros((1, dim))
b = 0
assert x_train.shape == (dataSize, dim, 1)
assert x_train[0].shape == (dim, 1)
assert w.shape == (1, dim)
for _ in range(ITERATE):
    params = svg(x_train, y_train, w, b, learning_rate)
    w = params['w']
    b = params['b']
print(w)
print(b)
Training result
Prediction on the test set
data_test = getData(rangeOfNumber, dim, testSize)
x_test = data_test["x"]
y_test = data_test["y"]
y_predict = []
for i in range(testSize):
    y_predict.append(predict(x_test[i], w, b))
cnt = 0
for i in range(testSize):
    if y_test[i] == y_predict[i]:
        cnt = cnt + 1
print("Accuracy: %.1f%%" % (cnt / testSize * 100))
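The counting loop above can also be collapsed into a single NumPy expression; a small self-contained illustration with made-up labels:

```python
import numpy as np

# hypothetical labels and predictions, just to show the vectorized form
y_test = np.array([1, -1, 1, 1, -1])
y_predict = np.array([1, -1, -1, 1, -1])

# mean of the boolean match array is the fraction of correct predictions
accuracy = np.mean(y_test == y_predict) * 100
print("Accuracy: %.1f%%" % accuracy)   # → Accuracy: 80.0%
```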