Machine Learning
I. Overview
1. What is machine learning?
- Artificial intelligence: any artificial means of solving, or approximately solving, problems that would otherwise require human intelligence.
- Machine learning: a computer program gains experience E from performing task T, and the quality of that experience can be measured by P. If, as T accumulates, performance as measured by P improves with E, the program is called a machine learning system.
- Such a system is self-improving, self-correcting, and self-reinforcing.
2. Why machine learning?
- It simplifies or replaces hand-built pattern recognition, making systems easier to develop, maintain, and upgrade.
- For problems whose algorithms are too complex, or which have no explicit solution, machine learning has a natural advantage.
- Running the learning process in reverse to uncover the rules hidden behind business data is data mining.
3. Types of machine learning
- Supervised, unsupervised, semi-supervised, and reinforcement learning
- Batch learning and incremental learning
- Instance-based learning and model-based learning
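The batch/incremental distinction can be sketched with scikit-learn, whose SGDRegressor accepts training data in chunks via partial_fit. This is a minimal sketch on synthetic data; the data, chunk size, and hyperparameters are illustrative assumptions, not part of the course material.

```python
import numpy as np
import sklearn.linear_model as lm

# Synthetic data following y = 1 + 2x (illustrative assumption)
x = np.arange(0, 1, 0.01).reshape(-1, 1)
y = 1 + 2 * x.ravel()

# Batch learning: fit sees all samples at once
batch_model = lm.LinearRegression()
batch_model.fit(x, y)

# Incremental learning: partial_fit consumes the data chunk by chunk,
# so the model can keep learning as new samples arrive
inc_model = lm.SGDRegressor(learning_rate='constant', eta0=0.05,
                            random_state=7)
for _ in range(200):                    # repeated passes to converge
    for start in range(0, len(x), 20):  # 20-sample chunks
        inc_model.partial_fit(x[start:start + 20], y[start:start + 20])

print(batch_model.predict([[0.5]]))  # ~2.0
print(inc_model.predict([[0.5]]))    # close to 2.0
```

Both models end up near the same line; the difference is purely in how the training data is fed to them.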
4. The machine learning workflow
- Data collection        }
- Data cleaning          }  data stage
- Data preprocessing     }
- Model selection        }  model stage
- Model training         }
- Model validation       }
- Model deployment (use in the business)
- Maintenance and upgrades
II. Data Preprocessing
import sklearn.preprocessing as sp
Sample matrix: each row is a sample, each column a feature; the input features map to an output.
              ________features________
              height  weight  age  gender      output
sample 1      1.7     60      25   male    ->  8000
sample 2      1.5     50      20   female  ->  6000
...
- Mean removal (standardization)
  Feature A: 10 +- 5
  Feature B: 10000 +- 5000
  Without scaling, features with small ranges are drowned out by features with large ones.
  Shift and scale each column (feature) of the sample matrix so that its mean is 0 and its standard deviation is 1. All features then contribute roughly equally to the model's predictions, and the model treats each feature evenly.
  For a column [a b c]:
  m = (a+b+c)/3,  s = sqrt(((a-m)^2+(b-m)^2+(c-m)^2)/3)
  Subtract the mean: [a' b' c'] with a'=a-m, b'=b-m, c'=c-m
  The new mean:
  m'
  = (a'+b'+c')/3
  = ((a-m)+(b-m)+(c-m))/3
  = (a+b+c)/3 - m
  = m - m
  = 0
  Divide by the standard deviation: [a" b" c"] with a"=a'/s, b"=b'/s, c"=c'/s
  m" = 0
  The new standard deviation:
  s"
  = sqrt((a"^2+b"^2+c"^2)/3)
  = sqrt((a'^2+b'^2+c'^2)/(3s^2))
  = sqrt(((a-m)^2+(b-m)^2+(c-m)^2)/(3s^2))
  = sqrt(3s^2/(3s^2))        (since (a-m)^2+(b-m)^2+(c-m)^2 = 3s^2 by the definition of s)
  = 1
  sp.scale(raw sample matrix) -> mean-removed (standardized) sample matrix
Code (std.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import sklearn.preprocessing as sp

raw_samples = np.array([
    [3, -1.5, 2, -5.4],
    [0, 4, -0.3, 2.1],
    [1, 3.3, -1.9, -4.3]])
print(raw_samples)
print(raw_samples.mean(axis=0))
print(raw_samples.std(axis=0))
# Standardize manually, column by column
std_samples = raw_samples.copy()
for col in std_samples.T:
    col_mean = col.mean()
    col_std = col.std()
    col -= col_mean
    col /= col_std
print(std_samples)
print(std_samples.mean(axis=0))
print(std_samples.std(axis=0))
# Standardize with sklearn
std_samples = sp.scale(raw_samples)
print(std_samples)
print(std_samples.mean(axis=0))
print(std_samples.std(axis=0))
- Range scaling
  e.g. scores of 90/150, 80/100, 5/5 are only comparable once mapped to a common range.
  Apply a linear transform to each column of the sample matrix so that every column's elements fall in the same interval.
  kx + b = y
  k * col_min + b = min  \  solve for k and b
  k * col_max + b = max  /
  [col_min  1] [k]   [min]
  [col_max  1] [b] = [max]
  ------------ ---   -----
       a        x      b
  x = np.linalg.solve(a, b)
  or x = np.linalg.lstsq(a, b)[0]
  range scaler = sp.MinMaxScaler(
      feature_range=(min, max))
  range scaler.fit_transform(raw sample matrix)
  -> range-scaled sample matrix
  Range scaling with [0, 1] as the target interval is sometimes also called "normalization".
Code (mms.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import sklearn.preprocessing as sp

raw_samples = np.array([
    [3, -1.5, 2, -5.4],
    [0, 4, -0.3, 2.1],
    [1, 3.3, -1.9, -4.3]])
print(raw_samples)
# Scale manually: per column, solve k*col_min+b=0, k*col_max+b=1
mms_samples = raw_samples.copy()
for col in mms_samples.T:
    col_min = col.min()
    col_max = col.max()
    a = np.array([
        [col_min, 1],
        [col_max, 1]])
    b = np.array([0, 1])
    x = np.linalg.solve(a, b)
    col *= x[0]
    col += x[1]
print(mms_samples)
# Scale with sklearn
mms = sp.MinMaxScaler(feature_range=(0, 1))
mms_samples = mms.fit_transform(raw_samples)
print(mms_samples)
- Normalization
          Python  C/C++  Java  PHP
  2016    20      30     40    10    /100
  2017    30      20     30    10    /90
  2018    10      5      1     0     /16
  Divide each feature value of a sample by the sum of the absolute values of all that sample's features, expressing each feature as a proportion.
  sp.normalize(raw sample matrix, norm='l1')
  -> normalized sample matrix
  l1 - l1 norm: the sum of the absolute values of the vector's elements
  l2 - l2 norm: the square root of the sum of the squares of the vector's elements
  ...
  ln - ln norm: the nth root of the sum of the nth powers of the absolute values of the vector's elements
Code (nor.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import sklearn.preprocessing as sp

raw_samples = np.array([
    [3, -1.5, 2, -5.4],
    [0, 4, -0.3, 2.1],
    [1, 3.3, -1.9, -4.3]])
print(raw_samples)
# Normalize manually: divide each row by its absolute-value sum
nor_samples = raw_samples.copy()
for row in nor_samples:
    row_absum = abs(row).sum()
    row /= row_absum
print(nor_samples)
print(abs(nor_samples).sum(axis=1))
# Normalize with sklearn
nor_samples = sp.normalize(raw_samples, norm='l1')
print(nor_samples)
print(abs(nor_samples).sum(axis=1))
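The same API handles the l2 norm. A small check of the definition above (the sample values here are illustrative, not from the course data):

```python
import numpy as np
import sklearn.preprocessing as sp

samples = np.array([[3.0, 4.0],
                    [1.0, 1.0]])
# norm='l2' divides each row by the square root of its sum of squares
l2_samples = sp.normalize(samples, norm='l2')
print(l2_samples[0])                  # [0.6 0.8], since sqrt(3^2 + 4^2) = 5
print((l2_samples ** 2).sum(axis=1))  # each row now has unit l2 norm: [1. 1.]
```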
- Binarization
  Given a threshold, set every element of the sample matrix above the threshold to 1 and every other element to 0, producing a matrix made up entirely of 1s and 0s.
  binarizer = sp.Binarizer(threshold=threshold)
  binarizer.transform(raw sample matrix)
  -> binarized sample matrix
Code (bin.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import sklearn.preprocessing as sp

raw_samples = np.array([
    [3, -1.5, 2, -5.4],
    [0, 4, -0.3, 2.1],
    [1, 3.3, -1.9, -4.3]])
print(raw_samples)
# Binarize manually against the threshold 1.4
bin_samples = raw_samples.copy()
bin_samples[bin_samples <= 1.4] = 0
bin_samples[bin_samples > 1.4] = 1
print(bin_samples)
# Binarize with sklearn
binarizer = sp.Binarizer(threshold=1.4)
bin_samples = binarizer.transform(raw_samples)
print(bin_samples)
- One-hot encoding
  Encode each feature value as a sequence containing exactly one 1 and some number of 0s. This preserves every detail of the sample matrix while yielding a sparse matrix of only 1s and 0s, which improves the model's robustness and saves memory.
  1 3 2
  7 5 4
  1 8 6
  7 3 9
  ----------------------
  per-column codes:
  1:10   3:100   2:1000
  7:01   5:010   4:0100
         8:001   6:0010
                 9:0001
  ----------------------
  101001000
  010100100
  100010010
  011000001
  one-hot encoder = sp.OneHotEncoder(
      sparse=whether to return a sparse matrix (default True), dtype=type)
  one-hot encoder.fit_transform(raw sample matrix)
  -> one-hot-encoded sample matrix
Code (ohe.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import sklearn.preprocessing as sp

raw_samples = np.array([
    [1, 3, 2],
    [7, 5, 4],
    [1, 8, 6],
    [7, 3, 9]])
print(raw_samples)
# Build the list of code tables, one per column
code_tables = []
for col in raw_samples.T:
    # Code table for one column
    code_table = {}
    for val in col:
        code_table[val] = None
    code_tables.append(code_table)
# Fill each code table with one-hot vectors
for code_table in code_tables:
    size = len(code_table)
    for one, key in enumerate(sorted(
            code_table.keys())):
        code_table[key] = np.zeros(
            shape=size, dtype=int)
        code_table[key][one] = 1
# One-hot encode the raw sample matrix using the code tables
ohe_samples = []
for raw_sample in raw_samples:
    ohe_sample = np.array([], dtype=int)
    for i, key in enumerate(raw_sample):
        ohe_sample = np.hstack(
            (ohe_sample, code_tables[i][key]))
    ohe_samples.append(ohe_sample)
ohe_samples = np.array(ohe_samples)
print(ohe_samples)
# Encode with sklearn
ohe = sp.OneHotEncoder(sparse=False, dtype=int)
ohe_samples = ohe.fit_transform(raw_samples)
print(ohe_samples)
- Label encoding
  Text-valued features -> numeric-valued features
  The numeric codes come from the lexicographic (dictionary) order of the label strings and carry no meaning related to the labels themselves.
  position     car
  employee     audi   - 0
  team lead    bmw    - 1
  manager      ford   - 2
  boss         toyota - 3
  label encoder = sp.LabelEncoder()
  label encoder.fit_transform(raw sample matrix)
  -> label-encoded sample matrix
  label encoder.inverse_transform(label-encoded sample matrix)
  -> raw sample matrix
Code (lab.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import sklearn.preprocessing as sp

raw_samples = np.array([
    'audi', 'ford', 'audi', 'toyota',
    'ford', 'bmw', 'toyota', 'bmw'])
print(raw_samples)
lbe = sp.LabelEncoder()
lbe_samples = lbe.fit_transform(raw_samples)
print(lbe_samples)
raw_samples = lbe.inverse_transform(lbe_samples)
print(raw_samples)
III. Basic Problems in Machine Learning
- Regression: given known inputs and outputs distributed over a continuous domain, training a model finds the relationship between them, usually formalized as a function such as y = w0 + w1x + w2x^2 + ...; for an input whose output is unknown, that function predicts the corresponding continuous-domain output.
- Classification: if the output of a regression problem moves from a continuous domain to a discrete one, the problem becomes a classification problem.
- Clustering: find some pattern, such as similarity, in the known inputs, partition the inputs into clusters by that pattern, and assign new inputs to clusters in the same way.
- Dimensionality reduction: from a large number of features, select the few that matter most for the model's predictions, lowering the dimensionality of the input samples and improving the model's performance.
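Each of the four problem types maps onto a scikit-learn estimator. A hedged sketch on tiny synthetic data (the particular estimators and data below are illustrative choices, not part of the course material):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

x = np.array([[0.0], [1.0], [2.0], [3.0], [4.0]])

# Regression: continuous-domain output
reg = LinearRegression().fit(x, [1, 3, 5, 7, 9])
print(reg.predict([[10.0]]))  # ~21

# Classification: discrete-domain output
clf = LogisticRegression().fit(x, [0, 0, 0, 1, 1])
print(clf.predict([[4.0]]))   # [1]

# Clustering: no labels; group inputs by similarity
km = KMeans(n_clusters=2, n_init=10, random_state=7).fit(x)
print(km.labels_)             # nearby points share a cluster label

# Dimensionality reduction: drop redundant feature directions
xy = np.hstack((x, 2 * x))    # the second column is redundant
print(PCA(n_components=1).fit_transform(xy).shape)  # (5, 1)
```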
IV. Univariate Linear Regression
- Prediction function
  input   output
  0       1
  1       3
  2       5
  3       7
  4       9
  ...
  y = 1 + 2x
  10 -> 21
  y = w0 + w1x
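As a quick check of the table above, an ordinary least-squares fit recovers w0 = 1 and w1 = 2 from the five samples. np.polyfit stands in here only to verify the numbers; it is not the course's method, which trains the parameters by gradient descent.

```python
import numpy as np

x = np.array([0, 1, 2, 3, 4])
y = np.array([1, 3, 5, 7, 9])
# degree-1 least-squares fit returns [slope, intercept]
w1, w0 = np.polyfit(x, y, 1)
print(w0, w1)        # ~1.0, ~2.0
print(w0 + w1 * 10)  # ~21.0
```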
  The task is to find the model parameters w0 and w1 of the prediction function that capture the relationship between input and output.
- Per-sample error
  x -> [y' = w0 + w1x] -> y',  e = 1/2 (y - y')^2
- Total sample error
  E = Σ[1/2 (y - y')^2]
- Loss function
  Loss(w0, w1) = Σ[1/2 (y - (w0 + w1x))^2]
  The task becomes finding the model parameters w0 and w1 that minimize the loss function.
- Gradient descent
  1. Pick a random initial set of model parameters w0 and w1.
  2. Compute the gradient of the loss function at those parameters:
     [∂Loss/∂w0, ∂Loss/∂w1]
  3. Compute the correction step against the gradient:
     [-η∂Loss/∂w0, -η∂Loss/∂w1]
  4. Update the parameters:
     w0 = w0 - η∂Loss/∂w0
     w1 = w1 - η∂Loss/∂w1
  5. Go back to step 2 until a termination condition is met:
     enough iterations have run,
     the loss is small enough, or
     the loss no longer decreases noticeably.
  Loss = Σ[1/2 (y - y')^2],  y' = w0 + w1x
  ∂Loss/∂w0
  = Σ[∂(1/2 (y - y')^2)/∂w0]
  = Σ[(y - y') ∂(y - y')/∂w0]
  = Σ[(y - y')(∂y/∂w0 - ∂y'/∂w0)]
  = -Σ[(y - y') ∂y'/∂w0]        (y does not depend on w0)
  = -Σ(y - y')                  (∂y'/∂w0 = 1)
  ∂Loss/∂w1
  = Σ[∂(1/2 (y - y')^2)/∂w1]
  ...
  = -Σ[(y - y') ∂y'/∂w1]
  = -Σ[(y - y') x]              (∂y'/∂w1 = x)
Code (gd.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import matplotlib.pyplot as mp
from mpl_toolkits.mplot3d import axes3d

train_x = np.array([0.5, 0.6, 0.8, 1.1, 1.4])
train_y = np.array([5.0, 5.5, 6.0, 6.8, 7.0])
n_epoches = 1000
lrate = 0.01
epoches, losses = [], []
w0, w1 = [1], [1]
for epoch in range(1, n_epoches + 1):
    epoches.append(epoch)
    losses.append(((train_y - (
        w0[-1] + w1[-1] * train_x)) ** 2 / 2).sum())
    print('{:4}> w0={:.8f}, w1={:.8f}, loss={:.8f}'.format(
        epoches[-1], w0[-1], w1[-1], losses[-1]))
    d0 = -(train_y - (
        w0[-1] + w1[-1] * train_x)).sum()
    d1 = -((train_y - (
        w0[-1] + w1[-1] * train_x)) * train_x).sum()
    w0.append(w0[-1] - lrate * d0)
    w1.append(w1[-1] - lrate * d1)
w0 = np.array(w0[:-1])
w1 = np.array(w1[:-1])
sorted_indices = train_x.argsort()
test_x = train_x[sorted_indices]
test_y = train_y[sorted_indices]
pred_test_y = w0[-1] + w1[-1] * test_x
grid_w0, grid_w1 = np.meshgrid(
    np.linspace(0, 9, 500),
    np.linspace(0, 3.5, 500))
flat_w0, flat_w1 = grid_w0.ravel(), grid_w1.ravel()
flat_loss = (((flat_w0 + np.outer(
    train_x, flat_w1)) - train_y.reshape(
        -1, 1)) ** 2).sum(axis=0) / 2
grid_loss = flat_loss.reshape(grid_w0.shape)
mp.figure('Linear Regression', facecolor='lightgray')
mp.title('Linear Regression', fontsize=20)
mp.xlabel('x', fontsize=14)
mp.ylabel('y', fontsize=14)
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.scatter(train_x, train_y, marker='s',
           c='dodgerblue', alpha=0.5, s=80,
           label='Training')
mp.scatter(test_x, test_y, marker='D',
           c='orangered', alpha=0.5, s=60,
           label='Testing')
mp.scatter(test_x, pred_test_y, c='orangered',
           alpha=0.5, s=60, label='Predicted')
for x, y, pred_y in zip(
        test_x, test_y, pred_test_y):
    mp.plot([x, x], [y, pred_y], c='orangered',
            alpha=0.5, linewidth=1)
mp.plot(test_x, pred_test_y, '--', c='limegreen',
        label='Regression', linewidth=1)
mp.legend()
mp.figure('Training Progress', facecolor='lightgray')
mp.subplot(311)
mp.title('Training Progress', fontsize=20)
mp.ylabel('w0', fontsize=14)
mp.gca().xaxis.set_major_locator(
    mp.MultipleLocator(100))
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.plot(epoches, w0, c='dodgerblue', label='w0')
mp.legend()
mp.subplot(312)
mp.ylabel('w1', fontsize=14)
mp.gca().xaxis.set_major_locator(
    mp.MultipleLocator(100))
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.plot(epoches, w1, c='limegreen', label='w1')
mp.legend()
mp.subplot(313)
mp.xlabel('epoch', fontsize=14)
mp.ylabel('loss', fontsize=14)
mp.gca().xaxis.set_major_locator(
    mp.MultipleLocator(100))
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.plot(epoches, losses, c='orangered', label='loss')
mp.legend()
mp.tight_layout()
mp.figure('Loss Function')
ax = mp.gca(projection='3d')
mp.title('Loss Function', fontsize=20)
ax.set_xlabel('w0', fontsize=14)
ax.set_ylabel('w1', fontsize=14)
ax.set_zlabel('loss', fontsize=14)
mp.tick_params(labelsize=10)
ax.plot_surface(grid_w0, grid_w1, grid_loss,
                rstride=10, cstride=10, cmap='jet')
ax.plot(w0, w1, losses, 'o-', c='orangered',
        label='BGD')
mp.legend()
mp.figure('Batch Gradient Descent', facecolor='lightgray')
mp.title('Batch Gradient Descent', fontsize=20)
mp.xlabel('x', fontsize=14)
mp.ylabel('y', fontsize=14)
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.contourf(grid_w0, grid_w1, grid_loss, 1000,
            cmap='jet')
cntr = mp.contour(grid_w0, grid_w1, grid_loss, 10,
                  colors='black', linewidths=0.5)
mp.clabel(cntr, inline_spacing=0.1, fmt='%.2f',
          fontsize=8)
mp.plot(w0, w1, 'o-', c='orangered', label='BGD')
mp.legend()
mp.show()
import sklearn.linear_model as lm
linear regressor = lm.LinearRegression()
linear regressor.fit(known inputs, known outputs)  # computes the model parameters
linear regressor.predict(new inputs) -> new outputs
Code (line.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import sklearn.linear_model as lm
import sklearn.metrics as sm
import matplotlib.pyplot as mp

x, y = [], []
with open('../../data/single.txt', 'r') as f:
    for line in f.readlines():
        data = [float(substr) for substr
                in line.split(',')]
        x.append(data[:-1])
        y.append(data[-1])
x = np.array(x)
y = np.array(y)
model = lm.LinearRegression()
model.fit(x, y)
pred_y = model.predict(x)
# R^2 (coefficient of determination); 1 means a perfect fit
print(sm.r2_score(y, pred_y))
mp.figure('Linear Regression', facecolor='lightgray')
mp.title('Linear Regression', fontsize=20)
mp.xlabel('x', fontsize=14)
mp.ylabel('y', fontsize=14)
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.scatter(x, y, c='dodgerblue', alpha=0.75,
           s=60, label='Sample')
sorted_indices = x.ravel().argsort()
mp.plot(x[sorted_indices], pred_y[sorted_indices],
        c='orangered', label='Regression')
mp.legend()
mp.show()
Model dumping and loading: pickle
Code (dump.py, load.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import pickle
import numpy as np
import sklearn.linear_model as lm
import sklearn.metrics as sm
import matplotlib.pyplot as mp

x, y = [], []
with open('../../data/single.txt', 'r') as f:
    for line in f.readlines():
        data = [float(substr) for substr
                in line.split(',')]
        x.append(data[:-1])
        y.append(data[-1])
x = np.array(x)
y = np.array(y)
model = lm.LinearRegression()
model.fit(x, y)
pred_y = model.predict(x)
print(sm.r2_score(y, pred_y))
# Dump the trained model to disk
with open('../../data/linear.pkl', 'wb') as f:
    pickle.dump(model, f)
mp.figure('Linear Regression', facecolor='lightgray')
mp.title('Linear Regression', fontsize=20)
mp.xlabel('x', fontsize=14)
mp.ylabel('y', fontsize=14)
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.scatter(x, y, c='dodgerblue', alpha=0.75,
           s=60, label='Sample')
sorted_indices = x.ravel().argsort()
mp.plot(x[sorted_indices], pred_y[sorted_indices],
        c='orangered', label='Regression')
mp.legend()
mp.show()
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import pickle
import numpy as np
import sklearn.metrics as sm
import matplotlib.pyplot as mp

x, y = [], []
with open('../../data/single.txt', 'r') as f:
    for line in f.readlines():
        data = [float(substr) for substr
                in line.split(',')]
        x.append(data[:-1])
        y.append(data[-1])
x = np.array(x)
y = np.array(y)
# Load the previously dumped model instead of fitting a new one
with open('../../data/linear.pkl', 'rb') as f:
    model = pickle.load(f)
pred_y = model.predict(x)
print(sm.r2_score(y, pred_y))
mp.figure('Linear Regression', facecolor='lightgray')
mp.title('Linear Regression', fontsize=20)
mp.xlabel('x', fontsize=14)
mp.ylabel('y', fontsize=14)
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.scatter(x, y, c='dodgerblue', alpha=0.75,
           s=60, label='Sample')
sorted_indices = x.ravel().argsort()
mp.plot(x[sorted_indices], pred_y[sorted_indices],
        c='orangered', label='Regression')
mp.legend()
mp.show()
V. Ridge Regression
- Loss(w0, w1) = Σ[1/2 (y - (w0 + w1x))^2] + regularization strength × f(w0, w1)
- Regularization adds a penalty term to the loss function to weaken how tightly the model parameters fit the training data, so that a few abnormal samples that clearly deviate from the normal range cannot distort the regression.
Code:
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import sklearn.linear_model as lm
import matplotlib.pyplot as mp

x, y = [], []
with open('../../data/abnormal.txt', 'r') as f:
    for line in f.readlines():
        data = [float(substr) for substr
                in line.split(',')]
        x.append(data[:-1])
        y.append(data[-1])
x = np.array(x)
y = np.array(y)
model1 = lm.LinearRegression()
model1.fit(x, y)
pred_y1 = model1.predict(x)
# Ridge regression with regularization strength 300
model2 = lm.Ridge(300, fit_intercept=True)
model2.fit(x, y)
pred_y2 = model2.predict(x)
mp.figure('Linear & Ridge Regression', facecolor='lightgray')
mp.title('Linear & Ridge Regression', fontsize=20)
mp.xlabel('x', fontsize=14)
mp.ylabel('y', fontsize=14)
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.scatter(x, y, c='dodgerblue', alpha=0.75,
           s=60, label='Sample')
sorted_indices = x.ravel().argsort()
mp.plot(x[sorted_indices], pred_y1[sorted_indices],
        c='orangered', label='Linear')
mp.plot(x[sorted_indices], pred_y2[sorted_indices],
        c='limegreen', label='Ridge')
mp.legend()
mp.show()
VI. Polynomial Regression
- Multivariate linear: y = w0 + w1x1 + w2x2 + w3x3 + ... + wnxn
  Substituting x1 = x^1, x2 = x^2, ..., xn = x^n turns it into a
  univariate polynomial: y = w0 + w1x + w2x^2 + w3x^3 + ... + wnx^n
  x -> polynomial feature expander -x1...xn-> linear regressor -> w0...wn
  \____________________________________________________________/
                            pipeline
Code (poly.py):
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
import numpy as np
import sklearn.pipeline as pl
import sklearn.preprocessing as sp
import sklearn.linear_model as lm
import sklearn.metrics as sm
import matplotlib.pyplot as mp

train_x, train_y = [], []
with open('../../data/single.txt', 'r') as f:
    for line in f.readlines():
        data = [float(substr) for substr
                in line.split(',')]
        train_x.append(data[:-1])
        train_y.append(data[-1])
train_x = np.array(train_x)
train_y = np.array(train_y)
# Pipeline: degree-10 polynomial feature expansion + linear regression
model = pl.make_pipeline(sp.PolynomialFeatures(10),
                         lm.LinearRegression())
model.fit(train_x, train_y)
pred_train_y = model.predict(train_x)
print(sm.r2_score(train_y, pred_train_y))
test_x = np.linspace(train_x.min(), train_x.max(),
                     1000).reshape(-1, 1)
pred_test_y = model.predict(test_x)
mp.figure('Polynomial Regression', facecolor='lightgray')
mp.title('Polynomial Regression', fontsize=20)
mp.xlabel('x', fontsize=14)
mp.ylabel('y', fontsize=14)
mp.tick_params(labelsize=10)
mp.grid(linestyle=':')
mp.scatter(train_x, train_y, c='dodgerblue',
           alpha=0.75, s=60, label='Sample')
mp.plot(test_x, pred_test_y, c='orangered',
        label='Regression')
mp.legend()
mp.show()