Handwritten Digit Recognition
MNIST: recognizing handwritten digits with a Keras multilayer perceptron
(Figure: sample images from the MNIST dataset; the original image link is broken.)
1. Data preprocessing
Import the required modules
from keras.utils import np_utils
import numpy as np
np.random.seed(10)
Load the MNIST dataset
from keras.datasets import mnist
(x_train_image, y_train_label),\
(x_test_image, y_test_label) = mnist.load_data()
Reshape the features (the image pixel values):
flatten each 28×28 image into a vector of 784 floats
x_Train = x_train_image.reshape(60000, 784).astype('float32')
x_Test = x_test_image.reshape(10000, 784).astype('float32')
Normalize the features (scale the pixel values into [0, 1])
to improve accuracy
x_Train_normalize = x_Train / 255
x_Test_normalize = x_Test / 255
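The effect of dividing by 255 can be checked with plain NumPy; this is a small sketch using a synthetic uint8 array rather than the real MNIST data:

```python
import numpy as np

# A stand-in for a batch of flattened images: uint8 pixel values in [0, 255].
fake_images = np.array([[0, 51, 102, 255]], dtype=np.uint8)

# Cast to float32 first, then scale into [0, 1], exactly as in the text.
normalized = fake_images.astype('float32') / 255

print(normalized.min(), normalized.max())  # the values now span [0.0, 1.0]
```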
Convert the labels (the true digit values) with one-hot encoding
y_Train_OneHot = np_utils.to_categorical(y_train_label)
y_Test_OneHot = np_utils.to_categorical(y_test_label)
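What `to_categorical` does here can be reproduced in one line of NumPy: each integer label selects a row of the identity matrix. This `to_one_hot` helper is an illustrative equivalent, not the Keras implementation:

```python
import numpy as np

def to_one_hot(labels, num_classes=10):
    """Map integer labels to one-hot rows, like np_utils.to_categorical."""
    return np.eye(num_classes, dtype='float32')[labels]

y = np.array([5, 0, 4])   # the first three MNIST training labels
one_hot = to_one_hot(y)
print(one_hot.shape)      # (3, 10): one row of 10 probabilities-to-be per label
```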
2. Build the model
The input layer has 784 neurons, the hidden layer has 1000 neurons, and the output layer has 10 neurons.
Import the required modules
from keras.models import Sequential
from keras.layers import Dense
Build a Sequential model
(a linear stack of layers)
model = Sequential()
Add the input and hidden layers
model.add(Dense(units = 1000, # 1000 neurons in the hidden layer
                input_dim = 784, # 784 neurons in the input layer
                kernel_initializer = 'normal', # initialize the weights and biases with normally distributed random numbers
                activation = 'relu')) # ReLU activation: negative values become 0, positive values pass through unchanged
Add the output layer
Add a Dense layer with a softmax activation, which converts the neurons' outputs into a probability for each digit
model.add(Dense(units = 10, # 10 neurons in the output layer
                kernel_initializer = 'normal', # initialize the weights and biases with normally distributed random numbers
                activation = 'softmax')) # softmax activation
# No input_dim is needed here: Keras infers it from the previous layer's 1000 output units
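Why the softmax outputs can be read as per-digit probabilities is easy to see in a few lines of NumPy; this is a sketch of the formula, not Keras's internal implementation:

```python
import numpy as np

def softmax(z):
    # Subtract the max for numerical stability; the result is mathematically unchanged.
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1, -1.0, 0.0, 0.5, 1.5, -0.5, 0.2, 0.8])
probs = softmax(logits)
print(probs.sum())          # all 10 probabilities sum to 1
print(int(probs.argmax()))  # the largest logit (index 0) gets the highest probability
```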
View the model summary
print(model.summary())
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 1000) 785000
_________________________________________________________________
dense_2 (Dense) (None, 10) 10010
=================================================================
Total params: 795,010
Trainable params: 795,010
Non-trainable params: 0
_________________________________________________________________
None
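The Param # column in the summary can be checked by hand: a Dense layer has (inputs × units) weights plus one bias per unit.

```python
# Hidden layer: 784 inputs fully connected to 1000 neurons, plus 1000 biases.
hidden_params = 784 * 1000 + 1000
# Output layer: 1000 inputs fully connected to 10 neurons, plus 10 biases.
output_params = 1000 * 10 + 10

print(hidden_params)                  # 785000
print(output_params)                  # 10010
print(hidden_params + output_params)  # 795010, matching "Total params"
```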
3. Train the model
Define the training configuration
model.compile(loss = 'categorical_crossentropy', # categorical cross-entropy loss
              optimizer = 'adam', # use the Adam optimizer
              metrics = ['accuracy'])
Notes:
-
Cross-entropy measures the distance between two probability distributions; equivalently, it measures how hard it is to express the true distribution p using the predicted distribution q. The smaller the cross-entropy, the closer the two distributions.
-
How Adam works
Adam differs from classic stochastic gradient descent. SGD keeps a single learning rate (alpha) for all weight updates, and that rate does not change during training. Adam instead computes first- and second-moment estimates of the gradients to derive an independent, adaptive learning rate for each parameter.
Advantages:
computationally efficient
low memory requirements
invariant to diagonal rescaling of the gradients (shown in the Adam paper)
well suited to optimization problems with large amounts of data and parameters
suited to non-stationary objectives
suited to problems with very noisy or sparse gradients
hyperparameters have intuitive interpretations and typically need very little tuning
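Both points can be illustrated in NumPy: cross-entropy shrinks as the prediction q approaches the true distribution p, and a single Adam step gives each parameter its own effective step size. The Adam part is a sketch of the published update rule at step t = 1 with the default hyperparameters, not Keras's optimizer code:

```python
import numpy as np

# --- Cross-entropy: how well prediction q matches true distribution p ---
def cross_entropy(p, q):
    return float(-np.sum(p * np.log(q)))

p = np.array([0.0, 1.0, 0.0])         # one-hot "correct answer"
q_far = np.array([0.4, 0.3, 0.3])     # poor prediction
q_near = np.array([0.05, 0.9, 0.05])  # better prediction
print(cross_entropy(p, q_far) > cross_entropy(p, q_near))  # True: closer q, smaller loss

# --- One Adam update at step t = 1, with the paper's default hyperparameters ---
alpha, beta1, beta2, eps = 0.001, 0.9, 0.999, 1e-8
theta = np.array([0.5, -0.5])   # two parameters
grad = np.array([0.1, -2.0])    # gradients of very different magnitude
m = (1 - beta1) * grad          # first-moment estimate (m starts at 0)
v = (1 - beta2) * grad ** 2     # second-moment estimate (v starts at 0)
m_hat = m / (1 - beta1 ** 1)    # bias correction for t = 1
v_hat = v / (1 - beta2 ** 1)
theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
print(theta)  # both parameters moved by roughly alpha, each at its own adaptive rate
```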
Start training
train_history = model.fit(x = x_Train_normalize, # features
                          y = y_Train_OneHot, # one-hot labels
                          validation_split = 0.2, # hold out 20%: 60000*0.8 for training, 60000*0.2 for validation
                          epochs = 10, # number of training epochs
                          batch_size = 200, # 200 samples per batch
                          verbose = 2) # print progress per epoch
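The validation_split and batch_size settings determine the sample counts in the training log; the arithmetic can be checked with plain Python (no Keras needed):

```python
train_samples = int(60000 * (1 - 0.2))    # samples actually trained on per epoch
val_samples = 60000 - train_samples       # samples held out for validation
batches_per_epoch = train_samples // 200  # batch_size = 200

print(train_samples, val_samples, batches_per_epoch)  # 48000 12000 240
```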
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
- 1s - loss: 0.4379 - accuracy: 0.8830 - val_loss: 0.2182 - val_accuracy: 0.9408
Epoch 2/10
- 1s - loss: 0.1908 - accuracy: 0.9454 - val_loss: 0.1557 - val_accuracy: 0.9553
Epoch 3/10
- 1s - loss: 0.1354 - accuracy: 0.9615 - val_loss: 0.1257 - val_accuracy: 0.9647
Epoch 4/10
- 1s - loss: 0.1026 - accuracy: 0.9703 - val_loss: 0.1118 - val_accuracy: 0.9683
Epoch 5/10
- 1s - loss: 0.0809 - accuracy: 0.9771 - val_loss: 0.0982 - val_accuracy: 0.9715
Epoch 6/10
- 1s - loss: 0.0658 - accuracy: 0.9820 - val_loss: 0.0932 - val_accuracy: 0.9725
Epoch 7/10
- 1s - loss: 0.0543 - accuracy: 0.9851 - val_loss: 0.0916 - val_accuracy: 0.9738
Epoch 8/10
- 1s - loss: 0.0458 - accuracy: 0.9876 - val_loss: 0.0830 - val_accuracy: 0.9762
Epoch 9/10
- 1s - loss: 0.0379 - accuracy: 0.9902 - val_loss: 0.0823 - val_accuracy: 0.9762
Epoch 10/10
- 1s - loss: 0.0315 - accuracy: 0.9916 - val_loss: 0.0811 - val_accuracy: 0.9762
Test
val_loss, val_acc = model.evaluate(x_Test_normalize, y_Test_OneHot, 1) # evaluate the model on the test data (batch_size = 1)
print(val_loss) # test loss
print(val_acc) # test accuracy
10000/10000 [==============================] - 4s 379us/step
0.07567812022235794
0.9760000109672546
Define show_train_history to plot the training history
import matplotlib.pyplot as plt

def show_train_history(train_history, train, validation):
    plt.plot(train_history.history[train])
    plt.plot(train_history.history[validation])
    plt.title('Train History')
    plt.ylabel(train)
    plt.xlabel('Epoch')
    plt.legend(['train', 'validation'], loc = 'upper left')
    plt.show()
show_train_history(train_history, 'accuracy', 'val_accuracy')
# accuracy is computed on the training set
# val_accuracy is computed on the validation set
4. Experiments
Activation | Neurons | Avg. time per epoch | Test accuracy |
---|---|---|---|
relu | 256 | 1s | 0.9760 |
relu | 1000 | 3-4s | 0.9801 |
sigmoid | 256 | 1s | 0.9645 |
tanh | 256 | 1s | 0.9753 |
elu | 256 | 1s | 0.9749 |
kernel_initializer | Test accuracy |
---|---|
normal | 0.9760 |
random_uniform | 0.9778 |
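For reference, the four activations compared above can be written out in NumPy (the last one is ELU, the Exponential Linear Unit; these are sketches of the standard formulas, not the Keras implementations):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)        # zero below 0, identity above

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squashes values into (0, 1)

def tanh(x):
    return np.tanh(x)                # squashes values into (-1, 1)

def elu(x, a=1.0):
    # Identity for positive inputs, smooth exponential curve toward -a for negatives.
    return np.where(x > 0, x, a * (np.exp(x) - 1.0))

x = np.array([-2.0, 0.0, 2.0])
print(relu(x))        # [0. 0. 2.]
print(sigmoid(0.0))   # 0.5
print(elu(x))         # negative inputs map smoothly into (-1, 0)
```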
256 neurons
Activation: relu
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 256) 200960
_________________________________________________________________
dense_2 (Dense) (None, 10) 2570
=================================================================
Total params: 203,530
Trainable params: 203,530
Non-trainable params: 0
_________________________________________________________________
None
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
- 1s - loss: 0.4379 - accuracy: 0.8830 - val_loss: 0.2182 - val_accuracy: 0.9407
Epoch 2/10
- 1s - loss: 0.1909 - accuracy: 0.9454 - val_loss: 0.1559 - val_accuracy: 0.9555
Epoch 3/10
- 1s - loss: 0.1355 - accuracy: 0.9617 - val_loss: 0.1260 - val_accuracy: 0.9649
Epoch 4/10
- 1s - loss: 0.1027 - accuracy: 0.9704 - val_loss: 0.1119 - val_accuracy: 0.9683
Epoch 5/10
- 1s - loss: 0.0810 - accuracy: 0.9773 - val_loss: 0.0979 - val_accuracy: 0.9721
Epoch 6/10
- 1s - loss: 0.0659 - accuracy: 0.9817 - val_loss: 0.0936 - val_accuracy: 0.9722
Epoch 7/10
- 1s - loss: 0.0543 - accuracy: 0.9851 - val_loss: 0.0912 - val_accuracy: 0.9737
Epoch 8/10
- 1s - loss: 0.0460 - accuracy: 0.9877 - val_loss: 0.0830 - val_accuracy: 0.9767
Epoch 9/10
- 1s - loss: 0.0379 - accuracy: 0.9902 - val_loss: 0.0828 - val_accuracy: 0.9760
Epoch 10/10
- 1s - loss: 0.0316 - accuracy: 0.9917 - val_loss: 0.0807 - val_accuracy: 0.9769
Test:
10000/10000 [==============================] - 4s 374us/step
0.07602789112742801
0.9757999777793884
1000 neurons
Activation: relu
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 1000) 785000
_________________________________________________________________
dense_2 (Dense) (None, 10) 10010
=================================================================
Total params: 795,010
Trainable params: 795,010
Non-trainable params: 0
_________________________________________________________________
None
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
- 3s - loss: 0.2944 - accuracy: 0.9152 - val_loss: 0.1528 - val_accuracy: 0.9565
Epoch 2/10
- 3s - loss: 0.1179 - accuracy: 0.9661 - val_loss: 0.1073 - val_accuracy: 0.9678
Epoch 3/10
- 3s - loss: 0.0759 - accuracy: 0.9783 - val_loss: 0.0922 - val_accuracy: 0.9724
Epoch 4/10
- 3s - loss: 0.0514 - accuracy: 0.9853 - val_loss: 0.0869 - val_accuracy: 0.9733
Epoch 5/10
- 3s - loss: 0.0357 - accuracy: 0.9905 - val_loss: 0.0754 - val_accuracy: 0.9757
Epoch 6/10
- 4s - loss: 0.0257 - accuracy: 0.9932 - val_loss: 0.0743 - val_accuracy: 0.9778
Epoch 7/10
- 4s - loss: 0.0185 - accuracy: 0.9958 - val_loss: 0.0724 - val_accuracy: 0.9793
Epoch 8/10
- 4s - loss: 0.0132 - accuracy: 0.9971 - val_loss: 0.0718 - val_accuracy: 0.9778
Epoch 9/10
- 4s - loss: 0.0087 - accuracy: 0.9988 - val_loss: 0.0712 - val_accuracy: 0.9798
Epoch 10/10
- 4s - loss: 0.0062 - accuracy: 0.9992 - val_loss: 0.0705 - val_accuracy: 0.9800
Test:
10000/10000 [==============================] - 6s 569us/step
0.06873653566057918
0.9797999858856201
Note: the accuracy sometimes exceeds 0.98.
Activation: sigmoid
256 neurons
Summary:
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_2 (Dense) (None, 256) 200960
_________________________________________________________________
dense_3 (Dense) (None, 10) 2570
=================================================================
Total params: 203,530
Trainable params: 203,530
Non-trainable params: 0
_________________________________________________________________
None
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
- 1s - loss: 0.7395 - accuracy: 0.8315 - val_loss: 0.3386 - val_accuracy: 0.9109
Epoch 2/10
- 1s - loss: 0.3100 - accuracy: 0.9136 - val_loss: 0.2560 - val_accuracy: 0.9277
Epoch 3/10
- 1s - loss: 0.2492 - accuracy: 0.9290 - val_loss: 0.2233 - val_accuracy: 0.9381
Epoch 4/10
- 1s - loss: 0.2119 - accuracy: 0.9391 - val_loss: 0.1974 - val_accuracy: 0.9424
Epoch 5/10
- 1s - loss: 0.1835 - accuracy: 0.9466 - val_loss: 0.1757 - val_accuracy: 0.9517
Epoch 6/10
- 1s - loss: 0.1608 - accuracy: 0.9533 - val_loss: 0.1607 - val_accuracy: 0.9551
Epoch 7/10
- 1s - loss: 0.1424 - accuracy: 0.9593 - val_loss: 0.1489 - val_accuracy: 0.9587
Epoch 8/10
- 1s - loss: 0.1269 - accuracy: 0.9638 - val_loss: 0.1394 - val_accuracy: 0.9621
Epoch 9/10
- 1s - loss: 0.1141 - accuracy: 0.9677 - val_loss: 0.1291 - val_accuracy: 0.9634
Epoch 10/10
- 1s - loss: 0.1025 - accuracy: 0.9711 - val_loss: 0.1216 - val_accuracy: 0.9659
10000/10000 [==============================] - 4s 380us/step
0.11642538407448501
0.9645000100135803
The results are noticeably worse.
Activation: tanh
256 neurons
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
- 1s - loss: 0.4394 - accuracy: 0.8801 - val_loss: 0.2483 - val_accuracy: 0.9302
Epoch 2/10
- 1s - loss: 0.2252 - accuracy: 0.9352 - val_loss: 0.1883 - val_accuracy: 0.9479
Epoch 3/10
- 1s - loss: 0.1681 - accuracy: 0.9514 - val_loss: 0.1556 - val_accuracy: 0.9580
Epoch 4/10
- 1s - loss: 0.1313 - accuracy: 0.9631 - val_loss: 0.1374 - val_accuracy: 0.9603
Epoch 5/10
- 1s - loss: 0.1064 - accuracy: 0.9704 - val_loss: 0.1214 - val_accuracy: 0.9652
Epoch 6/10
- 1s - loss: 0.0876 - accuracy: 0.9763 - val_loss: 0.1140 - val_accuracy: 0.9668
Epoch 7/10
- 1s - loss: 0.0728 - accuracy: 0.9802 - val_loss: 0.1063 - val_accuracy: 0.9694
Epoch 8/10
- 1s - loss: 0.0610 - accuracy: 0.9837 - val_loss: 0.0951 - val_accuracy: 0.9731
Epoch 9/10
- 1s - loss: 0.0510 - accuracy: 0.9870 - val_loss: 0.0926 - val_accuracy: 0.9721
Epoch 10/10
- 1s - loss: 0.0426 - accuracy: 0.9894 - val_loss: 0.0866 - val_accuracy: 0.9738
10000/10000 [==============================] - 4s 371us/step
0.08017727720420531
0.9753999710083008
Activation: elu (Exponential Linear Units)
256 neurons
Train on 48000 samples, validate on 12000 samples
Epoch 1/10
- 1s - loss: 0.4413 - accuracy: 0.8773 - val_loss: 0.2636 - val_accuracy: 0.9261
Epoch 2/10
- 1s - loss: 0.2476 - accuracy: 0.9284 - val_loss: 0.2049 - val_accuracy: 0.9422
Epoch 3/10
- 1s - loss: 0.1849 - accuracy: 0.9471 - val_loss: 0.1645 - val_accuracy: 0.9557
Epoch 4/10
- 1s - loss: 0.1423 - accuracy: 0.9593 - val_loss: 0.1424 - val_accuracy: 0.9599
Epoch 5/10
- 1s - loss: 0.1139 - accuracy: 0.9676 - val_loss: 0.1232 - val_accuracy: 0.9658
Epoch 6/10
- 1s - loss: 0.0936 - accuracy: 0.9734 - val_loss: 0.1140 - val_accuracy: 0.9674
Epoch 7/10
- 1s - loss: 0.0781 - accuracy: 0.9778 - val_loss: 0.1070 - val_accuracy: 0.9692
Epoch 8/10
- 1s - loss: 0.0670 - accuracy: 0.9807 - val_loss: 0.0976 - val_accuracy: 0.9720
Epoch 9/10
- 1s - loss: 0.0570 - accuracy: 0.9839 - val_loss: 0.0939 - val_accuracy: 0.9725
Epoch 10/10
- 1s - loss: 0.0485 - accuracy: 0.9868 - val_loss: 0.0880 - val_accuracy: 0.9740
10000/10000 [==============================] - 4s 374us/step
0.07968259554752871
0.9749000072479248