【2019.09.13】Hands-on deep learning with TensorFlow 2.0: Keras and the fashion_mnist dataset

The relationship between artificial intelligence, machine learning, and deep learning
[Figure: the relationship between AI, machine learning, and deep learning]

  • Import the required packages
import pandas as pd 
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense, Conv2D, MaxPooling2D, Dropout
from tensorflow.keras.datasets import fashion_mnist
%matplotlib inline
tf.__version__
'2.0.0-alpha0'
# Load the dataset
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()
print( x_train.shape )
print( x_test.shape )
print( y_train.shape )
print( y_test.shape )
(60000, 28, 28)
(10000, 28, 28)
(60000,)
(10000,)
# Build a simple model (fully connected network)
model = Sequential([Flatten(input_shape=[28, 28]),
                    Dense(100, activation='relu'),
                    Dense(10, activation='softmax')
                    ])
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 784)               0         
_________________________________________________________________
dense (Dense)                (None, 100)               78500     
_________________________________________________________________
dense_1 (Dense)              (None, 10)                1010      
=================================================================
Total params: 79,510
Trainable params: 79,510
Non-trainable params: 0
_________________________________________________________________
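The parameter counts are easy to verify by hand: the 100-unit Dense layer has 784 × 100 weights plus 100 biases, i.e. 78,500 parameters, and the 10-unit output layer has 100 × 10 + 10 = 1,010.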
# model.layers[n].name returns the name of the nth layer (0-indexed); layers[1] is the first Dense layer
model.layers[1].name
'dense'
# get_weights() returns a layer's weight matrix W and bias vector b.
weights, biases = model.layers[1].get_weights()
weights
array([[ 0.03753366, -0.07585614,  0.05845281, ..., -0.00696003,
        -0.02985662,  0.0026468 ],
       [-0.02360117,  0.07903261, -0.00201984, ...,  0.01831853,
        -0.05822062,  0.00874755],
       [ 0.03159927, -0.00679947,  0.03076784, ...,  0.06593607,
        -0.00499721,  0.03378649],
       ...,
       [ 0.01089679,  0.04923365,  0.07235795, ...,  0.01033241,
         0.01817431, -0.04198586],
       [ 0.03213584, -0.0057021 ,  0.00929629, ..., -0.03756753,
         0.01735194, -0.01611251],
       [ 0.06783222, -0.04055587, -0.06099807, ..., -0.06757091,
        -0.01999778,  0.00600851]], dtype=float32)
biases
array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
       0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
      dtype=float32)
# Before the model is trained, W is randomly initialized and b is zero-initialized. Finally, check their shapes.
print( weights.shape )
print( biases.shape )
(784, 100)
(100,)
# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['acc'])

Loss function (loss)

  • Binary classification: last-layer activation is sigmoid, loss is binary_crossentropy
  • Multi-class classification: last-layer activation is softmax, loss is categorical_crossentropy
  • Multi-label classification: last-layer activation is sigmoid, loss is binary_crossentropy
  • Regression: no activation on the last layer, loss is mse
    Fashion-MNIST is a ten-class problem, so the loss is categorical cross-entropy; because the labels here are plain integers rather than one-hot vectors, the code above uses the sparse variant, sparse_categorical_crossentropy (see the sketch below).
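A minimal sketch contrasting the two cross-entropy variants (to_categorical comes from tf.keras.utils; everything else is the data and model defined above):

from tensorflow.keras.utils import to_categorical

# Integer labels, shape (60000,)  ->  sparse_categorical_crossentropy
model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['acc'])

# One-hot labels, shape (60000, 10)  ->  categorical_crossentropy
y_train_onehot = to_categorical(y_train, num_classes=10)
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['acc'])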

Optimizer (optimizer)

In most cases, adam or rmsprop with its default learning rate is a safe choice.
Besides selecting an optimizer by name, model.compile(optimizer='name'), we can also instantiate an optimizer object and pass that instead. A few of them, compared below:

  • SGD: SGD(lr=0.01, momentum=0.0, decay=0.0, nesterov=False)
  • RMSprop: RMSprop(lr=0.001, rho=0.9, epsilon=None, decay=0.0)
  • Adagrad: Adagrad(lr=0.01, epsilon=None, decay=0.0)
  • Adam: Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=None, decay=0.0, amsgrad=False)

These optimizer classes live in the keras.optimizers namespace. Compiling with an optimizer instance lets us tune its hyperparameters, such as the learning rate lr; compiling by name can only use the optimizer's defaults, e.g. Adam's learning rate of 0.001.
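A minimal sketch of compiling with an optimizer instance so the learning rate can be tuned (in TF 2.0 the argument is named learning_rate; the 0.0005 below is an arbitrary example value):

from tensorflow.keras.optimizers import Adam

model.compile(loss='sparse_categorical_crossentropy',
              optimizer=Adam(learning_rate=0.0005),  # instead of the default 0.001
              metrics=['acc'])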

Metrics (metrics)

Metrics are not used by the training process itself; they only let us monitor the model's performance during training.
Besides Keras's built-in metrics, we can define our own; mean_pred below is a custom metric that computes the mean of the predictions.

from tensorflow.keras import backend as K

def mean_pred(y_true, y_pred):
    # Custom metric: the mean predicted value
    return K.mean(y_pred)

model.compile(optimizer='sgd',
              loss='binary_crossentropy',
              metrics=['acc', mean_pred])

# Callback: stop training early once a target accuracy is reached
class myCallback(tf.keras.callbacks.Callback):
    def on_epoch_end(self, epoch, logs={}):
        if logs.get('acc') > 0.9:
            print('\nReached 90% accuracy so cancelling training!')
            self.model.stop_training = True

callback = myCallback()
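For the common "stop when a monitored value stops improving" case, tf.keras also ships a built-in callback; a minimal alternative sketch (the patience value is an arbitrary choice for illustration):

early_stop = tf.keras.callbacks.EarlyStopping(monitor='acc', patience=3)
# model.fit(x_train, y_train, epochs=20, callbacks=[early_stop])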
# Fit the model
model.fit(x_train, y_train, epochs=20, callbacks=[callback])
Epoch 1/20
60000/60000 [==============================] - 5s 81us/sample - loss: 2.5439 - acc: 0.6660
Epoch 2/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.7489 - acc: 0.7143
Epoch 3/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.6793 - acc: 0.7381
Epoch 4/20
60000/60000 [==============================] - 4s 71us/sample - loss: 0.5989 - acc: 0.7839
Epoch 5/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.5315 - acc: 0.8148
Epoch 6/20
60000/60000 [==============================] - 4s 71us/sample - loss: 0.5149 - acc: 0.8244
Epoch 7/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.4978 - acc: 0.8292
Epoch 8/20
60000/60000 [==============================] - 4s 71us/sample - loss: 0.4953 - acc: 0.8321
Epoch 9/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.4870 - acc: 0.8366
Epoch 10/20
60000/60000 [==============================] - 4s 73us/sample - loss: 0.4856 - acc: 0.8376
Epoch 11/20
60000/60000 [==============================] - 4s 71us/sample - loss: 0.4868 - acc: 0.8378
Epoch 12/20
60000/60000 [==============================] - 4s 71us/sample - loss: 0.4844 - acc: 0.8399
Epoch 13/20
60000/60000 [==============================] - 4s 71us/sample - loss: 0.4689 - acc: 0.8425
Epoch 14/20
60000/60000 [==============================] - 4s 71us/sample - loss: 0.4794 - acc: 0.8420
Epoch 15/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.4715 - acc: 0.8451
Epoch 16/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.4695 - acc: 0.8469
Epoch 17/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.4604 - acc: 0.8473
Epoch 18/20
60000/60000 [==============================] - 4s 71us/sample - loss: 0.4617 - acc: 0.8481
Epoch 19/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.4678 - acc: 0.8471
Epoch 20/20
60000/60000 [==============================] - 4s 72us/sample - loss: 0.4570 - acc: 0.8475




<tensorflow.python.keras.callbacks.History at 0x210647eafd0>
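Notice that the "Reached 90% accuracy" message never appears: training accuracy peaks around 0.85, so the callback never fires and all 20 epochs run. A likely reason the plain dense network plateaus here is that the raw pixel values (0-255) went in unscaled; the convolutional experiments below divide the inputs by 255.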
# Predict with the model
prob = model.predict( x_test[0:1] )
prob
array([[7.6639465e-23, 1.3957777e-23, 0.0000000e+00, 2.8266480e-21,
        0.0000000e+00, 9.7962271e-04, 0.0000000e+00, 8.5630892e-03,
        1.0883533e-21, 9.9045730e-01]], dtype=float32)

Predicting on the first image of the test set returns an array of 10 values, the predicted probability of each class. The 10th class (index 9) looks largest; verify with argmax:

print(np.argmax(prob))
9
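Index 9 corresponds to "Ankle boot" in Fashion-MNIST's documented label order; a small sketch that maps the prediction to a readable name:

class_names = ['T-shirt/top', 'Trouser', 'Pullover', 'Dress', 'Coat',
               'Sandal', 'Shirt', 'Sneaker', 'Bag', 'Ankle boot']
print(class_names[np.argmax(prob)])  # Ankle boot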
plt.imshow(x_test[0])
<matplotlib.image.AxesImage at 0x2111b411e48>

[Figure: the first test image]

Finally, use model.evaluate() to see the model's performance on the whole test set.

model.evaluate( x_test, y_test )
10000/10000 [==============================] - 1s 54us/sample - loss: 0.5990 - acc: 0.8217




[0.5990172575473786, 0.8217]

If training accuracy (0.8475 here) is higher than test accuracy (0.8217), that is a sign of overfitting.

If the accuracy is still not good enough, we should rethink the model: the single-hidden-layer fully connected network used here is probably too simple, so let's try a convolutional neural network, built from the 2-D convolution layer Conv2D and the 2-D max-pooling layer MaxPooling2D that were imported at the top.

# Build the model (convolutional)
model = Sequential([Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
                    MaxPooling2D(2, 2),
                    Conv2D(64, (3, 3), activation='relu'),
                    MaxPooling2D(2, 2),
                    Flatten(),
                    Dense(128, activation='relu'),
                    Dense(10, activation='softmax')])
# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['acc'])
# Reload the data, add the channel dimension, scale the training and
# validation inputs to [0, 1], and hold out the first 5000 samples as a validation set
(x_train_full, y_train_full), (x_test, y_test) = fashion_mnist.load_data()
x_train_full = x_train_full.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000, 28, 28, 1)  # note: x_test is reshaped but not scaled here
x_valid, x_train = x_train_full[:5000] / 255.0, x_train_full[5000:] / 255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]

# Fit the model
history = model.fit(x_train, y_train, epochs=20, validation_data=(x_valid, y_valid))
Train on 55000 samples, validate on 5000 samples
Epoch 1/20
55000/55000 [==============================] - 9s 166us/sample - loss: 0.1063 - acc: 0.9602 - val_loss: 0.3073 - val_acc: 0.9094
Epoch 2/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.0950 - acc: 0.9645 - val_loss: 0.3091 - val_acc: 0.9158
Epoch 3/20
55000/55000 [==============================] - 9s 162us/sample - loss: 0.0893 - acc: 0.9664 - val_loss: 0.3295 - val_acc: 0.9138
Epoch 4/20
55000/55000 [==============================] - 9s 163us/sample - loss: 0.0783 - acc: 0.9704 - val_loss: 0.3551 - val_acc: 0.9138
Epoch 5/20
55000/55000 [==============================] - 9s 163us/sample - loss: 0.0712 - acc: 0.9734 - val_loss: 0.3801 - val_acc: 0.9070
Epoch 6/20
55000/55000 [==============================] - 9s 162us/sample - loss: 0.0666 - acc: 0.9752 - val_loss: 0.3812 - val_acc: 0.9126
Epoch 7/20
55000/55000 [==============================] - 9s 167us/sample - loss: 0.0599 - acc: 0.9775 - val_loss: 0.3997 - val_acc: 0.9122
Epoch 8/20
55000/55000 [==============================] - 9s 167us/sample - loss: 0.0543 - acc: 0.9802 - val_loss: 0.4552 - val_acc: 0.9068
Epoch 9/20
55000/55000 [==============================] - 9s 165us/sample - loss: 0.0515 - acc: 0.9807 - val_loss: 0.4546 - val_acc: 0.9056
Epoch 10/20
55000/55000 [==============================] - 9s 166us/sample - loss: 0.0483 - acc: 0.9822 - val_loss: 0.4561 - val_acc: 0.9122
Epoch 11/20
55000/55000 [==============================] - 9s 165us/sample - loss: 0.0427 - acc: 0.9842 - val_loss: 0.4877 - val_acc: 0.9092
Epoch 12/20
55000/55000 [==============================] - 9s 167us/sample - loss: 0.0406 - acc: 0.9849 - val_loss: 0.4913 - val_acc: 0.9100
Epoch 13/20
55000/55000 [==============================] - 9s 166us/sample - loss: 0.0392 - acc: 0.9853 - val_loss: 0.5382 - val_acc: 0.9102
Epoch 14/20
55000/55000 [==============================] - 9s 165us/sample - loss: 0.0361 - acc: 0.9866 - val_loss: 0.5899 - val_acc: 0.9116
Epoch 15/20
55000/55000 [==============================] - 9s 167us/sample - loss: 0.0375 - acc: 0.9864 - val_loss: 0.5630 - val_acc: 0.9136
Epoch 16/20
55000/55000 [==============================] - 9s 168us/sample - loss: 0.0301 - acc: 0.9891 - val_loss: 0.5853 - val_acc: 0.9140
Epoch 17/20
55000/55000 [==============================] - 9s 169us/sample - loss: 0.0347 - acc: 0.9875 - val_loss: 0.6378 - val_acc: 0.9054
Epoch 18/20
55000/55000 [==============================] - 9s 169us/sample - loss: 0.0280 - acc: 0.9899 - val_loss: 0.5961 - val_acc: 0.9106
Epoch 19/20
55000/55000 [==============================] - 9s 168us/sample - loss: 0.0306 - acc: 0.9890 - val_loss: 0.6319 - val_acc: 0.9100
Epoch 20/20
55000/55000 [==============================] - 9s 165us/sample - loss: 0.0252 - acc: 0.9908 - val_loss: 0.6697 - val_acc: 0.9078
acc = history.history.get('acc')
val_acc = history.history.get('val_acc')
loss = history.history.get('loss')
val_loss = history.history.get('val_loss')

epochs = range(1, len(acc)+1)
plt.figure(figsize=(8,4),dpi=100)
plt.subplot(1, 2, 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(epochs, loss, 'ro', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.legend()
<matplotlib.legend.Legend at 0x211266eb550>

[Figure: training vs. validation accuracy (left) and loss (right)]

We can see that training accuracy is well above validation accuracy, and the validation acc and loss even drift in the wrong direction over the epochs: the model is overfitting. What can we do? Let's try Dropout.

# Build the model (convolutional, with Dropout)
model = Sequential([Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
                    MaxPooling2D(2, 2),
                    Conv2D(64, (3, 3), activation='relu'),
                    MaxPooling2D(2, 2),
                    Flatten(),
                    Dropout(0.5),  # randomly drop half the units during training
                    Dense(128, activation='relu'),
                    Dense(10, activation='softmax')])

# Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['acc'])

history = model.fit(x_train, y_train, epochs=20, validation_data=(x_valid, y_valid))
Train on 55000 samples, validate on 5000 samples
Epoch 1/20
55000/55000 [==============================] - 9s 171us/sample - loss: 0.5157 - acc: 0.8105 - val_loss: 0.3606 - val_acc: 0.8676
Epoch 2/20
55000/55000 [==============================] - 9s 168us/sample - loss: 0.3601 - acc: 0.8664 - val_loss: 0.2841 - val_acc: 0.8968
Epoch 3/20
55000/55000 [==============================] - 9s 168us/sample - loss: 0.3160 - acc: 0.8840 - val_loss: 0.2870 - val_acc: 0.8942
Epoch 4/20
55000/55000 [==============================] - 9s 167us/sample - loss: 0.2900 - acc: 0.8914 - val_loss: 0.2527 - val_acc: 0.9056
Epoch 5/20
55000/55000 [==============================] - 9s 170us/sample - loss: 0.2703 - acc: 0.8976 - val_loss: 0.2413 - val_acc: 0.9088
Epoch 6/20
55000/55000 [==============================] - 10s 174us/sample - loss: 0.2525 - acc: 0.9052 - val_loss: 0.2270 - val_acc: 0.9158
Epoch 7/20
55000/55000 [==============================] - 9s 163us/sample - loss: 0.2375 - acc: 0.9115 - val_loss: 0.2294 - val_acc: 0.9172
Epoch 8/20
55000/55000 [==============================] - 9s 163us/sample - loss: 0.2306 - acc: 0.9107 - val_loss: 0.2197 - val_acc: 0.9186
Epoch 9/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.2170 - acc: 0.9173 - val_loss: 0.2165 - val_acc: 0.9198
Epoch 10/20
55000/55000 [==============================] - 9s 165us/sample - loss: 0.2110 - acc: 0.9200 - val_loss: 0.2181 - val_acc: 0.9160
Epoch 11/20
55000/55000 [==============================] - 9s 163us/sample - loss: 0.2021 - acc: 0.9235 - val_loss: 0.2230 - val_acc: 0.9222
Epoch 12/20
55000/55000 [==============================] - 9s 165us/sample - loss: 0.1973 - acc: 0.9248 - val_loss: 0.2282 - val_acc: 0.9200
Epoch 13/20
55000/55000 [==============================] - 9s 165us/sample - loss: 0.1899 - acc: 0.9288 - val_loss: 0.2201 - val_acc: 0.9162
Epoch 14/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1853 - acc: 0.9291 - val_loss: 0.2140 - val_acc: 0.9236
Epoch 15/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1787 - acc: 0.9317 - val_loss: 0.2265 - val_acc: 0.9204
Epoch 16/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1756 - acc: 0.9333 - val_loss: 0.2087 - val_acc: 0.9250
Epoch 17/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1680 - acc: 0.9357 - val_loss: 0.2250 - val_acc: 0.9170
Epoch 18/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1651 - acc: 0.9375 - val_loss: 0.2217 - val_acc: 0.9238
Epoch 19/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1588 - acc: 0.9399 - val_loss: 0.2241 - val_acc: 0.9254
Epoch 20/20
55000/55000 [==============================] - 9s 165us/sample - loss: 0.1616 - acc: 0.9383 - val_loss: 0.2200 - val_acc: 0.9208
acc = history.history.get('acc')
val_acc = history.history.get('val_acc')
loss = history.history.get('loss')
val_loss = history.history.get('val_loss')

epochs = range(1, len(acc)+1)
plt.figure(figsize=(8,4),dpi=100)
plt.subplot(1, 2, 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(epochs, loss, 'ro', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.legend()
<matplotlib.legend.Legend at 0x2113cca4fd0>

[Figure: training vs. validation accuracy (left) and loss (right), with Dropout]

We can see that training accuracy drops somewhat while validation accuracy improves: Dropout effectively suppresses the overfitting.

# Save the model

model.save("my_keras_model.h5")
# To load it back, use load_model() from the models namespace:
model = tf.keras.models.load_model("my_keras_model.h5")
history = model.fit(x_train, y_train, epochs=20, validation_data=(x_valid, y_valid))
Train on 55000 samples, validate on 5000 samples
Epoch 1/20
55000/55000 [==============================] - 10s 185us/sample - loss: 0.1527 - acc: 0.9412 - val_loss: 0.2212 - val_acc: 0.9226
Epoch 2/20
55000/55000 [==============================] - 10s 179us/sample - loss: 0.1515 - acc: 0.9425 - val_loss: 0.2161 - val_acc: 0.9268
Epoch 3/20
55000/55000 [==============================] - 10s 178us/sample - loss: 0.1459 - acc: 0.9451 - val_loss: 0.2118 - val_acc: 0.9234
Epoch 4/20
55000/55000 [==============================] - 10s 180us/sample - loss: 0.1446 - acc: 0.9440 - val_loss: 0.2290 - val_acc: 0.9232
Epoch 5/20
55000/55000 [==============================] - 10s 179us/sample - loss: 0.1413 - acc: 0.9461 - val_loss: 0.2303 - val_acc: 0.9186
Epoch 6/20
55000/55000 [==============================] - 10s 176us/sample - loss: 0.1401 - acc: 0.9467 - val_loss: 0.2247 - val_acc: 0.9230
Epoch 7/20
55000/55000 [==============================] - 10s 179us/sample - loss: 0.1375 - acc: 0.9472 - val_loss: 0.2225 - val_acc: 0.9266
Epoch 8/20
55000/55000 [==============================] - 10s 178us/sample - loss: 0.1312 - acc: 0.9489 - val_loss: 0.2249 - val_acc: 0.9248
Epoch 9/20
55000/55000 [==============================] - 10s 177us/sample - loss: 0.1300 - acc: 0.9496 - val_loss: 0.2330 - val_acc: 0.9238
Epoch 10/20
55000/55000 [==============================] - 10s 179us/sample - loss: 0.1294 - acc: 0.9519 - val_loss: 0.2330 - val_acc: 0.9288
Epoch 11/20
55000/55000 [==============================] - 10s 175us/sample - loss: 0.1274 - acc: 0.9515 - val_loss: 0.2265 - val_acc: 0.9262
Epoch 12/20
55000/55000 [==============================] - 10s 180us/sample - loss: 0.1264 - acc: 0.9516 - val_loss: 0.2454 - val_acc: 0.9206
Epoch 13/20
55000/55000 [==============================] - 10s 177us/sample - loss: 0.1231 - acc: 0.9519 - val_loss: 0.2327 - val_acc: 0.9256
Epoch 14/20
55000/55000 [==============================] - 10s 176us/sample - loss: 0.1238 - acc: 0.9531 - val_loss: 0.2210 - val_acc: 0.9262
Epoch 15/20
55000/55000 [==============================] - 10s 181us/sample - loss: 0.1185 - acc: 0.9541 - val_loss: 0.2355 - val_acc: 0.9232
Epoch 16/20
55000/55000 [==============================] - 10s 178us/sample - loss: 0.1186 - acc: 0.9555 - val_loss: 0.2252 - val_acc: 0.9268
Epoch 17/20
55000/55000 [==============================] - 10s 178us/sample - loss: 0.1157 - acc: 0.9553 - val_loss: 0.2346 - val_acc: 0.9260
Epoch 18/20
55000/55000 [==============================] - 10s 177us/sample - loss: 0.1195 - acc: 0.9551 - val_loss: 0.2319 - val_acc: 0.9242
Epoch 19/20
55000/55000 [==============================] - 10s 179us/sample - loss: 0.1128 - acc: 0.9569 - val_loss: 0.2406 - val_acc: 0.9212
Epoch 20/20
55000/55000 [==============================] - 10s 181us/sample - loss: 0.1125 - acc: 0.9573 - val_loss: 0.2504 - val_acc: 0.9276
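Besides saving the whole model to one HDF5 file, tf.keras can also save and restore the weights alone; a minimal sketch (the architecture is not stored this way, so the same model must be rebuilt before loading):

model.save_weights("my_keras_weights.h5")
model.load_weights("my_keras_weights.h5")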

Summary: the Keras workflow

The overall Keras flow:
[Figure: the overall Keras workflow]

import pandas as pd 
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Flatten, Dense, Conv2D, MaxPooling2D, Dropout
from tensorflow.keras.datasets import fashion_mnist
%matplotlib inline

# 1. Load the data
(x_train_full, y_train_full), (x_test, y_test) = fashion_mnist.load_data()
x_train_full = x_train_full.reshape(60000, 28, 28, 1)
x_test = x_test.reshape(10000, 28, 28, 1)  # caution: x_test is reshaped but not scaled by 255
x_valid, x_train = x_train_full[:5000] / 255.0, x_train_full[5000:] / 255.0
y_valid, y_train = y_train_full[:5000], y_train_full[5000:]


# 2. Build the model
model = Sequential([Conv2D(64, (3, 3), activation='relu', input_shape=(28, 28, 1)),
                    MaxPooling2D(2, 2),
                    Conv2D(64, (3, 3), activation='relu'),
                    MaxPooling2D(2, 2),
                    Flatten(),
                    Dropout(0.5),
                    Dense(128, activation='relu'),
                    Dense(10, activation='softmax')])

# 3. Compile the model
model.compile(loss='sparse_categorical_crossentropy',
              optimizer='adam',
              metrics=['acc'])

# 4. Fit the model
history = model.fit(x_train, y_train, epochs=20, validation_data=(x_valid, y_valid))

# 5. Evaluate the model
# model.predict(x_test)          # per-class probabilities
# model.predict_classes(x_test)  # predicted class indices
model.evaluate(x_test, y_test)

# 6. Save the model
model.save("my_keras_model.h5")
# 7. Load the model
model = tf.keras.models.load_model("my_keras_model.h5")
Train on 55000 samples, validate on 5000 samples
Epoch 1/20
55000/55000 [==============================] - 10s 176us/sample - loss: 0.5164 - acc: 0.8091 - val_loss: 0.3403 - val_acc: 0.8806
Epoch 2/20
55000/55000 [==============================] - 10s 173us/sample - loss: 0.3575 - acc: 0.8678 - val_loss: 0.3001 - val_acc: 0.8872
Epoch 3/20
55000/55000 [==============================] - 9s 172us/sample - loss: 0.3096 - acc: 0.8844 - val_loss: 0.2651 - val_acc: 0.9026
Epoch 4/20
55000/55000 [==============================] - 9s 166us/sample - loss: 0.2854 - acc: 0.8941 - val_loss: 0.2607 - val_acc: 0.9040
Epoch 5/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.2641 - acc: 0.9009 - val_loss: 0.2316 - val_acc: 0.9122
Epoch 6/20
55000/55000 [==============================] - 9s 162us/sample - loss: 0.2494 - acc: 0.9055 - val_loss: 0.2283 - val_acc: 0.9156
Epoch 7/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.2356 - acc: 0.9109 - val_loss: 0.2252 - val_acc: 0.9152
Epoch 8/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.2228 - acc: 0.9153 - val_loss: 0.2217 - val_acc: 0.9220
Epoch 9/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.2160 - acc: 0.9177 - val_loss: 0.2171 - val_acc: 0.9206
Epoch 10/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.2028 - acc: 0.9226 - val_loss: 0.2163 - val_acc: 0.9208
Epoch 11/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.2014 - acc: 0.9228 - val_loss: 0.2114 - val_acc: 0.9196
Epoch 12/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1943 - acc: 0.9271 - val_loss: 0.2094 - val_acc: 0.9244
Epoch 13/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1854 - acc: 0.9282 - val_loss: 0.2132 - val_acc: 0.9216
Epoch 14/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1811 - acc: 0.9312 - val_loss: 0.2205 - val_acc: 0.9144
Epoch 15/20
55000/55000 [==============================] - 9s 163us/sample - loss: 0.1745 - acc: 0.9323 - val_loss: 0.2197 - val_acc: 0.9212
Epoch 16/20
55000/55000 [==============================] - 9s 165us/sample - loss: 0.1700 - acc: 0.9356 - val_loss: 0.2233 - val_acc: 0.9188
Epoch 17/20
55000/55000 [==============================] - 9s 163us/sample - loss: 0.1652 - acc: 0.9359 - val_loss: 0.2164 - val_acc: 0.9240
Epoch 18/20
55000/55000 [==============================] - 9s 163us/sample - loss: 0.1596 - acc: 0.9390 - val_loss: 0.2099 - val_acc: 0.9224
Epoch 19/20
55000/55000 [==============================] - 9s 164us/sample - loss: 0.1596 - acc: 0.9391 - val_loss: 0.2052 - val_acc: 0.9270
Epoch 20/20
55000/55000 [==============================] - 9s 166us/sample - loss: 0.1586 - acc: 0.9395 - val_loss: 0.2287 - val_acc: 0.9242
10000/10000 [==============================] - 1s 70us/sample - loss: 98.0071 - acc: 0.7226




[98.00711212615967, 0.7226]
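The test accuracy here (0.7226, with an enormous loss) is far below the validation accuracy of about 0.92. The likely culprit sits in step 1: x_test was reshaped but never divided by 255.0, so a model trained on inputs in [0, 1] is being evaluated on inputs in [0, 255]; scaling x_test the same way as x_train should close the gap.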
# Visualize the results
acc = history.history.get('acc')
val_acc = history.history.get('val_acc')
loss = history.history.get('loss')
val_loss = history.history.get('val_loss')

epochs = range(1, len(acc)+1)
plt.figure(figsize=(8,4),dpi=100)
plt.subplot(1, 2, 1)
plt.plot(epochs, acc, 'bo', label='Training acc')
plt.plot(epochs, val_acc, 'b', label='Validation acc')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(epochs, loss, 'ro', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.legend()
<matplotlib.legend.Legend at 0x21140646f60>

[Figure: training vs. validation accuracy (left) and loss (right) for the summary run]
Simple deep-learning projects can follow this template as-is.
Hope it helps.
