Concise Implementation of Softmax Regression in TensorFlow 2

xiaoyao · Dive into Deep Learning · TensorFlow 2

import tensorflow as tf
from tensorflow import keras
print(tf.__version__)
2.1.0

1. Fetching and Reading the Data

We use the Fashion-MNIST dataset and the batch size set in the previous section.

fashion_mnist = keras.datasets.fashion_mnist
(x_train, y_train), (x_test, y_test) = fashion_mnist.load_data()

Normalize the pixel values to the range [0, 1] to make training easier:

x_train = x_train / 255.0
x_test = x_test / 255.0

As noted in the "Softmax Regression" section, the output layer of softmax regression is a fully connected layer, so we add a fully connected layer with 10 outputs.
The first layer is a Flatten layer, which reshapes the 28 × 28 pixel grid into a single vector of shape (784,).
The second layer is a Dense layer; since this is a multi-class classification problem, we use the softmax activation.

2. Defining and Initializing the Model

model = keras.Sequential([
    keras.layers.Flatten(input_shape=(28, 28)),
    keras.layers.Dense(10, activation=tf.nn.softmax)
])
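Since Flatten turns each 28 × 28 image into a 784-dimensional vector, the Dense layer holds 784 × 10 weights plus 10 biases. A quick check of that parameter count (plain Python, just an illustration):

```python
# Flatten: (28, 28) -> (784,); Dense maps 784 inputs to 10 outputs.
inputs = 28 * 28                     # 784 features after flattening
outputs = 10                         # one logit per Fashion-MNIST class
params = inputs * outputs + outputs  # weight matrix + bias vector
print(params)  # 7850, matching model.summary() for this architecture
```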

3. Softmax and Cross-Entropy Loss

The Keras API in TensorFlow 2 lets us specify the loss by name. Its built-in implementation has better numerical stability than computing softmax and cross-entropy separately.

loss = 'sparse_categorical_crossentropy'
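`sparse_categorical_crossentropy` takes integer labels and per-class probabilities; for each example it is simply the negative log of the probability assigned to the true class, averaged over the batch. A minimal NumPy sketch of that definition (illustrative toy values, not the Keras internals):

```python
import numpy as np

def sparse_ce(probs, labels):
    """Mean of -log p[true class] over the batch."""
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1]])
labels = np.array([0, 1])
loss = sparse_ce(probs, labels)  # -(log 0.7 + log 0.8) / 2 ≈ 0.290
```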

4. Defining the Optimization Algorithm

We use mini-batch stochastic gradient descent with a learning rate of 0.1 as the optimization algorithm.

optimizer = tf.keras.optimizers.SGD(0.1)
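Mini-batch SGD updates each parameter by subtracting the learning rate times its gradient. A one-step sketch in NumPy (hypothetical values, not tied to the model above):

```python
import numpy as np

lr = 0.1                        # same learning rate as the optimizer above
w = np.array([0.5, -0.3])       # current parameters (made-up values)
grad = np.array([0.2, 0.4])     # gradient computed on one mini-batch
w = w - lr * grad               # SGD step: w <- w - lr * grad
# w is now [0.48, -0.34]
```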

5. Compiling the Model

model.compile(optimizer=tf.keras.optimizers.SGD(0.1),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

6. Training the Model

model.fit(x_train, y_train, epochs=20, batch_size=256)
Train on 60000 samples
Epoch 1/20
60000/60000 [==============================] - 1s 16us/sample - loss: 0.4752 - accuracy: 0.8400
Epoch 2/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4666 - accuracy: 0.8421
Epoch 3/20
60000/60000 [==============================] - 1s 13us/sample - loss: 0.4594 - accuracy: 0.8442
Epoch 4/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4539 - accuracy: 0.8463
Epoch 5/20
60000/60000 [==============================] - 1s 13us/sample - loss: 0.4485 - accuracy: 0.8465
Epoch 6/20
60000/60000 [==============================] - 1s 13us/sample - loss: 0.4438 - accuracy: 0.8490
Epoch 7/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4403 - accuracy: 0.8505
Epoch 8/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4372 - accuracy: 0.8511
Epoch 9/20
60000/60000 [==============================] - 1s 13us/sample - loss: 0.4347 - accuracy: 0.8527
Epoch 10/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4326 - accuracy: 0.8518
Epoch 11/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4286 - accuracy: 0.8540
Epoch 12/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4260 - accuracy: 0.8553
Epoch 13/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4248 - accuracy: 0.8550
Epoch 14/20
60000/60000 [==============================] - 1s 15us/sample - loss: 0.4226 - accuracy: 0.8557
Epoch 15/20
60000/60000 [==============================] - 1s 15us/sample - loss: 0.4206 - accuracy: 0.8565
Epoch 16/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4188 - accuracy: 0.8572
Epoch 17/20
60000/60000 [==============================] - 1s 15us/sample - loss: 0.4175 - accuracy: 0.8579
Epoch 18/20
60000/60000 [==============================] - 1s 15us/sample - loss: 0.4166 - accuracy: 0.8571
Epoch 19/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4153 - accuracy: 0.8583
Epoch 20/20
60000/60000 [==============================] - 1s 14us/sample - loss: 0.4135 - accuracy: 0.8585

<tensorflow.python.keras.callbacks.History at 0x1ce19ba4bc8>

Next, let's see how the model performs on the test dataset.

test_loss, test_acc = model.evaluate(x_test, y_test)
print('Test Acc:', test_acc)
10000/10000 [==============================] - 1s 54us/sample - loss: 0.4650 - accuracy: 0.8361
Test Acc: 0.8361
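The accuracy that `evaluate` reports is the fraction of examples where the argmax of the predicted class probabilities matches the label. A NumPy sketch of that computation (toy data, not the real test set):

```python
import numpy as np

probs = np.array([[0.1, 0.9],   # predicted class 1
                  [0.8, 0.2],   # predicted class 0
                  [0.3, 0.7]])  # predicted class 1
labels = np.array([1, 0, 0])    # last prediction is wrong
preds = probs.argmax(axis=1)    # pick the most probable class per row
acc = (preds == labels).mean()  # 2 correct out of 3
```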
