TensorFlow MNIST Handwritten Digit Recognition (A Minimal Neural Network)

The MNIST handwritten digit dataset is a subset of the NIST dataset and is a common introductory example for deep learning.

The dataset contains 60,000 training images (to monitor the model during training, a portion of the training data, typically 5,000 images, is split off as a validation set) and 10,000 test images. Each MNIST image represents a single digit from 0 to 9, is 28×28 pixels in size, and has the digit centered in the image.
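Two conventions used throughout this tutorial are worth making concrete: each 28×28 image is flattened into a 784-dimensional vector, and each label is one-hot encoded into a length-10 vector. A minimal NumPy sketch (for illustration only, not part of the original code):

```python
import numpy as np

# A 28x28 grayscale image is flattened into a 784-dimensional vector.
image = np.zeros((28, 28), dtype=np.float32)
flat = image.reshape(-1)
print(flat.shape)  # (784,)

# With one_hot=True, the label 3 becomes a length-10 indicator vector.
label = 3
one_hot = np.eye(10, dtype=np.float32)[label]
print(one_hot)  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```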

(1) Load the data

TensorFlow provides a helper class for MNIST that automatically downloads the data and converts it into a format ready for training and testing. The code is as follows:

mnist = input_data.read_data_sets("datasets/MNIST_data/", one_hot=True)
print("Training data size: ", mnist.train.num_examples)        #55000
print("Validating data size: ", mnist.validation.num_examples) #5000
print("Testing data size: ", mnist.test.num_examples)          #10000

(2) Set the hyperparameters

learning_rate = 0.0001  # learning rate
num_epochs = 10000      # number of training iterations
BATCH_SIZE = 100        # number of training examples per iteration

(3) Forward propagation

Define the network: the input layer has 784 units (one per pixel of the 28×28 image), the hidden layer has 500 units, and the output layer has 10 units (one per digit class). The code is as follows:

(m, n_x) = mnist.train.images.shape  # m = 55000, n_x = 784
n_y = mnist.train.labels.shape[1]    # 10
n_1 = 500                            # hidden-layer size

X = tf.placeholder(tf.float32, shape=(None,n_x), name="X")  # (None, 784)
Y = tf.placeholder(tf.float32, shape=(None,n_y), name="Y")  # (None, 10)

W1 = tf.get_variable("w1",[n_x,n_1],initializer = tf.contrib.layers.xavier_initializer(seed = 1))   #(784,500)
b1 = tf.get_variable("b1",[1,n_1],  initializer = tf.zeros_initializer())                          #(1,500)
W2 = tf.get_variable("w2",[n_1,n_y],initializer = tf.contrib.layers.xavier_initializer(seed = 1))  #(500,10)
b2 = tf.get_variable("b2",[1,n_y],  initializer = tf.zeros_initializer())                          #(1,10)

Z1 = tf.nn.relu(tf.matmul(X,W1) + b1)  # (None, 500)
Z2 = tf.matmul(Z1,W2) + b2             # logits, (None, 10)
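The shape algebra of the two matrix multiplications above can be checked with a plain NumPy sketch (a tiny batch of 4 examples stands in for the real data; the random weights are for illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n_x, n_1, n_y = 4, 784, 500, 10  # tiny batch of 4 for illustration

X = rng.standard_normal((m, n_x)).astype(np.float32)
W1 = rng.standard_normal((n_x, n_1)).astype(np.float32)
b1 = np.zeros((1, n_1), dtype=np.float32)
W2 = rng.standard_normal((n_1, n_y)).astype(np.float32)
b2 = np.zeros((1, n_y), dtype=np.float32)

Z1 = np.maximum(np.dot(X, W1) + b1, 0)  # ReLU hidden layer: (4, 784) @ (784, 500) -> (4, 500)
Z2 = np.dot(Z1, W2) + b2                # logits: (4, 500) @ (500, 10) -> (4, 10)
print(Z1.shape, Z2.shape)  # (4, 500) (4, 10)
```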

(4) Define the loss function

Use the softmax cross-entropy loss. The code is as follows:

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z2, labels = Y))
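What `tf.nn.softmax_cross_entropy_with_logits` computes per example is the cross-entropy between the softmax of the logits and the one-hot label. A numerically stable NumPy sketch of the same computation (for intuition only):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Stable log-softmax: subtract the row max before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Cross-entropy against one-hot labels, one value per example.
    return -(labels * log_probs).sum(axis=1)

logits = np.array([[2.0, 1.0, 0.1]])
labels = np.array([[1.0, 0.0, 0.0]])  # one-hot: true class is 0
print(softmax_cross_entropy(logits, labels).mean())  # ≈ 0.417
```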

(5) Optimizer

Use the Adam optimizer. The code is as follows:

optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
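Adam adapts the step size per parameter from running estimates of the gradient's first and second moments. A single update step can be sketched in NumPy (the function and variable names here are illustrative, not part of TensorFlow's API; default hyperparameters follow the standard Adam settings):

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-4, beta1=0.9, beta2=0.999, eps=1e-8):
    # Update biased first and second moment estimates.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias-correct them (t is the 1-based step count).
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Parameter update with a per-parameter adaptive step size.
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

theta = np.array([1.0])
m = np.zeros_like(theta)
v = np.zeros_like(theta)
theta, m, v = adam_step(theta, np.array([0.5]), m, v, t=1)
print(theta)  # moved slightly toward lower loss, by about lr
```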

(6) Train the model

costs = []
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for i in range(num_epochs):
        x,y = mnist.train.next_batch(BATCH_SIZE)
        sess.run(optimizer,feed_dict={X:x,Y:y})

        if i%500 == 0:
            cost_v = sess.run(cost,feed_dict={X:x,Y:y})
            costs.append(cost_v)
            print(i,cost_v)

    # Calculate accuracy on the full training and test sets
    correct_prediction = tf.equal(tf.argmax(Z2,1), tf.argmax(Y,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print("Train Accuracy:", accuracy.eval({X: mnist.train.images, Y: mnist.train.labels}))  # Train Accuracy: 0.98807275
    print("Test Accuracy:", accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))     # Test Accuracy: 0.9756
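The accuracy computed above compares the index of the largest logit against the index of the 1 in the one-hot label. The same logic in NumPy, on made-up predictions for four examples (illustrative data only):

```python
import numpy as np

# Logits and one-hot labels for 4 examples; a prediction is correct
# when argmax of the logits matches argmax of the label.
logits = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7], [0.6, 0.4]])
labels = np.array([[0, 1],     [1, 0],     [1, 0],     [1, 0]])
correct = np.argmax(logits, 1) == np.argmax(labels, 1)
print(correct.mean())  # 0.75 — three of the four predictions match
```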

(7) Evaluate the model

The code is as follows:

plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per 500)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()

The resulting plot is as follows:
[Figure: training cost vs. iteration]
The loss decreases as the number of iterations increases.

Complete code:

import numpy as np
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tensorflow.examples.tutorials.mnist import input_data

mnist = input_data.read_data_sets("datasets/MNIST_data/", one_hot=True)


learning_rate = 0.0001
num_epochs = 10000
BATCH_SIZE = 100

(m, n_x) = mnist.train.images.shape  # m = 55000, n_x = 784
n_y = mnist.train.labels.shape[1]    # 10
n_1 = 500                            # hidden-layer size
costs = []

ops.reset_default_graph()    # reset the graph so the model can be rerun without variable-name clashes
tf.set_random_seed(1)        # set the seed after the reset, to keep results reproducible
X = tf.placeholder(tf.float32, shape=(None,n_x), name="X")  # (None, 784)
Y = tf.placeholder(tf.float32, shape=(None,n_y), name="Y")  # (None, 10)

W1 = tf.get_variable("w1",[n_x,n_1],initializer = tf.contrib.layers.xavier_initializer(seed = 1))   #(784,500)
b1 = tf.get_variable("b1",[1,n_1],  initializer = tf.zeros_initializer())                          #(1,500)
W2 = tf.get_variable("w2",[n_1,n_y],initializer = tf.contrib.layers.xavier_initializer(seed = 1))  #(500,10)
b2 = tf.get_variable("b2",[1,n_y],  initializer = tf.zeros_initializer())                          #(1,10)

Z1 = tf.nn.relu(tf.matmul(X,W1) + b1)  # (None, 500)
Z2 = tf.matmul(Z1,W2) + b2             # logits, (None, 10)

cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = Z2, labels = Y))
optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # initialize_all_variables is deprecated
    for i in range(num_epochs):
        x,y = mnist.train.next_batch(BATCH_SIZE)
        sess.run(optimizer,feed_dict={X:x,Y:y})
        
        if i%500 == 0:
            cost_v = sess.run(cost,feed_dict={X:x,Y:y})
            costs.append(cost_v)
            print(i,cost_v)
        
    # Calculate accuracy on the full training and test sets
    correct_prediction = tf.equal(tf.argmax(Z2,1), tf.argmax(Y,1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))
    print ("Train Accuracy:", accuracy.eval({X:mnist.train.images, Y: mnist.train.labels})) #Train Accuracy: 0.98807275
    print ("Test Accuracy:", accuracy.eval({X: mnist.test.images, Y: mnist.test.labels}))   #Test Accuracy: 0.9756
    
plt.plot(np.squeeze(costs))
plt.ylabel('cost')
plt.xlabel('iterations (per 500)')
plt.title("Learning rate =" + str(learning_rate))
plt.show()