Deep Learning in Python: A Neural Network for Recognizing Handwritten Digits (in progress; last updated 2017-07-12)

Update history:
2017-07-11
2017-07-12

0. Network architecture

Any neural network is described by four elements:
the number of layers (num_layers), the layer sizes (sizes), the biases (biases), and the weights (weights).
Note: in the figure below, sizes is [2, 3, 1].
(Figure 1: a network with sizes [2, 3, 1] — two input neurons, three hidden neurons, one output neuron.)

1. Initializing the Network object

The relationship between consecutive layers is

a′ = σ(w·a + b)    (1)

where a is the previous layer's vector of activations, w and b are the current layer's weights and biases, and σ is the sigmoid function defined below.

The hidden layer: a single neuron takes a 1*2 matrix of weights (the word "matrix" is omitted below) and a 1*1 bias, and produces a 1*1 output. Taking the three hidden neurons together, the weights are 3*2 (3 = number of hidden neurons, 2 = number of input neurons), the biases are 3*1 (3 = number of hidden neurons, 1 is fixed), and the output is 3*1.

The output layer: a single neuron. Its weights are 1*3 (1 = number of output neurons, 3 = number of hidden neurons), its bias is 1*1 (1 = number of output neurons, 1 is fixed), and its output is 1*1.

Biases (b): 3*1 and 1*1, i.e. self.biases = [(3*1), (1*1)] (where (3*1) denotes a 3*1 array).

Weights (w): 3*2 and 1*3, i.e. self.weights = [(3*2), (1*3)].

The activations a in equation (1): 2*1 and 3*1, i.e. a = [(2*1), (3*1)].
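The shape bookkeeping above can be checked directly with NumPy (random values; only the shapes matter in this sketch):

```python
import numpy as np

w_hidden = np.random.randn(3, 2)  # hidden-layer weights: 3 neurons, 2 inputs
b_hidden = np.random.randn(3, 1)  # one bias per hidden neuron
a_in = np.random.randn(2, 1)      # input activations

z = np.dot(w_hidden, a_in) + b_hidden  # (3, 2) @ (2, 1) + (3, 1) -> (3, 1)
print(z.shape)  # (3, 1)
```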

The code for the Network object is as follows [2][3]:

import numpy as np

class Network:
    def __init__(self, sizes):
        self.num_layers = len(sizes)
        self.sizes = sizes
        # One bias vector per non-input layer, shape (y, 1).
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        # One weight matrix per pair of layers, shape (next layer, previous layer).
        self.weights = [np.random.randn(x, y)
                        for x, y in zip(sizes[1:], sizes[:-1])]

2. Network layout

Here we set up a network with the following structure: the first layer has 2 neurons, the second layer has 3 neurons, and the last layer has 1 neuron, as shown in Figure 1.

net=Network([2,3,1])
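With the initializer from section 1, we can verify that the biases and weights take exactly the shapes derived there (a self-contained sketch repeating the class for runnability):

```python
import numpy as np

class Network:
    def __init__(self, sizes):
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(x, y)
                        for x, y in zip(sizes[1:], sizes[:-1])]

net = Network([2, 3, 1])
print([b.shape for b in net.biases])   # [(3, 1), (1, 1)]
print([w.shape for w in net.weights])  # [(3, 2), (1, 3)]
```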

3. The sigmoid function

σ(z) = 1 / (1 + e^(−z))

def sigmoid(z):
    return  1.0/(1.0+np.exp(-z))
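A quick sanity check on this implementation: σ(0) = 0.5, and the function saturates towards 0 and 1 for large |z| (it also accepts NumPy arrays elementwise):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0))                             # 0.5
print(sigmoid(np.array([-10.0, 0.0, 10.0])))  # approx. [0.0000454, 0.5, 0.9999546]
```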

4. The feedforward function

An implementation of equation (1) [4]:

def feedforward(self, a):
    """Return the network's output for input a, applying equation (1) layer by layer."""
    for w, b in zip(self.weights, self.biases):
        a = sigmoid(np.dot(w, a) + b)
    return a
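Putting the pieces so far together, a forward pass through the [2, 3, 1] network maps a 2*1 input to a 1*1 output (a self-contained sketch; the seed and input are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class Network:
    def __init__(self, sizes):
        self.num_layers = len(sizes)
        self.sizes = sizes
        self.biases = [np.random.randn(y, 1) for y in sizes[1:]]
        self.weights = [np.random.randn(x, y)
                        for x, y in zip(sizes[1:], sizes[:-1])]

    def feedforward(self, a):
        for w, b in zip(self.weights, self.biases):
            a = sigmoid(np.dot(w, a) + b)
        return a

np.random.seed(0)
net = Network([2, 3, 1])
out = net.feedforward(np.random.randn(2, 1))
print(out.shape)  # (1, 1); the value is a sigmoid output, so it lies in (0, 1)
```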

5. The cost function

The quadratic cost function is

C(w, b) = (1/2n) Σ_x ‖y(x) − a‖²

where n is the number of training inputs, y(x) is the desired output for input x, and a is the network's actual output for x.
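As a sketch of how this cost could be evaluated on a handful of examples (the helper name quadratic_cost is hypothetical, not part of the article's code):

```python
import numpy as np

def quadratic_cost(outputs, targets):
    """Quadratic cost: C = 1/(2n) * sum over examples of ||y - a||^2."""
    n = len(outputs)
    return sum(np.linalg.norm(y - a) ** 2
               for a, y in zip(outputs, targets)) / (2.0 * n)

# Two toy one-neuron outputs compared against their targets:
outputs = [np.array([[0.8]]), np.array([[0.2]])]
targets = [np.array([[1.0]]), np.array([[0.0]])]
print(quadratic_cost(outputs, targets))  # (0.04 + 0.04) / 4 = 0.02
```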

6. Backpropagation

Backpropagation is really a way of understanding how changes to the weights and biases affect the cost function. Ultimately it amounts to computing the partial derivatives of the cost with respect to the weights and biases. Rather than computing these derivatives directly, we first compute the error at each neuron and then derive the partial derivatives from it. The four fundamental equations of backpropagation are:

δ^L = ∇_a C ⊙ σ′(z^L)                      (BP1)
δ^l = ((w^(l+1))^T δ^(l+1)) ⊙ σ′(z^l)      (BP2)
∂C/∂b^l_j = δ^l_j                          (BP3)
∂C/∂w^l_jk = a^(l−1)_k δ^l_j               (BP4)

The backpropagation equations give us a way of computing the gradient of the cost function. The algorithm:

1. Input x: set the activation a^1 of the input layer.
2. Feedforward: for each layer l = 2, 3, …, L compute z^l = w^l a^(l−1) + b^l and a^l = σ(z^l).
3. Output error: compute δ^L via (BP1).
4. Backpropagate the error: for l = L−1, L−2, …, 2 compute δ^l via (BP2).
5. Output: the gradient of the cost is given by (BP3) and (BP4).
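The steps above can be sketched in code. This is a minimal single-example sketch, not the article's final implementation: it assumes the quadratic cost from section 5 (so ∇_a C = a − y), uses the sigmoid and sigmoid_prime functions defined in this article, and the helper name backprop is hypothetical:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

def backprop(weights, biases, x, y):
    """Return (nabla_b, nabla_w): the gradient of the quadratic cost
    for a single training example (x, y)."""
    # Steps 1-2: feedforward, storing every activation and weighted input z.
    activation, activations, zs = x, [x], []
    for w, b in zip(weights, biases):
        z = np.dot(w, activation) + b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)
    nabla_b = [np.zeros(b.shape) for b in biases]
    nabla_w = [np.zeros(w.shape) for w in weights]
    # Step 3 (BP1): output error; (a - y) is grad_a C for the quadratic cost.
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    nabla_b[-1] = delta                             # BP3
    nabla_w[-1] = np.dot(delta, activations[-2].T)  # BP4
    # Step 4 (BP2): propagate the error back through the earlier layers.
    for l in range(2, len(weights) + 1):
        delta = np.dot(weights[-l + 1].T, delta) * sigmoid_prime(zs[-l])
        nabla_b[-l] = delta
        nabla_w[-l] = np.dot(delta, activations[-l - 1].T)
    return nabla_b, nabla_w

# Step 5: the gradients have the same shapes as the biases and weights.
sizes = [2, 3, 1]
np.random.seed(0)
biases = [np.random.randn(y, 1) for y in sizes[1:]]
weights = [np.random.randn(x, y) for x, y in zip(sizes[1:], sizes[:-1])]
nabla_b, nabla_w = backprop(weights, biases,
                            np.random.randn(2, 1), np.array([[1.0]]))
print([g.shape for g in nabla_w])  # [(3, 2), (1, 3)]
```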

The derivative of the sigmoid function, sigmoid_prime:

def sigmoid_prime(z):
    return sigmoid(z)*(1-sigmoid(z))
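A quick check: since σ(0) = 0.5, the derivative attains its maximum σ′(0) = 0.5 · (1 − 0.5) = 0.25:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    return sigmoid(z) * (1 - sigmoid(z))

print(sigmoid_prime(0))  # 0.25, the maximum of the derivative
```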

7. Loading the MNIST data

"""
mnist_loader
~~~~~~~~~~~~
A library to load the MNIST image data.  For details of the data
structures that are returned, see the doc strings for ``load_data``
and ``load_data_wrapper``.  In practice, ``load_data_wrapper`` is the
function usually called by our neural network code.
"""

#### Libraries
# Standard library
import pickle
import gzip

# Third-party libraries
import numpy as np

def load_data():
    """Return the MNIST data as a tuple containing the training data,
    the validation data, and the test data.
    The ``training_data`` is returned as a tuple with two entries.
    The first entry contains the actual training images.  This is a
    numpy ndarray with 50,000 entries.  Each entry is, in turn, a
    numpy ndarray with 784 values, representing the 28 * 28 = 784
    pixels in a single MNIST image.
    The second entry in the ``training_data`` tuple is a numpy ndarray
    containing 50,000 entries.  Those entries are just the digit
    values (0...9) for the corresponding images contained in the first
    entry of the tuple.
    The ``validation_data`` and ``test_data`` are similar, except
    each contains only 10,000 images.
    This is a nice data format, but for use in neural networks it's
    helpful to modify the format of the ``training_data`` a little.
    That's done in the wrapper function ``load_data_wrapper()``, see
    below.
    """
    with gzip.open('../data/mnist.pkl.gz', 'rb') as f:
        # encoding='latin1' lets Python 3 unpickle this Python 2 pickle file.
        training_data, validation_data, test_data = pickle.load(f, encoding='latin1')
    return (training_data, validation_data, test_data)

def load_data_wrapper():
    """Return a tuple containing ``(training_data, validation_data,
    test_data)``. Based on ``load_data``, but the format is more
    convenient for use in our implementation of neural networks.
    In particular, ``training_data`` is a list containing 50,000
    2-tuples ``(x, y)``.  ``x`` is a 784-dimensional numpy.ndarray
    containing the input image.  ``y`` is a 10-dimensional
    numpy.ndarray representing the unit vector corresponding to the
    correct digit for ``x``.
    ``validation_data`` and ``test_data`` are lists containing 10,000
    2-tuples ``(x, y)``.  In each case, ``x`` is a 784-dimensional
    numpy.ndarry containing the input image, and ``y`` is the
    corresponding classification, i.e., the digit values (integers)
    corresponding to ``x``.
    Obviously, this means we're using slightly different formats for
    the training data and the validation / test data.  These formats
    turn out to be the most convenient for use in our neural network
    code."""
    tr_d, va_d, te_d = load_data()
    training_inputs = [np.reshape(x, (784, 1)) for x in tr_d[0]]
    training_results = [vectorized_result(y) for y in tr_d[1]]
    training_data = list(zip(training_inputs, training_results))
    validation_inputs = [np.reshape(x, (784, 1)) for x in va_d[0]]
    validation_data = list(zip(validation_inputs, va_d[1]))
    test_inputs = [np.reshape(x, (784, 1)) for x in te_d[0]]
    test_data = list(zip(test_inputs, te_d[1]))
    return (training_data, validation_data, test_data)

def vectorized_result(j):
    """Return a 10-dimensional unit vector with a 1.0 in the jth
    position and zeroes elsewhere.  This is used to convert a digit
    (0...9) into a corresponding desired output from the neural
    network."""
    e = np.zeros((10, 1))
    e[j] = 1.0
    return e
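For example, vectorized_result(3) produces the unit vector encoding the digit 3:

```python
import numpy as np

def vectorized_result(j):
    e = np.zeros((10, 1))
    e[j] = 1.0
    return e

v = vectorized_result(3)
print(v.ravel())  # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```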

References

[1] Neural Networks and Deep Learning (PDF)
[2] A Python introductory tutorial
[3] Using Python's zip() for parallel iteration
[4] A collection of commonly used functions from Python and its libraries (NumPy, …)
