Andrew Ng's Coursera Deep Learning course, deeplearning.ai (2-3): TensorFlow Tutorial -- Programming Assignment

Runnable source code: https://download.csdn.net/download/haoyutiangang/10496503

TensorFlow Tutorial

1. Exploring the TensorFlow library

Imports

import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.python.framework import ops
from tf_utils import load_dataset, random_mini_batches, convert_to_one_hot, predict

%matplotlib inline
np.random.seed(1)

Helper functions (from tf_utils)

def load_dataset():
    train_dataset = h5py.File('datasets/train_signs.h5', "r")
    train_set_x_orig = np.array(train_dataset["train_set_x"][:]) # your train set features
    train_set_y_orig = np.array(train_dataset["train_set_y"][:]) # your train set labels

    test_dataset = h5py.File('datasets/test_signs.h5', "r")
    test_set_x_orig = np.array(test_dataset["test_set_x"][:]) # your test set features
    test_set_y_orig = np.array(test_dataset["test_set_y"][:]) # your test set labels

    classes = np.array(test_dataset["list_classes"][:]) # the list of classes

    train_set_y_orig = train_set_y_orig.reshape((1, train_set_y_orig.shape[0]))
    test_set_y_orig = test_set_y_orig.reshape((1, test_set_y_orig.shape[0]))

    return train_set_x_orig, train_set_y_orig, test_set_x_orig, test_set_y_orig, classes


def random_mini_batches(X, Y, mini_batch_size = 64, seed = 0):
    """
    Creates a list of random minibatches from (X, Y)

    Arguments:
    X -- input data, of shape (input size, number of examples)
    Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples)
    mini_batch_size -- size of the mini-batches, integer
    seed -- this is only for the purpose of grading, so that your "random" minibatches are the same as ours.

    Returns:
    mini_batches -- list of synchronous (mini_batch_X, mini_batch_Y)
    """

    m = X.shape[1]                  # number of training examples
    mini_batches = []
    np.random.seed(seed)

    # Step 1: Shuffle (X, Y)
    permutation = list(np.random.permutation(m))
    shuffled_X = X[:, permutation]
    shuffled_Y = Y[:, permutation].reshape((Y.shape[0],m))

    # Step 2: Partition (shuffled_X, shuffled_Y). Minus the end case.
    num_complete_minibatches = math.floor(m/mini_batch_size) # number of mini batches of size mini_batch_size in your partitionning
    for k in range(0, num_complete_minibatches):
        mini_batch_X = shuffled_X[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size]
        mini_batch_Y = shuffled_Y[:, k * mini_batch_size : k * mini_batch_size + mini_batch_size]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    # Handling the end case (last mini-batch < mini_batch_size)
    if m % mini_batch_size != 0:
        mini_batch_X = shuffled_X[:, num_complete_minibatches * mini_batch_size : m]
        mini_batch_Y = shuffled_Y[:, num_complete_minibatches * mini_batch_size : m]
        mini_batch = (mini_batch_X, mini_batch_Y)
        mini_batches.append(mini_batch)

    return mini_batches


def convert_to_one_hot(Y, C):
    Y = np.eye(C)[Y.reshape(-1)].T
    return Y


def predict(X, parameters):

    W1 = tf.convert_to_tensor(parameters["W1"])
    b1 = tf.convert_to_tensor(parameters["b1"])
    W2 = tf.convert_to_tensor(parameters["W2"])
    b2 = tf.convert_to_tensor(parameters["b2"])
    W3 = tf.convert_to_tensor(parameters["W3"])
    b3 = tf.convert_to_tensor(parameters["b3"])

    params = {"W1": W1,
              "b1": b1,
              "W2": W2,
              "b2": b2,
              "W3": W3,
              "b3": b3}

    x = tf.placeholder("float", [12288, 1])

    z3 = forward_propagation_for_predict(x, params)
    p = tf.argmax(z3)

    sess = tf.Session()
    prediction = sess.run(p, feed_dict = {x: X})

    return prediction
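
Note: predict calls forward_propagation_for_predict, which also lives in tf_utils but is not reproduced above. A minimal sketch of what it presumably contains, assuming it simply mirrors forward_propagation from section 2.3 (LINEAR -> RELU -> LINEAR -> RELU -> LINEAR):

def forward_propagation_for_predict(X, parameters):
    # Assumed to be the same three-layer forward pass used for training,
    # applied to the tensors stored in params.
    W1, b1 = parameters['W1'], parameters['b1']
    W2, b2 = parameters['W2'], parameters['b2']
    W3, b3 = parameters['W3'], parameters['b3']

    Z1 = tf.add(tf.matmul(W1, X), b1)    # LINEAR
    A1 = tf.nn.relu(Z1)                  # RELU
    Z2 = tf.add(tf.matmul(W2, A1), b2)   # LINEAR
    A2 = tf.nn.relu(Z2)                  # RELU
    Z3 = tf.add(tf.matmul(W3, A2), b3)   # LINEAR (softmax/argmax is applied afterwards)
    return Z3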

A simple example

$$loss = \mathcal{L}(\hat{y}, y) = (\hat{y}^{(i)} - y^{(i)})^2$$

y_hat = tf.constant(36, name='y_hat')            # Define y_hat constant. Set to 36.
y = tf.constant(39, name='y')                    # Define y. Set to 39

loss = tf.Variable((y - y_hat)**2, name='loss')  # Create a variable for the loss

init = tf.global_variables_initializer()         # When init is run later (session.run(init)),
                                                 # the loss variable will be initialized and ready to be computed
with tf.Session() as session:                    # Create a session and print the output
    session.run(init)                            # Initializes the variables
    print(session.run(loss))                     # Prints the loss


# 9

Steps for developing a TensorFlow program

  1. Create the variables (Tensors) to be computed: define their types and names (define the placeholders)
  2. Write the operations between those Tensors: build the computation structure (using the placeholders)
  3. Initialize the Tensors: define the feed_dict that fills the placeholders
  4. Create a session: pass in the structure and the placeholder feed_dict
  5. Run the session: execute the operations written above

Usually, when computing a loss, we first define it as a function, then fill in that function's inputs with an initialization step, and only then run the computation. This lets us compute different losses through different initializations without changing the loss function itself, as the sketch below illustrates.
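
For example, the following small sketch (illustrative only, not part of the assignment) defines a squared-error loss once with placeholders and then evaluates it for two different (y_hat, y) pairs just by changing feed_dict; the graph itself never changes:

import tensorflow as tf

y_hat = tf.placeholder(tf.float32, name="y_hat")
y = tf.placeholder(tf.float32, name="y")
loss = (y - y_hat) ** 2                                    # the structure is defined only once

with tf.Session() as sess:
    print(sess.run(loss, feed_dict={y_hat: 36, y: 39}))    # 9.0
    print(sess.run(loss, feed_dict={y_hat: 10, y: 15}))    # 25.0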

  • session: creation and execution
a = tf.constant(2)
b = tf.constant(10)
c = tf.multiply(a,b)
print(c)

# Tensor("Mul:0", shape=(), dtype=int32)

The code above does not print 20; it only prints the tensor, because so far we have only defined the computation and fed in the data without executing anything. To see the result, run it in a session:

sess = tf.Session()
print(sess.run(c))

# 20

Remember: placeholders, define the structure, initialize, create a session, run the session.

  • Using placeholders: placeholders and feed_dict
    1. placeholders: define the type and name of each placeholder
    2. Define the structure of the computation expression
    3. feed_dict: a dictionary of key-value pairs that fills the placeholders
# Change the value of x in the feed_dict

x = tf.placeholder(tf.int64, name = 'x')
print(sess.run(2 * x, feed_dict = {x: 3}))
sess.close()

# 6

Defining the structure tells TensorFlow how to build a graph; at session.run() time you pass the graph node together with the feed_dict that fills its placeholders. The None dimension in a placeholder's shape makes this especially flexible, as the small sketch below shows.
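
A small illustration (again just a sketch, not from the assignment): a placeholder declared with shape (3, None) can be fed matrices with different numbers of columns, which is exactly how mini-batches of varying size are fed into X later in this assignment.

import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[3, None], name="x")    # None = flexible number of examples
doubled = 2 * x

with tf.Session() as sess:
    print(sess.run(doubled, feed_dict={x: np.ones((3, 2))}))  # a (3, 2) batch
    print(sess.run(doubled, feed_dict={x: np.ones((3, 5))}))  # a (3, 5) batch, same graph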

1.1 Linear function

Exercise: implement Y = WX + b, where W and X are random matrices and b is a random vector.

W: (4, 3), X: (3, 1), b: (4, 1)
Defining the constant X:

X = tf.constant(np.random.randn(3,1), name = "X")

Functions that may be useful:

  • tf.matmul(…, …): matrix multiplication
  • tf.add(…, …): addition
  • np.random.randn(…): random initialization
# GRADED FUNCTION: linear_function

def linear_function():
    """
    Implements a linear function: 
            Initializes W to be a random tensor of shape (4,3)
            Initializes X to be a random tensor of shape (3,1)
            Initializes b to be a random tensor of shape (4,1)
    Returns: 
    result -- runs the session for Y = WX + b 
    """

    np.random.seed(1)

    ### START CODE HERE ### (4 lines of code)
    X = tf.constant(np.random.randn(3,1), name = "X")
    W = tf.constant(np.random.randn(4,3), name = "W")
    b = tf.constant(np.random.randn(4,1), name  = "b")
    Y = tf.add(tf.matmul(W, X), b)
    ### END CODE HERE ### 

    # Create the session using tf.Session() and run it with sess.run(...) on the variable you want to calculate

    ### START CODE HERE ###
    sess = tf.Session()
    result = sess.run(Y)
    ### END CODE HERE ### 

    # close the session 
    sess.close()

    return result

#########################################################

print( "result = " + str(linear_function()))

# result = [[-2.15657382]
#  [ 2.95891446]
#  [-1.08926781]
#  [-0.84538042]]

1.2 Computing the sigmoid

TensorFlow provides many commonly used functions, such as tf.sigmoid and tf.nn.softmax. Below we implement the sigmoid function ourselves.
1. Define a placeholder: tf.placeholder(tf.float32, name = "…")
2. Define the operation: tf.sigmoid(…)
3. Run the session: sess.run(…, feed_dict = {x: z})

There are two typical ways to run a session:

  • Method1
sess = tf.Session()
# Run the variables initialization (if needed), run the operations
result = sess.run(..., feed_dict = {...})
sess.close() # Close the session
  • Method2
with tf.Session() as sess: 
    # run the variables initialization (if needed), run the operations
    result = sess.run(..., feed_dict = {...})
    # This takes care of closing the session for you :)

Complete the exercise:

# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Computes the sigmoid of z

    Arguments:
    z -- input value, scalar or vector

    Returns: 
    results -- the sigmoid of z
    """

    ### START CODE HERE ### ( approx. 4 lines of code)
    # Create a placeholder for x. Name it 'x'.
    x = tf.placeholder(tf.float32, name = "x")

    # compute sigmoid(x)
    sigmoid = tf.sigmoid(x)

    # Create a session, and run it. Please use the method 2 explained above. 
    # You should use a feed_dict to pass z's value to x. 
    with tf.Session() as sess:
        # Run session and call the output "result"
        result = sess.run(sigmoid, feed_dict = {x:z})

    ### END CODE HERE ###

    return result

#########################################################

print ("sigmoid(0) = " + str(sigmoid(0)))
print ("sigmoid(12) = " + str(sigmoid(12)))

# sigmoid(0) = 0.5
# sigmoid(12) = 0.999994

Summary

  1. Create the placeholders
  2. Define the computation structure
  3. Build the feed_dict for the placeholders
  4. Create a session and run it, passing in the structure and the feed_dict

1.3 Computing the cost

Compute the cross-entropy cost for i = 1…m:

$$J = -\frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)}\log a^{[2](i)} + (1-y^{(i)})\log\left(1-a^{[2](i)}\right)\right)$$

This can be computed in one call: tf.nn.sigmoid_cross_entropy_with_logits(logits = …, labels = …)

Here logits is the pre-activation z and labels is y; the function applies the sigmoid internally, so it computes

$$-\frac{1}{m}\sum_{i=1}^{m}\left(y^{(i)}\log \sigma(z^{[2](i)}) + (1-y^{(i)})\log\left(1-\sigma(z^{[2](i)})\right)\right)$$

In the exercise below, the values passed in as logits are a = sigmoid(z).
# GRADED FUNCTION: cost

def cost(logits, labels):
    """
    Computes the cost using the sigmoid cross entropy

    Arguments:
    logits -- vector containing z, output of the last linear unit (before the final sigmoid activation)
    labels -- vector of labels y (1 or 0) 

    Note: What we've been calling "z" and "y" in this class are respectively called "logits" and "labels" 
    in the TensorFlow documentation. So logits will feed into z, and labels into y. 

    Returns:
    cost -- runs the session of the cost (formula (2))
    """

    ### START CODE HERE ### 

    # Create the placeholders for "logits" (z) and "labels" (y) (approx. 2 lines)
    z = tf.placeholder(tf.float32, name = "z")
    y = tf.placeholder(tf.float32, name = "y")

    # Use the loss function (approx. 1 line)
    cost = tf.nn.sigmoid_cross_entropy_with_logits(logits = z, labels = y)

    # Create a session (approx. 1 line). See method 1 above.
    sess = tf.Session()

    # Run the session (approx. 1 line).
    cost = sess.run(cost, feed_dict = {z:logits, y:labels})

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return cost

#########################################################

logits = sigmoid(np.array([0.2,0.4,0.7,0.9]))
cost = cost(logits, np.array([0,0,1,1]))
print ("cost = " + str(cost))

# cost = [ 1.00538719  1.03664088  0.41385433  0.39956614]
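
As a quick sanity check (illustrative, not part of the graded code), the four values above can be reproduced in numpy by applying the cross-entropy formula directly. Remember that the test passes sigmoid([0.2, 0.4, 0.7, 0.9]) in as the logits, and the TF op applies a sigmoid to them again:

import numpy as np

z = 1 / (1 + np.exp(-np.array([0.2, 0.4, 0.7, 0.9])))   # the "logits" fed to cost()
y = np.array([0., 0., 1., 1.])
s = 1 / (1 + np.exp(-z))                                 # sigmoid applied inside the TF op
manual = -(y * np.log(s) + (1 - y) * np.log(1 - s))
print(manual)   # approximately [1.0054  1.0366  0.4139  0.3996]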

1.4 Using one-hot encoding

Very often we need to convert a vector of numbers (each number representing a class) into a class matrix: each value in the vector maps to one column of the matrix, in which the entry for the matching class is 1 and all other entries are 0. This representation is called "one hot" (the position of the 1 is like a single hot spot).

[image: illustration of one-hot encoding]

In TensorFlow this is implemented by: tf.one_hot(labels, depth, axis)

A small example using one-hot:

# GRADED FUNCTION: one_hot_matrix

def one_hot_matrix(labels, C):
    """
    Creates a matrix where the i-th row corresponds to the ith class number and the jth column
                     corresponds to the jth training example. So if example j had a label i. Then entry (i,j) 
                     will be 1. 

    Arguments:
    labels -- vector containing the labels 
    C -- number of classes, the depth of the one hot dimension

    Returns: 
    one_hot -- one hot matrix
    """

    ### START CODE HERE ###

    # Create a tf.constant equal to C (depth), name it 'C'. (approx. 1 line)
    C = tf.constant(value = C, name = "C")

    # Use tf.one_hot, be careful with the axis (approx. 1 line)
    one_hot_matrix = tf.one_hot(labels, C, axis = 0)

    # Create the session (approx. 1 line)
    sess = tf.Session()

    # Run the session (approx. 1 line)
    one_hot = sess.run(one_hot_matrix)

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###

    return one_hot

#########################################################

labels = np.array([1,2,3,0,2,1])
one_hot = one_hot_matrix(labels, C = 4)
print ("one_hot = " + str(one_hot))

# one_hot = [[ 0.  0.  0.  1.  0.  0.]
#  [ 1.  0.  0.  0.  0.  1.]
#  [ 0.  1.  0.  0.  1.  0.]
#  [ 0.  0.  1.  0.  0.  0.]]
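
As a small cross-check (not part of the assignment), the same matrix can be produced with the numpy helper convert_to_one_hot shown in tf_utils at the top of this post, which indexes into an identity matrix:

import numpy as np

labels = np.array([1, 2, 3, 0, 2, 1])
print(np.eye(4)[labels.reshape(-1)].T)   # same 4 x 6 one-hot matrix as tf.one_hot with axis=0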

1.5 Initializing with zeros and ones

In TensorFlow, tf.zeros(shape) and tf.ones(shape) return an array of the given shape.

A small example

# GRADED FUNCTION: ones

def ones(shape):
    """
    Creates an array of ones of dimension shape

    Arguments:
    shape -- shape of the array you want to create

    Returns: 
    ones -- array containing only ones
    """

    ### START CODE HERE ###

    # Create "ones" tensor using tf.ones(...). (approx. 1 line)
    ones = tf.ones(shape)

    # Create the session (approx. 1 line)
    sess = tf.Session()

    # Run the session to compute 'ones' (approx. 1 line)
    ones = sess.run(ones)

    # Close the session (approx. 1 line). See method 1 above.
    sess.close()

    ### END CODE HERE ###
    return ones

#########################################################

print ("ones = " + str(ones([3])))

# ones = [ 1.  1.  1.]

2. Building your first neural network in TensorFlow

2.0 Problem statement: the hand-signs dataset

We spent an afternoon playing a small game: taking photos of hand signs for the digits 0-5 and writing a program to recognize them. It is great fun; give it a try.

  • Training set: 1080 hand-sign images of size (64, 64), 180 images for each digit 0-5
  • Test set: 120 images of size (64, 64), 20 images for each digit 0-5

This is a small toy dataset; datasets in real work are much larger.

Example hand-sign images:
[image: sample pictures from the SIGNS dataset for digits 0-5]

Load the dataset

# Loading the dataset
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_dataset()

Look at an example

# Example of a picture
index = 0
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))

# y = 5

Data preprocessing

  • Flatten each image into a one-dimensional vector
  • Normalize the vectors (divide by 255)
  • Convert the labels Y to one-hot encoding
# Flatten the training and test images
X_train_flatten = X_train_orig.reshape(X_train_orig.shape[0], -1).T
X_test_flatten = X_test_orig.reshape(X_test_orig.shape[0], -1).T
# Normalize image vectors
X_train = X_train_flatten/255.
X_test = X_test_flatten/255.
# Convert training and test labels to one hot matrices
Y_train = convert_to_one_hot(Y_train_orig, 6)
Y_test = convert_to_one_hot(Y_test_orig, 6)

#########################################################

print ("number of training examples = " + str(X_train.shape[1]))
print ("number of test examples = " + str(X_test.shape[1]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))

# number of training examples = 1080
# number of test examples = 120
# X_train shape: (12288, 1080)
# Y_train shape: (6, 1080)
# X_test shape: (12288, 120)
# Y_test shape: (6, 120)
  • 64*64*3 = 12288, where 3 is the number of RGB channels
  • Model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX
  • The output layer is a SIGMOID for binary classification and a SOFTMAX for multi-class classification (see the short illustration below)
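
A tiny illustration of that last point (a sketch, not part of the assignment): a sigmoid turns one logit into a single probability, while a softmax turns the six logits produced by this network into a probability distribution over the six classes.

import tensorflow as tf

logits = tf.constant([2.0, 1.0, 0.1, -1.0, 0.5, 0.0])   # made-up scores for the 6 classes

with tf.Session() as sess:
    print(sess.run(tf.sigmoid(tf.constant(2.0))))        # one probability (binary case)
    print(sess.run(tf.nn.softmax(logits)))               # six probabilities that sum to 1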

2.1 Creating placeholders

  • Create placeholders X and Y:
    • X holds the input vectors, here of size 64*64*3 = 12288
    • Y holds the number of output classes, here the digits 0-5, so 6
# GRADED FUNCTION: create_placeholders

def create_placeholders(n_x, n_y):
    """
    Creates the placeholders for the tensorflow session.

    Arguments:
    n_x -- scalar, size of an image vector (num_px * num_px = 64 * 64 * 3 = 12288)
    n_y -- scalar, number of classes (from 0 to 5, so -> 6)

    Returns:
    X -- placeholder for the data input, of shape [n_x, None] and dtype "float"
    Y -- placeholder for the input labels, of shape [n_y, None] and dtype "float"

    Tips:
    - You will use None because it lets us be flexible about the number of examples used with the placeholders.
      In fact, the number of examples during test/train is different.
    """

    ### START CODE HERE ### (approx. 2 lines)
    X = tf.placeholder(tf.float32, shape = [n_x, None])
    Y = tf.placeholder(tf.float32, shape = [n_y, None])
    ### END CODE HERE ###

    return X, Y

#########################################################

X, Y = create_placeholders(12288, 6)
print ("X = " + str(X))
print ("Y = " + str(Y))


# X = Tensor("Placeholder:0", shape=(12288, ?), dtype=float32)
# Y = Tensor("Placeholder_1:0", shape=(6, ?), dtype=float32)

2.2 Initializing the parameters

  • Use Xavier initialization for the weights W
  • Use zero initialization for the biases b

Hint

W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())

The random seed is set to 1 in the code so that the random values stay reproducible.
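
For intuition only, a rough numpy sketch of the idea behind Xavier initialization follows (this is an approximation; the tf.contrib initializer actually scales using both the fan-in and fan-out of the layer): the weights are drawn with a variance that shrinks as the number of inputs grows, which helps keep activations from exploding or vanishing.

import numpy as np

n_in, n_out = 12288, 25                                     # shape of W1 in this assignment
W1_sketch = np.random.randn(n_out, n_in) * np.sqrt(1.0 / n_in)
print(W1_sketch.std())                                      # roughly sqrt(1/12288), about 0.009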

# GRADED FUNCTION: initialize_parameters

def initialize_parameters():
    """
    Initializes parameters to build a neural network with tensorflow. The shapes are:
                        W1 : [25, 12288]
                        b1 : [25, 1]
                        W2 : [12, 25]
                        b2 : [12, 1]
                        W3 : [6, 12]
                        b3 : [6, 1]

    Returns:
    parameters -- a dictionary of tensors containing W1, b1, W2, b2, W3, b3
    """

    tf.set_random_seed(1)                   # so that your "random" numbers match ours

    ### START CODE HERE ### (approx. 6 lines of code)
    W1 = tf.get_variable("W1", [25,12288], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
    b1 = tf.get_variable("b1", [25,1], initializer = tf.zeros_initializer())
    W2 = tf.get_variable("W2", [12,25], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
    b2 = tf.get_variable("b2", [12,1], initializer = tf.zeros_initializer())
    W3 = tf.get_variable("W3",[6,12], initializer = tf.contrib.layers.xavier_initializer(seed = 1))
    b3 = tf.get_variable("b3", [6,1], initializer = tf.zeros_initializer())
    ### END CODE HERE ###

    parameters = {"W1": W1,
                  "b1": b1,
                  "W2": W2,
                  "b2": b2,
                  "W3": W3,
                  "b3": b3}

    return parameters

#########################################################

tf.reset_default_graph()
with tf.Session() as sess:
    parameters = initialize_parameters()
    print("W1 = " + str(parameters["W1"]))
    print("b1 = " + str(parameters["b1"]))
    print("W2 = " + str(parameters["W2"]))
    print("b2 = " + str(parameters["b2"]))


# W1 = <tf.Variable 'W1:0' shape=(25, 12288) dtype=float32_ref>
# b1 = <tf.Variable 'b1:0' shape=(25, 1) dtype=float32_ref>
# W2 = <tf.Variable 'W2:0' shape=(12, 25) dtype=float32_ref>
# b2 = <tf.Variable 'b2:0' shape=(12, 1) dtype=float32_ref>

As expected, the parameters have not actually been computed yet.

2.3 Forward propagation in TensorFlow

Implement the forward propagation function from X and parameters.
- tf.add(…, …): addition
- tf.matmul(…, …): matrix multiplication
- tf.nn.relu(…): the ReLU activation

Note: the function stops at Z3 and does not compute A3, because in TensorFlow the output of the last linear layer is fed directly into the function that computes the loss.

# GRADED FUNCTION: forward_propagation

def forward_propagation(X, parameters):
    """
    Implements the forward propagation for the model: LINEAR -> RELU -> LINEAR -> RELU -> LINEAR -> SOFTMAX

    Arguments:
    X -- input dataset placeholder, of shape (input size, number of examples)
    parameters -- python dictionary containing your parameters "W1", "b1", "W2", "b2", "W3", "b3"
                  the shapes are given in initialize_parameters

    Returns:
    Z3 -- the output of the last LINEAR unit
    """

    # Retrieve the parameters from the dictionary "parameters" 
    W1 = parameters['W1']
    b1 = parameters['b1']
    W2 = parameters['W2']
    b2 = parameters['b2']
    W3 = parameters['W3']
    b3 = parameters['b3']

    ### START CODE HERE ### (approx. 5 lines)              # Numpy Equivalents:
    Z1 = tf.add(tf.matmul(W1, X), b1)                      # Z1 = np.dot(W1, X) + b1
    A1 = tf.nn.relu(Z1)                                    # A1 = relu(Z1)
    Z2 = tf.add(tf.matmul(W2, A1), b2)                     # Z2 = np.dot(W2, a1) + b2
    A2 = tf.nn.relu(Z2)                                    # A2 = relu(Z2)
    Z3 = tf.add(tf.matmul(W3, A2), b3)                     # Z3 = np.dot(W3, A2) + b3
    ### END CODE HERE ###

    return Z3

#########################################################

tf.reset_default_graph()

with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    print("Z3 = " + str(Z3))

# Z3 = Tensor("Add_2:0", shape=(6, ?), dtype=float32)

Notice that we did not cache any intermediate values; you will see why shortly.

2.4 Computing the cost

As mentioned above, the cost can be computed very simply with:

tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = ..., labels = ...))

where:

  • logits: the Z3 tensor (number of classes × number of examples)
  • labels: the Y tensor, with the same shape

Both are transposed in the code below because tf.nn.softmax_cross_entropy_with_logits expects shape (number of examples, number of classes).
# GRADED FUNCTION: compute_cost 

def compute_cost(Z3, Y):
    """
    Computes the cost

    Arguments:
    Z3 -- output of forward propagation (output of the last LINEAR unit), of shape (6, number of examples)
    Y -- "true" labels vector placeholder, same shape as Z3

    Returns:
    cost - Tensor of the cost function
    """

    # to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)

    ### START CODE HERE ### (1 line of code)
    cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits = logits, labels = labels))
    ### END CODE HERE ###

    return cost

#########################################################

tf.reset_default_graph()

with tf.Session() as sess:
    X, Y = create_placeholders(12288, 6)
    parameters = initialize_parameters()
    Z3 = forward_propagation(X, parameters)
    cost = compute_cost(Z3, Y)
    print("cost = " + str(cost))

# cost = Tensor("Mean:0", shape=(), dtype=float32)

2.5 Backpropagation & parameter update

This is where you really appreciate the framework: backpropagation and the parameter update take just one line of code.

After computing the cost, you create an optimizer object. When you call sess.run you pass in the cost together with the optimizer; the framework then minimizes the cost at the given learning_rate.

Defining the gradient descent optimizer:

optimizer = tf.train.GradientDescentOptimizer(learning_rate = learning_rate).minimize(cost)

Running the optimization:

_ , c = sess.run([optimizer, cost], feed_dict={X: minibatch_X, Y: minibatch_Y})

This computes the backpropagation by passing through the graph you built, in reverse order.

Note: in code, "_" is conventionally used to store a "throwaway" variable, i.e. a temporary value that will not be used again.

Here, _ holds the evaluated optimizer, which we do not need, and c holds the value of the cost.
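
Before assembling the full model, here is a minimal stand-alone sketch (separate from the assignment) showing that the single optimizer line really does perform backpropagation and the parameter update: gradient descent drives one variable w toward the minimum of (w - 5)^2.

import tensorflow as tf

w = tf.Variable(0.0, name="w")
cost = (w - 5.0) ** 2
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        _, c = sess.run([optimizer, cost])   # "_" discards the optimizer op's return value
    print(sess.run(w))                       # close to 5.0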

2.6 Building the model

Putting the functions above together, build the full model:

def model(X_train, Y_train, X_test, Y_test, learning_rate = 0.0001,
          num_epochs = 1500, minibatch_size = 32, print_cost = True):
    """
    Implements a three-layer tensorflow neural network: LINEAR->RELU->LINEAR->RELU->LINEAR->SOFTMAX.

    Arguments:
    X_train -- training set, of shape (input size = 12288, number of training examples = 1080)
    Y_train -- training labels, of shape (output size = 6, number of training examples = 1080)
    X_test -- test set, of shape (input size = 12288, number of test examples = 120)
    Y_test -- test set, of shape (output size = 6, number of test examples = 120)
    learning_rate -- learning rate of the optimization
    num_epochs -- number of epochs of the optimization loop
    minibatch_size -- size of a minibatch
    print_cost -- True to print the cost every 100 epochs

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    """

    ops.reset_default_graph()                         # to be able to rerun the model without overwriting tf variables
    tf.set_random_seed(1)                             # to keep consistent results
    seed = 3                                          # to keep consistent results
    (n_x, m) = X_train.shape                          # (n_x: input size, m : number of examples in the train set)
    n_y = Y_train.shape[0]                            # n_y : output size
    costs = []                                        # To keep track of the cost

    # Create Placeholders of shape (n_x, n_y)
    ### START CODE HERE ### (1 line)
    X, Y = create_placeholders(n_x, n_y)
    ### END CODE HERE ###

    # Initialize parameters
    ### START CODE HERE ### (1 line)
    parameters = initialize_parameters()
    ### END CODE HERE ###

    # Forward propagation: Build the forward propagation in the tensorflow graph
    ### START CODE HERE ### (1 line)
    Z3 = forward_propagation(X, parameters)
    ### END CODE HERE ###

    # Cost function: Add cost function to tensorflow graph
    ### START CODE HERE ### (1 line)
    cost = compute_cost(Z3, Y)
    ### END CODE HERE ###

    # Backpropagation: Define the tensorflow optimizer. Use an AdamOptimizer.
    ### START CODE HERE ### (1 line)
    optimizer = tf.train.AdamOptimizer(learning_rate = learning_rate).minimize(cost)
    ### END CODE HERE ###

    # Initialize all the variables
    init = tf.global_variables_initializer()

    # Start the session to compute the tensorflow graph
    with tf.Session() as sess:

        # Run the initialization
        sess.run(init)

        # Do the training loop
        for epoch in range(num_epochs):

            epoch_cost = 0.                       # Defines a cost related to an epoch
            num_minibatches = int(m / minibatch_size) # number of minibatches of size minibatch_size in the train set
            seed = seed + 1
            minibatches = random_mini_batches(X_train, Y_train, minibatch_size, seed)

            for minibatch in minibatches:

                # Select a minibatch
                (minibatch_X, minibatch_Y) = minibatch

                # IMPORTANT: The line that runs the graph on a minibatch.
                # Run the session to execute the "optimizer" and the "cost", the feedict should contain a minibatch for (X,Y).
                ### START CODE HERE ### (1 line)
                _ , minibatch_cost = sess.run([optimizer, cost], feed_dict = {X: minibatch_X, Y: minibatch_Y})
                ### END CODE HERE ###

                epoch_cost += minibatch_cost / num_minibatches

            # Print the cost every epoch
            if print_cost == True and epoch % 100 == 0:
                print ("Cost after epoch %i: %f" % (epoch, epoch_cost))
            if print_cost == True and epoch % 5 == 0:
                costs.append(epoch_cost)

        # plot the cost
        plt.plot(np.squeeze(costs))
        plt.ylabel('cost')
        plt.xlabel('epochs (per fives)')
        plt.title("Learning rate =" + str(learning_rate))
        plt.show()

        # lets save the parameters in a variable
        parameters = sess.run(parameters)
        print ("Parameters have been trained!")

        # Calculate the correct predictions
        correct_prediction = tf.equal(tf.argmax(Z3), tf.argmax(Y))

        # Calculate accuracy on the test set
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, "float"))

        print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train}))
        print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test}))

        return parameters

#########################################################

parameters = model(X_train, Y_train, X_test, Y_test)

# Cost after epoch 0: 1.855702
# Cost after epoch 100: 1.016458
# Cost after epoch 200: 0.733102
# Cost after epoch 300: 0.572940
# Cost after epoch 400: 0.468774
# Cost after epoch 500: 0.381021
# Cost after epoch 600: 0.313822
# Cost after epoch 700: 0.254158
# Cost after epoch 800: 0.203829
# Cost after epoch 900: 0.166421
# Cost after epoch 1000: 0.141486
# Cost after epoch 1100: 0.107580
# Cost after epoch 1200: 0.086270
# Cost after epoch 1300: 0.059371
# Cost after epoch 1400: 0.052228

# Parameters have been trained!
# Train Accuracy: 0.999074
# Test Accuracy: 0.716667

[image: plot of the training cost decreasing over epochs, titled "Learning rate = 0.0001"]

Remarks

  • The training accuracy is high but the test accuracy lags behind; L2 or dropout regularization could be used to reduce the overfitting (a rough sketch follows below).
  • Notice how the session is used as a single block of code to train the model.
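
As a hedged sketch of the first point (not part of the graded assignment; the lambd value and the use of tf.nn.l2_loss are my own choices), one way to add L2 regularization would be to extend compute_cost with a weight-decay term and leave the rest of the model unchanged:

def compute_cost_with_l2(Z3, Y, parameters, lambd=0.01):
    # softmax cross-entropy, exactly as in compute_cost above
    logits = tf.transpose(Z3)
    labels = tf.transpose(Y)
    cross_entropy = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))

    # L2 penalty on the weight matrices (not the biases), scaled by lambd
    l2 = lambd * (tf.nn.l2_loss(parameters["W1"]) +
                  tf.nn.l2_loss(parameters["W2"]) +
                  tf.nn.l2_loss(parameters["W3"]))
    return cross_entropy + l2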

2.7 Test your own image

import scipy
from PIL import Image
from scipy import ndimage

## START CODE HERE ## (PUT YOUR IMAGE NAME) 
my_image = "thumbs_up.jpg"
## END CODE HERE ##

# We preprocess your image to fit your algorithm.
fname = "images/" + my_image
image = np.array(ndimage.imread(fname, flatten=False))
my_image = scipy.misc.imresize(image, size=(64,64)).reshape((1, 64*64*3)).T
my_image_prediction = predict(my_image, parameters)

plt.imshow(image)
print("Your algorithm predicts: y = " + str(np.squeeze(my_image_prediction)))

# Your algorithm predicts: y = 3

[image: the "thumbs up" test picture]

Experimenting a bit shows that the digits 0-5 are recognized fairly well, but a "thumbs up" gesture is misclassified, probably because the training set does not contain that gesture, so the model has never seen it. This is called a "mismatched data distribution"; the next course, "Structuring Machine Learning Projects", covers this topic.

Key takeaways

  • TensorFlow is a programming framework used in deep learning
  • The two main object classes in TensorFlow are Tensors and Operators
  • A TensorFlow program follows these steps:
    • Create a graph containing Tensors (Variables, Placeholders …) and Operations (tf.matmul, tf.add, …)
    • Create a session
    • Initialize the session
    • Run the session to execute the graph
    • The graph can be run multiple times (for multiple iterations)
    • Backpropagation and the parameter update happen automatically when the session is run on the optimizer