Deep Learning Notes (6): Detailed Walkthrough and Code for Course 1, Week 2 Assignment, Part 2

Preface

I am working on the platform linked below; you are welcome to fork the project and join in the hands-on deep learning coding practice. The link is:
https://www.kesci.com/home/project/5dd23dbf00b0b900365ecef1

Chapter Overview

This part still follows the same thread of implementing logistic regression with a neural network mindset. Here our task is a simple binary classification problem: build a model that decides whether the object in a picture is a cat.

Assignment

Logistic Regression with a Neural Network Mindset

Welcome to your first programming assignment! You will learn how to build a logistic regression classifier to recognize cats. This assignment will walk you through the neural network mindset step by step and sharpen your intuitions about deep learning.

Instructions:
Do not use loops (for/while) in your code unless the instructions explicitly ask you to.

You will learn to:

  • Build the general architecture of a learning algorithm, including:
    • Initializing parameters
    • Computing the cost function and its gradient
    • Using an optimization algorithm (gradient descent)
  • Gather all three functions above into a main model function, in the right order.

Checking Dataset Sizes and Reshaping the Training and Test Sets

Much of the time, errors in deep learning come from mismatched matrix and vector dimensions, so we need a clear picture of how much data we have and what shape it is in.
Below we write a few lines to check the number of examples in the training and test sets. (In my view, in real applications these dimensions and their ordering depend on how the data happens to be stored, so we have to look at each case on its own terms; it never hurts to simply print(x.shape) and dump all of the shape information first.)

### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###

print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))

A color image is really just three channels, R, G, and B, each holding values from 0 to 255. To use it as data for training and prediction, it is convenient to flatten it first, so we reshape the dimensions. The assignment states it clearly: flatten each (num_px, num_px, 3) image into a (num_px * num_px * 3, 1) vector.

There is a small trick here.

X_flatten = X.reshape(X.shape[0], -1).T     # where X.T is the transpose of X

The explanation: after reading off the size of X's first dimension, we keep it as the first dimension of the new numpy array and pass -1 for the second, so that numpy folds all remaining dimensions into it. In this example that yields a (209, 64 × 64 × 3) numpy array (209 being the number of examples); transposing then swaps the two dimensions and gives us the flattened array.

Next, to keep the features in a similar range so that the gradients do not explode, we standardize the data; this matters. Normally one would normalize by a row or column norm, but RGB data is a special case: every value lies between 0 and 255, so simply dividing everything by 255 is enough.
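To make this concrete, here is a minimal sketch of the flattening and standardization just described, assuming the arrays train_set_x_orig and test_set_x_orig of shape (m, num_px, num_px, 3) loaded above:

# Sketch: flatten each image into a column, then scale pixel values into [0, 1].
train_set_x_flatten = train_set_x_orig.reshape(train_set_x_orig.shape[0], -1).T
test_set_x_flatten = test_set_x_orig.reshape(test_set_x_orig.shape[0], -1).T

# Pixel values lie in [0, 255], so dividing by 255 is enough here.
train_set_x = train_set_x_flatten / 255.
test_set_x = test_set_x_flatten / 255.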

Looking back, this whole process is just a way of turning image data into numeric features. Once the flattening and standardization above are done, the pre-processing is complete and we can move on to training and classifying with our model.

What to remember:

Common steps for pre-processing a new dataset are:

  • Figure out the sizes and shapes of the data (m_train, m_test, num_px, ...)
  • Reshape the dataset so that each example is a vector of size (num_px * num_px * 3, 1)
  • "Standardize" the data

General Architecture of the Learning Algorithm

The figure in the assignment captures this so well that I had to excerpt it here.

(Figure: the general architecture of the learning algorithm, i.e., logistic regression drawn as a simple neural network.)

Mathematical expression of the algorithm

For one example $x^{(i)}$:
$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$
$$\hat{y}^{(i)} = a^{(i)} = \mathrm{sigmoid}(z^{(i)}) \tag{2}$$
$$\mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)}) \log(1-a^{(i)}) \tag{3}$$

The cost is then computed by summing over all training examples:
$$J = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}(a^{(i)}, y^{(i)}) \tag{6}$$

Key steps
In this exercise, you will carry out the following steps:

  • Initialize the model's parameters
  • Learn the parameters by minimizing the cost
  • Use the learned parameters to make predictions (on the test set)
  • Analyze the results and draw conclusions

So, let's get hands-on.

Helper Functions

sigmoid (an old friend, so no need to belabor it)

import numpy as np

# GRADED FUNCTION: sigmoid

def sigmoid(z):
    """
    Compute the sigmoid of z

    Arguments:
    z -- A scalar or numpy array of any size.

    Return:
    s -- sigmoid(z)
    """

    ### START CODE HERE ### (≈ 1 line of code)
    s = 1 / (1 + np.exp(-z))
    ### END CODE HERE ###
    
    return s
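A quick sanity check (from the formula, sigmoid(0) = 0.5 and sigmoid(2) ≈ 0.8808):

print("sigmoid([0, 2]) = " + str(sigmoid(np.array([0, 2]))))
# Prints roughly: sigmoid([0, 2]) = [0.5        0.88079708]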

Initializing the Parameters

Generate an all-zero $w$ and initialize $b$ to 0.

# GRADED FUNCTION: initialize_with_zeros

def initialize_with_zeros(dim):
    """
    This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
    
    Argument:
    dim -- size of the w vector we want (or number of parameters in this case)
    
    Returns:
    w -- initialized vector of shape (dim, 1)
    b -- initialized scalar (corresponds to the bias)
    """
    
    ### START CODE HERE ### (≈ 1 line of code)
    w = np.zeros((dim, 1))
    b = 0
    ### END CODE HERE ###

    assert(w.shape == (dim, 1))
    assert(isinstance(b, float) or isinstance(b, int))
    
    return w, b
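A trivial check that the shapes come out as documented (dim = 2 here is arbitrary):

w, b = initialize_with_zeros(2)
print("w = " + str(w))   # a (2, 1) column vector of zeros
print("b = " + str(b))   # 0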

Forward and Backward Propagation

Now we carry out the "forward" and "backward" propagation steps to learn the parameters; that is, we implement the function propagate(), which computes the cost function and its gradient.

Hints

Forward propagation:

  • You get X
  • You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, \dots, a^{(m-1)}, a^{(m)})$
  • You compute the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log(a^{(i)}) + (1-y^{(i)})\log(1-a^{(i)})\right]$

You will use the following two formulas:
$$\frac{\partial J}{\partial w} = \frac{1}{m} X (A-Y)^T \tag{7}$$
$$\frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^{m} (a^{(i)} - y^{(i)}) \tag{8}$$
(One lesson for me here: when I planned to derive these on the spot and code them straight away, I got a little flustered. So when we design a model and program it from scratch, or stitch pieces together, we had better write the formulas down first; otherwise it is easy to get lost while coding.)

# GRADED FUNCTION: propagate

def propagate(w, b, X, Y):
    """
    Implement the cost function and its gradient for the propagation explained above

    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)

    Return:
    cost -- negative log-likelihood cost for logistic regression
    dw -- gradient of the loss with respect to w, thus same shape as w
    db -- gradient of the loss with respect to b, thus same shape as b
    
    Tips:
    - Write your code step by step for the propagation. np.log(), np.dot()
    """
    
    m = X.shape[1]
    
    # FORWARD PROPAGATION (FROM X TO COST)
    ### START CODE HERE ### (≈ 2 lines of code)
    a = sigmoid(np.dot(w.T, X) + b)                 # activation, shape (1, m)
    cost = -1 / m * (np.dot(Y, np.log(a).T) + np.dot(1 - Y, np.log(1 - a).T))
    ### END CODE HERE ###
    
    # BACKWARD PROPAGATION (TO FIND GRAD)
    ### START CODE HERE ### (≈ 2 lines of code)
    dw = 1 / m * np.dot(X, (a - Y).T)
    db = 1 / m * np.sum(a - Y, axis=1, keepdims=True)
    ### END CODE HERE ###
    assert(dw.shape == w.shape)
    assert(db.dtype == float)
    db = np.squeeze(db)
    cost = np.squeeze(cost)
    assert(cost.shape == ())
    
    grads = {"dw": dw,
             "db": db}
    
    return grads, cost

Optimizing the Parameters (Gradient Descent)

Just follow the cycle of propagate, update the parameters, propagate again, and so on; the gradient descent update itself is a simple formula, restated below.
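Written out with learning rate $\alpha$, each iteration of the loop applies the standard gradient descent update (this is exactly what the two update lines in the code below do):

$$w := w - \alpha \frac{\partial J}{\partial w}, \qquad b := b - \alpha \frac{\partial J}{\partial b}$$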

# GRADED FUNCTION: optimize

def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
    """
    This function optimizes w and b by running a gradient descent algorithm
    
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of shape (num_px * num_px * 3, number of examples)
    Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
    num_iterations -- number of iterations of the optimization loop
    learning_rate -- learning rate of the gradient descent update rule
    print_cost -- True to print the loss every 100 steps
    
    Returns:
    params -- dictionary containing the weights w and bias b
    grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
    costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
    
    Tips:
    You basically need to write down two steps and iterate through them:
        1) Calculate the cost and the gradient for the current parameters. Use propagate().
        2) Update the parameters using gradient descent rule for w and b.
    """
    
    costs = []
    
    for i in range(num_iterations):
        
        
        # Cost and gradient calculation (≈ 1-4 lines of code)
        ### START CODE HERE ### 
        grads,cost = propagate(w,b,X,Y)
        ### END CODE HERE ###
        
        # Retrieve derivatives from grads
        dw = grads["dw"]
        db = grads["db"]
        
        # update rule (≈ 2 lines of code)
        ### START CODE HERE ###
        w = w - learning_rate*dw
        b = b - learning_rate*db
        ### END CODE HERE ###
        
        # Record the costs
        if i % 100 == 0:
            costs.append(cost)
        
        # Print the cost every 100 iterations
        if print_cost and i % 100 == 0:
            print ("Cost after iteration %i: %f" %(i, cost))
    
    params = {"w": w,
              "b": b}
    
    grads = {"dw": dw,
             "db": db}
    
    return params, grads, costs

Making Predictions

Prediction is equally simple: plug X into the model with the learned parameters and compare the sigmoid output against 0.5. If it is greater than 0.5, predict 1; otherwise predict 0.

# GRADED FUNCTION: predict

def predict(w, b, X):
    '''
    Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
    
    Arguments:
    w -- weights, a numpy array of size (num_px * num_px * 3, 1)
    b -- bias, a scalar
    X -- data of size (num_px * num_px * 3, number of examples)
    
    Returns:
    Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
    '''
    
    m = X.shape[1]
    Y_prediction = np.zeros((1,m))
    w = w.reshape(X.shape[0], 1)
    
    # Compute vector "A" predicting the probabilities of a cat being present in the picture
    ### START CODE HERE ### (≈ 1 line of code)
    A = sigmoid(np.dot(w.T,X)+b)
    ### END CODE HERE ###

    for i in range(A.shape[1]):
        # Convert probabilities A[0,i] to actual predictions p[0,i]
        ### START CODE HERE ### (≈ 4 lines of code)
        if A[0,i]>0.5:
            Y_prediction[0,i]=1
        else:
            Y_prediction[0,i]=0
        ### END CODE HERE ###
    
    assert(Y_prediction.shape == (1, m))
    
    return Y_prediction

Things to remember:
You have implemented several functions that:

  • Initialize (w, b)
  • Optimize the loss iteratively to learn the parameters (w, b):
    • Compute the cost and its gradient
    • Update the parameters using gradient descent
  • Use the learned (w, b) to predict the labels for a given set of examples

Putting the Model Together and Making Predictions

Step 1: initialize the parameters;
Step 2: optimize with gradient descent;
Step 3: predict;
Step 4: evaluate.

# GRADED FUNCTION: model

def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
    """
    Builds the logistic regression model by calling the function you've implemented previously
    
    Arguments:
    X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
    Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
    X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
    Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
    num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
    learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
    print_cost -- Set to true to print the cost every 100 iterations
    
    Returns:
    d -- dictionary containing information about the model.
    """
    
    ### START CODE HERE ###
    
    # initialize parameters with zeros (≈ 1 line of code)
    w,b = initialize_with_zeros(X_train.shape[0])

    # Gradient descent (≈ 1 line of code)
    params, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
    
    # Retrieve parameters w and b from dictionary "parameters"
    w = params["w"]
    b = params["b"]
    
    # Predict test/train set examples (≈ 2 lines of code)
    Y_prediction_train = predict(w, b, X_train)
    Y_prediction_test = predict(w, b, X_test)

    ### END CODE HERE ###

    # Print train/test Errors
    print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
    print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))

    
    d = {"costs": costs,
         "Y_prediction_test": Y_prediction_test, 
         "Y_prediction_train" : Y_prediction_train, 
         "w" : w, 
         "b" : b,
         "learning_rate" : learning_rate,
         "num_iterations": num_iterations}
    
    return d
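With the pieces assembled, a typical run on the pre-processed data looks like the following sketch (it assumes the train_set_x / test_set_x arrays produced earlier; 2000 iterations and a learning rate of 0.005 are the values used in the assignment):

d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)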

Choosing the Learning Rate

For gradient descent to work, the learning rate has to be chosen wisely. The learning rate $\alpha$ determines how quickly we update the parameters. If it is too large, we may "overshoot" the optimal value; if it is too small, we will need many more iterations to converge. That is why a well-tuned learning rate is crucial. Keep in mind this is largely an empirical matter, but it is something we can and should tune.
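A rough way to see this effect is to rerun model() with a few different learning rates and plot the cost curves together. The sketch below assumes matplotlib is available; the rates 0.01, 0.001, and 0.0001 are the ones compared in the assignment, and 1500 iterations keeps the runs short:

import matplotlib.pyplot as plt

learning_rates = [0.01, 0.001, 0.0001]
models = {}
for lr in learning_rates:
    print("learning rate is: " + str(lr))
    models[str(lr)] = model(train_set_x, train_set_y, test_set_x, test_set_y,
                            num_iterations = 1500, learning_rate = lr, print_cost = False)
    print('\n' + "-------------------------------------------------------" + '\n')

# Plot one cost curve per learning rate (costs are recorded every 100 iterations).
for lr in learning_rates:
    plt.plot(np.squeeze(models[str(lr)]["costs"]), label = str(lr))

plt.ylabel('cost')
plt.xlabel('iterations (hundreds)')
plt.legend(loc = 'upper center')
plt.show()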

What to remember from this assignment:

  1. Pre-processing the dataset is important.
  2. You implemented each function separately: initialize(), propagate(), optimize(), and then used them to build a model().
  3. Tuning the learning rate (an example of a "hyperparameter") can make a big difference to the algorithm. You will see more examples of this later in this course!