Deep Learning Series: cs231n assignment1 softmax (Part 4)

Preface: the softmax part of the assignment differs from the SVM part only in the loss function and its gradient; everything else is almost identical. For example, prediction still picks the class with the highest score. So today let's work through the softmax part of the assignment.

Contents

Today I will explain the softmax loss function and how to compute its gradient, then implement both a loop-based and a vectorized version of the softmax loss. For training and prediction we use the functions in linear_classifier.py; the SGD training and prediction functions are the same as in the previous post, so that code is not shown again here (a rough sketch is included below for reference).
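
For readers who skipped the previous post, here is a minimal sketch of what that shared mini-batch SGD training and prediction logic looks like. This is only an illustration written against the cs231n scaffold; the class and parameter names (LinearClassifierSketch, loss_fn) are my own placeholders, not the exact contents of linear_classifier.py:

import numpy as np

class LinearClassifierSketch(object):
    """Rough sketch of the shared train/predict logic in linear_classifier.py.
    loss_fn(W, X_batch, y_batch, reg) must return (loss, dW), e.g. the
    softmax_loss_vectorized function implemented later in this post."""

    def __init__(self, loss_fn):
        self.loss_fn = loss_fn
        self.W = None

    def train(self, X, y, learning_rate=1e-7, reg=2.5e4, num_iters=1500, batch_size=200):
        num_train, dim = X.shape
        num_classes = np.max(y) + 1
        if self.W is None:
            # small random initialization of the weight matrix
            self.W = 0.001 * np.random.randn(dim, num_classes)
        loss_history = []
        for it in range(num_iters):
            # sample a mini-batch of training examples with replacement
            idx = np.random.choice(num_train, batch_size, replace=True)
            loss, grad = self.loss_fn(self.W, X[idx], y[idx], reg)
            loss_history.append(loss)
            # vanilla stochastic gradient descent update
            self.W -= learning_rate * grad
        return loss_history

    def predict(self, X):
        # predict the class with the highest score for each example
        return np.argmax(np.dot(X, self.W), axis=1)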

Getting started

1. Load packages and data
As before, the dataset is split into four parts: train, val, test, and dev.

import random
import numpy as np
from cs231n.data_utils import load_CIFAR10
import matplotlib.pyplot as plt


%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'

# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000, num_dev=500):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for the linear classifier. These are the same steps as we used for the
    SVM, but condensed to a single function.  
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
    
    # subsample the data
    mask = list(range(num_training, num_training + num_validation))
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = list(range(num_training))
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = list(range(num_test))
    X_test = X_test[mask]
    y_test = y_test[mask]
    mask = np.random.choice(num_training, num_dev, replace=False)
    X_dev = X_train[mask]
    y_dev = y_train[mask]
    
    # Preprocessing: reshape the image data into rows
    X_train = np.reshape(X_train, (X_train.shape[0], -1))
    X_val = np.reshape(X_val, (X_val.shape[0], -1))
    X_test = np.reshape(X_test, (X_test.shape[0], -1))
    X_dev = np.reshape(X_dev, (X_dev.shape[0], -1))
    
    # Normalize the data: subtract the mean image
    mean_image = np.mean(X_train, axis = 0)
    X_train -= mean_image
    X_val -= mean_image
    X_test -= mean_image
    X_dev -= mean_image
    
    # add bias dimension and transform into columns
    X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
    X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
    X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
    X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])
    
    return X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev


# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test, X_dev, y_dev = get_CIFAR10_data()
print('Train data shape: ', X_train.shape)
print('Train labels shape: ', y_train.shape)
print('Validation data shape: ', X_val.shape)
print('Validation labels shape: ', y_val.shape)
print('Test data shape: ', X_test.shape)
print('Test labels shape: ', y_test.shape)
print('dev data shape: ', X_dev.shape)
print('dev labels shape: ', y_dev.shape)
Train data shape:  (49000, 3073)
Train labels shape:  (49000,)
Validation data shape:  (1000, 3073)
Validation labels shape:  (1000,)
Test data shape:  (1000, 3073)
Test labels shape:  (1000,)
dev data shape:  (500, 3073)
dev labels shape:  (500,)

2. Compute the softmax loss and its gradient
Here we first need to complete the two tasks in softmax.py: computing the loss with explicit loops, and computing it with vectorized numpy operations. Let's first look at the form of the loss function,
$$L_i = -\log\left(\frac{e^{f_{y_i}}}{\sum_j e^{f_j}}\right)$$
(formula from the official cs231n course notes)
This is the so-called cross-entropy loss, and the softmax function refers to
$$p_k = \frac{e^{f_k}}{\sum_j e^{f_j}}$$
Plugging this into the cross-entropy formula gives the loss $L_i$ above; expanding $L_i$, it can also be written as the negative of the correct-class score plus a log-sum-exp term, $L_i = -f_{y_i} + \log\sum_j e^{f_j}$. To get the gradient, we differentiate the cross-entropy loss with respect to the weights of the correct class and of the incorrect classes separately:
$$\frac{\partial L_i}{\partial W_{:,y_i}} = \left(p_{y_i} - 1\right) x_i$$
$$\frac{\partial L_i}{\partial W_{:,j}} = p_j\, x_i \quad (j \neq y_i)$$
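For reference, both expressions follow from differentiating $L_i$ with respect to the scores and then applying the chain rule (here $f_k = x_i \cdot W_{:,k}$, so each score depends only on its own column of $W$):

$$L_i = -f_{y_i} + \log\sum_j e^{f_j},\qquad \frac{\partial L_i}{\partial f_k} = \frac{e^{f_k}}{\sum_j e^{f_j}} - \mathbb{1}[k = y_i] = p_k - \mathbb{1}[k = y_i],\qquad \frac{\partial L_i}{\partial W_{:,k}} = \big(p_k - \mathbb{1}[k = y_i]\big)\, x_i$$

which is exactly the softmax_output / softmax_output - 1 pattern accumulated column by column in the code below.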
Before using these, we need one more trick on the score function: add a constant $\log C$ inside the exponentials to keep the computation numerically stable. Because of the exponential, the raw scores can otherwise produce extremely large values that overflow and make the computation impossible, so we rewrite the softmax as
$$\frac{e^{f_{y_i}}}{\sum_j e^{f_j}} = \frac{C\, e^{f_{y_i}}}{C \sum_j e^{f_j}} = \frac{e^{f_{y_i} + \log C}}{\sum_j e^{f_j + \log C}}$$
The choice of $\log C$ is arbitrary, but a common choice is
$$\log C = -\max_j f_j$$
i.e. subtract the maximum score from every score so that the largest exponent is 0.
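A quick standalone illustration of why the shift matters (the score values below are made up just to trigger the overflow; this snippet is not part of the assignment code):

import numpy as np

f = np.array([123.0, 456.0, 789.0])   # made-up scores; np.exp(789) overflows float64

# naive softmax: the overflow turns the result into [0, 0, nan]
p_naive = np.exp(f) / np.sum(np.exp(f))

# shifted softmax: mathematically identical, but the largest exponent is now 0
f_shifted = f - np.max(f)
p_safe = np.exp(f_shifted) / np.sum(np.exp(f_shifted))

print(p_naive)   # [ 0.  0. nan]  (with RuntimeWarnings about overflow)
print(p_safe)    # approximately [5.75e-290  2.40e-145  1.00e+000]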
In the loop version, we iterate over each example i, compute the scores and the loss contribution for X[i], then loop over the classes j with an if-check to accumulate the gradient. In the vectorized version, the softmax outputs for the whole batch are computed at once, then indexed, summed, and averaged to get the loss. The code is as follows,

import numpy as np
from random import shuffle
from past.builtins import xrange

def softmax_loss_naive(W, X, y, reg):
  """
  Softmax loss function, naive implementation (with loops)

  Inputs have dimension D, there are C classes, and we operate on minibatches
  of N examples.

  Inputs:
  - W: A numpy array of shape (D, C) containing weights.
  - X: A numpy array of shape (N, D) containing a minibatch of data.
  - y: A numpy array of shape (N,) containing training labels; y[i] = c means
    that X[i] has label c, where 0 <= c < C.
  - reg: (float) regularization strength

  Returns a tuple of:
  - loss as single float
  - gradient with respect to weights W; an array of same shape as W
  """
  # Initialize the loss and gradient to zero.
  loss = 0.0
  dW = np.zeros_like(W)
  #############################################################################
  # TODO: Compute the softmax loss and its gradient using explicit loops.     #
  # Store the loss in loss and the gradient in dW. If you are not careful     #
  # here, it is easy to run into numeric instability. Don't forget the        #
  # regularization!                                                           #
  #############################################################################
  num_train = X.shape[0]
  num_class = W.shape[1]
  for i in xrange(num_train):
      scores = np.dot(X[i], W)
      scores -= np.max(scores)  # shift scores so the largest one is 0 (numeric stability)
      correct_score = scores[y[i]]
      loss_i = -correct_score + np.log(np.sum(np.exp(scores)))
      loss += loss_i
      for j in xrange(num_class):
          softmax_output = np.exp(scores[j]) / np.sum(np.exp(scores))
          if j == y[i]:
              # gradient for the correct class column: (p_j - 1) * x_i
              dW[:,j] += (-1 + softmax_output) * X[i,:]
          else:
              # gradient for the other class columns: p_j * x_i
              dW[:,j] += softmax_output * X[i,:]
  # average over the batch and add the regularization terms
  loss /= num_train
  loss += reg * np.sum(W*W)
  dW /= num_train
  dW += reg * W
  #############################################################################
  #                          END OF YOUR CODE                                 #
  #############################################################################

  return loss, dW


def softmax_loss_vectorized(W, X, y, reg):
  """
  Softmax loss function, vectorized version.

  Inputs and outputs are the same as softmax_loss_naive.
  """
  # Initialize the loss and gradient to zero.
  loss = 0.0
  dW = np.zeros_like(W)

  #############################################################################
  # TODO: Compute the softmax loss and its gradient using no explicit loops.  #
  # Store the loss in loss and the gradient in dW. If you are not careful     #
  # here, it is easy to run into numeric instability. Don't forget the        #
  # regularization!                                                           #
  #############################################################################
  num_train = X.shape[0]
  num_classes = W.shape[1]

  # scores for the whole batch, shifted row-wise for numeric stability
  scores = np.dot(X, W)
  scores -= np.max(scores, axis=1).reshape(-1,1)
  softmax_output = np.exp(scores) / np.sum(np.exp(scores), axis=1).reshape(-1,1)
  # cross-entropy loss: negative log-probability of each example's correct class
  loss = np.sum(-np.log(softmax_output[range(softmax_output.shape[0]), list(y)]))
  loss /= num_train
  loss += reg * np.sum(W*W)

  # gradient w.r.t. the scores: the softmax output with 1 subtracted at the correct class
  dS = softmax_output
  dS[range(dS.shape[0]), list(y)] += -1
  dW = np.dot(X.T, dS)
  dW /= num_train
  dW += reg * W
  #############################################################################
  #                          END OF YOUR CODE                                 #
  #############################################################################

  return loss, dW
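
As an extra sanity check on a tiny made-up problem (one example, two features, two classes, no regularization, assuming numpy and the softmax_loss_naive function above are in scope), the implementation can be verified by hand: the expected loss is $-\log\frac{e^2}{e^1+e^2} = \log(1+e^{-1}) \approx 0.3133$.

W_toy = np.array([[1.0, 0.0],
                  [0.0, 1.0]])     # shape (D=2, C=2)
X_toy = np.array([[1.0, 2.0]])     # shape (N=1, D=2), so the scores are [1, 2]
y_toy = np.array([1])              # the correct class is the one with score 2

loss_toy, dW_toy = softmax_loss_naive(W_toy, X_toy, y_toy, reg=0.0)
print(loss_toy)                    # about 0.3133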

With that, this part of the task is done and we can run the code in the assignment notebook. First, the result of the loop-based computation:

# First implement the naive softmax loss function with nested loops.
# Open the file cs231n/classifiers/softmax.py and implement the
# softmax_loss_naive function.

from cs231n.classifiers.softmax import softmax_loss_naive
import time

# Generate a random softmax weight matrix and use it to compute the loss.
W = np.random.randn(3073, 10) * 0.0001
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As a rough sanity check, our loss should be something close to -log(0.1).
print('loss: %f' % loss)
print('sanity check: %f' % (-np.log(0.1)))
loss: 2.335045
sanity check: 2.302585

To check that the result is correct, we compare it against a numerical gradient computed from the definition of the derivative; if there is no significant difference, the two are considered consistent.
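Concretely, grad_check_sparse samples a handful of random entries of W and compares the analytic gradient with a centered finite-difference estimate
$$\frac{\partial L}{\partial W_{ij}} \approx \frac{L(W + h\,e_{ij}) - L(W - h\,e_{ij})}{2h}$$
for a small step $h$, where $e_{ij}$ is the matrix with a 1 at entry $(i,j)$ and 0 elsewhere; the relative error between the two should be tiny.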

# Complete the implementation of softmax_loss_naive and implement a (naive)
# version of the gradient that uses nested loops.
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 0.0)

# As we did for the SVM, use numeric gradient checking as a debugging tool.
# The numeric gradient should be close to the analytic gradient.
from cs231n.gradient_check import grad_check_sparse
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 0.0)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)

# similar to SVM case, do another gradient check with regularization
loss, grad = softmax_loss_naive(W, X_dev, y_dev, 5e1)
f = lambda w: softmax_loss_naive(w, X_dev, y_dev, 5e1)[0]
grad_numerical = grad_check_sparse(f, W, grad, 10)
numerical: 0.398399 analytic: 0.398399, relative error: 1.318239e-08
numerical: 0.675929 analytic: 0.675929, relative error: 1.492763e-07
numerical: -2.331262 analytic: -2.331262, relative error: 6.991117e-09
numerical: 0.959885 analytic: 0.959885, relative error: 4.388168e-08
numerical: -0.109905 analytic: -0.109906, relative error: 2.639520e-07
numerical: 0.880625 analytic: 0.880625, relative error: 8.301655e-08
numerical: 3.192973 analytic: 3.192973, relative error: 4.163485e-08
numerical: -0.673371 analytic: -0.673371, relative error: 4.660361e-09
numerical: -0.293432 analytic: -0.293432, relative error: 1.006121e-07
numerical: -2.464912 analytic: -2.464912, relative error: 4.341671e-08
numerical: -1.051626 analytic: -1.049018, relative error: 1.241520e-03
numerical: -0.730290 analytic: -0.728252, relative error: 1.397527e-03
numerical: -2.870836 analytic: -2.871337, relative error: 8.737086e-05
numerical: -0.849208 analytic: -0.848228, relative error: 5.768790e-04
numerical: 0.056268 analytic: 0.060180, relative error: 3.359908e-02
numerical: 4.694562 analytic: 4.693325, relative error: 1.317799e-04
numerical: -4.972278 analytic: -4.982260, relative error: 1.002711e-03
numerical: -2.277979 analytic: -2.285063, relative error: 1.552316e-03
numerical: 2.912171 analytic: 2.911661, relative error: 8.752564e-05
numerical: 0.836952 analytic: 0.836742, relative error: 1.255306e-04

The differences are all very small, so we conclude that the gradient computation is correct. Next, let's check whether the vectorized and the loop-based computations differ:

# Now that we have a naive implementation of the softmax loss function and its gradient,
# implement a vectorized version in softmax_loss_vectorized.
# The two versions should compute the same results, but the vectorized version should be
# much faster.
tic = time.time()
loss_naive, grad_naive = softmax_loss_naive(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('naive loss: %e computed in %fs' % (loss_naive, toc - tic))

from cs231n.classifiers.softmax import softmax_loss_vectorized
tic = time.time()
loss_vectorized, grad_vectorized = softmax_loss_vectorized(W, X_dev, y_dev, 0.000005)
toc = time.time()
print('vectorized loss: %e computed in %fs' % (loss_vectorized, toc - tic))

# As we did for the SVM, we use the Frobenius norm to compare the two versions
# of the gradient.
grad_difference = np.linalg.norm(grad_naive - grad_vectorized, ord='fro')
print('Loss difference: %f' % np.abs(loss_naive - loss_vectorized))
print('Gradient difference: %f' % grad_difference)
naive loss: 2.335045e+00 computed in 0.318767s
vectorized loss: 2.335045e+00 computed in 0.021731s
Loss difference: 0.000000
Gradient difference: 0.000000

As we can see, the vectorized computation matches the loop-based one exactly, while its running time is clearly much shorter. Next, we run a validation search over different learning rates and regularization strengths to find the best hyperparameters,

# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.
from cs231n.classifiers import Softmax
results = {}
best_val = -1
best_softmax = None
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]
best_lr = None
best_reg = None

################################################################################
# TODO:                                                                        #
# Use the validation set to set the learning rate and regularization strength. #
# This should be identical to the validation that you did for the SVM; save    #
# the best trained softmax classifier in best_softmax.                         #
################################################################################
for lr in learning_rates:
    for reg in regularization_strengths:
        # train a softmax classifier for this (learning rate, regularization) pair
        softmax = Softmax()
        loss_history = softmax.train(X_train, y_train, learning_rate = lr, reg = reg, num_iters = 3000)
        # measure accuracy on the training and validation sets
        y_train_pred = softmax.predict(X_train)
        accuracy_train = np.mean(y_train_pred == y_train)
        y_val_pred = softmax.predict(X_val)
        accuracy_val = np.mean(y_val_pred == y_val)
        results[(lr, reg)] = accuracy_train, accuracy_val
        # keep the classifier with the best validation accuracy
        if accuracy_val > best_val:
            best_lr = lr
            best_reg = reg
            best_val = accuracy_val
            best_softmax = softmax
################################################################################
#                              END OF YOUR CODE                                #
################################################################################

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
                lr, reg, train_accuracy, val_accuracy))
    
print('best validation accuracy achieved during cross-validation: %f' % best_val)
lr 1.000000e-07 reg 2.500000e+04 train accuracy: 0.352184 val accuracy: 0.364000
lr 1.000000e-07 reg 5.000000e+04 train accuracy: 0.326347 val accuracy: 0.338000
lr 5.000000e-07 reg 2.500000e+04 train accuracy: 0.345367 val accuracy: 0.352000
lr 5.000000e-07 reg 5.000000e+04 train accuracy: 0.312367 val accuracy: 0.326000
best validation accuracy achieved during cross-validation: 0.364000
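
To push past 0.364, a natural next step (not shown in the output above) would be to re-run the same loop over a finer, hypothetical grid centred on the best values found, for example:

# hypothetical finer search around lr = 1e-7, reg = 2.5e4; reuses the loop above
learning_rates = [5e-8, 1e-7, 2e-7, 3e-7]
regularization_strengths = [1e4, 2e4, 2.5e4, 3e4, 4e4]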

The best hyperparameters reach a validation accuracy of 0.364. Now let's apply the best classifier to the test set and see how it performs,

# evaluate on test set
# Evaluate the best softmax on test set
y_test_pred = best_softmax.predict(X_test)
test_accuracy = np.mean(y_test == y_test_pred)
print('softmax on raw pixels final test set accuracy: %f' % (test_accuracy, ))
softmax on raw pixels final test set accuracy: 0.362000

Finally, let's visualize the learned W, i.e. the template for each class.

# Visualize the learned weights for each class
w = best_softmax.W[:-1,:] # strip out the bias
w = w.reshape(32, 32, 3, 10)

w_min, w_max = np.min(w), np.max(w)

classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for i in range(10):
    plt.subplot(2, 5, i + 1)
    
    # Rescale the weights to be between 0 and 255
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])

[Figure: visualization of the learned weight templates for the 10 classes]
Overall, the class templates here arguably look even clearer than yesterday's SVM ones, so this approach works well; I will discuss the relationship between softmax and SVM in a later post.


Conclusion
That completes the softmax part of cs231n assignment1. The overall results are decent; softmax and SVM differ only in the form of the loss function and gradient, and there is little difference beyond that.
Thanks for reading.
