(Repost) CS231n Assignment2 Support Vector Machine

Begin


This article walks through the SVM assignment from the CS231N course series: building and training a multiclass SVM classifier (a supervised learning model).

Course homepage: the CS231N course series on NetEase Cloud Classroom (網易雲課堂).

Language: Python 3.6

 

1 Linear Classifier


 

Take images as an example: a 32*32*3 image (32 pixels wide, 32 pixels tall, 3 color channels) is flattened into a 1*3072 vector, which serves as that image's feature vector.

If we train on 1000 images, the input is then a 1000*3072 matrix X.

We multiply X by the weight matrix W to get a score matrix; multiplying W by the transpose of a single image's feature vector yields a column of scores.

Each score corresponds to one class: the higher the score, the more likely the image belongs to that class, so W is really a set of per-class weights.

Note: the figure below is taken from CS231N. It uses a single image as the example and transposes X to 3072*1; that is just for understanding, and in our code we compute X*W instead.

For more details, see the CS231N Assignment 1 KNN walkthrough.
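As a quick illustration of the shapes involved (a minimal sketch with random data, not part of the assignment code):

import numpy as np

N, D, C = 1000, 32 * 32 * 3, 10      # images, flattened pixels per image, classes
X = np.random.randn(N, D)            # each row is one flattened image
W = 0.001 * np.random.randn(D, C)    # one weight column per class

scores = X.dot(W)                    # (N, C): one score per image and class
print(scores.shape)                  # (1000, 10)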

 

 

 

2 Loss Function


 

After we have each image's score for every class, we need to compute a loss to evaluate how good the W matrix is.

The SVM (hinge) loss is computed as follows.

For each image, take every incorrect class's score minus the correct class's score, compare with 0, and keep the maximum.

Normally, if the correct class has the highest score there should be no loss: each incorrect score minus the correct score is negative, i.e. smaller than 0, so the loss is taken as 0.

To make the classifier more robust, a margin of 1 is added, so the correct class must beat every other class by at least 1.

After computing the loss for every image, we accumulate the losses into the final loss.

Rearranging gives the formula below, but there is still a problem: it does not take W itself into account, and different W matrices can produce the same loss.

We therefore add a regularization term; the regularization coefficient controls how much W contributes to the total loss, and a more spread-out W with small weights is preferred.
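Written out explicitly (this is the standard multiclass SVM hinge loss formulation; s_j is the score of class j for image i, y_i its correct label, N the number of images, and λ the regularization strength reg):

L_i = \sum_{j \neq y_i} \max\big(0,\; s_j - s_{y_i} + 1\big)

L = \frac{1}{N} \sum_{i=1}^{N} L_i \;+\; \lambda \sum_{k}\sum_{l} W_{k,l}^2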

 

The code is as follows:


import numpy as np

def svm_loss_native(W, X, Y, reg):
    '''
    Compute the SVM classifier loss (naive loop version).

    Inputs:
        W (D, C): weights; D is the feature dimension, C the number of classes
        X (N, D): training samples; axis 0 indexes samples, axis 1 is a sample's feature vector
            (for 32*32 colour images, N is the number of samples and D = 32*32*3 pixel values)
        Y (N,):   training labels; Y[i] is the label of sample i
    Returns:
        loss: the SVM loss
    '''
    # Basic sizes
    num_train = X.shape[0]      # number of training samples
    num_classes = W.shape[1]    # number of classes
    loss = 0.0                  # initialize the loss
    dW = np.zeros(W.shape)      # gradient placeholder (filled in the next version)
    for i in range(num_train):  # accumulate the loss of each training sample
        score = X[i].dot(W)     # scores of sample i for every class

        # compute the loss: max(0, s_j - s_{y_i} + 1) over the incorrect classes
        for j in range(num_classes):
            if j == Y[i]:
                continue
            margin = score[j] - score[Y[i]] + 1
            if margin > 0:
                loss += margin
    loss /= num_train
    # add regularization
    loss += reg * np.sum(W * W)
    return loss
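A quick sanity check (my own addition, not from the original post): with a very small random W, every score is close to zero, so each of the C − 1 incorrect classes contributes a margin of roughly 1 and the loss should come out near C − 1 (about 9 for the 10 CIFAR-10 classes):

import numpy as np

D, C, N = 32 * 32 * 3, 10, 500
W = 1e-5 * np.random.randn(D, C)   # nearly-zero weights
X = np.random.randn(N, D)
Y = np.random.randint(C, size=N)

# Expect a value close to C - 1 = 9 when reg = 0.
print(svm_loss_native(W, X, Y, reg=0.0))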

  

 

 

With that, a complete loss function is in place: by looking at the loss we can tell how good the W matrix is. But if the loss is too large, how do we adjust each parameter?

This is where gradient descent and the concept of the gradient come in.

3 Gradient


 

Gradient descent:

First, we have a differentiable function, which we can picture as a mountain. Our goal is to find the minimum of this function, i.e. the foot of the mountain. Following this analogy, the fastest way down is to find the steepest direction at the current position and walk that way. For a function, that means computing the gradient at the given point and then moving in the direction opposite to the gradient, which decreases the function value the fastest, because the gradient is the direction in which the function increases the fastest (explained in more detail later).
So we apply this idea repeatedly: keep computing the gradient and taking a step, and we eventually reach a local minimum, much like walking down the mountain. Computing the gradient is how we determine the steepest direction, i.e. the "measuring instrument" in the analogy.

The gradient behaves like a derivative: as illustrated below, the derivative of the loss reflects the gradient.

If moving W one step forward increases the loss, the gradient dW is positive, and W should move in the opposite direction.

 

 

For the loss function in this example, it can be rewritten as follows:

 

 

For L_ij, take its partial derivative with respect to W_j:
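The resulting gradient, in the standard CS231n form (w_j is the j-th column of W, x_i the feature vector of image i, and 1(·) the indicator function), is:

\frac{\partial L_i}{\partial w_{j}} = \mathbb{1}\big(s_j - s_{y_i} + 1 > 0\big)\, x_i \qquad (j \neq y_i)

\frac{\partial L_i}{\partial w_{y_i}} = -\Big(\sum_{j \neq y_i} \mathbb{1}\big(s_j - s_{y_i} + 1 > 0\big)\Big)\, x_i

This is exactly what the loop version below accumulates into dW.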

 

 

CODE2: Loss & gradient, loop version

 


def svm_loss_native(W, X, Y, reg):
    '''
    Compute the SVM classifier loss and its gradient (naive loop version).

    Inputs:
        W (D, C): weights; D is the feature dimension, C the number of classes
        X (N, D): training samples; axis 0 indexes samples, axis 1 is a sample's feature vector
            (for 32*32 colour images, N is the number of samples and D = 32*32*3 pixel values)
        Y (N,):   training labels; Y[i] is the label of sample i
    Returns:
        loss: the SVM loss
        dW:   gradient of the loss with respect to W, same shape as W
    '''
    # Basic sizes
    num_train = X.shape[0]      # number of training samples
    num_classes = W.shape[1]    # number of classes
    loss = 0.0                  # initialize the loss
    dW = np.zeros(W.shape)      # gradient accumulator
    for i in range(num_train):  # accumulate the loss and gradient of each training sample
        score = X[i].dot(W)     # scores of sample i for every class

        # compute the loss and gradient
        for j in range(num_classes):
            if j == Y[i]:
                continue
            margin = score[j] - score[Y[i]] + 1
            if margin > 0:
                loss += margin
                dW[:, Y[i]] += -X[i, :].T   # correct class column decreases
                dW[:, j] += X[i, :].T       # incorrect class column increases
    loss /= num_train
    dW /= num_train
    # add regularization (the gradient of reg * sum(W*W) is 2 * reg * W)
    loss += reg * np.sum(W * W)
    dW += 2 * reg * W
    return loss, dW
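Before trusting the analytic gradient, it is worth checking it numerically. The helper below is my own sketch (not from the original post), in the spirit of the CS231n gradient check: it perturbs a few random entries of W and compares the centered finite difference against dW.

import numpy as np

def grad_check_sparse(f, W, analytic_grad, num_checks=5, h=1e-5):
    # f(W) must return the scalar loss; analytic_grad is the dW computed by svm_loss_native.
    for _ in range(num_checks):
        ix = tuple(np.random.randint(dim) for dim in W.shape)
        old = W[ix]
        W[ix] = old + h
        fxph = f(W)              # loss at W + h
        W[ix] = old - h
        fxmh = f(W)              # loss at W - h
        W[ix] = old              # restore the original value
        grad_numerical = (fxph - fxmh) / (2 * h)
        grad_analytic = analytic_grad[ix]
        rel_error = abs(grad_numerical - grad_analytic) / (abs(grad_numerical) + abs(grad_analytic) + 1e-12)
        print('numerical: %f analytic: %f, relative error: %e' % (grad_numerical, grad_analytic, rel_error))

# Example usage, assuming W, X_dev, Y_dev already exist:
# loss, dW = svm_loss_native(W, X_dev, Y_dev, 0.0)
# grad_check_sparse(lambda w: svm_loss_native(w, X_dev, Y_dev, 0.0)[0], W, dW)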

 

  

 

CODE3: Loss & gradient, vectorized version

 


def svm_loss_vectorized(W, X, Y, reg):
    '''Compute the SVM loss and gradient with vectorized numpy operations.'''
    loss = 0.0
    num_train = X.shape[0]
    dW = np.zeros(W.shape)
    scores = np.dot(X, W)                                        # (N, C) score matrix
    correct_class_score = scores[np.arange(num_train), Y]        # score of the correct class
    correct_class_score = np.reshape(correct_class_score, (num_train, -1))
    margin = scores - correct_class_score + 1.0                  # hinge margins
    margin[np.arange(num_train), Y] = 0.0                        # ignore the correct class
    margin[margin < 0] = 0.0                                     # max(0, .)
    loss += np.sum(margin) / num_train
    loss += 0.5 * reg * np.sum(W * W)

    # Gradient: each positive margin contributes +X[i] to column j
    # and -X[i] to the correct class column.
    margin[margin > 0] = 1.0
    row_sum = np.sum(margin, axis=1)
    margin[np.arange(num_train), Y] = -row_sum
    dW = 1.0 / num_train * np.dot(X.T, margin) + reg * W
    return loss, dW
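As a quick consistency check (my own sketch, not from the original post), the loop version from CODE2 and the vectorized version should agree on random data when regularization is switched off (the two listings use slightly different regularization conventions, one with a 0.5 factor, so compare with reg = 0):

import numpy as np

# Small random problem just for comparing the two implementations.
D, C, N = 3072, 10, 500
W = 0.0001 * np.random.randn(D, C)
X = np.random.randn(N, D)
Y = np.random.randint(C, size=N)

loss_naive, grad_naive = svm_loss_native(W, X, Y, reg=0.0)
loss_vec, grad_vec = svm_loss_vectorized(W, X, Y, reg=0.0)
print('loss difference: %e' % abs(loss_naive - loss_vec))                 # should be ~0
print('gradient difference: %e' % np.linalg.norm(grad_naive - grad_vec))  # should be ~0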

 

 

4 Training Function


 

 

Once we can compute the loss and gradient, we can use the gradient to adjust the W matrix. This is where the parameters of the train function come in.

Typically the following parameters are needed:

Number of iterations: how many training steps to run.

Learning rate: the coefficient by which W is corrected along the gradient at each step.

Batch size: each step may not use all samples; a subset of the training set is sampled instead.

The key point is to repeatedly compute the loss and gradient inside a loop, then update W with the formula below.

 

self.W = self.W - learning_rate * grade

 

 

CODE4: Gradient descent training


def train(self, X, Y, learning_rate=1e-3, reg=1e-5, num_iters=100, batch_size=200, verbose=False):
    '''
    Train the classifier with stochastic gradient descent.

    Inputs:
    - learning_rate: step size used for each update
    - reg: regularization strength
    - num_iters: number of optimization steps
    - batch_size: number of samples used in each step
    - verbose: if True, print progress during training
    Returns:
    - list of loss values, one per iteration
    '''
    num_train, dim = X.shape
    num_classes = np.max(Y) + 1

    # if self.W is None:
    # initialize the W matrix
    self.W = 0.001 * np.random.randn(dim, num_classes)
    loss_history = []
    # run num_iters optimization steps
    for it in range(num_iters):
        X_batch = None
        Y_batch = None
        ########################
        # Sample a batch of training data:
        # draw a random set of indices
        batch_inx = np.random.choice(num_train, batch_size)
        X_batch = X[batch_inx, :]
        Y_batch = Y[batch_inx]
        #########################
        # Compute the loss and gradient
        loss, grade = self.loss(self.W, X_batch, Y_batch, reg)
        loss_history.append(loss)

        ########################
        # Parameter update:
        # a positive gradient means the loss increases with W, so move against it
        self.W = self.W - learning_rate * grade
        # print progress
        if verbose and it % 100 == 0:
            print('iteration %d / %d : loss %f' % (it, num_iters, loss))
    return loss_history

 

Running the training loop prints the loss every 100 iterations.
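To see whether training actually converges, it helps to plot the returned loss history. A minimal sketch, assuming the SVM class (as in the complete code at the end of this post) and the CIFAR-10 arrays X_train / Y_train are already loaded, with illustrative hyperparameter values:

import matplotlib.pyplot as plt

# Train and record the loss at every iteration.
classifier = SVM()
loss_history = classifier.train(X_train, Y_train, learning_rate=1e-7, reg=2.5e4,
                                num_iters=1500, verbose=True)

# A steadily decreasing, then flattening curve indicates the learning rate is reasonable.
plt.plot(loss_history)
plt.xlabel('Iteration number')
plt.ylabel('Loss value')
plt.show()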

 

  

 

5 Prediction (predict)


 

 

After training we obtain a reasonably good W matrix; we then use it to make predictions on the test set and see how well the model does.


def predict(self, X_train):
    # one predicted label per sample
    y_predict = np.zeros(X_train.shape[0])
    # compute scores with the trained W matrix
    scores = X_train.dot(self.W)
    # the class with the highest score in each row is the prediction
    y_predict = np.argmax(scores, axis=1)
    return y_predict

Run the following code in the main function to check the predictions:


score1 = SVM1.predict(X_dev)
print('The predict result %f' % (np.mean(score1 == Y_dev)))
score1 = SVM1.predict(X_test)
print('The Test Data predict result %f' % (np.mean(score1 == Y_test)))

  

  

 

The results: predicting on the training set itself gives an accuracy of 0.756, but on the test set only 0.218, which is not great.

 

 

 

6 Hyperparameter Tuning


 

The above completes a full SVM model. So how do we automatically find a good learning rate and regularization strength?

We need to test how well each parameter setting performs; the following program does exactly that.

 


# Hyperparameter tuning
# Two parameters: learning rate and regularization strength
learning_rate = [2e-7, 0.75e-7, 1.5e-7, 1.25e-7, 0.75e-7]
regularization_strengths = [3e4, 3.25e4, 3.5e4, 3.75e4, 4e4]

results = {}
best_val = 0
best_svm = None
######################################
# Grid search:
# try every combination of learning rate and regularization strength
#
for rate in learning_rate:
    for regular in regularization_strengths:
        SVM2 = SVM()
        # train
        SVM2.train(X_train, Y_train, learning_rate=rate, reg=regular, num_iters=1000)
        # predict
        Y1 = SVM2.predict(X_train)
        Y2 = SVM2.predict(X_val)
        accuracy_train = np.mean(Y1 == Y_train)
        accuracy_val = np.mean(Y2 == Y_val)
        # keep the best model seen so far (by validation accuracy)
        if best_val < accuracy_val:
            best_val = accuracy_val
            best_svm = SVM2  # save the current model
        # store the results
        results[rate, regular] = (accuracy_train, accuracy_val)
# print the results
for lr, reg in sorted(results):
    accuracy_train, accuracy_val = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (lr, reg, accuracy_train, accuracy_val))

  

Running this prints the training and validation accuracy for each (learning rate, regularization strength) pair.
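After the sweep, the best model (by validation accuracy) is kept in best_svm. A natural follow-up, sketched here under the assumption that X_test and Y_test are loaded the same way as the other splits, is to report the best validation accuracy and evaluate that model on the test set:

# Best validation accuracy seen during the grid search
print('best validation accuracy achieved during tuning: %f' % best_val)

# Evaluate the corresponding model on the held-out test set
Y_test_pred = best_svm.predict(X_test)
test_accuracy = np.mean(Y_test_pred == Y_test)
print('linear SVM final test set accuracy: %f' % test_accuracy)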

 

7 Visualizing the Weights


 

Once we have the best W, it is often worth visualizing it: the image of each weight column shows where the weights are high or low, and looks like a template for its class.


import matplotlib.pyplot as plt

# Visualize the learned weights
w = best_svm.W[:, :]
w = w.reshape(32, 32, 3, 10)
w_min, w_max = np.min(w), np.max(w)
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']  # class names
for i in range(10):
    plt.subplot(2, 5, i + 1)
    # rescale the weights into the 0..255 range so they can be shown as an image
    wimg = 255.0 * (w[:, :, :, i].squeeze() - w_min) / (w_max - w_min)
    plt.imshow(wimg.astype('uint8'))
    plt.axis('off')
    plt.title(classes[i])
plt.show()

The resulting figure shows one such template-like image per class.

 

Complete code (the first listing is data_utils, the utility module used to load the dataset):

from __future__ import print_function

from six.moves import cPickle as pickle
import numpy as np
import os
from matplotlib.pyplot import imread
import platform


def load_pickle(f):
    version = platform.python_version_tuple()
    if version[0] == '2':
        return pickle.load(f)
    elif version[0] == '3':
        return pickle.load(f, encoding='latin1')
    raise ValueError("invalid python version: {}".format(version))


def load_CIFAR_batch(filename):
    """ load single batch of cifar """
    with open(filename, 'rb') as f:
        datadict = load_pickle(f)
        X = datadict['data']
        Y = datadict['labels']
        X = X.reshape(10000, 3, 32, 32).transpose(0, 2, 3, 1).astype("float")
        Y = np.array(Y)
        return X, Y


def load_CIFAR10(ROOT):
    """ load all of cifar """
    xs = []
    ys = []
    for b in range(1, 6):
        f = os.path.join(ROOT, 'data_batch_%d' % (b,))
        X, Y = load_CIFAR_batch(f)
        xs.append(X)
        ys.append(Y)
    Xtr = np.concatenate(xs)
    Ytr = np.concatenate(ys)
    del X, Y
    Xte, Yte = load_CIFAR_batch(os.path.join(ROOT, 'test_batch'))
    return Xtr, Ytr, Xte, Yte


def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000,
                     subtract_mean=True):
    """
    Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
    it for classifiers. These are the same steps as we used for the SVM, but
    condensed to a single function.
    """
    # Load the raw CIFAR-10 data
    cifar10_dir = 'datasets/cifar-10-batches-py'
    X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)

    # Subsample the data
    mask = list(range(num_training, num_training + num_validation))
    X_val = X_train[mask]
    y_val = y_train[mask]
    mask = list(range(num_training))
    X_train = X_train[mask]
    y_train = y_train[mask]
    mask = list(range(num_test))
    X_test = X_test[mask]
    y_test = y_test[mask]

    # Normalize the data: subtract the mean image
    if subtract_mean:
        mean_image = np.mean(X_train, axis=0)
        X_train -= mean_image
        X_val -= mean_image
        X_test -= mean_image

    # Transpose so that channels come first
    X_train = X_train.transpose(0, 3, 1, 2).copy()
    X_val = X_val.transpose(0, 3, 1, 2).copy()
    X_test = X_test.transpose(0, 3, 1, 2).copy()

    # Package data into a dictionary
    return {
        'X_train': X_train, 'y_train': y_train,
        'X_val': X_val, 'y_val': y_val,
        'X_test': X_test, 'y_test': y_test,
    }


def load_tiny_imagenet(path, dtype=np.float32, subtract_mean=True):
    """
    Load TinyImageNet. Each of TinyImageNet-100-A, TinyImageNet-100-B, and
    TinyImageNet-200 have the same directory structure, so this can be used
    to load any of them.

    Inputs:
    - path: String giving path to the directory to load.
    - dtype: numpy datatype used to load the data.
    - subtract_mean: Whether to subtract the mean training image.

    Returns: A dictionary with the following entries:
    - class_names: A list where class_names[i] is a list of strings giving the
      WordNet names for class i in the loaded dataset.
    - X_train: (N_tr, 3, 64, 64) array of training images
    - y_train: (N_tr,) array of training labels
    - X_val: (N_val, 3, 64, 64) array of validation images
    - y_val: (N_val,) array of validation labels
    - X_test: (N_test, 3, 64, 64) array of testing images.
    - y_test: (N_test,) array of test labels; if test labels are not available
      (such as in student code) then y_test will be None.
    - mean_image: (3, 64, 64) array giving mean training image
    """
    # First load wnids
    with open(os.path.join(path, 'wnids.txt'), 'r') as f:
        wnids = [x.strip() for x in f]

    # Map wnids to integer labels
    wnid_to_label = {wnid: i for i, wnid in enumerate(wnids)}

    # Use words.txt to get names for each class
    with open(os.path.join(path, 'words.txt'), 'r') as f:
        wnid_to_words = dict(line.split('\t') for line in f)
        for wnid, words in wnid_to_words.items():
            wnid_to_words[wnid] = [w.strip() for w in words.split(',')]
    class_names = [wnid_to_words[wnid] for wnid in wnids]

    # Next load training data.
    X_train = []
    y_train = []
    for i, wnid in enumerate(wnids):
        if (i + 1) % 20 == 0:
            print('loading training data for synset %d / %d' % (i + 1, len(wnids)))
        # To figure out the filenames we need to open the boxes file
        boxes_file = os.path.join(path, 'train', wnid, '%s_boxes.txt' % wnid)
        with open(boxes_file, 'r') as f:
            filenames = [x.split('\t')[0] for x in f]
        num_images = len(filenames)

        X_train_block = np.zeros((num_images, 3, 64, 64), dtype=dtype)
        y_train_block = wnid_to_label[wnid] * np.ones(num_images, dtype=np.int64)
        for j, img_file in enumerate(filenames):
            img_file = os.path.join(path, 'train', wnid, 'images', img_file)
            img = imread(img_file)
            if img.ndim == 2:
                ## grayscale file
                img.shape = (64, 64, 1)
            X_train_block[j] = img.transpose(2, 0, 1)
        X_train.append(X_train_block)
        y_train.append(y_train_block)

    # We need to concatenate all training data
    X_train = np.concatenate(X_train, axis=0)
    y_train = np.concatenate(y_train, axis=0)

    # Next load validation data
    with open(os.path.join(path, 'val', 'val_annotations.txt'), 'r') as f:
        img_files = []
        val_wnids = []
        for line in f:
            img_file, wnid = line.split('\t')[:2]
            img_files.append(img_file)
            val_wnids.append(wnid)
        num_val = len(img_files)
        y_val = np.array([wnid_to_label[wnid] for wnid in val_wnids])
        X_val = np.zeros((num_val, 3, 64, 64), dtype=dtype)
        for i, img_file in enumerate(img_files):
            img_file = os.path.join(path, 'val', 'images', img_file)
            img = imread(img_file)
            if img.ndim == 2:
                img.shape = (64, 64, 1)
            X_val[i] = img.transpose(2, 0, 1)

    # Next load test images
    # Students won't have test labels, so we need to iterate over files in the
    # images directory.
    img_files = os.listdir(os.path.join(path, 'test', 'images'))
    X_test = np.zeros((len(img_files), 3, 64, 64), dtype=dtype)
    for i, img_file in enumerate(img_files):
        img_file = os.path.join(path, 'test', 'images', img_file)
        img = imread(img_file)
        if img.ndim == 2:
            img.shape = (64, 64, 1)
        X_test[i] = img.transpose(2, 0, 1)

    y_test = None
    y_test_file = os.path.join(path, 'test', 'test_annotations.txt')
    if os.path.isfile(y_test_file):
        with open(y_test_file, 'r') as f:
            img_file_to_wnid = {}
            for line in f:
                line = line.split('\t')
                img_file_to_wnid[line[0]] = line[1]
        y_test = [wnid_to_label[img_file_to_wnid[img_file]] for img_file in img_files]
        y_test = np.array(y_test)

    mean_image = X_train.mean(axis=0)
    if subtract_mean:
        X_train -= mean_image[None]
        X_val -= mean_image[None]
        X_test -= mean_image[None]

    return {
        'class_names': class_names,
        'X_train': X_train,
        'y_train': y_train,
        'X_val': X_val,
        'y_val': y_val,
        'X_test': X_test,
        'y_test': y_test,
        'mean_image': mean_image,
    }


def load_models(models_dir):
    """
    Load saved models from disk. This will attempt to unpickle all files in a
    directory; any files that give errors on unpickling (such as README.txt) will
    be skipped.

    Inputs:
    - models_dir: String giving the path to a directory containing model files.
      Each model file is a pickled dictionary with a 'model' field.

    Returns:
    A dictionary mapping model file names to models.
    """
    models = {}
    for model_file in os.listdir(models_dir):
        with open(os.path.join(models_dir, model_file), 'rb') as f:
            try:
                models[model_file] = load_pickle(f)['model']
            except pickle.UnpicklingError:
                continue
    return models
The second listing is the main training script:

from dl.data_utils import load_CIFAR10
import numpy as np

classes = ['plane','car','bird','cat','deer','dog','frog','horse','ship','truck']
x_train, y_train, x_test, y_test = load_CIFAR10('dataset/cifar-10-batches-py')
x_train = np.reshape(x_train, (x_train.shape[0], -1))
x_test = np.reshape(x_test, (x_test.shape[0], -1))

def svm_loss_vectorized(W, X, Y, reg):
    """
    Compute the loss and gradient (regularization omitted for now).
    W: 10*3072
    X: num_train*3072
    """
    num_train = X.shape[0]
    scores = np.dot(X, W.T)
    correct_scores = scores[np.arange(num_train), Y]
    correct_scores = np.reshape(correct_scores, (num_train, -1))
    loss = scores - correct_scores + 1.0  # num_train*10 minus num_train*1
    loss[loss < 0] = 0.0  # max(0, sj - syi + 1)
    loss[np.arange(num_train), Y] = 0.0  # zero out the correct-class entries
    margin = loss
    loss = np.sum(loss, axis=1)  # Li
    loss = np.mean(loss)
    #print('loss = ', loss)

    # compute the gradient
    dW = np.zeros(W.shape)
    margin[margin > 0] = 1.0
    row_sum = np.sum(margin, axis=1)
    margin[np.arange(num_train), Y] = -row_sum
    dW = 1.0 / num_train * np.dot(margin.T, X)
    return loss, dW

class SVM(object):
    def train(self,X,Y,learning_rate=1e-7*0.9,reg=1e-5,num_iters=6000,batch_size=256,verbose=True):
        num_train, dim = X.shape
        num_classes = np.max(Y) + 1

        self.W = 0.001 * np.random.randn(num_classes, dim)
        loss_history = []
        for it in range(num_iters):
            x_batch = []
            y_batch = []
            batch_inx = np.random.choice(num_train,batch_size)
            x_batch = X[batch_inx,:]
            y_batch = Y[batch_inx]

            loss, grad = svm_loss_vectorized(self.W, x_batch, y_batch, reg)
            loss_history.append(loss)

            self.W = self.W - learning_rate*grad
            if verbose and it%100==0:
                print('iteration %d / %d : loss %f' % (it, num_iters, loss))

        return loss_history

    def predict(self, x_train):
        y_predict = np.zeros(x_train.shape[1])
        scores = x_train.dot(self.W.T)
        y_pred = np.argmax(scores, axis=1)
        return y_pred

svm = SVM()
svm.train(x_train, y_train)
score1 = svm.predict(x_train)
print('The train data predict result %f' % (np.mean(score1 == y_train)))
score1 = svm.predict(x_test)
print('The Test Data predict result %f' % (np.mean(score1 == y_test)))


 
