cs231n assignment1 features

features

In the previous notebooks we fed the raw image pixels directly into the model. In this one we instead extract Histogram of Oriented Gradients (HOG) features together with a color histogram in HSV color space. Roughly speaking, HOG ignores color and captures only the texture of the image, while the color histogram ignores texture and only captures color. Using the two kinds of features together works better than either one alone.
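For context, the feature extraction itself looks roughly like the sketch below. It assumes the assignment's helpers in cs231n.features (hog_feature, color_histogram_hsv, extract_features) and the raw CIFAR-10 splits X_train / X_val / X_test loaded earlier in the notebook; num_color_bins is just one reasonable choice, not a prescribed value.

import numpy as np
from cs231n.features import hog_feature, color_histogram_hsv, extract_features

# One HOG feature plus one HSV color histogram per image
num_color_bins = 10
feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)]

X_train_feats = extract_features(X_train, feature_fns, verbose=True)
X_val_feats = extract_features(X_val, feature_fns)
X_test_feats = extract_features(X_test, feature_fns)

# Zero-center and normalize each feature dimension using training statistics
mean_feat = np.mean(X_train_feats, axis=0, keepdims=True)
std_feat = np.std(X_train_feats, axis=0, keepdims=True)
for feats in (X_train_feats, X_val_feats, X_test_feats):
    feats -= mean_feat
    feats /= std_feat

# Append a bias column so the linear classifier does not need a separate bias
X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))])
X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))])
X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])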

Train SVM on features
learning_rates = [0.8e-7, 1e-7, 1.2e-7, 1.4e-7]
regularization_strengths = [5e4, 5.1e4]

# results, best_val and best_svm follow the notebook's template;
# np, plt and LinearSVM are imported earlier in the notebook.
results = {}
best_val = -1
best_svm = None

for lr in learning_rates:
    for rs in regularization_strengths:
        # Train a linear SVM on the extracted features with this hyperparameter pair
        svm = LinearSVM()
        loss_history = svm.train(X_train_feats, y_train, learning_rate=lr, reg=rs, num_iters=1000)

        # Accuracy on the training and validation splits
        train_acc = np.mean(svm.predict(X_train_feats) == y_train)
        val_acc = np.mean(svm.predict(X_val_feats) == y_val)
        results[(lr, rs)] = (train_acc, val_acc)

        # Keep the model with the best validation accuracy
        if val_acc > best_val:
            best_val = val_acc
            best_svm = svm

        # Plot the training loss for this setting
        plt.plot(loss_history, label='loss')
        plt.legend()
        plt.show()

The accuracy is not very high; I only reached 42.3% on the validation set.

Accuracy on the test set: 41.4%.
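That test number comes from running the best model on the test features, roughly as in this minimal sketch (y_test_pred is just a local name I use here):

# Evaluate the SVM with the best validation accuracy on the test features
y_test_pred = best_svm.predict(X_test_feats)
test_acc = np.mean(y_test_pred == y_test)
print('Test set accuracy: %f' % test_acc)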

Classification examples:
(figure: grid of test images grouped by the class the SVM predicted for them)
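A grid like the one above can be produced with code along these lines; the class names match CIFAR-10, and examples_per_class is an arbitrary choice for the sketch. It shows, for each predicted class, a few test images that actually belong to a different class.

# For each class, show some test images that were predicted as that class but are mislabeled
examples_per_class = 8
classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
for cls, cls_name in enumerate(classes):
    idxs = np.where((y_test != cls) & (y_test_pred == cls))[0]
    idxs = np.random.choice(idxs, examples_per_class, replace=False)
    for i, idx in enumerate(idxs):
        plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1)
        plt.imshow(X_test[idx].astype('uint8'))
        plt.axis('off')
        if i == 0:
            plt.title(cls_name)
plt.show()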


Neural Network on image features
I first picked some hyperparameters myself; the plots showed gradient descent working nicely, but the accuracy was pitifully low. After consulting someone else's blog post, the validation accuracy improved to 57.7%, and the test set accuracy is 57.4%.

Code:

regs = [1e-3]
learning_rates = [5e-1]
best_acc = 0.4
best_net = None
for r in regs:
    for lr in learning_rates:
        # Train the two-layer network on the feature vectors
        # (net is the TwoLayerNet created earlier in the notebook)
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                          num_iters=1000, batch_size=200,
                          learning_rate=lr, learning_rate_decay=0.95,
                          reg=r, verbose=False)

        # Predict on the validation set
        val_acc = (net.predict(X_val_feats) == y_val).mean()
        print('reg: %f, learning_rate: %f' % (r, lr))
        print('Validation accuracy: ', val_acc)

        # Keep the network with the best validation accuracy
        if val_acc > best_acc:
            best_acc = val_acc
            best_net = net

        # Plot the loss curve and the train/val accuracy curves
        plt.subplot(2, 1, 1)
        plt.plot(stats['loss_history'])
        plt.title('Loss history')
        plt.xlabel('Iteration')
        plt.ylabel('Loss')

        plt.subplot(2, 1, 2)
        plt.plot(stats['train_acc_history'], label='train')
        plt.plot(stats['val_acc_history'], label='val')
        plt.title('Classification accuracy history')
        plt.xlabel('Epoch')
        plt.ylabel('Classification accuracy')
        plt.legend()
        plt.show()
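The 57.4% test accuracy quoted above is then measured with the best network on the test features, along these lines (a minimal sketch):

# Evaluate the best two-layer network on the test set features
test_acc = (best_net.predict(X_test_feats) == y_test).mean()
print('Test accuracy: ', test_acc)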

Inline Question

Inline question 1:

Describe the misclassification results that you see. Do they make sense?

Your Answer:
Although some images are misclassified, many of the mistakes do make sense: trucks and cars have similar shapes, and cats and dogs look alike, so confusing them is natural. But plenty of very different-looking images are also misclassified; in the figure you can even see a dog and a deer 🦌 showing up under "truck"... emmmmm
