Caffe Study Notes 9 -- Brewing Logistic Regression then Going Deeper

This is the third of the official Caffe Notebook Examples, applying Caffe to logistic regression for classification. Reference: http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/02-brewing-logreg.ipynb

Although Caffe is designed for deep networks, it can just as well represent shallow models. Here we do simple logistic regression on synthetic data, saving it to HDF5 so it can be fed to Caffe as input vectors. Once that model is working, we add layers to improve accuracy, which is the Caffe workflow: define a model, experiment, and then deploy.


1. Import packages

import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline

import os
os.chdir('..')  # assumes this notebook lives in caffe_root/examples; move up to caffe root

import sys
sys.path.insert(0, './python')  # make pycaffe importable
import caffe


import h5py
import shutil
import tempfile

import sklearn
import sklearn.datasets
import sklearn.linear_model
import sklearn.cross_validation  # train_test_split lives here (sklearn.model_selection in newer versions)
import sklearn.metrics

import pandas as pd

2. Synthesize 10,000 4-dimensional vectors for binary classification, with 2 informative features and 2 noise features

X, y = sklearn.datasets.make_classification(
    n_samples=10000, n_features=4, n_redundant=0, n_informative=2, 
    n_clusters_per_class=2, hypercube=False, random_state=0
)

# Split into train and test
X, Xt, y, yt = sklearn.cross_validation.train_test_split(X, y)

# Visualize sample of the data
ind = np.random.permutation(X.shape[0])[:1000]
df = pd.DataFrame(X[ind])
_ = pd.scatter_matrix(df, figsize=(9, 9), diagonal='kde', marker='o', s=40, alpha=.4, c=y[ind])

The resulting scatter-matrix plot is shown below (figure omitted in this write-up).
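
Before training, a quick sanity check on the train/test split can help. This is a minimal sketch that is not part of the original example; the sizes assume train_test_split's default test fraction of 0.25.

# Confirm the shapes of the train/test split produced above.
print(X.shape)   # e.g. (7500, 4)
print(Xt.shape)  # e.g. (2500, 4)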


3. Train logistic regression with scikit-learn, optimized by stochastic gradient descent (SGD), and check the classifier's accuracy

%%timeit
# Train and test the scikit-learn SGD logistic regression.
clf = sklearn.linear_model.SGDClassifier(
    loss='log', n_iter=1000, penalty='l2', alpha=1e-3, class_weight='auto')

clf.fit(X, y)
yt_pred = clf.predict(Xt)
print('Accuracy: {:.3f}'.format(sklearn.metrics.accuracy_score(yt, yt_pred)))
Output:

Accuracy: 0.512
Accuracy: 0.512
Accuracy: 0.512
Accuracy: 0.512
1 loops, best of 3: 379 ms per loop

4. Save the data to HDF5 files for input to Caffe

# Write out the data to HDF5 files in a temp directory.
# This file is assumed to be caffe_root/examples/hdf5_classification.ipynb
dirname = os.path.abspath('./examples/hdf5_classification/data')
if not os.path.exists(dirname):
    os.makedirs(dirname)

train_filename = os.path.join(dirname, 'train.h5')
test_filename = os.path.join(dirname, 'test.h5')

# HDF5DataLayer source should be a file containing a list of HDF5 filenames.
# To show this off, we'll list the same data file twice.
with h5py.File(train_filename, 'w') as f:
    f['data'] = X
    f['label'] = y.astype(np.float32)
with open(os.path.join(dirname, 'train.txt'), 'w') as f:
    f.write(train_filename + '\n')
    f.write(train_filename + '\n')
    
# HDF5 is pretty efficient, but can be further compressed.
comp_kwargs = {'compression': 'gzip', 'compression_opts': 1}
with h5py.File(test_filename, 'w') as f:
    f.create_dataset('data', data=Xt, **comp_kwargs)
    f.create_dataset('label', data=yt.astype(np.float32), **comp_kwargs)
with open(os.path.join(dirname, 'test.txt'), 'w') as f:
    f.write(test_filename + '\n')
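
To double-check what Caffe's HDF5Data layer will see, the file can be read back with h5py. This is a small optional check, not part of the original example.

# Read the training file back and confirm the dataset names and shapes.
with h5py.File(train_filename, 'r') as f:
    print(list(f.keys()))    # ['data', 'label']
    print(f['data'].shape)   # (7500, 4)
    print(f['label'].shape)  # (7500,)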

5. Define logistic regression in Caffe through the Python net specification (NetSpec)

from caffe import layers as L
from caffe import params as P

def logreg(hdf5, batch_size):
    # logistic regression: data, matrix multiplication, and 2-class softmax loss
    n = caffe.NetSpec()
    n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)
    n.ip1 = L.InnerProduct(n.data, num_output=2, weight_filler=dict(type='xavier'))
    n.accuracy = L.Accuracy(n.ip1, n.label)
    n.loss = L.SoftmaxWithLoss(n.ip1, n.label)
    return n.to_proto()
    
with open('examples/hdf5_classification/logreg_auto_train.prototxt', 'w') as f:
    f.write(str(logreg('examples/hdf5_classification/data/train.txt', 10)))
    
with open('examples/hdf5_classification/logreg_auto_test.prototxt', 'w') as f:
    f.write(str(logreg('examples/hdf5_classification/data/test.txt', 10)))
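To see exactly what was written, the generated protobuf text can be printed directly; it should contain an HDF5Data layer producing data and label, a single InnerProduct layer ip1 with num_output: 2, and Accuracy plus SoftmaxWithLoss layers on top of it.

# Print the protobuf text of the training net -- the same content
# that was just written to logreg_auto_train.prototxt.
print(logreg('examples/hdf5_classification/data/train.txt', 10))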

6. Learn and evaluate the logistic regression model in Caffe:
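
The training below relies on examples/hdf5_classification/solver.prototxt, which ships with the Caffe repository. For reference, a solver playing the same role could be generated from Python roughly as follows; every hyperparameter value here is an illustrative assumption rather than a copy of the shipped file, and my_solver.prototxt is a hypothetical output path.

from caffe.proto import caffe_pb2

# Sketch of a solver definition for the logistic regression nets above.
s = caffe_pb2.SolverParameter()
s.train_net = 'examples/hdf5_classification/logreg_auto_train.prototxt'
s.test_net.append('examples/hdf5_classification/logreg_auto_test.prototxt')
s.test_interval = 1000   # test after every 1000 training iterations (assumed)
s.test_iter.append(250)  # 250 test batches of 10 cover the 2500 test samples
s.max_iter = 10000       # total training iterations (assumed)
s.base_lr = 0.01         # starting learning rate (assumed)
s.lr_policy = 'step'     # drop the learning rate in steps
s.gamma = 0.1
s.stepsize = 5000
s.momentum = 0.9
s.weight_decay = 5e-4
s.display = 1000

with open('examples/hdf5_classification/my_solver.prototxt', 'w') as f:
    f.write(str(s))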

%%timeit
caffe.set_mode_cpu()
solver = caffe.get_solver('examples/hdf5_classification/solver.prototxt')
solver.solve()

accuracy = 0
batch_size = solver.test_nets[0].blobs['data'].num
test_iters = int(len(Xt) / batch_size)
for i in range(test_iters):
    solver.test_nets[0].forward()
    accuracy += solver.test_nets[0].blobs['accuracy'].data
accuracy /= test_iters

print("Accuracy: {:.3f}".format(accuracy))

Accuracy: 0.496
Accuracy: 0.496
Accuracy: 0.496
Accuracy: 0.496
1 loops, best of 3: 121 ms per loop
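
Because %%timeit discards the cell's variables, re-running the training once without the magic makes the fitted solver available for inspection; a minimal sketch:

caffe.set_mode_cpu()
solver = caffe.get_solver('examples/hdf5_classification/solver.prototxt')
solver.solve()

# The learned parameters of the single InnerProduct layer:
# a (2, 4) weight matrix and a length-2 bias vector.
w = solver.net.params['ip1'][0].data
b = solver.net.params['ip1'][1].data
print(w.shape)  # (2, 4)
print(b.shape)  # (2,)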

7. Examine the full model and solving process from the command line

!./build/tools/caffe train -solver examples/hdf5_classification/solver.prototxt
Output omitted......


8. The model above, logreg_auto_train.prototxt, is a simple logistic regression model. It can be made more powerful by introducing a non-linearity between taking the input and producing the prediction, turning the network into a two-layer model. That network is defined in nonlinear_auto_train.prototxt.

from caffe import layers as L
from caffe import params as P

def nonlinear_net(hdf5, batch_size):
    # one small nonlinearity, one leap for model kind
    n = caffe.NetSpec()
    n.data, n.label = L.HDF5Data(batch_size=batch_size, source=hdf5, ntop=2)
    # define a hidden layer of dimension 40
    n.ip1 = L.InnerProduct(n.data, num_output=40, weight_filler=dict(type='xavier'))
    # transform the output through the ReLU (rectified linear) non-linearity
    n.relu1 = L.ReLU(n.ip1, in_place=True)
    # score the (now non-linear) features
    n.ip2 = L.InnerProduct(n.ip1, num_output=2, weight_filler=dict(type='xavier'))
    # same accuracy and loss as before
    n.accuracy = L.Accuracy(n.ip2, n.label)
    n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
    return n.to_proto()
    
with open('examples/hdf5_classification/nonlinear_auto_train.prototxt', 'w') as f:
    f.write(str(nonlinear_net('examples/hdf5_classification/data/train.txt', 10)))
    
with open('examples/hdf5_classification/nonlinear_auto_test.prototxt', 'w') as f:
    f.write(str(nonlinear_net('examples/hdf5_classification/data/test.txt', 10)))

9. Learn and evaluate the two-layer model, using the solver in examples/hdf5_classification/nonlinear_solver.prototxt, which points at the nonlinear net definitions written above

%%timeit
caffe.set_mode_cpu()
solver = caffe.get_solver('examples/hdf5_classification/nonlinear_solver.prototxt')
solver.solve()

accuracy = 0
batch_size = solver.test_nets[0].blobs['data'].num
test_iters = int(len(Xt) / batch_size)
for i in range(test_iters):
    solver.test_nets[0].forward()
    accuracy += solver.test_nets[0].blobs['accuracy'].data
accuracy /= test_iters

print("Accuracy: {:.3f}".format(accuracy))

Accuracy: 0.836
Accuracy: 0.839
Accuracy: 0.838
Accuracy: 0.840
1 loops, best of 3: 194 ms per loop

10. As before, use the command line for detailed output on the model and solving

!./build/tools/caffe train -solver examples/hdf5_classification/nonlinear_solver.prototxt

11. Clean up the data files

shutil.rmtree(dirname)


Reference:

http://nbviewer.jupyter.org/github/BVLC/caffe/blob/master/examples/02-brewing-logreg.ipynb
