An autoencoder is a three-layer feed-forward neural network: the input layer is mapped through a hidden layer to an output layer that reconstructs an approximation of the input. The middle hidden layer is the feature-representation layer; it encodes the features learned from the input, and these features may represent the data better than the raw input does. If we train a classifier or regressor on the learned features, it may perform better, which is the idea behind self-taught learning and unsupervised feature learning.
If we also have a large amount of unlabeled data, so much the better: we can use an autoencoder to learn a feature representation from it, apply that representation to extract features from the labeled data, and then train and predict with a machine learning algorithm such as softmax regression. In other words, unsupervised feature learning is followed by supervised learning. When the unlabeled and labeled data come from the same distribution this is semi-supervised learning; when they come from different distributions it is self-taught learning. For example, if the goal is to distinguish motorcycles from cars and the unlabeled data also consists of motorcycles and cars, the problem is semi-supervised; otherwise it is self-taught learning.
The autoencoder network structure is as follows:
[Figure: the three-layer autoencoder network]
Once the autoencoder is trained, the first-layer parameters W1 and b1 define the feature representation; we can then use W1 and b1 to extract features from the labeled data, i.e., compute their hidden-layer activations.
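To make this feature-extraction step concrete, here is a minimal sketch of feedForwardAutoencoder.m, the function the exercise below asks you to complete. It assumes the flat parameter layout theta = [W1(:); W2(:); b1(:); b2(:)] used by the UFLDL starter code:

function activation = feedForwardAutoencoder(theta, hiddenSize, visibleSize, data)
% Hidden-layer activations a2 = sigmoid(W1*x + b1), computed for every column of data
W1 = reshape(theta(1:hiddenSize*visibleSize), hiddenSize, visibleSize);
b1 = theta(2*hiddenSize*visibleSize+1 : 2*hiddenSize*visibleSize+hiddenSize);
activation = sigmoid(bsxfun(@plus, W1 * data, b1));
end

function s = sigmoid(x)
% Element-wise logistic sigmoid
s = 1 ./ (1 + exp(-x));
end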
The experiment again uses the MNIST dataset. This time the digits 5-9 are treated as unlabeled data for learning the feature representation, while the digits 0-4 are split into a training set and a test set for the classifier. The model reaches a test accuracy of 98.32%, versus 96.74% when the raw image pixels are used as input directly.
%% CS294A/CS294W Self-taught Learning Exercise
% Instructions
% ------------
%
% This file contains code that helps you get started on the
% self-taught learning exercise. You will need to complete code in feedForwardAutoencoder.m
% You will also need to have implemented sparseAutoencoderCost.m and
% softmaxCost.m from previous exercises.
%
%% ======================================================================
% STEP 0: Here we provide the relevant parameter values that will
% allow your sparse autoencoder to get good filters; you do not need to
% change the parameters below.
inputSize = 28 * 28;
numLabels = 5;
hiddenSize = 200;
sparsityParam = 0.1; % desired average activation of the hidden units.
% (This was denoted by the Greek alphabet rho, which looks like a lower-case "p",
% in the lecture notes).
lambda = 3e-3; % weight decay parameter
beta = 3; % weight of sparsity penalty term
maxIter = 400;
%% ======================================================================
% STEP 1: Load data from the MNIST database
%
% This loads our training and test data from the MNIST database files.
% We have sorted the data for you so that you will not have
% to change it.
% Load MNIST database files
mnistData = loadMNISTImages('mnist/train-images-idx3-ubyte');
mnistLabels = loadMNISTLabels('mnist/train-labels-idx1-ubyte');
% Set Unlabeled Set (All Images)
% Simulate a Labeled and Unlabeled set
labeledSet = find(mnistLabels >= 0 & mnistLabels <= 4);
unlabeledSet = find(mnistLabels >= 5); % digits 5-9 form the unlabeled set used to learn the feature representation
% Split the labeled data in half: one half trains the softmax classifier, the other half tests it
numTrain = round(numel(labeledSet)/2);
trainSet = labeledSet(1:numTrain);
testSet = labeledSet(numTrain+1:end);
unlabeledData = mnistData(:, unlabeledSet);
trainData = mnistData(:, trainSet);
trainLabels = mnistLabels(trainSet)' + 1; % Shift Labels to the Range 1-5
testData = mnistData(:, testSet);
testLabels = mnistLabels(testSet)' + 1; % Shift Labels to the Range 1-5
% Output Some Statistics
fprintf('# examples in unlabeled set: %d\n', size(unlabeledData, 2));
fprintf('# examples in supervised training set: %d\n\n', size(trainData, 2));
fprintf('# examples in supervised testing set: %d\n\n', size(testData, 2));
%% ======================================================================
% STEP 2: Train the sparse autoencoder
% This trains the sparse autoencoder on the unlabeled training
% images.
% Randomly initialize the parameters
theta = initializeParameters(hiddenSize, inputSize);
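% In the UFLDL starter code, initializeParameters draws W1 and W2 uniformly from
% [-r, r] with r = sqrt(6/(hiddenSize+inputSize+1)), sets b1 and b2 to zero, and
% returns everything flattened as theta = [W1(:); W2(:); b1(:); b2(:)].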
%% ----------------- YOUR CODE HERE ----------------------
% Find opttheta by running the sparse autoencoder on
% unlabeledTrainingImages
opttheta = theta;
% Train the sparse autoencoder with the L-BFGS algorithm from minFunc; this uses
% the cost function sparseAutoencoderCost from the earlier exercise
addpath minFunc/
options.Method = 'lbfgs';
options.maxIter = 400;
options.display = 'on';
[opttheta, cost] = minFunc( @(p) sparseAutoencoderCost(p, ...
                                 inputSize, hiddenSize, ...
                                 lambda, sparsityParam, ...
                                 beta, unlabeledData), ...
                            theta, options);
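% For reference, a sketch of the objective minFunc minimizes here (the standard
% UFLDL sparse-autoencoder cost, with rhoHat the mean hidden activation over the
% unlabeled examples and m their count):
%   J = (1/(2*m)) * sum(sum((a3 - x).^2))               % reconstruction error
%     + (lambda/2) * (sum(W1(:).^2) + sum(W2(:).^2))    % weight decay
%     + beta * sum(KL(sparsityParam, rhoHat))           % sparsity penalty
% where KL(rho, rhoHat) = rho*log(rho./rhoHat) + (1-rho)*log((1-rho)./(1-rhoHat)).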
%% -----------------------------------------------------
% Visualize weights
W1 = reshape(opttheta(1:hiddenSize * inputSize), hiddenSize, inputSize);
display_network(W1');
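% Each row of W1, reshaped to a 28x28 image by display_network, visualizes the
% input pattern that most strongly activates the corresponding hidden unit.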
%%======================================================================
%% STEP 3: Extract Features from the Supervised Dataset
%
% You need to complete the code in feedForwardAutoencoder.m so that the
% following command will extract features from the data.
trainFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                       trainData);
testFeatures = feedForwardAutoencoder(opttheta, hiddenSize, inputSize, ...
                                      testData);
%%======================================================================
%% STEP 4: Train the softmax classifier
softmaxModel = struct;
%% ----------------- YOUR CODE HERE ----------------------
% Use softmaxTrain.m from the previous exercise to train a multi-class
% classifier.
% Use lambda = 1e-4 for the weight regularization for softmax
% You need to compute softmaxModel using softmaxTrain on trainFeatures and
% trainLabels
% Softmax training
options.maxIter = 100;
lambda = 1e-4;
softmaxModel = softmaxTrain(hiddenSize, numLabels, lambda, ...
                            trainFeatures, trainLabels, options);
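% For reference (standard UFLDL implementation, assumed here): softmaxTrain
% minimizes the weight-decayed cross-entropy with L-BFGS and returns a struct
% whose optTheta field holds the learned numLabels-by-hiddenSize weight matrix
% consumed by softmaxPredict below.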
%% -----------------------------------------------------
%%======================================================================
%% STEP 5: Testing
%% ----------------- YOUR CODE HERE ----------------------
% Compute Predictions on the test set (testFeatures) using softmaxPredict
% and softmaxModel
% Use the prediction function from the softmax exercise
[pred] = softmaxPredict(softmaxModel, testFeatures);
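% Sketch of softmaxPredict in the standard UFLDL implementation: each column is
% assigned the highest-scoring class,
%   [~, pred] = max(softmaxModel.optTheta * testFeatures);
% the softmax normalization is monotonic, so it is unnecessary for prediction.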
%% -----------------------------------------------------
% Classification Score
fprintf('Test Accuracy: %f%%\n', 100*mean(pred(:) == testLabels(:)));
% (note that we shift the labels by 1, so that digit 0 now corresponds to
% label 1)
%
% Accuracy is the proportion of correctly classified images
% The results for our implementation were:
%
% Accuracy: 98.3%
%
%
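For comparison, the 96.74% raw-pixel baseline mentioned above can be reproduced by skipping the learned features and training softmax directly on the pixels; a minimal sketch, reusing the variables defined in the script:

% Baseline: softmax on raw 28x28 pixels instead of autoencoder features
rawModel = softmaxTrain(28 * 28, numLabels, 1e-4, trainData, trainLabels, options);
rawPred = softmaxPredict(rawModel, testData);
fprintf('Raw-pixel accuracy: %0.2f%%\n', 100 * mean(rawPred(:) == testLabels(:)));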
Reference:
http://ufldl.stanford.edu/wiki/index.php/Self-Taught_Learning_to_Deep_Networks