Week 4 Programming Assignment Review

This week's course is mainly an introduction to neural networks, including how they can represent logical operations and so on. Overall the difficulty is not high, but there are still quite a few things to watch out for in the programming.

Below is my code, along with the problems I ran into while writing it:

The overall goal of this assignment is to recognize handwritten digits from 5,000 20×20 grayscale image matrices.

1. First, logistic regression with regularization. This actually doesn't have much to do with what was taught this week:

function [J, grad] = lrCostFunction(theta, X, y, lambda)
%LRCOSTFUNCTION Compute cost and gradient for logistic regression with 
%regularization
%   J = LRCOSTFUNCTION(theta, X, y, lambda) computes the cost of using
%   theta as the parameter for regularized logistic regression and the
%   gradient of the cost w.r.t. to the parameters. 

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly 
% Regularized cost; theta(1) (the bias term) is excluded from the penalty
J = -1/m*(y'*log(sigmoid(X*theta))+(1-y')*log(1-sigmoid(X*theta)))+lambda/(2*m)*(sum(theta.^2)-theta(1)*theta(1));
% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta.
%               You should set J to the cost.
%               Compute the partial derivatives and set grad to the partial
%               derivatives of the cost w.r.t. each parameter in theta
%
% Hint: The computation of the cost function and gradients can be
%       efficiently vectorized. For example, consider the computation
%
%           sigmoid(X * theta)
%
%       Each row of the resulting matrix will contain the value of the
%       prediction for that example. You can make use of this to vectorize
%       the cost function and gradient computations. 
%
% Hint: When computing the gradient of the regularized cost function, 
%       there're many possible vectorized solutions, but one solution
%       looks like:
%           grad = (unregularized gradient for logistic regression)
%           temp = theta; 
%           temp(1) = 0;   % because we don't add anything for j = 0  
%           grad = grad + YOUR_CODE_HERE (using the temp variable)
%

% Unregularized gradient first
grad = 1/m*X'*(sigmoid(X*theta)-y);
% Add the regularization term, skipping the bias term theta(1)
temp = theta;
temp(1)=0;
grad = grad + (lambda/m)*temp;

% =============================================================


end



The key is the expression for the regularized logistic regression cost. Once you have that expression, this function is easy to write; the attached PDF explains it very clearly:
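For reference, the regularized logistic regression cost in the course's notation (note the bias term \theta_0 is not penalized) is:

J(\theta) = -\frac{1}{m} \sum_{i=1}^{m} \Big[ y^{(i)} \log h_\theta(x^{(i)}) + (1 - y^{(i)}) \log\big(1 - h_\theta(x^{(i)})\big) \Big] + \frac{\lambda}{2m} \sum_{j=1}^{n} \theta_j^2

where h_\theta(x) = \mathrm{sigmoid}(\theta^T x).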





That is the cost function expression above, and it is also very easy to write in Octave:

J = -1/m*(y'*log(sigmoid(X*theta))+(1-y')*log(1-sigmoid(X*theta)))+lambda/(2*m)*(sum(theta.^2)-theta(1)*theta(1));


Because we are using a standard optimization routine, this cost function must also return the current gradient for fmincg to use.


With the cost function in hand, the partial derivatives are easy to derive:
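For reference, the partial derivatives in the course's notation are:

\frac{\partial J}{\partial \theta_0} = \frac{1}{m} \sum_{i=1}^{m} \big(h_\theta(x^{(i)}) - y^{(i)}\big)\, x_0^{(i)}

\frac{\partial J}{\partial \theta_j} = \frac{1}{m} \sum_{i=1}^{m} \big(h_\theta(x^{(i)}) - y^{(i)}\big)\, x_j^{(i)} + \frac{\lambda}{m} \theta_j \qquad (j \ge 1)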



These are also very easy to express:

grad = 1/m*X'*(sigmoid(X*theta)-y);
temp = theta;
temp(1)=0;
grad = grad + (lambda/m)*temp;


The point to note in both places above is that theta(1), the bias term, is not regularized, which is why we set temp(1) = 0 before adding the regularization term.




2. oneVsAll

The essence of this function is to train our parameters: one regularized logistic regression classifier for each class.

function [all_theta] = oneVsAll(X, y, num_labels, lambda)
%ONEVSALL trains multiple logistic regression classifiers and returns all
%the classifiers in a matrix all_theta, where the i-th row of all_theta 
%corresponds to the classifier for label i
%   [all_theta] = ONEVSALL(X, y, num_labels, lambda) trains num_labels
%   logistic regression classifiers and returns each of these classifiers
%   in a matrix all_theta, where the i-th row of all_theta corresponds 
%   to the classifier for label i

% Some useful variables
m = size(X, 1);
n = size(X, 2);

% You need to return the following variables correctly 
all_theta = zeros(num_labels,n+1);

% Add ones to the X data matrix
X = [ones(m, 1),X];

% ====================== YOUR CODE HERE ======================
% Instructions: You should complete the following code to train num_labels
%               logistic regression classifiers with regularization
%               parameter lambda. 
%
% Hint: theta(:) will return a column vector.
%
% Hint: You can use y == c to obtain a vector of 1's and 0's that tell you
%       whether the ground truth is true/false for this class.
%
% Note: For this assignment, we recommend using fmincg to optimize the cost
%       function. It is okay to use a for-loop (for c = 1:num_labels) to
%       loop over the different classes.
%
%       fmincg works similarly to fminunc, but is more efficient when we
%       are dealing with large number of parameters.
%
% Example Code for fmincg:
%
%     % Set Initial theta
%     initial_theta = zeros(n + 1, 1);
%     
%     % Set options for fminunc
%     options = optimset('GradObj', 'on', 'MaxIter', 50);
% 
%     % Run fmincg to obtain the optimal theta
%     % This function will return theta and the cost 
%     [theta] = ...
%         fmincg (@(t)(lrCostFunction(t, X, (y == c), lambda)), ...
%                 initial_theta, options);
%
initial_theta = zeros(n+1,1);
options = optimset('GradObj','on','MaxIter',50);
for i = 1:num_labels
  % Train one binary classifier per class: (y == i) gives the 0/1 labels for class i
  kk = fmincg(@(t)(lrCostFunction(t,X,(y==i),lambda)),initial_theta,options);
  all_theta(i,:) = kk;   % fmincg returns a column vector; store it as row i
end
 % =========================================================================
end


What we need to pay attention to here is how fmincg is used:

Its first argument is the function we want to minimize, which here is lrCostFunction.

Note that the function handle we hand to fmincg must return two values, [jVal, gradient] (our lrCostFunction's [J, grad]); although we don't look at both outputs here, it is worth mentioning. The first is the cost for the current parameters; when the cost ends up small after the iterations, the training was actually worthwhile. The second is the gradient, which fmincg uses to update the parameters; the optimized parameter vector fmincg finally returns is what we keep as the trained classifier.

After training, the result is assigned to kk and then copied from kk into the corresponding row of all_theta (my MATLAB is not great; if there is a better way, please do let me know).
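For what it's worth, a minimal sketch of the same loop that transposes fmincg's first output explicitly into the row, and also captures its second output (the cost recorded at each iteration):

for c = 1:num_labels
  % theta is the optimized parameter vector, costHistory the cost per iteration
  [theta, costHistory] = fmincg(@(t) lrCostFunction(t, X, (y == c), lambda), ...
                                initial_theta, options);
  all_theta(c, :) = theta';   % transpose the column vector into row c
end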

fmincg can only optimize one parameter vector at a time, so instead of fitting the 10×401 matrix in one go, we train each 1×401 row separately and then stack the rows back together.


3. predictOneVsAll

This one is pretty simple. According to its documentation, max can also return the index of the maximum value, so you can either call max once per row vector, or just use max's second return value; search Baidu or Google for how the max function works in Octave.
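For example, a minimal sketch of the prediction using max's second return value, assuming the assignment's predictOneVsAll(all_theta, X) interface and the sigmoid helper:

m = size(X, 1);
X = [ones(m, 1) X];            % add the bias column, as in oneVsAll
h = sigmoid(X * all_theta');   % m x num_labels matrix of class scores
[~, p] = max(h, [], 2);        % p(i) = index (label) of the largest score in row i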

4. predict

This one is even simpler: with the trained neural network we need to produce the predictions, basically the same as part 3. The core code is below.

% temp holds the output-layer activations (one row per example)
for i = 1:m
  p(i) = find(temp(i,:) == max(temp(i,:)));
end
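For context, a minimal sketch of the forward pass that produces temp (called h below), assuming the assignment's pre-trained weights Theta1 and Theta2:

m  = size(X, 1);
a1 = [ones(m, 1) X];                       % input layer plus bias unit
a2 = [ones(m, 1) sigmoid(a1 * Theta1')];   % hidden layer plus bias unit
h  = sigmoid(a2 * Theta2');                % output layer: m x num_labels scores
[~, p] = max(h, [], 2);                    % predicted label = index of the largest output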

