Machine Learning: A Small Linear Regression Project

All the code for this small project is on my GitHub; anyone interested is welcome to discuss it with me.
Path: Machine-Learning/machine-learning-ex1/


1. Introduction

Linear regression is defined on Wikipedia as follows:

In statistics, linear regression is a linear approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables (or independent variables) denoted X. The case of one explanatory variable is called simple linear regression. For more than one explanatory variable, the process is called multiple linear regression.

Here is my own understanding.
From an application standpoint, linear regression is a tool for fitting data: it finds a straight line that fits most of the data, so that an output value can be predicted from a given input value.

What does "linear" mean? The fitted model is linear; in the single-variable case it is simply a straight line.
What does "regression" mean? It means predicting a continuous, real-valued output from previously observed data.

Linear regression comes in two flavors:

  • Linear regression with one variable (univariate)
  • Linear regression with multiple variables (multivariate)
    Note: the word "feature" is sometimes used in place of "variable".

Two methods are commonly used to solve a linear regression problem:

  • Gradient descent
  • Normal equation

When should you use gradient descent, and when the normal equation?

  • The normal equation is efficient when the number of features n is small (roughly n < 10,000); with more features, gradient descent scales better.
  • The normal equation needs no learning rate, so there is no hyperparameter to tune and no iteration.
  • Gradient descent costs roughly O(kn^2) for k iterations, while the normal equation costs O(n^3) because it inverts an n-by-n matrix.

    In short: with fewer than about 10,000 features the normal equation is usually the first choice; otherwise use gradient descent.

Multivariate linear regression additionally involves:

  • Feature scaling
  • Mean normalization
    Note: "feature normalization" usually refers to mean normalization; the formula is sketched right after this list.
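For each feature j, mean normalization rescales the feature as follows (a quick sketch, where mu_j is the mean of feature j and s_j is its standard deviation, or the range max - min):

    x_j := (x_j - mu_j) / s_j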

There is also vectorization:
Many seemingly complex computations can be rewritten as matrix or vector operations, which greatly improves efficiency and makes the code much more concise. This shows up throughout the project code below.
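As a small illustration (a sketch, not part of the assignment code), here are the predictions for all m training examples computed in a loop versus as one matrix-vector product:

% Loop version: one training example at a time
predictions = zeros(m, 1);
for i = 1:m
    predictions(i) = X(i, :) * theta;   % h_theta(x^(i))
end

% Vectorized version: a single matrix-vector product gives the same result
predictions = X * theta;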

Finally, you need to understand the cost function: the better the fit, the smaller its value, with 0 as the lower bound (a perfect fit).

Formulas you should know (written out below for reference):
1. The hypothesis function (general and vectorized form);
2. The cost function;
3. The gradient descent update rule (general and vectorized form);
4. The normal equation, i.e. the closed-form vectorized solution for the parameters.
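In the course's standard notation (m training examples, n features, X the design matrix with a leading column of ones), these are:

$h_\theta(x) = \theta_0 + \theta_1 x_1 + \dots + \theta_n x_n = \theta^T x$

$J(\theta) = \frac{1}{2m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right)^2$

$\theta_j := \theta_j - \alpha \frac{1}{m} \sum_{i=1}^{m} \left( h_\theta(x^{(i)}) - y^{(i)} \right) x_j^{(i)}$ (updated simultaneously for all j), or vectorized: $\theta := \theta - \frac{\alpha}{m} X^T (X\theta - y)$

$\theta = (X^T X)^{-1} X^T y$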

Note: the project code is written in Octave (its syntax is similar to MATLAB's). The comments in the code are fairly detailed, so I won't walk through every line separately.

2. Linear Regression with one variable

Main script:

%% Initialization
% clear      removes all variables from the workspace
% close all  closes all open figure windows
% clc        clears the Command Window
clear ; close all; clc

%% ==================== Part 1: Basic Function ====================
% Complete warmUpExercise.m
fprintf('Running warmUpExercise ... \n');
fprintf('5x5 Identity Matrix: \n');
warmUpExercise()

fprintf('Program paused. Press enter to continue.\n');
pause;


%% ======================= Part 2: Plotting =======================
fprintf('Plotting Data ...\n')
data = load('ex1data1.txt');
X = data(:, 1); y = data(:, 2);
m = length(y); % number of training examples

% Plot Data
% Note: You have to complete the code in plotData.m
plotData(X, y);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% =================== Part 3: Cost and Gradient descent ===================

X = [ones(m, 1), data(:,1)]; % Add a column of ones to x
theta = zeros(2, 1); % initialize fitting parameters

% Some gradient descent settings
iterations = 1500;
alpha = 0.01;

fprintf('\nTesting the cost function ...\n')
% compute and display initial cost
J = computeCost(X, y, theta);
fprintf('With theta = [0 ; 0]\nCost computed = %f\n', J);
fprintf('Expected cost value (approx) 32.07\n');

% further testing of the cost function
J = computeCost(X, y, [-1 ; 2]);
fprintf('\nWith theta = [-1 ; 2]\nCost computed = %f\n', J);
fprintf('Expected cost value (approx) 54.24\n');

fprintf('Program paused. Press enter to continue.\n');
pause;

fprintf('\nRunning Gradient Descent ...\n')
% run gradient descent
theta = gradientDescent(X, y, theta, alpha, iterations);

% print theta to screen
fprintf('Theta found by gradient descent:\n');
fprintf('%f\n', theta);
fprintf('Expected theta values (approx)\n');
fprintf(' -3.6303\n  1.1664\n\n');

% Plot the linear fit
hold on; % keep previous plot visible
plot(X(:,2), X*theta, '-')
legend('Training data', 'Linear regression')
hold off % don't overlay any more plots on this figure

% Predict values for population sizes of 35,000 and 70,000
predict1 = [1, 3.5] *theta;
fprintf('For population = 35,000, we predict a profit of %f\n',...
    predict1*10000);
predict2 = [1, 7] * theta;
fprintf('For population = 70,000, we predict a profit of %f\n',...
    predict2*10000);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ============= Part 4: Visualizing J(theta_0, theta_1) =============
fprintf('Visualizing J(theta_0, theta_1) ...\n')

% Grid over which we will calculate J
theta0_vals = linspace(-10, 10, 100);
theta1_vals = linspace(-1, 4, 100);

% initialize J_vals to a matrix of 0's
J_vals = zeros(length(theta0_vals), length(theta1_vals));

% Fill out J_vals
for i = 1:length(theta0_vals)
    for j = 1:length(theta1_vals)
      t = [theta0_vals(i); theta1_vals(j)];
      J_vals(i,j) = computeCost(X, y, t);
    end
end


% Because of the way meshgrids work in the surf command, we need to
% transpose J_vals before calling surf, or else the axes will be flipped
J_vals = J_vals';
% Surface plot
figure;
surf(theta0_vals, theta1_vals, J_vals)
xlabel('\theta_0'); ylabel('\theta_1');

% Contour plot
figure;
% Plot J_vals as 20 contours spaced logarithmically between 0.01 and 1000
contour(theta0_vals, theta1_vals, J_vals, logspace(-2, 3, 20))
xlabel('\theta_0'); ylabel('\theta_1');
hold on;
plot(theta(1), theta(2), 'rx', 'MarkerSize', 10, 'LineWidth', 2);

Part 1 is just a warm-up (printing a 5x5 identity matrix), so I skip it.

Part 2 plots the training data that we are about to fit.
plotData.m

function plotData(x, y)
%PLOTDATA Plots the data points x and y into a new figure 
%   PLOTDATA(x,y) plots the data points and gives the figure axes labels of
%   population and profit.

figure; % open a new figure window

% ====================== YOUR CODE HERE ======================
% Instructions: Plot the training data into a figure using the 
%               "figure" and "plot" commands. Set the axes labels using
%               the "xlabel" and "ylabel" commands. Assume the 
%               population and revenue data have been passed in
%               as the x and y arguments of this function.
%
% Hint: You can use the 'rx' option with plot to have the markers
%       appear as red crosses. Furthermore, you can make the
%       markers larger by using plot(..., 'rx', 'MarkerSize', 10);
    plot(x, y, 'rx');
    xlabel("population");
    ylabel("profit");
% ============================================================

end

[Figure: scatter plot of the training data]

Each red cross in the figure above is one training example.

Part 3 computes the cost function, runs gradient descent, and plots the fitted line.
computeCost.m

function J = computeCost(X, y, theta)
%COMPUTECOST Compute cost for linear regression
%   J = COMPUTECOST(X, y, theta) computes the cost of using theta as the
%   parameter for linear regression to fit the data points in X and y

% Initialize some useful values
m = length(y); % number of training examples

% You need to return the following variables correctly 
J = 0;

% ====================== YOUR CODE HERE ======================
% Instructions: Compute the cost of a particular choice of theta
%               You should set J to the cost.

% error term inside the parentheses: h_theta(x) - y for every example
A = X*theta-y;
% square each error
A = A.^2;
% sum the squared errors
errorSum = sum(A);
% cost: average squared error divided by 2
J = 1/(2*m)*errorSum;

% =========================================================================

end
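As an aside, the same cost can also be written as a single matrix product instead of the element-wise square and sum (just a sketch of the equivalent vectorized form):

err = X*theta - y;            % residuals
J = (err' * err) / (2*m);     % same value as the version above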

gradientDescent.m

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by 
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta. 
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.
    %
    % sum term of the gradient: X' * (X*theta - y)
    % (named gradSum so it does not shadow Octave's built-in sum())
    gradSum = ((X*theta-y)'*X)';
    % partial derivatives of the cost with respect to each theta_j
    derivative = 1/m*gradSum;
    % simultaneous update of all parameters
    theta = theta - alpha*derivative;
    % ============================================================

    % Save the cost J in every iteration    
    J_history(iter) = computeCost(X, y, theta);

end

end
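Since gradientDescent also returns J_history, a quick sanity check (a sketch, reusing the same X, y, alpha and iterations as in the main script) is to confirm the cost never goes up:

[theta, J_history] = gradientDescent(X, y, zeros(2, 1), alpha, iterations);
% with a suitable alpha the cost should be non-increasing (small tolerance for float noise)
assert(all(diff(J_history) <= 1e-9), 'cost increased: try a smaller alpha');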

[Figure: training data with the fitted regression line]

Part 4 plots the cost function surface and the contour plot over theta.

[Figure: bowl-shaped surface plot of the cost function J(theta_0, theta_1)]

[Figure: contour plot of J(theta_0, theta_1); the red cross marking the fitted theta sits near the center]

Terminal output (screenshot omitted):

As the output shows, the result from gradient descent is very close to the expected values.

3. Linear Regression with multiple variables

Main script:

%% Initialization

%% ================ Part 1: Feature Normalization ================

%% Clear and Close Figures
clear ; close all; clc

fprintf('Loading data ...\n');

%% Load Data
data = load('ex1data2.txt');
X = data(:, 1:2);
y = data(:, 3);
m = length(y);

% Print out some data points
fprintf('First 10 examples from the dataset: \n');
fprintf(' x = [%.0f %.0f], y = %.0f \n', [X(1:10,:) y(1:10,:)]');

fprintf('Program paused. Press enter to continue.\n');
pause;

% Scale features and set them to zero mean
fprintf('Normalizing Features ...\n');

[X mu sigma] = featureNormalize(X);

% Add intercept term to X
X = [ones(m, 1) X];


%% ================ Part 2: Gradient Descent ================

% ====================== YOUR CODE HERE ======================
% Instructions: We have provided you with the following starter
%               code that runs gradient descent with a particular
%               learning rate (alpha). 
%
%               Your task is to first make sure that your functions - 
%               computeCost and gradientDescent already work with 
%               this starter code and support multiple variables.
%
%               After that, try running gradient descent with 
%               different values of alpha and see which one gives
%               you the best result.
%
%               Finally, you should complete the code at the end
%               to predict the price of a 1650 sq-ft, 3 br house.
%
% Hint: By using the 'hold on' command, you can plot multiple
%       graphs on the same figure.
%
% Hint: At prediction, make sure you do the same feature normalization.
%

fprintf('Running gradient descent ...\n');

% Choose some alpha value
alpha = 0.01;
num_iters = 400;

% Init Theta and Run Gradient Descent 
theta = zeros(3, 1);
[theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters);

% Plot the convergence graph
% numel() returns the number of elements
figure;
plot(1:numel(J_history), J_history, '-b', 'LineWidth', 2);
xlabel('Number of iterations');
ylabel('Cost J');

% Display gradient descent's result
fprintf('Theta computed from gradient descent: \n');
fprintf(' %f \n', theta);
fprintf('\n');

% Estimate the price of a 1650 sq-ft, 3 br house
% ====================== YOUR CODE HERE ======================
% Recall that the first column of X is all-ones. Thus, it does
% not need to be normalized.
price = 0; % You should change this
% theta was learned on normalized features, so the inputs (1650 sq-ft, 3 bedrooms)
% must be normalized with the same mu and sigma before predicting
price = [1, ([1650, 3] - mu) ./ sigma] * theta;

% ============================================================

fprintf(['Predicted price of a 1650 sq-ft, 3 br house ' ...
         '(using gradient descent):\n $%f\n'], price);

fprintf('Program paused. Press enter to continue.\n');
pause;

%% ================ Part 3: Normal Equations ================

fprintf('Solving with normal equations...\n');

% ====================== YOUR CODE HERE ======================
% Instructions: The following code computes the closed form 
%               solution for linear regression using the normal
%               equations. You should complete the code in 
%               normalEqn.m
%
%               After doing so, you should complete this code 
%               to predict the price of a 1650 sq-ft, 3 br house.
%

%% Load Data
data = csvread('ex1data2.txt');
X = data(:, 1:2);
y = data(:, 3);
m = length(y);

% Add intercept term to X
X = [ones(m, 1) X];

% Calculate the parameters from the normal equation
theta = normalEqn(X, y);

% Display normal equation's result
fprintf('Theta computed from the normal equations: \n');
fprintf(' %f \n', theta);
fprintf('\n');


% Estimate the price of a 1650 sq-ft, 3 br house
% ====================== YOUR CODE HERE ======================
price = 0; % You should change this
price = [1, 1650, 3]*theta;

% ============================================================

fprintf(['Predicted price of a 1650 sq-ft, 3 br house ' ...
         '(using normal equations):\n $%f\n'], price);

Part 1: feature normalization, i.e. (feature - mean of that feature) / (its standard deviation, or max - min).
featureNormalize.m

function [X_norm, mu, sigma] = featureNormalize(X)
%FEATURENORMALIZE Normalizes the features in X 
%   FEATURENORMALIZE(X) returns a normalized version of X where
%   the mean value of each feature is 0 and the standard deviation
%   is 1. This is often a good preprocessing step to do when
%   working with learning algorithms.

% You need to set these values correctly
X_norm = X;
mu = zeros(1, size(X, 2));
sigma = zeros(1, size(X, 2));

% ====================== YOUR CODE HERE ======================
% Instructions: First, for each feature dimension, compute the mean
%               of the feature and subtract it from the dataset,
%               storing the mean value in mu. Next, compute the 
%               standard deviation of each feature and divide
%               each feature by it's standard deviation, storing
%               the standard deviation in sigma. 
%
%               Note that X is a matrix where each column is a 
%               feature and each row is an example. You need 
%               to perform the normalization separately for 
%               each feature. 
%
% Hint: You might find the 'mean' and 'std' functions useful.
%       

% mean of each feature (a 1-by-n row vector)
  mu = mean(X);
% standard deviation of each feature (a 1-by-n row vector)
  sigma = std(X);
% normalize: subtract the mean, then divide by the standard deviation (or max - min)
% easy to get wrong -- see the note after this function
% problematic form: x_norm = (X-mu) ./ sigma;

X_norm = (X - ones(length(X), 1) * mu) ./ (ones(length(X), 1) * sigma);

% ============================================================

end

There is a small pitfall here.

Problematic form: x_norm = (X-mu) ./ sigma;
Working form: X_norm = (X - ones(length(X), 1) * mu) ./ (ones(length(X), 1) * sigma);

mu and sigma are 1-by-n row vectors while X is m-by-n, so they have to be expanded to the same shape as X. Computing X - mu or X ./ sigma on its own may succeed thanks to automatic broadcasting, but in my environment the combined expression did not expand as expected and theta came out as NaN. Expanding mu and sigma explicitly is the safer, more portable choice (two equivalent ways are sketched below).
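Two equivalent ways to do the same explicit expansion, as a sketch (repmat tiles the row vectors to m rows; bsxfun applies the element-wise operations with expansion and also works on older Octave/MATLAB versions):

X_norm = (X - repmat(mu, size(X, 1), 1)) ./ repmat(sigma, size(X, 1), 1);
X_norm = bsxfun(@rdivide, bsxfun(@minus, X, mu), sigma);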

Part 2 runs gradient descent and plots the cost against the number of iterations, to make sure gradient descent is behaving correctly.

The gradient descent code is the same as before.

function [theta, J_history] = gradientDescent(X, y, theta, alpha, num_iters)
%GRADIENTDESCENT Performs gradient descent to learn theta
%   theta = GRADIENTDESCENT(X, y, theta, alpha, num_iters) updates theta by 
%   taking num_iters gradient steps with learning rate alpha

% Initialize some useful values
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);

for iter = 1:num_iters

    % ====================== YOUR CODE HERE ======================
    % Instructions: Perform a single gradient step on the parameter vector
    %               theta. 
    %
    % Hint: While debugging, it can be useful to print out the values
    %       of the cost function (computeCost) and gradient here.
    %
    gradSum = ((X*theta-y)'*X)';   % named gradSum to avoid shadowing the built-in sum()
    derivative = 1/m*gradSum;
    theta = theta - alpha*derivative;
    % ============================================================

    % Save the cost J in every iteration    
    J_history(iter) = computeCost(X, y, theta);

end

end

[Figure: cost J versus number of iterations]

The cost decreases at a reasonable rate, so gradient descent is running correctly.
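The starter comments in the main script also suggest trying different learning rates. A quick sketch for comparing a few alpha values on one convergence plot (the values below are only illustrative):

% Compare several learning rates on the same convergence plot
alphas = [0.3, 0.1, 0.03, 0.01];
colors = {'b', 'r', 'g', 'k'};
figure; hold on;
for k = 1:numel(alphas)
    [~, J_hist] = gradientDescentMulti(X, y, zeros(3, 1), alphas(k), 50);
    plot(1:numel(J_hist), J_hist, colors{k}, 'LineWidth', 2);
end
xlabel('Number of iterations');
ylabel('Cost J');
legend('alpha = 0.3', 'alpha = 0.1', 'alpha = 0.03', 'alpha = 0.01');
hold off;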
Part 3: solving with the normal equation.

function [theta] = normalEqn(X, y)
%NORMALEQN Computes the closed-form solution to linear regression 
%   NORMALEQN(X,y) computes the closed-form solution to linear 
%   regression using the normal equations.

theta = zeros(size(X, 2), 1);

% ====================== YOUR CODE HERE ======================
% Instructions: Complete the code to compute the closed form solution
%               to linear regression and put the result in theta.
%

% ---------------------- Sample Solution ----------------------
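% Note: pinv (the pseudo-inverse) is used instead of inv so that a solution
% is still produced when X'*X is singular or ill-conditioned (e.g., when
% features are redundant).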

 theta = pinv(X'*X)*X'*y;

% -------------------------------------------------------------


% ============================================================

end

Terminal output (screenshot omitted):

The output shows that gradient descent and the normal equation give noticeably different predictions. This is not because gradient descent got stuck in a local optimum: the linear regression cost function is convex, so gradient descent can only converge to the global minimum. The discrepancy has two practical causes instead: the prediction inputs must be normalized with the same mu and sigma used on the training data (the corrected prediction line above does this), and with alpha = 0.01 and only 400 iterations the parameters may not have fully converged yet; a larger learning rate or more iterations brings the two results much closer together.

4. Conclusion

The main difficulties in this small project are:

  • Expressing the general formulas in vectorized form
  • Debugging

Everything above is just my own take; criticism and suggestions are very welcome, and I'm happy to discuss!

