[Machine Learning Experiment 5] Naive Bayes (Spam Filtering)

This experiment uses a generative learning algorithm to filter spam emails.
A discriminative learning algorithm learns p(y|x) directly (logistic regression, for example), or equivalently learns a direct mapping from the input to {0, 1}.
A generative learning algorithm instead models p(x|y) (and p(y)). Examples are Gaussian discriminant analysis (GDA) and naive Bayes: the former handles continuous features, the latter discrete ones.
Put simply, a discriminative algorithm separates the two classes with a decision boundary, while a generative algorithm builds a model of each class separately, compares the input against both models, and computes the corresponding probabilities.
Take the benign/malignant tumor problem: we build model 1 for benign tumors (y = 0) and model 2 for malignant tumors (y = 1). Then p(x|y=0) is the probability of observing the features x given a benign tumor, and p(x|y=1) the probability given a malignant one. Bayes' rule then gives the posterior probability that the tumor is malignant, p(y=1|x):
$$p(y=1\mid x)=\frac{p(x\mid y=1)\,p(y=1)}{p(x)}=\frac{p(x\mid y=1)\,p(y=1)}{p(x\mid y=1)\,p(y=1)+p(x\mid y=0)\,p(y=0)}$$
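As a quick sanity check on Bayes' rule, here is a tiny MATLAB/Octave sketch with made-up numbers (the prior and likelihoods below are purely illustrative, not taken from the exercise data):

% Illustrative only: invented likelihoods and prior for a single input x
p_x_given_y1 = 0.30;   % p(x | y = 1), e.g. from the malignant model
p_x_given_y0 = 0.05;   % p(x | y = 0), e.g. from the benign model
p_y1 = 0.20;           % class prior p(y = 1)

% total probability: p(x) = p(x|y=1)p(y=1) + p(x|y=0)p(y=0)
p_x = p_x_given_y1 * p_y1 + p_x_given_y0 * (1 - p_y1);

% Bayes' rule: posterior p(y=1 | x)
p_y1_given_x = p_x_given_y1 * p_y1 / p_x   % prints 0.6000

Even though the prior for y = 1 is only 0.2, the much higher likelihood under the y = 1 model pushes the posterior up to 0.6.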
This experiment uses naive Bayes, which handles discrete data; Gaussian discriminant analysis works much the same way, only the parameters are computed differently.
The full problem statement is on the original exercise page linked below.
Data:
http://openclassroom.stanford.edu/MainFolder/courses/MachineLearning/exercises/ex6materials/ex6DataPrepared.zip
Original exercise:
http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&doc=exercises/ex6/ex6.html
Theory:
I won't go through the detailed definitions and derivations here (consult other references if you want them); instead, here are the formulas for estimating the naive Bayes parameters, with a short explanation:
$$\phi_{j|y=1}=\frac{\sum_{i=1}^{m}1\{x_j^{(i)}=1\wedge y^{(i)}=1\}}{\sum_{i=1}^{m}1\{y^{(i)}=1\}},\qquad \phi_{j|y=0}=\frac{\sum_{i=1}^{m}1\{x_j^{(i)}=1\wedge y^{(i)}=0\}}{\sum_{i=1}^{m}1\{y^{(i)}=0\}},\qquad \phi_y=\frac{\sum_{i=1}^{m}1\{y^{(i)}=1\}}{m}$$
Here 1{...} is the indicator function: 1{true} = 1 and 1{false} = 0.
m is the number of training examples, and φ_{j|y=1} is the probability that feature x_j appears given the class y = 1 (∧ means "and"). So in practice we just count, within each class, the examples where x_j appears and divide by that class's count to get φ_{j|y=1} and φ_{j|y=0}.
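In MATLAB/Octave the indicator notation maps directly onto logical comparisons, which is exactly how the training code below computes these counts. A throwaway illustration (train_labels here is the 0/1 label vector that train.m loads):

% sum_i 1{y^(i) = 1}: number of training examples labeled spam
num_spam = sum(train_labels == 1);
% sum_i 1{y^(i) = 0}: number of training examples labeled nonspam
num_nonspam = sum(train_labels == 0);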
Then, using the formula:
$$p(y=1\mid x)=\frac{\left(\prod_{j=1}^{n}p(x_j\mid y=1)\right)p(y=1)}{\left(\prod_{j=1}^{n}p(x_j\mid y=1)\right)p(y=1)+\left(\prod_{j=1}^{n}p(x_j\mid y=0)\right)p(y=0)}$$
we can combine the probabilities of all n features. Note that the x in the formula above is the whole feature vector x: because naive Bayes assumes the features are conditionally independent given the class, p(x|y) can be computed as a product of per-feature probabilities, as the sketch below shows.
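To make the independence assumption concrete, here is a minimal sketch with invented φ values for a 3-word dictionary (under the binary model above, where x_j marks whether word j is present):

% Illustrative only: invented per-word probabilities phi_(j|y=1), j = 1..3
phi_y1 = [0.5 0.3 0.2];
x = [1 0 1];   % binary feature vector: words 1 and 3 present, word 2 absent

% conditional independence lets p(x|y=1) factor into per-feature terms:
% present words contribute phi_j, absent words contribute (1 - phi_j)
p_x_given_y1 = prod(phi_y1 .^ x .* (1 - phi_y1) .^ (1 - x))
% = 0.5 * (1 - 0.3) * 0.2 = 0.0700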
During classification we may also encounter a feature that never appeared in the training set. Under the formulas above it would be assigned probability 0 for both classes, zeroing out both products and leaving the posterior as 0/0, which is clearly unreasonable. That is why we introduce Laplace smoothing:
$$\phi_j=\frac{1+\sum_{i=1}^{m}1\{z^{(i)}=j\}}{k+m}$$

(for a multinomial variable z taking one of k values: add 1 to every count and k to the denominator, so the estimates still sum to 1 and no outcome gets probability 0)
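A minimal sketch of why the +1 matters (the counts below are invented): without smoothing, a word never seen in spam forces the whole spam-side product to zero; with Laplace smoothing every word keeps a small nonzero probability:

% Illustrative only: invented word counts for a 4-word dictionary
word_counts_spam = [10 5 0 3];   % word 3 never appeared in any spam email
total_words = sum(word_counts_spam);   % 18
V = numel(word_counts_spam);           % dictionary size, 4

% without smoothing: word 3 gets probability exactly 0, so any email
% containing it gets p(x|y=1) = 0 no matter what the other words say
phi_unsmoothed = word_counts_spam / total_words

% with smoothing: add 1 to every count and V to the denominator;
% the estimates still sum to 1, but none of them is 0
phi_smoothed = (word_counts_spam + 1) / (total_words + V)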
Finally, here are the formulas actually used in this experiment:
$$\phi_{k|y=1}=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n_i}1\{x_j^{(i)}=k\wedge y^{(i)}=1\}+1}{\sum_{i=1}^{m}1\{y^{(i)}=1\}\,n_i+|V|}$$

$$\phi_{k|y=0}=\frac{\sum_{i=1}^{m}\sum_{j=1}^{n_i}1\{x_j^{(i)}=k\wedge y^{(i)}=0\}+1}{\sum_{i=1}^{m}1\{y^{(i)}=0\}\,n_i+|V|},\qquad \phi_y=\frac{\sum_{i=1}^{m}1\{y^{(i)}=1\}}{m}$$
Here m is the number of documents (this experiment uses 700 training documents), k indexes the dictionary tokens, n_i is the number of tokens in the i-th document, and |V| is the dictionary size.
Finally, we convert everything to logarithms for the actual computation:
$$\log\left(p(x\mid y=1)\,p(y=1)\right)=\sum_{k=1}^{|V|}x_k\log\phi_{k|y=1}+\log\phi_y$$

(and analogously for y = 0), where x_k is the number of times token k appears in the document.
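The logarithms are not just a convenience: multiplying 2500 word probabilities underflows double precision to exactly 0, while the sum of logs stays finite. A quick sketch (the probabilities are invented):

% Illustrative only: 2500 per-word probabilities of 1/2500 each
phi = ones(1, 2500) / 2500;

% the direct product of thousands of small numbers underflows to 0,
% so the two classes would become indistinguishable
direct_product = prod(phi)   % prints 0

% the sum of logs stays finite and can still be compared across classes
log_sum = sum(log(phi))      % about -1.9560e+04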
Training code:

% train.m
% Exercise 6: Naive Bayes text classifier

clear all; close all; clc

% store the number of training examples
numTrainDocs = 700;

% store the dictionary size
numTokens = 2500;

% read the features matrix
M = dlmread('train-features.txt', ' ');
spmatrix = sparse(M(:,1), M(:,2), M(:,3), numTrainDocs, numTokens);
train_matrix = full(spmatrix);

% train_matrix now contains information about the words within the emails
% the i-th row of train_matrix represents the i-th training email
% for a particular email, the entry in the j-th column tells
% you how many times the j-th dictionary word appears in that email



% read the training labels
train_labels = dlmread('train-labels.txt');
% the i-th entry of train_labels now indicates whether document i is spam


% Find the indices for the spam and nonspam labels
spam_indices = find(train_labels);
nonspam_indices = find(train_labels == 0);

% Calculate probability of spam
prob_spam = length(spam_indices) / numTrainDocs;

% Sum the number of words in each email by summing along each row of
% train_matrix
email_lengths = sum(train_matrix, 2); % n_i: the number of dictionary tokens in each email
% Now find the total word counts of all the spam emails and nonspam emails
spam_wc = sum(email_lengths(spam_indices));       % sum_i 1{y^(i)=1} * n_i
nonspam_wc = sum(email_lengths(nonspam_indices)); % sum_i 1{y^(i)=0} * n_i

% Calculate the probability of the tokens in spam emails
% numerator:   sum_i sum_j 1{x_j^(i) = k  and  y^(i) = 1} + 1   (Laplace smoothing)
% denominator: sum_i 1{y^(i) = 1} * n_i + |V|
prob_tokens_spam = (sum(train_matrix(spam_indices, :)) + 1) ./ ...
    (spam_wc + numTokens);
% Now the k-th entry of prob_tokens_spam represents phi_(k|y=1)

% Calculate the probability of the tokens in non-spam emails
prob_tokens_nonspam = (sum(train_matrix(nonspam_indices, :)) + 1)./ ...
    (nonspam_wc + numTokens);
% Now the k-th entry of prob_tokens_nonspam represents phi_(k|y=0)
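A quick sanity check you can run once train.m finishes (my own addition, not part of the exercise code): since phi_(k|y=1) and phi_(k|y=0) are each a distribution over the 2500 dictionary words, both vectors should sum to 1:

% both vectors are full probability distributions over the dictionary,
% so each should sum to 1 (up to floating-point rounding)
sum(prob_tokens_spam)      % should print 1.0000
sum(prob_tokens_nonspam)   % should print 1.0000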

Classification (test) code:

% test.m
% Exercise 6: Naive Bayes text classifier

% read the test matrix in the same way we read the training matrix
% (test.m assumes train.m has just been run, so prob_tokens_spam,
% prob_tokens_nonspam, prob_spam and numTokens are still in the workspace)
N = dlmread('test-features.txt', ' ');
% pass the dimensions explicitly so the test matrix always has one column
% per dictionary word, even if the highest-numbered token never appears
% in the test set
spmatrix = sparse(N(:,1), N(:,2), N(:,3), max(N(:,1)), numTokens);
test_matrix = full(spmatrix);

% Store the number of test documents and the size of the dictionary
numTestDocs = size(test_matrix, 1);
numTokens = size(test_matrix, 2);


% The output vector is a vector that will store the spam/nonspam prediction
% for the documents in our test set.
output = zeros(numTestDocs, 1);

% Calculate log p(x|y=1) + log p(y=1)
% and log p(x|y=0) + log p(y=0)
% for every document
% make your prediction based on what value is higher
% (note that this is a vectorized implementation and there are other
%  ways to calculate the prediction)
log_a = test_matrix*(log(prob_tokens_spam))' + log(prob_spam);
log_b = test_matrix*(log(prob_tokens_nonspam))'+ log(1 - prob_spam);  
output = log_a > log_b;


% Read the correct labels of the test set
test_labels = dlmread('test-labels.txt');

% Compute the error on the test set
% A document is misclassified if its predicted label differs from
% the actual label, so count the number of 1's from an exclusive "or"
numdocs_wrong = sum(xor(output, test_labels))

%Print out error statistics on the test set
fraction_wrong = numdocs_wrong/numTestDocs


Note that test_matrix here holds our test data: row i stores the token counts x_k for the i-th test email. The class-conditional likelihood is therefore
$$p(x\mid y=1)=\prod_{k=1}^{|V|}\phi_{k|y=1}^{\,x_k},$$
where x is the count vector and x_k is the number of times token k appears. Because the tokens are assumed conditionally independent, the per-occurrence probabilities simply multiply, which becomes the weighted sum of logs computed by the vectorized lines above.
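In case the vectorized line is opaque, here is an equivalent (much slower) loop form of the log_a computation, my own unrolling for illustration, not part of the exercise code:

% equivalent loop form of:
%   log_a = test_matrix*(log(prob_tokens_spam))' + log(prob_spam)
log_a_loop = zeros(numTestDocs, 1);
for i = 1:numTestDocs
    s = log(prob_spam);                 % start from the log prior log p(y=1)
    for k = 1:numTokens
        % each occurrence of token k multiplies in one factor phi_(k|y=1),
        % i.e. adds x_k * log(phi_(k|y=1)) in log space
        s = s + test_matrix(i, k) * log(prob_tokens_spam(k));
    end
    log_a_loop(i) = s;
end
% log_a_loop matches log_a up to floating-point rounding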

Finally, we compare the classifier's output with the hand-labeled test set.
Misclassification rate: 1.9%
