SVM Learning: SMO (Sequential Minimal Optimization)


1. Preface

        I have been working with SVMs for a while now and have gained a rough understanding from theory to practice. I think the development of SVMs can be divided into several relatively independent parts: first, SVM theory itself, including finding the maximum-margin separating hyperplane, introducing kernel methods to greatly improve the handling of nonlinear problems, soft margin optimization with slack variables, and using the margin to quantitatively describe confidence risk; second, the development of kernel method theory, which is independent of SVMs themselves (this is also one of SVM's great strengths); and finally, the development of optimization theory, which is likewise relatively independent of SVMs. To borrow a popular phrase, the three complement and reinforce one another. The methods mentioned in this article have already been studied by many people; here I give a brief introduction plus a few thoughts of my own, as a small summary of my study of SVMs.

2. Background

        The basic theory of SVMs has appeared in earlier articles of this series. Sequential Minimal Optimization (SMO hereafter) solves the C-SVC support vector machine model, the SVM model introduced in the article "SVM Learning: Soft Margin Optimization". As the name suggests, this model's main job is classification, and it has one parameter C to tune. Taking the one-norm soft margin optimization as an example, the model (in dual form) is:

$$\max_{\alpha}\; W(\alpha) = \sum_{i=1}^{l}\alpha_i - \frac{1}{2}\sum_{i=1}^{l}\sum_{j=1}^{l} y_i y_j \alpha_i \alpha_j K(x_i, x_j)$$

$$\text{s.t.}\quad \sum_{i=1}^{l} y_i \alpha_i = 0$$

$$0 \le \alpha_i \le C,\quad i = 1, \dots, l$$

        SMO solves this convex quadratic programming problem. Here C is a very important parameter: in essence it trades off empirical risk against confidence risk; the larger C is, the larger the confidence risk and the smaller the empirical risk. Moreover, all multipliers are confined to a big box with side length C. The advent of SMO means we no longer need to resort to expensive third-party tools to solve this convex QP. There are now many improved versions of it; here I introduce its original form and ideas.

        SMO was proposed by John C. Platt of Microsoft Research in the paper "Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines" (author page: http://research.microsoft.com/en-us/people/jplatt/). Its basic idea takes the chunking method proposed by Vapnik in 1982 to its extreme: the original problem is decomposed into a series of small-scale convex QP subproblems whose solutions assemble the solution of the original problem, and each iteration optimizes a working set of only two points. SMO heuristically selects two multipliers at a time, holds all other multipliers fixed, solves for the optimal values of the chosen pair, and repeats until the stopping condition is reached.

3. The Algorithm in Detail

(1) KKT Conditions

        SMO builds all of its subsequent operations on the KKT conditions of C-SVC, which are:

$$\alpha_i = 0 \;\Rightarrow\; y_i f(x_i) \ge 1$$

$$0 < \alpha_i < C \;\Rightarrow\; y_i f(x_i) = 1$$

$$\alpha_i = C \;\Rightarrow\; y_i f(x_i) \le 1$$

          where $f(x_i) = \sum_{j=1}^{l} y_j \alpha_j K(x_j, x_i) + b$.

These conditions are in fact the KT complementarity conditions; from the article "SVM Learning: Soft Margin Optimization" we have the following conclusions:

$$\alpha_i\left(y_i f(x_i) - 1 + \xi_i\right) = 0$$

$$(C - \alpha_i)\,\xi_i = 0$$

       The information we can read off from these expressions is: when $\alpha_i = C$, the slack variable $\xi_i$ may be nonzero and $y_i f(x_i) \le 1$; the corresponding sample violates the margin (it is an actual misclassified point once $\xi_i > 1$). When $\alpha_i = 0$, the slack variable is zero and $y_i f(x_i) \ge 1$; the corresponding sample is an interior point, i.e., one that is classified correctly and lies far from the maximum-margin separating hyperplane. And when $0 < \alpha_i < C$, the slack variable is zero and $y_i f(x_i) = 1$; the corresponding sample is a support vector.
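To make the three cases concrete, here is a minimal sketch in Python (the function name kkt_case, the tolerance tol, and the returned labels are illustrative assumptions, not part of the original derivation); alpha_i is one multiplier and yfx stands for $y_i f(x_i)$:

def kkt_case(alpha_i, yfx, C, tol=1e-3):
    #a minimal sketch of the three KKT cases; tol absorbs numerical noise
    if alpha_i <= tol:                       #alpha_i == 0
        return 'interior point' if yfx >= 1 - tol else 'violates KKT'
    if alpha_i >= C - tol:                   #alpha_i == C
        return 'margin violator' if yfx <= 1 + tol else 'violates KKT'
    return 'support vector' if abs(yfx - 1) <= tol else 'violates KKT'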

(2) Stopping Conditions for the Convex Optimization Problem

       For a convex optimization problem, an implementation always needs a suitable stopping condition to end the optimization process. The stopping condition can be:

       1. Monitor the growth rate of the objective function and stop training when it drops below some tolerance. This condition is the most straightforward and simple, but it does not work well;

       2. Monitor the KKT conditions of the primal problem. For convex optimization they are necessary and sufficient conditions for convergence, but since the KKT conditions themselves are quite demanding, a tolerance is also needed: training can be considered finished once all samples satisfy the KKT conditions within the tolerance;

       3. Monitor the feasibility gap, i.e., the gap between the primal objective value and the dual objective value; for convex quadratic optimization this gap is zero at the optimum. Taking the one-norm soft margin as an example:

The difference between the primal objective and the dual objective is:

$$\text{Gap} = P(w, b, \xi) - D(\alpha) = \frac{1}{2}\|w\|^2 + C\sum_{i=1}^{l}\xi_i - \sum_{i=1}^{l}\alpha_i + \frac{1}{2}\sum_{i,j} y_i y_j \alpha_i \alpha_j K(x_i, x_j)$$

Since $w = \sum_i y_i\alpha_i\Phi(x_i)$, we have $\|w\|^2 = \sum_{i,j} y_i y_j \alpha_i \alpha_j K(x_i, x_j)$, and because $\sum_i y_i\alpha_i = 0$ this double sum equals $\sum_i \alpha_i y_i f(x_i)$; therefore

$$\text{Gap} = \sum_{i=1}^{l}\alpha_i\left(y_i f(x_i) - 1\right) + C\sum_{i=1}^{l}\xi_i,\qquad \xi_i = \max\left(0,\; 1 - y_i f(x_i)\right)$$

Define the ratio:

$$\text{ratio} = \frac{\text{Gap}}{P(w, b, \xi) + 1} = \frac{\text{Gap}}{D(\alpha) + \text{Gap} + 1}$$

This ratio reaching some tolerance can be used as the stopping condition (a small sketch follows).
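As an illustration, here is a minimal sketch of this stopping test, under the assumption that the current outputs $f(x_i)$ and the dual objective value $D(\alpha)$ are available (the function and argument names are mine, not from the text):

def feasibility_gap_ratio(alpha, y, fx, C, dual_value):
    #Gap = sum_i alpha_i*(y_i*f(x_i) - 1) + C*xi_i, with minimal feasible slack xi_i
    gap = 0.0
    for a_i, y_i, f_i in zip(alpha, y, fx):
        xi = max(0.0, 1.0 - y_i * f_i)
        gap += a_i * (y_i * f_i - 1.0) + C * xi
    primal = dual_value + gap                #P = D + Gap
    return gap / (primal + 1.0)

#stop once feasibility_gap_ratio(...) falls below the chosen tolerance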

(3) The Idea of SMO

        Following the decomposition idea, SMO fixes the size of the "chunking working set" at 2: each iteration optimizes a minimal subset of two points, for which a closed-form solution is directly available. The algorithm flow is:

[Figure: flowchart of the SMO algorithm]

(4) Analytic Solution for the Two Lagrange Multipliers

       For convenience of description, define the following symbols:

$$K_{ij} = K(x_i, x_j),\qquad s = y_1 y_2$$

$$f(x) = \sum_{j=1}^{l} y_j \alpha_j K(x_j, x) + b$$

$$v_i = \sum_{j=3}^{l} y_j \alpha_j K_{ij} = f(x_i) - b - y_1\alpha_1 K_{1i} - y_2\alpha_2 K_{2i},\qquad i = 1, 2$$

Viewed as a function of $\alpha_1$ and $\alpha_2$ only (all other multipliers held fixed), the objective function then becomes:

$$W(\alpha_1, \alpha_2) = \alpha_1 + \alpha_2 - \frac{1}{2}K_{11}\alpha_1^2 - \frac{1}{2}K_{22}\alpha_2^2 - sK_{12}\alpha_1\alpha_2 - y_1\alpha_1 v_1 - y_2\alpha_2 v_2 + W_{\text{const}}$$

where $W_{\text{const}}$ collects all terms that depend only on $\alpha_3, \dots, \alpha_l$ and is therefore constant for this subproblem.

Note the first constraint $\sum_{i=1}^{l} y_i\alpha_i = 0$: treating $\alpha_3, \dots, \alpha_l$ as constants, we get $y_1\alpha_1 + y_2\alpha_2 = -\sum_{i=3}^{l} y_i\alpha_i = \zeta$ ($\zeta$ is a constant; we do not care about its value). Multiplying both sides of the equation by $y_1$ gives $\alpha_1 = \gamma - s\alpha_2$ ($\gamma$ is a constant whose value is $y_1\zeta$; we do not care about it either). Replacing $\alpha_1$ with this expression yields an extremum problem in the single variable $\alpha_2$:

$$W(\alpha_2) = \gamma - s\alpha_2 + \alpha_2 - \frac{1}{2}K_{11}(\gamma - s\alpha_2)^2 - \frac{1}{2}K_{22}\alpha_2^2 - sK_{12}(\gamma - s\alpha_2)\alpha_2 - y_1(\gamma - s\alpha_2)v_1 - y_2\alpha_2 v_2 + W_{\text{const}}$$

Now the problem is simple. Taking the derivative with respect to $\alpha_2$ and setting it to zero gives:

$$\frac{\partial W}{\partial \alpha_2} = 1 - s + s\gamma(K_{11} - K_{12}) - (K_{11} + K_{22} - 2K_{12})\,\alpha_2 + y_2(v_1 - v_2) = 0$$

Substituting $v_1$, $v_2$ and $\gamma = \alpha_1 + s\alpha_2$ (their values before the update) into the above gives:

$$(K_{11} + K_{22} - 2K_{12})\,\alpha_2^{new} = (K_{11} + K_{22} - 2K_{12})\,\alpha_2 + y_2\left[(f(x_1) - y_1) - (f(x_2) - y_2)\right]$$

Write $\eta = K_{11} + K_{22} - 2K_{12}$ and let $E_i = f(x_i) - y_i$ denote the error term (as you can imagine, even for a correctly classified sample $E_i$ can be large). With $K(x, x') = \langle\Phi(x), \Phi(x')\rangle$ ($\Phi$ being the map from the input space to the feature space), $\eta = \|\Phi(x_1) - \Phi(x_2)\|^2$ can be viewed as a distance measuring the similarity of the two samples; in other words, once you choose a kernel function, you have already defined the similarity of elements of the input space. A small sketch of these two quantities follows.
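A minimal sketch of these two quantities (assuming K is a function K(i, j) returning the kernel value of samples i and j; the helper names are mine):

def error_term(alpha, y, K, b, i):
    #E_i = f(x_i) - y_i, with f(x) = sum_j alpha_j*y_j*K(x_j, x) + b
    f_i = sum(alpha[j] * y[j] * K(j, i) for j in range(len(alpha))) + b
    return f_i - y[i]

def eta(K, i1, i2):
    #eta = K11 + K22 - 2*K12 = squared feature-space distance between the two samples
    return K(i1, i1) + K(i2, i2) - 2.0 * K(i1, i2)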

Finally we obtain the update formula:

$$\alpha_2^{new} = \alpha_2 + \frac{y_2(E_1 - E_2)}{\eta}$$

Note the second constraint, that powerful box: $0 \le \alpha_i \le C$. It means $\alpha_2^{new}$ must also land inside this box. Taking both constraints together, the figures below make it more intuitive:

[Figure: the case where $y_1$ and $y_2$ have opposite signs]

[Figure: the case where $y_1$ and $y_2$ have the same sign]

The two multipliers must both lie inside the box with side length C and on the corresponding line, so the bounds for $\alpha_2^{new}$ are as follows:

$$y_1 \ne y_2:\quad L = \max(0,\; \alpha_2 - \alpha_1),\qquad H = \min(C,\; C + \alpha_2 - \alpha_1)$$

$$y_1 = y_2:\quad L = \max(0,\; \alpha_1 + \alpha_2 - C),\qquad H = \min(C,\; \alpha_1 + \alpha_2)$$

Clipping to these bounds gives:

$$\alpha_2^{new,clipped} = \begin{cases} H, & \alpha_2^{new} \ge H \\ \alpha_2^{new}, & L < \alpha_2^{new} < H \\ L, & \alpha_2^{new} \le L \end{cases}$$

And since $y_1\alpha_1 + y_2\alpha_2 = y_1\alpha_1^{new} + y_2\alpha_2^{new,clipped}$, eliminating gives:

$$\alpha_1^{new} = \alpha_1 + s\left(\alpha_2 - \alpha_2^{new,clipped}\right)$$
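Putting the pieces together, here is a minimal sketch of the analytic two-multiplier step for the $\eta > 0$ case (the $\eta \le 0$ case is treated in section (6) below; the function name is mine):

def two_multiplier_step(a1, a2, y1, y2, E1, E2, k11, k22, k12, C):
    s = y1 * y2
    eta = k11 + k22 - 2.0 * k12
    if y1 != y2:
        L, H = max(0.0, a2 - a1), min(C, C + a2 - a1)
    else:
        L, H = max(0.0, a1 + a2 - C), min(C, a1 + a2)
    if L >= H or eta <= 0:
        return a1, a2                        #no room to move, or boundary case
    a2_new = a2 + y2 * (E1 - E2) / eta       #unconstrained maximum
    a2_new = min(max(a2_new, L), H)          #clip into the box
    a1_new = a1 + s * (a2 - a2_new)          #keep y1*a1 + y2*a2 constant
    return a1_new, a2_new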

(5) Heuristic Selection Methods

        The chosen stopping condition determines which point selection contributes most to convergence. For example, with the feasibility-gap monitoring method, the most obvious choice is to optimize first the points that most violate the KKT conditions, where violating the KKT conditions means:

$$\alpha_i = 0 \quad\text{and}\quad y_i f(x_i) < 1$$

$$0 < \alpha_i < C \quad\text{and}\quad y_i f(x_i) \ne 1$$

$$\alpha_i = C \quad\text{and}\quad y_i f(x_i) > 1$$

From stopping condition 3 above, the points contributing most to the feasibility gap are those for which

$$\text{gap}_i = \alpha_i\left(y_i f(x_i) - 1\right) + C\,\xi_i,\qquad \text{where}\quad \xi_i = \max\left(0,\; 1 - y_i f(x_i)\right)$$

takes a large value. These points make the feasibility gap large, so they should be optimized first. The reasons are as follows:

        1. When $\alpha_i = 0$ satisfies the KKT conditions, i.e., $y_i f(x_i) \ge 1$, we have $\xi_i = 0$, so $\text{gap}_i = 0$.

When $\alpha_i = 0$ violates the KKT conditions, i.e., $y_i f(x_i) < 1$, we have $\xi_i = 1 - y_i f(x_i) > 0$, and thus $\text{gap}_i = C\,\xi_i > 0$.

So the violation of the KKT conditions enlarges the feasibility gap;

        2. When $0 < \alpha_i < C$ satisfies the KKT conditions, i.e., $y_i f(x_i) = 1$, we have $\xi_i = 0$, so $\text{gap}_i = 0$.

When $0 < \alpha_i < C$ violates the KKT conditions, i.e., $y_i f(x_i) \ne 1$:

              if $y_i f(x_i) > 1$, then $\xi_i = 0$ and $\text{gap}_i = \alpha_i\left(y_i f(x_i) - 1\right) > 0$, where $\alpha_i > 0$;

              if $y_i f(x_i) < 1$, then $\xi_i = 1 - y_i f(x_i) > 0$ and $\text{gap}_i = (C - \alpha_i)\,\xi_i > 0$, where $C - \alpha_i > 0$.

So the violation of the KKT conditions again enlarges the feasibility gap;

        3. When $\alpha_i = C$ satisfies the KKT conditions, i.e., $y_i f(x_i) \le 1$, we have $\xi_i = 1 - y_i f(x_i) \ge 0$, so $\text{gap}_i = C\left(y_i f(x_i) - 1\right) + C\,\xi_i = 0$.

When $\alpha_i = C$ violates the KKT conditions, i.e., $y_i f(x_i) > 1$, we have $\xi_i = 0$ and $\text{gap}_i = C\left(y_i f(x_i) - 1\right) > 0$, where $y_i f(x_i) - 1 > 0$.

So once more the violation of the KKT conditions enlarges the feasibility gap.

        SMO's heuristic selection uses two strategies:

        Heuristic 1:

        In the outermost loop: first, select from all samples a multiplier that violates the KKT conditions, pick its partner with "Heuristic 2", and optimize the pair; next, select a KKT-violating multiplier from the non-bound samples as the outer-loop choice, again pairing it via "Heuristic 2" and optimizing (non-bound samples are preferred because they offer a better chance of finding KKT violators); finally, if none of the non-bound samples violates the KKT conditions, search the whole sample set again, until no multiplier in the whole set needs changing or some other stopping condition is satisfied.

        Heuristic 2:

        The selection criterion of the inner loop can be seen from the update formula:

$$\alpha_2^{new} = \alpha_2 + \frac{y_2(E_1 - E_2)}{\eta}$$

To speed up the iteration of the second multiplier, the step $\left|E_1 - E_2\right|/\eta$ should be as large as possible; since there is not much to be done about $\eta$, the only option is to maximize $|E_1 - E_2|$.

The method for determining the second multiplier (a sketch of step 1 follows the list):

        1. First, among the non-bound multipliers, find the sample that maximizes $|E_1 - E_2|$;
        2. If step 1 finds nothing, scan the non-bound multiplier samples starting from a random position;
        3. If step 2 also finds nothing, scan the whole sample set starting from a random position (covering both bound and non-bound multipliers).
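A minimal sketch of step 1 of this cascade (errors is an assumed list of cached $E_i$ values; steps 2 and 3 correspond to FindRandomNonbound and FindRandom in the implementation of section 4):

def choose_second(E1, alpha, errors, C):
    #among non-bound multipliers, pick the index maximizing |E1 - E2|
    best, i2 = -1.0, -1
    for i in range(len(alpha)):
        if 0 < alpha[i] < C and abs(E1 - errors[i]) > best:
            best, i2 = abs(E1 - errors[i]), i
    return i2                                #-1 means: fall back to step 2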

(6) Notes on Optimizing the Two Multipliers

         From the expression

$$W(\alpha_2) = \gamma - s\alpha_2 + \alpha_2 - \frac{1}{2}K_{11}(\gamma - s\alpha_2)^2 - \frac{1}{2}K_{22}\alpha_2^2 - sK_{12}(\gamma - s\alpha_2)\alpha_2 - y_1(\gamma - s\alpha_2)v_1 - y_2\alpha_2 v_2 + W_{\text{const}}$$

        we can see that:

$$\frac{\partial^2 W}{\partial \alpha_2^2} = -(K_{11} + K_{22} - 2K_{12}) = -\eta$$

So for this single-variable quadratic function: if its second derivative $-\eta < 0$ (i.e., $\eta > 0$), the parabola opens downward and the multiplier can be updated with the iteration above; if $\eta \le 0$, the parabola opens upward (or degenerates to a line) and the objective can attain its extremum only on the boundary. In other words, SMO must be able to handle any value of $\eta$, so for $\eta \le 0$ we have the following:

1. When $\alpha_2^{new} = L$:

$$L_1 = \alpha_1 + s(\alpha_2 - L),\qquad W_L = W(L_1, L)$$

2. When $\alpha_2^{new} = H$:

$$H_1 = \alpha_1 + s(\alpha_2 - H),\qquad W_H = W(H_1, H)$$

3. Where, for a candidate pair $(\alpha_1', \alpha_2')$, the restricted objective is evaluated as:

$$W(\alpha_1', \alpha_2') = \alpha_1'\left[y_1(b - E_1) + \alpha_1 K_{11} + s\,\alpha_2 K_{12}\right] + \alpha_2'\left[y_2(b - E_2) + \alpha_2 K_{22} + s\,\alpha_1 K_{12}\right] - \frac{1}{2}K_{11}\alpha_1'^2 - \frac{1}{2}K_{22}\alpha_2'^2 - s K_{12}\,\alpha_1'\alpha_2'$$

Substituting the two endpoint multipliers gives the objective values in the two cases, $W_L$ and $W_H$. Obviously the multiplier moves to whichever endpoint yields the larger objective value; if the difference between the two objective values is within some specified precision, the optimization has made no progress.
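A minimal sketch of this endpoint test; W2 below mirrors the restricted objective above and is the same computation performed by the W method in the implementation of section 4:

def W2(a1_cand, a2_cand, a1, a2, y1, y2, E1, E2, b, k11, k22, k12):
    #objective restricted to the two multipliers, evaluated at a candidate pair
    s = y1 * y2
    w = a1_cand * (y1 * (b - E1) + a1 * k11 + s * a2 * k12)
    w += a2_cand * (y2 * (b - E2) + a2 * k22 + s * a1 * k12)
    w -= 0.5 * k11 * a1_cand ** 2 + 0.5 * k22 * a2_cand ** 2
    w -= s * k12 * a1_cand * a2_cand
    return w

#WL = W2(a1 + s*(a2 - L), L, ...), WH = W2(a1 + s*(a2 - H), H, ...);
#move alpha2 to L if WL - WH > eps, to H if WH - WL > eps, else no progress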

        Also note that every iteration needs the outputs $f(x_i)$ in order to compute the $E_i$, so the threshold $b$ must be updated as well, so that the new multipliers $\alpha_1^{new}$, $\alpha_2^{new,clipped}$ satisfy the KKT conditions. Considering that at least one of $\alpha_1^{new}$, $\alpha_2^{new,clipped}$ usually lies strictly inside the bounds, $y_i f^{new}(x_i) = 1$ must hold there, and the update for $b$ can be derived as follows (a sketch of all four cases follows the list):

1. Suppose $\alpha_1^{new}$ is inside the bounds; then:

$$y_1 f^{new}(x_1) = 1,\qquad \text{i.e.}\qquad E_1^{new} = f^{new}(x_1) - y_1 = 0$$

And because:

$$f^{new}(x_1) - f(x_1) = y_1(\alpha_1^{new} - \alpha_1)K_{11} + y_2(\alpha_2^{new,clipped} - \alpha_2)K_{12} + b^{new} - b$$

we have:

$$E_1 + y_1(\alpha_1^{new} - \alpha_1)K_{11} + y_2(\alpha_2^{new,clipped} - \alpha_2)K_{12} + b^{new} - b = 0$$

Moving terms across gives:

$$b_1^{new} = b - E_1 - y_1(\alpha_1^{new} - \alpha_1)K_{11} - y_2(\alpha_2^{new,clipped} - \alpha_2)K_{12};$$

2. Suppose $\alpha_2^{new,clipped}$ is inside the bounds; then, by the same argument:

$$b_2^{new} = b - E_2 - y_1(\alpha_1^{new} - \alpha_1)K_{12} - y_2(\alpha_2^{new,clipped} - \alpha_2)K_{22};$$

3. Suppose both $\alpha_1^{new}$ and $\alpha_2^{new,clipped}$ are inside the bounds; then the values from case 1 and case 2 are equal, and either can be taken;

4. Suppose neither $\alpha_1^{new}$ nor $\alpha_2^{new,clipped}$ is inside the bounds; then $b$ can take any value between the case-1 and case-2 values (commonly their average).
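A minimal sketch of the four cases (d1 and d2 denote the changes $\alpha_1^{new} - \alpha_1$ and $\alpha_2^{new,clipped} - \alpha_2$; the function name is mine):

def update_threshold(b, E1, E2, y1, y2, d1, d2, a1_new, a2_new, k11, k22, k12, C):
    b1 = b - E1 - y1 * d1 * k11 - y2 * d2 * k12    #case 1: alpha1_new interior
    b2 = b - E2 - y1 * d1 * k12 - y2 * d2 * k22    #case 2: alpha2_new interior
    if 0 < a1_new < C:
        return b1                                  #covers case 3 as well (b1 == b2)
    if 0 < a2_new < C:
        return b2
    return (b1 + b2) / 2.0                         #case 4: any value in between works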

(7) Speeding Up SMO

       From an implementation standpoint, the places where standard SMO can be sped up are:

       1. Use caching wherever possible, e.g., cache the kernel matrix to reduce repeated computation, at the cost of higher space complexity;

       2. When the SVM kernel is linear, $w$ can be updated directly, since recomputing $f(x)$ through all support vectors each time is expensive; the old multiplier information can be used to update it as follows (see the sketch after this list):

$$w^{new} = w + y_1(\alpha_1^{new} - \alpha_1)x_1 + y_2(\alpha_2^{new,clipped} - \alpha_2)x_2$$

An example applying this property can be found in "SVM Learning: Coordinate Descent Method";

       3. Look for parallelizable spots and improve them with parallel methods, for example with MPI: split the samples into several parts, and when searching for the maximal multiplier, first find the local maximum on each node and then pick the global maximum among them; likewise, if the stopping condition monitors the duality gap, each node can compute its local feasibility gap and the master node can accumulate them into the global feasibility gap.
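As an illustration of item 2, here is a minimal sketch of the linear-kernel shortcut (x1 and x2 are the two training vectors as plain lists, d1 and d2 the multiplier changes as above; the function name is mine):

def update_w(w, x1, x2, y1, y2, d1, d2):
    #w_new = w + y1*d1*x1 + y2*d2*x2, so f(x) = <w, x> + b stays O(n) per evaluation
    return [w_j + y1 * d1 * x1_j + y2 * d2 * x2_j
            for w_j, x1_j, x2_j in zip(w, x1, x2)]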

       There are many papers on improving standard SMO. For example, heuristically selecting multipliers via the "Maximal Violating Pair" is a very effective method, and there are also methods using "Second Order Information". In my opinion the ideal algorithm should both converge considerably faster and be highly parallelizable.

4. Implementation

        Here is a simple implementation on top of cowman 望達's PyMing-v0.2 platform. I'm new to Python, so the code is rough; please go easy on the bricks. PyMing currently seems to support Python versions below 3.0. The tools I used to develop the Python code are: (python-2.6) + (PyQt-Py2.6-x86-gpl-4.8.4-1) + (Eric4-4.4.14). The biggest advantages of Eric are convenient debugging and good IntelliSense. Installation is simple: install python-2.6 first, then PyQt (keep clicking Next), and finally Eric. Create a project as shown:

[Figure: Eric4 project layout]

The implementation follows Platt's pseudocode. Three files need to be added: a classifier standard_smo_csvc under py_mining_0_2_D\pymining\classifier; a test example standard_smo_csvc_train_test under py_mining_0_2_D\example; and the configuration file test under py_mining_0_2_D\example\conf needs a small change.

1. standard_smo_csvc.py

import math
import pickle
import random
 
from ..math.matrix import Matrix
from ..math.text2matrix import Text2Matrix
from ..nlp.segmenter import Segmenter
from ..common.global_info import GlobalInfo
from ..common.configuration import Configuration
 
class StandardSMO:
    '''Platt's standard SMO algorithm for csvc.'''
    
    def __init__(self,config, nodeName, loadFromFile = False, C = 100, tolerance = 0.001):
        #store a number nearly zero.
        self.accuracy = 1E-3
        #store penalty coefficient of slack variable.
        self.C = C
        #store tolerance of KKT conditions.
        self.tolerance = tolerance        
        #store isTrained by data.
        self.istrained = loadFromFile
 
        #store lagrange multipiers.
        self.alpha = []
        #store weight
        self.w = []
        #store threshold.
        self.b = float(0)        
        #store kii
        self.kcache = {}
        
 
        #-------------------begin model info-------------------------------
        self.curNode = config.GetChild(nodeName)
        self.modelPath = self.curNode.GetChild("model_path").GetValue()
        self.logPath = self.curNode.GetChild("log_path").GetValue()
        #-------------------end  model info-------------------------------
 
        #-------------------begin kernel info-------------------------------
        self.curNode = config.GetChild(nodeName)
        self.kernelNode = self.curNode.GetChild("kernel");
        self.kernelName = self.kernelNode.GetChild("name").GetValue();
        #to get parameters from top to bottom -> from left to right -> from inner to outer.        
        self.parameters = self.kernelNode.GetChild("parameters").GetValue().split(',');
        #-------------------end  kernel info-------------------------------
 
        if (loadFromFile):
            f = open(self.modelPath, "r")
            modelStr = pickle.load(f)
            [self.alphay, self.sv, self.b, self.w] = pickle.loads(modelStr)
            f.close()
            self.istrained = True
 
    def DotProduct(self,i1,i2):
        '''To get vector's dot product for training.'''
 
        dot = float(0)
        for i in range(0,self.trainx.nCol):
            dot += self.trainx.Get(i1,i) * self.trainx.Get(i2,i)  
        return dot
        
    def Kernel(self):
        '''To get kernel function with configuration for training.
 
            kernel function includes RBF,Linear and so on.'''      
 
        if self.kernelName == 'RBF':
            return lambda xi,yi: math.exp((2*self.DotProduct(xi,yi)-self.DotProduct(xi,xi)-self.DotProduct(yi,yi))/(2*float(self.parameters[0])*float(self.parameters[0])))
        elif self.kernelName == 'Linear':
            return lambda xi,yi:self.DotProduct(xi,yi) + float(self.parameters[0])
        elif self.kernelName == 'Polynomial':
            return lambda xi,yi: (float(self.parameters[0]) * self.DotProduct(xi,yi) + float(self.parameters[1])) ** int(self.parameters[2])
    
    def DotVectorProduct(self,v1,v2):
        '''To get vector's dot product for testing.'''
 
        if len(v1) != len(v2):
            print 'The dimensions of the two vectors should be equal'
            return 0.0
        dot = float(0)
        for i in range(0,len(v1)):
            dot += v1[i] * v2[i]
        return dot
        
    def KernelVector(self, v1, v2):
        '''To get kernel function for testing.'''
        
        if self.kernelName == 'RBF':
            return math.exp((2*self.DotVectorProduct(v1, v2)-self.DotVectorProduct(v1, v1)-self.DotVectorProduct(v2, v2))/(2*float(self.parameters[0])*float(self.parameters[0])))
        elif self.kernelName == 'Linear':
            return self.DotVectorProduct(v1, v2) + float(self.parameters[0])
        elif self.kernelName == 'Polynomial':
            return (float(self.parameters[0]) * self.DotVectorProduct(v1,v2) + float(self.parameters[1])) ** int(self.parameters[2])
        
    def F(self,i1):
        '''To calculate the output of a sample.
 
            return output.'''
                
        if self.kernelName == 'Linear':
            dot = 0
            for i in range(0,self.trainx.nCol):
                dot += self.w[i] * self.trainx.Get(i1,i);    
            return dot + self.b
 
        K = self.Kernel()   
        final = 0.0
        for i in range(0,len(self.alpha)):
            if self.alpha[i] > 0:
                key1 = '%s%s%s'%(str(i1), '-', str(i))
                key2 = '%s%s%s'%(str(i), '-', str(i1))
                if self.kcache.has_key(key1):
                    k = self.kcache[key1]
                elif self.kcache.has_key(key2):
                    k = self.kcache[key2]
                else:
                    k =  K(i1,i)
                    self.kcache[key1] = k
                    
                final += self.alpha[i] * self.trainy[i] * k
        final += self.b
        return final
 
    def examineExample(self,i1):
        '''To find the first lagrange multipliers.
 
                then find the second lagrange multipliers.'''
        y1 = self.trainy[i1]
        alpha1 = self.alpha[i1]
 
        E1 = self.F(i1) - y1
 
        kkt = y1 * E1
 
        if (kkt < -self.tolerance and alpha1 < self.C) or (kkt > self.tolerance and alpha1 > 0):#violates the KKT conditions
            if self.FindMaxNonbound(i1,E1):
                return 1
            elif self.FindRandomNonbound(i1):
                return 1
            elif self.FindRandom(i1):
                return 1
        return 0
 
    def FindMaxNonbound(self,i1,E1):
        '''To find second lagrange multipliers from non-bound.
 
            condition is maximum |E1-E2| of non-bound lagrange multipliers.'''
        i2 = -1
        maxe1e2 = None
        for i in range(0,len(self.alpha)):
            if self.alpha[i] > 0 and self.alpha[i] < self.C:
                E2 = self.F(i) - self.trainy[i]
                tmp = math.fabs(E1-E2)
                if maxe1e2 == None or maxe1e2 < tmp:
                    maxe1e2 = tmp
                    i2 = i
        if i2 >= 0 and self.StepOnebyOne(i1,i2) :
            return  1              
        return 0
 
    def FindRandomNonbound(self,i1):
        '''To find second lagrange multipliers from non-bound.
 
            condition is random of non-bound lagrange multipliers.'''
        k = random.randint(0,len(self.alpha)-1)
        for i in range(0,len(self.alpha)):
            i2 = (i + k)%len(self.alpha)
            if self.alpha[i2] > 0 and self.alpha[i2] < self.C and self.StepOnebyOne(i1,i2):
                return 1
        return 0
 
    def FindRandom(self,i1):
        '''To find second lagrange multipliers from all.
 
            condition is random one of all lagrange multipliers.'''
        k = random.randint(0,len(self.alpha)-1)
        for i in range(0,len(self.alpha)):
            i2 = (i + k)%len(self.alpha)
            if self.StepOnebyOne(i1,i2):
                return 1
        return 0
 
    def W(self,alpha1new,alpha2newclipped,i1,i2,E1,E2, k11, k22, k12):
        '''To calculate W value.'''
 
        K = self.Kernel()
        alpha1 = self.alpha[i1]
        alpha2 = self.alpha[i2]
        y1 = self.trainy[i1]
        y2 = self.trainy[i2]
        s = y1 * y2
        
        w1 = alpha1new * (y1 * (self.b - E1) + alpha1 * k11 + s * alpha2 * k12)
        w1 += alpha2newclipped * (y2 * (self.b - E2) + alpha2 * k22 + s * alpha1 * k12)
        w1 = w1 - k11 * alpha1new * alpha1new/2 - k22 * alpha2newclipped * alpha2newclipped/2 - s * k12 * alpha1new * alpha2newclipped
        return w1
 
    def StepOnebyOne(self,i1,i2):
        '''To solve two lagrange multipliers problem.
            the algorithm can reference the blog.'''
 
        if i1==i2:
            return 0
 
        #to get kernel function.
        K = self.Kernel()
        
        alpha1 = self.alpha[i1]
        alpha2 = self.alpha[i2]
        alpha1new = -1.0
        alpha2new = -1.0
        alpha2newclipped = -1.0
        y1 = self.trainy[i1]
        y2 = self.trainy[i2]
        s = y1 * y2
        
        key11 = '%s%s%s'%(str(i1), '-', str(i1))
        key22 = '%s%s%s'%(str(i2), '-', str(i2))
        key12 = '%s%s%s'%(str(i1), '-', str(i2))
        key21 = '%s%s%s'%(str(i2), '-', str(i1))
        if self.kcache.has_key(key11):
            k11 = self.kcache[key11]
        else:
            k11 = K(i1,i1)
            self.kcache[key11] = k11    
            
        if self.kcache.has_key(key22):
            k22 = self.kcache[key22]
        else:
            k22 = K(i2,i2)
            self.kcache[key22] = k22
            
        if self.kcache.has_key(key12):
            k12 = self.kcache[key12]
        elif self.kcache.has_key(key21):
            k12 = self.kcache[key21]
        else:
            k12 = K(i1,i2)
            self.kcache[key12] = k12       
        
        eta = k11 + k22 - 2 * k12
        
        E1 = self.F(i1) - y1        
        E2 = self.F(i2) - y2                
 
        #to calucate bound.
        L = 0.0
        H = 0.0
        if y1*y2 == -1:
            gamma = alpha2 - alpha1
            if gamma > 0:
                L = gamma
                H = self.C
            else:
                L = 0
                H = self.C + gamma            
 
        if y1*y2 == 1:
            gamma = alpha2 + alpha1
            if gamma - self.C > 0:
                L = gamma - self.C
                H = self.C
            else:
                L = 0
                H = gamma
        if H == L:
            return 0
        #------------------------begin to move lagrange multipliers.----------------------------
        if eta > 0:
            #parabola opens downward: take the analytic (unconstrained) maximum of alpha2
            alpha2new = alpha2 + y2 * (E1 - E2)/eta
            
            if alpha2new < L:
                alpha2newclipped = L
            elif alpha2new > H:
                 alpha2newclipped = H
            else:
                alpha2newclipped = alpha2new
        else:
            #eta <= 0: the objective attains its maximum at an endpoint of [L, H]
            w1 = self.W(alpha1 + s * (alpha2 - L),L,i1,i2,E1,E2, k11, k22, k12)
            w2 = self.W(alpha1 + s * (alpha2 - H),H,i1,i2,E1,E2, k11, k22, k12)
            if w1 - w2 > self.accuracy:
                alpha2newclipped = L
            elif w2 - w1 > self.accuracy:
                alpha2newclipped = H
            else:
                alpha2newclipped = alpha2  
        
        if math.fabs(alpha2newclipped - alpha2) < self.accuracy * (alpha2newclipped + alpha2 + self.accuracy):
            return 0
        
        alpha1new = alpha1 + s * (alpha2 - alpha2newclipped)
        if alpha1new < 0:
            alpha2newclipped += s * alpha1new
            alpha1new = 0
        elif alpha1new > self.C:
            alpha2newclipped += s * (alpha1new - self.C)
            alpha1new = self.C
        #------------------------end   to move lagrange multipliers.----------------------------
        if alpha1new > 0 and alpha1new < self.C:
            self.b += (alpha1-alpha1new) * y1 * k11 + (alpha2 - alpha2newclipped) * y2 *k12 - E1
        elif alpha2newclipped > 0 and alpha2newclipped < self.C:
            self.b += (alpha1-alpha1new) * y1 * k12 + (alpha2 - alpha2newclipped) * y2 *k22 - E2
        else:
            b1 = (alpha1-alpha1new) * y1 * k11 + (alpha2 - alpha2newclipped) * y2 *k12 - E1 + self.b
            b2 = (alpha1-alpha1new) * y1 * k12 + (alpha2 - alpha2newclipped) * y2 *k22 - E2 + self.b
            self.b = (b1 + b2)/2
        
        if self.kernelName == 'Linear':
            for j in range(0,self.trainx.nCol):
                self.w[j] += (alpha1new - alpha1) * y1 * self.trainx.Get(i1,j) + (alpha2newclipped - alpha2) * y2 * self.trainx.Get(i2,j)
                
        self.alpha[i1] = alpha1new
        self.alpha[i2] = alpha2newclipped
        
        print 'a', i1, '=',alpha1new,'a', i2,'=', alpha2newclipped
        return 1        
       
    def Train(self,trainx,trainy):
        '''To train samples.
 
            self.trainx is training matrix and self.trainy is classifying label'''
 
        self.trainx = trainx
        self.trainy = trainy
        
        if len(self.trainy) != self.trainx.nRow:
            print "ERROR!, x.nRow should == len(y)"
            return 0
            
        numChanged = 0;
        examineAll = 1;
        #to initialize all lagrange multipiers with zero.
        for i in range(0,self.trainx.nRow):
            self.alpha.append(0.0)
        #to initialize w with zero.
        for j in range(0,self.trainx.nCol):
            self.w.append(float(0))
 
        while numChanged > 0 or examineAll:
            numChanged = 0
            if examineAll:
                #first pass, or after a full pass with no progress: examine all samples
                for k in range(0,self.trainx.nRow):
                    numChanged += self.examineExample(k)
            else:
                #otherwise examine only the non-bound lagrange multipliers
                for k in range(0,self.trainx.nRow):
                    if self.alpha[k] != 0 and self.alpha[k] != self.C:
                        numChanged += self.examineExample(k)
            print 'numChanged =', numChanged
          
            if(examineAll == 1):
                examineAll = 0
            elif(numChanged == 0):
                examineAll = 1
        else:
            #store support vector machine.                
            self.alphay = []
            self.index = []
            for i in range(0,len(self.alpha)):
                if self.alpha[i] > 0:
                    self.index.append(i)
                    self.alphay.append(self.alpha[i] * self.trainy[i])
                    
            self.sv = [[0 for j in range(self.trainx.nCol)]  for i in range(len(self.index))]
                
            for i in range(0, len(self.index)):
                for j in range(0,self.trainx.nCol):
                    self.sv[i][j] = self.trainx.Get(self.index[i], j)
                
            #dump model path
            f = open(self.modelPath, "w")
            modelStr = pickle.dumps([self.alphay, self.sv, self.b, self.w], 1)
            pickle.dump(modelStr, f)
            f.close()   
            
            self.istrained = True
            
    def Test(self,testx,testy):
        '''To test samples.
 
            self.testx is training matrix and self.testy is classifying label'''    
 
        #check parameter
        if (not self.istrained):
            print "Error!, not trained!"
            return False
        if (testx.nRow != len(testy)):
            print "Error! testx.nRow should == len(testy)"
            return False
            
        self.trainx = testx
        self.trainy = testy
        correct = 0.0
        for i in range(0, self.trainx.nRow):
            fxi = 0.0
            rowvector = [self.trainx.Get(i, k) for k in range(0, self.trainx.nCol)]
 
            if self.kernelName == 'Linear':
                #use w directly; KernelVector would wrongly add the linear kernel's constant term
                fxi += self.DotVectorProduct(self.w, rowvector) + self.b
            else:
                for j in range(0, len(self.alphay)):                  
                    fxi += self.alphay[j] * self.KernelVector(self.sv[j], rowvector) 
                fxi += self.b
                     
            if fxi * self.trainy[i] >= 0:
                correct +=1
            
            print 'output is', fxi, 'label is', self.trainy[i]
            
        print 'acu=', correct/len(self.trainy)
            
            
    
                

2. standard_smo_csvc_train_test

import sys, os
import math
sys.path.append(os.path.join(os.getcwd(), '../'))
 
from pymining.math.matrix import Matrix
from pymining.math.text2matrix import Text2Matrix
from pymining.nlp.segmenter import Segmenter
from pymining.common.global_info import GlobalInfo
from pymining.common.configuration import Configuration
from pymining.preprocessor.chisquare_filter import ChiSquareFilter
from pymining.classifier.standard_smo_csvc import StandardSMO
 
if __name__ == "__main__":
    config = Configuration.FromFile("conf/test.xml")
    GlobalInfo.Init(config, "__global__")
    txt2mat = Text2Matrix(config, "__matrix__")
    [trainx, trainy] = txt2mat.CreateTrainMatrix("data/train-csvc.txt")
    chiFilter = ChiSquareFilter(config, "__filter__")
    chiFilter.TrainFilter(trainx, trainy)
    [trainx, trainy] = chiFilter.MatrixFilter(trainx, trainy)
        
    nbModel = StandardSMO(config, "standard_smo_csvc")
    nbModel.Train(trainx, trainy)
          
    [trainx, trainy] = txt2mat.CreatePredictMatrix("data/train-csvc-test.txt")
    [trainx, trainy] = chiFilter.MatrixFilter(trainx, trainy)
      
    nbModel = StandardSMO(config, "standard_smo_csvc", True)
    nbModel.Test(trainx, trainy)
    

3. test.xml

<config>
  <__segmenter__>
    <main_dict>dict/dict.main</main_dict>
  </__segmenter__>
 
  <__matrix__>
  </__matrix__>
 
  <__global__>
    <term_to_id>mining/term_to_id</term_to_id>
    <id_to_term>mining/id_to_term</id_to_term>
    <id_to_doc_count>mining/id_to_doc_count</id_to_doc_count>
    <class_to_doc_count>mining/class_to_doc_count</class_to_doc_count>
    <id_to_idf>mining/id_to_idf</id_to_idf>
  </__global__>
 
  <__filter__>
    <rate>0.9</rate>
    <method>max</method>
    <log_path>mining/filter.log</log_path>
    <model_path>mining/filter.model</model_path>
  </__filter__>
 
  <naive_bayes>
    <model_path>mining/naive_bayes.model</model_path>
    <log_path>mining/naive_bayes.log</log_path>
  </naive_bayes>
 
  <twc_naive_bayes>
    <model_path>mining/naive_bayes.model</model_path>
    <log_path>mining/naive_bayes.log</log_path>
  </twc_naive_bayes>
 
  <standard_smo_csvc>
    <model_path>mining/standard_smo.model</model_path>
    <log_path>mining/standard_smo.log</log_path>
    <kernel>
      <name>RBF</name>
      <parameters>10</parameters>
    </kernel>
  </standard_smo_csvc>
</config>

 The code can be obtained here.

5. References

        1) Platt (1998): Sequential Minimal Optimization: A Fast Algorithm for Training Support Vector Machines, http://research.microsoft.com/~jplatt/smoTR.pdf

        2) An Introduction to Support Vector Machines and Other Kernel-based Learning Methods

        3) Osuna et al.: An Improved Training Algorithm for Support Vector Machines

        4) Keerthi et al. (2001): Improvements to Platt's SMO Algorithm for SVM Classifier Design

        5) Fan, Chen, and Lin (2005): Working Set Selection Using Second Order Information

        6) Google

