隨機森林原理介紹與適用情況(綜述篇)

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"一句話介紹"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"隨機森林是一種集成算法(Ensemble Learning),它屬於Bagging類型,通過組合多個弱分類器,最終結果通過投票或取均值,使得整體模型的結果具有較高的精確度和泛化性能。其可以取得不錯成績,主要歸功於“"},{"type":"text","marks":[{"type":"strong"}],"text":"隨機"},{"type":"text","text":"”和“"},{"type":"text","marks":[{"type":"strong"}],"text":"森林"},{"type":"text","text":"”,一個使它具有抗過擬合能力,一個使它更加精準。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/cd/cde054dfec504226e697df6c6c96af8c.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Bagging結構"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Bagging"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Bagging也叫自舉匯聚法(bootstrap aggregating),是一種在原始數據集上通過有放回抽樣重新選出k個新數據集來訓練分類器的集成技術。它使用訓練出來的分類器的集合來對新樣本進行分類,然後用多數投票或者對輸出求均值的方法統計所有分類器的分類結果,結果最高的類別即爲最終標籤。此類算法可以有效降低bias,並能夠降低variance。"}]},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"自助法"},{"type":"text","text":"】它通過自助法(bootstrap)重採樣技術,從訓練集裏面採集固定個數的樣本,但是每採集一個樣本後,都將樣本放回。也就是說,之前採集到的樣本在放回後有可能繼續被採集到。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"OOB"},{"type":"text","text":"】在Bagging的每輪隨機採樣中,訓練集中大約有36.8%的數據沒有被採樣集採集中。對於這部分沒采集到的數據,我們常常稱之爲袋外數據(Out Of Bag,簡稱OOB)。這些數據沒有參與訓練集模型的擬合,因此可以用來檢測模型的泛化能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"隨機性"},{"type":"text","text":"】對於我們的Bagging算法,一般會對樣本使用boostrap進行隨機採集,每棵樹採集相同的樣本數量,一般小於原始樣本量。這樣得到的採樣集每次的內容都不同,通過這樣的自助法生成k個分類樹組成隨機森林,做到"},{"type":"text","marks":[{"type":"strong"}],"text":"樣本隨機性"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"輸出"},{"type":"text","text":"】Bagging的集合策略也比較簡單,對於分類問題,通常使用簡單投票法,得到最多票數的類別或者類別之一爲最終的模型輸出。對於迴歸問題,通常使用簡單平均法,對T個弱學習器得到的迴歸結果進行算術平均得到最終的模型輸出。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"隨機森林"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"隨機森林(Random Forest,RF)是Bagging算法的一種,其實在介紹完Bagging算法之後,隨機森林幾乎是呼之欲出的,RF相對於Bagging只是對其中一些細節做了自己的規定和設計。"}]},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"弱分類器"},{"type":"text","text":"】首先,RF使用了CART決策樹作爲弱學習器。換句話說,其實我們只是將使用CART決策樹作爲弱學習器的Bagging方法稱爲隨機森林。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"隨機性"},{"type":"text","text":"】同時,在生成每棵樹的時候,每個樹選取的特徵都僅僅是隨機選出的少數特徵,一般默認取特徵總數m的開方。而一般的CART樹則是會選取全部的特徵進行建模。因此,不但特徵是隨機的,也保證了"},{"type":"text","marks":[{"type":"strong"}],"text":"特徵隨機性"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"樣本量"},{"type":"text","text":"】相對於一般的Bagging算法,RF會選擇採集和訓練集樣本數N一樣個數的樣本。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"特點"},{"type":"text","text":"】由於隨機性,對於降低模型的方差很有作用,故隨機森林一般不需要額外做剪枝,即可以取得較好的泛化能力和抗過擬合能力(Low Variance)。當然對於訓練集的擬合程度就會差一些,也就是模型的偏倚會大一些(High Bias),"},{"type":"text","marks":[{"type":"strong"}],"text":"僅僅是相對的"},{"type":"text","text":"。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"CART樹"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"隨機森林的弱分類器使用的是CART數,CART決策樹又稱分類迴歸樹。當數據集的因變量爲連續性數值時,該樹算法就是一個迴歸樹,可以用葉節點觀察的均值作爲預測值;當數據集的因變量爲離散型數值時,該樹算法就是一個分類樹,可以很好的解決分類問題。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"但需要注意的是,該算法是一個"},{"type":"text","marks":[{"type":"strong"}],"text":"二叉樹"},{"type":"text","text":",即每一個非葉節點只能引伸出兩個分支,所以當某個非葉節點是多水平(2個以上)的離散變量時,該變量就有可能被多次使用。同時,若某個非葉節點是連續變量時,決策樹也將把他當做離散變量來處理(即在有限的可能值中做劃分)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"特徵選擇"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"特徵選擇目前比較流行的方法是信息增益、增益率、基尼係數和卡方檢驗。這裏主要介紹基於基尼係數(GINI)的特徵選擇,因爲隨機森林採用的CART決策樹就是基於基尼係數選擇特徵的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基尼係數的選擇的標準就是每個子節點達到最高的純度,即落在子節點中的所有觀察都屬於同一個分類,此時基尼係數最小,純度最高,不確定度最小。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於一般的決策樹,假如總共有K類,樣本屬於第k類的概率爲:pk,則該概率分佈的基尼指數爲:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/43/43e2084186f8e3444369d2caae843637.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基尼指數越大,說明不確定性就越大;基尼係數越小,不確定性越小,數據分割越徹底,越乾淨。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"對於CART樹而言,由於是二叉樹,可以通過下面的表示:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/50/50a8eabb71eab8edee7a5b5ba7163349.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在我們遍歷每個特徵的每個分割點時,當使用特徵A=a,將D劃分爲兩部分,即D1(滿足A=a的樣本集合),D2(不滿足A=a的樣本集合)。則在特徵A=a的條件下D的基尼指數爲:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/08/08c97d2940347c8d97861eec00e45580.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Gini(D):表示集合D的不確定性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Gini(A,D):表示經過A=a分割後的集合D的不確定性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"隨機森林中的每棵CART決策樹都是通過不斷遍歷這棵樹的特徵子集的所有可能的分割點,尋找Gini係數最小的特徵的分割點,將數據集分成兩個子集,直至滿足停止條件爲止。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"抗過擬合"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先,正如Bagging介紹中提到的,每個樹選取使用的特徵時,都是從全部m個特徵中隨機產生的,本身已經降低了過擬合的風險和趨勢。模型不會被特定的特徵值或者特徵組合所決定,隨機性的增加,將控制模型的擬合能力不會無限提高。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二,與決策樹不同,RF對決策樹的建立做了改進。對於普通的決策樹,我們會在節點上所有的m個樣本特徵中選擇一個最優的特徵來做決策樹的左右子樹劃分。但是RF的每個樹,其實選用的特徵是一部分,在這些少量特徵中,選擇一個最優的特徵來做決策樹的左右子樹劃分,將隨機性的效果擴大,進一步增強了模型的泛化能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"假設每棵樹選取msub個特徵,msub越小,此時模型對於訓練集的擬合程度會變差,偏倚增加,但是會泛化能力更強,模型方差減小。msub越大則相反。在實際使用中,一般會將msub的取值作爲一個參數,通過開啓oob驗證或使用交叉驗證,不斷調整參數以獲取一個合適的msub的值。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"優點總結"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"由於採用了集成算法,本身精度比大多數單個算法要好"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"在測試集上表現良好,由於兩個隨機性的引入,使得隨機森林不容易陷入過擬合(樣本隨機,特徵隨機)"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"在工業上,由於兩個隨機性的引入,使得隨機森林具有一定的抗噪聲能力,對比其他算法具有一定優勢"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"由於樹的組合,使得隨機森林可以處理非線性數據,本身屬於非線性分類(擬合)模型"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":5,"align":null,"origin":null},"content":[{"type":"text","text":"它能夠處理很高維度(feature很多)的數據,並且不用做特徵選擇,對數據集的適應能力強:既能處理離散型數據,也能處理連續型數據,數據集無需規範化"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":6,"align":null,"origin":null},"content":[{"type":"text","text":"訓練速度快,可以運用在大規模數據集上"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":7,"align":null,"origin":null},"content":[{"type":"text","text":"可以處理缺省值(單獨作爲一類),不用額外處理"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":8,"align":null,"origin":null},"content":[{"type":"text","text":"由於有袋外數據(OOB),可以在模型生成過程中取得真實誤差的無偏估計,且不損失訓練數據量"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":9,"align":null,"origin":null},"content":[{"type":"text","text":"在訓練過程中,能夠檢測到feature間的互相影響,且可以得出feature的重要性,具有一定參考意義"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":10,"align":null,"origin":null},"content":[{"type":"text","text":"由於每棵樹可以獨立、同時生成,容易做成並行化方法"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":11,"align":null,"origin":null},"content":[{"type":"text","text":"由於實現簡單、精度高、抗過擬合能力強,當面對非線性數據時,適於作爲基準模型"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"參考目錄"}]},{"type":"bulletedlist","content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[1] "},{"type":"link","attrs":{"href":"https://link.jianshu.com?t=https%3A%2F%2Fwww.cnblogs.com%2Fpinard%2Fp%2F6156009.html","title":null},"content":[{"type":"text","text":"https://www.cnblogs.com/pinard/p/6156009.html"}]}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[2] "},{"type":"link","attrs":{"href":"https://link.jianshu.com?t=https%3A%2F%2Fwww.cnblogs.com%2Fmaybe2030%2Fp%2F4585705.html","title":null},"content":[{"type":"text","text":"https://www.cnblogs.com/maybe2030/p/4585705.html"}]}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[3] "},{"type":"link","attrs":{"href":"https://link.jianshu.com?t=https%3A%2F%2Fwww.cnblogs.com%2Fliuwu265%2Fp%2F4688403.html","title":null},"content":[{"type":"text","text":"https://www.cnblogs.com/liuwu265/p/4688403.html"}]}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[4] "},{"type":"link","attrs":{"href":"https://link.jianshu.com?t=http%3A%2F%2Fblog.csdn.net%2Fqq_30189255%2Farticle%2Fdetails%2F51532442","title":null},"content":[{"type":"text","text":"http://blog.csdn.net/qq_30189255/article/details/5153244"}]}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"鏈接:https://www.jianshu.com/p/a779f0686acc"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"來源:簡書"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章