随机森林原理介绍与适用情况(综述篇)

{"type":"doc","content":[{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"一句话介绍"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"随机森林是一种集成算法(Ensemble Learning),它属于Bagging类型,通过组合多个弱分类器,最终结果通过投票或取均值,使得整体模型的结果具有较高的精确度和泛化性能。其可以取得不错成绩,主要归功于“"},{"type":"text","marks":[{"type":"strong"}],"text":"随机"},{"type":"text","text":"”和“"},{"type":"text","marks":[{"type":"strong"}],"text":"森林"},{"type":"text","text":"”,一个使它具有抗过拟合能力,一个使它更加精准。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/cd/cde054dfec504226e697df6c6c96af8c.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Bagging结构"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"Bagging"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Bagging也叫自举汇聚法(bootstrap aggregating),是一种在原始数据集上通过有放回抽样重新选出k个新数据集来训练分类器的集成技术。它使用训练出来的分类器的集合来对新样本进行分类,然后用多数投票或者对输出求均值的方法统计所有分类器的分类结果,结果最高的类别即为最终标签。此类算法可以有效降低bias,并能够降低variance。"}]},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"自助法"},{"type":"text","text":"】它通过自助法(bootstrap)重采样技术,从训练集里面采集固定个数的样本,但是每采集一个样本后,都将样本放回。也就是说,之前采集到的样本在放回后有可能继续被采集到。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"OOB"},{"type":"text","text":"】在Bagging的每轮随机采样中,训练集中大约有36.8%的数据没有被采样集采集中。对于这部分没采集到的数据,我们常常称之为袋外数据(Out Of Bag,简称OOB)。这些数据没有参与训练集模型的拟合,因此可以用来检测模型的泛化能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"随机性"},{"type":"text","text":"】对于我们的Bagging算法,一般会对样本使用boostrap进行随机采集,每棵树采集相同的样本数量,一般小于原始样本量。这样得到的采样集每次的内容都不同,通过这样的自助法生成k个分类树组成随机森林,做到"},{"type":"text","marks":[{"type":"strong"}],"text":"样本随机性"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"输出"},{"type":"text","text":"】Bagging的集合策略也比较简单,对于分类问题,通常使用简单投票法,得到最多票数的类别或者类别之一为最终的模型输出。对于回归问题,通常使用简单平均法,对T个弱学习器得到的回归结果进行算术平均得到最终的模型输出。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"随机森林"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"随机森林(Random Forest,RF)是Bagging算法的一种,其实在介绍完Bagging算法之后,随机森林几乎是呼之欲出的,RF相对于Bagging只是对其中一些细节做了自己的规定和设计。"}]},{"type":"blockquote","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"弱分类器"},{"type":"text","text":"】首先,RF使用了CART决策树作为弱学习器。换句话说,其实我们只是将使用CART决策树作为弱学习器的Bagging方法称为随机森林。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"随机性"},{"type":"text","text":"】同时,在生成每棵树的时候,每个树选取的特征都仅仅是随机选出的少数特征,一般默认取特征总数m的开方。而一般的CART树则是会选取全部的特征进行建模。因此,不但特征是随机的,也保证了"},{"type":"text","marks":[{"type":"strong"}],"text":"特征随机性"},{"type":"text","text":"。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"样本量"},{"type":"text","text":"】相对于一般的Bagging算法,RF会选择采集和训练集样本数N一样个数的样本。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"【"},{"type":"text","marks":[{"type":"strong"}],"text":"特点"},{"type":"text","text":"】由于随机性,对于降低模型的方差很有作用,故随机森林一般不需要额外做剪枝,即可以取得较好的泛化能力和抗过拟合能力(Low Variance)。当然对于训练集的拟合程度就会差一些,也就是模型的偏倚会大一些(High Bias),"},{"type":"text","marks":[{"type":"strong"}],"text":"仅仅是相对的"},{"type":"text","text":"。"}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"CART树"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"随机森林的弱分类器使用的是CART数,CART决策树又称分类回归树。当数据集的因变量为连续性数值时,该树算法就是一个回归树,可以用叶节点观察的均值作为预测值;当数据集的因变量为离散型数值时,该树算法就是一个分类树,可以很好的解决分类问题。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"但需要注意的是,该算法是一个"},{"type":"text","marks":[{"type":"strong"}],"text":"二叉树"},{"type":"text","text":",即每一个非叶节点只能引伸出两个分支,所以当某个非叶节点是多水平(2个以上)的离散变量时,该变量就有可能被多次使用。同时,若某个非叶节点是连续变量时,决策树也将把他当做离散变量来处理(即在有限的可能值中做划分)"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"特征选择"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"特征选择目前比较流行的方法是信息增益、增益率、基尼系数和卡方检验。这里主要介绍基于基尼系数(GINI)的特征选择,因为随机森林采用的CART决策树就是基于基尼系数选择特征的。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基尼系数的选择的标准就是每个子节点达到最高的纯度,即落在子节点中的所有观察都属于同一个分类,此时基尼系数最小,纯度最高,不确定度最小。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于一般的决策树,假如总共有K类,样本属于第k类的概率为:pk,则该概率分布的基尼指数为:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/43/43e2084186f8e3444369d2caae843637.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"基尼指数越大,说明不确定性就越大;基尼系数越小,不确定性越小,数据分割越彻底,越干净。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"对于CART树而言,由于是二叉树,可以通过下面的表示:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/50/50a8eabb71eab8edee7a5b5ba7163349.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"在我们遍历每个特征的每个分割点时,当使用特征A=a,将D划分为两部分,即D1(满足A=a的样本集合),D2(不满足A=a的样本集合)。则在特征A=a的条件下D的基尼指数为:"}]},{"type":"image","attrs":{"src":"https://static001.geekbang.org/infoq/08/08c97d2940347c8d97861eec00e45580.png","alt":null,"title":null,"style":null,"href":null,"fromPaste":true,"pastePass":true}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Gini(D):表示集合D的不确定性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"Gini(A,D):表示经过A=a分割后的集合D的不确定性。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"随机森林中的每棵CART决策树都是通过不断遍历这棵树的特征子集的所有可能的分割点,寻找Gini系数最小的特征的分割点,将数据集分成两个子集,直至满足停止条件为止。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"抗过拟合"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"首先,正如Bagging介绍中提到的,每个树选取使用的特征时,都是从全部m个特征中随机产生的,本身已经降低了过拟合的风险和趋势。模型不会被特定的特征值或者特征组合所决定,随机性的增加,将控制模型的拟合能力不会无限提高。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"第二,与决策树不同,RF对决策树的建立做了改进。对于普通的决策树,我们会在节点上所有的m个样本特征中选择一个最优的特征来做决策树的左右子树划分。但是RF的每个树,其实选用的特征是一部分,在这些少量特征中,选择一个最优的特征来做决策树的左右子树划分,将随机性的效果扩大,进一步增强了模型的泛化能力。"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"假设每棵树选取msub个特征,msub越小,此时模型对于训练集的拟合程度会变差,偏倚增加,但是会泛化能力更强,模型方差减小。msub越大则相反。在实际使用中,一般会将msub的取值作为一个参数,通过开启oob验证或使用交叉验证,不断调整参数以获取一个合适的msub的值。"}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"优点总结"}]},{"type":"numberedlist","attrs":{"start":null,"normalizeStart":1},"content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":1,"align":null,"origin":null},"content":[{"type":"text","text":"由于采用了集成算法,本身精度比大多数单个算法要好"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":2,"align":null,"origin":null},"content":[{"type":"text","text":"在测试集上表现良好,由于两个随机性的引入,使得随机森林不容易陷入过拟合(样本随机,特征随机)"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":3,"align":null,"origin":null},"content":[{"type":"text","text":"在工业上,由于两个随机性的引入,使得随机森林具有一定的抗噪声能力,对比其他算法具有一定优势"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":4,"align":null,"origin":null},"content":[{"type":"text","text":"由于树的组合,使得随机森林可以处理非线性数据,本身属于非线性分类(拟合)模型"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":5,"align":null,"origin":null},"content":[{"type":"text","text":"它能够处理很高维度(feature很多)的数据,并且不用做特征选择,对数据集的适应能力强:既能处理离散型数据,也能处理连续型数据,数据集无需规范化"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":6,"align":null,"origin":null},"content":[{"type":"text","text":"训练速度快,可以运用在大规模数据集上"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":7,"align":null,"origin":null},"content":[{"type":"text","text":"可以处理缺省值(单独作为一类),不用额外处理"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":8,"align":null,"origin":null},"content":[{"type":"text","text":"由于有袋外数据(OOB),可以在模型生成过程中取得真实误差的无偏估计,且不损失训练数据量"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":9,"align":null,"origin":null},"content":[{"type":"text","text":"在训练过程中,能够检测到feature间的互相影响,且可以得出feature的重要性,具有一定参考意义"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":10,"align":null,"origin":null},"content":[{"type":"text","text":"由于每棵树可以独立、同时生成,容易做成并行化方法"}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":11,"align":null,"origin":null},"content":[{"type":"text","text":"由于实现简单、精度高、抗过拟合能力强,当面对非线性数据时,适于作为基准模型"}]}]}]},{"type":"heading","attrs":{"align":null,"level":2},"content":[{"type":"text","text":"参考目录"}]},{"type":"bulletedlist","content":[{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[1] "},{"type":"link","attrs":{"href":"https://link.jianshu.com?t=https%3A%2F%2Fwww.cnblogs.com%2Fpinard%2Fp%2F6156009.html","title":null},"content":[{"type":"text","text":"https://www.cnblogs.com/pinard/p/6156009.html"}]}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[2] "},{"type":"link","attrs":{"href":"https://link.jianshu.com?t=https%3A%2F%2Fwww.cnblogs.com%2Fmaybe2030%2Fp%2F4585705.html","title":null},"content":[{"type":"text","text":"https://www.cnblogs.com/maybe2030/p/4585705.html"}]}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[3] "},{"type":"link","attrs":{"href":"https://link.jianshu.com?t=https%3A%2F%2Fwww.cnblogs.com%2Fliuwu265%2Fp%2F4688403.html","title":null},"content":[{"type":"text","text":"https://www.cnblogs.com/liuwu265/p/4688403.html"}]}]}]},{"type":"listitem","content":[{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"[4] "},{"type":"link","attrs":{"href":"https://link.jianshu.com?t=http%3A%2F%2Fblog.csdn.net%2Fqq_30189255%2Farticle%2Fdetails%2F51532442","title":null},"content":[{"type":"text","text":"http://blog.csdn.net/qq_30189255/article/details/5153244"}]}]}]}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"链接:https://www.jianshu.com/p/a779f0686acc"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null},"content":[{"type":"text","text":"来源:简书"}]},{"type":"paragraph","attrs":{"indent":0,"number":0,"align":null,"origin":null}}]}
發表評論
所有評論
還沒有人評論,想成為第一個評論的人麼? 請在上方評論欄輸入並且點擊發布.
相關文章