How to Build a Decision Tree in Python

*This article was originally published on Medium and is translated and shared by InfoQ China with the original author's permission.*

Decision trees are an evergreen topic. What we will do in this article is combine a series of decision trees into a single predictive model; in other words, we will create an Ensemble Methods model.

Decision trees are among the most accurate predictive models. Imagine how much further using several trees at once can raise that predictive power!

Some ensemble algorithms outperform even today's state-of-the-art deep learning models in predictive power. Kaggle competitors also use ensemble methods extensively in data science challenges.

Ensemble methods deliver a higher level of accuracy at relatively low complexity. Decision tree models, and groups of decision trees, are easy to build, understand, and interpret. We will discuss the magnificent Random Forest in another article: besides serving as a machine learning model, it is widely used for variable selection, letting us choose the best candidate predictors for a machine learning model.

## Jupyter Notebook

Check out the [Jupyter notebook](https://github.com/Anello92/Machine_Learning_Python/blob/main/DecisionTree-Python%20%282%29.ipynb) covering the model-building concepts we are about to walk through, as well as my other data science articles and tutorials on [Medium](https://medium.com/@anello92).

In this article we will build a decision tree in Python. In practice there will be two trees: one based on entropy and one based on the Gini index.

## Installing the packages

The first step is to install the pydot and graphviz packages so we can visualize the decision tree. Without them we would only have the model; we want to go further and inspect the trees whose values are computed with entropy and with the Gini index, respectively.

The `!` prefix tells Jupyter to run the command on the operating system. It is a shortcut that saves us from leaving Jupyter to open a terminal.

```
!pip install --upgrade pydot
Requirement already satisfied: pip in c:\users\anell\appdata\local\programs\python\python38\lib\site-packages (21.1.3)

!pip install --upgrade graphviz
Requirement already satisfied: graphviz in c:\users\anell\appdata\local\programs\python\python38\lib\site-packages (0.16)
```

If installing Graphviz on Windows gives you trouble, you can run the `conda install python-graphviz` command in a terminal.

```
#!pip install graphviz
# You may need to run this command (CMD) for Windows
#!conda install python-graphviz
# Documentation: http://www.graphviz.org
```

Graphviz is a graph visualization package. A computational graph is a structure with nodes and edges; the actual decision tree is a computational graph.

![Graphviz](https://static001.geekbang.org/resource/image/07/48/07479b47447c485cdde8a4b869c36148.png)

## Importing the packages

We need pandas to create DataFrame structures. We will use DecisionTreeClassifier, the decision tree algorithm implemented in scikit-learn's tree package. We also need the export_graphviz function to export the decision tree in Graphviz format, and then graphviz to visualize that export.

We cannot see the tree yet! We have to create the model, export it in graph format, and use Graphviz before we can see it.

```
# Importing packages
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree import export_graphviz
import pydot
import graphviz
```

## Creating the dataset

Next we create a dataset, which is really a list of dictionaries.

```
# Creating a dataset
instances = [
    {'Best Friend': False, 'Species': 'Dog'},
    {'Best Friend': True, 'Species': 'Dog'},
    {'Best Friend': True, 'Species': 'Cat'},
    {'Best Friend': True, 'Species': 'Cat'},
    {'Best Friend': False, 'Species': 'Cat'},
    {'Best Friend': True, 'Species': 'Cat'},
    {'Best Friend': True, 'Species': 'Cat'},
    {'Best Friend': False, 'Species': 'Dog'},
    {'Best Friend': True, 'Species': 'Cat'},
    {'Best Friend': False, 'Species': 'Dog'},
    {'Best Friend': False, 'Species': 'Dog'},
    {'Best Friend': False, 'Species': 'Cat'},
    {'Best Friend': True, 'Species': 'Cat'},
    {'Best Friend': True, 'Species': 'Dog'}
]
```

## Converting to a DataFrame

Let's convert this data into DataFrame format.

```
# Turning the list of dictionaries into a DataFrame
df = pd.DataFrame(instances)
df
```

![DataFrame output](https://static001.geekbang.org/resource/image/76/fa/7681d3d4000b8d765bb39deb58f70efa.png)

Now we have a DataFrame we can use to determine whether a species qualifies as a human's best friend. In a moment we will classify it with a decision tree.

## Splitting the data

Next we prepare the training data. Here we use list comprehensions to convert the data to 0 or 1 according to the condition inside the brackets:

```
# Preparing training and test data
X_train = [[1] if a else [0] for a in df['Best Friend']]
y_train = [1 if d == 'Dog' else 0 for d in df['Species']]

labels = ['Best Friend']

print(X_train)
[[0], [1], [1], [1], [0], [1], [1], [0], [1], [0], [0], [0], [1], [1]]

print(y_train)
[1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1]
```

We converted the input data and output values to a 0/1 representation; machine learning algorithms are best at handling numbers.

The comprehensions loop over each element of the Best Friend column (bound to the variable `a`) to build the input values X, and over each element of the Species column (bound to `d`) to build the target values y. This turns the text values into a numeric representation we can present to the machine learning model.

Building a machine learning model is no small feat. It draws on knowledge from several areas: the mathematics and statistics underlying everything, computer programming, the packages of the language we are working in, the business problem at hand, and data preprocessing.

In other words, the process of building a machine learning model spans multiple domains. So far we have prepared the data, though we have not prepared test data for the entropy and Gini models.

We will not evaluate the models here, so we do not need test data: we take all the data and use it as the X and y training data. Keep in mind that if we did evaluate the model, test data would be required; since we will not, we use only training data, that is, the entire dataset.

## Building the tree model

At this stage we build the model: first we define the model_v1 object, then we train the model with model_v1.fit().

```
model_v1 = DecisionTreeClassifier(max_depth = None,
                                  max_features = None,
                                  criterion = 'entropy',
                                  min_samples_leaf = 1,
                                  min_samples_split = 2)
```

Python is object-oriented. DecisionTreeClassifier is actually a class, and calling it creates an instance of that class, an object. The call takes several parameters that define the algorithm's behavior, all of which can be looked up in the sklearn documentation.

Here we use entropy as the criterion. Any parameter we do not specify keeps its default value.

![DecisionTreeClassifier parameters](https://static001.geekbang.org/resource/image/eb/dc/eb20f159e80fe0d42701bbea20c176dc.png)

The object then has methods and attributes; fit() is the method applied to the model_v1 object to train the model:

```
# Presenting the data to the Classifier
model_v1.fit(X_train, y_train)
DecisionTreeClassifier(criterion='entropy')
```

![Fitted model](https://static001.geekbang.org/resource/image/14/6b/14f99yy168644ec2f01ea9dfc19efa6b.png)

## Creating a variable

Here we define a variable pointing to a file named tree_model_v1.dot in our current working directory.

```
# Setting the file name with the decision tree
file = '/Doc/MachineLearning/Python/DecisionTree/tree_model_v1.dot'
```

With the file variable defined, we call export_graphviz to extract the computational graph, that is, the decision tree, from model_v1. We then open the file and read all the elements of the graph:

```
# Generating the decision tree graph
export_graphviz(model_v1, out_file = file, feature_names = labels)
with open(file) as f:
    dot_graph = f.read()
graphviz.Source(dot_graph)
```

![Entropy decision tree](https://static001.geekbang.org/resource/image/b4/97/b44af736c58a51234a9caf1f33e67097.png)

## Converting the dot file to png

Above we have the tree in computational-graph format. If you want to write this tree out as a png:

```
!dot -Tpng tree_model_v1.dot -o tree_model_v1.png
```

## Interpretation

At the top of the hierarchy is Best Friend; in this example we have only one variable. Note that the algorithm computed an entropy of 0.985 at the root and goes on to compute the entropy of the other groups.

The first group, [8, 6] over 14 samples, yields the highest information gain by entropy. Based on this, we have a node at the top of the entropy-computed hierarchy.

The algorithm went through all the data examples, ran the entropy calculations, and found the best combination: the attribute with the highest information gain rises to the top and defines the levels of our decision tree.

## A second version of the model

An alternative is to create the same model with a different criterion: instead of an entropy-based decision tree, we will use the Gini index.

To build the same tree using only the Gini index, we change the criterion by removing the criterion parameter:

```
model_v2 = DecisionTreeClassifier(max_depth = None,
                                  max_features = None,
                                  min_samples_leaf = 1,
                                  min_samples_split = 2)
```

When we remove the criterion parameter, the algorithm applies the Gini index by default. There is an interesting parameter called max_depth, which defines the maximum depth of the tree. It makes no difference in our example, since we have only one variable, but with dozens of input variables max_depth becomes very useful.

When we have many input variables, the tree can grow very deep, which brings the problem of overfitting. That is why we define the tree's depth when building the model.

Since there are many parameters, finding the best parameter combination for the algorithm is a complex task! To test many parameter combinations automatically, we can use cross-validation, which demands substantial computational resources.

We also have min_samples_leaf, which concerns the lowest level of the decision tree: the minimum number of samples required at a leaf node, that is, how many observations are needed to build the tree's lowest nodes.

Finally, min_samples_split is the minimum number of samples required to split an internal node. A decision tree has a root node at the top, leaf nodes at the base, and internal nodes in between, so by simply tuning these parameters we define how all the nodes are built.

## Training the Gini version

```
# Presenting the data to the Classifier
model_v2.fit(X_train, y_train)
```

We generate the file parameter again:

```
# Setting the file name with the decision tree
file = '/User/Documents/MachineLearning/DecisionTree_tree_model_v2.dot'
```

We extract the model's computational graph:

![Gini decision tree](https://static001.geekbang.org/resource/image/77/09/779c935fd74536687a7b40df259a7f09.png)

In this article we created two trees the same way, one computed with entropy and one with the Gini index. The difference between the two trees is the criterion used to organize the nodes. Neither is inherently better; either can be the more interesting choice depending on the context, the data, and the business problem.

We recommend creating several versions of the same model and evaluating which performs best.

```
!dot -Tpng tree_model_v2.dot -o tree_model_v2.png
```

Finally, we saved the exported decision tree. I hope this article was helpful to you. Thanks for reading.

**Original article:**

[https://levelup.gitconnected.com/how-to-build-a-decision-tree-model-in-python-75f6f3af159d](https://levelup.gitconnected.com/how-to-build-a-decision-tree-model-in-python-75f6f3af159d)
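The interpretation section cites an entropy of 0.985 for the root's [8, 6] class split over 14 samples. As a minimal sketch, that figure and the information gain of the Best Friend split can be checked by hand; the `entropy` helper and the child-node counts (read off the dataset above) are our own additions, not from the original article:

```python
from math import log2

def entropy(counts):
    """Shannon entropy of a class distribution given as raw counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

# Root node: 8 cats vs 6 dogs out of 14 samples
root = entropy([8, 6])
print(round(root, 3))  # 0.985

# Children of the Best Friend split (counts taken from the dataset above):
# Best Friend == True  -> 6 cats, 2 dogs
# Best Friend == False -> 2 cats, 4 dogs
gain = root - (8 / 14) * entropy([6, 2]) - (6 / 14) * entropy([2, 4])
print(round(gain, 3))  # 0.128
```

With a single binary feature there is only one possible split, so this gain of roughly 0.128 is trivially the best one; with more features, the tree repeats this comparison at every node.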
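The v2 model relies on scikit-learn's default criterion, the Gini index. As a minimal sketch of the impurity measure the algorithm computes at each node in place of entropy (the `gini` helper is ours, not from the article):

```python
def gini(counts):
    """Gini impurity of a class distribution given as raw counts."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)

# Same root node as before: 8 cats vs 6 dogs out of 14 samples
print(round(gini([8, 6]), 3))  # 0.49
```

Gini impurity tops out at 0.5 for a two-class node (versus 1.0 for entropy), but both measures rank candidate splits very similarly in practice.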
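The article notes that testing many parameter combinations automatically calls for cross-validation. One common way to do this in scikit-learn is GridSearchCV; the sketch below reuses the article's training data, and the grid values are illustrative choices of ours, not a recommendation:

```python
from sklearn.model_selection import GridSearchCV
from sklearn.tree import DecisionTreeClassifier

X_train = [[0], [1], [1], [1], [0], [1], [1], [0], [1], [0], [0], [0], [1], [1]]
y_train = [1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1]

# Every combination of these values is fitted and scored with
# 2-fold cross-validation (the dataset is tiny, hence cv=2)
param_grid = {
    'criterion': ['entropy', 'gini'],
    'max_depth': [None, 2, 4],
    'min_samples_leaf': [1, 2],
}

search = GridSearchCV(DecisionTreeClassifier(random_state=0), param_grid, cv=2)
search.fit(X_train, y_train)
print(search.best_params_)
```

`search.best_estimator_` is then a tree refitted on the full training data with the winning parameters, ready to be exported with export_graphviz exactly like model_v1 and model_v2.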
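The article stops after exporting the trees, but once fitted, either model can also make predictions. A minimal sketch with the entropy version (the predict call is standard scikit-learn, though this usage does not appear in the original article):

```python
from sklearn.tree import DecisionTreeClassifier

X_train = [[0], [1], [1], [1], [0], [1], [1], [0], [1], [0], [0], [0], [1], [1]]
y_train = [1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1]

model_v1 = DecisionTreeClassifier(criterion='entropy')
model_v1.fit(X_train, y_train)

# Input 1 = Best Friend, 0 = not; output 1 = Dog, 0 = Cat.
# Each leaf predicts its majority class from the training data.
print(model_v1.predict([[1], [0]]))  # [0 1]
```

In this dataset the "Best Friend" animals are mostly cats and the others mostly dogs, so the tree predicts Cat (0) for a best friend and Dog (1) otherwise.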