Original — A survey of whether sklearn's decision trees can be converted to JSON and plotted

1. This code does not run: https://www.garysieling.com/blog/convert-scikit-learn-decision-trees-json#comment-9679 2. This code converts the tree to JSON successfully, …
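As a minimal sketch of the idea being surveyed, the arrays on a fitted tree's `tree_` attribute (`children_left`, `children_right`, `feature`, `threshold`, `value`) can be walked recursively into a nested dict and serialized with `json`. The helper name `tree_to_dict` and the iris example are my own, not from the linked posts:

```python
import json
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def tree_to_dict(tree, feature_names, node=0):
    """Recursively convert one node of a fitted sklearn tree to a dict."""
    t = tree.tree_
    if t.children_left[node] == -1:  # -1 marks a leaf in sklearn's arrays
        return {"value": t.value[node].tolist()}
    return {
        "feature": feature_names[t.feature[node]],
        "threshold": float(t.threshold[node]),
        "left": tree_to_dict(tree, feature_names, t.children_left[node]),
        "right": tree_to_dict(tree, feature_names, t.children_right[node]),
    }

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(iris.data, iris.target)
structure = tree_to_dict(clf, iris.feature_names)
print(json.dumps(structure, indent=2))
```

The resulting dict can be dumped straight to a `structure.json` file for plotting with a JavaScript tree widget.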

Original — Pruning a CART tree by subtree sample count, with plots before and after pruning

The code is as follows: #-*- coding:utf-8 -*- import sys reload(sys) sys.setdefaultencoding('utf-8') import numpy as np import pandas a…
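The excerpt above is Python 2; a Python 3 sketch of the technique in the title is below. It collapses every internal node whose subtree holds fewer than a threshold of training samples into a leaf by overwriting the `children_left`/`children_right` arrays in place (a widely used sklearn hack; the function name `prune_by_samples` and the threshold are my own choices, not the post's exact code):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier
from sklearn.tree._tree import TREE_LEAF  # TREE_LEAF == -1

def prune_by_samples(clf, min_samples):
    """Collapse every internal node whose subtree contains fewer than
    `min_samples` training samples into a leaf (modifies the tree in place)."""
    t = clf.tree_
    for node in range(t.node_count):
        if t.children_left[node] != TREE_LEAF and t.n_node_samples[node] < min_samples:
            t.children_left[node] = TREE_LEAF
            t.children_right[node] = TREE_LEAF

iris = load_iris()
clf = DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)
before = int(np.sum(clf.tree_.children_left == TREE_LEAF))
prune_by_samples(clf, min_samples=20)
after = int(np.sum(clf.tree_.children_left == TREE_LEAF))
print(before, after)
```

Prediction still works after pruning because sklearn's traversal stops as soon as it reaches a node whose child pointer is `TREE_LEAF`; the descendants of a pruned node simply become unreachable.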

Original — Display your decision tree interactively

Environment: Ubuntu Linux 16.04. 1. python sklearn2json.py 2. modify the path of structure.json in index.html 3. put stru…

Original — A Linux desktop pet

Environment: Linux 16.04. The download link is: https://pan.baidu.com/s/1hzF9UDGj0KdSnvgUGfeDtA After extracting, run: ./ui.zs…

Original — CVP (Critical Value Pruning) examples with a Python implementation

The Python implementation of CVP (Critical Value Pruning) is here: https://github.com/appleyuchi/Decision_Tree_Prune Th…

Original — Computing a contingency table in Python, with analysis of the experimental results

The code is as follows: import numpy as np from scipy.stats import chi2_contingency d = np.array([[37, 49, 23], [150, 100, 57]]) print…
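The truncated snippet above completes into a runnable Python 3 example; `scipy.stats.chi2_contingency` returns the test statistic, the p-value, the degrees of freedom, and the expected frequencies under independence:

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2x3 contingency table: rows are groups, columns are observed categories
d = np.array([[37, 49, 23],
              [150, 100, 57]])

chi2, p, dof, expected = chi2_contingency(d)
print("chi2 statistic     =", chi2)
print("p-value            =", p)
print("degrees of freedom =", dof)  # (2 - 1) * (3 - 1) = 2
print("expected frequencies:\n", expected)
```

The expected table preserves the row and column totals of the observed one, which is a quick sanity check on the computation.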

Original — Computing the chi-square distribution in Python

Computing the quantile from a p-value: import scipy.stats print scipy.stats.chi2.ppf(0.05, 5) Computing the p-value from a quantile: from scipy import stats print 1…
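The excerpt uses Python 2 `print` statements; a Python 3 version of both directions, showing that `ppf` and `cdf` are inverses of each other:

```python
from scipy import stats

df = 5  # degrees of freedom

# Quantile q such that P(X <= q) = 0.05 for a chi-square with 5 d.o.f.
q = stats.chi2.ppf(0.05, df)
print(q)

# Going back: the upper-tail probability P(X > q) should be 1 - 0.05 = 0.95
p = 1 - stats.chi2.cdf(q, df)
print(p)
```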

Original — A proof that the chi-square statistic follows a chi-square distribution

The chi-square test (the principle used in C4.5's CVP pruning), also called the chi-square statistic, or the chi-square good…
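For reference, the statistic in question is the standard goodness-of-fit quantity over observed counts $O_i$ and expected counts $E_i$:

```latex
\chi^2 \;=\; \sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i}
```

which asymptotically follows a chi-square distribution with $k-1$ degrees of freedom ($k-1-m$ when $m$ parameters are estimated from the data).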

Original — CVP (Critical Value Pruning) illustrated in detail, with the principle made clear

Note: CVP (Critical Value Pruning) is also called chi-square pruning (or the chi-square test) in many materials. The following is a conti…

Original — Basics of coenzyme Q10 and tips for choosing one (reprint)

Q10 comes in two types: $\text{Q10 types}=\left\{ \begin{aligned} &\text{ubiquinone} \\ &\text{ubiquinol} \end{aligned} \right.$

Original — A problem about degrees of freedom

Dear Professor Ricco RAKOTOMALALA: the reference is: <An Empirical Comparison of Selection Measures for Decision-Tree…

Original — A detailed explanation of what a conjugate prior really is (using only undergraduate knowledge)

Let's get straight to the point. According to the definition on Baidu Baike: if the posterior distribution belongs to the same family of distributions as the prior, then the prior and posterior are called conjugate distributions, and the prior is called the conjugate prior of the likelihood function. This definition is a bit involved; we'll come back to it shortly. P(θ∣x)=P(x∣…
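A concrete instance of that definition is the classic Beta–Bernoulli pair: a Beta prior combined with a Bernoulli likelihood yields a posterior that is again a Beta distribution, so the Beta is the conjugate prior. A small numeric illustration (the particular values of a, b, n, k are my own):

```python
from scipy import stats

# Prior: theta ~ Beta(a, b); data: n Bernoulli(theta) trials with k successes.
a, b = 2.0, 2.0
n, k = 10, 7

# Conjugacy: the posterior stays in the Beta family -> Beta(a + k, b + n - k)
post = stats.beta(a + k, b + n - k)
print(post.mean())  # posterior mean = (a + k) / (a + b + n) = 9/14
```

No integration is needed: updating the prior amounts to adding the success and failure counts to the Beta parameters, which is exactly why conjugate priors are convenient.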

Original — MEP (Minimum Error Pruning) principle with a Python implementation

According to 《Estimating Probabilities: A Crucial Task in Machine Learning》: $\Theta = p(C)\,\frac{p(C\mid V_1)}{p(C)}\,\frac{p(C\mid V_1V_2)}{p(C\mid V_1)}\,\frac{p(C\mid V_1V_2\cdots}{\;}$…
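The probability estimate at the heart of MEP is Cestnik's m-estimate, which blends the observed relative frequency with a prior. A minimal sketch (the function name `m_estimate` is mine; the formula $p = (n_c + p_a \cdot m)/(n + m)$ is the one from the cited paper):

```python
def m_estimate(n_c, n, prior, m):
    """Cestnik's m-estimate of p(C | evidence): a weighted blend of the
    observed relative frequency n_c / n and the prior p(C), controlled by m."""
    return (n_c + prior * m) / (n + m)

# With m = 0 it reduces to the plain relative frequency ...
print(m_estimate(3, 10, prior=0.5, m=0))      # 0.3
# ... and as m grows the estimate shrinks toward the prior.
print(m_estimate(3, 10, prior=0.5, m=1000))
```

In MEP the expected error of a node computed with this estimate is compared against the weighted expected error of its children to decide whether the subtree should be collapsed.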

Original — Two examples of Minimum Error Pruning (reprint)

The first example. Note: this article is taken from the following link, with my own notes added: http://www.cse.unsw.edu.au/~bi…

Original — A plain-language explanation of why information gain ratio is used instead of information gain

Here is a simple example. Dataset D ("play" is the label); A denotes an attribute, A ∈ {mood, weather}:

mood  weather  play?
good  sunny    play
bad   rain     don't play
bad   wind     don't play

Now let's build the decision tree: what should the root node be? The first approach (information gain…
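The tiny dataset above can be checked numerically: both attributes give the same information gain (each split separates the labels perfectly), but "weather" shatters the data into three singletons, so its split information is larger and its gain ratio smaller. That is exactly the bias gain ratio corrects. A self-contained sketch:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def gain_and_ratio(values, labels):
    """Information gain and gain ratio of splitting `labels` on `values`."""
    n = len(labels)
    groups = {}
    for v, y in zip(values, labels):
        groups.setdefault(v, []).append(y)
    cond = sum(len(g) / n * entropy(g) for g in groups.values())
    gain = entropy(labels) - cond
    split_info = entropy(values)  # intrinsic information of the split itself
    return gain, gain / split_info

labels  = ["play", "no", "no"]
mood    = ["good", "bad", "bad"]
weather = ["sunny", "rain", "wind"]

print(gain_and_ratio(mood, labels))     # both splits have gain ~0.918 bits ...
print(gain_and_ratio(weather, labels))  # ... but weather's gain ratio is lower
```

Gain alone would consider the two attributes equally good as the root; the gain ratio penalizes the many-valued "weather" attribute and picks "mood".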