Machine Learning Series (12): Ensemble Learning 2020.6.13

Preface

This installment covers ensemble learning.

1. Principles

Ensemble learning

  • Run several different machine learning algorithms on the same problem
  • Let the majority vote decide the final prediction

In scikit-learn, the voting parameter chooses how the votes are combined:

  • hard: one model, one vote; the majority class wins
  • soft: weighted voting using the class probabilities each model predicts (a hand-rolled version follows the soft-voting code in section 2)

Diversity

  • Each sub-model looks at only part of the training data
  • Each sub-model does not need high accuracy on its own; combining many diverse models still drives the ensemble's accuracy up
  • bagging: sampling with replacement
  • pasting: sampling without replacement (a sketch follows the bagging code in section 2)

Random forest

  • When splitting a node, each decision tree searches for the best split feature within a random subset of the features

2. Implementation

import numpy as np
import matplotlib.pyplot as plt
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

"""集成學習"""
# 數據
X, y = datasets.make_moons(n_samples=500, noise=0.3, random_state=42)
plt.scatter(X[y==0,0], X[y==0,1])
plt.scatter(X[y==1,0], X[y==1,1])
plt.show()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)
# Logistic regression
log_clf = LogisticRegression()
log_clf.fit(X_train, y_train)
print(log_clf.score(X_test, y_test))
# SVM
svm_clf = SVC()
svm_clf.fit(X_train, y_train)
print(svm_clf.score(X_test, y_test))
# Decision tree
dt_clf = DecisionTreeClassifier(random_state=666)
dt_clf.fit(X_train, y_train)
print(dt_clf.score(X_test, y_test))
# Ensemble learning (hand-rolled majority vote)
y_predict1 = log_clf.predict(X_test)
y_predict2 = svm_clf.predict(X_test)
y_predict3 = dt_clf.predict(X_test)
y_predict = np.array((y_predict1 + y_predict2 + y_predict3) >= 2, dtype='int') # majority rules: at least 2 of 3 vote for class 1 (works because labels are 0/1)
print(accuracy_score(y_test, y_predict))

"""使用scikit庫實現集成學習"""
from sklearn.ensemble import VotingClassifier
# Hard Voting Classifier
voting_clf = VotingClassifier(estimators=[
    ('log_clf', LogisticRegression()),
    ('svm_clf', SVC()),
    ('dt_clf', DecisionTreeClassifier(random_state=666))],
                             voting='hard')
voting_clf.fit(X_train, y_train)
print(voting_clf.score(X_test, y_test))
# Soft Voting Classifier
voting_clf2 = VotingClassifier(estimators=[
    ('log_clf', LogisticRegression()),
    ('svm_clf', SVC(probability=True)), # required so SVC exposes class probabilities for soft voting
    ('dt_clf', DecisionTreeClassifier(random_state=666))],
                             voting='soft')
voting_clf2.fit(X_train, y_train)
print(voting_clf2.score(X_test, y_test))
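# Under the hood, voting='soft' averages the class probabilities of the fitted
# sub-models and predicts the class with the highest mean probability; a
# hand-rolled check (not in the original post) using the fitted estimators_:
probas = np.mean([est.predict_proba(X_test) for est in voting_clf2.estimators_], axis=0)
print(accuracy_score(y_test, np.argmax(probas, axis=1)))
# VotingClassifier also takes a weights parameter to weight the models unevenly.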
# Bagging
from sklearn.ensemble import BaggingClassifier
bagging_clf = BaggingClassifier(DecisionTreeClassifier(),
                           n_estimators=500, max_samples=100,
                           bootstrap=True) # bootstrap=True: sample with replacement (bagging); False: without (pasting)
bagging_clf.fit(X_train, y_train)
print(bagging_clf.score(X_test, y_test))
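# Pasting (from section 1) is the same construction with sampling done without
# replacement; a minimal sketch, just flipping bootstrap to False
# (oob_score is unavailable without replacement, so score on the test split):
pasting_clf = BaggingClassifier(DecisionTreeClassifier(),
                           n_estimators=500, max_samples=100,
                           bootstrap=False)
pasting_clf.fit(X_train, y_train)
print(pasting_clf.score(X_test, y_test))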
# oob_score and n_jobs
bagging_clf = BaggingClassifier(DecisionTreeClassifier(),
                               n_estimators=500, max_samples=100,
                               bootstrap=True, oob_score=True,
                               n_jobs=-1)
# oob_score=True scores each estimator on its out-of-bag samples (the ones its
# bootstrap never drew), giving a validation estimate without holding out a
# separate test set; n_jobs=-1 trains the estimators in parallel on all cores.
bagging_clf.fit(X, y)
print(bagging_clf.oob_score_)
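# A quick sanity check (not from the original post) on why out-of-bag samples
# exist: a full-size bootstrap of n samples misses any given sample with
# probability (1 - 1/n)^n, which approaches 1/e ~ 36.8%; with max_samples=100
# out of 500, even more is left out, so oob_score always has samples to use.
n = 500
print((1 - 1/n) ** n) # ~0.368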
# Random Patches: randomly sample the features as well as the samples
random_patches_clf = BaggingClassifier(DecisionTreeClassifier(),
                               n_estimators=500, max_samples=100,
                               bootstrap=True, oob_score=True,
                               max_features=1, bootstrap_features=True)
random_patches_clf.fit(X, y)
print(random_patches_clf.oob_score_)
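# The complementary Random Subspaces variant (a sketch, not in the original
# post): give every estimator the full training set and randomize only the
# features; without replacement there are no oob samples, so use the test split.
random_subspaces_clf = BaggingClassifier(DecisionTreeClassifier(),
                               n_estimators=500, max_samples=1.0,
                               bootstrap=False,
                               max_features=1, bootstrap_features=True)
random_subspaces_clf.fit(X_train, y_train)
print(random_subspaces_clf.score(X_test, y_test))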

"""隨機森林"""
from sklearn.ensemble import RandomForestClassifier
# RandomForestClassifier accepts both the decision-tree parameters and the BaggingClassifier parameters
rf_clf = RandomForestClassifier(n_estimators=500, max_leaf_nodes=16, oob_score=True, random_state=666, n_jobs=-1)
rf_clf.fit(X, y)
print(rf_clf.oob_score_)
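# A random forest behaves roughly like bagging over trees that each consider a
# random feature subset at every split; a hand-built near-equivalent (a sketch,
# not from the original post) via the tree's own max_features parameter:
rf_like_clf = BaggingClassifier(DecisionTreeClassifier(max_features='sqrt'),
                                n_estimators=500, bootstrap=True,
                                oob_score=True, n_jobs=-1)
rf_like_clf.fit(X, y)
print(rf_like_clf.oob_score_)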

Conclusion

This series has now walked through most of traditional machine learning.

Future posts will continue to dig deeper.
