GridSearchCV Grid Search in sklearn

Grid search in sklearn

  • Take the kNN algorithm as an example again: the value of k in kNN directly affects the accuracy of the result.
  • Grid search is one of the most commonly used approaches during hyperparameter tuning.

First, let's look at the regular code without grid search:

import numpy as np
from sklearn import datasets
digits = datasets.load_digits()
X = digits.data
y = digits.target

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=666)

from sklearn.neighbors import KNeighborsClassifier
sk_knn_clf = KNeighborsClassifier(n_neighbors=4, weights="uniform")
sk_knn_clf.fit(X_train, y_train)
sk_knn_clf.score(X_test, y_test)

Output:

0.9916666666666667

Here n_neighbors=4 was set manually.
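Before bringing in grid search, here is a minimal sketch of what tuning k by hand would look like (this loop is my own illustration, not part of the original post):

# Hypothetical manual search: try each candidate k and keep the best test score.
best_score, best_k = 0.0, -1
for k in range(1, 11):
    knn_clf = KNeighborsClassifier(n_neighbors=k, weights="uniform")
    knn_clf.fit(X_train, y_train)
    score = knn_clf.score(X_test, y_test)
    if score > best_score:
        best_score, best_k = score, k
print("best_k =", best_k, "best_score =", best_score)

Looping by hand like this quickly becomes unwieldy once several parameters (weights, p, ...) have to be combined, which is exactly what GridSearchCV automates.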

Using Grid Search in sklearn

# First define the parameter grid
param_grid = [
    {
        'weights': ['uniform'],
        'n_neighbors': [i for i in range(1, 11)]
    },
    {
        'weights': ['distance'],
        'n_neighbors': [i for i in range(1, 11)],
        'p': [i for i in range(1, 6)]
    }
]

knn_clf = KNeighborsClassifier()

from sklearn.model_selection import GridSearchCV
grid_search = GridSearchCV(knn_clf, param_grid)

%%time
grid_search.fit(X_train, y_train)

Output:

CPU times: user 2min 1s, sys: 235 ms, total: 2min 1s
Wall time: 2min 2s
GridSearchCV(cv=None, error_score='raise',
       estimator=KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=5, p=2,
           weights='uniform'),
       fit_params=None, iid=True, n_jobs=1,
       param_grid=[{'weights': ['uniform'], 'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]}, {'weights': ['distance'], 'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'p': [1, 2, 3, 4, 5]}],
       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',
       scoring=None, verbose=0)

grid_search.best_estimator_

KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=3, p=3,
           weights='distance')

grid_search.best_score_

Output:

0.98538622129436326

grid_search.best_params_

{'n_neighbors': 3, 'p': 3, 'weights': 'distance'}

knn_clf = grid_search.best_estimator_
knn_clf.score(X_test, y_test)

Output:

0.98333333333333328

The search found k=3 with weights=distance as the best combination, but it was expensive: the run took 2 min 1 s. Note that best_score_ (about 0.985) is the mean cross-validation score on the training data, which is why it differs slightly from the test-set score (about 0.983).
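Beyond the best combination, every combination that was tried is recorded in the cv_results_ attribute; a minimal sketch of inspecting it with pandas (the pandas usage is my own addition, not from the original post):

import pandas as pd

# Each row of cv_results_ describes one parameter combination;
# mean_test_score is its mean cross-validated accuracy.
results = pd.DataFrame(grid_search.cv_results_)
print(results[["params", "mean_test_score", "rank_test_score"]]
      .sort_values("rank_test_score")
      .head())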

Now let's see how the search behaves with n_jobs=-1 in sklearn, together with the verbose=2 parameter:

  • n_jobs specifies how many CPU cores the computation uses: a positive integer means that exact number (values above the machine's core count are capped at the maximum), and -1 means use all available cores.
  • verbose prints progress information during the search, which gives some visibility into the run when the data set is large.

%%time
grid_search = GridSearchCV(knn_clf, param_grid, n_jobs=-1, verbose=2)
grid_search.fit(X_train, y_train)
grid_search.best_estimator_

Output:

[CV] n_neighbors=10, p=5, weights=distance …
[CV] … n_neighbors=10, p=4, weights=distance, total= 0.8s
[CV] … n_neighbors=10, p=5, weights=distance, total= 0.8s
[CV] … n_neighbors=10, p=5, weights=distance, total= 0.8s
[CV] … n_neighbors=10, p=5, weights=distance, total= 0.7s
CPU times: user 752 ms, sys: 320 ms, total: 1.07 s
Wall time: 1min 20s
[Parallel(n_jobs=-1)]: Done 180 out of 180 | elapsed: 1.3min finished
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=3, p=3,
           weights='uniform')

This time the run took 1 min 20 s.

Let's look at what this search found:

knn = grid_search.best_estimator_
param = grid_search.best_params_
param

{'n_neighbors': 3, 'weights': 'uniform'}
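Since refit=True by default, best_estimator_ has already been refit on the whole training set; if you prefer to rebuild the classifier yourself from the searched parameters, a minimal sketch (this step is my own illustration, not from the original post):

# Hypothetical: rebuild a kNN classifier from the searched parameters and evaluate it.
knn_clf = KNeighborsClassifier(**grid_search.best_params_)
knn_clf.fit(X_train, y_train)
print(knn_clf.score(X_test, y_test))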
