GridSearchCV Grid Search in sklearn

Grid Search in sklearn

  • Again taking the kNN algorithm as an example: the value of k in kNN directly affects the accuracy of the result.
  • Grid search is one of the most commonly used approaches for this kind of hyperparameter tuning.

Let's first look at the plain version of the code:

import numpy as np
from sklearn import datasets

# load the handwritten digits data set
digits = datasets.load_digits()
X = digits.data
y = digits.target

# split into training and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=666)

# kNN classifier with manually chosen hyperparameters
from sklearn.neighbors import KNeighborsClassifier
sk_knn_clf = KNeighborsClassifier(n_neighbors=4, weights="uniform")
sk_knn_clf.fit(X_train, y_train)
sk_knn_clf.score(X_test, y_test)

Output:

0.9916666666666667

Here n_neighbors=4 was set manually.
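For comparison, here is a minimal sketch of the manual approach that grid search automates, reusing the train/test split from above (the loop itself is just an illustration, not part of the original post):

# manually try each k and keep the best score on the test split
best_score, best_k = 0.0, -1
for k in range(1, 11):
    knn = KNeighborsClassifier(n_neighbors=k, weights="uniform")
    knn.fit(X_train, y_train)
    score = knn.score(X_test, y_test)
    if score > best_score:
        best_score, best_k = score, k
print("best_k =", best_k, "best_score =", best_score)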

Using Grid Search in sklearn

# first define the parameter grid
param_grid = [
    {
        'weights': ['uniform'],
        'n_neighbors': [i for i in range(1, 11)]
    },
    {
        'weights': ['distance'],
        'n_neighbors': [i for i in range(1, 11)],
        'p': [i for i in range(1, 6)]
    }
]
knn_clf = KNeighborsClassifier()

from sklearn.model_selection import GridSearchCV
grid_search = GridSearchCV(knn_clf, param_grid)
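Before fitting, note how large the grid is: the first dictionary contributes 10 candidates and the second 10 × 5 = 50, so 60 parameter combinations in total; with the 3-fold cross validation used by default here, that means 180 fits, which matches the "Done 180 out of 180" line in the log further below. A quick sketch using sklearn's ParameterGrid to confirm the count:

from sklearn.model_selection import ParameterGrid
len(ParameterGrid(param_grid))    # 60 candidate parameter combinations
# 60 combinations * 3 CV folds = 180 model fits in total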

%%time
grid_search.fit(X_train, y_train)

Output:

CPU times: user 2min 1s, sys: 235 ms, total: 2min 1s
Wall time: 2min 2s
GridSearchCV(cv=None, error_score='raise',
       estimator=KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=5, p=2,
           weights='uniform'),
       fit_params=None, iid=True, n_jobs=1,
       param_grid=[{'weights': ['uniform'], 'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]}, {'weights': ['distance'], 'n_neighbors': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'p': [1, 2, 3, 4, 5]}],
       pre_dispatch='2*n_jobs', refit=True, return_train_score='warn',
       scoring=None, verbose=0)

grid_search.best_estimator_

KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=3, p=3,
           weights='distance')

grid_search.best_score_

Output:

0.98538622129436326

grid_search.best_params_

{'n_neighbors': 3, 'p': 3, 'weights': 'distance'}

knn_clf = grid_search.best_estimator_
knn_clf.score(X_test, y_test)

Output:

0.98333333333333328

The search finds n_neighbors=3, p=3, weights='distance' as the best combination, but it is very time-consuming: 2 minutes 1 second. Note that best_score_ (about 0.985) is the mean cross-validation score on the training data, while score(X_test, y_test) (about 0.983) is the accuracy on the held-out test set, which is why the two numbers differ slightly.
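To inspect how every parameter combination performed rather than just the winner, the cv_results_ attribute can be loaded into a DataFrame; a minimal sketch, assuming pandas is installed:

import pandas as pd
results = pd.DataFrame(grid_search.cv_results_)
# sort by cross-validation rank and show the best few combinations
results.sort_values("rank_test_score")[
    ["params", "mean_test_score", "std_test_score"]].head()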

Next, let's look at how the search runs in sklearn with n_jobs=-1 and verbose=2:

  • n_jobs specifies how many CPUs to use for the computation: a positive integer gives the exact number (values larger than the machine's CPU count are capped at the maximum), and -1 means use all available CPUs.
  • verbose prints progress information in real time during the search, which gives some visibility into the computation when the data set is large.

%%time
grid_search = GridSearchCV(knn_clf, param_grid, n_jobs=-1, verbose=2)
grid_search.fit(X_train, y_train)
grid_search.best_estimator_

Output:

[CV] n_neighbors=10, p=5, weights=distance …
[CV] … n_neighbors=10, p=4, weights=distance, total= 0.8s
[CV] … n_neighbors=10, p=5, weights=distance, total= 0.8s
[CV] … n_neighbors=10, p=5, weights=distance, total= 0.8s
[CV] … n_neighbors=10, p=5, weights=distance, total= 0.7s
CPU times: user 752 ms, sys: 320 ms, total: 1.07 s
Wall time: 1min 20s
[Parallel(n_jobs=-1)]: Done 180 out of 180 | elapsed: 1.3min finished
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           metric_params=None, n_jobs=1, n_neighbors=3, p=3,
           weights='uniform')

The wall time is now 1min 20s.

Let's look at the search result:

knn = grid_search.best_estimator_
param = grid_search.best_params_
param

{'n_neighbors': 3, 'weights': 'uniform'}
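Since refit=True by default, the fitted GridSearchCV object refits the best estimator on the whole training set and can be used as a classifier directly, without pulling out best_estimator_ first; a small sketch:

# score / predict on the fitted GridSearchCV delegate to the refit best estimator
grid_search.score(X_test, y_test)
grid_search.predict(X_test[:10])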
