Hands-on: Bayesian optimization for a GBDT (LightGBM) classification task, compared with random search

We use the Caravan insurance dataset for a GBDT classification task and compare Bayesian optimization against random search for hyperparameter tuning.

1. Data preprocessing

1.1 Load, clean, and split the data

import pandas as pd
import numpy as np

data = pd.read_csv('caravan-insurance-challenge.csv')
data.head()

[Figure: first rows of the raw dataset]

train = data[data['ORIGIN'] == 'train']
test = data[data['ORIGIN'] == 'test']

train_labels = np.array(train['CARAVAN'].astype(np.int32)).reshape((-1,))
test_labels = np.array(test['CARAVAN'].astype(np.int32)).reshape((-1,))

train = train.drop(['ORIGIN', 'CARAVAN'], axis = 1)
test = test.drop(['ORIGIN', 'CARAVAN'], axis = 1)

features = np.array(train)
test_features = np.array(test)
labels = train_labels[:]

print('Train shape:', train.shape)
print('Test shape:', test.shape)
train.head()

[Figure: first rows of the training features]

1.2 Label distribution

import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

plt.hist(labels, edgecolor = 'k')
plt.xlabel('Label'); plt.ylabel('Count'); plt.title('Count of Labels')

[Figure: histogram of label counts]
The classes are imbalanced, so we evaluate with the ROC curve; from here on, the goal is to make the ROC AUC as large as possible.
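As a quick check of the degree of imbalance (a minimal sketch, using the labels array built above):

# Fraction of positive labels in the training set
print('Positive rate: {:.4f}'.format(labels.mean()))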

2. Building a baseline model

2.1 A LightGBM model

import lightgbm as lgb
model = lgb.LGBMClassifier()
model

LGBMClassifier(boosting_type='gbdt', class_weight=None, colsample_bytree=1.0, importance_type='split', learning_rate=0.1, max_depth=-1, min_child_samples=20, min_child_weight=0.001, min_split_gain=0.0, n_estimators=100, n_jobs=-1, num_leaves=31, objective=None, random_state=None, reg_alpha=0.0, reg_lambda=0.0, silent=True, subsample=1.0, subsample_for_bin=200000, subsample_freq=0)

2.2 Performance with default parameters

Starting from this baseline model, the task is to push the AUC as high as possible.

from sklearn.metrics import roc_auc_score
from timeit import default_timer as timer

start = timer()
model.fit(features, labels)
train_time = timer() - start

predictions = model.predict_proba(test_features)[:, 1]
auc = roc_auc_score(test_labels, predictions)

print('The baseline score on the test set is {:.4f}.'.format(auc))
print('The baseline training time is {:.4f} seconds.'.format(train_time))

The baseline score on the test set is 0.7092.
The baseline training time is 0.3402 seconds.

3. Defining the parameter space

RandomizedSearchCV has no early-stopping support, so we will write the random search ourselves.

Some parameters should be sampled on a log scale, e.g. the learning rate: its effect compounds multiplicatively across boosting rounds, so the usual practice is a log-uniform distribution. Other parameters only make sense conditional on another parameter's value (e.g. subsample depends on the boosting type).
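To see what the log scale buys us: geometric spacing puts equally many candidates in every multiplicative interval, whereas a linear grid over [0.005, 0.2] would put most of its mass above 0.03. A minimal sketch with the same endpoints as the grid below:

lr = np.logspace(np.log(0.005), np.log(0.2), base=np.exp(1), num=800)
mid = np.sqrt(0.005 * 0.2)    # geometric mean of the endpoints, about 0.032
print((lr < mid).mean())      # ~0.5: half of the candidates lie below it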

import random

param_grid = {'class_weight': [None, 'balanced'],
             'boosting_type': ['gbdt', 'goss', 'dart'],
             'num_leaves': list(range(30, 150)),
             'learning_rate': list(np.logspace(np.log(0.005), np.log(0.2), base=np.exp(1), num=800)),
             'subsample_for_bin': list(range(20000, 300000, 20000)),
             'min_child_samples': list(range(20, 500, 5)),
             'reg_alpha': list(np.linspace(0, 1)),
             'reg_lambda': list(np.linspace(0, 1)),
             'colsample_bytree': list(np.linspace(0.6, 1, 10))}
subsample_dist = list(np.linspace(0.5, 1, 100))

# Distribution of the learning-rate candidates
plt.hist(param_grid['learning_rate'], color = 'r', edgecolor = 'k')
plt.xlabel('Learning Rate'); plt.ylabel('Count'); plt.title('Learning Rate Distribution', size =18)

[Figure: histogram of the learning-rate candidates]

# Distribution of the num_leaves candidates
plt.hist(param_grid['num_leaves'], color = 'm', edgecolor = 'k')
plt.xlabel('Number of Leaves'); plt.ylabel('Count'); plt.title('Number of Leaves Distribution')

[Figure: histogram of the num_leaves candidates]

3.1 Sampling from the parameter space

{key: random.sample(value, 2) for key, value in param_grid.items()}

[Output: two random draws for each hyperparameter]

params = {key: random.sample(value, 1)[0] for key, value in param_grid.items()}
params['subsample'] = random.sample(subsample_dist, 1)[0] if params['boosting_type'] != 'goss' else 1.0
params

{'class_weight': 'balanced', 'boosting_type': 'gbdt',
'num_leaves': 149, 'learning_rate': 0.024474734290096542,
'subsample_for_bin': 200000, 'min_child_samples': 110,
'reg_alpha': 0.8163265306122448, 'reg_lambda': 0.26530612244897955,
'colsample_bytree': 0.6888888888888889, 'subsample': 0.8282828282828283}

4. Random search

4.1 Cross-validating LightGBM

# Create a lgb dataset
train_set = lgb.Dataset(features, label = labels)

r = lgb.cv(params, train_set, num_boost_round=10000, nfold=10, metrics='auc',
          early_stopping_rounds = 80, verbose_eval = False, seed = 42)
# early_stopping_rounds = 80: stop if the metric has not improved for 80 consecutive rounds

r_best = np.max(r['auc-mean'])                        # Highest mean score
r_best_std = r['auc-stdv'][np.argmax(r['auc-mean'])]  # Standard deviation of the best score

print('The maximum ROC AUC on the validation set was {:.5f} with std of {:.5f}.'.format(r_best, r_best_std))
print('The ideal number of iterations was {}.'.format(np.argmax(r['auc-mean']) + 1))

The maximum ROC AUC on the validation set was 0.75553 with std of 0.03082.
The ideal number of iterations was 73.

# Number of search iterations and CV folds, used throughout the search
Max_evals = 200
N_folds = 3

# DataFrame to hold one row per random-search iteration
random_results = pd.DataFrame(columns = ['loss', 'params', 'iteration', 'estimators', 'time'],
                              index = list(range(Max_evals)))

4.2 Objective Function

We use AUC as our objective; since the search minimizes, the loss is defined as 1 - AUC.

def random_objective(params, iteration, n_folds = N_folds):
	start = timer()
	cv_results = lgb.cv(params, train_set, num_boost_round = 10000, nfold = n_folds,
                       early_stopping_rounds = 80, metrics = 'auc', seed = 42)
	end = timer()
	best_score = np.max(cv_results['auc-mean'])
	loss = 1 - best_score
	n_estimators = int(np.argmax(cv_results['auc-mean']) + 1)
	return [loss, params, iteration, n_estimators, end-start]

4.3 執行隨機調參

random.seed(42)

for i in range(Max_evals):
	params = {key: random.sample(value, 1)[0] for key, value in param_grid.items()}

	if params['boosting_type'] == 'goss':
		params['subsample'] = 1.0
	else:
		params['subsample'] = random.sample(subsample_dist, 1)[0]

	results_list = random_objective(params, i)
	random_results.loc[i, :] = results_list

random_results.sort_values('loss', ascending = True, inplace = True)
random_results.reset_index(inplace = True, drop = True)
random_results.head()

[Figure: top rows of random_results, sorted by loss]

4.4 Random search results

random_results.loc[0, 'params']

{'class_weight': None, 'boosting_type': 'dart', 'num_leaves': 112,
'learning_rate': 0.020631460653340816, 'subsample_for_bin': 160000,
'min_child_samples': 220, 'reg_alpha': 0.9795918367346939,
'reg_lambda': 0.08163265306, 'colsample_bytree': 0.6, 'subsample': 0.7929292929292929}

best_random_params = random_results.loc[0, 'params'].copy()
best_random_estimators = int(random_results.loc[0, 'estimators'])
best_random_model = lgb.LGBMClassifier(n_estimators=best_random_estimators, n_jobs=-1,
                                      objective='binary', **best_random_params, random_state=42)
best_random_model.fit(features, labels)
predictions = best_random_model.predict_proba(test_features)[:, 1]

print('The best model from random search scores {:.4f} on the test data.'.format(roc_auc_score(test_labels, predictions)))
print('This was achieved using {} search iterations.'.format(random_results.loc[0, 'iteration']))

The best model from random search scores 0.7179 on the test data.
This was achieved using 38 search iterations.

5. Bayesian optimization

5.1 Objective Function

import csv
from hyperopt import STATUS_OK
from timeit import default_timer as timer

def objective(params, n_folds = N_folds):
    global ITERATION
    ITERATION += 1
    
    subsample = params['boosting_type'].get('subsample', 1.0)
    params['boosting_type'] = params['boosting_type']['boosting_type']
    params['subsample'] = subsample
    
    for parameter_name in ['num_leaves', 'subsample_for_bin', 'min_child_samples']:
        params[parameter_name] = int(params[parameter_name])
    start = timer()
    cv_results = lgb.cv(params, train_set, num_boost_round = 10000, nfold = n_folds,
                       early_stopping_rounds = 80, metrics = 'auc', seed = 42)
    run_time = timer() - start
    
    best_score = np.max(cv_results['auc-mean'])
    loss = 1 - best_score
    n_estimators = int(np.argmax(cv_results['auc-mean']) + 1)
    
    # Append this trial's result to the CSV log and close the file handle
    of_connection = open(out_file, 'a')
    writer = csv.writer(of_connection)
    writer.writerow([loss, params, ITERATION, n_estimators, run_time])
    of_connection.close()
    
    return {'loss': loss, 'params': params, 'iteration': ITERATION,
           'estimators': n_estimators, 'train_time': run_time, 'status': STATUS_OK}
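hyperopt itself only requires the returned dictionary to contain 'loss' and 'status'; the remaining keys ('params', 'iteration', 'estimators', 'train_time') are simply stored in the Trials object so the search can be analyzed afterwards.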

5.2 Domain Space

5.2.1 Learning-rate distribution

from hyperopt import hp
from hyperopt.pyll.stochastic import sample

learning_rate = {'learning_rate': hp.loguniform('learning_rate', np.log(0.005), np.log(0.2))}

learning_rate_dist = []
for _ in range(10000):
    learning_rate_dist.append(sample(learning_rate)['learning_rate'])
    
plt.figure(figsize = (8, 6))
sns.kdeplot(learning_rate_dist, color = 'r', linewidth = 2, shade = True)
plt.title('Learning Rate Distribution', size = 18)
plt.xlabel('Learning Rate', size = 16)
plt.ylabel('Density', size = 16)

[Figure: KDE of the sampled learning rates]

5.2.2 num_leaves distribution

Here is what quniform looks like:

num_leaves = {'num_leaves': hp.quniform('num_leaves', 30, 150, 1)}
num_leaves_dist = []
for _ in range(10000):
    num_leaves_dist.append(sample(num_leaves)['num_leaves'])
    
plt.figure(figsize = (8,6))
sns.kdeplot(num_leaves_dist, linewidth = 2, shade = True)
plt.title('Number of Leaves Distribution', size = 18); plt.xlabel('Number of Leaves', size = 16); plt.ylabel('Density', size = 16)

[Figure: KDE of the sampled num_leaves values]

5.2.3 boosting_type

# Note: every hp.uniform label must be unique across the space, hence the per-branch names
boosting_type = {'boosting_type': hp.choice('boosting_type',
                                           [{'boosting_type': 'gbdt', 'subsample': hp.uniform('gbdt_subsample', 0.5, 1)},
                                            {'boosting_type': 'dart', 'subsample': hp.uniform('dart_subsample', 0.5, 1)},
                                            {'boosting_type': 'goss', 'subsample': 1.0}])}
params = sample(boosting_type)
params

{'boosting_type': {'boosting_type': 'gbdt', 'subsample': 0.659771523544347}}

subsample = params['boosting_type'].get('subsample', 1.0)

params['boosting_type'] = params['boosting_type']['boosting_type']
params['subsample'] = subsample
params

{'boosting_type': 'gbdt', 'subsample': 0.659771523544347}

5.2.4 The full search space

space = {'class_weight': hp.choice('class_weight', [None, 'balanced']),
        'boosting_type': hp.choice('boosting_type', [{'boosting_type': 'gbdt', 'subsample': hp.uniform('gbdt_subsample', 0.5, 1)},
                                                    {'boosting_type': 'dart', 'subsample': hp.uniform('dart_subsample', 0.5, 1)},
                                                    {'boosting_type': 'goss', 'subsample': 1.0}]),
        'num_leaves': hp.quniform('num_leaves', 30, 150, 1),
        'learning_rate': hp.loguniform('learning_rate', np.log(0.01), np.log(0.2)),
        'subsample_for_bin': hp.quniform('subsample_for_bin', 20000, 300000, 20000),
        'min_child_samples': hp.quniform('min_child_samples', 20, 500, 5),
        'reg_alpha': hp.uniform('reg_alpha', 0.0, 1.0),
        'reg_lambda': hp.uniform('reg_lambda', 0.0, 1.0),
        'colsample_bytree': hp.uniform('colsample_bytree', 0.6, 1.0)}

Let's draw one sample from the full space and unpack the nested boosting_type, as in 5.2.3:

x = sample(space)
subsample = x['boosting_type'].get('subsample', 1.0)
x['boosting_type'] = x['boosting_type']['boosting_type']
x['subsample'] = subsample
x

{'boosting_type': 'goss',
'class_weight': 'balanced',
'colsample_bytree': 0.6765996025430209,
'learning_rate': 0.13232409656402305,
'min_child_samples': 330.0,
'num_leaves': 103.0,
'reg_alpha': 0.5849415659238283,
'reg_lambda': 0.4787001151843524,
'subsample_for_bin': 100000.0,
'subsample': 1.0}

Note that quniform returns floats (min_child_samples: 330.0, num_leaves: 103.0); this is why the objective function casts these parameters to int before handing them to LightGBM.

5.3 Preparing the Bayesian optimization

from hyperopt import tpe
tpe_algorithm = tpe.suggest

from hyperopt import Trials
bayes_trials = Trials()

# Save every trial's result to a CSV file as the search runs

out_file = 'gbm_trials.csv'
of_connection = open(out_file, 'w')
writer = csv.writer(of_connection)

writer.writerow(['loss', 'params', 'iteration', 'estimators', 'train_time'])
of_connection.close()
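Before launching the full search, a single smoke-test call of the objective can catch setup mistakes early (a sketch; it runs one full cross-validation and writes one row to gbm_trials.csv, so re-run the header cell above before the real search):

ITERATION = 0
print(objective(sample(space)))  # one CV evaluation of a random point from the space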

5.4 Bayesian optimization results

from hyperopt import fmin

# Global variable
global  ITERATION

ITERATION = 0

# Run optimization
best = fmin(fn = objective, space = space, algo = tpe.suggest, 
            max_evals = Max_evals, trials = bayes_trials, rstate = np.random.RandomState(42))

# Sort the trials with lowest loss (highest AUC) first
bayes_trials_results = sorted(bayes_trials.results, key = lambda x: x['loss'])
bayes_trials_results[0]

{'loss': 0.23670902556787576,
'params': {'boosting_type': 'dart',
'class_weight': None,
'colsample_bytree': 0.6777142263201398,
'learning_rate': 0.10896162558676845,
'min_child_samples': 200,
'num_leaves': 50,
'reg_alpha': 0.75201502515923,
'reg_lambda': 0.2500317899561674,
'subsample_for_bin': 220000,
'subsample': 0.8299430626318801},
'iteration': 109,
'estimators': 39,
'train_time': 135.7437369420004,
'status': 'ok'}
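fmin returns the best point in terms of raw hyperopt labels (hp.choice entries come back as indices); hyperopt's space_eval can map them back to parameter values. Note the nested boosting_type dict then still needs the unpacking shown in 5.2.3:

from hyperopt import space_eval
space_eval(space, best)  # decode choice indices back into actual parameter values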

5.4.1 Saved results

results = pd.read_csv('gbm_trials.csv')
results.sort_values('loss', ascending = True, inplace = True)
results.reset_index(inplace = True, drop = True)
print(results.shape)
results.head()

[Figure: top rows of the saved trial results, sorted by loss]

import ast
ast.literal_eval(results.loc[0, 'params'])
# For safety, convert strings with ast.literal_eval() rather than eval()

{'boosting_type': 'dart',
'class_weight': None,
'colsample_bytree': 0.6777142263201398,
'learning_rate': 0.10896162558676845,
'min_child_samples': 200,
'num_leaves': 50,
'reg_alpha': 0.75201502515923,
'reg_lambda': 0.2500317899561674,
'subsample_for_bin': 220000,
'subsample': 0.8299430626318801}
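A quick illustration of the difference (hypothetical strings):

import ast
ast.literal_eval("{'a': 1}")            # fine: plain Python literals are parsed
# ast.literal_eval("__import__('os')")  # raises ValueError instead of executing code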

5.4.2 Performance on the test set

best_bayes_estimators = int(results.loc[0, 'estimators'])
best_bayes_params = ast.literal_eval(results.loc[0, 'params']).copy()

best_bayes_model = lgb.LGBMClassifier(n_estimators=best_bayes_estimators, n_jobs=-1,
                                     objective='binary', **best_bayes_params, random_state=42)
best_bayes_model.fit(features, labels)

LGBMClassifier(boosting_type='dart', class_weight=None,
colsample_bytree=0.6777142263201398, importance_type='split',
learning_rate=0.10896162558676845, max_depth=-1,
min_child_samples=200, min_child_weight=0.001, min_split_gain=0.0,
n_estimators=39, n_jobs=-1, num_leaves=50, objective='binary',
random_state=42, reg_alpha=0.75201502515923,
reg_lambda=0.2500317899561674, silent=True,
subsample=0.8299430626318801, subsample_for_bin=220000,
subsample_freq=0)

preds = best_bayes_model.predict_proba(test_features)[:, 1]
print('The best model from Bayes optimization scores {:.4f} AUC ROC on the test set.'.format(roc_auc_score(test_labels, preds)))
print('This was achieved after {} search iterations.'.format(results.loc[0, 'iteration']))

The best model from Bayes optimization scores 0.7275 AUC ROC on the test set.
This was achieved after 109 search iterations.

6. Random search vs. Bayesian optimization

best_random_params['method'] = 'random search'
best_bayes_params['method'] = 'Bayesian optimization'
best_params = pd.concat([pd.DataFrame(best_bayes_params, index = [0]),
                         pd.DataFrame(best_random_params, index = [0])], ignore_index = True)
best_params

[Figure: best parameters of both methods, side by side]

6.1 Visualizing the tuning process

random_params = pd.DataFrame(columns = list(random_results.loc[0, 'params'].keys()),
                            index = list(range(len(random_results))))
for i, params in enumerate(random_results['params']):
    random_params.loc[i, :] = list(params.values())
    
random_params['loss'] = random_results['loss']
random_params['iteration'] = random_results['iteration']
random_params.head()

[Figure: first rows of random_params]

bayes_params = pd.DataFrame(columns = list(ast.literal_eval(results.loc[0,'params']).keys()),
                           index = list(range(len(results))))
for i, params in enumerate(results['params']):
    bayes_params.loc[i, :] = list(ast.literal_eval(params).values())
    
bayes_params['loss'] = results['loss']
bayes_params['iteration'] = results['iteration']
bayes_params.head()

[Figure: first rows of bayes_params]

6.2 Learning-rate comparison

plt.figure(figsize = (20, 8))
plt.rcParams['font.size'] = 18

sns.kdeplot(learning_rate_dist, label = 'Sampling Distribution', linewidth = 2)
sns.kdeplot(random_params['learning_rate'], label = 'Random Search', linewidth = 2)
sns.kdeplot(bayes_params['learning_rate'], label = 'Bayes Optimization', linewidth=2)
plt.legend()
plt.xlabel('Learning Rate')
plt.ylabel('Density')
plt.title('Learning Rate Distribution')

[Figure: learning-rate KDEs for the sampling distribution, random search, and Bayesian optimization]

6.3 Boosting type comparison

fig, axs = plt.subplots(1, 2, sharey = True, sharex = True)

random_params['boosting_type'].value_counts().plot.bar(ax=axs[0], figsize=(14,6),
                                                      color='orange', title='Random Search Boosting Type')
bayes_params['boosting_type'].value_counts().plot.bar(ax=axs[1], figsize= (14,6),
                                                     color='green', title='Bayes Optimization Boosting Type')

[Figure: boosting-type counts for both methods]

print('Random Search boosting type percentages:')
print(100 * random_params['boosting_type'].value_counts() / len(random_params))

print('Bayes Optimization boosting type percentages:')
print(100 * bayes_params['boosting_type'].value_counts() / len(bayes_params))

Random Search boosting type percentages:
dart 36.5
gbdt 33.0
goss 30.5
Name: boosting_type, dtype: float64

Bayes Optimization boosting type percentages:
dart 54.5
gbdt 29.0
goss 16.5
Name: boosting_type, dtype: float64
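Note how the Bayesian search concentrates on 'dart': it is also the boosting type of the best model found by both searches, so TPE is spending most of its budget in the region that has scored well.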

6.4 Numeric parameter comparison

for i, hyper in enumerate(random_params.columns):
    if hyper not in ['class_weight','boosting_type','iteration','subsample','metric','verbose']:
        plt.figure(figsize = (14, 6))
        if hyper != 'loss':
            sns.kdeplot([sample(space[hyper]) for _ in range(1000)], label = 'Sampling Distribution')
        sns.kdeplot(random_params[hyper], label = 'Random Search')
        sns.kdeplot(bayes_params[hyper], label = 'Bayes Optimization')
        plt.legend(loc = 1)
        plt.title('{} Distribution'.format(hyper))
        plt.xlabel('{}'.format(hyper))
        plt.ylabel('Density')

[Figures: sampling, random-search, and Bayesian-optimization distributions for each numeric hyperparameter]

7. How the Bayesian search's parameters evolve

7.1 Boosting type over iterations

bayes_params['boosting_int'] = bayes_params['boosting_type'].replace({'gbdt':1,'goss':2,'dart':3})
plt.plot(bayes_params['iteration'], bayes_params['boosting_int'], 'ro')
plt.yticks([1, 2, 3], ['gbdt', 'goss', 'dart'])
plt.xlabel('Iteration')
plt.title('Boosting Type over Search')

[Figure: boosting type chosen at each iteration]

7.2 Learning rate, number of leaves, etc. over iterations

plt.figure(figsize = (14, 14))
colors = ['red', 'blue', 'orange', 'green']

for i, hyper in enumerate(['colsample_bytree', 'learning_rate', 'min_child_samples', 'num_leaves']):
    plt.subplot(2, 2, i+1)
    sns.regplot(x = 'iteration', y = hyper, data = bayes_params, color = colors[i])
    # plt.xlabel('Iteration')
    # plt.ylabel('{}'.format(hyper))
    plt.title('{} over Search'.format(hyper))
plt.tight_layout()

[Figure: colsample_bytree, learning_rate, min_child_samples, and num_leaves vs. iteration]

7.3 reg_alpha, reg_lambda, and subsample_for_bin over iterations

fig, axes = plt.subplots(1, 3, figsize = (18, 6))
for i, hyper in enumerate(['reg_alpha', 'reg_lambda', 'subsample_for_bin']):
    sns.regplot(x = 'iteration', y = hyper, data = bayes_params, ax = axes[i])
    axes[i].set(title = '{} over Search'.format(hyper))
plt.tight_layout()

[Figure: reg_alpha, reg_lambda, and subsample_for_bin vs. iteration]

7.4 Validation loss: random search vs. Bayesian optimization

scores = pd.DataFrame({'ROC AUC': 1 - random_params['loss'],
                       'iteration': random_params['iteration'],
                      'search': 'random'})
scores = pd.concat([scores, pd.DataFrame({'ROC AUC': 1 - bayes_params['loss'],
                                          'iteration': bayes_params['iteration'],
                                          'search': 'Bayes'})])
scores['ROC AUC'] = scores['ROC AUC'].astype(np.float32)
scores['iteration'] = scores['iteration'].astype(np.int32)
scores.head()

[Figure: first rows of the scores DataFrame]

plt.figure(figsize = (18, 6))

plt.subplot(1, 2, 1)
plt.hist(1 - random_results['loss'].astype(np.float32), label = 'Random Search', edgecolor = 'k')
plt.xlabel('Validation Roc Auc')
plt.ylabel('Count')
plt.title('Random Search Validation Scores')
plt.xlim(0.73, 0.765)

plt.subplot(1, 2, 2)
plt.hist(1 - bayes_params['loss'], label = 'Bayes Optimization', edgecolor = 'k')
plt.xlabel('Validation Roc Auc')
plt.ylabel('Count')
plt.title('Bayes Optimization Validation Scores')
plt.xlim(0.73, 0.765)

[Figure: histograms of validation scores for both methods]

sns.lmplot(x = 'iteration', y = 'ROC AUC', hue = 'search', data = scores, height = 8)
plt.xlabel('Iteration')
plt.ylabel('ROC AUC')
plt.title('ROC AUC versus Iteration')

[Figure: ROC AUC vs. iteration for both methods]

7.5 Saving the results

import json
with open('trials.json', 'w') as f:
    f.write(json.dumps(bayes_trials.results))
    
bayes_params.to_csv('bayes_params.csv', index = False)
random_params.to_csv('random_params.csv', index = False)
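To pick the analysis up again in a later session, everything can simply be reloaded (a minimal sketch):

import json
with open('trials.json', 'r') as f:
    trial_results = json.load(f)

bayes_params = pd.read_csv('bayes_params.csv')
random_params = pd.read_csv('random_params.csv')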