Continuing from the previous post.
IV. Model Experiments

Here we add some supplementary notes on data analysis.

1. Data analysis and visualization
- Get comfortable with pandas operations
- Visual analysis with matplotlib / seaborn
  - (Data Science Study Notes 62) A detailed guide to seaborn's kdeplot, rugplot, distplot and jointplot
  - matplotlib: drawing several curves on one set of axes, and drawing across multiple subplots

Making the patterns in the data and features visible is also very important.
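As a minimal sketch of the two matplotlib techniques mentioned above (synthetic data, matplotlib only; the seaborn functions from the linked article work on the same axes objects):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)

# several curves on the same axes
fig, ax = plt.subplots()
for k in (1, 2, 3):
    ax.plot(x, np.sin(k * x), label="sin(%dx)" % k)
ax.legend()
fig.savefig("curves.png")

# multiple subplots, one distribution per panel
fig, axes = plt.subplots(1, 2, figsize=(8, 3))
axes[0].hist(rng.normal(size=1000), bins=30)
axes[0].set_title("normal")
axes[1].hist(rng.exponential(size=1000), bins=30)
axes[1].set_title("exponential")
fig.savefig("dists.png")
```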
The experiment material below is presented in three parts: cross-validation, model evaluation, and model training.
2. Cross-validation
1) The tools fall into three groups. The first is for quick experiments and learning: it returns cross-validation results directly.
```python
from sklearn import datasets, svm
from sklearn.model_selection import cross_val_score, ShuffleSplit, cross_validate, train_test_split

iris = datasets.load_iris()
clf = svm.SVC(kernel='linear', C=1, random_state=0)

# simple split, without evaluating the model
X_train, X_test, y_train, y_test = train_test_split(iris.data, iris.target, test_size=0.4, random_state=0)

# simplest usage: cross-validated model performance
scores = cross_val_score(clf, iris.data, iris.target, cv=5)
cv = ShuffleSplit(n_splits=5, test_size=0.3, random_state=0)
cross_val_score(clf, iris.data, iris.target, cv=cv)

# several metrics can be evaluated at once; the return value is richer
scoring = ['precision_macro', 'recall_macro']
scores = cross_validate(clf, iris.data, iris.target, scoring=scoring, cv=5)
```
2) The next group of tools generates index arrays: they produce the train/test splits behind the different cross-validation strategies.
Leave-one-out and leave-P-out seem theoretically important, but in the projects seen so far, plain K-fold cross-validation has been enough.
```python
import numpy as np
from sklearn.model_selection import KFold, LeaveOneOut, LeavePOut, ShuffleSplit

X = np.arange(8)  # any indexable data works; only index arrays are generated

# K-fold
kf = KFold(n_splits=5)
for train, test in kf.split(X):
    print("%s %s" % (train, test))

# leave-one-out
loo = LeaveOneOut()
for train, test in loo.split(X):
    print("%s %s" % (train, test))

# leave-P-out
lpo = LeavePOut(p=2)
for train, test in lpo.split(X):
    print("%s %s" % (train, test))

# random permutation splits
ss = ShuffleSplit(n_splits=3, test_size=0.25, random_state=0)
for train_index, test_index in ss.split(X):
    print("%s %s" % (train_index, test_index))
```
3) Stratified cross-validation iterators based on class labels.
These matter for real projects: splits should preserve the class proportions in each fold.
```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, StratifiedShuffleSplit

X = np.ones(10)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

skf = StratifiedKFold(n_splits=3)
for train, test in skf.split(X, y):
    print("%s %s" % (train, test))
```
In addition, there is a family of cross-validation iterators for grouped data, such as GroupKFold.
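As a small illustration of the grouped iterators (synthetic data; a group might be one customer or account), GroupKFold guarantees that every group falls entirely on one side of each split:

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(20).reshape(10, 2)
y = np.array([0, 1] * 5)
groups = np.array([1, 1, 2, 2, 3, 3, 4, 4, 5, 5])  # e.g. one customer per group

gkf = GroupKFold(n_splits=5)
for train, test in gkf.split(X, y, groups=groups):
    # no group ever straddles the train/test boundary
    assert set(groups[train]).isdisjoint(groups[test])
    print(train, test)
```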
Splitting the data properly for cross-validation is essential. The first group of tools hides the training loop from you, so it is only suitable for learning.
For our project, to keep the cross-validation protocol uniform, we use 5-fold CV keyed on the ID: taking ID % 5 is effectively writing a simple split method by hand.
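That convention can be sketched as follows (the helper name `split_by_id` is illustrative, not from the project's code): taking the ID modulo 5 gives every record a stable fold assignment, so every rerun reproduces the same split.

```python
import numpy as np

def split_by_id(ids, fold, n_folds=5):
    """Return (train_ids, test_ids) for one fold of an ID-modulo split."""
    ids = np.asarray(ids)
    test_mask = ids % n_folds == fold
    return ids[~test_mask], ids[test_mask]

ids = np.arange(100, 120)
train_ids, test_ids = split_by_id(ids, fold=0)
# every test ID is congruent to the fold number mod 5, train IDs are not
assert all(i % 5 == 0 for i in test_ids)
assert all(i % 5 != 0 for i in train_ids)
```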
3. Model evaluation
Model training always requires specifying an evaluation method, so let us introduce the options first:
https://sklearn.apachecn.org/docs/master/32.html
To summarize what follows: evaluation metrics can be passed as parameters of models (LR, SVM, XGBoost), as parameters of model-evaluation tools such as cross_val_score, or as arguments to the functions in the metrics module. They are used everywhere, so they must be familiar.
There are 3 different APIs for evaluating the quality of a model's predictions:

- Estimator score method: every estimator has a `score` method that provides a default evaluation criterion for the problem it solves. It is not discussed on that page, but in each estimator's own documentation.
- Scoring parameter: model-evaluation tools that use cross-validation (such as `model_selection.cross_val_score` and `model_selection.GridSearchCV`) rely on an internal scoring strategy, discussed in the section on the `scoring` parameter: defining model evaluation rules.
- Metric functions: the `metrics` module implements functions that assess prediction error for specific purposes. These are covered in detail in the sections on classification metrics, multilabel ranking metrics, regression metrics, and clustering metrics.
Two things to take away from this part: how to name the metric you want, and what the evaluation metrics for each kind of problem actually mean.
| Scoring | Function | Comment |
|---|---|---|
| **Classification** | | |
| 'accuracy' | metrics.accuracy_score | |
| 'average_precision' | metrics.average_precision_score | |
| 'f1' | metrics.f1_score | for binary targets |
| 'f1_micro' | metrics.f1_score | micro-averaged |
| 'f1_macro' | metrics.f1_score | macro-averaged |
| 'neg_log_loss' | metrics.log_loss | requires predict_proba support |
| 'precision' etc. | metrics.precision_score | suffixes apply as with 'f1' |
| 'recall' etc. | metrics.recall_score | suffixes apply as with 'f1' |
| 'roc_auc' | metrics.roc_auc_score | |
| **Regression** | | |
| 'explained_variance' | metrics.explained_variance_score | |
| 'neg_mean_absolute_error' | metrics.mean_absolute_error | |
| 'neg_mean_squared_error' | metrics.mean_squared_error | |
| 'neg_mean_squared_log_error' | metrics.mean_squared_log_error | |
| 'neg_median_absolute_error' | metrics.median_absolute_error | |
| 'r2' | metrics.r2_score | |
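To make the correspondence concrete: the string passed to `scoring=` and the matching `metrics` function compute the same quantity. A minimal check on a two-class subset of iris (two classes so that 'roc_auc' is well defined; the scorer uses the classifier's decision_function under the hood):

```python
from sklearn import datasets, svm, metrics
from sklearn.model_selection import cross_val_score

iris = datasets.load_iris()
# keep only classes 0 and 1 so 'roc_auc' applies
mask = iris.target < 2
X, y = iris.data[mask], iris.target[mask]

clf = svm.SVC(kernel='linear', C=1, random_state=0)
# scoring by string name, inside cross-validation
scores = cross_val_score(clf, X, y, cv=5, scoring='roc_auc')

# the same metric, called directly from the metrics module
clf.fit(X, y)
auc = metrics.roc_auc_score(y, clf.decision_function(X))
# both should be close to 1.0 on this easily separable subset
print(scores.mean(), auc)
```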
4. Model training
The training framework is built from cross-validation, the estimator with its parameters, and evaluation. Below is a complete example of such a framework.
```python
from os import path, makedirs
import gc
import numpy as np
import pandas as pd
import xgboost as xgb
from sklearn import metrics
import util  # project-local helpers; give_prediction and save_model are shown below

def xgb_train_process(trail_code, valid_id, selected_modeling_data, delq_data, delq_label):
    if not path.exists(trail_code):
        makedirs(trail_code)
    print(len(valid_id))
    print(len(selected_modeling_data.columns))
    train_result = list()
    feature_rank_all = dict()
    for i in range(5):
        gc.collect()
        train_id = np.array(valid_id)
        sub_train_id = train_id[train_id % 5 != i]
        sub_test_id = train_id[train_id % 5 == i]
        print(len(sub_train_id), len(sub_test_id))
        xgtrain = selected_modeling_data.reindex(sub_train_id)
        xgtrain = xgb.DMatrix(xgtrain, label=delq_data.reindex(sub_train_id)[delq_label].values)
        gc.collect()
        xgtest = selected_modeling_data.reindex(sub_test_id)
        xgtest = xgb.DMatrix(xgtest, label=delq_data.reindex(sub_test_id)[delq_label].values)
        gc.collect()
        params = {'booster': 'gbtree',
                  'objective': 'binary:logistic',  # 'binary:logitraw' is the raw-score alternative
                  'tree_method': 'hist',
                  'max_depth': 3,            # maximum tree depth
                  'subsample': 0.70,         # fraction of samples drawn to train each tree
                  'colsample_bytree': 0.70,  # fraction of features sampled when building each tree
                  'silent': 1,               # suppress running output
                  'eta': 0.02,               # learning rate
                  'seed': 200,               # random seed
                  'eval_metric': 'auc',
                  'alpha': 1,
                  'lambda': 1.6,
                  # 'gamma': 0.1,
                  'min_child_weight': 100,
                  # 'nthread': 32
                  }
        # imbalanced data: weight the positive class by the negative/positive ratio
        params['scale_pos_weight'] = float((xgtrain.get_label() == 0).sum()) / (xgtrain.get_label() == 1).sum()
        params['base_score'] = np.mean(xgtrain.get_label())
        watchlist = [(xgtrain, 'train'), (xgtest, 'test')]
        # train the model
        num_rounds = 1000
        model = xgb.train(params, xgtrain, num_rounds, watchlist, early_stopping_rounds=100, verbose_eval=200)
        # predict
        xgb_sub_train = util.give_prediction(model, xgtrain, sub_train_id)
        xgb_sub_test = util.give_prediction(model, xgtest, sub_test_id)
        # this fold predicts on all the data, but only the test-fold predictions are meaningful
        pd.concat([xgb_sub_train, xgb_sub_test]).to_csv(trail_code + '/score_' + str(num_rounds) + '_' + str(i) + '.csv')
        # record this fold's training result
        train_result.append([i, num_rounds, model.best_iteration,
                             metrics.roc_auc_score(xgtrain.get_label(), xgb_sub_train['xgb'].values),
                             metrics.roc_auc_score(xgtest.get_label(), xgb_sub_test['xgb'].values)])
        # accumulate feature importance across folds
        model_feature_rank = model.get_fscore()
        for key in model_feature_rank:
            if key not in feature_rank_all:
                feature_rank_all[key] = model_feature_rank[key]
            else:
                feature_rank_all[key] += model_feature_rank[key]
        # save the model
        util.save_model(model, trail_code + '/' + str(i), save_feature=False)
    open(trail_code + '/feature.txt', 'w').writelines([x + '\n' for x in selected_modeling_data])
    open(trail_code + '/train_result.txt', 'w').writelines([','.join([str(xx) for xx in x]) + '\n' for x in train_result])
    feature_rank_all = sorted(feature_rank_all.items(), key=lambda x: x[1], reverse=True)
    feature_rank_all_rec = [' '.join([str(idx), str(item)]) + '\n' for idx, item in enumerate(feature_rank_all)]
    open(trail_code + '/feature_rank_all.txt', 'w').writelines(feature_rank_all_rec)
```
```python
def give_prediction(model, xgdata, accountid):
    return pd.DataFrame({'loanaccountid': accountid, 'xgb': model.predict(xgdata)}).set_index('loanaccountid')

def save_model(model, filepath, save_feature=True):
    model.dump_model(filepath + '_tree.txt')
    model.save_model(filepath + '_model.model')
    if save_feature:
        open(filepath + '_feature.txt', 'w').writelines([x + '\n' for x in model.feature_names])
```
For any project it helps to keep a util module holding commonly used functions such as these.
Finally, retrieving the out-of-fold predictions and computing the AUC:

```python
xgb_score = util.get_5cv_score("path")  # returns a DataFrame
delq = delq_data.reindex(xgb_score.index)[label7].tolist()
auc = metrics.roc_auc_score(delq, xgb_score['xgb'])
```
```python
from os import listdir
import pandas as pd

def get_5cv_score(trail_code):
    files = [x for x in listdir(trail_code) if x[0:5] == 'score']
    rounds_set = set([x.split('.')[0].split('_')[1] for x in files])
    folds_set = set([x.split('.')[0].split('_')[2] for x in files])
    if len(rounds_set) == 1:
        return get_5cv_score_sub(trail_code, list(rounds_set)[0], folds_set)
    else:
        xgb_score_dict = dict()
        for round_set in sorted(rounds_set):
            xgb_score_dict[round_set] = get_5cv_score_sub(trail_code, round_set, folds_set)
        return xgb_score_dict

def get_5cv_score_sub(trail_code, round_set, folds_set):
    xgb_score = None
    for fold_set in sorted(folds_set):
        xgb_score_one = pd.read_csv(trail_code + '/score_' + round_set + '_' + fold_set + '.csv', index_col='loanaccountid')
        # keep only the rows that were in this fold's test split
        xgb_score_one = xgb_score_one[xgb_score_one.index % 5 == int(fold_set)]
        xgb_score = xgb_score_one if xgb_score is None else pd.concat([xgb_score, xgb_score_one])
    return xgb_score
```
That covers the core experiment work of the project; the next post will look at analyzing the results.