Kaggle Classic Competition Writeups (1): Stacked Regressions to Predict House Prices

A summary of an outstanding community kernel from a classic Kaggle competition: Stacked Regressions to predict House Prices

This post covers feature engineering and a stacked regression model; it is the kind of exercise nearly every Kaggle newcomer goes through. It walks through how to engineer features on the dataset and then ensemble sklearn base models with XGBoost and LightGBM, with the goal of making the linear models robust and reaching a strong final prediction.

Stacked Regressions to predict House Prices
kaggle地址:https://www.kaggle.com/serigne/stacked-regressions-top-4-on-leaderboard/comments

First, a few links to well-regarded notebooks from the Kaggle community:
(1) Comprehensive data exploration with Python by Pedro Marcelino: data analysis
(2) A Study on Regression applied to the Ames dataset by Julien Cohen-Solal: thorough feature engineering and deep dives into linear regression
(3) Regularized Linear Models by Alexandru Papiu: model selection and cross-validation

The feature engineering generally covers:
(1) Handling missing values
(2) Transforming data
(3) Label encoding
(4) Box-Cox transformation
(5) One-hot encoding of categorical features


I. Data Overview

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

from scipy import stats
from scipy.stats import norm, skew

# set float format, limiting float output to 3 decimal places
pd.set_option('display.float_format', lambda x: '{:.3f}'.format(x))

train = pd.read_csv('/Users/xudong/kaggleData/houseprice/train.csv')
test = pd.read_csv('/Users/xudong/kaggleData/houseprice/test.csv')

# display the first rows of the data
print(train.head(5))
print('------------')
print(test.head(5))

# check the numbers of samples and features
print("The train data size before dropping Id feature is : {} ".format(train.shape))
print("The test data size before dropping Id feature is : {} ".format(test.shape))

# save the 'Id' column
train_ID = train['Id']
test_ID = test['Id']

# Now drop the  'Id' column since it's unnecessary for  the prediction process.
train.drop("Id", axis=1, inplace=True)
test.drop("Id", axis=1, inplace=True)

# check again the data size after dropping the 'Id' variable
print("\nThe train data size after dropping Id feature is : {} ".format(train.shape))
print("The test data size after dropping Id feature is : {} ".format(test.shape))

II. Data Processing

1. Outliers

The dataset documentation notes that the data contains outliers (the original link is no longer valid, so it is not reproduced here) and recommends removing them. Below we use a scatter plot to show the relationship between GrLivArea and SalePrice.

# Data Processing
# outliers

fig, ax = plt.subplots()
ax.scatter(x=train['GrLivArea'], y=train['SalePrice'])
plt.ylabel('SalePrice', fontsize=13)
plt.xlabel('GrLivArea', fontsize=13)
plt.show()

From the plot, there are two outliers in the lower right: their GrLivArea is very large but their SalePrice is very low, far off from the rest of the data, so they need to be removed.

# delete outliers
train = train.drop(train[(train['GrLivArea'] > 4000) & (train['SalePrice'] < 300000)].index)

Note:
When removing outliers, we generally only remove the obvious ones (such as the huge areas with very low prices here).
There may well be other outliers in the data, but removing them all could hurt the model if the test data also contains outliers. So instead of removing every outlier, we simply make the models more robust to them.

2. Target Variable

SalePrice is the variable we need to predict, so we first analyze it. We generally expect data to be normally distributed, but the data we actually get often is not.

color = sns.color_palette()
sns.set_style('darkgrid')

sns.distplot(train['SalePrice'], fit=norm)

# get the fitted parameters used by the function
(mu, sigma) = norm.fit(train['SalePrice'])
print('\n mu = {:.2f} and sigma = {:.2f}\n'.format(mu, sigma))

# now plot the distribution
plt.legend(['Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f})'.format(mu, sigma)], loc='best')
plt.ylabel('Frequency')
plt.title('SalePrice distribution')

# get also the QQ-plot
fig = plt.figure()
res = stats.probplot(train['SalePrice'], plot=plt)
plt.show()

Distribution plot of SalePrice:

QQ-plot of SalePrice:

As we can see, the target variable (SalePrice) is highly skewed (linear models generally assume roughly normally distributed variables). We therefore apply a log transform to the target; re-plotting after the transform (plots omitted here) shows a distribution much closer to normal.

# we use the numpy function log1p which applies log(1+x) to all elements of the column
train['SalePrice'] = np.log1p(train['SalePrice'])
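
A minimal sketch of that re-plot (omitted from the original post), reusing the imports and plotting code from above, to verify that the skew has been reduced:

# re-plot the transformed target distribution and QQ-plot (illustrative sketch)
sns.distplot(train['SalePrice'], fit=norm)
(mu, sigma) = norm.fit(train['SalePrice'])
plt.legend(['Normal dist. ($\mu=$ {:.2f} and $\sigma=$ {:.2f})'.format(mu, sigma)], loc='best')
plt.ylabel('Frequency')
plt.title('SalePrice distribution after log1p')

fig = plt.figure()
res = stats.probplot(train['SalePrice'], plot=plt)
plt.show()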

3. Feature Engineering

Here the training and test sets are concatenated and processed together.

# concat the train and test in the same dataframe
n_train = train.shape[0]
n_test = test.shape[0]
y_train = train.SalePrice.values
all_data = pd.concat((train, test), sort=False).reset_index(drop=True)
all_data.drop(['SalePrice'], axis=1, inplace=True)
print('all_data size is : {}'.format(all_data.shape))

3.1 Missing Data

Check the percentage of missing values for each feature.

# missing data
all_data_na = (all_data.isnull().sum() / len(all_data))*100
all_data_na = all_data_na.drop(all_data_na[all_data_na == 0].index).sort_values(ascending=False)[:30]
missing_data = pd.DataFrame({'Missing Ratio' : all_data_na})
print(missing_data.head(20))

The 20 features with the highest missing ratios are printed, as shown below.

The missing ratios are also plotted as a bar chart, as follows.

# plot the missing data
f, ax = plt.subplots(figsize=(15, 12))
plt.xticks(rotation='90')
sns.barplot(x=all_data_na.index, y=all_data_na)
plt.xlabel('Features', fontsize=15)
plt.ylabel('Percent of missing values', fontsize=15)
plt.title('Percent missing data by feature', fontsize=15)
plt.show()

3.2 Feature Correlations

Below we compute the correlations between features and draw a heatmap.

# data correlation
# correlation map to see how features are correlated with SalePrice
corrmat = train.corr()
plt.subplots(figsize=(12, 9))
sns.heatmap(corrmat, vmax=0.9, square=True)
plt.show()

3.3 Imputing Missing Data

Below we impute the missing data according to the data description for each feature.

# imputing missing values
# we impute them by proceeding sequentially through features with missing values

# poolQC: data says NA means no pool
all_data['PoolQC'] = all_data['PoolQC'].fillna('None')

# MiscFeature: NA means no misc feature
all_data['MiscFeature'] = all_data['MiscFeature'].fillna('None')

# Alley: NA means no alley access
all_data['Alley'] = all_data['Alley'].fillna('None')

# Fence: NA means no fence
all_data['Fence'] = all_data['Fence'].fillna('None')

# FireplaceQu: NA means no fireplace
all_data['FireplaceQu'] = all_data['FireplaceQu'].fillna('None')

# LotFrontage: have similar area to other houses
# we fill in missing values by median values
all_data['LotFrontage'] = all_data.groupby('Neighborhood')['LotFrontage'].transform(lambda x: x.fillna(x.median()))

# GarageType, GarageFinish, GarageQual and GarageCond : Replacing missing data with None
for col in ('GarageType', 'GarageFinish', 'GarageQual', 'GarageCond'):
    all_data[col] = all_data[col].fillna('None')

# GarageYrBlt, GarageArea and GarageCars : Replacing missing data with 0 (Since No garage = no cars in such garage.)
for col in ('GarageYrBlt', 'GarageArea', 'GarageCars'):
    all_data[col] = all_data[col].fillna(0)

# BsmtFinSF1, BsmtFinSF2, BsmtUnfSF, TotalBsmtSF, BsmtFullBath and BsmtHalfBath : missing values are likely zero for having no basement
for col in ('BsmtFinSF1', 'BsmtFinSF2', 'BsmtUnfSF', 'TotalBsmtSF', 'BsmtFullBath', 'BsmtHalfBath'):
    all_data[col] = all_data[col].fillna(0)

# BsmtQual, BsmtCond, BsmtExposure, BsmtFinType1 and BsmtFinType2 : For all these categorical basement-related features, NaN means that there is no basement.
for col in ('BsmtQual', 'BsmtCond', 'BsmtExposure', 'BsmtFinType1', 'BsmtFinType2'):
    all_data[col] = all_data[col].fillna('None')

# MasVnrArea and MasVnrType : NA most likely means no masonry veneer for these houses. We can fill 0 for the area and None for the type.
all_data['MasVnrArea'] = all_data['MasVnrArea'].fillna(0)
all_data['MasVnrType'] = all_data['MasVnrType'].fillna('None')

# MSZoning (The general zoning classification) : 'RL' is by far the most common value. So we can fill in missing values with 'RL'
all_data['MSZoning'] = all_data['MSZoning'].fillna(all_data['MSZoning'].mode()[0])

# drop Utilities
all_data = all_data.drop(['Utilities'], axis=1)

# Functional : data description says NA means typical
all_data["Functional"] = all_data["Functional"].fillna("Typ")

# Electrical : It has one NA value. Since this feature has mostly 'SBrkr', we can set that for the missing value.
all_data['Electrical'] = all_data['Electrical'].fillna(all_data['Electrical'].mode()[0])

# KitchenQual: Only one NA value, and same as Electrical, we set 'TA' (which is the most frequent) for the missing value in KitchenQual.
all_data['KitchenQual'] = all_data['KitchenQual'].fillna(all_data['KitchenQual'].mode()[0])

# Exterior1st and Exterior2nd : Again Both Exterior 1 & 2 have only one missing value. We will just substitute in the most common string
all_data['Exterior1st'] = all_data['Exterior1st'].fillna(all_data['Exterior1st'].mode()[0])
all_data['Exterior2nd'] = all_data['Exterior2nd'].fillna(all_data['Exterior2nd'].mode()[0])

# SaleType : Fill in again with most frequent which is "WD"
all_data['SaleType'] = all_data['SaleType'].fillna(all_data['SaleType'].mode()[0])

# MSSubClass : Na most likely means No building class. We can replace missing values with None
all_data['MSSubClass'] = all_data['MSSubClass'].fillna("None")

# check is there any remaining missing value
all_data_na_re = (all_data.isnull().sum() / len(all_data))*100
all_data_na_re = all_data_na_re.drop(all_data_na_re[all_data_na_re == 0].index).sort_values(ascending=False)
missing_data_re = pd.DataFrame({'Missing ratio': all_data_na_re})
print(missing_data_re.head(5))

4. More Feature Engineering

4.1 Transforming Numerical Features That Are Really Categorical

# Transforming some numerical variables that are really categorical

# MSSubClass = the building class
all_data['MSSubClass'] = all_data['MSSubClass'].apply(str)

# changing overallcond into a categorical variable
all_data['OverallCond'] = all_data['OverallCond'].astype(str)

# year and month sold are transformed into categorical features
all_data['YrSold'] = all_data['YrSold'].astype(str)
all_data['MoSold'] = all_data['MoSold'].astype(str)

4.2 Encoding Categorical Features

Use sklearn's LabelEncoder to encode the (ordinal) categorical features as integers from 0 to n-1.

# Label Encoding some categorical variables that may contain information in their ordering set
from sklearn.preprocessing import LabelEncoder
cols = ('FireplaceQu', 'BsmtQual', 'BsmtCond', 'GarageQual', 'GarageCond',
        'ExterQual', 'ExterCond','HeatingQC', 'PoolQC', 'KitchenQual', 'BsmtFinType1',
        'BsmtFinType2', 'Functional', 'Fence', 'BsmtExposure', 'GarageFinish', 'LandSlope',
        'LotShape', 'PavedDrive', 'Street', 'Alley', 'CentralAir', 'MSSubClass', 'OverallCond',
        'YrSold', 'MoSold')
# process columns, apply LabelEncoder to categorical features
for c in cols:
    lbl = LabelEncoder()
    lbl.fit(list(all_data[c].values))
    all_data[c] = lbl.transform(list(all_data[c].values))

# shape
print('Shape all_data: {}'.format(all_data.shape))
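
As a toy illustration (not part of the original kernel; the values below are made up), LabelEncoder maps each distinct string to an integer in 0..n-1, in sorted order rather than by any quality ranking:

# toy example showing what LabelEncoder does (hypothetical values)
from sklearn.preprocessing import LabelEncoder
toy = ['Gd', 'TA', 'Ex', 'Gd', 'None']
enc = LabelEncoder()
print(enc.fit_transform(toy))  # [1 3 0 1 2] -- alphabetical order, not quality order
print(enc.classes_)            # ['Ex' 'Gd' 'None' 'TA']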

4.3 Adding an Important Feature

House area largely determines a house's price, so here we combine the basement and floor areas into a single total-area feature.

# Since area related features are very important to determine house prices
# we add one more feature which is the total area of basement, first and second floor areas of each house
# adding total sqfootage feature
all_data['TotalSF'] = all_data['TotalBsmtSF'] + all_data['1stFlrSF'] + all_data['2ndFlrSF']

4.4 Feature Skewness

We generally expect data to be normally distributed, but the data we actually get often is not.

First we check the skewness of the numeric features. Note that skewness cannot be computed for object-type columns, so they must be filtered out.

numeric_feats = all_data.dtypes[all_data.dtypes != 'object'].index

# check the skew of all numerical features
skewed_feats = all_data[numeric_feats].apply(lambda x: skew(x.dropna())).sort_values(ascending=False)
skewness = pd.DataFrame({'Skew': skewed_feats})
print(skewness.head(10))

For features with high skewness, we apply scipy's Box-Cox transform (boxcox1p) to reduce the skew.

# box cox transformation of highly skewed features
# background on the Box-Cox transform can be found online
skewness = skewness[abs(skewness['Skew']) > 0.75]  # filter on the Skew column so only highly skewed features are kept
print('there are {} skewed numerical features to Box Cox transform'.format(skewness.shape[0]))
from scipy.special import boxcox1p
skewed_feats_index = skewness.index
lam = 0.15
for feat in skewed_feats_index:
    all_data[feat] = boxcox1p(all_data[feat], lam)
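
For reference (this snippet is not in the original kernel), boxcox1p(x, lam) computes ((1 + x)**lam - 1) / lam for lam != 0 and reduces to log(1 + x) as lam approaches 0, so with lam = 0.15 it behaves like a gently adjustable log1p:

# quick illustration of the Box-Cox(1+x) transform used above
import numpy as np
from scipy.special import boxcox1p

x = np.array([0.0, 1.0, 10.0, 100.0])
print(boxcox1p(x, 0.15))  # ((1 + x)**0.15 - 1) / 0.15
print(np.log1p(x))        # the lam -> 0 limit, for comparison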

4.5 One-Hot Encoding

Use pandas' get_dummies to one-hot encode the categorical features and form the final training and test sets.

# getting dummy (one-hot) categorical features
all_data = pd.get_dummies(all_data)
print(all_data.shape)

# getting the new train and test sets
train = all_data[:n_train]
test = all_data[n_train:]

III. Models

1. Imports and Cross-Validation Strategy

Here we use sklearn's cross_val_score function (see the official API docs for details). Since this function does not shuffle the data itself, we additionally use a KFold splitter to split the dataset randomly.

# import lib
from sklearn.linear_model import ElasticNet, Lasso, BayesianRidge, LassoLarsIC
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor
from sklearn.kernel_ridge import KernelRidge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import RobustScaler
from sklearn.base import BaseEstimator, TransformerMixin, RegressorMixin, clone
from sklearn.model_selection import KFold, cross_val_score, train_test_split
from sklearn.metrics import mean_squared_error
import xgboost as xgb
# import lightgbm as lgb  # not usable in the current environment

# Define a cross validation strategy
# We use the cross_val_score function of sklearn. However, this function does not shuffle the data itself,
# so we use a shuffled KFold splitter in order to shuffle the dataset prior to cross-validation

def rmsle_cv(model):
    kf = KFold(n_splits=5, shuffle=True, random_state=42)  # pass the KFold object itself so the shuffle actually takes effect
    rmse = np.sqrt(-cross_val_score(model, train.values, y_train, scoring="neg_mean_squared_error", cv=kf))
    return rmse

2. Base Models

Below are several common models: Lasso Regression, Elastic Net Regression, Kernel Ridge Regression, Gradient Boosting Regression, XGBoost, and LightGBM (LightGBM failed to run in the current macOS environment; xgboost and lightgbm are not part of sklearn and must be installed separately).

# base models
# LASSO regression : this model is very sensitive to outliers, so we use RobustScaler to make it more robust
lasso = make_pipeline(RobustScaler(), Lasso(alpha=0.0005, random_state=1))

# Elastic Net Regression : again made robust to outliers
enet = make_pipeline(RobustScaler(), ElasticNet(alpha=0.0005, l1_ratio=0.9, random_state=3))

# Kernel Ridge Regression:
krr = KernelRidge(alpha=0.6, kernel='polynomial', degree=2, coef0=2.5)

# Gradient boosting regression: with huber loss, which makes it robust to outliers
gboost = GradientBoostingRegressor(n_estimators=3000, learning_rate=0.05, max_depth=4, max_features='sqrt',
                                   min_samples_leaf=15, min_samples_split=10, loss='huber', random_state=5)

# Xgboost:
xgb_model = xgb.XGBRegressor(colsample_bytree=0.4603, gamma=0.0468, learning_rate=0.05, max_depth=3,
                             min_child_weight=1.7817, n_estimators=2200, reg_alpha=0.4640, reg_lambda=0.8571,
                             subsample=0.5213, silent=1, random_state=7, nthread=-1)
 
# LightGBM: the current library version differs a lot from the one used in the original kernel and failed to load in this macOS environment
# lgb_model = lgb.LGBMRegressor(objective='regression', num_leaves=5, learning_rate=0.05, n_estimators=720)

3. Base Model Scores

Use the rmsle_cv function to compute the RMSE score of each model.

# base models scores
score = rmsle_cv(lasso)
print("\nLasso score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

score = rmsle_cv(enet)
print("ElasticNet score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

score = rmsle_cv(krr)
print("Kernel Ridge score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

score = rmsle_cv(gboost)
print("Gradient Boosting score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

score = rmsle_cv(xgb_model)
print("Xgboost score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

# score = rmsle_cv(lgb_model)
# print("LightGBM score: {:.4f} ({:.4f})\n".format(score.mean(), score.std()))

IV. Stacking Models

1. Simple Stacking: Averaging Base Models

Here we define a new averaging-model class following the scikit-learn estimator API and implement fit and predict: fit trains a clone of each base model on the data, and predict averages the base models' predictions to produce the averaged model's prediction.

# averaged base models class
class AveragingModels(BaseEstimator, RegressorMixin, TransformerMixin):
    def __init__(self, models):
        self.models = models

    # we define clones of the original models to fit the data in
    def fit(self, X, y):
        self.models_ = [clone(x) for x in self.models]

        # train cloned base models
        for model in self.models_:
            model.fit(X, y)

        return self

    # we do the predictions for cloned models and average them
    def predict(self, X):
        predictions = np.column_stack([
            model.predict(X) for model in self.models_
        ])
        return np.mean(predictions, axis=1)

With the averaging model defined, we use the previously defined enet, gboost, krr, and lasso base models; other base models could of course be added as well.

Evaluate the averaged model's RMSE score.

# averaged base models score
# we just average four models here enet gboost krr lasso
averaged_models = AveragingModels(models=(enet, gboost, krr, lasso))

score_all = rmsle_cv(averaged_models)
print('Averaged base models score: {:.4f} ({:.4f})\n'.format(score_all.mean(), score_all.std()))

The scores of the base models and the averaged model are as follows:

  • Lasso score: 0.1115 (0.0074)
  • ElasticNet score: 0.1116 (0.0074)
  • Kernel Ridge score: 0.1153 (0.0075)
  • Gradient Boosting score: 0.1168 (0.0083)
  • Averaged base models score: 0.1087 (0.0077)

As we can see, averaging the models improves the score.

2. Adding a Meta-Model on Top of Stacking

Here we add a meta-model on top of the averaged base models and train it on the out-of-fold predictions of those base models. The training procedure is:

  • 1 Split the training set into two parts: train_a and train_b
  • 2 Train the base models on train_a
  • 3 Use those trained models to predict on the holdout set train_b
  • 4 Use the predictions from step 3 as inputs (meta-features) to train the meta-model

As illustrated in the figure below (image from KazAnova's interview), with 5-fold stacking we split the training set into 5 parts; in each iteration we train on 4 of them and predict on the remaining one. After five iterations we have out-of-fold predictions covering the whole training set, and these are used as the input features to train the meta-model (the target variable stays the same). At prediction time, we average the predictions of all base models on the test data and feed that average to the meta-model as its input.

The code for this model is as follows:

# Stacking averaged Models class
class StackingAveragedModels(BaseEstimator, RegressorMixin, TransformerMixin):
    def __init__(self, base_models, meta_model, n_folds=5):
        self.base_models = base_models
        self.meta_model = meta_model
        self.n_folds = n_folds

    # We again fit the data on clones of the original models
    def fit(self, X, y):
        self.base_models_ = [list() for x in self.base_models]
        self.meta_model_ = clone(self.meta_model)
        kfold = KFold(n_splits=self.n_folds, shuffle=True, random_state=156)

        # Train cloned base models then create out-of-fold predictions
        # that are needed to train the cloned meta-model
        out_of_fold_predictions = np.zeros((X.shape[0], len(self.base_models)))
        for i, model in enumerate(self.base_models):
            for train_index, holdout_index in kfold.split(X, y):
                instance = clone(model)
                self.base_models_[i].append(instance)
                instance.fit(X[train_index], y[train_index])
                y_pred = instance.predict(X[holdout_index])
                out_of_fold_predictions[holdout_index, i] = y_pred

        # Now train the cloned  meta-model using the out-of-fold predictions as new feature
        self.meta_model_.fit(out_of_fold_predictions, y)
        return self

    # Do the predictions of all base models on the test data and use the averaged predictions as
    # meta-features for the final prediction which is done by the meta-model
    def predict(self, X):
        meta_features = np.column_stack([
            np.column_stack([model.predict(X) for model in base_models]).mean(axis=1)
            for base_models in self.base_models_])
        return self.meta_model_.predict(meta_features)

Below, we use the previously defined enet, gboost, and krr as base models, with lasso as the meta-model; we train, predict, and compute the score.

stacked_averaged_models = StackingAveragedModels(base_models = (enet, gboost, krr), meta_model = lasso)
score_all_stacked = rmsle_cv(stacked_averaged_models)
print('Stacking Averaged base models score: {:.4f} ({:.4f})\n'.format(score_all_stacked.mean(), score_all_stacked.std()))

The result is: Stacking Averaged base models score: 0.1081 (0.0073). Once again we get a better score than the previous models.

3. Ensembling StackedRegressor, XGBoost, and LightGBM

Below we ensemble XGBoost, LightGBM, and the StackedRegressor.

First, define the rmsle evaluation function:

# we first define a rmsle evaluation function
def rmsle(y, y_pred):
    return np.sqrt(mean_squared_error(y, y_pred))

Train the StackedRegressor, XGBoost, and LightGBM models separately.

# final training and prediction
stacked_averaged_models.fit(train.values, y_train)
stacked_train_pred = stacked_averaged_models.predict(train.values)
stacked_test_pred = np.expm1(stacked_averaged_models.predict(test.values))
print(rmsle(y_train, stacked_train_pred))

xgb_model.fit(train, y_train)
xgb_train_pred = xgb_model.predict(train)
xgb_test_pred = np.expm1(xgb_model.predict(test))
print(rmsle(y_train, xgb_train_pred))

# lgb_model.fit(train, y_train)
# lgb_train_pred = lgb_model.predict(train)
# lgb_pred = np.expm1(lgb_model.predict(test.values))
# print(rmsle(y_train, lgb_train_pred))

Then use a weighted average to ensemble the StackedRegressor, XGBoost, and LightGBM models.

# lightgbm is not available in this environment, so we simply ensemble the xgb and stacked models
print('RMSLE score on train data all models:')
print(rmsle(y_train, stacked_train_pred*0.7 + xgb_train_pred*0.30))

# Ensemble prediction
ensemble_result = stacked_test_pred*0.7 + xgb_test_pred*0.30

V. Generating the Submission

submission = pd.DataFrame()
submission['Id'] = test_ID
submission['SalePrice'] = ensemble_result
submission.to_csv('/Users/xudong/kaggleData/houseprice/submission.csv', index=False)

VI. Results

The final leaderboard score is 0.11674, ranked 679.
