NLP Fundamentals: A Kaggle Competition Walkthrough

Task: Predict the relevance of search results on Home Depot

Competition page: https://www.kaggle.com/c/home-depot-product-search-relevance
Reference notebook on GitHub: https://github.com/yjfiejd/Product_search_relevance_NLP-/blob/master/Product_search_relevance(jupyter%20notebook).ipynb

The full implementation, with explanatory comments, follows:

# -*- coding: UTF-8 -*-
# @Time: 2019/8/27 20:19

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor
from nltk.stem.snowball import SnowballStemmer

# SnowballStemmer reduces words to their stems (e.g. "running" -> "run")
stemmer = SnowballStemmer('english')

# Read the train and test data with pandas (the files use Latin-1 encoding)
df_train = pd.read_csv('train.csv', encoding="ISO-8859-1")
df_test = pd.read_csv('test.csv', encoding="ISO-8859-1")

# Read in the product-description table
df_pro_desc = pd.read_csv('product_descriptions.csv')

# Number of training rows (used later to split train and test back apart)
num_train = df_train.shape[0]

# Lower-case a string and stem every word in it
def str_stemmer(s):
	return " ".join([stemmer.stem(word) for word in s.lower().split()])
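To see what the stemmer does, here is a quick check on toy inputs (not from the dataset):

```python
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer('english')

def str_stemmer(s):
    return " ".join([stemmer.stem(word) for word in s.lower().split()])

# Stemming collapses inflected forms onto a common root, so a query
# containing "Angles" can later match a title containing "angle"
print(str_stemmer("Running Boxes"))  # run box
```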

# Count how many words of str1 occur in str2
# (note: str.find matches substrings, so "wood" also matches "plywood")
def str_common_word(str1, str2):
	return sum(int(str2.find(word)>=0) for word in str1.split())
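A quick check of the counter on toy strings, which also shows the substring behaviour of str.find:

```python
def str_common_word(str1, str2):
    return sum(int(str2.find(word) >= 0) for word in str1.split())

# "wood" is found inside "plywood" because str.find matches substrings,
# not whole words; "table" is absent, so the count is 1
print(str_common_word("wood table", "plywood desk"))  # 1
```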

# Concatenate train and test so the text preprocessing below is applied to both uniformly
df_all = pd.concat((df_train, df_test), axis=0, ignore_index=True)

# Product descriptions are useful signal, so left-join them onto the table.
# how='left' keeps every row of df_all; on='product_uid' is the join key.
df_all = pd.merge(df_all, df_pro_desc, how='left', on='product_uid')
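The left join can be illustrated on two toy frames (invented ids, not from the dataset):

```python
import pandas as pd

# Toy stand-ins for df_all and df_pro_desc
left = pd.DataFrame({'product_uid': [100, 101],
                     'search_term': ['angl bracket', 'deck screw']})
desc = pd.DataFrame({'product_uid': [100],
                     'product_description': ['metal angl bracket']})

# how='left' keeps both rows of `left`; product_uid 101 has no
# matching description, so its product_description becomes NaN
merged = pd.merge(left, desc, how='left', on='product_uid')
print(merged)
```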

# Clean up every text column by stemming it.
# Series.map(function) applies the function element-wise;
# lambda defines the function inline (an anonymous function).
df_all['search_term'] = df_all['search_term'].map(lambda x:str_stemmer(x))
df_all['product_title'] = df_all['product_title'].map(lambda x:str_stemmer(x))
df_all['product_description'] = df_all['product_description'].map(lambda x:str_stemmer(x))
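A minimal illustration of Series.map with a lambda, on toy data:

```python
import pandas as pd

s = pd.Series(['Angle Bracket', 'DECK Screws'])

# Series.map applies the function to each element and returns a new Series
lowered = s.map(lambda x: x.lower())
print(lowered.tolist())  # ['angle bracket', 'deck screws']
```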

# Extract text features
# Number of words in the search query
df_all['len_of_query'] = df_all['search_term'].map(lambda x:len(x.split())).astype(np.int64)
# Join query, title and description with tabs into one column, then count
# how many query words overlap with the title and with the description
df_all['product_info'] = df_all['search_term']+"\t"+df_all['product_title']+"\t"+df_all['product_description']
df_all['word_in_title'] = df_all['product_info'].map(lambda x:str_common_word(x.split('\t')[0],x.split('\t')[1]))
df_all['word_in_description'] = df_all['product_info'].map(lambda x:str_common_word(x.split('\t')[0],x.split('\t')[2]))
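Worked through on a single toy row (invented, already-stemmed strings), the tab-join trick looks like this:

```python
def str_common_word(str1, str2):
    return sum(int(str2.find(word) >= 0) for word in str1.split())

# query \t title \t description, as in the product_info column
info = "angl bracket\tmetal angl bracket\tsturdi galvan angl bracket"

query, title, description = info.split('\t')
print(len(query.split()))                   # len_of_query: 2
print(str_common_word(query, title))        # word_in_title: 2
print(str_common_word(query, description))  # word_in_description: 2
```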

# Drop the raw-text columns, which the model cannot consume directly
df_all = df_all.drop(['search_term','product_title','product_description','product_info'],axis=1)

# Split back into train and test
df_train = df_all.iloc[:num_train]
df_test = df_all.iloc[num_train:]

# Keep the test-set ids for the submission file
id_test = df_test['id']

# Separate the target y_train from the feature matrices
# (the test rows carry a NaN relevance column from the concat, so it is dropped there too)
y_train = df_train['relevance'].values
X_train = df_train.drop(['id','relevance'],axis=1).values
X_test = df_test.drop(['id','relevance'],axis=1).values

# Bagging over small random forests: 45 forests of 15 trees each, every
# forest trained on a random 10% sample of the training data
rf = RandomForestRegressor(n_estimators=15, max_depth=6, random_state=0)
clf = BaggingRegressor(rf, n_estimators=45, max_samples=0.1, random_state=25)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

# Write the submission file
pd.DataFrame({"id": id_test, "relevance": y_pred}).to_csv('submission.csv',index=False)
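The script writes predictions without any local validation. Since the competition is scored by RMSE, a cross-validated estimate could be obtained along these lines — a sketch on synthetic stand-in data; in practice the X_train/y_train built above would be dropped in instead:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, BaggingRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for X_train / y_train (NOT the real features)
rng = np.random.RandomState(0)
X = rng.rand(200, 3)
y = 1.0 + 2.0 * X[:, 0] + rng.rand(200)

# Same model configuration as in the script
rf = RandomForestRegressor(n_estimators=15, max_depth=6, random_state=0)
clf = BaggingRegressor(rf, n_estimators=45, max_samples=0.1, random_state=25)

# scoring='neg_root_mean_squared_error' returns negated RMSE, so flip the sign
scores = -cross_val_score(clf, X, y, cv=3, scoring='neg_root_mean_squared_error')
print(scores.mean())
```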

