Machine Learning Notes (Updated)

***  2018.12.17 ***
(1) pandas.read_table() can be used to read a .txt file into a DataFrame.
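A minimal sketch, assuming a tab-separated file named data.txt (a hypothetical path):

import pandas as pd

# read_table uses tab ('\t') as its default separator;
# 'data.txt' is a made-up example path
df = pd.read_table('data.txt')
print(df.head())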

(2) Ignore the warnings printed alongside the output:
import warnings
warnings.filterwarnings('ignore')

(3) Correlation heatmap
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# Find how strongly the variables are correlated
plt.figure(figsize=(20, 16))  # set the figure width and height
colnm = df.columns.tolist()[:39]  # column headers
mcorr = df[colnm].corr()  # correlation matrix: the correlation between every pair of variables
mask = np.zeros_like(mcorr, dtype=bool)  # boolean matrix with the same shape as mcorr
mask[np.triu_indices_from(mask)] = True  # True above the diagonal (mask the upper triangle)
cmap = sns.diverging_palette(220, 10, as_cmap=True)  # returns a matplotlib colormap object
g = sns.heatmap(mcorr, mask=mask, cmap=cmap, square=True, annot=True, fmt='.2f')  # heatmap of pairwise correlations
plt.show()

(4) Scatter plots of each feature against the label
fig, axes = plt.subplots(nrows=5, ncols=8, figsize=(20, 12),
                         tight_layout=True)
for ax, column in zip(axes.ravel(), train_data.columns):
    ax.scatter(train_data[column], train_data['target'])
    ax.set_ylabel('target')
    ax.set_xlabel(column)
(5) df_matrix = df.values converts a DataFrame into a plain NumPy array.
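A quick illustration with a made-up two-column DataFrame:

import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})  # hypothetical example data
df_matrix = df.values  # numpy.ndarray of shape (2, 2)
print(type(df_matrix), df_matrix.shape)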

(6) Choosing a similarity measure in a recommender system: if the data suffers from grade inflation (different users rate on different scales), use the Pearson correlation coefficient. If the data is dense (almost no attribute is zero) and the magnitude of the attribute values matters, use a distance such as Euclidean or Manhattan. If the data is sparse, consider cosine similarity. A sketch of all three measures follows below.
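A minimal sketch of the three measures on two hypothetical rating vectors (u and v are made up for illustration):

import numpy as np
from scipy.stats import pearsonr
from scipy.spatial.distance import euclidean, cosine

# hypothetical ratings from two users on the same five items
u = np.array([4.0, 5.0, 3.0, 4.0, 2.0])
v = np.array([2.0, 3.0, 1.0, 2.0, 1.0])

r, _ = pearsonr(u, v)    # Pearson: tolerant of per-user rating scales
d = euclidean(u, v)      # Euclidean distance: value magnitudes matter
sim = 1 - cosine(u, v)   # cosine similarity (scipy's cosine() is a distance)
print(r, d, sim)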

(7) A passing thought: even the plainest neural network can be used to fit a linear function, and the network's gradient descent can, to a large extent, filter out useless features; a toy sketch is below.
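A toy sketch of that idea (all data and numbers below are made up): a single linear layer trained by gradient descent on synthetic data where only the first of three features is informative, so the weights on the useless features shrink toward zero.

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))          # 3 features; only the first is informative
y = 2.0 * X[:, 0] + 0.1 * rng.normal(size=200)

w = np.zeros(3)                        # a single linear layer, no hidden units
lr = 0.1
for _ in range(500):
    grad = X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
    w -= lr * grad

print(w)  # weight on the informative feature is ~2, the others end up near 0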

(8) Using cross-validation to tune one hyperparameter of a model
# Tune the n_neighbors parameter of KNeighborsClassifier (using validation_curve);
# data_x and data_y are assumed to be defined elsewhere
import numpy as np
import matplotlib.pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import validation_curve

param_range = range(5, 60, 2)
train_loss, test_loss = validation_curve(KNeighborsClassifier(weights='distance'),
                                         data_x, data_y,
                                         param_name='n_neighbors',
                                         param_range=param_range, cv=6,
                                         scoring='neg_mean_squared_error')
train_loss_mean = -np.mean(train_loss, axis=1)  # negate: scoring is *negative* MSE
test_loss_mean = -np.mean(test_loss, axis=1)
plt.plot(param_range, train_loss_mean, color='r',
         label='Training')
plt.plot(param_range, test_loss_mean, color='g',
         label='Cross-validation')
plt.xlabel('n_neighbors')
plt.ylabel('Loss')
plt.legend(loc='best')
plt.show()

(9) A learning-curve plotting function
import numpy as np
import matplotlib.pyplot as plt
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
 
 
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
                        n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
    plt.figure()
    plt.title(title)
    if ylim is not None:
        plt.ylim(*ylim)
    plt.xlabel("Training examples")
    plt.ylabel("Score")
    train_sizes, train_scores, test_scores = learning_curve(
        estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
    train_scores_mean = np.mean(train_scores, axis=1)
    train_scores_std = np.std(train_scores, axis=1)
    test_scores_mean = np.mean(test_scores, axis=1)
    test_scores_std = np.std(test_scores, axis=1)
    plt.grid()
 
    plt.fill_between(train_sizes, train_scores_mean - train_scores_std,
                     train_scores_mean + train_scores_std, alpha=0.1,
                     color="r")
    plt.fill_between(train_sizes, test_scores_mean - test_scores_std,
                     test_scores_mean + test_scores_std, alpha=0.1, color="g")
    plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
             label="Training score")
    plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
             label="Cross-validation score")
 
    plt.legend(loc="best")
    return plt
 
 
digits = load_digits()
X, y = digits.data, digits.target    # load the example dataset

# Figure 1
title = r"Learning Curves (Naive Bayes)"
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = GaussianNB()    # build the model
plot_learning_curve(estimator, title, X, y, ylim=(0.7, 1.01), cv=cv, n_jobs=1)

# Figure 2
title = r"Learning Curves (SVM, RBF kernel, $\gamma=0.001$)"
cv = ShuffleSplit(n_splits=10, test_size=0.2, random_state=0)
estimator = SVC(gamma=0.001)    # build the model
plot_learning_curve(estimator, title, X, y, (0.7, 1.01), cv=cv, n_jobs=1)

plt.show()
