1. Principal Component Analysis (PCA)
- Principle
PCA can be pictured as fitting an ellipse (ellipsoid) around the data cloud and shrinking it step by step: the axes along which the points inside the ellipse spread the most are the directions that carry the main influence, and projecting every sample onto these few axes (the principal components) realizes the dimensionality reduction.
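A minimal NumPy sketch of this idea (illustrative only, not part of the original post): center the data, treat the covariance matrix as the shape of the ellipse, and keep the axes with the largest variance.

import numpy as np

def pca_sketch(X, n_components=2):
    # Center the data so the ellipse sits at the origin
    X_centered = X - X.mean(axis=0)
    # The covariance matrix describes the shape of the data cloud
    cov = np.cov(X_centered, rowvar=False)
    # Its eigenvectors are the axes of the ellipse; the eigenvalues are the variance along each axis
    eigvals, eigvecs = np.linalg.eigh(cov)
    # Keep the axes with the largest variance (the principal components)
    order = np.argsort(eigvals)[::-1][:n_components]
    return X_centered @ eigvecs[:, order]

X_demo = np.random.rand(100, 13)                # toy data with 13 features, like the wine data
X_demo_2d = pca_sketch(X_demo, n_components=2)  # reduced to 2 dimensions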
- Code implementation
Data:
Alcohol,Malic_Acid,Ash,Ash_Alcanity,Magnesium,Total_Phenols,Flavanoids,Nonflavanoid_Phenols,Proanthocyanins,Color_Intensity,Hue,OD280,Proline,Customer_Segment
14.23,1.71,2.43,15.6,127,2.8,3.06,0.28,2.29,5.64,1.04,3.92,1065,1
13.2,1.78,2.14,11.2,100,2.65,2.76,0.26,1.28,4.38,1.05,3.4,1050,1
13.16,2.36,2.67,18.6,101,2.8,3.24,0.3,2.81,5.68,1.03,3.17,1185,1
14.37,1.95,2.5,16.8,113,3.85,3.49,0.24,2.18,7.8,0.86,3.45,1480,1
13.24,2.59,2.87,21,118,2.8,2.69,0.39,1.82,4.32,1.04,2.93,735,1
...
Each row records the amounts of the different components in a wine; the combination of components determines the type of wine produced, and the samples fall into three classes labelled 1, 2 and 3 (the Customer_Segment column).
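Before building the model, a quick sanity check on the file can confirm the shape and the class distribution (a small sketch; Wine.csv and Customer_Segment are the names used in the script below):

import pandas as pd

dataset = pd.read_csv("Wine.csv")
print(dataset.shape)                               # expect 14 columns: 13 features + 1 label
print(dataset["Customer_Segment"].value_counts())  # number of samples in classes 1, 2 and 3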
Implementation steps:
1. Read Wine.csv and split it into the feature matrix X and the label vector y.
2. Split the data into a training set and a test set.
3. Standardize the features.
4. Use PCA to reduce the 13 features to 2 principal components.
5. Fit a logistic regression classifier on the reduced training data.
6. Predict the test set and build the confusion matrix.
7. Plot the decision regions on the training set and the test set.
Code:
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import confusion_matrix
from matplotlib.colors import ListedColormap
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np

dataset = pd.read_csv("Wine.csv")
X = dataset.iloc[:, 0:-1].values
y = dataset.iloc[:, -1].values

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Feature scaling
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)

# Reduce the dimensionality with PCA
pca = PCA(n_components=2)  # n_components: number of new features to keep
X_train = pca.fit_transform(X_train)
X_test = pca.transform(X_test)
explained_variance = pca.explained_variance_ratio_  # fraction of variance explained by each component, used to choose n_components

# Fit a logistic regression classifier
classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, y_train)

# Predict the test set
y_pred = classifier.predict(X_test)

# Build the confusion matrix
cm = confusion_matrix(y_test, y_pred)

# Visualize the training set results
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green', 'black')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('orange', 'blue', 'grey'))(i), label=j)
plt.title('Logistic Regression (Training set)')
plt.xlabel('pc1')
plt.ylabel('pc2')
plt.legend()
plt.show()

# Visualize the test set results
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start=X_set[:, 0].min() - 1, stop=X_set[:, 0].max() + 1, step=0.01),
                     np.arange(start=X_set[:, 1].min() - 1, stop=X_set[:, 1].max() + 1, step=0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
             alpha=0.75, cmap=ListedColormap(('red', 'green', 'black')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
    plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
                c=ListedColormap(('orange', 'blue', 'grey'))(i), label=j)
plt.title('Logistic Regression (Test set)')
plt.xlabel('pc1')
plt.ylabel('pc2')
plt.legend()
plt.show()
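The script fixes n_components=2 so the result can be plotted. One common way to choose this number, sketched here as an assumption rather than part of the original code, is to fit PCA on the standardized training data without limiting the components and look at the cumulative explained variance:

from sklearn.decomposition import PCA
import numpy as np

# Run this on the standardized features, before the two-component reduction above
pca_full = PCA()
pca_full.fit(X_train)
cumulative = np.cumsum(pca_full.explained_variance_ratio_)
k = int(np.argmax(cumulative >= 0.95)) + 1  # smallest number of components explaining >= 95% of the variance
print(cumulative)
print("components needed for 95% variance:", k)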
Output:
Training set result:
Test set result:
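The plots themselves are not reproduced here; the numbers behind them can be checked from the confusion matrix and the test accuracy (a small addition that reuses the cm, y_test and y_pred variables from the script above):

from sklearn.metrics import accuracy_score

print(cm)                                                # confusion matrix on the test set
print("test accuracy:", accuracy_score(y_test, y_pred))  # fraction of correctly classified test samples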