Linear regression solves directly for linear parameters. Logistic regression, by contrast, produces an S-shaped curve: its output rises steeply toward 1 on one side of the axis and falls steeply toward 0 on the other, and this saturation at 0 and 1 is what gives it its classification ability.
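The S-shape comes from the sigmoid function, σ(z) = 1 / (1 + e^(-z)), applied to the linear combination of features. A minimal sketch of its saturating behavior:

```python
import numpy as np

def sigmoid(z):
    # Maps any real number into (0, 1); values far from 0
    # saturate near 0 or 1, which is the classification behavior.
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(np.array([-10.0, 0.0, 10.0])))
```

At z = 0 the output is exactly 0.5, which acts as the natural decision boundary between the two classes.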
Decision trees tend naturally toward overfitting, while linear regression tends naturally toward underfitting.
The L1 penalty can drive some coefficients exactly to zero, effectively performing feature selection (this is demonstrated by the nonzero-coefficient counts in the code below).
A loss function applies only to models whose parameters are solved for; the smaller the loss, the better the model fits. K-means' total inertia (the within-cluster sum of squares) is not, strictly speaking, a loss function, since it is not minimized to solve for parameters, but it does play an important role in measuring fit, so it can loosely be called K-means' loss function. Decision trees and KNN, by contrast, have no loss function at all.
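Logistic regression, on the other hand, does have a loss function: the negative log-likelihood (cross-entropy). A minimal sketch, assuming labels y in {0, 1} and predicted probabilities p (the function name `log_loss_np` here is illustrative, not from sklearn):

```python
import numpy as np

def log_loss_np(y_true, p_pred, eps=1e-15):
    # Negative log-likelihood of logistic regression.
    p = np.clip(p_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y = np.array([1, 0, 1, 1])
p = np.array([0.9, 0.1, 0.8, 0.7])
print(log_loss_np(y, p))  # smaller means a better fit
```

Training logistic regression means finding the weights w that minimize this quantity, which is exactly why it qualifies as a model "with parameters to solve for".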
from sklearn.linear_model import LogisticRegression as LR
from sklearn.datasets import load_breast_cancer
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
data = load_breast_cancer()
X = data.data
y = data.target
X.shape
lr1 = LR(penalty="l1",solver="liblinear",C=0.5,max_iter=100)
lr1 = lr1.fit(X,y)
lr1.coef_
(lr1.coef_!=0).sum(axis=1)
lr2 = LR(penalty="l2",solver="liblinear",C=0.5,max_iter=100)
lr2 = lr2.fit(X,y)
# coef_ corresponds to the weight vector w in logistic regression
lr2.coef_
(lr2.coef_ != 0).sum(axis=1)
l1 = []
l2 = []
l1test = []
l2test = []
Xtrain,Xtest,Ytrain,Ytest = train_test_split(X,y,test_size=0.3,random_state=420)
for i in np.linspace(0.05, 1, 19):
    lrl1 = LR(penalty="l1", solver="liblinear", C=i, max_iter=100)
    lrl2 = LR(penalty="l2", solver="liblinear", C=i, max_iter=100)

    lrl1 = lrl1.fit(Xtrain, Ytrain)
    l1.append(accuracy_score(Ytrain, lrl1.predict(Xtrain)))
    l1test.append(accuracy_score(Ytest, lrl1.predict(Xtest)))

    lrl2 = lrl2.fit(Xtrain, Ytrain)
    l2.append(accuracy_score(Ytrain, lrl2.predict(Xtrain)))
    l2test.append(accuracy_score(Ytest, lrl2.predict(Xtest)))
graph = [l1,l2,l1test,l2test]
color = ['green','black','lightgreen','gray']
label = ['L1','L2','L1test','L2test']
plt.figure(figsize=(6,6))
for i in range(len(graph)):
    plt.plot(np.linspace(0.05, 1, 19), graph[i], color=color[i], label=label[i])
plt.legend(loc=4)  # legend location: 4 means lower right
plt.show()