4. .svm
from sklearn import svm
4.1 Classification
svm.SVC(C=1.0, kernel='rbf', degree=3, gamma='auto_deprecated', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', random_state=None)
C: penalty parameter of the error term.
kernel: specifies the kernel function: 'linear', 'poly' (polynomial), 'rbf', 'sigmoid'. A custom (callable) kernel can also be passed.
degree: degree of the polynomial, used when kernel='poly'.
gamma: kernel coefficient, used when kernel is not 'linear'.
coef0: the value of r, used when kernel is 'poly' or 'sigmoid'.
shrinking: whether to use the shrinking heuristic.
probability: whether to enable probability estimates.
tol: tolerance for the stopping criterion.
cache_size: size of the kernel cache, in MB.
class_weight: class weights, used for class-imbalance problems (note: this is a constructor parameter; per-sample weights are passed to fit via sample_weight).
verbose: enable verbose output.
max_iter: hard limit on the number of iterations (-1 means no limit).
decision_function_shape: 'ovo' means one-vs-one, 'ovr' means one-vs-rest.
random_state: omitted.
svm.NuSVC(nu=0.5, kernel='rbf', degree=3, gamma='auto_deprecated', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', random_state=None)
svm.LinearSVC(penalty='l2', loss='squared_hinge', dual=True, tol=0.0001, C=1.0, multi_class='ovr', fit_intercept=True, intercept_scaling=1, class_weight=None, verbose=0, random_state=None, max_iter=1000)
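A minimal sketch of fitting one of the classifiers above. The dataset and parameter choices here (iris, gamma='scale') are illustrative assumptions, not from the notes; in newer scikit-learn versions 'scale' replaces the deprecated 'auto' default for gamma.

```python
# Fit an RBF-kernel SVC on the iris dataset and score it on held-out data.
from sklearn import svm
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

clf = svm.SVC(C=1.0, kernel='rbf', gamma='scale')  # 'scale' instead of deprecated 'auto'
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))  # mean accuracy on the test set
```

NuSVC is used the same way, with nu (an upper bound on the fraction of margin errors) replacing C; LinearSVC only supports a linear decision boundary but scales better to large datasets.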
4.2 Regression
svm.SVR(kernel='rbf', degree=3, gamma='auto_deprecated', coef0=0.0, tol=0.001, C=1.0, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=-1)
svm.NuSVR(nu=0.5, C=1.0, kernel='rbf', degree=3, gamma='auto_deprecated', coef0=0.0, shrinking=True, tol=0.001, cache_size=200, verbose=False, max_iter=-1)
svm.LinearSVR(epsilon=0.0, tol=0.0001, C=1.0, loss='epsilon_insensitive', fit_intercept=True, intercept_scaling=1.0, dual=True, verbose=0, random_state=None, max_iter=1000)
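A quick sketch of SVR on synthetic data (the sine-curve data is my own illustration, not from the notes); epsilon sets the width of the tube within which errors are ignored.

```python
# Fit an RBF-kernel SVR to noisy samples of a sine curve.
import numpy as np
from sklearn import svm

rng = np.random.RandomState(0)
X = np.sort(rng.uniform(0, 5, 80)).reshape(-1, 1)
y = np.sin(X).ravel() + 0.1 * rng.randn(80)

reg = svm.SVR(kernel='rbf', C=1.0, epsilon=0.1, gamma='scale')
reg.fit(X, y)
pred = reg.predict(X)  # fitted values on the training inputs
```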
Attributes:
support_vectors_: the support vectors.
support_: indices of the support vectors.
n_support_: number of support vectors for each class.
.decision_function: (method) returns the signed distance of samples to the separating hyperplane.
dual_coef_: dual coefficients of the support vectors in the decision function.
intercept_: omitted.
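The attributes above can be inspected on any fitted estimator; a small sketch on iris (the dataset choice is mine):

```python
# Fit a linear-kernel SVC and inspect its fitted attributes.
from sklearn import svm
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
clf = svm.SVC(kernel='linear').fit(X, y)

print(clf.support_vectors_.shape)          # (n_SV, n_features): the support vectors
print(clf.support_)                        # indices of the support vectors in X
print(clf.n_support_)                      # number of support vectors per class
print(clf.dual_coef_)                      # dual coefficients of the support vectors
print(clf.intercept_)                      # constants in the decision function
print(clf.decision_function(X[:2]).shape)  # (2, n_classes) with the default 'ovr'
```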
5. .neighbors
.NearestNeighbors(n_neighbors=5, radius=1.0, algorithm='auto', leaf_size=30, metric='minkowski', p=2, metric_params=None, n_jobs=None, **kwargs)
n_neighbors: number of neighbours, i.e. how many of the closest samples are selected.
radius: radius of the neighbourhood used by radius-based queries.
algorithm: the algorithm: 'auto', 'ball_tree', 'kd_tree', 'brute'.
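A minimal sketch of the two query styles, k-nearest and radius-based (the toy points are my own illustration):

```python
# Unsupervised nearest-neighbour lookup with NearestNeighbors.
import numpy as np
from sklearn.neighbors import NearestNeighbors

X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [5.0, 5.0]])
nn = NearestNeighbors(n_neighbors=2, algorithm='auto').fit(X)

# k-nearest query: the 2 closest training points to the query point.
dist, idx = nn.kneighbors([[0.1, 0.1]])
print(idx)  # indices of the 2 nearest neighbours

# radius query: all training points within radius 1.5 of the query point.
idx_r = nn.radius_neighbors([[0.1, 0.1]], radius=1.5, return_distance=False)
```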
.KDTree
.BallTree
Methods:
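The notes break off here; as a sketch, the main methods of KDTree (BallTree has the same interface) are query and query_radius. The data here is random for illustration:

```python
# Build a KDTree and run its two basic query methods.
import numpy as np
from sklearn.neighbors import KDTree

rng = np.random.RandomState(0)
X = rng.random_sample((10, 3))

tree = KDTree(X, leaf_size=30, metric='minkowski')
dist, ind = tree.query(X[:1], k=3)  # 3 nearest neighbours of the first point
count = tree.query_radius(X[:1], r=0.5, count_only=True)  # neighbours within r=0.5
```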