Python Learning, Week 15: sklearn

Question:

1
datasets.make_classification is a function for generating a synthetic classification dataset; it returns a feature matrix and the corresponding class labels.

#1
from sklearn import datasets
# Generate a synthetic binary classification dataset: 1000 samples, 10 features
dataset = datasets.make_classification(n_samples=1000, n_features=10,
                                        n_informative=2, n_redundant=2,
                                        n_repeated=0, n_classes=2)
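
make_classification returns a tuple of a feature matrix and a label vector. A minimal sketch of unpacking and inspecting them (the names X and y are just illustrative):

X, y = dataset
print(X.shape)   # (1000, 10): one row per sample, one column per feature
print(y.shape)   # (1000,): one class label (0 or 1) per sample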

2
We can use Scikit-learn for K-fold cross-validation

#2
from sklearn.model_selection import KFold  # replaces the removed sklearn.cross_validation module
kf = KFold(n_splits=10, shuffle=True)
for train_index, test_index in kf.split(dataset[0]):
    X_train, y_train = dataset[0][train_index], dataset[1][train_index]
    X_test, y_test = dataset[0][test_index], dataset[1][test_index]
# After the loop, the variables hold the train/test split of the last fold
print(X_train)
print(y_train)
print(X_test)
print(y_test)
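
When only a cross-validated score is needed rather than the raw index splits, cross_val_score from sklearn.model_selection wraps this loop. A minimal sketch using the GaussianNB classifier introduced in the next step (cv=10 is an assumption that simply mirrors the 10 folds above):

from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Run 10-fold cross-validation: fit and score the classifier on each fold
scores = cross_val_score(GaussianNB(), dataset[0], dataset[1], cv=10)
print(scores)         # one accuracy score per fold
print(scores.mean())  # average accuracy across the 10 folds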

3

Scikit-learn implements several naive Bayes algorithms
Scikit-learn also includes Support Vector Machine algorithms
Scikit-learn includes ensemble-based methods for classification

#3
# Gaussian naive Bayes
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(pred)
print(y_test)

# Support Vector Machine with an RBF kernel
from sklearn.svm import SVC
clf = SVC(C=1e-01, kernel='rbf', gamma=0.1)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(pred)
print(y_test)

# Random forest: an ensemble of 6 decision trees
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier(n_estimators=6)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print(pred)
print(y_test)
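
Since the three classifiers share the same fit/predict interface, the repeated blocks above can also be written as one loop over estimators. A minimal sketch that simply reuses the classifiers and parameters from the code above:

from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

# Every scikit-learn classifier exposes the same fit/predict interface
for clf in (GaussianNB(),
            SVC(C=1e-01, kernel='rbf', gamma=0.1),
            RandomForestClassifier(n_estimators=6)):
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(type(clf).__name__, pred[:10])  # model name and its first few predictions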

4
The sklearn.metrics module includes score functions, performance metrics, pairwise metrics and distance measures.

#4
from sklearn import metrics
# Fraction of test samples whose predicted label matches the true label
acc = metrics.accuracy_score(y_test, pred)
print(acc)
# Harmonic mean of precision and recall
f1 = metrics.f1_score(y_test, pred)
print(f1)
# Area under the ROC curve, computed here from the hard 0/1 predictions
auc = metrics.roc_auc_score(y_test, pred)
print(auc)
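
sklearn.metrics also provides confusion_matrix and classification_report, which summarize several of these scores at once. A minimal sketch on the same y_test and pred:

from sklearn import metrics

# 2x2 table of true vs. predicted labels: rows are true classes, columns are predictions
print(metrics.confusion_matrix(y_test, pred))
# Per-class precision, recall, and F1 in one text report
print(metrics.classification_report(y_test, pred))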

 
