Machine Learning Metrics and Scoring: Quantifying the Quality of Predictions

Reference: https://scikit-learn.org/stable/modules/model_evaluation.html#clustering-metrics

1. Classification Metrics

The sklearn.metrics module implements loss, score, and utility functions for measuring classification performance. Some metrics may require probability estimates of the positive class, confidence values, or binary decision values. Most implementations allow each sample to make a weighted contribution to the overall score through the sample_weight parameter.
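
As a minimal sketch of the sample_weight parameter (the labels and weights below are made up purely for illustration), a per-sample weight array changes how much each sample counts toward the score:

from sklearn.metrics import accuracy_score

# Hypothetical labels and per-sample weights, for illustration only.
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
weights = [1.0, 1.0, 3.0, 1.0]

# Unweighted accuracy: 3 of 4 samples are correct -> 0.75
print(accuracy_score(y_true, y_pred))
# Weighted accuracy: the misclassified sample carries weight 3,
# so the score drops to (1 + 1 + 0 + 1) / (1 + 1 + 3 + 1) = 0.5
print(accuracy_score(y_true, y_pred, sample_weight=weights))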

(1) classification_report: builds a text report showing the main classification metrics, with the precision, recall, F1-score, and related information for each class.

sklearn.metrics.classification_report(y_true, y_pred, labels=None, target_names=None, sample_weight=None, digits=2, output_dict=False)
  • y_true: ground-truth target values
  • y_pred: target values predicted by the estimator
  • labels: optional list of label indices to include in the report
  • target_names: optional display names for the target classes
  • digits: number of digits for formatting the floating-point output values
  • return: the precision, recall, and F1-score for each class
from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']
print(classification_report(y_true, y_pred, target_names=target_names))
             precision    recall  f1-score   support

     class 0       0.50      1.00      0.67         1
     class 1       0.00      0.00      0.00         1
     class 2       1.00      0.67      0.80         3

    accuracy                           0.60         5
   macro avg       0.50      0.56      0.49         5
weighted avg       0.70      0.60      0.61         5

The leftmost column lists the class labels, and the support column on the right shows how many times each label occurs. The accuracy row gives the overall accuracy, while the macro avg and weighted avg rows average each column (support is summed).
The precision, recall, and f1-score columns give the precision, recall, and F1 value of each class.
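
When the numbers need further processing, the report can also be returned as a dictionary via the output_dict parameter; a small sketch reusing the data above:

from sklearn.metrics import classification_report

y_true = [0, 1, 2, 2, 2]
y_pred = [0, 0, 2, 2, 1]
target_names = ['class 0', 'class 1', 'class 2']

# With output_dict=True the report is a nested dict instead of a string,
# keyed by the class names plus 'accuracy', 'macro avg', and 'weighted avg'.
report = classification_report(y_true, y_pred,
                               target_names=target_names,
                               output_dict=True)
print(report['class 2']['recall'])  # recall of class 2, about 0.67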

(2) sklearn.metrics.f1_score

sklearn.metrics.f1_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')

>>> from sklearn.metrics import f1_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> f1_score(y_true, y_pred, average='macro')
0.26...
>>> f1_score(y_true, y_pred, average='micro')
0.33...
>>> f1_score(y_true, y_pred, average='weighted')
0.26...
>>> f1_score(y_true, y_pred, average=None)
array([0.8, 0. , 0. ])
>>> y_true = [0, 0, 0, 0, 0, 0]
>>> y_pred = [0, 0, 0, 0, 0, 0]
>>> f1_score(y_true, y_pred, zero_division=1)
1.0...

(3) sklearn.metrics.precision_score

sklearn.metrics.precision_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')

>>> from sklearn.metrics import precision_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> precision_score(y_true, y_pred, average='macro')
0.22...
>>> precision_score(y_true, y_pred, average='micro')
0.33...
>>> precision_score(y_true, y_pred, average='weighted')
0.22...
>>> precision_score(y_true, y_pred, average=None)
array([0.66..., 0.        , 0.        ])
>>> y_pred = [0, 0, 0, 0, 0, 0]
>>> precision_score(y_true, y_pred, average=None)
array([0.33..., 0.        , 0.        ])
>>> precision_score(y_true, y_pred, average=None, zero_division=1)
array([0.33..., 1.        , 1.        ])
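
The examples above use multiclass averaging; with the default average='binary', the score is reported only for the class selected by pos_label. A minimal sketch with made-up binary labels:

from sklearn.metrics import precision_score

# Hypothetical binary labels, for illustration only.
y_true = [0, 1, 1, 0, 1]
y_pred = [0, 1, 0, 1, 1]

# precision = TP / (TP + FP) for the positive class (pos_label=1 by default):
# 2 true positives, 1 false positive -> 2 / 3
print(precision_score(y_true, y_pred))               # 0.666...
# Treat 0 as the positive class instead: 1 true positive, 1 false positive.
print(precision_score(y_true, y_pred, pos_label=0))  # 0.5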

(4) sklearn.metrics.recall_score

sklearn.metrics.recall_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None, zero_division='warn')

>>> from sklearn.metrics import recall_score
>>> y_true = [0, 1, 2, 0, 1, 2]
>>> y_pred = [0, 2, 1, 0, 0, 1]
>>> recall_score(y_true, y_pred, average='macro')
0.33...
>>> recall_score(y_true, y_pred, average='micro')
0.33...
>>> recall_score(y_true, y_pred, average='weighted')
0.33...
>>> recall_score(y_true, y_pred, average=None)
array([1., 0., 0.])
>>> y_true = [0, 0, 0, 0, 0, 0]
>>> recall_score(y_true, y_pred, average=None)
array([0.5, 0. , 0. ])
>>> recall_score(y_true, y_pred, average=None, zero_division=1)
array([0.5, 1. , 1. ])
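
A quick check of the first value above (a sketch): once y_true is all zeros, class 0 has six actual positives, three of which are predicted as 0, so recall = TP / (TP + FN) = 3/6 = 0.5:

import numpy as np

y_true = np.array([0, 0, 0, 0, 0, 0])
y_pred = np.array([0, 2, 1, 0, 0, 1])

# Recall for class 0 = true positives / actual positives.
tp = np.sum((y_true == 0) & (y_pred == 0))  # 3
actual_pos = np.sum(y_true == 0)            # 6
print(tp / actual_pos)                       # 0.5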

(5) sklearn.metrics.roc_auc_score

sklearn.metrics.roc_auc_score(y_true, y_score, average='macro', sample_weight=None, max_fpr=None, multi_class='raise', labels=None)

Computes the area under the receiver operating characteristic curve (ROC AUC) from prediction scores.

>>> import numpy as np
>>> from sklearn.metrics import roc_auc_score
>>> y_true = np.array([0, 0, 1, 1])
>>> y_scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> roc_auc_score(y_true, y_scores)
0.75
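
For multiclass problems, roc_auc_score expects one probability column per class and an explicit multi_class strategy; a usage sketch with made-up probabilities (output omitted):

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical 3-class problem with made-up predicted probabilities
# (one column per class, rows sum to 1).
y_true = np.array([0, 1, 2, 2])
y_proba = np.array([[0.7, 0.2, 0.1],
                    [0.3, 0.5, 0.2],
                    [0.1, 0.3, 0.6],
                    [0.2, 0.2, 0.6]])

# One-vs-rest ROC AUC, macro-averaged over the three classes.
print(roc_auc_score(y_true, y_proba, multi_class='ovr', average='macro'))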

(6) sklearn.metrics.roc_curve

sklearn.metrics.roc_curve(y_true, y_score, pos_label=None, sample_weight=None, drop_intermediate=True)

>>> import numpy as np
>>> from sklearn import metrics
>>> y = np.array([1, 1, 2, 2])
>>> scores = np.array([0.1, 0.4, 0.35, 0.8])
>>> fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=2)
>>> fpr
array([0. , 0. , 0.5, 0.5, 1. ])
>>> tpr
array([0. , 0.5, 0.5, 1. , 1. ])
>>> thresholds
array([1.8 , 0.8 , 0.4 , 0.35, 0.1 ])
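
roc_curve returns the false positive rates, true positive rates, and the thresholds at which they were computed. As a follow-up sketch, the area under these points can be recovered with metrics.auc; here it evaluates to 0.75, consistent with the roc_auc_score example above:

import numpy as np
from sklearn import metrics

y = np.array([1, 1, 2, 2])
scores = np.array([0.1, 0.4, 0.35, 0.8])

fpr, tpr, thresholds = metrics.roc_curve(y, scores, pos_label=2)
# Trapezoidal area under the (fpr, tpr) points.
print(metrics.auc(fpr, tpr))  # 0.75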

(7) sklearn.metrics.confusion_matrix

sklearn.metrics.confusion_matrix(y_true, y_pred, labels=None, sample_weight=None, normalize=None)

>>> from sklearn.metrics import confusion_matrix
>>> y_true = [2, 0, 2, 2, 0, 1]
>>> y_pred = [0, 0, 2, 2, 0, 2]
>>> confusion_matrix(y_true, y_pred)
array([[2, 0, 0],
       [0, 0, 1],
       [1, 0, 2]])
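
For a binary problem, the four counts can be unpacked directly from the flattened matrix; a small sketch (labels made up for illustration):

from sklearn.metrics import confusion_matrix

# Binary example: rows are true labels, columns are predicted labels.
y_true = [0, 1, 0, 1]
y_pred = [1, 1, 1, 0]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(tn, fp, fn, tp)  # 0 2 1 1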

(8) sklearn.metrics.accuracy_score: the accuracy score is the fraction of all samples that are classified correctly.

sklearn.metrics.accuracy_score(y_true, y_pred, normalize=True, sample_weight=None)

normalize: defaults to True, returning the fraction of correctly classified samples; if False, the number of correctly classified samples is returned.

>>> from sklearn.metrics import accuracy_score
>>> y_pred = [0, 2, 1, 3]
>>> y_true = [0, 1, 2, 3]
>>> accuracy_score(y_true, y_pred)
0.5
>>> accuracy_score(y_true, y_pred, normalize=False)
2
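
Equivalently (a sketch), the accuracy is just the mean of the element-wise comparison between y_true and y_pred, and normalize=False corresponds to the raw count of matches:

import numpy as np

y_pred = np.array([0, 2, 1, 3])
y_true = np.array([0, 1, 2, 3])

matches = (y_true == y_pred)
print(matches.sum())   # 2 correctly classified samples
print(matches.mean())  # 0.5, the accuracy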

2. Regression Metrics

(1) Max error: computes the maximum residual error, capturing the worst-case error between the predicted value and the true value. In a perfectly fitted single-output regression model, max_error would be 0 on the training set; although this is unlikely in the real world, the metric shows the extent of the errors the fitted model can make.

>>> from sklearn.metrics import max_error
>>> y_true = [3, 2, 7, 1]
>>> y_pred = [9, 2, 7, 1]
>>> max_error(y_true, y_pred)
6
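
As a quick check (sketch), the metric is simply the largest absolute residual, here |3 - 9| = 6:

import numpy as np

y_true = np.array([3, 2, 7, 1])
y_pred = np.array([9, 2, 7, 1])

# Largest absolute difference between true and predicted values.
print(np.max(np.abs(y_true - y_pred)))  # 6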

(2) Mean absolute error

>>> from sklearn.metrics import mean_absolute_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_absolute_error(y_true, y_pred)
0.5
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_absolute_error(y_true, y_pred)
0.75
>>> mean_absolute_error(y_true, y_pred, multioutput='raw_values')
array([0.5, 1. ])
>>> mean_absolute_error(y_true, y_pred, multioutput=[0.3, 0.7])
0.85...
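
The last result follows directly from the per-output errors: with weights [0.3, 0.7], the weighted average is 0.3 * 0.5 + 0.7 * 1.0 = 0.85. A sketch of the same computation by hand:

import numpy as np

raw = np.array([0.5, 1.0])     # per-output MAE, as returned by 'raw_values'
weights = np.array([0.3, 0.7])

# multioutput=[0.3, 0.7] is a weighted average of the per-output errors.
print(np.average(raw, weights=weights))  # 0.85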

(3) Mean squared error

>>> from sklearn.metrics import mean_squared_error
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> mean_squared_error(y_true, y_pred)
0.375
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> mean_squared_error(y_true, y_pred)
0.7083...
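
To express the error in the original units of the target, the root mean squared error is simply the square root of this value; a sketch:

import numpy as np
from sklearn.metrics import mean_squared_error

y_true = [3, -0.5, 2, 7]
y_pred = [2.5, 0.0, 2, 8]

# RMSE = sqrt(MSE); sqrt(0.375) is roughly 0.612.
print(np.sqrt(mean_squared_error(y_true, y_pred)))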

(4) R² score, the coefficient of determination

>>> from sklearn.metrics import r2_score
>>> y_true = [3, -0.5, 2, 7]
>>> y_pred = [2.5, 0.0, 2, 8]
>>> r2_score(y_true, y_pred)
0.948...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='variance_weighted')
0.938...
>>> y_true = [[0.5, 1], [-1, 1], [7, -6]]
>>> y_pred = [[0, 2], [-1, 2], [8, -5]]
>>> r2_score(y_true, y_pred, multioutput='uniform_average')
0.936...
>>> r2_score(y_true, y_pred, multioutput='raw_values')
array([0.965..., 0.908...])
>>> r2_score(y_true, y_pred, multioutput=[0.3, 0.7])
0.925...
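
As a sanity check (sketch), R² = 1 - SS_res / SS_tot, where SS_res is the sum of squared residuals and SS_tot is the total sum of squares around the mean of y_true; for the single-output example this gives 1 - 1.5 / 29.1875 ≈ 0.9486:

import numpy as np

y_true = np.array([3, -0.5, 2, 7])
y_pred = np.array([2.5, 0.0, 2, 8])

ss_res = np.sum((y_true - y_pred) ** 2)         # 1.5
ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # 29.1875
print(1 - ss_res / ss_tot)                       # 0.9486...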
