mx.metric.create() is one way to set up model evaluation in MXNet.
Its docstring in the API reads:
def create(metric, *args, **kwargs):
    """Creates evaluation metric from metric names or instances of EvalMetric
    or a custom metric function.

    Parameters
    ----------
    metric : str or callable
        Specifies the metric to create. This argument must be one of the below:

        - Name of a metric.
        - An instance of `EvalMetric`.
        - A list, each element of which is a metric or a metric name.
        - An evaluation function that computes custom metric for a given batch
          of labels and predictions.
    *args : list
        Additional arguments to metric constructor. Only used when metric is str.
    **kwargs : dict
        Additional arguments to metric constructor. Only used when metric is str.

    Examples
    --------
    >>> def custom_metric(label, pred):
    ...     return np.mean(np.abs(label - pred))
    ...
    >>> metric1 = mx.metric.create('acc')
    >>> metric2 = mx.metric.create(custom_metric)
    >>> metric3 = mx.metric.create([metric1, metric2, 'rmse'])
    """
The metric argument can be a string naming a built-in metric, such as 'acc'.
It can also be a custom evaluation function that you define yourself.
Or it can be a list, where each element is itself a metric — either a custom function or a built-in name like 'acc'.
import numpy as np
import mxnet as mx

# First, define a custom evaluation function:
def custom_metric(label, pred):
    return np.mean(np.abs(label - pred))
metric1 = mx.metric.create('acc')
metric2 = mx.metric.create(custom_metric)
metric3 = mx.metric.create([metric1, metric2, 'rmse'])