LIBSVM Documentation: Methods

Utility Functions
=================

To use utility functions, type:

    >>> from svmutil import *

The above command loads:

    svm_train()            : train an SVM model

    svm_predict()          : predict testing data

    svm_read_problem()     : read the data from a LIBSVM-format file

    svm_load_model()       : load a LIBSVM model

    svm_save_model()       : save a model to a file

    evaluations()          : evaluate prediction results

    csr_find_scale_param() : find scaling parameters for data in csr format

    csr_scale()            : apply data scaling to data in csr format
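
A minimal end-to-end sketch combining these functions, assuming the heart_scale data set shipped with LIBSVM is available one directory up:

    >>> from svmutil import *

    >>> # read a LIBSVM-format file into labels y and instances x
    >>> y, x = svm_read_problem('../heart_scale')

    >>> # train on the first 200 instances and predict the rest
    >>> m = svm_train(y[:200], x[:200], '-c 4')

    >>> p_labels, p_acc, p_vals = svm_predict(y[200:], x[200:], m)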

## The first function

- ***Function: svm_train***

There are three ways to call svm_train():

    >>> model = svm_train(y, x [, 'training_options'])

    >>> model = svm_train(prob [, 'training_options'])

    >>> model = svm_train(prob, param)

    y: a list/tuple/ndarray of l training labels (type must be int/double).

    x: 1. a list/tuple of l training instances. Feature vector of each training instance is a list/tuple or dictionary.

      2. an l * n numpy ndarray or scipy spmatrix (n: number of features).

    training_options: a string in the same form as that for LIBSVM command mode.

    prob: an svm_problem instance generated by calling

          svm_problem(y, x).

          For pre-computed kernel, you should use

          svm_problem(y, x, isKernel=True)

    param: an svm_parameter instance generated by calling

          svm_parameter('training_options')

    model: the returned svm_model instance. See svm.h for details of this structure. If '-v' is specified, cross validation is

          conducted and the returned model is just a scalar: cross-validation accuracy for classification and mean-squared error for regression.

To train the same data many times with different parameters, the second and the third ways should be faster.

  Examples:


    >>> y, x = svm_read_problem('../heart_scale')

    >>> prob = svm_problem(y, x)

    >>> param = svm_parameter('-s 3 -c 5 -h 0')

    >>> m = svm_train(y, x, '-c 5')

    >>> m = svm_train(prob, '-t 2 -c 5')

    >>> m = svm_train(prob, param)

    >>> CV_ACC = svm_train(y, x, '-v 3')
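
Since svm_problem(y, x, isKernel=True) is mentioned above for pre-computed kernels, here is a hedged sketch of training with a pre-computed kernel ('-t 4'); the kernel values are made up, and index 0 of each instance must hold its serial number:

    >>> # pre-computed kernel: index 0 is the instance ID, indices 1..l hold K(x_i, x_j)
    >>> y = [1, -1]

    >>> x = [{0: 1, 1: 4, 2: 6}, {0: 2, 1: 6, 2: 18}]

    >>> prob = svm_problem(y, x, isKernel=True)

    >>> param = svm_parameter('-t 4 -c 4')

    >>> m = svm_train(prob, param)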

## The second function

- ***Function: svm_predict***

To predict testing data with a model, use

    >>> p_labels, p_acc, p_vals = svm_predict(y, x, model [,'predicting_options'])

    y: a list/tuple/ndarray of l true labels (type must be int/double).

      It is used for calculating the accuracy. Use [] if true labels are unavailable.

    x: 1. a list/tuple of l testing instances. Feature vector of each testing instance is a list/tuple or dictionary.

      2. an l * n numpy ndarray or scipy spmatrix (n: number of features).

    predicting_options: a string of predicting options in the same format as that of LIBSVM.

    model: an svm_model instance.

    p_labels: a list of predicted labels

    p_acc: a tuple including accuracy (for classification), mean squared error, and squared correlation coefficient (for regression).

    p_vals: a list of decision values or probability estimates (if '-b 1' is specified). If k is the number of classes in training data, for decision values, each element includes results of predicting k(k-1)/2 binary-class SVMs. For classification, k = 1 is a special case. Decision value [+1] is returned for each testing instance, instead of an empty list.

            For probabilities, each element contains k values indicating the probability that the testing instance is in each class. Note that the order of classes is the same as the 'model.label' field in the model structure.

Example:

    >>> m = svm_train(y, x, '-c 5')

    >>> p_labels, p_acc, p_vals = svm_predict(y, x, m)
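
To obtain probability estimates, the model must also be trained with '-b 1'; a small sketch reusing y and x from above, where the per-class order in each element of p_vals follows the model's label field (m.get_labels() in this interface):

    >>> # train a model that supports probability estimates
    >>> m = svm_train(y, x, '-c 5 -b 1')

    >>> # with '-b 1' at prediction time, p_vals holds per-class probabilities
    >>> p_labels, p_acc, p_vals = svm_predict(y, x, m, '-b 1')

    >>> label_order = m.get_labels()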

## The third group of functions

- ***Functions: svm_read_problem/svm_load_model/svm_save_model***

    See the usage by examples:

    >>> y, x = svm_read_problem('data.txt')

    >>> m = svm_load_model('model_file')

    >>> svm_save_model('model_file', m)
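
A hedged round-trip sketch ('model_file' is just a placeholder path): a saved model, once reloaded, should reproduce the predictions of the original model.

    >>> y, x = svm_read_problem('../heart_scale')

    >>> m = svm_train(y, x, '-c 5')

    >>> svm_save_model('model_file', m)

    >>> m2 = svm_load_model('model_file')

    >>> # p1 and p2 should be identical lists of predicted labels
    >>> p1 = svm_predict(y, x, m)[0]

    >>> p2 = svm_predict(y, x, m2)[0]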

## The fourth function

- ***Function: evaluations***

Calculate some evaluations using the true values (ty) and the predicted values (pv):

    >>> (ACC, MSE, SCC) = evaluations(ty, pv, useScipy)

    ty: a list/tuple/ndarray of true values.

    pv: a list/tuple/ndarray of predicted values.

    useScipy: convert ty, pv to ndarray, and use scipy functions to do the evaluation.

    ACC: accuracy.

    MSE: mean squared error.

    SCC: squared correlation coefficient.
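
A small sketch with made-up labels, showing the return order:

    >>> ty = [1, -1, 1, 1]      # true labels

    >>> pv = [1, -1, -1, 1]     # predicted labels

    >>> (ACC, MSE, SCC) = evaluations(ty, pv)

    >>> # ACC is a percentage (75.0 here); MSE is 1.0 for these values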

## The fifth group of functions

- ***Functions: csr_find_scale_param/csr_scale***

Scale data in csr format.

    >>> param = csr_find_scale_param(x [, lower=l, upper=u])

    >>> x = csr_scale(x, param)

    x: a csr_matrix of data.

    l: x scaling lower limit; default -1.

    u: x scaling upper limit; default 1.

    The scaling process is: x * diag(coef) + ones(l, 1) * offset'

    param: a dictionary of scaling parameters, where param['coef'] = coef and param['offset'] = offset.

    coef: a scipy array of scaling coefficients.

    offset: a scipy array of scaling offsets.
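
A hedged sketch of the scaling workflow; the matrix below is made up, and a common pattern is to compute the parameters on training data and reuse them on test data so both are scaled consistently:

    >>> import scipy.sparse as sparse

    >>> # made-up training data in csr format (3 instances, 2 features)
    >>> x = sparse.csr_matrix([[0.0, 5.0], [2.0, 0.0], [1.0, 3.0]])

    >>> param = csr_find_scale_param(x, lower=0, upper=1)

    >>> x_scaled = csr_scale(x, param)

    >>> # reuse the same param to scale test data
    >>> xt = sparse.csr_matrix([[1.5, 4.0]])

    >>> xt_scaled = csr_scale(xt, param)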

Additional Information
======================

This interface was written by Hsiang-Fu Yu from the Department of Computer Science, National Taiwan University. If you find this tool useful, please cite LIBSVM as follows: Chih-Chung Chang and Chih-Jen Lin, LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2:27:1--27:27, 2011. Software available at http://www.csie.ntu.edu.tw/~cjlin/libsvm

For any question, please contact Chih-Jen Lin <[email protected]>, or check the FAQ page: [http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html](http://www.csie.ntu.edu.tw/~cjlin/libsvm/faq.html).
