SVR forecasts stock opening price

SVM-Regression
The method of Support Vector Classification can be extended to solve regression problems. This method is called Support Vector Regression.

The model produced by Support Vector Classification depends only on a subset of the training data, because the cost function for building the model does not care about training points that lie beyond the margin. Analogously, the model produced by Support Vector Regression depends only on a subset of the training data, because the cost function ignores any training data whose prediction is already close to the target (within the epsilon tube).
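To make that epsilon-insensitive behaviour concrete, here is a minimal sketch on made-up 1-D data (not from the original article): points already predicted within epsilon of their target add no cost and need not become support vectors, so widening the tube usually shrinks the support set.

```
import numpy as np
from sklearn.svm import SVR

# Toy 1-D data, purely illustrative
rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(40, 1), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(40)

# A narrow tube (small epsilon) treats more points as errors, so more of them
# end up as support vectors; a wide tube ignores points that the model
# already predicts closely.
narrow = SVR(kernel='rbf', C=1.0, epsilon=0.01).fit(X, y)
wide = SVR(kernel='rbf', C=1.0, epsilon=0.5).fit(X, y)

print(len(narrow.support_), len(wide.support_))
```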
There are three different implementations of Support Vector Regression: SVR, NuSVR and LinearSVR. LinearSVR provides a faster implementation than SVR but only considers the linear kernel, while NuSVR implements a slightly different formulation than SVR and LinearSVR, using a parameter nu in place of epsilon.
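As a rough sketch of how the three estimators are instantiated (the parameter values here are only illustrative, not tuned):

```
from sklearn import svm

X = [[0, 0], [2, 2]]
y = [0.5, 2.5]

svr = svm.SVR(kernel='rbf', C=1.0, epsilon=0.1)   # epsilon-SVR, any kernel
nu_svr = svm.NuSVR(kernel='rbf', C=1.0, nu=0.5)   # nu bounds the fraction of support vectors
lin_svr = svm.LinearSVR(C=1.0, epsilon=0.0)       # linear kernel only, faster on large sample sizes

# All three share the same fit/predict API
for model in (svr, nu_svr, lin_svr):
    model.fit(X, y)
    print(model.__class__.__name__, model.predict([[1, 1]]))
```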
As with the classification classes, the fit method takes vectors X, y as arguments, except that in this case y is expected to have floating point values instead of integer values:

```
>>> from sklearn import svm
>>> X = [[0, 0], [2, 2]]
>>> y = [0.5, 2.5]
>>> clf = svm.SVR()
>>> clf.fit(X, y)
SVR(C=1.0, cache_size=200, coef0=0.0, degree=3, epsilon=0.1, gamma='auto',
    kernel='rbf', max_iter=-1, shrinking=True, tol=0.001, verbose=False)
>>> clf.predict([[1, 1]])
array([ 1.5])
```

Support Vector Regression (SVR) using linear and non-linear kernels:

```
import numpy as np
from sklearn.svm import SVR
import matplotlib.pyplot as plt

# Generate sample data
X = np.sort(5 * np.random.rand(40, 1), axis=0)
y = np.sin(X).ravel()

# Add noise to every fifth target
y[::5] += 3 * (0.5 - np.random.rand(8))

# Fit regression models with RBF, linear and polynomial kernels
svr_rbf = SVR(kernel='rbf', C=1e3, gamma=0.1)
svr_lin = SVR(kernel='linear', C=1e3)
svr_poly = SVR(kernel='poly', C=1e3, degree=2)
y_rbf = svr_rbf.fit(X, y).predict(X)
y_lin = svr_lin.fit(X, y).predict(X)
y_poly = svr_poly.fit(X, y).predict(X)

# Look at the results
plt.scatter(X, y, c='k', label='data')
plt.plot(X, y_rbf, c='g', label='RBF model')
plt.plot(X, y_lin, c='r', label='Linear model')
plt.plot(X, y_poly, c='b', label='Polynomial model')
plt.xlabel('data')
plt.ylabel('target')
plt.title('Support Vector Regression')
plt.legend()
plt.show()
```
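The title of this post is about forecasting a stock's opening price, and the same SVR API can be pointed at that task. Below is a minimal, hypothetical sketch using a synthetic price series and lagged opening prices as features; no real market data is used and the hyperparameters are not tuned.

```
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler

# Synthetic "opening price" series standing in for real market data (hypothetical)
rng = np.random.RandomState(42)
prices = np.cumsum(rng.randn(200)) + 100

# Use the previous `window` opening prices to predict the next one
window = 5
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]

# Chronological split: train on the past, test on the most recent days
split = 150
X_train, X_test = X[:split], X[split:]
y_train, y_test = y[:split], y[split:]

# SVR is sensitive to feature scale, so standardize the lagged prices first
scaler = StandardScaler().fit(X_train)
model = SVR(kernel='rbf', C=100, gamma=0.1, epsilon=0.1)
model.fit(scaler.transform(X_train), y_train)

pred = model.predict(scaler.transform(X_test))
print("mean absolute error:", np.mean(np.abs(pred - y_test)))
```

In practice the choice of features, window length and the C, gamma and epsilon values would need validation on real data; this only shows where SVR plugs into such a pipeline.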

 

