-
What is PMML
- Predictive Model Markup Language
- Proposed in July 1997
- An XML-based format
- Generality (cross-platform), standardization (a standardized model-description language), heterogeneity (inherited from XML itself), independence (independent of any particular data-mining tool), ease of use (just edit an XML document)
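Since PMML is plain XML, every generated file follows the same fixed skeleton. A minimal sketch for orientation (the field names and the placeholder model element are illustrative, not copied from any file generated below):

```xml
<PMML version="4.4" xmlns="http://www.dmg.org/PMML-4_4">
  <Header copyright="example"/>
  <DataDictionary numberOfFields="2">
    <DataField name="sepal_length" optype="continuous" dataType="double"/>
    <DataField name="Species" optype="categorical" dataType="string"/>
  </DataDictionary>
  <!-- A model element (TreeModel, RegressionModel, MiningModel, ...) goes here -->
</PMML>
```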
-
The difference between fit / transform / fit_transform
- fit: learn parameters from the data
- transform: apply the parameters learned by fit to a dataset and convert it
- fit_transform: fit followed by transform
- The test set must NOT be normalized with its own statistics; instead, normalize it with the mean and variance of the training set, as the code below demonstrates
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
iris = load_iris()
X = iris.data
Y = iris.target
xtrain, xtest, ytrain, ytest = train_test_split(X, Y, test_size=0.3)

ss = StandardScaler()
## 1. On the same dataset, compare fit_transform against fit + transform
ss_fit = ss.fit(xtrain)             # fit first
result1 = ss_fit.transform(xtrain)  # then transform
result2 = ss.fit_transform(xtrain)  # fit and transform in one call
print(result1 == result2)           # equal
## 2. Fit on one dataset, transform another; compare fit_transform against fit + transform
ss_fit = ss.fit(xtrain)             # fit on the training set
result1 = ss_fit.transform(xtest)   # transform the test set with the training statistics
result2 = ss.fit_transform(xtest)   # fit AND transform on the test set itself
print(result1 == result2)           # not equal
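To make the "normalize the test set with the training statistics" rule concrete, the scaler's output on the test set can be reproduced by hand from the training mean and standard deviation (a sketch using the same iris split):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
xtrain, xtest, ytrain, ytest = train_test_split(X, y, test_size=0.3, random_state=0)

ss = StandardScaler().fit(xtrain)   # learn mean/std from the TRAINING set only
xtest_scaled = ss.transform(xtest)  # apply the training statistics to the test set

# Manual check: transform(xtest) uses xtrain's mean and std, not xtest's own
manual = (xtest - xtrain.mean(axis=0)) / xtrain.std(axis=0)
print(np.allclose(xtest_scaled, manual))  # True
```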
-
Pipeline
- As the name suggests, a pipeline chains a sequence of transform steps, followed by an estimator, together in order
- The last step must be an estimator
- Benefit: when comparing different parameter settings for a model, a pipeline greatly simplifies the code. Take an example from Stack Overflow: without a pipeline, the normal workflow looks like this
vect = CountVectorizer()
tfidf = TfidfTransformer()
clf = SGDClassifier()
vX = vect.fit_transform(Xtrain)
tfidfX = tfidf.fit_transform(vX)
clf.fit(tfidfX, ytrain)
predicted = clf.predict(tfidfX)
# Now evaluate all steps on the test set: transform only, reusing the fitted steps
vX = vect.transform(Xtest)
tfidfX = tfidf.transform(vX)
predicted = clf.predict(tfidfX)
- With a Pipeline we need far less code; in short, the common workflow is encapsulated once
pipeline = Pipeline([
('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf', SGDClassifier()),
])
predicted = pipeline.fit(Xtrain, ytrain).predict(Xtrain)
# Now evaluate all steps on test set
predicted = pipeline.predict(Xtest)
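The snippet above assumes `Xtrain`, `ytrain`, and `Xtest` already exist. A self-contained version of the same pipeline, with a tiny made-up corpus invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import Pipeline

# Hypothetical toy data, just to make the snippet runnable
Xtrain = ["free money now", "win a prize today", "meeting at noon", "project status update"]
ytrain = [1, 1, 0, 0]  # 1 = spam, 0 = ham
Xtest = ["free prize", "status meeting"]

pipeline = Pipeline([
    ('vect', CountVectorizer()),
    ('tfidf', TfidfTransformer()),
    ('clf', SGDClassifier(random_state=0)),
])
# fit runs fit_transform through each transform step, then fits the classifier;
# predict runs transform through each step, then predicts with the classifier
predicted = pipeline.fit(Xtrain, ytrain).predict(Xtest)
print(predicted)
```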
- steps: a list of tuples; the first element of each tuple is a custom name for the step, the second is the transformer/estimator object itself, e.g. Pipeline([('anova', anova_filter), ('svc', clf)])
- fit: run the transform steps on the dataset, then fit the final estimator on the transformed data
- fit_predict: run the transform steps, then fit and predict with the final estimator; on the training set this yields predictions for the training data
- fit_transform: run the transform steps, then fit and transform with the final step
- get_params: get the parameters of the steps
- predict: transform the data, then predict with the final estimator
- predict_log_proba: transform the data, then return the final estimator's log-probabilities
- predict_proba: transform the data, then return the final estimator's predicted probabilities
- score: transform the data, then return the final estimator's score
- score_samples: scores of the individual samples
- set_params: set the parameters of the steps
- An example
# A pipeline composed of SelectKBest + SVM
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import f_regression
from sklearn.pipeline import Pipeline
# generate some data to play with
X, y = make_classification(n_informative=5, n_redundant=0, random_state=42)
# ANOVA SVM-C
anova_filter = SelectKBest(f_regression, k=5)
clf = svm.SVC(kernel='linear')
anova_svm = Pipeline(steps=[('anova', anova_filter), ('svc', clf)])
# Step parameters can be set using the step names:
# for instance, fit with k=10 in the SelectKBest
# and C=0.1 in the SVM
anova_svm.set_params(anova__k=10, svc__C=.1).fit(X, y)
prediction = anova_svm.predict(X)
print(prediction)
print(anova_svm.score(X, y))
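The `step__param` names used with set_params above are exactly what makes pipelines convenient for comparing parameter settings: the same names can be handed to GridSearchCV directly. A sketch reusing the SelectKBest + SVC pipeline (with f_classif, the classification counterpart of f_regression, swapped in):

```python
from sklearn import svm
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_informative=5, n_redundant=0, random_state=42)

pipe = Pipeline([('anova', SelectKBest(f_classif)),
                 ('svc', svm.SVC(kernel='linear'))])

# Step parameters are addressed as <step name>__<param name>
param_grid = {'anova__k': [3, 5, 10], 'svc__C': [0.1, 1, 10]}
search = GridSearchCV(pipe, param_grid, cv=3)
search.fit(X, y)
print(search.best_params_)
```

Every candidate combination refits the whole pipeline, so the feature selection is re-learned inside each cross-validation fold rather than leaking information across folds.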
-
How do we generate a PMML file? With the nyoka module + a Pipeline
Generating a PMML file for XGBoost
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
import pandas as pd
from xgboost import XGBClassifier
from nyoka import xgboost_to_pmml
seed = 123456
iris = datasets.load_iris()
target = 'Species'
features = iris.feature_names
iris_df = pd.DataFrame(iris.data, columns=features)
iris_df[target] = iris.target
X, y = iris_df[features], iris_df[target]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=seed)
pipeline = Pipeline([
('scaling', StandardScaler()),
('xgb', XGBClassifier(n_estimators=5, random_state=seed))
])
pipeline.fit(X_train, y_train)
y_pred = pipeline.predict(X_test)
y_pred_proba = pipeline.predict_proba(X_test)
xgboost_to_pmml(pipeline, features, target, "/Users/hqh/pycharm/pmml/xgb-iris.pmml")
Generating a PMML file for an SVM
import pandas as pd
from sklearn import datasets
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from nyoka import skl_to_pmml
iris = datasets.load_iris()
irisd = pd.DataFrame(iris.data,columns=iris.feature_names)
irisd['Species'] = iris.target
features = irisd.columns.drop('Species')
target = 'Species'
pipeline_obj = Pipeline([
('scaler', StandardScaler()),
('svm',SVC())
])
pipeline_obj.set_params(svm__C=.1)
pipeline_obj.fit(irisd[features],irisd[target])
skl_to_pmml(pipeline_obj,features,target,"svc_pmml.pmml")
Generating a PMML file for an Isolation Forest
from sklearn.ensemble import IsolationForest
import numpy as np
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
from sklearn import datasets
from sklearn.pipeline import Pipeline
from nyoka import skl_to_pmml
iris = datasets.load_iris()
irisd = pd.DataFrame(iris.data,columns=iris.feature_names)
irisd['Species'] = iris.target
features = irisd.columns.drop('Species')
target = 'Species'
iforest = IsolationForest(n_estimators=40, max_samples='auto', contamination='auto', random_state=42)
model_type = "iforest"
pipeline = Pipeline([
(model_type, iforest)
])
pipeline.fit(iris.data)
skl_to_pmml(pipeline, features, "", "forest.pmml")  # no target column for an unsupervised model
-
Predicting with a PMML file
from pypmml import Model
model = Model.fromFile('/Users/hqh/pycharm/pmml/forest.pmml')
result = model.predict({'sepal length (cm)':1,
"sepal width (cm)":1,"petal length (cm)":1,"petal width (cm)":1})
print(result)
'''
{'outlier': True, 'anomalyScore': 0.625736561904991}
'''