Implementing sklearn's train_test_split
sklearn.model_selection provides a function, train_test_split,
which splits a dataset into a training set and a test set, in preparation for model tuning later on.
First, a look at how it is used in sklearn:
# Jupyter Notebook
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
my_knn_clf.fit(X_train, y_train)
y_predict = my_knn_clf.predict(X_test)
print(np.sum(y_predict == y_test) / len(y_test))  # accuracy on the test set
Output:
The method itself is simple: split the dataset at a given ratio. Two points to note:
* The data must be shuffled first, because the original training data may be sorted (e.g. ordered by label).
* Sometimes the same split needs to be reproducible across multiple runs, so the function accepts an optional numpy random seed; by default no seed is set.
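The two points above can be sketched with numpy alone (the sample count here is made up for illustration):

```python
import numpy as np

n = 10  # hypothetical sample count

# Without a seed, each call gives a different shuffle of the indices 0..n-1.
a = np.random.permutation(n)
b = np.random.permutation(n)

# With a fixed seed, the shuffle is reproducible.
np.random.seed(666)
c = np.random.permutation(n)
np.random.seed(666)
d = np.random.permutation(n)

print(np.array_equal(c, d))            # True: same seed, same shuffle
print(sorted(c) == list(range(n)))     # True: still a permutation of 0..n-1
```

Shuffling an index array instead of the data itself is what lets X and y be reordered consistently.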
See the code in train_test_split.py:
import numpy as np

def train_test_split(X, y, test_ratio=0.2, seed=None):
    assert X.shape[0] == y.shape[0], 'X and y must have the same number of samples'
    assert 0 <= test_ratio < 1, 'invalid test ratio'
    if seed is not None:  # "if seed:" would silently ignore seed=0
        np.random.seed(seed)
    # shuffle the indices, then split them by the test ratio
    shuffled_indexes = np.random.permutation(len(X))
    test_size = int(len(X) * test_ratio)
    train_index = shuffled_indexes[test_size:]
    test_index = shuffled_indexes[:test_size]
    return X[train_index], X[test_index], y[train_index], y[test_index]
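A quick sanity check of the function on synthetic data (the shapes and values below are made up; the function is repeated so the demo runs on its own):

```python
import numpy as np

# Copy of the train_test_split above, so this demo is self-contained.
def train_test_split(X, y, test_ratio=0.2, seed=None):
    assert X.shape[0] == y.shape[0]
    assert 0 <= test_ratio < 1
    if seed is not None:
        np.random.seed(seed)
    shuffled_indexes = np.random.permutation(len(X))
    test_size = int(len(X) * test_ratio)
    train_index = shuffled_indexes[test_size:]
    test_index = shuffled_indexes[:test_size]
    return X[train_index], X[test_index], y[train_index], y[test_index]

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features; row i is [2i, 2i+1]
y = np.arange(10)                 # label of row i is i

X_train, X_test, y_train, y_test = train_test_split(X, y, test_ratio=0.2, seed=666)
print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
# Each sample stays aligned with its label after the shuffle:
print(all(X_train[i, 0] == 2 * y_train[i] for i in range(len(y_train))))  # True
```

Because the same shuffled index array is used for both X and y, the sample-label pairing survives the shuffle.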
Now swap the sklearn split above for our own implementation and try it:
# Jupyter Notebook
%run train_test_split.py  # load the file
X_train, X_test, y_train, y_test = train_test_split(X, y)
my_knn_clf.fit(X_train, y_train)
y_predict = my_knn_clf.predict(X_test)
print(np.sum(y_predict == y_test) / len(y_test))
Output:
Because the iris dataset is fairly small, kNN does not reach 100% accuracy here.
Since each split uses different random numbers for the shuffle,
the accuracy fluctuates a little between runs, but stays within a normal range.
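The fluctuation comes purely from the shuffle: different runs put different samples into the test set, so any metric computed on it can differ slightly. A minimal numpy illustration (150 is used because iris happens to have 150 samples):

```python
import numpy as np

n = 150  # number of samples in the iris dataset

# Two splits with different seeds pick different test indices.
np.random.seed(1)
split_a = set(np.random.permutation(n)[:int(n * 0.2)].tolist())
np.random.seed(2)
split_b = set(np.random.permutation(n)[:int(n * 0.2)].tolist())

# Same test-set size, but (almost certainly) different members,
# which is why accuracy varies a bit from run to run.
print(len(split_a), len(split_b))  # 30 30
print(split_a == split_b)
```

Fixing the seed argument of train_test_split removes this run-to-run variation when a stable comparison is needed.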