Keras Learning --- Data Preprocessing

1. Data preprocessing is necessary. Here the simplest case, preprocessing the input data of the MNIST dataset, serves as an example.
    A. Set the random seed
    np.random.seed(1337)  # for reproducibility

    B. Normalize the shape of the input data; here each sample is simply a one-dimensional array of size 784.
    X_train = X_train.reshape(60000, 784)

    Convert the class labels to a one-hot encoding; this step is required for multi-class classification:
    one_hot_labels = keras.utils.np_utils.to_categorical(labels, num_classes=10)

    The train and test sets may also need to be shuffled.

    C. Convert the input data type and normalize the values
    X_train = X_train.astype('float32')
    X_train /= 255
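The shuffling mentioned in step B should use a single permutation applied to both the samples and the labels, so that each sample stays paired with its label. A minimal numpy-only sketch, with small toy arrays standing in for X_train/y_train:

```python
import numpy as np

np.random.seed(1337)  # for reproducibility

# Toy stand-ins for X_train / y_train (4 samples, 3 features each)
X_train = np.arange(12).reshape(4, 3)
y_train = np.array([0, 1, 2, 3])

# One shared permutation keeps sample/label pairs aligned
perm = np.random.permutation(X_train.shape[0])
X_train, y_train = X_train[perm], y_train[perm]

# Each row of X_train still matches its original label
assert all(X_train[i, 0] // 3 == y_train[i] for i in range(4))
```

The same two-line pattern (`perm` plus fancy indexing) applies unchanged to the real 60000-sample arrays.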

   The complete MLP code for the MNIST dataset follows:
'''Trains a simple deep NN on the MNIST dataset.

Gets to 98.40% test accuracy after 20 epochs
(there is *a lot* of margin for parameter tuning).
2 seconds per epoch on a K520 GPU.
'''

from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD, Adam, RMSprop
from keras.utils import np_utils

batch_size = 128
nb_classes = 10
nb_epoch = 20

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()

X_train = X_train.reshape(60000, 784)
X_test = X_test.reshape(10000, 784)
X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()
model.add(Dense(512, input_shape=(784,)))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.2))
model.add(Dense(10))
model.add(Activation('softmax'))

model.summary()

#model.compile(loss='categorical_crossentropy',
#              optimizer=RMSprop(),
#              metrics=['accuracy'])
model.compile(loss='categorical_crossentropy',
              optimizer=SGD(lr=0.02),
              metrics=['accuracy'])

history = model.fit(X_train, Y_train,
                    batch_size=batch_size, nb_epoch=nb_epoch,
                    verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])
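For reference, the one-hot conversion used above maps each integer label to a row of an identity matrix. A numpy-only sketch of what `np_utils.to_categorical` produces:

```python
import numpy as np

labels = np.array([0, 2, 1])
nb_classes = 3

# Each label indexes a row of the identity matrix
one_hot = np.eye(nb_classes, dtype='float32')[labels]
print(one_hot)
# [[1. 0. 0.]
#  [0. 0. 1.]
#  [0. 1. 0.]]
```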


2. If the input data are images and the model is a CNN, handling the input dimensions is slightly more involved.
First, understand the image_dim_ordering setting in Keras 1.x, whose values are "th" and "tf".
In Keras 2 this setting was renamed image_data_format: "channels_last" corresponds to the old "tf", and "channels_first" to the old "th".
For a 128x128 RGB image, "channels_first" organizes the data as (3, 128, 128), while "channels_last" organizes it as (128, 128, 3).
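Converting between the two layouts is just a matter of moving the channel axis. A minimal numpy sketch using a zero-filled 128x128 RGB image:

```python
import numpy as np

# channels_last ("tf") layout: (height, width, channels)
img_last = np.zeros((128, 128, 3), dtype='float32')

# Move the channel axis to the front for channels_first ("th"): (channels, height, width)
img_first = np.transpose(img_last, (2, 0, 1))

print(img_last.shape)   # (128, 128, 3)
print(img_first.shape)  # (3, 128, 128)
```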

The complete CNN code for the MNIST dataset follows; pay particular attention to input_shape and the reshaping of X_train/X_test.
'''Trains a simple convnet on the MNIST dataset.

Gets to 99.25% test accuracy after 12 epochs
(there is still a lot of margin for parameter tuning).
16 seconds per epoch on a GRID K520 GPU.
'''

from __future__ import print_function
import numpy as np
np.random.seed(1337)  # for reproducibility

from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation, Flatten
from keras.layers import Convolution2D, MaxPooling2D
from keras.utils import np_utils
from keras import backend as K

batch_size = 128
nb_classes = 10
nb_epoch = 12

# input image dimensions
img_rows, img_cols = 28, 28
# number of convolutional filters to use
nb_filters = 32
# size of pooling area for max pooling
pool_size = (2, 2)
# convolution kernel size
kernel_size = (3, 3)

# the data, shuffled and split between train and test sets
(X_train, y_train), (X_test, y_test) = mnist.load_data()
# Alternative: load a local copy of the pickled dataset
#import gzip
#from six.moves import cPickle
#path = r'C:\Users\ll\.keras\datasets\mnist.pkl.gz'
#f = gzip.open(path, 'rb')
#(X_train, y_train), (x_valid, y_valid), (X_test, y_test) = cPickle.load(f, encoding='bytes')
#f.close()

if K.image_dim_ordering() == 'th':
    X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
    X_test = X_test.reshape(X_test.shape[0], 1, img_rows, img_cols)
    input_shape = (1, img_rows, img_cols)
else:
    X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
    X_test = X_test.reshape(X_test.shape[0], img_rows, img_cols, 1)
    input_shape = (img_rows, img_cols, 1)

X_train = X_train.astype('float32')
X_test = X_test.astype('float32')
X_train /= 255
X_test /= 255
print('X_train shape:', X_train.shape)
print(X_train.shape[0], 'train samples')
print(X_test.shape[0], 'test samples')

# convert class vectors to binary class matrices
Y_train = np_utils.to_categorical(y_train, nb_classes)
Y_test = np_utils.to_categorical(y_test, nb_classes)

model = Sequential()

model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1],
                        border_mode='valid',
                        input_shape=input_shape))
model.add(Activation('relu'))
model.add(Convolution2D(nb_filters, kernel_size[0], kernel_size[1]))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=pool_size))
model.add(Dropout(0.25))

model.add(Flatten())
model.add(Dense(128))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer='adadelta',
              metrics=['accuracy'])

model.fit(X_train, Y_train, batch_size=batch_size, nb_epoch=nb_epoch,
          verbose=1, validation_data=(X_test, Y_test))
score = model.evaluate(X_test, Y_test, verbose=0)
print('Test score:', score[0])
print('Test accuracy:', score[1])


