Building a CNN with Keras

I. Preparing to Build the CNN

Setting up a CNN in Keras is not as tedious as in TensorFlow; you only need to import the corresponding packages.

from keras.models import Sequential

This imports the Sequential model, the simplest model in Keras: a linear stack of network layers.

from keras.layers import Dense,Activation,Convolution2D,MaxPooling2D,Flatten

This imports the layers we need: fully connected layers (Dense), activation functions, 2D convolution, 2D max pooling, and Flatten for flattening the data.

from keras.optimizers import Adam

This imports the Adam optimizer used to minimize the loss.

Create the model:

model = Sequential()

II. Building the CNN Structure

A convolutional layer consists of synaptic weights (the convolution kernels), a bias (which is optional), and an activation function; together they map the input to the layer's output.

1. Create a convolutional layer with ReLU activation

Simply add the corresponding layer to the model:

model.add(Convolution2D(
        filters=32,
        kernel_size=3,
        padding='same',
        ))  ## 3x3 kernel, 1 input channel, 32 output channels; a 465x128 input stays 465x128
model.add(Activation('relu'))

filters=32 means the layer outputs 32 channels (feature maps).

kernel_size=3 sets a 3x3 convolution kernel.

padding='same' keeps the spatial size of each output channel the same as the input.
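To see the effect of padding='same', here is a minimal sketch (assuming the 1 x 465 x 128 channels-first input used in the complete example below) that builds just this layer and prints the shapes:

from keras.models import Sequential
from keras.layers import Convolution2D

# Minimal shape probe: one 'same'-padded conv layer on a 1 x 465 x 128 input
probe = Sequential()
probe.add(Convolution2D(filters=32, kernel_size=3, padding='same',
                        data_format='channels_first',
                        input_shape=(1, 465, 128)))
probe.summary()  # output shape (None, 32, 465, 128): spatial size unchanged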

2. Create a pooling layer

As I understand it, the pooling layer downsamples the convolution output. With padding='valid', each pooled feature map has size N = (imgSize - kSize)/Strides + 1 (rounded down), where imgSize is the input width or height, kSize is the pooling kernel size, and Strides is the pooling stride; a numeric check of this formula follows the layer definition below. Again we only need to add the corresponding layer to the model. One thing to note is the layout of the input: here it is a 4D tensor of shape (batch_size, channels, pooled_rows, pooled_cols), with the channel axis first, which Keras calls 'channels_first'; TensorFlow's usual convention (and the Keras default with the TensorFlow backend) puts the channel axis last, called 'channels_last'. Keras supports both layouts through the data_format argument; what matters is that it matches how your data is actually laid out, since a mismatched layout produces wrong shapes or wastes time on extra transpositions.
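If you are not sure which layout your installation defaults to, Keras exposes it through the backend module:

from keras import backend as K

print(K.image_data_format())                 # 'channels_last' or 'channels_first'
# K.set_image_data_format('channels_first')  # uncomment to change the default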

model.add(MaxPooling2D(
        pool_size=2,
        strides=2,
        padding='valid',
        data_format='channels_first'
        ))  ## N x 32 x 232 x 64
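As a quick numeric check of the pooling formula, a small sketch with pool size 2 and stride 2 on the 465 x 128 feature maps assumed above:

def pooled_size(img_size, k_size, strides):
    # 'valid' pooling: floor((imgSize - kSize) / strides) + 1
    return (img_size - k_size) // strides + 1

print(pooled_size(465, 2, 2))  # 232
print(pooled_size(128, 2, 2))  # 64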

3. Flatten the pooled result for the fully connected layer

Flattening simply converts the multi-dimensional feature maps into a one-dimensional vector. The flattened vector has length N = height * width * channels, i.e. the number of output channels multiplied by the height and width of each feature map.

model.add(Flatten())  # e.g. N x 3 x 2 x 64 =>> N x 384
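Flatten is nothing more than a reshape that keeps the batch axis; a small numpy sketch using the hypothetical 3 x 2 x 64 shape from the comment above:

import numpy as np

x = np.zeros((5, 3, 2, 64))       # a batch of 5 feature maps of shape 3 x 2 x 64
flat = x.reshape(x.shape[0], -1)  # keep the batch axis, flatten everything else
print(flat.shape)                 # (5, 384), since 3 * 2 * 64 = 384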

4. Create the fully connected layer

The fully connected layer shrinks the flattened data further: a matrix multiplication maps it to a new, smaller dimension, and an activation function is then applied (a numpy sketch of this computation follows the code below).

# Dense layer (units: 100, activation: ReLU)

model.add(Dense(100))

model.add(Activation('relu'))
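Under the hood, Dense(100) followed by ReLU computes y = relu(xW + b); a minimal numpy sketch with hypothetical sizes:

import numpy as np

x = np.random.randn(1, 384)       # a flattened input (hypothetical size)
W = np.random.randn(384, 100)     # the weight matrix learned by Dense(100)
b = np.zeros(100)                 # the bias vector
y = np.maximum(x.dot(W) + b, 0)   # matrix multiply plus bias, then ReLU
print(y.shape)                    # (1, 100)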

5. Prediction

Prediction is again a matrix multiplication that compresses the output down to the number of classes (10 here); the softmax activation turns the scores into class probabilities.

model.add(Dense(10))

model.add(Activation('softmax'))

This completes the CNN. The number and size of the convolution, pooling, and fully connected layers can be increased or decreased to suit your task.

 

III. Training the Model

To train the model we need to define a loss function and an optimization method, and then run the training. Because the amount of training data is large, we split it into batches and train on one small batch at a time.

1. Define the optimizer and compile the model

# Another way to define your optimizer
adam = Adam(lr=1e-4)
# We add metrics to get more results you want to see
model.compile(optimizer=adam,
              loss='categorical_crossentropy',
              metrics=['accuracy'])

2. Train the model

model.fit(X_train, y_train, epochs=10, batch_size=64)

epochs=10 means the whole dataset is passed through the network 10 times; batch_size=64 means each gradient update is computed from 64 samples.
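After training, evaluation on held-out data is one call; a sketch assuming X_test and y_test exist in the same format as the training data (with metrics=['accuracy'] from the compile step above, evaluate returns both loss and accuracy):

loss, acc = model.evaluate(X_test, y_test, batch_size=64)
print('test loss:', loss)
print('test accuracy:', acc)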

IV. Complete Example

import numpy as np
import os
import datetime
import tensorflow as tf
import h5py
from ops import *          # local helper module from the author's project
from read_hdf5 import *    # local helper module providing load_hdf5() and read_data()

from keras.utils import np_utils
from keras.models import Sequential
from keras.layers import Dense,Activation,Convolution2D,MaxPooling2D,Flatten,Dropout
from keras.optimizers import Adam



 

feature_format = 'tfrecord'
feature_path = '/home/rainy/tlj/dcase/h5/train_fold1.h5'
statistical_parameter_path = '/home/rainy/Desktop/model_xception/statistical_parameter.hdf5'
save_path = '/home/rainy/Desktop/model_xception'

max_epoch = 20
high = 465            # input height (rows)
wide = 128            # input width (columns)
shape = high * wide
keep_prob = 1
max_batch_size = 50

 

#fp = h5py.File(statistical_parameter_path, 'r')

starttime = datetime.datetime.now()

# Load the features and labels, then shuffle them with a shared index
feature, label = load_hdf5(feature_path)
index_shuffle = np.arange(feature.shape[0])
np.random.shuffle(index_shuffle)
feature = feature[index_shuffle]
label = label[index_shuffle]

 

# Normalize each column of the last axis to zero mean and unit variance
feature_mean = np.zeros(wide)
feature_var = np.zeros(wide)
for i in range(feature.shape[2]):
    feature_mean[i] = np.mean(feature[:,:,i])
    feature_var[i] = np.var(feature[:,:,i])
for i in range(feature.shape[0]):
    for j in range(feature.shape[1]):
        feature[i,j,:] = (feature[i,j,:] - feature_mean)/np.sqrt(feature_var)

# One-hot encode the labels (10 classes)
y_data = np.zeros((label.shape[0], 10), dtype=int)
for j in range(label.shape[0]):
    y_data[j, label[j]] = 1

# Reshape to (N, channels, height, width) for the channels_first layers
feature = feature.reshape([-1,1,465,128])

 

# Load the testing data and reshape it the same way
test_feature, test_label = read_data()
test_feature = test_feature.reshape([-1,1,465,128])

 

# Learning-rate decay schedule (NOTE: tf.train.exponential_decay builds a
# TensorFlow tensor, and global_steps is never incremented by Keras, so this
# schedule is not actually wired into the Keras optimizer below)
LEARNING_RATE_BASE = 0.001
LEARNING_RATE_DECAY = 0.1
LEARNING_RATE_STEP = 300
global_steps = tf.Variable(0, trainable=False)
learning_rate = tf.train.exponential_decay(LEARNING_RATE_BASE,
                                           global_steps,
                                           LEARNING_RATE_STEP,
                                           LEARNING_RATE_DECAY,
                                           staircase=True)

 

model = Sequential()
# 2D convolutional layer (filters: 32, kernel size: 3x3) + ReLU activation
model.add(Convolution2D(
        filters=32,
        kernel_size=3,
        padding='same',
        data_format='channels_first',
        input_shape=(1, 465, 128),   # (channels, height, width)
        ))  ## 3x3 patch, in 1 channel, out 32 channels: N x 32 x 465 x 128
model.add(Activation('relu'))
# 2D max pooling (pool size: 2, stride: 2)
model.add(MaxPooling2D(
        pool_size=2,
        strides=2,
        padding='valid',
        data_format='channels_first'
        ))  ## N x 32 x 232 x 64
# 2D convolutional layer (filters: 64, kernel size: 5x5) + ReLU activation
model.add(Convolution2D(
        filters=64,
        kernel_size=5,
        padding='same',
        data_format='channels_first'
        ))  ## 5x5 patch, in 32 channels, out 64 channels: N x 64 x 232 x 64
model.add(Activation('relu'))
# 2D max pooling (pool size: 2, stride: 2)
model.add(MaxPooling2D(
        pool_size=(2,2),
        strides=(2,2),
        padding='valid',
        data_format='channels_first'
        ))  ## N x 64 x 116 x 32
# Flatten
model.add(Flatten())  # N x 64 x 116 x 32 =>> N x (64*116*32)
# Dense layer (units: 100, activation: ReLU)
model.add(Dense(100))
model.add(Activation('relu'))
# Output layer (10 classes, activation: softmax)
model.add(Dense(10))
model.add(Activation('softmax'))

 

adam = Adam(lr=LEARNING_RATE_BASE)  # Adam expects a float here; the TF decay tensor above is not compatible with a Keras optimizer

 

model.compile(
        optimizer=adam,
        loss='categorical_crossentropy'
        )

 

print('Training--------------------------')
model.fit(feature, y_data, epochs=max_epoch, batch_size=max_batch_size)

print('Evaluating on the training set')
train_loss = model.evaluate(feature, y_data)
print('training loss:', train_loss)
# Training accuracy: compare the argmax of predictions and one-hot labels
# (done here through a TensorFlow session; see the numpy alternative below)
t_pre = model.predict(feature)
t_prediction = tf.equal(tf.argmax(t_pre,1), tf.argmax(y_data,1))
train_accuracy = tf.reduce_mean(tf.cast(t_prediction, tf.float32))
init = tf.group(tf.global_variables_initializer(), tf.local_variables_initializer())
sess = tf.Session()
sess.run(init)
print("train accuracy:", sess.run(train_accuracy))

 

# Test-set accuracy, computed the same way
y_pre = model.predict(test_feature)
correct_prediction = tf.equal(tf.argmax(y_pre,1), tf.argmax(test_label,1))
test_accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print("test accuracy:", sess.run(test_accuracy))

 

endtime = datetime.datetime.now()
print("running time in seconds:", (endtime - starttime).seconds)

 
