Keras: Using Bottleneck Features of a Pre-trained Network

A network trained on a large-scale dataset generally has very strong feature extraction capability. Take VGG16 as an example: the network extracts image features with its convolutional layers and then classifies them with the fully connected layers that follow. Here we use VGG16's convolutional layers to extract features (the so-called bottleneck features) from our own image dataset, so that a good classifier can be trained even with very little data.

By running:

from keras.applications.vgg16 import VGG16
from keras.utils import plot_model
model = VGG16(include_top=True, weights='imagenet')
model.summary()
plot_model(model, to_file='VGG16.png')

we can see that the VGG16 network structure is:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 224, 224, 3)       0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, 224, 224, 64)      1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, 224, 224, 64)      36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, 112, 112, 64)      0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, 112, 112, 128)     73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, 112, 112, 128)     147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, 56, 56, 128)       0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, 56, 56, 256)       295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, 56, 56, 256)       590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, 28, 28, 256)       0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, 28, 28, 512)       1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, 28, 28, 512)       2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, 14, 14, 512)       0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, 14, 14, 512)       2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, 7, 7, 512)         0         
_________________________________________________________________
flatten (Flatten)            (None, 25088)             0         
_________________________________________________________________
fc1 (Dense)                  (None, 4096)              102764544 
_________________________________________________________________
fc2 (Dense)                  (None, 4096)              16781312  
_________________________________________________________________
predictions (Dense)          (None, 1000)              4097000   
=================================================================
Total params: 138,357,544
Trainable params: 138,357,544
Non-trainable params: 0
_________________________________________________________________

[Figure: VGG16 network structure generated by plot_model]

When we only need the convolutional part of VGG16, we drop the fully connected layers at the top. That is, in

VGG16(include_top=True, weights='imagenet')

we change include_top=True to include_top=False.
The network then keeps only the convolutional layers, as shown below:
[Figure: VGG16 with include_top=False, convolutional layers only]
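
For reference, a minimal sketch (assuming the same Keras setup as above) that loads the headless model and shows the output shape it produces:

from keras.applications.vgg16 import VGG16

# Load VGG16 without the top fully connected layers;
# only the convolutional blocks and pooling layers remain.
conv_base = VGG16(include_top=False, weights='imagenet')
conv_base.summary()
# With include_top=False the output's spatial size depends on the input size:
# a 150x150 input image produces a (4, 4, 512) feature map.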

We then use .flow_from_directory() to generate data from the images, run the convolutional part of the network to obtain the bottleneck features of the training and validation sets, and save the resulting features as numpy arrays.
The dataset used here is very small, with only 30 images per folder, stored as:

train\
      0\
      1\
validation\
      0\
      1\

When generating the bottleneck features for the training and validation sets, steps was set to 200 and 80 respectively, giving bottleneck features of shape (6000, 4, 4, 512) and (2400, 4, 4, 512) (i.e. 200*30=6000 and 80*30=2400 samples).
The shape of the saved bottleneck features can be checked with:

import numpy as np
test = np.load('bottleneck_features_train.npy')
print(test.shape)

Generate the bottleneck features for the training and validation sets:

#!/usr/bin/python
# coding:utf8

from keras.preprocessing.image import ImageDataGenerator
import numpy as np
from keras.applications.vgg16 import VGG16

# include_top: whether to keep the 3 fully connected layers at the top
# weights='imagenet' loads the pre-trained ImageNet weights
model = VGG16(include_top=False, weights='imagenet')
# Optionally load the weights from a local file
# (redundant if weights='imagenet' has already downloaded them)
model.load_weights('vgg16_weights_tf_dim_ordering_tf_kernels_notop.h5')

datagen = ImageDataGenerator(rescale=1./255)
# Training-set generator: takes a directory path and yields rescaled image batches
train_generator = datagen.flow_from_directory('train',
                                              target_size=(150, 150),
                                              batch_size=32,
                                              class_mode=None,
                                              shuffle=False)
# Validation-set generator
validation_generator = datagen.flow_from_directory('validation',
                                                   target_size=(150, 150),
                                                   batch_size=32,
                                                   class_mode=None,
                                                   shuffle=False)

# Compute the bottleneck features.
# predict_generator runs the model on batches drawn from the generator;
# steps is the number of batches to draw.
bottleneck_features_train = model.predict_generator(train_generator, steps=200)
print(bottleneck_features_train.shape)
# Save the resulting features as numpy arrays
np.save('bottleneck_features_train.npy', bottleneck_features_train)
bottleneck_features_validation = model.predict_generator(validation_generator, steps=80)
np.save('bottleneck_features_validation.npy', bottleneck_features_validation)

Then we train our fully connected network. Since the bottleneck features of the training and validation sets obtained above contain 200*30=6000 and 80*30=2400 samples respectively, the corresponding labels are set to:

train_labels = np.array([0]*3000 + [1]*3000)
validation_labels = np.array([0]*1200 + [1]*1200)
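
Since the labels are built by hand, a quick sanity check (a minimal sketch) that the label count matches the number of saved bottleneck samples can save debugging later:

import numpy as np

train_data = np.load('bottleneck_features_train.npy')
train_labels = np.array([0]*3000 + [1]*3000)
# The number of labels must equal the number of bottleneck feature samples
assert train_data.shape[0] == train_labels.shape[0]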

Then load the saved data and train the fully connected network:

#!/usr/bin/python
# coding:utf8

# Fine-tune the last part of the network: starting from a pre-trained network,
# retrain only a small set of weights (here a fully connected classifier) on the new dataset
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense
import numpy as np

# Load the saved bottleneck features and build the matching labels
train_data = np.load('bottleneck_features_train.npy')
train_labels = np.array([0]*3000 + [1]*3000)
print(train_labels)
validation_data = np.load('bottleneck_features_validation.npy')
validation_labels = np.array([0]*1200 + [1]*1200)

# A small fully connected classifier on top of the bottleneck features
model = Sequential()
model.add(Flatten(input_shape=train_data.shape[1:]))
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))

model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(train_data, train_labels, epochs=5, batch_size=32, validation_data=(validation_data, validation_labels))
model.save_weights('bottleneck_fc_model.h5')
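
As a follow-up, here is a minimal inference sketch showing how the saved top-model weights could be combined with the VGG16 convolutional base to classify a single new image (the file name test.jpg is a placeholder):

#!/usr/bin/python
# coding:utf8

import numpy as np
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.models import Sequential
from keras.layers import Dropout, Flatten, Dense

# Convolutional base used to compute the bottleneck feature of one image
conv_base = VGG16(include_top=False, weights='imagenet')

# Rebuild the same top model as above and load the saved weights
top_model = Sequential()
top_model.add(Flatten(input_shape=(4, 4, 512)))
top_model.add(Dense(256, activation='relu'))
top_model.add(Dropout(0.5))
top_model.add(Dense(1, activation='sigmoid'))
top_model.load_weights('bottleneck_fc_model.h5')

# Preprocess a single image the same way as the generators (rescale=1./255)
img = image.load_img('test.jpg', target_size=(150, 150))
x = image.img_to_array(img) / 255.
x = np.expand_dims(x, axis=0)

bottleneck = conv_base.predict(x)           # shape: (1, 4, 4, 512)
prob = top_model.predict(bottleneck)[0][0]  # sigmoid output in [0, 1]
print('probability of class 1: %.3f' % prob)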