Source code and the Food-5K dataset are available for download (extraction code: 1izj).
What is transfer learning
When a dataset is not large enough to train an entire CNN from scratch, a common approach is to adapt a network pretrained on ImageNet (such as VGG16 or Inception-v3) to the new task.
Broadly speaking, transfer learning comes in two types:
- feature extraction
- fine-tuning
In the first type, the pretrained network is treated as an arbitrary feature extractor: an image enters the input layer and propagates forward, stopping at a designated layer; the output of that layer is taken as the image's feature representation.
In the second type, the structure of the pretrained model is changed: the fully connected layers are removed and a custom set of fully connected layers is added for the new classification task (the design is not unique).
This article works through a hands-on project of the second type to deepen the reader's understanding.
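For contrast, the first type can be sketched in a few lines: forward an image through the truncated network and flatten the resulting activations into a feature vector. This is a minimal sketch, not part of the project code; it uses weights=None to avoid downloading the pretrained weights, and random noise as a stand-in image (in practice you would pass weights='imagenet' and a real preprocessed photo):

```python
import numpy as np
from keras.applications import VGG16

# Truncated VGG16: the convolutional base only, no fully connected head.
# weights=None skips the ImageNet download for this sketch; use
# weights='imagenet' to get meaningful pretrained features in practice.
extractor = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))

# A stand-in "image"; in practice this would be a preprocessed photo.
image = np.random.rand(1, 224, 224, 3).astype('float32')

# The activations of the last pooling layer serve as the image's features.
features = extractor.predict(image)                # shape (1, 7, 7, 512)
vector = features.reshape(features.shape[0], -1)   # flattened to (1, 25088)
```

Such vectors can then be fed to any classical classifier (logistic regression, SVM, etc.) without ever training the convolutional base.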
Prerequisites
1. The VGG-16 module built into Keras
First, a quick look at the VGG16 architecture (Figure 1): it consists of 5 convolutional blocks and 3 fully connected layers. The blocks contain 2, 2, 3, 3, and 3 convolutional layers respectively, so there are 2+2+3+3+3 = 13 convolutional layers, which together with the 3 fully connected layers give 16 weight layers in total.
In this article, the top 3 fully connected layers are removed and a custom set of fully connected layers is added to train a classifier on the Food-5K dataset.
The code below previews the network with the fully connected layers removed. The weights are downloaded automatically when the model is instantiated; here we use weights pretrained on the ImageNet dataset.
from keras.applications import VGG16
model = VGG16(weights='imagenet', include_top=False)
model.summary()
The output is as follows:
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) (None, None, None, 3) 0
_________________________________________________________________
block1_conv1 (Conv2D) (None, None, None, 64) 1792
_________________________________________________________________
block1_conv2 (Conv2D) (None, None, None, 64) 36928
_________________________________________________________________
block1_pool (MaxPooling2D) (None, None, None, 64) 0
_________________________________________________________________
block2_conv1 (Conv2D) (None, None, None, 128) 73856
_________________________________________________________________
block2_conv2 (Conv2D) (None, None, None, 128) 147584
_________________________________________________________________
block2_pool (MaxPooling2D) (None, None, None, 128) 0
_________________________________________________________________
block3_conv1 (Conv2D) (None, None, None, 256) 295168
_________________________________________________________________
block3_conv2 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_conv3 (Conv2D) (None, None, None, 256) 590080
_________________________________________________________________
block3_pool (MaxPooling2D) (None, None, None, 256) 0
_________________________________________________________________
block4_conv1 (Conv2D) (None, None, None, 512) 1180160
_________________________________________________________________
block4_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block4_pool (MaxPooling2D) (None, None, None, 512) 0
_________________________________________________________________
block5_conv1 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv2 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_conv3 (Conv2D) (None, None, None, 512) 2359808
_________________________________________________________________
block5_pool (MaxPooling2D) (None, None, None, 512) 0
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
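As a sanity check, the total above can be reproduced by hand: a Conv2D layer with a 3x3 kernel has 3*3*in_channels*out_channels weights plus out_channels biases, while the input and pooling layers have no parameters. A quick arithmetic check, with the channel counts transcribed from the summary:

```python
# (in_channels, out_channels) for the 13 conv layers, all with 3x3 kernels
convs = [(3, 64), (64, 64),
         (64, 128), (128, 128),
         (128, 256), (256, 256), (256, 256),
         (256, 512), (512, 512), (512, 512),
         (512, 512), (512, 512), (512, 512)]
total = sum(3 * 3 * cin * cout + cout for cin, cout in convs)
print(total)  # -> 14714688, matching "Total params" in the summary
```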
2. The Food-5K dataset
The Food-5K dataset contains three subsets, training, validation, and evaluation, with 3000, 1000, and 1000 images respectively; food and non-food images each make up half (Figure 2).
Project walkthrough
1. Project structure
- dataset is our custom dataset directory, initially empty
- Food-5K is the original dataset
- config.py holds some basic configuration
- custom_dataset.py builds the custom dataset under dataset
- load_data.py returns the data in the format we need (images, labels)
- train.py performs the transfer-learning training
- evaluate.py reports the model's accuracy on the test set
- model_architecture.json is the saved model architecture (without weights)
- transfer_learning_weights.h5 holds the weights of the fine-tuned and retrained VGG16
2. The config file
ORIG_DATA_PATH = 'Food-5K'  # original dataset directory
BASE_PATH = 'dataset'  # custom dataset directory
TRAIN = 'training'  # training split
VALID = 'validation'  # validation split
TEST = 'evaluation'  # test split
CLASSES = ['Non-food', 'food']  # label classes
3. Building the custom dataset
import os
import shutil

import config

for split in (config.TRAIN, config.VALID, config.TEST):
    print('[INFO] processing {} split:'.format(split))
    # keep only the .jpg files (filtering with a comprehension avoids the
    # bug of removing items from a list while iterating over it)
    imagePaths = [p for p in os.listdir(os.path.join(config.ORIG_DATA_PATH, split))
                  if p.endswith('.jpg')]
    for imagePath in imagePaths:
        # the leading digit of the filename encodes the class (0 or 1)
        label = config.CLASSES[int(imagePath.split('_')[0])]
        dst = os.path.join(config.BASE_PATH, split, label)
        if not os.path.exists(dst):
            os.makedirs(dst)
        # copy the image from the current split (not always from TRAIN)
        shutil.copy2(os.path.join(config.ORIG_DATA_PATH, split, imagePath),
                     os.path.join(dst, imagePath))
print('[INFO] All done')
This sorts each of the three Food-5K splits into food and non-food subfolders.
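The label parsing above relies on the Food-5K naming convention: every file is named <class>_<id>.jpg, where the leading digit is 0 (non-food) or 1 (food). A standalone sketch of that mapping (the helper name is ours, not part of the project files):

```python
import os

CLASSES = ['Non-food', 'food']

def label_from_filename(image_path):
    # Food-5K files are named "<class>_<id>.jpg"; the digit before the
    # underscore selects the class
    filename = os.path.basename(image_path)
    return CLASSES[int(filename.split('_')[0])]

print(label_from_filename('Food-5K/training/1_42.jpg'))  # -> food
print(label_from_filename('Food-5K/training/0_7.jpg'))   # -> Non-food
```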
4. Data structures for training, validation, and testing
from imutils import paths
import numpy as np
import random
import cv2
import os

from config import BASE_PATH

# load a single image and resize it to VGG16's 224x224 input size
def load_images(x):
    image = cv2.imread(x)
    image = cv2.resize(image, (224, 224))
    return image

# build the (images, labels) structure for one split
def load_data_split(datapath):
    imagePaths = list(paths.list_images(os.path.join(BASE_PATH, datapath)))
    random.shuffle(imagePaths)
    # the leading digit of each filename is the label; os.path.basename is
    # portable, unlike splitting on a hard-coded '\\' separator
    labels = [int(os.path.basename(i)[0]) for i in imagePaths]
    images = np.array([load_images(i) for i in imagePaths])
    return (images, labels)
5. Fine-tuning and training the VGG16 network
from keras.layers import Flatten, Dense, Dropout, Input
from keras.applications import VGG16
from keras.optimizers import SGD
from keras.models import Model
from keras.utils import np_utils

from load_data import load_data_split
import config

print('[INFO] loading dataset......')
(x_train, y_train) = load_data_split(config.TRAIN)
(x_valid, y_valid) = load_data_split(config.VALID)
y_train = np_utils.to_categorical(y_train, 2)
y_valid = np_utils.to_categorical(y_valid, 2)

print('[INFO] initializing model......')
base_model = VGG16(weights='imagenet', include_top=False, input_tensor=Input(shape=(224, 224, 3)))

# fine-tuning: attach a custom fully connected head
head_model = base_model.output
head_model = Flatten(name='flatten')(head_model)
head_model = Dense(512, activation='relu')(head_model)
head_model = Dropout(0.5)(head_model)
head_model = Dense(64, activation='relu')(head_model)
head_model = Dense(len(config.CLASSES), activation='softmax')(head_model)
model = Model(base_model.input, head_model)

# freeze the 5 convolutional blocks; train only the custom head
for layer in base_model.layers:
    layer.trainable = False

print('[INFO] compiling model')
sgd = SGD(lr=0.0001, momentum=0.9)
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer=sgd)

print('[INFO] training model')
model.fit(x_train, y_train, batch_size=32, epochs=2, validation_data=(x_valid, y_valid))

print('[INFO] saving model and weights')
# save the architecture (without weights)
model_json = model.to_json()
open('model_architecture.json', 'w').write(model_json)
# save the weights
model.save_weights('transfer_learning_weights.h5', overwrite=True)
The weights of the truncated VGG16 base are frozen, and only the custom fully connected layers are trained; the new architecture and weights are then saved separately.
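Freezing is just a per-layer flag that must be set before compiling. A toy illustration with a two-layer model (not the VGG16 model above) shows the effect on trainable parameters:

```python
import numpy as np
from keras.layers import Dense, Input
from keras.models import Model

inp = Input(shape=(4,))
hidden = Dense(3)(inp)   # stands in for the pretrained base
out = Dense(2)(hidden)   # stands in for the new head
model = Model(inp, out)

model.layers[1].trainable = False           # freeze the "base" layer
model.compile(loss='mse', optimizer='sgd')  # compile after setting the flag

# only the head's 3*2 + 2 = 8 parameters remain trainable
trainable = sum(int(np.prod(w.shape)) for w in model.trainable_weights)
print(trainable)  # -> 8
```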
After just two epochs, accuracy reaches 96.13% on the training set and 99.20% on the validation set:
- loss: 0.4639 - acc: 0.9613 - val_loss: 0.1036 - val_acc: 0.9920
6. Accuracy on the test set
from keras.models import model_from_json
from keras.utils import np_utils
from keras.optimizers import SGD

from load_data import load_data_split
import config

# load the saved architecture and weights
loaded_model_json = open('model_architecture.json', 'r').read()
model = model_from_json(loaded_model_json)
model.load_weights('transfer_learning_weights.h5')

print('[INFO] loading dataset...')
(x_test, y_test) = load_data_split(config.TEST)
y_test = np_utils.to_categorical(y_test, 2)

sgd = SGD(lr=0.0001, momentum=0.9)
model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer=sgd)

print('[INFO] evaluating...')
score = model.evaluate(x_test, y_test, batch_size=32)
print('test score: {}'.format(score[0]))
print('test accuracy: {}'.format(score[1]))
The output is as follows:
test score: 0.08451384264268018
test accuracy: 0.992
With transfer learning, after just two epochs the model likewise reaches 99.2% accuracy on the test set.