Summary of the Kaggle cat vs. dog classification task (AlexNet model, Keras framework): complete experiment workflow, source code, and detailed walkthrough

This post summarizes what I have learned so far, so that I have a reference for ideas when writing networks later.

First, the dataset download. Link: https://pan.baidu.com/s/1U4N0PCNfyIP9iHLidVb9xA  Extraction code: vcvl
Keras documentation in Chinese: https://keras.io/zh/  English documentation: https://keras.io/

  • About the structure of this dataset:
    The train folder contains 25000 cat and dog photos, 12500 of each, named cat.x.jpg (x is an index 0, 1, 2, …; note the dot after "cat"!) and dog.x.jpg.
    The test1 folder contains 12500 mixed cat and dog photos.

1. Building the dataset

  • Taking a classification task as the example, building the dataset means preparing the images and their labels and matching each image to the correct label.
    The full train folder has far too many photos to train on a CPU, so I picked 250 cat and 250 dog photos as the training set, and 100 cat and 100 dog photos as the test set.
    Both the training set and the test set are taken from the train folder (the test1 folder mixes cats and dogs, which makes labeling inconvenient).
    From the train folder, cats 0-249 and dogs 0-249 become the training set, and cats 250-349 and dogs 250-349 become the test set.
    The reorganized dataset is shown in the folder screenshots (omitted here), so the file layout is:

    data
        train
            cats (250 cats, numbered 0-249)
            dogs (250 dogs, numbered 0-249)
        test
            cats (100 cats, numbered 250-349)
            dogs (100 dogs, numbered 250-349)


A listing of the train/cats folder is included for reference, to make the layout easier to see.
At this point the dataset is ready; a small helper script for the copying is sketched below.
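For convenience, here is a minimal sketch of how the copying could be scripted with the Python standard library. The source and destination paths are assumptions based on the layout above, so adjust them for your own machine.

import os
import shutil

src_dir = "D:/python_pro/deep_learning_al/train"   #assumed location of the original Kaggle train folder
dst_root = "D:/python_pro/deep_learning_al/data"   #root of the new layout shown above

#(animal, destination subfolder, index range)
splits = [
    ("cat", "train/cats", range(0, 250)),
    ("dog", "train/dogs", range(0, 250)),
    ("cat", "test/cats",  range(250, 350)),
    ("dog", "test/dogs",  range(250, 350)),
]

for animal, subdir, indices in splits:
    dst_dir = os.path.join(dst_root, subdir)
    if not os.path.exists(dst_dir):   #create data/train/cats and the other folders if missing
        os.makedirs(dst_dir)
    for i in indices:
        name = "%s.%d.jpg" % (animal, i)   #Kaggle naming: cat.0.jpg, dog.0.jpg, ...
        shutil.copy(os.path.join(src_dir, name), os.path.join(dst_dir, name))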

2. Data preprocessing

  • Resize every image to a standard shape, image.shape = (224, 224, 3). You can also use an image generator (ImageDataGenerator) to enlarge the dataset and improve training accuracy. (I did not use it: this dataset already has enough images and the generator is easy to learn on your own; a small sketch is given right after the code below.)
    Code snippet:
IMG_CHANNELS=3 #RGB images have three channels: R, G, B
#weight_decay = 0.0005 
IMG_ROWS=224  #image height in pixels
IMG_COLS=224  #image width in pixels
BATCH_SIZE=64 #batch size
NB_EPOCH=10   #number of epochs
NB_CLASSES=2  #number of classes: cat and dog
VERBOSE=1
#VALIDATION_SPLIT=0.2
#OPTIM=RMSprop()
x_test=np.empty((200,IMG_ROWS,IMG_COLS,3),np.float16)  #test set, 200 images (200, 224, 224, 3)
x_train=np.empty((500,IMG_ROWS,IMG_COLS,3),np.float16)  #training set, 500 images (500, 224, 224, 3)
train_data=np.zeros(500)  #training labels
test_data=np.zeros(200)  #test labels
#load the training samples
for i in range(500):
    if i<250:
        #load 250 cat images
        train_data[i]=1  #the label for cat is 1
        imagepath = "D:/python_pro/deep_learning_al/data/train/cats/cat." + str(i)+ ".jpg"  #path of the cat image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1,(IMG_ROWS,IMG_COLS))  #resize to (224, 224)
        #plt.imshow(image1)
        #plt.show()
        x_train[i,:,:,:]=image1   #training matrix, 4-D (500, 224, 224, 3)

    else:
        #load 250 dog images
        train_data[i]=0  #the label for dog is 0
        imagepath = "D:/python_pro/deep_learning_al/data/train/dogs/dog." + str(i - 250)+ ".jpg"  #path of the dog image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1, (IMG_ROWS, IMG_COLS))  #resize to (224, 224)
        x_train[i, :, :, :] = image1  #training matrix, 4-D (500, 224, 224, 3)
y_train=to_categorical(train_data)  #one-hot training labels (500, 2)
x_train=np.array(x_train)  #full training matrix
#load the test samples
for i in range(200):
    if i<100:
        #load 100 cat images
        test_data[i]=1  #the label for cat is 1
        imagepath =  "D:/python_pro/deep_learning_al/data/test/cats/cat."+str(i+250)+'.jpg'  #path of the cat image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1,(IMG_ROWS,IMG_COLS))  #resize to (224, 224)
        x_test[i,:,:,:]=image1

    else:
        #load 100 dog images
        test_data[i]=0  #the label for dog is 0
        imagepath=  "D:/python_pro/deep_learning_al/data/test/dogs/dog."+str(i+250-100)+'.jpg'  #path of the dog image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1, (IMG_ROWS, IMG_COLS))  #resize to (224, 224)
        x_test[i, :, :, :] = image1
y_test=to_categorical(test_data)  #one-hot test labels (200, 2)
#print(sys.getsizeof(x_test),sys.getsizeof(y_test))
x_test=np.array(x_test)  #full test matrix (200, 224, 224, 3)
#normalization
#x_train.shape = (500, 224, 224, 3)
#x_test.shape = (200, 224, 224, 3)
x_train=x_train/255  #pixels are 8-bit (0-255), so dividing by 255 scales them to 0-1
x_test = x_test/255

print(x_train.shape,x_test.shape,y_train.shape,y_test.shape)

This completes the data preprocessing (detailed comments are given in the code above).
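For completeness, here is a minimal sketch of the ImageDataGenerator augmentation mentioned above, using the x_train and y_train arrays just built; the augmentation parameters are only illustrative.

from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range = 20,       #random rotations of up to 20 degrees
                             width_shift_range = 0.1,   #random horizontal shifts
                             height_shift_range = 0.1,  #random vertical shifts
                             horizontal_flip = True)    #random left-right flips

#flow() yields endless batches of randomly transformed images, so each epoch sees new variants;
#it would be passed to fit_generator() once the model is defined, e.g.
#model.fit_generator(datagen.flow(x_train, y_train, batch_size = 32),
#                    steps_per_epoch = len(x_train) // 32, epochs = 10)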

3. Defining the training model

  • You can write the network entirely by hand, or call add() on the framework to stack layers (convolutional layers, dropout layers, fully connected layers, and so on).

  • I used the AlexNet model described in the book.

  • A brief introduction to the AlexNet model:
    The architecture has 5 convolutional layers, 3 max pooling layers, and 2 fully connected layers (note that the third and fourth convolutional layers are not followed by pooling), and its input is a 224×224×3 three-channel color image.
    The AlexNet architecture in detail:
    Input layer (Input): a 224×224×3 image matrix.
    Convolutional layer (Conv1): 96 kernels of size 11×11 (48 kernels per GPU).
    Pooling layer (Pool1): max pooling with a 3×3 window, stride = 2.
    Convolutional layer (Conv2): 256 kernels of size 5×5 (128 kernels per GPU).
    Pooling layer (Pool2): max pooling with a 3×3 window, stride = 2.
    Convolutional layer (Conv3): 384 kernels of size 3×3 (192 kernels per GPU).
    Convolutional layer (Conv4): 384 kernels of size 3×3 (192 kernels per GPU).
    Convolutional layer (Conv5): 256 kernels of size 3×3 (128 kernels per GPU).
    Pooling layer (Pool5): max pooling with a 3×3 window, stride = 2.
    Fully connected layer (FC1): flattens the output of Pool5 into a one-dimensional vector and outputs 4096 units.
    Fully connected layer (FC2): 4096 units in, 4096 units out.
    Softmax output layer: 1000 output units, each giving the probability that the image belongs to the corresponding class; ImageNet has 1000 classes, hence the 1000-dimensional output. (We are doing binary classification, so our output has only 2 units.) The resulting feature-map sizes for our Keras implementation are checked right below.
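Since the first convolution in the code below uses 'valid' padding with stride 4, and every pooling layer uses a 3×3 window with stride 2, each spatial size follows floor((W - K)/S) + 1. A quick, purely illustrative check of the numbers that appear in the model summary later:

def out_size(w, k, s):
    #output width of a 'valid' convolution or pooling layer: floor((W - K) / S) + 1
    return (w - k) // s + 1

print(out_size(224, 11, 4))  #conv1, 11x11 kernel, stride 4 -> 54
print(out_size(54, 3, 2))    #pool1, 3x3 window, stride 2  -> 26
print(out_size(26, 3, 2))    #pool2, 3x3 window, stride 2  -> 12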

Code snippet:

def alex_net(w_path = None):
	#input images are (224, 224, 3)
	input_shape = (224, 224, 3)
	#input layer
	inputs = Input(shape = input_shape, name = 'input')

	#layer 1: two parallel convolutions followed by two pooling operations

	conv1_1 = Convolution2D(48, (11, 11), strides = (4, 4), activation = 'relu', name = 'conv1_1')(inputs)
	conv1_2 = Convolution2D(48, (11, 11), strides = (4, 4), activation = 'relu', name = 'conv1_2')(inputs)

	pool1_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_1')(conv1_1)
	pool1_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_2')(conv1_2)

	#layer 2: convolve the pooled feature maps from layer 1, then pool again

	conv2_1 = Convolution2D(128, (5, 5), activation = 'relu', padding = 'same')(pool1_1)
	conv2_2 = Convolution2D(128, (5, 5), activation = 'relu', padding = 'same')(pool1_2)

	pool2_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_1')(conv2_1)
	pool2_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_2')(conv2_2)

	#merge layer: combine the two branches before layer 3 (axis = 1 concatenates along the height dimension of these channels-last tensors)
	merge1 = concatenate([pool2_2, pool2_1], axis = 1)

	#layer 3: two convolutions

	conv3_1 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv3_1', padding = 'same')(merge1)
	conv3_2 = Convolution2D(193, (3, 3), activation = 'relu', name = 'conv3_2', padding = 'same')(merge1)

	#layer 4: two convolutions
	conv4_1 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv4_1', padding = 'same')(conv3_1)
	conv4_2 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv4_2', padding = 'same')(conv3_2)

	#layer 5: two convolutions followed by two pooling operations
	conv5_1 = Convolution2D(128, (3, 3), activation = 'relu', name = 'conv5_1', padding = 'same')(conv4_1)
	conv5_2 = Convolution2D(128, (3, 3), activation = 'relu', name = 'conv5_2', padding = 'same')(conv4_2)

	pool5_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_1')(conv5_1)
	pool5_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_2')(conv5_2)

	#merge layer: combine the two branches again before the fully connected layers
	merge2 = concatenate([pool5_1, pool5_2], axis = 1)

	#flatten the multi-dimensional feature maps into a one-dimensional vector
	dense1 = Flatten(name = 'flatten')(merge2)

	#layers 6 and 7: two 4096-unit fully connected layers, with dropout in between to reduce overfitting
	dense2_1 = Dense(4096, activation = 'relu', name = 'dense2_1')(dense1)
	dense2_2 = Dropout(0.5)(dense2_1)

	dense3_1 = Dense(4096, activation = 'relu', name = 'dense3_1')(dense2_2)
	dense3_2 = Dropout(0.5)(dense3_1)

	#output layer: class scores through a softmax activation

	dense3_3 = Dense(nb_classes, name = 'dense3_3')(dense3_2)
	prediction = Activation('softmax', name = 'softmax')(dense3_3)


	#finally define the model
	AlexNet = Model(inputs = inputs, outputs = prediction)
	if w_path:
		#load saved weights if a path is given
		AlexNet.load_weights(w_path)

	return AlexNet

I wrapped the model definition in a function so that it can be called whenever it is needed.

Running the statements

AlexNet = alex_net()
AlexNet.summary()

prints the details of the model.

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input (InputLayer)              (None, 224, 224, 3)  0
__________________________________________________________________________________________________
conv1_2 (Conv2D)                (None, 54, 54, 48)   17472       input[0][0]
__________________________________________________________________________________________________
conv1_1 (Conv2D)                (None, 54, 54, 48)   17472       input[0][0]
__________________________________________________________________________________________________
pool1_2 (MaxPooling2D)          (None, 26, 26, 48)   0           conv1_2[0][0]
__________________________________________________________________________________________________
pool1_1 (MaxPooling2D)          (None, 26, 26, 48)   0           conv1_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D)               (None, 26, 26, 128)  153728      pool1_2[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D)               (None, 26, 26, 128)  153728      pool1_1[0][0]
__________________________________________________________________________________________________
pool2_2 (MaxPooling2D)          (None, 12, 12, 128)  0           conv2d_2[0][0]
__________________________________________________________________________________________________
pool2_1 (MaxPooling2D)          (None, 12, 12, 128)  0           conv2d_1[0][0]
__________________________________________________________________________________________________
concatenate_1 (Concatenate)     (None, 24, 12, 128)  0           pool2_2[0][0]
                                                                 pool2_1[0][0]
__________________________________________________________________________________________________
conv3_1 (Conv2D)                (None, 24, 12, 192)  221376      concatenate_1[0][0]
__________________________________________________________________________________________________
conv3_2 (Conv2D)                (None, 24, 12, 193)  222529      concatenate_1[0][0]
__________________________________________________________________________________________________
conv4_1 (Conv2D)                (None, 24, 12, 192)  331968      conv3_1[0][0]
__________________________________________________________________________________________________
conv4_2 (Conv2D)                (None, 24, 12, 192)  333696      conv3_2[0][0]
__________________________________________________________________________________________________
conv5_1 (Conv2D)                (None, 24, 12, 128)  221312      conv4_1[0][0]
__________________________________________________________________________________________________
conv5_2 (Conv2D)                (None, 24, 12, 128)  221312      conv4_2[0][0]
__________________________________________________________________________________________________
pool5_1 (MaxPooling2D)          (None, 11, 5, 128)   0           conv5_1[0][0]
__________________________________________________________________________________________________
pool5_2 (MaxPooling2D)          (None, 11, 5, 128)   0           conv5_2[0][0]
__________________________________________________________________________________________________
concatenate_2 (Concatenate)     (None, 22, 5, 128)   0           pool5_1[0][0]
                                                                 pool5_2[0][0]
__________________________________________________________________________________________________
flatten (Flatten)               (None, 14080)        0           concatenate_2[0][0]
__________________________________________________________________________________________________
dense2_1 (Dense)                (None, 4096)         57675776    flatten[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 4096)         0           dense2_1[0][0]
__________________________________________________________________________________________________
dense3_1 (Dense)                (None, 4096)         16781312    dropout_1[0][0]
__________________________________________________________________________________________________
dropout_2 (Dropout)             (None, 4096)         0           dense3_1[0][0]
__________________________________________________________________________________________________
dense3_3 (Dense)                (None, 2)            8194        dropout_2[0][0]
__________________________________________________________________________________________________
softmax (Activation)            (None, 2)            0           dense3_3[0][0]
==================================================================================================
Total params: 76,359,875
Trainable params: 76,359,875
Non-trainable params: 0
__________________________________________________________________________________________________


This completes the model definition.
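If you would rather see a diagram of the two-branch structure than the text summary, Keras can also draw the layer graph (optional; this assumes the pydot and graphviz packages are installed):

from keras.utils import plot_model

AlexNet = alex_net()
#write a picture of the layer graph, including output shapes, to alexnet.png
plot_model(AlexNet, to_file = 'alexnet.png', show_shapes = True)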

4. Instantiate the model, compile it, and start training

With the model defined, compiling and training take just a few statements.
Code snippet:

"""
编译和训练这个模型, 并保存模型。
"""
AlexNet = alex_net()

#检查保存断点文件夹是否存在,没有的话就创建一个

if not os.path.exists('alex_net_checkpoints'):
	os.mkdir('alex_net_checkpoints')
	
#优化使用随机梯度下降
sgd = SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True)
#编译网络
AlexNet.compile(loss = 'categorical_crossentropy', optimizer = sgd, metrics = ['accuracy'])
#保存最优模型
checkpoint = ModelCheckpoint(monitor = 'val_acc', 
							filepath = "weights.best.hdf5",
							verbose = 1,
							mode = 'max',
							save_best_only = True)
							
							
#开始训练网络
AlexNet.fit(x_train, y_train, 
			batch_size = 64, 
			epochs = 5, 
			verbose = 1, 
			validation_data = (x_test, y_test),
			callbacks = [checkpoint]
			)
#打印测试集的精度和损失				
score = AlexNet.evaluate(x_test, y_test, verbose = 0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])

This code starts training and saves the best model as an hdf5 file (the weight file is later loaded to make predictions).
Training results:

Train on 500 samples, validate on 200 samples
Epoch 1/5
2019-05-27 16:46:15.490164: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
500/500 [==============================] - 117s 234ms/step - loss: 0.6915 - acc: 0.5100 - val_loss: 0.6922 - val_acc: 0.5050

Epoch 00001: val_acc improved from -inf to 0.50500, saving model to weights.best.hdf5
Epoch 2/5
500/500 [==============================] - 115s 229ms/step - loss: 0.6951 - acc: 0.4940 - val_loss: 0.6907 - val_acc: 0.5000

Epoch 00002: val_acc did not improve from 0.50500
Epoch 3/5
500/500 [==============================] - 114s 228ms/step - loss: 0.6932 - acc: 0.5280 - val_loss: 0.6896 - val_acc: 0.5200

Epoch 00003: val_acc improved from 0.50500 to 0.52000, saving model to weights.best.hdf5
Epoch 4/5
500/500 [==============================] - 113s 227ms/step - loss: 0.6913 - acc: 0.5540 - val_loss: 0.6934 - val_acc: 0.5000

Epoch 00004: val_acc did not improve from 0.52000
Epoch 5/5
500/500 [==============================] - 117s 235ms/step - loss: 0.6975 - acc: 0.5080 - val_loss: 0.6887 - val_acc: 0.6300

Epoch 00005: val_acc improved from 0.52000 to 0.63000, saving model to weights.best.hdf5
Test loss: 0.6887068557739258
Test accuracy: 0.63

To save time, I reduced the number of epochs to 5.
Because our dataset is small and the number of epochs is low (80 would be more usual), the results are mediocre, but the experimental procedure is the same. If you want to keep and plot the training curves, see the sketch below.
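fit() returns a History object whose history dict holds the per-epoch metrics ('acc' and 'val_acc' in this Keras version, 'accuracy' in newer ones). A minimal sketch, assuming matplotlib is installed:

import matplotlib.pyplot as plt

history = AlexNet.fit(x_train, y_train,
                      batch_size = 64,
                      epochs = 5,
                      verbose = 1,
                      validation_data = (x_test, y_test),
                      callbacks = [checkpoint])

#plot the training and validation accuracy recorded at the end of each epoch
plt.plot(history.history['acc'], label = 'train acc')
plt.plot(history.history['val_acc'], label = 'val acc')
plt.xlabel('epoch')
plt.legend()
plt.savefig('training_curves.png')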

Complete training code:

from keras import Model
from keras.callbacks import ModelCheckpoint
from keras.utils import to_categorical
from keras.layers import Flatten, Dense, Input
from keras.layers import Convolution2D, MaxPooling2D
from keras.preprocessing import image
from keras.layers.core import Dense, Dropout, Activation
from keras.layers import concatenate
import pandas
from keras.optimizers import SGD,Adam,RMSprop
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import numpy as np
import cv2
import os


IMG_CHANNELS=3 #RGB images have three channels: R, G, B
#weight_decay = 0.0005 
IMG_ROWS=224  #image height in pixels
IMG_COLS=224  #image width in pixels
BATCH_SIZE=64 #batch size
NB_EPOCH=10   #number of epochs
NB_CLASSES=2  #number of classes: cat and dog
VERBOSE=1
#VALIDATION_SPLIT=0.2
#OPTIM=RMSprop()
x_test=np.empty((200,IMG_ROWS,IMG_COLS,3),np.float16)  #test set, 200 images (200, 224, 224, 3)
x_train=np.empty((500,IMG_ROWS,IMG_COLS,3),np.float16)  #training set, 500 images (500, 224, 224, 3)
train_data=np.zeros(500)  #training labels
test_data=np.zeros(200)  #test labels
#load the training samples
for i in range(500):
    if i<250:
        #load 250 cat images
        train_data[i]=1  #the label for cat is 1
        imagepath = "D:/python_pro/deep_learning_al/data/train/cats/cat." + str(i)+ ".jpg"  #path of the cat image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1,(IMG_ROWS,IMG_COLS))  #resize to (224, 224)
        #plt.imshow(image1)
        #plt.show()
        x_train[i,:,:,:]=image1   #training matrix, 4-D (500, 224, 224, 3)

    else:
        #load 250 dog images
        train_data[i]=0  #the label for dog is 0
        imagepath = "D:/python_pro/deep_learning_al/data/train/dogs/dog." + str(i - 250)+ ".jpg"  #path of the dog image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1, (IMG_ROWS, IMG_COLS))  #resize to (224, 224)
        x_train[i, :, :, :] = image1  #training matrix, 4-D (500, 224, 224, 3)
y_train=to_categorical(train_data)  #one-hot training labels (500, 2)
x_train=np.array(x_train)  #full training matrix
#load the test samples
for i in range(200):
    if i<100:
        #load 100 cat images
        test_data[i]=1  #the label for cat is 1
        imagepath =  "D:/python_pro/deep_learning_al/data/test/cats/cat."+str(i+250)+'.jpg'  #path of the cat image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1,(IMG_ROWS,IMG_COLS))  #resize to (224, 224)
        x_test[i,:,:,:]=image1

    else:
        #load 100 dog images
        test_data[i]=0  #the label for dog is 0
        imagepath=  "D:/python_pro/deep_learning_al/data/test/dogs/dog."+str(i+250-100)+'.jpg'  #path of the dog image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1, (IMG_ROWS, IMG_COLS))  #resize to (224, 224)
        x_test[i, :, :, :] = image1
y_test=to_categorical(test_data)  #one-hot test labels (200, 2)
#print(sys.getsizeof(x_test),sys.getsizeof(y_test))
x_test=np.array(x_test)  #full test matrix (200, 224, 224, 3)
#normalization
#x_train.shape = (500, 224, 224, 3)
#x_test.shape = (200, 224, 224, 3)
x_train=x_train/255  #pixels are 8-bit (0-255), so dividing by 255 scales them to 0-1
x_test = x_test/255

print(x_train.shape,x_test.shape,y_train.shape,y_test.shape)
# #data augmentation (alternative approach, commented out)

# batch_size = 16

# train_datagen = ImageDataGenerator(
        # rescale=1./255,
        # shear_range=0.2,
        # zoom_range=0.2,
        # horizontal_flip=True)

# test_datagen = ImageDataGenerator(rescale=1./255)

# train_generator = train_datagen.flow_from_directory(
        # 'D:\python项目\deep_learning_al\train1', 
        # target_size=(224, 224),  
        # batch_size=batch_size,
        # class_mode='categorical')

# validation_generator = test_datagen.flow_from_directory(
        # 'D:\python项目\deep_learning_al\validation',
        # target_size=(224, 224),
        # batch_size=batch_size,
        # class_mode='categorical')	
		
		
		
input_shape = (224, 224, 3)

"""
Build the AlexNet network model
"""

#number of classes
nb_classes = 2
def alex_net(w_path = None):
	#input images are (224, 224, 3)
	input_shape = (224, 224, 3)
	#input layer
	inputs = Input(shape = input_shape, name = 'input')

	#layer 1: two parallel convolutions followed by two pooling operations

	conv1_1 = Convolution2D(48, (11, 11), strides = (4, 4), activation = 'relu', name = 'conv1_1')(inputs)
	conv1_2 = Convolution2D(48, (11, 11), strides = (4, 4), activation = 'relu', name = 'conv1_2')(inputs)

	pool1_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_1')(conv1_1)
	pool1_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_2')(conv1_2)

	#layer 2: convolve the pooled feature maps from layer 1, then pool again

	conv2_1 = Convolution2D(128, (5, 5), activation = 'relu', padding = 'same')(pool1_1)
	conv2_2 = Convolution2D(128, (5, 5), activation = 'relu', padding = 'same')(pool1_2)

	pool2_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_1')(conv2_1)
	pool2_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_2')(conv2_2)

	#merge layer: combine the two branches before layer 3 (axis = 1 concatenates along the height dimension of these channels-last tensors)
	merge1 = concatenate([pool2_2, pool2_1], axis = 1)

	#layer 3: two convolutions

	conv3_1 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv3_1', padding = 'same')(merge1)
	conv3_2 = Convolution2D(193, (3, 3), activation = 'relu', name = 'conv3_2', padding = 'same')(merge1)

	#layer 4: two convolutions
	conv4_1 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv4_1', padding = 'same')(conv3_1)
	conv4_2 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv4_2', padding = 'same')(conv3_2)

	#layer 5: two convolutions followed by two pooling operations
	conv5_1 = Convolution2D(128, (3, 3), activation = 'relu', name = 'conv5_1', padding = 'same')(conv4_1)
	conv5_2 = Convolution2D(128, (3, 3), activation = 'relu', name = 'conv5_2', padding = 'same')(conv4_2)

	pool5_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_1')(conv5_1)
	pool5_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_2')(conv5_2)

	#merge layer: combine the two branches again before the fully connected layers
	merge2 = concatenate([pool5_1, pool5_2], axis = 1)

	#flatten the multi-dimensional feature maps into a one-dimensional vector
	dense1 = Flatten(name = 'flatten')(merge2)

	#layers 6 and 7: two 4096-unit fully connected layers, with dropout in between to reduce overfitting
	dense2_1 = Dense(4096, activation = 'relu', name = 'dense2_1')(dense1)
	dense2_2 = Dropout(0.5)(dense2_1)

	dense3_1 = Dense(4096, activation = 'relu', name = 'dense3_1')(dense2_2)
	dense3_2 = Dropout(0.5)(dense3_1)

	#output layer: class scores through a softmax activation

	dense3_3 = Dense(nb_classes, name = 'dense3_3')(dense3_2)
	prediction = Activation('softmax', name = 'softmax')(dense3_3)


	#finally define the model
	AlexNet = Model(inputs = inputs, outputs = prediction)
	if w_path:
		#load saved weights if a path is given
		AlexNet.load_weights(w_path)

	return AlexNet
#AlexNet.summary()

"""
Compile and train the model
"""
AlexNet = alex_net()

#check whether the checkpoint folder exists; create it if it does not

if not os.path.exists('alex_net_checkpoints'):
	os.mkdir('alex_net_checkpoints')
	
#optimizer: stochastic gradient descent
sgd = SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True)
AlexNet.compile(loss = 'categorical_crossentropy', optimizer = sgd, metrics = ['accuracy'])

checkpoint = ModelCheckpoint(monitor = 'val_acc', 
							filepath = "weights.best.hdf5",
							verbose = 1,
							mode = 'max',
							save_best_only = True)
							
							
#start training the network
AlexNet.fit(x_train, y_train, 
			batch_size = 64, 
			epochs = 5, 
			verbose = 1, 
			validation_data = (x_test, y_test),
			callbacks = [checkpoint]
			)
				
score = AlexNet.evaluate(x_test, y_test, verbose = 0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
# #train the model; the training history is stored in history_callback
# history_callback = AlexNet.fit_generator(
										# train_generator,
										# steps_per_epoch = 2000,
										# epochs = 80,
										# validation_data = validation_generator,
										# validation_steps = 800
										# )
										
# #after training, save the training history and the model weights
# pandas.DataFrame(history_callback.history).to_csv('.\AlexNet_model.csv')
# AlexNet.save_weights('.\AlexNet_model.h5')
	

5. Prediction

  • Load the weight file and the image to be predicted, then run the prediction.
  • Simply load the saved model and run one forward pass on the image to obtain the result.
    Code snippet:
#make predictions with the model
AlexNet.load_weights('D:/python_pro/deep_learning_al/weights.best.hdf5')	#load the saved weight file
img_path =  "D:/python_pro/deep_learning_al/data/80.jpg"  #path of the image to predict
img = image.load_img(img_path, target_size = input_shape[0:2]) #load the image, resized to (224, 224)
x = image.img_to_array(img) #convert it to an array of shape (224, 224, 3)
x = np.expand_dims(x, axis = 0) #expand the dimensions to (1, 224, 224, 3)
x = x.reshape((-1, ) + input_shape) / 255 #reshape to (1, 224, 224, 3) and normalize to 0-1

pres = AlexNet.predict(x)
print(pres)

Prediction result:

predict.py:86: UserWarning: Update your `Model` call to the Keras 2 API: `Model(outputs=Tensor("so..., inputs=Tensor("in...)`
  AlexNet = Model(input = inputs, outputs = prediction)
2019-05-27 17:25:24.061630: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
[[0.5057532 0.4942468]]

80.jpg is a dog; index 0 stands for dog and index 1 for cat, so the model just barely recognizes it as a dog… The accuracy is poor, but the workflow is the point! Changing the model, enlarging the dataset, adding epochs, changing the loss or the optimizer, adding dropout, and so on can all give much better results; tuning parameters and choosing models is largely a matter of experience. A small snippet for turning the softmax output into a readable label follows.
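To turn the raw softmax output into a readable label, take the argmax of the prediction; with the label convention used above (dog = 0, cat = 1) a small helper could look like this:

class_names = ['dog', 'cat']  #index 0 was the dog label, index 1 the cat label
idx = int(np.argmax(pres, axis = 1)[0])
print('predicted: %s (probability %.3f)' % (class_names[idx], pres[0][idx]))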

Complete prediction code:

from keras import Model
from keras.callbacks import ModelCheckpoint
from keras.utils import to_categorical
from keras.layers import Flatten, Dense, Input
from keras.layers import Convolution2D, MaxPooling2D
from keras.preprocessing import image
from keras.layers.core import Dense, Dropout, Activation
from keras.layers import concatenate
import pandas
from keras.optimizers import SGD,Adam,RMSprop
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import numpy as np
import cv2
import os

input_shape = (224, 224, 3)

"""
Build the AlexNet network model
"""

#number of classes
nb_classes = 2
def alex_net(w_path = None):
	#input images are (224, 224, 3)
	input_shape = (224, 224, 3)
	#input layer
	inputs = Input(shape = input_shape, name = 'input')

	#layer 1: two parallel convolutions followed by two pooling operations

	conv1_1 = Convolution2D(48, (11, 11), strides = (4, 4), activation = 'relu', name = 'conv1_1')(inputs)
	conv1_2 = Convolution2D(48, (11, 11), strides = (4, 4), activation = 'relu', name = 'conv1_2')(inputs)

	pool1_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_1')(conv1_1)
	pool1_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_2')(conv1_2)

	#layer 2: convolve the pooled feature maps from layer 1, then pool again

	conv2_1 = Convolution2D(128, (5, 5), activation = 'relu', padding = 'same')(pool1_1)
	conv2_2 = Convolution2D(128, (5, 5), activation = 'relu', padding = 'same')(pool1_2)

	pool2_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_1')(conv2_1)
	pool2_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_2')(conv2_2)

	#merge layer: combine the two branches before layer 3 (axis = 1 concatenates along the height dimension of these channels-last tensors)
	merge1 = concatenate([pool2_2, pool2_1], axis = 1)

	#layer 3: two convolutions

	conv3_1 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv3_1', padding = 'same')(merge1)
	conv3_2 = Convolution2D(193, (3, 3), activation = 'relu', name = 'conv3_2', padding = 'same')(merge1)

	#layer 4: two convolutions
	conv4_1 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv4_1', padding = 'same')(conv3_1)
	conv4_2 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv4_2', padding = 'same')(conv3_2)

	#layer 5: two convolutions followed by two pooling operations
	conv5_1 = Convolution2D(128, (3, 3), activation = 'relu', name = 'conv5_1', padding = 'same')(conv4_1)
	conv5_2 = Convolution2D(128, (3, 3), activation = 'relu', name = 'conv5_2', padding = 'same')(conv4_2)

	pool5_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_1')(conv5_1)
	pool5_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_2')(conv5_2)

	#merge layer: combine the two branches again before the fully connected layers
	merge2 = concatenate([pool5_1, pool5_2], axis = 1)

	#flatten the multi-dimensional feature maps into a one-dimensional vector
	dense1 = Flatten(name = 'flatten')(merge2)

	#layers 6 and 7: two 4096-unit fully connected layers, with dropout in between to reduce overfitting
	dense2_1 = Dense(4096, activation = 'relu', name = 'dense2_1')(dense1)
	dense2_2 = Dropout(0.5)(dense2_1)

	dense3_1 = Dense(4096, activation = 'relu', name = 'dense3_1')(dense2_2)
	dense3_2 = Dropout(0.5)(dense3_1)

	#output layer: class scores through a softmax activation

	dense3_3 = Dense(nb_classes, name = 'dense3_3')(dense3_2)
	prediction = Activation('softmax', name = 'softmax')(dense3_3)


	#finally define the model
	AlexNet = Model(inputs = inputs, outputs = prediction)
	if w_path:
		#load saved weights if a path is given
		AlexNet.load_weights(w_path)

	return AlexNet
AlexNet = alex_net()
#optimizer: stochastic gradient descent
sgd = SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True)
AlexNet.compile(loss = 'categorical_crossentropy', optimizer = sgd, metrics = ['accuracy'])

#make predictions with the model
AlexNet.load_weights('D:/python_pro/deep_learning_al/weights.best.hdf5')	#load the saved weight file
img_path =  "D:/python_pro/deep_learning_al/data/80.jpg"  #path of the image to predict
img = image.load_img(img_path, target_size = input_shape[0:2]) #load the image, resized to (224, 224)
x = image.img_to_array(img) #convert it to an array of shape (224, 224, 3)
x = np.expand_dims(x, axis = 0) #expand the dimensions to (1, 224, 224, 3)
x = x.reshape((-1, ) + input_shape) / 255 #reshape to (1, 224, 224, 3) and normalize to 0-1

pres = AlexNet.predict(x)
print(pres)

6. Fine-tuning the network

The training accuracy was mediocre, so we can consider fine-tuning the network; in this experiment I only changed the learning rate used for different layers.
Code snippet 1: the data generator

datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)

datagen.fit(x_train)

Code snippet 2:

#fine-tune the network
#layers to be retrained, identified by layer name
layers = ['dense3_3', 'dense3_1', 'dense2_2', 'pool5_1', 
			'conv4_1', 'conv3_1', 'conv2_1', 'conv1_1'
			
		]
#number of training rounds for each retrained layer
epochs = [1, 1, 1, 1, 1, 1,1, 1]

#SGD learning rate used for each retrained layer
lr = [1e-2, 1e-3, 1e-4, 1e-4,  1e-4,  1e-4,  1e-4, 1e-4]

#iterate over the layers that need to be retrained
for i, layer_name in enumerate(layers):
	#mark only the selected layer as trainable and freeze all the others
	for layer in AlexNet.layers:
		layer.trainable = (layer.name == layer_name)

	#compile the network with this layer's learning rate
	sgd = SGD(lr = lr[i], decay = 1e-6, momentum = 0.9, nesterov = True)
	AlexNet.compile(loss = 'categorical_crossentropy', optimizer = sgd, metrics = ['accuracy'])


	#train this layer's weights, repeating epochs[i] times
	for epoch in range(epochs[i]):
	# history = AlexNet.fit_generator(
									# x_train,
									# #steps_per_epoch = None,
									# epochs = 1,
									# validation_data = x_test,
									# #validation_steps = None
									# )
									
		AlexNet.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
						steps_per_epoch=len(x_train) / 32, epochs=2)
									
#save the fine-tuned weights
AlexNet.save_weights('weights1.h5')
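Before each compile it can be worth confirming that the freezing logic did what you expected; a quick, purely diagnostic check is to print the trainable flag of every layer:

for layer in AlexNet.layers:
    print(layer.name, layer.trainable)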

Complete code:

from keras import Model
from keras.callbacks import ModelCheckpoint
from keras.utils import to_categorical
from keras.layers import Flatten, Dense, Input
from keras.layers import Convolution2D, MaxPooling2D
from keras.preprocessing import image
from keras.layers.core import Dense, Dropout, Activation
from keras.layers import concatenate
import pandas
from keras.optimizers import SGD,Adam,RMSprop
from keras.optimizers import SGD
from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
import numpy as np
import cv2
import os


IMG_CHANNELS=3 #RGB images have three channels: R, G, B
#weight_decay = 0.0005 
IMG_ROWS=224  #image height in pixels
IMG_COLS=224  #image width in pixels
BATCH_SIZE=64 #batch size
NB_EPOCH=10   #number of epochs
NB_CLASSES=2  #number of classes: cat and dog
VERBOSE=1
#VALIDATION_SPLIT=0.2
#OPTIM=RMSprop()
x_test=np.empty((200,IMG_ROWS,IMG_COLS,3),np.float16)  #test set, 200 images (200, 224, 224, 3)
x_train=np.empty((500,IMG_ROWS,IMG_COLS,3),np.float16)  #training set, 500 images (500, 224, 224, 3)
train_data=np.zeros(500)  #training labels
test_data=np.zeros(200)  #test labels
#load the training samples
for i in range(500):
    if i<250:
        #load 250 cat images
        train_data[i]=1  #the label for cat is 1
        imagepath = "D:/python_pro/deep_learning_al/data/train/cats/cat." + str(i)+ ".jpg"  #path of the cat image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1,(IMG_ROWS,IMG_COLS))  #resize to (224, 224)
        #plt.imshow(image1)
        #plt.show()
        x_train[i,:,:,:]=image1   #training matrix, 4-D (500, 224, 224, 3)

    else:
        #load 250 dog images
        train_data[i]=0  #the label for dog is 0
        imagepath = "D:/python_pro/deep_learning_al/data/train/dogs/dog." + str(i - 250)+ ".jpg"  #path of the dog image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1, (IMG_ROWS, IMG_COLS))  #resize to (224, 224)
        x_train[i, :, :, :] = image1  #training matrix, 4-D (500, 224, 224, 3)
y_train=to_categorical(train_data)  #one-hot training labels (500, 2)
x_train=np.array(x_train)  #full training matrix
#load the test samples
for i in range(200):
    if i<100:
        #load 100 cat images
        test_data[i]=1  #the label for cat is 1
        imagepath =  "D:/python_pro/deep_learning_al/data/test/cats/cat."+str(i+250)+'.jpg'  #path of the cat image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1,(IMG_ROWS,IMG_COLS))  #resize to (224, 224)
        x_test[i,:,:,:]=image1

    else:
        #load 100 dog images
        test_data[i]=0  #the label for dog is 0
        imagepath=  "D:/python_pro/deep_learning_al/data/test/dogs/dog."+str(i+250-100)+'.jpg'  #path of the dog image
        image1 = cv2.imread(imagepath)  #read the image file
        image1 = cv2.cvtColor(image1, cv2.COLOR_BGR2RGB)  #convert BGR to RGB
        image1 = cv2.resize(image1, (IMG_ROWS, IMG_COLS))  #resize to (224, 224)
        x_test[i, :, :, :] = image1
y_test=to_categorical(test_data)  #one-hot test labels (200, 2)
#print(sys.getsizeof(x_test),sys.getsizeof(y_test))
x_test=np.array(x_test)  #full test matrix (200, 224, 224, 3)
#normalization
#x_train.shape = (500, 224, 224, 3)
#x_test.shape = (200, 224, 224, 3)
x_train=x_train/255  #pixels are 8-bit (0-255), so dividing by 255 scales them to 0-1
x_test = x_test/255

print(x_train.shape,x_test.shape,y_train.shape,y_test.shape)

datagen = ImageDataGenerator(
    featurewise_center=True,
    featurewise_std_normalization=True,
    rotation_range=20,
    width_shift_range=0.2,
    height_shift_range=0.2,
    horizontal_flip=True)

datagen.fit(x_train)


# #data augmentation (alternative approach, commented out)

# batch_size = 16

# train_datagen = ImageDataGenerator(
        # rescale=1./255,
        # shear_range=0.2,
        # zoom_range=0.2,
        # horizontal_flip=True)

# test_datagen = ImageDataGenerator(rescale=1./255)

# train_generator = train_datagen.flow_from_directory(
        # 'D:\python项目\deep_learning_al\train1', 
        # target_size=(224, 224),  
        # batch_size=batch_size,
        # class_mode='categorical')

# validation_generator = test_datagen.flow_from_directory(
        # 'D:\python项目\deep_learning_al\validation',
        # target_size=(224, 224),
        # batch_size=batch_size,
        # class_mode='categorical')	
		
		
		
input_shape = (224, 224, 3)

"""
Build the AlexNet network model
"""

#number of classes
nb_classes = 2
def alex_net(w_path = None):
	#input images are (224, 224, 3)
	input_shape = (224, 224, 3)
	#input layer
	inputs = Input(shape = input_shape, name = 'input')

	#layer 1: two parallel convolutions followed by two pooling operations

	conv1_1 = Convolution2D(48, (11, 11), strides = (4, 4), activation = 'relu', name = 'conv1_1')(inputs)
	conv1_2 = Convolution2D(48, (11, 11), strides = (4, 4), activation = 'relu', name = 'conv1_2')(inputs)

	pool1_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_1')(conv1_1)
	pool1_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool1_2')(conv1_2)

	#layer 2: convolve the pooled feature maps from layer 1, then pool again

	conv2_1 = Convolution2D(128, (5, 5), activation = 'relu', padding = 'same')(pool1_1)
	conv2_2 = Convolution2D(128, (5, 5), activation = 'relu', padding = 'same')(pool1_2)

	pool2_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_1')(conv2_1)
	pool2_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool2_2')(conv2_2)

	#merge layer: combine the two branches before layer 3 (axis = 1 concatenates along the height dimension of these channels-last tensors)
	merge1 = concatenate([pool2_2, pool2_1], axis = 1)

	#layer 3: two convolutions

	conv3_1 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv3_1', padding = 'same')(merge1)
	conv3_2 = Convolution2D(193, (3, 3), activation = 'relu', name = 'conv3_2', padding = 'same')(merge1)

	#layer 4: two convolutions
	conv4_1 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv4_1', padding = 'same')(conv3_1)
	conv4_2 = Convolution2D(192, (3, 3), activation = 'relu', name = 'conv4_2', padding = 'same')(conv3_2)

	#layer 5: two convolutions followed by two pooling operations
	conv5_1 = Convolution2D(128, (3, 3), activation = 'relu', name = 'conv5_1', padding = 'same')(conv4_1)
	conv5_2 = Convolution2D(128, (3, 3), activation = 'relu', name = 'conv5_2', padding = 'same')(conv4_2)

	pool5_1 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_1')(conv5_1)
	pool5_2 = MaxPooling2D((3, 3), strides = (2, 2), name = 'pool5_2')(conv5_2)

	#merge layer: combine the two branches again before the fully connected layers
	merge2 = concatenate([pool5_1, pool5_2], axis = 1)

	#flatten the multi-dimensional feature maps into a one-dimensional vector
	dense1 = Flatten(name = 'flatten')(merge2)

	#layers 6 and 7: two 4096-unit fully connected layers, with dropout in between to reduce overfitting
	dense2_1 = Dense(4096, activation = 'relu', name = 'dense2_1')(dense1)
	dense2_2 = Dropout(0.5)(dense2_1)

	dense3_1 = Dense(4096, activation = 'relu', name = 'dense3_1')(dense2_2)
	dense3_2 = Dropout(0.5)(dense3_1)

	#output layer: class scores through a softmax activation

	dense3_3 = Dense(nb_classes, name = 'dense3_3')(dense3_2)
	prediction = Activation('softmax', name = 'softmax')(dense3_3)


	#finally define the model
	AlexNet = Model(inputs = inputs, outputs = prediction)
	if w_path:
		#load saved weights if a path is given
		AlexNet.load_weights(w_path)

	return AlexNet
#AlexNet.summary()

"""
Compile and train the model
"""
AlexNet = alex_net()

# #check whether the checkpoint folder exists; create it if it does not

# if not os.path.exists('alex_net_checkpoints'):
	# os.mkdir('alex_net_checkpoints')
	
#fine-tune the network
#layers to be retrained, identified by layer name
layers = ['dense3_3', 'dense3_1', 'dense2_2', 'pool5_1', 
			'conv4_1', 'conv3_1', 'conv2_1', 'conv1_1'
			
		]
#number of training rounds for each retrained layer
epochs = [1, 1, 1, 1, 1, 1,1, 1]

#SGD learning rate used for each retrained layer
lr = [1e-2, 1e-3, 1e-4, 1e-4,  1e-4,  1e-4,  1e-4, 1e-4]

#iterate over the layers that need to be retrained
for i, layer_name in enumerate(layers):
	#mark only the selected layer as trainable and freeze all the others
	for layer in AlexNet.layers:
		layer.trainable = (layer.name == layer_name)

	#compile the network with this layer's learning rate
	sgd = SGD(lr = lr[i], decay = 1e-6, momentum = 0.9, nesterov = True)
	AlexNet.compile(loss = 'categorical_crossentropy', optimizer = sgd, metrics = ['accuracy'])


	#train this layer's weights, repeating epochs[i] times
	for epoch in range(epochs[i]):
	# history = AlexNet.fit_generator(
									# x_train,
									# #steps_per_epoch = None,
									# epochs = 1,
									# validation_data = x_test,
									# #validation_steps = None
									# )
									
		AlexNet.fit_generator(datagen.flow(x_train, y_train, batch_size=32),
						steps_per_epoch=len(x_train) / 32, epochs=2)
									
#save the fine-tuned weights
AlexNet.save_weights('weights1.h5')
# #optimizer: stochastic gradient descent
# sgd = SGD(lr = 0.01, decay = 1e-6, momentum = 0.9, nesterov = True)
# AlexNet.compile(loss = 'categorical_crossentropy', optimizer = sgd, metrics = ['accuracy'])

# checkpoint = ModelCheckpoint(monitor = 'val_acc', 
							# # filepath = "weights.best.hdf5",
							# # verbose = 1,
							# # mode = 'max',
							# # save_best_only = True)
							
							
# #start training the network
# AlexNet.fit(x_train, y_train, 
			# batch_size = 64, 
			# epochs = 5, 
			# verbose = 1, 
			# validation_data = (x_test, y_test),
			# callbacks = [checkpoint]
			# )
				
# score = AlexNet.evaluate(x_test, y_test, verbose = 0)
# print('Test loss:', score[0])
# print('Test accuracy:', score[1])
# # #train the model; the training history is stored in history_callback
# # history_callback = AlexNet.fit_generator(
										# train_generator,
										# steps_per_epoch = 2000,
										# epochs = 80,
										# validation_data = validation_generator,
										# validation_steps = 800
										# )
										
# #after training, save the training history and the model weights
# pandas.DataFrame(history_callback.history).to_csv('.\AlexNet_model.csv')
# AlexNet.save_weights('.\AlexNet_model.h5')

Fine-tuning results:

Epoch 1/2
2019-05-28 14:31:46.084488: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX AVX2
16/15 [==============================] - 43s 3s/step - loss: 0.6985 - acc: 0.4818
Epoch 2/2
16/15 [==============================] - 39s 2s/step - loss: 0.6902 - acc: 0.5282
Epoch 1/2
16/15 [==============================] - 42s 3s/step - loss: 0.6914 - acc: 0.5522
Epoch 2/2
16/15 [==============================] - 39s 2s/step - loss: 0.6975 - acc: 0.5308
Epoch 1/2
16/15 [==============================] - 42s 3s/step - loss: 0.6930 - acc: 0.5139
Epoch 2/2
16/15 [==============================] - 39s 2s/step - loss: 0.7016 - acc: 0.4972
Epoch 1/2
16/15 [==============================] - 42s 3s/step - loss: 0.7037 - acc: 0.4830
Epoch 2/2
16/15 [==============================] - 39s 2s/step - loss: 0.6950 - acc: 0.5229
Epoch 1/2
16/15 [==============================] - 42s 3s/step - loss: 0.6974 - acc: 0.5000
Epoch 2/2
16/15 [==============================] - 39s 2s/step - loss: 0.6979 - acc: 0.5237
Epoch 1/2
16/15 [==============================] - 41s 3s/step - loss: 0.6949 - acc: 0.5156
Epoch 2/2
16/15 [==============================] - 39s 2s/step - loss: 0.7011 - acc: 0.5000
Epoch 1/2
16/15 [==============================] - 42s 3s/step - loss: 0.6912 - acc: 0.4972
Epoch 2/2
16/15 [==============================] - 39s 2s/step - loss: 0.6988 - acc: 0.4813
Epoch 1/2
16/15 [==============================] - 42s 3s/step - loss: 0.6874 - acc: 0.5394
Epoch 2/2
16/15 [==============================] - 39s 2s/step - loss: 0.6955 - acc: 0.5193

After fine-tuning, the model weights are saved to a new file, and predictions are then made with the new weight file, as sketched below.
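A minimal sketch of reusing the fine-tuned weights with the prediction code from section 5 (weights1.h5 is the file saved by the fine-tuning script above, and x is prepared exactly as before):

AlexNet = alex_net()                 #rebuild the architecture
AlexNet.load_weights('weights1.h5')  #load the fine-tuned weights
pres = AlexNet.predict(x)
print(pres)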

No pre-trained model was used here; that is something I may come across later.

This post is mainly a summary of my own experiment workflow; I hope it is helpful to you if you are into deep learning. Thank you!
