Deep Learning, Week 4 -- Lesson 2: Keras and Building Residual Networks

Disclaimer

This article is based on the work of He Kuan.

Preface

This article covers:
1. Keras, a high-level neural-network framework that can run on top of several lower-level frameworks, including TensorFlow and CNTK.
2. Building a very deep convolutional network with residual blocks.

1. Getting Started with Keras -- Smile Recognition

1.1 Why use the Keras framework

Keras was created so that deep learning engineers can build and experiment with different models very quickly. It is a higher-level framework than TensorFlow, providing additional abstractions; most importantly, it lets you go from idea to working result with minimal delay. The trade-off is that Keras is more restrictive than the lower-level frameworks: some very complex models can be implemented in TensorFlow but not in Keras, although Keras handles many common model types without trouble.

1.2 Task description

Build an algorithm that uses pictures from a front-door camera to check whether the person is happy; the door opens only when the person is happy.

import numpy as np
from keras import layers
from keras.layers import Input, Dense, Activation, ZeroPadding2D, BatchNormalization, Flatten, Conv2D
from keras.layers import AveragePooling2D, MaxPooling2D, Dropout, GlobalMaxPooling2D, GlobalAveragePooling2D
from keras.models import Model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
import pydot
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
import kt_utils 

import keras.backend as K
K.set_image_data_format('channels_last')
import matplotlib.pyplot as plt
from matplotlib.pyplot import imshow

Load the dataset:

X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = kt_utils.load_dataset()
X_train  = X_train_orig/255
X_test  = X_test_orig/255

Y_train = Y_train_orig.T
Y_test = Y_test_orig.T

print(X_train.shape[0],X_test.shape[0],X_train.shape,Y_train.shape,X_test.shape,Y_test.shape)

Result:

600 150 (600, 64, 64, 3) (600, 1) (150, 64, 64, 3) (150, 1)

1.3 Building the model with Keras

Here is an example of a model:

def model(input_shape):
    """
    Model outline
    """
    
    X_input = Input(input_shape)
    
    X = ZeroPadding2D((3,3))(X_input)
    
    X = Conv2D(32,(7,7),strides = (1,1),name='conv0')(X)
    X = BatchNormalization(axis = 3,name='bn0')(X)
    X = Activation('relu')(X)
    
    X = MaxPooling2D((2,2),name="max_pool")(X)
    
    X = Flatten()(X)
    X = Dense(1,activation='sigmoid',name='fc')(X)
    
    model = Model(inputs = X_input,outputs = X,name='HappyModel')
    
    return model

**Note:** Keras handles variable names differently from the numpy and TensorFlow code we wrote before. Instead of creating a new variable at every forward-propagation step (X, Z1, A1, Z2, A2, ...) to pass values between layers, in Keras we overwrite X at every step; no per-layer result is stored, only the latest value. The one exception is X_input, which we keep separate because it is the input data and we need it in the final step where the model is created.
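The pattern can be sketched with plain Python callables standing in for layers (the lambdas below are made-up stand-ins, not Keras APIs):

```python
# Hypothetical stand-ins for Keras layers, illustrating the pattern above:
# X is overwritten at every step, intermediate values are not kept, and
# X_input stays untouched so it can still be passed to Model(...).
pad  = lambda x: x * 2    # stands in for ZeroPadding2D(...)
conv = lambda x: x + 3    # stands in for Conv2D(...)
relu = lambda x: max(x, 0)

X_input = 5
X = pad(X_input)   # X = 10
X = conv(X)        # X overwritten: 13
X = relu(X)        # X overwritten again: 13

print(X_input, X)  # X_input is still the original input: prints "5 13"
```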

def HappyModel(input_shape):
    """
    Implement a smile-detection model
    
    Arguments:
        input_shape - dimensions of the input data
    Returns:
        model - the created Keras model
    """
    X_input = Input(input_shape)
    
    X = ZeroPadding2D((3,3))(X_input)
    
    X = Conv2D(32,(7,7),strides = (1,1),name='conv0')(X)
    X = BatchNormalization(axis = 3,name='bn0')(X)
    X = Activation('relu')(X)
    
    X = MaxPooling2D((2,2),name="max_pool")(X)
    
    X = Flatten()(X)
    X = Dense(1,activation='sigmoid',name='fc')(X)
    
    model = Model(inputs = X_input,outputs = X,name='HappyModel')
    
    return model

Now that the model is designed, training and testing it takes these steps:

  • Create an instance of the model
  • Compile the model, for example: model.compile(optimizer = '...',loss='...',metrics=['accuracy'])
  • Train the model: model.fit(x=...,y=...,epochs=...,batch_size=...)
  • Evaluate the model: model.evaluate(x=...,y=...)

Test:

happy_model = HappyModel(X_train.shape[1:])
happy_model.compile('adam','binary_crossentropy',metrics=['accuracy'])
happy_model.fit(X_train,Y_train,epochs=10,batch_size=50)
preds = happy_model.evaluate(X_test,Y_test,batch_size=32,verbose=1,sample_weight=None)
print(preds[0])
print(preds[1])

Result:

Epoch 1/10
600/600 [==============================] - 8s 13ms/step - loss: 2.1831 - acc: 0.5567
Epoch 2/10
600/600 [==============================] - 8s 13ms/step - loss: 0.6674 - acc: 0.7700
Epoch 3/10
600/600 [==============================] - 9s 15ms/step - loss: 0.3431 - acc: 0.8383
Epoch 4/10
600/600 [==============================] - 8s 14ms/step - loss: 0.1949 - acc: 0.9133
Epoch 5/10
600/600 [==============================] - 8s 13ms/step - loss: 0.1515 - acc: 0.9433
Epoch 6/10
600/600 [==============================] - 8s 13ms/step - loss: 0.1541 - acc: 0.9433
Epoch 7/10
600/600 [==============================] - 8s 13ms/step - loss: 0.0998 - acc: 0.9700
Epoch 8/10
600/600 [==============================] - 8s 13ms/step - loss: 0.0950 - acc: 0.9717
Epoch 9/10
600/600 [==============================] - 8s 13ms/step - loss: 0.0777 - acc: 0.9750
Epoch 10/10
600/600 [==============================] - 8s 13ms/step - loss: 0.0733 - acc: 0.9783
150/150 [==============================] - 1s 6ms/step
0.4635606718063354
0.806666665871938

An accuracy above 75% is considered acceptable; if yours is below 75%, try changing the model, for example by adding CONV -> BATCHNORM -> RELU blocks such as:

X = Conv2D(32, (3, 3), strides = (1, 1), name = 'conv0')(X)
X = BatchNormalization(axis = 3, name = 'bn0')(X)
X = Activation('relu')(X)
  • You can add a max-pooling layer after each such block to reduce the height and width dimensions.
  • Change the optimizer (Adam is used here).
  • If training is hard to run and you hit out-of-memory problems, lower the batch_size (e.g. to 12).
  • Run more epochs, until you see good results.

1.4 Test with your own image

img_path = '1.png'
img = image.load_img(img_path,target_size=(64,64))
imshow(img)

x = image.img_to_array(img)
x = np.expand_dims(x,axis=0)
x = preprocess_input(x)

print(happy_model.predict(x))

Result:

[[1.]]
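The single sigmoid output is a probability, so turning a batch of predictions into open/closed-door decisions is just a threshold. A small sketch (the probabilities below are made up for illustration):

```python
import numpy as np

# model.predict returns sigmoid probabilities of shape (m, 1); threshold
# them at 0.5 to get a binary happy / not-happy decision for the door.
preds = np.array([[0.97], [0.12]])      # example probabilities (made up)
door_open = (preds > 0.5).astype(int)
print(door_open.ravel())                # prints [1 0]
```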

1.5 Some other useful functions

  • model.summary(): prints the details of each layer's output shape and parameter count
  • plot_model(): draws the model layout
happy_model.summary()

Result:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 64, 64, 3)         0         
_________________________________________________________________
zero_padding2d_1 (ZeroPaddin (None, 70, 70, 3)         0         
_________________________________________________________________
conv0 (Conv2D)               (None, 64, 64, 32)        4736      
_________________________________________________________________
bn0 (BatchNormalization)     (None, 64, 64, 32)        128       
_________________________________________________________________
activation_1 (Activation)    (None, 64, 64, 32)        0         
_________________________________________________________________
max_pool (MaxPooling2D)      (None, 32, 32, 32)        0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 32768)             0         
_________________________________________________________________
fc (Dense)                   (None, 1)                 32769     
=================================================================
Total params: 37,633
Trainable params: 37,569
Non-trainable params: 64
_________________________________________________________________
%matplotlib inline
plot_model(happy_model,to_file='happy_model.png')
SVG(model_to_dot(happy_model).create(prog='dot',format='svg'))


2. Building a Residual Network

2.1 Why use residual networks

In theory, deeper networks can represent more complex functions, but in practice very deep networks are hard to train, largely because the gradient signal vanishes (or explodes) as it propagates back through many layers. Residual networks were introduced to address this.
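The key idea is the skip connection: the block's input is added back before the final activation, a^[l+2] = g(z^[l+2] + a^[l]), so the identity mapping is always trivially available. A minimal NumPy sketch (the relu/residual_step helpers are illustrative, not library code):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_step(a_prev, W1, W2):
    # main path: two linear + ReLU layers (biases and BatchNorm omitted)
    z = W2 @ relu(W1 @ a_prev)
    # shortcut: add the input back before the final activation, so the
    # block only needs to learn a residual on top of the identity
    return relu(z + a_prev)

a = np.ones(4)
W = np.zeros((4, 4))            # even with all-zero weights...
print(residual_step(a, W, W))   # ...the identity passes through: [1. 1. 1. 1.]
```

This is why stacking many such blocks does not degrade training the way stacking plain layers does: a block can do no harm by defaulting to the identity.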

2.2 Building the residual network

In this part we will:

  • Implement the basic residual blocks
  • Put these blocks together to build and train a neural network for image classification.
import numpy as np
import tensorflow as tf

from keras import layers
from keras.layers import Input,Add,Dense,Activation,ZeroPadding2D,BatchNormalization,Flatten,Conv2D,AveragePooling2D,MaxPooling2D,GlobalMaxPooling2D
from keras.models import Model,load_model
from keras.preprocessing import image
from keras.utils import layer_utils
from keras.utils.data_utils import get_file
from keras.applications.imagenet_utils import preprocess_input
from keras.utils.vis_utils import model_to_dot
from keras.utils import plot_model
from keras.initializers import glorot_uniform

import pydot
from IPython.display import SVG
import scipy.misc
from matplotlib.pyplot import imshow
import keras.backend as K
K.set_image_data_format('channels_last')
K.set_learning_phase(1)

import resnets_utils

2.2.1 The identity block

The identity block is the standard block used in residual networks; it corresponds to the case where the input activation has the same dimensions as the output activation. In the figure from the original post, the upper curved path is the "shortcut" and the lower straight path is the main path; each step still contains Conv2D and ReLU, and to speed up training each step also normalizes the data with BatchNorm.
In practice, the skip connection skips over 3 hidden layers.
The steps are:
1. First part of the main path:

  • The first Conv2D has F_1 filters of size (1,1) with stride (1,1) and "valid" padding; it is named conv_name_base+'2a' and its weights are initialized with random seed 0.
  • The first BatchNorm normalizes along the channel axis; it is named bn_name_base+'2a'.
  • Then a ReLU activation, which has no name and no hyperparameters.

2. Second part of the main path:

  • The second Conv2D has F_2 filters of size (f,f) with stride (1,1) and "same" padding; it is named conv_name_base+'2b' and its weights are initialized with random seed 0.
  • The second BatchNorm normalizes along the channel axis; it is named bn_name_base+'2b'.
  • Then a ReLU activation, which has no name and no hyperparameters.

3. Third part of the main path:

  • The third Conv2D has F_3 filters of size (1,1) with stride (1,1) and "valid" padding; it is named conv_name_base+'2c' and its weights are initialized with random seed 0.
  • The third BatchNorm normalizes along the channel axis; it is named bn_name_base+'2c'.
  • Note that there is no ReLU here.

4. Final step:

  • Add the shortcut and the main-path output together.
  • Then apply a ReLU activation, which has no name and no hyperparameters.
def identity_block(X,f,filters,stage,block):
    """
    Implement the identity block of Figure 3
    
    Arguments:
        X - input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
        f - integer, specifying the window size of the middle CONV in the main path
        filters - list of integers, the numbers of filters in the CONV layers of the main path
        stage - integer, used together with block to name the layers by their position in the network
        block - string, used together with stage to name the layers by their position in the network
    Returns:
        X - output of the identity block, a tensor of shape (n_H, n_W, n_C)
    """
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base   = "bn"  + str(stage) + block + "_branch"
    
    F1,F2,F3 = filters
    
    X_shortcut = X
    
    X = Conv2D(filters = F1,kernel_size=(1,1),strides=(1,1),padding="valid",name=conv_name_base+"2a",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2a")(X)
    X = Activation("relu")(X)
    
    X = Conv2D(filters=F2,kernel_size=(f,f),strides=(1,1),padding="same",name=conv_name_base+"2b",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2b")(X)
    X = Activation("relu")(X)
    
    X = Conv2D(filters=F3,kernel_size=(1,1),strides=(1,1),padding="valid",name=conv_name_base+"2c",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2c")(X)
    
    X = Add()([X,X_shortcut])
    X = Activation("relu")(X)
    
    return X

Test:

tf.reset_default_graph()
with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float",[3,4,4,6])
    X = np.random.randn(3,4,4,6)
    A = identity_block(A_prev,f=2,filters=[2,4,6],stage=1,block="a")
    
    test.run(tf.global_variables_initializer())
    out = test.run([A],feed_dict={A_prev:X,K.learning_phase():0})
    print("out = " + str(out[0][1][1][0]))
    
    test.close()

Result:

out = [0.94822985 0.         1.1610144  2.747859   0.         1.36677   ]

2.2.2 The convolutional block

The convolutional block is the other type of residual block; it is used when the input and output dimensions do not match. It differs from the identity block above in that its shortcut path contains a Conv2D layer.
The steps are:
1. First part of the main path:

  • The first Conv2D has F_1 filters of size (1,1) with stride (s,s) and "valid" padding; it is named conv_name_base+'2a' and its weights are initialized with random seed 0.
  • The first BatchNorm normalizes along the channel axis; it is named bn_name_base+'2a'.
  • Then a ReLU activation, which has no name and no hyperparameters.

2. Second part of the main path:

  • The second Conv2D has F_2 filters of size (f,f) with stride (1,1) and "same" padding; it is named conv_name_base+'2b' and its weights are initialized with random seed 0.
  • The second BatchNorm normalizes along the channel axis; it is named bn_name_base+'2b'.
  • Then a ReLU activation, which has no name and no hyperparameters.

3. Third part of the main path:

  • The third Conv2D has F_3 filters of size (1,1) with stride (1,1) and "valid" padding; it is named conv_name_base+'2c' and its weights are initialized with random seed 0.
  • The third BatchNorm normalizes along the channel axis; it is named bn_name_base+'2c'.
  • There is no ReLU here.

4. Shortcut path:

  • Its Conv2D has F_3 filters of size (1,1) with stride (s,s) and "valid" padding; it is named conv_name_base+'1'.
  • Its BatchNorm normalizes along the channel axis; it is named bn_name_base+'1'.

5. Final step:

  • Add the shortcut and the main-path output together.
  • Then apply a ReLU activation.
def convolutional_block(X,f,filters,stage,block,s=2):
    """
    Implement the convolutional block of Figure 5
    
    Arguments:
        X - input tensor of shape (m, n_H_prev, n_W_prev, n_C_prev)
        f - integer, specifying the window size of the middle CONV in the main path
        filters - list of integers, the numbers of filters in the CONV layers of the main path
        stage - integer, used together with block to name the layers by their position in the network
        block - string, used together with stage to name the layers by their position in the network
        s - integer, the stride to use
    Returns:
        X - output of the convolutional block, a tensor of shape (n_H, n_W, n_C)
    """
    conv_name_base = "res" + str(stage) + block + "_branch"
    bn_name_base   = "bn"  + str(stage) + block + "_branch"
    
    F1,F2,F3 = filters
    
    X_shortcut = X
    
    X = Conv2D(filters = F1,kernel_size=(1,1),strides=(s,s),padding="valid",name=conv_name_base+"2a",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2a")(X)
    X = Activation("relu")(X)
    
    X = Conv2D(filters=F2,kernel_size=(f,f),strides=(1,1),padding="same",name=conv_name_base+"2b",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2b")(X)
    X = Activation("relu")(X)
    
    X = Conv2D(filters=F3,kernel_size=(1,1),strides=(1,1),padding="valid",name=conv_name_base+"2c",kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3,name=bn_name_base+"2c")(X)
    
    X_shortcut = Conv2D(filters = F3,kernel_size=(1,1),strides=(s,s),padding="valid",name=conv_name_base+"1",kernel_initializer=glorot_uniform(seed=0))(X_shortcut)
    X_shortcut = BatchNormalization(axis=3,name=bn_name_base+"1")(X_shortcut)
    
    
    X = Add()([X,X_shortcut])
    X = Activation("relu")(X)
    
    return X

Test:

tf.reset_default_graph()
with tf.Session() as test:
    np.random.seed(1)
    A_prev = tf.placeholder("float",[3,4,4,6])
    X = np.random.randn(3,4,4,6)
    A = convolutional_block(A_prev,f=2,filters=[2,4,6],stage=1,block="a")
    
    test.run(tf.global_variables_initializer())
    out = test.run([A],feed_dict={A_prev:X,K.learning_phase():0})
    print("out = " + str(out[0][1][1][0]))
    
    test.close()

Result:

out = [0.09018463 1.2348977  0.46822017 0.0367176  0.         0.65516603]

2.3 Building your first residual network (50 layers)

All the required residual blocks are now done. The figure in the original post describes the network in detail; in it, 'ID BLOCK' denotes the standard identity block, and 'ID BLOCK x3' means three identity blocks stacked together.
The details of this 50-layer network are:
1. Zero-pad the input with padding=(3,3)
2. Stage 1:

  • A convolutional layer with 64 filters of size (7,7) and stride (2,2), named 'conv1'
  • A BatchNorm layer that normalizes the channel axis of the input
  • A max-pooling layer with a (3,3) window and a (2,2) stride

3. Stage 2:

  • A convolutional block with filters=[64,64,256], f=3, s=1, block='a'
  • 2 identity blocks with filters=[64,64,256], f=3, block='b','c'

4. Stage 3:

  • A convolutional block with filters=[128,128,512], f=3, s=2, block='a'
  • 3 identity blocks with filters=[128,128,512], f=3, block='b','c','d'

5. Stage 4:

  • A convolutional block with filters=[256,256,1024], f=3, s=2, block='a'
  • 5 identity blocks with filters=[256,256,1024], f=3, block='b','c','d','e','f'

6. Stage 5:

  • A convolutional block with filters=[512,512,2048], f=3, s=2, block='a'
  • 2 identity blocks with filters=[512,512,2048], f=3, block='b','c'

7. An average-pooling layer with a (2,2) window, named 'avg_pool'
8. A flatten operation, which has no hyperparameters and no name
9. A fully connected (Dense) layer with a softmax activation, named "fc"+str(classes)

def ResNet50(input_shape=(64,64,3),classes=6):
    """
    Implement ResNet50:
    conv2D->batchnorm->relu->maxpool->convblock->idblock*2->convblock->idblock*3->convblock->idblock*5->convblock->idblock*2->avgpool->toplayer
    
    Arguments:
        input_shape - dimensions of the images in the dataset
        classes - integer, number of classes
    Returns:
        model - a Keras model
    """
    # Define the input as a tensor
    X_input = Input(input_shape)
    
    # Zero-padding
    X = ZeroPadding2D((3,3))(X_input)
    
    #stage1
    X = Conv2D(filters=64, kernel_size=(7,7), strides=(2,2), name="conv1",
               kernel_initializer=glorot_uniform(seed=0))(X)
    X = BatchNormalization(axis=3, name="bn_conv1")(X)
    X = Activation("relu")(X)
    X = MaxPooling2D(pool_size=(3,3), strides=(2,2))(X)
    
    #stage2
    X = convolutional_block(X, f=3, filters=[64,64,256], stage=2, block="a", s=1)
    X = identity_block(X, f=3, filters=[64,64,256], stage=2, block="b")
    X = identity_block(X, f=3, filters=[64,64,256], stage=2, block="c")
    
    #stage3
    X = convolutional_block(X, f=3, filters=[128,128,512], stage=3, block="a", s=2)
    X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="b")
    X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="c")
    X = identity_block(X, f=3, filters=[128,128,512], stage=3, block="d")
    
    #stage4
    X = convolutional_block(X, f=3, filters=[256,256,1024], stage=4, block="a", s=2)
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="b")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="c")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="d")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="e")
    X = identity_block(X, f=3, filters=[256,256,1024], stage=4, block="f")
    
    #stage5
    X = convolutional_block(X, f=3, filters=[512,512,2048], stage=5, block="a", s=2)
    X = identity_block(X, f=3, filters=[512,512,2048], stage=5, block="b")
    X = identity_block(X, f=3, filters=[512,512,2048], stage=5, block="c")
    
    # Average pooling
    X = AveragePooling2D(pool_size=(2,2),padding="same")(X)
    
    # Output layer
    X = Flatten()(X)
    X = Dense(classes, activation="softmax", name="fc"+str(classes),
              kernel_initializer=glorot_uniform(seed=0))(X)
    
    
    # Create the model
    model = Model(inputs=X_input, outputs=X, name="ResNet50")
    
    return model

Instantiate and compile the model, then prepare the data:

model = ResNet50(input_shape=(64,64,3),classes=6)
model.compile(optimizer="adam",loss="categorical_crossentropy",metrics=["accuracy"])
X_train_orig,Y_train_orig,X_test_orig,Y_test_orig,classes=resnets_utils.load_dataset()

X_train = X_train_orig/255
X_test = X_test_orig/255

Y_train = resnets_utils.convert_to_one_hot(Y_train_orig,6).T
Y_test = resnets_utils.convert_to_one_hot(Y_test_orig,6).T

print(X_train.shape[0],X_test.shape[0],X_train.shape,Y_train.shape,X_test.shape,Y_test.shape)

Result:

1080 120 (1080, 64, 64, 3) (1080, 6) (120, 64, 64, 3) (120, 6)
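resnets_utils.convert_to_one_hot comes from the course helper file; assuming it behaves like the usual helper, a minimal NumPy equivalent looks like this:

```python
import numpy as np

# A sketch of what convert_to_one_hot(Y, C) presumably does: map integer
# labels of shape (1, m) to a (C, m) one-hot matrix (which the calling
# code above then transposes to (m, C) with .T).
def convert_to_one_hot(Y, C):
    return np.eye(C)[Y.reshape(-1)].T

Y = np.array([[0, 2, 1]])
print(convert_to_one_hot(Y, 3).T)   # rows are one-hot label vectors
```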

Train the model for two epochs with batch_size=32; each epoch takes roughly 3 minutes.

model.fit(X_train,Y_train,epochs=2,batch_size=32)

Result:

Epoch 1/2
1080/1080 [==============================] - 151s 140ms/step - loss: 3.0510 - acc: 0.2407
Epoch 2/2
1080/1080 [==============================] - 2597s 2s/step - loss: 2.1708 - acc: 0.3611
  • In epoch 1/2, a loss between 1 and 5 and an accuracy between 0.2 and 0.5 are within the normal range.
  • In epoch 2/2, the same ranges apply; note that the loss is falling and the accuracy is rising.

Evaluate the model:

preds = model.evaluate(X_test,Y_test)
print(preds[0],preds[1])

Result:

120/120 [==============================] - 4s 36ms/step
2.5867568492889403 0.16666666666666666

2.4 Test with your own image

from PIL import Image
import numpy as np
import matplotlib.pyplot as plt # plt is used to display images

%matplotlib inline

img_path = 'images/fingers_big/2.jpg'

my_image = image.load_img(img_path, target_size=(64, 64))
my_image = image.img_to_array(my_image)

my_image = np.expand_dims(my_image,axis=0)
my_image = preprocess_input(my_image)

print("my_image.shape = " + str(my_image.shape))

print("class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = ")
print(model.predict(my_image))

my_image = scipy.misc.imread(img_path)
plt.imshow(my_image)

Result:

my_image.shape = (1, 64, 64, 3)
class prediction vector [p(0), p(1), p(2), p(3), p(4), p(5)] = 
[[ 1.  0.  0.  0.  0.  0.]]
model.summary()

Result:

__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 64, 64, 3)    0                                            
__________________________________________________________________________________________________
zero_padding2d_1 (ZeroPadding2D (None, 70, 70, 3)    0           input_1[0][0]                    
__________________________________________________________________________________________________
conv1 (Conv2D)                  (None, 32, 32, 64)   9472        zero_padding2d_1[0][0]           
__________________________________________________________________________________________________
bn_conv1 (BatchNormalization)   (None, 32, 32, 64)   256         conv1[0][0]                      
__________________________________________________________________________________________________
activation_4 (Activation)       (None, 32, 32, 64)   0           bn_conv1[0][0]                   
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D)  (None, 15, 15, 64)   0           activation_4[0][0]               
__________________________________________________________________________________________________
res2a_branch2a (Conv2D)         (None, 15, 15, 64)   4160        max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
bn2a_branch2a (BatchNormalizati (None, 15, 15, 64)   256         res2a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_5 (Activation)       (None, 15, 15, 64)   0           bn2a_branch2a[0][0]              
__________________________________________________________________________________________________
res2a_branch2b (Conv2D)         (None, 15, 15, 64)   36928       activation_5[0][0]               
__________________________________________________________________________________________________
bn2a_branch2b (BatchNormalizati (None, 15, 15, 64)   256         res2a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_6 (Activation)       (None, 15, 15, 64)   0           bn2a_branch2b[0][0]              
__________________________________________________________________________________________________
res2a_branch2c (Conv2D)         (None, 15, 15, 256)  16640       activation_6[0][0]               
__________________________________________________________________________________________________
res2a_branch1 (Conv2D)          (None, 15, 15, 256)  16640       max_pooling2d_1[0][0]            
__________________________________________________________________________________________________
bn2a_branch2c (BatchNormalizati (None, 15, 15, 256)  1024        res2a_branch2c[0][0]             
__________________________________________________________________________________________________
bn2a_branch1 (BatchNormalizatio (None, 15, 15, 256)  1024        res2a_branch1[0][0]              
__________________________________________________________________________________________________
add_2 (Add)                     (None, 15, 15, 256)  0           bn2a_branch2c[0][0]              
                                                                 bn2a_branch1[0][0]               
__________________________________________________________________________________________________
activation_7 (Activation)       (None, 15, 15, 256)  0           add_2[0][0]                      
__________________________________________________________________________________________________
res2b_branch2a (Conv2D)         (None, 15, 15, 64)   16448       activation_7[0][0]               
__________________________________________________________________________________________________
bn2b_branch2a (BatchNormalizati (None, 15, 15, 64)   256         res2b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_8 (Activation)       (None, 15, 15, 64)   0           bn2b_branch2a[0][0]              
__________________________________________________________________________________________________
res2b_branch2b (Conv2D)         (None, 15, 15, 64)   36928       activation_8[0][0]               
__________________________________________________________________________________________________
bn2b_branch2b (BatchNormalizati (None, 15, 15, 64)   256         res2b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_9 (Activation)       (None, 15, 15, 64)   0           bn2b_branch2b[0][0]              
__________________________________________________________________________________________________
res2b_branch2c (Conv2D)         (None, 15, 15, 256)  16640       activation_9[0][0]               
__________________________________________________________________________________________________
bn2b_branch2c (BatchNormalizati (None, 15, 15, 256)  1024        res2b_branch2c[0][0]             
__________________________________________________________________________________________________
add_3 (Add)                     (None, 15, 15, 256)  0           bn2b_branch2c[0][0]              
                                                                 activation_7[0][0]               
__________________________________________________________________________________________________
activation_10 (Activation)      (None, 15, 15, 256)  0           add_3[0][0]                      
__________________________________________________________________________________________________
res2c_branch2a (Conv2D)         (None, 15, 15, 64)   16448       activation_10[0][0]              
__________________________________________________________________________________________________
bn2c_branch2a (BatchNormalizati (None, 15, 15, 64)   256         res2c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_11 (Activation)      (None, 15, 15, 64)   0           bn2c_branch2a[0][0]              
__________________________________________________________________________________________________
res2c_branch2b (Conv2D)         (None, 15, 15, 64)   36928       activation_11[0][0]              
__________________________________________________________________________________________________
bn2c_branch2b (BatchNormalizati (None, 15, 15, 64)   256         res2c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_12 (Activation)      (None, 15, 15, 64)   0           bn2c_branch2b[0][0]              
__________________________________________________________________________________________________
res2c_branch2c (Conv2D)         (None, 15, 15, 256)  16640       activation_12[0][0]              
__________________________________________________________________________________________________
bn2c_branch2c (BatchNormalizati (None, 15, 15, 256)  1024        res2c_branch2c[0][0]             
__________________________________________________________________________________________________
add_4 (Add)                     (None, 15, 15, 256)  0           bn2c_branch2c[0][0]              
                                                                 activation_10[0][0]              
__________________________________________________________________________________________________
activation_13 (Activation)      (None, 15, 15, 256)  0           add_4[0][0]                      
__________________________________________________________________________________________________
res3a_branch2a (Conv2D)         (None, 8, 8, 128)    32896       activation_13[0][0]              
__________________________________________________________________________________________________
bn3a_branch2a (BatchNormalizati (None, 8, 8, 128)    512         res3a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_14 (Activation)      (None, 8, 8, 128)    0           bn3a_branch2a[0][0]              
__________________________________________________________________________________________________
res3a_branch2b (Conv2D)         (None, 8, 8, 128)    147584      activation_14[0][0]              
__________________________________________________________________________________________________
bn3a_branch2b (BatchNormalizati (None, 8, 8, 128)    512         res3a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_15 (Activation)      (None, 8, 8, 128)    0           bn3a_branch2b[0][0]              
__________________________________________________________________________________________________
res3a_branch2c (Conv2D)         (None, 8, 8, 512)    66048       activation_15[0][0]              
__________________________________________________________________________________________________
res3a_branch1 (Conv2D)          (None, 8, 8, 512)    131584      activation_13[0][0]              
__________________________________________________________________________________________________
bn3a_branch2c (BatchNormalizati (None, 8, 8, 512)    2048        res3a_branch2c[0][0]             
__________________________________________________________________________________________________
bn3a_branch1 (BatchNormalizatio (None, 8, 8, 512)    2048        res3a_branch1[0][0]              
__________________________________________________________________________________________________
add_5 (Add)                     (None, 8, 8, 512)    0           bn3a_branch2c[0][0]              
                                                                 bn3a_branch1[0][0]               
__________________________________________________________________________________________________
activation_16 (Activation)      (None, 8, 8, 512)    0           add_5[0][0]                      
__________________________________________________________________________________________________
res3b_branch2a (Conv2D)         (None, 8, 8, 128)    65664       activation_16[0][0]              
__________________________________________________________________________________________________
bn3b_branch2a (BatchNormalizati (None, 8, 8, 128)    512         res3b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_17 (Activation)      (None, 8, 8, 128)    0           bn3b_branch2a[0][0]              
__________________________________________________________________________________________________
res3b_branch2b (Conv2D)         (None, 8, 8, 128)    147584      activation_17[0][0]              
__________________________________________________________________________________________________
bn3b_branch2b (BatchNormalizati (None, 8, 8, 128)    512         res3b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_18 (Activation)      (None, 8, 8, 128)    0           bn3b_branch2b[0][0]              
__________________________________________________________________________________________________
res3b_branch2c (Conv2D)         (None, 8, 8, 512)    66048       activation_18[0][0]              
__________________________________________________________________________________________________
bn3b_branch2c (BatchNormalizati (None, 8, 8, 512)    2048        res3b_branch2c[0][0]             
__________________________________________________________________________________________________
add_6 (Add)                     (None, 8, 8, 512)    0           bn3b_branch2c[0][0]              
                                                                 activation_16[0][0]              
__________________________________________________________________________________________________
activation_19 (Activation)      (None, 8, 8, 512)    0           add_6[0][0]                      
__________________________________________________________________________________________________
res3c_branch2a (Conv2D)         (None, 8, 8, 128)    65664       activation_19[0][0]              
__________________________________________________________________________________________________
bn3c_branch2a (BatchNormalizati (None, 8, 8, 128)    512         res3c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_20 (Activation)      (None, 8, 8, 128)    0           bn3c_branch2a[0][0]              
__________________________________________________________________________________________________
res3c_branch2b (Conv2D)         (None, 8, 8, 128)    147584      activation_20[0][0]              
__________________________________________________________________________________________________
bn3c_branch2b (BatchNormalizati (None, 8, 8, 128)    512         res3c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_21 (Activation)      (None, 8, 8, 128)    0           bn3c_branch2b[0][0]              
__________________________________________________________________________________________________
res3c_branch2c (Conv2D)         (None, 8, 8, 512)    66048       activation_21[0][0]              
__________________________________________________________________________________________________
bn3c_branch2c (BatchNormalizati (None, 8, 8, 512)    2048        res3c_branch2c[0][0]             
__________________________________________________________________________________________________
add_7 (Add)                     (None, 8, 8, 512)    0           bn3c_branch2c[0][0]              
                                                                 activation_19[0][0]              
__________________________________________________________________________________________________
activation_22 (Activation)      (None, 8, 8, 512)    0           add_7[0][0]                      
__________________________________________________________________________________________________
res3d_branch2a (Conv2D)         (None, 8, 8, 128)    65664       activation_22[0][0]              
__________________________________________________________________________________________________
bn3d_branch2a (BatchNormalizati (None, 8, 8, 128)    512         res3d_branch2a[0][0]             
__________________________________________________________________________________________________
activation_23 (Activation)      (None, 8, 8, 128)    0           bn3d_branch2a[0][0]              
__________________________________________________________________________________________________
res3d_branch2b (Conv2D)         (None, 8, 8, 128)    147584      activation_23[0][0]              
__________________________________________________________________________________________________
bn3d_branch2b (BatchNormalizati (None, 8, 8, 128)    512         res3d_branch2b[0][0]             
__________________________________________________________________________________________________
activation_24 (Activation)      (None, 8, 8, 128)    0           bn3d_branch2b[0][0]              
__________________________________________________________________________________________________
res3d_branch2c (Conv2D)         (None, 8, 8, 512)    66048       activation_24[0][0]              
__________________________________________________________________________________________________
bn3d_branch2c (BatchNormalizati (None, 8, 8, 512)    2048        res3d_branch2c[0][0]             
__________________________________________________________________________________________________
add_8 (Add)                     (None, 8, 8, 512)    0           bn3d_branch2c[0][0]              
                                                                 activation_22[0][0]              
__________________________________________________________________________________________________
activation_25 (Activation)      (None, 8, 8, 512)    0           add_8[0][0]                      
__________________________________________________________________________________________________
res4a_branch2a (Conv2D)         (None, 4, 4, 256)    131328      activation_25[0][0]              
__________________________________________________________________________________________________
bn4a_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_26 (Activation)      (None, 4, 4, 256)    0           bn4a_branch2a[0][0]              
__________________________________________________________________________________________________
res4a_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_26[0][0]              
__________________________________________________________________________________________________
bn4a_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_27 (Activation)      (None, 4, 4, 256)    0           bn4a_branch2b[0][0]              
__________________________________________________________________________________________________
res4a_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_27[0][0]              
__________________________________________________________________________________________________
res4a_branch1 (Conv2D)          (None, 4, 4, 1024)   525312      activation_25[0][0]              
__________________________________________________________________________________________________
bn4a_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4a_branch2c[0][0]             
__________________________________________________________________________________________________
bn4a_branch1 (BatchNormalizatio (None, 4, 4, 1024)   4096        res4a_branch1[0][0]              
__________________________________________________________________________________________________
add_9 (Add)                     (None, 4, 4, 1024)   0           bn4a_branch2c[0][0]              
                                                                 bn4a_branch1[0][0]               
__________________________________________________________________________________________________
activation_28 (Activation)      (None, 4, 4, 1024)   0           add_9[0][0]                      
__________________________________________________________________________________________________
res4b_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_28[0][0]              
__________________________________________________________________________________________________
bn4b_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_29 (Activation)      (None, 4, 4, 256)    0           bn4b_branch2a[0][0]              
__________________________________________________________________________________________________
res4b_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_29[0][0]              
__________________________________________________________________________________________________
bn4b_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_30 (Activation)      (None, 4, 4, 256)    0           bn4b_branch2b[0][0]              
__________________________________________________________________________________________________
res4b_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_30[0][0]              
__________________________________________________________________________________________________
bn4b_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4b_branch2c[0][0]             
__________________________________________________________________________________________________
add_10 (Add)                    (None, 4, 4, 1024)   0           bn4b_branch2c[0][0]              
                                                                 activation_28[0][0]              
__________________________________________________________________________________________________
activation_31 (Activation)      (None, 4, 4, 1024)   0           add_10[0][0]                     
__________________________________________________________________________________________________
res4c_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_31[0][0]              
__________________________________________________________________________________________________
bn4c_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_32 (Activation)      (None, 4, 4, 256)    0           bn4c_branch2a[0][0]              
__________________________________________________________________________________________________
res4c_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_32[0][0]              
__________________________________________________________________________________________________
bn4c_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_33 (Activation)      (None, 4, 4, 256)    0           bn4c_branch2b[0][0]              
__________________________________________________________________________________________________
res4c_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_33[0][0]              
__________________________________________________________________________________________________
bn4c_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4c_branch2c[0][0]             
__________________________________________________________________________________________________
add_11 (Add)                    (None, 4, 4, 1024)   0           bn4c_branch2c[0][0]              
                                                                 activation_31[0][0]              
__________________________________________________________________________________________________
activation_34 (Activation)      (None, 4, 4, 1024)   0           add_11[0][0]                     
__________________________________________________________________________________________________
res4d_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_34[0][0]              
__________________________________________________________________________________________________
bn4d_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4d_branch2a[0][0]             
__________________________________________________________________________________________________
activation_35 (Activation)      (None, 4, 4, 256)    0           bn4d_branch2a[0][0]              
__________________________________________________________________________________________________
res4d_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_35[0][0]              
__________________________________________________________________________________________________
bn4d_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4d_branch2b[0][0]             
__________________________________________________________________________________________________
activation_36 (Activation)      (None, 4, 4, 256)    0           bn4d_branch2b[0][0]              
__________________________________________________________________________________________________
res4d_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_36[0][0]              
__________________________________________________________________________________________________
bn4d_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4d_branch2c[0][0]             
__________________________________________________________________________________________________
add_12 (Add)                    (None, 4, 4, 1024)   0           bn4d_branch2c[0][0]              
                                                                 activation_34[0][0]              
__________________________________________________________________________________________________
activation_37 (Activation)      (None, 4, 4, 1024)   0           add_12[0][0]                     
__________________________________________________________________________________________________
res4e_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_37[0][0]              
__________________________________________________________________________________________________
bn4e_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4e_branch2a[0][0]             
__________________________________________________________________________________________________
activation_38 (Activation)      (None, 4, 4, 256)    0           bn4e_branch2a[0][0]              
__________________________________________________________________________________________________
res4e_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_38[0][0]              
__________________________________________________________________________________________________
bn4e_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4e_branch2b[0][0]             
__________________________________________________________________________________________________
activation_39 (Activation)      (None, 4, 4, 256)    0           bn4e_branch2b[0][0]              
__________________________________________________________________________________________________
res4e_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_39[0][0]              
__________________________________________________________________________________________________
bn4e_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4e_branch2c[0][0]             
__________________________________________________________________________________________________
add_13 (Add)                    (None, 4, 4, 1024)   0           bn4e_branch2c[0][0]              
                                                                 activation_37[0][0]              
__________________________________________________________________________________________________
activation_40 (Activation)      (None, 4, 4, 1024)   0           add_13[0][0]                     
__________________________________________________________________________________________________
res4f_branch2a (Conv2D)         (None, 4, 4, 256)    262400      activation_40[0][0]              
__________________________________________________________________________________________________
bn4f_branch2a (BatchNormalizati (None, 4, 4, 256)    1024        res4f_branch2a[0][0]             
__________________________________________________________________________________________________
activation_41 (Activation)      (None, 4, 4, 256)    0           bn4f_branch2a[0][0]              
__________________________________________________________________________________________________
res4f_branch2b (Conv2D)         (None, 4, 4, 256)    590080      activation_41[0][0]              
__________________________________________________________________________________________________
bn4f_branch2b (BatchNormalizati (None, 4, 4, 256)    1024        res4f_branch2b[0][0]             
__________________________________________________________________________________________________
activation_42 (Activation)      (None, 4, 4, 256)    0           bn4f_branch2b[0][0]              
__________________________________________________________________________________________________
res4f_branch2c (Conv2D)         (None, 4, 4, 1024)   263168      activation_42[0][0]              
__________________________________________________________________________________________________
bn4f_branch2c (BatchNormalizati (None, 4, 4, 1024)   4096        res4f_branch2c[0][0]             
__________________________________________________________________________________________________
add_14 (Add)                    (None, 4, 4, 1024)   0           bn4f_branch2c[0][0]              
                                                                 activation_40[0][0]              
__________________________________________________________________________________________________
activation_43 (Activation)      (None, 4, 4, 1024)   0           add_14[0][0]                     
__________________________________________________________________________________________________
res5a_branch2a (Conv2D)         (None, 2, 2, 512)    524800      activation_43[0][0]              
__________________________________________________________________________________________________
bn5a_branch2a (BatchNormalizati (None, 2, 2, 512)    2048        res5a_branch2a[0][0]             
__________________________________________________________________________________________________
activation_44 (Activation)      (None, 2, 2, 512)    0           bn5a_branch2a[0][0]              
__________________________________________________________________________________________________
res5a_branch2b (Conv2D)         (None, 2, 2, 512)    2359808     activation_44[0][0]              
__________________________________________________________________________________________________
bn5a_branch2b (BatchNormalizati (None, 2, 2, 512)    2048        res5a_branch2b[0][0]             
__________________________________________________________________________________________________
activation_45 (Activation)      (None, 2, 2, 512)    0           bn5a_branch2b[0][0]              
__________________________________________________________________________________________________
res5a_branch2c (Conv2D)         (None, 2, 2, 2048)   1050624     activation_45[0][0]              
__________________________________________________________________________________________________
res5a_branch1 (Conv2D)          (None, 2, 2, 2048)   2099200     activation_43[0][0]              
__________________________________________________________________________________________________
bn5a_branch2c (BatchNormalizati (None, 2, 2, 2048)   8192        res5a_branch2c[0][0]             
__________________________________________________________________________________________________
bn5a_branch1 (BatchNormalizatio (None, 2, 2, 2048)   8192        res5a_branch1[0][0]              
__________________________________________________________________________________________________
add_15 (Add)                    (None, 2, 2, 2048)   0           bn5a_branch2c[0][0]              
                                                                 bn5a_branch1[0][0]               
__________________________________________________________________________________________________
activation_46 (Activation)      (None, 2, 2, 2048)   0           add_15[0][0]                     
__________________________________________________________________________________________________
res5b_branch2a (Conv2D)         (None, 2, 2, 512)    1049088     activation_46[0][0]              
__________________________________________________________________________________________________
bn5b_branch2a (BatchNormalizati (None, 2, 2, 512)    2048        res5b_branch2a[0][0]             
__________________________________________________________________________________________________
activation_47 (Activation)      (None, 2, 2, 512)    0           bn5b_branch2a[0][0]              
__________________________________________________________________________________________________
res5b_branch2b (Conv2D)         (None, 2, 2, 512)    2359808     activation_47[0][0]              
__________________________________________________________________________________________________
bn5b_branch2b (BatchNormalizati (None, 2, 2, 512)    2048        res5b_branch2b[0][0]             
__________________________________________________________________________________________________
activation_48 (Activation)      (None, 2, 2, 512)    0           bn5b_branch2b[0][0]              
__________________________________________________________________________________________________
res5b_branch2c (Conv2D)         (None, 2, 2, 2048)   1050624     activation_48[0][0]              
__________________________________________________________________________________________________
bn5b_branch2c (BatchNormalizati (None, 2, 2, 2048)   8192        res5b_branch2c[0][0]             
__________________________________________________________________________________________________
add_16 (Add)                    (None, 2, 2, 2048)   0           bn5b_branch2c[0][0]              
                                                                 activation_46[0][0]              
__________________________________________________________________________________________________
activation_49 (Activation)      (None, 2, 2, 2048)   0           add_16[0][0]                     
__________________________________________________________________________________________________
res5c_branch2a (Conv2D)         (None, 2, 2, 512)    1049088     activation_49[0][0]              
__________________________________________________________________________________________________
bn5c_branch2a (BatchNormalizati (None, 2, 2, 512)    2048        res5c_branch2a[0][0]             
__________________________________________________________________________________________________
activation_50 (Activation)      (None, 2, 2, 512)    0           bn5c_branch2a[0][0]              
__________________________________________________________________________________________________
res5c_branch2b (Conv2D)         (None, 2, 2, 512)    2359808     activation_50[0][0]              
__________________________________________________________________________________________________
bn5c_branch2b (BatchNormalizati (None, 2, 2, 512)    2048        res5c_branch2b[0][0]             
__________________________________________________________________________________________________
activation_51 (Activation)      (None, 2, 2, 512)    0           bn5c_branch2b[0][0]              
__________________________________________________________________________________________________
res5c_branch2c (Conv2D)         (None, 2, 2, 2048)   1050624     activation_51[0][0]              
__________________________________________________________________________________________________
bn5c_branch2c (BatchNormalizati (None, 2, 2, 2048)   8192        res5c_branch2c[0][0]             
__________________________________________________________________________________________________
add_17 (Add)                    (None, 2, 2, 2048)   0           bn5c_branch2c[0][0]              
                                                                 activation_49[0][0]              
__________________________________________________________________________________________________
activation_52 (Activation)      (None, 2, 2, 2048)   0           add_17[0][0]                     
__________________________________________________________________________________________________
average_pooling2d_1 (AveragePoo (None, 1, 1, 2048)   0           activation_52[0][0]              
__________________________________________________________________________________________________
flatten_1 (Flatten)             (None, 2048)         0           average_pooling2d_1[0][0]        
__________________________________________________________________________________________________
fc6 (Dense)                     (None, 6)            12294       flatten_1[0][0]                  
==================================================================================================
Total params: 23,600,006
Trainable params: 23,546,886
Non-trainable params: 53,120
__________________________________________________________________________________________________
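The per-layer parameter counts in the summary above can be checked by hand with the standard formulas: a Conv2D layer has `kh*kw*c_in*filters + filters` parameters (weights plus biases), a BatchNormalization layer has `4*channels` (gamma, beta, and the two non-trainable moving statistics, which is why the summary reports non-trainable parameters), and a Dense layer has `n_in*n_out + n_out`. A minimal sketch verifying a few rows of the table (layer shapes taken from the summary itself):

```python
# Parameter-count formulas for the layer types in the summary.

def conv2d_params(kh, kw, c_in, filters):
    # kernel weights + one bias per filter
    return kh * kw * c_in * filters + filters

def bn_params(channels):
    # gamma, beta, moving mean, moving variance
    return 4 * channels

def dense_params(n_in, n_out):
    # weight matrix + biases
    return n_in * n_out + n_out

# res5c_branch2b: 3x3 conv, 512 -> 512 channels
print(conv2d_params(3, 3, 512, 512))  # 2359808, matches the summary
# bn5c_branch2b: BatchNormalization over 512 channels
print(bn_params(512))                 # 2048
# fc6: Dense layer, 2048 -> 6 classes
print(dense_params(2048, 6))          # 12294
```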
plot_model(model,to_file='model.png')
SVG(model_to_dot(model).create(prog='dot',format='svg'))

Result: omitted (the rendered graph of the model).
