Defining the generator:
def build_generator(self):
    model = Sequential()
    model.add(Dense(128 * 56 * 56, activation='relu', input_dim=self.latent_dim))
    model.add(Reshape((56, 56, 128)))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(128, kernel_size=3, padding='same'))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=0.8))
    model.add(UpSampling2D())
    model.add(Conv2D(64, kernel_size=3, padding='same'))
    model.add(Activation('relu'))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(self.channels, kernel_size=3, padding='same'))
    model.add(Activation('tanh'))
    model.summary()
    return model  # return the assembled model so it can be composed with the discriminator
(Parameter values used in the code: latent_dim = 100, channels = 3)
Input: a 100-dimensional noise vector
Output: a 224×224×3 image
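As a sanity check on these shapes, the spatial dimensions can be traced by hand. The sketch below mirrors the layer stack above in pure Python rather than running Keras (the function name is illustrative):

```python
def generator_output_shape(latent_dim=100, channels=3):
    """Trace the spatial shape through the generator's layer stack."""
    # Dense(128*56*56) + Reshape((56, 56, 128)) -> a 56x56 map with 128 channels
    h = w = 56
    # Two UpSampling2D layers each double the spatial size (56 -> 112 -> 224);
    # the Conv2D layers use padding='same' with stride 1, so they keep it unchanged.
    for _ in range(2):
        h, w = h * 2, w * 2
    # The final Conv2D(self.channels) sets the channel count
    return (h, w, channels)

print(generator_output_shape())  # -> (224, 224, 3)
```

This confirms that a 100-dimensional latent vector is expanded into a 224×224×3 image.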
A careful reader will notice that the earlier layers use ReLU activations while the last layer switches to tanh. An ordinary network often needs no activation on its final layer, but a GAN is composed of a generator and a discriminator: the generator's output is the discriminator's input, and the two together form one complete network. The tanh on the generator's last layer therefore serves double duty, acting as an activation while also normalizing the generator's output to [-1, 1] before it is fed to the discriminator.
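Because the generator's tanh output lies in [-1, 1], the real training images must be scaled to the same range before being shown to the discriminator. A minimal NumPy sketch (the function and variable names here are illustrative, not from the original code):

```python
import numpy as np

def scale_images(images):
    """Map uint8 pixel values in [0, 255] to floats in [-1, 1], matching tanh."""
    return images.astype("float32") / 127.5 - 1.0

# A hypothetical batch of four 224x224 RGB images
batch = np.random.randint(0, 256, size=(4, 224, 224, 3), dtype=np.uint8)
scaled = scale_images(batch)
```

If the real images were left in [0, 255] (or [0, 1]) while the fakes lie in [-1, 1], the discriminator could separate them by value range alone, so this scaling step matters.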
Defining the discriminator:
def build_discriminator(self):
    model = Sequential()
    model.add(Conv2D(16, kernel_size=3, strides=2, input_shape=(224, 224, 3), padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Conv2D(32, kernel_size=3, strides=2, padding='same'))
    model.add(ZeroPadding2D(padding=((0, 1), (0, 1))))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(64, kernel_size=3, strides=2, padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(BatchNormalization(momentum=0.8))
    model.add(Conv2D(128, kernel_size=3, strides=1, padding='same'))
    model.add(LeakyReLU(alpha=0.2))
    model.add(Dropout(0.25))
    model.add(Flatten())
    model.add(Dense(1, activation='sigmoid'))  # single-unit sigmoid head so the model outputs a real/fake probability
    model.summary()
    return model
Input: a 224×224×3 image, i.e. the generator's output
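The discriminator's shapes can be traced the same way. With padding='same', a strided Keras Conv2D produces ceil(input / stride) outputs along each spatial axis. A pure-Python sketch of the stack above, up to the Flatten layer (the function name is illustrative):

```python
import math

def discriminator_flatten_size(h=224, w=224):
    """Trace spatial shapes through the discriminator stack, up to Flatten."""
    def conv_same(n, stride):
        # Keras Conv2D with padding='same': output size = ceil(input / stride)
        return math.ceil(n / stride)

    h, w = conv_same(h, 2), conv_same(w, 2)  # Conv2D(16, strides=2): 224 -> 112
    h, w = conv_same(h, 2), conv_same(w, 2)  # Conv2D(32, strides=2): 112 -> 56
    h, w = h + 1, w + 1                      # ZeroPadding2D(((0,1),(0,1))): 56 -> 57
    h, w = conv_same(h, 2), conv_same(w, 2)  # Conv2D(64, strides=2): 57 -> 29
    # Conv2D(128, strides=1, padding='same') keeps the 29x29 spatial size
    return h * w * 128                       # Flatten() concatenates all units

print(discriminator_flatten_size())  # -> 107648
```

Note the odd spatial size (57) introduced by ZeroPadding2D: with stride 2 and 'same' padding it still halves cleanly to 29, so the stack works end to end on a 224×224 input.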