TensorLayer Learning Log 19_chapter8_2

Frustratingly, no matter how much I debugged, I could not get the book's source code to run, so I had no choice but to look for other source code:

In this blog post I found a cartoon-face dataset that is fairly small, so I figured I would try a small dataset; I was not really expecting the training to finish anyway: https://blog.csdn.net/heyc861221/article/details/80127914. The dataset consists of cartoon face avatars and is only a bit over 200 MB.

I switched to this source code instead:

https://github.com/carpedm20/DCGAN-tensorflow

There are five files in total, one more than before: ops.py. However, download.py still pulls from Google Drive, so it remains unusable here. utils.py has some additions, main.py defines a more complete set of flags, and ops.py collects the convolution, deconvolution, and related operations into standalone functions. Otherwise there is little difference; the key point is that model.py, the core, is unchanged.

You need to create a new folder named anime under the data folder to hold the cartoon faces. The data folder sits in the same directory as the source files, and it should already contain the previously downloaded mnist and celebA datasets.
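Assuming you are in the repository root (the folder holding main.py), the layout above can be set up like this; the celebA and mnist folders only exist if you downloaded them earlier:

```shell
# Create the dataset folder next to the source files; the anime jpgs go inside.
mkdir -p data/anime
ls data   # should list anime (plus mnist and celebA if previously downloaded)
```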

Windows cmd can run .py files; that is not exclusive to macOS or Linux. Type python, a space, then drag the .py file onto the cmd window (which saves you from typing out that absurdly long path by hand), add another space, pass the arguments in the form --flag value, and press Enter to run.
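Put together, the invocation ends up looking like this (a sketch using the flag values from this post, including the checkpoint directory flag discussed next; run it from the repository folder with TensorFlow installed):

```shell
python main.py --dataset anime --input_height 96 --output_height 48 --crop --train --checkpoint_dir anime_64_48_48
```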

Running it that way gave me an error saying the folder for saving checkpoints could not be found; adding --checkpoint_dir anime_64_48_48 fixed it. If you would rather not run from cmd, change the defaults of the flags at the top of main.py as follows:

import numpy as np
import tensorflow as tf

flags = tf.app.flags
# flags.DEFINE_integer("epoch", 25, "Epoch to train [25]")
flags.DEFINE_integer("epoch", 2, "Epoch to train [25]")
flags.DEFINE_float("learning_rate", 0.0002, "Learning rate of for adam [0.0002]")
flags.DEFINE_float("beta1", 0.5, "Momentum term of adam [0.5]")
flags.DEFINE_float("train_size", np.inf, "The size of train images [np.inf]")
flags.DEFINE_integer("batch_size", 64, "The size of batch images [64]")
# flags.DEFINE_integer("input_height", 108, "The size of image to use (will be center cropped). [108]")
flags.DEFINE_integer("input_height", 96, "The size of image to use (will be center cropped). [108]")
flags.DEFINE_integer("input_width", None, "The size of image to use (will be center cropped). If None, same value as input_height [None]")
# flags.DEFINE_integer("output_height", 64, "The size of the output images to produce [64]")
flags.DEFINE_integer("output_height", 48, "The size of the output images to produce [64]")
flags.DEFINE_integer("output_width", None, "The size of the output images to produce. If None, same value as output_height [None]")
# flags.DEFINE_string("dataset", "celebA", "The name of dataset [celebA, mnist, lsun]")
flags.DEFINE_string("dataset", "anime", "The name of dataset [celebA, mnist, lsun]")
flags.DEFINE_string("input_fname_pattern", "*.jpg", "Glob pattern of filename of input images [*]")
# flags.DEFINE_string("checkpoint_dir", "checkpoint", "Directory name to save the checkpoints [checkpoint]")
flags.DEFINE_string("checkpoint_dir", "anime_64_48_48", "Directory name to save the checkpoints [checkpoint]")
flags.DEFINE_string("data_dir", "./data", "Root directory of dataset [data]")
flags.DEFINE_string("sample_dir", "samples", "Directory name to save the image samples [samples]")
# flags.DEFINE_boolean("train", False, "True for training, False for testing [False]")
flags.DEFINE_boolean("train", True, "True for training, False for testing [False]")
# flags.DEFINE_boolean("crop", False, "True for training, False for testing [False]")
flags.DEFINE_boolean("crop", True, "True for training, False for testing [False]")
flags.DEFINE_boolean("visualize", False, "True for visualizing, False for nothing [False]")
flags.DEFINE_integer("generate_test_images", 100, "Number of images to generate during test. [100]")
FLAGS = flags.FLAGS
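If you just want to sanity-check the defaults without pulling in TensorFlow, here is a minimal stdlib sketch of the same flag set using argparse (my own equivalent for illustration, not part of the DCGAN-tensorflow code):

```python
import argparse

# argparse equivalent of the modified tf.app.flags defaults above.
parser = argparse.ArgumentParser()
parser.add_argument("--epoch", type=int, default=2)
parser.add_argument("--learning_rate", type=float, default=0.0002)
parser.add_argument("--batch_size", type=int, default=64)
parser.add_argument("--input_height", type=int, default=96)
parser.add_argument("--output_height", type=int, default=48)
parser.add_argument("--dataset", default="anime")
parser.add_argument("--checkpoint_dir", default="anime_64_48_48")
args = parser.parse_args([])  # empty argv -> use defaults, as when editing main.py
print(args.dataset, args.input_height, args.output_height)
```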

Since the images are square, you only need to set input_height and output_height. The cartoon faces here are all 96x96 pixels, so the input parameter is 96 and the output is a 48x48 square. As for the checkpoint directory, the name anime_64_48_48 encodes dataset_batchsize_outputheight_outputwidth: the anime dataset, batch size 64, and 48x48 output. I took the name straight from the error message.
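To see where the checkpoint shapes come from: model.py halves the output size four times (with ceil rounding) to size the generator's deconv stack. A small sketch of that computation (the name conv_out_size_same follows the repo; the surrounding script is mine):

```python
import math

def conv_out_size_same(size, stride=2):
    # Ceil division, as DCGAN-tensorflow uses to size each deconv layer.
    return int(math.ceil(float(size) / float(stride)))

s_h = 48                        # output_height
s_h2 = conv_out_size_same(s_h)
s_h4 = conv_out_size_same(s_h2)
s_h8 = conv_out_size_same(s_h4)
s_h16 = conv_out_size_same(s_h8)
print(s_h2, s_h4, s_h8, s_h16)  # 24 12 6 3
print(512 * s_h16 * s_h16)      # 4608, the g_h0_lin width in the log below
```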

The run output is then as follows:

{'batch_size': <absl.flags._flag.Flag object at 0x000000000958C5F8>,
 'beta1': <absl.flags._flag.Flag object at 0x000000000957DCF8>,
 'checkpoint_dir': <absl.flags._flag.Flag object at 0x000000000C3BFA90>,
 'crop': <absl.flags._flag.BooleanFlag object at 0x000000000C3BFC50>,
 'data_dir': <absl.flags._flag.Flag object at 0x000000000C3BFB00>,
 'dataset': <absl.flags._flag.Flag object at 0x000000000C3BF940>,
 'epoch': <absl.flags._flag.Flag object at 0x000000000126F0F0>,
 'generate_test_images': <absl.flags._flag.Flag object at 0x000000000C3BFD68>,
 'h': <tensorflow.python.platform.app._HelpFlag object at 0x000000000C3BFDA0>,
 'help': <tensorflow.python.platform.app._HelpFlag object at 0x000000000C3BFDA0>,
 'helpfull': <tensorflow.python.platform.app._HelpfullFlag object at 0x000000000C3BFE48>,
 'helpshort': <tensorflow.python.platform.app._HelpshortFlag object at 0x000000000C3BFEB8>,
 'input_fname_pattern': <absl.flags._flag.Flag object at 0x000000000C3BF9E8>,
 'input_height': <absl.flags._flag.Flag object at 0x000000000958C6D8>,
 'input_width': <absl.flags._flag.Flag object at 0x000000000C3BF7B8>,
 'learning_rate': <absl.flags._flag.Flag object at 0x0000000004388FD0>,
 'output_height': <absl.flags._flag.Flag object at 0x000000000C3BF828>,
 'output_width': <absl.flags._flag.Flag object at 0x000000000C3BF8D0>,
 'sample_dir': <absl.flags._flag.Flag object at 0x000000000C3BFB70>,
 'train': <absl.flags._flag.BooleanFlag object at 0x000000000C3BFBA8>,
 'train_size': <absl.flags._flag.Flag object at 0x000000000958C208>,
 'visualize': <absl.flags._flag.BooleanFlag object at 0x000000000C3BFCC0>}
---------
Variables: name (type shape) [size]
---------
generator/g_h0_lin/Matrix:0 (float32_ref 100x4608) [460800, bytes: 1843200]
generator/g_h0_lin/bias:0 (float32_ref 4608) [4608, bytes: 18432]
generator/g_bn0/beta:0 (float32_ref 512) [512, bytes: 2048]
generator/g_bn0/gamma:0 (float32_ref 512) [512, bytes: 2048]
generator/g_h1/w:0 (float32_ref 5x5x256x512) [3276800, bytes: 13107200]
generator/g_h1/biases:0 (float32_ref 256) [256, bytes: 1024]
generator/g_bn1/beta:0 (float32_ref 256) [256, bytes: 1024]
generator/g_bn1/gamma:0 (float32_ref 256) [256, bytes: 1024]
generator/g_h2/w:0 (float32_ref 5x5x128x256) [819200, bytes: 3276800]
generator/g_h2/biases:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/beta:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/gamma:0 (float32_ref 128) [128, bytes: 512]
generator/g_h3/w:0 (float32_ref 5x5x64x128) [204800, bytes: 819200]
generator/g_h3/biases:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/beta:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/gamma:0 (float32_ref 64) [64, bytes: 256]
generator/g_h4/w:0 (float32_ref 5x5x3x64) [4800, bytes: 19200]
generator/g_h4/biases:0 (float32_ref 3) [3, bytes: 12]
discriminator/d_h0_conv/w:0 (float32_ref 5x5x3x64) [4800, bytes: 19200]
discriminator/d_h0_conv/biases:0 (float32_ref 64) [64, bytes: 256]
discriminator/d_h1_conv/w:0 (float32_ref 5x5x64x128) [204800, bytes: 819200]
discriminator/d_h1_conv/biases:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn1/beta:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn1/gamma:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_h2_conv/w:0 (float32_ref 5x5x128x256) [819200, bytes: 3276800]
discriminator/d_h2_conv/biases:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_bn2/beta:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_bn2/gamma:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_h3_conv/w:0 (float32_ref 5x5x256x512) [3276800, bytes: 13107200]
discriminator/d_h3_conv/biases:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_bn3/beta:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_bn3/gamma:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_h4_lin/Matrix:0 (float32_ref 4608x1) [4608, bytes: 18432]
discriminator/d_h4_lin/bias:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 9086340
Total bytes of variables: 36345360
 [*] Reading checkpoints...
 [*] Success to read DCGAN.model-2
 [*] Load SUCCESS
Epoch: [ 0/ 2] [   0/ 800] time: 38.8909, d_loss: 4.83695269, g_loss: 0.01604199
Epoch: [ 0/ 2] [   1/ 800] time: 75.9409, d_loss: 4.21985102, g_loss: 0.03411261
Epoch: [ 0/ 2] [   2/ 800] time: 112.8350, d_loss: 4.44241142, g_loss: 0.02101352
Epoch: [ 0/ 2] [   3/ 800] time: 149.5263, d_loss: 3.07594109, g_loss: 0.11336002
Epoch: [ 0/ 2] [   4/ 800] time: 186.3891, d_loss: 2.70502377, g_loss: 0.14985262
Epoch: [ 0/ 2] [   5/ 800] time: 223.3300, d_loss: 3.69357109, g_loss: 0.04717619
Epoch: [ 0/ 2] [   6/ 800] time: 260.0525, d_loss: 2.17558765, g_loss: 0.68981493
Epoch: [ 0/ 2] [   7/ 800] time: 297.1181, d_loss: 5.51236010, g_loss: 0.00628419
Epoch: [ 0/ 2] [   8/ 800] time: 333.8094, d_loss: 2.00994205, g_loss: 0.82554901
Epoch: [ 0/ 2] [   9/ 800] time: 370.4227, d_loss: 5.24547815, g_loss: 0.01088684
Epoch: [ 0/ 2] [  10/ 800] time: 407.0671, d_loss: 2.80249119, g_loss: 1.36248946
Epoch: [ 0/ 2] [  11/ 800] time: 443.8676, d_loss: 5.26725864, g_loss: 0.01024086
Epoch: [ 0/ 2] [  12/ 800] time: 480.6368, d_loss: 1.86196470, g_loss: 1.17058265
Epoch: [ 0/ 2] [  13/ 800] time: 517.5933, d_loss: 4.18832588, g_loss: 0.02691788
Epoch: [ 0/ 2] [  14/ 800] time: 554.1754, d_loss: 2.07127762, g_loss: 2.26849294
Epoch: [ 0/ 2] [  15/ 800] time: 590.8510, d_loss: 4.49704552, g_loss: 0.01644055
Epoch: [ 0/ 2] [  16/ 800] time: 627.6671, d_loss: 1.76221132, g_loss: 2.19296288
Epoch: [ 0/ 2] [  17/ 800] time: 664.5924, d_loss: 3.17999744, g_loss: 0.08178606
Epoch: [ 0/ 2] [  18/ 800] time: 701.6112, d_loss: 1.68936574, g_loss: 3.26065779
Epoch: [ 0/ 2] [  19/ 800] time: 738.1621, d_loss: 3.68083525, g_loss: 0.05507284
Epoch: [ 0/ 2] [  20/ 800] time: 774.7442, d_loss: 1.62791634, g_loss: 1.72477102
Epoch: [ 0/ 2] [  21/ 800] time: 811.2326, d_loss: 4.06185627, g_loss: 0.03332135
Epoch: [ 0/ 2] [  22/ 800] time: 848.3139, d_loss: 0.83045113, g_loss: 3.93568897
Epoch: [ 0/ 2] [  23/ 800] time: 884.9740, d_loss: 3.16240478, g_loss: 0.05494529
Epoch: [ 0/ 2] [  24/ 800] time: 921.7744, d_loss: 1.69060552, g_loss: 1.86502504
Epoch: [ 0/ 2] [  25/ 800] time: 958.3253, d_loss: 4.17921877, g_loss: 0.03710852
[Cancelled]
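As a quick sanity check, the per-variable sizes printed above really do add up to the reported totals (counts copied from the log; float32 means 4 bytes each):

```python
# Parameter counts copied from the variable listing above.
generator = [460800, 4608, 512, 512, 3276800, 256, 256, 256, 819200, 128, 128,
             128, 204800, 64, 64, 64, 4800, 3]
discriminator = [4800, 64, 204800, 128, 128, 128, 819200, 256, 256, 256,
                 3276800, 512, 512, 512, 4608, 1]
total = sum(generator) + sum(discriminator)
print(total)      # 9086340, matching "Total size of variables"
print(total * 4)  # 36345360 bytes, matching "Total bytes of variables"
```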

I did the math: from the timestamps in the log, each batch takes about 37 seconds, so even a single epoch of 800 batches works out to roughly 8 hours. I was using my home computer, so I gave up without hesitation; this one will really have to wait until I get hold of a decent machine. Training on CPU alone is just not feasible, you need a GPU! If anyone with a monster rig passes by, please give it a try. But don't forget to tell me the result!
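From the log's timestamps, a minimal sketch of the epoch-time arithmetic:

```python
# Per-batch time from two consecutive log lines above (steps 0 and 1).
step_seconds = 75.9409 - 38.8909
batches_per_epoch = 800
epoch_hours = step_seconds * batches_per_epoch / 3600
print(round(step_seconds, 2), round(epoch_hours, 1))
```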
