TensorLayer Learning Log 19_chapter8_2

Frustrating: no matter how I debugged it, I could not get the source code from the book to run, so I had no choice but to look for working code elsewhere.

In this blog post I found a cartoon (anime face) dataset that is fairly small, so I figured I would try a small dataset first; I did not really expect the run to finish anyway: https://blog.csdn.net/heyc861221/article/details/80127914. The dataset consists of cartoon face images and is only a bit over 200 MB.

I switched the source code to this repository:

https://github.com/carpedm20/DCGAN-tensorflow

There are five files in total. The extra one is ops.py, which pulls the convolution, deconvolution and related operations out into standalone functions. download.py still uses Google Drive, so it is still unusable here; utils.py has gained some content, and main.py has a more complete set of flags. Otherwise there is little difference, and the important thing is that the core file, model.py, is unchanged.
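To give a feel for what ops.py holds, here is a rough sketch of that kind of convolution helper. It is written from memory purely as an illustration, so the exact names and defaults may differ from the real file (though the w and biases variable names do show up in the run log further down):

# Sketch of an ops.py-style convolution helper (reconstructed from memory,
# may differ from the repository): a 5x5, stride-2 convolution whose weights
# "w" and "biases" are created under a named variable scope.
import tensorflow as tf

def conv2d(input_, output_dim, k=5, stride=2, stddev=0.02, name="conv2d"):
    with tf.variable_scope(name):
        w = tf.get_variable("w", [k, k, input_.get_shape()[-1], output_dim],
                            initializer=tf.truncated_normal_initializer(stddev=stddev))
        conv = tf.nn.conv2d(input_, w, strides=[1, stride, stride, 1], padding="SAME")
        biases = tf.get_variable("biases", [output_dim],
                                 initializer=tf.constant_initializer(0.0))
        return tf.nn.bias_add(conv, biases)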

You need to create a folder named anime under the data folder to hold the cartoon face images. The data folder sits in the same directory as the source code, and it should already contain the mnist and celebA data downloaded earlier.
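To confirm that main.py will actually see the images, a quick sanity check like the one below can help. This is my own little snippet, not part of the repository; it just reuses the same data_dir, dataset and input_fname_pattern values as the flags shown further down:

# Sanity check (my own helper, not from DCGAN-tensorflow):
# does ./data/anime contain the .jpg faces that main.py will glob for?
import os
import glob

data_dir = "./data"   # matches the data_dir flag
dataset = "anime"     # matches the dataset flag
pattern = "*.jpg"     # matches the input_fname_pattern flag

files = glob.glob(os.path.join(data_dir, dataset, pattern))
print("found {} images in {}".format(len(files), os.path.join(data_dir, dataset)))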

Windows cmd can run .py files too; that is not something only macOS or Linux can do. Type python followed by a space, then drag the .py file you want to run onto the cmd window (so you do not have to type out the ridiculously long path), add another space, pass the arguments in the form --option value, and press Enter to run.
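For example, an illustrative command line for the run described below (the flags are the same ones listed in the next section; adjust paths to your own setup) would look roughly like this:

python main.py --dataset anime --input_height 96 --output_height 48 --crop --train --epoch 2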

Running it this way gave me an error saying the folder for saving and loading checkpoints could not be found; adding --checkpoint_dir anime_64_48_48 fixed it. If you would rather not run it from cmd, change the flag defaults at the top of main.py as follows:

import numpy as np        # main.py already imports these at the top;
import tensorflow as tf   # shown here so the snippet is self-contained

flags = tf.app.flags
# flags.DEFINE_integer("epoch", 25, "Epoch to train [25]")
flags.DEFINE_integer("epoch", 2, "Epoch to train [25]")
flags.DEFINE_float("learning_rate", 0.0002, "Learning rate of for adam [0.0002]")
flags.DEFINE_float("beta1", 0.5, "Momentum term of adam [0.5]")
flags.DEFINE_float("train_size", np.inf, "The size of train images [np.inf]")
flags.DEFINE_integer("batch_size", 64, "The size of batch images [64]")
# flags.DEFINE_integer("input_height", 108, "The size of image to use (will be center cropped). [108]")
flags.DEFINE_integer("input_height", 96, "The size of image to use (will be center cropped). [108]")
flags.DEFINE_integer("input_width", None, "The size of image to use (will be center cropped). If None, same value as input_height [None]")
# flags.DEFINE_integer("output_height", 64, "The size of the output images to produce [64]")
flags.DEFINE_integer("output_height", 48, "The size of the output images to produce [64]")
flags.DEFINE_integer("output_width", None, "The size of the output images to produce. If None, same value as output_height [None]")
# flags.DEFINE_string("dataset", "celebA", "The name of dataset [celebA, mnist, lsun]")
flags.DEFINE_string("dataset", "anime", "The name of dataset [celebA, mnist, lsun]")
flags.DEFINE_string("input_fname_pattern", "*.jpg", "Glob pattern of filename of input images [*]")
# flags.DEFINE_string("checkpoint_dir", "checkpoint", "Directory name to save the checkpoints [checkpoint]")
flags.DEFINE_string("checkpoint_dir", "anime_64_48_48", "Directory name to save the checkpoints [checkpoint]")
flags.DEFINE_string("data_dir", "./data", "Root directory of dataset [data]")
flags.DEFINE_string("sample_dir", "samples", "Directory name to save the image samples [samples]")
# flags.DEFINE_boolean("train", False, "True for training, False for testing [False]")
flags.DEFINE_boolean("train", True, "True for training, False for testing [False]")
# flags.DEFINE_boolean("crop", False, "True for training, False for testing [False]")
flags.DEFINE_boolean("crop", True, "True for training, False for testing [False]")
flags.DEFINE_boolean("visualize", False, "True for visualizing, False for nothing [False]")
flags.DEFINE_integer("generate_test_images", 100, "Number of images to generate during test. [100]")
FLAGS = flags.FLAGS

Because the images are square, only input_height and output_height need to be set. The cartoon faces here are 96x96 pixels, so the input parameter is 96, and the output is a 48x48 square. The checkpoint directory name anime_64_48_48 is the anime dataset name followed by the batch size (64) and the 48x48 output size; I took the name straight from the error message.
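As a minimal sketch of where that name comes from (my own reconstruction; the real code lives in model.py and may differ slightly), the repository appears to build the checkpoint sub-directory from those flag values:

# My own illustration of how the checkpoint sub-directory name seems to be
# assembled (check model.py in the repository for the actual code).
def model_dir(dataset_name, batch_size, output_height, output_width):
    return "{}_{}_{}_{}".format(dataset_name, batch_size, output_height, output_width)

print(model_dir("anime", 64, 48, 48))   # -> anime_64_48_48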

The run then produced the following output:

{'batch_size': <absl.flags._flag.Flag object at 0x000000000958C5F8>,
 'beta1': <absl.flags._flag.Flag object at 0x000000000957DCF8>,
 'checkpoint_dir': <absl.flags._flag.Flag object at 0x000000000C3BFA90>,
 'crop': <absl.flags._flag.BooleanFlag object at 0x000000000C3BFC50>,
 'data_dir': <absl.flags._flag.Flag object at 0x000000000C3BFB00>,
 'dataset': <absl.flags._flag.Flag object at 0x000000000C3BF940>,
 'epoch': <absl.flags._flag.Flag object at 0x000000000126F0F0>,
 'generate_test_images': <absl.flags._flag.Flag object at 0x000000000C3BFD68>,
 'h': <tensorflow.python.platform.app._HelpFlag object at 0x000000000C3BFDA0>,
 'help': <tensorflow.python.platform.app._HelpFlag object at 0x000000000C3BFDA0>,
 'helpfull': <tensorflow.python.platform.app._HelpfullFlag object at 0x000000000C3BFE48>,
 'helpshort': <tensorflow.python.platform.app._HelpshortFlag object at 0x000000000C3BFEB8>,
 'input_fname_pattern': <absl.flags._flag.Flag object at 0x000000000C3BF9E8>,
 'input_height': <absl.flags._flag.Flag object at 0x000000000958C6D8>,
 'input_width': <absl.flags._flag.Flag object at 0x000000000C3BF7B8>,
 'learning_rate': <absl.flags._flag.Flag object at 0x0000000004388FD0>,
 'output_height': <absl.flags._flag.Flag object at 0x000000000C3BF828>,
 'output_width': <absl.flags._flag.Flag object at 0x000000000C3BF8D0>,
 'sample_dir': <absl.flags._flag.Flag object at 0x000000000C3BFB70>,
 'train': <absl.flags._flag.BooleanFlag object at 0x000000000C3BFBA8>,
 'train_size': <absl.flags._flag.Flag object at 0x000000000958C208>,
 'visualize': <absl.flags._flag.BooleanFlag object at 0x000000000C3BFCC0>}
---------
Variables: name (type shape) [size]
---------
generator/g_h0_lin/Matrix:0 (float32_ref 100x4608) [460800, bytes: 1843200]
generator/g_h0_lin/bias:0 (float32_ref 4608) [4608, bytes: 18432]
generator/g_bn0/beta:0 (float32_ref 512) [512, bytes: 2048]
generator/g_bn0/gamma:0 (float32_ref 512) [512, bytes: 2048]
generator/g_h1/w:0 (float32_ref 5x5x256x512) [3276800, bytes: 13107200]
generator/g_h1/biases:0 (float32_ref 256) [256, bytes: 1024]
generator/g_bn1/beta:0 (float32_ref 256) [256, bytes: 1024]
generator/g_bn1/gamma:0 (float32_ref 256) [256, bytes: 1024]
generator/g_h2/w:0 (float32_ref 5x5x128x256) [819200, bytes: 3276800]
generator/g_h2/biases:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/beta:0 (float32_ref 128) [128, bytes: 512]
generator/g_bn2/gamma:0 (float32_ref 128) [128, bytes: 512]
generator/g_h3/w:0 (float32_ref 5x5x64x128) [204800, bytes: 819200]
generator/g_h3/biases:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/beta:0 (float32_ref 64) [64, bytes: 256]
generator/g_bn3/gamma:0 (float32_ref 64) [64, bytes: 256]
generator/g_h4/w:0 (float32_ref 5x5x3x64) [4800, bytes: 19200]
generator/g_h4/biases:0 (float32_ref 3) [3, bytes: 12]
discriminator/d_h0_conv/w:0 (float32_ref 5x5x3x64) [4800, bytes: 19200]
discriminator/d_h0_conv/biases:0 (float32_ref 64) [64, bytes: 256]
discriminator/d_h1_conv/w:0 (float32_ref 5x5x64x128) [204800, bytes: 819200]
discriminator/d_h1_conv/biases:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn1/beta:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_bn1/gamma:0 (float32_ref 128) [128, bytes: 512]
discriminator/d_h2_conv/w:0 (float32_ref 5x5x128x256) [819200, bytes: 3276800]
discriminator/d_h2_conv/biases:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_bn2/beta:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_bn2/gamma:0 (float32_ref 256) [256, bytes: 1024]
discriminator/d_h3_conv/w:0 (float32_ref 5x5x256x512) [3276800, bytes: 13107200]
discriminator/d_h3_conv/biases:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_bn3/beta:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_bn3/gamma:0 (float32_ref 512) [512, bytes: 2048]
discriminator/d_h4_lin/Matrix:0 (float32_ref 4608x1) [4608, bytes: 18432]
discriminator/d_h4_lin/bias:0 (float32_ref 1) [1, bytes: 4]
Total size of variables: 9086340
Total bytes of variables: 36345360
 [*] Reading checkpoints...
 [*] Success to read DCGAN.model-2
 [*] Load SUCCESS
Epoch: [ 0/ 2] [   0/ 800] time: 38.8909, d_loss: 4.83695269, g_loss: 0.01604199
Epoch: [ 0/ 2] [   1/ 800] time: 75.9409, d_loss: 4.21985102, g_loss: 0.03411261
Epoch: [ 0/ 2] [   2/ 800] time: 112.8350, d_loss: 4.44241142, g_loss: 0.02101352
Epoch: [ 0/ 2] [   3/ 800] time: 149.5263, d_loss: 3.07594109, g_loss: 0.11336002
Epoch: [ 0/ 2] [   4/ 800] time: 186.3891, d_loss: 2.70502377, g_loss: 0.14985262
Epoch: [ 0/ 2] [   5/ 800] time: 223.3300, d_loss: 3.69357109, g_loss: 0.04717619
Epoch: [ 0/ 2] [   6/ 800] time: 260.0525, d_loss: 2.17558765, g_loss: 0.68981493
Epoch: [ 0/ 2] [   7/ 800] time: 297.1181, d_loss: 5.51236010, g_loss: 0.00628419
Epoch: [ 0/ 2] [   8/ 800] time: 333.8094, d_loss: 2.00994205, g_loss: 0.82554901
Epoch: [ 0/ 2] [   9/ 800] time: 370.4227, d_loss: 5.24547815, g_loss: 0.01088684
Epoch: [ 0/ 2] [  10/ 800] time: 407.0671, d_loss: 2.80249119, g_loss: 1.36248946
Epoch: [ 0/ 2] [  11/ 800] time: 443.8676, d_loss: 5.26725864, g_loss: 0.01024086
Epoch: [ 0/ 2] [  12/ 800] time: 480.6368, d_loss: 1.86196470, g_loss: 1.17058265
Epoch: [ 0/ 2] [  13/ 800] time: 517.5933, d_loss: 4.18832588, g_loss: 0.02691788
Epoch: [ 0/ 2] [  14/ 800] time: 554.1754, d_loss: 2.07127762, g_loss: 2.26849294
Epoch: [ 0/ 2] [  15/ 800] time: 590.8510, d_loss: 4.49704552, g_loss: 0.01644055
Epoch: [ 0/ 2] [  16/ 800] time: 627.6671, d_loss: 1.76221132, g_loss: 2.19296288
Epoch: [ 0/ 2] [  17/ 800] time: 664.5924, d_loss: 3.17999744, g_loss: 0.08178606
Epoch: [ 0/ 2] [  18/ 800] time: 701.6112, d_loss: 1.68936574, g_loss: 3.26065779
Epoch: [ 0/ 2] [  19/ 800] time: 738.1621, d_loss: 3.68083525, g_loss: 0.05507284
Epoch: [ 0/ 2] [  20/ 800] time: 774.7442, d_loss: 1.62791634, g_loss: 1.72477102
Epoch: [ 0/ 2] [  21/ 800] time: 811.2326, d_loss: 4.06185627, g_loss: 0.03332135
Epoch: [ 0/ 2] [  22/ 800] time: 848.3139, d_loss: 0.83045113, g_loss: 3.93568897
Epoch: [ 0/ 2] [  23/ 800] time: 884.9740, d_loss: 3.16240478, g_loss: 0.05494529
Epoch: [ 0/ 2] [  24/ 800] time: 921.7744, d_loss: 1.69060552, g_loss: 1.86502504
Epoch: [ 0/ 2] [  25/ 800] time: 958.3253, d_loss: 4.17921877, g_loss: 0.03710852
[Cancelled]

I did the maths: even a single epoch would take me about 38 hours, and this is my home computer, so I gave up on the spot. This one really has to wait until I have a better machine; running it on a CPU is hopeless, a GPU is a must! If anyone with a monster machine passes by, please give it a try, and do not forget to tell me the result.
