Error: AttributeError: 'NoneType' object has no attribute 'device'

Today I was running a test, and the test was run under Horovod.

The problem was in the code that loads the weight (parameter) file: before calling load_weights, the model has to be built first, and that build call produced the following error:

Exception ignored in: <bound method _RandomSeedGeneratorDeleter.__del__ of <tensorflow.python.data.ops.dataset_ops._RandomSeedGeneratorDeleter object at 0x7f363100d4e0>>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 3462, in __del__
AttributeError: 'NoneType' object has no attribute 'device'

This is a truly cryptic error, and I couldn't work out what was wrong at all. In any case, commenting out the build line made the error go away.
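For context, here is a minimal sketch of the pattern involved, using the model and checkpoint names from the full program shown later in this post (with the corrected input shape that is arrived at below):

# Build the model first so its variables exist, then restore them
# from the HDF5 checkpoint.
mnist_model.build(input_shape=(None, 28, 28, 1))
mnist_model.load_weights('checkpoint-1.h5')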

So I dropped Horovod and rewrote the test in plain TensorFlow 2.1.0, and found it ran fine with no problems.

But on my first run, the error looked like this:

Traceback (most recent call last):
  File "error.py", line 30, in <module>
    mnist_model.build(input_shape = (None, 28 ,28))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/sequential.py", line 260, in build
    super(Sequential, self).build(input_shape)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/network.py", line 682, in build
    self.call(x, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/sequential.py", line 281, in call
    outputs = layer(inputs, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 737, in __call__
    self.name)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/input_spec.py", line 177, in assert_input_compatibility
    str(x.shape.as_list()))
ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 28, 28]
Exception ignored in: <bound method _RandomSeedGeneratorDeleter.__del__ of <tensorflow.python.data.ops.dataset_ops._RandomSeedGeneratorDeleter object at 0x7f7c76f37320>>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/data/ops/dataset_ops.py", line 3462, in __del__
AttributeError: 'NoneType' object has no attribute 'device'

You can see the problem is the model.build line, and the output ends with the same AttributeError: 'NoneType' object has no attribute 'device'.

This time, though, the error output is more verbose, and the useful piece of information is the line five lines from the bottom:

ValueError: Input 0 of layer conv2d is incompatible with the layer: expected ndim=4, found ndim=3. Full shape received: [None, 28, 28]

So the input shape passed to build was wrong: the first layer is a Conv2D, which expects 4-D input of shape (batch, height, width, channels). After correcting the shape, I found that running outside of Horovod produced no errors.
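Concretely, the fix (matching the build call in the full program below) is to add the missing channel axis:

# Wrong: 3-D shape with no channel axis, so the Conv2D layer reports ndim=3.
# mnist_model.build(input_shape=(None, 28, 28))

# Correct: 4-D NHWC shape with a single grayscale channel.
mnist_model.build(input_shape=(None, 28, 28, 1))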


Finally, I stumbled on this warning:

[1,0]<stderr>:2020-06-30 08:34:47.818137: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 376320000 exceeds 10% of system memory.
[1,0]<stderr>:2020-06-30 08:34:49.030225: W tensorflow/core/framework/cpu_allocator_impl.cc:81] Allocation of 188160000 exceeds 10% of system memory.

This warning I do recognize: it means a single allocation exceeds 10% of system memory. But my program is just an MNIST test, running on a rather good server that I had entirely to myself, with no other jobs consuming memory, so why would this warning appear? Others have hit the same error and raised the same suspicion in https://github.com/tensorflow/tensorflow/issues/35326.
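As an aside, a back-of-the-envelope check (my own arithmetic here, not something from the issue thread) suggests the two reported sizes correspond exactly to the full MNIST training set being materialized twice by the input pipeline: once as float64 from the mnist_images / 255.0 division, and once as float32 after the tf.cast:

# 60000 MNIST training images, 28x28 pixels each
elements = 60000 * 28 * 28    # 47,040,000 values
print(elements * 8)           # 376,320,000 bytes as float64 (result of / 255.0)
print(elements * 4)           # 188,160,000 bytes as float32 (after tf.cast)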

To check whether this really was the problem, I watched memory usage for the entire run and never saw it go above 10%. Even so, I commented out the data-processing part of the program, and sure enough it then ran cleanly with no error. So whether this error is truly a memory problem remains to be confirmed; my preliminary judgment is that it may have some connection to memory.

Here is the program, with a detailed walkthrough below:

import tensorflow as tf
import horovod.tensorflow.keras as hvd
import os
import datetime
import package

time_start = datetime.datetime.now()

# Initialization
Log, arg = package.initial()

# Specify the GPU settings and the optimizer
gpus, opt = package.gpu_setting('keras+tensorflow2.0', Log)


(mnist_images, mnist_labels), _ = \
    tf.keras.datasets.mnist.load_data(path='mnist-%d.npz' % hvd.rank())

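# Note: from_tensor_slices keeps the entire training set in memory as tensors,
# which lines up with the large allocations reported in the warnings above.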
dataset = tf.data.Dataset.from_tensor_slices(
    (tf.cast(mnist_images[..., tf.newaxis] / 255.0, tf.float32),
             tf.cast(mnist_labels, tf.int64))
)
# dataset = dataset.repeat().shuffle(10000).batch(128)
dataset = dataset.repeat().batch(128)
mnist_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, [3, 3], activation='relu'),
    tf.keras.layers.Conv2D(64, [3, 3], activation='relu'),
    tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
    tf.keras.layers.Dropout(0.25),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation='softmax')
])

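# Horovod: the Horovod Keras examples pass experimental_run_tf_function=False
# so that TensorFlow uses Horovod's distributed optimizer to compute gradients.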
mnist_model.compile(loss=tf.losses.SparseCategoricalCrossentropy(),
                    optimizer=opt,
                    metrics=['accuracy'],
                    experimental_run_tf_function=False)
# weight_file = os.path.join(arg.ckp_path,'checkpoint-break-step-64.h5')
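# Build the model and restore weights on rank 0 only.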
if hvd.rank() == 0:
    mnist_model.build(input_shape=(None, 28, 28, 1))
    mnist_model.load_weights('checkpoint-1.h5')

Line 23 of the script, the commented-out "# dataset = dataset.repeat().shuffle(10000).batch(128)", is the original program.

Line 24 below it, "dataset = dataset.repeat().batch(128)", is my modified version.

Experimentally, replacing line 23 with line 24 (that is, removing the shuffle) makes the error disappear.

So is it really a memory issue???

I can only say there is some connection, but it is probably not caused by running out of memory: the shuffle(10000) buffer holds only 10000 examples, roughly 10000 × 28 × 28 × 4 bytes ≈ 31 MB as float32. The traceback itself hints at a different mechanism. The "Exception ignored in" wrapper means the AttributeError is raised inside _RandomSeedGeneratorDeleter.__del__, and that deleter only exists because shuffle creates a random seed generator. A plausible reading is that the deleter's __del__ runs during interpreter shutdown, after a module global it depends on has already been cleared to None, hence 'NoneType' object has no attribute 'device'; that would make this a teardown-ordering bug rather than a memory shortage.
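To illustrate that shutdown pattern, here is a tiny self-contained Python sketch, purely hypothetical and not TensorFlow code, that reproduces the same "Exception ignored in ... __del__" shape:

class Helper:
    device = "cpu"

class Deleter:
    def __del__(self):
        # If the global `helper` has already been cleared to None by the time
        # this destructor runs, this access raises AttributeError, which Python
        # swallows and reports as "Exception ignored in: <bound method ...>".
        helper.device

helper = Helper()
keeper = Deleter()
helper = None   # simulate the module global being torn down first
del keeper      # prints: Exception ignored in ... followed by
                # AttributeError: 'NoneType' object has no attribute 'device'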
