Comparing TensorFlow and PyTorch

PyTorch pitfalls

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [2, 3, 13, 13, 85]], which is output 0 of AsStridedBackward, is at version 6; expected version 3 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
It tells me "Good luck"????????????????????????????????
Anyway, I tried every fix I could find: rewriting the in-place ops, replacing += with out-of-place versions, none of it helped. I read the source, and it's just a simple element-wise multiply, which left me speechless. Even stranger, the same code runs on machine A but fails on machine B: one has no GPU, the other does.
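For reference, the error can be reproduced with a tiny example (this is a toy case of my own, not the original model): torch.exp saves its output for the backward pass, so modifying that output in place trips autograd's version counter.

```python
import torch

# In-place modification of a tensor that autograd saved -> RuntimeError
x = torch.ones(3, requires_grad=True)
y = torch.exp(x)
y += 1          # in-place: bumps y's version counter
try:
    y.sum().backward()
except RuntimeError as e:
    print("caught:", type(e).__name__)

# The fix: the out-of-place form leaves the saved tensor untouched
x = torch.ones(3, requires_grad=True)
y = torch.exp(x)
y = y + 1       # allocates a new tensor instead of mutating y
y.sum().backward()
print(x.grad)   # gradient of sum(exp(x) + 1), i.e. exp(x)
```

The "version 6; expected version 3" in the message is exactly this counter: backward found the tensor had been mutated after it was saved.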

TensorFlow pitfalls

TensorFlow's low-level code doesn't go wrong on you, but it is tedious to write: even a batch normalization layer has to be written by hand, and the readability is terrible. With Keras, things get messy once multiple GPUs are involved. For example, data-parallel multi-GPU:
with tf.device('/cpu:0'):
    model = ...                        # build the Keras model on the CPU
parallel_model = multi_gpu_model(model)  # keras.utils.multi_gpu_model
On some machines this throws an error, but if you change it to:
with tf.device('/cpu:0'):
    model.predict()
then it doesn't error. I was completely baffled.
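To show what "writing batch norm yourself" in the low-level API actually looks like, here is a minimal sketch (assuming TF 2.x eager mode; the helper name and shapes are mine, not from the original post). A real layer would also have to track moving averages for inference, which Keras does for you.

```python
import tensorflow as tf

def batch_norm(x, eps=1e-5):
    # You manage the learnable scale/shift variables yourself
    depth = int(x.shape[-1])
    gamma = tf.Variable(tf.ones([depth]))    # learnable scale
    beta = tf.Variable(tf.zeros([depth]))    # learnable shift
    # Per-feature statistics over the batch axis
    mean, var = tf.nn.moments(x, axes=[0])
    return tf.nn.batch_normalization(x, mean, var, beta, gamma, eps)

x = tf.random.normal([8, 4])
out = batch_norm(x)   # each column now has ~zero mean, ~unit variance
```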
