Common PyTorch Problems and Solutions

  1. RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time

Solution: To reduce memory usage, all intermediate results are freed during the .backward() call once they are no longer needed. If you then call .backward() again, those intermediate results no longer exist, so the backward pass cannot be performed and you get this error. Call .backward(retain_graph=True) to perform a backward pass that keeps the intermediate results, which lets you call .backward() again afterwards. Every call to backward except the last one should pass retain_graph=True.
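
A minimal sketch of the fix, using a toy scalar loss on a single tensor:

```python
import torch

x = torch.randn(3, requires_grad=True)
loss = (x ** 2).sum()

# Keep the graph alive so a second backward pass is possible.
loss.backward(retain_graph=True)

# Last backward call: the graph can now be freed as usual.
# Without retain_graph=True above, this line would raise the RuntimeError.
loss.backward()

print(x.grad)  # gradients from both passes accumulate: 2*x + 2*x = 4*x
```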

  2. RuntimeError: copy_if failed to synchronize: device-side assert triggered

Solution: Run the code on the CPU (without CUDA). The same operation will then fail with the real, readable error message instead of the generic device-side assert.
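
For example, an out-of-range class index is one common trigger of this assert; a hypothetical CPU reproduction surfaces the actual problem:

```python
import torch
import torch.nn as nn

# Hypothetical reproduction: class index 12 is out of range for 10 classes.
logits = torch.randn(4, 10)              # batch of 4, 10 classes
targets = torch.tensor([1, 3, 12, 0])    # 12 is invalid

criterion = nn.CrossEntropyLoss()

# On the GPU this shows up as "device-side assert triggered";
# on the CPU it raises an IndexError naming the offending target value.
loss = criterion(logits, targets)
```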

  3. Errors when installing PyTorch

ModuleNotFoundError: No module named 'past'
Solution: pip install future
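
The past module is distributed as part of the future compatibility package, so installing future makes the import available; a quick check, assuming pip install future has already been run:

```python
# After `pip install future`, the module that the tooling imports is present.
import past
print(past.__file__)  # shows where the 'past' package was installed
```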
