Common PyTorch problems and solutions
- RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time
Solution: To reduce memory usage, PyTorch frees all intermediary results during the .backward() call as soon as they are no longer needed. If you then call .backward() again, those intermediary results no longer exist and the backward pass cannot be performed, producing this error. Call .backward(retain_graph=True) to keep the intermediary results so that a later .backward() call still works. Every call to backward except the last should pass retain_graph=True.
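A minimal sketch of the fix: backward through the same graph twice, retaining it on the first call. Note that gradients accumulate across calls unless you zero them.

```python
import torch

x = torch.tensor([2.0], requires_grad=True)
y = x ** 2  # dy/dx = 2x = 4 at x = 2

# First backward: keep the graph so a second backward is possible.
y.backward(retain_graph=True)
first = x.grad.clone()   # 4.0

# Last backward: no retain_graph needed; gradients accumulate.
y.backward()
second = x.grad.clone()  # 4.0 + 4.0 = 8.0

print(first.item(), second.item())
```

Without retain_graph=True on the first call, the second y.backward() would raise the RuntimeError above.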
- RuntimeError: copy_if failed to synchronize: device-side assert triggered
Solution: Run the code without CUDA (i.e. on the CPU) and you will see the real error message. CUDA kernels run asynchronously, so device-side asserts surface as this vague message at some later synchronization point; on the CPU the underlying error (often an out-of-range index) is raised immediately with a clear message.
- Problems when installing PyTorch:
ModuleNotFoundError: No module named 'past'
Solution: pip install future (the 'past' module is provided by the 'future' package).