Notes on PyTorch Model Inference

with torch.no_grad()

  • Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward().
  • It will reduce memory consumption for computations that would otherwise have requires_grad=True.
  • In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True.
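
A minimal sketch to verify this behavior (the tensor names are illustrative):

import torch

x = torch.ones(3, requires_grad=True)

with torch.no_grad():
    y = x * 2
print(y.requires_grad)   # False: computed inside no_grad

z = x * 2
print(z.requires_grad)   # True: computed with gradient tracking on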

nn.Module.eval()

  • This has an effect only on certain modules. See the documentation of particular modules for details of their behavior in training/evaluation mode, if they are affected, e.g. Dropout, BatchNorm, etc.
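
For example, Dropout randomly zeroes activations in training mode but becomes the identity in evaluation mode. A minimal sketch (p=0.5 chosen arbitrarily):

import torch
import torch.nn as nn

drop = nn.Dropout(p=0.5)
x = torch.ones(4)

drop.train()     # training mode: elements randomly zeroed, survivors scaled by 1/(1-p)
print(drop(x))   # e.g. tensor([2., 0., 0., 2.])

drop.eval()      # evaluation mode: Dropout is a no-op
print(drop(x))   # tensor([1., 1., 1., 1.])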

Remark: If the model is only used for inference, both with torch.no_grad() and model.eval() must be written out explicitly!

import torch

model = ModelArch()   # ModelArch: your model class (placeholder)
# map_location keeps the loaded tensors on the CPU, regardless of the device they were saved from
checkpoint = torch.load(checkpoint_path, map_location=lambda storage, loc: storage)
model.load_state_dict(checkpoint)
model.eval()   # CORE-1: put Dropout/BatchNorm into evaluation mode
with torch.no_grad():  # CORE-2: disable gradient tracking
    ...
    output = model(input)
    ...
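
To see why both lines are needed, here is a quick check with a toy model (the two-layer Sequential is just for illustration): model.eval() alone does not stop autograd from building a graph, and torch.no_grad() alone does not switch Dropout/BatchNorm out of training behavior.

import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))
x = torch.ones(1, 4)

model.eval()                       # eval() alone: Dropout off, but autograd still tracks
print(model(x).requires_grad)      # True

model.train()
with torch.no_grad():              # no_grad() alone: autograd off, but Dropout still active
    print(model(x).requires_grad)  # False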

 
