- Check GPU usage on the server (shell)
watch -n 1 nvidia-smi
- Check whether a GPU is available in the current environment (Python)
torch.cuda.is_available()
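A minimal sketch of this check, guarded so it also runs on machines where torch is not installed (the guard is an addition for portability, not part of the original note):

```python
import importlib.util

# Only query CUDA if torch is actually installed on this machine.
if importlib.util.find_spec("torch") is not None:
    import torch
    has_gpu = torch.cuda.is_available()              # True if a usable CUDA device exists
    n_gpus = torch.cuda.device_count() if has_gpu else 0
else:
    has_gpu, n_gpus = False, 0                       # torch missing; assume CPU-only

print(f"GPU available: {has_gpu}, count: {n_gpus}")
```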
- Use one specific GPU (single GPU)
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "3"  # set before the first CUDA call
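The ordering matters here: `CUDA_VISIBLE_DEVICES` must be set before CUDA is initialized (in practice, before the first CUDA call, and often before `import torch`), so put it at the very top of the script. Device id `"3"` below is just the example id from the note:

```python
import os

# Must be set before CUDA initialization; afterwards it has no effect.
os.environ["CUDA_VISIBLE_DEVICES"] = "3"   # only physical GPU 3 is visible

# Inside this process, that GPU is then addressed as cuda:0.
visible = os.environ["CUDA_VISIBLE_DEVICES"]
print(f"visible devices: {visible}")
```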
- Move data onto the GPU for computation
Method 1:
a_cuda = a.cuda()
Method 2:
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.D.to(self.device)  # self.device is 'cpu' or 'cuda'
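Method 2 above is the device-agnostic pattern: pick the device once, then move both the model and its inputs with `.to(device)`. A small self-contained sketch (guarded so it also runs where torch is absent; the `Linear(4, 2)` model and tensor shapes are illustrative assumptions):

```python
import importlib.util

if importlib.util.find_spec("torch") is not None:
    import torch
    # Choose the device once, then use it everywhere.
    device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
    model = torch.nn.Linear(4, 2).to(device)   # move model parameters
    x = torch.randn(3, 4).to(device)           # move input data the same way
    y = model(x)                               # runs on GPU if available, else CPU
    out_shape = tuple(y.shape)
else:
    out_shape = (3, 2)                         # torch not installed; expected shape

print(out_shape)
```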
- Speed-up
torch.backends.cudnn.benchmark = True  # lets cuDNN pick the fastest conv algorithms (helps when input sizes are fixed)
- Converting and loading models between GPU and CPU
CPU -> CPU, GPU -> GPU:
torch.load('gen_500000.pkl')
GPU -> CPU:
torch.load('gen_500000.pkl', map_location=lambda storage, loc: storage)
CPU -> GPU 1:
torch.load('gen_500000.pkl', map_location=lambda storage, loc: storage.cuda(1))
Reference: https://blog.csdn.net/dcrmg/article/details/79503978
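A round-trip sketch of the GPU/CPU loading rules above: save a tensor, then reload it onto the CPU. `map_location='cpu'` is a shorthand PyTorch accepts for the `lambda storage, loc: storage` form; the temp-file path and tensor are illustrative, not from the original note:

```python
import importlib.util
import os
import tempfile

if importlib.util.find_spec("torch") is not None:
    import torch
    path = os.path.join(tempfile.mkdtemp(), "gen_demo.pkl")  # hypothetical filename
    torch.save(torch.randn(2, 2), path)
    # GPU -> CPU (also a no-op for CPU -> CPU):
    t = torch.load(path, map_location='cpu')
    loaded_on_cpu = (t.device.type == 'cpu')
else:
    loaded_on_cpu = True   # torch not installed; nothing to verify

print(f"loaded on cpu: {loaded_on_cpu}")
```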
- Multi-GPU
https://blog.csdn.net/qq_19598705/article/details/80396325