- Check GPU usage on the server (shell)
watch -n 1 nvidia-smi
- Check whether a GPU is available in the current environment (Python)
torch.cuda.is_available()
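A minimal self-contained check; it also runs on a CPU-only machine, where the device count is simply 0:

```python
import torch

# True if a CUDA-capable GPU and a working driver are visible to PyTorch
has_gpu = torch.cuda.is_available()

# Number of GPUs visible to this process (0 on a CPU-only machine)
n_gpus = torch.cuda.device_count()

print(has_gpu, n_gpus)
```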
- Pin the process to a specific GPU (single GPU)
os.environ["CUDA_VISIBLE_DEVICES"] = "3"  # must be set before CUDA is initialized
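A sketch of the safe ordering: CUDA reads this variable when it initializes, so set it before the first CUDA call (setting it before `import torch` is the simplest way to be sure). The index "3" is just an example; inside the process the chosen card is then addressed as `cuda:0`:

```python
import os

# Only physical GPU 3 will be visible to this process;
# PyTorch will see it as cuda:0
os.environ["CUDA_VISIBLE_DEVICES"] = "3"

import torch  # imported after the variable is set

# 1 if the machine actually has a GPU 3, otherwise 0
print(torch.cuda.device_count())
```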
- Move data onto the GPU for computation
Method 1:
a_cuda = a.cuda()
Method 2:
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
self.D.to(self.device)  # self.device is 'cpu' or 'cuda'
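The device-agnostic pattern of Method 2 as a runnable sketch (the model and shapes here are placeholders). One pitfall worth a comment: `.to()` moves an `nn.Module` in place, but for a tensor it returns a new tensor, so tensors must be reassigned:

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(3, 2)
model.to(device)   # modules are moved in place

x = torch.randn(4, 3)
x = x.to(device)   # tensors are NOT moved in place: reassign the result

y = model(x)
print(y.shape)  # torch.Size([4, 2])
```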
- Speedup
torch.backends.cudnn.benchmark = True  # autotune cuDNN kernels; helps when input sizes are fixed
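With `benchmark = True`, cuDNN times several convolution algorithms on the first forward pass and caches the fastest one per input shape, so it pays off when shapes stay constant and can hurt when they change every iteration. The flag itself can be toggled even without a GPU:

```python
import torch

# Let cuDNN benchmark and cache the fastest conv algorithm per input shape.
# Worthwhile when batch size and image size are fixed across iterations.
torch.backends.cudnn.benchmark = True
print(torch.backends.cudnn.benchmark)  # True
```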
- Converting and loading models between GPU and CPU
CPU->CPU, GPU->GPU
torch.load('gen_500000.pkl')
GPU->CPU
torch.load('gen_500000.pkl', map_location=lambda storage, loc: storage)
CPU->GPU1
torch.load('gen_500000.pkl', map_location=lambda storage, loc: storage.cuda(1))
Reference: https://blog.csdn.net/dcrmg/article/details/79503978
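A save/load round trip that runs without a GPU; `map_location='cpu'` is the string form equivalent to the `lambda storage, loc: storage` above, forcing every tensor onto the CPU. The filename below is a placeholder, not the checkpoint from the notes:

```python
import os
import tempfile
import torch
import torch.nn as nn

model = nn.Linear(3, 2)
path = os.path.join(tempfile.mkdtemp(), 'gen_demo.pkl')  # hypothetical path
torch.save(model.state_dict(), path)

# map_location='cpu' remaps all stored tensors to the CPU,
# so a GPU-trained checkpoint loads on a CPU-only machine
state = torch.load(path, map_location='cpu')
model.load_state_dict(state)
print(sorted(state.keys()))  # ['bias', 'weight']
```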
- Multi-GPU
https://blog.csdn.net/qq_19598705/article/details/80396325
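The simplest multi-GPU entry point is `nn.DataParallel`, which scatters the batch (dim 0) across visible GPUs and gathers the outputs on GPU 0; `DistributedDataParallel` is the recommended choice for serious training. A sketch that also runs on a CPU-only machine, since we only wrap when more than one GPU is present:

```python
import torch
import torch.nn as nn

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

model = nn.Linear(3, 2)
if torch.cuda.device_count() > 1:
    # Each forward pass splits the batch across all visible GPUs
    model = nn.DataParallel(model)
model = model.to(device)

x = torch.randn(8, 3).to(device)
y = model(x)
print(y.shape)  # torch.Size([8, 2])
```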