PyTorch distributed RuntimeError: Address already in use
If this error appears when training with PyTorch distributed on a single machine with multiple GPUs, it is very easy to fix.
Traceback (most recent call last):
  File "main1.py", line 279, in <module>
    train(args, io,root)
  File "main1.py", line 53, in train
    torch.distributed.init_process_group('nccl', init_method='env://')
  File "/home/labpos/anaconda3/envs/ldr/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 400, in init_process_group
    store, rank, world_size = next(rendezvous(url))
  File "/home/labpos/anaconda3/envs/ldr/lib/python3.6/site-packages/torch/distributed/rendezvous.py", line 143, in _env_rendezvous_handler
    store = TCPStore(master_addr, master_port, world_size, start_daemon)
RuntimeError: Address already in use
The error occurs because the rendezvous TCPStore tries to bind the default master port (29500), which is already held by another process, typically an earlier or concurrent training job on the same machine. Passing any free port when launching the distributed run fixes it:
CUDA_VISIBLE_DEVICES=0,1 python -m torch.distributed.launch --nproc_per_node=2 --master_port 29501 main.py
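If you start workers yourself with torch.multiprocessing.spawn instead of torch.distributed.launch, the same idea can be applied programmatically. Below is a minimal sketch assuming that launch pattern; find_free_port and worker are hypothetical names not taken from the original post. The key point is that the port is chosen once in the parent process, so every rank rendezvouses on the same address:

import os
import socket

import torch.distributed as dist
import torch.multiprocessing as mp

def find_free_port() -> int:
    # Bind to port 0 so the OS picks an unused TCP port for us.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("", 0))
        return s.getsockname()[1]

def worker(rank: int, world_size: int, port: int) -> None:
    # All ranks must agree on MASTER_ADDR/MASTER_PORT for env:// rendezvous.
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = str(port)
    dist.init_process_group("nccl", init_method="env://",
                            rank=rank, world_size=world_size)
    # ... training code goes here ...
    dist.destroy_process_group()

if __name__ == "__main__":
    world_size = 2
    port = find_free_port()  # chosen once, before any worker starts
    mp.spawn(worker, args=(world_size, port), nprocs=world_size)

The essential detail is the same as the --master_port flag: the port must be free and identical across all ranks, so it is picked once before the workers are spawned.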