DistributedDataParallel: Related Links
https://pytorch.org/docs/stable/notes/cuda.html
CUDA semantics — PyTorch 1.10.1 documentation. torch.cuda is used to set up and run CUDA operations. It keeps track of the currently selected GPU, and all CUDA tensors you allocate are by default created on that device. The selected device can be changed with a torch.cuda.device context manager.