Tech/PyTorch

    nn.AdaptiveAvgPool1d

    https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool1d.html
    Pools the input down to the output_size you pass to torch.nn.AdaptiveAvgPool1d(output_size). That's just the 1d case, of course; the 2d and 3d versions presumably work the same way.
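
    A minimal sketch of what that looks like (the shapes here are just illustrative):

        import torch
        import torch.nn as nn

        # Averages over adaptively-sized windows so that the last dimension
        # becomes output_size, regardless of the input length.
        pool = nn.AdaptiveAvgPool1d(output_size=5)
        x = torch.randn(2, 3, 17)   # (batch, channels, length)
        y = pool(x)
        print(y.shape)              # torch.Size([2, 3, 5])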

    DistributedDataParallel link roundup

    https://pytorch.org/docs/stable/notes/cuda.html?highlight=torch%20distributed%20init_process_group (CUDA semantics — PyTorch documentation; torch.cuda tracks the currently selected GPU, and CUDA tensors are created on that device by default)
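
    To go with those links, a minimal DDP setup sketch. This assumes a torchrun launch (which sets the RANK / WORLD_SIZE / LOCAL_RANK env vars) and uses a throwaway Linear model as a stand-in:

        import os
        import torch
        import torch.distributed as dist
        from torch.nn.parallel import DistributedDataParallel as DDP

        def main():
            # env:// rendezvous by default; torchrun provides RANK and WORLD_SIZE
            dist.init_process_group(backend="nccl")
            local_rank = int(os.environ["LOCAL_RANK"])
            torch.cuda.set_device(local_rank)

            model = torch.nn.Linear(10, 10).cuda(local_rank)
            ddp_model = DDP(model, device_ids=[local_rank])
            # ... usual training loop; DDP all-reduces gradients during backward()

            dist.destroy_process_group()

        if __name__ == "__main__":
            main()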

    IntermediateLayerGetter in torchvision.models

    Lets you intercept a layer's output partway through the model. https://github.com/biubug6/Pytorch_Retinaface/blob/master/models/retinaface.py (GitHub - biubug6/Pytorch_Retinaface: Retinaface gets 80.99% on the widerface hard val set using mobilenet0.25.) Found out about it while reading the repo above. …
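
    A small usage sketch. Note it actually lives in torchvision.models._utils (a private module, so it can move between versions), and it only sees a model's direct child modules; resnet18 and the layer names below are just an example:

        import torch
        from torchvision.models import resnet18
        from torchvision.models._utils import IntermediateLayerGetter

        backbone = resnet18(weights=None)
        # maps child-module names -> keys in the returned OrderedDict
        body = IntermediateLayerGetter(backbone, return_layers={"layer2": "feat2", "layer3": "feat3"})

        out = body(torch.rand(1, 3, 224, 224))
        print([(k, v.shape) for k, v in out.items()])
        # [('feat2', torch.Size([1, 128, 28, 28])), ('feat3', torch.Size([1, 256, 14, 14]))]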

    Fancier pbar usage

        from tqdm import tqdm

        # assumes train_dataloader, model, device, epoch come from the surrounding training loop
        pbar = tqdm(enumerate(train_dataloader), total=len(train_dataloader))
        model.train()
        epoch_loss = 0.0
        for batch, (data, labels) in pbar:
            data, labels = data.to(device), labels.to(device)
            # ... forward/backward pass; accumulate epoch_loss += loss.item()
            # (no explicit pbar.update() needed: iterating over pbar already advances it)
            pbar.set_description(
                f"Train: [{epoch + 1:03d}] "
                f"Loss: {(epoch_loss / (batch + 1)):.3f}, "
            )
        pbar.close()

    Inspecting actual torch weights

    torch.nn.Linear
    https://pytorch.org/docs/stable/_modules/torch/nn/modules/linear.html#Linear
    https://pytorch.org/docs/stable/generated/torch.nn.functional.linear.html
    https://pytorch.org/docs/stable/_modules/torch/nn/modules/module.html#Module
    Module also has the register-hook methods I've poked at before, plus zero_grad, requires_grad_, children, and so on..
    https://doodlrudco.tistory.com/entry/torchnnParameter-%EC%97%90-%EA%B4%..
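
    A quick sketch of poking at a Linear layer's weights directly (nothing here beyond stock PyTorch):

        import torch.nn as nn

        layer = nn.Linear(in_features=4, out_features=2)

        # weight is stored as (out_features, in_features); F.linear computes x @ weight.T + bias
        print(layer.weight.shape)   # torch.Size([2, 4])
        print(layer.bias.shape)     # torch.Size([2])

        # parameters are nn.Parameter (a Tensor subclass, requires_grad=True by default)
        for name, param in layer.named_parameters():
            print(name, param.shape, param.requires_grad)

        # state_dict gives plain tensors, handy for saving / inspecting weights
        print(list(layer.state_dict().keys()))  # ['weight', 'bias']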