Distributed Data Parallel on DenseNet

Distributed training of a DenseNet model using PyTorch.

Requirements

  • Python 3.8.6

Install the dependencies:

pip install -r requirements.txt

Training

Single GPU training:

CUDA_VISIBLE_DEVICES=0 python train.py
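
train.py itself is not reproduced here; the following is a minimal sketch of what a single-GPU DenseNet training loop could look like. The model variant (DenseNet-121), dataset (CIFAR-10), and hyperparameters are illustrative assumptions, not taken from the repository.

# Hypothetical sketch of a single-GPU training loop; details may differ from train.py.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

def main():
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    dataset = torchvision.datasets.CIFAR10(root="./data", train=True,
                                           download=True, transform=T.ToTensor())
    loader = torch.utils.data.DataLoader(dataset, batch_size=64,
                                         shuffle=True, num_workers=2)
    model = torchvision.models.densenet121(num_classes=10).to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    model.train()
    for epoch in range(10):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
        print(f"epoch {epoch}: loss {loss.item():.4f}")

if __name__ == "__main__":
    main()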

Distributed training using two GPUs:

CUDA_VISIBLE_DEVICES=0,1 python train_ddp.py -g 2
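
The usual pattern behind a DDP launcher like train_ddp.py is one process per GPU, with the model wrapped in DistributedDataParallel so gradients are all-reduced during backward. Below is a hedged sketch of that pattern; the MASTER_ADDR/MASTER_PORT values, the FakeData stand-in dataset, and the exact wiring of the -g flag are assumptions for illustration, not the repository's code.

# Hypothetical sketch of the DDP pattern: one spawned process per GPU.
import argparse
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torchvision

def worker(rank, world_size):
    # Rendezvous settings are assumptions; real scripts may read them from the env.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "12355")
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    model = torchvision.models.densenet121(num_classes=10).cuda(rank)
    model = nn.parallel.DistributedDataParallel(model, device_ids=[rank])
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    # FakeData is a stand-in; DistributedSampler gives each process a distinct shard.
    dataset = torchvision.datasets.FakeData(size=512, image_size=(3, 32, 32),
                                            num_classes=10,
                                            transform=torchvision.transforms.ToTensor())
    sampler = torch.utils.data.distributed.DistributedSampler(dataset)
    loader = torch.utils.data.DataLoader(dataset, batch_size=64, sampler=sampler)

    for epoch in range(2):
        sampler.set_epoch(epoch)  # reshuffle the shards each epoch
        for images, labels in loader:
            images, labels = images.cuda(rank), labels.cuda(rank)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()  # DDP all-reduces gradients across processes here
            optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-g", "--gpus", type=int, default=1)
    args = parser.parse_args()
    mp.spawn(worker, args=(args.gpus,), nprocs=args.gpus)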

Distributed training using two GPUs with Mixed Precision:

CUDA_VISIBLE_DEVICES=0,1 python train_ddp_mp.py -g 2
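
Mixed precision is typically layered on top of the DDP loop with torch.cuda.amp: the forward pass runs under autocast in float16 where safe, and a GradScaler rescales the loss so small gradients do not underflow. The following sketch shows only the changed training step; it is an assumed illustration, not the contents of train_ddp_mp.py.

# Hypothetical sketch of the mixed-precision training step.
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, criterion, optimizer, images, labels):
    optimizer.zero_grad()
    # Eligible ops in the forward pass execute in half precision.
    with torch.cuda.amp.autocast():
        loss = criterion(model(images), labels)
    # Scale the loss to avoid float16 gradient underflow, then unscale and step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss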
