diff --git a/README.md b/README.md
index f7e6eb7..2bbf094 100644
--- a/README.md
+++ b/README.md
@@ -2,6 +2,8 @@
 This codebase implements the system described in the paper ["AdaptIS: Adaptive Instance Selection Network"](https://arxiv.org/abs/1909.07829), Konstantin Sofiiuk, Olga Barinova, Anton Konushin. Accepted at ICCV 2019. The code performs **instance segmentation** and can be also used for **panoptic segmentation**.
+**[UPDATE]** We have released a PyTorch implementation of our algorithm (it currently supports only the ToyV1 and ToyV2 datasets on a single GPU). See the [pytorch](https://github.com/saic-vul/adaptis/tree/pytorch) branch.
+
 
 drawing
 
@@ -12,7 +14,7 @@ The code performs **instance segmentation** and can be also used for **panoptic
 We generated an even more complex synthetic dataset to show the main advantage of our algorithm over other detection-based instance segmentation algorithms. The new dataset contains 25000 images for training and 1000 images each for validation and testing. Each image has resolution of 128x128 and can contain from 12 to 52 highly overlapping objects.
 
-You can download the ToyV2 dataset from [here](https://drive.google.com/open?id=1iUMuWZUA4wzBC3ka01jkUM5hNqU3rV_U). You can test and visualize the model trained on this dataset using [this](notebooks/test_toy_v2_model.ipynb) notebook.
+You can download the ToyV2 dataset from [here](https://drive.google.com/open?id=1iUMuWZUA4wzBC3ka01jkUM5hNqU3rV_U). You can test and visualize the model trained on this dataset using [this](notebooks/test_toy_v2_model.ipynb) notebook. You can download the pretrained model from [here](https://drive.google.com/open?id=1RxepfpJF5gRpRNYu1urdV748suF3TL5k).
 
 ![alt text](./images/toy_v2_comparison.jpg)
@@ -23,7 +25,7 @@ We used the ToyV1 dataset for our experiments in the paper. We generated 12k sam
 * **original** contains generated samples without augmentations;
 * **augmented** contains generated samples with fixed augmentations (random noise and blur).
 
-We trained our model on the original/train part and tested it on the augmented/test part. You can download the toy dataset from [here](https://drive.google.com/open?id=161UZrYSE_B3W3hIvs1FaXFvoFaZae4FT). The repository provides an example of testing and metric evalutation for the toy dataset. You can test and visualize trained model on the toy dataset using [provided](notebooks/test_toy_model.ipynb) Jupyter Notebook.
+We trained our model on the original/train part and tested it on the augmented/test part. You can download the toy dataset from [here](https://drive.google.com/open?id=161UZrYSE_B3W3hIvs1FaXFvoFaZae4FT). The repository provides an example of testing and metric evaluation for the toy dataset. You can test and visualize the trained model on the toy dataset using the [provided](notebooks/test_toy_model.ipynb) Jupyter Notebook. You can download the pretrained model from [here](https://drive.google.com/open?id=1IuJUh0JvbKYILBxCeO2h6U4LG-9DoTHi).
 
 ### Setting up a development environment
diff --git a/adaptis/utils/args.py b/adaptis/utils/args.py
index 830e449..d56ed7a 100644
--- a/adaptis/utils/args.py
+++ b/adaptis/utils/args.py
@@ -11,9 +11,6 @@ def get_common_arguments():
     parser.add_argument('--thread-pool', action='store_true', default=False,
                         help='use ThreadPool for dataloader workers')
 
-    parser.add_argument('--no-cuda', action='store_true', default=False,
-                        help='disables CUDA training')
-
     parser.add_argument('--ngpus', type=int, default=len(mx.test_utils.list_gpus()),
                         help='number of GPUs')
diff --git a/adaptis/utils/exp.py b/adaptis/utils/exp.py
index e07c612..a925364 100644
--- a/adaptis/utils/exp.py
+++ b/adaptis/utils/exp.py
@@ -46,20 +46,15 @@ def init_experiment(experiment_name, add_exp_args, script_path=None):
     fh.setFormatter(formatter)
     logger.addHandler(fh)
 
-    if args.no_cuda:
-        logger.info('Using CPU')
-        args.kvstore = 'local'
-        args.ctx = mx.cpu(0)
+    if args.gpus:
+        args.ctx = [mx.gpu(int(i)) for i in args.gpus.split(',')]
+        args.ngpus = len(args.ctx)
     else:
-        if args.gpus:
-            args.ctx = [mx.gpu(int(i)) for i in args.gpus.split(',')]
-            args.ngpus = len(args.ctx)
-        else:
-            args.ctx = [mx.gpu(i) for i in range(args.ngpus)]
-        logger.info(f'Number of GPUs: {args.ngpus}')
-
-        if args.ngpus < 2:
-            args.syncbn = False
+        args.ctx = [mx.gpu(i) for i in range(args.ngpus)]
+    logger.info(f'Number of GPUs: {args.ngpus}')
+
+    if args.ngpus < 2:
+        args.syncbn = False
 
     logger.info(args)
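
The `adaptis/utils/exp.py` hunk above drops the `--no-cuda` path and flattens device selection: an explicit `--gpus` list takes priority over `--ngpus`, and SyncBN is switched off when fewer than 2 GPUs are in use. The sketch below mirrors that selection logic in plain Python so it can be sanity-checked without MXNet installed; `select_contexts` is an illustrative helper (not a function in the repository), and the returned device ids stand in for the `mx.gpu(...)` contexts built in the real code.

```python
def select_contexts(gpus: str, ngpus: int):
    """Mirror the device-selection logic of the updated init_experiment().

    `gpus` is a comma-separated id string (e.g. "0,2"); when empty,
    fall back to the first `ngpus` devices. Returns (ids, ngpus, syncbn).
    """
    if gpus:
        # An explicit --gpus list wins, and --ngpus is recomputed from it.
        ids = [int(i) for i in gpus.split(',')]
        ngpus = len(ids)
    else:
        # Otherwise take the first --ngpus sequential devices.
        ids = list(range(ngpus))
    # Synchronized BatchNorm only makes sense across 2+ GPUs.
    syncbn = ngpus >= 2
    return ids, ngpus, syncbn
```

For example, `select_contexts("0,2", 4)` yields `([0, 2], 2, True)` (the explicit list overrides `ngpus=4`), while `select_contexts("", 1)` yields `([0], 1, False)`, disabling SyncBN on a single device.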