update: models & weights for cspdarknet
wondervictor committed Oct 31, 2022
1 parent ce005c8 commit fd6fb38
Showing 3 changed files with 43 additions and 10 deletions.
README.md: 10 changes (4 additions, 6 deletions)
@@ -40,6 +40,8 @@ Tianheng Cheng, <a href="https://xinggangw.info/">Xinggang Wang</a><sup><span>&#

`This project is under active development, please stay tuned!` &#9749;

* `[2022-10-31]`: We release the models & weights for the [`CSP-DarkNet53`](configs/sparse_inst_cspdarknet53_giam.yaml) backbone, which is a strong baseline with highly competitive inference speed and accuracy.

* `[2022-10-19]`: We provide the implementation and inference code based on [MindSpore](https://www.mindspore.cn/), a nice and efficient Deep Learning framework. Thanks [Ruiqi Wang](https://github.com/RuiqiWang00) for this kind contribution!

* `[2022-8-9]`: We provide the FLOPs counter [`get_flops.py`](./tools/get_flops.py) to obtain the FLOPs/Parameters of SparseInst. This update also includes some bug fixes.
@@ -85,12 +87,11 @@ All models are trained on MS-COCO *train2017*.
| [SparseInst (G-IAM)](configs/sparse_inst_r50vd_dcn_giam_aug.yaml) | [R-50-vd-DCN](https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet50d_ra2-464e36ba.pth) | 608 | &#10003; | 37.4 | 37.9 | 40.0 | [model](https://drive.google.com/file/d/1clYPdCNrDNZLbmlAEJ7wjsrOLn1igOpT/view?usp=sharing)|
| [SparseInst (G-IAM)](configs/sparse_inst_r50vd_dcn_giam_aug.yaml) | [R-50-vd-DCN](https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/resnet50d_ra2-464e36ba.pth) | 640 | &#10003; | 37.7 | 38.1 | 39.3 | [model](https://drive.google.com/file/d/1clYPdCNrDNZLbmlAEJ7wjsrOLn1igOpT/view?usp=sharing)|

-<!-- #### SparseInst with other backbones
+#### SparseInst with other backbones

| model | backbone | input | AP<sup>val</sup> | AP | FPS | weights |
| :---- | :------ | :---: | :--------------: | :--: | :-: | :-----: |
| SparseInst (G-IAM) | [VoVNet]() | 640 | - | - | - | [model]() |
-| SparseInst (G-IAM) | [CSPDarkNet]() | 640 | - | - | - | [model]() | -->
+| SparseInst (G-IAM) | [CSPDarkNet](configs/sparse_inst_cspdarknet53_giam.yaml) | 640 | 35.1 | - | - | [model](https://drive.google.com/file/d/1rcUJWUbusM216Zbtmo_xB774jdjb3qSt/view?usp=sharing) |

#### Larger models

@@ -238,11 +239,8 @@ python tools/train_net.py --config-file configs/sparse_inst_r50vd_dcn_giam_aug.y
### Custom Training of SparseInst

1. We suggest converting your custom datasets into the `COCO` format, which enables use of the default dataset mappers and loaders. You may find more details in the [official guide of detectron2](https://detectron2.readthedocs.io/en/latest/tutorials/datasets.html#register-a-coco-format-dataset).

2. Check whether `NUM_CLASSES` and `NUM_MASKS` need to be changed for your scenario or task.

3. Change the configurations accordingly.

4. After finishing the above procedures, you can easily train SparseInst with `train_net.py`.
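Steps 2 and 3 above can be sketched as a single config override. This is a hypothetical fragment: the base-file name, the dataset names, and the exact position of `NUM_CLASSES`/`NUM_MASKS` in the config tree are assumptions to verify against the configs shipped in `configs/`.

```yaml
# Hypothetical custom config for a 3-class dataset; key paths follow the
# repository's YAML convention but should be checked against the shipped configs.
_BASE_: "sparse_inst_r50_giam.yaml"   # assumed base config name
MODEL:
  SPARSE_INST:
    DECODER:
      NUM_CLASSES: 3    # number of categories in the custom dataset
      NUM_MASKS: 100    # number of instance masks predicted per image
DATASETS:
  TRAIN: ("my_coco_train",)   # names registered via detectron2's COCO registration
  TEST: ("my_coco_val",)
OUTPUT_DIR: "output/sparse_inst_custom"
```

With such a file in place, training reduces to pointing `train_net.py` at it via `--config-file`.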


Expand Down
configs/sparse_inst_cspdarknet53_giam.yaml: 4 changes (3 additions, 1 deletion)
Expand Up @@ -6,7 +6,9 @@ MODEL:
SPARSE_INST:
ENCODER:
IN_FEATURES: ["csp2", "csp3", "csp4"]
DECODER:
NAME: "GroupIAMSoftDecoder"
CSPNET:
-    NAME: "darknet53"
+    NAME: "cspdarknet53"
OUT_FEATURES: ["csp2", "csp3", "csp4"]
OUTPUT_DIR: "output/sparse_inst_cspdarknet53_giam"
sparseinst/backbones/cspnet.py: 39 changes (36 additions, 3 deletions)
@@ -3,7 +3,8 @@
import torch
import torch.nn as nn

-from timm.models.layers import ConvBnAct, DropPath, AvgPool2dSame, create_attn
+from timm.models.layers import create_conv2d, create_act_layer
+from timm.models.layers import DropPath, AvgPool2dSame, create_attn


from detectron2.layers import ShapeSpec, FrozenBatchNorm2d
@@ -84,6 +85,39 @@
)
)

class ConvBnAct(nn.Module):
def __init__(self, in_channels, out_channels, kernel_size=1, stride=1, padding='', dilation=1, groups=1,
bias=False, apply_act=True, norm_layer=nn.BatchNorm2d, act_layer=nn.ReLU, aa_layer=None,
drop_block=None):
super(ConvBnAct, self).__init__()
use_aa = aa_layer is not None

self.conv = create_conv2d(
in_channels, out_channels, kernel_size, stride=1 if use_aa else stride,
padding=padding, dilation=dilation, groups=groups, bias=bias)

# NOTE for backwards compatibility with models that use separate norm and act layer definitions
self.bn = norm_layer(out_channels)
        self.act = act_layer() if apply_act else nn.Identity()
self.aa = aa_layer(
channels=out_channels) if stride == 2 and use_aa else None

@property
def in_channels(self):
return self.conv.in_channels

@property
def out_channels(self):
return self.conv.out_channels

def forward(self, x):
x = self.conv(x)
x = self.bn(x)
x = self.act(x)
if self.aa is not None:
x = self.aa(x)
return x
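The conv → bn → act ordering in the class above can be exercised with a minimal standalone replica. This is an illustrative sketch only: plain `nn.Conv2d` stands in for timm's `create_conv2d`, and the class name is invented here.

```python
import torch
import torch.nn as nn

class MiniConvBnAct(nn.Module):
    """Standalone replica of the ConvBnAct pattern (nn.Conv2d stands in
    for timm's create_conv2d; anti-aliasing is omitted for brevity)."""

    def __init__(self, in_channels, out_channels, kernel_size=3, stride=1):
        super().__init__()
        self.conv = nn.Conv2d(
            in_channels, out_channels, kernel_size,
            stride=stride, padding=kernel_size // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_channels)
        self.act = nn.ReLU()

    def forward(self, x):
        # conv -> bn -> act, the same ordering as in the class above
        return self.act(self.bn(self.conv(x)))

block = MiniConvBnAct(3, 8, kernel_size=3, stride=2).eval()
with torch.no_grad():
    out = block(torch.randn(1, 3, 32, 32))
print(tuple(out.shape))  # (1, 8, 16, 16): stride 2 halves the spatial size
```

Note that a stride-2 block halves the spatial resolution, which is what lets the CSP stages produce the `csp2`/`csp3`/`csp4` feature pyramid consumed by the encoder.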


def create_stem(
in_chans=3, out_chs=32, kernel_size=3, stride=2, pool='',
@@ -394,8 +428,7 @@ def build_cspnet_backbone(cfg, input_shape=None):
if norm_name == "FrozenBN":
norm = FrozenBatchNorm2d
elif norm_name == "SyncBN":
-        from detectron2.layers import NaiveSyncBatchNorm
-        norm = NaiveSyncBatchNorm
+        norm = nn.SyncBatchNorm
else:
norm = nn.BatchNorm2d
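The branch above is a small name-to-class dispatch. A torch-only sketch of the same logic (the real code also maps `"FrozenBN"` to detectron2's `FrozenBatchNorm2d`, omitted here so the snippet has no detectron2 dependency; `pick_norm` is an illustrative name):

```python
import torch.nn as nn

def pick_norm(norm_name: str):
    """Map a config string to a norm-layer class (torch-only sketch)."""
    if norm_name == "SyncBN":
        # the commit swaps detectron2's NaiveSyncBatchNorm for torch's native class
        return nn.SyncBatchNorm
    return nn.BatchNorm2d

bn = pick_norm("BN")(16)  # a plain BatchNorm2d over 16 channels
print(pick_norm("SyncBN").__name__)  # SyncBatchNorm
```

Returning the class (rather than an instance) lets the backbone construct one norm layer per conv with the right channel count.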

