
About contribute densnet to Mxnet #4

Open
qingzhouzhen opened this issue Jan 17, 2018 · 5 comments

Comments

@qingzhouzhen

Hi xiong, I used your densenet core code and adapted it to the form used in incubator-mxnet/example/image-classification/symbols, then wrote the training code for distributed training (because the memory of my GPUs is not large enough). The results are below:

[attached image: densnet_plot_cur — training curve]

Sorry about the picture quality; I cannot upload files beyond 10K.
My code is at https://github.com/qingzhouzhen/incubator-mxnet/blob/densenet-symbol/example/image-classification/symbols/densenet.py
Would you mind if I push it to mxnet? I would describe my training in detail.

@bruinxiong
Owner

Hi qing, sorry, that picture is too blurry for me to see the detailed information. Open-sourcing is a good thing, and I'm very happy to see that my code can help you understand the densenet architecture. If you want to push your code to mxnet, I have only one request: please reference and mention my code or my link when you push to mxnet. Thanks!

@shiyuanyin

@bruinxiong
Hello author, I want to modify the structure at the SE block, and I'm a bit confused. An MXNet symbol cannot directly give you the BCHW values; I would need to get BCHW from the output of the current bn3 and then reshape, but a symbol's per-layer shape is only available after data is fed in.
In the PyTorch SGE implementation, one statement corresponds to the place in your SE code I want to modify: b, c, h, w = x.size(); x = x.reshape(b * self.groups, -1, h, w). I cannot directly get the BCHW after bn3 from the symbol, and I need to express this statement in MXNet. I'm not that familiar with MXNet; do you have a good way to implement this reshape?

Below is the corresponding PyTorch implementation:

import torch
import torch.nn as nn
from torch.nn import Parameter

class SpatialGroupEnhance(nn.Module):  # note: conv(3, 2, 1) halves h/w; conv(3, 1, 1) keeps the size
    def __init__(self, groups=64):
        super(SpatialGroupEnhance, self).__init__()
        self.groups = groups
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.weight = Parameter(torch.zeros(1, groups, 1, 1))
        self.bias = Parameter(torch.ones(1, groups, 1, 1))
        self.sig = nn.Sigmoid()

    def forward(self, x):                         # x: (b, c, h, w)
        b, c, h, w = x.size()
        x = x.view(b * self.groups, -1, h, w)     # fold groups into the batch dim
        xn = x * self.avg_pool(x)                 # multiply by global average pooling (h, w -> 1)
        xn = xn.sum(dim=1, keepdim=True)          # (b * groups, 1, h, w)
        t = xn.view(b * self.groups, -1)
        t = t - t.mean(dim=1, keepdim=True)       # normalize: subtract mean ...
        std = t.std(dim=1, keepdim=True) + 1e-5
        t = t / std                               # ... and divide by std
        t = t.view(b, self.groups, h, w)
        t = t * self.weight + self.bias           # learnable per-group scale and shift
        t = t.view(b * self.groups, 1, h, w)
        x = x * self.sig(t)                       # sigmoid gate: a per-group factor in (0, 1)
        x = x.view(b, c, h, w)                    # restore the original dimensions
        return x
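Regarding the reshape question above: MXNet's `reshape` operator supports special shape values (0 = copy a dim, -1 = infer, -2 = copy the remaining dims, -3 = merge two consecutive dims, -4 = split a dim), which let you express this transform symbolically without knowing b, c, h, w at graph-construction time. A minimal NumPy sketch of the shape manipulation, with the (untested, version-dependent) `mx.sym.reshape` equivalents noted in the docstring:

```python
import numpy as np

def group_reshape(x, groups):
    """Split channels into groups and fold them into the batch dim:
    (b, c, h, w) -> (b * groups, c // groups, h, w).

    In MXNet symbol the same transform can be written shape-agnostically,
    e.g. (per the mx.sym.reshape special values; verify on your version):
        x = mx.sym.reshape(x, shape=(0, -4, groups, -1, -2))  # (b, g, c//g, h, w)
        x = mx.sym.reshape(x, shape=(-3, -2))                 # (b*g, c//g, h, w)
    """
    b, c, h, w = x.shape
    return x.reshape(b * groups, c // groups, h, w)

# example: 2 images, 8 channels, 4x4 spatial, 4 groups
x = np.arange(2 * 8 * 4 * 4, dtype=np.float32).reshape(2, 8, 4, 4)
y = group_reshape(x, groups=4)
print(y.shape)  # (8, 2, 4, 4)
```

The round trip back to (b, c, h, w), as in the last `view` of the PyTorch code, is just `y.reshape(b, c, h, w)`; only `groups` needs to be a compile-time constant.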

@liuhangfan

@shiyuanyin Hi, have you implemented this in MXNet? I'm now running into the same problem.

@sky186

sky186 commented Apr 15, 2020

@bruinxiong
Owner

@shiyuanyin @liuhangfan @sky186 Hello everyone, I haven't followed this thread for a long time; sorry I'm only seeing this now. I have since moved to the TF platform and currently don't have the resources for further experiments. If I do later, I'll give it a try. In the meantime, you can refer to @sky186's implementation. Thanks to @sky186 for the reply; I'll take a look at your code when I have time. Many thanks, everyone!
