Thanks for reporting this problem to us. We will add it to our to-do list, and it may be fixed in a future release, though adding this support may take some time. If you want to train a network for verification, it is better to avoid max pooling; average pooling is more verification-friendly.
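To illustrate why average pooling is friendlier to verification: with stride equal to kernel size it is a plain linear map (each output is the mean of one window), so backward linear bound propagation can pass bounds through it exactly, whereas max is nonlinear and must be relaxed. A minimal pure-Python sketch (illustrative only, not the library's code):

```python
def avgpool_matrix(n, k):
    """Average pooling over a 1D input of length n with kernel k and
    stride k, written as an explicit matrix. Because the operation is
    linear, backward bound propagation handles it exactly, with no
    relaxation needed -- unlike the nonlinear max operation."""
    m = n // k
    return [[1.0 / k if j // k == i else 0.0 for j in range(n)]
            for i in range(m)]

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Pooling [1, 2, 3, 4] with kernel 2, stride 2 averages each pair:
print(matvec(avgpool_matrix(4, 2), [1.0, 2.0, 3.0, 4.0]))  # [1.5, 3.5]
```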
Thanks for the awesome work. A kind reminder for a future release: it would be much appreciated if bound_backward of BoundMaxPool could also handle the asymmetric-padding case (https://github.com/Verified-Intelligence/auto_LiRPA/blob/master/auto_LiRPA/operators/pooling.py#L175), which would make it easier to satisfy the constraint among kernel size, stride, and padding for different input image sizes. Thank you!
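For context on where asymmetric padding comes from: when the total padding needed to cover the input is odd, it cannot be split evenly between the two sides. A small sketch of "SAME"-style padding arithmetic (a hypothetical helper, not part of auto_LiRPA):

```python
def same_padding_1d(in_size, kernel, stride):
    """Compute left/right padding so every input position is covered
    ("SAME"-style). When the total is odd, the split is asymmetric."""
    out = -(-in_size // stride)  # ceiling division
    total = max((out - 1) * stride + kernel - in_size, 0)
    left = total // 2
    right = total - left
    return left, right

# Input size 7, kernel 3, stride 2: total padding 2, split evenly.
print(same_padding_1d(7, 3, 2))  # (1, 1)
# Input size 6, kernel 3, stride 2: total padding 1, asymmetric.
print(same_padding_1d(6, 3, 2))  # (0, 1)
```

This is why, for some input image sizes, no symmetric padding satisfies the shape constraints and an asymmetric split becomes unavoidable.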
Hello, thanks for the great work!
I noticed that BoundMaxPool requires the kernel size and stride to be equal:
https://github.com/Verified-Intelligence/auto_LiRPA/blob/master/auto_LiRPA/operators/pooling.py#L66
Is there any way to remove this constraint? For example, the original ResNet implementation uses max pooling with a kernel size of 3 and a stride of 2. If I use that model, bound_backward() raises a ValueError. Thank you!
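The failure mode can be sketched as follows (a simplified stand-in for the check behind the linked line, not the library's exact code):

```python
def check_maxpool_params(kernel_size, stride, padding):
    """Simplified stand-in for the constraint BoundMaxPool enforces:
    the kernel size must equal the stride (assumption based on the
    linked check in pooling.py)."""
    if kernel_size != stride:
        raise ValueError(
            f"BoundMaxPool requires kernel_size == stride, got "
            f"kernel_size={kernel_size}, stride={stride}"
        )
    return True

# ResNet's stem pooling (kernel 3, stride 2, padding 1) violates it:
try:
    check_maxpool_params(3, 2, 1)
except ValueError as e:
    print(e)

check_maxpool_params(2, 2, 0)  # a compliant configuration
```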