
Even kernel size for pooling layers in the GenericFrontend not permitted #50

Open
kuacakuaca opened this issue Apr 26, 2024 · 5 comments

Comments

@kuacakuaca (Collaborator) commented Apr 26, 2024

```python
for kernel_sizes in filter(None, [self.conv_kernel_sizes, self.pool_kernel_sizes]):
```
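For context, a minimal sketch of the validation this line presumably belongs to; the class name, loop body, and assertion message here are assumptions for illustration, not verbatim i6_models code:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class _FrontendConfigSketch:
    """Hypothetical stand-in for the GenericFrontend config (names assumed)."""
    conv_kernel_sizes: Optional[List[int]]
    pool_kernel_sizes: Optional[List[int]]

    def check(self):
        # The quoted line: iterate over whichever kernel-size lists are set.
        for kernel_sizes in filter(None, [self.conv_kernel_sizes, self.pool_kernel_sizes]):
            for kernel_size in kernel_sizes:
                # The restriction this issue is about: even sizes are rejected.
                assert kernel_size % 2 == 1, f"kernel size {kernel_size} must be odd"

try:
    _FrontendConfigSketch(conv_kernel_sizes=[3, 3], pool_kernel_sizes=[2, 2]).check()
except AssertionError as e:
    print(e)  # -> kernel size 2 must be odd
```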

@albertz (Member) commented Apr 26, 2024

What is this issue about? You think this is a problem that even kernel sizes are not permitted? So you want to change that?

@kuacakuaca (Collaborator, Author) commented Apr 26, 2024

> What is this issue about? You think this is a problem that even kernel sizes are not permitted? So you want to change that?

I want to use even kernel sizes for the pooling layers. Is there any reason we don't want to allow this?

@albertz (Member) commented Apr 26, 2024

As far as I remember, there were technical reasons why it did not work. (I was actually against having the check there at the time, because the check makes it unclear why even sizes are not allowed; just trying it out shows the problem much more clearly.) Maybe it's actually not a problem anymore in a more recent PyTorch version? I don't know. Just try it out.
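In the spirit of "just try it out", a minimal standalone check (not from the thread; shapes and sizes are arbitrary) showing that plain PyTorch accepts even kernel sizes for pooling, and what happens to the output length when an even conv kernel gets the usual symmetric padding:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 100)  # (batch, channels, time)

# Even kernel size for pooling: plain PyTorch accepts this.
pool = nn.MaxPool1d(kernel_size=2, stride=2)
print(pool(x).shape)  # torch.Size([1, 8, 50])

# Even conv kernel with explicit symmetric padding also runs, but note the
# output length: 100 + 2*1 - 4 + 1 = 99, i.e. not "same" anymore.
conv = nn.Conv1d(8, 8, kernel_size=4, stride=1, padding=(4 - 1) // 2)
print(conv(x).shape)  # torch.Size([1, 8, 99])
```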

@JackTemaki (Contributor) commented

I think the limitation (from the PyTorch side) was only for convolution with padding="same"...
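For reference, a small probe of the padding="same" behavior mentioned here. Whether the even-kernel case passes is version-dependent (recent PyTorch releases reportedly pad asymmetrically for even kernels), so both cases are wrapped in try/except rather than asserted:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 8, 100)  # (batch, channels, time)

# Even kernel with padding="same" and stride 1: recent PyTorch versions
# handle this internally; older ones may not.
try:
    conv = nn.Conv1d(8, 8, kernel_size=4, stride=1, padding="same")
    print("even kernel + same padding:", conv(x).shape)
except (ValueError, RuntimeError) as e:
    print("even kernel + same padding failed:", e)

# padding="same" with stride > 1 is rejected by PyTorch regardless of
# kernel size, which matters for a strided frontend.
try:
    nn.Conv1d(8, 8, kernel_size=3, stride=2, padding="same")
except ValueError as e:
    print("strided + same padding failed:", e)
```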

@kuacakuaca (Collaborator, Author) commented

Thanks! I'll just try it.
