Regarding the "convolutional_autoencoder" branch, I added three files:
bin/basic_ae_train.py, lib/ae_training.py, models/ae_cnn.py
When I try to run bin/basic_ae_train.py (the same way bin/basic_training.py runs), the decoder produces the wrong dimensions, as you can see below:
```
Using a target size (torch.Size([16, 1, 128, 201])) that is different to the input size (torch.Size([16, 1, 129, 201])). This will likely lead to incorrect results due to broadcasting. Please ensure they have the same size.
```
Please let me know what I should check, and whether this initial version of the autoencoder looks OK apart from this error.
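For what it's worth, this kind of off-by-one (129 vs. 128) usually comes from the Conv2d/ConvTranspose2d shape arithmetic: a strided conv floors its output size, so the transposed conv can't recover the original height unless `padding`/`output_padding` are chosen to match. The actual layer parameters in models/ae_cnn.py aren't shown in this issue, so the `kernel_size=3, stride=2` values below are assumptions for illustration only:

```python
# Hypothetical sketch of the size round trip; kernel_size=3, stride=2
# are assumed, not taken from models/ae_cnn.py.

def conv_out(h, k, s, p):
    # Output size of nn.Conv2d along one spatial dimension.
    return (h + 2 * p - k) // s + 1

def deconv_out(h, k, s, p, op=0):
    # Output size of nn.ConvTranspose2d along one spatial dimension.
    return (h - 1) * s - 2 * p + k + op

# Encoder with padding=1 halves 128 rows to 64:
h = conv_out(128, k=3, s=2, p=1)            # -> 64

# Decoder with padding=0 overshoots, reproducing the warning above:
print(deconv_out(h, k=3, s=2, p=0))         # -> 129, vs. the 128-row target

# Matching padding=1 plus output_padding=1 makes the round trip exact:
print(deconv_out(h, k=3, s=2, p=1, op=1))   # -> 128
```

So one thing to check is whether each ConvTranspose2d in the decoder mirrors the padding of its encoder counterpart and adds `output_padding=1` where the stride is 2.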
After calculating the right padding, stride, etc., I now get a 201 × 128 output. Somehow I'm still not getting the result I want from the loss function, and I can't figure out why.