I've been trying to adapt the TDNet td4_psp model to my own dataset for binary segmentation. While it produces quite promising results, I can't seem to understand the augmentation part. The original Cityscapes .yml config appears to use multi-scale augmentation, yet when evaluating the model the image size has to be exactly the same as in training.
I was trying to evaluate full-scale images with my model trained on half scale. The model obviously won't load from my checkpoint when I change `self.layer_norm1 = Layer_Norm([68, 120])` (50%) to `self.layer_norm1 = Layer_Norm([136, 240])` (100%), and it won't run either when I keep `self.layer_norm1 = Layer_Norm([68, 120])` (50%) but don't downscale the input to 50%:
```
RuntimeError: Given normalized_shape=[68, 120], expected input with shape [*, 68, 120], but got input of size [4, 512, 136, 240]
```
How did you manage to use multi-scale, and do you have an idea how I can evaluate my trained model on full-scale images?
Hi, this is due to the fixed input size of Layer_Norm. To train/test with a different input size, you can resize the Layer_Norm weights in the pretrained model to the desired size and manually reload them.
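Something along these lines should work (a minimal sketch, assuming the checkpoint is a flat `state_dict` and that `Layer_Norm` wraps `nn.LayerNorm` with learnable `[H, W]` weight/bias tensors; the key prefix `layer_norm1` and the file name are placeholders, check your own `state_dict` keys):

```python
import torch
import torch.nn.functional as F

def resize_layer_norm_weights(state_dict, key_prefix, new_size):
    """Bilinearly interpolate a Layer_Norm weight/bias from its old [H, W]
    shape to new_size (e.g. from [68, 120] to [136, 240])."""
    for param in ("weight", "bias"):
        key = f"{key_prefix}.{param}"
        old = state_dict[key]                       # shape [H_old, W_old]
        resized = F.interpolate(
            old[None, None],                        # -> [1, 1, H_old, W_old]
            size=new_size, mode="bilinear", align_corners=False,
        )[0, 0]                                     # -> [H_new, W_new]
        state_dict[key] = resized
    return state_dict

# If your checkpoint is nested (e.g. under a "model_state" key), apply this
# to the inner dict instead.
state_dict = torch.load("td4_psp_halfres.pth", map_location="cpu")
state_dict = resize_layer_norm_weights(state_dict, "layer_norm1", (136, 240))

# Then build the model with Layer_Norm([136, 240]) and load the resized weights:
# model.load_state_dict(state_dict)
```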
Resizing the weights actually solved my problem and improved my predictions. Thanks!
However, I still don't understand how you managed to use multi-scale while having fixed Layer_Norm sizes. Shouldn't this result in the same error, since the input size varies?