Great work! However, we have a problem with the Saliency metric. Why can't we reproduce the paper's 85.56 result when we train for 40,000 iterations with a total batch size of 8 using the provided code and take the checkpoint with the highest average gain in task performance?
The metrics for Semseg, Parsing, Saliency, Normal, and Boundary are as follows:
Our reproduction: 81.98, 73.32, 84.49, 14.18, 78.60.
Paper: 81.94, 72.87, 85.56, 14.29, 78.60.
Do you have any suggestions for improving the saliency detection metric?
Thanks a lot!
We train MTMamba++ with a total batch size of 4. Moreover, across different runs, some tasks may achieve slightly better performance while others may drop slightly, which is quite normal in multi-task learning.
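Since the reply notes that the paper's runs use a total batch size of 4 (versus 8 in the reproduction above), matching that setting is the first thing to try. Below is a minimal sketch, not the repository's actual data-loading code, of how the total (effective) batch size is typically determined in a distributed PyTorch setup; the names `build_loader`, `per_gpu_batch_size`, and `world_size` are hypothetical.

```python
import torch
from torch.utils.data import DataLoader, DistributedSampler

def build_loader(dataset, per_gpu_batch_size: int, world_size: int, rank: int):
    """Hypothetical helper: the effective (total) batch size under DDP is
    per_gpu_batch_size * world_size, e.g. 1 x 4 GPUs = 4 (paper setting)
    or 2 x 4 GPUs = 8 (the reproduction setting reported above)."""
    sampler = DistributedSampler(dataset, num_replicas=world_size, rank=rank)
    loader = DataLoader(
        dataset,
        batch_size=per_gpu_batch_size,  # per-process batch size
        sampler=sampler,
        num_workers=4,
        pin_memory=True,
    )
    print(f"total batch size = {per_gpu_batch_size * world_size}")
    return loader
```

With a fixed iteration budget (40,000 iterations here), changing the total batch size also changes how many samples the model sees, so keeping it at 4 should bring the run closer to the paper's configuration.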