
About saliency indicator #5

Open
CQC-gogopro opened this issue Jan 12, 2025 · 1 comment

@CQC-gogopro

Great work! However, we have a problem with the Saliency metric. Following the provided code, we train for 40,000 iterations with a total batch size of 8 and take the evaluation round with the highest average gain in task performance, but we cannot reproduce the 85.56 reported in the paper.
The metrics for Semseg, Parsing, Saliency, Normal, and Boundary are as follows:

|                  | Semseg | Parsing | Saliency | Normal | Boundary |
|------------------|--------|---------|----------|--------|----------|
| Our reproduction | 81.98  | 73.32   | 84.49    | 14.18  | 78.60    |
| Paper            | 81.94  | 72.87   | 85.56    | 14.29  | 78.60    |

Do you have any suggestions for improving this saliency detection metric?
Thanks a lot!
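For clarity, this is roughly how we compute the "average gain" used for checkpoint selection (a minimal sketch of a Δm-style criterion; the baseline values and the higher/lower-is-better flags below are placeholders of ours, not taken from the repo or the paper):

```python
# Sketch of an "average relative gain" checkpoint-selection criterion.
# Baseline numbers below are placeholders, not the paper's single-task results.

# Per-task reference value and whether a higher value is better.
BASELINES = {
    "semseg":   (80.0, True),
    "parsing":  (70.0, True),
    "saliency": (84.0, True),
    "normal":   (14.5, False),  # error metric, lower is better
    "boundary": (77.0, True),
}

def average_gain(results: dict) -> float:
    """Average relative improvement (in %) over the per-task reference values."""
    gains = []
    for task, (base, higher_better) in BASELINES.items():
        sign = 1.0 if higher_better else -1.0
        gains.append(sign * (results[task] - base) / base * 100.0)
    return sum(gains) / len(gains)

# Pick the evaluation round with the highest average gain (hypothetical data).
eval_rounds = {
    30000: {"semseg": 81.70, "parsing": 72.90, "saliency": 84.30, "normal": 14.30, "boundary": 78.40},
    40000: {"semseg": 81.98, "parsing": 73.32, "saliency": 84.49, "normal": 14.18, "boundary": 78.60},
}
best_iter = max(eval_rounds, key=lambda it: average_gain(eval_rounds[it]))
print(best_iter, round(average_gain(eval_rounds[best_iter]), 2))
```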

@Baijiong-Lin
Collaborator

We train MTMamba++ with a total batch size of 4, not 8. Moreover, across different evaluation rounds, some tasks may achieve slightly better performance while others drop slightly, which is quite normal in multi-task learning.
