
Concerns about feature dimensionality in MoCo self-training #139

Open
LALBJ opened this issue May 1, 2023 · 0 comments
LALBJ commented May 1, 2023

I noticed that during self-training, the output dimension of MoCo is set to 128, and the InfoNCE loss is computed on these 128-dimensional features. However, when training the linear head, the fully connected classification layer takes the 2048-dimensional backbone features as input. In my opinion, if the 128-dimensional output represents the latent features, it would be better to attach the classification head to the 128-dimensional output instead.

So may I ask what the reason is for using this implementation in the code?
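For context, the setup being asked about can be sketched with plain numpy. This is only an illustration of the dimensions involved, not the repository's actual code: the linear maps below stand in for a ResNet-50 encoder, the MoCo projection head, and the linear-evaluation classifier, and the class count of 10 is a made-up placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions from the discussion: 2048-d backbone features,
# 128-d projection for the InfoNCE loss; 10 classes is hypothetical.
feat_dim, proj_dim, num_classes = 2048, 128, 10

# Stand-in linear maps (a real MoCo uses a conv encoder plus an MLP head).
W_proj = rng.standard_normal((feat_dim, proj_dim)) * 0.01    # projection head
W_cls = rng.standard_normal((feat_dim, num_classes)) * 0.01  # linear-eval head

features = rng.standard_normal((4, feat_dim))  # batch of 2048-d backbone outputs

# Self-training: InfoNCE operates on L2-normalized 128-d projections.
z = features @ W_proj
z = z / np.linalg.norm(z, axis=1, keepdims=True)
print(z.shape)  # (4, 128)

# Linear evaluation: the classifier is attached to the raw 2048-d features,
# bypassing the projection head.
logits = features @ W_cls
print(logits.shape)  # (4, 10)
```

Attaching the classifier to the 2048-d features (rather than the 128-d projection) is the standard linear-evaluation protocol in the MoCo and SimCLR papers; whether that is the reason for this repository's choice is exactly the question above.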
