text encoder does not support pretrained models #159

Open
FeminaS17 opened this issue Jul 27, 2024 · 1 comment

Comments


FeminaS17 commented Jul 27, 2024

Checkpoint: https://huggingface.co/lukewys/laion_clap/resolve/main/music_audioset_epoch_15_esc_90.14.pt

When I try to fine-tune this model with the training script, I get the error "AssertionError: bert/roberta/bart text encoder does not support pretrained models."

The error occurs with both transformers versions 4.30.0 and 4.30.2. Please suggest a workaround.
@waldleitner @lukewys @Neptune-S-777
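
A minimal diagnostic sketch for the same failure, assuming the checkpoint follows the training script's save format (a dict with a "state_dict" entry) and that text-encoder weights carry a "text_branch" prefix, possibly behind a "module." prefix from DDP:

    import torch

    # Diagnostic sketch (assumptions as noted above): list a few of the
    # text-encoder parameter names stored in the released checkpoint.
    ckpt = torch.load("music_audioset_epoch_15_esc_90.14.pt", map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)
    print([k for k in state_dict if "text_branch" in k][:5])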

FeminaS17 changed the title from "Issue with Inference model-checkpoint generated from eval_linear_probe" to "text encoder does not support pretrained models" on Aug 5, 2024
waldleitner commented

@FeminaS17 I think you are hitting this check in main.py. My assumption is that, because the projection layers are trained on top of the text encoder, you cannot specify both a pretrained model and a separate text encoder.

    if args.tmodel == "bert" or args.tmodel == "roberta" or args.tmodel == "bart":
        assert (
            args.pretrained == "" or args.pretrained is None
        ), "bert/roberta/bart text encoder does not support pretrained models."

https://github.com/LAION-AI/CLAP/blob/8e558817d853808486768004fa1b61ac9d69f2a2/src/laion_clap/training/main.py#L140C1-L144C1
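
A possible workaround, sketched under the assumption that the laion_clap pip package exposes the CLAP_Module / load_ckpt API shown in the project README: load the checkpoint through the package instead of passing it to the training script via --pretrained, then fine-tune the underlying module with a custom loop. The amodel/tmodel values below are assumptions matching the released music checkpoint.

    import laion_clap

    # Workaround sketch (assumptions as noted above): load the released
    # checkpoint via the package API rather than the --pretrained flag.
    model = laion_clap.CLAP_Module(enable_fusion=False, amodel="HTSAT-base", tmodel="roberta")
    model.load_ckpt("music_audioset_epoch_15_esc_90.14.pt")

    # The underlying torch module can then be fine-tuned directly.
    model.model.train()

Alternatively, simply dropping --pretrained from the training command avoids the assertion, at the cost of not initializing the audio tower and projection layers from the released CLAP weights.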
