FeminaS17 changed the title from "Issue with Inference model-checkpoint generated from eval_linear_probe" to "text encoder does not support pretrained models" on Aug 5, 2024.
@FeminaS17 I think it fails due to this check in main.py. I would assume that, because you trained your own projection layers on top of the text encoder, you cannot specify both a pretrained model and a separate text encoder.
```python
if args.tmodel == "bert" or args.tmodel == "roberta" or args.tmodel == "bart":
    assert (
        args.pretrained == "" or args.pretrained is None
    ), "bert/roberta/bart text encoder does not support pretrained models."
```
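To illustrate, here is a minimal standalone sketch of that guard. The function name `check_text_encoder_args` is hypothetical (not from the repo); it just mirrors the assertion above, showing why passing a checkpoint path together with `--tmodel roberta` raises the error while leaving `--pretrained` empty passes.

```python
def check_text_encoder_args(tmodel, pretrained):
    """Mirror of the guard in main.py (hypothetical helper):
    HF text encoders (bert/roberta/bart) reject any pretrained checkpoint."""
    if tmodel in ("bert", "roberta", "bart"):
        assert (
            pretrained == "" or pretrained is None
        ), "bert/roberta/bart text encoder does not support pretrained models."

# Supplying a checkpoint path with a roberta text encoder trips the assertion:
try:
    check_text_encoder_args("roberta", "music_audioset_epoch_15_esc_90.14.pt")
except AssertionError as e:
    print(e)  # bert/roberta/bart text encoder does not support pretrained models.

# Leaving --pretrained empty (or None) passes the check:
check_text_encoder_args("roberta", "")
check_text_encoder_args("roberta", None)
```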
https://huggingface.co/lukewys/laion_clap/resolve/main/music_audioset_epoch_15_esc_90.14.pt
When I try to fine-tune the model with the training script, I get the error "AssertionError: bert/roberta/bart text encoder does not support pretrained models."
This happens for the same model with transformers versions 4.30.0 and 4.30.2. Please suggest a workaround.
@waldleitner @lukewys @Neptune-S-777