How to do model fine tuning? #12

Really cool project! I enjoyed the paper and have had fun testing it out. Will instructions on fine-tuning be released?
Thanks for your time.

Comments
@colemanhindes Thanks for your interest. We are planning to release the training scripts soon, but due to some other engagements there's no ETA yet. In the meantime, @canerturkmen and @shchur are working towards integrating Chronos into AutoGluon-TimeSeries (autogluon/autogluon#3978), and they're also planning to offer ways of fine-tuning the models.
+1 for this; if possible, please mind #22 too for some custom data. Thanks!
+1, looking forward to the release of the training and fine-tuning scripts!
I caught a glimpse of it and noticed it's using a torch.nn model. I've put together this notebook for training/fine-tuning. Could someone verify that it's set up correctly? The losses look unusual, but I suspect that's because the dataset is quite small and because of my use of:
notebook: here
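For reference, here is a rough sketch of how a single training step against the underlying seq2seq model could look. It relies on the chronos package's internal tokenizer methods (context_input_transform / label_input_transform) and on pipeline.model.model being the underlying HuggingFace model; these are internals rather than a stable public API, so verify the names against the repo source before relying on them:

```python
import torch
from chronos import ChronosPipeline

pipeline = ChronosPipeline.from_pretrained(
    "amazon/chronos-t5-small",
    device_map="cpu",
    torch_dtype=torch.float32,
)
# NOTE: internal attributes, not a documented API — check the repo source.
inner_model = pipeline.model.model  # the wrapped HF seq2seq (T5) model
tokenizer = pipeline.tokenizer

# Toy data: one series split into past context and future target.
context = torch.randn(1, 256)  # past values
target = torch.randn(1, 64)    # future values to predict

# Tokenize; the scale computed on the context must be reused for the target.
input_ids, attention_mask, scale = tokenizer.context_input_transform(context)
labels, labels_mask = tokenizer.label_input_transform(target, scale)
labels = labels.clone()
labels[labels_mask == 0] = -100  # ignore padded positions in the loss

optimizer = torch.optim.AdamW(inner_model.parameters(), lr=1e-5)

inner_model.train()
output = inner_model(
    input_ids=input_ids,
    attention_mask=attention_mask,
    labels=labels,  # HF computes the cross-entropy loss internally
)
output.loss.backward()
optimizer.step()
print(f"loss: {output.loss.item():.4f}")
```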
The training and fine-tuning script was added in #63, together with the configurations that were used for pretraining the models on HuggingFace. We still need to add proper documentation, but roughly speaking:
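- modify one of the YAML configs that ship alongside the script (data paths, context/prediction length, learning rate, number of steps);
- point the training script at it with something like `python scripts/training/train.py --config <your-config>.yaml` (the exact flag names may differ — check the script's argument list);
- to fine-tune rather than pretrain from scratch, start from one of the released checkpoints instead of a random initialization.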
Happy training! cc @colemanhindes @Saeufer @HALF111 @TPF2017 @0xrushi @iganggang |
More detailed examples at: https://github.com/amazon-science/chronos-forecasting/tree/main/scripts |
I get this error when training chronos-t5-small:
@Alonelymess that means your GPU does not support the TF32 floating-point format. Please run training/fine-tuning with TF32 disabled.
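If you need to turn it off manually in your own script (rather than through the training config), the standard PyTorch switches are:

```python
import torch

# TF32 is only supported on Ampere (compute capability >= 8.0) and newer GPUs;
# on older hardware these flags must be off.
torch.backends.cuda.matmul.allow_tf32 = False
torch.backends.cudnn.allow_tf32 = False
```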
I only have a single time series, and I want to do forecasting on it. Does it make sense to do fine-tuning in this case? I was thinking maybe I could split the data chronologically (use data from 2022 to 2023 for training and data from 2023 to 2024 for testing), but I'm not sure if that makes sense.
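Concretely, the split I have in mind would look like this (a sketch with made-up file and column names):

```python
import pandas as pd

# Hypothetical layout: one series of timestamped values.
series = pd.read_csv(
    "my_series.csv", index_col="timestamp", parse_dates=True
)["value"]

# Chronological split: fine-tune on the older data,
# hold out the most recent year for testing.
train = series[:"2022-12-31"]
test = series["2023-01-01":]
```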