Our current Prompt2Model pipeline uses a fixed set of hyperparameters for all tasks (shown here).
To robustly handle different tasks, we want to implement automated hyperparameter selection by computing metrics on the validation splits of the retrieved and generated datasets for various configurations of a given model. Once put in place, our architecture diagram will look like this (unimplemented components are in blue):
There are two primary design decisions required in implementing this component:
What is the space of hyperparameters to choose from? We could define a default space of parameters (e.g. learning rate between 1e-6 and 1e-3, optimizer one of AdamW, Adam, or SGD w/ momentum). If we wanted to be more exploratory, we could even ask an LLM to suggest a space of parameters to consider for the task. We could also include the choice of base model to finetune as a hyperparameter (e.g. try the top-5 model architectures returned by the model retriever and choose the one with the best validation metrics).
How do we select the best hyperparameters on the validation data? To avoid implementing this ourselves, we could use a library like Hyperopt. As a simple, hand-rolled option, we could also just do "random search": sample random configurations from the given space of hyperparameters and choose the configuration with the best validation metrics (a rough sketch of both the space and this loop is given below).
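As a rough illustration of both decisions, here is a minimal hand-rolled random-search sketch, not part of the current codebase. The ranges, the number of trials, and the `train_and_evaluate` callable (which would wrap our existing trainer and evaluator and return a validation metric) are all assumed placeholders:

```python
import random

# Hypothetical default search space (decision 1). The ranges and choices here are
# illustrative; they could instead be proposed by an LLM for the task at hand, or
# extended with a "base_model" dimension over the top-k retrieved architectures.
SEARCH_SPACE = {
    "learning_rate": lambda: 10 ** random.uniform(-6, -3),  # log-uniform in [1e-6, 1e-3]
    "optimizer": lambda: random.choice(["AdamW", "Adam", "SGD w/ momentum"]),
}


def random_search(space, train_and_evaluate, num_trials=10):
    """Decision 2: hand-rolled random search.

    `train_and_evaluate` is an assumed callable that fine-tunes the model with a
    sampled config and returns a validation metric (higher is better).
    """
    best_config, best_metric = None, float("-inf")
    for _ in range(num_trials):
        config = {name: draw() for name, draw in space.items()}
        metric = train_and_evaluate(config)
        if metric > best_metric:
            best_config, best_metric = config, metric
    return best_config, best_metric
```

If we adopt Hyperopt instead, the same space maps onto `hp.loguniform` / `hp.choice`, and the selection loop is replaced by a call to `fmin` with an objective that returns the negated validation metric (since `fmin` minimizes).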