Hello everyone,
I need your help. I'm using the NBEATSx model, and I want to save the model during training based on `valid_loss`, so I'm using the following callback:
```python
import torch
from pytorch_lightning.callbacks import ModelCheckpoint

from neuralforecast import NeuralForecast
from neuralforecast.models import NBEATSx

checkpoint_callback = ModelCheckpoint(
    monitor='valid_loss',
    dirpath='checkpoints',
    filename='alpha{epoch:02d}-{valid_loss:.5f}',
    save_top_k=1,
    verbose=True,
    mode='min',
)

model = NBEATSx(
    h=step_forward,
    callbacks=[checkpoint_callback],
    enable_checkpointing=True,
    # …
    lr_scheduler=torch.optim.lr_scheduler.StepLR,
    lr_scheduler_kwargs={
        'step_size': step_by_epoch,
        'gamma': 0.1
    },
    # …
)

nf = NeuralForecast(
    models=[model],
    freq=240
)
nf.fit(df=train_data, val_size=valid_size)
```
When I try to load my model with:

```python
checkpoint = NeuralForecast.load(path='./checkpoints/test_run', verbose=True)
```
I get the following message:

```
Found more than one stateful callback of type `EarlyStopping`. In the current
configuration, this callback does not support being saved alongside other
instances of the same type. Please consult the documentation of `EarlyStopping`
regarding valid settings for the callback state to be checkpointable.
HINT: The `callback.state_key` must be unique among all callbacks in the Trainer.
```

How can I fix this problem? Thank you very much!
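
Edit: if I read the hint correctly, the checkpoint ends up holding two `EarlyStopping` callbacks whose `state_key` values collide (I suspect NeuralForecast may add its own `EarlyStopping` internally), and Lightning requires each stateful callback's `state_key` to be unique. A minimal sketch of a workaround, assuming one subclasses `EarlyStopping` to mix a custom name into the key (the `NamedEarlyStopping` class below is my own hypothetical helper, not part of NeuralForecast or Lightning):

```python
from pytorch_lightning.callbacks import EarlyStopping

class NamedEarlyStopping(EarlyStopping):
    """Hypothetical EarlyStopping subclass whose state_key includes a
    user-chosen name, so several instances can coexist in one checkpoint."""

    def __init__(self, name: str, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._name = name

    @property
    def state_key(self) -> str:
        # Lightning uses state_key to index each callback's state in the
        # checkpoint; a per-instance name makes the keys unique.
        return f"NamedEarlyStopping[{self._name}]"
```

I haven't verified this against NeuralForecast's internals, so treat it as a sketch rather than a confirmed fix.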
Replies: 1 comment

Hello, I have a similar situation. Did you manage to solve the problem?