As per the AutoTS tutorial, metric_weighting allows the user to choose from a list of available metrics, but there is no option to use a metric that is not already among the dictionary keys.
I would like to use the MASE metric (Mean Absolute Scaled Error). Is there a plan to allow integration of custom metrics, or perhaps to add metrics requested by users?
Thank you!
There is no current option to support custom metrics.
Yes, it is on my to-do list, but it is tricky because of how it is all set up.
For your immediate needs:
SMAPE should correlate well with MASE, and uwmse is another scaled error that would work. Between those, you should be able to get model selection that is effectively the same as MASE, I should think.
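A minimal sketch of what that weighting might look like, assuming the *_weighting key names shown in the AutoTS tutorial (check the exact keys, and whether uwmse_weighting is available, against your installed version; the other constructor arguments here are just placeholders):

```python
from autots import AutoTS

# Weight model selection toward scaled/scale-free errors as a MASE stand-in.
# Key names follow the tutorial's metric_weighting convention; verify them
# against your AutoTS version.
metric_weighting = {
    'smape_weighting': 5,   # scale-free percentage error, correlates with MASE
    'uwmse_weighting': 3,   # another scaled error, if available in your version
    'mae_weighting': 1,     # small absolute-error component as a tiebreaker
}

model = AutoTS(
    forecast_length=14,
    frequency='infer',
    metric_weighting=metric_weighting,
    max_generations=5,
)
# model = model.fit(df)  # df: wide pandas DataFrame with a DatetimeIndex
```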
You can access model.initial_results.per_series_mae and scale that pretty easily if you want to view results. For even more custom metrics, you can run model.retrieve_validation_forecasts(models=[list of model ids]) and then calculate whatever you want afterwards.
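For example, a rough sketch of computing MASE yourself after retrieving validation forecasts. The mase helper and its seasonality argument are purely illustrative, and the exact layout of what retrieve_validation_forecasts returns can differ by version, so the alignment step is left as comments to adapt:

```python
import pandas as pd

def mase(actual: pd.DataFrame, forecast: pd.DataFrame,
         train: pd.DataFrame, seasonality: int = 1) -> pd.Series:
    """Per-series Mean Absolute Scaled Error.

    Scales the forecast MAE by the in-sample MAE of a seasonal naive forecast
    (lag = seasonality) on the training data. All inputs are wide DataFrames
    with series as columns.
    """
    forecast_mae = (actual - forecast).abs().mean()
    naive_mae = train.diff(seasonality).abs().mean()
    return forecast_mae / naive_mae

# After model.fit(df), something along these lines (ids come from
# model.initial_results.model_results):
# val_forecasts = model.retrieve_validation_forecasts(models=[some_model_id])
# align val_forecasts with the held-out actuals and the training slice, then:
# mase_scores = mase(actual_slice, forecast_slice, train_slice, seasonality=7)
```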