We will also want to consider bottom-up aggregation of state forecasts, but even if/when we find an approach we like, it will be useful to have a national fit as a comparison and/or fallback.
@dylanhmorris I think that optimized forecast reconciliation might be worth a test here, in comparison to the optimized copula bottom-up approach we tried last year at points.
Forecast hierarchy
The structure:
We fit each state/territory (base unit).
We fit each CDC region (intermediate unit).
We fit national (top-level unit).
We then combine the forecasts across levels using reconciliation.
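The hierarchy above can be encoded as a summation matrix that maps bottom-level (state) forecasts to every level, which also makes bottom-up aggregation a one-line operation. A minimal sketch with a hypothetical toy hierarchy (1 national, 2 regions, 2 states per region; all names and numbers illustrative):

```python
import numpy as np

# Hypothetical toy hierarchy: 1 national, 2 regions, 2 states per region.
# The summation matrix S maps bottom-level (state) values to all 7 series.
S = np.array([
    [1, 1, 1, 1],  # national = sum of all states
    [1, 1, 0, 0],  # region 1 = states 1 + 2
    [0, 0, 1, 1],  # region 2 = states 3 + 4
    [1, 0, 0, 0],  # state 1
    [0, 1, 0, 0],  # state 2
    [0, 0, 1, 0],  # state 3
    [0, 0, 0, 1],  # state 4
])

# Bottom-up aggregation: keep only the state forecasts and sum upward.
state_forecasts = np.array([20.0, 22.0, 24.0, 27.0])
all_levels = S @ state_forecasts
print(all_levels[:3])  # national, region 1, region 2 -> [93. 42. 51.]
```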
One approach could be sample-by-sample minimum trace (MinT) reconciliation. This would involve matching forecast samples across levels (e.g. by .draw), then applying MinT to each draw before re-ensembling. This corresponds to a Bayesian view of MinT, carrying forecast uncertainty through the reconciliation. The main upside here is that the hts R package has a function for this with many options, and Python versions exist, e.g. https://github.com/AngelPone/pyhts
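The per-draw idea above can be sketched with numpy. This is a hand-rolled MinT projection, y_tilde = S (S' W⁻¹ S)⁻¹ S' W⁻¹ y_hat, applied to each aligned draw, not the hts or pyhts API; the hierarchy, forecast values, and the use of the empirical draw covariance as W are all illustrative assumptions:

```python
import numpy as np

# Hypothetical hierarchy: 1 national, 2 regions, 4 states (2 per region).
# S maps bottom-level (state) values to all 7 series.
S = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
])

def mint(y_hat, W):
    """MinT projection: y_tilde = S (S' W^-1 S)^-1 S' W^-1 y_hat."""
    Winv = np.linalg.inv(W)
    G = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)
    return S @ (G @ y_hat)

rng = np.random.default_rng(0)
# 1000 aligned draws of (incoherent) base forecasts for the 7 series,
# standing in for samples matched across levels by .draw.
draws = rng.normal(loc=[100, 45, 50, 20, 22, 24, 27], scale=5, size=(1000, 7))
W = np.cov(draws, rowvar=False)  # stand-in for the base-error covariance
reconciled = np.apply_along_axis(mint, 1, draws, W)
# Every reconciled draw is coherent: national equals the sum of its states.
assert np.allclose(reconciled[:, 0], reconciled[:, 3:].sum(axis=1))
```

With W set to the identity this reduces to OLS reconciliation, a common special case when a good estimate of the base-error covariance is not available.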