Write an archiver for NREL Cambium expansions of Standard Scenarios #565

Open · 10 tasks · krivard opened this issue Jan 31, 2025 · 1 comment · May be fixed by #569

krivard commented Jan 31, 2025

Motivation and context:

NREL selects a subset of their Standard Scenarios (see #561) for expansion and deeper analysis using Cambium, a specialized modeling and analysis tool. The primary contribution of this dataset is hourly long-run marginal emissions rates.

The results are available via the Scenario Viewer and could be downloaded using the same or similar code we use for Standard Scenarios, but the datasets are much larger: single-digit to tens of GB zipped.
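
For context, the core download step here is just a streamed HTTP fetch of one zipped report at a time. A minimal sketch, assuming a placeholder URL (the real Scenario Viewer endpoint differs):

```python
import requests


def download_report(url: str, dest: str, chunk_size: int = 1 << 20) -> None:
    """Stream a multi-GB zipped Cambium report to disk in 1 MiB chunks,
    without holding the whole file in memory."""
    with requests.get(url, stream=True, timeout=60) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as f:
            for chunk in resp.iter_content(chunk_size=chunk_size):
                f.write(chunk)


# Hypothetical URL -- not the real Scenario Viewer endpoint.
download_report("https://scenarioviewer.nrel.gov/example.zip", "cambium_example.zip")
```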

Requirements for archiving

To be archived on Zenodo, a dataset must be:

  • published under an open license that permits reuse and redistribution
  • less than 50 GB in size (when zipped) - this is the sketchy bit; we might have to split the archive into 5-year batches
  • relevant to energy modeling and research

Checklist for archive creation

Based on the README documentation on creating a new archive:

Links to published archives:

Include a link to the published sandbox archive for review.

@krivard krivard linked a pull request Jan 31, 2025 that will close this issue
@zaneselvans zaneselvans linked a pull request Feb 1, 2025 that will close this issue

krivard commented Feb 12, 2025

Challenges so far:

  • The server keeps timing out. Not sure if this is a soft IP-based rate limit or genuine server load/unreliability. Bumping the initial backoff to 60 seconds helps a little, but requests still fail sometimes (see the backoff sketch after this list).
  • The data is too big: just two years would put us over the Zenodo limit, so we'd have to split them into annual archives, which seems annoying:

    Report              Size in GB (zipped)
    Cambium 2020 Total  17.61
    Cambium 2021 Total  28.36
    Cambium 2022 Total  36.41
    Cambium 2023 Total   6.58
    Grand Total         88.96

  • An analysis of file sizes suggests the ALL - ALL - ALL files are not easier-to-handle packages of everything, but rather ...whatever the opposite of microdata is. They're aggregates that can't be disaggregated.
  • An analysis of the .zip contents of the 2023 files suggests there is no overlap between zip files; every member file name appears exactly once, so there's nothing to deduplicate (see the zip-inventory sketch below). 😞
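
For the timeout issue, here's a minimal sketch of the retry-with-backoff approach (60-second initial delay, doubling each attempt); the function name and parameters are illustrative, not the archiver's actual API:

```python
import time

import requests


def get_with_backoff(url: str, retries: int = 5, base_delay: float = 60.0) -> requests.Response:
    """GET `url`, retrying on timeouts/connection/HTTP errors with
    exponential backoff starting at `base_delay` seconds (60s, 120s, 240s, ...)."""
    for attempt in range(retries):
        try:
            resp = requests.get(url, timeout=120, stream=True)
            resp.raise_for_status()  # treat HTTP error statuses as failures too
            return resp
        except requests.RequestException:
            if attempt == retries - 1:
                raise  # out of retries; surface the last error
            time.sleep(base_delay * 2**attempt)
    raise AssertionError("unreachable")
```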
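
The overlap analysis in the last bullet boils down to inventorying member names across all the downloaded zips; roughly (the directory path is illustrative):

```python
import zipfile
from collections import Counter
from pathlib import Path


def member_name_counts(zip_dir: str) -> Counter:
    """Count how often each member file name appears across all zips in
    `zip_dir`; any count > 1 would mean overlap we could deduplicate."""
    counts: Counter = Counter()
    for path in sorted(Path(zip_dir).glob("*.zip")):
        with zipfile.ZipFile(path) as zf:
            counts.update(zf.namelist())
    return counts


counts = member_name_counts("cambium_2023_downloads")  # illustrative path
dupes = {name for name, n in counts.items() if n > 1}
print(f"{len(dupes)} duplicated member names")  # for 2023 this came out to 0
```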
