
add snowstorm_dataset and IceCubehosted class #783

Open · wants to merge 2 commits into base: main

Conversation

@sevmag commented Jan 27, 2025

  • Adds curated datasets hosted on the IceCube cluster (download requires a username and password).
  • Implements the curated IceCube-hosted SnowStorm dataset (a usage sketch follows below).
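
A minimal usage sketch of the new class; the import path, run-id values, and dataloader settings below are assumptions for illustration, not taken from this PR:

    # Hypothetical usage -- import path and run ids are assumptions.
    from graphnet.datasets import SnowStormDataset  # assumed import location

    dm = SnowStormDataset(
        run_ids=[22010, 22011],  # placeholder run ids
        train_dataloader_kwargs={"batch_size": 16, "num_workers": 4},
        validation_dataloader_kwargs={"batch_size": 16},
        test_dataloader_kwargs={"batch_size": 16},
    )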

validation_dataloader_kwargs: Optional[Dict[str, Any]] = None,
test_dataloader_kwargs: Optional[Dict[str, Any]] = None,
):
"""Initialize SnowStorm dataset."""
Collaborator:

I think we should explain the arguments here. Most can be repeated from the parent class, but run_ids is new.
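
For example, a rough sketch of a documented constructor; the argument descriptions and the List[int] type for run_ids are assumptions based on the snippet above:

    from typing import Any, Dict, List, Optional


    class SnowStormDataset:  # stand-in for the class added in this PR
        """Sketch of a constructor with documented arguments."""

        def __init__(
            self,
            run_ids: List[int],
            train_dataloader_kwargs: Optional[Dict[str, Any]] = None,
            validation_dataloader_kwargs: Optional[Dict[str, Any]] = None,
            test_dataloader_kwargs: Optional[Dict[str, Any]] = None,
        ) -> None:
            """Initialize SnowStorm dataset.

            Args:
                run_ids: Run IDs of the hosted SnowStorm files to use.
                    New relative to the parent class.
                train_dataloader_kwargs: Keyword arguments for the training
                    DataLoader.
                validation_dataloader_kwargs: Keyword arguments for the
                    validation DataLoader.
                test_dataloader_kwargs: Keyword arguments for the test
                    DataLoader.
            """
            self._run_ids = run_ids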

Author (@sevmag):

Yes, I agree!

Collaborator @RasmusOrsoe left a comment:

Thank you for this very clean contribution @sevmag!

Before we proceed, do you have comments on this @Aske-Rosted? If you have a wiki for the dataset/conversion, maybe we should link to that instead of the generic snowstorm wiki?

assert match
run_id = match.group(1)

query_df = query_database(
Collaborator:

How fast is this query in your experience? If the database is large, it might take minutes to execute. In that case, we could consider providing a .parquet file with the ids in it.
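
A hedged sketch of that suggestion: keep the per-run ids in a .parquet file and only fall back to the database query on a cache miss. The function name and signature here are placeholders, not the PR's API:

    import os
    from typing import Callable

    import pandas as pd


    def load_event_ids(
        run_id: str,
        cache_dir: str,
        query_fn: Callable[[str], pd.DataFrame],
    ) -> pd.DataFrame:
        """Return the ids for a run, preferring a parquet cache over the DB query."""
        cache_path = os.path.join(cache_dir, f"{run_id}_event_ids.parquet")
        if os.path.exists(cache_path):
            # Fast path: read pre-computed ids shipped with the dataset.
            return pd.read_parquet(cache_path)
        # Slow path: the potentially minutes-long database query, cached for
        # subsequent runs.
        df = query_fn(run_id)
        df.to_parquet(cache_path)
        return df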

Author (@sevmag):

Yeah, you are right. It depends on the files, but some take over 2 minutes, and it's probably the query.

Author (@sevmag):

UPDATE: The bottleneck when initializing this SnowStormDataset implementation is mainly the parent-class initialization called here (especially when selecting run IDs with many files):

# Instantiate
super().__init__(
dataset_reference=dataset_ref,
dataset_args=dataset_args,
train_dataloader_kwargs=train_dataloader_kwargs,
validation_dataloader_kwargs=validation_dataloader_kwargs,
test_dataloader_kwargs=test_dataloader_kwargs,
selection=selec,
test_selection=test_selec,
)

This is contrary to my belief that it was the prepare_args function snippet referenced above. I therefore decided to stick with this version, keeping in mind that the prepare_args function can be made more efficient later.
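
For anyone wanting to reproduce the measurement, a small timing helper like the following could be wrapped around the two phases; the commented calls are illustrative, and the real argument lists differ:

    import time
    from contextlib import contextmanager


    @contextmanager
    def timed(label: str):
        """Print the wall-clock time spent inside the block."""
        start = time.perf_counter()
        try:
            yield
        finally:
            print(f"{label}: {time.perf_counter() - start:.1f} s")


    # Illustrative use inside the constructor:
    # with timed("prepare_args"):
    #     dataset_ref, dataset_args, selec, test_selec = self._prepare_args(...)
    # with timed("super().__init__"):
    #     super().__init__(...)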

"""

_experiment = "IceCube SnowStorm dataset"
_creator = "Severin Magel"
Collaborator:

Should probably mention @Aske-Rosted here :-)

Author (@sevmag):

True! Changed it

"""Initialize SnowStorm dataset."""
self._run_ids = run_ids
self._zipped_files = [
os.path.join(self._data_root_dir, f"{s}.tar.gz") for s in run_ids
Collaborator:

I think there are currently 28 valid run IDs, which is a subset of the overall available sets. I think the current code assumes the user knows this. Perhaps it would be wise to include an assertion that checks that the user-given IDs actually exist.
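
A possible shape for such a check; the set of valid ids below is a placeholder, and the real list of 28 hosted run IDs would come from the dataset metadata:

    from typing import List, Set

    # Placeholder values -- the actual 28 hosted run ids are not listed in this PR.
    AVAILABLE_RUN_IDS: Set[int] = {22010, 22011, 22012}


    def validate_run_ids(run_ids: List[int]) -> None:
        """Fail early with a clear message if a requested run id is not hosted."""
        unknown = sorted(set(run_ids) - AVAILABLE_RUN_IDS)
        assert not unknown, (
            f"Unknown run id(s) {unknown}. "
            f"Available run ids: {sorted(AVAILABLE_RUN_IDS)}"
        )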
