[Cloud Deployment IVc] Deploy NeuroConv in AWS with EFS #1086
base: rclone_aws
Conversation
for more information, see https://pre-commit.ci
if docker_tag is None or docker_tag == "latest":
    date = datetime.now().strftime("%Y-%m-%d")
Minor cleanups here, caused by the earlier removal of the creation-date tag.
@h-mayorquin This one is ready too! Proof of key tests passing: https://github.com/catalystneuro/neuroconv/actions/runs/11089506338/job/30810706117?pr=1086 Of course, the data is just sitting on the EFS until #1089 allows direct upload to DANDI. There is a better idea at https://github.com/catalystneuro/neuroconv/pull/1086/files#diff-fbec866f95a8edcf190ce00113773906f58d29fdf491a2c27dbf57dedecf9187R183 that I did not have time to figure out a way to implement (probably another small helper function and CLI entrypoint, called from the same neuroconv docker image).
available_efs_volumes = efs_client.describe_file_systems()
matching_efs_volumes = [
    file_system
    for file_system in available_efs_volumes["FileSystems"]
    for tag in file_system["Tags"]
    if tag["Key"] == "Name" and tag["Value"] == efs_volume_name
]
max_iterations = 10
This middle-man EFS management is rather annoying, but without it there were bugs with duplicate volumes. A better approach in the future might be to pass an EFS ID directly, overriding the name-pattern-based search and creation.
# Cleanup EFS after job is complete - must clear mount targets first, then wait before deleting the volume
efs_volumes = efs_client.describe_file_systems()
matching_efs_volumes = [
    file_system
    for file_system in efs_volumes["FileSystems"]
@h-mayorquin Also, in a follow-up you may wish to add logic that detects whether the EFS volume already existed before this method was called; in that case we might not want to clean it up automatically (it could be shared across jobs or something).
And/or always control the behavior with an extra argument, perhaps cleanup: bool. Given the billing consequences, I tend to err on the side of always performing cleanup, hence the current logic.
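The ordering constraint noted in the diff comment (mount targets first, then a wait, then the volume) together with the suggested `cleanup` opt-out could be captured as a pure teardown plan; the function and action names here are illustrative assumptions, not NeuroConv API:

```python
def efs_teardown_plan(
    mount_target_ids: list[str],
    efs_id: str,
    cleanup: bool = True,
) -> list[tuple[str, str]]:
    """Return the ordered actions to tear down one EFS volume:
    delete every mount target, wait for those deletions to settle,
    and only then delete the file system itself."""
    if not cleanup:
        # Hypothetical opt-out for volumes shared across jobs.
        return []
    plan = [("delete_mount_target", mt_id) for mt_id in mount_target_ids]
    plan.append(("wait_for_mount_targets_deleted", efs_id))
    plan.append(("delete_file_system", efs_id))
    return plan
```

Keeping the plan separate from the boto3 calls that execute it makes the ordering (and the billing-motivated default of `cleanup=True`) easy to test without touching AWS.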
Codecov Report
Attention: Patch coverage is
Additional details and impacted files

@@             Coverage Diff             @@
##           rclone_aws    #1086   +/-  ##
===========================================
  Coverage      90.58%   90.58%
===========================================
  Files            129      129
  Lines           8164     8167    +3
===========================================
+ Hits            7395     7398    +3
  Misses           769      769
Flags with carried forward coverage won't be shown.
The most important step in the process, combining each of the previous developments.