Hello,
I have been using ESMValTool to analyse CMIP6 data on JASMIN, and I am running into out-of-memory (OOM) errors with daily output from one particular model.
Specifically, I am trying to process this file:
/badc/cmip6/data/CMIP6/ScenarioMIP/CNRM-CERFACS/CNRM-CM6-1-HR/ssp585/r1i1p1f2/day/tas/gr/v20191202/tas_day_CNRM-CM6-1-HR_ssp585_r1i1p1f2_gr_20650101-21001231.nc, which is 6.4GB in size (so big, but not crazy big). The same script works for other models, e.g. tas_day_CNRM-ESM2-1_ssp585_r1i1p1f2_gr_20150101-21001231.nc, which is 2.2GB in size.
The run fails while applying these preprocessors:
preprocessors:
  calculate_base_pp1: # Lusaka
    custom_order: true
    # Select area - all timesteps
    extract_point:
      latitude: -15.5
      longitude: 28.25
      scheme: 'linear'
    # Get ltmean for late 21C
    extract_time:
      start_year: 2070
      start_month: 1
      start_day: 1
      end_year: 2100
      end_month: 12
      end_day: 31
I have requested 32GB of memory for the job on the sci clusters, and I have also tried submitting it to the par-single queue requesting 6 nodes (although I'll admit I have not used that queue before, so I'm not sure what that actually equates to in terms of resources). Neither of these worked. Do you have any advice on how much memory processing a file of this size should need, or is there anything else I can do to avoid OOM errors?
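For what it's worth, a back-of-the-envelope estimate suggests why 32GB might be marginal: if the 6.4GB file is decompressed fully into memory, the raw array is considerably larger. The sketch below assumes (and this is an assumption, not something I have checked in the file header) that the "gr" grid for this model is a regular 0.5-degree grid of 360 x 720 points stored as 32-bit floats:

```python
# Rough estimate of the memory needed to hold the full tas array in RAM.
# ASSUMPTION: the "gr" grid is 360 x 720 (0.5 degree) and values are float32;
# check the real dimensions with `ncdump -h` on the file before trusting this.
from datetime import date

nlat, nlon = 360, 720   # assumed regular 0.5-degree grid
bytes_per_value = 4     # float32

# The file covers 2065-01-01 to 2100-12-31 at daily frequency.
ndays = (date(2100, 12, 31) - date(2065, 1, 1)).days + 1

total_bytes = ndays * nlat * nlon * bytes_per_value
print(f"{ndays} days -> {total_bytes / 1024**3:.1f} GiB uncompressed")
```

Under those assumptions the uncompressed field alone is in the low tens of GiB, and any preprocessing step that realises the data plus an intermediate copy could easily exceed a 32GB allocation.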
Many thanks!
Alan