Plev compile fix #107
base: master
Conversation
… being merged into master. Need to make sure that the mkmf template used to compile the plev script is the same as the one used for the model.
As we discussed yesterday, maybe @wseviour could do the code review for me?
… will remove the restarts from all experiments within that folder.
I've also added a change to the script created for removing restart files for data clean-up.
General ideas all look good. I have not run the code, so I assume it all works as expected. I am nervous about scripts that remove things, so I would probably put in a few more checks, along the lines of "Are you sure you want to delete filename ...?" followed by a y/n prompt. Unless you feel this is a standard thing to do without any risks.
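For reference, a minimal sketch of the kind of y/n confirmation described above; the function name and file list are placeholders, not code from this PR:

```python
import os

def confirm_delete(filename):
    """Prompt before a file is removed; only an explicit 'y' counts as yes."""
    answer = input("Are you sure you want to delete {}? [y/n] ".format(filename))
    return answer.strip().lower() == 'y'

# Placeholder list; in the clean-up script this would come from the restart glob.
files_to_remove = ['/tmp/res0001.tar.gz', '/tmp/res0002.tar.gz']
for entry in files_to_remove:
    if confirm_delete(entry) and os.path.exists(entry):
        os.remove(entry)
```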
@@ -5,6 +5,9 @@ cd ./exec

source $GFDL_BASE/src/extra/env/$GFDL_ENV

../bin/mkmf -p plev.x -t ../bin/mkmf.template.ia64 -c "-Duse_netCDF" -a ../src ../src/path_names ../src/shared/mpp/include ../src/shared/include
compiler=${GFDL_MKMF_TEMPLATE:-ia64}
Do we want to add any other templates to the folder or is the idea people can add them if they want? Just wondering if we have other templates sitting on different branches that could come for free in the PR.
import subprocess

start_time=time.time()
base_dir=os.environ['GFDL_DATA']
This is a nice change. In the other example this was a hard-coded path name.
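As an aside, a hedged sketch of how the GFDL_DATA lookup in the diff above could fail with a clearer message when the environment variable is missing; the error handling is illustrative and not part of the PR:

```python
import os

# Same GFDL_DATA variable as in the diff above; the fallback message is an assumption.
base_dir = os.environ.get('GFDL_DATA')
if base_dir is None:
    raise RuntimeError('GFDL_DATA is not set; source the Isca environment before running this script.')
```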
plevel_call(nc_file_in,nc_file_out, var_names = var_names[avg_or_daily], p_levels = plevs[avg_or_daily], mask_below_surface_option=mask_below_surface_set)

print('execution time', time.time()-start_time)
This is much cleaner. Thanks for the example.
expname = '/'+exp_name+'/'

exp_object = temporary_exp_object(basedir, workdir, datadir, exp_name)

return exp_object


def keep_only_certain_restart_files(exp_object, max_num_files, interval=12):
Why comment this function out rather than delete it? Do you want to keep it as legacy code?
# sh.ls(sh.glob(P(self.workdir,'restarts','res_*.cpio'))) #TODO get max_num_files calculated in line, rather than a variable to pass.

if max_num_files is None:
    month_list = glob(P(exp_object.datadir,exp_object.expname, 'restarts')+'/res*.tar.gz')
On a non-Unix server (for example, if you are running on a local machine), would it still create a tar.gz?
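If the archive format does turn out to vary by platform, one illustrative way to make the glob tolerant of more than one suffix; the directory and extension list below are assumptions, not taken from the PR:

```python
import os
from glob import glob

# Collect restart archives without assuming a single '.tar.gz' suffix.
restart_dir = os.path.join(os.environ.get('GFDL_DATA', '.'), 'some_experiment', 'restarts')
patterns = ['res*.tar.gz', 'res*.tar', 'res*.cpio']
month_list = sorted(f for pattern in patterns for f in glob(os.path.join(restart_dir, pattern)))
print(month_list)
```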
#Then we remove them.
for entry in files_to_remove:
for entry in files_to_remove[1:-1]:
This would benefit from an assert to make sure there is at least one element to loop over.
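A sketch of the suggested guard: because the loop slices off the first and last entries, the list needs more than two elements for there to be anything to remove. The file names below are placeholders, not values from the PR:

```python
# Placeholder list standing in for the glob result built earlier in the script.
files_to_remove = ['res0001.tar.gz', 'res0002.tar.gz', 'res0003.tar.gz']

# The slice [1:-1] keeps the first and last restarts, so more than two files
# are needed for the loop body to run at all.
assert len(files_to_remove) > 2, 'Expected more than two restart files; nothing would be removed.'

for entry in files_to_remove[1:-1]:
    print('would remove', entry)
```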
# exp_name_list = ['giant_drag_exp_chai_values_without_dc_bug_latest_1']
# exp_name_list = ['aquaplanet_qflux_control_1']
# exp_name_list = ['']
exp_name_list = glob('/disca/share/sit204/data_from_isca_cpu/*/')
Could this be a non-hard-coded option? If not, could you put it at the top of the script so it is easy for a user to open and change?
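One possible way to avoid the hard-coded path, sketched with argparse; the option name is illustrative, with its default set to the path from the diff above:

```python
import argparse
from glob import glob

parser = argparse.ArgumentParser(description='Clean up restart files from Isca experiment directories.')
parser.add_argument('--data-dir', default='/disca/share/sit204/data_from_isca_cpu/',
                    help='directory containing one sub-directory per experiment')
args = parser.parse_args()

# One entry per experiment directory found under the chosen data directory.
exp_name_list = glob(args.data_dir.rstrip('/') + '/*/')
print(exp_name_list)
```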
Just a quick note on this. In Oxford I'm currently using the -Duse_netCDF3 option when compiling Isca with Intel compilers. This is a temporary snag deriving from the fact that here the netCDF library compatible with Intel 2018 compilers was compiled without netCDF4 enabled. I just thought I'd mention that the above won't work with an mkmf template that uses -Duse_netCDF3, as pinterp_utilities.F90 and run_pressure_interp.F90 assume netCDF4 and don't have any #ifdef statements to switch to a 'netCDF3' mode. I can get around this by compiling the plevel interpolation stuff with gfortran (for which I can access a compatible netCDF4 library).
As per issue #105, it was noted by users in Bristol that the compile_plev_interpolation.sh script was giving errors. I found that the reason was that it was trying to use the mkmf template from the postprocessing/bin directory, rather than the mkmf template used to compile the model. I had fixed this problem on the pre_ictp_mods branch back in August 2018, but hadn't merged it in, so apologies for that.

I have now updated the compile script so that it picks the same mkmf template as the model uses, and I've also added a simplified version of my run_plevel.py script, which was also taken from the pre_ictp_mods branch. There is no need to run the trip test in this case, as no Fortran modifications have been made.