This repository has been archived by the owner on Oct 23, 2020. It is now read-only.

Performing online regression testing #1473

Open
pwolfram opened this issue Dec 15, 2017 · 2 comments

@pwolfram
Contributor

Currently, our nightly regression test suite runs in about 5 minutes on common hardware (macOS Sierra, LANL IC, LCFs).

Is it worth exploring whether we can perform automated regression testing on GitHub to help avoid the overhead of individual testing across an array of platforms? The hardest part is getting an MPAS installation to work on something like Jenkins or CircleCI, but we could consider simply paying the cost of building the whole workflow from source in order to make sure that everything works on a standard Linux machine, at least for tagged compiler, MPI, and library versions.

Even if the whole process takes several hours, it would at least document on GitHub whether successive commits are working or broken, so I think it is at least worth considering as a group.
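
For concreteness, a rough sketch of the kind of job I have in mind is below. The CI service (Travis here), helper-script names, and paths are placeholders for illustration, not a tested configuration.

```yaml
# Rough .travis.yml sketch: one Linux job that builds the model from source
# against pinned libraries and then runs the nightly suite.
# The ci/*.sh helper scripts are hypothetical and would need to be written.
language: generic
dist: trusty
install:
  - ./ci/install_miniconda.sh       # set up conda with a pinned netcdf / pnetcdf / PIO stack
  - ./ci/build_mpas_ocean.sh        # build the ocean core from source (gfortran + MPICH assumed)
script:
  - ./ci/run_nightly_regression.sh  # drive the COMPASS nightly regression suite
```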

@xylar
Contributor

xylar commented Dec 15, 2017

I would be interested in helping with this. I have a conda environment with specific versions of netcdf, netcdf-fortran, netcdf-cxx, parallel-netcdf, and PIO that works for me under Linux. I don't think automated testing would take hours; my guess is more like 30 to 45 minutes.
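
For illustration, the kind of thing I mean is sketched below. The package names, version pins, and channel are placeholders rather than my actual environment, and PIO may still need to be built separately since I'm not sure it is packaged on conda-forge.

```yaml
# Illustrative environment.yml sketch (not the actual tested environment)
name: mpas-ci
channels:
  - conda-forge
dependencies:
  - gfortran_linux-64     # conda-forge compiler metapackage
  - mpich
  - libnetcdf=4.4.*
  - netcdf-fortran=4.4.*
  - netcdf-cxx4
  - parallel-netcdf=1.8.*
```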

@pwolfram
Contributor Author

@xylar, this would be really great. Do you mind sharing the "hello-world" for your conda environment? We can probably use it to set up something like what we already have for MPAS-Analysis' CI. I can take a first stab at this, and we can work together on it if you are interested. The motivation here is to do some updating of the COMPASS test suite, since I'm already in it to update the LIGHT tests.

@mark-petersen, @vanroekel, @maltrud, @toddringler, @jonbob, @akturner, @matthewhoffman, @mgduda: do any of you have objections or recommendations we should consider as we move forward? The biggest concern that comes to mind is that, given how things are currently set up, we may have to enable CI for the whole repository, which I think implies we would need buy-in from the other cores. However, we could also set things up so that testing is only performed when certain branch-specific criteria are met, and a green check mark is produced otherwise (see the sketch below).
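
As a sketch of that last point, the CI job could bail out early, but still successfully, whenever the branch does not match the agreed criteria. The branch pattern, environment variable, and script name below are placeholders assuming a Travis-style setup.

```yaml
# Hypothetical gating step: skip the expensive suite on non-matching branches,
# but exit 0 so the commit still gets a green check mark.
script:
  - |
    if [[ "$TRAVIS_BRANCH" != ocean/* ]]; then
      echo "Branch $TRAVIS_BRANCH does not match the testing criteria; skipping suite."
      exit 0
    fi
  - ./ci/run_nightly_regression.sh   # hypothetical suite driver
```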
