
Convergence testing

K Clough edited this page Nov 16, 2021 · 2 revisions

Convergence testing with an AMR code can be challenging, but it is essential. The basic principle of convergence testing is that, once the solution is well resolved, running at a higher resolution should give roughly the same result, with an error that decreases at an order set by the finite difference and time evolution stencils used. In GRChombo this is 4th order (although we only get 3rd-order convergence at the boundaries between AMR grids because of the interpolation scheme used).
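As a standalone illustration of what 4th-order convergence means (generic finite differencing, not GRChombo code), halving the grid spacing of a 4th-order stencil should shrink the error by a factor of about 2^4 = 16:

```python
import numpy as np

def max_deriv_error(n):
    """Max error of the 4th-order central difference of sin(x) on n points."""
    x = np.linspace(0.0, 2.0 * np.pi, n)
    h = x[1] - x[0]
    f = np.sin(x)
    # 4th-order stencil: f'(x) ~ (f(x-2h) - 8f(x-h) + 8f(x+h) - f(x+2h)) / (12h)
    d = (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * h)
    return np.max(np.abs(d - np.cos(x[2:-2])))

# Halving h should shrink the error by roughly (1/2)^4 = 1/16
ratio = max_deriv_error(101) / max_deriv_error(201)
```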

Here is a rough guide to how we do convergence testing. You should refer to Alcubierre's book on numerical relativity for a more detailed treatment of convergence and convergence rates. Here we just give practical tips related to GRChombo.

  • What should I do my convergence testing on?

    The general rule is that you should convergence test the physical results that you want to present in any research publication for which you are using the code. So if you are calculating GW waveforms, you need to check that their amplitude and phase converge. If you are tracking the total mass of some matter, you should check that. In general, global quantities will give better convergence than local ones, but if you are showing plots of the field value at a particular point, you might want to convergence test that too.

    One thing you can and should always do a convergence test on is the constraint error - this has the advantage that you know it should be zero, so you can calculate the convergence rate with only 2 (rather than 3) simulation resolutions. Usually we take a norm of the constraint across the region of interest - this may involve zeroing it near the boundaries, where you will always get constraint error because Sommerfeld conditions are not constraint preserving. This may feel like cheating, but as long as the region of physical interest is converging well, this is good enough to support your physical result. Likewise, it is conventional to zero the constraint within any BH horizons to remove the spurious effect of the singularity (whose constraint error actually grows as your resolution improves).

    Note that there are functions for computing reductions (e.g. norms) of quantities across the grid in the AMRReductions class. These will often be very useful for convergence testing as they allow global quantities to be extracted. An example of their use can be seen in the BinaryBH example, where they are used to extract the constraint error norms.
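    As a post-processing sketch (the norm values below are purely illustrative, not from a real run), the convergence rate from just two resolutions follows from the ratio of the constraint norms:

    ```python
    import numpy as np

    def constraint_convergence_rate(norm_lr, norm_hr, res_ratio=2.0):
        """Since the continuum constraint is exactly zero, each norm is pure
        error, so ||H_LR|| / ||H_HR|| ~ res_ratio**p gives the rate p."""
        return np.log(norm_lr / norm_hr) / np.log(res_ratio)

    # Illustrative numbers: a 4th-order code doubling the resolution should
    # reduce the constraint norm by a factor of ~16
    rate = constraint_convergence_rate(1.6e-4, 1.0e-5)  # rate ~ 4
    ```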

  • How do I set the resolutions so the grid does the right thing?

    This is where AMR makes things hard. Say you are trying to double the resolution (in practice you would normally aim for a smaller ratio like 1.5, as doubling is expensive, but let's say double for illustration). Your goal should be to double the resolution of every region of the domain, so each grid should cover the same area as before but with double the resolution. Obviously you need to double the resolution of the coarsest grid and reduce your tagging thresholds, but depending on the tagging criteria you are using, getting the regions to match up may be very hard to achieve exactly. However, provided you roughly cover the same regions at double resolution, and in particular make sure any regions driving your convergence quantity are appropriately re-refined, the convergence test should work OK. In general we aim to adjust the thresholds until the finest grid covers the same area at double the resolution, and let the other levels do what they want to do.

  • I have run three versions of the simulation at 3 different resolutions HR, MR and LR, now what?

    You now need to take your outputs at each resolution and difference them, so ErrorH = HR - MR and ErrorL = MR - LR. ErrorL should be bigger than ErrorH - if it is not, you are in trouble and definitely need higher resolution. The error should also decrease at the appropriate rate: for 4th-order stencils and a doubling of resolution, ErrorH should be roughly (1/2)^4 = 1/16 of ErrorL. If ErrorH is much smaller than this, don't celebrate - it is likely that your low resolution run was not in the convergence regime, so you should again increase the resolution to check that you can achieve the appropriate rate. Note that you may need to use an interpolating function (e.g. in Python) to obtain comparable data at the same times, since the time steps won't match up at different resolutions.
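    The whole procedure can be sketched in Python (the function and variable names here are our own, and np.interp is just one simple choice of interpolating function for matching up the output times):

    ```python
    import numpy as np

    def observed_order(t_hr, f_hr, t_mr, f_mr, t_lr, f_lr, ratio=2.0):
        """Interpolate the MR and LR time series onto the HR output times,
        form ErrorH = HR - MR and ErrorL = MR - LR, and return the observed
        convergence order p from max|ErrorL| / max|ErrorH| ~ ratio**p."""
        f_mr_i = np.interp(t_hr, t_mr, f_mr)
        f_lr_i = np.interp(t_hr, t_lr, f_lr)
        err_h = np.max(np.abs(f_hr - f_mr_i))
        err_l = np.max(np.abs(f_mr_i - f_lr_i))
        return np.log(err_l / err_h) / np.log(ratio)
    ```

    For doubled resolution and 4th-order convergence the returned value should be close to 4. Applying the same ratio pointwise in time, rather than to the maxima, gives the convergence order as a function of time, which is often more informative.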

  • I don't get exactly the right convergence rate, but my results look exactly the same at each resolution and when I add more resolution I get the same thing. Help!

    This is when you need to apply some judgement. Anyone who does simulations will tell you that convergence is an art rather than a science, and it won't always be perfect. If you plot your quantity of interest and you can't see any difference in the 3 resolutions by eye (and you have tried increasing the resolution again with the same result), it might be fine and you have just reached some fundamental limit of numerical precision somewhere. You might especially lose some convergence order when there is a lot of rapid regridding in the AMR, which does introduce errors. But higher resolution should always give you smaller error, otherwise you definitely need to investigate more.

    You should also consider the accuracy of your initial data - data from the Initial Condition Solver, for example, is only 2nd-order accurate, and this error will tend to dominate at early times, so you may only recover 3rd/4th-order convergence at later times, once the evolution error starts to dominate.

    Also consider whether you have used some interpolation in extracting the data - this adds a source of error too. The AMRInterpolator uses 4th-order interpolation by default, so it should not spoil the convergence results. However, yt uses zeroth-order interpolation unless you tell it to do otherwise.
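    The interplay of these error sources can be seen in a toy model (illustrative numbers only, not GRChombo output): a 2nd-order initial-data error that decays away in time plus a persistent 4th-order evolution error gives roughly 2nd-order convergence early on and 4th-order convergence later:

    ```python
    import numpy as np

    def toy_observed_order(t, h, ratio=2.0):
        """Observed order for a toy error model: a decaying 2nd-order
        initial-data error plus a constant-in-time 4th-order evolution error."""
        def err(spacing):
            return np.exp(-t) * spacing**2 + spacing**4
        return np.log(err(ratio * h) / err(h)) / np.log(ratio)

    early = toy_observed_order(0.0, 0.1)   # initial-data error dominates
    late = toy_observed_order(10.0, 0.1)   # evolution error dominates
    ```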
