Error in exchange waterbalance coupler Ribasim-MF-MS #1863

Closed · Fixed by #1874
SnippenE opened this issue Oct 3, 2024 · 5 comments
Labels: blocker (Someone should do this ASAP), coupling (Coupling to other models), v1.0 Release!

Comments


SnippenE commented Oct 3, 2024

After the refactor in waterbalance error #1808, tests in the coupler are giving unexpected results. This could be related to the changes in how fluxes are integrated on the Ribasim side.

Possibly the same as #1851.

SnippenE added the coupling, blocker, and v1.0 Release! labels on Oct 3, 2024
visr (Member) commented Oct 3, 2024

@rleander made a reproducer in Python that I translated into Julia so I can directly test changes in the core. Here it is: https://gist.github.com/visr/78a864974619f9d445a3a0e1226310b0. It outputs figures like the following:

[figure: mwe-1-main]

These lines should stay at 0 and 25, where they start; instead the error gets worse over time. Resetting the cumulatives as in #1864 seems to help, but then how often should we do this, and who should do it in a BMI context? It is probably not a real solution, just like how forcing smaller timesteps or lower tolerances also only reduces the problem.
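For anyone skimming, here is a minimal sketch of what such a reset could look like, using the OrdinaryDiffEq integrator interface directly. This is not the gist and not Ribasim's BMI; the toy `bucket!` model, its parameters, and the one-day reset interval are made up for illustration:

```julia
# Minimal sketch (not the actual gist or Ribasim code): a toy bucket with a
# cumulative outflow state, reset after every coupling step.
using OrdinaryDiffEq

function bucket!(du, u, p, t)
    level, cum_outflow = u
    q_out = p.k * level        # simple linear outflow
    du[1] = p.q_in - q_out     # d(level)/dt
    du[2] = q_out              # d(cum_outflow)/dt keeps growing
end

p = (q_in = 2.5, k = 0.1)      # made-up parameters
prob = ODEProblem(bucket!, [25.0, 0.0], (0.0, 300.0), p)
integrator = init(prob, QNDF(); reltol = 1e-5)

for _ in 1:300
    step!(integrator, 1.0, true)   # advance one coupling step of 1 day
    integrator.u[2] = 0.0          # reset the cumulative, as in #1864
    u_modified!(integrator, true)  # tell the solver the state changed
end
```

In a coupled run the equivalent reset would have to live in whatever drives the BMI update loop, which is exactly the ownership question above.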

Changing the solver algorithm to Rodas5P does seem to help, showing no divergence even if I run the bucket model for 30,000 days, hence #1869.
[figure: mwe-4]

It seems like the cumulative flow states are difficult for some algorithms, though I also wouldn't rule out that we can do more in the core to alleviate this.
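The algorithm swap in code terms is a one-liner (a sketch, assuming `prob` is the reproducer's ODEProblem or the toy one above; `remake` and `Rodas5P` come from the SciML stack):

```julia
# Sketch: same problem, Rosenbrock method instead of the default, with the
# time span stretched to 30,000 days to look for divergence (cf. #1869).
prob_long = remake(prob; tspan = (0.0, 30_000.0))
sol_rodas = solve(prob_long, Rodas5P())
```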

visr (Member) commented Oct 7, 2024

This is with QNDF, only changing the reltol from 1e-5 to 1e-7. A PR to make that the default is incoming.
[figure]
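In code the change is just the tolerance keyword on the solve call (sketch, with `prob` assumed from the reproducer and abstol left at its default):

```julia
sol_loose = solve(prob, QNDF(); reltol = 1e-5)  # previous default reltol
sol_tight = solve(prob, QNDF(); reltol = 1e-7)  # proposed default reltol
```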

visr added a commit that referenced this issue Oct 7, 2024
With #1819 the state moves further away from 0 over time. That means we rely more on the relative tolerance than on the absolute tolerance.

Reducing the default `reltol` by two orders of magnitude is enough to get only negligibly leaky Terminals, with occasional very brief leaks of 1e-8 m3/s. It is also enough to reduce the infiltration error to acceptable levels, as can be seen in #1863 (comment).

For HWS this slows down the simulation from 9.1 s to 12.3 s, which isn't too bad for a two orders of magnitude reduction in relative tolerance.

https://docs.sciml.ai/DiffEqDocs/stable/basics/faq/#What-does-tolerance-mean-and-how-much-error-should-I-expect
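For context (this is my gloss on the linked FAQ, not part of the commit): the adaptive solvers accept a step when the per-component error satisfies roughly

$$\mathrm{err}_i \le \mathrm{abstol} + \mathrm{reltol} \cdot |u_i|,$$

so once a cumulative state $u_i$ grows far from 0, the reltol term dominates and abstol no longer bounds the error meaningfully.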
Huite (Contributor) commented Oct 7, 2024

Is this with or without resetting to 0?

It kinda seems like it's showing growing oscillations:

[figure]

visr closed this as completed in #1874 on Oct 8, 2024
visr closed this as completed in 89e0e9f on Oct 8, 2024
visr (Member) commented Oct 10, 2024

Indeed it's still growing over time, just more slowly, so it's not so much of a problem in practice. That is also consistent with your comment in #1874 (comment). I wrote a new issue for the follow-up work: #1897.

This is running the gist for 30,000 days rather than 300:

[figure]

SouthEndMusic (Collaborator) commented Oct 21, 2024

With #1912 the divergence is very slow (I subtracted 25 from the bottom plot):

[figure]

I wouldn't be surprised if this is now dominated by floating point truncation errors.
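For a rough sense of that floor (my back-of-the-envelope numbers, not from the issue): the spacing between adjacent Float64 values near a state of 25 is

```julia
julia> eps(25.0)   # one ulp near 25.0
3.552713678800501e-15
```

so truncation on that order per step, accumulated over tens of thousands of steps, would be consistent with a very slow residual drift.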
