
Runge-Kutta Integration #6

Open · jacione opened this issue Dec 12, 2024 · 10 comments

@jacione

jacione commented Dec 12, 2024

I love your particle life GUI. In particular, the 1/r and 1/r^2 attractors are an incredibly intuitive way to simulate emergent material behavior like crystal formation.

However, certain regions of parameter space cause the simulation to become metastable or unstable for these attractors. I haven't taken the time to find the exact limits, but it seems to be a combination of:

  • large number of particles
  • high force multiplier
  • low friction
  • large timestep

As far as I can tell, this occurs because the inter-particle potential goes to infinity as the distance goes to zero, and because the force is assumed to be constant within each timestep (I assume; I'm not familiar enough with Java to know for sure, but Euler methods are usually associated with this kind of instability). At small distances, this will always underestimate the force between two approaching particles and overestimate the force between two diverging particles, so each such close encounter actually adds energy to the system.

This can still be stable as long as that extra energy is dissipated before the next interaction, which depends on both the particle density (mean free path) and the friction. Otherwise, it leads to a chain reaction (which is actually another interesting physics simulation).

The simplest fix would probably be to cap the inter-particle potential so that it doesn't actually go to infinity. However, while this would probably expand the stable region, it wouldn't solve the problem entirely.
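For what it's worth, that cap could be as simple as something like this (just a sketch in pseudocode; r_min and strength are made-up names, not anything from your code):

# Sketch only: clamp the distance below some cutoff r_min so the 1/r force
# can no longer diverge as two particles approach each other.
def attraction(r, r_min, strength):
    r_safe = max(r, r_min)
    return strength / r_safe  # capped at strength / r_min instead of blowing up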

Alternatively, you could adjust the way the positions are calculated to use a Runge-Kutta method. This would be significantly more stable and would let you keep the inter-particle potentials true to their names. It would be more expensive per-step, but it would also be able to take larger (i.e. fewer) steps. In my experience, RK methods typically come out ahead on that tradeoff. It would also be more work for you, so there's that to consider :P

Once again, I love the project. Keep up the good work!

@tom-mohr
Owner

Hey jacione,

thank you for your interesting message! You raise a very good point. The app should definitely offer other integration methods, not just Euler.

As far as I can see, RK just takes more time per particle, but doesn't change the computational complexity, which is perfect.
Can you help me write pseudocode for using RK, e.g. RK4?


Let's say each particle has

  • position vector $x$
  • velocity $v := \frac{dx}{dt}$.

Those values ($x$, $v$) are saved in memory between each simulation step.

A simulation step consists of two parts:

  1. update $v$ based on the acceleration $\frac{dv}{dt} := a$
  2. update $x$ based on $v$

For each particle, the algorithm calls an "acceleration" method $f(x)$ that returns $a$ for that particle (looking at its local neighborhood, etc.). (This corresponds to $f$ in the Wikipedia article you linked.) We have to treat this acceleration method as a black box.

So in a first step, the velocities are changed based on $a$.
Right now, this is done with simple Euler integration:
$$v \leftarrow v + a \cdot \Delta t$$

In the second step, the positions are changed based on $v$.
This is, again, done with simple Euler integration:
$$x \leftarrow x + v \cdot \Delta t$$


Does RK4 even fit into this framework?

[image: RK4 definition from the Wikipedia article]
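For reference, the standard RK4 update in the usual notation (step size $h$) is:

$$k_1 = f(t_n, y_n), \quad k_2 = f\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k_1\right), \quad k_3 = f\left(t_n + \tfrac{h}{2},\, y_n + \tfrac{h}{2} k_2\right), \quad k_4 = f(t_n + h,\, y_n + h k_3)$$

$$y_{n+1} = y_n + \tfrac{h}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$$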

The main point of confusion for me is that we need to query the change at different points in time. In the Wikipedia article you linked, $f(t, x)$ takes a time variable $t$ and we can apparently just call $f(t + h, x)$. But how is that possible if I have a complex system where I need to employ some integration in order to look into the future? In Particle Life, I can't just tell what $f(t + h, x)$ is.

The acceleration depends on the other particles' positions. So maybe, in order to compute $f(t + h, x)$, we could temporarily execute an Euler integration for the particles' positions?
$$x \leftarrow x + v \cdot h$$
This would move all particles linearly along their current velocity and give them new positions, resulting in a different $f(x)$, kind of as if we actually looked into the future.
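In pseudocode, I imagine something like this (very rough sketch, made-up names):

# Rough sketch (made-up names): temporarily advance all particles by h along
# their current velocities, evaluate the accelerations at that predicted
# state, and leave the original positions untouched.
x_predicted = [x[i] + v[i] * h for i in range(n)]
a_predicted = [accel(x_predicted, i) for i in range(n)]  # f "at t + h"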

(This seems expensive, but I can think of a few possible ways to optimize this.)

Does this make sense or am I on the wrong path here? Maybe you can find time to write out how you would implement RK4. That would be incredibly helpful.

@tom-mohr changed the title from "Instability with 1/r and 1/r^2 attractors" to "Runge-Kutta Integration" on Dec 13, 2024
@tom-mohr
Owner

Ah, is it possible that my thoughts about "looking into the future" were just way too complicated? Can I maybe just leave out the time parameter $t$, because I don't have time-varying forces? So $f(t, x)$ from the Wikipedia article becomes $f(x)$ in my case. This would make it very easy.

@jacione
Author

jacione commented Dec 14, 2024

Actually, looking into the future is exactly what RK is all about :) The forces don't explicitly depend on time, but they do depend on the overall state of the system, which develops in time. So, when the definition refers to $f(t,x)$, it might be better to think of it as $f(S(t),x)$, where $S(t)$ is the state of the system (i.e. the positions, velocities, and colors of all the particles within $r_{max}$) at time $t$.

Having a black-box acceleration method is not a problem; in fact, it's exactly what these methods were made for.

The variable $k$ represents $\frac{dy}{dt}$ at time $t$. The Euler update basically just takes this at face value and says $y_{n+1}=y_n+kh$. The whole idea of RK and similar methods is to find some value of $k$ that will better approximate the average of $\frac{dy}{dt}$ over the entire interval between $t_n$ and $t_{n+1}$. You may have already figured that out, but when I took computational physics, it was the piece that finally made everything else click.
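Written in the same notation as the article: Euler takes $y_{n+1} = y_n + h\,k_1$ with the single slope $k_1 = f(t_n, y_n)$, whereas RK4 replaces that single slope with a weighted average of four slope samples:

$$y_{n+1} = y_n + h\,\bar{k}, \qquad \bar{k} = \tfrac{1}{6}\left(k_1 + 2k_2 + 2k_3 + k_4\right)$$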

Ok, so for a single RK4 step, we're basically calculating four slightly modified Euler steps and then taking a weighted average. I am by no means an expert in this, but here's how I would do that in pseudocode:

# Start out with the current position and velocity
v1 = v
x1 = x

# first acceleration calculation, same as normal.
a1 = accel(x1)

# Now find what the velocity & position would be after a half-step forward in time,
# and use that to get another acceleration.
v2 = v + a1*dt/2
x2 = x + v2*dt/2
a2 = accel(x2)

# Repeat this process with another half-step
v3 = v + a2*dt/2
x3 = x + v3*dt/2
a3 = accel(x3)

# And finally do it again with a full step
v4 = v + a3*dt
x4 = x + v4*dt
a4 = accel(x4)

# Now you use the weighted average of those four different accelerations
# to get the new velocity
v = v + (a1 + 2*a2 + 2*a3 + a4)*dt/6
# The divide by six comes from the fact that you're summing six accelerations (1+2+2+1)

# And use that to get the new position
x = x + v*dt

This is approximately 4 times as much work as the Euler method. It also takes up more memory, since you're holding onto all of those intermediate values. However, as you pointed out, it doesn't affect the way the algorithm scales, which is really nice. The benefit is that the steps are much more accurate and stable, allowing you to take significantly larger steps. In the end, it can actually be cheaper to simulate the same total amount of time.

@jacione
Author

jacione commented Dec 14, 2024

I just realized something else. The equation of motion for a constantly accelerating body is:

$x = x_0 + v_0t + \frac{1}{2}at^2$

The way this is currently implemented is missing that factor $\frac12$:

$v = v_0 + at$
$x = x_0 + vt = x_0 + v_0t + at^2$

This doesn't seem to have broken anything so far, but it does change the way I would implement the RK integrator:

# first acceleration calculation, same as normal.
a1 = accel(x)

# Now find what the velocity & position would be after a half-step forward in time,
# and use that to get another acceleration.
v2 = v + 0.5*a1*dt
x2 = x + v*dt + 0.25*a1*dt**2
a2 = accel(x2)

# Repeat this process with another half-step
v3 = v + 0.5*a2*dt
x3 = x + v*dt + 0.25*a2*dt**2
a3 = accel(x3)

# And finally do it again with a full step
v4 = v + a3*dt
x4 = x + v*dt + 0.5*a3*dt**2
a4 = accel(x4)

# Now you use the weighted average of those four different accelerations
# to get the new velocity
a = (a1 + 2*a2 + 2*a3 + a4)/6
v = v + a*dt
# The divide by six comes from the fact that you're summing six accelerations (1+2+2+1)

# And use that to get the new position
x = x + v*dt + 0.5*a*dt**2

I also changed the FP divisions to multiplications where it could be done exactly.

@tom-mohr
Owner

Hi Nick, awesome, thanks for the explanation!

I've made several attempts to implement this in Particle Life now.

Btw, I think in your pseudocode, it should be x2 = x + v*0.5*dt + 0.125*a1*dt**2 (using dt/2 instead of the full dt).
The same goes for x3.

For some reason though, the tradeoff you mentioned doesn't work for me.
My RK4 implementation improves stability somewhat, but this only allows me to increase dt by about 20%. Computation takes 4x as long, so that tradeoff doesn't pay off.

Additionally, my RK4 implementation introduces some artifacts (only for larger dt) that I didn't see before with Euler integration.

So maybe there is still a mistake in how we're thinking about the interplay of x, v and a?

If you're interested, I opened a pull request where you can see the code I wrote for this: #7

@jacione
Author

jacione commented Dec 17, 2024

Ah yes, I see my mistake. Good catch!

I looked at your code and did some searching on Stack Exchange, and I think I see the problem. As I understand it, the updateVelocity functions only act on a single particle at a time. However, the forces acting on each particle are dependent on the positions of all other particles, so each partial step needs to be done for all the particles before calculating the next partial step.

I'm not sure if I'm describing this well. I'll see if I can get some pseudocode worked out...
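Roughly, though, the structure would need to be something like this, with each stage looping over all the particles before the next stage begins, so that accel() always sees one consistent snapshot of the whole system (made-up names, and I'm writing the stages the textbook way here just to show the loop structure, not the exact code in your PR):

# stage 1: slopes at the current state
k1x = [v[i] for i in range(n)]
k1v = [accel(x, i) for i in range(n)]

# stage 2: slopes at the state predicted half a step ahead using stage 1
x_tmp = [x[i] + 0.5*dt*k1x[i] for i in range(n)]
k2x = [v[i] + 0.5*dt*k1v[i] for i in range(n)]
k2v = [accel(x_tmp, i) for i in range(n)]

# stage 3: same, but predicted using stage 2
x_tmp = [x[i] + 0.5*dt*k2x[i] for i in range(n)]
k3x = [v[i] + 0.5*dt*k2v[i] for i in range(n)]
k3v = [accel(x_tmp, i) for i in range(n)]

# stage 4: a full step predicted using stage 3
x_tmp = [x[i] + dt*k3x[i] for i in range(n)]
k4x = [v[i] + dt*k3v[i] for i in range(n)]
k4v = [accel(x_tmp, i) for i in range(n)]

# combine: weighted average of the four slopes for both x and v
for i in range(n):
    x[i] += dt/6 * (k1x[i] + 2*k2x[i] + 2*k3x[i] + k4x[i])
    v[i] += dt/6 * (k1v[i] + 2*k2v[i] + 2*k3v[i] + k4v[i])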

@tom-mohr
Owner

Yes, I had the exact same thought. It would take some additional effort to implement it though, and might again not improve stability enough.
So I'm not sure in what time frame I can get to this.

@jacione
Author

jacione commented Dec 22, 2024

Totally understand. Maybe I'll finally get around to learning some Java after all. I've got some time over the holidays... :)

@jacione
Author

jacione commented Jan 3, 2025

I made a PR #8 with my own attempt. It's totally untested, and you should definitely treat it like pseudocode, but it might be a good place to start!

@tom-mohr
Owner

tom-mohr commented Jan 6, 2025

Awesome! I'll try to look at it as soon as possible, but not before next week.
