
DG jump operator gives the wrong result for a vector-valued equation #138

Open
massimiliano-leoni opened this issue Dec 22, 2022 · 3 comments

Comments

@massimiliano-leoni
Contributor

The standard Discontinuous Galerkin formulation of the scalar Poisson equation

$$ -\Delta u = 1 $$

has terms like

$$\sum_{e \in \mathcal{E}} \int_e [v] \cdot \{\nabla u\} $$

which in UFL is expressed as

`inner(jump(v, n), avg(grad(u))) * dS`

and all is well.

For the vector-valued case, however, $u$ is a vector and so $\nabla u$ is a second-order tensor, while $[v]$ is either a scalar or a vector depending on whether `jump(v, n)` or `jump(v)` is used, so the inner product is no longer well-defined.

The (one?) correct DG formulation for the vector Poisson equation uses the term

$$\sum_{e \in \mathcal{E}} \int_e [v]_\otimes : \{\nabla u\} $$

where

$$[v]_\otimes = v^+ \otimes n^+ + v^- \otimes n^-.$$
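Since $n^- = -n^+$ on a facet, this tensor jump can equivalently be written as $(v^+ - v^-) \otimes n^+$. A quick NumPy check with arbitrary trace values (illustrative only, not UFL) confirms the identity:

```python
import numpy as np

# Arbitrary trace values of v on the two sides of a facet, and the facet normal.
v_plus = np.array([1.0, 2.0])
v_minus = np.array([3.0, -1.0])
n_plus = np.array([0.6, 0.8])   # unit normal as seen from the '+' side
n_minus = -n_plus               # the '-' side sees the opposite normal

# Tensor jump: [v]_otimes = v+ (x) n+  +  v- (x) n-
jump_outer = np.outer(v_plus, n_plus) + np.outer(v_minus, n_minus)

# Equivalent form using only the '+' normal: (v+ - v-) (x) n+
assert np.allclose(jump_outer, np.outer(v_plus - v_minus, n_plus))
```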

I tried implementing this with

`inner(jump(outer(v, n)), avg(grad(u))) * dS`

but, with the current UFL implementation,

ufl/ufl/operators.py, lines 441 to 452 at commit 64c846a:

```python
def jump(v, n=None):
    "UFL operator: Take the jump of *v* across a facet."
    v = as_ufl(v)
    is_constant = len(extract_domains(v)) > 0
    if is_constant:
        if n is None:
            return v('+') - v('-')
        r = len(v.ufl_shape)
        if r == 0:
            return v('+') * n('+') + v('-') * n('-')
        else:
            return dot(v('+'), n('+')) + dot(v('-'), n('-'))
```

this returns [note the "$-$" where the formulation above requires a "$+$"]

$$v^+ \otimes n^+ - v^- \otimes n^-.$$
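The sign discrepancy is easy to see numerically: since $n^- = -n^+$, the expression that gets built, $v^+ \otimes n^+ - v^- \otimes n^-$, equals $(v^+ + v^-) \otimes n^+$, whereas the intended tensor jump is $(v^+ - v^-) \otimes n^+$. A small NumPy sketch with arbitrary trace values (illustrative only, not UFL):

```python
import numpy as np

v_plus = np.array([1.0, 2.0])
v_minus = np.array([3.0, -1.0])
n_plus = np.array([0.6, 0.8])
n_minus = -n_plus  # opposite normals on the two sides of the facet

# What jump(outer(v, n)) currently evaluates to: (v (x) n)+ - (v (x) n)-
computed = np.outer(v_plus, n_plus) - np.outer(v_minus, n_minus)

# The intended tensor jump: v+ (x) n+  +  v- (x) n-
intended = np.outer(v_plus, n_plus) + np.outer(v_minus, n_minus)

# The two differ by the sign of the v- term whenever v- is nonzero.
assert not np.allclose(computed, intended)
assert np.allclose(computed, np.outer(v_plus + v_minus, n_plus))
assert np.allclose(intended, np.outer(v_plus - v_minus, n_plus))
```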

@massimiliano-leoni
Contributor Author

massimiliano-leoni commented Jan 31, 2023

As a side note, the formulation

$$ \sum_{e \in \mathcal{E}} \int_e [v] \cdot \{\partial_n u \}, $$

implemented in UFL as

`dot(nu * avg(dot(grad(u), n)), jump(v)) * dS`,

works correctly.

This formulation is equivalent to the one I wrote in the issue description.

@wence-
Collaborator

wence- commented Jan 31, 2023

I'm not sure that jump can automagically do the right thing in this case without inspecting the provided expression, which is a recipe for pain.

Perhaps it would be OK: you could traverse the expression to determine whether it contains a FacetNormal, and then (if no normal is provided) return v('+') + v('-'), on the assumption that the sign of the normal fixes up the jump.
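The suggested traversal could look roughly like the sketch below. The `Expr`, `FacetNormal`, `Argument`, and `Outer` classes here are stand-in toys to illustrate the recursion, not the real UFL types or API:

```python
# Toy expression tree; these classes are hypothetical stand-ins, not UFL.
class Expr:
    operands = ()

class FacetNormal(Expr):
    """Stand-in for a facet normal appearing in the expression."""

class Argument(Expr):
    """Stand-in for a test/trial function."""

class Outer(Expr):
    def __init__(self, a, b):
        self.operands = (a, b)

def contains_facet_normal(expr):
    """Recursively check whether the expression tree references a facet normal."""
    if isinstance(expr, FacetNormal):
        return True
    return any(contains_facet_normal(op) for op in expr.operands)

# If the expression already carries the normal, jump would return
# v('+') + v('-'); otherwise the usual v('+') - v('-').
assert contains_facet_normal(Outer(Argument(), FacetNormal()))
assert not contains_facet_normal(Argument())
```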

IIRC the jump operators were implemented following https://www-users.cse.umn.edu/~arnold/papers/dgerr.pdf which I think only treats scalar problems?

@massimiliano-leoni
Contributor Author

Thanks for the insight! Maybe we could then have it raise an error or emit a warning, so that the user is at least aware of the issue and can check it.
