
Side-effectful Nontermination #772

Open: wants to merge 38 commits into base: development

Conversation

@ThomasHaas (Collaborator) commented Nov 7, 2024

This PR adds a new, improved encoding that can detect some cases of side-effectful non-termination.

EDIT: The below description is not fully accurate anymore.


I added 5 new benchmarks to specifically test this, and updated the verdicts of the existing liveness tests that are now correctly identified as FAIL rather than UNKNOWN.
The encoding is functional and replaces the old encoding for deadlocks/liveness issues.
So feel free to test this branch on some real code.

Important notes:

  • In the presence of side-effectful loops, we generally need to unroll the code at least twice (B=2), and possibly even more times, to find liveness issues (see the hypothetical sketch after this list).
  • For side-effectful loops, we can only find liveness issues but no proof of termination! This means that no previous UNKNOWN verdict will become a PASS with these changes, only possibly a FAIL!
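To make the first point concrete, here is a hypothetical sketch (my own, not one of the new benchmarks): a loop whose every iteration performs a store, so it is not a pure spinloop, and whose exit condition never becomes true. My understanding of the description above is that a single unrolled iteration is not enough to witness a repeatable non-terminating suffix here, hence the need for B >= 2.

// Hypothetical sketch (not one of the PR's benchmarks): a side-effectful loop
// that never terminates. Each iteration writes to shared memory, so the plain
// spinloop detection does not apply; the new encoding should be able to report
// a liveness violation once the loop is unrolled at least twice.
public class SideEffectfulSpin {
    static volatile int flag = 0;     // never set to a non-zero value by anyone
    static volatile int counter = 0;  // written on every iteration -> side effect

    public static void main(String[] args) {
        while (flag == 0) {   // the exit condition never becomes true
            counter++;        // side effect: this is not a pure spinloop
        }
    }
}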

The reasons that this PR is marked as DRAFT are:

  • The new encoding might have issues with uninit reads (we have no test for this). Even the old one might have had issues here.
  • The new encoding currently also detects liveness issues that are caused by assertion failures, which the old one did not. This is easy to fix, but we also have no test cases for this yet.
  • The new encoding does not reason about non-termination due to control barriers at all. We have no tests involving non-termination due to CBs.
  • We can detect many liveness issues in some of the currently skipped litmus tests for Vulkan. However, the liveness issue is only present under weak progress, so if I remove the test from the skip list, at least the LitmusVulkanFairLivenessTest will fail. We would need to be able to enable tests only for certain progress models and skip them for others.
  • EDIT: Fairness of spuriously failing events like StoreExclusive is not considered so far. We also do not have tests for this scenario.

Because of points (1) and (4), I have not yet deleted the old code (it is unused though).

Updated expected values in LivenessTest for existing tests that now show liveness violations (from UNKNOWN to FAIL)
@ThomasHaas (Collaborator Author) commented Nov 7, 2024

The existing liveness tests seem to be correct, and the change from UNKNOWN to FAIL was not caused by side-effectful nontermination but by point (2) I mentioned above. The relaxation of some barriers caused an assertion failure, which then caused some thread to abort and not release its lock, which was then treated as a liveness issue.
I will fix this and revert the changes to the tests.

EDIT: Fixed the issue and reverted expected outcome of some tests.

Fixed Nontermination verdict if nontermination was caused due to assertion failure.
Updated expected outcomes of LivenessTest caused by the above change.
@hernanponcedeleon (Owner)

  • We can detect many liveness issues in some of the currently skipped litmus tests for Vulkan. However, the liveness issue is only present under weak progress, so if I remove the test from the skip list, at least the LitmusVulkanFairLivenessTest will fail. We would need to be able to enable tests only for certain progress models and skip them for others.

Can't we add a configurable (similar to several of our providers) that allows us to define a more fine-grained skip predicate, and then define the predicate based on the progress model?
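A rough sketch of what such a configurable could look like (hypothetical API and names, not Dartagnan's actual provider interface): the test provider consults a predicate that sees both the test name and the progress model, so a litmus test can be enabled under weak progress but skipped under fair progress.

import java.util.Set;
import java.util.function.BiPredicate;

// Hypothetical sketch of a per-progress-model skip predicate; the real code
// would use the project's own progress-model type instead of a String.
final class SkipPolicy {
    private static final Set<String> WEAK_PROGRESS_ONLY =
            Set.of("some-weak-progress-only-litmus-test"); // placeholder name

    static final BiPredicate<String, String> SKIP =
            (testName, progressModel) ->
                    "FAIR".equals(progressModel) && WEAK_PROGRESS_ONLY.contains(testName);

    static boolean shouldSkip(String testName, String progressModel) {
        return SKIP.test(testName, progressModel);
    }
}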

@ThomasHaas (Collaborator Author)

We could have skip lists per progress model, but can't we instead just comment out the tests in the expected files? Those files are already defined on a per-progress-model basis.

@hernanponcedeleon (Owner)

Having skip lists per model is not maintainable. If at some point we notice that, e.g., the arch can also influence these things, we need to do yet another level of splitting.

can't we instead just comment out the tests in the expected files?

As long as we do not lose the information we currently have (e.g., removing the lines is a "no go"), it should be fine.

@ThomasHaas (Collaborator Author)

Having skip list per model is not maintainable. If at some point we notice that e.g., the arch can also influence these things we need to do yet another level of splitting.

Using providers to skip the tests would effectively also be a skip list per model, just written in code (unless you find a way to skip tests based on something other than their name).

@ThomasHaas (Collaborator Author) commented Nov 11, 2024

Little update: this branch can give wrong verdicts on some simple cases where SCCP eliminates loop counter variables and the loop termination condition entirely.

int i = 0;
while (i < 10) { i++; }

The above program is wrongly considered non-terminating if unrolled insufficiently.
The reason is that after unrolling + SCCP, the resulting code is essentially indistinguishable from a while (true) loop.
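A hand-done reconstruction of the effect (hypothetical, not the tool's actual IR): after unrolling with B=2, SCCP folds both guards to true and makes the increments dead, so the residue carries no information that would distinguish it from a truncated while (true) loop.

// Hypothetical shape of the loop above after "unroll B=2, then SCCP":
public class UnrolledAndFolded {
    public static void main(String[] args) {
        if (true) {            // was: 0 < 10, folded by SCCP
            // i++ becomes dead after folding
            if (true) {        // was: 1 < 10, folded by SCCP
                // i++ becomes dead after folding
                // <loop bound reached here>
            }
        }
    }
}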

EDIT: This is fixed now

@hernanponcedeleon (Owner)

I suspect that the problem is not SCCP, but rather DeadAssignmentElimination, which removes side effects and might not really be sound when we want to check liveness.

@ThomasHaas (Collaborator Author)

It's both, actually. SCCP makes the assignments dead, and even if they were not deleted, I would like my code not to care about variables that are not live inside the loop (I have not implemented this yet, but I will eventually).
The solution is similar to what we already do: instrument the loop properly. Then SCCP/DAE can run as usual because the instrumentation will capture the necessary information for liveness detection.

…detection: this removes a wrong verdict related to SCCP removing the whole loop body.

Added new test related to above issue.
Minor updates to related code.
@ThomasHaas (Collaborator Author)

How necessary is it to support non-termination due to blocked ControlBarriers? We don't have any tests for this, nor do we actually know how scheduling in the presence of blocked ControlBarriers works (the blocked thread cannot be rescheduled, I guess).

@hernanponcedeleon (Owner)

The best would be to have support for non-termination for a given encoding of BlockedBarrier (whatever that one is). Correctly handling that is orthogonal to this PR (and actually related to #768).

@ThomasHaas (Collaborator Author)

Then I will copy over whatever we are doing right now for blocked control barriers into the new non-termination detection.

@ThomasHaas (Collaborator Author)

Yes, because I didn't implement control barriers at all, i.e., there was not a single line of code handling them.
I fixed this locally already but I have no tests besides this trivial one.

Pure control-barrier nontermination actually requires nothing of the new theory: it is a standard reachability problem as no thread does anything anymore and thus is witnessed by standard finite executions.
The difficulty here is just ensuring that CBs are correctly waiting for each other, for which we do not have good tests yet.

The most interesting type of non-termination would involve both barriers and loops: some threads get stuck in barriers while others are looping (possibly even with barriers inside the loop).
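To make the "standard reachability" point concrete, here is a small hypothetical illustration in plain Java (not a Vulkan/OpenCL kernel; CyclicBarrier is only a stand-in for a control barrier): the stuck state is witnessed by a finite execution in which the second thread simply never reaches the barrier, so no infinite suffix is needed.

import java.util.concurrent.BrokenBarrierException;
import java.util.concurrent.CyclicBarrier;

public class StuckAtBarrier {
    public static void main(String[] args) {
        CyclicBarrier barrier = new CyclicBarrier(2); // expects two parties

        Thread t1 = new Thread(() -> {
            try {
                barrier.await(); // blocks forever: the second party never arrives
            } catch (InterruptedException | BrokenBarrierException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread t2 = new Thread(() -> {
            // takes a branch that skips the barrier and terminates immediately
        });

        t1.start();
        t2.start();
    }
}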

@hernanponcedeleon (Owner)

clspv benchmarks/opencl/NonUniformBarrier.cl --cl-std=CL3.0 --inline-entry-points --spv-version=1.6 -o kernel.spv
spirv-dis kernel.spv > kernel.spv.dis
spirv-opt --upgrade-memory-model kernel.spv.dis -o dartagnan/src/test/resources/spirv/basic/UniformBarrier-opt.spv.dis

I actually got the order of commands wrong; the optimization should be done on the binary:

> clspv benchmarks/opencl/NonUniformBarrier.cl --cl-std=CL3.0 --inline-entry-points --spv-version=1.6 -o kernel.spv
> spirv-opt --upgrade-memory-model kernel.spv -o kernel-opt.spv
> spirv-dis kernel-opt.spv > dartagnan/src/test/resources/spirv/basic/NonUniformBarrier.spv.dis

@ThomasHaas (Collaborator Author) commented Feb 7, 2025

I have implemented a (more or less) complete detection of loop non-termination. It can detect all liveness violations in the Vulkan litmus tests, no matter how intricate they are.

The detection relies on a new instrumentation pass called NonterminationDetection. There is an option to affect how aggressively this pass instruments:
(1) Only spinloops: Should result in a detection identical to what we have currently on development
(2) Simple: A cheap version of side-effectful non-termination detection that detects many cases of non-termination that do not require asymmetric loop execution (where one loop runs more often than another; see the hypothetical example after this list).
(3) Full (default): A complete (and possibly expensive) version of side-effectful non-termination detection, required to detect asymmetric non-termination.
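To illustrate what "asymmetric loop execution" can look like (a hypothetical example of my own, not one of the Vulkan litmus tests): in the sketch below, the only repeatable non-terminating suffix I can see has thread A executing two loop iterations for every one iteration of thread B, so that produced stays ahead of consumed forever; a 1:1 suffix would let B catch up and terminate.

public class AsymmetricNontermination {
    static volatile int produced = 0;
    static volatile int consumed = 0;
    static volatile int stop = 0; // never set by anyone

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> {
            while (stop == 0) {           // never exits, side effect every iteration
                produced++;
            }
        });
        Thread b = new Thread(() -> {
            while (consumed < produced) { // exits only if it ever catches up
                consumed += 2;
            }
        });
        a.start();
        b.start();
        a.join();
        b.join();
    }
}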

Future work:

  • The encoding can be optimized.
  • The encoding does not fully match the underlying theory. I believe an accurate encoding might require fewer unrollings to detect non-termination, though it can also be more expensive.

Issues:

  • There are very likely issues if loop non-termination is combined with barrier non-termination, and I suspect possible false positives (i.e., false "FAIL" results) for now. Since we lack tests for this, I'm not really eager to work on this right now.
  • Some of the enabled litmus tests have expected result PASS but we return UNKNOWN; therefore the unit tests are failing. We either need to exclude them, or relativize our expected outcome (i.e., if the expectation is PASS, returning UNKNOWN is also correct).

EDIT: The NonterminationDetection pass is actually a kind of more liberal unrolling. Typically, if we have a loop that we unroll B times, then we only consider executions that execute the loop exactly B times (if it is non-terminating).
The idea of the pass is to relax this unrolling to also consider all executions with fewer than B unrollings.
The resulting program is effectively the superposition of all programs obtained by unrolling each loop 1 <= X <= B times.
Because of this, I might want to integrate this instrumentation into the LoopUnrolling pass with options to enable/disable this type of unrolling.

Btw. this superimposed unrolling also gives a nice monotonicity property which ensures that the executions of a (B+1)-unrolled program are a superset of the executions of a B-unrolled program (we never lose any "short" executions when unrolling more).
However, this may negatively impact the performance for detecting safety/cat violations, especially when iteratively extending the unrolling bound.
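A schematic reconstruction of the difference (hypothetical, not the pass's actual output), for a single loop and B = 2: with standard unrolling, a non-terminating execution must run the body exactly twice before hitting the bound event, whereas the superimposed form may also stop after the first copy, which is why increasing B only ever adds executions.

// guard(), body(), stuckHere() and boundEvent() are placeholders for the loop
// condition, the loop body, a nondeterministic "potentially stuck here" choice,
// and the bound event, respectively.
public class SuperimposedUnrolling {
    public static void main(String[] args) {
        boolean cond = guard();
        // --- copy 1 ---
        if (cond) {
            body();
            if (stuckHere()) return;   // extra exit: "non-terminating after 1 iteration"
            cond = guard();
            // --- copy 2 ---
            if (cond) {
                body();
                boundEvent();          // standard cut-off: "non-terminating after 2 iterations"
            }
        }
    }

    static boolean guard()     { return true; }
    static void body()         { }
    static boolean stuckHere() { return false; }
    static void boundEvent()   { }
}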

@ThomasHaas changed the title Side-effectful Nontermination → [DRAFT] Side-effectful Nontermination (Feb 7, 2025)
@hernanponcedeleon (Owner)

The second issue is to be expected since I re-enabled all tests in 40624e9. I can take care of this.

@hernanponcedeleon (Owner) left a comment


I still have not checked the main class NonTerminationEncoder (probably won't have time before the end of the week), but here are already some small comments.

return new LivenessEncoder().encodeLivenessBugs();
private TrackableFormula encodeNontermination() {
final BooleanFormula hasNontermination = new NonTerminationEncoder(context.getTask(), context).encodeNontermination();
return new TrackableFormula(context.getBooleanFormulaManager().not(LIVENESS.getSMTVariable(context)), hasNontermination);

@hernanponcedeleon (Owner)

Should we use NON_TERMINATION instead of LIVENESS?

@ThomasHaas (Collaborator Author)

We specify all properties in the positive (e.g. program_spec rather than spec_violation), so I think it should be LIVENESS. However, the way we check this is by encoding the negative.

@hernanponcedeleon (Owner)

I was not referring to the positive vs negative part, but rather about the term "liveness" ... at some point we even had some "deadlock/no-deadlock" terms and I thought that "termination" was maybe a more general one.

However, we use liveness for the property and the nice stdout msg when we find a violation, so for consistency we should probably keep it as it is.

@ThomasHaas (Collaborator Author)

Hmm, we could use TERMINATION as the property. It would be more precise than liveness, since the latter can be understood in many ways.
If I rename it, dependent tools like Vsyncer might need updates.

@hernanponcedeleon (Owner)

I can take care of updating vsyncer once we release a new dartagnan and I change the used version there.

final LoopAnalysis.LoopIterationInfo lastIter = loop.iterations().get(loop.iterations().size() - 1);
final Event lastEvent = lastIter.getIterationEnd();

final Event bound = lastEvent.getSuccessor().getSuccessor();

@hernanponcedeleon (Owner)

Does the unrolling guarantee that the bound event is at 2+ successors after the loop end? If so, document this assumption here.

@ThomasHaas (Collaborator Author)

I think we guarantee nothing really. While unrolling itself guarantees this, every other pass may break it.
I can add a comment saying that this is a "naive" check. But even if this fails, the code remains correct.

@ThomasHaas (Collaborator Author)

I was wrong. The code can accidentally treat a not-fully unrolled loop as fully unrolled and skip its instrumentation. This can result in wrong verdicts for non-termination.

@hernanponcedeleon (Owner)

  1. Why are we filtering loops before the instrumentation? Is it purely for performance reasons (i.e. not to instrument loops unnecessarily) or the instrumentation relies on loops "passing" this filtering?
  2. Can we instead search for the next bound event in the program and check if it belongs to the same loop?

@ThomasHaas (Collaborator Author)

  1. The filtering is performance-only. No need to check bounded loops for non-termination.
  2. We can search for the next bound event, but there is no way to 100% know if it belongs to the same loop without tagging the bound events somehow

@hernanponcedeleon (Owner) left a comment


First pass, up to method encodeInfixSuffixDecomposition(). Will continue during the week.

Comment on lines +84 to +85
- We assume that if the suffix is consistent with the infix and co/fr-fair, then it must be "strongly"
consistent. This may not be true and we could possibly report a liveness violation that is not consistently

@hernanponcedeleon (Owner)

What does "strongly consistent" mean?

@ThomasHaas (Collaborator Author)

This is a notion I have in my upcoming paper. I'm not sure how far I should explain the concepts (I cannot write the paper inside the comments :P).
Anyways, a suffix (extension) is strongly consistent if it is consistent with every possible prefix (modulo some properties known about the prefix). I can add that as a short explanation.

@ThomasHaas (Collaborator Author)

It seems I already added an explanation in the comments describing the class.

@hernanponcedeleon (Owner)

Where? "(3) Suffix must be strongly fair/consistent:"? The word "strongly" is there, but I still do not understand what this is supposed to mean

@ThomasHaas (Collaborator Author) Feb 18, 2025

Well, the explanation of what this means is given exactly by the points after :).
For the most part it boils down to the fact that the suffix cannot really interact with the prefix, even if it was consistent(!).
That is why we require "strong" consistency which guarantees that not only the finite execution we currently see is consistent but that it also stays consistent when we append the suffix repeatedly.
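Spelled out in notation (my paraphrase of the two comments above, not a definition from the paper): writing P for the finite prefix/infix execution and S for the suffix,

consistent:          P · S is consistent
strongly consistent: P' · S · S · ... · S (k copies) is consistent for every k >= 1 and
                     for every prefix P' that satisfies the properties the encoding
                     tracks about P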

@hernanponcedeleon (Owner)

guarantees that not only the finite execution we currently see is consistent but that it also stays consistent when we append the suffix repeatedly.

I find this much more informative than the current "We assume that if the suffix is consistent with the infix and co/fr-fair, then it must be 'strongly' consistent".

@ThomasHaas (Collaborator Author)

Sure it is. But at some point I'm going to write a whole paper in the comments :).
I will add one more line of explanation.

repeatable. Fixing this would require additional checks relative to the memory model, possibly requiring
encoding of the memory model or native handling in CAAT.

TODO: We have not considered uninit reads, i.e., reads without rf-edges in the implementation.

@hernanponcedeleon (Owner)

What are the consequences of this?

@ThomasHaas (Collaborator Author) Feb 16, 2025

I haven't thought this through, but my gut feeling is that a suffix could possibly read the initial value although it is not co-maximal (because there are no co-edges that witness non-maximality). This would result in an fr-edge from the suffix into the prefix and therefore violate strong consistency/fairness.

EDIT: Just to be clear about the consequence: We could report FAIL although the non-terminating execution is actually unfair.

@hernanponcedeleon (Owner)

We could report FAIL although the non-terminating execution is actually unfair.

Please add this as the last sentence of the TODO.


}

// TODO: Check other types of events as well.

@hernanponcedeleon (Owner)

Failing to detect equivalence for other types of events would just make this analysis less precise, right? I.e., we might miss some non-termination.

@ThomasHaas (Collaborator Author)

No, currently it is the opposite: without the check it considers events equivalent rather than not equivalent.
For example, it will consider all fences equivalent, even if their tagging mismatches. The same is true for all other kinds of events.

@hernanponcedeleon (Owner)

@ThomasHaas please rebase and fix the conflicts

# Conflicts:
#	dartagnan/src/main/java/com/dat3m/dartagnan/program/processing/CoreCodeVerification.java
#	dartagnan/src/test/resources/VULKAN-Liveness-Fair-expected.csv
@hernanponcedeleon (Owner) left a comment


Another pass up to "Strong suffix extension encoding"


void *thread(void *unused)
{
__VERIFIER_loop_bound(5);

@hernanponcedeleon (Owner)

You are missing an #include for dat3m.h. Similar in other benchmarks.

@@ -49,6 +49,7 @@ public class OptionNames {
public static final String DYNAMIC_SPINLOOP_DETECTION = "program.processing.spinloops";
public static final String PROPAGATE_COPY_ASSIGNMENTS = "program.processing.propagateCopyAssignments";
public static final String REMOVE_ASSERTION_OF_TYPE = "program.processing.skipAssertionsOfType";
public static final String NONTERMINATION_INSTRUMENTATION = "program.processing.nonTerm";

@hernanponcedeleon (Owner)

We are already very verbose on the option names; why not program.processing.nonTermination?

@ThomasHaas (Collaborator Author)

Sure.

return bmgr.and(exitReached, bmgr.not(encodeBoundEventExec()));
final BooleanFormula terminated = program.getThreadEventsWithAllTags(Tag.NONTERMINATION).stream()
.map(CondJump.class::cast)
.map(jump -> bmgr.not(bmgr.and(context.jumpCondition(jump), context.execution(jump))))

@hernanponcedeleon (Owner)

Bound events are also tagged as NONTERMINATION, and as their condition is trivially true (they are gotos), the previous encoding is superseded by this one, correct?

What about other events tagged by NONTERMINATION? Is it always guaranteed that if they jumped, then they did not terminate?

@ThomasHaas (Collaborator Author)

Bound events are also tagged as NONTERMINATION, and as their condition is trivially true (they are gotos), the previous encoding is superseded by this one, correct?

Yes.

What about other events tagged by NONTERMINATION? Is it always guaranteed that if they jumped, then they did not terminate?

Tag.NONTERMINATION should designate - as its name says - executions that did not terminate (yet). If this is not the case, then the events are incorrectly marked. Currently, this is the case for loop bound events, spin loop events, and the events placed by the new pass.
That being said, we exclusively use it for loop-induced nontermination and not for barrier-induced one.

@hernanponcedeleon (Owner)

My question is: in a jump that is tagged as non-terminated, non-termination requires (1) only execution (as with bound events), (2) execution + taking the jump, or (3) execution + not taking the jump?

@ThomasHaas (Collaborator Author)

The jump needs to be taken (which is always true for unconditional jumps that are executed).
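Putting the two answers together with the snippet quoted at the top of this thread (my paraphrase, assuming the mapped conjuncts are ultimately and-ed together):

nonTerminated(j) := exec(j) AND jumpCondition(j)          for every NONTERMINATION-tagged jump j
terminated       := AND over all such j of NOT nonTerminated(j)

For unconditional jumps such as bound events, jumpCondition(j) is trivially true, so execution alone already witnesses non-termination.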



- Prefix stores must be co-before suffix stores
- Only suffix reads can read from suffix stores, prefix/infix reads can not.
- Suffix reads can only read from infix/suffix or from co-maximal stores in the prefix,
if they access an address not stored to by infix/suffix.

@hernanponcedeleon (Owner)

isn't this line guaranteed by the fact that if you read from prefix, it is co-maximal?

@ThomasHaas (Collaborator Author)

It does depend on how you read the sentence. I meant that the stores are co-maximal within the prefix, which by itself does not exclude the possibility that there are stores in the infix/suffix that are co-before/after the prefix one.
With some detailed analysis of the cases, it is possible to flat-out require that the suffix only reads from globally co-maximal prefix stores.
I exploit this fact in the encoding.

@hernanponcedeleon (Owner)

We already have a rather "fixed/standard" meaning of "co-maximal". I would rather say: "Suffix reads can only read from infix/suffix or from the last store in the prefix to the corresponding address".


Comment on lines +220 to +222
// FIXME: This is likely going to cause problems (wrong result) with executions where some thread is stuck in a loop
// and another in a barrier. We could replace the code to check if ALL threads are stuck at a control-barrier
// to avoid such wrong results.

@hernanponcedeleon (Owner)

Checking that ALL threads are stuck sounds wrong to me. If we have two sets of threads, each group synchronizing on a different barrier, and one group properly syncs while the other does not, we still want to report a violation.

@ThomasHaas (Collaborator Author)

I mean that we check if all non-terminated threads are stuck in a barrier. Then if one group properly synchronizes, it will terminate or get stuck in the next barrier.
The idea of checking that all non-terminated threads are stuck in a barrier is that we explicitly avoid these problematic cases of mixed non-termination, at the cost of completeness.

@hernanponcedeleon (Owner)

Ok, now it is clearer, but I would still change it to: "We could replace the code to check if ALL stuck threads are stuck at a control-barrier to avoid such wrong results."


// Encodes the basic properties of the infix-suffix matching relation.
// The semantics of what a matching actually implies is done by the above methods.
// NOTE: We do

@hernanponcedeleon (Owner)

broken sentence

// Internal classes

/*
- A (possibly nonterminating) "Loop" contains a list of "NonterminationCase"s.

@hernanponcedeleon (Owner)

How can more than one iteration be non-terminating? Or is it that you consider any iteration after the non-terminating one to also be non-terminating?

@ThomasHaas (Collaborator Author)

It's not more than one iteration, but there can be multiple ways in which a loop does not terminate. There are different iterations that can trigger a non-terminating situation, and even multiple possibilities within a single iteration.
For example, if some loop/iteration is conditionally side-effect-free, then it may fail to terminate due to side-effect-free spinning (spinloop-tagged jump event) or due to side-effectful non-progress (other, nontermination-tagged jump).

Implementation-wise, the idea is that a single loop has special jump events tagged by Tag.NONTERMINATION to designate where non-termination could occur. Every such event is a "nontermination case" of the loop.
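A hypothetical example of a loop with two distinct non-termination cases, matching the description above (my own sketch, not one of the tests): iterations with x > 0 perform a store (side-effectful non-progress), while iterations with x <= 0 are pure spinning, so the instrumentation would presumably place one tagged jump per case.

public class TwoNonterminationCases {
    static volatile int flag = 0; // never set by anyone
    static volatile int x = 5;

    public static void main(String[] args) {
        while (flag == 0) {
            if (x > 0) {
                x--;   // side effect only on some iterations
            }
            // else: this iteration is side-effect-free (the spinloop case)
        }
    }
}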
