Improve RemovalRecordsIntegrity #166

Open · 3 tasks

aszepieniec (Contributor) opened this issue Jul 17, 2024 · 0 comments

The first iteration of the RemovalRecordsIntegrity consensus program has rather disappointing performance, clocking in at roughly 358,000 clock cycles for an arbitrary 2-in 2-out transaction.

| clock cycle count | hash table height | u32 table height | op stack height | ram table height |
| --- | --- | --- | --- | --- |
| 358251 | 49708 | 46564 | 252217 | 63020 |

Looking at the profile, it seems that more than two thirds of the clock cycles are spent verifying MMR authentication paths. (We need to do that because otherwise someone could produce a transaction with removal records that cannot be applied to the mutator set, but which still looks valid.)
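
For orientation, the hot loop is the Merkle-path folding inside each MMR membership check. Below is a minimal sketch of that check, with a placeholder digest type and pair-hash; none of these names are the actual neptune-core or Triton VM API (the real code uses Tip5 digests). Each loop iteration corresponds to one `merkle_step`-style hash, which is what the first option below targets.

```rust
/// Hypothetical 32-byte digest; the real code uses Tip5 digests.
type Digest = [u8; 32];

/// Placeholder pair-hash standing in for the VM's hash instruction.
fn hash_pair(left: &Digest, right: &Digest) -> Digest {
    let mut out = [0u8; 32];
    for i in 0..32 {
        out[i] = left[i] ^ right[i].rotate_left(1);
    }
    out
}

/// Fold an authentication path onto a leaf and compare against the peak.
/// Each iteration is one `merkle_step`-style hash; `merkle_step_mem` would
/// let the sibling be read from RAM instead of the input stream.
fn verify_auth_path(leaf: Digest, mut index: u64, path: &[Digest], peak: Digest) -> bool {
    let mut node = leaf;
    for sibling in path {
        node = if index % 2 == 0 {
            hash_pair(&node, sibling) // current node is a left child
        } else {
            hash_pair(sibling, &node) // current node is a right child
        };
        index /= 2;
    }
    node == peak
}
```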

There are several ways in which we might address this performance issue.

- [ ] Wait for `merkle_step_mem`, which is in the pipeline over on Triton VM.
- [ ] Separate the program RemovalRecordsIntegrity into two programs. The first program establishes that the salted inputs hash corresponds to the listed removal records. The second program establishes that the removal records can in fact be applied to the current mutator set accumulator. This option does not reduce the workload, but it shifts the workload around so that users on light devices can still produce the proofs relevant for ownership (the lock scripts) and privacy (the first program), and delegate the remaining parts to peers.
- [ ] Compress the `target_chunks` field of removal records. After all, the removal record indices are either (a) deep in the SWBF, in which case they live in the same tree with overwhelming probability and their authentication paths overlap heavily, or (b) shallow, in which case their authentication paths are short and there is no problem to begin with. In case (a) the authentication paths can be deduplicated into an authentication structure, which certainly saves memory; a sketch of this deduplication follows the list. It is unclear, though, whether the increased arithmetic cost of computing with this memory-efficient data structure justifies the benefit of having less data to run the computation over.
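
To make the third option concrete, here is a minimal sketch of computing an authentication structure for several revealed leaves in one Merkle tree: instead of shipping each leaf's full path, ship only those siblings that the verifier cannot derive from the revealed leaves themselves. The function name and heap-style node numbering are illustrative assumptions, not the actual neptune-core or twenty-first API.

```rust
use std::collections::HashSet;

/// Node indices of a minimal authentication structure for a set of revealed
/// leaves in a Merkle tree of the given height. Nodes use heap numbering:
/// root = 1, children of node i are 2*i and 2*i + 1.
fn authentication_structure_indices(height: u32, leaf_indices: &[u64]) -> Vec<u64> {
    let first_leaf = 1u64 << height;

    // Everything the verifier can derive itself: the revealed leaves and
    // all of their ancestors up to the root.
    let mut derivable: HashSet<u64> = HashSet::new();
    for &l in leaf_indices {
        let mut node = first_leaf + l;
        while node >= 1 {
            derivable.insert(node);
            node /= 2;
        }
    }

    // A node must be supplied iff it is the sibling of a derivable node and
    // is not itself derivable. Overlapping paths (case (a) above) shrink
    // this set, because shared siblings high up in the tree become derivable.
    let mut needed: Vec<u64> = derivable
        .iter()
        .filter(|&&node| node > 1)
        .map(|&node| node ^ 1) // sibling of `node`
        .filter(|sibling| !derivable.contains(sibling))
        .collect();
    needed.sort_unstable();
    needed
}
```

For example, two adjacent leaves in a height-20 tree need 19 supplied siblings instead of 2 × 20, since the bottom siblings are the leaves themselves and the upper paths are shared. The flip side, as noted above, is that the verifier must now track which nodes are derivable, which is exactly the extra arithmetic whose cost is in question.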