Turbocharge mcl performance: Write our own mcl wrapper in C++ #32

Open
JanBobolz opened this issue Dec 5, 2021 · 0 comments
Labels
enhancement New feature or request

We recently benchmarked the computation of 10k random Pedersen commitments with mclwrap vs. directly in C++. The former took three times as long as the latter.
Profiling the Java code suggests that one big issue is garbage collection (around 50% of CPU time appears to go there), along with BigInteger copy overhead when computing wNAF forms.

Idea

So the idea is: let's write our own mcl wrapper in C++, such that mclwrap (Java) calls our wrapper (C++), which calls mcl (C++, statically linked, no overhead). This would mean that we completely stop using mcl's FFI code. Our wrapper should:

  • Take as input a list of pairs (g_i, x_i) and some sort of reference to precomputation data, compute the wNAFs of the x_i, and compute \prod_i g_i^{x_i}.
  • Take as input a list of triples (g_i, h_i, x_i) and some sort of reference to precomputation data, compute the wNAFs of the x_i, and compute \prod_i e(g_i^{x_i}, h_i) with a shared final exponentiation.
  • Plus the standard methods we currently use from their FFI.
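The wNAF computation mentioned above could move to C++ roughly as follows. This is a minimal sketch under assumptions (the function name `wnaf`, the `int64_t` scalar type, and the digit convention are illustrative, not mcl's or mclwrap's API): it produces signed digits d_i that are either zero or odd with |d_i| < 2^{w-1}, such that \sum_i d_i 2^i recovers the scalar, with no BigInteger involvement.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical sketch: width-w non-adjacent form of a nonnegative scalar.
// Digits (least significant first) are zero or odd in (-2^(w-1), 2^(w-1)).
std::vector<int> wnaf(int64_t x, int w) {
    std::vector<int> digits;
    const int64_t full = int64_t(1) << w;  // 2^w
    const int64_t half = full >> 1;        // 2^(w-1)
    while (x > 0) {
        int d = 0;
        if (x & 1) {
            d = int(x & (full - 1));       // x mod 2^w, an odd value
            if (d >= half) d -= int(full); // center into (-2^(w-1), 2^(w-1))
            x -= d;                        // x is now divisible by 2
        }
        digits.push_back(d);
        x >>= 1;
    }
    return digits;
}
```

In the real wrapper this would run on mcl's scalar field representation rather than `int64_t`, but the digit logic stays the same.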

Benefits:

  • Performance
    • We can get rid of most of the garbage collection overhead. Previously, all intermediate results each got their own object. Now, most of the intermediate results are just computed in mcl (with constant memory and with a mutable group element interface where intermediate results are simply overwritten).
    • Computation of wNAFs should be much faster in C++ (mostly because immutable BigInteger operations allocate a new object for every intermediate value).
    • Potentially, we can get rid of some overhead from string parsing between mcl and mclwrap (though I'm not sure that overhead exists to any meaningful extent).
  • We make ourselves less dependent on the mcl ffi bindings, which are not the easiest to compile sometimes.
  • Maybe this makes it more feasible to use BLS and BN at the same time.

Necessary additions to math:

  • Allow GroupImpls to optionally implement their own multi-exponentiation, skipping the algorithms provided by math.
  • Potentially allow GroupImpls to do their own precomputation.
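To make concrete what a GroupImpl-provided multi-exponentiation would compute, here is a toy sketch of the interleaved wNAF approach: one shared squaring chain, with per-base lookups into precomputed odd-power tables. All names (`gmul`, `modpow`, `odd_powers`, `multiexp`) are assumptions for illustration, and integers mod a small prime stand in for the actual pairing group; mcl's own group arithmetic would replace `gmul`.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Toy stand-in for the group: integers mod a small prime, written multiplicatively.
const uint64_t P = 1000003;
uint64_t gmul(uint64_t a, uint64_t b) { return a * b % P; }

// g^e mod P by square-and-multiply (used here for inverses via g^(P-2)).
uint64_t modpow(uint64_t g, uint64_t e) {
    uint64_t r = 1;
    while (e) { if (e & 1) r = gmul(r, g); g = gmul(g, g); e >>= 1; }
    return r;
}

// Width-w NAF digits of x, least significant first (zero or odd digits).
std::vector<int> wnaf(int64_t x, int w) {
    std::vector<int> digits;
    const int64_t full = int64_t(1) << w, half = full >> 1;
    while (x > 0) {
        int d = 0;
        if (x & 1) {
            d = int(x & (full - 1));
            if (d >= half) d -= int(full);
            x -= d;
        }
        digits.push_back(d);
        x >>= 1;
    }
    return digits;
}

// Per-base precomputation: the odd powers g^1, g^3, ..., g^(2^w - 1).
// This is the kind of table a GroupImpl could cache as "precomputation data".
std::vector<uint64_t> odd_powers(uint64_t g, int w) {
    std::vector<uint64_t> t(size_t(1) << (w - 1));
    uint64_t g2 = gmul(g, g), cur = g;
    for (auto& e : t) { e = cur; cur = gmul(cur, g2); }
    return t;
}

// prod_i g_i^{x_i} with one shared squaring chain; digits index the tables.
uint64_t multiexp(const std::vector<uint64_t>& gs,
                  const std::vector<int64_t>& xs, int w) {
    size_t n = gs.size(), len = 0;
    std::vector<std::vector<int>> digits(n);
    std::vector<std::vector<uint64_t>> table(n);
    for (size_t i = 0; i < n; ++i) {
        digits[i] = wnaf(xs[i], w);
        table[i] = odd_powers(gs[i], w);
        len = std::max(len, digits[i].size());
    }
    uint64_t acc = 1;
    for (size_t j = len; j-- > 0; ) {
        acc = gmul(acc, acc);  // one squaring shared by all bases
        for (size_t i = 0; i < n; ++i) {
            if (j >= digits[i].size()) continue;
            int d = digits[i][j];
            if (d > 0) acc = gmul(acc, table[i][(d - 1) / 2]);
            else if (d < 0)  // negative digit: multiply by the inverse power
                acc = gmul(acc, modpow(table[i][(-d - 1) / 2], P - 2));
        }
    }
    return acc;
}
```

Everything here runs with constant memory per base and mutates `acc` in place, which is exactly the garbage-collection win described above.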

When developing this, consider starting a new project with a generic C++ wrapper (containing, for example, the wNAF computation code) and just instantiating that here.

@JanBobolz JanBobolz added the enhancement New feature or request label Dec 5, 2021