
Low-level: discrepancy between field arithmetic performance and elliptic curve performance #446

Open
mratsim opened this issue Jul 27, 2024 · 4 comments
Labels
bug 🪲 Something isn't working performance 🏁

Comments


mratsim commented Jul 27, 2024

As mentioned in #445, there is a large discrepancy between the performance when benchmarking field arithmetic and the elliptic curves built on top, especially on Secp256k1 vs libsecp256k1 and RustCrypto.
We start with a 1.7x advantage at the field-arithmetic level that degrades to a 0.85x disadvantage (i.e., slower) at the curve level for constant-time code.

There is an unexplained performance bug.

Some possibilities:

  • There is a parameter-passing bug similar to Internal API: in-place vs result #21 and Extremely bad codegen on Fp2 #146. However, looking at the assembly with Ghidra, we have 12 LEA and 13 MOV before function calls, which doesn't seem costly enough to explain such a difference. There is the regular `if adx` test, but it should be cached and almost costless on Haswell and later CPUs.
  • Unsaturated arithmetic allows for greater ILP (instruction-level parallelism). This seems unlikely, as field arithmetic with unsaturated limbs is 2x slower than my implementation.
  • Cache effects. For example, we don't hardcode the prime modulus, and after a long computation it might be evicted from cache.
@mratsim mratsim added bug 🪲 Something isn't working performance 🏁 labels Jul 27, 2024

mratsim commented Jul 27, 2024

CTT_LTO=false CC=clang nimble bench_ec_g1

with only secp256k1 EC G1 Jacobian addition benchmarked, and the `if hasADX()` checks forced to true for secp256k1/Crandall primes.

LTO ends up inlining everything into the benchmark functions (which are explicitly tagged {.noinline.} since #445).

image

It's extremely suspicious that addmod takes more time than multiplication, and that submod takes that much time as well.
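For reference, here is a minimal Python sketch of what a constant-time modular addition does, mirroring the add-then-conditionally-subtract pattern of the assembly (illustrative only, not Constantine's actual code; the mask trick stands in for the SBB/CMOV sequence):

```python
# Illustrative sketch (not Constantine's code): modular addition with a
# branchless conditional subtraction, as an addmod assembly routine does.
P = 2**256 - 2**32 - 977  # secp256k1 field prime

def addmod(a: int, b: int, p: int = P) -> int:
    s = a + b
    # Branchless select: mask is -1 (all ones) iff s >= p, else 0,
    # so p is subtracted exactly when the sum exceeds the modulus.
    mask = -int(s >= p)
    return s - (p & mask)

assert addmod(P - 1, 1) == 0
assert addmod(5, 7) == 12
```

The point is that addmod is just one wide addition plus one conditional subtraction, so it should be far cheaper than a full 256x256-bit multiplication with reduction.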

Addmod

Local percents
image
Global percents
image

Submod

Local percents
image
Global percents
image


Partial reduce

The Crandall partial reduction seems to need a prefetch around `mulx` on rsp-based temporaries:

image

vs `mulx` on registers
image
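For context, a minimal Python sketch of the pseudo-Mersenne (Crandall) reduction idea for secp256k1, where p = 2^256 - c with c = 2^32 + 977 (illustrative only; the actual assembly works on four 64-bit limbs with `mulx`):

```python
# Illustrative sketch: Crandall (pseudo-Mersenne) partial reduction for secp256k1.
# p = 2^256 - c, so 2^256 ≡ c (mod p): fold the high half back via a multiply by c.
C = 2**32 + 977
P = 2**256 - C

def partial_reduce(x: int) -> int:
    """Reduce x < 2^512 to a value congruent to x mod P in roughly 257 bits."""
    hi, lo = x >> 256, x & (2**256 - 1)
    t = lo + hi * C          # first fold: may still exceed 2^256
    hi, lo = t >> 256, t & (2**256 - 1)
    return lo + hi * C       # second fold: partially reduced, not fully canonical

x = 123456789 * P + 42
assert partial_reduce(x) % P == 42
```

This is why the hot path is essentially a `mulx` by the small constant c plus carries; any stall around those `mulx` instructions shows up directly in the profile.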


mratsim commented Jul 27, 2024

perf stat -B -e cache-references,cache-misses,cycles,instructions,branches,faults,migrations build/bench/bench_ec_g1_clang_asmIfAvailable

Whoops, cache misses it is
image

perf record -e cache-references,cache-misses,cycles --call-graph dwarf build/bench/bench_ec_g1_clang_asmIfAvailable

image

Addmod

Cache misses local

image

Cache misses global

image

Submod

Cache-misses local

image

Cache-misses global

image


mratsim commented Jul 27, 2024

Reverse engineering

image

Each call has some MOV and LEA ceremony, but recent CPUs can issue 4 MOVs per cycle.

image

Function calls decompiled from assembly

void sum__bench95ec95g49_u1825
               (undefined8 param_1,longlong param_2,undefined8 param_3,undefined8 param_4)

{
  longlong lVar1;
  longlong lVar2;
  longlong unaff_RSI;
  longlong in_FS_OFFSET;
  undefined local_158 [32];
  undefined local_138 [32];
  undefined local_118 [32];
  undefined local_f8 [32];
  undefined local_d8 [32];
  undefined local_b8 [32];
  undefined local_98 [32];
  undefined local_78 [32];
  undefined local_58 [32];
  longlong local_38;
  
  local_38 = *(longlong *)(in_FS_OFFSET + 0x28);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127);
  lVar1 = param_2 + 0x20;
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127);
  lVar2 = param_2 + 0x40;
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,
             unaff_RSI + 0x20,param_3,param_4,param_2);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,lVar1);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_f8);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_98);
  submod_asm__OOZconstantineZnamedZconstantsZbandersnatch95subgroups_u482
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_f8);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,
             unaff_RSI + 0x40);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,lVar2);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_118);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_b8);
  submod_asm__OOZconstantineZnamedZconstantsZbandersnatch95subgroups_u482
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_118);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,
             unaff_RSI + 0x40);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,lVar2);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_138);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_b8);
  submod_asm__OOZconstantineZnamedZconstantsZbandersnatch95subgroups_u482
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_138);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_78);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_118);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_b8);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_58);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_b8);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_58);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_58);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_58);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_b8);
  submod_asm__OOZconstantineZnamedZconstantsZbandersnatch95subgroups_u482
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_b8);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_138);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_58);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_138);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_58);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_58);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_58);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_138);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_98);
  submod_asm__OOZconstantineZnamedZconstantsZbandersnatch95subgroups_u482
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_118);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_78);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_158);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_98);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_d8);
  mulCran_asm_adx__bench95ec95g49_u823
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_f8);
  addmod_asm__bench95ec95g49_u1081
            (Secp256k1_Modulus__OOZconstantineZnamedZconfig95fields95and95curves_u1127,local_78);
  if (*(longlong *)(in_FS_OFFSET + 0x28) == local_38) {
    return;
  }
                    /* WARNING: Subroutine does not return */
  __stack_chk_fail();
}
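A quick back-of-the-envelope on the decompiled stack frame: the nine 32-byte field-element temporaries alone span several 64-byte cache lines, which is consistent with the cache-miss profile above (illustrative arithmetic, assuming the locals are laid out contiguously as the decompilation suggests):

```python
# Back-of-the-envelope (assuming contiguous layout of the decompiled locals):
# nine 32-byte field-element temporaries on the stack.
import math

n_temporaries = 9
bytes_per_temp = 32
cache_line = 64

total = n_temporaries * bytes_per_temp  # total bytes of temporaries
lines = math.ceil(total / cache_line)   # cache lines touched if contiguous
print(total, lines)                     # 288 bytes across at least 5 cache lines
```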


mratsim commented Aug 5, 2024

Another potential slowness source is code alignment: https://www.bazhenov.me/posts/2024-02-performance-roulette/

As CPUs fetch instructions 64 bytes at a time, instruction density can be quite important. If we inline the modulus and load it into a register with MOV, each 64-bit limb requires 10 bytes to encode: 1 byte for the REX prefix, 1 for the MOV opcode, and 8 for the 64-bit immediate.
https://github.com/mratsim/jitterland/blob/02febc4/jit/jit_x86_64_load_store.nim#L12-L18

func mov*(a: var Assembler[Reg_X86_64], reg: static range[rax..rdi], imm64: pointer) {.inline.} =
  ## Move immediate 64-bit pointer value into register
  a.code.add [
    rex_prefix(w = 1),
    static(0xB8.byte + reg.byte) # Move imm to r
  ]
  a.code.add cast[array[8, byte]](imm64)

So storing the modulus as a const and loading it from memory should be preferred.
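The 10-byte figure can be checked with a small sketch that emits the same `movabs r64, imm64` encoding as the jitterland snippet above (Python here purely for illustration):

```python
import struct

def mov_imm64(reg_index: int, imm64: int) -> bytes:
    """Encode `movabs r64, imm64` for rax..rdi (reg_index 0..7):
    REX.W prefix + (0xB8 + reg) opcode + 8-byte little-endian immediate."""
    assert 0 <= reg_index <= 7
    return bytes([0x48, 0xB8 + reg_index]) + struct.pack("<Q", imm64)

# Loading the low 64-bit limb of the secp256k1 modulus into rax:
enc = mov_imm64(0, 0xFFFFFFFEFFFFFC2F)
assert len(enc) == 10  # 1 REX + 1 opcode + 8 immediate
```

Inlining a 256-bit modulus this way costs 4 x 10 = 40 bytes of instruction stream, well over half a 64-byte fetch block, whereas a memory load of a const uses a much shorter encoding.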
