
Performance much worse than hnswlib #67

Open
siddhsql opened this issue Aug 19, 2023 · 5 comments

@siddhsql

Hello,

Thanks for this library. I did some benchmarking, and search is about 100x slower than the C++ hnswlib. Do you know why?

@jelmerk
Owner

jelmerk commented Aug 25, 2023

Do you have the code somewhere? It's definitely slower, but certainly not 100 times slower.

@siddhsql
Author

I have attached the code as a zip file. I am testing against the glove-100-angular dataset, which can be downloaded from ann-benchmarks.com. It has 1M base vectors (d = 100) and 10k test vectors.

src.zip

The Java code (this library) takes 2 mins to build the index (i.e., adding 1M vectors to the index) and 4.6s to query the index (10k test vectors).

The C++ code (original hnswlib) takes 35.94 seconds to build the index and 0.048 seconds to query the index (i.e., 100x faster).

Both tests were run on the same Linux machine with 14 threads (1 thread per vCPU). Multi-threading only applies when building the index; querying is single-threaded in both cases.

@siddhsql
Author

One question (unrelated to the topic in this thread, btw) regarding this line:

Object lock = locks.computeIfAbsent(item.id(), k -> new Object());

Can't you just use

synchronized (node)

like you do on line 268?
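To make the question concrete, here is a minimal sketch (my own illustration, not the library's actual code; the class name LockSketch is made up) of the per-id lock pattern being asked about. One possible reason for this pattern over synchronized (node) is that a lock keyed by item id can be acquired before any node object for that item exists, e.g. during insertion:

```java
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of the locking strategy quoted above.
public class LockSketch {
    private final ConcurrentHashMap<Integer, Object> locks = new ConcurrentHashMap<>();

    public Object lockFor(int itemId) {
        // computeIfAbsent atomically creates exactly one lock object per id,
        // so two threads inserting the same item always contend on the same monitor.
        return locks.computeIfAbsent(itemId, k -> new Object());
    }

    public static void main(String[] args) {
        LockSketch s = new LockSketch();
        // Same id yields the same lock object; different ids yield different ones.
        System.out.println(s.lockFor(42) == s.lockFor(42)); // true
        System.out.println(s.lockFor(42) == s.lockFor(43)); // false
    }
}
```

Synchronizing on the node itself only works once the node has been created and published, which may be why the insertion path uses the id-keyed lock instead.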

Another question: I tried the code with SIMD optimizations on an AVX-512 CPU, but the performance is basically the same. Do you know why? I was expecting up to a 16x improvement (16 × 32 = 512), since one instruction can process 16 floats.
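For context, the hot loop in question is essentially a float dot product, as in this sketch (my own simplified illustration, not the library's code). HotSpot's C2 compiler can auto-vectorize a loop of this shape on AVX-capable hardware, and the loop is often memory-bandwidth-bound rather than compute-bound, either of which could explain why explicit SIMD code adds little:

```java
// Minimal sketch: the scalar inner loop of a dot-product-style distance.
public class DotProduct {
    static float dot(float[] a, float[] b) {
        float sum = 0f;
        for (int i = 0; i < a.length; i++) {
            sum += a[i] * b[i]; // candidate for C2 auto-vectorization
        }
        return sum;
    }

    public static void main(String[] args) {
        float[] a = {1f, 2f, 3f};
        float[] b = {4f, 5f, 6f};
        // 1*4 + 2*5 + 3*6 = 32
        System.out.println(dot(a, b)); // 32.0
    }
}
```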

@siddhsql
Author

Re: my earlier question (SIMD optimizations on an AVX-512 CPU showing basically the same performance, where I expected a 16x improvement since one instruction would process 16 floats):

Could it be that the Java compiler is auto-vectorizing the code even without explicit SIMD optimizations? E.g., see this:

However, Oracle has apparently just accepted Intel's contribution to the HotSpot that enables FMA vectorization using AVX-512. To the delight of auto-vectorization fans and those lucky ones to have access to AVX-512 hardware, this may (with some luck) appear in one of the next jdk9 EA builds (beyond b175).

A link to support the previous statement (RFR(M): 8181616: FMA Vectorization on x86): mail.openjdk.java.net/pipermail/hotspot-compiler-dev/2017-June/…

Do you know if there is any way to verify this?
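One way to check (a sketch of a standard HotSpot diagnostic invocation; the class and method names below are placeholders, and printing disassembly requires the hsdis plugin on the JVM's library path):

```shell
# Print the JIT-compiled code for the distance function so you can inspect
# whether C2 emitted packed SIMD instructions.
java -XX:+UnlockDiagnosticVMOptions \
     -XX:CompileCommand=print,com.example.DistanceFunctions::cosineDistance \
     -jar benchmark.jar
# In the output, look for packed instructions such as vmulps/vfmadd231ps on
# zmm registers (512-bit) rather than scalar ops or narrower xmm/ymm registers.
```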

@jianshu93

Hello All,
Is this issue fixed, or is there any update? I just want to check the most recent status of the Java implementation. Also, I think Bray-Curtis distance is not a metric, while HNSW requires the distance to be a metric.

Thanks,

Jianshu
