The API now supports late chunking for our new model. Since we also want to extend the evaluation to jina-embeddings-v3, I implemented support for the new model and added a test that compares the locally produced embeddings with those returned by the API. In general, they can differ slightly (not only when using late chunking) because of different optimizations applied during inference (e.g. flash attention, CUDA optimizations for bf16, ...). Neither change is merged into the main branch yet, but if you want to see how to do late chunking with the API, take a look at the test case in this PR: https://github.com/jina-ai/late-chunking/pull/8/files
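For reference, a request using late chunking through the API could look roughly like the sketch below. The endpoint URL and the `late_chunking` field follow Jina's public API docs as I understand them; treat the exact field names as assumptions and verify against the current documentation. The sketch only builds the request payload and does not perform the network call:

```python
import json

# Assumed endpoint for the Jina embeddings API (verify against current docs).
API_URL = "https://api.jina.ai/v1/embeddings"

def build_payload(chunks, model="jina-embeddings-v3"):
    """Build the request body: pass the pre-split chunks of ONE document as
    `input` and set `late_chunking` so the API embeds them with full-document
    context instead of embedding each chunk independently."""
    return {
        "model": model,
        "input": chunks,
        "late_chunking": True,
    }

payload = build_payload([
    "Berlin is the capital of Germany.",
    "Its population is about 3.85 million.",
])
print(json.dumps(payload, indent=2))
# A real call would POST this payload with an `Authorization: Bearer <key>`
# header, e.g. requests.post(API_URL, json=payload, headers=...).
```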
Thank you, Jina team, for sharing this method.
I am currently trying to implement late chunking in my own workflow.
I noticed the following example:
Is it possible to use Jina’s API for this?
From what I’ve observed, the segment API only returns the start and end token offsets, and it doesn’t seem to support this use case.
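Even with only start/end token offsets, the pooling step of late chunking can be done locally once you have token-level embeddings for the whole document: embed the full text in one forward pass, then mean-pool the token vectors inside each span. A minimal NumPy sketch (the span format and the source of the token embeddings are assumptions, not part of any specific API):

```python
import numpy as np

def pool_chunks(token_embeddings, spans):
    """Mean-pool token embeddings over each (start, end) token span.

    token_embeddings: array of shape (num_tokens, dim) for the WHOLE document,
    produced by a single forward pass. This is the "late" part: every token
    already attended to the full context before we split into chunks.
    spans: list of (start, end) token offsets, e.g. from a segmenter.
    """
    return np.stack([token_embeddings[s:e].mean(axis=0) for s, e in spans])

# Toy example: 10 tokens, 4-dim embeddings, split into two chunks.
tok = np.arange(40, dtype=float).reshape(10, 4)
chunk_embs = pool_chunks(tok, [(0, 5), (5, 10)])
print(chunk_embs.shape)  # (2, 4)
```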
Let me know if I need to make any adjustments!