
Commit

akshayballal95 committed Jan 14, 2025
2 parents 94914bc + 223e9b9 commit 5382238
Showing 3 changed files with 38 additions and 5 deletions.
8 changes: 5 additions & 3 deletions README.md
@@ -21,7 +21,7 @@
<div align="center">

<p align="center">
- <b> Inference, ingestion, and indexing – supercharged by Rust 🦀</b>
+ <b> Inference, Ingestion, and Indexing – supercharged by Rust 🦀</b>
<br />
<a href="https://starlightsearch.github.io/EmbedAnything/references/"><strong>Python docs »</strong></a>
<br />
@@ -73,9 +73,11 @@ EmbedAnything is a minimalist, highly performant, lightning-fast, lightweight, m

- **Local Embedding** : Works with local embedding models like BERT and JINA
- **ONNX Models**: Works with ONNX models for BERT and ColPali
- - **ColPali** : Support for ColPali in GPU version
+ - **ColPali** : Support for ColPali in GPU version both on ONNX and Candle
- **Splade** : Support for sparse embeddings for hybrid retrieval
- **ReRankers** : Support for ReRanking Models for better RAG.
+ - **ColBERT** : Support for ColBert on ONNX
+ - **ModernBERT**: Increase your token length to 8K
- **Cloud Embedding Models**: Supports OpenAI and Cohere.
- **MultiModality** : Works with text sources (PDF, TXT, MD), images (JPG), and audio (WAV)
- **Rust** : All the file processing is done in Rust for speed and efficiency
@@ -121,7 +123,7 @@ data = embed_anything.embed_file("file_address", embedder=model, config=config)
| Bert | All Bert based models |
| CLIP | openai/clip-* |
| Whisper| [OpenAI Whisper models](https://huggingface.co/collections/openai/whisper-release-6501bba2cf999715fd953013)|
- | ColPali | vidore/colpali-v1.2-merged |
+ | ColPali | starlight-ai/colpali-v1.2-merged-onnx|
| Colbert | answerdotai/answerai-colbert-small-v1, jinaai/jina-colbert-v2 and more |
| Splade | [Splade Models](https://huggingface.co/collections/naver/splade-667eb6df02c2f3b0c39bd248) and other Splade like models |
| Reranker | [Jina Reranker Models](https://huggingface.co/jinaai/jina-reranker-v2-base-multilingual), Xenova/bge-reranker |
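Whichever model from the table produces the vectors, dense retrieval ultimately ranks documents by cosine similarity between the query embedding and each document embedding. A minimal stdlib-only sketch of that ranking step (the vectors here are toy values standing in for real model outputs, not EmbedAnything API calls):

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the two vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy embeddings: "doc_a" points in the same direction as the query.
query = [0.1, 0.3, 0.5]
docs = {
    "doc_a": [0.1, 0.3, 0.5],
    "doc_b": [0.5, 0.1, 0.0],
}

# Rank documents by similarity to the query, best first.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]), reverse=True)
```

In practice the vectors come from `embed_anything.embed_file`, and a vector database usually performs this ranking at scale; the math is the same.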
4 changes: 2 additions & 2 deletions docs/blog/posts/Journey.md
@@ -50,7 +50,7 @@ And thus, vector streaming was born.

It's time to release 0.3 because we underwent major code refactoring. All the major functions are refactored, making calling models more intuitive and optimized. Check out our docs and usage. We also added audio modality and different types of ingestion.

- We only supported dense, so we expanded the types of embedding we could support. We went for sparse and started supporting ColPali, Onnx, and Candle.
+ We only supported dense, so we expanded the types of embedding we could support. We went for sparse and started supporting ColPali, ColBert, ModernBert, Reranker, Jina V3.

## What We Got Right

@@ -61,7 +61,7 @@ We also released benches comparing it with other inference engines, and to our surprise it

We presented EmbedAnything at many conferences and meetups, like PyData Global, Elastic, Voxel51 meetups, AI builders, etc. Additionally, we forged collaborations with major brands like Weaviate and Elastic, a strategy we’re excited to continue expanding in 2025.

- [Weaviate Collab](https://www.youtube.com/watch?v=OJRWPLQ44Dw)
+ [Elastic Collab](https://www.youtube.com/live/OzQopxkxHyY?si=shJ2hADyPPsYWmIF)


## What We Initially Got Wrong
31 changes: 31 additions & 0 deletions docs/blog/posts/v0.5.md
@@ -0,0 +1,31 @@
---
draft: false
date: 2025-01-01
authors:
- sonam
- akshay
slug: modernBERT
title: version 0.5
---

We are thrilled to share that EmbedAnything version 0.5 is out now, and it ships major developments like support for ModernBERT and reranker models, along with ingestion pipeline support for DOCX and HTML. Let’s get into the details.

Best of all is the support for late-interaction models, both ColPali and ColBERT, on ONNX.

1. **ModernBERT** support: It made quite a splash, and we were glad to add it to EmbedAnything. In addition to being faster and more accurate, ModernBERT increases the context length to 8k tokens (compared to just 512 for most encoders), and it is the first encoder-only model that includes a large amount of code in its training data.
2. **ColPali on ONNX**: Running the ColPali model directly on a local machine might not always be feasible. To address this, we developed a **quantized version of ColPali**. Find it on our Hugging Face page, [here](https://huggingface.co/starlight-ai/colpali-v1.2-merged-onnx). You can run ColPali both on Candle and on ONNX.
3. **ColBERT**: ColBERT is a *fast* and *accurate* retrieval model, enabling scalable BERT-based search over large text collections in tens of milliseconds.
4. **Rerankers**: We recently contributed support for reranking models to Candle so we could add them to our own library; EmbedAnything can now run any kind of reranking model. Precision meets performance! Use reranking models to refine your retrieval results for even greater accuracy.
5. **Jina V3**: We also contributed support for Jina V3 models, so EmbedAnything can seamlessly integrate any V3 model.
6. **DOCX Processing**:

Effortlessly extract text from .docx files and convert it into embeddings. Simplify your document workflows like never before!

7. **HTML Processing**:

Parsing and embedding HTML documents just got easier!

✅ Extract rich metadata with embeddings
✅ Handle code blocks separately for better context

Supercharge your documentation retrieval with these advanced capabilities.
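Late-interaction models like ColBERT and ColPali keep one embedding per query token and per document token (or image patch), and score a query-document pair with MaxSim: for each query token, take its best dot product against any document token, then sum those maxima. A stdlib-only sketch of that scoring rule (the toy unit vectors stand in for real per-token embeddings):

```python
def maxsim_score(query_tokens, doc_tokens):
    """MaxSim: for each query token, take the maximum dot product
    with any document token, and sum those maxima.
    Vectors are assumed L2-normalized, as in ColBERT."""
    score = 0.0
    for q in query_tokens:
        score += max(sum(qi * di for qi, di in zip(q, d)) for d in doc_tokens)
    return score

# Toy per-token embeddings in 2D.
query = [[1.0, 0.0], [0.0, 1.0]]
doc_close = [[1.0, 0.0], [0.0, 1.0], [0.6, 0.8]]  # matches both query tokens
doc_far = [[0.0, -1.0], [-1.0, 0.0]]              # matches neither
```

Because scoring compares every query token against every document token, compact exports such as the quantized ColPali ONNX model above are what keep this practical on a local machine.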
