Show HN: Model2vec-Rs – Fast Static Text Embeddings in Rust

github.com

52 points by Tananon 11 hours ago

Hey HN! We’ve just open-sourced model2vec-rs, a Rust crate for loading and running Model2Vec static embedding models with zero Python dependency. This lets you embed text at (very) high throughput, for example in a Rust-based microservice or CLI tool. It can be used for semantic search, retrieval, RAG, or any other text embedding use case.

Main Features:

- Rust-native inference: Load any Model2Vec model from Hugging Face or your local path with StaticModel::from_pretrained(...).

- Tiny footprint: The crate itself is only ~1.7 MB, with embedding models between 7 and 30 MB.
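For intuition, here's a toy sketch of what a static embedding model does at inference time: a token-to-vector lookup followed by mean pooling, with no attention. (This illustrates the general technique, not the crate's actual internals or API.)

```rust
use std::collections::HashMap;

/// Toy static embedding: each token maps to a fixed vector, and a
/// sentence embedding is the mean of its token vectors. Unknown
/// tokens are skipped. No attention is computed at inference time.
fn embed(table: &HashMap<&str, Vec<f32>>, text: &str) -> Vec<f32> {
    let dim = table.values().next().map_or(0, |v| v.len());
    let mut sum = vec![0.0f32; dim];
    let mut n = 0usize;
    for tok in text.split_whitespace() {
        if let Some(v) = table.get(tok) {
            for (s, x) in sum.iter_mut().zip(v) {
                *s += x;
            }
            n += 1;
        }
    }
    if n > 0 {
        for s in sum.iter_mut() {
            *s /= n as f32;
        }
    }
    sum
}

fn main() {
    let mut table = HashMap::new();
    table.insert("hello", vec![1.0, 0.0]);
    table.insert("world", vec![0.0, 1.0]);
    let e = embed(&table, "hello world");
    println!("{:?}", e); // → [0.5, 0.5]
}
```

Because inference is just a lookup and an average, throughput is dominated by tokenization rather than matrix math, which is why static models are so much faster than attention-based encoders.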

Performance:

We benchmarked single-threaded on a CPU:

- Python: ~4650 embeddings/sec

- Rust: ~8000 embeddings/sec (~1.7× speedup)

First open-source project in Rust for us, so would be great to get some feedback!

gthompson512 9 minutes ago

How does it handle documents longer than the model's context length? Sorry, a ton of these projects get posted regularly and they don't usually think about this.

Edit: it seems like it just splits into sentences, which is a weird thing to do given that in English only ~95% agreement is even possible on what a sentence is.

```rust
// Process in batches
for batch in sentences.chunks(batch_size) {
    // Truncate each sentence to max_length * median_token_length chars
    let truncated: Vec<&str> = batch
        .iter()
        .map(|text| {
            if let Some(max_tok) = max_length {
                Self::truncate_str(text, max_tok, self.median_token_length)
            } else {
                text.as_str()
            }
        })
        .collect();
```
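For context, a standalone sketch of what a character-budget truncation helper like the `truncate_str` referenced above might look like (hypothetical stand-in, not the repo's actual implementation):

```rust
/// Truncate `text` to roughly `max_tokens * median_token_length`
/// characters, without splitting a multi-byte UTF-8 character.
/// Hypothetical stand-in for the `truncate_str` helper quoted above.
fn truncate_str(text: &str, max_tokens: usize, median_token_length: usize) -> &str {
    let max_chars = max_tokens * median_token_length;
    match text.char_indices().nth(max_chars) {
        // `byte_idx` is the byte offset of the first char past the
        // budget, so slicing up to it is always on a char boundary.
        Some((byte_idx, _)) => &text[..byte_idx],
        None => text, // already within budget
    }
}
```

Note this is a crude pre-tokenization guard: it bounds input length in characters before tokenizing, rather than truncating by actual token count.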

  • gthompson512 2 minutes ago

    Sorry, looking more, it doesn't seem like you are doing what you are saying. This just breaks text into bad chunks with no regard for semantics, and it's only ~200 lines of actual code. What is this for? Most models can handle fairly large contexts.

noahbp 8 hours ago

What is your preferred static text embedding model?

For someone looking to build a large embedding search, fast static embeddings seem like a good deal, but almost too good to be true. What quality tradeoff are you seeing with these models versus embedding models with attention mechanisms?

  • Tananon 8 hours ago

    It depends a bit on the task and language, but my go-to is usually minishlab/potion-base-8M for every task except retrieval (classification, clustering, etc.). For retrieval, minishlab/potion-retrieval-32M works best. If performance is critical, minishlab/potion-base-32M is best, although it's a bit bigger (~100 MB).

    There's definitely a quality trade-off. We have extensive benchmarks here: https://github.com/MinishLab/model2vec/blob/main/results/REA.... potion-base-32M reaches ~92% of the performance of MiniLM while being much faster (about 70x faster on CPU). It depends a bit on your constraints: if you have limited hardware and very high throughput requirements, these models still let you make decent-quality embeddings, though of course an attention-based model will be better, just more expensive.

    • refulgentis 4 hours ago

      Thanks man this is incredible work, really appreciate the details you went into.

      I've been chewing on whether there was a miracle that could make embeddings 10x faster for my search app that uses minilmv3, and it sounds like there is :) I never would have dreamed. I'll definitely be trying potion-base in my library for Flutter x ONNX.

      EDIT: I was thanking you for thorough benchmarking, then it dawned on me you were on the team that built the model - fantastic work, I can't wait to try this. And you already have ONNX!

      EDIT2: Craziest demo I've seen in a while. I'm seeing 23x faster, after 10 minutes of work.

echelon an hour ago

I love that you're doing this, Tananon.

We've been using Candle and Cudarc and having a fairly good time of it. We've built a real time drawing app on a custom LCM stack, and Rust makes it feel rock solid. Python is way too flimsy for something like this.

The more the Rust ML ecosystem grows, the better. It's a little bit fledgling right now, so every little bit counts.

If llama.cpp had instead been llama.rs, I feel like we would have had a runaway success.

We'll be checking this out! Kudos, and keep it up!

Havoc 8 hours ago

Surprised it is so much faster. I would have thought the Python one was C under the hood.

  • Tananon 8 hours ago

    Indeed, I also didn't expect it to be so much faster! I think it's because most of the time is actually spent on tokenization (which happens in Rust in the Python package too), but there is some transfer overhead between Rust and Python there. The other operations should be the same speed, I think.