
mixedbread-ai/mxbai-edge-colbert-v0-32m

Architecture

Parameters: 32M
Tasks: Encode
Outputs: Multi-Vec
Dimensions: Multi-Vec 64
Max Sequence Length: 8,192 tokens
License
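A Multi-Vec output means each text is encoded as a matrix of 64-dimensional token vectors rather than a single pooled embedding, and ColBERT-style retrieval scores a query against a document with MaxSim late interaction: each query token takes its maximum similarity over all document tokens, and these maxima are summed. A minimal NumPy sketch of that scoring step (the random arrays below are stand-ins for real model outputs, and `maxsim_score` is an illustrative helper, not part of the model's API):

```python
import numpy as np

def maxsim_score(query_vecs: np.ndarray, doc_vecs: np.ndarray) -> float:
    """ColBERT-style late interaction: for each query token vector, take its
    maximum cosine similarity over all document token vectors, then sum."""
    # Normalize rows so plain dot products equal cosine similarities.
    q = query_vecs / np.linalg.norm(query_vecs, axis=1, keepdims=True)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    sim = q @ d.T  # shape: (num_query_tokens, num_doc_tokens)
    return float(sim.max(axis=1).sum())

# Random stand-ins shaped like this model's output: 64-dim token vectors.
rng = np.random.default_rng(0)
query = rng.standard_normal((8, 64))    # 8 query tokens
doc = rng.standard_normal((120, 64))    # 120 document tokens

score = maxsim_score(query, doc)
```

Because every cosine similarity is at most 1, a query of n tokens scores at most n against any document, and exactly n against itself.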
Benchmarks

CQADupstackPhysicsRetrieval — scientific retrieval, English

Performance (L4 b1 c16):
  Corpus TPS: 28.3K    Corpus p50: 60.5 ms
  Query TPS: 3.0K      Query p50: 55.7 ms

CosQA — technology retrieval, English

Performance (L4 b1 c16):
  Corpus TPS: 15.0K    Corpus p50: 51.8 ms
  Query TPS: 1.9K      Query p50: 48.5 ms

FiQA2018 — finance retrieval, English

Performance (L4 b1 c16):
  Corpus TPS: 38.9K    Corpus p50: 57.5 ms
  Query TPS: 3.7K      Query p50: 48.0 ms

LegalBenchConsumerContractsQA — legal retrieval, English

Performance (L4 b1 c16):
  Corpus TPS: 87.3K    Corpus p50: 76.5 ms
  Query TPS: 5.4K      Query p50: 48.3 ms

NFCorpus — medical retrieval, English

Quality:
  nDCG@10: 0.3376    MAP@10: 0.1285    MRR@10: 0.5432
Performance (L4 b1 c16):
  Corpus TPS: 67.7K    Corpus p50: 58.6 ms
  Query TPS: 1.4K      Query p50: 53.0 ms

SCIDOCS — scientific retrieval, English

Performance (L4 b1 c16):
  Corpus TPS: 36.4K    Corpus p50: 70.2 ms
  Query TPS: 2.7K      Query p50: 61.8 ms

SciFact — scientific retrieval, English

Performance (L4 b1 c16):
  Corpus TPS: 58.8K    Corpus p50: 61.2 ms
  Query TPS: 5.4K      Query p50: 49.0 ms

StackOverflowQA — technology retrieval, English

Performance (L4 b1 c16):
  Corpus TPS: 52.8K    Corpus p50: 58.9 ms
  Query TPS: 68.3K     Query p50: 66.6 ms
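Throughput (TPS, items processed per second) and p50 (median per-request latency) figures like those above can be measured with a small timing harness. The sketch below is illustrative only — `benchmark` and `fake_encode` are hypothetical names, and the sleeping placeholder stands in for a real encode call; it is not the harness that produced these numbers:

```python
import time
import statistics

def benchmark(fn, batches, batch_size=1):
    """Return (throughput in items/sec, median latency in ms) for fn over batches."""
    latencies = []
    start = time.perf_counter()
    for batch in batches:
        t0 = time.perf_counter()
        fn(batch)
        latencies.append(time.perf_counter() - t0)
    total = time.perf_counter() - start
    tps = (len(batches) * batch_size) / total
    p50_ms = statistics.median(latencies) * 1000
    return tps, p50_ms

# Placeholder "encoder" that just sleeps ~1 ms; swap in a real encode call.
def fake_encode(batch):
    time.sleep(0.001)

tps, p50 = benchmark(fake_encode, batches=[["text"]] * 50)
```

With batch size 1 (the b1 setting above), per-item throughput is simply the inverse of mean latency; larger batches trade higher p50 for higher TPS.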
