
BAAI/bge-m3 (Score)

Architecture:
Parameters: 568M
Tasks: Encode
Outputs: Dense, Sparse, Multi-Vec
Dimensions: Dense 1,024; Sparse 250,002; Multi-Vec 1,024
Max Sequence Length: 8,192 tokens
License:

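BGE-M3 returns three complementary representations per text: a 1,024-d dense vector, a 250,002-d sparse lexical-weight vector, and per-token multi-vectors (ColBERT-style). The sketch below shows one common way these three signals are scored and fused, using synthetic stand-ins for real model output; the function names and fusion weights are illustrative assumptions, not the model's official scoring code.

```python
import numpy as np

def dense_score(q, d):
    # Cosine similarity between dense embeddings.
    return float(np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)))

def sparse_score(q_weights, d_weights):
    # Lexical match: sum of weight products over tokens present in both.
    shared = set(q_weights) & set(d_weights)
    return float(sum(q_weights[t] * d_weights[t] for t in shared))

def multivec_score(q_vecs, d_vecs):
    # ColBERT-style MaxSim: for each query token vector, take the best
    # match among document token vectors, then average over query tokens.
    sims = q_vecs @ d_vecs.T              # (n_query_tokens, n_doc_tokens)
    return float(sims.max(axis=1).mean())

def combined(q, d, w=(0.4, 0.2, 0.4)):
    # Illustrative weighted fusion of the three signals (weights assumed).
    return (w[0] * dense_score(q["dense"], d["dense"])
            + w[1] * sparse_score(q["sparse"], d["sparse"])
            + w[2] * multivec_score(q["colbert"], d["colbert"]))

# Synthetic stand-ins for real BGE-M3 output.
rng = np.random.default_rng(0)
q = {"dense": rng.normal(size=1024),
     "sparse": {"revenue": 0.31, "growth": 0.22},
     "colbert": rng.normal(size=(4, 1024))}
d = {"dense": rng.normal(size=1024),
     "sparse": {"revenue": 0.27, "q3": 0.18},
     "colbert": rng.normal(size=(12, 1024))}
print(combined(q, d))
```

Dense vectors suit semantic similarity, sparse weights preserve exact-term matching, and the multi-vectors support fine-grained late interaction; combining them is a retrieval-quality trade-off, not a requirement.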
Benchmarks

All results below were measured under the same configuration (L4, b1, c16). Every dataset is an English retrieval task.

Benchmark                       Domain       Corpus TPS  Corpus p50  Query TPS  Query p50
CQADupstackPhysicsRetrieval     scientific   27.0K       75.3 ms     2.6K       55.2 ms
CosQA                           technology   13.5K       58.4 ms     1.5K       57.6 ms
FiQA2018                        finance      26.8K       89.3 ms     2.9K       55.8 ms
LegalBenchConsumerContractsQA   legal        36.7K       224.8 ms    3.7K       59.5 ms
NFCorpus                        medical      34.4K       137.4 ms    1.2K       56.8 ms
NanoFiQA2018Retrieval           finance      30.9K       86.4 ms     2.8K       54.9 ms
SCIDOCS                         scientific   31.0K       101.1 ms    2.7K       55.7 ms
SciFact                         scientific   32.2K       126.9 ms    3.9K       57.5 ms
StackOverflowQA                 technology   29.5K       118.6 ms    31.5K      147.9 ms

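Corpus throughput translates directly into bulk-indexing time. A back-of-the-envelope sketch, assuming Corpus TPS counts tokens per second (an assumption; the page does not define the unit) and that throughput stays constant at the reported rate:

```python
def indexing_time_s(num_tokens: int, tokens_per_s: float) -> float:
    # Idealized wall-clock estimate: total tokens divided by
    # sustained embedding throughput (tokens per second).
    return num_tokens / tokens_per_s

# Hypothetical corpus: 1M passages averaging 300 tokens each.
total_tokens = 1_000_000 * 300
# At the 27.0K TPS reported for CQADupstackPhysicsRetrieval:
hours = indexing_time_s(total_tokens, 27_000.0) / 3600
print(round(hours, 1))  # → 3.1 hours
```

Real indexing time will be higher once tokenization, batching overhead, and index writes are included; the estimate only bounds the embedding step itself.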