
mixedbread-ai/mxbai-rerank-large-v2 (Encode)

Architecture:
Parameters: 435M
Tasks: Score
Outputs: Score
Max Sequence Length: 8,192 tokens
License:

Benchmarks

AskUbuntuDupQuestions (technology, reranking, en)

Quality:
nDCG@10: 0.6914
MAP@10: 0.5401
MRR@10: 0.7788

Performance (L4, b1, c16):
Query TPS: 2.6K
Query p50: 132.2 ms

CMedQAv1Reranking (medical, reranking, zh)

Quality:
MAP@10: 0.8304
MRR@10: 0.8633

CMedQAv2Reranking (medical, reranking, zh)

Quality:
MAP@10: 0.8282
MRR@10: 0.8628

MMarcoReranking (general, reranking, zh)

Quality:
MAP@10: 0.3258
MRR@10: 0.3500

T2Reranking (general, reranking, zh)

Quality:
MAP@10: 0.5458
MRR@10: 0.7742

Self-hosted inference for search & document processing

Cut API costs by 50x, boost quality with 85+ SOTA models, and keep your data in your own cloud.
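When self-hosting a reranker like this one, client code typically sends a query plus candidate documents and receives per-document relevance scores back. The sketch below shows that flow; the request field names and the idea of a dedicated rerank endpoint are assumptions for illustration, not the documented API of this engine:

```python
def build_rerank_request(query, documents,
                         model="mixedbread-ai/mxbai-rerank-large-v2", top_k=3):
    # Hypothetical JSON body for a rerank endpoint; consult the actual
    # server's API reference for the real schema.
    return {"model": model, "query": query, "documents": documents, "top_k": top_k}

def rank_by_score(documents, scores):
    # Order documents by descending relevance score, as a reranker
    # response would before truncating to top_k.
    order = sorted(range(len(documents)), key=lambda i: -scores[i])
    return [documents[i] for i in order]
```

The two helpers separate request construction from response handling, so the same ranking logic works whether scores come from a local model or a remote endpoint.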
