cross-encoder/ms-marco-MiniLM-L-12-v2

Architecture
Parameters: 33M
Tasks: Score
Outputs: Score
Max Sequence Length: 512 tokens
License: Apache-2.0
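
For reference, a minimal usage sketch with the sentence-transformers CrossEncoder API; the query and passages are illustrative placeholders, not from this card:

    from sentence_transformers import CrossEncoder

    # max_length matches the model's 512-token sequence limit
    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-12-v2", max_length=512)

    query = "How many people live in Berlin?"
    passages = [
        "Berlin has a population of 3.5 million registered inhabitants.",
        "Berlin is well known for its museums.",
    ]

    # The cross-encoder reads each (query, passage) pair jointly and
    # returns one relevance score per pair (higher = more relevant).
    scores = model.predict([(query, p) for p in passages])

    # Rerank: sort passages by descending relevance score
    for score, passage in sorted(zip(scores, passages), reverse=True):
        print(f"{score:.2f}  {passage}")

In a typical retrieve-then-rerank setup, a fast first-stage retriever selects candidates and this model re-scores only the top few dozen, since each pair costs a full forward pass.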

Benchmarks

AskUbuntuDupQuestions

Tags: technology, reranking, en

Quality
nDCG@10: 0.6145
MAP@10: 0.4558
MRR@10: 0.6921

Performance (L4, batch size 1, concurrency 16)
Query TPS: 8.2K
Query p50 latency: 31.7 ms

CMedQAv1Reranking

Tags: medical, reranking, zh

Quality
MAP@10: 0.1016
MRR@10: 0.1528

CMedQAv2Reranking

Tags: medical, reranking, zh

Quality
MAP@10: 0.1218
MRR@10: 0.1812

MMarcoReranking

Tags: general, reranking, zh

Quality
MAP@10: 0.0426
MRR@10: 0.0446

T2Reranking

Tags: general, reranking, zh

Quality
MAP@10: 0.5184
MRR@10: 0.7511
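
The quality numbers above are standard top-10 ranking metrics. As a reference for how they are computed, here is a self-contained sketch over one query's binary relevance labels (illustrative data; exact definitions, e.g. the MAP denominator, vary slightly between benchmark harnesses):

    import math

    # Binary relevance of the reranked top-10 results, best-ranked first
    # (illustrative labels, not taken from any benchmark above).
    relevance = [0, 1, 0, 0, 1, 0, 0, 0, 1, 0]

    def mrr_at_k(rel, k=10):
        # Reciprocal rank of the first relevant result in the top k.
        for i, r in enumerate(rel[:k], start=1):
            if r:
                return 1.0 / i
        return 0.0

    def map_at_k(rel, k=10):
        # Average of precision@i over the relevant positions in the top k.
        hits, total = 0, 0.0
        for i, r in enumerate(rel[:k], start=1):
            if r:
                hits += 1
                total += hits / i
        return total / hits if hits else 0.0

    def ndcg_at_k(rel, k=10):
        # DCG of this ranking divided by the DCG of the ideal reordering.
        dcg = sum(r / math.log2(i + 1) for i, r in enumerate(rel[:k], start=1))
        ideal = sorted(rel, reverse=True)[:k]
        idcg = sum(r / math.log2(i + 1) for i, r in enumerate(ideal, start=1))
        return dcg / idcg if idcg else 0.0

    print(f"MRR@10  = {mrr_at_k(relevance):.4f}")   # 0.5000
    print(f"MAP@10  = {map_at_k(relevance):.4f}")   # 0.4111
    print(f"nDCG@10 = {ndcg_at_k(relevance):.4f}")  # 0.6189

Each benchmark reports these values averaged over all of its queries.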
