
sentence-transformers/all-MiniLM-L6-v2

Architecture
Parameters: 22M
Tasks: Encode
Outputs: Dense
Dimensions: 384 (dense)
Max Sequence Length: 256 tokens
License:

Benchmarks

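Each entry below reports nDCG@10, MAP@10, and MRR@10. As a reference for reading these numbers, here is a minimal sketch of nDCG@k and MRR@k under their standard definitions (graded or binary relevance, ranks 1-indexed):

```python
import math

def dcg_at_k(relevances, k=10):
    # DCG@k: relevance discounted by log2(rank + 1), rank starting at 1.
    return sum(rel / math.log2(rank + 1)
               for rank, rel in enumerate(relevances[:k], start=1))

def ndcg_at_k(relevances, k=10):
    # Normalize by the DCG of the ideal (best possible) ordering.
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

def mrr_at_k(relevant_flags, k=10):
    # Reciprocal rank of the first relevant result within the top k.
    for rank, rel in enumerate(relevant_flags[:k], start=1):
        if rel:
            return 1.0 / rank
    return 0.0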
CQADupstackPhysicsRetrieval (scientific retrieval, English)

Quality
nDCG@10: 0.4698
MAP@10: 0.4073
MRR@10: 0.4632

Performance (A10G, b1 c16)
Corpus TPS: 1.7K
Corpus p50: 1.1s
Query TPS: 512
Query p50: 327.0ms

Performance (L4, b1 c16)
Corpus TPS: 37.0K
Corpus p50: 53.3ms
Query TPS: 2.8K
Query p50: 49.8ms

CosQA (technology retrieval, English)

Quality
nDCG@10: 0.3288
MAP@10: 0.2577
MRR@10: 0.2885

Performance (A10G, b1 c16)
Corpus TPS: 1.1K
Corpus p50: 750.0ms
Query TPS: 299
Query p50: 296.9ms

Performance (L4, b1 c16)
Corpus TPS: 17.1K
Corpus p50: 50.1ms
Query TPS: 1.8K
Query p50: 45.5ms

FiQA2018 (finance retrieval, English)

Quality
nDCG@10: 0.3687
MAP@10: 0.2914
MRR@10: 0.4451

Performance (A10G, b1 c16)
Corpus TPS: 1.9K
Corpus p50: 1.3s
Query TPS: 587
Query p50: 345.4ms

Performance (L4, b1 c16)
Corpus TPS: 46.3K
Corpus p50: 52.4ms
Query TPS: 3.7K
Query p50: 43.8ms

LegalBenchConsumerContractsQA (legal retrieval, English)

Quality
nDCG@10: 0.6560
MAP@10: 0.5883
MRR@10: 0.5874

Performance (L4, b1 c16)
Corpus TPS: 128.8K
Corpus p50: 58.9ms
Query TPS: 4.3K
Query p50: 59.1ms

NFCorpus (medical retrieval, English)

Quality
nDCG@10: 0.3160
MAP@10: 0.1105
MRR@10: 0.5040

Performance (L4, b1 c16)
Corpus TPS: 81.3K
Corpus p50: 54.7ms
Query TPS: 1.5K
Query p50: 44.7ms

NanoFiQA2018Retrieval (finance retrieval, English)

Quality
nDCG@10: 0.4774
MAP@10: 0.3931
MRR@10: 0.5476

Performance (L4, b1 c16)
Corpus TPS: 44.2K
Corpus p50: 56.1ms
Query TPS: 2.8K
Query p50: 49.9ms

SCIDOCS (scientific retrieval, English)

Quality
nDCG@10: 0.2164
MAP@10: 0.1294
MRR@10: 0.3594

Performance (L4, b1 c16)
Corpus TPS: 55.3K
Corpus p50: 51.8ms
Query TPS: 4.1K
Query p50: 43.1ms

SciFact (scientific retrieval, English)

Quality
nDCG@10: 0.6451
MAP@10: 0.5959
MRR@10: 0.6047

Performance (L4, b1 c16)
Corpus TPS: 75.7K
Corpus p50: 52.8ms
Query TPS: 5.8K
Query p50: 44.5ms

StackOverflowQA (technology retrieval, English)

Quality
nDCG@10: 0.8396
MAP@10: 0.8117
MRR@10: 0.8117

Performance (L4, b1 c16)
Corpus TPS: 58.9K
Corpus p50: 56.0ms
Query TPS: 65.7K
Query p50: 61.5ms

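The performance rows report throughput (TPS) and median (p50) latency per GPU; "b1 c16" presumably means batch size 1 with 16 concurrent clients (an assumption, not stated on the card). A minimal single-threaded sketch of how such numbers can be measured for any callable:

```python
import statistics
import time

def measure(fn, requests=100, warmup=5):
    """Return (throughput in requests/s, p50 latency in seconds) for fn."""
    for _ in range(warmup):  # warm caches and lazy initialization
        fn()
    latencies = []
    start = time.perf_counter()
    for _ in range(requests):
        t0 = time.perf_counter()
        fn()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return requests / elapsed, statistics.median(latencies)
```

For example, `measure(lambda: model.encode(batch))` with a hypothetical `model` and `batch` would time single-batch encoding; a concurrent harness would instead run the measurement loops in parallel and pool the latencies.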