
intfloat/e5-small-v2

Architecture
Parameters: 33M
Tasks: Encode
Outputs: Dense
Dimensions: 384 (dense)
Max Sequence Length: 512 tokens
License:
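The model encodes text into a single dense 384-dimensional vector, and retrieval is a nearest-neighbor search over those vectors. The sketch below shows the mechanics with a stub encoder (a deterministic hash-based pseudo-embedding) so it runs without downloading weights; in practice you would replace `fake_encode` with a real call to `intfloat/e5-small-v2` (e.g. via `sentence-transformers`). Note that E5-family models expect `query: ` / `passage: ` prefixes on inputs.

```python
import hashlib

import numpy as np

DIM = 384  # e5-small-v2 output dimensionality (see card above)

def fake_encode(text: str) -> np.ndarray:
    """Stand-in for the real encoder: a deterministic pseudo-embedding.

    Replace with the actual model, e.g.
    SentenceTransformer("intfloat/e5-small-v2").encode(text).
    """
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:8], "big")
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(DIM)
    return v / np.linalg.norm(v)  # L2-normalize so dot product = cosine similarity

# E5-family models expect role prefixes on their inputs.
passages = [
    "passage: The capital of France is Paris.",
    "passage: Transformers use self-attention.",
]
corpus = np.stack([fake_encode(p) for p in passages])

query = fake_encode("query: what is the capital of France")
scores = corpus @ query  # cosine similarities (all vectors are unit-norm)
best = int(np.argmax(scores))
# With real embeddings, the semantically closest passage ranks first;
# the stub encoder here only demonstrates the ranking mechanics.
```

The same unit-norm dot-product ranking scales to large corpora by swapping the brute-force matrix product for an approximate nearest-neighbor index.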

Benchmarks

CQADupstackPhysicsRetrieval (scientific retrieval, en)
Performance (L4, b1, c16):
  Corpus: 42.5K TPS, p50 47.8 ms
  Query: 3.1K TPS, p50 48.7 ms

CosQA (technology retrieval, en)
Performance (L4, b1, c16):
  Corpus: 16.7K TPS, p50 49.7 ms
  Query: 1.6K TPS, p50 50.3 ms

FiQA2018 (finance retrieval, en)
Performance (L4, b1, c16):
  Corpus: 41.5K TPS, p50 57.8 ms
  Query: 3.3K TPS, p50 48.5 ms

LegalBenchConsumerContractsQA (legal retrieval, en)
Performance (L4, b1, c16):
  Corpus: 115.7K TPS, p50 63.0 ms
  Query: 4.0K TPS, p50 57.9 ms

NFCorpus (medical retrieval, en)
Performance (L4, b1, c16):
  Corpus: 95.9K TPS, p50 45.8 ms
  Query: 1.5K TPS, p50 43.2 ms

NanoFiQA2018Retrieval (finance retrieval, en)
Quality:
  NDCG@10: 0.4299  MAP@10: 0.3531  MRR@10: 0.4611
Performance (L4, b1, c16):
  Corpus: 58.3K TPS, p50 40.3 ms
  Query: 4.1K TPS, p50 35.0 ms
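The quality figures above (NDCG@10, MAP@10, MRR@10) are standard ranked-retrieval metrics. As a minimal sketch for a single query with binary relevance labels (one common convention; benchmark harnesses may differ in details such as the MAP denominator):

```python
import math

def ndcg_at_k(rels, k=10):
    """NDCG@k for one query; rels[i] is the relevance of the doc at rank i+1."""
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = sorted(rels, reverse=True)
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def mrr_at_k(rels, k=10):
    """Reciprocal rank of the first relevant doc within the top k."""
    for i, r in enumerate(rels[:k]):
        if r > 0:
            return 1.0 / (i + 1)
    return 0.0

def map_at_k(rels, k=10):
    """Average precision at k for one query (binary relevance)."""
    hits, precisions = 0, []
    for i, r in enumerate(rels[:k]):
        if r > 0:
            hits += 1
            precisions.append(hits / (i + 1))
    total_rel = sum(1 for r in rels if r > 0)
    return sum(precisions) / total_rel if total_rel else 0.0

# Toy ranking: relevant docs returned at ranks 2 and 4.
ranked = [0, 1, 0, 1]
```

The benchmark scores are these per-query values averaged over the full query set.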

SCIDOCS (scientific retrieval, en)
Performance (L4, b1, c16):
  Corpus: 57.1K TPS, p50 49.9 ms
  Query: 3.2K TPS, p50 47.2 ms

SciFact (scientific retrieval, en)
Performance (L4, b1, c16):
  Corpus: 85.1K TPS, p50 47.6 ms
  Query: 4.6K TPS, p50 48.1 ms

StackOverflowQA (technology retrieval, en)
Performance (L4, b1, c16):
  Corpus: 63.0K TPS, p50 53.7 ms
  Query: 55.0K TPS, p50 75.1 ms

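Each performance row pairs a throughput figure (TPS) with a median latency (p50). As a hypothetical sketch of how such numbers can be derived from raw per-request latencies (this is not the platform's actual measurement harness; with concurrent clients, throughput must be computed against wall-clock time, not the sum of latencies):

```python
import statistics

def summarize(latencies_s, items_per_req=1, wall_clock_s=None):
    """Reduce per-request latencies (seconds) to (TPS, p50 latency in ms).

    wall_clock_s is the elapsed time of the whole run; it defaults to the
    sum of latencies, which is only correct for a sequential client.
    """
    total = wall_clock_s if wall_clock_s is not None else sum(latencies_s)
    tps = len(latencies_s) * items_per_req / total
    p50_ms = statistics.median(latencies_s) * 1000
    return tps, p50_ms

# Hypothetical run: 8 requests of 16 texts each, 0.4 s total wall clock.
lat = [0.045, 0.050, 0.048, 0.047, 0.052, 0.049, 0.046, 0.051]
tps, p50 = summarize(lat, items_per_req=16, wall_clock_s=0.4)
```

Medians (p50) are preferred over means for latency reporting because they are robust to the long tail that warmup and GC pauses produce.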
Self-hosted inference for search & document processing

Cut API costs by 50x, boost quality with 85+ SOTA models, and keep your data in your own cloud.
