
intfloat/e5-base-v2

Architecture:
Parameters: 109M
Tasks: Encode
Outputs: Dense
Dimensions: 768 (dense)
Max Sequence Length: 512 tokens
License:

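Each encoded text yields a single 768-dimensional dense vector (the Dimensions figure above). In retrieval, queries and corpus passages are embedded and ranked by cosine similarity; the E5 model family expects inputs prefixed with "query: " or "passage: ". A minimal sketch of the similarity step, using random stand-in vectors in place of real model outputs:

```python
import numpy as np

DIM = 768  # dense output dimension of e5-base-v2

# Stand-ins for real embeddings; in practice these come from the model,
# with "query: " / "passage: " prefixes on the input texts (E5 convention).
rng = np.random.default_rng(0)
corpus = rng.normal(size=(5, DIM))
query = corpus[2] + 0.01 * rng.normal(size=DIM)  # make passage 2 the obvious match

def cosine_top_k(query, corpus, k=3):
    """Rank corpus rows by cosine similarity to the query vector."""
    q = query / np.linalg.norm(query)
    c = corpus / np.linalg.norm(corpus, axis=1, keepdims=True)
    scores = c @ q
    order = np.argsort(-scores)[:k]
    return order, scores[order]

order, scores = cosine_top_k(query, corpus)
```

Because the vectors are normalized first, the dot product equals cosine similarity, which is the standard comparison for dense retrieval embeddings.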
Benchmarks

CQADupstackPhysicsRetrieval (scientific retrieval, en)

Performance (L4, b1, c16):
  Corpus TPS: 40.2K
  Corpus p50: 48.6 ms
  Query TPS: 3.0K
  Query p50: 45.9 ms
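Each performance block reports throughput (TPS) and median latency (p50) under the setting in the block header; we read "L4 b1 c16" as an L4 GPU at batch size 1 with 16 concurrent clients, though that expansion is our assumption. A minimal sketch of how such summary numbers are derived from raw measurements:

```python
import statistics

def summarize(latencies_ms, tokens_processed, wall_time_s):
    """Reduce raw benchmark measurements to the two headline numbers:
    p50 latency (median of per-request latencies) and
    TPS throughput (total tokens processed / wall-clock seconds)."""
    p50 = statistics.median(latencies_ms)
    tps = tokens_processed / wall_time_s
    return p50, tps

p50, tps = summarize([40.0, 60.0, 50.0], 1000, 0.5)
```

With these three sample latencies the median is 50.0 ms, and 1000 tokens over half a second gives 2000 tokens/s; the tables above report the same two quantities at much larger scale.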

CosQA (technology retrieval, en)

Performance (L4, b1, c16):
  Corpus TPS: 18.8K
  Corpus p50: 43.2 ms
  Query TPS: 1.9K
  Query p50: 43.0 ms

FiQA2018 (finance retrieval, en)

Performance (L4, b1, c16):
  Corpus TPS: 45.2K
  Corpus p50: 52.3 ms
  Query TPS: 3.4K
  Query p50: 45.4 ms

LegalBenchConsumerContractsQA (legal retrieval, en)

Performance (L4, b1, c16):
  Corpus TPS: 109.6K
  Corpus p50: 68.3 ms
  Query TPS: 5.1K
  Query p50: 45.0 ms

NFCorpus (medical retrieval, en)

Quality:
  ndcg@10: 0.3541
  map@10: 0.1311
  mrr@10: 0.5549
Performance (L4, b1, c16):
  Corpus TPS: 79.2K
  Corpus p50: 57.9 ms
  Query TPS: 1.5K
  Query p50: 42.8 ms
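The quality rows report standard top-10 ranking metrics: ndcg@10, map@10, and mrr@10. A self-contained sketch of the common definitions (variants exist; this assumes binary relevance labels ordered by the model's ranking):

```python
import math

def dcg_at_k(rels, k):
    """Discounted cumulative gain over the top-k ranked relevances."""
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))

def ndcg_at_k(rels, k):
    """DCG normalized by the DCG of an ideal (sorted) ranking."""
    idcg = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / idcg if idcg > 0 else 0.0

def mrr_at_k(rels, k):
    """Reciprocal rank of the first relevant result in the top k."""
    for i, r in enumerate(rels[:k]):
        if r > 0:
            return 1.0 / (i + 1)
    return 0.0

def ap_at_k(rels, k, n_relevant):
    """Average precision at k: precision at each relevant rank,
    divided by min(number of relevant docs, k)."""
    hits, total = 0, 0.0
    for i, r in enumerate(rels[:k]):
        if r > 0:
            hits += 1
            total += hits / (i + 1)
    denom = min(n_relevant, k)
    return total / denom if denom else 0.0
```

For example, a ranking with relevance labels [1, 0, 1] and two relevant documents gives mrr@10 = 1.0 and ap@10 = (1/1 + 2/3) / 2 = 5/6. The table values are these per-query metrics averaged over the benchmark's query set.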

NanoFiQA2018Retrieval (finance retrieval, en)

Quality:
  ndcg@10: 0.4603
  map@10: 0.3791
  mrr@10: 0.5059
Performance (L4, b1, c16):
  Corpus TPS: 33.6K
  Corpus p50: 63.5 ms
  Query TPS: 3.5K
  Query p50: 49.2 ms

SCIDOCS (scientific retrieval, en)

Performance (L4, b1, c16):
  Corpus TPS: 53.2K
  Corpus p50: 53.2 ms
  Query TPS: 3.4K
  Query p50: 45.6 ms

SciFact (scientific retrieval, en)

Performance (L4, b1, c16):
  Corpus TPS: 67.5K
  Corpus p50: 59.2 ms
  Query TPS: 4.6K
  Query p50: 47.7 ms

StackOverflowQA (technology retrieval, en)

Performance (L4, b1, c16):
  Corpus TPS: 55.7K
  Corpus p50: 59.0 ms
  Query TPS: 63.4K
  Query p50: 63.1 ms
