
intfloat/e5-large-v2

Architecture

  Parameters:          335M
  Tasks:               Encode
  Outputs:             Dense
  Dimensions:          Dense: 1,024
  Max Sequence Length: 512 tokens
  License:             MIT
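The model encodes text into 1,024-dimensional dense vectors, so retrieval reduces to cosine similarity between a query vector and corpus vectors. A minimal scoring sketch, with random stand-in vectors in place of real model output (note that E5-family models also expect "query: " / "passage: " prefixes at encode time):

```python
import numpy as np

def top_k(query_emb: np.ndarray, corpus_embs: np.ndarray, k: int = 10) -> np.ndarray:
    """Return indices of the k nearest corpus vectors by cosine similarity."""
    # Normalize so that the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb)
    c = corpus_embs / np.linalg.norm(corpus_embs, axis=1, keepdims=True)
    scores = c @ q                    # (n_docs,) cosine scores
    return np.argsort(-scores)[:k]   # highest-scoring documents first

# Stand-in embeddings with this model's output dimensionality (1,024).
rng = np.random.default_rng(0)
corpus = rng.standard_normal((100, 1024))
query = corpus[42] + 0.01 * rng.standard_normal(1024)  # near-duplicate of doc 42

print(top_k(query, corpus, k=3)[0])  # doc 42 ranks first
```

In production the same dot-product scoring is usually delegated to a vector index rather than a full scan, but the ranking it produces is identical.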
Benchmarks

All tasks are English-language retrieval. Performance figures were measured at L4, b1, c16; p50 is median latency.

Performance (L4, b1, c16)

  Benchmark                      Domain      Corpus TPS  Corpus p50  Query TPS  Query p50
  CQADupstackPhysicsRetrieval    scientific       26.5K     74.3 ms       2.7K    53.9 ms
  CosQA                          technology       13.9K     60.1 ms       1.4K    57.6 ms
  FiQA2018                       finance          31.6K     80.9 ms       2.8K    55.5 ms
  LegalBenchConsumerContractsQA  legal            57.2K    140.9 ms       3.5K    58.3 ms
  NFCorpus                       medical          42.8K    112.1 ms       1.3K    53.6 ms
  NanoFiQA2018Retrieval          finance          27.3K     86.6 ms       2.8K    49.5 ms
  SCIDOCS                        scientific       33.2K     86.0 ms       2.7K    56.2 ms
  SciFact                        scientific       39.4K    108.1 ms       3.6K    58.1 ms
  StackOverflowQA                technology       39.2K     95.0 ms      42.1K   107.3 ms

Quality (where reported)

  Benchmark              ndcg@10  map@10  mrr@10
  NFCorpus                0.3715  0.1416  0.5706
  NanoFiQA2018Retrieval   0.4531  0.3742  0.5054
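The quality metrics above summarize how well relevant documents are ranked. A simplified sketch of how ndcg@10 and mrr@10 are computed for a single query with binary relevance labels (real evaluations average over all queries and may use graded relevance):

```python
import math

def ndcg_at_k(ranked_rels: list[int], k: int = 10) -> float:
    """nDCG@k for one query: discounted gain over the ideal ranking's gain."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ranked_rels[:k]))
    ideal = sorted(ranked_rels, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def mrr_at_k(ranked_rels: list[int], k: int = 10) -> float:
    """Reciprocal rank of the first relevant result within the top k."""
    for i, rel in enumerate(ranked_rels[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0

rels = [0, 1, 0, 1]    # relevance of the top-ranked documents, best rank first
print(mrr_at_k(rels))  # first relevant result at rank 2 -> 0.5
```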

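The performance figures pair a throughput number (TPS) with a median latency (p50). A hypothetical helper showing how such a pair is derived from raw measurements; the timing values here are invented for illustration, not taken from the benchmarks above:

```python
import statistics

def summarize(latencies_ms: list[float], items_processed: int,
              wall_time_s: float) -> tuple[float, float]:
    """Throughput (items per second) and median latency from raw timings."""
    p50 = statistics.median(latencies_ms)
    tps = items_processed / wall_time_s
    return tps, p50

# Invented sample: 270,000 items over 100 s of wall time.
latencies = [48.0, 55.0, 53.9, 60.2, 49.5]
tps, p50 = summarize(latencies, items_processed=270_000, wall_time_s=100.0)
print(tps, p50)  # 2700.0 53.9
```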
