
intfloat/multilingual-e5-large

Architecture
Parameters: 560M
Tasks: Encode
Outputs: Dense
Dimensions: Dense: 1,024
Max Sequence Length: 512 tokens
License

Benchmarks

CQADupstackPhysicsRetrieval

scientific retrieval (en)

Performance (L4 b1 c16)
Corpus TPS: 23.7K
Corpus p50: 85.2ms
Query TPS: 2.9K
Query p50: 58.0ms

CosQA

technology retrieval (en)

Performance (L4 b1 c16)
Corpus TPS: 14.2K
Corpus p50: 60.8ms
Query TPS: 1.8K
Query p50: 54.9ms

FiQA2018

finance retrieval (en)

Performance (L4 b1 c16)
Corpus TPS: 29.8K
Corpus p50: 89.9ms
Query TPS: 2.8K
Query p50: 63.0ms

LegalBenchConsumerContractsQA

legal retrieval (en)

Performance (L4 b1 c16)
Corpus TPS: 46.3K
Corpus p50: 173.9ms
Query TPS: 3.9K
Query p50: 62.7ms

NFCorpus

medical retrieval (en)

Performance (L4 b1 c16)
Corpus TPS: 33.8K
Corpus p50: 142.5ms
Query TPS: 1.4K
Query p50: 55.6ms

NanoFiQA2018Retrieval

finance retrieval (en)

Quality
NDCG@10: 0.5035
MAP@10: 0.4111
MRR@10: 0.5364
Performance (L4 b1 c16)
Corpus TPS: 27.7K
Corpus p50: 91.9ms
Query TPS: 3.1K
Query p50: 55.7ms
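
The quality figures above are standard ranking metrics at cutoff 10. For reference, a minimal NDCG@k implementation in plain Python (the function name is illustrative, not part of any library here):

```python
import math

def ndcg_at_k(relevances, k=10):
    """NDCG@k for a ranked list of graded relevance scores."""
    def dcg(rels):
        # Discounted cumulative gain over the top-k positions.
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

# Relevant documents at ranks 1 and 3 out of the top 4 results.
print(ndcg_at_k([1, 0, 1, 0]))  # ≈ 0.9197
```

MAP@10 and MRR@10 are computed analogously over the same top-10 rankings, averaging precision at relevant hits and the reciprocal rank of the first hit, respectively.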

SCIDOCS

scientific retrieval (en)

Performance (L4 b1 c16)
Corpus TPS: 27.5K
Corpus p50: 108.6ms
Query TPS: 3.1K
Query p50: 56.9ms

SciFact

scientific retrieval (en)

Performance (L4 b1 c16)
Corpus TPS: 33.6K
Corpus p50: 134.7ms
Query TPS: 4.3K
Query p50: 60.0ms

StackOverflowQA

technology retrieval (en)

Performance (L4 b1 c16)
Corpus TPS: 33.7K
Corpus p50: 112.0ms
Query TPS: 38.8K
Query p50: 121.7ms
