
intfloat/multilingual-e5-large-instruct

Architecture
Parameters: 560M
Tasks: Encode
Outputs: Dense
Dimensions: 1,024 (dense)
Max sequence length: 512 tokens
License
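The card lists Encode as the task and 1,024-dimensional dense outputs. As a minimal sketch of the usual E5-instruct usage pattern (query prefixed with a task instruction, mean pooling over token embeddings, L2 normalization so dot products are cosine similarities), using NumPy and dummy token vectors in place of the real model:

```python
import numpy as np

def build_query(task: str, query: str) -> str:
    # E5-instruct formats queries as "Instruct: {task}\nQuery: {query}";
    # documents are encoded as-is, without an instruction.
    return f"Instruct: {task}\nQuery: {query}"

def mean_pool(token_embeddings: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    # Average token vectors, ignoring padding positions.
    mask = attention_mask[..., None].astype(token_embeddings.dtype)
    summed = (token_embeddings * mask).sum(axis=1)
    counts = mask.sum(axis=1).clip(min=1e-9)
    return summed / counts

def normalize(x: np.ndarray) -> np.ndarray:
    # L2-normalize so dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Dummy "token embeddings" stand in for the encoder's output:
# a batch of 2 sequences, 4 tokens each, 1,024 dims (the card's dense size).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(2, 4, 1024))
mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]])  # trailing positions are padding

emb = normalize(mean_pool(tokens, mask))
print(emb.shape)   # (2, 1024)
print(emb @ emb.T) # cosine-similarity matrix; diagonal entries are 1.0
```

Only the query-formatting and pooling conventions above come from the E5-instruct family's published usage; the dummy tensors are stand-ins for the model's actual token embeddings.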

Benchmarks

All benchmarks below are English retrieval tasks. Performance was measured at L4, b1, c16; TPS is throughput and p50 is median latency.

Benchmark                       Domain       Corpus TPS  Corpus p50  Query TPS  Query p50
CQADupstackPhysicsRetrieval     scientific   25.8K       82.4 ms     2.8K       57.0 ms
CosQA                           technology   12.5K       66.4 ms     1.5K       58.6 ms
FiQA2018                        finance      29.4K       89.2 ms     3.0K       60.0 ms
LegalBenchConsumerContractsQA   legal        45.3K       178.3 ms    4.5K       58.0 ms
NFCorpus                        medical      35.7K       137.0 ms    1.3K       61.6 ms
NanoFiQA2018Retrieval           finance      30.6K       87.2 ms    3.3K       51.9 ms
SCIDOCS                         scientific   28.8K       106.9 ms    2.6K       61.6 ms
SciFact                         scientific   31.8K       137.5 ms    4.3K       60.0 ms
StackOverflowQA                 technology   29.3K       112.5 ms    39.4K      124.1 ms

Quality (where reported):

Benchmark               nDCG@10  MAP@10  MRR@10
NFCorpus                0.3521   0.1313  0.5378
NanoFiQA2018Retrieval   0.5539   0.4744  0.6084
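The quality numbers above are standard ranking metrics. A minimal sketch of how nDCG@10, MAP@10, and MRR@10 are computed for a single query, assuming binary relevance and a hypothetical ranked result list:

```python
import math

def ndcg_at_k(relevances, k=10):
    # DCG discounts each hit by log2(rank + 1); nDCG divides by the
    # ideal DCG (all relevant docs ranked first).
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def average_precision_at_k(relevances, k=10):
    # Mean of precision@i taken at each relevant position, divided by the
    # number of relevant documents (binary relevance assumed).
    hits, score = 0, 0.0
    for i, rel in enumerate(relevances[:k]):
        if rel:
            hits += 1
            score += hits / (i + 1)
    total_relevant = sum(relevances)
    return score / total_relevant if total_relevant else 0.0

def mrr_at_k(relevances, k=10):
    # Reciprocal rank of the first relevant document.
    for i, rel in enumerate(relevances[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0

# Hypothetical ranked list for one query: 1 = relevant, 0 = not relevant.
ranked = [0, 1, 0, 1, 0, 0, 0, 0, 0, 0]
print(round(ndcg_at_k(ranked), 4))              # 0.6509
print(round(average_precision_at_k(ranked), 4)) # 0.5
print(round(mrr_at_k(ranked), 4))               # 0.5
```

The benchmark scores are these per-query values averaged over the full query set; the ranked list here is illustrative only.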
