
opensearch-project/opensearch-neural-sparse-encoding-doc-v3-distill

Architecture

Parameters: 110M
Tasks: Encode
Outputs: Sparse
Dimensions: 30,522 (sparse)
Max Sequence Length: 512 tokens
License:
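The 30,522 sparse dimensions correspond to a WordPiece vocabulary, so each input is encoded as a vector of term weights over vocabulary entries. Below is a minimal NumPy sketch of the log-saturated max-pooling that SPLADE-style sparse encoders typically apply to masked-LM logits. The pooling recipe is an assumption about this family of models, not a documented implementation detail of doc-v3-distill, and the random logits are stand-ins for real model output, so only shapes and non-negativity are meaningful here.

```python
import numpy as np

VOCAB_SIZE = 30_522  # sparse output dimension from the model card

def sparse_pool(logits: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Collapse per-token vocab logits (batch, seq, vocab) into one
    non-negative sparse vector per input (batch, vocab)."""
    masked = logits * attention_mask[:, :, None]  # zero out padding positions
    pooled = masked.max(axis=1)                   # max over the sequence
    return np.log1p(np.maximum(pooled, 0.0))      # ReLU, then log-saturate

# Stand-in logits; in real use these would come from the model's MLM head.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 16, VOCAB_SIZE))
mask = np.ones((2, 16))
vecs = sparse_pool(logits, mask)
print(vecs.shape)  # (2, 30522)
```

Two such vectors are scored with a plain dot product, which only touches the terms both vectors weight above zero; that is what makes this representation servable from an inverted index.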

Benchmarks

CQADupstackPhysicsRetrieval

Tags: scientific, retrieval, en

Performance (L4, b1, c16):
Corpus TPS: 33.5K
Corpus p50: 58.4 ms
Query TPS: 4.1K
Query p50: 41.1 ms

CosQA

Tags: technology, retrieval, en

Performance (L4, b1, c16):
Corpus TPS: 15.6K
Corpus p50: 51.5 ms
Query TPS: 1.9K
Query p50: 47.9 ms

FiQA2018

Tags: finance, retrieval, en

Performance (L4, b1, c16):
Corpus TPS: 40.5K
Corpus p50: 59.2 ms
Query TPS: 4.6K
Query p50: 39.5 ms

LegalBenchConsumerContractsQA

Tags: legal, retrieval, en

Performance (L4, b1, c16):
Corpus TPS: 97.4K
Corpus p50: 76.3 ms
Query TPS: 6.3K
Query p50: 40.7 ms

NFCorpus

Tags: medical, retrieval, en

Quality:
NDCG@10: 0.3399
MAP@10: 0.1314
MRR@10: 0.5457
Performance (L4, b1, c16):
Corpus TPS: 71.1K
Corpus p50: 65.9 ms
Query TPS: 1.9K
Query p50: 41.1 ms
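NDCG@10, the headline quality metric above, discounts each result's graded relevance by the log of its rank position and normalizes by the score of the ideal ordering. A self-contained sketch of the linear-gain variant (some evaluators use 2^rel − 1 gains instead, so reported values can differ slightly by toolkit):

```python
import math

def ndcg_at_10(ranked_gains):
    """NDCG@10 for one query; ranked_gains are the graded relevance
    labels of the returned documents, in ranked order."""
    def dcg(gains):
        # discount position i (0-based) by log2(i + 2)
        return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:10]))
    ideal = dcg(sorted(ranked_gains, reverse=True))
    return dcg(ranked_gains) / ideal if ideal > 0 else 0.0

print(round(ndcg_at_10([3, 2, 0, 1]), 4))  # imperfect ordering -> 0.9854
print(ndcg_at_10([3, 2, 1, 0]))            # ideal ordering -> 1.0
```

The benchmark score is the mean of this per-query value over all queries in the dataset.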

SCIDOCS

Tags: scientific, retrieval, en

Performance (L4, b1, c16):
Corpus TPS: 47.1K
Corpus p50: 59.3 ms
Query TPS: 4.4K
Query p50: 39.9 ms

SciFact

Tags: scientific, retrieval, en

Performance (L4, b1, c16):
Corpus TPS: 59.3K
Corpus p50: 67.2 ms
Query TPS: 6.1K
Query p50: 42.4 ms

StackOverflowQA

Tags: technology, retrieval, en

Performance (L4, b1, c16):
Corpus TPS: 53.1K
Corpus p50: 62.1 ms
Query TPS: 97.0K
Query p50: 46.1 ms
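The p50 figures in the tables above are median request latencies. A small sketch of the nearest-rank percentile they are typically read from; the sample latencies below are made-up illustrative numbers, not measurements from these benchmarks:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: smallest sample value such that at
    least p percent of all samples are <= it."""
    s = sorted(samples)
    k = max(1, math.ceil(p / 100 * len(s)))  # 1-based rank
    return s[k - 1]

latencies_ms = [41.0, 39.5, 58.4, 47.9, 40.7, 42.4, 46.1, 39.9]
print(percentile(latencies_ms, 50))  # 41.0
```

Unlike a mean, the median is insensitive to a long tail of slow requests, which is why serving benchmarks report percentiles.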
