opensearch-project/opensearch-neural-sparse-encoding-doc-v2-distill

Architecture

Parameters: 110M
Tasks: Encode
Outputs: Sparse
Dimensions: 30,522 (sparse)
Max sequence length: 512 tokens
License:

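The Outputs and Dimensions rows mean the model emits one non-negative weight per vocabulary entry (30,522 is the BERT WordPiece vocabulary size), with most weights exactly zero. As a minimal sketch of how neural sparse encoders typically collapse per-token vocabulary logits into a single sparse document vector (the common SPLADE-style log(1 + ReLU) plus max-pooling recipe; this model's exact head may differ):

```python
import math

def sparsify(token_logits, attention_mask):
    """Collapse per-token vocabulary logits into one sparse document vector.

    Applies log(1 + ReLU(x)) to each logit, then max-pools over the token
    sequence. Most vocabulary entries stay at 0, yielding a sparse vector
    (30,522-dimensional for a BERT-vocabulary model like this one).
    """
    vocab_size = len(token_logits[0])
    doc_vec = [0.0] * vocab_size
    for logits, mask in zip(token_logits, attention_mask):
        if not mask:  # skip padding positions
            continue
        for i, x in enumerate(logits):
            w = math.log1p(max(x, 0.0))  # log(1 + relu(x))
            if w > doc_vec[i]:
                doc_vec[i] = w           # max-pool across tokens
    return doc_vec

# Toy example: 2 tokens over a 4-entry vocabulary
logits = [[1.0, -2.0, 0.5, 0.0],
          [3.0, -1.0, 0.0, 0.2]]
vec = sparsify(logits, attention_mask=[1, 1])
nonzero = {i: w for i, w in enumerate(vec) if w > 0}
```

Relevance scoring between a query and a document is then just a dot product over the few overlapping non-zero entries, which is what makes inverted-index retrieval over these vectors efficient.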
Benchmarks

All benchmarks are English retrieval tasks; performance was measured in the L4 b1 c16 configuration.

Benchmark                       Domain        Corpus TPS   Corpus p50   Query TPS   Query p50
CQADupstackPhysicsRetrieval     scientific    31.0K        60.2 ms      4.1K        41.5 ms
CosQA                           technology    17.8K        46.4 ms      2.6K        37.3 ms
FiQA2018                        finance       43.5K        56.2 ms      4.2K        42.8 ms
LegalBenchConsumerContractsQA   legal         104.9K       72.6 ms      6.9K        37.3 ms
NFCorpus                        medical       56.8K        70.8 ms      1.8K        41.7 ms
SCIDOCS                         scientific    48.7K        58.5 ms      4.7K        37.7 ms
SciFact                         scientific    60.2K        66.4 ms      5.8K        44.1 ms
StackOverflowQA                 technology    49.6K        70.8 ms      77.6K       52.3 ms

Quality (NFCorpus): NDCG@10 0.3398, MAP@10 0.1303, MRR@10 0.5489
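The quality figures reported for NFCorpus (NDCG@10, MAP@10, MRR@10) are standard ranking metrics. As a minimal, self-contained sketch (not this leaderboard's actual evaluation code), NDCG@10 rewards relevant documents more the higher they rank, discounting each by the log of its position:

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k results.

    `relevances` holds the graded relevance of each retrieved document,
    in rank order; rank r is discounted by log2(r + 1).
    """
    return sum(rel / math.log2(rank + 2)
               for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """Normalize DCG by the DCG of the ideal (best possible) ordering."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Relevant docs at ranks 1 and 3, an irrelevant doc at rank 2
score = ndcg_at_k([1, 0, 1, 0], k=10)
```

A perfect ranking scores 1.0; misplacing relevant documents lower in the list pulls the score toward 0.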
