
opensearch-project/opensearch-neural-sparse-encoding-v2-distill

Architecture

Parameters: 110M
Tasks: Encode
Outputs: Sparse
Dimensions (sparse): 30,522
Max Sequence Length: 512 tokens
License:

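The model emits a 30,522-dimensional sparse vector over the BERT vocabulary. A common recipe for this family of neural sparse encoders is SPLADE-style pooling: take the masked-language-model logits per token, apply a log-saturated ReLU, and max-pool over the sequence. A minimal sketch of just the pooling step, run on dummy logits standing in for real model output (loading the actual checkpoint, e.g. via Hugging Face transformers, is assumed but not shown):

```python
import numpy as np

VOCAB_SIZE = 30_522  # BERT vocabulary; matches the sparse dimension above

def sparse_pool(mlm_logits: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """SPLADE-style pooling: log(1 + relu(logits)), max-pooled over
    non-padding sequence positions.

    mlm_logits: (seq_len, vocab_size) token-level MLM logits.
    attention_mask: (seq_len,) 1 for real tokens, 0 for padding.
    Returns a (vocab_size,) non-negative, mostly-zero term-weight vector.
    """
    weights = np.log1p(np.maximum(mlm_logits, 0.0))   # saturated ReLU
    weights = weights * attention_mask[:, None]       # zero out padding rows
    return weights.max(axis=0)                        # pool over the sequence

# Demo on random logits; a real run would use the checkpoint's MLM head output.
rng = np.random.default_rng(0)
logits = rng.normal(loc=-2.0, scale=1.0, size=(16, VOCAB_SIZE))
vec = sparse_pool(logits, np.ones(16))
print(vec.shape, float((vec > 0).mean()))  # 30,522 dims, small nonzero fraction
```

Because most entries are exactly zero, the vector maps directly onto an inverted index: each nonzero dimension is a vocabulary term with a weight, which is what makes this encoder usable inside OpenSearch's neural sparse search.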
Benchmarks

CQADupstackPhysicsRetrieval (scientific retrieval, English)

Performance (L4, b1, c16): Corpus TPS 31.1K, Corpus p50 57.0 ms, Query TPS 3.2K, Query p50 52.8 ms

CosQA (technology retrieval, English)

Performance (L4, b1, c16): Corpus TPS 13.5K, Corpus p50 51.4 ms, Query TPS 2.1K, Query p50 45.3 ms

FiQA2018 (finance retrieval, English)

Performance (L4, b1, c16): Corpus TPS 37.0K, Corpus p50 59.3 ms, Query TPS 3.8K, Query p50 47.5 ms

LegalBenchConsumerContractsQA (legal retrieval, English)

Performance (L4, b1, c16): Corpus TPS 91.6K, Corpus p50 73.7 ms, Query TPS 5.2K, Query p50 49.9 ms

NFCorpus (medical retrieval, English)

Quality: NDCG@10 0.3373, MAP@10 0.1278, MRR@10 0.5548
Performance (L4, b1, c16): Corpus TPS 61.6K, Corpus p50 68.8 ms, Query TPS 1.6K, Query p50 49.3 ms
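The NDCG@10 quality figure above scores the top-10 ranking for each query by discounted cumulative gain, normalized against the ideal ordering, then averages across queries. A minimal single-query sketch (the relevance judgments here are hypothetical, purely for illustration):

```python
import math

def dcg_at_k(relevances, k=10):
    # Gain at rank i is discounted by log2(rank + 1); ranks are 1-based.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_rels, k=10):
    # Normalize against the ideal (descending-relevance) ordering.
    idcg = dcg_at_k(sorted(ranked_rels, reverse=True), k)
    return dcg_at_k(ranked_rels, k) / idcg if idcg > 0 else 0.0

# Hypothetical top-10 ranking for one query: relevant docs at ranks 1 and 3.
score = ndcg_at_k([1, 0, 1, 0, 0, 0, 0, 0, 0, 0])
print(round(score, 4))
```

MAP@10 and MRR@10 are computed analogously from the same per-query rankings: average precision over relevant hits in the top 10, and the reciprocal rank of the first relevant hit, respectively.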

SCIDOCS (scientific retrieval, English)

Performance (L4, b1, c16): Corpus TPS 42.4K, Corpus p50 60.4 ms, Query TPS 3.5K, Query p50 49.6 ms

SciFact (scientific retrieval, English)

Performance (L4, b1, c16): Corpus TPS 54.4K, Corpus p50 68.2 ms, Query TPS 5.1K, Query p50 50.1 ms

StackOverflowQA (technology retrieval, English)

Performance (L4, b1, c16): Corpus TPS 46.0K, Corpus p50 66.2 ms, Query TPS 69.8K, Query p50 66.0 ms
