
opensearch-project/opensearch-neural-sparse-encoding-doc-v3-gte

Architecture         ModernBERT
Parameters           305M
Tasks                Encode
Outputs              Sparse
Dimensions           Sparse: 30,522
Max Sequence Length  512 tokens
License
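The model emits a 30,522-dimensional sparse vector: one weight per vocabulary token, with most entries zero. As a hedged illustration of how learned-sparse encoders of this family typically produce that vector (SPLADE-style log-saturated ReLU over masked-LM logits, max-pooled across the sequence; the function name, dummy shapes, and logit distribution below are assumptions for the sketch, not this model's actual API):

```python
import numpy as np

VOCAB_SIZE = 30_522  # matches the model's sparse output dimension


def sparse_encode(logits: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """SPLADE-style pooling: log(1 + ReLU(logit)) per token and vocab entry,
    masked to real tokens, then max-pooled over the sequence."""
    # logits: (seq_len, vocab_size); attention_mask: (seq_len,) of 0/1
    activated = np.log1p(np.maximum(logits, 0.0))   # log(1 + ReLU(x)), non-negative
    activated *= attention_mask[:, None]            # zero out padding positions
    return activated.max(axis=0)                    # (vocab_size,) sparse vector

# Dummy input: 4 real tokens + 2 padding positions. Trained encoders keep most
# logits negative, so we shift the random logits negative to mimic sparsity.
rng = np.random.default_rng(0)
logits = rng.normal(loc=-3.0, scale=1.0, size=(6, VOCAB_SIZE))
mask = np.array([1, 1, 1, 1, 0, 0], dtype=np.float64)

vec = sparse_encode(logits, mask)
print(vec.shape, f"{(vec > 0).mean():.4f} of entries nonzero")
```

At query time the same vocabulary-aligned representation allows scoring as a simple dot product against document vectors stored in an inverted index.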

Benchmarks

CQADupstackPhysicsRetrieval

Scientific retrieval, English

Quality
  NDCG@10  0.4057
  MAP@10   0.3518
  MRR@10   0.4049

Performance (b1, c16)  A10G    L4
  Corpus TPS           1       24.3K
  Corpus p50           4.0s    75.0ms
  Query TPS            0       4.2K
  Query p50            32.5s   40.6ms
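NDCG@10, the headline quality metric in these tables, rewards relevant results near the top of the ranking: each result's relevance is discounted by the log of its rank, and the sum over the top 10 is normalized against the ideal ordering. A minimal sketch with binary relevance (the relevance lists are invented for illustration, not taken from any benchmark above):

```python
import math


def dcg_at_k(rels, k=10):
    # Relevance discounted by log2(rank + 1), with rank starting at 1.
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(rels[:k]))


def ndcg_at_k(rels, k=10):
    # Normalize against the best possible ordering of the same relevance labels.
    ideal = dcg_at_k(sorted(rels, reverse=True), k)
    return dcg_at_k(rels, k) / ideal if ideal > 0 else 0.0


# Two relevant documents, retrieved at ranks 3 and 5 out of 10.
print(round(ndcg_at_k([0, 0, 1, 0, 1, 0, 0, 0, 0, 0]), 4))  # → 0.5438
```

MAP@10 and MRR@10 are computed over the same top-10 rankings but average precision and reciprocal rank, respectively, instead of discounted gain.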

CosQA

Technology retrieval, English

Quality
  NDCG@10  0.2244
  MAP@10   0.1739
  MRR@10   0.1860

Performance (b1, c16)  A10G    L4
  Corpus TPS           24      12.4K
  Corpus p50           42.5s   61.4ms
  Query TPS            24      2.1K
  Query p50            3.6s    43.2ms

FiQA2018

Finance retrieval, English

Quality
  NDCG@10  0.4062
  MAP@10   0.3301
  MRR@10   0.4849

Performance (b1, c16)  A10G    L4
  Corpus TPS           0       29.2K
  Corpus p50           2.0s    78.9ms
  Query TPS            0       4.4K
  Query p50            0.0ms   40.6ms

LegalBenchConsumerContractsQA

Legal retrieval, English

Quality
  NDCG@10  0.7290
  MAP@10   0.6704
  MRR@10   0.6712

Performance (b1, c16)  L4
  Corpus TPS           59.1K
  Corpus p50           127.0ms
  Query TPS            6.2K
  Query p50            41.7ms

NFCorpus

Medical retrieval, English

Quality
  NDCG@10  0.3606
  MAP@10   0.1391
  MRR@10   0.5725

Performance (b1, c16)  L4
  Corpus TPS           37.7K
  Corpus p50           114.2ms
  Query TPS            1.7K
  Query p50            43.9ms

SCIDOCS

Scientific retrieval, English

Quality
  NDCG@10  0.1586
  MAP@10   0.0918
  MRR@10   0.2747

Performance (b1, c16)  L4
  Corpus TPS           34.2K
  Corpus p50           86.0ms
  Query TPS            4.2K
  Query p50            41.2ms

SciFact

Scientific retrieval, English

Quality
  NDCG@10  0.6262
  MAP@10   0.5830
  MRR@10   0.5966

Performance (b1, c16)  L4
  Corpus TPS           40.0K
  Corpus p50           103.7ms
  Query TPS            5.9K
  Query p50            43.4ms

StackOverflowQA

Technology retrieval, English

Quality
  NDCG@10  0.7470
  MAP@10   0.7160
  MRR@10   0.7160

Performance (b1, c16)  L4
  Corpus TPS           34.1K
  Corpus p50           101.3ms
  Query TPS            78.1K
  Query p50            57.0ms
