
rasyosef/splade-mini

Architecture
Parameters: 33M
Tasks: Encode
Outputs: Sparse
Dimensions: 30,522 (sparse)
Max Sequence Length: 512 tokens
License
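
To make the spec above concrete, here is a minimal sketch of encoding one text into a sparse, vocabulary-sized (30,522-dimensional) vector with this model. It assumes the checkpoint loads through Hugging Face transformers with a masked-LM head and uses the standard SPLADE activation (log-saturated ReLU, max-pooled over tokens); the model's exact recipe may differ.

```python
# Minimal sketch: encode one text into a sparse, vocabulary-sized vector.
# Assumes the checkpoint exposes a masked-LM head; the log(1 + ReLU) activation
# with max pooling over tokens is the standard SPLADE formulation and may not
# match this model's exact recipe.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "rasyosef/splade-mini"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)
model.eval()

text = "treatment options for seasonal allergies"
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

with torch.no_grad():
    logits = model(**inputs).logits                  # (1, seq_len, 30522)

# Log-saturated ReLU, masked so padding does not contribute, max-pooled over tokens.
weights = torch.log1p(torch.relu(logits))
mask = inputs["attention_mask"].unsqueeze(-1)        # (1, seq_len, 1)
sparse_vec = torch.max(weights * mask, dim=1).values.squeeze(0)  # (30522,)

# Show the highest-weighted vocabulary terms in the expansion.
top = torch.topk(sparse_vec, k=10)
for idx, w in zip(top.indices.tolist(), top.values.tolist()):
    print(tokenizer.convert_ids_to_tokens(idx), round(w, 3))
```

Documents and queries are encoded the same way, and relevance is typically scored as the dot product of their sparse vectors, which is the setting the retrieval benchmarks below reflect.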

Benchmarks

CQADupstackPhysicsRetrieval (scientific retrieval, en)

Performance (L4 b1 c16)
Corpus TPS: 39.6K
Corpus p50: 49.1 ms
Query TPS: 3.8K
Query p50: 44.3 ms

CosQA (technology retrieval, en)

Performance (L4 b1 c16)
Corpus TPS: 17.3K
Corpus p50: 45.7 ms
Query TPS: 2.1K
Query p50: 46.1 ms

FiQA2018 (finance retrieval, en)

Performance (L4 b1 c16)
Corpus TPS: 40.1K
Corpus p50: 58.1 ms
Query TPS: 3.9K
Query p50: 47.0 ms

LegalBenchConsumerContractsQA (legal retrieval, en)

Performance (L4 b1 c16)
Corpus TPS: 116.3K
Corpus p50: 63.3 ms
Query TPS: 5.1K
Query p50: 50.5 ms

NFCorpus (medical retrieval, en)

Quality (metric definitions sketched after this list)
nDCG@10: 0.3090
MAP@10: 0.1164
MRR@10: 0.5183
Performance (L4 b1 c16)
Corpus TPS: 69.4K
Corpus p50: 59.1 ms
Query TPS: 1.6K
Query p50: 49.6 ms

SCIDOCS (scientific retrieval, en)

Performance (L4 b1 c16)
Corpus TPS: 53.3K
Corpus p50: 51.6 ms
Query TPS: 3.8K
Query p50: 45.5 ms

SciFact (scientific retrieval, en)

Performance (L4 b1 c16)
Corpus TPS: 70.9K
Corpus p50: 55.0 ms
Query TPS: 6.2K
Query p50: 42.5 ms

StackOverflowQA (technology retrieval, en)

Performance (L4 b1 c16)
Corpus TPS: 59.4K
Corpus p50: 57.0 ms
Query TPS: 80.8K
Query p50: 56.4 ms
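
For reference, the Quality figures reported above for NFCorpus are standard ranking metrics. The sketch below computes nDCG@10, MAP@10, and MRR@10 for a single query from a hypothetical ranked list with illustrative relevance labels; benchmark suites average these per-query values over all queries, and MAP normalization conventions vary slightly between implementations.

```python
# Sketch of the three quality metrics, computed for one query from a ranked
# list of graded relevance labels. The labels below are illustrative only,
# not taken from NFCorpus.
import math

def dcg_at_k(gains, k):
    # Discounted cumulative gain over the top-k ranked gains.
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(ranked_gains, k):
    # DCG normalized by the DCG of an ideally ordered ranking.
    ideal = dcg_at_k(sorted(ranked_gains, reverse=True), k)
    return dcg_at_k(ranked_gains, k) / ideal if ideal > 0 else 0.0

def average_precision_at_k(ranked_rel, k):
    # Mean of precision values at each relevant hit in the top k.
    # Normalization by min(total relevant, k) is one common convention.
    hits, score = 0, 0.0
    for i, rel in enumerate(ranked_rel[:k]):
        if rel:
            hits += 1
            score += hits / (i + 1)
    total_rel = sum(1 for r in ranked_rel if r)
    return score / min(total_rel, k) if total_rel else 0.0

def reciprocal_rank_at_k(ranked_rel, k):
    # 1 / rank of the first relevant document within the top k.
    for i, rel in enumerate(ranked_rel[:k]):
        if rel:
            return 1.0 / (i + 1)
    return 0.0

# Graded relevance of the top-ranked documents for one query (hypothetical).
gains = [2, 0, 1, 0, 0, 3, 0, 0, 0, 0]
binary = [g > 0 for g in gains]

print("nDCG@10:", round(ndcg_at_k(gains, 10), 4))
print("MAP@10:", round(average_precision_at_k(binary, 10), 4))
print("MRR@10:", round(reciprocal_rank_at_k(binary, 10), 4))
```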
