naver/splade-cocondenser-selfdistil

Architecture
Parameters: 110M
Tasks: Encode
Outputs: Sparse
Dimensions: Sparse (30,522)
Max Sequence Length: 512 tokens
License: CC BY-NC-SA 4.0

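As a usage illustration (a minimal sketch, not this page's serving API), the model can be run as a standard masked-language model via Hugging Face transformers, applying the usual SPLADE pooling to the logits:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "naver/splade-cocondenser-selfdistil"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id).eval()

def encode(texts):
    # Tokenize, truncating at the model's 512-token limit.
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=512, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits              # (batch, seq_len, 30522)
    # Standard SPLADE pooling: log(1 + ReLU(logits)), max-pooled over the
    # sequence, with padding positions masked out.
    weights = torch.log1p(torch.relu(logits))
    mask = batch["attention_mask"].unsqueeze(-1)
    return (weights * mask).amax(dim=1)             # (batch, 30522)

query = encode(["what is a sparse retriever"])
docs = encode([
    "SPLADE expands text into weighted vocabulary terms for retrieval.",
    "Dense bi-encoders map text into low-dimensional embedding spaces.",
])
# Relevance is the dot product between query and document vectors.
print(query @ docs.T)
```

The dimensions above follow directly from this setup: 30,522 is the BERT vocabulary size, and each output coordinate is the weight of one vocabulary term.
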
Benchmarks

All performance figures below were measured on an NVIDIA L4 GPU; "b1 c16" likely denotes batch size 1 with 16 concurrent requests, and p50 is the median latency.

CQADupstackPhysicsRetrieval (scientific retrieval, English)

Performance (L4, b1, c16)
Corpus TPS: 32.8K
Corpus p50: 60.2ms
Query TPS: 3.0K
Query p50: 56.3ms

CosQA (technology retrieval, English)

Performance (L4, b1, c16)
Corpus TPS: 14.9K
Corpus p50: 53.6ms
Query TPS: 1.6K
Query p50: 55.0ms

FiQA2018

finance retrieval en

Performance L4 b1 c16
Corpus TPS 34.6K
Corpus p50 69.2ms
Query TPS 3.2K
Query p50 56.2ms

LegalBenchConsumerContractsQA (legal retrieval, English)

Performance (L4, b1, c16)
Corpus TPS: 84.4K
Corpus p50: 88.5ms
Query TPS: 5.2K
Query p50: 49.9ms

NFCorpus (medical retrieval, English)

Quality
nDCG@10: 0.3403
MAP@10: 0.1282
MRR@10: 0.5447
Performance (L4, b1, c16)
Corpus TPS: 55.9K
Corpus p50: 79.2ms
Query TPS: 1.5K
Query p50: 52.1ms

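For reference, nDCG@10 (the headline quality metric above) measures how close the returned top-10 ranking is to an ideal ordering. With graded relevance labels rel_i, a common formulation is:

```latex
\mathrm{DCG@10} = \sum_{i=1}^{10} \frac{2^{\mathrm{rel}_i} - 1}{\log_2(i+1)},
\qquad
\mathrm{nDCG@10} = \frac{\mathrm{DCG@10}}{\mathrm{IDCG@10}}
```

where IDCG@10 is the DCG@10 of the best possible ordering, so scores lie in [0, 1].
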
SCIDOCS (scientific retrieval, English)

Performance (L4, b1, c16)
Corpus TPS: 42.1K
Corpus p50: 68.2ms
Query TPS: 3.1K
Query p50: 55.1ms

SciFact (scientific retrieval, English)

Performance (L4, b1, c16)
Corpus TPS: 53.5K
Corpus p50: 75.5ms
Query TPS: 4.8K
Query p50: 54.3ms

StackOverflowQA (technology retrieval, English)

Performance (L4, b1, c16)
Corpus TPS: 37.9K
Corpus p50: 76.9ms
Query TPS: 60.7K
Query p50: 76.7ms

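Because every output dimension corresponds to a vocabulary token, the sparse vectors are directly inspectable. A short continuation of the sketch above (it assumes the `encode` helper and `tokenizer` from that snippet are in scope) prints a query's strongest expansion terms:

```python
# Continues the earlier sketch: `encode` and `tokenizer` are assumed in scope.
vec = encode(["treatment options for type 2 diabetes"])[0]

# Map each nonzero dimension back to its vocabulary token and weight.
ids = vec.nonzero().squeeze(-1).tolist()
terms = sorted(((tokenizer.decode([i]), vec[i].item()) for i in ids),
               key=lambda t: -t[1])
for token, weight in terms[:10]:
    print(f"{token:>12}  {weight:.2f}")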