
Alibaba-NLP/gte-multilingual-base

Architecture: GTE (Transformer encoder)
Parameters: 305M
Tasks: Encode
Outputs: Dense
Dimensions: Dense: 768
Max Sequence Length: 8,192 tokens
License: Apache 2.0
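The card lists a single task, Encode, with dense 768-dimensional outputs; in a retrieval pipeline those vectors are compared by cosine similarity and the highest-scoring documents are returned. Below is a minimal pure-Python sketch of that comparison step, using tiny hand-made vectors in place of real model outputs (`cosine`, `top_k`, and `DIM` are illustrative names, not part of any API here):

```python
import math

DIM = 8  # gte-multilingual-base emits 768-dim vectors; 8 keeps the sketch readable

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, corpus, k=3):
    """Indices of the k corpus vectors most similar to the query."""
    ranked = sorted(range(len(corpus)),
                    key=lambda i: cosine(query, corpus[i]),
                    reverse=True)
    return ranked[:k]

# Stand-in embeddings; in practice these come from the model's Encode task.
corpus = [
    [1.0, 0.0] + [0.0] * (DIM - 2),
    [0.0, 1.0] + [0.0] * (DIM - 2),
    [0.7, 0.7] + [0.0] * (DIM - 2),
]
query = [0.9, 0.1] + [0.0] * (DIM - 2)
print(top_k(query, corpus, k=2))  # doc 0 ranks first, then doc 2
```

Because the vectors are dense and fixed-width, the same scoring works unchanged whether the corpus holds three documents or millions (typically behind an approximate nearest-neighbor index at scale).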

Benchmarks

All benchmarks are English retrieval tasks. Performance was measured under the L4 b1 c16 configuration; quality metrics are listed where reported ("-" = not reported).

Benchmark                      Domain      nDCG@10  MAP@10  MRR@10  Corpus TPS  Corpus p50  Query TPS  Query p50
CQADupstackPhysicsRetrieval    scientific  -        -       -       43.5K       48.0ms      4.3K       35.3ms
CosQA                          technology  -        -       -       20.4K       39.9ms      2.2K       37.2ms
FiQA2018                       finance     -        -       -       44.2K       54.2ms      4.7K       34.9ms
LegalBenchConsumerContractsQA  legal       -        -       -       72.0K       114.4ms     5.8K       39.0ms
NFCorpus                       medical     0.3667   0.1408  0.5658  60.0K       79.5ms      1.4K       49.0ms
NanoFiQA2018Retrieval          finance     0.5990   0.5229  0.6550  36.5K       63.1ms      3.4K       52.5ms
SCIDOCS                        scientific  -        -       -       55.1K       56.8ms      4.4K       35.3ms
SciFact                        scientific  -        -       -       63.8K       69.3ms      6.3K       36.8ms
StackOverflowQA                technology  -        -       -       55.5K       65.9ms      58.5K      75.0ms
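The quality figures above are standard ranking metrics over each benchmark's top-10 results. As a reference for how such numbers are computed, here is a minimal pure-Python sketch for binary relevance judgments (the function names are illustrative; this is not the evaluation harness used for these benchmarks):

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_rels, k=10):
    """nDCG@k: DCG of the actual ranking divided by DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(ranked_rels, reverse=True), k)
    return dcg_at_k(ranked_rels, k) / ideal_dcg if ideal_dcg else 0.0

def mrr_at_k(ranked_rels, k=10):
    """Reciprocal rank of the first relevant result within the top k."""
    for i, rel in enumerate(ranked_rels[:k]):
        if rel > 0:
            return 1.0 / (i + 1)
    return 0.0

# Binary relevance of the top-10 results returned for one query:
# relevant documents appear at ranks 2 and 5.
rels = [0, 1, 0, 0, 1, 0, 0, 0, 0, 0]
print(ndcg_at_k(rels), mrr_at_k(rels))
```

A perfect ranking scores nDCG@10 = 1.0; pushing relevant documents down the list discounts them logarithmically, which is why nDCG and MRR can diverge from MAP on the same run.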
