
openai/clip-vit-large-patch14

Architecture:
Parameters: 428M
Tasks: Encode
Outputs: Dense
Dimensions (Dense): 768
Max Sequence Length: 77 tokens
License:
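The model's dense 768-dimensional embeddings are typically compared by cosine similarity to rank a corpus against a query. A minimal sketch of that retrieval step, using toy 4-dimensional vectors as stand-ins for real embeddings (the `top_k` helper and all vector values here are illustrative, not part of the model's API):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two dense vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, corpus, k=2):
    """Indices of the k corpus vectors most similar to the query."""
    scored = sorted(
        enumerate(corpus),
        key=lambda iv: cosine_similarity(query, iv[1]),
        reverse=True,
    )
    return [i for i, _ in scored[:k]]

# Toy stand-ins for the model's 768-dimensional outputs.
query = [0.1, 0.9, 0.2, 0.0]
corpus = [
    [0.1, 0.8, 0.3, 0.1],  # close to the query
    [0.9, 0.1, 0.0, 0.2],  # far from the query
]
print(top_k(query, corpus, k=2))
```

In practice the query embedding comes from the text encoder and the corpus embeddings from the image encoder (or vice versa for I2T retrieval), but the ranking step is the same.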
Benchmarks

Flickr30kI2TRetrieval (general retrieval, English)

Quality:
nDCG@10: 0.7824
MAP@10: 0.6816
MRR@10: 0.9111

Performance (L4, b1, c16):
Corpus TPS: 706
Corpus p50 latency: 298.1 ms
Query TPS: 25
Query p50 latency: 389.6 ms
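The quality figures above are standard ranking metrics. A minimal sketch of how nDCG@10 and MRR@10 are computed for a single query, assuming binary relevance labels (the function names and the toy relevance list are illustrative):

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked results."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """DCG normalized by the ideal (sorted-descending) DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

def mrr_at_k(relevances, k):
    """Reciprocal rank of the first relevant result in the top k."""
    for rank, rel in enumerate(relevances[:k]):
        if rel > 0:
            return 1.0 / (rank + 1)
    return 0.0

# Toy example: relevance of the results returned at ranks 1..5 for one query.
rels = [0, 1, 0, 1, 0]
print(round(ndcg_at_k(rels, 10), 4))
print(mrr_at_k(rels, 10))
```

The reported benchmark numbers are these per-query scores averaged over all queries in the Flickr30k image-to-text test set.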

