IDEA-Research/grounding-dino-base

Architecture: Swin
Parameters: 250M
Tasks: Extract (text-prompted, zero-shot object detection)
Outputs: Bounding Boxes
License:
Benchmarks

COCO (general detection, en)

default_limit-1000
  Performance: A10G (b1 c4)
  Performance: L4-SPOT (b1 c4)

default_limit-100
  Quality:
    AP: 0.5809
    AP50: 0.7349
    AP75: 0.6241
    AR@100: 0.6503
  Performance: RTX-4090 (b1 c16)
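The quality numbers are standard COCO-style metrics: AP averages precision over IoU thresholds from 0.50 to 0.95, AP50 and AP75 fix the IoU threshold at 0.50 and 0.75, and AR@100 is average recall with up to 100 detections per image. All of them rest on intersection-over-union between predicted and ground-truth boxes; a minimal sketch of that core computation (box format and function name are illustrative, not from this card):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes in (x1, y1, x2, y2) format."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Intersection rectangle; width/height clamp to 0 when boxes are disjoint.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# At AP50 a prediction counts as a true positive when IoU >= 0.5.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # half-overlapping pair -> 1/3
```

AP then sweeps the detector's score threshold and averages precision over recall levels at the chosen IoU cutoff, which is why the AP50 figure (0.7349) is necessarily at least as high as the stricter all-threshold AP (0.5809).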

Self-hosted inference for search & document processing

Cut API costs by 50x, boost quality with 85+ SOTA models, and keep your data in your own cloud.


Contact us

Tell us about your use case and we'll get back to you shortly.