
laion/CLIP-ViT-B-32-laion2B-s34B-b79K


Architecture: CLIP
Parameters: 151M
Tasks: Encode
Outputs: Dense
Dimensions: 512 (dense)
Max Sequence Length: 77 tokens
License: MIT
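A minimal getting-started sketch using the Hugging Face transformers CLIP API (the image path and candidate captions below are placeholders). The projected embeddings are 512-dimensional dense vectors, matching the table above.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model_id = "laion/CLIP-ViT-B-32-laion2B-s34B-b79K"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

image = Image.open("example.jpg")                      # placeholder image path
texts = ["a photo of a cat", "a photo of a dog"]       # placeholder captions

# Text is tokenized up to the model's 77-token max sequence length.
inputs = processor(text=texts, images=image,
                   return_tensors="pt", padding=True, truncation=True)

with torch.no_grad():
    out = model(**inputs)

print(out.image_embeds.shape)  # torch.Size([1, 512]) dense image embedding
print(out.text_embeds.shape)   # torch.Size([2, 512]) dense text embeddings
probs = out.logits_per_image.softmax(dim=-1)  # caption probabilities per image
```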

Benchmarks

Flickr30kI2TRetrieval

Tags: general, retrieval, en

Image-to-text retrieval: given an image query, retrieve matching captions.

Corpus size: 31,783 · Queries: 1,000

Quality
nDCG@10: 0.7744
MAP@10: 0.6783
MRR@10: 0.8925

Performance (L4, b1, c16)
Corpus TPS: 1.2K
Corpus p50: 178.6 ms
Query TPS: 40
Query p50: 231.1 ms
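For intuition on where a score like MRR@10 comes from, here is a minimal sketch of image-to-text retrieval scoring, assuming precomputed L2-normalized 512-dim embeddings and a hypothetical ground-truth mapping; it is illustrative, not the benchmark's exact evaluation harness.

```python
import torch

def mrr_at_10(query_embs: torch.Tensor,
              corpus_embs: torch.Tensor,
              relevant: list[set[int]]) -> float:
    """Mean Reciprocal Rank over the top-10 retrieved captions.

    query_embs:  (Q, 512) image embeddings, L2-normalized
    corpus_embs: (C, 512) caption embeddings, L2-normalized
    relevant:    relevant[i] holds the corpus indices that match query i
    """
    # For unit vectors, cosine similarity is just a dot product.
    scores = query_embs @ corpus_embs.T      # (Q, C) similarity matrix
    top10 = scores.topk(10, dim=1).indices   # (Q, 10) ranked caption indices
    rr_sum = 0.0
    for i, row in enumerate(top10):
        for rank, idx in enumerate(row.tolist(), start=1):
            if idx in relevant[i]:
                rr_sum += 1.0 / rank         # credit the first relevant hit
                break
    return rr_sum / len(relevant)
```

nDCG@10 and MAP@10 are computed over the same ranked lists, but weight hits by graded relevance and by precision at each relevant rank, respectively.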
