
google/embeddinggemma-300m

Model Page: EmbeddingGemma

Architecture: Gemma 3
Parameters: 303M
Tasks: Encode
Outputs: Dense
Dimensions (dense): 768
Max sequence length: 2,048 tokens
License: Gemma
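EmbeddingGemma encodes each input into a single dense 768-dimensional vector, and retrieval then ranks documents by cosine similarity against the query vector. A minimal sketch of that ranking step, using stand-in random vectors rather than real model outputs (the helper names are illustrative, not any library's API):

```python
import math
import random

DIM = 768  # EmbeddingGemma's dense output dimension

def normalize(v):
    # Scale a vector to unit length so cosine similarity is a plain dot product.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def cosine(a, b):
    # Dot product; equals cosine similarity because both inputs are unit vectors.
    return sum(x * y for x, y in zip(a, b))

random.seed(0)
# Stand-in embeddings; a real system would call the model to embed text.
corpus = [normalize([random.gauss(0, 1) for _ in range(DIM)]) for _ in range(5)]
query = normalize([random.gauss(0, 1) for _ in range(DIM)])

scores = [cosine(query, doc) for doc in corpus]
ranking = sorted(range(len(corpus)), key=lambda i: -scores[i])  # best match first
```

Because the vectors are normalized once at indexing time, query-time scoring is a single matrix-vector product over the corpus.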

Benchmarks

NFCorpus

Medical retrieval, English

Biomedical literature search from NutritionFacts.org

Corpus: 3,593 documents; Queries: 323

Quality
nDCG@10: 0.3876
MAP@10: 0.1471
MRR@10: 0.5895

Performance (RTX 4090, b1, c16)
Corpus TPS: 79.6K
Corpus p50: 55.7 ms
Query TPS: 1.9K
Query p50: 27.8 ms
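The quality numbers above are standard ranking metrics cut off at rank 10. A minimal sketch of how they are computed, assuming binary relevance labels (the function names are illustrative, not part of any benchmark tooling):

```python
import math

def mrr_at_k(rels, k=10):
    # Reciprocal rank of the first relevant result within the top k.
    for i, r in enumerate(rels[:k], start=1):
        if r:
            return 1.0 / i
    return 0.0

def ap_at_k(rels, total_relevant, k=10):
    # Average precision: mean of precision@i taken at each relevant position.
    hits, score = 0, 0.0
    for i, r in enumerate(rels[:k], start=1):
        if r:
            hits += 1
            score += hits / i
    return score / total_relevant if total_relevant else 0.0

def ndcg_at_k(rels, total_relevant, k=10):
    # Binary-relevance DCG with a log2 position discount, normalized by the
    # DCG of an ideal ranking that front-loads all relevant documents.
    dcg = sum(r / math.log2(i + 1) for i, r in enumerate(rels[:k], start=1))
    ideal_len = min(total_relevant, k)
    idcg = sum(1 / math.log2(i + 1) for i in range(1, ideal_len + 1))
    return dcg / idcg if idcg else 0.0

# Toy ranking: relevant docs at positions 1, 3, 4; 4 relevant docs overall.
rels = [1, 0, 1, 1, 0, 0, 0, 0, 0, 0]
mrr, ap, ndcg = mrr_at_k(rels), ap_at_k(rels, 4), ndcg_at_k(rels, 4)
```

Averaging each metric over all 323 queries yields the scores reported in the Quality block.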
