---
license: gemma
pipeline_tag: sentence-similarity
library_name: sentence-transformers
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
extra_gated_heading: Access EmbeddingGemma on Hugging Face
extra_gated_prompt: To access EmbeddingGemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

# EmbeddingGemma model card

**Model Page**: [EmbeddingGemma](https://ai.google.dev/gemma/docs/embeddinggemma)

**Resources and Technical Documentation**:

* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [EmbeddingGemma on Kaggle](https://www.kaggle.com/models/google/embeddinggemma/)
* [EmbeddingGemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/embeddinggemma)

**Terms of Use**: [Terms](https://ai.google.dev/gemma/terms)

**Authors**: Google DeepMind

## Model Information

### Description

EmbeddingGemma is a 300M parameter, state-of-the-art for its size, open embedding model from Google, built from Gemma 3 (with T5Gemma initialization) and the same research and technology used to create Gemini models. EmbeddingGemma produces vector representations of text, making it well suited for search and retrieval tasks, as well as classification, clustering, and semantic similarity search. This model was trained with data in 100+ spoken languages.

The small size and on-device focus make it possible to deploy the model in environments with limited resources such as mobile phones, laptops, or desktops, democratizing access to state-of-the-art AI models and helping foster innovation for everyone.

For more technical details, refer to our paper: [EmbeddingGemma: Powerful and Lightweight Text Representations](https://arxiv.org/abs/2509.20354).

### Inputs and outputs

- **Input:**
  - Text string, such as a question, a prompt, or a document to be embedded
  - Maximum input context length of 2048 tokens
- **Output:**
  - Numerical vector representations of input text data
  - Output embedding dimension size of 768, with smaller options available (512, 256, or 128) via Matryoshka Representation Learning (MRL). MRL allows users to truncate the output embedding of size 768 to their desired size and then re-normalize it for an efficient and accurate representation.
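As a minimal sketch of the MRL truncation described above (using the `encode_query` method shown in the Usage section below; the value of `k` is illustrative), an embedding can be cut to its first `k` dimensions and re-normalized:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative MRL truncation: keep the first k dimensions of the 768-d
# embedding and re-normalize to unit length.
model = SentenceTransformer("google/embeddinggemma-300m")
full = model.encode_query("Which planet is known as the Red Planet?")  # shape: (768,)

k = 256  # any of 512, 256, or 128
truncated = full[:k]
truncated = truncated / np.linalg.norm(truncated)  # re-normalize after truncation
print(truncated.shape)  # (256,)
```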
### Citation

```none
@article{embedding_gemma_2025,
    title={EmbeddingGemma: Powerful and Lightweight Text Representations},
    author={Schechter Vera, Henrique* and Dua, Sahil* and Zhang, Biao and Salz, Daniel and Mullins, Ryan and Raghuram Panyam, Sindhu and Smoot, Sara and Naim, Iftekhar and Zou, Joe and Chen, Feiyang and Cer, Daniel and Lisak, Alice and Choi, Min and Gonzalez, Lucas and Sanseviero, Omar and Cameron, Glenn and Ballantyne, Ian and Black, Kat and Chen, Kaifeng and Wang, Weiyi and Li, Zhe and Martins, Gus and Lee, Jinhyuk and Sherwood, Mark and Ji, Juyeong and Wu, Renjie and Zheng, Jingxiao and Singh, Jyotinder and Sharma, Abheesht and Sreepat, Divya and Jain, Aashi and Elarabawy, Adham and Co, AJ and Doumanoglou, Andreas and Samari, Babak and Hora, Ben and Potetz, Brian and Kim, Dahun and Alfonseca, Enrique and Moiseev, Fedor and Han, Feng and Palma Gomez, Frank and Hernández Ábrego, Gustavo and Zhang, Hesen and Hui, Hui and Han, Jay and Gill, Karan and Chen, Ke and Chen, Koert and Shanbhogue, Madhuri and Boratko, Michael and Suganthan, Paul and Duddu, Sai Meher Karthik and Mariserla, Sandeep and Ariafar, Setareh and Zhang, Shanfeng and Zhang, Shijie and Baumgartner, Simon and Goenka, Sonam and Qiu, Steve and Dabral, Tanmaya and Walker, Trevor and Rao, Vikram and Khawaja, Waleed and Zhou, Wenlei and Ren, Xiaoqi and Xia, Ye and Chen, Yichang and Chen, Yi-Ting and Dong, Zhe and Ding, Zhongli and Visin, Francesco and Liu, Gaël and Zhang, Jiageng and Kenealy, Kathleen and Casbon, Michelle and Kumar, Ravin and Mesnard, Thomas and Gleicher, Zach and Brick, Cormac and Lacombe, Olivier and Roberts, Adam and Sung, Yunhsuan and Hoffmann, Raphael and Warkentin, Tris and Joulin, Armand and Duerig, Tom and Seyedhosseini, Mojtaba},
    publisher={Google DeepMind},
    year={2025},
    url={https://arxiv.org/abs/2509.20354}
}
```

### Usage

These model weights are designed to be used with [Sentence Transformers](https://www.SBERT.net), using the [Gemma 3](https://huggingface.co/docs/transformers/main/en/model_doc/gemma3) implementation from [Hugging Face Transformers](https://huggingface.co/docs/transformers/en/index) as the backbone.

First, install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("google/embeddinggemma-300m")

# Run inference with queries and documents
query = "Which planet is known as the Red Planet?"
documents = [
    "Venus is often called Earth's twin because of its similar size and proximity.",
    "Mars, known for its reddish appearance, is often referred to as the Red Planet.",
    "Jupiter, the largest planet in our solar system, has a prominent red spot.",
    "Saturn, famous for its rings, is sometimes mistaken for the Red Planet."
]
query_embeddings = model.encode_query(query)
document_embeddings = model.encode_document(documents)
print(query_embeddings.shape, document_embeddings.shape)
# (768,) (4, 768)

# Compute similarities to determine a ranking
similarities = model.similarity(query_embeddings, document_embeddings)
print(similarities)
# tensor([[0.3011, 0.6359, 0.4930, 0.4889]])
```

**NOTE**: EmbeddingGemma activations do not support `float16`. Please use `float32` or `bfloat16` as appropriate for your hardware.
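For example, the model can be loaded in `bfloat16` as sketched below (assuming `model_kwargs` is forwarded to the underlying Transformers `from_pretrained` call, as in recent sentence-transformers releases):

```python
import torch
from sentence_transformers import SentenceTransformer

# Sketch only: model_kwargs is assumed to be forwarded to transformers'
# from_pretrained, so torch_dtype selects bfloat16 for the backbone.
# Fall back to float32 on hardware without bfloat16 support.
model = SentenceTransformer(
    "google/embeddinggemma-300m",
    model_kwargs={"torch_dtype": torch.bfloat16},
)

embeddings = model.encode_document(["Mars is often called the Red Planet."])
print(embeddings.shape)  # (1, 768)
```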
## Model Data

### Training Dataset

This model was trained on a dataset of text data that includes a wide variety of sources, totaling approximately 320 billion tokens. Here are the key components:

- **Web Documents**: A diverse collection of web text ensures the model is exposed to a broad range of linguistic styles, topics, and vocabulary. The training dataset includes content in over 100 languages.
- **Code and Technical Documents**: Exposing the model to code and technical documentation helps it learn the structure and patterns of programming languages and specialized scientific content, which improves its understanding of code and technical questions.
- **Synthetic and Task-Specific Data**: Synthetic training data helps teach the model specific skills. This includes curated data for tasks like information retrieval, classification, and sentiment analysis, which helps to fine-tune its performance for common embedding applications.

The combination of these diverse data sources is crucial for training a powerful multilingual embedding model that can handle a wide variety of tasks and data formats.

### Data Preprocessing

Here are the key data cleaning and filtering methods applied to the training data:

- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was applied at multiple stages in the data preparation process to ensure the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models safe and reliable, automated techniques were used to filter out certain personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in line with [our policies](https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf).

## Model Development

### Hardware

EmbeddingGemma was trained using the latest generation of [Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e). For more details, refer to the [Gemma 3 model card](https://ai.google.dev/gemma/docs/core/model_card_3).

### Software

Training was done using [JAX](https://github.com/jax-ml/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/). For more details, refer to the [Gemma 3 model card](https://ai.google.dev/gemma/docs/core/model_card_3).

## Evaluation

### Benchmark Results

The model was evaluated against a large collection of datasets and metrics covering different aspects of text understanding.

#### Full Precision Checkpoint
**MTEB (Multilingual, v2)**

| Dimensionality | Mean (Task) | Mean (TaskType) |
|---|---|---|
| 768d | 61.15 | 54.31 |
| 512d | 60.71 | 53.89 |
| 256d | 59.68 | 53.01 |
| 128d | 58.23 | 51.77 |

**MTEB (English, v2)**

| Dimensionality | Mean (Task) | Mean (TaskType) |
|---|---|---|
| 768d | 69.67 | 65.11 |
| 512d | 69.18 | 64.59 |
| 256d | 68.37 | 64.02 |
| 128d | 66.66 | 62.70 |

**MTEB (Code, v1)**

| Dimensionality | Mean (Task) | Mean (TaskType) |
|---|---|---|
| 768d | 68.76 | 68.76 |
| 512d | 68.48 | 68.48 |
| 256d | 66.74 | 66.74 |
| 128d | 62.96 | 62.96 |
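One way to obtain the lower-dimensional embeddings reported above is sketched below, assuming the `truncate_dim` argument available in recent sentence-transformers releases:

```python
from sentence_transformers import SentenceTransformer

# Assumption: truncate_dim keeps only the first N output dimensions,
# corresponding to the 512d / 256d / 128d MRL settings listed above.
model = SentenceTransformer("google/embeddinggemma-300m", truncate_dim=256)

embedding = model.encode_query("Which planet is known as the Red Planet?")
print(embedding.shape)  # (256,)
```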
#### Quantized Checkpoints

**MTEB (Multilingual, v2)**

| Quant config (dimensionality) | Mean (Task) | Mean (TaskType) |
|---|---|---|
| Q4_0 (768d) | 60.62 | 53.61 |
| Q8_0 (768d) | 60.93 | 53.95 |
| Mixed Precision* (768d) | 60.69 | 53.82 |

**MTEB (English, v2)**

| Quant config (dimensionality) | Mean (Task) | Mean (TaskType) |
|---|---|---|
| Q4_0 (768d) | 69.31 | 64.65 |
| Q8_0 (768d) | 69.49 | 64.84 |
| Mixed Precision* (768d) | 69.32 | 64.82 |

**MTEB (Code, v1)**

| Quant config (dimensionality) | Mean (Task) | Mean (TaskType) |
|---|---|---|
| Q4_0 (768d) | 67.99 | 67.99 |
| Q8_0 (768d) | 68.70 | 68.70 |
| Mixed Precision* (768d) | 68.03 | 68.03 |
### Prompt Instructions

EmbeddingGemma generates embeddings optimized for different use cases by prepending a task-specific prompt to the input text. The recommended prompt for each use case is listed below.

| Use Case (task type enum) | Descriptions | Recommended Prompt |
|---|---|---|
| Retrieval (Query) | Used to generate embeddings that are optimized for document search or information retrieval | `task: search result \| query: {content}` |
| Retrieval (Document) | | `title: {title \| "none"} \| text: {content}` |
| Question Answering | | `task: question answering \| query: {content}` |
| Fact Verification | | `task: fact checking \| query: {content}` |
| Classification | Used to generate embeddings that are optimized to classify texts according to preset labels | `task: classification \| query: {content}` |
| Clustering | Used to generate embeddings that are optimized to cluster texts based on their similarities | `task: clustering \| query: {content}` |
| Semantic Similarity | Used to generate embeddings that are optimized to assess text similarity. This is not intended for retrieval use cases. | `task: sentence similarity \| query: {content}` |
| Code Retrieval | Used to retrieve a code block based on a natural language query, such as sort an array or reverse a linked list. Embeddings of the code blocks are computed using retrieval_document. | `task: code retrieval \| query: {content}` |
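As a usage sketch, the prompts above can be passed explicitly through the `prompt` argument of `encode` (assuming the template is split into a fixed prefix plus the `{content}` passed as the input text):

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("google/embeddinggemma-300m")

# Assumption: encode() prepends the given prompt string to each input, so the
# "Recommended Prompt" templates above become a prefix ending at "query: " or
# "text: ", with {content} supplied as the text to embed.
query_embedding = model.encode(
    "Which planet is known as the Red Planet?",
    prompt="task: question answering | query: ",
)
document_embeddings = model.encode(
    ["Mars, known for its reddish appearance, is often referred to as the Red Planet."],
    prompt='title: none | text: ',
)

print(model.similarity(query_embedding, document_embeddings))
```

The `encode_query` and `encode_document` methods shown in the Usage section are expected to apply the corresponding retrieval prompts automatically.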