---
license: cc-by-4.0
library_name: saelens
---
# Gemma Scope 2
This is a landing page for Gemma Scope 2, a comprehensive, open suite of sparse autoencoders (SAEs) for the Gemma 3 model family. Sparse autoencoders act as a kind of "microscope" that helps us break a model's internal activations down into the underlying concepts, much as biologists use microscopes to study the individual cells of plants and animals.
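As a rough illustration of the idea, here is a minimal JumpReLU SAE sketch in PyTorch. It is not the official implementation; the parameter names (`W_enc`, `W_dec`, `b_enc`, `b_dec`, `threshold`) follow the convention of the original Gemma Scope release and are an assumption for Gemma Scope 2.

```python
import torch
import torch.nn as nn


class JumpReLUSAE(nn.Module):
    """Minimal JumpReLU sparse autoencoder sketch.

    encode() maps a model activation to a sparse vector of feature
    activations; decode() reconstructs the activation from them.
    """

    def __init__(self, d_model: int, d_sae: int):
        super().__init__()
        self.W_enc = nn.Parameter(torch.zeros(d_model, d_sae))
        self.W_dec = nn.Parameter(torch.zeros(d_sae, d_model))
        self.threshold = nn.Parameter(torch.zeros(d_sae))
        self.b_enc = nn.Parameter(torch.zeros(d_sae))
        self.b_dec = nn.Parameter(torch.zeros(d_model))

    def encode(self, acts: torch.Tensor) -> torch.Tensor:
        pre_acts = acts @ self.W_enc + self.b_enc
        # JumpReLU: zero out any feature whose pre-activation falls below
        # its learned per-feature threshold; keep the rest as ReLU would.
        return torch.where(
            pre_acts > self.threshold,
            torch.relu(pre_acts),
            torch.zeros_like(pre_acts),
        )

    def decode(self, feature_acts: torch.Tensor) -> torch.Tensor:
        return feature_acts @ self.W_dec + self.b_dec

    def forward(self, acts: torch.Tensor) -> torch.Tensor:
        return self.decode(self.encode(acts))
```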
There are no model weights in this repo. If you are looking for them, please visit one of our repos:
- https://huggingface.co/google/gemma-scope-2-270m-pt
- https://huggingface.co/google/gemma-scope-2-270m-it
- https://huggingface.co/google/gemma-scope-2-1b-pt
- https://huggingface.co/google/gemma-scope-2-1b-it
- https://huggingface.co/google/gemma-scope-2-4b-pt
- https://huggingface.co/google/gemma-scope-2-4b-it
- https://huggingface.co/google/gemma-scope-2-12b-pt
- https://huggingface.co/google/gemma-scope-2-12b-it
- https://huggingface.co/google/gemma-scope-2-27b-pt
- https://huggingface.co/google/gemma-scope-2-27b-it
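Once you have picked a repo, the weights can be fetched with `huggingface_hub`. The sketch below assumes the same `params.npz` format as the original Gemma Scope; the `filename` path is a hypothetical example, since the actual site/layer/width layout is documented in each repo and in the technical report.

```python
from huggingface_hub import hf_hub_download
import numpy as np

# NOTE: the filename below is a hypothetical example; check the repo's
# file listing for the actual site/layer/width paths.
path_to_params = hf_hub_download(
    repo_id="google/gemma-scope-2-1b-pt",
    filename="layer_12/width_16k/params.npz",  # hypothetical path
)

params = np.load(path_to_params)
print({name: params[name].shape for name in params.files})
```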
Key links:
- Check out the interactive Gemma Scope demo made by Neuronpedia.
- (NEW!) We have a Colab notebook tutorial for JumpReLU SAE training in JAX and PyTorch here; the training objective is sketched just after this list.
- Learn more about Gemma Scope in our Google DeepMind blog post.
- Check out our Google Colab notebook tutorial on how to use Gemma Scope 2.
- Read the Gemma Scope technical report.
- Check out Mishax, a GDM internal tool that we used in this project to expose the internal activations inside Gemma 3 models.
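For readers heading to the JumpReLU training tutorial above: as described for the original Gemma Scope (we assume Gemma Scope 2 follows the same recipe), the training objective balances reconstruction error against an L0 sparsity penalty on the feature activations:

$$
\mathcal{L}(\mathbf{x}) = \underbrace{\lVert \mathbf{x} - \hat{\mathbf{x}}(\mathbf{f}(\mathbf{x})) \rVert_2^2}_{\text{reconstruction}} + \underbrace{\lambda \, \lVert \mathbf{f}(\mathbf{x}) \rVert_0}_{\text{sparsity}}
$$

Here $\mathbf{f}(\mathbf{x})$ is the JumpReLU encoder output, $\hat{\mathbf{x}}$ the decoder's reconstruction, and $\lambda$ the sparsity coefficient; because the threshold makes the encoder discontinuous, training uses straight-through estimators for its gradients.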
The full list of SAEs we trained, with their sites and layers, can be found in our technical report.