Gemma Scope 2 Release Announced
Published: Dec 22, 2025, 21:56 • Alignment Forum
Analysis
Google DeepMind's mechanistic interpretability team is releasing Gemma Scope 2, a suite of sparse autoencoders (SAEs) and transcoders trained on the Gemma 3 model family. The release improves on its predecessor in several ways: it supports more capable models, covers every layer across model sizes up to 27B, and focuses on chat models. It includes SAEs trained on three sites (residual stream, MLP output, and attention output) as well as MLP transcoders. The team hopes the suite will be a useful tool for the community, even though it has deprioritized fundamental research on SAEs.
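To make the summary concrete, here is a minimal sketch of what a sparse autoencoder does to a model activation: it encodes the activation into a wider, sparse feature vector, then reconstructs the original from those features. This is a generic toy illustration with made-up sizes and random weights, not the actual Gemma Scope 2 architecture or API.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_sae = 8, 32  # toy dimensions; real SAEs are far wider than the model

# Randomly initialised toy parameters (illustration only; real SAEs are trained).
W_enc = rng.normal(size=(d_model, d_sae)) * 0.1
b_enc = np.zeros(d_sae)
W_dec = rng.normal(size=(d_sae, d_model)) * 0.1
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode an activation into (hopefully sparse) features, then reconstruct it."""
    acts = np.maximum(x @ W_enc + b_enc, 0.0)  # ReLU zeroes out many features
    recon = acts @ W_dec + b_dec               # linear decode back to model space
    return acts, recon

x = rng.normal(size=d_model)  # stand-in for e.g. a residual-stream activation
acts, recon = sae_forward(x)
print(acts.shape, recon.shape)  # → (32,) (8,)
```

In a trained SAE the sparsity comes from the training objective (a reconstruction loss plus a sparsity penalty or activation function that enforces it), not from the ReLU alone; the sketch only shows the shape of the computation.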
Key Takeaways
- Gemma Scope 2 is a new release of SAEs and transcoders for the Gemma 3 model family.
- It offers improvements over the previous version, including support for larger models and a focus on chat models.
- The release includes SAEs and transcoders for every layer across the model sizes.
- The team hopes it will be a useful tool for the community.
Reference
“The release contains SAEs trained on 3 different sites (residual stream, MLP output and attention output) as well as MLP transcoders (both with and without affine skip connections), for every layer of each of the 10 models in the Gemma 3 family (i.e. sizes 270m, 1b, 4b, 12b and 27b, both the PT and IT versions of each).”
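The quoted passage distinguishes MLP transcoders with and without affine skip connections. A transcoder approximates a layer's input-to-output map through a sparse bottleneck; an affine skip connection adds a learned linear term directly from input to output alongside the sparse path. The following toy sketch (random weights, invented sizes, not the release's actual parameterization) shows the difference:

```python
import numpy as np

rng = np.random.default_rng(1)
d_model, d_hidden = 8, 32  # toy dimensions

# Toy parameters (illustration only).
W_enc = rng.normal(size=(d_model, d_hidden)) * 0.1
b_enc = np.zeros(d_hidden)
W_dec = rng.normal(size=(d_hidden, d_model)) * 0.1
b_dec = np.zeros(d_model)
W_skip = rng.normal(size=(d_model, d_model)) * 0.1  # the affine skip path

def transcoder(x, use_skip=True):
    """Approximate an MLP's input->output map via sparse features,
    optionally adding a learned affine skip term from input to output."""
    acts = np.maximum(x @ W_enc + b_enc, 0.0)  # sparse feature activations
    out = acts @ W_dec + b_dec                  # decode to the layer's output space
    if use_skip:
        out = out + x @ W_skip                  # skip path bypasses the bottleneck
    return out

x = rng.normal(size=d_model)  # stand-in for an MLP input activation
y = transcoder(x)
```

The skip variant lets the sparse features account only for the part of the layer's behavior that a single linear map cannot, which is one motivation for including both variants.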