CorGi: Accelerating Diffusion Transformers with Caching

Published: Dec 30, 2025 12:55
1 min read
ArXiv

Analysis

This paper addresses the computational cost of Diffusion Transformers (DiTs) in visual generation, a significant inference bottleneck. CorGi is a training-free method that caches and reuses transformer block outputs across denoising steps, offering a practical way to speed up inference without sacrificing quality. Its key innovation is contribution-guided caching, which targets redundant computation by reusing the outputs of blocks that contribute little at a given step.
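A minimal sketch of the general idea, not the paper's method: a toy DiT-style block is wrapped so that its residual output can be reused on intermediate denoising steps, with a simple relative-residual-norm score standing in for the paper's contribution measure. All names and parameters here (`CachedBlock`, `threshold`, `refresh_every`) are illustrative assumptions, not the authors' API.

```python
import torch
import torch.nn as nn


class ToyDiTBlock(nn.Module):
    """Stand-in for a DiT transformer block (attention omitted for brevity)."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.norm(x))


class CachedBlock(nn.Module):
    """Wraps a block and reuses its residual between full recomputation steps.

    The contribution score (relative norm of the block's residual) is an
    illustrative proxy, not the paper's exact contribution-guided criterion.
    """

    def __init__(self, block: nn.Module, threshold: float = 0.05):
        super().__init__()
        self.block = block
        self.threshold = threshold
        self.cached_residual = None  # residual from the last full computation

    def forward(self, x: torch.Tensor, recompute: bool) -> torch.Tensor:
        if recompute or self.cached_residual is None:
            out = self.block(x)
            residual = out - x
            # Low-contribution blocks are cached; high-contribution blocks
            # keep recomputing on every step.
            contribution = residual.norm() / (x.norm() + 1e-8)
            self.cached_residual = residual if contribution < self.threshold else None
            return out
        # Reuse the cached residual instead of running the block again.
        return x + self.cached_residual


def denoise(blocks, x, num_steps: int = 8, refresh_every: int = 2):
    """Toy sampling loop: fully recompute every `refresh_every` steps, reuse otherwise."""
    for step in range(num_steps):
        recompute = step % refresh_every == 0
        for blk in blocks:
            x = blk(x, recompute=recompute)
    return x


if __name__ == "__main__":
    torch.manual_seed(0)
    dim = 64
    blocks = [CachedBlock(ToyDiTBlock(dim)) for _ in range(4)]
    x = torch.randn(1, 16, dim)  # (batch, tokens, dim)
    print(denoise(blocks, x).shape)
```

The sketch only illustrates the caching pattern; the actual method's contribution measure, cache schedule, and the CorGi+ variant differ and are described in the paper.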

Reference

CorGi and CorGi+ achieve up to a 2.0x speedup on average while preserving high generation quality.