Transformers On Large-Scale Graphs with Bayan Bruss - #641
Analysis
This article summarizes a podcast episode featuring Bayan Bruss, VP of Applied ML Research at Capital One. The episode discusses two papers presented at the ICML conference. The first, "Interpretable Subspaces in Image Representations," explores interpretability frameworks, embedding dimensions, and contrastive approaches to interpretable image features. The second, "GOAT: A Global Transformer on Large-scale Graphs," addresses the challenges of scaling graph transformer models, including the computational cost of global attention, homophily and heterophily in graph structure, and model sparsity. The episode offers insight into the research methodologies used to overcome these challenges.
Key Takeaways
- The episode discusses research on interpretable image representations, including interpretability frameworks and contrastive approaches.
- The episode explores GOAT, a global transformer designed for large-scale graphs, addressing computational challenges.
- The research aims to overcome the computational barriers to scaling graph models while accounting for homophily/heterophily and model sparsity (the core scaling issue is sketched after the quote below).
“We begin with the paper Interpretable Subspaces in Image Representations... We also explore GOAT: A Global Transformer on Large-scale Graphs, a scalable global graph transformer.”
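To ground the "computational barriers" mentioned above: a global graph transformer lets every node attend to every other node, so the attention score matrix grows quadratically with the number of nodes. The Python sketch below is illustrative only, not the GOAT paper's implementation; the function names, the `k=64` codebook size, and the plain k-means step are assumptions for demonstration. It contrasts dense global attention with a codebook-style approximation that attends to a small set of centroids instead.

```python
# Illustrative sketch only -- NOT the GOAT authors' implementation. It contrasts
# naive dense global attention (quadratic in node count) with a hypothetical
# codebook-style approximation; the names, sizes, and the plain k-means step
# are assumptions for demonstration.
import numpy as np

def softmax_rows(S):
    """Numerically stable row-wise softmax."""
    W = np.exp(S - S.max(axis=1, keepdims=True))
    return W / W.sum(axis=1, keepdims=True)

def dense_global_attention(X):
    """Every node attends to every other node: O(N^2) time and memory."""
    scores = X @ X.T / np.sqrt(X.shape[1])  # (N, N) -- this matrix is the barrier
    return softmax_rows(scores) @ X

def codebook_attention(X, k=64, iters=10, seed=0):
    """Attend to k centroids instead of N nodes: cost drops to O(N * k)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].copy()  # init from nodes
    for _ in range(iters):  # a few plain Lloyd (k-means) iterations
        assign = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            members = X[assign == j]
            if len(members):
                C[j] = members.mean(axis=0)
    scores = X @ C.T / np.sqrt(X.shape[1])  # (N, k) instead of (N, N)
    return softmax_rows(scores) @ C

X = np.random.default_rng(1).standard_normal((2000, 32)).astype(np.float32)
_ = dense_global_attention(X[:256])  # fine at small N; infeasible at web scale
out = codebook_attention(X)          # tractable even for very large N
print(out.shape)                     # (2000, 32)
```

Shrinking the score matrix from (N, N) to (N, k) is what makes this style of global attention tractable on graphs with millions of nodes.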