Fairness and Robustness in Federated Learning with Virginia Smith - #504
Published: Jul 26, 2021 18:14
1 min read
Practical AI
Analysis
This article summarizes a podcast episode of Practical AI featuring Virginia Smith, an assistant professor at Carnegie Mellon University. The discussion centers on Smith's research in federated learning (FL), with a focus on fairness and robustness. The episode covers her work on cross-device FL applications, the relationship between distributed learning and privacy techniques, and her paper "Ditto: Fair and Robust Federated Learning Through Personalization." The conversation also delves into how fairness is defined in AI ethics, failure modes, the relationships among client models, and optimization trade-offs. Finally, the episode touches on a second paper, "Heterogeneity for the Win: One-Shot Federated Clustering," which explores how data heterogeneity can be leveraged in unsupervised FL settings.
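The core idea behind Ditto, as described in the paper, is that each client trains a personalized model on its local loss while an L2 penalty pulls that model toward the shared global model, trading off personalization against conformity via a regularization weight λ. The sketch below illustrates that objective with a toy quadratic local loss; the function name, learning rate, and loss are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def ditto_personal_update(v, w_star, local_grad, lam, lr=0.1, steps=100):
    """Gradient descent on a Ditto-style personalization objective:
        min_v  F_k(v) + (lam / 2) * ||v - w_star||^2
    where F_k is the client's local loss and w_star is the global model.
    (Illustrative sketch; names and hyperparameters are assumptions.)"""
    for _ in range(steps):
        # Gradient of the local loss plus the proximal pull toward w_star.
        g = local_grad(v) + lam * (v - w_star)
        v = v - lr * g
    return v

# Toy example: quadratic local loss (1/2)||v - local_opt||^2.
w_star = np.array([0.0, 0.0])         # global model (held fixed here)
local_opt = np.array([1.0, -1.0])     # this client's local optimum
local_grad = lambda v: v - local_opt  # gradient of the toy local loss

# A larger lam keeps the personalized model closer to the global model.
v_small_lam = ditto_personal_update(local_opt.copy(), w_star, local_grad, lam=0.1)
v_large_lam = ditto_personal_update(local_opt.copy(), w_star, local_grad, lam=10.0)
```

With this quadratic loss the minimizer has the closed form `local_opt / (1 + lam)`, so sweeping λ interpolates between a fully local model (λ → 0) and the global model (λ → ∞), which is how Ditto navigates the fairness–robustness trade-off discussed in the episode.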
Key Takeaways
- The podcast episode discusses fairness and robustness in federated learning.
- Virginia Smith's research on cross-device FL and her paper "Ditto" are key topics.
- The episode explores the trade-offs between fairness and robustness, and the use of data heterogeneity.