Federated Learning with L0 Constraint for Sparsity
Published: Dec 28, 2025 • Source: ArXiv
Analysis
This paper addresses the problem of dense models and poor generalizability in Federated Learning (FL), which arise when training fails to exploit the sparsity inherent in data and models, especially under heterogeneous conditions. It proposes a novel approach that attaches probabilistic gates to model parameters and uses their continuous relaxation to enforce an L0 constraint on the number of non-zero parameters. The method aims to achieve a target parameter density (rho), improving both communication efficiency and statistical performance in FL.
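As background on the gating idea, here is a minimal PyTorch sketch using the hard-concrete relaxation of Louizos et al. (2018), a standard continuous relaxation of Bernoulli gates. The class name `L0GatedLinear`, the stretch/temperature constants, and the quadratic `density_penalty` steering the expected density toward rho are illustrative assumptions, not the paper's exact formulation.

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class L0GatedLinear(nn.Module):
    """Linear layer whose weights are masked by hard-concrete probabilistic
    gates, a continuous relaxation of Bernoulli on/off gates."""

    # Stretch/temperature constants from Louizos et al. (2018); assumed here.
    GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        # log_alpha parameterizes each gate's probability of being open.
        self.log_alpha = nn.Parameter(torch.zeros(out_features, in_features))

    def sample_gates(self) -> torch.Tensor:
        if self.training:
            # Reparameterized sample from the concrete distribution.
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            s = torch.sigmoid((u.log() - (1 - u).log() + self.log_alpha) / self.BETA)
        else:
            s = torch.sigmoid(self.log_alpha)
        # Stretch to (GAMMA, ZETA), then clamp to [0, 1]: exact zeros become possible.
        return (s * (self.ZETA - self.GAMMA) + self.GAMMA).clamp(0.0, 1.0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.linear(x, self.weight * self.sample_gates(), self.bias)

    def expected_density(self) -> torch.Tensor:
        # Mean probability that a gate is non-zero, i.e. E[L0] / num_params.
        p_open = torch.sigmoid(
            self.log_alpha - self.BETA * math.log(-self.GAMMA / self.ZETA)
        )
        return p_open.mean()

def density_penalty(layer: L0GatedLinear, rho: float, lam: float = 10.0) -> torch.Tensor:
    """Quadratic penalty steering expected density toward the target rho.
    An illustrative stand-in for the paper's exact constraint handling."""
    return lam * (layer.expected_density() - rho) ** 2
```

In an FL setting, each client would locally minimize its task loss plus such a penalty; gates that converge to zero mark weights that need not be communicated, which is where the communication savings come from.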
Key Takeaways
- Proposes a novel method for achieving sparsity in Federated Learning using probabilistic gates and an L0 constraint.
- Addresses the problem of dense models and poor generalizability in FL.
- Demonstrates improved communication efficiency and statistical performance compared to magnitude pruning.
- Evaluated on various datasets (synthetic, RCV1, MNIST, EMNIST) and model types (LR, LG, MC, MLC, CNN).
Reference
“The paper demonstrates that the target density (rho) of parameters can be achieved in FL, under data and client participation heterogeneity, with minimal loss in statistical performance.”