Research Paper • Federated Learning, Edge Computing, Deep Learning
Energy and Memory-Efficient Federated Learning with Ordered Layer Freezing
Published: Dec 29, 2025 • ArXiv
Analysis
This paper addresses the challenges of running Federated Learning (FL) on resource-constrained edge devices in the Internet of Things (IoT). It proposes FedOLF, a novel approach that improves efficiency by freezing model layers in a predefined order, reducing the computation and memory required for local training. Incorporating Tensor Operation Approximation (TOA) further reduces energy consumption and communication cost. The paper's significance lies in enabling more practical and scalable FL deployments on edge devices.
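To make the freezing mechanism concrete, here is a minimal PyTorch sketch of ordered layer freezing on a federated client. The function names (`apply_ordered_freezing`, `local_update`) and the front-to-back schedule driven by `num_frozen` are illustrative assumptions, not FedOLF's exact algorithm, which the summary does not specify.

```python
# Minimal sketch of ordered layer freezing on a federated client.
# Assumption (not from the paper): layers are frozen front-to-back
# according to a per-round count `num_frozen` chosen by a schedule.
import torch
import torch.nn as nn

def apply_ordered_freezing(model: nn.Module, num_frozen: int) -> None:
    """Freeze the first `num_frozen` top-level layers of `model`."""
    for i, layer in enumerate(model.children()):
        trainable = i >= num_frozen
        for p in layer.parameters():
            p.requires_grad_(trainable)

def local_update(model, dataloader, num_frozen, lr=0.01, epochs=1):
    apply_ordered_freezing(model, num_frozen)
    # Only the still-trainable tail of the network enters the optimizer,
    # so frozen layers carry no gradients or optimizer state.
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in dataloader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    # In this sketch, only unfrozen parameters are returned for upload.
    return {n: p.detach().clone()
            for n, p in model.named_parameters() if p.requires_grad}
```

In this sketch, frozen early layers need no stored activations for backpropagation and no optimizer state, which is where the memory and computation savings come from; uploading only the trainable tail also trims communication.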
Key Takeaways
- Proposes FedOLF, a novel approach for energy- and memory-efficient Federated Learning.
- Employs ordered layer freezing to reduce computation and memory requirements.
- Incorporates Tensor Operation Approximation (TOA) to further reduce energy and communication costs (see the sketch after this list).
- Demonstrates improved accuracy, higher energy efficiency, and a lower memory footprint compared to existing methods.
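The summary does not spell out how TOA approximates tensor operations, so the following is a generic stand-in rather than the paper's method: a truncated-SVD low-rank compression of a weight matrix, illustrating how approximating a tensor can shrink both computation and the communication payload. The helper `low_rank_approx` and the rank choice are hypothetical.

```python
# Illustrative stand-in only: the paper's Tensor Operation Approximation
# (TOA) is not detailed here, so this shows one generic way to
# approximate a weight tensor (truncated SVD) before communication.
import torch

def low_rank_approx(weight: torch.Tensor, rank: int):
    """Return rank-`rank` factors (U_r, V_r) with weight ≈ U_r @ V_r.

    Sending the factors instead of the full m-by-n matrix shrinks the
    upload from m*n to rank*(m+n) values.
    """
    U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
    U_r = U[:, :rank] * S[:rank]   # fold singular values into U
    V_r = Vh[:rank, :]
    return U_r, V_r

# Example: compress a 512x512 layer to rank 32 (~12.5% of the payload)
# and report the relative Frobenius-norm approximation error.
W = torch.randn(512, 512)
U_r, V_r = low_rank_approx(W, rank=32)
W_approx = U_r @ V_r
print(torch.linalg.matrix_norm(W - W_approx) / torch.linalg.matrix_norm(W))
```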
Reference
“FedOLF achieves at least 0.3%, 6.4%, 5.81%, 4.4%, 6.27% and 1.29% higher accuracy than existing works respectively on EMNIST (with CNN), CIFAR-10 (with AlexNet), CIFAR-100 (with ResNet20 and ResNet44), and CINIC-10 (with ResNet20 and ResNet44), along with higher energy efficiency and lower memory footprint.”