Energy and Memory-Efficient Federated Learning with Ordered Layer Freezing

Research Paper · Federated Learning, Edge Computing, Deep Learning · Analyzed: Jan 3, 2026 19:06
Published: Dec 29, 2025 04:39
ArXiv

Analysis

This paper addresses the challenges of Federated Learning (FL) on resource-constrained edge devices in Internet of Things (IoT) settings. It proposes FedOLF, a novel approach that improves efficiency by freezing model layers in a predefined order, which reduces both computation and memory requirements on clients. Incorporating Tensor Operation Approximation (TOA) further improves energy efficiency and lowers communication costs. The paper's significance lies in its potential to enable more practical and scalable FL deployments on edge devices.
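To make the idea of ordered layer freezing concrete, the sketch below shows one plausible front-first freezing schedule, where earlier layers are frozen progressively as federated rounds advance. The function name and the linear schedule are illustrative assumptions; FedOLF's actual schedule is not reproduced here.

```python
def frozen_layers(num_layers, round_idx, total_rounds):
    """Return the indices of layers frozen at a given round, front-first.

    Hypothetical schedule (assumption, not FedOLF's exact rule): the
    fraction of frozen layers grows linearly with training progress,
    so shallow layers stop receiving updates first.
    """
    progress = round_idx / total_rounds
    k = int(progress * num_layers)  # freeze the first k layers
    return list(range(k))

# Example: an 8-layer model trained over 10 federated rounds
for r in (0, 5, 9):
    print(r, frozen_layers(8, r, 10))
```

Freezing the earliest layers first matches the intuition that low-level features stabilize early in training, so skipping their gradient computation and parameter uploads saves compute, memory, and communication with limited accuracy cost.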
Reference / Citation
"FedOLF achieves at least 0.3%, 6.4%, 5.81%, 4.4%, 6.27% and 1.29% higher accuracy than existing works respectively on EMNIST (with CNN), CIFAR-10 (with AlexNet), CIFAR-100 (with ResNet20 and ResNet44), and CINIC-10 (with ResNet20 and ResNet44), along with higher energy efficiency and lower memory footprint."
* Cited for critical analysis under Article 32.