Energy and Memory-Efficient Federated Learning with Ordered Layer Freezing
Analysis
Key Takeaways
- Proposes FedOLF, a novel approach for energy- and memory-efficient Federated Learning.
- Employs ordered layer freezing to reduce computation and memory requirements (see the sketch at the end of this section).
- Incorporates Tensor Operation Approximation (TOA) to further reduce energy and communication costs.
- Demonstrates improved accuracy, higher energy efficiency, and a lower memory footprint compared to existing methods.
“FedOLF achieves at least 0.3%, 6.4%, 5.81%, 4.4%, 6.27% and 1.29% higher accuracy than existing works respectively on EMNIST (with CNN), CIFAR-10 (with AlexNet), CIFAR-100 (with ResNet20 and ResNet44), and CINIC-10 (with ResNet20 and ResNet44), along with higher energy efficiency and lower memory footprint.”
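To make the freezing mechanism concrete, below is a minimal PyTorch sketch of ordered layer freezing on a client: layers are frozen in order from the input side, so backpropagation stops at the deepest frozen layer and the client skips those layers' gradient computation and optimizer state. The function name `freeze_ordered_layers` and the toy model are illustrative assumptions, not FedOLF's actual implementation, and TOA is not shown.

```python
import torch
import torch.nn as nn

def freeze_ordered_layers(model: nn.Module, num_frozen: int) -> None:
    """Freeze the first `num_frozen` top-level layers of `model`, in order.

    Because the frozen prefix starts at the input and the input data does
    not require gradients, autograd stops at the frozen/trainable boundary:
    no backward pass runs through frozen layers, saving computation and the
    memory for their gradients and optimizer state.
    """
    layers = list(model.children())
    for layer in layers[:num_frozen]:
        for param in layer.parameters():
            param.requires_grad = False
    for layer in layers[num_frozen:]:
        for param in layer.parameters():
            param.requires_grad = True

# Hypothetical example: a resource-constrained client freezes the first
# two blocks of a small CNN and trains only the head.
model = nn.Sequential(
    nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU()),   # block 0
    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU()),  # block 1
    nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 32, 10)),   # head
)
freeze_ordered_layers(model, num_frozen=2)

# Hand only trainable parameters to the optimizer, so no momentum
# buffers are allocated for the frozen layers.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=0.01
)
```

Freezing a contiguous prefix from the input, rather than arbitrary layers, is what makes the savings possible: with any trainable layer below a frozen one, the backward pass would still have to traverse the frozen layer to reach it.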