Energy-Efficient Federated Learning Frameworks for Edge Devices

Authors

  • Uday Kulkarni, Gokhale Rashtreeya Vidyalaya College of Engineering, Bangalore, India

DOI:

https://doi.org/10.15662/IJRAI.2022.0506002

Keywords:

Federated Learning, energy efficiency, edge devices, gradient compression, skeleton gradients, reinforcement learning, device selection, mobile edge computing, communication-constrained learning

Abstract

Federated Learning (FL) enables decentralized model training across edge devices while preserving data privacy, yet the resource constraints of these devices make energy efficiency a critical concern. In 2021, several frameworks emerged to tackle this challenge. FedGreen introduces fine-grained gradient compression, pairing device-side gradient reduction with server-side aggregation to cut energy consumption in mobile edge computing, and reports more than a 32% reduction in total device energy while reaching 80% test accuracy. Similarly, FedSkel enables efficient FL on heterogeneous edge systems by updating only the essential "skeleton" parts of the network, delivering 5.52× speedups in convolutional-layer backpropagation and reducing communication by 64.8% with negligible accuracy loss. Another framework, AutoFL, applies reinforcement learning to select participating devices and their execution targets in each aggregation round, jointly optimizing convergence time and energy; it achieves 3.6× faster convergence and 4.7× higher per-client energy efficiency, rising to 5.2× across the device cluster. Additional work studies dynamic scheduling for over-the-air FL with energy awareness, balancing computation and communication energy constraints and yielding a 4.9% accuracy gain under tight energy budgets. Together, these studies inform an energy-efficient FL framework that integrates gradient compression, selective model updates, intelligent device selection, and energy-aware scheduling. Such a framework balances energy consumption, model performance, and training latency, helping to extend the utility of FL in resource-limited edge environments.
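
As an illustration of the device-side compression plus server-side aggregation pattern discussed above, the Python sketch below implements one round of generic top-k gradient sparsification. The top-k operator, the 10% compression ratio, and all function names are illustrative assumptions for this sketch; they are not FedGreen's exact compression scheme.

    # Minimal sketch: device-side gradient compression with server-side aggregation.
    # The top-k operator and the compression ratio are illustrative assumptions.
    import numpy as np

    def compress_topk(grad: np.ndarray, ratio: float):
        """Keep only the largest-magnitude fraction `ratio` of gradient entries."""
        k = max(1, int(ratio * grad.size))
        flat = grad.ravel()
        idx = np.argpartition(np.abs(flat), -k)[-k:]   # indices of the top-k entries
        return idx, flat[idx]                          # sparse (index, value) pairs

    def decompress(idx, vals, shape):
        """Rebuild a dense gradient from the sparse (index, value) representation."""
        flat = np.zeros(int(np.prod(shape)))
        flat[idx] = vals
        return flat.reshape(shape)

    def server_aggregate(sparse_updates, shape):
        """Average the decompressed client gradients (FedAvg-style aggregation)."""
        dense = [decompress(i, v, shape) for i, v in sparse_updates]
        return np.mean(dense, axis=0)

    # Example round: each device uploads only 10% of its gradient entries,
    # trading a small accuracy loss for lower communication (and thus energy).
    rng = np.random.default_rng(0)
    local_grads = [rng.normal(size=(128, 64)) for _ in range(4)]   # 4 hypothetical devices
    updates = [compress_topk(g, ratio=0.1) for g in local_grads]
    global_grad = server_aggregate(updates, shape=(128, 64))

In this sketch the communication saving comes purely from uploading sparse (index, value) pairs instead of dense gradients; the selective "skeleton" updates and energy-aware device selection described above would layer additional savings on top of such a compression step.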

References

1. FedGreen: Federated Learning with Fine-Grained Gradient Compression for Green Mobile Edge Computing (arXiv) – over 32% reduction in total device energy via device-side gradient compression.

2. FedSkel: Efficient Federated Learning on Heterogeneous Systems with Skeleton Gradients Update (arXiv) – speedups in backpropagation and 64.8% lower communication with negligible accuracy loss.

3. AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning (arXiv) – RL-based device and execution-target selection improves convergence time and energy efficiency.

4. Dynamic Scheduling for Over-the-Air Federated Edge Learning with Energy Constraints – energy-aware scheduling improves accuracy under tight energy budgets.

Published

2022-11-01

How to Cite

Kulkarni, U. (2022). Energy-Efficient Federated Learning Frameworks for Edge Devices. International Journal of Research and Applied Innovations, 5(6), 7964-7967. https://doi.org/10.15662/IJRAI.2022.0506002