Green Federated Learning over Wireless Networks
Federated Learning, Distributed Machine Learning, Communication-efficient FL
Motivated by the ever-growing computational resources of edge devices and rising privacy concerns, a new machine learning framework called Federated Learning (FL) has been proposed. FL enables user devices, such as mobile and Internet of Things (IoT) devices, to collaboratively train a machine learning model by sending only model parameters instead of raw data. FL is considered a key enabling approach for privacy-preserving, distributed machine learning (ML) systems. However, FL requires frequent exchanges of learned model updates between many user devices and the cloud/edge server, which introduces significant communication overhead. Moreover, transmitting these updates consumes a considerable amount of energy. In this thesis, we aim to engineer a green FL framework over wireless networks that is both communication-efficient and energy-efficient. To improve communication efficiency, we design two novel model compression approaches, and to improve energy efficiency, we develop a joint network scheduling and model updating scheme. Our solutions are validated extensively through real-world experiments. More specifically, in our first communication-efficient method, we propose the Federated Learning with Autoencoder Compression (FLAC) approach, which exploits the redundant information in users' models and the error-correcting capability of FL to compress user devices' models for uplink transmission. FLAC trains an autoencoder at the server to encode and decode users' models during the Training State, and then sends the autoencoder to user devices, which use it to compress their local models in subsequent iterations during the Compression State.
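To make the autoencoder-compression idea concrete, the sketch below shows a minimal linear autoencoder fitted to a batch of flattened model updates. It uses PCA via SVD (the optimal linear autoencoder under squared error) rather than FLAC's actual trained network, and all function names and dimensions are illustrative assumptions, not part of the thesis.

```python
import numpy as np

def fit_linear_autoencoder(updates, k):
    """Fit a rank-k linear autoencoder to flattened model updates via SVD.
    This PCA-based stand-in is a hypothetical simplification of FLAC's
    trained autoencoder; it returns the mean and the k principal directions."""
    mean = updates.mean(axis=0)
    centered = updates - mean
    # Top-k right singular vectors span the best rank-k subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]                      # shape (k, d): encoder rows
    return mean, components

def encode(update, mean, components):
    """Compress a d-dimensional update to k floats for uplink transmission."""
    return components @ (update - mean)

def decode(code, mean, components):
    """Reconstruct an approximate d-dimensional update at the server."""
    return components.T @ code + mean

rng = np.random.default_rng(0)
# Simulated flattened model updates from 32 clients, each of dimension 256.
updates = rng.normal(size=(32, 256))
mean, comps = fit_linear_autoencoder(updates, k=16)
code = encode(updates[0], mean, comps)
recon = decode(code, mean, comps)
print(code.shape, recon.shape)   # (16,) (256,)
```

Here each client would transmit only the 16-float code instead of the full 256-float update, a 16x reduction; the reconstruction error is the price of compression, which FL's iterative averaging can tolerate.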
In our second communication-efficient method, we introduce the Dynamic Sparsification for Federated Learning (DSFL) approach, which enables users to compress their local models according to their communication capacity at each iteration using two novel sparsification methods: layer-wise similarity sparsification (LSS) and extended top-K sparsification. In our energy-efficient FL design, we investigate the energy consumed in transmitting scheduling decisions for FL deployed over a wireless network. To this end, we propose a novel multi-frame framework that enables the coordinator to schedule wireless devices at the beginning of each global round and then put its transmission module into sleep mode.
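As an illustration of the sparsification idea, the snippet below sketches plain top-K sparsification of a flattened update, where K is chosen from the user's current communication budget. This is the standard top-K baseline, not DSFL's extended variant or the layer-wise similarity method; the function name and sizes are illustrative assumptions.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep the k largest-magnitude entries of a flattened update and zero
    the rest; returns the sparse vector and the indices that were kept
    (only the kept values and indices need to be transmitted)."""
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse, idx

rng = np.random.default_rng(1)
u = rng.normal(size=1000)          # simulated flattened local update
k = 50                             # e.g. set by the current link capacity
sparse, idx = top_k_sparsify(u, k)
print(np.count_nonzero(sparse))    # 50
```

Because K can change per iteration, a user on a poor channel can shrink K (sending fewer values) while a user on a good channel sends a denser update, which is the capacity-adaptive behavior DSFL targets.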