PI: Zheng Chen, Linköping University
co-PI: Michael Lentmaier, Lund University
Federated learning (FL) is susceptible to a range of security threats, so protecting the local model updates exchanged between participating clients and the central parameter server is essential for secure and privacy-preserving learning. Among the various strategies proposed to address privacy concerns in FL, two approaches have garnered significant attention: secure aggregation based on random masking or secret sharing [1], and differential privacy achieved by injecting artificial random noise [2]. This project aims to develop novel techniques that achieve privacy protection in FL with minimal utility degradation, and to re-examine the communication-privacy-utility interplay in differentially private FL under practical coding schemes.
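To make the two mechanisms concrete, a minimal toy sketch in Python/NumPy is given below: pairwise random masks that cancel in the server-side sum (the core idea behind masking-based secure aggregation [1]), and the Gaussian mechanism with norm clipping (the standard noise-injection route to differential privacy [2]). All function names and parameters here are illustrative assumptions, not the project's methods; practical secure-aggregation protocols additionally handle key agreement, client dropouts, and finite-field arithmetic.

```python
import numpy as np

def pairwise_masks(num_clients, dim, seed=0):
    """Toy pairwise masks: m_ij = -m_ji, so all masks cancel in the sum."""
    rng = np.random.default_rng(seed)
    masks = [np.zeros(dim) for _ in range(num_clients)]
    for i in range(num_clients):
        for j in range(i + 1, num_clients):
            m = rng.normal(size=dim)  # stands in for a mask derived from a shared pairwise seed
            masks[i] += m             # client i adds the mask
            masks[j] -= m             # client j subtracts it
    return masks

def dp_gaussian(update, clip=1.0, sigma=0.5, rng=None):
    """Gaussian mechanism: clip the update to L2 norm `clip`, then add noise."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip / max(norm, 1e-12))
    return clipped + rng.normal(scale=sigma * clip, size=update.shape)

# Toy example: 4 clients, 5-dimensional model updates.
num_clients, dim = 4, 5
rng = np.random.default_rng(42)
updates = [rng.normal(size=dim) for _ in range(num_clients)]

# Secure aggregation: clients upload masked updates; the server sees only
# masked values, yet recovers the exact sum because the masks cancel.
masks = pairwise_masks(num_clients, dim)
masked = [u + m for u, m in zip(updates, masks)]
assert np.allclose(sum(masked), sum(updates))

# Differential privacy: clients upload clipped, noised updates instead.
noised = [dp_gaussian(u, clip=1.0, sigma=0.5, rng=rng) for u in updates]
print("exact mean:", np.round(sum(updates) / num_clients, 3))
print("DP    mean:", np.round(sum(noised) / num_clients, 3))
```

The sketch also illustrates the tension the project studies: masking preserves the aggregate exactly but only hides individual updates in transit, whereas DP noise protects the output itself at a cost in utility.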
Project number: G3
