Privacy Preservation and Verifiability for Federated Learning

dc.contributor.author: Zhao, Jianxiang
dc.contributor.department: Electrical and Computer Engineering
dc.contributor.supervisor: Ni, Jianbing
dc.date.accessioned: 2023-01-24T16:53:13Z
dc.date.available: 2023-01-24T16:53:13Z
dc.degree.grantor: Queen's University at Kingston
dc.description.abstract: Federated learning is a distributed machine learning framework that addresses the bottlenecks of traditional machine learning in data collection and privacy leakage by allowing a model to be trained on distributed data without exposing that data. In federated learning, multiple clients collaboratively train a single global model that is improved iteratively using the clients' local data. Each client receives the global model from an aggregation server, trains it on its private data, and sends the locally trained model back to the server, where it is integrated into the global model. The federated training continues, iteration after iteration, until the global model is considered well trained. Federated learning provides basic privacy protection against outside attackers, but this does not mean user privacy cannot be leaked: it has been shown that the local models shared with the server can leak the raw data held by users. To preserve user privacy, shared models should therefore not be transmitted in plaintext. However, encrypting the local models makes aggregation difficult for the server, and correct aggregation is essential for fast convergence of the global model. In addition, it is hard to ensure that the server aggregates the local models honestly, especially in cross-silo federated learning; if the server lacks sufficient motivation to coordinate the training, the performance of federated learning cannot be guaranteed. In this thesis, we aim to prevent user privacy from leaking through the shared local models and to guarantee the correctness of the global models output by the server. Specifically, we first propose PPA-AFL, a fully asynchronous secure federated learning protocol that addresses the privacy issue in asynchronous federated aggregation. Its disadvantage is that it requires two non-colluding servers and cannot guarantee the correctness of the global model. We then design PPVA-AFL, a secure and verifiable aggregation protocol for asynchronous federated learning that simultaneously guarantees the privacy of the local models and the correctness of the global model. In short, we have investigated security issues in federated learning and developed two novel schemes that enhance privacy preservation and introduce verifiability for federated learning.
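For readers unfamiliar with the training loop the abstract describes, the following minimal Python sketch illustrates federated averaging and, at the end, how pairwise additive masks can hide individual local models while their sum (and hence the aggregate) remains computable by the server. All names, the toy local-training step, and the masking construction are illustrative assumptions for exposition, not code or protocols from the thesis (PPA-AFL and PPVA-AFL are defined in the thesis itself).

    # Minimal sketch of the federated averaging loop described in the abstract.
    # Names (n_clients, local_step, ...) are illustrative, not from the thesis.
    import numpy as np

    rng = np.random.default_rng(0)
    n_clients, dim, rounds = 4, 8, 3

    def local_step(global_model, client_data):
        # Stand-in for local training: nudge the model toward the
        # client's data mean. A real client would run SGD on its data.
        return global_model + 0.1 * (client_data.mean(axis=0) - global_model)

    client_data = [rng.normal(size=(20, dim)) for _ in range(n_clients)]
    global_model = np.zeros(dim)

    for _ in range(rounds):
        # Each client trains the current global model on its private data...
        local_models = [local_step(global_model, d) for d in client_data]
        # ...and the server aggregates the local models (plain FedAvg here).
        global_model = np.mean(local_models, axis=0)

    # To keep individual models private, clients can upload masked models
    # whose masks cancel in the sum, so the server learns only the aggregate:
    masks = [rng.normal(size=dim) for _ in range(n_clients - 1)]
    masked = [m + s for m, s in zip(local_models[:-1], masks)]
    masked.append(local_models[-1] - sum(masks))   # masks cancel in the total
    assert np.allclose(sum(masked) / n_clients, np.mean(local_models, axis=0))

Note that this plain masking sketch assumes all clients report in every round; handling asynchronous arrivals and dropouts without breaking the mask cancellation is precisely the kind of difficulty the asynchronous protocols in the thesis address.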
dc.description.degree: M.A.Sc.
dc.identifier.uri: http://hdl.handle.net/1974/31403
dc.language.iso: eng
dc.relation.ispartofseries: Canadian theses
dc.rights: Queen's University's Thesis/Dissertation Non-Exclusive License for Deposit to QSpace and Library and Archives Canada
dc.rights: ProQuest PhD and Master's Theses International Dissemination Agreement
dc.rights: Intellectual Property Guidelines at Queen's University
dc.rights: Copying and Preserving Your Thesis
dc.rights: This publication is made available by the authority of the copyright owner solely for the purpose of private study and research and may not be copied or reproduced except as permitted by the copyright laws without written authority from the copyright owner.
dc.rights: Attribution-NonCommercial 3.0 United States
dc.rights.uri: http://creativecommons.org/licenses/by-nc/3.0/us/
dc.subject: Federated Learning
dc.subject: Secure Machine Learning
dc.title: Privacy Preservation and Verifiability for Federated Learning
dc.type: thesis
Files

Original bundle
Name: Jianxiang_Zhao_20201604_MASC.pdf
Size: 992.84 KB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 1.67 KB
Format: Item-specific license agreed upon to submission