dc.contributor.author | Khorramfar, Mohammadreza | |
dc.date.accessioned | 2023-08-30T21:30:11Z | |
dc.date.available | 2023-08-30T21:30:11Z | |
dc.date.issued | 2023-08-29 | |
dc.identifier.citation | Khorramfar, Mohammadreza. Securing Federated Learning Model Aggregation Against Poisoning Attacks via Credit-Based Client Selection; A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in Applied Computer Science. Winnipeg, Manitoba, Canada: University of Winnipeg, 2023. DOI: 10.36939/ir.202308301626. | en_US |
dc.identifier.uri | https://hdl.handle.net/10680/2107 | |
dc.description.abstract | Federated Learning (FL) has emerged as a revolutionary paradigm in machine learning, enabling multiple participants to collaboratively train models without compromising the privacy of their individual training data. However, the distributed and decentralized nature of FL also exposes it to a diverse array of poisoning attacks, in which adversaries inject malicious updates to compromise the integrity and accuracy of the global model. In this thesis, we critically explore defense strategies against poisoning attacks in FL. Our primary focus is proposing and evaluating a robust defense mechanism named Credit-Based Client Selection (CBCS). CBCS assigns credit scores to participating clients based on the accuracy and consistency of their historical model updates. By selectively incorporating reliable clients with higher credit scores into the model aggregation process, while subjecting low-credit clients to closer scrutiny or exclusion, CBCS fortifies the defense against adversarial disruptions. To make the evaluation more comprehensive, we also examine additional scenarios, such as attack-free (normal) conditions, and assess the efficacy of these strategies across various FL settings. Through an extensive series of experiments on non-IID image classification datasets, we rigorously evaluate the performance of the CBCS defense mechanism. The results show that CBCS effectively identifies and excludes adversarial clients, maintaining model accuracy and data confidentiality in federated learning. These outcomes underscore the impact of robust defense strategies on securing federated learning and their role in advancing collaborative, privacy-preserving machine learning applications. The proposed CBCS defense mechanism opens new avenues for enhancing the resilience and security of federated learning systems against adversarial threats. As the world continues to embrace decentralized and privacy-focused learning, our research contributes to the safe and trustworthy deployment of federated learning across diverse domains. | en_US |
dc.language.iso | en | en_US |
dc.publisher | University of Winnipeg | en_US |
dc.rights | info:eu-repo/semantics/openAccess | en_US |
dc.subject | Federated Learning | en_US |
dc.subject | Machine Learning | en_US |
dc.subject | Client Selection | en_US |
dc.subject | Security | en_US |
dc.subject | Neural Networks | en_US |
dc.subject | Aggregation Rules | en_US |
dc.subject | Defense | en_US |
dc.subject | CNN | en_US |
dc.title | Securing Federated Learning Model Aggregation Against Poisoning Attacks via Credit-Based Client Selection | en_US |
dc.type | Thesis | en_US |
dc.description.degree | Master of Science in Applied Computer Science | en_US |
dc.publisher.grantor | University of Winnipeg | en_US |
dc.identifier.doi | 10.36939/ir.202308301626 | en_US |
thesis.degree.discipline | Applied Computer Science | |
thesis.degree.level | masters | |
thesis.degree.name | Master of Science in Applied Computer Science | |
thesis.degree.grantor | University of Winnipeg | |
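
The abstract describes CBCS only at a high level: clients accrue credit from the accuracy and consistency of their past updates, and only high-credit clients take part in aggregation. The sketch below illustrates that selection-and-aggregation step in Python. The scoring rule, its weights, the threshold, and all function and variable names are illustrative assumptions for this record, not the thesis's actual algorithm.

```python
# A minimal sketch of credit-based client selection (CBCS) as described in the
# abstract. The scoring rule and threshold are assumed, not taken from the thesis.
import numpy as np

def update_credit(credit, val_accuracy, consistency, alpha=0.5):
    """Blend the old credit score with this round's evidence (assumed rule)."""
    evidence = 0.5 * val_accuracy + 0.5 * consistency
    return alpha * credit + (1 - alpha) * evidence

def cbcs_aggregate(updates, credits, threshold=0.4):
    """Average only the updates from clients whose credit clears the threshold."""
    trusted = [i for i, c in enumerate(credits) if c >= threshold]
    if not trusted:  # fall back to plain FedAvg if no client qualifies
        trusted = list(range(len(updates)))
    return np.mean([updates[i] for i in trusted], axis=0), trusted

# Toy round: 5 clients submit 3-dimensional "model updates"; client 4 is poisoned.
rng = np.random.default_rng(0)
updates = [rng.normal(0.0, 0.1, size=3) for _ in range(4)]
updates.append(rng.normal(5.0, 0.1, size=3))  # outlier update from client 4
credits = [0.8, 0.7, 0.9, 0.6, 0.1]           # scores carried over from earlier rounds
global_update, trusted = cbcs_aggregate(updates, credits)
print("trusted clients:", trusted)            # client 4 is excluded
print("aggregated update:", global_update)

# After aggregation, refresh each client's credit from this round's evidence
# (hypothetical validation accuracies and consistency scores).
val_acc = [0.91, 0.88, 0.93, 0.85, 0.20]
consistency = [0.90, 0.90, 0.95, 0.80, 0.10]
credits = [update_credit(c, a, s) for c, a, s in zip(credits, val_acc, consistency)]
print("updated credits:", credits)
```

Blending the old credit with new evidence (an exponential moving average) lets an honest client recover from one bad round, while sustained poisoning steadily drives a client's score below the selection threshold.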