dc.contributor.author     Khorramfar, Mohammadreza
dc.date.accessioned       2023-08-30T21:30:11Z
dc.date.available         2023-08-30T21:30:11Z
dc.date.issued            2023-08-29
dc.identifier.citation    Khorramfar, Mohammadreza. Securing Federated Learning Model Aggregation Against Poisoning Attacks via Credit-Based Client Selection; A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in ... Applied Computer Science. Winnipeg, Manitoba, Canada: University of Winnipeg, 2023. DOI: 10.36939/ir.202308301626.  en_US
dc.identifier.uri         https://hdl.handle.net/10680/2107
dc.description.abstract   Federated Learning (FL) has emerged as a revolutionary paradigm in the field of machine learning, enabling multiple participants to collaboratively train models without compromising the privacy of their individual training data. However, the distributed and decentralized nature of FL also exposes it to a diverse array of poisoning attacks, wherein adversaries inject malicious updates to compromise the integrity and accuracy of the global model. In this thesis, we embark on a critical exploration of defense strategies against poisoning attacks in FL. Our primary focus lies in proposing and evaluating a robust defense mechanism, aptly named Credit-Based Client Selection (CBCS). Leveraging a credit-based system, CBCS judiciously assigns credit scores to participating clients based on the accuracy and consistency of their historical model updates. By selectively incorporating reliable clients with higher credit scores into the model aggregation process, while subjecting low-credit clients to thorough scrutiny or exclusion, CBCS fortifies the defense against adversarial disruptions. To further enhance the comprehensiveness of our research, we extend our evaluation to additional scenarios, such as normal (attack-free) conditions, and carefully assess the efficacy of these strategies across various FL settings. Through an extensive series of experiments conducted on non-IID image classification datasets, we rigorously evaluate the performance of the CBCS defense mechanism. The results show that CBCS effectively identifies and excludes adversarial clients, maintaining model accuracy and data confidentiality in federated learning. The outcomes of our research underscore the profound impact of robust defense strategies on securing federated learning and their pivotal role in advancing collaborative and privacy-preserving machine learning applications. The proposed CBCS defense mechanism illuminates new avenues for enhancing the resilience and security of federated learning systems in the face of adversarial threats. As the world continues to embrace decentralized and privacy-focused learning approaches, our research contributes significantly to the safe and trustworthy deployment of federated learning across diverse domains.  en_US
dc.language.iso           en  en_US
dc.publisher              University of Winnipeg  en_US
dc.rights                 info:eu-repo/semantics/openAccess  en_US
dc.subject                Federated Learning  en_US
dc.subject                Machine Learning  en_US
dc.subject                Client Selection  en_US
dc.subject                Security  en_US
dc.subject                Neural Networks  en_US
dc.subject                Aggregation Rules  en_US
dc.subject                Defense and CNN  en_US
dc.title                  Securing Federated Learning Model Aggregation Against Poisoning Attacks via Credit-Based Client Selection  en_US
dc.type                   Thesis  en_US
dc.description.degree     Master of Science in Applied Computer Science  en_US
dc.publisher.grantor      University of Winnipeg  en_US
dc.identifier.doi         10.36939/ir.202308301626  en_US
thesis.degree.discipline  Applied Computer Science
thesis.degree.level       masters
thesis.degree.name        Master of Science in Applied Computer Science
thesis.degree.grantor     University of Winnipeg
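
The abstract describes CBCS only at a high level: score each client on the accuracy and consistency of its historical updates, then admit only high-credit clients to aggregation. The following is a minimal illustrative sketch of that idea; every name, threshold, and update rule here is an assumption made for illustration, not the thesis's actual algorithm.

    # Illustrative sketch of credit-based client selection (CBCS-style).
    # All names, thresholds, and update rules are assumed, not taken
    # from the thesis.
    from dataclasses import dataclass, field


    @dataclass
    class ClientRecord:
        """Tracks one client's credit based on its historical updates."""
        credit: float = 1.0                           # assumed starting credit
        history: list = field(default_factory=list)   # past per-round scores


    def update_credit(record: ClientRecord, round_score: float,
                      decay: float = 0.9) -> None:
        """Exponentially weight past behavior: consistent, accurate updates
        raise credit; poor or erratic updates lower it (assumed rule)."""
        record.history.append(round_score)
        record.credit = decay * record.credit + (1.0 - decay) * round_score


    def select_clients(records: dict, threshold: float = 0.5) -> list:
        """Keep only clients whose credit exceeds the (assumed) threshold;
        low-credit clients are excluded from this round's aggregation."""
        return [cid for cid, rec in records.items() if rec.credit > threshold]


    def aggregate(updates: dict, selected: list) -> list:
        """Plain federated averaging over the selected clients' updates."""
        chosen = [updates[cid] for cid in selected]
        return [sum(w) / len(chosen) for w in zip(*chosen)]


    # Toy usage: three clients, one of them (c2) submitting poisoned updates
    # that score badly on the server's validation check (scores are made up).
    records = {cid: ClientRecord() for cid in ("c0", "c1", "c2")}
    round_scores = {"c0": 0.82, "c1": 0.79, "c2": 0.05}  # e.g. validation accuracy
    updates = {"c0": [0.1, 0.2], "c1": [0.3, 0.4], "c2": [9.0, -9.0]}

    for _ in range(10):  # repeated bad rounds drive c2's credit below 0.5
        for cid, score in round_scores.items():
            update_credit(records[cid], score)

    selected = select_clients(records)
    print("selected:", selected)                  # c2 excluded once credit decays
    print("aggregated:", aggregate(updates, selected))

The exponential decay makes exclusion a function of sustained behavior rather than a single round, which is one plausible way to read the abstract's emphasis on the "accuracy and consistency of historical model updates"; the thesis itself should be consulted for the actual scoring and selection rules.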

