Securing Federated Learning Model Aggregation Against Poisoning Attacks via Credit-Based Client Selection
Khorramfar, Mohammadreza. Securing Federated Learning Model Aggregation Against Poisoning Attacks via Credit-Based Client Selection; A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of Master of Science in ... Applied Computer Science. Winnipeg, Manitoba, Canada: University of Winnipeg, 2023. DOI: 10.36939/ir.202308301626.
Federated Learning (FL) has emerged as a transformative paradigm in machine learning, enabling multiple participants to collaboratively train models without sharing their individual training data. However, the distributed and decentralized nature of FL also exposes it to a diverse array of poisoning attacks, in which adversaries inject malicious updates to compromise the integrity and accuracy of the global model.

In this thesis, we explore defense strategies against poisoning attacks in FL. Our primary focus is proposing and evaluating a robust defense mechanism named Credit-Based Client Selection (CBCS). CBCS assigns each participating client a credit score based on the accuracy and consistency of its historical model updates. By incorporating reliable, high-credit clients into the model aggregation process while subjecting low-credit clients to additional scrutiny or exclusion, CBCS hardens the global model against adversarial disruption. To broaden the evaluation, we also assess CBCS under attack-free (normal) conditions and across a variety of FL settings.

Through an extensive series of experiments on non-IID image classification datasets, we rigorously evaluate the performance of the CBCS defense mechanism. The results show that CBCS effectively identifies and excludes adversarial clients, preserving model accuracy and data confidentiality in federated learning.
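The abstract only sketches the mechanism; the thesis defines the exact scoring rule. As a rough illustration of the idea, a credit-based selector could maintain an exponential moving average of each client's validation accuracy and admit only clients whose credit clears a threshold into FedAvg-style aggregation. The class name, scoring rule, threshold, and decay factor below are illustrative assumptions, not the thesis's exact method:

```python
import numpy as np

class CBCSAggregator:
    """Illustrative credit-based client selection for FL aggregation.

    The scoring rule (EMA of validation accuracy) and the fixed
    threshold are assumptions for this sketch, not the thesis's design.
    """

    def __init__(self, num_clients, threshold=0.5, decay=0.8):
        self.credits = np.full(num_clients, 1.0)  # all clients start fully trusted
        self.threshold = threshold  # minimum credit to join aggregation
        self.decay = decay          # weight given to historical credit

    def update_credits(self, client_ids, val_accuracies):
        # Blend each client's historical credit with its latest
        # server-side validation accuracy (exponential moving average),
        # so consistently poor updates steadily erode credit.
        for cid, acc in zip(client_ids, val_accuracies):
            self.credits[cid] = (self.decay * self.credits[cid]
                                 + (1 - self.decay) * acc)

    def aggregate(self, client_ids, updates):
        # Keep only clients whose credit clears the threshold,
        # then average their model updates (FedAvg-style mean).
        trusted = [i for i, cid in enumerate(client_ids)
                   if self.credits[cid] >= self.threshold]
        if not trusted:
            return None  # no client is currently trusted
        return np.mean([updates[i] for i in trusted], axis=0)
```

For example, a client that repeatedly submits updates with low validation accuracy sees its credit decay below the threshold, after which its (possibly poisoned) updates are simply excluded from the global average.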
The outcomes of our research underscore the impact of robust defense strategies on securing federated learning and their central role in advancing collaborative, privacy-preserving machine learning applications. The proposed CBCS defense mechanism opens new avenues for strengthening the resilience and security of federated learning systems against adversarial threats. As decentralized and privacy-focused learning approaches see wider adoption, our research contributes to the safe and trustworthy deployment of federated learning across diverse domains.