Christian Coduri
Evaluating Backdoor Attacks Over Centralized and Distributed Medical Image Processing.
Supervisors: Alessio Sacco, Guido Marchetto. Politecnico di Torino, degree programme not specified, 2025
PDF (Tesi_di_laurea), thesis full text. License: Creative Commons Attribution Non-commercial No Derivatives. Download (15MB)
Abstract:
Machine learning (ML), particularly deep learning models such as Convolutional Neural Networks (CNNs), has shown great promise in medical imaging, supporting clinicians in diagnosis and treatment planning. However, clinical adoption is often limited by the scarcity of annotated data and the risk of dataset bias, as models trained in a single institution may fail to generalize. In addition, data protection regulations, such as the GDPR, restrict the centralization of medical images across hospitals, limiting the development of robust models.

Federated Learning (FL) has emerged as a promising paradigm to address these challenges by enabling multiple institutions to collaboratively train a model without sharing raw data. In this approach, each institution updates the shared model using its own dataset and transmits only the parameters to a central server, thereby preserving the privacy of sensitive information. However, FL is still a variant of ML and thus inherits many of its vulnerabilities, while also introducing new threats specific to its distributed nature. In the future, it is reasonable to expect that only a few large, well-resourced hospitals, or institutions in isolated regions, will continue to develop their own centralized and independent models. In contrast, most hospitals and clinics are likely to increasingly adopt federated learning, enabling them to collaboratively train more powerful models while maintaining compliance with regulations.

This thesis investigates one of the stealthiest forms of attack: the backdoor attack, a variant of poisoning attacks in which malicious participants inject hidden triggers into a subset of the training data. These attacks are particularly insidious because they cause the model to misclassify inputs carrying the trigger while maintaining high accuracy on benign data, making detection difficult. Various CNNs for brain tumor classification were developed using pretrained models and transfer learning.
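The trigger-injection step described in the abstract can be sketched as follows: a small patch is stamped onto a random subset of training images, and those samples are relabeled to an attacker-chosen class. This is a minimal illustrative sketch, not the thesis's exact setup; the patch location, poison rate, and target label are assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1,
                   trigger_size=3, trigger_value=1.0, seed=42):
    """Plant a square trigger in a random subset of images and relabel
    those samples to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger patch into the bottom-right corner of each chosen image.
    images[idx, -trigger_size:, -trigger_size:] = trigger_value
    labels[idx] = target_label
    return images, labels, idx

# Toy example: 100 blank grayscale 28x28 images, all labeled class 1.
imgs = np.zeros((100, 28, 28), dtype=np.float32)
lbls = np.ones(100, dtype=np.int64)
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_label=3, poison_rate=0.1)
```

Because only 10% of the data is modified and the model behaves normally on clean inputs, standard accuracy metrics do not reveal the attack, which is what makes the backdoor stealthy.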
After evaluating their performance, backdoor attacks were conducted on selected models. The results showed that employing a trusted pretrained model suited to the dataset, and freezing the feature extractor while updating only the classifier, can substantially reduce the risk of backdoors, even when the dataset is not fully trusted. Moreover, it was demonstrated that explainability techniques, such as Grad-CAM, can assist in identifying potential attacks.

After the single-agent scenario, the same models were deployed in a federated environment and subjected to attacks varying in the number of malicious clients and the poisoning rate. The experiments showed that these two parameters can significantly influence the success or failure of the attack, underscoring the complexity of securing federated settings. Additionally, final considerations are provided on what the server can observe as potential indicators of backdoor activity, which could be used to identify or exclude malicious clients.

Future work will build on this analysis to design robust defense mechanisms applicable to both centralized and federated learning. Additional backdoor variants, including those with invisible triggers or adversarial noise, will be investigated and compared. Finally, attention will be given to developing defenses that remain effective under privacy-preserving techniques such as secure computation or differential privacy, ensuring both security and compliance in sensitive applications.
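As context for the federated experiments, the aggregation step the server performs is commonly FedAvg: client parameter updates are averaged, weighted by local dataset size. A minimal sketch (the client values and dataset sizes below are illustrative, not the thesis's experimental configuration):

```python
import numpy as np

def fedavg(client_params, client_sizes):
    """Weighted average of client parameter vectors (FedAvg).

    client_params: list of 1-D arrays, one per client.
    client_sizes:  number of local training samples per client.
    """
    sizes = np.asarray(client_sizes, dtype=np.float64)
    weights = sizes / sizes.sum()              # each client's share of the data
    stacked = np.stack(client_params)          # shape: (n_clients, n_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Three honest clients plus one malicious client submitting an inflated update.
honest = [np.array([1.0, 1.0]) for _ in range(3)]
malicious = [np.array([9.0, 9.0])]
global_update = fedavg(honest + malicious, client_sizes=[100, 100, 100, 100])
# With equal sizes each weight is 0.25, so each parameter becomes (3*1 + 9)/4 = 3.0.
```

This averaging is why the number of malicious clients and the poisoning rate matter so much: a single attacker's contribution is diluted by honest updates, and the server only ever sees the aggregated parameters, not the poisoned data itself.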
| Field | Value |
|---|---|
| Supervisors | Alessio Sacco, Guido Marchetto |
| Academic year | 2025/26 |
| Publication type | Electronic |
| Number of pages | 97 |
| Subjects | |
| Degree programme | Not specified |
| Degree class | Nuovo ordinamento > Laurea magistrale > LM-32 - INGEGNERIA INFORMATICA |
| Partner companies | Not specified |
| URI | http://webthesis.biblio.polito.it/id/eprint/37904 |



Creative Commons License - Attribution 3.0 Italy