Giada Grillo
Deepfake and Generative AI: Legal Challenges and Technical Strategies for Detection and Prevention.
Supervisors: Andrea Atzeni, Giuseppe Emiliano Vaciago. Politecnico di Torino, Master's degree programme in Cybersecurity, 2025
- PDF (Tesi_di_laurea) - Thesis, 4MB. License: Creative Commons Attribution Non-commercial No Derivatives.
- Archive (ZIP) (Documenti_allegati) - Other, 8MB. License: Creative Commons Attribution Non-commercial No Derivatives.
Abstract:
The rapid evolution of generative artificial intelligence has introduced one of the most pressing challenges in today’s digital ecosystem: the rise of deepfakes. Originally developed as experimental outputs of GANs, autoencoders, and diffusion models, these technologies now enable the creation of hyperrealistic synthetic media, such as images, videos, and voices, that are nearly indistinguishable from authentic content. While such advancements hold creative and commercial potential, they have increasingly been exploited for identity theft, disinformation, extortion, and reputational damage, producing serious social, economic, and political repercussions. Recent studies show that deepfake-related fraud attempts grew by over 3,000% in 2023, underscoring the urgency of effective countermeasures. Despite this escalating threat, both legal and technical responses remain fragmented. Regulatory initiatives, from the EU Artificial Intelligence Act, the Digital Services Act, and the recent Italian DDL introducing a dedicated deepfake offence, to the U.S. Deepfakes Accountability Act, China’s Deep Synthesis Regulations, and the UK Online Safety Act, reflect a growing global consensus on the need for transparency, accountability, integrity, and content provenance. However, legislation alone is insufficient without technological mechanisms that can enforce these principles in practice. Likewise, most existing detection techniques are reactive and struggle to keep pace with generative AI’s exponential progress. This thesis therefore proposes a proactive and preventive framework for authenticity verification, securing digital media at the point of creation rather than solely detecting manipulation afterward. The proposed system integrates fragile watermarking, cryptographic hashing, and a blockchain-inspired ledger to embed verifiable traces of origin within digital images and ensure immutable traceability.
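The creation-time pipeline just outlined, a SHA-256 fingerprint over image content and metadata recorded in a blockchain-inspired ledger, can be sketched in a few lines. This is an illustrative reconstruction, not the thesis prototype: the function names, metadata fields, and ledger layout are assumptions.

```python
import hashlib
import json

def fingerprint(image_bytes: bytes, metadata: dict) -> str:
    """SHA-256 digest over image content plus canonically serialized metadata."""
    h = hashlib.sha256()
    h.update(image_bytes)
    h.update(json.dumps(metadata, sort_keys=True).encode())
    return h.hexdigest()

class SimpleLedger:
    """Minimal hash-chained ledger: each entry commits to the previous one,
    so rewriting any past entry invalidates every later link."""
    def __init__(self):
        self.entries = []

    def append(self, digest: str) -> None:
        prev = self.entries[-1]["link"] if self.entries else "0" * 64
        link = hashlib.sha256((prev + digest).encode()).hexdigest()
        self.entries.append({"digest": digest, "prev": prev, "link": link})

    def verify_chain(self) -> bool:
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            if hashlib.sha256((prev + e["digest"]).encode()).hexdigest() != e["link"]:
                return False
            prev = e["link"]
        return True

# Register an image at creation time, then show a minimal edit is detectable.
original = b"\x89PNG...pixel data..."
meta = {"author": "camera-01", "created": "2025-01-01T12:00:00Z"}  # illustrative
ledger = SimpleLedger()
ledger.append(fingerprint(original, meta))

tampered = original.replace(b"pixel", b"pixel!")
assert ledger.verify_chain()
assert fingerprint(tampered, meta) != fingerprint(original, meta)
```

Canonical serialization (`sort_keys=True`) matters here: without it, two semantically identical metadata dictionaries could hash differently and produce false tamper alarms.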
The design operationalizes the legal principles of transparency, integrity, and provenance identified in emerging AI legislation, bridging the gap between normative requirements and technical enforcement. The prototype employs a Least Significant Bit (LSB) fragile watermarking technique to insert imperceptible textual marks that reveal even minimal alterations. A SHA-256 hashing module generates unique digital fingerprints for both image content and metadata, which are then recorded in a simulated blockchain ledger ensuring an auditable chain of custody. Verification processes cross-check watermarks, hashes, and ledger entries to detect any inconsistencies indicative of tampering. Experimental tests, conducted using NIST’s Computer Forensic Reference Data Sets and analyzed through the Autopsy forensic platform, confirmed the system’s ability to identify pixel-level modifications, metadata corruption, and unauthorized edits while maintaining evidentiary reliability. Beyond its technical dimension, the research establishes a direct mapping between each system component and its corresponding legal foundation: watermarking ensures transparency as required by the AI Act and China’s AIGC Measures; hashing safeguards integrity in line with the U.S. Deepfakes Accountability Act; and blockchain mechanisms guarantee accountability and non-repudiation as envisioned by the Italian DDL and the UK Online Safety Act. The results demonstrate the framework’s robustness and reliability in detecting manipulation and authenticating digital content. By combining fragile watermarking and cryptographic hashing, the system strength…
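The LSB fragile watermarking step described above can be illustrated on a raw channel buffer. A minimal sketch in pure Python, assuming a byte-per-channel carrier and an MSB-first bit layout; this is not the thesis implementation, and the mark and carrier values are made up.

```python
def embed_lsb(pixels: bytes, mark: bytes) -> bytearray:
    """Embed each bit of `mark` into the least significant bit of consecutive
    pixel bytes. Imperceptible (changes each carrier byte by at most 1), but
    fragile: any edit to a carrying byte's LSB corrupts the extracted mark."""
    out = bytearray(pixels)
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(out):
        raise ValueError("mark too long for carrier")
    for idx, bit in enumerate(bits):
        out[idx] = (out[idx] & 0xFE) | bit
    return out

def extract_lsb(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes back out of the LSB plane, MSB-first."""
    mark = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i * 8 + j] & 1)
        mark.append(byte)
    return bytes(mark)

# Embed a textual mark, then show that a single pixel edit breaks it.
carrier = bytearray(range(256))          # stand-in for one image channel
marked = embed_lsb(carrier, b"AUTH")
assert extract_lsb(marked, 4) == b"AUTH"

edited = bytearray(marked)
edited[3] ^= 0x01                        # pixel-level modification flips one LSB
assert extract_lsb(edited, 4) != b"AUTH"
```

Fragility is the point of this design choice: unlike robust watermarks, which are meant to survive edits, a fragile LSB mark fails on any modification, turning a broken extraction into tamper evidence that the verification stage can cross-check against the recorded hashes.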
| Field | Value |
|---|---|
| Supervisors: | Andrea Atzeni, Giuseppe Emiliano Vaciago |
| Academic year: | 2025/26 |
| Publication type: | Electronic |
| Number of pages: | 91 |
| Subjects: | |
| Degree programme: | Master's degree programme in Cybersecurity |
| Degree class: | New degree structure > Master's degree > LM-32 - COMPUTER ENGINEERING |
| Collaborating companies: | NOT SPECIFIED |
| URI: | http://webthesis.biblio.polito.it/id/eprint/38689 |



Creative Commons License - Attribution 3.0 Italy