Analysis of Methods and Tools of Proactive Defense against Deepfake
DOI: https://doi.org/10.31649/1997-9266-2025-180-3-126-132

Keywords: Deepfake, steganography, watermarking, artificial intelligence, machine learning, information attack, information security

Abstract
The development of artificial intelligence has been one of the main trends of recent years. Although this technology serves the benefit of humanity, it can also be used for malicious purposes such as spreading disinformation or blackmail. Such purposes are often pursued using technologies for creating so-called Deepfake content.
The article presents the results of a study of methods and tools for active protection against the malicious use of Deepfake. Deepfake is a general name for images, video, or audio files created by artificial neural networks that show a person saying or doing something they never actually said or did. Because such materials are generated by artificial intelligence, they appear genuine to anyone unaware of their origin, and without special tools it is difficult to distinguish a real image from a fake one.
Content generation technologies based on artificial intelligence, Deepfake in particular, are often used to produce materials for disinformation campaigns, blackmail, and other malicious purposes. This creates a need for means of protection against the malicious use of Deepfake.
Passive protection methods rely on the ability of machine learning models to distinguish authentic content from generated content; because they are tied to the architectures of Deepfake generation models, they quickly become outdated as those architectures evolve. This motivates active protection methods, which embed watermarks into content either to track its provenance or to interfere with the operation of Deepfake generation models.
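To make the watermarking idea concrete, below is a minimal sketch of fragile least-significant-bit (LSB) embedding in Python. It illustrates the general principle only and is not a method from the surveyed works; the grayscale image, the 128-bit payload, and the NumPy-based encoding are assumptions of this example.

```python
import numpy as np

def embed_watermark(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Embed a binary watermark into the least significant bits
    of the first bits.size pixels of a grayscale uint8 image."""
    flat = image.flatten()  # flatten() returns a copy
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark back from the least significant bits."""
    return image.flatten()[:n_bits] & 1

# Toy demonstration on a random "image" (hypothetical data).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
mark = rng.integers(0, 2, size=128, dtype=np.uint8)

tagged = embed_watermark(img, mark)
assert np.array_equal(extract_watermark(tagged, mark.size), mark)
# Because the mark is fragile, any re-synthesis of the tagged pixels
# (e.g. by a Deepfake pipeline) corrupts the extracted bits and
# thereby flags the content as modified.
```

Robust and semi-fragile variants of the same idea are designed to survive benign processing such as compression while still breaking under face manipulation, which is the property that provenance-tracking schemes exploit.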
The paper describes and analyzes existing active methods of protection against Deepfake: their target application areas, advantages and disadvantages, and technical characteristics; it also suggests directions for further research in this area. Special attention is paid to protection methods based on steganography and watermarking, and the advantages of active protection methods over passive ones are considered.
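The second branch of active defense mentioned above, interfering with the generation model itself, typically adds an imperceptible adversarial perturbation to a published photo so that a Deepfake generator consuming it produces visibly corrupted output. Below is a minimal gradient-sign sketch of that idea; the linear surrogate "model" and all parameter values are hypothetical stand-ins for a real generator's encoder, not an implementation from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical linear surrogate for a generator's encoder:
# loss(x) = w . x, so the gradient with respect to x is simply w.
w = rng.normal(size=4096)

def perturb(image: np.ndarray, eps: float = 2.0) -> np.ndarray:
    """One FGSM-style step: move each pixel by +/- eps in the
    direction that increases the surrogate loss, then clip to
    the valid intensity range."""
    grad = w  # analytic gradient of the linear surrogate
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 255.0)

x = rng.integers(0, 256, size=4096).astype(np.float64)  # flattened 64x64 photo
x_adv = perturb(x)

print("max pixel change:", np.abs(x_adv - x).max())      # bounded by eps
print("surrogate loss shift:", float(w @ (x_adv - x)))   # pushed upward
```

Against a real generator, the gradient would be obtained by backpropagation through its network rather than analytically, but the perturbation budget eps plays the same role: large enough to disrupt synthesis, small enough to remain invisible to a human viewer.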
