The rapid advancement of synthetic media technologies has made deepfake harassment a serious cybersecurity and societal concern. Deepfake-based abuse, including non-consensual explicit content and identity impersonation, inflicts severe psychological, social, and reputational harm on victims. Most existing solutions rely on artificial intelligence and machine learning techniques to detect manipulated media; however, these approaches are often computationally expensive, dataset-dependent, and unsuitable for forensic or legal use. This project proposes a cybersecurity-centric framework that treats deepfake harassment as a digital forensics problem rather than a prediction task. The system focuses on media integrity verification, forensic metadata analysis, cryptographic hashing, provenance tracking, and secure chain-of-custody mechanisms. A structured reporting workflow generates forensic integrity reports that support investigation and evidence handling. The proposed framework is implemented as a working prototype and evaluated on controlled test cases, demonstrating its ability to flag suspicious media and preserve reliable digital evidence, thereby strengthening trust in digital media ecosystems.
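The cryptographic hashing and chain-of-custody mechanisms described above can be sketched as follows. This is an illustrative example only: the paper does not publish its implementation, and the function names, log structure, and file names here are assumptions. The core idea is that each piece of evidence is fingerprinted with a cryptographic hash at intake, so any later modification is detectable when the hash is recomputed.

```python
# Hedged sketch of hash-based media integrity verification and a
# chain-of-custody record. All names here are illustrative assumptions,
# not the authors' actual implementation.
import hashlib
from datetime import datetime, timezone


def media_fingerprint(data: bytes) -> str:
    """Return a SHA-256 digest of raw media bytes (the integrity baseline)."""
    return hashlib.sha256(data).hexdigest()


def custody_entry(filename: str, data: bytes, handler: str) -> dict:
    """Build a chain-of-custody record: who handled which file, when,
    and the cryptographic hash that later re-verification must match."""
    return {
        "file": filename,
        "sha256": media_fingerprint(data),
        "handler": handler,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }


def verify(entry: dict, data: bytes) -> bool:
    """Re-hash the media and compare against the recorded digest;
    a mismatch flags possible tampering since intake."""
    return media_fingerprint(data) == entry["sha256"]


# Example: a single appended byte breaks verification.
original = b"\x00\x01example-media-bytes"
entry = custody_entry("evidence_clip.mp4", original, "analyst_01")
print(verify(entry, original))           # True
print(verify(entry, original + b"\x00")) # False
```

In a full system each custody entry would itself be signed or appended to a tamper-evident log, so the chain of records, not just the media, can be independently verified.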
Keywords: Deepfake Harassment, Cybersecurity, Digital Forensics, Media Integrity, Provenance Tracking, Cryptographic Verification
A Systematic Survey of Deepfake Harassment Using Digital Forensics. Indian Journal of Modern Research and Reviews. 2026; 4(3):56-62.