GenAI: Deepfakes 2026
A novel methodology for evaluating deepfake detection systems in forensics.
Overview
The creation of deepfakes is now a low-cost, low-effort process, widely available for producing lifelike renderings of people, places, and objects. With modern generative AI, a photograph harvested from social media can be transformed into a hyper-realistic deepfake in seconds.
Current AI detection systems demonstrate performance degradation of 45-50% when transitioning from academic evaluation to operational deployment [1], highlighting the need for comprehensive benchmark datasets that adequately challenge detection capabilities.
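To make this kind of gap concrete, the sketch below evaluates a single detector on an academic-style test set and an operational-style test set and reports the relative accuracy drop. Everything here is a hypothetical placeholder: the random-guess detector stands in for a real model, and the file names are invented for illustration; this is not the evaluation code behind the figure cited above.

```python
import random

rng = random.Random(0)

def detect_deepfake(image_id: str) -> bool:
    """Placeholder detector; substitute a real model's prediction here."""
    return rng.random() < 0.5  # coin flip standing in for a classifier

def accuracy(image_ids: list[str], is_fake: bool) -> float:
    """Fraction of images the detector labels correctly."""
    correct = sum(detect_deepfake(i) == is_fake for i in image_ids)
    return correct / len(image_ids)

# Hypothetical test sets: one academic-style, one operational-style.
academic_fakes = [f"academic_{i:04d}.png" for i in range(1000)]
operational_fakes = [f"operational_{i:04d}.png" for i in range(1000)]

academic_acc = accuracy(academic_fakes, is_fake=True)
operational_acc = accuracy(operational_fakes, is_fake=True)

# Relative drop; the 45-50% figure above refers to this kind of gap.
degradation = (academic_acc - operational_acc) / academic_acc
print(f"relative accuracy drop: {degradation:.1%}")
```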
What We Do
At NIST, in collaboration with the University of Maryland, we are addressing the need for robust detection tools by developing a novel methodology for evaluating deepfake detection systems through the following initiatives:
- Synthetic Reference Images as Surrogates: We use ultra-realistic, entirely AI-generated synthetic identities (faces that do not belong to any real person) as the basis for our study.
- Adversarial Attacks: We do not just test "easy" AI images. We deliberately select synthetic images and apply adversarial attacks to them while maintaining high human-assessed realism (see the sketch after this list).
- Forensic Manipulations: Using these synthetic subjects, we create challenging benchmarks through face swapping, body swapping, and context manipulation. The results are then validated with face verification systems to ensure they mimic the sophisticated tactics used in the real world (see the verification sketch below).
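One standard adversarial technique of the kind described above is the fast gradient sign method (FGSM), sketched below against a toy PyTorch classifier. The small epsilon bound keeps the perturbation visually imperceptible, which is how realism can be preserved while degrading the detector. The model, image sizes, and labeling are illustrative assumptions, not a description of the specific attacks used in this project.

```python
import torch

def fgsm_attack(model: torch.nn.Module,
                image: torch.Tensor,
                label: torch.Tensor,
                epsilon: float = 4 / 255) -> torch.Tensor:
    """One-step FGSM: nudge each pixel against the detector's gradient.

    A small epsilon keeps the perturbed image visually indistinguishable
    from the original, preserving human-assessed realism.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image)
    loss = torch.nn.functional.cross_entropy(logits, label)
    loss.backward()
    # Step in the direction that increases the detector's loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: a toy linear "detector" over 64x64 RGB images.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 2))
image = torch.rand(1, 3, 64, 64)        # stand-in for a synthetic face
label = torch.tensor([1])               # 1 = "fake" in this toy labeling
adv_image = fgsm_attack(model, image, label)
print((adv_image - image).abs().max())  # perturbation bounded by epsilon
```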
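The face verification check in the third initiative can be sketched in a similar spirit: after a face swap, a verifier should match the manipulated image to the donor identity and no longer match the original subject. The snippet below uses the open-source deepface package as one possible verifier; the package choice and file names are assumptions for illustration, not the project's actual tooling.

```python
from deepface import DeepFace  # pip install deepface

# Hypothetical file names for a face-swap validation check.
SWAPPED = "swapped_result.png"  # synthetic subject A's face on B's body
DONOR = "subject_a.png"         # source of the swapped face
ORIGINAL = "subject_b.png"      # identity that should no longer match

# The swap succeeded if the verifier matches the donor, not the original.
match_donor = DeepFace.verify(img1_path=SWAPPED, img2_path=DONOR)
match_original = DeepFace.verify(img1_path=SWAPPED, img2_path=ORIGINAL)

swap_ok = match_donor["verified"] and not match_original["verified"]
print(f"swap passes verification check: {swap_ok}")
```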
The Goal
To provide the research community with an adversarially challenging and operationally relevant benchmark that enables rigorous evaluation of deepfake detection systems.
Task Coordinator
If you have any questions, please email the NIST GenAI team.
** This project has been reviewed by the NIST Research Protections Office to ensure compliance with human subjects research requirements.