Deepfakes have become increasingly convincing, leaving us wondering whether what we see is real. Professor Abe Davis at Cornell explains that videos can no longer be taken at face value—a shift that challenges our usual assumptions about authenticity.
To tackle this, researchers at Cornell have developed a method that subtly adjusts the brightness of light sources in a scene (computer screens, lamps, even outdoor lighting) to embed secret codes. Unlike a traditional digital watermark attached to the video file itself, these slight fluctuations are woven into the light during recording, making them far harder for forgers to detect.
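To make the idea concrete, here is a minimal sketch of how such a brightness code might be generated: a tiny, pseudorandom wobble added to a light's base level, one value per video frame. The function names, frame rate, and two-percent amplitude are illustrative assumptions, not details from the Cornell work.

```python
import numpy as np

def make_noise_code(duration_s, fps=30.0, amplitude=0.02, seed=42):
    """Generate a pseudorandom brightness code: small zero-mean
    fluctuations, one value per video frame. The amplitude is kept
    to a few percent so the flicker blends into the light's natural
    noise. All parameters here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    n_frames = int(duration_s * fps)
    # Binary +/-1 code scaled down to a barely perceptible wobble.
    code = rng.choice([-1.0, 1.0], size=n_frames)
    return code * amplitude

def brightness_schedule(base_level, code):
    """Absolute brightness per frame: base level plus the hidden code."""
    return np.clip(base_level + code, 0.0, 1.0)

# Example: a 10-second code for a lamp held near 80% brightness.
code = make_noise_code(duration_s=10.0)
levels = brightness_schedule(base_level=0.8, code=code)
print(levels[:5])  # e.g. [0.78, 0.82, 0.82, 0.78, ...]
```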
Graduate student Peter Michael, who leads the project, explains that insights from studies of human perception shaped the approach: the secret code is blended into the natural noise of light, so unless you know what to look for, it is effectively invisible.
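As a rough illustration of that hiding step, one could shape the code so its spectrum resembles slow, natural flicker rather than harsh white noise. The simple low-pass filtering below is an assumed stand-in for the team's perceptual model, which is certainly more sophisticated.

```python
import numpy as np

def shape_like_flicker(code, fps=30.0, cutoff_hz=2.0):
    """Low-pass filter a raw +/-1 code so its spectrum resembles the
    slow drift of natural light noise rather than white noise. The
    cutoff, and the idea of plain low-pass shaping, are assumptions
    made for illustration only."""
    spectrum = np.fft.rfft(code)
    freqs = np.fft.rfftfreq(len(code), d=1.0 / fps)
    spectrum[freqs > cutoff_hz] = 0.0   # keep only slow fluctuations
    shaped = np.fft.irfft(spectrum, n=len(code))
    # Renormalize so the hidden signal keeps its tiny amplitude.
    peak = np.abs(shaped).max()
    return shaped / peak * np.abs(code).max() if peak > 0 else shaped

# Example: shape a 10-second raw code at 30 fps.
raw = np.random.default_rng(0).choice([-1.0, 1.0], size=300) * 0.02
hidden = shape_like_flicker(raw)
```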
If a video, say of an interview or a speech, is tampered with, the embedded codes can expose the inconsistencies. A forensic analyst can inspect a derived “code video” that traces the original timeline, where missing or altered sections typically show up as blacked-out areas. Notably, the team has managed to embed up to three distinct codes in a single scene, so a would-be forger must faithfully reproduce every layer at once.
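A hedged sketch of the verification side might look like the following: recover a per-frame brightness trace from the suspect footage, correlate it window by window against the known code, and flag windows where the correlation collapses, loosely analogous to the blacked-out stretches in a code video. The recovery routine and threshold are illustrative assumptions, not the researchers' actual pipeline.

```python
import numpy as np

def recover_brightness_trace(frames):
    """Mean intensity per frame of the suspect footage.

    `frames` is an (n_frames, height, width) grayscale array. A real
    analysis would isolate pixels lit by the coded source; averaging
    everything is an illustrative simplification."""
    return frames.reshape(len(frames), -1).mean(axis=1)

def tamper_map(trace, code, window=30, threshold=0.3):
    """Correlate the observed brightness trace against the known code,
    one window at a time. Windows where the correlation collapses are
    flagged as suspect, loosely mirroring the blacked-out stretches an
    analyst would see in a code video. The threshold is an assumption."""
    flags = []
    for start in range(0, min(len(trace), len(code)) - window + 1, window):
        seg_t = trace[start:start + window]
        seg_c = code[start:start + window]
        denom = seg_t.std() * seg_c.std()
        corr = 0.0 if denom == 0 else float(
            np.mean((seg_t - seg_t.mean()) * (seg_c - seg_c.mean())) / denom
        )
        flags.append((start, corr < threshold))
    return flags  # list of (start_frame, is_suspect) pairs
```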
This technique has been effective in a variety of settings, from outdoor shoots to scenes featuring a range of skin tones. While it doesn’t solve all issues related to deepfakes, it adds an extra layer of defence against manipulated content. As Davis points out, the challenge is ongoing and will only grow more complex as technology evolves. If you rely on video integrity, this is a development worth watching.