Light-Based Watermarks Combat Video Fakes

Ars Technica

The growing ease with which video footage can be manipulated has created a serious challenge for fact-checkers trying to discern what is authentic. In response, scientists at Cornell University have unveiled a new defense: software that embeds a unique “watermark” in light fluctuations, revealing when video content has been tampered with. The work was presented at SIGGRAPH 2025 in Vancouver, British Columbia, following its publication in June in the journal ACM Transactions on Graphics.

“Video used to be treated as a source of truth, but that’s no longer an assumption we can make,” said Abe Davis, a Cornell co-author who first conceived of the idea. “Now you can pretty much create video of whatever you want. That can be fun, but also problematic, because it’s only getting harder to tell what’s real.”

According to the researchers, those intent on creating deceptive video fakes hold a fundamental advantage: full access to authentic video material and a wide supply of sophisticated, low-cost editing tools. These tools, often powered by artificial intelligence, can learn quickly from vast datasets, making their fabrications nearly indistinguishable from genuine footage. So far, progress in generating convincing fakes has outpaced the forensic techniques designed to combat them. A critical requirement for any effective countermeasure is information asymmetry: the technique must rely on information that is inaccessible to manipulators and cannot be gleaned from publicly available training data.

While digital watermarking techniques that exploit information asymmetry already exist, the Cornell team observed that most fall short on other crucial attributes. Many current methods, for instance, require control over the recording camera or direct access to the original, unmanipulated video. And a tool like a checksum, while capable of detecting that a video file has been altered, cannot differentiate between standard video compression and malicious interventions, such as the insertion of virtual objects.
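To see that limitation concretely, here is a minimal sketch in Python using the standard hashlib module (the file names are hypothetical): any change to the file, whether a harmless re-encode or a malicious edit, produces a different hash, so the checksum alone cannot say which kind of change occurred.

```python
import hashlib

def file_sha256(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical files: the original upload, a harmless re-encode,
# and a copy with a virtual object inserted.
original = file_sha256("clip_original.mp4")
reencoded = file_sha256("clip_recompressed.mp4")
tampered = file_sha256("clip_with_inserted_object.mp4")

# Both comparisons fail the same way: the checksum reports *that* the
# bytes changed, not *why* they changed.
print(original == reencoded)   # False
print(original == tampered)    # False
```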

The Cornell team’s latest method, dubbed “noise-coded illumination” (NCI), addresses these shortcomings by hiding watermarks in the apparent “noise” of light sources. Unlike the team’s previous work, which relied on the video creator using a specific camera or AI model, NCI is more broadly applicable: the coding can be done with a small piece of software for computer screens and certain types of room lighting, or by attaching a small computer chip to off-the-shelf lamps.
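The paper's actual implementation is not reproduced here, but the core idea of coding a light source can be sketched in a few lines. In this hypothetical Python example, a lamp's brightness is modulated by a pseudorandom ±1 code derived from a secret seed, at an amplitude small enough to pass as ordinary noise; the function and parameter names are illustrative, not the authors'.

```python
import numpy as np

def generate_code(secret_seed: int, num_frames: int) -> np.ndarray:
    """Pseudorandom +/-1 code derived from a secret seed (hypothetical scheme)."""
    rng = np.random.default_rng(secret_seed)
    return rng.choice([-1.0, 1.0], size=num_frames)

def modulate_light(base_brightness: np.ndarray, code: np.ndarray,
                   amplitude: float = 0.01) -> np.ndarray:
    """Add a roughly 1% code-driven fluctuation to the light output,
    small enough to read as ordinary noise to a viewer."""
    return base_brightness * (1.0 + amplitude * code)

# Example: a lamp held at constant brightness for 300 frames of video.
num_frames = 300
base = np.full(num_frames, 0.8)                  # normalized brightness
code = generate_code(secret_seed=42, num_frames=num_frames)
emitted = modulate_light(base, code)             # what the lamp actually outputs
```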

“Each watermark carries a low-fidelity, time-stamped version of the unmanipulated video under slightly different lighting. We call these ‘code videos’,” Davis explained. He elaborated that when someone manipulates a video, the altered segments begin to contradict what is present in these code videos, effectively pinpointing where changes were made. In cases where someone attempts to generate entirely fake video with AI, the resulting code videos appear as mere random variations. Crucially, because the watermark is designed to mimic noise, it remains exceedingly difficult to detect without knowledge of the secret code.
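The verification side can be sketched in the same spirit: correlating the recorded footage against the secret code recovers a faint code signal, and segments that have been replaced or synthesized no longer carry it. The toy example below (a hypothetical continuation of the sketch above, not the authors' detection method) flags frames whose code signal collapses after a segment is overwritten.

```python
import numpy as np

# Continuing the toy setup from the previous sketch: a lamp at constant
# brightness, modulated by a secret +/-1 code at about 1% amplitude.
frames = 300
code = np.random.default_rng(42).choice([-1.0, 1.0], size=frames)  # secret code
emitted = 0.8 * (1.0 + 0.01 * code)                                 # coded light output

def recover_code_signal(recorded: np.ndarray, code: np.ndarray) -> np.ndarray:
    """Per-frame correlation between mean scene brightness and the secret code.
    Footage lit by the coded source correlates; edited or AI-generated
    segments carry no code and fall to roughly zero."""
    mean_brightness = recorded.reshape(len(code), -1).mean(axis=1)
    fluctuation = mean_brightness - mean_brightness.mean()
    return fluctuation * code

def flag_suspect_frames(recorded: np.ndarray, code: np.ndarray,
                        threshold: float = 0.004) -> np.ndarray:
    """Indices of frames whose code signal drops below the threshold
    (here half of the 0.01 * 0.8 code amplitude used in this toy setup)."""
    return np.where(recover_code_signal(recorded, code) <= threshold)[0]

# Simulate a camera watching the coded lamp, then overwrite a segment.
rng = np.random.default_rng(0)
video = emitted[:, None, None] * np.ones((frames, 4, 4)) \
        + 0.002 * rng.standard_normal((frames, 4, 4))
video[100:150] = 0.8            # frames 100-149 replaced by a code-free fake
print(flag_suspect_frames(video, code))   # -> the tampered frame indices
```

In the real system, as Davis describes, what is recovered is a full low-fidelity, time-stamped code video rather than a single number per frame, which is what lets the method pinpoint where in the footage the changes were made.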

The Cornell team tested NCI against a wide array of manipulation types, including warp cuts, changes to speed and acceleration, compositing, and deepfakes. The technique held up even with signal levels below the threshold of human perception, and remained robust under subject and camera motion, camera flash, a range of human skin tones, different levels of video compression, and both indoor and outdoor settings.

Davis acknowledged that even if an adversary were aware of the technique and somehow managed to decipher the codes, their task would still be substantially more complex. “Instead of faking the light for just one video, they have to fake each code video separately, and all those fakes have to agree with each other,” he noted. Despite this advancement, Davis cautioned that the fight against video manipulation is an “important ongoing problem” that is “only going to get harder.”