Deepfakes are dangerous, but this simple trick can make them easier to catch
Updated on: August 13, 2025 12:14 pm IST
Deepfakes are getting harder to spot. A simple lighting step during recording hides a code that the camera picks up. Later, any edit shows up as a break in that code.
Worried about deepfakes and how they could affect our societies, institutions, and political processes? This simple fix could be key to addressing the problem. A team at Cornell University has shown that you can watermark reality itself using light, not software. Instead of embedding a signature in a file that a bad actor can strip or ignore, they embed a code in the scene while it is being recorded. Lamps or panels fitted with a small controller flicker in patterns that people do not notice but the camera does. The sensor captures those tiny fluctuations as part of the image. Later, anyone with the matching key can recover a low-fidelity code stream from the footage and check whether it lines up with the scene. If a face was swapped, an object was pasted in, or a section was replayed, the code in that area will not match. What you get is a built-in certificate of authenticity that travels with the frames and does not rely on downstream cooperation from platforms or models.
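The core idea can be sketched numerically. The toy simulation below is my own illustration, not the Cornell team's pipeline: a lamp's brightness is nudged by a tiny keyed pseudorandom code, the camera integrates the result, and a verifier correlates the footage against the key. All numbers (1% modulation depth, brightness values) are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(42)           # stands in for the shared secret key
n_frames = 600
code = rng.choice([-1.0, 1.0], n_frames)  # pseudorandom light code, one value per frame

# Natural per-frame scene brightness, plus a ~1% coded modulation of the lamp.
# 1% is far below what viewers would notice, but the camera sensor integrates it.
scene = 100.0 + rng.normal(0.0, 3.0, n_frames)
footage = scene * (1.0 + 0.01 * code)

# Verification: correlate the de-meaned footage with the keyed code.
score = float(np.dot(footage - footage.mean(), code) / n_frames)
print(score)  # clearly positive when the coded light is present
```

Footage recorded without the coded lamp, or checked against the wrong key, yields a score near zero, which is exactly the mismatch the verifier flags.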
How the light code works and why it helps
During capture, the system gently modulates one or more light fixtures with a pseudorandom sequence. The variations sit below human perception, so the scene looks normal to the audience in the room and on camera. Because the camera sensor integrates that light, the code becomes part of every frame. During verification, software recovers the reference signal from the footage and compares it with the expected pattern. A clean match says the scene was recorded under the coded lights. A mismatch highlights the regions that do not belong. The clever twist is that you can run different codes on different fixtures in the same scene. This makes life hard for forgers, because any edit has to preserve multiple overlapping light signatures, frame by frame, across moving subjects and shifting shadows.

File-based watermarks and metadata have never solved this problem. They depend on compliant software and can be stripped, re-added, or simply never added. A code carried by the light itself raises the bar in the settings where truth matters most, such as interviews, debates, press briefings, and courtroom recordings. It does not stop every attack, and it cannot vouch for material that was never captured under the system, but it moves the first point of trust up the chain and makes convincing edits slower and more expensive to produce.
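Why multiple fixture codes localize tampering can be shown with a second toy sketch (again my own illustration, with invented parameters, not the published system): give two image regions independent codes, paste foreign uncoded frames into one region, and the windowed correlation collapses only there.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 900                                   # total frames
code_a = rng.choice([-1.0, 1.0], n)       # code driving fixture A (lights region A)
code_b = rng.choice([-1.0, 1.0], n)       # independent code on fixture B (region B)

def record(base, code, depth=0.02):
    """Camera integrating a coded light: brightness * (1 + depth * code)."""
    return base * (1.0 + depth * code)

region_a = record(100.0 + rng.normal(0, 2, n), code_a)
region_b = record(100.0 + rng.normal(0, 2, n), code_b)

# Forgery: paste foreign, uncoded pixels into region B for frames 300-599.
region_b[300:600] = 100.0 + rng.normal(0, 2, 300)

def window_scores(signal, code, win=150):
    """Correlation of the de-meaned signal with its expected code, per window."""
    return [float(np.dot((s := signal[i:i + win]) - s.mean(), code[i:i + win]) / win)
            for i in range(0, len(signal), win)]

print([round(s, 2) for s in window_scores(region_a, code_a)])  # stays high everywhere
print([round(s, 2) for s in window_scores(region_b, code_b)])  # collapses in the pasted stretch
```

A forger now has to reproduce both codes simultaneously, frame by frame, under moving subjects and shadows, which is what makes the edit expensive.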
Where it could show up next
The practical upside is that you do not need to change the cameras; you retrofit the lights. A postage-stamp-sized controller can live inside a studio panel, a conference-room downlight, or a stage fixture. Newsrooms can set up coded patterns that look normal on air. Event organizers can enable coded lighting for high-stakes appearances without changing the run sheet. Fact-checkers could ask sources to supply a short verification clip alongside raw footage, which speeds up reviews and reduces guesswork. Standards bodies could define open keys and audit trails so that verification scales beyond a single lab or vendor. None of this is a silver bullet. Light can spill. Keys can leak. Outdoor scenes are hard to control, and the method needs care around flicker and skin-tone rendering. The Cornell team frames it as a layer, not a lock. Pair it with provenance logs, trusted capture timestamps, and strong forensic models, and you get a defense that anchors trust at the moment of recording instead of a late scramble after a video goes viral. A watermark carried by photons is a refreshingly simple way to make fakes easier to expose and the truth easier to prove.