Updated on: Sept 01, 2025 06:03 pm IST
Voice-clone scams are no longer science fiction. In 2025, AI-powered impersonation is driving an explosion of fraud, fueled by realistic deepfake videos, cloned voices, and highly personalized messages.
According to Munlock, such scams have risen by 148% this year. That is not hype; it is a clear sign that this kind of deception is spreading fast.
Criminals are using readily accessible AI tools to commit convincing fraud quickly and at scale. The US alone could lose up to $40 billion annually to AI-enabled scams by 2027. A standout case: an employee in Hong Kong was fooled on a video call by a deepfaked CFO and authorized a $25 million transfer.
| Action | Benefit |
| --- | --- |
| Stop and verify | Always call back on a verified number before acting. |
| Spot the red flags | Messages packed with unnecessary personal references are suspicious. |
| Stick to trusted platforms | Avoid clicking unknown links, even when they come from a "trusted" voice. |
| Use multi-factor authentication (MFA) | Even if a cloned voice is convincing, a one-time code is a barrier scammers cannot fake (see the sketch below). |
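To illustrate why MFA holds up even against a perfect voice clone, here is a minimal sketch of a time-based one-time password (TOTP, RFC 6238) in Python. The implementation and the demo secret are illustrative only, not tied to any product or service named in this article: the point is that the code changes every 30 seconds and depends on a secret stored on your device, so no amount of convincing audio can reproduce it.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 time-based one-time password (SHA-1, 30-second window)."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time() // step)           # current 30-second time slot
    msg = struct.pack(">Q", counter)             # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# Demo secret (a standard example value, not a real credential).
print(totp("JBSWY3DPEHPK3PXP"))
```

In practice you would rely on an authenticator app or hardware key rather than code like this; the sketch simply shows why the advice works: a scammer would need the secret on your device, not just your trust.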
AI impersonation scams are climbing, and they are getting personal. Seeing a familiar face or hearing a familiar voice is no longer a guarantee of authenticity. The only reliable defense is skepticism, verification, and controls that force scammers to slow down.