Artificial intelligence has transformed much of daily life, but scammers are now turning it to their advantage. These 10 true stories highlight the frightening new face of AI-driven fraud, from voice cloning to deepfake video calls.
Paul Davis of Southampton has been repeatedly targeted by AI-driven phishing scams that use doctored images and videos of celebrities such as Elon Musk, Mark Zuckerberg, and Jennifer Aniston. The scammers try to convince him he has won a £500,000 prize and a Range Rover, then demand an “activation fee” to release the non-existent winnings. The scams rely on neural network-generated images and fake certificates to appear legitimate, while the real aim is to harvest personal information and home addresses; victims are often persuaded to buy non-refundable gift cards. AI expert Dr. Jennifer Williams warns of red flags such as odd phrasing, pixelated images, and emotional manipulation, and emphasizes that Facebook does not give away large prizes or vehicles, urging people to remain vigilant.
Source: Yahoo
Ahead of Romania’s May 18 presidential election, scammers are using AI-generated deepfake videos on Facebook to impersonate candidates George Simion and Nicușor Dan, falsely promoting a fake government investment scheme called “Neptun Deep.” The scam claims citizens can earn RON 9,000 a month after paying a RON 1,400 “activation” fee, and redirects victims to fake news sites and fabricated testimonials to boost credibility. One deepfake video has been removed, but another remains active and is spreading rapidly. The scheme mirrors previous scams that exploited political figures, fake endorsements, and fabricated financial reviews to lure victims. Authorities urge the public to avoid clicking suspicious links, to report fraudulent content, and to warn others, especially vulnerable groups. The campaign demonstrates how deepfakes and AI are increasingly used to deceive and defraud during high-profile events, blending political disinformation with advance-fee fraud.
Source: Bitdefender
A Manitoba woman, Leann Friesen, recently received a suspicious phone call from someone who sounded exactly like her son, but the conversation felt off. Trusting her instincts, she hung up and called her son directly, discovering he hadn’t called her at all; the incident was an AI-driven scam built on voice cloning. Such scams are on the rise, with criminals using artificial intelligence to mimic the voices of loved ones, often cloned from clips posted online. Manitoba MLA Diljeet Brar also reported that his voice was used in a similar scam targeting a constituent for $5,000. Experts warn these AI scams are a sophisticated evolution of the “grandparent scam,” now more convincing and more precisely targeted, an approach known as “spear phishing.” Fraud examiner Keith Elliott advises verifying suspicious calls by contacting the person directly or asking bait questions only they could answer. Canadians lost $638 million to fraud last year, yet most cases go unreported, so the true toll is likely far higher; the trend underscores the need for vigilance and awareness.
Source: CBC
Former NewsChannel 5 meteorologist Bree Smith is advocating for stronger Tennessee laws after being targeted by scammers who used AI to create fake, sexually explicit images and videos of her. Smith testified before a state House subcommittee, describing the devastation caused by imposter social media accounts that used her likeness to solicit money from fans. She supports the Preventing Deep Fake Images Act (HB1299), which would make it a felony to distribute intimate digital depictions without consent and allow victims to sue for damages. Despite reporting the abuse, Smith found little recourse, highlighting the urgent need for legal protections against deepfake exploitation.
Source: The Tennessean
An Ontario senior, Marilyn Crawford, narrowly avoided losing $9,000 to a convincing “grandparent scam” call in which fraudsters impersonated her grandson, possibly using AI-generated voice cloning. Awakened by the call and told her grandson was in jail and needed bail money, she nearly sent the funds before a bank employee intervened. This type of scam, in which callers claim to be a family member in distress, has grown more sophisticated with the rise of deepfake technology, which lets scammers convincingly mimic a voice from just a few seconds of audio taken from social media. Experts warn that oversharing online makes it easier for scammers to target victims, especially seniors. Authorities recommend establishing family code words and treating urgent requests for money with caution. In 2024, Canadians lost nearly $3 million to such scams, underscoring the growing threat AI poses in financial fraud.
Source: CBC