Emerging Threats of AI-Enabled Fraud in 2025
2024 gave us a preview of AI-powered fraud threats, including deepfakes, voice clones, and AI-generated phishing. Yet it was merely a field test for what's coming next. In 2025, AI is poised to take center stage in scams targeting fintechs and banks.
Financial institutions are already targets of coordinated campaigns, and AI will only drive the volume higher. Hundreds of Telegram channels already connect would-be fraudsters with scam organizations worldwide. A new kind of "employment-wanted" ad, posted by people offering to sell their likeness for use in deepfakes, has started appearing, giving a glimpse of what is to come.
In one ad, a young woman posts pictures of herself with the caption "Work As An AI Model." She claims to have previously worked for pig butchering compounds (a type of social engineering fraud) as a "killer" (a colloquial term for employees of such operations) and now wants to transition to "AI modeling."
On Track for $40 Billion in AI-Enabled Fraud
The Deloitte Center for Financial Services predicts that generative AI will be responsible for $40 billion in losses by 2027 with a 32% compound annual growth rate. That rise is drastic enough for the Federal Bureau of Investigation to warn the market about criminals using AI to scale and make their schemes more convincing.
AI Fraud As A Service – A Booming Underground Industry
Point Predictive monitored conversations related to AI and deepfakes in fraud channels on Telegram in 2023 and 2024. The volume of messages grew from 47,000 in 2023 to over 350,000 in 2024, a more than seven-fold increase.
In recent years, AI fraud has developed into a thriving service industry. Haotian AI, for example, sells face-swapping software on Telegram. Its ads boast an R&D team of hundreds of programmers and dozens of servers. Even if the staffing numbers are exaggerated, the claim speaks to the demand the company faces.
The company's deepfake face-swapping software purports to produce smooth, real-time video that is "difficult to distinguish with the naked eye" and is designed for "overseas calls" — a perfect match for romance scams.
This rapid growth in criminal communication has led many fraud experts to anticipate a vast wave of AI-enabled scams in 2025 and beyond.
Here is what you need to prepare for:
1. Compromising Business Emails Using AI and Deepfakes
Deepfake-enhanced Business Email Compromise (BEC) attacks are set to escalate in 2025. The methods are well established and available to anyone, as demonstrated in two well-documented schemes in Hong Kong, where fraudsters used AI-generated video and audio to impersonate company executives on Zoom calls, tricking employees into transferring nearly $30 million.
This process is becoming increasingly automated, producing an overwhelming volume of attacks. VIPRE Security Group reports that 40% of BEC emails are now AI-generated. According to Usman Choudhary, the company's Chief Product and Technology Officer, "as AI technology advances, the potential for BEC attacks grows exponentially." Further illustrating the trend, the software firm Medius found that about 53% of accounting professionals experienced deepfake AI attacks in 2024.
2. AI Romance Chatbots Proliferate
One prominent Nigerian cybercriminal recently posted a video showing a fully automated AI chatbot communicating directly with a victim. The victim believes she is talking to her love interest, a military doctor stationed overseas. In reality, she is talking to a chatbot controlled by the scammer.
These chatbots boost the scam’s believability. They allow the scammer to pick a character and converse fluently with the victim without an accent. The use of fully autonomous AI chatbots is set to explode in 2025.
3. Pig Butchering Operations Shift to AI
Numerous videos document walls of cell phones working day and night to find people susceptible to pig butchering and cultivate relationships with them. Scam compounds are adopting new technologies to scale their operations.
AI software called "Instagram Automatic Fans" is just one of many such tools. It sends messages to thousands of people a minute, each reading something like: "My friend recommended you. How are you?" Anyone who answers instantly becomes a hot lead for a team of fraudsters. And while you may not fall for it now, this tactic, like the others, is evolving quickly.
Criminal syndicates involved in pig butchering are learning to leverage AI-powered deepfake technology for video calls, voice clones, and chatbots to scale up their operations in 2025.
4. AI Deepfake Extortion Scams to Target High Profile Executives
An orchestrated deepfake email extortion plot recently targeted 100 Singaporean public servants, including ministers, across 30 government agencies.
The emails demanded a $50,000 payment in cryptocurrency. As leverage, the scammers threatened to release deepfake videos showing the victims in compromising situations. The convincing fakes were created using public content scraped from LinkedIn and YouTube.
With AI-driven deepfake software becoming more accessible, deepfake extortion scams will likely spread to corporations and target high-profile executives. And as the cost of attacks lowers, the reach and choice of targets will keep expanding.
5. Deepfake Digital Arrests Wave May Hit the U.S.
The Indian Express reports more than 92,000 deepfake digital arrest cases in India since January 2024. Cybersecurity experts are worried this could be the next major fraud trend to hit developed economies.
Scammers pose as law enforcement officials to psychologically manipulate victims, isolate them, and demand ransoms. They use deepfake video and audio of government officials, or fabricate evidence, to make the scam more believable.
According to the BBC, Indian authorities have traced 40% of the new wave of digital arrest scams to Southeast Asia. Pig butchering schemes migrated to the U.S. in 2021, and some believe deepfake digital arrest scams will make the same jump to America in 2025.
The Current State of Play
AI scams have already changed the world of financial crime. Even as banks and fintechs rush to build effective defenses, criminals are simply moving faster.
2025 promises to be a major turning point in scams. New fraud-as-a-service operations are expanding rapidly, facilitated by AI tools available for as little as $20 a month.
The future of AI scams has arrived, and it just may speak in a voice that sounds exactly like yours.
Protecting Yourself From AI Scams
Despite the proliferation of AI scams, there is some hope. Yes, AI scams are getting more convincing, but there are practical measures you can take to protect yourself.
Be wary of unexpected texts, emails, and calls that create urgency or pressure, especially those claiming to be from trusted organizations. If a caller claiming to be from your bank asks for sensitive information, simply hang up and call the bank back at its official customer service number.
If a family member calls urgently claiming they have been kidnapped, ask personal questions only they can answer. Experts also recommend establishing "safe words" with family members; when an urgent call or request comes in, the safe word can be used to verify its authenticity.
And if you find yourself on a strange Zoom or FaceTime call that you suspect might be a deepfake, ask the person to stand up or wave a hand in front of their face. If the video glitches, that is a strong sign something is wrong.
This article, written by Frank McKenna, was first published on Forbes.