(Original article in Japanese was published for FinTech Journal on Apr. 23, 2025)
https://www.sbbit.jp/article/fj/161696
Author: Makoto Shibata, Head of FINOLAB
With the rise of generative AI, financial crimes are becoming more sophisticated and harder to detect. In response, Japan is updating its regulations, including key changes to the Act on Prevention of Transfer of Criminal Proceeds, to better prevent fraud. This article highlights the growing threats and how we can prepare for them.
Overview of AI-Driven Financial Crime Trends
This article focuses on:
- Three phishing-related crime methods
- Three deepfake case studies
- Six key countermeasures to protect against evolving fraud
Key Legal Changes and Implications for Fintech
In February 2025, Japan’s National Police Agency announced revisions to anti-money laundering laws, set to take effect in April 2027. Key changes include:
- Individual Identity Verification: Online identity checks that compare a selfie with a photo of an ID document will be discontinued. Verification will instead rely on the electronic certificate stored in the My Number card's IC chip.
- Corporate Verification: Copies of identity documents will no longer be accepted; originals will be required.
- Alternatives for Those Without IC-enabled IDs: Documents such as resident records will need to be submitted by mail.
These changes are a response to how AI can now create convincing fake videos (deepfakes) from a single image, making current identity verification methods unreliable.
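The shift from photo comparison to chip-based authentication matters because the IC chip proves possession of a secret key rather than a face that AI can now fabricate. As a simplified illustration only (the My Number card actually uses public-key certificates issued under the JPKI scheme, not a shared secret), the underlying idea can be sketched as a challenge-response exchange, here with an HMAC standing in for the card's signing key:

```python
import hmac
import hashlib
import secrets

def respond(card_secret: bytes, challenge: bytes) -> bytes:
    """The 'card' proves possession of its secret by keying a MAC
    over a fresh challenge. (Real cards sign with an asymmetric key
    whose certificate the verifier checks against a trusted CA.)"""
    return hmac.new(card_secret, challenge, hashlib.sha256).digest()

def verify(registered_secret: bytes, challenge: bytes, response: bytes) -> bool:
    """The verifier recomputes the expected response and compares in
    constant time, so a forged response without the secret fails."""
    expected = hmac.new(registered_secret, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# A fresh random challenge per session prevents replaying an old response.
challenge = secrets.token_bytes(32)
```

Unlike a selfie, the response cannot be generated from any number of photos or videos of the legitimate holder, which is precisely the property a deepfake cannot counterfeit.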
3 Key Trends in Phishing Attacks
Phishing cases are increasing, with AI making scams more convincing and widespread. Here are three notable trends:
- Voice Phishing (Vishing): AI-generated voice calls impersonate agencies such as Japan's Financial Services Agency, tricking victims into revealing personal and banking details.
- SMS Phishing (Smishing): Fake texts from delivery companies or telecom providers ask users to click links and input banking info.
- Targeting Corporations: Scammers now also target businesses with fake calls and emails, leading victims to enter corporate banking credentials on fraudulent websites.
These tactics have caused major losses, including a high-profile case involving Yamagata Bank with reported damages of up to 1 billion yen.
3 Deepfake-Related Crime Cases
Criminals are using AI-generated images and videos to commit fraud. Here are three real cases:
- Hong Kong (2024): A company lost HK$200 million after scammers used deepfake video on a conference call to impersonate its CFO and request money transfers.
- Georgia (2024): Deepfakes of celebrities were used in fake crypto ads, scamming over 6,000 victims out of £27 million.
- UK (2024): A romance scam using deepfake videos led to a 77-year-old victim losing over £17,000.
6 Measures to Combat Evolving Financial Crimes
To protect against these increasingly sophisticated threats, both tech and human-focused measures are essential:
- Use of deepfake detection tools
- Adoption of multi-factor authentication (MFA)
- Multi-step approval processes for transactions
- Regular employee training
- Promoting skepticism toward impersonation
- Establishing clear incident response protocols
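Of the measures above, multi-factor authentication is the most directly implementable in software. As a minimal sketch, a time-based one-time password (TOTP) generator following RFC 6238 can be written with only the Python standard library; the secret and parameters below are illustrative:

```python
import hmac
import hashlib
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password per RFC 6238 (HMAC-SHA1 variant)."""
    t = time.time() if for_time is None else for_time
    counter = int(t // step)                       # 30-second time window
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code is derived from a shared secret and the current time window, a phished password alone is not enough to log in, blunting the smishing and vishing attacks described earlier. A production deployment would pair this with rate limiting and a small tolerance for clock drift between client and server.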
As technology evolves, criminals adapt quickly. Businesses must continuously review and strengthen their security measures to stay ahead.