As financial fraud using generative artificial intelligence becomes more sophisticated, the core of asset protection is shifting from technical defenses to systems for verifying people.
On May 14, local time, fintech media outlet Finextra reported that existing phishing defenses, such as screening out messages with typos or awkward sentences, are no longer sufficient. People must now prepare for AI scams so sophisticated they are difficult to distinguish from reality, it said.
The biggest change is a shift from phishing that targets an unspecified mass audience to customized voice phishing. Criminals can clone the voice of a family member or acquaintance using just a few seconds of audio posted on social media. A typical approach is to fabricate an accident, arrest or overseas crisis and demand an urgent transfer.
These scams target human psychology before technical vulnerabilities. Ninety-five percent of voice phishing cases begin with a conversation. When scammers create extreme urgency with scenarios such as an account freeze or a family crisis, people react before they analyze.
Investment fraud that builds trust over a long period before extracting funds was also presented as a major risk. So-called "pig butchering" scams cultivate trust over weeks through text messages, dating apps or online conversations, then steer victims to fake investment platforms. Screens styled like cryptocurrency or gold investment apps display large gains, but at the actual withdrawal stage the counterparty disappears along with the money. For retirees in particular, such scams can inflict not only financial loss but also severe emotional trauma.
Major types of fraud cited include AI voice cloning, cryptocurrency investment scams, impersonation involving taxes and social security, QR code phishing, health coverage information update scams, bitcoin blackmail emails, tech support pop-ups, parcel delivery text scams, threats to cut off public utility services and romance scams. A common feature is pressure for urgency, secrecy and immediate payment.
As a countermeasure, the report proposed a "Triple A protocol," urging people to treat any contact that demands an immediate transfer or secrecy as a warning signal and to respond with heightened caution. It also cautioned against trusting contact information listed in texts or emails at face value.
Basic digital security practices were also emphasized. People should use password managers to set a different complex password for each account and must enable multi-factor authentication. Security updates should not be delayed. In particular, devices more than five years old that can no longer receive security patches should be considered for replacement, it said.
The role of financial experts is also changing. As protecting assets emerges as a task as important as growing them, advisers need to create an environment where clients can safely report suspicious signs of fraud rather than hiding them out of shame. The view is that sharing warning signs early can limit the spread of damage.
Ultimately, the warning centers on how AI has made financial scams more sophisticated and persuasive. Judging legitimacy from the content of phone calls, texts and emails alone is no longer viable. The starting point for defending assets, it argues, is redesigning the procedures by which people verify and decide, rather than relying on technology itself.