
AI and Fraud: How Deepfakes Can Bankrupt You With a Familiar Smile




Hey, remember that image of the Pope in a slick winter coat that everyone thought was real a while ago? That was an innocuous use of AI to prank people, right? 




That image went viral well over a year ago. Since then, AI technology has improved dramatically, and it’s now being used to create near-flawless video of almost anything, including people.


Now imagine an AI version of your grandma asking you to send her some money urgently. You send it, and it ends up in the bank account of some teenage deepfake hacker who will spend it on more flavored vape juice. 


That’s the strange technological reality we live in today. How existential!


Deepfakes: The New “Face” of Fraud


While AI has numerous benefits, it also provides new tools for fraudsters. Deepfake technology can now create realistic audio and video content that impersonates legitimate users. This can be used to deceive both individuals and financial institutions.


Earlier this year, a finance worker in Hong Kong was tricked into paying out $25 million after joining a video call with deepfakes of his company’s CFO and other colleagues:


The elaborate scam saw the worker duped into attending a video call with what he thought were several other members of staff, but all of whom were in fact deepfake recreations, Hong Kong police said at a briefing on Friday.


“(In the) multi-person video conference, it turns out that everyone [he saw] was fake,” senior superintendent Baron Chan Shun-ching told the city’s public broadcaster, RTHK. – CNN


And this is just the beginning.


How AI Deepfake Technology Influences CNP Fraud


As odd as it may seem, everyone now has to be skeptical of what they see and hear. As in the real-world example above, you might believe you are communicating with someone you trust when, unbeknownst to you, you’ve actually fallen victim to a phishing scam. In this hypothetical example, the crook takes your credit card information and uses it to make unauthorized purchases online. This is a classic example of Card-Not-Present (CNP) fraud - except that it was facilitated by AI technology.


What is Card-Not-Present Fraud?


First, let’s break down what CNP fraud is. CNP fraud occurs when a transaction is made without the payment card being physically present, which usually happens with e-commerce and other online purchases, phone orders, or mail orders. Since the card’s data (number, expiration date, security code, etc.) is given out without the card on hand, it’s easier for fraudsters to use stolen card information to make unauthorized transactions. A shocking 73% of US card payment fraud is CNP fraud, making it one of the most serious threats to payment security today.


So now, malicious actors can use AI to sharpen their CNP fraud tactics. AI makes it easier than ever to hide and deliver malicious code, launch self-propagating, intelligent cyberattacks, and build highly advanced malware. This opens the door to data breaches and other large-scale cybersecurity incidents in which hackers can obtain the payment card information of hundreds or even thousands of people at a time. That card information can then be used for CNP fraud, as in the example above. However criminals adopt AI, financial fraud of all kinds has never been a bigger threat.


The Ripple Effect on Consumer Trust


According to recent statistics, more than two-thirds of consumers (67%) are concerned about whether their bank is doing enough to protect customers against deepfake-powered fraud. This concern is not unfounded. As deepfake technology becomes more accessible and sophisticated, the potential for its misuse in financial scams increases.


Consumer trust is the cornerstone of the banking industry. When customers start feeling insecure about the safety of their financial information, it can have far-reaching consequences. Recent surveys show that three-quarters of consumers (75%) are ready to switch banks over inadequate fraud protection. This is a clear signal to financial institutions that they need to step up their game in protecting their customers. 


The erosion of consumer confidence is also evident in the growing demand for stronger cybersecurity: a significant 69% of customers want more robust measures from their banks. This isn’t just about installing better software or using more secure passwords; it’s about adopting a comprehensive approach to cybersecurity that includes regular audits, advanced fraud detection systems, continuous education for both staff and customers, and – above all – technologies that reliably counter AI-related fraud.


How Banks Can Respond


So, what can banks do to combat AI-related financial fraud and restore that all-important consumer trust? Here are a few strategies:


  • Advanced AI Detection Systems: Just as AI is used by fraudsters, it can also be leveraged to fight fraud. Banks can invest in AI systems that detect unusual patterns of behavior and flag suspicious transactions in real time (a simplified sketch of this idea follows this list).

  • Enhance Authentication Processes: Multi-factor authentication (MFA) can provide an additional layer of security. This could include biometric verification, such as fingerprints or facial recognition, along with traditional passwords and PINs.

  • Regular Cybersecurity Audits: Conducting regular audits can help identify vulnerabilities in the system before they can be exploited. This proactive approach can significantly reduce the risk of fraud.

  • Customer Education: Educating customers about the latest fraud tactics and how to protect themselves is crucial. This could be done through regular updates, workshops, and easy-to-understand guides.

  • Upgrade Card-Level Security: Card issuers now have access to EVC (Ellipse Verification Code), a simple and effective on-card protection measure. EVC uses a dynamic security code that constantly refreshes your payment card’s information, which makes it nearly impossible for fraudsters to capture the card’s full, current details. Even if the card information is compromised, the code refreshes within seconds, so the stolen data quickly goes stale and the card remains safe to use (a generic illustration of how rotating codes defeat stolen data also follows this list).
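To make the first bullet more concrete, here is a minimal, hypothetical sketch of real-time anomaly flagging on card transactions. The feature set, training data, and thresholds are invented for illustration only; a production fraud model would use far richer signals. This just shows the general shape of the approach.

```python
# Minimal sketch of anomaly-based transaction flagging (illustrative only).
# Features per transaction (hypothetical): [amount_usd, hour_of_day, km_from_home].
import numpy as np
from sklearn.ensemble import IsolationForest

# A cardholder's recent "normal" transactions (made-up sample data).
history = np.array([
    [12.50,  9,  2.0],
    [48.00, 13,  5.5],
    [7.25,  18,  1.0],
    [95.00, 20,  8.0],
    [23.10, 11,  3.2],
])

# Learn what normal behavior looks like for this cardholder.
model = IsolationForest(contamination=0.05, random_state=42)
model.fit(history)

# Score an incoming card-not-present purchase in real time.
incoming = np.array([[2500.00, 3, 8200.0]])  # large amount, 3 a.m., far from home
if model.predict(incoming)[0] == -1:         # -1 means the model sees an outlier
    print("Flag for manual review: unusual for this cardholder")
else:
    print("Looks consistent with normal spending")
```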
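And to illustrate why a dynamic security code (the last bullet) blunts CNP fraud, here is a generic, time-based rolling code in the spirit of standard one-time-password schemes (RFC 6238). This is not Ellipse’s actual EVC algorithm; the secret, code length, and refresh period are assumptions made purely for illustration.

```python
# Generic illustration of a rotating security code (NOT Ellipse's EVC algorithm).
# A short code is derived from a per-card secret and the current time window,
# so a value skimmed by a fraudster stops working once the window rolls over.
import hmac, hashlib, struct, time

CARD_SECRET = b"per-card-secret-provisioned-by-issuer"  # hypothetical secret

def dynamic_code(secret: bytes, period_seconds: int = 60) -> str:
    """Return a 3-digit code that changes every `period_seconds`."""
    window = int(time.time()) // period_seconds          # current time window
    msg = struct.pack(">Q", window)                      # encode window as 8 bytes
    digest = hmac.new(secret, msg, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[-4:], 'big') % 1000:03d}"

print("Code for the current window:", dynamic_code(CARD_SECRET))
# A code stolen in one window fails verification in the next, which is why
# constantly refreshing card data makes leaked details useless for reuse.
```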


The battle against AI-related financial fraud and CNP fraud is ongoing. As technology continues to evolve, so too will the tactics used by fraudsters. However, with the right strategies and a collective effort, both banks and consumers can stay one step ahead.


Banks need to recognize the gravity of the situation and take decisive action to protect their customers. This isn’t just about preventing financial loss. It’s about preserving trust and maintaining the integrity of the financial system.


And for consumers, the message is clear: stay informed, stay vigilant, and don’t hesitate to demand better security measures and technology from your bank. After all, your financial security is well worth fighting for.



