As Zelle fraud and scam victims tell their stories of losing large sums of money and Congress continues to scrutinize the peer-to-peer payment network, banks are under pressure to improve how they detect fraud and scams and how they handle customers' claims.
Some of this will take human effort — for instance, better training of human fraud investigators and customer service reps. But technology that detects, flags and reports on fraud and generative AI that gathers customer and account history data could help humans investigate and handle customer fraud claims more smoothly.
Banks have long used traditional forms of AI, such as machine learning, to find unusual transaction patterns that indicate fraud. The use of generative AI to research claims is in the early stages.
Scope of problem
If you ask Early Warning Services, the company that operates the Zelle network, it will tell you there is no fraud problem. While the volume of transactions on the network rose 28% last year, reports of fraud and scams dropped nearly 50%, to 0.05% of transactions, according to Ben Chance, chief fraud risk management officer at the company. He would not share dollar amounts, but said the average payment on Zelle is $275 and total dollar exposure was lower in 2023 than in 2022. (The Consumer Financial Protection Bureau’s complaint database shows a spike in Zelle fraud complaints in 2022.)
But even a rate of 0.05%, applied to the roughly 2.9 billion transactions Zelle says it handled last year, still translates to about 1.45 million cases of reported fraud and scams per year. And fraud is typically underreported, because people feel embarrassed and don’t always know whom to report it to.
“I question who’s determining whether or not a transaction is fraudulent,” said Stephanie Tatar, founding attorney at The Tatar Law Firm. “Are they reporting that statistic based on what they perceive as fraud and what they’ve determined to be fraud? Or is that the percentage of reported fraud from consumers?”
Tatar said she and her colleagues receive daily calls from consumers who detail their experiences with scams that result in unauthorized Zelle transactions, and banks that deny that fraud occurred.
“I would encourage the banks and Zelle to be completely transparent on how they’ve come to that determination,” she said. “There is a shroud of secrecy that permeates all P2P issues, but particularly with Zelle.”
Chance said Early Warning does not think Zelle fraud is underreported. “We think we do a ton of work with respect to consumer education,” he said. “We think we’ve got a really good mechanism for reporting every fraud and scam type and really detailed classifications on those fraud and scam types.”
Finding fraud
In detecting fraud and scams, banks have to balance the need to deter and prevent attacks against the need to serve legitimate customers who do something unusual that might trigger a fraud alert.
“This is the false positive aspect,” said Donna Turner, advisor in residence at EY, former fraud executive at Bank of America and former chief operating officer at Early Warning. “How many good customers are impacted in your quest to find the fraud? This, along with the costs of the solutions, directly drives the operating expense you’re willing to invest to achieve that outcome.”
Another challenge is that fraudsters often use synthetic identities — Frankenstein personas based on bits and pieces of real, but stolen, customer data as well as completely made-up data elements. According to Chance, the rate of synthetic identities is low because Early Warning only allows regulated financial institutions with full know-your-customer, anti-money-laundering and Bank Secrecy Act controls in place to use the network.
More often, he said, people are duped into being money mules.
“They don’t know necessarily that their account is being used for a particular type of scam, but someone’s approached them and said, ‘You’re going to receive a payment for $1,000 into your account over Zelle, and when you receive it, I need you to do this with it over ACH or over wire or write a check and you get to keep $200,’ just as an example,” Chance said.
This sometimes attracts people who are young, not financially savvy or looking to make a quick buck. But to the bank, that customer looks legitimate.
“The very first time it’s really hard to identify a bad actor who’s gone from positive intent to negative intent,” Chance said. But it can take time for a consumer to realize they’ve been scammed and report it.
Once one consumer reports, say, a romance scam, though, Early Warning will shut down all future payments from the perpetrator, he said.
JPMorgan Chase uses machine learning models to identify potential fraud and scam transactions. (Banks and Early Warning Services use the word “fraud” only to describe account takeover, where a fraudster broke into a user’s account to make a payment. They use the word “scam” to describe cases where customers are tricked into authorizing a payment.)
The bank’s models look at users’ login history, IP addresses, device IDs and geographic location to detect unusual behavior that indicates fraud. If the fraud model gets a hit, the bank may require added authentication, such as a one-time passcode, a driver’s license or a debit PIN. For very risky transactions, the bank will slow the payment down and possibly call the customer to make sure they want to make it. Where transactions are almost certainly fraud, the bank will decline to make the payment.
The bank’s scam model looks at the relationship between the customer, the sender and the recipient to try to deduce if something weird is going on. It uses Zelle Risk Insights for signs that, for instance, a recipient has recently changed their phone number or email address. Risky transactions are subject to dynamic limits that reduce the size of payment permitted based on how high the risk is.
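Chase did not detail its decision logic, but in general terms this kind of risk-based decisioning maps a model score to an escalating response and a shrinking payment limit. The sketch below, with made-up thresholds and a simple limit formula rather than the bank's actual rules, shows the shape of the idea:

```python
# Illustrative sketch of risk-based decisioning for a P2P payment.
# The score thresholds, action names and dynamic-limit formula are
# hypothetical examples, not any bank's actual model.

def decide_action(risk_score: float, base_limit: float = 2500.0) -> dict:
    """Map a fraud-model risk score (0.0 to 1.0) to a response."""
    if risk_score >= 0.95:
        # Near-certain fraud: refuse to send the payment.
        return {"action": "decline", "limit": 0.0}
    if risk_score >= 0.80:
        # Very risky: hold the payment and call the customer.
        return {"action": "hold_and_call_customer", "limit": 0.0}
    if risk_score >= 0.50:
        # Moderately risky: require step-up authentication (such as a
        # one-time passcode) and shrink the permitted payment size.
        return {"action": "step_up_auth", "limit": base_limit * (1 - risk_score)}
    # Low risk: allow the payment at the normal limit.
    return {"action": "allow", "limit": base_limit}

print(decide_action(0.60))  # step-up auth with a reduced dynamic limit
print(decide_action(0.97))  # decline
```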
Behavioral biometrics, which some banks use, can pick up cues that a user is on the phone with a scammer, Turner said.
Banks are constantly testing the latest fraud mitigation tools and deciding which ones are worth implementing, she said.
“It’s just rapid fire,” she said. “As long as you’re feeding the fraud into it, the machine is learning what fraud or in this case scams look like, the better they’re gonna be able to do this.”
Data sharing
Some experts believe that if banks shared more data about the fraud and scams they see, all financial institutions would have the knowledge to block fraud schemes.
According to Early Warning, this is already happening. Zelle member financial institutions report cases of fraud and scams to Early Warning, Chance said, and the company creates a fraud investigation case and shares it with all members.
If a bank determines that a customer is a criminal, it will shut down that customer’s access to Zelle and all of the tokens associated with the account, Chance said. If multiple banks report that a Zelle user is a scammer, Early Warning will restrict that person and all tokens associated with the account, he said.
These reports come in the form of an automated file that gets extracted out of financial institutions’ core case management systems and sent to Early Warning. The network’s 2,100 financial institution members have all agreed to use the same file format for this information exchange, Chance said.
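Early Warning has not published that file layout, but a record in a standardized exchange like this could plausibly carry fields along the following lines; the field names and values here are invented for illustration:

```python
# Hypothetical sketch of a standardized fraud/scam report record,
# illustrating the kind of fields a shared exchange file might carry.
# Field names and values are invented; Early Warning's actual format
# is not public.
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class FraudScamReport:
    reporting_institution: str   # bank filing the report
    report_type: str             # "fraud" (account takeover) or "scam"
    scam_classification: str     # e.g., "imposter", "romance"
    recipient_token: str         # phone/email token that received funds
    amount: float
    transaction_timestamp: str
    reported_timestamp: str

report = FraudScamReport(
    reporting_institution="Example Bank",
    report_type="scam",
    scam_classification="imposter",
    recipient_token="+1-555-0100",  # placeholder token
    amount=3500.00,
    transaction_timestamp="2024-04-02T14:03:00Z",
    reported_timestamp=datetime.now(timezone.utc).isoformat(),
)

print(json.dumps(asdict(report), indent=2))  # one record in the exchange file
```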
In cases of fraud, customers are reimbursed, Chance said. Early Warning also mandates that banks reimburse customers who fall victim to qualifying imposter scams. (It won’t define “qualifying” so as not to inform fraudsters, the company says.)
Early Warning also provides risk insights on transactions. One attribute of this risk score, Chance said, is whether a customer has ever been reported by any institution in the network as having received a fraudulent or scam payment, and how many times. Chance declined to share other indicators that go into the score because he didn’t want to provide a road map that fraudsters could exploit.
“Sharing of data is always a good thing,” said Satish Lalchand, principal in Deloitte’s U.S. payments and artificial intelligence practice. Simple data points such as scammers’ device IDs can be helpful, he said. However, there’s the risk that a fraudster could sell a “bad” device to a good person, so if banks automatically block the device, that could harm a legitimate customer.
Dispute resolution
One big difference between fraud on the Zelle network and credit and debit card fraud is that the card industry has well-established mechanisms customers can use to dispute transactions and get reimbursed. Zelle users report calling their banks dozens of times and getting stuck in frustrating loops. Claims can be denied without explanation and the consumer is out however many thousands of dollars were swept out of their account.
In her work, Tatar receives time-stamped fraud investigation account notes, call transcripts and sometimes a deposition of the bank employee who conducted the fraud investigation. Typically, a consumer will call their bank, say they’ve been defrauded, and the bank will ask for more information in the form of a phone interview, she said.
“They take their notes and then they send it to what they will say is a priority fraud department investigation,” Tatar said. But “what the evidence shows in most cases is that those investigations are conducted in such a cursory manner that they are not considering evidence that just screams fraud,” she said.
For instance, if a customer has never made a Zelle transaction before and suddenly has several Zelle transactions, that should be a red flag, she said. Or if a customer has never made a Zelle transaction of more than $200 and suddenly the person is making transactions in the thousands of dollars, that should be a sign that something is up.
“That Zelle history for every consumer is a really good place for the banks to start,” Tatar said. “Based on my experience, it doesn’t seem to be considered nearly as much as it should be or at all in most instances.”
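Tatar's examples amount to simple history-based checks. A minimal sketch of such red-flag rules, with illustrative thresholds rather than any bank's actual policy, might look like this:

```python
# Minimal sketch of the history-based red flags Tatar describes.
# Thresholds and field names are illustrative only.

def zelle_red_flags(history: list[float], new_amounts: list[float]) -> list[str]:
    """Compare new Zelle payments against a customer's prior Zelle history."""
    flags = []
    if not history and len(new_amounts) > 1:
        flags.append("first-ever Zelle use, multiple transactions at once")
    if history and new_amounts:
        prior_max = max(history)
        if max(new_amounts) > 5 * prior_max:
            flags.append("payment far above customer's historical maximum")
    return flags

# A customer who has never used Zelle suddenly sends several payments:
print(zelle_red_flags(history=[], new_amounts=[900.0, 950.0, 1650.0]))
# A customer whose largest prior payment was $200 suddenly sends thousands:
print(zelle_red_flags(history=[45.0, 120.0, 200.0], new_amounts=[2400.0]))
```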
This was the case with Margaret Menotti, a consumer who had an account at Bank of America for 40 years and was the victim of an imposter scam by criminals who looked and acted like bank staff, down to the caller ID and hold music. She had never used Zelle before, yet when fraudsters drained $3,500 out of her account through Zelle into a different Bank of America account, that lack of history did nothing to stop the payments.
At the May Senate hearing on Zelle fraud, consumers including Maryland resident Anne Humphreys shared somewhat similar stories of falling victim to an imposter scam, immediately calling their bank to report it and being told their case wasn’t really fraud.
“Trying to recover from this crime has proven to be almost as traumatic as the crime itself,” said Humphreys, a Wells Fargo customer. “There seems to be no recourse for our loss.”
In Humphreys’ case, scammers convinced her that her brother had just been in a traffic accident and faced arrest for being on his phone while driving. She sent $3,500 via Zelle for what she thought was his bail bond. When her brother came home and she realized she’d been duped, she reported the scam to Wells Fargo immediately. She called multiple times over the course of a month, and then was told her claim was denied.
“They said I had processed the transaction successfully and there was no fault found,” Humphreys said. “I requested documentation for their decision, and they sent me paperwork in May that showed the transaction from my end, but nothing about where it went.”
In a statement, Wells Fargo said, “This is a heart-breaking scam — no matter how the victim sends the money to a scammer — and customer education is essential to preventing it. We work hard every day to detect and prevent fraud and scams, as well as arm our customers with information to protect themselves.” According to the bank, its security measures detected and blocked more than a million transactions in 2023 before they could potentially harm customers.
Tatar has seen an uptick in denied fraud claims in her own practice and among colleagues over the past five years, not just with Zelle, but in P2P fraud in general.
“You hear that only 0.05% of Zelle transactions are fraudulent,” Tatar said. “If the number is truly that small, then [the banks] should have plenty of time to train a significant number of individuals to do really accurate, reasonable, thorough investigations into every single Zelle transaction.”
Such investigations, she said, would review the customer history and look at who the receiver is and how long they’ve had an account at their bank.
“When I look at the investigations that are conducted, I often see that five minutes were spent on an investigation, five minutes to determine that my consumer client isn’t entitled to a credit of the money that disappeared from their checking account,” Tatar said.
To investigate scam claims, JPMorgan Chase gathers the customer’s transaction history, how they’ve operated on other payment mechanisms and whether or not they have a relationship with the recipient of the transaction. It brings in data from external data aggregators to verify data elements such as phone numbers.
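In outline, pulling that evidence together for a single claim could resemble the sketch below; the data structures, field names and lookups are hypothetical stand-ins, not Chase's systems:

```python
# Hypothetical sketch of assembling evidence for a scam-claim review.
# The structures and fields are stand-ins; they do not reflect any bank's systems.

def build_claim_dossier(claim: dict, transactions: list[dict],
                        aggregator_lookup: dict) -> dict:
    """Collect the evidence an investigator would review for one claim."""
    disputed = [t for t in transactions if t["id"] in claim["disputed_ids"]]
    prior_zelle = [t for t in transactions
                   if t["channel"] == "zelle" and t["id"] not in claim["disputed_ids"]]
    recipient = claim["recipient_token"]
    return {
        "disputed_transactions": disputed,
        "prior_zelle_history": prior_zelle,  # empty history can itself be a red flag
        "known_recipient": any(t.get("recipient") == recipient for t in prior_zelle),
        "recipient_phone_verified": aggregator_lookup.get(recipient, False),
    }

claim = {"disputed_ids": {"tx9"}, "recipient_token": "+1-555-0100"}
transactions = [
    {"id": "tx1", "channel": "card", "recipient": None},
    {"id": "tx9", "channel": "zelle", "recipient": "+1-555-0100"},
]
print(build_claim_dossier(claim, transactions, aggregator_lookup={}))
```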
The bank also uses an Early Warning portal to communicate with the bank that may be hosting a bad actor or a sender to ask for courtesy refunds or to trade information.
Tatar sees the problem as a lack of training of investigators and customer service reps as well as a culture of not caring.
“There’s definitely a pervasive culture of identifying hard working Americans as simply account numbers and dollar signs,” she said. “‘This is what the consumer said, this is what our records show. We believe that they intended to send almost all of the money in their checking accounts to random people.'”
One problem with dispute resolution is that some scams are perpetrated by customers themselves, said Peter Tapling, managing director of PTap Advisory.
“That’s why there needs to be an investigation process,” he said. “You can’t just assume that everyone is telling the truth.”
Consumers expect dispute resolution to happen the way it does when they report card fraud.
“As consumers we were trained by the payment card industry to have an expectation of protections against both unauthorized transactions covered by Reg E and disputes,” Turner said.
But real-time payments are a different animal.
“The sender has the assurance of speed, the receiver has assurance of good funds,” she said. “Will any of these new rails evolve to include protocols for fraud, let alone disputes?
“The experience of the customer trying to recover funds will continue to be impeded by the gap of protocols and rules regarding disputes across channels,” Turner added. “If that employee has empathy for the impacted customer, but does not have a path forward, that’s a hard message to deliver.”
Could generative AI help?
Fraudsters use generative AI to gather data and create phishing, smishing and other types of attacks. Generative AI could potentially be used on consumers’ devices to detect images and messages created by generative AI, Lalchand said.
Google recently demonstrated a large language model, Gemini Nano, that lives on an Android smartphone and sends an alert if it believes the user is being scammed. A Google executive received a call from someone impersonating a bank and asking him to move all his savings to a new bank account. During the call, the phone flashed a notification suggesting that this was a probable scam. The notification also mentioned that banks will never ask someone to move their money to keep it safe.
Some of Deloitte’s bank clients use large language models to summarize customer complaints and compare them to previous complaints to see similarities and to better classify and route the issue. They’re also using generative AI to suggest to the operator what possible outcomes could be, and what specific guidance could be given, Lalchand said.
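One generic way to picture that pattern is to retrieve the most similar prior complaint with plain text similarity, then hand both cases to a language model to summarize and route. The sketch below is illustrative only: the complaint text is invented, the retrieval step uses scikit-learn, and no model is actually called.

```python
# Illustrative sketch of the complaint-triage pattern Lalchand describes:
# find the most similar prior complaint, then build a prompt for an LLM
# to summarize and suggest routing. Complaint text is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

prior_complaints = [
    "Caller posing as bank fraud department convinced customer to send Zelle payment.",
    "Customer's card was charged twice for the same purchase.",
    "Customer paid a marketplace seller over Zelle and never received the item.",
]

new_complaint = ("Someone claiming to be from my bank's fraud team told me to "
                 "move money to a 'safe' account using Zelle.")

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(prior_complaints + [new_complaint])
n = len(prior_complaints)
scores = cosine_similarity(matrix[n], matrix[:n])[0]
closest = prior_complaints[scores.argmax()]

# The new complaint and its closest prior case would then go to an LLM
# with a prompt like this (not sent anywhere in this sketch):
prompt = (
    "Summarize the new complaint, note how it resembles the prior case, "
    f"and suggest a routing category.\n\nNew complaint: {new_complaint}\n"
    f"Most similar prior case: {closest}"
)
print(prompt)
```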
“Banks can certainly create knowledge bases and guides on how to quickly stop a transaction or if money has already gone through, how to recover it, how to guide the customer on all the different next steps,” Lalchand said. “It’s not been adopted by everybody right away, but we see some of the top companies using these capabilities.”
Featurespace recently ran a proof of concept with Pay.UK, which runs Britain’s Faster Payments system, that tested whether cross-financial-institution data could be used to build models that detect more scams with better accuracy, Turner said.
“They showed they could detect more than 50% of the scams at a 5:1 false positive rate — that’s really good,” Turner said. (In other words, roughly five legitimate transactions are flagged for every scam caught.)
Generative AI comes with a range of risks, from hallucination to errors to the potential for bias.
“When people say, ‘Oh, we’re going to use generative AI for payments,’ I’m always a little bit skeptical because what are you going to generate?” Tapling said. “What is the image, sound, text, whatever that you’re going to create out of whole cloth that’s going to make this thing better?”
No technology could replace competent human fraud investigators, in Tatar’s view.
“There is no substitute for a well-trained dispute investigator, a live, breathing, thinking human,” she said. “Technology plays a part and can be used in an assistive manner, but I would be very hesitant to say that it could replace a breathing, living person actually combing through records and talking to people and doing an investigation. When we leave it up to algorithms to make a decision as to whether or not a consumer’s been defrauded, we will see consumer rights being violated left and right.”