
Capital One, Mastercard leaders are looking for a return on their AI efforts


Despite the initial hype around generative AI use cases that resemble ChatGPT—for example, a chatbot that answers employee questions—banks are now evaluating applications based on ROI considerations.

They may not necessarily be the flashiest applications, but the rush to find viable use cases will depend on establishing the internal assessment and governance processes to make them happen, experts said at Money20/20 this week.

“The real question is one of timing of use cases,” including the readiness of an institution’s technology and data stack and the availability of internal talent, said Prem Natarajan, chief scientist and head of enterprise AI at Capital One. “Everybody thinks they’re ready for customer interactions with this. I’m not sure how ready everyone is.”

For some companies, the business case for “under the hood” AI use cases is clear. Mastercard is focused on protecting the transaction environment through AI, including fighting fraud, said Greg Ulrich, the company’s executive vice president and chief AI and data officer.

“We are trying to make the transaction environment more secure. How do you improve fraud-pattern detection?” he said. “How do we make the ecosystem smarter? It’s a referral engine that helps our partners.”

The payments network is also using the technology to improve customer experiences through personalization and is working on ways to implement AI to improve internal operational efficiency, according to Ulrich. Internal use cases include coding, driving efficiency for engineers, and customer support—examples that exploit AI’s ability to make sense of unstructured data.

Similarly, payments firm TSYS plans to use AI to fight fraud and cyberattacks by exploiting the technology’s ability to detect anomalous transactions and perform real-time scoring, said Dondi Black, executive vice president and director of product at TSYS.

Firms should be diligent in evaluating the effectiveness of testing capabilities as well as the volume and quality of their data, Natarajan said. Firms must also be able to properly observe and monitor their AI models.

Build vs. Buy

The institutions present agreed that building everything in-house may not be the most viable option.

“If those technology requirements, which include data, are widely available in the world, and there’s nothing about your data that makes it unique, then there’s no reason to build,” Natarajan said.

Companies should also evaluate, when making build-versus-buy decisions, whether the solution would be something the company intends to differentiate itself with.

“I don’t think you can build differentiation by becoming a system integrator that integrates three or four different solutions from somewhere else,” he said.

Companies may also want to consider how best to provide privacy assurances to customers.

“There may be other assurances you want to provide to your users about their data or the quality of the solutions, or to be able to answer questions about those solutions and do a deep inspection of those things,” Natarajan said.

For Mastercard, it all comes down to data sensitivity.

“If there’s not really sensitive information that you’re using out there, then generally we’re trying to figure out if there’s an existing solution that we can use,” Ulrich said.

Governance models

Companies looking to launch AI use cases should have a clear governance model to ensure testing parameters are consistent, ethical principles are applied, and firms cast a wide enough net in their advisory efforts.

“It’s really important to have a governance framework in place for this very early on, and have protocols in place about how you’re going to test this, how you’re going to think about the challenges of implementing things like AI, because otherwise it can run into a lot of issues along the way and you can stumble on internal processes,” Ulrich said.

TSYS has established a center of excellence to set standards including data protocols.

“Making sure that the data is complete … not only will it generate a better output in terms of model performance, but it will speak directly to how you inherently maintain confidence in the model and manage model bias,” said Black, who noted that companies need to continuously retrain AI models to ensure their effectiveness.

TSYS, in its governance approach, prioritizes the explainability of AI and how decisions are made, with legal and privacy teams at the table, she said.

Meanwhile, Mastercard has established an AI and data council, chaired by Ulrich and the company’s chief privacy officer, to ensure that all relevant stakeholders, including technology, as well as heads of legal, procurement and business units, are consulted on AI strategies, he said. The group is particularly focused on governance, privacy and bias detection. In turn, employees are kept abreast of the risks and opportunities of AI.

Capital One’s Natarajan suggested that privacy, ethical considerations and risk management must be addressed at the outset of any AI rollout and built into processes.

“These are not additional fixes at the end of the deployment cycle. You have to start in the design phase,” he said. Key questions to address involve representativeness and completeness of data, as well as validation and risk management approaches.

It’s also important, he said, to build relationships with AI researchers at universities who are working on the biggest problems.

He pointed to the bank’s multi-year strategic partnerships with universities. Examples include its role in helping establish the Center for Responsible AI and Decision Making in Finance at the University of Southern California, which was supported by a $3 million gift from Capital One, and a $3 million investment to support the initiatives of Columbia University’s Center for AI and Responsible Financial Innovation.

“The two biggest risks are rushing … and holding back, so you have to find the balance,” he said.

Lowering the cost of developing AI applications will be a boon for companies, Natarajan said.

“Your ongoing cost is really the cost of inference (basically the ongoing cost of running the model), and there’s been at least a two-order-of-magnitude reduction in that cost over the last 18 months, thanks to work at Nvidia and many other places,” he said.