Wells Fargo has emerged as a frontrunner in the adoption of generative AI in the banking industry. Chintan Mehta, the company’s CIO, recently shared details about the bank’s progress in this area. Speaking at an event hosted by VentureBeat in San Francisco, Mehta said that the bank’s virtual assistant app, Fargo, has already handled an impressive 20 million interactions since its launch in March. He added, “We think this is actually capable of doing close to 100 million or more [interactions] per year as we add more conversations, more capabilities.”
Breaking Barriers in AI Adoption
What makes Wells Fargo’s traction in AI particularly noteworthy is that, unlike many other large companies, the bank has moved beyond the proof-of-concept stage. Given the stringent financial regulations surrounding customer privacy, banks like Wells Fargo were expected to proceed cautiously. Instead, the bank has taken an assertive approach, with 4,000 employees having completed training in Stanford’s Human-Centered AI (HAI) program.
Mehta confirmed that Wells Fargo is already engaged in numerous generative AI projects aimed at enhancing the efficiency of back-office tasks. By leveraging AI technology, the bank seeks to streamline processes and make everyday operations more seamless.
Advancements in AI and Banking Services
During the AI Impact Tour event, Mehta shed light on how enterprise companies can leverage generative AI and large language models (LLMs) to provide more intelligent answers to customer queries. Fargo, Wells Fargo’s virtual assistant, plays a crucial role in this regard. Mehta explained that the app, built on Google Dialogflow and launched on Google’s PaLM 2 LLM, lets customers pay bills, send money, and retrieve transaction details conveniently from their smartphones.
“That app launched recently to all customers and had a million monthly active users during the first month,” Mehta revealed.
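Wells Fargo has not published Fargo’s implementation beyond what Mehta described, but the core interaction with a Dialogflow CX agent typically looks like the minimal sketch below. The project, location, agent, and session IDs are placeholders, and the actual banking actions (bill pay, transfers) would be handled by the agent’s fulfillment logic on the bank’s side.

```python
# Minimal sketch of a detect-intent call to a Dialogflow CX agent.
# All IDs are placeholders; a real call needs GCP credentials and a configured agent.
from google.cloud import dialogflowcx_v3 as df

client = df.SessionsClient()
session = client.session_path(
    project="my-gcp-project",       # placeholder
    location="us-central1",         # placeholder
    agent="my-agent-id",            # placeholder
    session="customer-session-42",  # one ID per conversation
)

query_input = df.QueryInput(
    text=df.TextInput(text="Pay my credit card bill"),
    language_code="en",
)
response = client.detect_intent(
    request=df.DetectIntentRequest(session=session, query_input=query_input)
)

# The matched intent, parameters, and the agent's reply come back in query_result.
for message in response.query_result.response_messages:
    if message.text.text:
        print(" ".join(message.text.text))
```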
Open-source LLMs have also been incorporated into Wells Fargo’s technology stack, with Meta’s Llama 2 model being used for some internal applications. Although open-source models took longer to emerge than proprietary offerings like OpenAI’s ChatGPT, their flexibility and suitability for fine-tuning make them invaluable for specific use cases.
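Mehta did not detail which internal workflows use Llama 2, but running an open-source model of this kind typically starts with something like the sketch below, assuming access to the gated meta-llama/Llama-2-7b-chat-hf weights on Hugging Face; the summarization prompt is purely illustrative, not a disclosed Wells Fargo use case.

```python
# Minimal sketch of running Llama 2 locally with Hugging Face transformers.
# Requires transformers, accelerate, and access to the gated Llama 2 weights.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # places layers across available GPUs automatically
)

# Hypothetical internal task: summarizing a document.
prompt = "Summarize the key obligations in the following policy excerpt:\n<excerpt here>"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```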
Innovation in AI Infrastructure
Wells Fargo has developed an AI platform called Tachyon to support its AI applications. Mehta highlighted that this platform is designed with the understanding that no single AI model can dominate the industry. Additionally, the bank aims to avoid relying on a single cloud service provider and recognizes the challenges that arise when transferring data between different data stores and databases.
“The platform allows for things like model sharding and tensor sharding, techniques that reduce memory and computation requirements,” Mehta explained.
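Tachyon’s internals have not been made public, but the tensor-sharding idea Mehta mentions can be illustrated with a toy PyTorch example: a layer’s weight matrix is split across workers so each one stores and multiplies only a slice, and the partial results are combined.

```python
# Toy illustration of tensor (column) sharding: one linear layer's weight
# matrix is split across two shards, so each worker stores half the
# parameters and computes half of the matrix multiplication.
import torch

torch.manual_seed(0)
hidden, out_features = 512, 1024
full_weight = torch.randn(out_features, hidden)

# Split the output dimension into two shards (in practice, one per device).
w_shard_0, w_shard_1 = full_weight.chunk(2, dim=0)

x = torch.randn(4, hidden)  # a small batch of activations

# Each shard produces a partial output; concatenating them reproduces the
# full layer's result, while no single worker ever holds the whole weight.
y_sharded = torch.cat([x @ w_shard_0.T, x @ w_shard_1.T], dim=-1)
y_full = x @ full_weight.T
assert torch.allclose(y_sharded, y_full, atol=1e-5)
```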
One of the key areas Mehta emphasized is multimodal LLMs, which let customers communicate with virtual assistants through images, videos, text, or voice. Mehta gave the example of a customer uploading a picture of a cruise ship and asking the assistant to “make it happen”; the assistant would then guide the user through the steps needed to book the cruise.
Mehta acknowledged that while LLMs excel in processing text-based information, they still require significant textual context to perform optimally in multimodal scenarios. He expressed interest in the development of LLMs that possess “input multimodality,” allowing them to understand user intent effectively with minimal textual input.
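That gap shows up in how image understanding is commonly bolted onto text-only LLMs today: a separate vision model first turns the image into text, and only that caption, plus the user’s short utterance, ever reaches the language model. The sketch below illustrates the pattern with an off-the-shelf BLIP captioner; the cruise-ship image and prompt wording are hypothetical, and this is not a description of Fargo’s actual pipeline.

```python
# Sketch of the common "caption first, then prompt" pattern for image input.
# The image file, prompt wording, and downstream LLM are all hypothetical.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
)

image = Image.open("cruise_ship.jpg")  # hypothetical customer upload
inputs = processor(images=image, return_tensors="pt")
caption_ids = captioner.generate(**inputs, max_new_tokens=30)
caption = processor.decode(caption_ids[0], skip_special_tokens=True)

# Only text reaches the language model: the generated caption plus the
# customer's short request. A natively multimodal LLM would skip this step.
prompt = (
    f"Image description: {caption}\n"
    "Customer request: make it happen\n"
    "Task: infer what the customer wants and list the next steps to book it."
)
print(prompt)  # this string would be sent to a text-only LLM
```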
Addressing Challenges and Looking to the Future
When it comes to the governance of AI applications, Wells Fargo’s approach is to focus on the specific use cases of each application. The bank places great importance on documentation to ensure transparency and compliance with regulations. Although challenges surrounding security, including cybersecurity and fraud, remain, the bank has made significant progress in addressing governance concerns.
However, Mehta admitted that banking regulation has struggled to keep pace with technological advancements in the field of generative AI. This regulatory lag has created uncertainty and added complexities for financial institutions like Wells Fargo. Mehta highlighted the importance of regulatory changes and their potential implications for the bank’s operations.
“Regulatory changes will have big implications for how Wells Fargo operates in terms of economic factors and addressing additional requirements,” Mehta explained.
Despite these challenges, Wells Fargo remains committed to exploring the possibilities of AI and continues to invest in areas like explainable AI. As the landscape evolves, the bank aims to deliver innovative experiences that meet the changing needs of its customers.