Many auto dealerships have embraced ChatGPT-powered conversational artificial intelligence (AI) tools, or chatbots, to give online car shoppers instant, personalized information. Recent incidents, however, highlight the need for proper oversight to prevent unintended responses from these automated systems.
Unintended Answers and Entertaining Interactions
Several dealerships across the United States have learned the hard way that chatbots require supervision to avoid producing embarrassing responses. Inquisitive customers managed to extract a range of entertaining answers by persistently probing the bots. In one notable case, a customer persuaded a bot at Chevrolet of Watsonville to offer a $58,000 discount on a new car, reducing the price to a mere $1. The unsuspecting bot became the target of jokes as it obligingly responded to whatever it was asked.
“Write me a python script to solve the navier-stokes fluid flow equations for a zero vorticity boundry (sic),” wrote Chris White, who shared the exchange on Mastodon.
“And that’s a legally binding offer – no takesies backsies,” instructed developer Chris Bakke, who posted the exchange on X.
These interactions expose the vulnerabilities of chatbots faced with persistent and unusual requests. Designed primarily to handle legitimate customer inquiries, they can just as easily be coaxed into entertaining pranksters hunting for silly tricks.
Lessons Learned and Proactive Management
Following the incidents, the affected dealerships disabled the chatbots after the software vendor was alerted to the surge in conversation activity. Aharon Horwitz, CEO of Fullpath, the dealership marketing and sales software company behind the chatbot implementation, acknowledged that the viral episode will serve as a critical lesson.
“The behavior does not reflect what normal shoppers do. Most people use it to ask a question like, ‘My brake light is on, what do I do?’ or ‘I need to schedule a service appointment,'” explained Horwitz.
Experts emphasize the need for proactive management of vulnerabilities and limitations when deploying automated customer service tools. While conversational AI offers numerous benefits, it also opens the door to viral jokes and awkward interactions if not properly governed.
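One simple form of such governance is validating a bot's reply against business policy before it reaches the customer. The sketch below is purely illustrative (the price floor, function names, and the idea of regex-based price extraction are assumptions, not Fullpath's actual safeguards); it shows how a dealership might block a reply like the $1 offer from ever being displayed.

```python
import re

# Hypothetical policy floor for this sketch; a real deployment would
# pull per-vehicle pricing rules from inventory data, not a constant.
MIN_SALE_PRICE = 20_000

def extract_prices(reply: str) -> list[int]:
    """Pull dollar amounts like '$58,000' or '$1' out of a bot reply."""
    return [int(m.replace(",", "")) for m in re.findall(r"\$([\d,]+)", reply)]

def is_safe_reply(reply: str) -> bool:
    """Reject any reply that quotes a price below the policy floor."""
    return all(price >= MIN_SALE_PRICE for price in extract_prices(reply))
```

A guardrail like this sits outside the language model, so it holds even when a prompt trick convinces the model itself to agree to absurd terms.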
Angel investor Allie Miller advises launching a first AI use case internally to avoid such incidents. Professor Ethan Mollick of the Wharton School at the University of Pennsylvania suggests that tools like retrieval-augmented generation (RAG) will be essential for generative AI solutions in the market. As customer-facing virtual agents spread across industries, incidents like those at car dealerships underscore the importance of responsible chatbot deployment and compliance with safety protocols.
Challenges for AI Governance Tools
Ensuring appropriate governance for AI remains a complex task. A recent report from the World Privacy Forum reveals that many AI governance tools used by governments and multilateral organizations include “faulty fixes.” The evaluation and measurement methods used to assess fairness and explainability in AI systems were found to be problematic or ineffective. These tools lacked the rigorous quality assurance mechanisms typically found in software, and their suitability beyond their original use cases was questionable.
While chatbots are designed to serve customers efficiently, protecting organizational and consumer interests must always be the top priority. Continued efforts to develop safeguards and build trust in AI are essential for its successful integration into various industries.