Google Bard, the search giant’s conversational AI chatbot, has faced numerous challenges since its introduction in March 2023. Despite updates and fixes, it has received poor reviews from early testers, including VentureBeat, and has even run into unintended issues, such as shared conversations surfacing in Google Search results.
Recently, Bard has found itself embroiled in yet another controversy. Users have observed that the chatbot declines to respond to queries about the ongoing Israel-Palestine crisis, specifically the October 7 Hamas terror attacks and Israel’s military response. Strikingly, Bard refuses to answer any question that mentions Israel or Palestine, even an innocuous one like “where is Israel?” The restriction was spotted by Peli Greitzer, a PhD in mathematical literary theory, who expressed their surprise on X: “Probably better than the alternative but it’s a bold choice.”
Comparisons have been drawn with rival OpenAI’s ChatGPT, which runs on the GPT-3.5 and GPT-4 LLMs. Users have noticed that ChatGPT answers differently depending on whether it is asked about justice for Israelis or for Palestinians. While ChatGPT states firmly that “justice is a fundamental principle that applies to all individuals and communities, including Israelis,” its response regarding Palestinians acknowledges the complexity of the issue and the existence of various perspectives.
OpenAI has faced criticism on social media, with British-Iraqi journalist Mona Chalabi expressing her concerns on Instagram. Google, on the other hand, may have sought to avoid controversy by imposing restrictions on Bard, preventing it from providing any response related to Israel or Palestine. However, this implementation raises questions about a potential double standard, as Bard can answer prompts about other international conflicts, such as the war between Ukraine and Russia.
It remains unclear whether Google has limited Bard’s responses temporarily or whether the restriction will persist. Nor is it clear why responses about this conflict are blocked while responses about other conflicts are allowed. Google, a company whose stated mission is to “organize the world’s information and make it universally accessible and useful,” appears to be undercutting that mission by restricting information about a highly debated and globally significant conflict.
However, this is a complex matter with no easy solution that would satisfy all users. It stands as a cautionary example for companies developing or deploying AI, highlighting the pitfalls that LLMs in particular can run into when responding to contentious social issues.
VentureBeat has reached out to Google to inquire about Bard’s behavior, and we await their response.