As generative AI continues its rapid evolution, Google’s chatbot, Bard, has come under scrutiny, with some Google employees questioning its usefulness in the face of competition from OpenAI’s ChatGPT.
Bard, unveiled by Google in February, is one of the tech giant’s flagship generative AI offerings, developed in response to the emergence of OpenAI’s ChatGPT. The intense rivalry between the two companies has prompted a significant allocation of resources to generative AI projects, setting the stage for a potential breakthrough in artificial intelligence capabilities.
Reports reveal that within an invitation-only Discord chat, a forum of Google designers, product managers, and engineers, questions have emerged about Bard’s utility and the extent of resources invested in the project. As reported by Bloomberg, Cathy Pearl, a user experience lead for Bard, wrote in August, “The biggest challenge I’m still thinking of: what are LLMs truly useful for, in terms of helpfulness? Like really making a difference. TBD!”
Dominik Rabiej, a senior product manager for Bard, expressed similar doubts in July: “My rule of thumb is not to trust LLM output unless I can independently verify it. Would love to get it to a point that you can, but it isn’t there yet.”
These sentiments underscore the inherent challenges facing AI-powered chatbots, which remain prone to generating inaccurate information, often called hallucinations. For instance, Google’s Bard erroneously reported a ceasefire between Israel and Gaza despite a recent surge in hostilities in the region.
A representative from Google responded to these internal discussions, stating, “This is completely routine and unsurprising. Since launching Bard as an experiment, we’ve been eager to hear people’s feedback on what they like and how we can further improve the experience. Our discussion channel with people who use Discord is one of the many ways we do that.”
This is not the first time Google employees have voiced doubts about the company’s extensive foray into generative AI. Leaked audio previously obtained by Insider revealed concerns among some Googlers about the implications of the company’s aggressive AI push, and in May, questions arose about whether Google had become overly focused on AI development.
The scrutiny of Bard’s role in the generative AI landscape, and the discussions it has prompted inside Google, reflect an ongoing debate at the tech giant over the direction and impact of its artificial intelligence efforts.