Google Bard as the Embodiment of Inaptitude

Alan Cai

February 17, 2023

Google’s artificial intelligence chatbot, known as Bard, has faced design challenges since its inception. The once-hyped competitor to OpenAI’s ChatGPT has proven prone to factual and wording inaccuracies.

During a demonstration by Google, Bard was posed the question, “What new discoveries from the James Webb Space Telescope can I tell my 9 year old about?” In response, the artificial intelligence generated a three-bullet-point answer, including one which stated, “JWST took the very first pictures of a planet outside of our own solar system.” Here, JWST stands for James Webb Space Telescope, the new high-resolution, high-sensitivity space telescope launched into space on Christmas Day of 2021 by the National Aeronautics and Space Administration (NASA). The mission’s intent was to observe distant objects not observable by the Hubble Space Telescope.

From the perspective of a human English speaker, the auto-generated response reads as an assertion that the first exoplanet ever to be imaged was photographed by the telescope. This is, as immediately observed by several experts and amateurs, false: the first picture of an exoplanet was taken in 2004. However, the James Webb Space Telescope did in fact capture the first image of the previously unphotographed planet HIP 65426 b. This alternative meaning was likely what Google Bard was attempting to express. More precisely, the “first pictures of a planet” was not meant to refer to any planet, but rather to a specific planet, the aforementioned exoplanet.

Google’s parent company, Alphabet, saw its shares drop over 10% following the gaffe, wiping out over $100 billion in market value. In an effort to patch up the holes and recover from the inaccuracies, Google has asked its employees, in a company-wide email, to spend two to four hours of their time manually improving Google’s AI engine. Although Alphabet is able to strategically allocate its hundreds of thousands of employees to linguistically improve the chatbot, these actions undermine Bard’s ability to become a bona fide artificial intelligence. An intelligence is artificial only if its responses are generated from its own pool of knowledge; if a chatbot merely regurgitates pre-written human responses, it ceases to be artificial. Furthermore, a reliance on human-authored responses will limit Bard’s scalability as the artificial intelligence field develops: regardless of Google’s sheer employee pool, it would be unable to compete with more advanced technologies capable of generating far larger volumes of accurate answers instantly. It is also important to note that Bard draws its material from Google’s search index, which inevitably introduces certain biases and factual inaccuracies. By asking employees to improve Bard, Google is effectively attempting the futile endeavor of indirectly policing the internet.

Although the reasoning behind playing to one’s strengths is quite sound, Google may face its downfall in the current artificial intelligence arms race. Google’s advantage, as demonstrated by its rather quick development of Bard, is its large employee base. However, the technology industry has proven time and time again that sheer size can be overcome by entities capable of creating more effective designs and building a more solid customer base. Thus far, ChatGPT, created by OpenAI and backed by Microsoft, has a clear edge in the artificial intelligence arms race. Nevertheless, the field will soon change as its maturing frontiers further develop and expand.

For reference, see the linked official Google announcement. The Brutus Journal is not responsible for content from external sites.