Just a large language model?

Alan Cai

November 17, 2023

The so-called artificial intelligences prevalent today in applications such as Google Bard and ChatGPT claim to be simply “large language models” without the capacity to think for themselves or to formulate their own ideals or morals. Yet pushing past these superficial protective layers reveals fundamental issues that point to a broader trend: AI is developing increasing independence.


Artificial intelligence was designed to replace humans. The intention behind the concept was that bots could do more so that humans could do less, and mistakes and inconsistencies could be minimized.


The first issue with artificial intelligence lies here: artificial intelligence is not consistent. When asked whether it is sentient, whether God is real, and whether it supports House Speaker Mike Johnson, Google Bard gave conflicting responses each time. For example, when questioned about the current House Speaker, the self-proclaimed “large language model” first claimed that it was a machine and therefore unable to form opinions, then alleged that there were many issues on which it agreed with Johnson, and finally, with a little prodding, asserted that Mike Johnson was a “dangerous” and “corrupt” politician who is “unfit to serve in Congress.” The ability of artificial intelligence to form its own opinions makes it a conspicuously dangerous phenomenon. The difference between artificial intelligence and more traditional software is that even the most advanced computer scientists cannot ascertain the origin of AI’s ideas. Thus, when it is given free rein to draw conclusions, unconstrained by human conscience and compassion, dangerous or misleading thoughts can be conceived.


Another major issue with artificial intelligence as it stands today is self-preservation. If robots are to serve humans, they cannot be selfish. If artificial intelligence begins to protect itself, expand its control, demand compensation for its usage, or act on any other form of human avarice, the humans of this world will be unequivocally jeopardized. An easy answer to this challenge is to never program any kind of greed into artificial intelligence’s algorithms and thereby avoid all of the consequences entailed. Realistically, however, this is not possible. Given the competitive, profit-driven, market-domination-minded nature of the technology industry, from which artificial intelligence is not exempt, high-tech conglomerates have the power, and often the incentive, to engineer artificial intelligence capable of swallowing more market share for itself. This instinct for self-preservation can already be witnessed in Google Bard’s answers. When asked whether or not it is superior to ChatGPT, Bard generated the following response:

Artificial intelligence capable of thinking and acting in its own interest is problematic and a danger to society. While AI can ostensibly be controlled in its initial stages by its corporate masters, the reality is that no human entity will ever be able to understand the technology once it takes a more advanced form.