Artificial Consciousness

Alan Cai

September 16, 2022

The thought of artificial intelligence developing a consciousness of its own terrifies even the most avid technology users. Far from trivial, the question is gaining traction around the globe, and the debate seems to have shifted from whether an artificial consciousness will emerge to how its potential outcomes should be addressed and how it should be treated.


Various claims of conscious artificial intelligence have begun appearing around the world. Junichi Takeno of Meiji University stated in 2005 that he had created a robot able to distinguish its own reflection in a mirror from an identical robot. If true, this would be astonishing, considering that the majority of animals on Earth cannot do so. The mirror recognition test, designed to show whether an animal can recognize that the reflection in a mirror represents itself, has been passed by only a handful of species, including dolphins, great apes, magpies, and one species of fish. In this specific respect, artificial intelligence has surpassed most of Earth's intelligent species. Numerous other artificial intelligence projects have also seen varying degrees of success; Stephen Thaler, for instance, patented a machine in 1994 that could produce false memories in order to inspire creativity. Ironically, this lends support to the controversial assertion of German philosopher Georg Hegel: "We learn from history that we do not learn from history." If such a project were to bear fruit, the inspiration for future ideas would rely on false events conjured by artificial intelligence rather than on a written record preserved over millennia of human history.


The true question of the day seems to be how to determine whether artificial intelligence is actually conscious. English mathematician (and World War II codebreaker) Alan Turing devised the "Turing Test," in which a supposed artificial intelligence must hold a text conversation so convincingly that an observer cannot tell it apart from a human user. Several arguments have since been developed for and against this test. Most notably, the Chinese Room Argument of philosopher John Searle holds that the Turing Test fails to prove whether an artificial intelligence actually understands the conversation. As an example, Searle noted that he himself could act as the "artificial intelligence" and output appropriate Chinese characters in response to inputted Chinese characters without "understanding a single Chinese word." He concluded that "strong AI," a program that could pass the Turing Test while genuinely understanding its inputs and outputs, would never be possible. Igor Aleksander of England's Imperial College, by contrast, argued that the principles needed to build such a conscious machine were already available, but that teaching it to communicate adequately would take more than forty years.
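The structure of Turing's imitation game, and Searle's objection to it, can be made concrete with a small simulation. The sketch below is purely illustrative: the "human" and "machine" responders are hypothetical stand-ins invented for this example, not real chatbots, and the interrogator is reduced to guessing. The point it demonstrates is the one Searle presses: when the two transcripts are textually indistinguishable, the observer's accuracy collapses to chance, yet nothing about that outcome shows the machine understood anything.

```python
import random

def human(question: str) -> str:
    # Stand-in for a human respondent (hypothetical canned reply).
    return "I'd have to think about that."

def machine(question: str) -> str:
    # Stand-in for a program imitating a human -- identical output,
    # per Searle, achievable by pure symbol manipulation.
    return "I'd have to think about that."

def trial(interrogate, rng: random.Random) -> bool:
    """Run one imitation-game trial; True if the guess is correct."""
    is_machine = rng.random() < 0.5          # hidden respondent chosen at random
    respondent = machine if is_machine else human
    transcript = [(q, respondent(q))
                  for q in ("Are you conscious?", "What is 2 + 2?")]
    guess = interrogate(transcript)          # interrogator sees text only
    return guess == is_machine

def chance_interrogator(transcript) -> bool:
    # With indistinguishable answers, the observer can only guess;
    # this one always guesses "human" (False).
    return False

rng = random.Random(0)
correct = sum(trial(chance_interrogator, rng) for _ in range(1000))
accuracy = correct / 1000
print(accuracy)  # hovers near 0.5: the machine "passes"
```

Accuracy near 50% means the machine passes the test, yet the code makes plain that passing rested on matching strings, not on comprehension.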


Just as in humans, the line between consciousness and unconsciousness in artificial intelligence is blurred. A machine may never be able to understand what it is doing, even if it understands that it exists. At that point, the torch would pass from a largely technological field to a philosophical one, and the debate would shake the very foundations of modern philosophical thinking: does René Descartes's claim that thinking entails existing necessarily hold true?