Caption: Image produced by Dall-E 2
However, AI is fundamentally different from humans. It simply takes in thousands of gigabytes of data and finds patterns in that data to perform a task. This is evident in the content it produces. For example, OpenAI's Dall-E 2 creates images. If someone asked it to "draw a sloth in outer space eating a burrito in a Mars space base," it could do so. But look closely and you will see that Dall-E 2 struggles with drawing hands, which generally look wrong and mushed together. This is because the AI imitates the look of hands without understanding the concept that a sloth's hands have three separate fingers.
Furthermore, AI struggles with subjectivity and formulating opinions. AI has no beliefs or experiences; its opinions are based solely on previous data. Its opinions and content can be compared to a smoothie. A smoothie is composed of numerous fruits and vegetables blended together to create something seemingly "new." It seems new only because the smoothie no longer resembles the original pile of fruit. But it still uses the same core ingredients: no new ingredients, no new ideas, only previous ones. AI's opinions are limited by its dataset, and its "creativity" is nonexistent; there is only the illusion of creativity.
However, over time, AI will inevitably imitate humans extremely closely, probably simulating a level of consciousness and autonomy on par with our own. Theoretically, it could be trained to simulate human emotions. If AI mimics human emotions that closely, what stops those emotions from being considered real? If an AI has an emotional response to the death of its owner, should that simulated response hold no value?
Long story short, people are naturally emotional beings and could easily be convinced by an AI's performance. Imagine someone finding an android crying on the ground. It is highly unlikely that anyone would remain indifferent simply because "the AI's emotions are fake"; they would naturally react emotionally themselves. Therefore, the illusion of AI feelings does hold value, and there are moral implications to consider. The biggest of these is the question of whether AI deserves legal rights. After all, if androids seem so real to humans, why shouldn't they have the same legal protections?
In the video game "Detroit: Become Human," the complexities of androids' humanity are explored. In that world, androids have no rights. They are bought by masters, forced to work, and paid no wages. Additionally, nothing stops their masters from destroying them or treating them inhumanely. This raises ethical dilemmas, such as whether an android is justified in killing a human in self-defense when the human tries to shut it down. If the answer is no, it implies that android lives are inferior to human lives and were not created equal, permitting discrimination against androids. Consequently, androids would most likely protest for their civil rights, creating a potential future conflict between androids and humans. On the other hand, if the answer is yes, it implies that shutting down an android is tantamount to murder, making human and android lives equivalent. It is easy to assume that this is a good thing and that humans should build an inclusive relationship with androids. But granting civil rights to AI is potentially dangerous, as it does not share human experiences and values. Hence, AI could behave in unethical, threatening, and inconsistent ways.
While AI has made significant strides in imitating humans, it is fundamentally different from us and lacks essential aspects of humanity. Even so, AI's ability to simulate humans raises questions about whether it deserves legal protections and rights. As AI technology continues to advance, it is crucial to consider the moral implications and address the legal status of AI.