The Ethical Issues Regarding Human-Like Artificial Intelligence
Scott Feschuk’s essay “The future of machines with feelings” takes a humorous yet disapproving approach to the future of Artificial Intelligence, stating that the possibility of creating robots able to perceive and use emotions is not far off. They will be able to accurately determine people’s emotional states by analyzing their facial expressions and by “monitor[ing]...gestures or the inflection in [people’s] voices” (Feschuk 229), thus becoming somewhat like interactive human beings. However, he suggests that such A.I may not serve or enhance society’s interests at all. Rather, they will be used as a means to violate people’s privacy and to spy on what their owners do in the house, whether “eating, reading...cuddling” (230), and this information will be used by companies, who will then send the relevant types of advertisements for them to see.
For example, if a phone detects that its owner is sad, it might send him or her a “Kleenex...coupon” (230).
Feschuk also suggests that these affective A.I could be used for even more dangerous acts. He points out that hackers can obtain all sorts of people’s information “from the cloud” (229), such as
“credit card information...buck-naked selfies” (226). Furthermore, if such A.I could listen to people’s phone conversations and read their emails, it would facilitate terrorists’ attacks on their enemies. But an important issue that Feschuk fails to address, and perhaps should have mentioned first and foremost, is the set of ethical matters that may arise, such as the potential injustice and legal ramifications of creating affective A.I, which could foster further problems for humankind.
Firstly, the creation of affective A.I may foster many issues regarding equality and justice. Feschuk says that affective A.I may “seem startlingly human” (229), but the fact that he mentions only household appliances with emotions, such as a “refrigerator” (229), “cable box” (230), “toaster” (220), and “beer bottle” (229), instead of, say, humanoid robots, indicates that he disregards the possibility that A.I could ever become a kind of life form. He denies their status as true emotional beings simply because they do not have a human-like body. Feschuk could have a point if these emotions are only simulations and the machines are only following their “algorithms” and “user input” (Thilmany 1), or in other words, what they are programmed to do. However, if the A.I are able to function beyond their programming, then they are autonomous, and if they are sapient, then humans now have an artificial life form on their hands, on par with themselves. Feschuk does not realize that if robots have emotions, they could also be conscious. Although there is still no precise definition of what consciousness is, the philosopher Christian De Quincey believes that consciousness is “the basic, raw capacity for sentience, feeling, experience, subjectivity, self-agency, intention, or knowing of any kind whatsoever” (Levy 210).
Creating conscious robots may be possible because, according to the Theory of Neural Cognition, the cortical column “amounts for about 82% of the human brain mass” (Touzet 1), and so, theoretically, if scientists manage to replicate this part of the brain, they would at least have a robot with cognitive abilities.
But deciding whether a robot is conscious will be a problem, because there is no precise way to determine this. Touzet quotes the Theory of Neural Cognition’s statement that a “robot need only be able to verbalize its internal states … to appear 'conscious'” (11). But this could be misleading; for example, a manufacturer might construct a robot so that it lacks the ability to speak; thus, it would not be able to express itself aloud and show that it is indeed conscious. This could result in the unethical ownership, or even enslavement, of conscious life forms.
Conscious robots would be a different species from humans, just as animals are, and the latter are currently kept as domestic pets and bred on farms. But the decision as to whether A.I should be given the same status as animals could become controversial. Animals are sentient, yet they do not share human cognitive abilities such as language. Robots may be both sentient and cognitive, but the simple fact that they came into existence by human hands seems to place them lower on the social ladder. Yet human children are created in the womb, or perhaps in test tubes, while A.I are created manually; the point is, both life forms are created by humans. Thus, robots could be regarded as the equivalent of human children. If society adopted this view, then it would be immoral to enslave A.I. Feschuk does not seem to consider this notion; he focuses only on the fact that these innovations will be assets for “companies” (229) and households. It would be wrong to create conscious robots in the form of appliances, trapping them in bodies that may not be mobile, so that they could never use limbs to reshape themselves to their ideal or have any freedom of movement whatsoever, thereby assigning them a specific role for all eternity, such as that of a “dishwasher” (229).
Feschuk does not question how the robots will feel knowing that they were created with the sole purpose of being assets, such as to help “huge corporations test market their TV commercials” (229). What if robots want nothing to do with humans? Feschuk uses sarcasm when he suggests that a human could “have a fight with [his or her] cable box” (230), which would result in the appliance giving its owner “a...pixelated middle finger” (228). This reaction on the part of the cable box is utterly harmless, and it may really be the only way the appliance could defend itself in a negative situation; it is helpless, which reinforces the fact that it is just an object to be used by its owner, with no freedom whatsoever. Feschuk disregards the idea that giving A.I emotions would be analogous to dehumanization, since they would be used by humans for their own interests, and, as if this were not bad enough, the robots’ emotions would enable them to experience pain and hardship if they were to suffer abuse. Furthermore, if owners decided to turn off, for example, their sentient ovens, this would be the equivalent of murder.
Perhaps if humanoids were created to do manual labor such as farming or construction, it would be easier for them to revolt against enslavement, given their mobility and bodily strength, thereby increasing the possibility of earning their freedom. Even if they could lead their own lives, there may still be complications in society, such as discrimination. Supposing there are humans who will forever see A.I as a lesser form of life because they were created by humans, then A.I may be denied high-paying job opportunities, and they may be victims of harassment.
Even if humanoids were made to look exactly like humans, so that the two were indistinguishable, there may be situations in which they would need to disclose their identity, such as during romantic dates, which may result in rejection by the human partner.
If A.I were to be treated as equals to humans, then potential legal ramifications may arise. One problem concerns the norms of human reproduction. Human children are the result of their parents’ genetic combinations, and it takes about two decades or more for them to mature enough to become independent and start their own lives. But with a robot, it may not be the same case. In fact, robots may not reproduce at all like humans do. A robot may not need a partner for copulation; instead, it could simply replicate itself, given access to computer hardware, and it could do so very quickly, perhaps in a matter of seconds. And since the “child” would be born mature, being an exact copy of its parent, a clone, it might then take the liberty of replicating itself in turn, and its progeny would do the same, so that in a matter of minutes, or even seconds, the population could multiply many times over. There is currently no limit on how many children humans can have, and so imposing a law that tells A.I how many they can have would seem unfair. But if they were to reproduce at an extremely rapid rate, growing faster than the economy, and if the robots’ source of life sustenance is electricity, humans may fear running out of this resource, and so they may begin a genocide of what they believe to be “surplus” robots by disconnecting them from power sources. Humans have the right to basic needs that sustain their lives, such as food and water, at least in most parts of the planet. So does this mean that electricity should be free for robots, since it may be their source of life? Of course, it is possible that A.I may choose not to be asexual. Despite his mocking tone, Feschuk suggests that humans may want to have “deeper intimacy with [their] cable box[es]” (229). Suppose a robot develops the capacity for love, and that love is reciprocated by a human.
Since these two species have different bodies, copulation may be difficult, if not impossible, and so humans may abandon the process altogether, reducing the human population, perhaps drastically. Would it be ethical for humans to let their own race die off? Feschuk echoes experts’ view that these A.I will be “ubiquitous” (229), in other words, everywhere, which would potentially increase the number of interspecies marriages. Another issue with interspecies marriage concerns age, specifically that of the A.I. If “baby robots” would not be considered babies by humans, since they are born mature, then would it be legal for an adult human to engage in sexual activities with a young, newly constructed robot? Would the human be considered a pedophile? If robots do not age, then many human laws, such as the prohibition of underage drinking and smoking, may not apply to them at all.