Phil 101: Discussion Questions about Schwitzgebel and Garza’s Defense of the Rights of AI

1. What determines (or as philosophers say, “grounds”) your having the moral status and rights that you do?

What drives the first three (and is compatible with the last two): creatures differ morally only if they differ in some psychological or social respect.

2. So what doesn’t matter?

…What matters is how such beings think, what they feel, and how they interact with others. Whether they are silicon or meat, humanoid or ship-shaped, sim or ghost, is irrelevant except insofar as it influences their psychological and social properties. (p. 101-2)

[1] Artificial beings, if psychologically similar to natural human beings in consciousness, creativity, emotionality, self-conception, rationality, fragility, and so on, warrant substantial moral consideration in virtue of that fact alone. [2] If we are furthermore also responsible for their existence and features, they have a moral claim upon us that human strangers do not ordinarily have to the same degree. (p. 110)

3. Are AIs Necessarily Psychologically Different than Us?

Whether something is made of silicon chips: future AIs might not be

Whether something was explicitly programmed: even now, some AIs aren’t

Some argue that artificial beings necessarily lack consciousness, understanding/insight, or free will.

Searle and Penrose, at least, seem to allow that technology might well be capable of creating an artificially designed, grown, or selected entity, with all the complexity, creativity, and consciousness of a human being. For this reason, we have described the objections above as “inspired” by them. They themselves are more cautious. (p. 104)

Schwitzgebel and Garza’s response:

A certain way of designing artificial intelligence… might not… achieve certain aspects of human psychology that are important to moral status. (We take no stand here on whether this is actually so.) But no general argument has been offered against the moral status of all possible artificial entities. AI research might proceed very differently in the future, including perhaps artificially grown biological or semi-biological systems, chaotic systems, evolved systems, artificial brains, and systems that more effectively exploit quantum superposition.

[Our argument] commits only to a very modest claim: There are possible AIs who are not relevantly different. To argue against this possibility on broadly Searle-Lovelace-Penrose grounds will require going considerably farther than [those authors] themselves do. Pending further argument, we see no reason to think that all artificial entities must suffer from psychological deficiency. Perhaps the idea that AIs must necessarily lack consciousness, free will, or insight is attractive partly due to a culturally ingrained picture of AIs as deterministic, clockwork machines very different from us spontaneous, unpredictable humans. But we see no reason to think that human cognition is any less mechanical or more spontaneous than that of some possible artificial entities. (p. 104)

FIRST DISCUSSION QUESTION: What is your attitude toward the position they’re taking here? Do you agree that it’s possible for some kinds of AIs to be psychologically like us in terms of consciousness and understanding/insight, that is, to be “psychologically similar to natural human beings in consciousness, creativity, emotionality, self-conception, rationality,… and so on”? What about free will? If you think this is impossible, what do you think the obstacle is?

4. Are AIs Necessarily Socially Separate from Us?

On moral views that emphasize social relations, what matters for the moral claims some creatures have on us is not the social relations those creatures have to each other, but the social relations they have to us.

Perhaps [as Hobbes said, a state of War] is the “Naturall Condition” between species: We owe nothing to alligators and they owe nothing to us… A Hobbesian might say that if space aliens were to visit, they would be not at all wrong to kill us for their benefit, nor vice versa, until the right sort of interaction created a social contract. Alternatively, we might think in terms of circles of concern: We owe the greatest obligation to family, less to neighbors, still less to fellow citizens, still less to distant foreigners, maybe nothing at all outside our species. Someone might think that AIs necessarily stand outside of our social contracts or the appropriate circles of concern, and thus there’s no reason to give them moral consideration. One might hold… that there will always be a relevant [biological relational] difference between AIs and “us” human beings, in light of which AIs deserve less moral consideration from us than do our fellow human beings. (p. 106)

Schwitzgebel and Garza’s response:

However, we suggest that this is to wrongly fetishize species membership. Consider a hypothetical case in which AI has advanced to the point where artificial entities can be seamlessly incorporated into society without the AIs themselves, or their friends, realizing their artificiality. Maybe some members of society have [choose-your-favorite-technology] brains while others have very similarly functioning natural human brains. Or maybe some members of society are constructed from raw materials as infants rather than via germ lines that trace back to homo sapiens ancestors. We submit that as long as these artificial or non–homo-sapiens beings have the same psychological properties and social relationships that natural human beings have, it would be a cruel moral mistake to demote them from the circle of full moral concern upon discovery of their different architecture or origin. Purely biological otherness is irrelevant unless some important psychological or social difference flows from it. (p. 106-7)

SECOND DISCUSSION QUESTION: Is their response persuasive? Their scenario is like the case we’ll discuss where you grew up on a spaceship without adults and only recently learned that some of your “friends” have alien biologies or computer brains. Absent an argument that this makes them psychologically different from you (such as being incapable of really feeling pain), would their biological difference by itself mean they had weaker moral claims on you than other humans do? If you had to share limited resources (for example, not enough air to go around in an emergency), would it be more appropriate to sacrifice them than humans? Or should you instead be guided by whose company you enjoy more, or who has done you more favors, or let everyone have an equal random chance?

5. AIs aren’t fragile and unique in ways that make humans morally important

AIs might not deserve equal moral concern because they do not have fragile, unique lives of the sort that human beings have. It might be possible to duplicate AIs or back them up so that if one is harmed or destroyed, others can take its place, perhaps with the same memories or seeming-memories — perhaps even ignorant that any re-creation and replacement has occurred. Harming or killing an AI might therefore lack the gravity of harming or killing a human being. (p. 105)

THIRD DISCUSSION QUESTION: What are their responses to this? Are they persuasive? If it’s ever up to us how fragile or inescapably unique an AI should be, which choice would be morally better?

6. If AIs have moral status at all, arguably we have more moral responsibilities towards them than we do to human strangers

  1. It costs you $1000 to build a human-grade intelligent robot, then $10/month to maintain it. After a couple of years, are you allowed to decide to stop paying the maintenance fee? “You owe your very life to me; you should be thankful for the time I’ve given you.” So long as its existence has been overall worthwhile, the thought goes, you have the right to turn it off when you want.

  2. A humanely raised cow wouldn’t have existed at all if the rancher hadn’t wanted to raise it for meat. Its being killed for the rancher’s profit is part of the package deal of getting to live in the first place. So long as it doesn’t suffer needlessly while alive or when being killed, the rancher has no obligation to let it live longer rather than shorter.

  3. A couple decides to have a child, who has eight happy years of life. Then they decide they don’t want to pay further expenses, and neither anyone else nor the state is willing to take over. Is it morally okay for them to kill the child painlessly? They argue that they wouldn’t have had the child if (they had known) they were obliged to keep spending the money and time to care for it until it was eighteen. Having the option to stop contributing that support at any point, they say, was a condition of the child’s existence; otherwise they would have remained childless. “He had eight happy years. He has nothing to resent.”

FOURTH DISCUSSION QUESTION: How morally similar or different are these cases? Why?