Nathan Heller is one of the most consistently engaging and compelling writers out there, and this new article is one more piece of evidence:
We can think of ourselves as an animal’s peer—or its protector. What will robots decide about us?
Harambe, a gorilla, was described as “smart,” “curious,” “courageous,” “magnificent.” But it wasn’t until last spring that Harambe became famous, too. On May 28th, a human boy, also curious and courageous, slipped through a fence at the Cincinnati Zoo and landed in the moat along the habitat that Harambe shared with two other gorillas. People at the fence above made whoops and cries and other noises of alarm. Harambe stood over the boy, as if to shield him from the hubbub, and then, grabbing one of his ankles, dragged him through the water like a doll across a playroom floor. For a moment, he took the child delicately by the waist and propped him on his legs, in a correct human stance. Then, as the whooping continued, he knocked the boy forward again, and dragged him halfway through the moat.
Harambe was a seventeen-year-old silverback, an animal of terrific strength. When zookeepers failed to lure him from the boy, a member of their Dangerous Animal Response Team shot the gorilla dead. The child was hospitalized briefly and released, declared to have no severe injuries.
Harambe, in Swahili, means “pulling together.” Yet the days following the death seemed to pull people apart. “We did not take shooting Harambe lightly, but that child’s life was in danger,” the zoo’s director, Thane Maynard, explained. Primatologists largely agreed, but some spectators were distraught. A Facebook group called Honoring Harambe appeared, featuring fan portraits, exchanges with the hashtag #JusticeforHarambe, and a meditation, “May We Always Remember Harambe’s Sacrifice. . . . R.I.P. Hero.” The post was backed with music.
As the details of the gorilla’s story gathered in the press, he was often depicted in a stylish wire-service shot, crouched with an arm over his right knee, brooding at the camera like Sean Connery in his virile years. “This beautiful gorilla lost his life because the boy’s parents did not keep a closer watch on the child,” a petition calling for a criminal investigation said. It received half a million signatures—several hundred thousand more, CNN noted, than a petition calling for the indictment of Tamir Rice’s shooters. People projected thoughts into Harambe’s mind. “Our tendency is to see our actions through human lenses,” a neuroscientist named Kurt Gray told the network as the frenzy peaked. “We can’t imagine what it’s like to actually be a gorilla. We can only imagine what it’s like to be us being a gorilla.”
This simple fact is responsible for centuries of ethical dispute. One Harambe activist might believe that killing a gorilla as a safeguard against losing human life is unjust due to our cognitive similarity: the way gorillas think is a lot like the way we think, so they merit a similar moral standing. Another might believe that gorillas get their standing from a cognitive dissimilarity: because of our advanced powers of reason, we are called to rise above the cat-eat-mouse game, to be special protectors of animals, from chickens to chimpanzees. (Both views also support untroubled omnivorism: we kill animals because we are but animals, or because our exceptionalism means that human interests win.) These beliefs, obviously opposed, mark our uncertainty about whether we’re rightful peers or masters among other entities with brains. “One does not meet oneself until one catches the reflection from an eye other than human,” the anthropologist and naturalist Loren Eiseley wrote. In confronting similarity and difference, we are forced to set the limits of our species’ moral reach.
Today, however, reckonings of that sort may come with a twist. In an automated world, the gaze that meets our own might not be organic at all. There’s a growing chance that it will belong to a robot: a new and ever more pervasive kind of independent mind. Traditionally, the serial abuse of Siri or violence toward driverless cars hasn’t stirred up Harambe-like alarm. But, if like-mindedness or mastery is our moral standard, why should artificial life with advanced brains and human guardianships be exempt? Until we can pinpoint animals’ claims on us, we won’t be clear about what we owe robots—or what they owe us.
A simple case may untangle some of these wires. Consider fish. Do they merit what D. H. Lawrence called humans’ “passionate, implicit morality”? Many people have a passionate, implicit response: No way, fillet. Jesus liked eating fish, it would seem; following his resurrection, he ate some, broiled. Few weekenders consider fly-fishing an expression of rage and depravity (quite the opposite), and sushi diners ordering kuromaguro are apt to feel pangs from their pocketbooks more than from their souls. It is not easy to love the life of a fish, in part because fish don’t seem very enamored of life themselves. What moral interest could they hold for us?
“What a Fish Knows: The Inner Lives of Our Underwater Cousins” (Scientific American/Farrar, Straus & Giroux) is Jonathan Balcombe’s exhaustively researched and elegantly written argument for the moral claims of ichthyofauna, and, to cut to the chase, he thinks that we owe them a lot. “When a fish takes notice of us, we enter the conscious world of another being,” Balcombe, the Humane Society’s director for animal sentience, writes. “Evidence indicates a range of emotions in at least some fishes, including fear, stress, playfulness, joy, and curiosity.” Balcombe’s wish for joy to the fishes (a plural he prefers to “fish,” the better to mark them as individuals) may seem eccentric to readers who look into the eyes of a sea bass and see nothing. But he suggests that such indifference reflects bias, because the experience of fish—and, by implication, the experience of many lower-order creatures—is nearer to ours than we might think.
Take fish pain. Several studies have suggested that it isn’t just a reflexive response, the way your hand pulls back involuntarily from a hot stove, but a version of the ouch! that hits you in your conscious brain. For this reason and others, Balcombe thinks that fish behavior is richer in intent than previously suspected. He touts the frillfin goby, which memorizes the topography of its area as it swims around, and then, when the tide is low, uses that mental map to leap from one pool to the next. Tuskfish are adept at using tools (they carry clams around, for smashing on well-chosen rocks), while cleaner wrasses outperform chimpanzees on certain inductive-learning tests. Some fish even go against the herd. Not all salmon swim upstream, spawn, and die, we learn. A few turn around, swim back, and do it all again.
From there, it is a short dive to the possibility of fish psychology. Some stressed-out fish enjoy a massage, flocking to objects that rub their flanks until their cortisol levels drop. Male pufferfish show off by fanning elaborate geometric mandalas in the sand and decorating them, according to their taste, with shells. Balcombe reports that the female brown trout fakes the trout equivalent of orgasm. Nobody, probably least of all the male trout, is sure what this means.
Balcombe thinks the idea that fish are nothing like us arises out of prejudice: we can empathize with a hamster, which blinks and holds food in its little paws, but the fingerless, unblinking fish seems too “other.” Although fish brains are small, to assume that this means they are stupid is, as somebody picturesquely tells him, “like arguing that balloons cannot fly because they don’t have wings.” Balcombe overcompensates a bit, and his book is peppered with weird, anthropomorphizing anecdotes about people sharing special moments with their googly-eyed friends. But his point stands. If we count fish as our cognitive peers, they ought to be included in our circle of moral duty.
Quarrels come at boundary points. Should we consider it immoral to swat a mosquito? If these insects don’t deserve moral consideration, what’s the crucial quality they lack? A worthwhile new book by the Cornell law professors Sherry F. Colb and Michael C. Dorf, “Beating Hearts: Abortion and Animal Rights” (Columbia), explores the challenges of such border-marking. The authors point out that, oddly, there is little overlap between animal-rights supporters and pro-life supporters. Shouldn’t the rationale for not ending the lives of neurologically simpler animals, such as fish, share grounds with the rationale for not terminating embryos? Colb and Dorf are pro-choice vegans (“Our own journey to veganism began with the experience of sharing our lives with our dogs”), so, although they note the paradox, they do not think a double standard is in play.
The big difference, they argue, is “sentience.” Many animals have it; zygotes and embryos don’t. Colb and Dorf define sentience as “the ability to have subjective experiences,” which is a little tricky, because animal subjectivity is what’s hard for us to pin down. A famous paper called “What Is It Like to Be a Bat?,” by the philosopher Thomas Nagel, points out that even if humans were to start flying, eating bugs, and getting around by sonar they would not have a bat’s full experience, or the batty subjectivity that the creature had developed from birth. Colb and Dorf sometimes fall into such a trap. In one passage, they suggest that it doesn’t matter whether animals are aware of pain, because “the most searing pains render one incapable of understanding pain or anything else”—a very human read on the experience.
Animals, though, obviously interact with the world differently from the way that plants and random objects do. The grass hut does not care whether it is burned to ash or left intact. But the heretic on the pyre would really rather not be set aflame, and so, perhaps, would the pig on the spit. Colb and Dorf refer to this as having “interests,” a term that—not entirely to their satisfaction—often carries overtones of utilitarianism, the ethical school of thought based on the pursuit of the greatest good over all. Jeremy Bentham, its founder, mentioned animals in a resonant footnote to his “An Introduction to the Principles of Morals and Legislation” (1789):
The day may come, when the rest of the animal creation may acquire those rights which never could have been withholden from them but by the hand of tyranny. . . . The question is not, Can they reason? nor, Can they talk? but, Can they suffer?
If animals suffer, the philosopher Peter Singer noted in “Animal Liberation” (1975), shouldn’t we include them in the calculus of minimizing pain? Such an approach to peership has advantages: it establishes the moral claims of animals without projecting human motivations onto them. But it introduces other problems. Bludgeoning your neighbor is clearly worse than poisoning a rat. How can we say so, though, if the entity’s suffering matters most?
Singer’s answer would be the utilitarian one: it’s not about the creature; it’s about the system as a whole. The murder of your neighbor will distribute more pain than the death of a rat. Yet the situations in which we have to choose between animal life and human life are rare, and minimizing suffering for animals is often easy. We can stop herding cows into butchery machines. We can barbecue squares of tofu instead of chicken thighs. Most people, asked to drown a kitten, would feel a pang of moral anguish, which suggests that, at some level, we know suffering matters. The wrinkle is that our antennae for pain are notably unreliable. We also feel that pang regarding objects—for example, robots—that do not suffer at all…
Read the whole article here.