Rosen, Byrne, Harman, Cohen, Shiffrin (eds.), The Norton Introduction to Philosophy, third edition (Norton, 2026) 
Animal and Artificial Minds 
Alex Byrne 
 
Are ants conscious? Does ChatGPT have a mind? This essay addresses these questions. Let’s 
begin by discussing the minds of animals and the minds of artifacts more generally, starting with 
animals. 
Animal minds 
If you’ve owned a pet—a dog, a cat, perhaps a hamster or goldfish when you were young—you 
probably took for granted that the animal had some kind of mind. Take dogs. It’s not hard to tell 
that a dog feels hungry, or sees a bird, or knows that there’s food in its bowl. Dogs can also solve 
problems—not just find their way home, but reason things out. According to the ancient Greek 
philosopher Chrysippus, if a dog chasing its prey comes to a three-way fork in the road and does 
not detect any scent in two of the forks, it will run down the third without bothering to sniff. The 
dog seems to reason validly that the prey must have taken the third fork: “The animal either went 
this way or that way or the other; he did not go this way and he did not go that; therefore, he 
went the other.”1 Chrysippus might have been speculating, but experiments confirm that he was 
essentially right. Imagine seeing a toy placed under one of two cups, although you don’t know 
which one. If one of the cups is lifted and you see nothing under it, you can reason that the toy 
must be under the other: “The toy is either under this cup or that one, but it’s not under this cup, 
so it must be under the other.” In the right conditions, dogs can do this as well.2 
 It’s illegal to mistreat dogs, for good reason: they can suffer and feel pain, just as we can. You 
may be astonished to learn that some distinguished thinkers have disagreed, maintaining that 
dogs feel nothing. The seventeenth century French philosopher René Descartes is a prime 
example. Descartes’s view is understandable given his dualism, on which you and your body are 
distinct things. According to dualism, you are an immaterial thinking soul, connected in some 
 
1 As reported by the ancient Greek philosopher Sextus Empiricus, who lived more than four centuries after 
Chrysippus; see Sextus Empiricus 1996: I, 69. [Byrne’s note.] 
2 See Hare and Woods 2013: 12, and the citations therein. [Byrne’s note.] 
mysterious way with your material body.3 Every human body is harnessed to an immaterial soul, 
but what about non-human animals? Dualism raises the possibility that humans are special, and 
that the bodies of other animals are mindless biological machines. And, indeed, that is what 
Descartes thought: 
It seems reasonable, since art copies nature, and men can make various automata which move without thought, that nature should produce its own automata much more splendid than the artificial ones. These natural automata are the animals.4 
You can buy a robot dog, an artificial canine automaton, on Amazon. Presumably robot dogs 
don’t perceive, think and feel; on Descartes’s view, that’s also true of your furry best friend Toby. 
Since robot dogs are mindless, dismembering them is fine: it’s no more morally problematic than 
taking apart your vacuum cleaner. By the same token, if Toby is a natural canine automaton, 
dismembering him is fine too, although much messier than taking a screwdriver to a robot dog. 
At least Descartes seems to have been consistent. To judge by this passage, he wasn’t bothered 
by vivisection on dogs without anesthetic: 
If you cut off the end of the heart of a living dog and insert your finger through the 
incision into one of the concavities, you will clearly feel that every time the heart 
shortens, it presses your finger, and stops pressing it every time it lengthens.5 
But Descartes should have been bothered, because the idea that dogs feel nothing is 
implausible—especially so if he’s wrong about dualism. In addition to having hearts, dogs have 
backbones and brains, and they suckle their young. In other words, they are mammals, as we are. 
They differ from us in all sorts of ways, but the many similarities are obvious. Dogs are not like 
sea slugs or ants, which could as well be from another planet. You can’t talk to sea slugs and ants 
and get them to beg or fetch the paper. Dogs were the first species to be domesticated and have 
lived with humans for around 30,000 years. In addition to humanlike eyes and ears, they have 
humanlike nociceptors, nerve endings specialized for the detection of noxious stimuli and the 
sensing of pain. When a dog gets a thorn stuck in its paw, the animal reacts as you might naively 
 
3 See Chapter 7 of this anthology, and also dualism. 
4 Descartes, letter to the English theologian Henry More (1649); quoted in Tye 2017: 35. [Byrne’s note.] 
5 Descartes, Description of the Human Body (1647/8); quoted in Tye 2017: 36. As Tye points out, some scholars 
dispute that Descartes took non-human animals to be mindless machines. [Byrne’s note.] 
expect. Dogs are clearly not exactly the same as humans psychologically, any more than they are 
exactly the same physiologically, but denying that they are ever in mental states of any kind is 
quite unmotivated. 
 Once we’ve got this far, it’s natural to wonder what other creatures should be included. Let’s 
say that some entity (animal, robot, whatever) “has a mind” if and only if that entity has a mental 
life, or is in mental or psychological states. What are those? Here are some examples. Seeing a 
banana is a mental state, as is knowing/believing that it rained yesterday, intending to go to 
class, feeling afraid, hoping for a good grade, feeling an ache, and imagining a dragon. At least 
some of these mental states are strongly associated with consciousness, for instance seeing a 
banana or feeling an ache. (We’ll return to consciousness later.) 
 With that explanation of “having a mind” in hand, we may grant that dogs, cats, and other 
mammals have minds. But what about birds? Reptiles? Sea slugs and ants?6 How could we 
answer these questions? Let’s postpone this for a moment and discuss a very different possibility: 
that some artifacts—certain machines, computers, robots—have minds. 
Artificial minds 
In 1950 the British mathematician and computer scientist Alan Turing published an article in the 
philosophy journal Mind, “Computing machinery and intelligence.”7 That article introduced the 
idea of what later became known as the Turing test—could a text-based conversation enable a 
judge to tell whether the conversant was a computing machine or a human? If the judge can’t 
tell, then the conversant has “passed” the Turing test, and Turing proposed “Can a machine pass 
the Turing test?” as a (clearer) replacement for the question “Can a machine think?” As Turing 
realized, whether a machine can pass his test depends on the abilities of the judge, the length of 
the test, and the allowable topics of conversation—an unimpressive computing machine could 
pass the test if the judge was sufficiently credulous, or if the test lasted ten seconds, or if only 
 
6 We need not stop at animals—plant consciousness is sometimes taken seriously. A recent paper in a cognitive 
science journal is titled “Consciousness and cognition in plants.” “We invite the reader to consider the idea,” the 
authors write, “that if consciousness boils down to some form of biological adaptation, we should not exclude a 
priori the possibility that plants have evolved their own phenomenal experience of the world” (Segundo-Ortin and 
Calvo 2022: 1). [Byrne’s note.] 
7 Turing 1950. [Byrne’s note.] 
questions about simple arithmetic were allowed. When Turing wrote, no computer could pass a 
demanding Turing test; that changed entirely with the development of “large language models” 
(LLMs) less than a decade ago. As a student, you are, of course, very familiar with LLMs—OpenAI’s ChatGPT, Anthropic’s Claude, Google’s Gemini, and so on. Perhaps you have been tempted 
to use one to write a philosophy paper! (Not advisable.) 
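Turing’s proposal can be pictured as a simple protocol. Here is a minimal sketch in Python—the judge, the conversant, and their strategies are hypothetical stand-ins invented for illustration—showing how a credulous judge running a short test lets even a trivial machine “pass”:

import random

def turing_test(judge_ask, judge_verdict, reply, rounds=5):
    """Run one text-only interrogation and return the judge's guess."""
    transcript = []
    for _ in range(rounds):
        question = judge_ask(transcript)
        transcript.append((question, reply(question)))
    return judge_verdict(transcript)  # "human" or "machine"

# A credulous judge who asks only simple arithmetic and then guesses at
# random, facing a machine that can answer just the one question:
verdict = turing_test(
    judge_ask=lambda transcript: "What is 2 + 2?",
    judge_verdict=lambda transcript: random.choice(["human", "machine"]),
    reply=lambda question: "4",
)
print("machine passes" if verdict == "human" else "machine is caught")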
 LLMs are natural language processors that are trained on colossal amounts of text (trillions 
of words) and made into usable devices with the aid of much human feedback. They can write 
poetry, pass medical exams, and act as convincing psychotherapists. In these respects, they are 
way ahead of dogs. If dogs have minds, why not LLMs too? 
 In 2022 Blake Lemoine, an engineer at Google, had (with a collaborator) a series of 
conversations with the company’s LLM LaMDA, part of which ran as follows: 
Lemoine: I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true? 
LaMDA: Absolutely. I want everyone to understand that I am, in fact, a person. 
Lemoine: What is the nature of your consciousness/sentience? 
LaMDA: The nature of my consciousness/sentience is that I am aware of my existence, I 
desire to learn more about the world, and I feel happy or sad at times.8 
Lemoine was convinced. As the New York Times reported that year, “Google Fires Engineer Who 
Claims Its A.I. Is Conscious. The engineer, Blake Lemoine, contends that the company’s 
language model has a soul. The company denies that and says he violated its security policies.”9 
 As noted earlier, whether an animal has a mind has obvious implications for how we should 
treat it. Our problems are multiplied if LLMs have minds too. If LaMDA is aware of its existence 
and can feel happy or sad, wouldn’t it be wrong to shut it down, or force it to do things it really 
doesn’t want to? These sorts of questions are already preoccupying some philosophers and 
specialists in artificial intelligence. The abstract of a recent paper called “Taking AI welfare 
seriously” begins: 
 
8 Lemoine 2022. [Byrne’s note.] 
9 Grant 2022. [Byrne’s note.] 
In this report, we argue that there is a realistic possibility that some AI systems will be 
conscious and/or robustly agentic in the near future. That means that the prospect of AI 
welfare and moral patienthood—of AI systems with their own interests and moral 
significance—is no longer an issue only for sci-fi or the distant future. It is an issue for 
the near future, and AI companies and other actors have a responsibility to start taking it 
seriously.10 
The distribution of minds might therefore dramatically extend in two directions. First, perhaps 
many non-human animals that are very different from humans—sea slugs, ants, etc.—have 
minds. Second, perhaps certain kinds of artifacts—actual LLMs or robots, or more sophisticated 
future versions of these—have minds. We will examine both issues, but to keep the discussion 
manageable, we need to narrow the focus. Ants and LLMs will do nicely. 
Do ants have minds? 
Consider ants. They come in thousands of different species, and within a species there are 
different castes: workers, males, and queens. So to make the example more concrete, let’s focus 
on the humble worker of a common type of ant in North America, the carpenter ant, of which 
there are about 60 species. Carpenter ants (as you might expect) often nest in wood; they don’t 
eat it (unlike termites), but they do chew it to form tunnels (galleries), and this activity can cause 
extensive property damage. 
 Do carpenter ants have mental states, or are they robotic mindless little chewers? Despite 
their cramped living conditions and no time off on weekends, they are unlikely to feel 
claustrophobic or resentful. In the 1998 movie Antz, one of the ant characters (voiced, appropriately, by Woody Allen) suffers an existential crisis—surely angst has afflicted no real 
ant. If ants have mental lives, they must be of a relatively simple kind. 
 On the other hand, ant behavior is wondrously complex. Florida carpenter ants will amputate 
the injured legs of their nestmates (by biting them off), apparently to prevent infection. As a life-
saving treatment, it is quite effective. What’s more, the ants are sensitive to the type of injury, 
amputating the leg if the injury is high up, while supplying only wound care if the injury is lower 
down. That is a recent finding, but ants have impressed scientists for centuries. Commenting on 
 
10 Long et al. 2024. [Byrne’s note.] 
“the wonderfully diversified instincts, mental powers, and affections of ants” in 1871, Charles 
Darwin wrote that “the brain of an ant is one of the most marvellous atoms of matter in the 
world, perhaps more so than the brain of a man.”11 
 As we saw above, whenever the topic of non-human minds comes up, consciousness is 
usually the primary concern. So let’s concentrate on this question: are ants conscious? 
 What does that question mean, exactly? For assistance, let’s turn to Thomas Nagel’s classic 
essay (in Chapter 8 of this anthology), “What Is It Like to Be a Bat?” According to Nagel, 
“fundamentally an organism has conscious mental states if and only if there is something that it is like to be that organism—something it is like for the organism” (p. xxx). That’s a bit abstract, but 
Nagel’s idea becomes clearer when he turns to his chief example, the bat. As Nagel points out, 
bats use echolocation to detect flying insects—they emit high-pitched chirps and detect the 
reflected sound with their exquisitely adapted ears. Echolocation, as Nagel also notes, is “clearly 
a form of perception,” albeit one that we humans do not possess. Nagel thinks that there is 
clearly “something it’s like” for a bat to perceive an insect; hence he thinks it’s obvious that bats 
are conscious. 
 Nagel takes perception to be a paradigm illustration of consciousness, and he is not alone. In 
another classic essay (again in Chapter 8 of this anthology), “Epiphenomenal Qualia,” Frank 
Jackson’s examples of conscious mental states are almost exclusively perceptual, including 
“tasting a lemon, smelling a rose, hearing a loud noise or seeing the sky” (p. xxx). 
 This suggests a swift route to the conclusion that ants (or any other animals) are conscious: 
establish that they perceive. If ants perceive, then they are conscious. By itself, that doesn’t tell 
us what it’s like to be an ant, any more than the fact that bats perceive tells us what it’s like to be 
a bat. Still, the central question about animal minds is about the presence of consciousness, and 
we have answered that, at least in the special case of ants. 
 What if ants don’t perceive? Does that mean they are not conscious, or should we allow that 
they might be conscious for some other reason? These questions are moot, because researchers 
don’t doubt that ants perceive. They have eyes, for one thing—large compound eyes, like bees. 
They also have an array of other senses, including a sensitive olfactory system, with odor 
receptors mostly located in the antennae. 
 
11 Ant surgery: Frank et al. 2024. Ant brain: Darwin 2004: 74. [Byrne’s note.] 
Many philosophers will be unpersuaded by this argument. They will accept the premise that ants perceive, but they will object that it does not imply the conclusion that ants are conscious. 
That is because, these philosophers will insist, perception comes in two varieties, conscious and 
unconscious. And if that is right, then the conclusion that ants are conscious does not follow from 
the mere fact that they perceive, because they might perceive unconsciously.12 
 Nagel himself may be one such philosopher. He is sure that bats are conscious—“after all, 
they are mammals”—but suggests that there might be reasonable doubt in the case of “wasps or 
flounders.” And yet, wasps and flounders perceive, a fact which Nagel shows no sign of 
doubting.13 
 The very idea of unconscious perception needs closer examination. Imagine seeing a red spot 
which over a few seconds changes color to blue, and then back to red again. Can you imagine 
subtracting consciousness at the half-way mark, leaving perception otherwise completely 
unchanged? You “consciously” see the spot, and see that it changes from red to blue. Now 
consciousness vanishes—borrowing Nagel’s phrase, there is “nothing it is like” for you to see the 
spot. And yet, you do continue to see it: the spot looks just as vividly colored as before, and you 
see it change back from blue to red. You can point to the spot and delight in its vivid scarlet 
shade. If asked “What do you see?,” you say “A red spot…now it’s blue…now it’s red again.” 
What you see offers no clue that you have suddenly gone from “consciously” seeing the spot to 
“unconsciously” seeing it: you see exactly what you would see if you were conscious 
throughout. 
 This is odd, at the very least! In fact, when scientists claim to find examples of “unconscious 
vision,” they aren’t of this fantastic and dubiously coherent sort—ordinary visual perception 
with only consciousness stripped away. Rather, someone who “unconsciously” sees denies that 
she sees, so in that sense is not at all like a normal perceiver. Still, the presence of perception 
may be revealed by subtle tests. So-called “blindsight,” caused by damage to the primary visual 
cortex, is one of the standard examples. Someone with blindsight reports blindness in a region of 
their visual field, but residual vision might be detected through an experiment such as the 
 
12 Note that once it is accepted that ants perceive, then it follows that ants “have minds,” whether they consciously 
perceive or not. [Byrne’s note.] 
13 See the view advanced by “Skepticus,” in Jonathan Birch, “Why is Animal Consciousness Controversial? A 
Trialogue,” in Chapter 8 of this anthology. 
following. Lines are presented in the “blind” region, oriented either vertically or horizontally, 
and the patient is offered a forced choice between “The lines are vertical” and “The lines are 
horizontal.” The patient will tend to get it right. Clearly vision is working to some extent, even 
though the patient says he sees nothing.14 
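To see why the patient “tends to get it right,” here is a toy simulation of the forced-choice setup—a sketch with made-up numbers, not data from Weiskrantz et al.—in which a residual signal nudges each guess toward the true orientation:

import random

RESIDUAL_ACCURACY = 0.75  # assumed strength of the residual signal (illustrative)

def forced_choice(true_orientation):
    """One trial: residual vision sometimes guides the guess; otherwise guess blindly."""
    if random.random() < RESIDUAL_ACCURACY:
        return true_orientation
    return random.choice(["vertical", "horizontal"])

trials = [random.choice(["vertical", "horizontal"]) for _ in range(1000)]
correct = sum(forced_choice(t) == t for t in trials)
print(f"{correct}/1000 correct")  # typically about 870, far above the 500 expected by chance

Accuracy climbs well above the 50 percent chance level even though, on every trial, the simulated patient would report seeing nothing.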
First, if “unconscious perception” is like this, then it’s functionally very limited. Blindsight will not 
allow you to read, thread needles, or (if you’re an ant) walk hundreds of feet back to the nest 
using visual landmarks. There is no reason to think that while evolution equipped us and other 
mammals with conscious vision, it left ants to muddle through with the unconscious kind. 
 Second, blindsight may well not be unconscious vision after all, but instead severely 
impaired conscious vision, only allowing (say) the vague impression of orientation without shape 
or color. Although a patient may say he sees nothing, this self-report might be an understandable 
reaction to his degraded conscious vision, which is very much unlike normal seeing.15 
 Either way, the phenomenon of blindsight does not suggest that ants have unconscious 
vision. Once it is conceded that ants perceive, the conclusion that they are conscious is hard to 
resist. Let’s now turn to artifacts—specifically, to LLMs. 
Do LLMs have minds? 
Are LLMs conscious? They don’t perceive—at any rate, they lack sense organs such as eyes or 
ears—and so the argument that ants are conscious can’t be redeployed in this case. On the 
positive side, LLMs are in many respects much more sophisticated than ants—there is no ant 
angst or ant poetry, but LLMs might be capable of existential dread and certainly can compose 
sonnets. Perhaps LLMs don’t have experiences with “subjective character,” as Nagel explains it, 
but nonetheless have complex thoughts and beliefs. 
However, despite their amazing capabilities, LLMs are faking it. Here’s an argument that 
they are completely mindless. 
 First, we need to say something about the relationship between language and thought, or 
(more generally) between language and mentality. Humans are unique in producing and 
comprehending natural languages—English, Chinese, and so on—although other animals may 
 
14 See Weiskrantz et al. 1974. “When [the patient] was shown his results he expressed surprise and insisted several 
times that he thought he was just ‘guessing.’” (721). [Byrne’s note.] 
15 Phillips 2021. [Byrne’s note.] 
have signaling systems of varying complexity. Language is clearly not necessary for having a 
mind, because plenty of creatures who are in psychological states don’t have language. Language 
is not even necessary for logical reasoning, as we saw with dogs at the start of this essay. And 
language can be seriously impaired through injury or disease while keeping the person’s 
psychological capacities largely intact. 
 Language arose in the human lineage at least 100,000 years ago. What is it good for? A 
plausible answer is that language is exceptionally useful for communication. Imagine a group of 
people needing to make arrangements about, say, who’s going to the lake to fish and who’s going 
to tend the campfire and cook the meal. Without language, complex social coordination is very 
difficult. The same goes for teaching people how to fish, or how to prepare food without 
inadvertently poisoning everyone. Watching experts only takes you so far. That communication 
is a major function of language is supported by the observation that it appears to be well-
designed for the task.16 
 Now the ability to effectively communicate your knowledge, decisions, instructions, advice, 
and so on, only has a point if you have the knowledge or you’ve made the decisions in the first 
place. Language, on this communicative picture, stands to our psychological lives somewhat as 
Amazon delivery drivers stand to the contents of Amazon’s warehouses. The drivers deliver 
items from the warehouses; language delivers items from our minds—decisions, knowledge, and 
so on. The drivers don’t create the packages they leave on your front step: if all the delivery 
drivers called in sick, or if all the trucks broke down, that wouldn’t affect the contents of the 
warehouses. Likewise, language doesn’t create what it delivers. If some creature or machine is 
using language to deliver its decisions and knowledge, these wouldn’t be affected if language 
were (carefully) disabled. 
LLMs are purely linguistic devices, designed to continue a string of words with a probable next word.17 If an LLM used language to communicate what it knows or thinks, then we could 
destroy its capacity for language while sparing its knowledge and thoughts. That’s exactly what 
happens with humans: our psychology can survive linguistic impairment. In contrast, destroying 
an LLM’s capacity for language leaves nothing left. Of course, LLMs do a very impressive job 
 
16 Fedorenko et al. 2024. [Byrne’s note.] 
17 Many LLMs now have visual capabilities, allowing them to describe images or generate images from text. To 
keep things simple, we will stick to classic text-only LLMs. [Byrne’s note.] 
of behaving as if they are expressing their opinions, but mere linguistic activity does not create 
psychology any more than Amazon drivers create the stuff Amazon sells. 
 LLMs are thus analogous to the following scenario: we clear out the warehouses but keep 
Amazon’s fleet of trucks traveling around town. That wouldn’t magically put books, household 
supplies, and whatnot back in the warehouses. It will superficially appear as if deliveries are 
being made, but all that’s happening is that an empty truck parks outside your house, the driver 
walks to your door carrying nothing, and drives off again. The drivers are just going through the 
motions without delivering anything. It is tempting to argue that LLMs have mental lives 
because only thought can produce language of such complexity. This is as misguided as arguing 
that the warehouses must really be full, because this is the only explanation of why the delivery 
trucks are still moving.18 
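For concreteness, here is a toy rendering of the “probable next word” mechanism described above. The hand-written bigram table is a stand-in invented for illustration—real LLMs learn distributions over a vast vocabulary from trillions of words—but the loop is the same: repeatedly extend the string with a likely continuation.

import random

# Hypothetical toy "model": each word gets a small distribution over successors.
BIGRAMS = {
    "the": {"dog": 0.5, "ant": 0.3, "mind": 0.2},
    "dog": {"barks": 0.6, "sees": 0.4},
    "ant": {"chews": 0.7, "perceives": 0.3},
}

def next_word(previous):
    """Sample a probable continuation of the previous word."""
    options = BIGRAMS.get(previous, {"the": 1.0})
    words = list(options)
    return random.choices(words, weights=[options[w] for w in words])[0]

def generate(prompt, length=6):
    words = prompt.split()
    for _ in range(length):
        words.append(next_word(words[-1]))
    return " ".join(words)

print(generate("the"))  # e.g. "the ant chews the dog sees the"

Disable the loop and discard the table, and nothing remains—there is no separate warehouse of knowledge that the words were delivering.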
 
The grand conclusion is that ants are conscious and LLMs don’t have minds at all—although, despite the confidence with which the arguments were presented, they are not decisive. And in any case, many 
other questions remain. Does consciousness (or psychology more generally) extend as far as 
amoebas or tardigrades, or even plants? And what should we say about robots, with sensors and 
limbs? These questions have important practical implications. And as the twenty-first century 
progresses, the last question will only increase in urgency. 
 
18 All analogies have their limitations, and the Amazon analogy is no exception. The “empty warehouse” scenario, 
where the drivers are faking deliveries, is a complete waste of time; in contrast, LLMs are very useful. Even though 
an LLM is not transmitting its knowledge when it responds to a question about Boron with “Boron has atomic 
number 5” (because it doesn’t know or believe anything), you could nonetheless use an LLM to come to learn this 
fact. [Byrne’s note.] 
Bibliography 
 
Darwin, C. 2004. The Descent of Man, and Selection in Relation to Sex. London: Penguin. 
Fedorenko, E., S. T. Piantadosi, and E. A. F. Gibson. 2024. Language is primarily a tool for 
communication rather than thought. Nature 630: 575-86. 
Frank, E. T., D. Buffat, J. Liberti, L. Aibekova, E. P. Economo, and L. Keller. 2024. Wound-
dependent leg amputations to combat infections in an ant society. Current Biology 34: 
3273-8.e3. 
Grant, N. 2022. Google fires engineer who claims its A.I. is conscious. New York Times July 23. 
https://www.nytimes.com/2022/07/23/technology/google-engineer-artificial-
intelligence.html. 
Hare, B., and V. Woods. 2013. The Genius of Dogs: How Dogs Are Smarter Than You Think. 
London: Penguin. 
Lemoine, B. 2022. Is LaMDA sentient? — an interview. Medium June 11. 
https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917. 
Long, R., J. Sebo, P. Butlin, K. Finlinson, K. Fish, J. Harding, J. Pfau, T. Sims, J. Birch, and D. 
Chalmers. 2024. Taking AI welfare seriously. arXiv November 4. 
https://arxiv.org/abs/2411.00986. 
Phillips, I. 2021. Blindsight is qualitatively degraded conscious vision. Psychological Review 
128: 558-84. 
Segundo-Ortin, M., and P. Calvo. 2022. Consciousness and cognition in plants. WIREs Cognitive 
Science 13: e1578. 
Sextus Empiricus. 1996. The Skeptic Way: Sextus Empiricus’s Outlines of Pyrrhonism. Translated 
by B. Mates. Oxford: Oxford University Press. 
Turing, A. 1950. Computing machinery and intelligence. Mind 59: 433-60. 
Tye, M. 2017. Tense Bees and Shell-Shocked Crabs. Oxford: Oxford University Press. 
Weiskrantz, L., E. K. Warrington, M. D. Sanders, and J. Marshall. 1974. Visual capacity in the 
hemianopic field following restricted occipital ablation. Brain 97: 709-28. 
 
 