
Source: henryshevlin.com/.../2015/04/The-Specificity-Problem-Fina…

Non-human consciousness and the specificity problem: a modest

theoretical proposal

ABSTRACT

The scientific study of consciousness has made considerable progress in the last three

decades, especially among cognitive theories of consciousness such as Global Workspace

Theory. However, such theories are challenging to apply to non-human systems insofar as

few if any non-humans are likely to implement the precise cognitive architecture of human

beings. In this paper, I explore what I term the Specificity Problem, which asks how we can

determine the appropriate fineness of grain to employ when applying a given cognitive theory

of consciousness to non-human systems. I consider four approaches to this challenge that I

term Conservatism, Liberalism, Incrementalism, and Agnosticism, noting problems with

each. I go on to offer a diagnosis of the problem, suggesting that it has particular force only

insofar as we are committed to a ‘Theory-Heavy’ approach that attempts to settle questions

about consciousness in non-human systems using theory-driven considerations alone. After

considering and rejecting a ‘Theory-Light’ approach that avoids all commitment to specific

theories of consciousness, I argue for what I term a Modest Theoretical Approach. This holds

that we should supplement our theories of consciousness with a wider body of scientific

evidence concerning behaviour and cognition in non-human systems. By using such

evidence, I suggest, it will be possible to identify strong and weak consciousness candidates

in advance, and so adjust the precise formulation of our theories of consciousness in different

cases so as to accommodate the possibility of consciousness in systems with cognitive

architectures and capacities quite different from our own.


1. Introduction

The last three decades have witnessed an explosion of interest in the science of

consciousness. While the ‘hard problem’ of consciousness (Chalmers, 1995) remains a

contentious philosophical issue, considerable progress has been made within the theories of

consciousness debate in identifying cognitive indicators of conscious and non-conscious

processing. One key challenge for such theories, however, concerns how we can assess

consciousness in non-human systems. At least in principle, we might hope that the same

theories of consciousness that accurately predict conscious awareness in humans could also

tell us whether creatures like rats, fish, or insects were conscious, as well as the conditions

under which we would have good reason to attribute consciousness to artificial systems.

In this paper, I explore a challenge faced by many current theories of consciousness

concerning such attributions of consciousness to non-humans that I term the Specificity

Problem. In short, this concerns the appropriate level of detail that should be adopted by

theories of consciousness in applying them to non-humans. If we spell out our theories in

terms of highly specific processing architectures identified as underpinning consciousness in

humans, we run the risk of false negatives, inappropriately denying consciousness to non-

human systems in virtue of minor differences in processing. However, if we spell out our

theories in very general abstract terms, we face the opposite worry, namely misclassifying

some very simple systems as conscious entities.

The paper proceeds as follows. In Section 2, I provide a brief background of the

theories of consciousness debate, and identify a particular set of views I term cognitive

theories that I take as my target in what follows. In Section 3, I summarise the Specificity

Problem, explaining how it presents an obstacle for the applicability of views such as Global

Workspace Theory to non-human systems. In Section 4, I discuss four responses to the

Specificity Problem that I term Conservatism, Liberalism, Incrementalism, and Agnosticism,

noting difficulties faced by each. In Section 5, I distinguish between Theory-Heavy and

Theory-Light approaches to consciousness, and suggest that the Specificity Problem draws

much of its force from a commitment to the former. However, I also note that a Theory-Light

approach is unlikely to provide all of the answers we need from a science of non-human

consciousness. Finally, in Section 6, I propose an alternative strategy for addressing the

Specificity Problem that I term the Modest Theoretical Approach, which combines elements

of both Theory-Heavy and Theory-Light approaches in supplementing our theories of

consciousness with theory-independent evidence of consciousness in non-human systems.


2. The theories of consciousness debate

A central goal in the study of consciousness has been to develop an account to explain why

some forms of psychological and neural processing are conscious and others are not.

Frameworks that attempt to offer such explanations are typically known as theories of

consciousness, and the business of assessing and critiquing such theories as the theories of

consciousness debate.1 While the theories of consciousness debate draws on longstanding

philosophical and scientific ideas, as a specialised area of cognitive science involving active

collaboration between philosophers and scientists its modern history began in the 1980s, with

the emergence of frameworks such as Rosenthal’s Higher-Order Thought theory (Rosenthal

1986) and Baars’ Global Workspace account (1988). Since then, a host of other theories have

emerged, with notable examples including Tononi’s Integrated Information Theory or IIT

(Tononi, 2008), biological theories of the kind defended by Ned Block and Victor Lamme

(Block, 2009; Lamme, 2010), and Dehaene and Naccache’s Global Neuronal Workspace

(Dehaene & Naccache, 2001).

The dominant methodology used by researchers in the debate has been to examine the

conditions under which different stimuli are consciously or unconsciously perceived by

normal human subjects. Leading paradigms such as binocular rivalry (Blake & Logothetis,

2002), masking (Kouider & Dehaene, 2007), and dichotic listening (Brancucci & Tommasi,

2011) all rely on the fact that by varying conditions of input or task, it is possible to control

whether subjects can detect and report on the presence of a given stimulus. By identifying the

cognitive and neural dynamics associated with detectability and reportability, we can

construct an account of what distinguishes conscious from unconscious forms of processing,

and use such distinctions to give a more general theory of consciousness.2 Other important

methods used in the science of consciousness include experimental work on patients with

brain damage that causes selective impairment of consciousness (Weiskrantz, 1986), brain-

imaging of patients in conscious and unconscious conditions such as general anaesthesia

(Alkire & Miller, 2005), and the search for neural signatures of consciousness in vegetative

state patients (Sitt et al., 2014).

Considerable controversy remains concerning which if any of the leading theories of

1 A common distinction raised in the theories of consciousness debate is that between phenomenal consciousness (understood as subjective experience) and access consciousness (the ability to deploy information in voluntary action, including report) (Block, 1995). For ease of exposition and to avoid prejudging theoretical questions, I use the term “consciousness” in this paper to refer to subjective experience, while leaving open the possibility that phenomenal consciousness and access consciousness are coreferring terms.

2 However, see Block (2007) for worries about this methodology.


consciousness is to be preferred, and major philosophical debates still rage concerning

questions such as the relationship between consciousness and reportability (Block,

2007). However, the scientific investigation of consciousness has proven highly productive,

giving rise to a wealth of data about different forms of processing in the brain and the

presence of consciousness in vegetative state patients (Casali et al., 2013), and has led to the

development of key theoretical concepts such as Block’s distinction between phenomenal and

access-consciousness (Block 1995).

Despite these advances in understanding the basis of consciousness in human beings,

however, comparatively little progress has been made within the theories of consciousness

debate concerning consciousness in non-human systems. The relative silence of theorists of

consciousness on this question is due largely to two main factors. The first is that the theories

of consciousness debate has been largely (though not exclusively) framed in terms of how we

can identify what makes individual human mental states conscious, rather than the question

of what capacities underpin a general capacity for conscious experience.3 When

neuroscientists and psychologists use techniques like masking, binocular rivalry, or dichotic

listening to control which states a conscious subject is aware of, for example, they are

typically concerned to identify the neural and cognitive properties that make individual states

either conscious or unconscious. Moving from such findings to the question of which

background cognitive capabilities might be required for consciousness in general is not

trivial. A second related reason why relatively little has been said by theorists of

consciousness about consciousness in non-humans is that such experiments overwhelmingly

rely on verbal report to provide clear evidence of the fact that a given stimulus has been

consciously processed. This makes it hard to draw clear lessons from many of the common

experimental paradigms used to explore consciousness in humans for debates about

consciousness in creatures or systems incapable of making such reports.4

In contrast, extensive discussion of non-human consciousness has occurred in

branches of the scientific and philosophical study of consciousness such as cognitive

ethology and the philosophy of animal minds (see, e.g., Andrews, 2014; Bekoff, 2010;

Godfrey-Smith, 2017; Griffin, 2001). Rather than identifying the specific conditions

3 A related philosophical debate in this regard concerns the relative priority of having conscious states and being a conscious subject (‘creature consciousness’). While some have suggested that the latter notion is wholly derivative upon the former (McBride, 1999), others have suggested that there is at least an important conceptual distinction to be drawn between the two (Manson, 2000). I set this issue aside for present purposes.

4 This is not the case for all experimental paradigms in the science of consciousness, of course; see, for example, binocular rivalry experiments in monkeys (Stoerig, Zontanou, & Cowey, 2002) and delayed trace eyeblink conditioning in rabbits (Thompson, Moyer, & Disterhoft, 1996).


associated with conscious processing of perceptual stimuli, this latter approach is typically

concerned to identify broad similarities in behaviour and neural homologies between humans

and different animal species, and much discussion has focused on the methodological

challenges involved in attributions of consciousness (e.g., Buckner, 2013; Dennett, 1996a;

Tye, 2017; see also Section 5 for more discussion).

However, while the theories of consciousness debate may have had little to say about

non-human consciousness thus far, it seems as though it should be possible in principle to use

our theories of state consciousness to make assessments of consciousness in non-human

systems. Different approaches to state consciousness will of course vary in how willing they

are to make such predictions. Those theories that take consciousness to have a still poorly

understood biological basis (Block, 2009; Searle, 1980), for example, might claim that we

simply lack the requisite information about the fundamental mechanisms of consciousness to

make confident ascriptions about state and system consciousness outside humans. Likewise,

theories that identify consciousness with specific properties of neurons might be sceptical or

agnostic about the possibility of consciousness in artificial systems that lack clear analogues

of our neurobiological structures (Crick & Koch, 1990).

However, many of the leading contemporary approaches to consciousness are spelled

out in terms that do not make essential reference to biological or neural substrates, and

instead identify it with one or another form of information-processing such as metacognition

or global broadcast. By the lights of these approaches, consciousness involves a specific

cognitive or psychological mechanism. Consequently, it might be possible at least in

principle to answer the question of which non-human systems are conscious by determining

which of them implement the relevant process. We may of course wish to reserve judgment

about consciousness in systems whose internal architecture is still poorly understood, but

such theories might be expected to endorse at least conditional claims about consciousness in

non-human systems: if the system implements the relevant cognitive processes or

mechanisms, it can be assumed to have conscious states.

The idea that we could use our theories of human consciousness to settle questions about

non-human consciousness in such a direct ‘Theory-Heavy’ way is of course controversial

(see Section 5, below). However, I take it that many of the leading theories – including

Global Workspace Theory (Dehaene & Naccache, 2001), working memory theories

(Baddeley, 1992), higher-order thought theories (Rosenthal, 2005), and the Attention Schema

theory (Graziano, 2013) – lend themselves to this approach, and I will refer to them

collectively as cognitive theories of consciousness. In practice, of course, individual


advocates of these approaches may wish to resist some of the assumptions given above, such

as the applicability of theories of consciousness to non-human systems. However, I would

suggest that there are many defenders of cognitive theories of consciousness who are

implicitly or explicitly sympathetic to the outlook just described, and share the aspiration of

applying their approaches to non-human systems via assessing whether they implement the

proposed mechanisms. Such theorists, I will now argue, face a challenge I term the

Specificity Problem.

3. The Specificity Problem

In short, the Specificity Problem concerns how we can make a principled and appropriate

specification of the fineness of grain of our theory of consciousness. When we are dealing

with neurotypical human subjects, the problem can be largely set aside insofar as we can

normally assume that human subjects are likely to possess the same broad cognitive

architecture and the same core cognitive capacities like higher-order thought and global

informational integration. When we turn to non-human systems, however, the problem

becomes more pressing, and we must ask how far a system’s architecture can diverge from

that of human beings while still being correctly said to implement the proposed mechanism of

consciousness. If we insist on a very fine-grained specification of the theory, we run the risk

of failing to identify conscious organisms with different cognitive architectures. If our

specification is too coarse, however, we are liable to overattribute consciousness to simple

systems.

I should note at the outset that the Specificity Problem as sketched above is not

particularly novel. The broader issue of how to choose the appropriate degree of fineness of

grain in spelling out our theories is faced in various guises by many attempts to identify

psychological or cognitive capacities outside of core human cases.5 Likewise, worries about

finding the right balance between liberalism and chauvinism in our attributions of mentality

and consciousness to non-human systems have been discussed at length, and have given rise

to famous thought experiments such as Block’s famous ‘China Brain’ and ‘Homunculi-

headed robot’ cases (Block, 1978) and Searle’s Chinese Room (Searle, 1980).

However, while these latter arguments targeted broad theoretical conceptions of the

relation between the mental and the physical, the Specificity Problem is targeted at a particular set of theories in the current science of consciousness. Moreover, while no-one has any idea how to go about actually instantiating exact functional duplicates of a human brain using the population of China or via a set of water pumps (Maudlin, 1989), it seems reasonable to imagine that we may soon be able to build systems that implement the kind of abstract cognitive properties that many contemporary theories identify with conscious processing.

5 For other instances of related problems, see, e.g., Buckner (2015) for the case of cognition, Carruthers (2013) for working memory, and Allen and Fortin (2013) and Clayton et al. (2007) for episodic and ‘episodic-like’ memory.

To illustrate the point, consider Dehaene and Naccache’s Global Workspace Theory

(hence, GWT). At times, this theory is spelled out in highly abstract terms, for example in the

claim that “consciousness is just brain-wide information sharing” (Dehaene, 2014: 165), but

elsewhere the theory is stated in terms of the operation of specific neural structures such as

the pyramidal neurons abundant in the human prefrontal cortex that underpin long-range

connectivity across different brain regions (Dehaene, 2014: 173).

Even if we grant that consciousness is multiply realisable by different neural or

physical structures, we can raise questions about how similar in cognitive function a non-

human analogue of a proposed mechanism for consciousness must be in order to make

plausible attributions of consciousness to the system. Consider that the human global

workspace makes information available for a wide variety of cognitive functions such as

report, self-awareness, and long-term planning. Which if any of these are necessary for a

creature to possess a global workspace in the strict sense is by no means obvious. As

Carruthers (2018) puts the question, “[a]re some aspects of global broadcasting more

important, or more relevant to the question of consciousness, than others?”

The challenge is by no means unique to GWT. Consider, for example, the higher-

order approach to consciousness which claims that consciousness essentially involves

awareness of our own mental states. This might take the form of some form of internal quasi-

perceptual awareness of our own mental states (Lycan, 1996) or higher-order thoughts

(Rosenthal, 2005). In order to apply such theories to non-human systems, however, we must

identify what counts as a perception or a thought in the sense required for consciousness, and

here arise complex conceptual questions. Metacognition – broadly understood as a capacity

to internally represent one’s own mental states – comes in a host of different forms and is

found to varying degrees in different non-human systems, ranging from simply differentiating

between internally and externally generated changes in perceptual inputs (Merker, 2005) to

distinguishing one’s own beliefs from those of others (Kaminski, Call, & Tomasello, 2008)

and being able to act on the degree of confidence in one’s own beliefs (Terrace & Son, 2009).

In all cases, the broad cognitive resources available to non-humans to conceptualise and

structure their metacognitive representations are likely to be very different from those


possessed by human beings: buttressed by our linguistic capabilities and our rich repertoire of

concepts, a human can represent herself as, for example, having thoughts with complex

logical structure. Whether such complex forms of propositional thought are required to

possess higher-order thoughts in the sense relevant for consciousness is thus not trivially

answered.

I do not wish to overstate the difficulties in overcoming the Specificity Problem. It is

reasonable to hope that there is some appropriate level of specificity in our theories of

consciousness that allows us to make accurate attributions of consciousness to non-human

systems. The challenge is how we can identify it. A key point to note in this regard is that the

experimental methods used to develop our theories of consciousness in the first place may be

ill-suited to the task. We may be able to determine via techniques such as masking and

binocular rivalry that activation of the global workspace suffices for consciousness in

humans, for example, but the question of what we should count as a global workspace in non-

human cases will require different forms of theorising and investigation.

4. Four solutions to specificity

In Sections 5 and 6, I will outline what I take to be a promising methodology for calibrating

the specificity of our theories of consciousness so as to allow them to be accurately applied to

non-humans. First, however, it will be helpful to map some of the broader theoretical terrain,

and consider some of the options available to us in determining how precisely to frame our

theories of consciousness. With this in mind, I will now consider four possible answers to the

Specificity Problem.

4.1 – Conservatism

One relatively straightforward strategy for applying our theories of consciousness to non-

human systems would be to err on the side of conservatism, and assume – absent clear

evidence to the contrary – that the full range of capacities associated with our proposed

theory of consciousness should be included among the necessary conditions of consciousness

in general. In the case of GWT, for example, globally broadcast information can be employed

in reflective thought, deliberation, self-awareness, and verbal report. For other approaches,

the relevant capacities will be somewhat different; a Higher-Order Thought theorist might

require, for example, that any conscious creature possess a capacity for complex,

propositionally structured, self-reflective thought, while a working memory theory of

consciousness might insist on capacities for visuospatial and phonological imagery and


multimodal encoding of information (Baddeley, 2000).

If we adopt a maximally conservative answer to the Specificity Problem and take

possession of such capabilities to be necessary conditions for consciousness, it seems likely

that few if any non-human animals will qualify as conscious, on the simple basis that they

lack exact analogues of most human cognitive capacities. It should be noted that

comparatively few defenders of leading theories of consciousness explicitly endorse such a

view; Dehaene, for example, states that he would not be surprised if “all mammals, and

probably many species of birds and fish, show evidence of a convergent evolution to the

same sort of workspace.” However, the conservative answer to specificity worries cannot be

dismissed out of hand. The idea that consciousness and higher cognitive functions are closely

linked and that non-human animals may consequently lack consciousness altogether is one

that has found many defenders in the history of philosophy (see, e.g., Descartes, 1649/1970;

Davidson, 1975). It also has at least one prominent contemporary sympathiser in the form of

Peter Carruthers, who notes that while “global workspace theory can provide the basis for a

categorical concept to employ across species, it seems likely that only a small subset of

species (perhaps a set restricted to humans) will qualify as determinately phenomenally

conscious” (Carruthers, 2018).

Nonetheless, few modern readers will be willing to endorse a view that circumscribes

consciousness to such a narrow subset of beings. Such reticence may not have much

evidential force in itself, of course, but even setting aside any sentimental attachments we

may have towards the idea of animal consciousness, there is some implausibility to the idea

that behaviours normally associated with consciousness in human beings – such as those

involved in expressions of pain, or with planning and goal-directed action – might be

accomplished by animals without consciousness. Consider a case where a dog is exposed to

an electric shock, and responds broadly as a human would, making noises of distress and

avoiding the unpleasant stimulus in the future. Absent specific defeaters, such as knowledge

that the dog’s actions are accomplished via dramatically different mechanisms from those of

humans, the simplest explanation for a dog’s pain behaviour is arguably that its actions have

the same cause as that in the human case, namely conscious experience of pain.6 There is

much more to be said here, of course, but arguments such as this hopefully illustrate some of

the implausibility associated with the Conservative position, at least as applied to Global

Workspace Theory.

A further worry for Conservatism is that such a position would be unduly anthropocentric, insofar as it defines consciousness by reference to a specific mechanism (such as global broadcast) and cognitive capacities possessed by humans.

6 For a more detailed development of this point, see Tye’s discussion of ‘Newton’s Rule’ (2017; Ch. 5).

possible that a being might exist with intelligence far surpassing our own, for example, with a

consequent strong intuitive claim to conscious status, yet which lacked one or another of our

specific cognitive endowments. Thus it might have a system of internal representations that

was more map-like than conceptual (Boyle, 2019; Camp, 2007) and thus lack the precise

compositional and syntactic features of human thought. Alternatively, it might possess

mechanisms of attention and informational selection quite different from those presupposed

by the Global Workspace Model on a conservative interpretation. While we should not place

great weight on such hypothetical cases (at least without further elaboration), they do serve to

make vivid the peculiarly anthropocentric view of consciousness assumed by the

Conservative position.

4.2 – Liberalism

An alternative approach to the Specificity Problem would be to identify consciousness with

the cognitive capacities and mechanisms proposed by the theory in question at the greatest

possible degree of generality. For GWT, this might be system-wide information sharing,

while a metacognitive approach to consciousness might identify consciousness with any form

of self-representation. Somewhat more carefully, we might claim that the specific cognitive

implementations of consciousness in humans identified by approaches such as Global

Workspace Theories should be understood as determinate instances of a broader

determinable concept of consciousness spelled out in abstract terms. On this framing, for

example, the specific capacities associated with global broadcast in humans would be just one

way of instantiating the broader property of global information-sharing, allowing that quite

different realisations were possible.

While this view avoids the vices of Conservatism, it is vulnerable to the opposite

charge, namely that it is excessively generous in its attributions of consciousness. Abstract

phenomena like global information-sharing and metacognition are ubiquitous both in nature

and in existing artificial systems. And while some would not balk at the possibility of

attributing consciousness to even simple animals like insects (Barron & Klein, 2016), the idea

that we could construct simple robots using existing technology that satisfy our conditions for

consciousness will strike many as radically implausible (Block, 1978; Searle, 1980).

To emphasise the point, note that the kinds of abstract high-level processes identified

with consciousness under a Liberal interpretation of a given theory can likely be


implemented independently of the broader capacities of a system (such as its drives,

sensorimotor capacities, or general intelligence). Thus imagine that we wish to build a system

with some capacity for metacognition, broadly understood. A computer scientist might begin

with a video camera feeding inputs to a processing unit that builds up a simple labelled model

of its local environment. She would then add an internal monitoring unit with a set of

symbols for representing different states of the visual processing unit, for example encoding

information of the kind [System S, Visual Processing State V]. If she wished to add more

complexity to the system, she might build in a capacity for the system to activate certain

outputs on the basis of this metacognitive information. She could, for example, give it a

‘boredom threshold’, such that if it detected that its visual processing system had been in the

same state for more than a few seconds, it would reorient its sensors.

Note that in spelling out the details of these systems, I have said nothing about their

goals, motor capacities, or inferential capabilities. Indeed, it seems entirely possible that they

could lack these altogether, while nonetheless instantiating some form of metacognition,

liberally construed. Yet in this example, it seems wildly implausible to suppose that the

system might be conscious.
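To underscore how little such a construction requires, the monitoring loop just described can be rendered as a few lines of Python. This is a purely illustrative toy of my own devising; the class name, state labels, and threshold value are inventions for the example, not a real robotic architecture:

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class ToyMetacognitiveSystem:
    """Toy version of the imagined robot: an internal monitor records the
    visual processor's state, and a 'boredom threshold' triggers sensor
    reorientation when that state has been unchanged for too many ticks."""
    boredom_threshold: int = 3  # ticks of unchanged state before reorienting
    log: List[Tuple[str, str]] = field(default_factory=list)
    _last_state: Optional[str] = None
    _unchanged_ticks: int = 0

    def tick(self, visual_state: str) -> str:
        # Metacognitive record of the form [System S, Visual Processing State V].
        self.log.append(("System S", visual_state))
        if visual_state == self._last_state:
            self._unchanged_ticks += 1
        else:
            self._last_state = visual_state
            self._unchanged_ticks = 0
        if self._unchanged_ticks >= self.boredom_threshold:
            self._unchanged_ticks = 0
            return "reorient sensors"  # 'bored': redirect the camera
        return "continue"

robot = ToyMetacognitiveSystem()
actions = [robot.tick(s) for s in ["wall", "wall", "wall", "wall", "door"]]
# The fourth unchanged 'wall' frame trips the boredom threshold.
```

A couple of dozen lines suffice to implement self-representation and metacognition, liberally construed, which is precisely why the Liberal reading threatens to be too permissive.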

This intuition is admittedly not universally shared; there are those who will welcome

a view of consciousness as ubiquitous in natural and artificial systems. Prominent defenders

of Integrated Information Theory, for example, regard consciousness as “likely widespread

among animals, and… found in small amounts even in certain simple systems” (Tononi &

Koch, 2015). Nonetheless, there are difficulties involved in this approach. For one, note that

the concept of consciousness is deeply embedded in many aspects of folk psychology and

intuitive morality. We care about questions like whether a patient is in a vegetative state or

whether fish feel conscious pain not merely out of scientific curiosity, but because the

answers to these questions are of normative significance. Hence on learning that a system is

conscious, many will be inclined to ipso facto attribute to it some degree of moral status.

While many might be open to saying that a simple robot implements attention, learning, or

memory, it will be harder to accept that it possesses moral status.

A further worry for simply ‘biting the bullet’ and adopting a form of Liberalism

applicable to trivial realisations of our theories of consciousness is that it does insufficient

justice to our pretheoretical grip on the kind of systems that might be conscious. For while, as

noted above, there is no guarantee that the correct theory of consciousness will be intuitive, it

seems reasonable to think that, insofar as consciousness is part of our folk psychological

framework, our pretheoretical ideas about it can constitute at least some evidence about the


phenomenon. Indeed, on one view of the role of such commitments, it should be a guiding

constraint in our search for a theory that it systematise as far as possible our pretheoretical

platitudes about which states and systems are conscious (Lewis, 1972). On this view, we

already have a good implicit handle on consciousness, and the job of a theory is to regiment

that understanding so as, for example, to better handle edge cases.7

Even if we do not wish to adopt this specific methodology, however, we will likely

wish to give some weight to pretheoretical judgments about consciousness, not least in

recognition of the role that such judgments play in grounding broader scientific work on the

phenomenon. As discussed in Section 5, below, when determining whether a given animal

species is conscious, even experts place some weight on pretheoretical assumptions about the

connections between consciousness and forms of intelligent behaviour such as flexibility and

learning. Likewise, most of us would feel comfortable dismissing out of hand a theory of

consciousness that claimed that, for example, children were not conscious until their early

teens, or that nematode worms were conscious and dogs were not.

Without taking our pretheoretical priors as sacrosanct, then, most would wish to give

them some evidential role in assessing scientific theories of consciousness. Consequently, to

the extent that a Liberal approach is grossly at odds with these priors – for example, in

asserting that a very simple robot is conscious – it will, ceteris paribus, be less attractive than

one that specifies the precise cognitive basis of consciousness in a way that avoids doing

violence to our pretheoretical attitudes.

4.3 – Incrementalism

We might attempt to steer a course between Conservatism and Liberalism by adopting a third

approach I will term Incrementalism. In short, Incrementalism as I will describe it would use

the kinds of specific capacities identified by Conservatism as an index for ‘full’

consciousness, while allowing that other systems might have some degree of consciousness,

to the extent that they approximate the cognitive mechanisms that a given theory

implicates in human consciousness.

To see how this might work, consider how one might adopt an incrementalist reading

of a metacognitive theory of consciousness. ‘Full consciousness’ might be indexed to the

kinds of complex propositionally-structured higher-order thought found in humans, but even

among creatures such as rats and bees incapable of such thought, we might assign some

degree of consciousness on the grounds that they have a capacity for making metacognitive

7 See also Boyd (1999) on the role of 'programmatic' and 'explanatory' definitions for natural kinds.


judgments of confidence in their decisions (Foote & Crystal, 2007; Loukola, Perry, Coscos,

& Chittka, 2017). For very simple creatures incapable of such judgments, we might attribute

some still lesser but non-zero degree of consciousness on the basis of more basic

metacognitive capacities, such as the ability to distinguish exogenously generated changes in

perceptual inputs from those caused by the creature’s own movements (Merker, 2005).

Incrementalist approaches have obvious attractions insofar as they allow us to cast a

broad net as regards the range of systems that might qualify as conscious, while allowing us

to maintain that in some important sense, a creature such as a lobster or a robot is vastly less

conscious than we are. They also arguably provide a good fit for an evolutionary approach to

consciousness, insofar as they allow that consciousness might have emerged gradually in

forms very different from those we associate with human experience.

A fundamental obstacle for many in endorsing such a view, however, will be the very

notion of degrees of consciousness. While it is certainly true that we undergo conscious

experiences that differ in respect of clarity, vividness, and richness, as for example when

waking up or falling asleep, or under the influence of drugs or alcohol, in all of these

instances it is still the case that there is something it is like (Nagel, 1974) to have the relevant

experience; that is, we are still unambiguously in a conscious state. As Carruthers puts it,

“One can be semi-conscious, or only partly awake. But some of the states one is in when

semi-conscious are fully (unequivocally) phenomenally conscious nonetheless” (see also

Simon, 2017). Such familiar examples, then, do not help us make sense of the idea of what it

would mean for a creature to have some (non-total) degree of consciousness. I do not wish to

rest too much on this point; the question of whether consciousness can come in degrees or

exhibit vagueness is a deeply contested one among philosophers (Papineau, 2003; Rosenthal,

2019). Nonetheless, to the extent that the reader finds degrees of consciousness a mysterious

notion, it will detract from the appeal of an Incrementalist approach.

A further less narrowly conceptual worry for Incrementalism concerns how we might

establish principled criteria for assessing similarity across different non-human systems,

some of which might possess radically different cognitive architectures from our own. If a

system possesses propositionally structured thought, for example, yet lacks sensorimotor

capacities, should we say that it is more or less conscious than one with the opposite

endowments? The point is not that such comparisons are pointless or impossible, but rather

that they can be made only on the basis of some further theoretical perspective on the relative

importance of different cognitive capacities, and this is not one that is likely to be neatly

provided by the specific theory of consciousness we wish to endorse. The basic framework of


GWT, for example, does not by itself specify which of the myriad consumer systems of

globally broadcast information are especially important for consciousness; they come, so to

speak, as a package deal.

There are of course cases involving patients with brain damage in which we can

ascertain that some capacities are absent or dramatically impaired while consciousness seems

to be broadly intact, such as the loss of episodic memory among patients with hippocampal

lesions (see, e.g., Burgess, Maguire, & O’Keefe, 2002). However, such cases do not settle the

question of the degree to which consciousness might be present. We might decide to

prioritise verbal report, for example, and claim that patients still able to report on their

concurrent experiences are fully conscious. But we could equally claim that, shorn of their

ability to hold onto memories for more than a few seconds, patients with severe anterograde

amnesia had a lower degree of consciousness than non-verbal patients who had fully intact

episodic memory. Such questions are of a deep and largely theoretical kind, and seem

unlikely to be answered just by reference to empirical data.

4.4 – Agnosticism

A last approach I wish to briefly consider is strong agnosticism about the application of

cognitive theories of consciousness to non-humans. By this, I mean the view that questions

about non-human consciousness are ill-posed, not because non-human systems are non-

conscious but because the categories of conscious and unconscious simply cannot be

intelligibly applied to them in light of our settled theory of consciousness. This latter position

is the considered position of Carruthers, who, operating within a Global Workspace

framework, states that “[t]here is no extra property of the mind of the animal that accrues to it

by virtue of its similarity or dissimilarity to the human global workspace. This… suggests

that there is no substantial fact of the matter concerning consciousness in nonhuman animals”

(Carruthers, 2018).

Such robust scepticism about non-human consciousness may turn out to be the only

plausible conclusion to reach in light of our broader theoretical commitments. Nonetheless, it

is deeply at odds with our pre-reflective understanding of the significance of consciousness,

both as a psychological phenomenon and as a moral one. As noted above in relation to

Conservatism, consciousness underpins a wide variety of our moral attitudes. If we are about

to undergo an invasive operation, and we request an anaesthetic, we wish to be assured that

we will not experience conscious pain during the procedure. To be told only that the


anaesthetic will remove certain high-level cognitive capacities such as self-reflective thought

or episodic memory would surely not fully satisfy us. Likewise, with regard to our ethical

treatment of non-human animals, the primary question that normally concerns us is whether

the creature in question can undergo conscious suffering.

I would suggest that these deep moral intuitions give us reason to seek out a theory of

consciousness that can capture our prereflective commitments and provide guidance as to

which non-human systems warrant ethical consideration on grounds of capacity for suffering.

While some form of agnosticism – whether of a weak or more robust variety – may turn out

to be the only viable account of non-human consciousness available, all things considered,

we should first ensure that there are no other strategies available to enable us to rigorously

apply the current science of human consciousness to non-human systems.

5. Theory-heavy and theory-light approaches to consciousness

The Specificity Problem strikes me as presenting a serious challenge for the application of

cognitive theories of consciousness to non-human systems. In the remainder of the paper, I

wish to present a proposal for how we can gain some traction on the Specificity Problem.

In order to do so, however, I wish first to draw a distinction between two broad approaches to

the problem of non-human consciousness, namely the ‘Theory-Heavy’ and ‘Theory-Light’

approaches.8

In short, I would suggest that much of the force of the Specificity Problem comes

about from the assumption discussed in Section 2 that we can settle questions about non-

human minds just by reference to our theories of consciousness. On this ‘Theory-Heavy’ way

of approaching the problem, the primary challenge we face in determining which non-human

systems are conscious is to identify the mechanisms that make individual mental states

conscious in humans. With such a theory in hand, we can then turn to comparative

psychology to identify which non-human systems implement the relevant mechanisms and

thus possess at least some conscious states.

As I have suggested, however, such approaches arguably struggle to give a

compelling and principled answer to the Specificity Problem. On a maximally liberal

interpretation of GWT, for example, almost any system with some kind of integrated internal

architecture for information sharing will count as conscious, while on a strict conservative

interpretation, only beings with architectures just like ours will qualify. Needless to say, these

are radical – and radically different – construals of the theory. Some intermediate position

8 This distinction is due to Jonathan Birch (personal communication).


may of course be available, but the challenge is how we can identify such a position just by

the lights of the internal resources of the theory itself.

An alternative strategy for assessing consciousness in non-humans would be a

‘Theory-Light’ approach. Instead of adopting a theory in advance, we might make

assessments of consciousness in non-human systems with reference to a wide range of

pretheoretical and empirical considerations, with the goal of answering the question of which

non-human systems are conscious without direct reference to a specific theory of

consciousness. We might, for example, assess whether a given system is capable of

behaviours naturally associated with consciousness in human beings such as flexible

decision-making and episodic memory. Such assessments could be informed by data from

patients with neural deficits, and we could assess which forms of behaviour can be carried out

independently of consciousness, and which seem to require it. We might also attempt to draw

inferences about non-human consciousness by reasoning from those brain areas

closely implicated in consciousness in human beings to neural homologues in non-human

animals, though of course such methods have limited application when dealing with artificial

systems and creatures such as octopuses that are phylogenetically distant.

These methods are ubiquitous in the existing literature on non-human consciousness

outside of the theories of consciousness debate, and the Theory-Light approach arguably

captures the implicit methodology in much of cognitive ethology and the philosophy of

animal minds. Researchers in this domain normally do not adopt an explicit theory of state

consciousness to begin with, but instead rely on a broad body of neurobiological and

behavioural data to inform tentative judgments about which animals are likely to be

conscious (Barron & Klein, 2016; Bekoff, 2010; Birch, 2017; Griffin, 2001). Consider, for

example, the following methodological statement from Griffin and Speck (2004):

Although no single piece of evidence provides absolute proof of consciousness, [the]

accumulation of strongly suggestive evidence increases significantly the likelihood

that some animals experience at least simple conscious thoughts and feelings.

The Theory-Light approach may seem to offer a salve for the Specificity Problem:

rather than relying on our theories of state consciousness to answer questions about which

non-human systems are conscious, we can instead adopt a broad holistic approach and make

such determinations on the basis of a wide body of evidence.

However, while I agree with Griffin and Speck that such considerations can be

valuable, I would suggest that the Theory-Light approach also has important limitations. In


particular, the Theory-Light Approach is arguably somewhat lacking in resources for

determining which of a system’s states are conscious. While certain forms of complex

behaviour (such as multi-step reasoning and the retention of information over time; see

Dehaene, 2014, Ch.3) may require consciousness, there are many simpler states that may or

may not be consciously experienced, including perceptual states, semantic processing, and

even complex motor actions. The Theory-Light Approach might be able to tell us that certain

complex forms of cognition are very likely to be associated with consciousness in a creature,

but is unlikely to settle which of a creature’s simpler mental states are conscious or

unconscious.

Consider, for example, the phenomenon of electroreception in sharks, rays, and many

other cartilaginous fish. Using dedicated sensors termed the Ampullae of Lorenzini, these

animals can detect electrical stimuli, and can use them to find prey and avoid predators. It is

an open question, much as in Nagel’s famous case of echolocation in bats, whether

electroreception gives rise to distinctive conscious experiences. I would note, however, that

even if we concluded based on sophisticated behaviour or neural homologues that sharks and

rays were conscious organisms, this question would not be resolved. It seems entirely

possible that such forms of sensation operate below the level of consciousness, even if their

outputs can modulate more complex behaviour.

Another worry for the Theory-Light approach concerns cases in which behavioural

evidence for consciousness is lacking. Putting things somewhat crudely, the primary source

of evidence for the Theory-Light approach in determining whether a given non-human

system is conscious will be its capacity to engage in relatively sophisticated behaviour.

However, we should not assume at the outset that all conscious systems have such capacities.

As just noted, consciousness in humans takes many forms, including the kind of simple

qualia associated with perceptual experience or bodily sensation. It seems to me to be an open

possibility, then, that there could be cases of consciousness in relatively simple creatures such

as lancelets (see Section 6, below) that are likely incapable of any of the sophisticated

behaviours that constitute decisive markers of consciousness yet which could potentially have

simple sensory experiences. In the absence of clear neural homologies, a Theory-Light

approach is likely to lack the resources to settle such cases, and thus offers an arguably

incomplete picture of the distribution of consciousness in nature.

6. A Modest Theoretical approach


I have suggested that both Theory-Heavy and Theory-Light approaches are likely to

individually provide us with only a partial answer to questions about consciousness in non-

human systems. I would suggest instead that the best course is to adopt what I term a Modest

Theoretical Approach. In short, the proposal is that we approach questions about non-human

consciousness using our preferred theory of consciousness to guide enquiry, while taking

advantage of the same independent considerations as the Theory-Light approach in order to

tailor its application to a given case. At least in principle, this may allow us to use the insights

and explanatory power of theories of consciousness while at least partially avoiding the

challenges of the Specificity Problem.

This approach would proceed broadly as follows. First, we would establish with

reference to evidence from behaviour and cognition (and, in some cases, to neural homology)

a given system’s strengths and weaknesses as a consciousness candidate (cf. Birch, 2018, on

'sentience candidates'). Second, on the assumption that systems that are phylogenetically

proximate are likely to use similar mechanisms for consciousness, we determine an initial

level of specificity for the application of our theory.9 Finally, we then apply our preferred

theory of consciousness to the system in question to determine whether or not it is

conscious.

To give a better idea of how this may work in practice, it will be helpful to consider a

simple example of consciousness in an artificial system. Let us suppose that we adopt GWT

and wish to investigate consciousness in some future robotic system S. We can imagine for

the sake of argument that S is capable of many cognitive feats – complex perceptual learning,

flexible decision-making, and extended autonomous operation, say – roughly equivalent to

those of some organism we already judge to be conscious. It may also be the case,

however, that S has an internal cognitive architecture rather different from that of a human

being, and lacks any clear analogue of a Global Workspace.

In these circumstances, a Theory-Heavy approach might rule out consciousness on

grounds of architectural dissimilarity, while a Theory-Light approach may take S’s complex

behaviour as providing good evidence of consciousness in its own right. The Modest

Theoretical approach, by contrast, can adopt a more nuanced position incorporating evidence

of both Theory-Light and Theory-Heavy approaches. Recognising that S – by virtue of its

artefactual nature – is unlikely to possess exact analogues of a global workspace, we might

initially adopt a very liberal interpretation of GWT, and attempt to identify whether the

system realised some form of system-wide information integration. If we were to find that S’s

behaviour was accomplished via a non-integrated mechanism such as a lookup table (as in

Searle’s Chinese Room), we might conclude that the system was non-conscious. By contrast,

if we discovered that the system possessed a coarse-grained cognitive architecture broadly

similar to that of the Global Workspace, we might conclude that, in light of its complex

behaviour, it was a strong consciousness candidate.

9 Thanks to Patrick Butlin (personal communication) for stressing this point.

Another case where the Modest Theoretical approach may have special attractions is

that of simpler organisms who may not provide us with good behavioural evidence of

consciousness. Consider, for example, an organism such as a lancelet (also known as

Amphioxus; see Fig. 1). Lancelets are a form of marine chordate organism that obtains

nutrition from filtering plankton. In their larval stage, they swim freely, but upon adulthood

burrow into the seabed and remain largely stationary. Reproduction occurs via external

fertilization, and seems to be triggered primarily through external cues relating to temperature

and time of day. Though lancelets possess sense organs (including a single forward-facing eye) and a

brain consisting of several thousand neurons, their neuroanatomy is significantly less

complex than that of even the simplest vertebrates. Their range of behaviour is also

correspondingly limited, and even during their more mobile larval stage lancelets engage in a

few fairly simple behavioural ‘routines’ such as swimming upwards in the water column to

feed or swimming quickly away from danger.

Fig.1. An adult lancelet.

Needless to say, there is little reason to expect lancelets to display sophisticated or

versatile behaviours that might provide evidence of consciousness; they do not need to hunt

food or find mates, for example. Hence if we were to rely exclusively on a Theory-Light


approach that used intelligent behaviour to establish the presence of consciousness, we would

likely have to remain agnostic about lancelet consciousness or give a negative answer.

This latter option may of course be correct, and is the considered judgement of at least

one leading lancelet biologist (Lacalli, 2015). However, there are reasons to at least briefly

consider the possibility of consciousness in lancelets. For one, lancelets are chordates like

humans, and we are more closely related to them than we are to intelligent creatures such as

octopuses and bees. Although the lancelet nervous system is quite different from that of full

vertebrates, as noted, there are important homologies between their nervous system and our

own, and they possess a seeming analogue of the vertebrate mid-brain, for example

possessing a hypothalamus and a region closely resembling the reticular formation (Feinberg

& Mallatt, 2016, Ch.3). One part of the hypothalamus, the postinfundibular neuropile, seems

to serve a particularly important role in integrating sensory information picked up by the

lancelet’s sensory systems and in governing the selection of the appropriate

behavioural routine.

As noted above, lancelets may be poor consciousness candidates, but ones that I

would suggest we cannot trivially rule out. Yet on the assumption that behaviour alone will

give us little clue as to whether lancelets might have some simple form of consciousness, we

might ask how we could settle the question. Here, the Modest Theoretical approach again

arguably has an advantage over the Theory-Light approach insofar as it can bring to bear

considerations from cognitive architecture and the theories of consciousness debate. Working

within the framework of Global Workspace Theory, for example, we might assess whether

lancelets have some equivalent to the Global Workspace. Though this might seem

implausible, Dehaene himself notes that “the architecture of the conscious workspace plays

an essential role in facilitating the exchange of information among brain areas. Thus

consciousness is a useful device that is likely to have emerged a long time ago in evolution”

(Dehaene, 2014: 244). With this in mind, we might attempt to establish whether the firing of

the postinfundibular neuropile in lancelets showed similar dynamics to those associated with

global broadcast in humans, for example in exhibiting a clear distinction between sensory

inputs that triggered ‘global ignition’ and those that did not (Dehaene, 2014: 130-6). A

positive result, were it to obtain, might count as important evidence for lancelet

consciousness, evidence that would be hard to obtain without reference to some specific

theory of consciousness.

Global Workspace Theory is of course only one among many competing theories of

human consciousness, and thus any determinations it makes in edge cases such as these may


be challenged by proponents of different theories. While this may, as matters stand, limit the

power of our theories in making cast-iron judgments about non-human consciousness, it also

points to a further reason to take new developments in the debate seriously. Imagine for

example that powerful experimental evidence were to emerge suggesting that Global

Broadcast was not necessary for consciousness in humans. Such a development might have

considerable relevance for assessments of consciousness in non-humans, leading us to take

more seriously the possibility of consciousness in systems that lacked any analogue of the

Global Workspace. To give another example, note that the apparent lack of mechanisms for

attentional amplification of sensory stimulation in fish has led some to doubt that fish can

feel conscious pain (Key, 2015). How much weight we give to this consideration must

surely be moderated by evidence within the theories of consciousness debate concerning

whether human consciousness depends on mechanisms for attentional amplification of

stimuli. If, as some have argued (Koch & Tsuchiya, 2007), consciousness can occur without

any attention at all, the case against conscious pain in fish is considerably weakened. We are

best placed to take full advantage of this sort of evidence, however, only if, as I have

argued, we give some place for theories of consciousness alongside broader behavioural

evidence within our methodological framework.

I hope that the foregoing discussion makes clear some of the advantages of this

Modest Theoretical Approach. By giving some weight to intelligent behaviour independent of

theoretical commitments, we can avoid ruling out consciousness in systems that are

independently plausible consciousness candidates yet which lack conservative analogues of

our proposed architecture for consciousness, as in the case of our imagined robot. However,

by keeping theoretical criteria in mind, we might also be able to identify consciousness in

systems like lancelets that do not display complex behaviour, at least if their internal

architecture is sufficiently similar to that which we associate with consciousness in human

beings. Moreover, in all of these cases, if we determine based on both independent and

theoretical considerations that a system is likely to be conscious, we can use our specific

theory of consciousness at an appropriate degree of specificity to determine which of the

system’s states are conscious.

The above discussion is just a sketch of the methodology endorsed by the Modest

Theoretical Approach, of course, and there are important details of application still to be

filled in. Even at this initial stage, however, it strikes me as offering a promising strategy for

reaping the benefits of two somewhat independent lines of consciousness research and

answering a wide range of questions that might not be settled via Theory-Heavy or

Theory-Light approaches. It also, I would suggest, allows us to give at least a partial answer to

the Specificity Problem faced by Theory-Heavy approaches. In cases where we have

independent evidence of consciousness, we will have reason to adopt a more liberal

interpretation of our theory of consciousness, especially in dealing with artefactual systems or

those at some phylogenetic remove. By contrast, in cases where we have little independent

behavioural evidence for consciousness, and good phylogenetic reason to expect the

mechanisms of consciousness, if present, to be broadly similar to those of humans, we will be

justified in adopting a more conservative interpretation.

7. Conclusion

This paper has aimed to flesh out an important challenge for the application of theories of

consciousness to non-human systems, namely the Specificity Problem. I considered four

different direct approaches to the problem, namely Conservatism, Liberalism,

Incrementalism, and Agnosticism, noting the difficulties they face. I went on to suggest that

the Specificity Problem was threatening primarily insofar as we adopt a Theory-Heavy

approach to consciousness, and thus expect our theory alone to settle questions about

consciousness in non-human systems. After noting the limitations of a Theory-Light

approach, I proposed a Modest Theoretical Approach that advocates for the supplementation

of our theories of consciousness by appeal to the wide range of evidence for non-human

consciousness derived from sources such as animal behaviour research and comparative

psychology. Via such considerations, I argued, we might be able to apply our theories in a

way that allows for principled flexibility as to the appropriate level of specificity to be

adopted in a given case. Such an approach also allows us, I suggested, to remain sensitive to

the fact that there may be conscious creatures with cognitive architectures quite different

from our own, and potentially to identify consciousness in organisms whose behaviour alone

may provide few clues as to its presence.

REFERENCES

Alkire, M. T., & Miller, J. (2005). General anesthesia and the neural

correlates of consciousness. Progress in Brain Research, 150, 229–

244. https://doi.org/10.1016/S0079-6123(05)50017-7

Andrews, K. (2014). The Animal Mind: An Introduction to the Philosophy of

Animal Cognition. Routledge.

Baars, B. J. (1988). A Cognitive Theory of Consciousness. Cambridge

University Press.

Baddeley, A. (1992). Consciousness and working memory. Consciousness

and Cognition, 1(1), 3–6.

Baddeley, A. D. (2000). The episodic buffer: A new component of working

memory? Trends in Cognitive Sciences, 4(11), 417–423.

Barron, A. B., & Klein, C. (2016). What insects can tell us about the origins

of consciousness. Proceedings of the National Academy of Sciences,

113, 4900–4908.

Bekoff, M. (2010). The Emotional Lives of Animals: A Leading Scientist

Explores Animal Joy, Sorrow, and Empathy — and Why They Matter.

New World Library.

Birch, J. (2017). Animal Sentience and the Precautionary Principle. Animal

Sentience, 2, 16(1).

Birch, J. (2018). Dimensions of Animal Sentience: Grain, Valence, Unity

and Flow. Unpublished.

Blake, R., & Logothetis, N. K. (2002). Visual competition. Nature Reviews

Neuroscience, 3(1), 13–21. https://doi.org/10.1038/nrn701

Block, N. (1978). Troubles with Functionalism. Minnesota Studies in the

Philosophy of Science, 9, 261–325.

Block, N. (1995). On a Confusion about a Function of Consciousness.

Behavioral and Brain Sciences, 18, 227–287.

Block, N. (2007). Consciousness, Accessibility, and the Mesh between

Psychology and Neuroscience. Behavioral and Brain Sciences, 30(5),

481–548.

Block, N. (2009). Comparing the Major Theories of Consciousness. In M.

Gazzaniga (Ed.), The Cognitive Neurosciences IV (pp. 1111–1123).

Boyd, R. N. (1999). Kinds, Complexity and Multiple Realization: Comments

on Millikan’s “Historical Kinds and the Special Sciences.”

Philosophical Studies: An International Journal for Philosophy in the

Analytic Tradition, 95(1/2), 67–98.

Boyle, A. (2019). Mapping the Minds of Others.

https://doi.org/10.17863/CAM.37430

Brancucci, A., & Tommasi, L. (2011). “Binaural rivalry”: dichotic listening

as a tool for the investigation of the neural correlate of

consciousness. Brain and Cognition, 76(2), 218–224.

https://doi.org/10.1016/j.bandc.2011.02.007

Buckner, C. (2013). Morgan’s Canon, Meet Hume’s Dictum: Avoiding

Anthropofabulation in Cross-Species Comparisons. Biology and

Philosophy, 28(5), 853–871. https://doi.org/10.1007/s10539-013-

9376-0

Burgess, N., Maguire, E. A., & O’Keefe, J. (2002). The Human Hippocampus

and Spatial and Episodic Memory. Neuron, 35(4), 625–641.

https://doi.org/10.1016/S0896-6273(02)00830-9

Camp, E. (2007). Thinking with maps. Philosophical Perspectives, 21,

145–182.

Carruthers, P. (2018). Comparative psychology without consciousness.

Consciousness and Cognition, 63, 47–60.

Casali, A. G., Gosseries, O., Rosanova, M., Boly, M., Sarasso, S., Casali, K.

R., … Massimini, M. (2013). A Theoretically Based Index of

Consciousness Independent of Sensory Processing and Behavior.

Science Translational Medicine, 5(198), 198ra105.

https://doi.org/10.1126/scitranslmed.3006294

Chalmers, D. J. (1995). Facing up to the problem of consciousness. Journal

of Consciousness Studies, 2(3), 200–219.

Crick, F., & Koch, C. (1990). Towards a neurobiological theory of

consciousness. Seminars in the Neurosciences, 2, 263–275.

Davidson, D. (1975). Thought and Talk. In S. Guttenplan (Ed.), Mind and

Language (pp. 7–23). Oxford: Oxford University Press.

Dehaene, S. (2014). Consciousness and the Brain: Deciphering How the

Brain Codes Our Thoughts. Viking Press.

Dehaene, S., & Naccache, L. (2001). Towards a cognitive neuroscience of

consciousness: Basic evidence and a workspace framework.

Cognition, 79, 1–37.

Dennett, D. C. (1996). Kinds of Minds. Basic Books.

Descartes, R., & Kenny, A. J. P. (1970). Philosophical Letters. Blackwell.

Feinberg, T. E., & Mallatt, J. M. (2016). The Ancient Origins of

Consciousness: How the Brain Created Experience. MIT Press.

Foote, A. L., & Crystal, J. D. (2007). Metacognition in the rat. Current

Biology, 17(6), 551–555.

https://doi.org/10.1016/j.cub.2007.01.061

Godfrey-Smith, P. (2017). Other Minds: The Octopus and the Evolution of

Intelligent Life. HarperCollins UK.

Graziano, M. S. A. (2013). Consciousness and the Social Brain. OUP USA.

Griffin, D. R. (2001). Animal Minds: Beyond Cognition to Consciousness.

University of Chicago Press.

Griffin, D. R., & Speck, G. B. (2004). New evidence of animal

consciousness. Animal Cognition, 7(1), 5–18.

https://doi.org/10.1007/s10071-003-0203-x

Kaminski, J., Call, J., & Tomasello, M. (2008). Chimpanzees know what

others know, but not what they believe. Cognition, 109(2), 224–234.

https://doi.org/10.1016/j.cognition.2008.08.010

Key, B. (2015). Fish do not feel pain and its implications for understanding

phenomenal consciousness. Biology & Philosophy, 30(2), 149–165.

https://doi.org/10.1007/s10539-014-9469-4

Koch, C., & Tsuchiya, N. (2007). Attention and consciousness: Two distinct

brain processes. Trends in Cognitive Sciences, 11(1), 16–22.

https://doi.org/10.1016/j.tics.2006.10.012

Kouider, S., & Dehaene, S. (2007). Levels of processing during non-

conscious perception: A critical review of visual masking.

Philosophical Transactions of the Royal Society B, 362, 857–875.

Lacalli, T. (2015). Perspective—The Origin of Vertebrate Neural

Organization. Retrieved from

https://www.oxfordscholarship.com/view/10.1093/acprof:oso/9780199682201.001.0001/acprof-9780199682201-chapter-55

Lamme, V. A. (2010). How neuroscience will change our view on

consciousness. Cognitive Neuroscience, 1(3), 204–220.

Lewis, D. (1972). Psychophysical and Theoretical Identifications.

Australasian Journal of Philosophy, 50(3), 249–258.

Loukola, O. J., Perry, C. J., Coscos, L., & Chittka, L. (2017). Bumblebees

show cognitive flexibility by improving on an observed complex

behavior. Science, 355(6327), 833–836.

https://doi.org/10.1126/science.aag2360

Lycan, W. G. (1996). Consciousness and Experience. MIT Press.

Manson, N. (2000). State consciousness and creature consciousness: A

real distinction. Philosophical Psychology, 13(3), 405–410.

https://doi.org/10.1080/09515080050128196

Maudlin, T. (1989). Computation and Consciousness. Journal of Philosophy,

86(8), 407. https://doi.org/10.2307/2026650

McBride, R. (1999). Consciousness and the state/transitive/creature

distinction. Philosophical Psychology, 12(2), 181–196.

https://doi.org/10.1080/095150899105864

Merker, B. (2005). The liabilities of mobility: A selection pressure for the

transition to consciousness in animal evolution. Consciousness and

Cognition, 14(1), 89–114.

Nagel, T. (1974). What is it like to be a bat? Philosophical Review, 83, 435–

456.

Papineau, D. (2003). Could There Be A Science of Consciousness?

Philosophical Issues, 13(1), 205–220. https://doi.org/10.1111/1533-

6077.00012

Rosenthal, D. (2005). Consciousness and Mind. Oxford University Press.

Rosenthal, D. (2019). Consciousness and confidence. Neuropsychologia,

128, 255–265.

https://doi.org/10.1016/j.neuropsychologia.2018.01.018

Rosenthal, D. M. (1986). Two Concepts of Consciousness. Philosophical

Studies, 49, 329–359.

Searle, J. R. (1980). Minds, Brains and Programs. Behavioral and Brain

Sciences, 3(3), 417–457.

Simon, J. A. (2017). Vagueness and zombies: Why ‘phenomenally

conscious’ has no borderline cases. Philosophical Studies, 174(8),

2105–2123. https://doi.org/10.1007/s11098-016-0790-4

Singer, P. (1989). All Animals Are Equal. In T. Regan & P. Singer (Eds.),

Animal Rights and Human Obligations (pp. 215–226). Oxford

University Press.

Sitt, J. D., King, J.-R., El Karoui, I., Rohaut, B., Faugeras, F., Gramfort, A., …

Naccache, L. (2014). Large scale screening of neural signatures of

consciousness in patients in a vegetative or minimally conscious

state. Brain: A Journal of Neurology, 137(Pt 8), 2258–2270.

https://doi.org/10.1093/brain/awu141

Stoerig, P., Zontanou, A., & Cowey, A. (2002). Aware or unaware:

Assessment of cortical blindness in four men and a monkey.

Cerebral Cortex, 12(6), 565–574.

https://doi.org/10.1093/cercor/12.6.565

Terrace, H. S., & Son, L. K. (2009). Comparative metacognition. Current

Opinion in Neurobiology, 19(1), 67–74.

https://doi.org/10.1016/j.conb.2009.06.004

Thompson, L. T., Moyer, J. R., & Disterhoft, J. F. (1996). Trace eyeblink

conditioning in rabbits demonstrates heterogeneity of learning

ability both between and within age groups. Neurobiology of Aging,

17(4), 619–629. https://doi.org/10.1016/0197-4580(96)00026-7

Tononi, G. (2008). Consciousness as Integrated Information: A Provisional

Manifesto. The Biological Bulletin, 215(3), 216–242.

Tononi, G., & Koch, C. (2015). Consciousness: Here, there and

everywhere? Philosophical Transactions of the Royal Society B:

Biological Sciences, 370.

Tye, M. (2017). Tense Bees and Shell-shocked Crabs: Are Animals

Conscious? Oxford University Press.

Weiskrantz, L. (1986). Blindsight: A Case Study and Implications. Oxford

University Press.