Moving Forward on the Problem of Consciousness

David J. Chalmers

Department of Philosophy
University of Arizona
Tucson, AZ 85721

chalmers@arizona.edu

This paper is a response to the commentaries in the Journal of Consciousness Studies on my paper "Facing Up to the Problem of Consciousness." I have written it so that it can be understood independently of the commentaries, however, and so that it provides a detailed elaboration and extension of some of the ideas in the original paper.

The 26 commentaries were by Bernard Baars, Douglas Bilodeau, Patricia Churchland, Tom Clark, C.J.S. Clarke, Francis Crick & Christof Koch, Daniel Dennett, Stuart Hameroff & Roger Penrose, Valerie Hardcastle, David Hodgson, Piet Hut & Roger Shepard, Benjamin Libet, E.J. Lowe, Bruce MacLennan, Colin McGinn, Eugene Mills, Kieron O'Hara & Tom Scutt, Mark Price, William Robinson, Gregg Rosenberg, William Seager, Jonathan Shear, Henry Stapp, Max Velmans, and Richard Warner. The complete symposium has been collected into a book, Explaining Consciousness: The Hard Problem, published by MIT Press.

1 INTRODUCTION

I am very grateful to all the contributors to this symposium for their thoughtful comments. The various papers reflect a wide range of approaches and of views, yielding a rich snapshot of the current state of play on the problem of consciousness. There are some interesting criticisms of my point of view, which I hope to address in this reply in a way that clarifies the central issues at hand, and there are also a number of intriguing positive proposals for confronting the problem. I am honored to have provided an opportunity to bring such a thought-provoking collection of ideas together.

When I wrote my paper, I had no idea that it would be subject to such close analysis. That may be a good thing, as all the hedges, qualifications, and citations I would have added if I had known might have made the paper close to unreadable, or at any rate twice the size. But it also means that the paper - intended as a crisp presentation of some central issues, mostly for non-philosophers - skates quickly over some subtleties and has less flesh on its bones than it might. I will try to flesh out the picture in this piece, while still keeping the discussion at a non-technical level. A more detailed presentation can be found in my book The Conscious Mind, to which I will occasionally point in this response.

Because of the unexpected influence of the "hard problem" formulation, I have occasionally received far more credit than I deserve. So let me state the obvious: the reason the formulation has caught on is that everyone knew what the hard problem was all along. The label just makes it a little harder to avoid. Any number of thinkers in the recent and distant past - including a number of contributors to this symposium - have recognized the particular difficulties of explaining consciousness and have tried to face up to them in various ways. All my paper really contributes is a catchy name, a minor reformulation of philosophically familiar points, and a specific approach to dealing with them.

The papers in the symposium are divided fairly evenly into those that take issue with aspects of my analysis, and those that provide positive approaches of their own. I will concentrate mostly on those in the first class, though I will make a few comments on those in the second. A quick glance at the relevant papers may give the appearance of much disagreement and a sprawling landscape of mutually contradictory points of view; but I think a closer look reveals a much more coherent picture. Once a few minor misunderstandings and verbal disagreements are cleared up, and the various contributions are aligned, one is left with a small number of central "choice points" on which the main disagreements turn. I hope that my reply helps to clarify this landscape.

The reply has three main parts. In the first I consider the critiques of a generally reductive or "deflationary" orientation; in the second I consider those of a generally nonreductive orientation; and in the third I make some comments on the various positive proposals.

2 DEFLATIONARY CRITIQUES

Recall the main conceptual distinction between the easy and hard problems. The easy problems - explaining discrimination, integration, accessibility, internal monitoring, reportability, and so on - all concern the performance of various functions. For these phenomena, once we have explained how the relevant functions are performed, we have explained what needs to be explained. The hard problem, by contrast, is not a problem about how functions are performed. For any given function that we explain, it remains a nontrivial further question: why is the performance of this function associated with conscious experience? The sort of functional explanation that is suited to answering the easy problems is therefore not automatically suited to answering the hard problem.

There are two quite different ways in which a materialist might respond to this challenge. The type-A materialist denies that there is a "hard problem" distinct from the "easy" problems; the type-B materialist accepts (explicitly or implicitly) that there is a distinct problem, but argues that it can be accommodated within a materialist framework all the same. Both of these strategies are taken by contributors to this symposium. I will discuss the first strategy in the next two sections, and the second strategy after that.

2.1 Deflationary analogies

The type-A materialist, more precisely, denies that there is any phenomenon that needs explaining, over and above explaining the various functions: once we have explained how the functions are performed, we have thereby explained everything. Sometimes type-A materialism is expressed by denying that consciousness exists; more often, it is expressed by claiming that consciousness may exist, but only if the term "consciousness" is defined as something like "reportability", or some other functional capacity. Either way, it is asserted that there is no interesting fact about the mind, conceptually distinct from the functional facts, that needs to be accommodated in our theories. Once we have explained how the functions are performed, that is that.

Note that type-A materialism is not merely the view that consciousness is identical to some function, or that it plays a functional role, or that explaining the functions will help us explain consciousness. It is the much stronger view that there is not even a distinct question of consciousness: once we know about the functions that a system performs, we thereby know everything interesting there is to know. Type-A materialism subsumes philosophical positions such as eliminativism, behaviorism, analytic functionalism, and others, but it does not include positions (such as those embraced by Clark and Hardcastle) that rely on an a posteriori identity between consciousness and some physical/functional property. Positions of the latter sort accept that there is a real phenomenon to be accounted for, conceptually distinct from the performance of functions (the a posteriori identity ties together a priori distinct concepts), and therefore count as type-B materialism. Type-A materialism, by contrast, denies that there is a conceptually distinct explanatory target at all.

This is an extremely counterintuitive position. At first glance, it seems to simply deny a manifest fact about us. But it deserves to be taken seriously: after all, counterintuitive theories are not unknown in science and philosophy. On the other hand, to establish a counterintuitive position, strong arguments are needed. And to establish this position - that there is really nothing else to explain - one might think that extraordinarily strong arguments are needed. So what arguments do its proponents provide?

Perhaps the most common strategy for a type-A materialist is to deflate the "hard problem" by using analogies to other domains, where talk of such a problem would be misguided. Thus Dennett imagines a vitalist arguing about the hard problem of "life", or a neuroscientist arguing about the hard problem of "perception". Similarly, Paul Churchland (1996) imagines a nineteenth century philosopher worrying about the hard problem of "light", and Patricia Churchland brings up an analogy involving "heat". In all these cases, we are to suppose, someone might once have thought that more needed explaining than structure and function; but in each case, science has proved them wrong. So perhaps the argument about consciousness is no better.

This sort of argument cannot bear much weight, however. Pointing out that analogous arguments do not work in other domains is no news: the whole point of anti-reductionist arguments about consciousness is that there is a disanalogy between the problem of consciousness and problems in other domains. As for the claim that analogous arguments in such domains might once have been plausible, this strikes me as something of a convenient myth: in the other domains, it is more or less obvious that structure and function are what need explaining, at least once any experiential aspects are left aside, and one would be hard pressed to find a substantial body of people who ever argued otherwise.

When it comes to the problem of life, for example, it is just obvious that what needs explaining is structure and function: How does a living system self-organize? How does it adapt to its environment? How does it reproduce? Even the vitalists recognized this central point: their driving question was always "How could a mere physical system perform these complex functions?", not "Why are these functions accompanied by life?" It is no accident that Dennett's version of a vitalist is "imaginary". There is no distinct "hard problem" of life, and there never was one, even for vitalists.

In general, when faced with the challenge "explain X", we need to ask: what are the phenomena in the vicinity of X that need explaining, and how might we explain them? In the case of life, what cries out for explanation are such phenomena as reproduction, adaptation, metabolism, self-sustenance, and so on: all complex functions. There is not even a plausible candidate for a further sort of property of life that needs explaining (leaving aside consciousness itself), and indeed there never was. In the case of consciousness, on the other hand, the manifest phenomena that need explaining are such things as discrimination, reportability, integration (the functions), and experience. So this analogy does not even get off the ground.

Or take Churchland's example of heat. Here, what cries out for explanation are such things as: heat's ability to expand metals, the causation of fire, heat transmission between substances, the experience of hotness. All but the last of these are clearly functions, and it is these functions that reductive explanations of heat explain. The existence of such functions is entailed by the microphysical story about heat: in any world that is physically identical to ours, such functions will automatically be present.

If someone were to claim that something were "left out" by reductive explanations of heat (as Churchland suggests they might), or of light (as Paul Churchland suggests they might), what might that something be? The only phenomenon for which the suggestion would be even remotely plausible is our subjective experience of light and of hotness. The molecular theory of heat does not explain the sensation of heat; and the electromagnetic theory of light does not explain what it is like to see. And understandably so: the physicists explaining heat and light have quite reasonably deferred the explanation of their experiential manifestations until the time when we have a reasonable theory of consciousness. One need not explain everything at once. But with consciousness itself, subjective experience is precisely what is at issue, so we cannot defer the question in the same way. Thus once again, the analogy is no help to a reductionist.

In his article "The Rediscovery of Light" (1996), Paul Churchland suggests that parallel antireductionist arguments could have been constructed for the phenomenon of "luminescence", and might have been found plausible at the time. I have my doubts about that plausibility, but in any case it is striking that his arguments about luminescence all depend on intuitions about the conscious experience of light. His hypothetical advocate of a "hard problem" about light appeals to light's "visibility" and the "visual point of view"; his advocate of a "knowledge argument" about light appeals to blind Mary who has never had the experience of seeing; and the advocate of a "zombie" argument appeals to the conceivability of a universe physically just like ours, but in which everything is dark. That the first two arguments trade on intuitions about experience is obvious; and even for the third, it is clear on a moment's reflection that the only way such a universe might make sense is as a universe in which the same electromagnetic transmission goes on, but in which no-one has the experience of seeing.

Churchland might insist that by "luminescence" he means something quite independent of experience, which physical accounts still do not explain: but then the obvious reply is that there is no good reason to believe in luminescence in the first place. Light's structural, functional, and experiential manifestations exhaust the phenomena that cry out for explanation, and the phenomena in which we have any reason to believe. By contrast, conscious experience presents itself as a phenomenon to be explained, and cannot be eliminated in the same way.

A similar critique applies to such examples as Dennett's "cuteness" (what needs explaining is the structure and functioning of cute people, and our experience and judgment of them as cute), his "perception" (the functioning of perceptual systems plus the experience of perception), and so on. In all such cases, either the analogous arguments are not even prima facie plausible (as in the case of life), or at best, they gain their plausibility through pointing to experiential properties that reductive accounts omit (as in the cases of perception and light). So they can do no work at all in arguing for reductionism about experience.

Indeed, similar remarks can be made about any phenomenon that we observe in the external world. When we observe external objects, we observe their structure and function; that's all. Such observations give no reason to postulate any new class of properties, except insofar as they explain structure and function; so there can be no analog of a "hard problem" here. Even if further properties of these objects existed, we could have no access to them, as our external access is physically mediated: such properties would lie on the other side of an unbridgeable epistemic divide. Consciousness uniquely escapes these arguments by lying at the center of our epistemic universe, rather than at a distance. In this case alone, we can have access to something other than structure and function.

2.2 Is explaining the functions enough?

So, analogies don't help. To have any chance of making the case, a type-A materialist needs to argue that for consciousness, as for life, the functions are all that need explaining. Perhaps some strong, subtle, and substantive argument can be given, establishing that once we have explained the functions, we have automatically explained everything. If a sound argument could be given for this surprising conclusion, it would provide as valid a resolution of the hard problem as any.

Is there any compelling, non-question-begging argument for this conclusion? The key word, of course, is "non-question-begging". Often, a proponent will simply assert that functions are all that need explaining, or will argue in a way that subtly assumes this position at some point. But that is clearly unsatisfactory. Prima facie, there is very good reason to believe that the phenomena a theory of consciousness must account for include not just discrimination, integration, report, and such functions, but also experience, and prima facie, there is good reason to believe that the question of explaining experience is distinct from the questions about explaining the various functions. Such prima facie intuitions can be overturned, but to do so requires very solid and substantial argument. Otherwise, the problem is being "resolved" simply by placing one's head in the sand.

Upon examining the materialist papers in this symposium, one finds that such arguments are surprisingly hard to find. Indeed, despite their use of various analogies, very few of the contributors seem willing to come right out and say that in the case of consciousness, the functions are all that need explaining. Only Dennett embraces this position explicitly, and even he does not spend much time arguing for it. But he does spend about a paragraph making the case: presumably this paragraph bears the weight of his piece, once the trimmings are stripped away. So it is this paragraph that we should examine.

Dennett's argument here, interestingly enough, is an appeal to phenomenology. He examines his own phenomenology, and tells us that he finds nothing other than functions that need explaining. The manifest phenomena that need explaining are his reactions and his abilities; nothing else even presents itself as needing to be explained.

This is daringly close to a simple denial - one is tempted to agree that it might be a good account of Dennett's phenomenology - and it raises immediate questions. For a start, it is far from obvious that even all the items on Dennett's list - "feelings of foreboding", "fantasies", "delight and dismay" - are purely functional matters. To assert without argument that all that needs to be explained about such things are the associated functions seems to beg the crucial question at issue. And if we leave these controversial cases aside, Dennett's list seems to be a systematically incomplete list of what needs to be explained in explaining consciousness. One's "ability to be moved to tears" and "blithe disregard of perceptual details" are striking phenomena, but they are far from the most obvious phenomena that I (at least) find when I introspect. Much more obvious are the experience of emotion and the phenomenal visual field themselves; and nothing Dennett says gives us reason to believe that these do not need to be explained, or that explaining the associated functions will explain them.

What might be going on here? Perhaps the key lies in what Dennett has elsewhere described as the foundation of his philosophy: "third-person absolutism". If one takes the third-person perspective on oneself - viewing oneself from the outside, so to speak - these reactions and abilities are no doubt the main focus of what one sees. But the hard problem is about explaining the view from the first-person perspective. So to shift perspectives like this - even to shift to a third-person perspective on one's first-person perspective, which is one of Dennett's favorite moves - is again to assume that what needs explaining are such functional matters as reactions and reports, and so is again to argue in a circle.

Dennett suggests "subtract the functions and nothing is left". Again, I can see no reason to accept this, but in any case the argument seems to have the wrong form. An analogy suggested by Gregg Rosenberg is useful here. Color has properties of hue, saturation, and brightness. It is plausible that if one "subtracts" hue from a color, nothing phenomenologically significant is left, but this certainly doesn't imply that color is nothing but hue. So even if Dennett could argue that function was somehow required for experience (in the same way that hue is required for color), this would fall a long way short of showing that function is all that has to be explained.

A slight flavor of non-circular argument is hinted at by Dennett's suggestion: "I wouldn't know what I was thinking about if I couldn't identify them by their functional differentia". This tantalizing sentence suggests various reconstructions, but all the reconstructions that I can find fall short of making the case. If the idea is that functional role is essential to the (subpersonal) process of identification, this falls short of establishing that functioning is essential to the experiences themselves, let alone that functioning is all there is to the experiences. If the idea is rather that function is all we have access to at the personal level, this seems false, and seems to beg the question against the intuitive view that we have knowledge of intrinsic features of experience. But if Dennett can elaborate this into a substantial argument, that would be a very useful service.

In his paper, Dennett challenges me to provide "independent" evidence (presumably behavioral or functional evidence) for the "postulation" of experience. But this is to miss the point: conscious experience is not "postulated" to explain other phenomena in turn; rather, it is a phenomenon to be explained in its own right. And if it turns out that it cannot be explained in terms of more basic entities, then it must be taken as irreducible, just as happens with such categories as space and time. Again, Dennett's "challenge" presupposes that the only explananda that count are functions.[*]

*[[[Tangentially: I would be interested to see Dennett's version of the "independent" evidence that leads physicists to "introduce" the fundamental categories of space and time. It seems to me that the relevant evidence is spatiotemporal through and through, just as the evidence for experience is experiential through and through.]]]

Dennett might respond that I, equally, do not give arguments for the position that something more than functions needs to be explained. And there would be some justice here: while I do argue at length for my conclusions, all these arguments take the existence of consciousness for granted, where the relevant concept of consciousness is explicitly distinguished from functional concepts such as discrimination, integration, reaction, and report. Dennett presumably disputes this starting point: he thinks that the only sense in which people are conscious is a sense in which consciousness is defined as reportability, as a reactive disposition, or as some other functional concept.

But let us be clear on the dialectic. It is prima facie obvious to most people that there is a further phenomenon here: in informal surveys, the large majority of respondents (even at Tufts!) indicate that they think something more than functions needs explaining. Dennett himself - faced with the results of such a survey, perhaps intending to deflate it - has accepted that there is at least a prima facie case that something more than functions need to be explained; and he has often stated how "radical" and "counterintuitive" his position is. So it is clear that the default assumption is that there is a further problem of explanation; to establish otherwise requires significant and substantial argument.

I would welcome such arguments, in the ongoing attempt to clarify the lay of the land. The challenge for those such as Dennett is to make the nature of these arguments truly clear. I do not think it a worthless project - the hard problem is so hard that we should welcome all attempts at a resolution - but it is clear that anyone trying to make such an argument is facing an uphill battle.[*]

*[[[One might look to Dennett's book Consciousness Explained for non-circular arguments, but even here such arguments for the relevant conclusion are hard to find. The plausible attacks on a "place in a brain where it all comes together" do nothing to remove the hard problem. The book's reliance on "heterophenomenology" (verbal reports) as the central source of data occasionally slips into an unargued assumption that such reports are all that need explaining, especially in the discussion of "real seeming", which in effect assumes that the only "seemings" that need explaining are dispositions to react and report. I think there may be a substantial argument implicit in the "Orwell/Stalin" discussion - essentially taking materialism as a premise and arguing that if materialism is true then the functional facts exhaust all the facts - but even this is equivalent to "if something more than functions needs explaining, then materialism cannot explain it", and I would not disagree. At best, Dennett's arguments rule out a middle-ground "Cartesian materialism"; the hard problem remains as hard as ever.]]]

In Churchland's paper, this sort of argument is even harder to find. Indeed, it is not always clear who Churchland is arguing with: she does not address the central arguments in the keynote paper at any point, and she often seems to be arguing with someone with views quite different from mine. Her arguments have premises that are consistently more plausible than Dennett's, but they do not come close to establishing the relevant conclusion. I include Churchland as a type-A materialist as she suggests that there is no principled difference between the "hard" and "easy" problems, but her position is sufficiently inexplicit that it is hard to know for sure.

Churchland asks for a systematic difference between the "easy" and "hard" problems, not mentioning the detailed analysis of this difference in my paper. The difference is, of course, that the easy problems are all clearly problems of explaining how functions are performed, and the hard problem is not. Perhaps Churchland, like Dennett, would deny this; unlike Dennett, however, she never addresses the question directly. If she truly holds that the functions (discrimination, integration, access, control, report, ...) are all that we need to account for, then clearly some explicit argument is required. If she does not, then the relevant distinction is present right there.

Churchland notes correctly that phenomena such as attention have an experiential component. I am not sure how this is meant to deflate the problem of experience. Vision has an experiential component, too; that's the "hard" part. We can give neural or cognitive accounts of the functions associated with these phenomena, but it remains unclear why the experiential aspect should accompany those functions. This isn't to deny that it does accompany them. There are deep and intimate links between the "hard" and "easy" phenomena, of which I note some in my paper, and more in my book. So when Churchland criticizes somebody's proposal for ruling out such links, it is not my proposal she is addressing.

Perhaps the problem is that Churchland sets up the "easy"/"hard" distinction as the distinction between the problems of (e.g.) attention, learning, and short-term memory on one hand, and the problem of consciousness on the other. This is not quite my way of doing things: I set up the distinction as that between explaining how functions are performed and explaining subjective experience. It is plausible that the notions of "memory", "attention", and perhaps even "consciousness" subsume elements both of functioning and of subjective experience, as Churchland in effect points out - so there are "easy" and "hard" aspects of memory, attention, and consciousness. To keep things clear, it is best to set up the distinction directly.

Churchland is also right to note that it is not always obvious just where experience is present and where it is not, especially in fringe cases. But it is a philosophical truism that we should not let the existence of fringe cases blind us to the facts about clear cases. One goal of a theory of experience will be to clarify the status of those fringe cases; in the meantime, in cases where experience is clearly present, it is as hard to explain as ever.

And Churchland is also quite right that there is much about the "easy" problems that we do not understand. "Easy" is of course a term of art, and nothing substantial in my arguments rests on it. Churchland's point would be a relevant rebuttal to an argument that rested on it, or to an argument from ignorance, but my argument is nothing of the sort. Facts of the form "we don't know" or "I can't imagine" play no explicit or implicit role in my arguments. Rather, the key is the conceptual point: the problem of consciousness is not a problem about how functions are performed. No matter how much we find out about the mechanisms that perform these functions, the basic explanatory point is unaffected.

Contrast Churchland's case of sensorimotor integration. It's true that we do not know much about the mechanisms here. But we do know what we need to do to explain sensorimotor integration: we need to explain how information from different sensory areas is brought together and put to use in the control of action. This is a problem about how functions are performed: it is guaranteed that once we find the mechanism that performs the function and explain how it works, we will have explained sensorimotor integration. But for consciousness, this guarantee fails: it is not just functions that need to be explained. So the research program that promises so much on the easy problems needs to be augmented where the hard problem is concerned.

So Churchland either needs to argue that functions are all that need to be explained, or she needs to face up to the disanalogy and the explanatory problem directly. Homilies about the progress of science do not carry much weight in this context. We have seen that "normal" (function-explaining) science in the neuroscientific mode has limitations that have to be confronted, not ignored; and if one relies instead on a gesture in the direction of a major conceptual revolution sometime in the future, then one is in effect conceding that the hard problem is very hard indeed.

Proponents of the "no problem" view sometimes like to suggest that their view is supported by the results of modern science, but all the science that I know is quite neutral here: I have never seen any experimental result that implies that functions are all that need to be explained. Rather, this view seems to be rooted in a philosophical claim. This claim does not seem to be supported either by empirical evidence or by non-circular argument; at the end of the day, it may be that the position is grounded instead in some sort of unargued axiom, such as Dennett's third-person absolutism. And to anyone who is impressed by the first-person phenomenology of consciousness, such an axiom will always beg the crucial questions. The position reduces to an unargued denial.

This is not to say that type-A materialism cannot be argued for at all. There are a few sophisticated arguments for such a position in the literature (for example, Shoemaker 1975 and White 1986), but even these ultimately come down to "consider the alternatives", isolating the difficulties that one gets into if one accepts that there is a further phenomenon that needs explaining. There is no doubt that these difficulties (both ontological and epistemological) are considerable; life would be a lot easier if the hard problem did not exist. But I think these difficulties are solvable; and in any case, to deny the problem because of the difficulties has the flavor of solution by decree. So while I think such arguments need to be taken very seriously, they do little to actually remove the problem. To truly make the problem go away, one needs positive and non-circular arguments for the counterintuitive conclusion that the functions are all that need explaining; and such arguments are very hard to find.

Of course, type-A materialism is unlikely to disappear any time soon, and we will probably just have to get used to the fact that there is a basic division in the field: that between those who think the "easy" problems are the only problems, and those who think that subjective experience needs to be explained as well. We can therefore expect two quite distinct sorts of theories of consciousness: those which explain the functions and then say "that's all", and those which take on an extra burden. In the end, the most progress will probably come from internal advances in the respective research programs, rather than from the endless battle between the two. So beyond a certain point in the argument, theorists in these camps might just agree to disagree and get on with their respective projects. This way, everyone can move forward.

2.3 Type-B materialism

Type-A materialism offers a clean and consistent way to be a materialist, but the cost is that it seems not to take consciousness seriously. Type-B materialism tries to get the best of both worlds. The type-B materialist accepts that there is a phenomenon that needs to be accounted for, conceptually distinct from the performance of functions, but holds that the phenomenon can still be explained within a materialist framework. This is surely the most attractive position at a first glance. It promises to avoid the extremes of both hard-line reductionism and property dualism, which respectively threaten to deny the phenomenon and to radically expand our ontology.

I was attracted to type-B materialism for many years myself, until I came to the conclusion that it simply cannot work. The basic reason for this is simple. Physical theories are ultimately specified in terms of structure and dynamics: they are cast in terms of basic physical structures, and principles specifying how these structures change over time. Structure and dynamics at a low level can combine in all sorts of interesting ways to explain the structure and function of high-level systems; but still, structure and function only ever add up to more structure and function. In most domains, this is quite enough, as we have seen, as structure and function are all that need to be explained. But when it comes to consciousness, something other than structure and function needs to be accounted for. To get there, an explanation needs a further ingredient.

The type-A materialist gets around this problem by asserting that for consciousness, too, structure and function are all that need to be explained. But this route is not open to the type-B materialist. Given that we have accepted that something more than structure and function needs to be accounted for, we are forced to the conclusion that the "further question" will arise for any account of physical processing: why is this structure and function accompanied by conscious experience? To answer this question, we need to supplement our story about structure and function with something else; and in doing so we move beyond truly reductive explanation.

So while many people think they can reject a Dennett-style "no problem" view and still expect a purely physical explanation of consciousness one day, this view seems untenable for systematic reasons. An account of physical processing may provide the bulk of a theory of human consciousness; but whatever account of processing we give, the vital step - the step where we move from facts about structure and function to facts about experience - will always be an extra step, requiring some substantial principle to bridge the gap. To justify this step, we need a new component in our theories.

There is one route to type-B materialism that one might think remains open; this is the route taken by Clark and Hardcastle. These two are clearly realists about phenomenal consciousness, and they are equally clearly materialists. They reconcile the two by embracing an empirical identity between conscious experiences and physical processes. Although consciousness is not equivalent a priori to a structural or functional property (as type-A materialists might suggest), the two are nevertheless identical a posteriori. We establish this identity through a series of correlations: once we find that consciousness and certain physical processes are correlated, the best hypothesis is that the two are identical. And this postulated identity bridges the explanatory gap.

This is a popular approach, but it has a number of problems. The problems are all rooted in the same place: it makes the identity an explanatorily primitive fact about the world. That is, the fact that certain physical/functional states are conscious states is taken as a brute fact about nature, not itself to be further explained. But the only such explanatorily primitive relationships found elsewhere in nature are fundamental laws; indeed, one might argue that this bruteness is precisely the mark of a fundamental law. In postulating an explanatorily primitive "identity", one is trying to get something for nothing: all of the explanatory work of a fundamental law, at none of the ontological cost. We should be suspicious of such free lunches; and indeed, I think there is something deeply wrong with the idea.

To evaluate the truth of materialism, what matters is whether all facts follow from the physical facts. As I argue at length in my book, in most domains it seems that they certainly do. The low-level facts about physical entities determine the facts about physical structure and function at all levels with conceptual necessity, which is enough to determine the facts about chemistry, about biology, and so on. The facts about genes "fall out" of the facts about the structure and function of DNA, for example. A geneticist does not need a primitive Genetic Identity Hypothesis to cross the divide - "what do you know, whenever there is some unit that encodes and transmits hereditary characteristics, there is a gene!". Rather, to encode and transmit such characteristics is roughly all it means to be a gene; so there is an a priori implication from the facts about the structure and functioning of DNA in a reproductive context to the facts about genes. Even Mary in her black-and-white room could figure out the facts about genes, in principle, if she were equipped with the facts about DNA and the concepts involved.

But the facts about consciousness do not just fall out of the facts about the structure and functioning of neural processes, at least once type-A materialism is rejected. As usual, there is a further question - "why are these processes accompanied by consciousness?" - and merely repeating the story about the physical processes does not provide an answer. If we have rejected type-A materialism, there can be no conceptual implication from one to the other.

Clark and Hardcastle's answer is to augment one's account of physical processes with an "identity hypothesis" (Clark) or an "identity statement" (Hardcastle), asserting that consciousness is identical to some physical or functional state. Now, it is certainly true that if we augment an account of physical processes with an identity statement of this form, the existence of consciousness can be derived; and with a sufficiently detailed and systematic identity statement, detailed facts about consciousness might be derived. But the question is now: what is the relationship between the physical facts and the identity statement itself?

Neither Clark nor Hardcastle gives us any reason to think that the identity statement follows from the physical facts. When answering the question "why does this physical process give rise to consciousness?", their answer is always "because consciousness and the physical process are identical", where the latter statement is something of a primitive. It is inferred to explain the correlation between physical processes and consciousness in the actual world, but no attempt is made to explain or derive it in turn. And without it, one does not come close to explaining the existence of consciousness.

This identity statement therefore has a very strange status indeed. It is a fact about the world that cannot be derived from the physical facts, and therefore has to be taken as axiomatic. No other "identity statement" above the level of fundamental physics has this status. The fact that DNA is a gene can be straightforwardly derived from the physical facts, as can the fact that H2O is water, given only that one has a grasp of the concepts involved. Papineau (1996) argues that identities are not the sort of thing that one explains; I think this is wrong, but in any case they are certainly the kind of thing that one can derive. Even the fact that Samuel Clemens is Mark Twain, to use Papineau's example, could be derived in principle from the physical facts by one who possesses the relevant concepts. But even if one possesses the concept of consciousness, the identity involving consciousness is not derivable from the physical facts.

(It might be objected that if one possessed an a posteriori concept of consciousness - on which consciousness was identified with some neural process, for example - then the facts about consciousness could be derived straightforwardly. But this would be cheating: one would be building in the identity to derive the identity. In all other cases - genes, water, and so on - one can derive the high-level facts from the low-level facts using the a priori concept alone. One does not need the identity between genes and DNA to derive the fact that DNA is a gene, for example: all one needs is a grasp of the meaning of "gene". That is, in all the other cases, the implication from micro to macro is a priori.)

We might call this the "magic bullet" version of the identity theory: it treats identity as a magic bullet which one can use to kill off all our explanatory problems by drawing disparate phenomena together. But identities do not work like this: elsewhere, they have to be earned. That is, an identity requires an actual or possible explanation of how it is that two phenomena are identical. ("No identification without explanation.") One earns the DNA-gene identity, for example, by showing how DNA has all the properties that are required to qualify as a gene. The original identity theorists in the philosophy of mind (Place 1956; Smart 1959) understood this point well. They consequently buttressed their account with a "topic-neutral" analysis of experiential concepts, asserting that all it means to be an orange sensation is to be the sort of state caused by orange things, and so on; this suffers from all the problems of type-A materialism, but at least it recognizes what is required for their thesis to be true. The type-B materialist, by contrast, posits an identification in place of an explanation.

Indeed, type-B materialism seems to give up on the reductive explanation of consciousness altogether. The very fact that it needs to appeal to an explanatorily primitive axiom to bridge the gap shows that consciousness is not being wholly explained in terms of physical processes: a primitive bridging principle is carrying the central part of the burden, just as it does on the sort of theory I advocate. Calling this principle an "identity" may save the letter of materialism, but it does not save the spirit. When it comes to issues of explanation, this position is just as nonreductive as mine.

Elsewhere in science, this sort of explanatorily primitive link is found only in fundamental laws. In fact, this primitiveness is just what makes such laws fundamental. We explain complex data in terms of underlying principles, we explain those principles in terms of simpler principles, and when we can explain no further we declare a principle fundamental. The same should hold here: by positing a fundamental law, we recognize the price of explanatory primitiveness, rather than pretending that everything is business as usual.

One can draw out the problems in other ways. For example, once it is noted that there is no conceptually necessary link from physical facts to phenomenal facts, it is clear that the idea of a physically identical world without consciousness is internally consistent. (By comparison, a physically identical world without life, or without genes, or without water is not even remotely conceivable.) So the fact that physical processes go along with consciousness seems to be a further fact about our world. To use a common philosophical metaphor: God could have created our world without consciousness, so he had to do extra work to put consciousness in.

Type-B materialists sometimes try to get around this by appealing to Saul Kripke's treatment of a posteriori necessity: such a world is said to be conceivable but not "metaphysically possible", precisely because consciousness is identical to a physical process. (Hardcastle embraces this line, and Clark says something similar). But as I argue in my book, this misunderstands the roots of a posteriori necessity: rather than ruling conceivable worlds impossible, a posteriori constraints simply cause worlds to be redescribed, and the problem returns as strongly as ever in a slightly different form. The issues are technical, but I think it is now well-established that Kripkean a posteriori necessity cannot save materialism here. To declare that the relevant worlds are all "metaphysically impossible", one would have to appeal instead to a far stronger notion of necessity which would put inexplicable constraints on the space of possible worlds. This is a notion in which we have no reason to believe.

So the problems of type-B materialism can be expressed both on intuitive and technical grounds. On the most intuitive grounds: it is a solution by stipulation, which "solves" the problem only by asserting that brain states are conscious states, without explaining how this can be. On slightly more technical grounds: it requires an appeal to a primitive axiom identifying consciousness with a physical process, where this identity is not derivable from the physical facts and is thus unlike any identity statement found elsewhere. On the most technical grounds: it either rests on an invalid appeal to Kripke's a posteriori necessity or requires a new and stronger notion of metaphysical necessity in which there is no reason to believe.

On to some specific points. Clark suggests that the explanatory gap arises only from assuming that consciousness and physical processes are distinct in the first place, and he faults my use of phrases such as "arises from" for begging that question. I think this misses the point: one can phrase the question just as well by asking "Why are certain physical systems conscious?", or even "Why is there something it is like to engage in certain processes?". Such questions are just as pressing, and clearly do not beg any questions against identity.

In fact, ontological assumptions are irrelevant to posing the explanatory question. All that matters is the conceptual distinction between structural/functional concepts and consciousness, a distinction that Clark explicitly accepts. (His talk of "correlation" makes this even more clear, as does his observation that it could turn out that the functions do not correlate with experience.) Given that it is not a priori that the performance of these functions should be conscious, it follows that an explanation of the functions is not ipso facto an explanation of consciousness, and we need to supplement the explanation with some further a posteriori component. Clark's "identity hypothesis" provides this extra component; but its primitive nature makes it clear that a wholly reductive explanation is not on offer. Indeed, Levine (1983), who introduced the term "explanatory gap", embraces an "identity" picture just like this, but he is under no illusion that he is providing a reductive explanation.

Hardcastle offers her own diagnosis of the roots of the debate, painting a picture of "committed materialists" who can't take the issue seriously, and "committed skeptics" who are entirely sure that materialism is false. I think this picture is far too bleak: in my experience the majority of people are more than a little torn over these issues, and there is plenty of common ground. In particular, I think Hardcastle does materialists a disservice: to characterize materialism as a "prior and fundamental commitment" is to make it into a religion. Materialism is an a posteriori doctrine, held by most because it explains so much in so many domains. But precisely because of this a posteriori character, its truth stands or falls with how well it can explain the phenomena. So materialists cannot just circle the wagons and plead a prior commitment; they have to face up to the problems directly.

In any case, I think the basic intuitive divide in the field is not that between "materialists" and "skeptics", but that between those who think there is a phenomenon that needs explaining and those who think there is not: that is, between type-A materialists and the rest. The issue between Dennett and myself, for example, comes down to some basic intuitions about first-person phenomenology. But once one accepts that there is a phenomenon that needs explaining - as Hardcastle clearly does - the issues are more straightforwardly debatable. In particular, the problems of the type-B position are straightforwardly philosophical, rooted in its need for explanatorily primitive identities and brute metaphysical necessities.

Indeed, I think Hardcastle's defense of her identities makes straightforwardly philosophical missteps. Against someone who raises an explanatory gap question ("why couldn't these physical processes have gone on without consciousness?"), she responds with an analogy, pointing to a water-mysterian who asks "why couldn't water have been made of something else?", and a life-mysterian who asks "why couldn't living things be made from something other than DNA?". But such questions are disanalogous and irrelevant, as they get the direction of explanation backward. In reductive explanation, the direction is always from micro to macro, not vice versa. So even if life could have been made of something else, this does not block the DNA explanation of life in the slightest. What matters is that in these cases, the low-level facts imply the high-level facts, with no primitive identity statements required. But this is not so in the case of consciousness; so Hardcastle requires a primitive identity of an entirely different kind, for which analogies cannot help.

For a truly consistent type-B materialism, one would have to face up to these problems directly, rather than trying to slide over them. One would have to embrace explanatorily primitive identities that are logically independent of the physical facts and thus quite unlike any identities found elsewhere in science. One would have to embrace inexplicable metaphysical necessities that are far stronger than any a posteriori necessities found elsewhere in philosophy. And one would have to make a case that such postulates are a reasonable thing to believe in. I am skeptical about whether this is possible, but it is at least an interesting challenge.

But even if type-B materialism is accepted, the explanatory picture one ends up with looks far more like my naturalistic dualism than a standard materialism. One will have given up on trying to explain consciousness in terms of physical processes alone, and will instead be relying on primitive bridging principles. One will have to infer these bridging principles from systematic regularities between physical processes and phenomenological data, where the latter play an ineliminable role. One will presumably want to systematize and simplify these bridging principles as much as possible. (If there are to be brute identities in the metaphysics of the world, one hopes they are at least simple!) The only difference will be that these primitive principles will be called "identities" rather than "laws".

I think it makes far more sense to regard such primitive principles as laws, but if someone insists on using the term "identity", after a while I will stop arguing with them. In the search for a theory of consciousness - the truly interesting question - their theories will have the same shape as mine. The epistemology will be the same, the methodology will be the same, the explanatory relations between principles and data will be the same, and all will be quite unlike those on standard materialist theories in other domains. The names may be different, but for all explanatory purposes, consciousness might as well be irreducible.

2.4 Other deflationary approaches

A different sort of "deflationary" approach is taken by O'Hara and Scutt. Their paper has the juicy title of "There is no hard problem of consciousness", suggesting a Dennett-like reductionism, but the substance of their paper suggests quite the opposite. In fact, they hold that the hard problem is so hard that we should ignore it for now, and work on the easy problems instead. Then perhaps all will become clear, in a decade or a century or two.

Now there is not much doubt that progress on the easy problems is much faster than progress on the hard problem, but O'Hara and Scutt's policy suggestion seems quite redundant. Researchers working on the easy problems already outnumber those working on the hard problem by at least a hundred to one, so there is not much danger of the world suddenly falling into unproductive navel-gazing. But if O'Hara and Scutt are suggesting that no-one should be working on the hard problem, this seems to move beyond pragmatism to defeatism. Granted that the hard problem is hard, it nevertheless seems quite reasonable for a community to invest a fraction of its resources into trying to solve it. After all, we do not know when a solution to the hard problem will come. Even if we do not solve it immediately, it may well be that the partial understanding that comes through searching for a solution will help us in the further search, in our work on the easy problems, and in our understanding of ourselves. It is in the scientific spirit to try.

Sociological issues aside, the substantive issue arising from O'Hara and Scutt's article is that of whether there is any chance of progress on the hard problem any time soon. O'Hara and Scutt do not really provide much argument against this possibility; they simply reiterate that the hard problem is very hard, that we are not assured of a solution, and that scientific progress has often made hard problems seem easier. All this tells us that the prospects for a solution are uncertain, but it does not tell us that they are nonexistent.

In my article I advocated a positive methodology for facing up to the hard problem. Pay careful attention both to physical processing and to phenomenology; find systematic regularities between the two; work down to the simpler principles which explain these regularities in turn; and ultimately explain the connection in terms of a simple set of fundamental laws. O'Hara and Scutt offer no reason to believe that this must fail. They reserve most of their criticism for reductive methods such as those of Crick and Edelman, but that criticism does not apply here. They very briefly criticize a specific suggestion of mine, saying "it is impossible to understand how information can have a phenomenal aspect". They do not substantiate this remark (for my part, I do not find it impossible to understand at all, as long as we realize that a fundamental law rather than a reduction is being invoked) but in any case the criticism seems quite specific to my theory. O'Hara and Scutt give us no reason to believe that a fundamental theory could not be formulated and understood.

I should also clarify a common misunderstanding. O'Hara and Scutt attribute to me the view that understanding the easy problems does not help at all in understanding the hard problem, and others have attributed to me the view that neurobiology has nothing to contribute in addressing the hard problem. I did not make these claims, and do not agree with them. What I do say is that any account of the easy problems, and indeed any neurobiological or cognitive account, will be incomplete, so something more is needed for a solution to the hard problem. But this is not to say that they will play no role in a solution at all. I think it is obvious that empirical work has enriched our understanding of conscious experience a great deal, and I expect that it will continue to do so. A final theory of human consciousness will almost certainly lie in a combination of processing details and psychophysical principles: only using both together will the facts about experience be explained.

So I agree with O'Hara and Scutt that research on the easy problems is of the utmost importance: it is here that the meat and potatoes of consciousness research resides, and attention to this sort of work can help even a philosopher in staying grounded. But to ignore the hard problem entirely would be futile, as understanding conscious experience per se is the raison d'être of the field. Some of us will continue to focus on it directly, and even those working on the easy problems will do well to keep the hard problem in sight out of the corner of their eyes. To paraphrase Kant and stretch things a bit, we might say: hard without easy is empty; easy without hard is blind.

Another proposal that could be construed as "deflationary" comes from Price, who suggests that much of the problem lies in our heads. We should not expect to feel as if we understand consciousness, but this may be no big deal. There are similar explanatory gaps accompanying every causal nexus ("why does event A cause event B?"); it's just that in most cases we have gotten used to them. The explanatory gap in the case of consciousness is analogous, but we are not yet as used to it.

I agree with Price's analogy, but I think it ultimately supports my view of the problem. Why are causal nexi accompanied by explanatory gaps? Precisely because of their contingency (as Price says, there is no "a priori necessity" to them), which is in turn due to the brute contingency of fundamental laws. If we ask "why did pressing the remote control cause the TV set to turn on?", we might get a partial answer by appealing to principles of electromagnetic transmission, along with the circuitry of the two objects, ultimately seeing how this causal chain is the natural product of the underlying dynamics of electromagnetism (for example) as it applies to the material in the vicinity. But this answer is only partial, as we have no answer to the question of "why do those fundamental principles hold?". Those principles are apparently just a brutely contingent fact about the world, and this contingency is inherited by the causal chain at the macroscopic level.

If Price is right that the explanatory gap between brain and consciousness is analogous, then this suggests that the gap is due to some contingency in the connecting principles, because of underlying brutely contingent fundamental laws. Which of course is just what I suggest. We have here an inter-level relationship that could have been otherwise, just as Price points to intra-level relationships in physics that could have been otherwise. Either way, this arbitrariness is ultimately grounded at the point where explanation stops: the invocation of fundamental laws.

It is worth noting that for other inter-level relationships - that between biochemistry and life, for example, or between statistical mechanics and thermodynamics - there is no explanatory gap analogous to the brain-consciousness gap. The reason is precisely that the high-level facts in these cases are necessitated by the low-level facts. The low-level facts themselves may be contingent, but there is no further contingency in the inter-level bridge. (Indeed, the inter-level relationship in these cases is not really causation but constitution.) Because there is no contingency here, the relationship between the levels is transparent to our understanding. Contrapositively, the lack of transparency in the brain-consciousness case is precisely due to the contingency of the psychophysical bridge.

In any case, Price's analogy between the brain-consciousness relation and ordinary causal relations is helpful in seeing why belief in an explanatory gap need not lead one to mysterianism. Rather than elevating the explanatory gap to a sui generis mystery, we recognize that it is of the sort that is ubiquitous elsewhere in science, and especially in fundamental physics. This case is unusual only in that here, the gap is found in an inter-level rather than an intra-level relationship; but the same strategy that works for intra-level relationships works here. Once we introduce fundamental psychophysical laws into our picture of nature, the explanatory gap has itself been explained: it is only to be expected, given that nature is the way it is.

A final view that might be considered "deflationary" has been discussed by McGinn, not so much in his contribution to this symposium as in an earlier paper (McGinn 1989) and most explicitly in his review of my book (McGinn 1996). On McGinn's view, the explanatory gap also arises for psychological reasons, but his reasons differ from Price's. He suggests that there may be a conceptual implication from physical facts to facts about consciousness, which would be a priori for a being that possessed the relevant concepts; but we do not and cannot possess the concepts, due to our cognitive limitations, so we can never grasp such an implication. On this view, materialism turns out to be true, but we can never grasp the theory that reveals its truth.

This intriguing view seems at first glance to offer an attractive alternative to both dualism and hard-line reductionism, but in the end, I am not sure how much of an alternative it is. The problem lies in the concept (or concepts) which support the implication from physical to phenomenal facts. What sort of concept could this be? If it is a structural/functional concept, then it will suffer from the same conceptual gap with experiential concepts as any other structural/functional concept (the existence of a gap here is independent of specific details about structure and function, after all). If it is not a structural/functional concept, then there appear to be principled reasons why it cannot be entailed by the physical story about the world, as physics deals only in structure and function.

So we are still faced with the problem that structure and function adds up only to more structure and function. This claim holds true for systematic reasons quite independent of considerations about cognitive limitations, and I doubt that McGinn would deny it. So it seems that McGinn needs to assert either (1) that explaining experience is just a problem of explaining structure and function, if only we could grasp this fact, or (2) that something more than structure and function is present in fundamental physics. The first option would make McGinn's position remarkably like Dennett's (the only difference being that Dennett holds that only some of us are limited in this way!), and the second position would fall into the category of expanding fundamental physics, which I will consider below. Either way, once made specific, this view is subject to the pros and cons of the specific position to which it is assimilated. So in the end, it may not open up a distinct metaphysical option.

3 NONREDUCTIVE ANALYSES

3.1 Conceptual foundations

I will now address some critiques from those who take nonreductive positions. It appears that I staked out some middle ground; having discussed objections from my right, it is now time for objections from the left. The intermediate nature of my position may stem from an inclination toward simplicity and toward science. Reductive materialism yields a compellingly simple view of the world in many ways, and even if it does not work in the case of consciousness, I have at least tried to preserve as many of its benefits as possible. So where reductionists think that I have overestimated the difficulty of the hard problem, some nonreductionists think that I may have underestimated it, or alternatively that I have underestimated the difficulty of the "easy" problems.

The latter position - that the hard problem is hard, but that explaining discrimination, reportability, and so on is just as hard - is taken by Lowe and Hodgson, for two apparently different reasons. Hodgson thinks these problems are hard because a physical account cannot even explain how the functions are performed; Lowe thinks they are hard because they require explaining more than the performance of functions. (It is possible that Lowe intends to make both points.) I will address Lowe's position first, and save Hodgson's for my discussion of interactionism and epiphenomenalism.

Why say that explaining reportability, discrimination, and so on requires explaining more than the performance of functions? Lowe says this because he holds that true "reports" and "discriminations" can be made only in systems which have the capacity for thought, which in turn requires consciousness. If externally indistinguishable functions were performed in a system without consciousness, they would qualify as "reports" (and so on) only in a "jejune" sense. So an account of non-jejune reportability requires explaining more than functions.

I have some sympathy with Lowe's position here; in particular, I find it plausible that there is an intimate relationship between consciousness and thought (Lowe suggests that I think otherwise, but I don't think that suggestion can be found in my article). But it seems to me that the issue about "reportability" and so on is largely verbal. Does a sound uttered by a functionally identical zombie really qualify as a "report"? The answer is surely: yes in one sense of "report", and no in another. If Lowe objects to calling it a "report" in any sense at all, one can simply call it a "pseudo-report". Then the easy problems are those of explaining pseudo-reportability, pseudo-discrimination, and the like. Nothing important to my article changes; the distinction between the easy problems (explaining functions) and the hard problem (explaining conscious mentality) is as strong as ever.

Lowe might reply that reportability, so construed, is not a problem of consciousness at all. Again I am sympathetic, but again I think that this is a verbal issue. Plenty of people who take functional approaches to these problems take themselves to be explaining aspects of consciousness in some sense; and there is little point getting into territorial arguments about a word. It's more productive to accept the characterization - if someone holds that "consciousness" has some functionally definable senses, I will not argue with them - but to point out the key problems of consciousness that are being skipped over all the same.

The same goes for Lowe's concerns, shared by Velmans and Libet, about my use of the term "awareness" for a functionally defined concept distinct from that of full-blown consciousness. Again, a word is just a word. As long as we are clear that "awareness" is being used in a stipulative sense, the substantive issues should be clear. In particular, there is certainly no implication that humans are "aware" only in this attenuated sense, as Lowe somehow infers; and it is hard to see how this terminological choice helps blur the "function/sentience" distinction, as Velmans suggests. If anything, explicitly separating consciousness and awareness makes the distinction harder to avoid. Nevertheless, it is clear that enough people are uneasy about the terminology that it is unlikely to catch on universally. Perhaps another term can play the role, although I suspect that any word choice that dimly suggests mentality would meet similar opposition from some. It's a pity that there is no universal term for this central functional concept; in the meantime I will go on using the term "awareness", with the stipulative nature of the usage always made clear.

The exact relationship between consciousness and "intentional" (or semantic) mental states such as belief, thought, and understanding raises deep and subtle questions that I did not intend to address in my article. Lowe seems to have gotten the impression of a straightforward functionalism about these aspects of mentality, but such an impression was not intended. I am torn on the question of intentionality, being impressed on one hand by its phenomenological aspects, and on the other hand being struck by the potential for functional analyses of specific intentional contents. In my book, I try to hew to a neutral line on these deep questions, noting that there is a "deflationary" construal of concepts such as "belief" so that even a zombie might be said to have beliefs (pseudo-beliefs, if you prefer), and an inflationary construal such that true belief requires consciousness. Over time I am becoming more sympathetic with the second version: I think there may be something in the intuition that consciousness is the primary source of meaning, so that intentional content may be grounded in phenomenal content, as Lowe puts it. But I think the matter is far from cut and dried, and deserves a lengthy treatment in its own right. For now, phenomenal content is my primary concern.

3.2 The roots of the hard problem

Robinson, McGinn, and Warner offer proposals about why the hard problem is hard. These are not direct critiques of my view, for the most part, but they fall into the general category of nonreductive analyses, so I will address them briefly here.

Robinson suggests that the hardness lies in the fact that some phenomenal properties - hue properties, for example - have no structural expression. I think there is a considerable insight here. Elsewhere in science, instantiations of structural properties are generally explicable in terms of basic components and their relations, and it seems to be precisely their structure that makes them explicable in this way. The structural properties of experience itself (the geometry of a visual field, for example) form an interesting intermediate case: while they are more amenable to physical explanation than other phenomenal properties, this explanation still requires a nonreductive principle to cross the gap. But these properties may be reducible to structureless phenomenal properties and their relations. If so, Robinson may be correct that the core of phenomenal irreducibility lies at the more basic level.

A few questions remain: for example, if it turned out that phenomenal properties had structure "all the way down", might they not be irreducible to physical properties all the same? For reasons like this, I sometimes lean toward an alternative view which locates the irreducibility in an independence of the kind of structure found in the physical domain, and ultimately in the intrinsicness of phenomenal properties, which contrasts with the relational nature of all our physical concepts. But clearly these views are not far apart.

McGinn offers a closely related analysis. He locates the problem in the non-spatial character of consciousness: that is, in the fact that it lacks spatial extension and structure, and therefore does not fit easily into physical space. I think the overall intuition is very powerful. The detailed claim needs to be carefully unpacked, to avoid lumping in consciousness with less problematic non-spatial states and properties (e.g. the legality of an action, which is a complex dispositional property but not a spatial property; and possibly even the charge of a particle), while simultaneously avoiding the need to appeal more controversially to a non-spatial entity bearing the state or property (McGinn seems not to want to rest his case on this appeal; see e.g. his footnote 3). I suspect that once this work is done - adding appropriate restrictions on the class of properties, perhaps - McGinn's analysis will be even closer to those above.

Warner locates the source of the problem in a different place: the incorrigibility of our knowledge of consciousness. I agree with Warner that there is some sense in which some knowledge of consciousness is incorrigible - I know with certainty that I am conscious right now, for example - but it is remarkably tricky to isolate the relevant sense and the relevant items of knowledge. Warner himself notes that plenty of our beliefs about our experiences are mistaken. He gets around this problem by limiting this to cases where our ability to recognize experiences is "unimpaired", but this seems to come dangerously close to trivializing the incorrigibility claim. After all, it is arguably a tautology that an "unimpaired" belief about an experience will be correct. Warner may have a way to unpack the definition of "impairment" so that the claim is non-circular, but this is clearly a non-trivial project.

In Chapter 5 of my book (pp. 207-8), I make some brief suggestions about how to make sense of an incorrigibility claim. In essence, I think that experiences play a role in constituting some of our concepts of experience; and when a belief directs such a concept at the experience which constitutes it, there is no way that the belief can be wrong (in essence, because one's current experience has got "inside" the content of one's belief). Many or most beliefs about experience do not have this specific form, and are therefore corrigible; nevertheless, this may isolate a certain limited class of beliefs about experience that cannot be wrong. (This limited class of beliefs can arguably ground the first-person epistemology of conscious experience, but this is a further complex issue.)

In any case, Warner and I are agreed that there are some beliefs about conscious experience that cannot be wrong. What follows? Warner holds that it follows from this alone that experience cannot be physically explained, as physical science cannot countenance the necessary connections that incorrigibility requires. I am not sure about this. On my account, for example, the necessary connection between belief and experience is an automatic product of the role that the experience plays in constituting the content of the belief; and it is not obvious to me that materialists could not avail themselves of a similar account. Shoemaker (1990) gives an alternative account of incorrigibility from a functionalist perspective, relying on the interdefinition of pains and pain-beliefs. Perhaps Warner would object that neither of these accounts captures the kind of incorrigibility that he is after; but perhaps they capture the kind of incorrigibility in which there is reason to believe. So I am not yet convinced that incorrigibility is truly the source of the mind-body problem, but it is clear that there is much more to be said.

Warner uses these considerations about incorrigibility to suggest, like Lowe, that even reportability - one of my "easy" problems - cannot be physically explained. My reply here is as before. I did not intend reportability to be read in a strong sense that requires the presence of experience. Rather, I intended it to require merely the presence of the reports, functionally construed, so in particular I did not intend it to encompass the incorrigibility of beliefs about experience. (If I were writing the article now, I would modify the wording in the list of "easy" problems to make it absolutely clear that functioning is all that matters.) Certainly, if "reportability" is read in a sense that requires conscious experience, then it cannot be reductively explained.

3.3 Fundamental laws

A further set of issues is raised by my appeal to fundamental laws in a theory of consciousness. Mills thinks that because I invoke such laws to bridge physics and consciousness, I am not really solving the hard problem at all (Price suggests something similar). At best I am providing a sophisticated set of correlations, and finding such correlations was an easy problem all along.

Mills reaches this conclusion because he construes the hard problem as the problem of giving a constitutive (or "non-causal") explanation of consciousness in physical terms. If the problem is construed that way, Mills is quite right that it is not being solved at all. But to define the problem of consciousness this way would be to define it so that it becomes unsolvable: one might call that problem the "impossible problem".

I prefer to set up the hard problem in such a way that a solution is not defined out of existence. The hard problem, as I understand it, is that of explaining how and why consciousness arises from physical processes in the brain. And I would argue that the sort of theory I advocate can in principle offer a good solution to this problem. It will not solve the impossible problem of providing a reductive explanation of consciousness, but it will nevertheless provide a theory of consciousness that goes beyond correlation to explanation.

A good analogy is Newton's theory of gravitation. The Newton of legend wanted to explain why an apple fell to the ground. If he had aimed only at correlation, he would have produced a taxonomic theory that noted that when apples were dropped from such-and-such heights, they fell to the ground taking such-and-such time, and so on. But instead he aimed for explanation, ultimately explaining the macroscopic regularities in terms of a simple and fundamental gravitational force. In Newton's time, some objected that he had not explained why the gravitational force should exist; and indeed he had not. But we take Newton's account to be a good explanation of the apple's falling all the same. We have grown used to taking some things as fundamental.
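
To make the analogy concrete, the simple fundamental principle that Newton ultimately invoked can be written in a single line (this is just the familiar law of universal gravitation, stated here only for illustration):

    F = G \frac{m_1 m_2}{r^2}

where F is the attractive force between two bodies of masses m_1 and m_2 separated by a distance r, and G is a universal constant. A principle this spare, taken as primitive, explains an enormous range of macroscopic regularities that a mere taxonomy of correlations could only catalogue; the hope is that fundamental psychophysical laws might eventually do analogous work.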

Something similar holds for a theory of consciousness. It would be deeply unsatisfying for a theory of consciousness to stop at "complex brain state B is associated with complex experience C", and so on for a huge array of data points. As in Newton's case, we want to know how and why these correlations hold; and we answer this question by pointing to simple and fundamental underlying laws. Just as one can say "the apple fell because of the law of gravity", we will eventually be able to say "brain state B produced conscious state C because of fundamental law X".

Because something is being taken as primitive, this does not yield as strong an explanatory connection as one finds in cases of reductive explanation, such as the explanation of genes in terms of DNA. But it is an explanation all the same. The case of gravity suggests that what counts in an explanation is that one reduces the primitive component to something as simple as possible, not that one reduces it to zero.

Mills suggests that this is no better than explaining why a sheep is black in terms of the fact that it is a member of the class of black things. But here the explanatory posit is just as complex as what needs to be explained; whereas in our case, the fundamental laws are far simpler than the data. If our "explanation" were "brain B yields experience E" or even "certain oscillations yield consciousness", we would have a problem like Mills': these posits would be so complex and macroscopic that they would stand in need of further explanation themselves. For a comprehensive explanation, our basic principles need to be so simple and universal that they are plausibly part of the basic furniture of the world.

Of course one can always ask "why does the fundamental law hold", as Mills and also Robinson suggest. But we should not expect any answer to that question. In physics, we have grown used to the idea that explanation stops somewhere, and that the fundamental laws of nature are not further explained. That is what makes them fundamental. If my negative arguments about consciousness are correct, then we will have to do the same here. We will explain and explain and explain, and eventually our psychophysical explanations will be reduced to a simple core which we will take as primitive. So we do not get something for nothing, but we get a perfectly adequate theory all the same.

Mills is right that once we view things this way, there is a sense in which the hard problem becomes an easy problem (although not an Easy problem), in that there is a clear research program for its solution and there is no reason why it should be intractable in principle. This I take to be precisely the liberating force of taking consciousness as fundamental. We no longer need to bash our head against the wall trying to reduce consciousness to something it is not; instead we can engage in the search for a constructive explanatory theory.

In any case, it seems that Mills does not disagree with me on the issues of substance. Whichever problems one takes to be "hard" or "easy", the deepest problem of consciousness is that of how we can construct an explanatory theory of consciousness which accommodates consciousness in the natural world. And a fundamental theory of consciousness, we agree, is the best way to do just that. I will be happy if we can come up with a theory of consciousness that is only as good as Newton's theory of gravitation!

3.4 Epiphenomenalism and interactionism

A number of contributors worry that my position may lead to epiphenomenalism, the view that consciousness has no effect on the physical world. If the physical domain is causally closed, so that there is a physical explanation for every physical event, and if consciousness is non-physical, then it can seem that there is no room for consciousness to play any causal role. Conversely, it can seem that if consciousness is non-physical and plays a causal role, then there will not be a physical solution even to the "easy" problems. Hodgson and Warner spend some time discussing this issue, and Seager and Stapp allude to it. I discuss this issue at considerable length in my book, but will summarize the state of play as I see it below.

In essence, I think that (1) while epiphenomenalism has no clear fatal flaws, it is to be avoided if possible; that (2) the causal closure of the physical domain is not to be denied lightly; and that (3) denying causal closure does not really help solve the problems of epiphenomenalism, which run deeper than this. Most importantly, I think that (4) it may be possible to avoid epiphenomenalism even while embracing the causal closure of the physical domain, by taking the right view of the place of consciousness in the natural order. I will consider these issues in order.

First, is epiphenomenalism an acceptable view, or should it be rejected out of hand? There is no doubt that the view is counterintuitive to many, but it is also hard to find fatal flaws in it. While we certainly have strong intuitions that consciousness plays a causal role, our evidence for these intuitions lies largely in the fact that certain conscious events tend to be systematically followed by certain physical events. As always, when faced with such a constant conjunction, we infer a causal connection. But the epiphenomenalist can account for this evidence in a different way, by pointing to psychophysical laws, so our intuitions may not carry too much weight here.

Hodgson argues vigorously against epiphenomenalism, largely by appealing to "common sense". I think common sense should not be undervalued here, but it is also inconclusive. At best, it establishes a presumption against epiphenomenalism if other things are equal, not a solid argument against it if other things are not. Hodgson also points to various functions that he thinks could not be performed as well without consciousness; but his arguments all depend once again on the intuition that consciousness is playing a causal role, rather than on an objective analysis of the functions themselves. He also makes an appeal to evolution, but an epiphenomenalist can account for the evolution of consciousness without too many problems: evolution selects for certain physical processes directly, and psychophysical laws do the rest, ensuring that consciousness will evolve alongside those processes. Like all fundamental laws, these psychophysical laws are universal, so we do not need an evolutionary explanation of why these laws hold in the first place.

Other anti-epiphenomenalist arguments can be made by appealing to the relationship between consciousness and the things we say and judge about consciousness. It seems that the epiphenomenalist must hold that consciousness is causally irrelevant to our utterances about consciousness, which is at least very odd. Some argue that it is more than odd, suggesting that if consciousness were epiphenomenal we could not refer to consciousness, or that we could not know about consciousness; but I think that a close analysis, as I give in my book, suggests that these arguments do not go through, as our knowledge of and reference to consciousness depends on a relationship to consciousness that is much tighter than mere causation.

Warner gives a novel argument against epiphenomenalism, and against any other view that has a causally closed physical domain plus psychophysical laws. He suggests that psychophysical laws must interfere with physical laws, as they automatically entail violations of physical conservation laws. I do not see why this is the case: surely it is at least coherent to suppose that the physical picture of the universe might be supplemented by some psychophysical laws that introduce consciousness but leave the physical domain untouched. Warner's argument relies on the claim that the "production" of experience by a physical process must involve a corresponding decrease in some physical quantity, but I see no reason why this must be so: there will be some physical criterion for the existence of an experience, to be sure, but this criterion may be one that can be satisfied perfectly well in a causally closed physical world. So the conceptual coherence of epiphenomenalism, and that of other views with causal closure plus psychophysical laws, is unthreatened.

Still, all this establishes at best that epiphenomenalism has no fatal flaws. It does not establish that epiphenomenalism is plausible. Not only does epiphenomenalism violate certain aspects of common sense; it also leads to an inelegant picture of nature, with consciousness "dangling" on top of physical processes as a kind of add-on extra. If it turns out that every other position has fatal flaws, then we may have reason to embrace epiphenomenalism; but in the meantime, we have good reason to investigate alternatives.

There are two sorts of alternatives that one might consider. First, we might see if it is plausible to deny the causal closure of the physical domain, thus leaving room for a causal role for experience in an interactionist dualism. Second, we might see if a causal role for experience might be reconciled with the causal closure of the physical domain. The second alternative may sound paradoxical at first, but I think there is a very natural way to make sense of it, which may ultimately provide the deepest resolution of this issue.

But first: is the physical world causally closed? In the paper I accepted that it was, not because I think things have to be that way, but because to deny this is to go a long way out on a limb. One does not have to go out on that limb to embrace the irreducibility of consciousness, so I prefer to stay neutral, lest the baby of consciousness once more be thrown out with the bathwater of Cartesian dualism. Still, are there any good reasons to deny causal closure, and to assert that physical explanations of the various functions are incomplete?

Perhaps the most common such reason is an indirect one: "it must be the case that physical explanations of the functions are incomplete, if consciousness is to play a causal role." This reason has some force, although I think both of its premises can be questioned: we have seen above that it is not obvious that consciousness must have a causal role, and we will see below that consciousness might have a causal role even if the physical domain is causally closed. But in any case, I set this indirect reason aside: the question for now is whether there are any direct reasons. That is, if we set consciousness aside and take a third-person view of the world, is there any reason to believe that physical explanations of these functions are impossible?

Hodgson offers an array of reasons to deny causal closure, but they are mostly grounded in the indirect reason above. Hodgson does not deny that some physical system might perform the functions with which the "easy" problems are concerned; he simply thinks that that is not the way that we do it, as consciousness plays a role in our own case. So his case that the "easy" problems are hard depends largely on the existence of the hard problem, and not on considerations intrinsic to the easy problems themselves. Indeed, I think that "objective" reasons suggesting that no physical systems could perform these functions are very thin on the ground.

The main place where third-person considerations may give reason to deny causal closure is in the intriguing case of quantum mechanics, which both Hodgson and Stapp appeal to. While there are interpretations of quantum mechanics on which the physical domain is causally closed - the interpretations of Bohm and Everett, for example - there are also interpretations on which it is not, and which leave a potential causal role for consciousness wide open. Stapp, for example, favors an interpretation on which consciousness is responsible for "collapsing" the wave function, and Hodgson favors an interpretation on which consciousness determines certain apparent quantum indeterminacies.

Indeed, it can seem that quantum mechanics provides about as perfect a causal role for consciousness as one could imagine in a physical theory. Any indeterminism in quantum mechanics comes in at the point of "collapse", which on the most common interpretations is triggered by "measurement", and it can seem that consciousness is the only non-arbitrary way to distinguish a measurement from other physical events. If so, then consciousness may be present in quantum mechanics' very foundations. Such interpretations are controversial among physicists, but mainly because they presuppose that consciousness is non-physical; if we have already accepted this for independent reasons, this concern loses its bite. (It is interesting that philosophers reject interactionist dualism because they think it is incompatible with physics, whereas physicists reject the relevant interpretations of quantum mechanics because they are dualistic!)

On most days of the week, I lean toward a different interpretation of quantum mechanics (Everett's), but interactionist collapse interpretations have obvious attractions and are not to be dismissed lightly. (I lean toward them about two days a week, and toward Bohm's interpretation on Sundays.) At least it seems clear that interactionist dualism is not incompatible with physical theory, as we understand it today. But I think there is a deeper reason why an appeal to interactionist dualism does not really solve the problems of epiphenomenalism. This is because even interactionism is subject to an epiphenomenalist worry of its own! Perhaps it can get around this worry, but it turns out that the same move is available to theories on which physics is causally closed.

The worry is as follows: for any given interactionist theory, it seems that we can remove the facts about experience, and still be left with a coherent causal story. Take Eccles' theory on which "psychons" in the mind affect physical processes in the brain. Here one can tell a perfectly coherent causal story about psychons and their effect on the brain without ever mentioning the fact that psychons are experiential. On this story, psychons will be viewed as causal entities analogous to electrons and protons in physical theories, affected by certain physical entities and affecting them in turn; and just as with protons and electrons, the fact that psychons have any experiential qualities will be quite inessential to the dynamic story. So one can still give a causal explanation of behavior that does not involve or imply experience. The same would go for a Cartesian theory involving ectoplasm, for Libet's proposal involving a "conscious mental field", and even for the theories that Stapp and Hodgson advocate.

Consider Stapp's view, for example. Presumably when this view is filled out, it will say that certain physical states P give rise to certain experiential states E, and that these states E bring about physical collapses in turn. But however this story works, the fact that the states E are experiential will be quite inessential to the story. One can imagine that a formally identical theory might be formulated from a "God's-eye" point of view, invoking such states E in causing collapses, but never mentioning experience at all. So it is not easy to see how Stapp is giving experience an essential role.

Stapp has sometimes advocated his view by pointing to the "zombie" possibility for classical physics: if physics is causally closed, there is a logical possibility of physically identical zombies with the same behavior, suggesting that experience plays no essential role in our behavior. But interestingly, a similar objection can be made to Stapp's own view. Given that physics works as Stapp suggests, there is a logically possible world with a "quantum zombie". In this world, instead of P causing experience E which causes collapse, P causes collapse directly. There is no consciousness in this world, but all the functions are performed just the same. So there is a sense in which the fact that experience is associated with collapses in our world is superfluous. One can tell a similar conceptually coherent "zombie" story for any interactionist picture, whether Hodgson's or Eccles' - just move to a possible world in which any intermediate causal roles are played without any associated experience - thus suggesting that these problems are not unique to the picture on which the physical world is causally closed.

The real "epiphenomenalism" problem, I think, does not arise from the causal closure of the physical world. Rather, it arises from the causal closure of the world! Even on an interactionist picture, there will be some broader causally closed story that explains behavior, and such a story can always be told in a way that neither includes nor implies experience. Even on the interactionist picture, we can view minds as just further nodes in the causal network, like the physical nodes, and the fact that these nodes are experiential is inessential to the causal dynamics. The basic worry arises not because experience is logically independent of physics, but because it is logically independent of causal dynamics more generally.

The interactionist has a reasonable solution to this problem, I think. Presumably, the interactionist will respond that some nodes in the causal network are experiential through and through. Even though one can tell the causal story about psychons without mentioning experience, for example, psychons are intrinsically experiential all the same. Subtract experience, and there is nothing left of the psychon but an empty place-marker in a causal network, which is arguably to say there is nothing left at all. To have real causation, one needs something to do the causing; and here, what is doing the causing is experience.

I think this solution is perfectly reasonable; but once the problem is pointed out this way, it becomes clear that the same solution will work in a causally closed physical world. Just as the interactionist postulates that some nodes in the causal network are intrinsically experiential, the "epiphenomenalist" can do the same.

Here we can exploit an idea that was set out by Bertrand Russell (1926), and which has been developed in recent years by Grover Maxwell (1978) and Michael Lockwood (1989). This is the idea that physics characterizes its basic entities only extrinsically, in terms of their causes and effects, and leaves their intrinsic nature unspecified. For everything that physics tells us about a particle, for example, it might as well just be a bundle of causal dispositions; we know nothing of the entity that carries those dispositions. The same goes for fundamental properties, such as mass and charge: ultimately, these are complex dispositional properties (to have mass is to resist acceleration in a certain way, and so on). But whenever one has a causal disposition, one can ask about the categorical basis of that disposition: that is, what is the entity that is doing the causing?

One might try to resist this question by saying that the world contains only dispositions. But this leads to a very odd view of the world indeed, with a vast amount of causation and no entities for all this causation to relate! It seems to make the fundamental properties and particles into empty placeholders, in the same way as the psychon above, and thus seems to free the world of any substance at all. It is easy to overlook this problem in the way we think about physics from day to day, given all the rich details of the mathematical structure that physical theory provides; but as Stephen Hawking (1988) has noted, physical theory says nothing about what puts the "fire" into the equations and grounds the reality that these structures describe. The idea of a world of "pure structure" or of "pure causation" has a certain attraction, but it is not at all clear that it is coherent.

So we have two questions: (1) what are the intrinsic properties underlying physical reality?; and (2) where do the intrinsic properties of experience fit into the natural order? Russell's insight, developed by Maxwell and Lockwood, is that these two questions fit with each other remarkably well. Perhaps the intrinsic properties underlying physical dispositions are themselves experiential properties, or perhaps they are some sort of proto-experiential properties that together constitute conscious experience. This way, we locate experience inside the causal network that physics describes, rather than outside it as a dangler; and we locate it in a role that one might argue urgently needed to be filled. And importantly, we do this without violating the causal closure of the physical. The causal network itself has the same shape as ever; we have just colored in its nodes.

This idea smacks of the grandest metaphysics, of course, and I do not know that it has to be true. But if the idea is true, it lets us hold on to irreducibility and causal closure and nevertheless deny epiphenomenalism. Placed inside the causal network, experience now carries a causal role. Indeed, fundamental experiences or proto-experiences will be the basis of causation at the lowest levels, and high-level experiences such as ours will presumably inherit causal relevance from the (proto)-experiences from which they are constituted. So we will have a much more integrated picture of the place of consciousness in the natural order. [*]

*[[[There may be other ways to reconcile a causal role for experience with the causal closure of the physical. See Mills (1995) for a different strategy that relies on causal overdetermination. But even if this view avoids epiphenomenalism, it retains a fragmented, inelegant picture of nature.]]]

The Russellian view still qualifies as a sort of "naturalistic dualism", as it requires us to introduce experience or proto-experience as fundamental, and it requires a deep duality between the intrinsic and extrinsic features of physical reality. But underlying this dualism, there is a deeper monism: we have an integrated world of intrinsic properties connected by causal relations. The view can even be seen as an odd sort of "materialism", as it says that physical reality is all there is - but it says that there is much more in physical reality than physical theory tells us about! In the end the name does not matter too much, as long as the picture is clear. (I would be tempted by "fundamentalism" as the most accurate coverall for the sorts of view I embrace, were it not for the associations!)

There are obvious concerns about this view. The first is the threat of panpsychism, on which more later. The second is the problem of how fundamental experiential or proto-experiential properties at the microscopic level somehow together constitute the sort of complex, unified experience that we possess. (This is a version of what Seager calls the "combination problem".) Such constitution is almost certainly required if our own experiences are not to be epiphenomenal, but it is not at all obvious how it should work: would not these tiny experiences instead add up to a jagged mess? I discuss some approaches to this problem later. If it can be avoided, then I think the Russellian view (which turns out to be particularly compatible with an informational "it from bit" view) is clearly the single most attractive way to make sense of the place of experience in the natural order.

It is notable that even an interactionist dualism can be seen as a sort of Russellian view. It draws a slightly different picture of the causal network, and takes certain nodes in this network - the "psychon" or "collapse" nodes, for example - and colors them in. The differences are that not all nodes in the network are colored in in this way (presumably there are some different, unknown intrinsic properties in fundamental matter), and that the experiential nodes in this picture are at a fairly high level. This may actually help avoid the problem above: instead of trying to constitute our consciousness out of innumerable different fundamental nodes, there might turn out to be a single node in each case (or just a few?) which carries the burden. (Though one may well wonder why this single node should have such a complex of intrinsic properties, in the way that our consciousness does!) This avoidance of the constitution problem may in the end turn out to be the greatest virtue of a quantum interactionism.

In the meantime, I think this question is wide open. There are at least three potential ways of seeing the metaphysics here: the epiphenomenalist version, the interactionist version, and the Russellian version. All have pros and cons, and I think the question of their mutual merits is one that deserves much further investigation.

3.5 My psychophysical laws

A few contributors made comments on the three specific proposals I made about psychophysical laws: the principle of structural coherence, the principle of organizational invariance, and the double-aspect view of information. Taking these in turn:


(1) The principle of structural coherence. This is the least controversial of the three proposals, and unsurprisingly there was not much argument with it. It has long been recognized that there is a detailed correspondence between structural properties of the information processed in the brain and structural properties of conscious experience (see the "psychophysical axioms" of Müller 1896 and the "structural isomorphism" of Köhler 1947, for example). My slightly more specific proposal, specifying that the relevant information is that made available for global control, is also implicitly or explicitly present in much current research.

The only criticism is by Libet, who thinks that my equation of the structure of consciousness with the structure of awareness is either trivial or false. I think he is placing too much weight on the use of the word "awareness" here, however; I use the term stipulatively to refer to global availability of information (availability for such processes as verbal report, among other things), and might easily have used another term instead. I suspect that when this verbal issue is set aside, Libet will not find much to disagree with.


(2) The somewhat functionalist principle of organizational invariance, and my arguments for it, met with a bit more disagreement. Velmans objects to it on the grounds that a cortical implant might produce a refined version of blindsight, with excellent performance but no verbal reports of consciousness and hence no experience. But this is no counterexample to the principle: the very absence of verbal reports in these subjects shows that they are functionally inequivalent to normal subjects. Perhaps they are "functionally equivalent" in some very loose sense, but the invariance principle requires a much stricter isomorphism than this. The moral is that the processes involved in the production of verbal reports are just as much part of a subject's functional organization as the processes responsible for discrimination and motor action. Indeed, these aspects of organization may be among the prime determinants of conscious experience.

Similarly, Libet says that I rely on a "behavioral" criterion for conscious experience, instead of more convincing criteria such as a subject's verbal report. But a verbal report is a sort of behavioral criterion in its own right; and in any case, it is clear that any subject who is functionally isomorphic to me in the strict sense that the principle requires will produce exactly the same verbal reports, and so will satisfy Libet's criterion. Libet is quite right that there are cases where performance on many tasks is dissociated from verbal report, but such cases are irrelevant to assessing the principle.

A fairly common reaction to these thought-experiments is to suggest that no silicon chip could in fact duplicate the function of a neuron, or at least that one should not beg that question. I agree that this is clearly an open empirical question. The principle says only that if a system is a functional isomorph of a conscious system, it will have the same sort of experiences; it makes no claims about just how such isomorphs might be realized. Silicon chips are just an example. If silicon isomorphs turn out to be possible, then the principle applies to them; if they do not, the scope of the principle will be more limited. Either way, the idea that functional organization fully determines conscious experience is unthreatened by this line of questioning.[*]

*[[[That being said: if the laws of physics are computable, a neuron's behavior is in principle computable too, and it is not implausible that the relevant computations could be hooked up to electrical and chemical mediators with other neurons, at least in principle if not easily in practice. We already have seen artificial hearts, and people are working on artificial retinas; my own money is on the eventual possibility of artificial neurons.]]]

Hardcastle wonders if we can really know what will happen upon duplicating neural function in silicon. Here the answer is no and yes. No, we can't know for sure that neural function can be duplicated perfectly in silicon - that's the same open question as above. But we do know that if function-preserving substitution is possible, the resulting system will make just the same claims, exhibit the same behavior, and so on, as the original system. In fact we can know, in advance, precisely how the system will look from the third-person point of view. And even from the first-person point of view, I know that if some of my neurons are switched with identically-functioning silicon chips, I will come out swearing up and down that my qualia never changed. So in the relevant sense, I think we already know as much as we will ever know about how such a system will be.

Indeed, I think that if such substitution is ever possible, nobody will doubt the invariance principle for long. All it will take is a couple of substitutions, with subjects asserting that nothing has changed, and we will hear that there is "empirical evidence" that function-preserving substitution preserves conscious experience. The conclusion may be disputed by a handful of skeptical philosophers, but the subject's own word will be hard to resist. So I think that even now, the conditional assertion - if a functional isomorph is possible, then it will have the same sort of conscious experience - is a safe bet.

Lowe thinks that the invariance principle "sells out completely" to functionalism, but this is a misunderstanding. Even many dualists hold that two subjects with the same brain state will have the same conscious state; presumably they are not thereby "selling out to physicalism", except in a highly attenuated sense of the latter. Consciousness is not reduced to a physical state; it is merely associated with one. By the same measure, to hold that two subjects in the same functional state have the same conscious state is not to sell out to functionalism, except in an attenuated sense. Consciousness is not reduced to a functional state; it is merely associated with one. Functional states, like physical states, turn out to determine conscious states with natural but not logical necessity. The resulting position, nonreductive functionalism, is compatible with the rich construal of mentality that reductive functionalism tacitly denies, precisely because a logical connection between function and experience is avoided.

Lowe may think that even a nonreductive functionalism is a bad thing, but to make that case, further reasons are required. For my part, I think that nonreductive functionalism stands a chance of capturing the most plausible and attractive elements of functionalist doctrines, while ignoring their reductive excesses.

Seager finds it odd that there should be laws connecting complex functional organizations to experience. I think that he is right and wrong to be worried about this. It would indeed be very odd if there were fundamental laws connecting complex organizations to experience (just as it would be odd if there were fundamental laws about telephones), but I do not claim that such laws exist. The invariance principle is intended as a non-fundamental law: eventually it should be the consequence of more fundamental laws that underlie it. Such laws need not invoke complex functional organization directly; they might instead invoke some simple underlying feature, such as information. As long as this feature is itself an organizational invariant (as information plausibly is), the invariance principle may be a consequence.

Seager also worries about the fineness of organizational grain required to duplicate experience. I discuss this in my book: the grain needed for the fading and dancing qualia argument to go through is one that is sufficiently fine to capture the mechanisms that support our behavioral dispositions, such as our dispositions to make certain claims, and also sufficiently fine to allow either (a) that any two realizations be connected by a near-continuous spectrum of realizations (for the fading qualia argument), or (b) that any two realizations be connected by a chain of realizations such that neighboring links in the chain differ only over a small region (for the dancing qualia argument). It is not impossible that a less fine grain will also suffice to duplicate experience, but the arguments will give no purchase on these cases. Seager worries that nature does not know about levels of organization, but again this would be a worry only if the invariance principle were held to be a fundamental law.

Finally, Seager thinks that the association of experience with functional organization leads to a particularly worrisome form of epiphenomenalism. I think it is clear, however, that the arguments he invokes apply to any association of experience with physical properties. There is indeed an interesting problem of "explanatory exclusion" to worry about, as I discussed above, but nothing about this problem is specific to the invariance principle or to any of the psychophysical laws I propose.


(3) The double-aspect analysis of information is by far the most speculative and tentative part of my article, and it is surely the most likely to be wrong. Indeed, as I say in my book, I think it is more likely than not to be wrong, but I put it forward in the hope that it might help progress toward a more satisfactory theory. So I am far from sure that I can defend it against every possible criticism. That being said, I think that a couple of the criticisms of the information-based approach may rest on misinterpretations.

Lowe resists my invocation of Shannonian information as "inappropriate for characterizing the cognitive states of human beings." But as before, I am not trying to reduce mental states to information processing. Such processing is instead invoked as a potential key to the physical basis of consciousness. True, the double-aspect view implies that consciousness has formal properties that mirror the formal properties of the underlying information; I think this claim is clearly plausible from phenomenological investigation, but it is nowhere claimed that these formal properties exhaust the properties of consciousness. The fact that the skeletal framework is syntactic, for example, does nothing to prevent irreducible non-syntactic properties from being present as well. In fact, it is obvious that there are phenomenal properties over and above these formal properties: such properties are precisely what make the phenomenal realization of the information so different from the physical realization. Shannonian information at best provides a framework around which a theory of these intrinsic properties can be hung.
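
For readers unfamiliar with the term: Shannon's framework treats an information state simply as a selection from a space of possible states with a given difference structure, and the standard quantitative measure (recalled here purely as background, not as part of the double-aspect proposal itself) is

    H(X) = - \sum_i p_i \log_2 p_i

the average information, in bits, carried by a state drawn from possibilities with probabilities p_i. Nothing in the double-aspect view turns on this particular formula; what matters is the abstract structure of differences and the way that structure is realized both physically and phenomenally.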

Varela is similarly "dumbfounded" by my appeal to this sort of information, because of the "outmoded cybernetic tradition" it invokes. I am not nearly as certain as Varela that Shannonian information (as opposed to the cyberneticist use of it) is outmoded; indeed, I think one can argue that information states of the kind I describe in my book play a central role even in the computationalist, connectionist, and "embodied" frameworks that Varela endorses. These frameworks may add something to information states - such as a semantic content, or a context within the world - but all these frameworks invoke certain "difference structures" and their causal roles in a cognitive system. And precisely because this difference structure captures an important formal isomorphism between aspects of conscious states and the underlying physical states, the concept of information may provide a framework within which we can make progress. Once it is clear that experience is not being reduced to information, I think the way is cleared for information to play a useful formal role, and perhaps even to play a role in the underlying metaphysics.

Libet, Hardcastle, and Velmans note that some information is nonconscious. As I discuss in my book, there are two ways to deal with this. The first is to find further constraints on the sort of information that is associated with experience; it is entirely possible that some such constraint may play a role in the psychophysical laws. (Velmans offers some interesting suggestions about such constraints, although none of them seem likely candidates to be truly fundamental.) The other possibility is to accept that all information has an experiential aspect: while not all information is realized in my consciousness, all information is realized in some consciousness. This is counterintuitive to many, but I do not think the possibility can be immediately dismissed. I will discuss it when I discuss panpsychism below.

The ontology underlying the informational picture (which Velmans worries about) remains open. I discuss a number of possible interpretations of it in my book. I am most attracted to a Russellian interpretation on which experience forms the "intrinsic" (or realizing) aspect of informational states which are fundamental to physics but characterized by physics only extrinsically. There is at least a kinship between the informational model and the Russellian metaphysics here, and exploiting it would lead to a definite double-aspect ontology. ("Physics is information from the outside; experience is information from the inside.") But I am not certain that this can be made to work, and more straightforwardly dualistic interpretations are also available.

I favor the informational view largely because when I look for regularities between experience and the physical processes that underlie it, the most striking correspondences all lie at the level of information structures. We have to find something in underlying physical processes to link experience to, and information seems a plausible and universal candidate. Perhaps the biggest concern about this view is that these informational structures do not lie at a fundamental level in physical processes; as Bilodeau notes, they seem curiously abstract entities to play a role in a fundamental theory. On the other hand, there are ways of seeing information as fundamental to physics itself, so there may be ways in which a connection at a fundamental level can be leveraged to support this striking connection at the macroscopic level. But all that is very much in the realm of open questions.

4 POSITIVE PROPOSALS

A number of contributors made positive proposals about how the hard problem might be approached. These divide into (1) neuroscientific and cognitive approaches; (2) phenomenological approaches; (3) physics-based approaches; and (4) fundamental psychophysical theories. I will not try to assess each proposal at great length, but I will say a few words about the approaches and their relationship to my framework.

4.1 Neuroscientific and cognitive approaches

Proposals with a neurobiological and cognitive flavor were made by Crick and Koch, Baars, and MacLennan. The philosophical orientations of these range from reductionism to property dualism; this alone illustrates that a neurobiological approach to consciousness is compatible with many different philosophical views. Even if neurobiology and cognitive science alone cannot solve the hard problem, they may still play a central role in developing a theory.

Crick and Koch come closest to a reductionist view, although they are appropriately tentative about it. They first divide up the hard problem into three parts and offer an interesting solution to the third, concerning the incommunicability of experience. I think their idea here - that only relations are communicable because only relations are preserved throughout processing - is largely correct. That is, all that is communicable are differences that make a difference, or information states. Of course this is strictly speaking one of the "easy" problems, but it clearly has a close connection to the hard problem; I expect that a good cognitive account of what we can and cannot communicate about consciousness will lead to some very useful insights about the hard problem itself. I develop this point and tie it to an informational view of consciousness in Chapter 8 of my book.

On the hard problem, Crick and Koch suggest that it may be promising to focus first on "meaning". I am less sure about this: meaning seems to be almost as difficult a concept as consciousness, and perhaps even more ambiguous. If one invokes a purely functional construal of meaning - so that meaning comes down to certain correlations with the environment and certain effects on later processing - then a neurobiological account of meaning may be forthcoming, but such a functional account will not tell us why the meaning should be consciously experienced. And if one invokes a richer construal of meaning - one on which meaning is more closely tied to consciousness, for example - then there is more chance that an account of meaning may yield an account of consciousness, but a functional explanation of meaning becomes much less likely. Nevertheless I imagine there are useful insights to be had by treading this path, whether or not it leads to a solution to the hard problem.

An intermediate line is taken by Baars, who argues that a functional theory can at least shed considerable light on subjective experience, but who does not claim that it solves the hard problem. Indeed, he thinks the hard problem is too hard to be solved for now, because it involves an implausible criterion. I think that Baars misinterprets the hard problem slightly, however. To solve the hard problem we need not actually evoke all relevant experiences in ourselves (his "empathy criterion"). The point is not to experience what it is like to be a bat (although that would be nice!), but rather to explain why there is anything it is like to be a bat or a human at all. And this seems like a perfectly reasonable scientific question.

Baars also notes that there are deep causal connections between "easy" and "hard" aspects of our mental lives. This is certainly correct; indeed, I pointed out some such connections in my article. There seems to be a tight connection between global availability and consciousness, for example, as Baars suggests. So this sort of connection is quite compatible with my framework: the distinction between the easy and hard problems is a conceptual distinction, not a claim that the two have nothing to do with each other.

In particular, even once these causal interconnections are granted, one can still ask how and why the "easy" aspects are tied to the "hard". In conversation, Baars has suggested that one should just regard this as a brute fact, noting that psychologists are used to dealing in brute facts! So one might just take it as a brute fact that the contents of a global workspace are consciously experienced, for example. I think there is something to this, but one has to note that this brute fact has some strong consequences. For a start, it implies that a theory of consciousness requires explanatorily primitive principles over and above the facts about processing. Even if "easy" and "hard" phenomena are two different aspects of the same thing, as Baars suggests, this still requires some further principle to tie the two aspects together, and indeed to explain why there are two aspects in the first place.

Of course it is most unlikely that the whole problem will be solved in one bite, so it is entirely reasonable for Baars to leave things at the level of a connection between the global workspace and consciousness. This reflects a common strategy for dealing with consciousness in those areas of psychology that take it seriously: take the existence of consciousness for granted, and investigate just how and where it maps onto cognitive processing. (The literature on the properties of conscious vs. unconscious processes can be read this way, for example.) This way the roots of consciousness may be located, and the path may be cleared for a theory of the underlying connection.

MacLennan aims to take the next step, searching for a simple theory that explains the connection. He accepts that there is an irreducible phenomenal aspect that is systematically associated with neural processes, yielding a property dualism similar to mine but with a neurodynamical flavor; and he develops some ideas about the "deep structure" of the link between neural processes and experience.

I think MacLennan's idea of "protophenomena" (or "phenomenisca") as basic elements of consciousness is particularly interesting, and promises considerable rewards if it can be further developed. For a precise theory, I think we will need an account of (a) precisely when a protophenomenon is associated with a physical process, (b) what sort of protophenomena will be associated, depending on the characteristics of the physical process, and (c) the principles by which protophenomena combine into a unified conscious experience.

None of these questions are trivial, although MacLennan makes a start on all of them. His answer to (a) relies on a one-activity-site-one-protophenomenon principle; for my part I would be surprised if things were so straightforward. It might be that protophenomena are determined by informational states of the system that are not straightforwardly localized, for example. He does not have too much to say about (b) - precisely what makes for the difference between visual and auditory protophenomena, for example? - but he has a preliminary analysis of (c). I suspect that (c) (an analog of the problems faced by the Russellian metaphysics described earlier) may turn out to be the hardest question of all.

In any case, I see the central parts of the projects of Crick and Koch, Baars, and MacLennan as all being compatible with the research program I envisage on the hard problem. At the nuts-and-bolts level, we must try to isolate the neural processes associated with consciousness, and to find detailed and systematic associations between these processes and characteristics of conscious experience. We should do the same at a cognitive level, where it may be that we will find "cleaner" associations, if less detailed ones, along with a way of integrating key elements of the neural story into a big picture. A clean association between global availability and consciousness, for example, promises to help make sense of messier associations involving various specific neural processes. Finally, we should search for the fundamental principles that underlie and explain these associations, boiling things down to as simple a system as possible.

All this is compatible both with the scientific worldview and with the irreducibility of consciousness. Once released from the insistent tug of the reductive dream, we are free to engage in the project of relating consciousness to physical processes on its own terms. The resulting science may be all the richer for it.

4.2 Phenomenological approaches

Shear and Varela concentrate on phenomenological approaches to the hard problem. I think that such an approach must be absolutely central to an adequate science of consciousness: after all, it is our own phenomenology that provides the data that need to be explained! If we are to have a detailed psychophysical theory, as opposed to a mere ontology, then we will have to catalog and systematize these data much as happens elsewhere in science; and to do this, patient attention to one's own experience is required.

Of course there are deep methodological problems here. The first is the old problem that the mere act of attention to one's experience transforms that experience. As we become more patient and careful, we may find that we are studying data that are transformed in subtle ways. This is not too much of a problem at the start of investigation (we have a long way to go until this degree of subtlety even comes into play), but it may eventually lead to deep paradoxes of observership. Phenomenologists from both East and West have proposed ways to deal with this problem, but I think it has a certain resilience. Even if there do turn out to be limits on the fineness of this method's grain, however, I have no doubt that coarse-grained methods can take us a long way.

The second problem is that of developing a language - or better, a formalism - in which phenomenological data can be expressed. In other areas, the advent of such formalisms has led to rapid progress. We still seem to be far from such a formalism here, however. The notorious "ineffability" of conscious experience plays a role here: the language we have for describing experiences is largely derivative on the language we have for describing the external world. Perhaps, as Thomas Nagel has suggested, the structural properties of experience (e.g., the geometric structure of a visual field) will be most amenable to formal expression, whether in informational, geometric, or topological terms, or in other terms entirely. I suspect that the residual non-structural properties will pose special problems.

The third difficulty lies in the failure, or at least the limitations, of incorrigibility: our judgments about experience can be wrong. I don't think this difficulty is as damning for phenomenology as it is sometimes made out to be; after all, our judgments about external data can be wrong, too, but science manages just fine. What is important is that our judgments about experience are accurate by and large, particularly when we are paying careful and patient attention. Our introspection must also be critical: we must take care to consider any ways in which it might be going wrong. But if our phenomenological judgments pass these tests, I think one is justified in taking them to be reliable.

Shear's and Varela's papers together make a strong case that a sophisticated phenomenological study is possible. In Shear's wide-ranging paper, the remarks about "pure consciousness" are particularly intriguing. I confess that I find myself among the skeptics where this notion is concerned. I am not sure that I can imagine a consciousness without quality: would not even a "void" experience have a certain voidish quality? (Shear's own position is appropriately cautious here.) But perhaps this is only because I have never experienced such a thing myself. The idea is appealing, at any rate, in the same sort of way that the Russellian idea of a physical world without intrinsic qualities is appealing: the appeal manifests itself both in spite of and because of its flirtation with incoherence. And the potential link that Shear suggests between this idea and a fundamental theory is certainly suggestive.

I am also sympathetic with much of Varela's discussion, in its shape if not in every detail. Varela takes himself to differ with me on some central points, but I am not sure why. The main difference between us seems to be one of emphasis: he emphasizes the phenomenological data, whereas I emphasize the systematicity in the relationship between these data and underlying processes. Perhaps he takes my "extra ingredient" or "theoretical fix" to be something more reductive than I intended. Varela himself seems to endorse the need for an extra ingredient in our theories - namely experience itself - which fits my program well. He may differ by doubting the likelihood of simple underlying laws connecting the physical and phenomenal domains; but if so, he does not give his reasons in this article. In any case, the idea of "neurophenomenology" sounds eminently sensible to me. The test will be whether it can be cashed out in the form of detailed results.

It would be overambitious to suppose that phenomenology by itself offers a solution to the hard problem. The ontological debates are as hard as ever, and phenomenology is largely neutral on them (except, perhaps, in rejecting type-A materialism). But it is absolutely central to the epistemology of the hard problem: without it, we would not even know what needs explaining. In most areas of science, we need an adequate epistemology to get a detailed theory off the ground, and there is no reason to suspect that the case of consciousness will be any different. If so, the sort of careful study advocated by Shear and Varela will be a central component in the path to a solution.

4.3 Physics-based approaches

In getting an empirical theory of consciousness off the ground, the two areas just discussed will play the central roles. Neuro/cognitive science will provide the third-person data and phenomenology will provide the first-person data. As all this goes on, theorists of all stripes will seek to systematize the connection between the two. In the early stages, this connection will be strongest at the "surface" level: researchers will isolate correlations between fairly complex neuro/cognitive processes and relatively familiar characteristics of conscious experience. This high-level project may well be the solid core of consciousness research for many years to come. As the project develops, though, there will be an increasing drive to find the deep structure that underlies and explains these high-level connections, with the ultimate goal being a fundamental psychophysical theory.

We are not close to having such a fundamental theory yet, but this need not stop us from speculating about its form. Many contributors to this symposium do just that, offering proposals about links between consciousness and physical processes at the most fundamental level. In this section and the next, I will discuss these proposals. Those with conservative tastes might stop here: what follows is largely untrammeled speculation in physics and metaphysics about what may be required to bring consciousness within the natural order. I do not know whether any of this is on the right track, but there are plenty of interesting ideas with which I am more than happy to play along.

A number of contributors suggest approaches in which physics plays a central role. I expressed some criticism of physics-based proposals in the keynote paper, but mostly insofar as these were offered as reductive explanations of consciousness. ("Neurons can't do the job, but quantum mechanics can.") None of the current contributors offer that sort of account. Most of them instead offer proposals on which consciousness is taken as fundamental, and is related nonreductively to the entities in physical theories, perhaps in the hopes of finding a natural place for consciousness in the natural order. Such suggestions are not subject to the same sort of criticism, and they certainly cannot be ruled out a priori.

The difference between the two sorts of physics-based proposals is most apparent in the article by Hameroff and Penrose. Previous work had given me the impression that their aim was to explain consciousness wholly in terms of quantum action in microtubules; but this paper makes it explicit that consciousness is instead to be taken as fundamental. In essence, Hameroff and Penrose offer a psychophysical theory, postulating that certain quantum-mechanical reductions of the wave function, brought on when a certain gravitational threshold is attained, are each associated with a simple event of experience. They suggest a kinship with Whitehead's metaphysics; the view might also fit comfortably into the Russellian framework outlined earlier.

This is an intriguing and ambitious suggestion. Of course the details are a little sketchy: after their initial postulate, Hameroff and Penrose concentrate mostly on the physics of reduction and its functioning in microtubules, and leave questions about the explanation of experience to one side. Eventually it would be nice to see a proposal about the precise form of the psychophysical laws in this framework, and also to see how these billions of microscopic events of experience might somehow yield the remarkable structural properties of the single complex consciousness that we all possess. I am cautious about this sort of quantum-mechanical account myself, partly because it is not yet clear to me that quantum mechanics is essential to neural information-processing, and partly because it is not easy to see how quantum-level structure corresponds to the structure one finds in consciousness. But it is not impossible that a theory might address these problems. To know for sure, we will need a detailed explanatory bridge.

Stapp offers a very different sort of quantum-mechanical proposal. Instead of trying to constitute experience out of many low-level quantum-mechanical events, he takes consciousness as a given, and offers a theory of the role it plays in collapsing physical wave functions, thus showing how it might have an impact on the physical world. As I said earlier, this sort of "collapse" interpretation of quantum mechanics needs to be taken very seriously - in the interests both of giving a good account of quantum mechanics and of giving a good account of consciousness - and Stapp's, as developed in a number of papers, is perhaps the most sophisticated version of such an interpretation to date. It certainly offers the most natural picture in which consciousness plays a role in influencing a non-causally-closed physical world.

Stapp's paper is neutral on some central questions that a theory of consciousness needs to answer. He says quite a lot about his mental-to-physical laws, characterizing the role of consciousness in wave-function collapse, but he does not say much about the physical-to-mental laws which will presumably be at the heart of a theory. Such laws will tell us just which physical processes are associated with consciousness, and what sort of conscious experience will be associated with a given physical process. (Of course, we know that experiences have "actualizations" as a physical correlate; but given that Stapp wants pre-existing experiences to cause the actualizations, we need some independent physical criterion for experience. This would then yield a physical criterion for actualization in turn.) As it stands, Stapp's picture seems compatible with almost any physical-to-mental laws. Stapp offers some suggestions about such laws in his book (Stapp 1993), where he proposes that experience goes along with "top-level processes" in the brain; but perhaps it is a virtue of Stapp's broader proposal about the causal role of consciousness that many different psychophysical theories can benefit by invoking it.[*]

*[One intriguing if far-out possibility: if Stapp's proposal were granted, it might even be that experimental physics could help determine the psychophysical laws, and determine which systems are conscious, at least in principle. It turns out that different proposals about the physical criteria for collapse have subtly different empirical consequences, although they are consequences that are practically impossible to test in general (see Albert 1992 for discussion). So at least in principle, if not in practice, one could test for the presence or absence of collapse in a given system, and thus for the presence or absence of experience!]

Clarke suggests a different connection between physics and consciousness, rooted in the nonlocality of both. The nonlocality of the former is less controversial, in a way: nonlocal causal influences are present in most interpretations of quantum mechanics, with the exception of those by Everett (1973) and Cramer (1986), and nonlocal constitution of physical states is present on most of these in turn. The sense in which mind is nonlocal is less clear to me. I am sympathetic with Clarke's point that mind is not located in physical space, but I am not sure of the link between these two sorts of nonlocality. Clarke argues that the physical structure that supports mind has to be nonlocal; but all that is clear to me is that it has to be nonlocalized, or distributed across space, which is equally possible on a classical theory. But perhaps nonlocal constitution of a physical state could be linked to the unity of consciousness, especially on a view that identifies consciousness with a physical state in such a way that unified consciousness requires a unified substrate; nonlocal physical constitution might then be what unifies that substrate. The idea might also help in a Russellian metaphysics, though I am not sure that it is required.

Another appeal to physics is made by McGinn, who suggests that accommodating consciousness within the natural order will require a radically revised theory of space. A question immediately suggests itself: will this theory be forced on us to explain (third-person) empirical evidence, or just to accommodate consciousness? I suspect that it must be the latter. All sorts of revisions in our physical theories are made to explain the external world, but they always leave theories cast in terms of some basic mathematical structures and dynamics (whether Euclidean space, four-dimensional space-time, or infinite-dimensional Hilbert space). There are principled reasons why structure and dynamics is all we could possibly need to explain external evidence; and given any theory cast solely in terms of structure and dynamics, the further question of consciousness will arise.

So it seems to me that McGinn needs an empirically adequate theory of space to be revised or supplemented in some fundamental way to accommodate consciousness, while leaving its external predictions intact. But McGinn also strongly wants to avoid epiphenomenalism (see McGinn 1996). I think that the natural way (perhaps the only way) to satisfy these requirements is along the Russellian lines suggested above: there is a pervasive intrinsic property of physical reality, a property which carries the structure and dynamics specified in physical theory but is nevertheless not revealed directly by empirical investigation, and which enables the existence of consciousness. This picture seems to square well with McGinn's remarks about a "hidden dimension" of physical reality. Concentrating on space in particular, we might perhaps think of this property as the "medium" in which the mathematical structures of space are embedded.

It seems clear, at any rate, that McGinn's "hidden dimension" requires us to postulate something new and fundamental over and above what is empirically adequate. As such it seems that he is embracing option (2) of the dilemma I posed him earlier in this paper. And this new fundamental property is a sort of "proto-experience", at least in the sense that it enables the existence of experience. If so then McGinn's view, when unpacked, is in the same sort of ballpark as the views I am advocating. Of course McGinn could be right that we will never be able to form such a theory, for example because of our inability to grasp the relevant proto-experiential concept. On the other hand, he could be wrong; so I for one will keep trying.

Bilodeau takes the most radical physics-based approach, holding (I think) that we have to abandon the idea that there are objectively existing states in fundamental physics. Instead, physical reality crystallizes in some way as a product of experience and the process of inquiry. Once we see that experience is fundamental to the very nature of physical reality in this way, the hard problem may go away.

Bilodeau suggests that this picture is the most natural upshot of quantum mechanics, appealing especially to the writings of Bohr. Now I think this picture is certainly not forced on us by quantum mechanics - there are plenty of ways of making sense of quantum mechanics while maintaining the idea that fundamental physical reality has an objective existence, if only in the form of a superposed wave function. Bilodeau clearly finds these interpretations unappealing, but I (like many others) find them much more comprehensible. Given that macroscopic physical reality has an objective existence, it seems that its causal antecedents must have objective existence (otherwise why would it come into existence?), and in the process of explanation we are relentlessly driven to causal antecedents at more and more fundamental levels. So the only way I can make sense of the idea that fundamental physical reality does not have objective existence is as a form of idealism, on which all physical reality is present only within experience. Bilodeau disclaims this interpretation, however, so this may be a cognitive limitation on my part.

In any case it seems that even under Bilodeau's reasoning, there still needs to be an explanatory theory connecting experiences and brain processes. I am not quite sure what the shape of such a theory will be, but perhaps his version of the metaphysics will be able to give a natural version of it. It would be very interesting to see some of the details.

4.4 Fundamental psychophysical theories

Some of the most intriguing pieces, to me, are those that speculate about the shape of a fundamental theory of consciousness. Many of these proposals invoke some form of panpsychism. Panpsychism is not required for a fundamental theory; it is not written in stone that fundamental properties have to be ubiquitous. Libet and Stapp, for example, both invoke fundamental theories without invoking panpsychism. But the idea of a fundamental theory certainly fits well with panpsychism, and the proposals by Hut and Shepard, Rosenberg, and Seager are all explicitly panpsychist.

Some contributors (e.g. Mills and Hardcastle) roll their eyes at the idea of panpsychism, but explicit arguments against it are surprisingly hard to find. Rosenberg and Seager give nice defenses of panpsychism against various objections. Indeed, both upbraid me for not being panpsychist enough. I do not know whether panpsychism is true, but I find it an intriguing view, and in my book I argue that it deserves attention. If a simple and powerful predictive theory of consciousness ends up endorsing panpsychism, then I do not see why we should not accept it.

Panpsychist views need not ascribe much of a mind to simple entities. Sometimes the term "panexperientialism" is used instead, to suggest that all that is being ascribed is some sort of experience (not thought, not intelligence, not self-awareness), and a particularly simple form of experience at that. And some versions do not even go this far. Instead of suggesting that experience is ubiquitous, such views suggest that some other property is ubiquitous, where instantiations of this property somehow jointly constitute experience in more complex systems. Such a property might be thought of as a proto-experiential property, and the associated view might more accurately be thought of as panprotopsychism.

Of course it is very hard to form a conception of protoexperiential properties. We know that no set of physical properties can constitute experience, for familiar reasons. But perhaps some quite alien property might do the job. I was particularly intrigued by Hut and Shepard's postulation of a property "X", where X stands to consciousness as time stands to motion. That is, just as time enables the existence of motion, in combination with space, X enables the existence of consciousness, in combination with the basic dimensions of space-time. This offers an elegant picture of proto-experience quite different from the tempting picture on which proto-experience is "just like experience but less so".

In a way, Hut and Shepard's proposal has a lot in common with McGinn's suggestion of a "hidden dimension" of space which enables the existence of consciousness. As with McGinn, one can ask whether the dimension is truly "hidden", or whether it will manifest itself in our external observations (the physics we have now does a pretty good job, after all). As before, I suspect that such a property has to be hidden, as an empirically adequate theory can always be cast in terms of structure and dynamics that are compatible with the absence of experience. Thus, as before it seems that the new dimension will either (a) be epiphenomenal to the other dimensions (or at least to the projections of those dimensions that we have access to), or (b) be related to them as a sort of Russellian "realizing" property, carrying the structure in one of these dimensions and making it real. The latter would be particularly compatible with the idea of turning the hard problem "upside down", on which physical reality is itself somehow derivative on underlying (proto)experiences.

Rosenberg offers a detailed defense of panpsychism, and makes a number of points with which I am particularly sympathetic. He makes a strong case against the existence of fundamental laws that connect consciousness to mere complexity, to aspects of functioning, or to biological properties. While I think there is nothing wrong with the idea of a nonpanpsychist fundamental theory, Rosenberg's discussion eliminates some of the most obvious candidates. (Another possibility worth considering, though: several simple laws might combine to imply that experience only comes into existence in certain complex cases.) And he begins to unpack what panpsychism might involve in a way that makes it clear that the idea is at least coherent.

Rosenberg also makes a strong case for an integrated view of nature, on which consciousness is not a mere tacked-on extra. My keynote paper may carry a flavor of the tacked-on picture (except for the final paragraph of section VII), but I think the integrated view is the ultimate goal. Perhaps the best path to such an integrated view is offered by the Russellian picture on which (proto)experiential properties constitute the intrinsic nature of physical reality. Such a picture is most naturally associated with some form of panpsychism. The resulting integration may be panpsychism's greatest theoretical benefit.

Seager also provides some motivation for panpsychism, and gives a particularly interesting account of its problems. I think his "completeness problem" (a version of the epiphenomenalism problem) is mitigated by embracing the Russellian interpretation, on which the fundamental (proto)experiences are part of the causal order, although there will always be residual worries about explanatory superfluity. (Giving experiences certain anomalous effects doesn't help here; experience-free structural explanations are just as possible either way.) This view would also solve his "no-sign" problem: we cannot expect to have external access to the intrinsic properties that underlie physical dispositions. A solution to the "not-mental" problem will likely have to wait until we have a theory; presumably we will then be justified in attributing (proto-)mentality in certain cases precisely because of the theory's indirect explanatory benefits in explaining our own experiences. A version of the "unconscious mentality" problem will apply to any view that postulates proto-experiential rather than experiential properties at the fundamental level (how does experience emerge from non-experience?), but this need not be quite as hard as the original hard problem. We know that physical properties cannot imply experience, because of the character of physics, but novel intrinsic proto-experiential properties cannot be ruled out in the same way.

This leaves the "combination problem", which is surely the hardest. This is the problem of how low-level proto-experiential and other properties somehow together constitute our complex unified conscious experiences. (One might also think of it as the "constitution problem", to avoid the implication that constitution must work by simple combination; consider Hut and Shepard's non-combinatorial proposal, for example.) The problem could be bypassed altogether by suggesting that complex experiences are not constituted by the micro-experiences, but rather arise autonomously. This would hold true under many psychophysical theories, including some versions of an informational theory; its main disadvantage is that it once again threatens epiphenomenalism. To make experience causally relevant in the Russellian way, it seems that it has to be constituted out of the intrinsic natures of the fundamental causally relevant entities in physical theory. Unless we embrace an interactionist picture like Stapp's where there is fundamental causation at a high level, it seems that integrating experience into the causal order leads inevitably to the combination problem.

To solve the problem, we have to investigate the principles of composition to which experience is subject. The "problem" may well arise from thinking of experiential composition along the lines of physical composition, when it might well work quite differently. I suggest in my book, for example, that something more like informational composition might be more appropriate. Alternatively, we may try to keep a closer isomorphism between experiential composition and physical composition, but investigate nonstandard manners of physical composition. Seager's invocation of quantum coherence is an intriguing example of such a strategy: in this case physical composition yields a unity that might mirror the unity of experience. To the best of my knowledge, the evidence for widespread stable quantum coherent states at a macroscopic level in the brain is not strong, but this is nevertheless a strategy to keep in mind. A related quantum-mechanical strategy is discussed by Lockwood (1992), who also provides an illuminating discussion of the problem in general. There may well be other interesting ideas waiting to be explored in addressing the problem; it is likely to be a fruitful area for further inquiry.

Of course everything in these last two sections has the air of something put together in the metaphysical laboratory, to use Seager's phrase. It is all extraordinarily speculative, and has to be taken with a very large grain of salt. Like my own speculations about information, these suggestions have not yet been remotely developed to the point where they can be given a proper assessment - indeed, their largely undefined nature may be the reason that I am able to speak reasonably warmly of all of them! And most of them have not yet begun to provide a detailed explanatory bridge from the fundamental level to the complex experiences we know and love. I favor the informational view partly because it seems closer to providing such a bridge than proposals based directly in physics or elsewhere, but even this view is very sketchy in crucial places.

To have a fundamental theory that we can truly assess, we will need a theory with details. That is, we will need specific proposals about psychophysical laws, and specific proposals about how these laws combine, if necessary, so that ultimately we will be able to (1) take the physical facts about a given system, (2) apply the psychophysical theory to these facts, and thus (3) derive a precise characterization of the associated experiences that the theory predicts. As yet, we do not have a single theory that allows this sort of derivation. Indeed, as I noted above, we may first need to develop a proper formalism (informational, geometrical, topological?) for characterizing experiences before this project can get off the ground. And once we have such a formalism, it may well be extremely hard to devise a theory that even gives the right results in the simplest familiar cases. Once we do have a detailed theory that gives approximately correct results in familiar cases, however, we will know we are on the right track. The ultimate goal is a simple theory that gets things exactly right.
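
To fix ideas about the shape of the three-step derivation just described, here is a purely schematic sketch in Python. Every name in it (PhysicalFacts, ExperienceProfile, toy_theory, and the "mirroring" law) is a hypothetical placeholder of my own, not a proposal defended in this paper; the real work would lie in finding a formalism and laws that deserve these labels.

    from dataclasses import dataclass
    from typing import Callable, Dict

    @dataclass
    class PhysicalFacts:
        """Toy stand-in for a complete physical description of a system."""
        informational_states: Dict[str, float]  # e.g. strength of each realized state

    @dataclass
    class ExperienceProfile:
        """Toy stand-in for a formal characterization of experience."""
        structure: Dict[str, float]

    # A candidate psychophysical theory would be a mapping of this general type.
    PsychophysicalTheory = Callable[[PhysicalFacts], ExperienceProfile]

    def toy_theory(facts: PhysicalFacts) -> ExperienceProfile:
        # Placeholder "law": phenomenal structure simply mirrors informational structure.
        return ExperienceProfile(structure=dict(facts.informational_states))

    # (1) the physical facts; (2) apply the theory; (3) the predicted characterization.
    facts = PhysicalFacts(informational_states={"red-channel": 0.9, "green-channel": 0.1})
    print(toy_theory(facts).structure)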

I imagine that it may well be many years until we have a good detailed theory. We will probably first have to concentrate on understanding the "macroscopic" regularities between processing and experience, and gradually work our way down to the fundamental principles that underlie and explain these regularities. Most researchers are now working at the macroscopic level, insofar as they are working on experience at all, and this is as it should be. But we can at least speculate about the form of a fundamental theory, in our more philosophical moments, and there is no reason why we should not try to come up with some details. Perhaps we will prove to have been terribly premature, but we will not know until we try. And in the meantime, I am sure that the attempt will be enlightening.

5 CONCLUSION

Taking a broad view of the metaphysics of the hard problem, here is the lay of the land as I see it.

(1) The first "choice point" is the question of whether there is a problem of consciousness at all, distinct from the problem of explaining functions. Some, the type-A materialists, deny this, though we have seen that there seem to be few good arguments for such a counterintuitive conclusion. Given that there is a further phenomenon that needs explaining, we have seen that one is forced to the conclusion that no reductive explanation of consciousness can be given, and that explanatorily primitive bridging principles are required.

(2) In a second choice point, some (the type-B materialists) try to preserve materialism by arguing that these principles are "identities". But we have seen that these explanatorily primitive identities are unparalleled elsewhere in science, are philosophically problematic, and require the invocation of a new and ungrounded form of necessity. In any case, the form of a theory of this sort will be just like the form of a theory that takes consciousness as fundamental, and these "identities" will function in our explanations just like fundamental laws.

(3) All other theories take experience (or proto-experience) as irreducible, along with irreducible principles relating it to the physical domain. The next choice point is whether to hold onto the causal closure of the physical. Denying this, perhaps through an invocation of wavefunction collapse in quantum mechanics, leads to an interactionist dualism. But the advantages of this denial can be questioned.

(4) Given that the physical domain is a closed causal network, the next choice is between views that place experience outside this network, with psychophysical laws that make experience epiphenomenal, and views that place experience inside the network, by virtue of a Russellian monism on which the intrinsic properties of matter are proto-experiential. The latter offers the most attractive and integrated view, if the "combination problem" can be solved.

(5) The final choice point turns on the form of the psychophysical laws in our theory. This is the meatiest question of all, and can be engaged by researchers in all fields: the earlier questions require some tolerance for metaphysics, but this question is more straightforwardly "scientific". Much work on this question will be independent of specific choices on questions (2)-(4), though some aspects of these choices may inform one's approach to this question at some point.

Progress on the hard problem will likely take place at two levels. On the philosophical level, there will be an ongoing clarification of the issues surrounding (1)-(4), and the arguments for and against the various options at the various choice points. For my part I think the case for introducing new irreducible properties is hard to resist, but the choice points at (3) and especially (4) are still open. On a more concrete level, there will be progress toward specific laws as in (5). A combination of experimental study, phenomenological investigation, and philosophical analysis will lead us to systematic principles bridging the domains, and eventually we hope to be led to the underlying fundamental laws. In this way we may eventually arrive at a truly satisfactory theory of conscious experience.

Bibliography

Albert, D. 1992. Quantum Mechanics and Experience. Cambridge, MA: Harvard University Press.

Baars, B.J. 1996. Understanding subjectivity: Global workspace theory and the resurrection of the observing self. Journal of Consciousness Studies 3:211-17.

Bilodeau, D.J. 1996. Physics, machines, and the hard problem. Journal of Consciousness Studies 3:386-401.

Chalmers, D.J. 1996. The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.

Churchland, P.M. 1996. The rediscovery of light. Journal of Philosophy 93:211-28.

Churchland, P.S. 1996. The hornswoggle problem. Journal of Consciousness Studies 3:402-8.

Clark, T. 1995. Function and phenomenology: Closing the explanatory gap. Journal of Consciousness Studies 2:241-54.

Clarke, C.J.S. 1995. The nonlocality of mind. Journal of Consciousness Studies 2:231-40.

Crick, F. and Koch, C. 1995. Why neuroscience may be able to explain consciousness. Scientific American 273(6):84-85.

Cramer, J.G. 1986. The transactional interpretation of quantum mechanics. Reviews of Modern Physics 58:647-87.

Dennett, D.C. 1996. Facing backwards on the problem of consciousness. Journal of Consciousness Studies 3:4-6.

Everett, H. 1973. The theory of the universal wave function. In (B.S. DeWitt & N. Graham, eds.) The Many-Worlds Interpretation of Quantum Mechanics. Princeton: Princeton University Press.

Köhler, W. 1947. Gestalt Psychology. New York: Liveright Publishing Corporation.

Hardcastle, V.G. 1996. The why of consciousness: A non-issue for materialists. Journal of Consciousness Studies 3:7-13.

Hawking, S. 1988. A Brief History of Time. Bantam Books.

Hodgson, D. 1996. The easy problems ain't so easy. Journal of Consciousness Studies 3:69-75.

Hut, P. & Shepard, R. 1996. Turning the "hard problem" upside-down and sideways. Journal of Consciousness Studies 3:313-29.

Levine, J. 1983. Materialism and qualia: The explanatory gap. Pacific Philosophical Quarterly 64:354-61.

Libet, B. 1996. Solutions to the hard problem of consciousness. Journal of Consciousness Studies 3:33-35.

Lockwood, M. 1989. Mind, Brain, and the Quantum. Oxford: Blackwell.

Lockwood, M. 1992. The grain problem. In (H. Robinson, ed.) Objections to Physicalism. Oxford: Oxford University Press.

Lowe, E.J. 1995. There are no easy problems of consciousness. Journal of Consciousness Studies 2:266-71.

MacLennan, B. 1996. The elements of consciousness and their neurodynamical correlates. Journal of Consciousness Studies 3:409-24.

Maxwell, G. 1978. Rigid designators and mind-brain identity. In (C.W. Savage, ed.) Perception and Cognition: Issues in the Foundations of Psychology (Minnesota Studies in the Philosophy of Science, Vol. 9). Minneapolis: University of Minnesota Press.

McGinn, C. 1989. Can we solve the mind-body problem? Mind 98:349-66. Reprinted in The Problem of Consciousness (Blackwell, 1991).

McGinn, C. 1995. Consciousness and space. Journal of Consciousness Studies 2:220-30.

McGinn, C. 1996. Review of The Conscious Mind. Times Higher Educational Supplement, April 5 1996, pp. vii-ix.

Mills, E. 1996. Interactionism and overdetermination. American Philosophical Quarterly 33:105-115.

Mills, E. 1996. Giving up on the hard problem of consciousness. Journal of Consciousness Studies 3:26-32.

Müller, G.E. 1896. Zur Psychophysik der Gesichtsempfindungen. Zeitschrift für Psychologie und Physiologie der Sinnesorgane 10:1-82.

O'Hara, K. & Scutt, T. 1996. There is no hard problem of consciousness. Journal of Consciousness Studies 3.

Papineau, D. 1996. Review of The Conscious Mind. Times Literary Supplement 4864 (June 21, 1996), pp. 3-4.

Place, U.T. 1956. Is consciousness a brain process? British Journal of Psychology 47:44-50. Reprinted in (W. Lycan, ed) Mind and Cognition (Blackwell, 1990).

Price, M.C. 1996. Should we expect to feel as if we understand consciousness? Journal of Consciousness Studies 3:303-12.

Robinson, W.S. 1996. The hardness of the hard problem. Journal of Consciousness Studies 3:14-25.

Rosenberg, G.H. 1996. Rethinking nature: A hard problem within the hard problem. Journal of Consciousness Studies 3:76-88.

Russell, B. 1927. The Analysis of Matter. London: Kegan Paul.

Seager, W. 1995. Consciousness, information, and panpsychism. Journal of Consciousness Studies 2:272-88.

Shear, J. 1996. The hard problem: Closing the empirical gap. Journal of Consciousness Studies 3:54-68.

Shoemaker, S. 1975. Functionalism and qualia. Philosophical Studies 27:291-315. Reprinted in Identity, Cause, and Mind (Cambridge University Press, 1984).

Shoemaker, S. 1990. First-person access. Philosophical Perspectives 4:187-214.

Smart, J.J.C. 1959. Sensations and brain processes. Philosophical Review 68:141-56. Reprinted in (D. Rosenthal, ed) The Nature of Mind (Oxford University Press, 1990).

Stapp, H. 1993. Mind, Matter, and Quantum Mechanics. Springer Verlag.

Varela, F. 1996. Neurophenomenology: A methodological remedy for the hard problem. Journal of Consciousness Studies 3.

Velmans, M. 1995. The relation of consciousness to the material world. Journal of Consciousness Studies 2:255-65.

Warner, R. 1996. Facing ourselves: Incorrigibility and the mind-body problem. Journal of Consciousness Studies 3:217-30.

White, S. 1986. Curse of the qualia. Synthese 68:333-68.