Monday, January 23, 2017

Reminder: Philosophical Short Fiction Contest Deadline Feb 1

Reminder: We are inviting submissions for the short story competition “Philosophy Through Fiction”, organized by Helen De Cruz (Oxford Brookes University), with editorial board members Eric Schwitzgebel (UC Riverside), Meghan Sullivan (University of Notre Dame), and Mark Silcox (University of Central Oklahoma). The winner of the competition will receive a cash prize of US$500 (funded by the Berry Fund of the APA) and their story will be published in Sci Phi Journal.

Full call here.

Monday, January 16, 2017

AI Consciousness: A Reply to Schwitzgebel

Guest post by Susan Schneider

If AI outsmarts us, I hope it's conscious. It might help with the horrifying control problem – the problem of how to control superintelligent AI (SAI), given that SAI would be vastly smarter than us and could rewrite its own code. Just as some humans respect nonhuman animals because animals feel, so too, conscious SAI might respect us because they see within us the light of conscious experience.

So, will an SAI (or even a less intelligent AI) be conscious? In a recent TED talk, Nautilus and Huffington Post pieces, and some academic articles (all at my website), I’ve been urging that it is an important open question.

I love Schwitzgebel's reply because he sketches the best possible scenario for AI consciousness: noting that conscious states tend to be associated with slow, deliberative reasoning about novel situations in humans, he suggests that SAI may endlessly invent novel tasks – e.g., perhaps they posit ever more challenging mathematical proofs, or engage in an intellectual arms race with competing SAIs. So SAIs could still engage in reasoning about novel situations, and thereby be conscious.

Indeed, perhaps SAI will deliberately engineer heightened conscious experience in itself, or, in an instinct to parent, create AI mindchildren that are conscious.

Schwitzgebel gives further reason for hope: "...unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing." He also writes: "Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing."

Both of us agree that leading scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus, and that these approaches also associate consciousness with some sort of broad information sharing from a central system or global workspace (see Ch. 2 of my Language of Thought: A New Philosophical Direction, where I mine Baars' Global Workspace Theory for a computational approach to LOT's central system).

Maybe it is just that I'm too despondent since Princess Leia died. But here are a few reasons why I still see the glass as half empty:

a. Eric's points assume that reasoning about novel situations, and centralized, deliberative thinking more generally, will be implemented in SAI in the same way they are in humans – i.e., in a way that involves conscious experience. But the space of possible minds is vast: There could be other architectural ways to get novel reasoning, central control, etc. that do not involve consciousness or a global workspace. Indeed, if we merely consider biological life on Earth we see intelligences radically unlike us (e.g., slime molds, octopuses); there will likely be radically different cognitive architectures in the age of AGI/superintelligence.

b. SAI may not have a centralized architecture in any case. A centralized architecture is a place where highly processed information comes together from the various sensory modalities (including association areas). Consider the octopus, which apparently has more neurons in its arms than in its brain. The arms can carry out activity without the brain; these activities do not need to be coordinated by a central controller or global workspace in the brain proper. Maybe a creature already exists, elsewhere in the universe, that has even less central control than the octopus.

Indeed, coordinated activity doesn't require that a brain region or brain process be a place where it all comes together, although it helps. There are all kinds of highly coordinated group activities on Earth, for instance (the internet, the stock market). And if you ask me, there are human bodies that are now led by coordinated conglomerates without a central controller. Here, I am thinking of split-brain patients, who engage in coordinated activity (i.e., the right and left limbs seem to others to be coordinated). But the brain has been split through severing of the corpus callosum, and plausibly, there are two subjects of experience there. The coordination is so convincing that even the patient's spouse doesn't realize there are two subjects there. It takes highly contrived laboratory tests to determine that the two hemispheres are separate conscious beings. How could this be? Each hemisphere examines the activity of the other hemisphere (the right hemisphere observes the behavior of the limb it doesn't control, etc.). And only one hemisphere controls the mouth.

c. But assume the SAI or AGI has a cognitive architecture similar to ours; in particular, assume it has an integrated central system or global workspace (as in Baars' Global Workspace Theory). I still think consciousness is an open question here. The problem is that only some implementations of a central system (or global workspace) may be conscious, while others may not be. Highly integrated, centralized information processing may be necessary, but not sufficient. For instance, it may be that the very properties of neurons that enable consciousness, C1-C3, say, are not ones that AI programs need to reproduce to get AI systems that do the needed work. Perhaps AI programmers can get sophisticated information processing without needing to go so far as to build systems that instantiate C1-C3. Or perhaps a self-improving AI may not bother to keep consciousness in its architecture, or, lacking consciousness, it may not bother to engineer it in, as its final and instrumental goals may not require it. And who knows what their final goals will be; none of the instrumental goals Bostrom and others identify require consciousness (goal content integrity, cognitive enhancement, etc.).

Objection (Henry Shevlin and others): am I denying that it is nomologically possible to create a copy of a human brain, in silicon or some other substance, that precisely mimics the causal workings of the brain, including consciousness?

I don't deny this. I think that if you copy the precise causal workings of cells in a different medium you could get consciousness. The problem is that it may not be technologically feasible to do so. (An aside: for those who care about the nature of properties, I reject pure categorical properties; I have a two-sided view, following Heil and Martin. Categoricity and dispositionality are just different ways of viewing the same underlying property—two different modes of presentation, if you will. So consciousness properties that have all and only the same dispositions are the same type of property. You and your dispositional duplicate can't differ in your categorical properties then. Zombies aren't possible.)

It seems nomologically possible that an advanced civilization could build a gold sphere the size of Venus. What is the probability that this will ever happen, though? This depends upon economics and sociology – a civilization would need to have a practical incentive to do this. I bet it will never happen.

AI is currently being built to do specific tasks better than us. This is the goal, not reproducing consciousness in machines. It may be that the substrate used to build AI is not a substrate that instantiates consciousness easily. Engineering consciousness in may be too expensive and time consuming. Like building the Venus-sized gold sphere. Indeed, given the ethical problems with creating sentient beings and then having them work for us, AI programs may aim to build systems that aren't conscious.

A response here is that once you get a sophisticated information processor, consciousness inevitably arises. Three things seem to fuel this view: (1) Tononi's integrated information theory (IIT). But it seems to have counterexamples (see Scott Aaronson's blog). (2) Panpsychism/panprotopsychism. Even if one of these views is correct, the issue of whether a given AI is conscious is about whether the AI in question has the kind of conscious experience macroscopic subjects of experience (persons, selves, nonhuman animals) have. Merely knowing whether panpsychism or panprotopsychism is true does not answer this. We need to know which structural relations between particles lead to macroexperience. (3) Neural replacement cases, i.e., thought experiments in which you are asked to envision replacing parts of your brain (at time t1) with silicon chips that function just like neurons, so that in the end (t2), your brain is made of silicon. You are then asked: intuitively, are you still conscious? Do you think the quality of your consciousness would change? These cases only go so far. The intuition is plausible that from t1 to t2, at no point would you lose consciousness or have your consciousness diminished (see Chalmers, Lowe and Plantinga for discussion of such thought experiments). This is because a dispositional duplicate of your brain is created, from t1 to t2. If the chips are dispositional duplicates of neurons, sure, I think the duplicate would be conscious. (I'm not sure this would be a situation in which you survived, though -- see my NYT op-ed on uploading.) But why would an AI company build such a system from scratch, to clean your home, be a romantic partner, advise a president, etc.?

Again, it is not clear, currently, that just by creating a fast, efficient program ("IP properties") we have also copied the very same properties that give rise to consciousness in humans ("C properties"). It may require further work to get C properties, and in different substrates, it may be hard, far more difficult than building a biological system from scratch. Like creating a gold sphere the size of Venus.

Cheers to a Philosopher and Fighter

[image source]

Sunday, January 08, 2017

Against Charity in the History of Philosophy

Peter Adamson, host of History of Philosophy Without Any Gaps, recently posted twenty "Rules for the History of Philosophy". Mostly, they are terrific rules. I want to quibble with one.

Like almost every historian of philosophy I know, Adamson recommends that we be "charitable" to the text. Here's how he puts it in "Rule 2: Respect the text":

This is my version of what is sometimes called the "principle of charity." A minimal version of this rule is that we should assume, in the absence of fairly strong reasons for doubt, that the philosophical texts we are reading make sense.... [It] seems obvious (to me at least) that useful history of philosophy doesn't involve looking for inconsistencies and mistakes, but rather trying one's best to get a coherent and interesting line of argument out of the text. This is, of course, not to say that historical figures never contradicted themselves, made errors, and the like, but our interpretations should seek to avoid imputing such slips to them unless we have tried hard and failed to find a way of resolving the apparent slip.

At first pass, it seems a good idea to avoid imputing contradictions and errors, and to seek a coherent, sensible interpretation of historical texts "unless we have tried hard and failed to find a way of resolving the apparent slip". This is how, it seems, to best "respect the text".

To see why I think charity isn't as good an idea as it seems, let me first reveal my main reason for reading history of philosophy: It's to gain a perspective, through the lens of distance, on my own philosophical views and presuppositions, and on the philosophical attitudes and presuppositions of 21st century Anglophone philosophy generally. Twenty-first century Anglophone philosophy tends to assume that the world is wholly material (with the exception of religious dualists and near cousins of materialists, like property dualists). I'm inclined to accept the majority's materialism. Reading the history of philosophy helpfully reminds me that a wide range of other views have been taken seriously over time. Similarly, 21st century Anglophone philosophy tends to favor a certain sort of liberal ethics, with an emphasis on individual rights and comparatively little deference to traditional rules and social roles -- and I tend to favor such an ethics too. But it's good to be vividly aware that wonderful thinkers have often had very different moral opinions. Reading culturally distant texts reminds me that I am a creature of my era, with views that have been shaped by contingent social factors.

Of course, others might read history of philosophy with very different aims, which is fine.

Question: If this is my aim in reading history of philosophy, what is the most counterproductive thing I could do when confronting a historical text?

Answer: Interpret the author as endorsing a view that is familiar, "sensible", and similar to my own and my colleagues'.

Historical texts, like all philosophical texts -- but more so, given our linguistic and cultural distance -- tend to be difficult and ambiguous. Therefore, they will admit of multiple interpretations. Suppose, then, that there's a text admitting of four possible interpretations: A, B, C, and D, where Interpretation A is the least challenging, least weird, and most sensible, and Interpretation D is the most challenging, weirdest, and least sensible. A simple application of the principle of charity seems to recommend that we favor the sensible, pedestrian Interpretation A. In fact, however, weird and wild Interpretation D would challenge our presuppositions more deeply and give us a more helpfully distant perspective. This is one reason to favor Interpretation D. Call this the Principle of Anti-Charity.

Admittedly, this way of defending Anti-Charity might seem noxiously instrumentalist. What about historical accuracy? Don't we want the interpretation that's most likely to be true?

Bracketing post-modern views that reject truth in textual interpretation, I have four responses to that concern:

1. Being Anti-Charitable doesn't mean that anything goes. You still want to respect the surface of the text. If the author says "P", you don't want to attribute the view not-P. In fact, it is the more "charitable" views that are likely to take the author's claims other than at face value: "The author says P, but really a charitable, sensible interpretation is that the author really meant P-prime". In one way, it is actually more respectful to the texts not to be too charitable, and to interpret the text superficially at face value. After all, P is what the author literally said.

2. What seems "coherent" and "sensible" is culturally variable. You might reject excessive charitableness, while still wanting to limit allowable interpretations to one among several sensible and coherent ones. But this might already be too limiting. It might not seem "coherent" to us to embrace a contradiction, but some philosophers in some traditions seem happy to accept bald contradictions. It might not seem "sensible" to think that the world is nothing but a flux of ideas, such that the existence of rocks depends entirely upon the states of immaterial spirits. So if there's any ambiguity, you might hope to tame views that seem metaphysically idealist, thereby giving those authors a more sensible, reasonable seeming view. But this might be leading you away from rather than toward interpretative accuracy.

3. Philosophy is hard and philosophers are stupid. The human mind is not well-designed for figuring out philosophical truths. Timeless philosophical puzzles tend to kick our collective asses. Sadly, this is going to be true of your favorite philosopher too. The odds are good that this philosopher, being a flawed human like you and me, made mistakes, fell into contradictions, changed opinions, and failed to see what seem to be obvious consequences and counterexamples. Respecting the text and respecting the person means, in part, not trying too hard to smooth this stuff away. The warts are part of the loveliness. They are also a tonic against excessive hero worship and a reminder of your own likely warts and failings.

4. Some authors might not even want to be interpreted as having a coherent, stable view. I have recently argued that this is the case for the ancient Chinese philosopher Zhuangzi. Let's not fetishize stable coherence. There are lots of reasons to write philosophy. Some philosophers might not care if it all fits together. Here, attempting "charitably" to stitch together a coherent picture might be a failure to respect the aims and intentions implicit in the text.

Three cheers for the weird and "crazy", the naked text, not dressed in sensible 21st century garb!

-----------------------------------------------

Related post: In Defense of Uncharitable and Superficial History of Philosophy (Aug 17, 2012)

(HT: Sandy Goldberg for discussion and suggestion to turn it into a blog post)

[image source]

Sunday, January 01, 2017

Writings of 2016, and Why I Love Philosophy

It's a tradition for me now, posting a retrospect of the past year's writings on New Year's Day. (Here are the retrospects of 2012, 2013, 2014, and 2015.)

Two landmarks: my first full-length published essay on the sociology of philosophy ("Women in philosophy", with Carolyn Dicey Jennings), and the first foreign-language translations of my science fiction ("The Dauphin's Metaphysics" into Chinese and Hungarian).

Recently, I've been thinking about the value of doing philosophy. Obviously, I love reading, writing, and discussing philosophy, on a wide range of topics -- hence all the publications, the blog, the travel, and so forth. Only love could sustain that. But do I love it only in the way that I might love a videogame -- as a challenging, pleasurable activity, but not something worthwhile? No, I do hope that in doing philosophy I am doing something worthwhile.

But what makes philosophy worthwhile?

One common view is that studying philosophy makes you wiser or more ethical. Maybe this is true, in some instances. But my own work provides reasons for doubt: With Joshua Rust, I've found that ethicists and non-ethicist philosophers behave pretty much the same as professors who study other topics. With Fiery Cushman, I've found evidence that philosophers are just as subject as non-philosophers to irrational order effects and framing effects in thinking about moral scenarios, even scenarios on which they claim expertise. With Jon Ellis, I've argued that there's good reason to think that philosophical and moral thought may be especially fertile for nonconscious rationalization, including among professors of philosophy.

Philosophy might still be instrumentally worthwhile in various ways: Philosophers might create conceptual frameworks that are useful for the sciences, and they might helpfully challenge scientists' presuppositions. It might be good to have philosophy professors around so that students can improve their argumentative and writing skills by taking courses with them. Public philosophers might contribute usefully to political and cultural dialogue. But none of this seems to be the heart of the matter. Nor is it clear that we've made great progress in answering the timeless questions of the discipline. (I do think we've made some progress, especially in carving out the logical space of options.)

Here's what I would emphasize instead: Philosophy is an intrinsically worthwhile activity with no need of further excuse. It is simply one of the most glorious, awesome facts about our planet that there are bags of mostly-water that can step back from ordinary activity and reflect in a serious way about the big picture, about what they are, and why, and about what really has value, and about the nature of the cosmos, and about the very activity of philosophical reflection itself. Moreover, it is one of the most glorious, awesome facts about our society that there is a thriving academic discipline that encourages people to do exactly that.

This justification of philosophy does not depend on any downstream effects: Maybe once you stop thinking about philosophy, you act just the same as you would have otherwise acted. Maybe you gain no real wisdom of any sort. Maybe you learn nothing useful at all. Even so, for those moments that you are thinking hard about big philosophical issues, you are participating in something that makes life on Earth amazing. You are a piece of that.

So yes, I want to be a piece of that too. Welcome to 2017. Come love philosophy with me.

-----------------------------------

Full-length non-fiction essays appearing in print in 2016:

    "The behavior of ethicists" (with Joshua Rust), in J. Sytsma and W. Buckwalter, eds., A Companion to Experimental Philosophy (Wiley-Blackwell).
Full-length non-fiction finished and forthcoming:
Shorter non-fiction:
Editing work:
    Oneness in philosophy, religion, and psychology (with P.J. Ivanhoe, O. Flanagan, R. Harrison, and H. Sarkissian), Columbia University Press (forthcoming).
Non-fiction in draft and circulating:
Science fiction stories:
    "The Dauphin's metaphysics" (orig. published in Unlikely Story, 2015).
      - translated into Hungarian for Galaktika, issue 316.
      - translated into Chinese for Science Fiction World, issue 367.
Some favorite blog posts:
Selected interviews:

[image modified from here]

Tuesday, December 27, 2016

A few days ago, Skye Cleary interviewed me for the Blog of the APA. I love her direct and sometimes whimsical questions.

--------------------------

SC: What excites you about philosophy?

ES: I love philosophy’s power to undercut dogmatism and certainty, to challenge what you thought you knew about yourself and the world, to induce wonder, and to open up new vistas of possibility.

SC: What are you working on right now?

ES: About 15 things. Foremost in my mind at this instant: “Settling for Moral Mediocrity” and a series of essays on “crazy” metaphysical possibilities that we aren’t in a good epistemic position to confidently reject....

[It's a brief interview -- only six more short questions.]

Read the rest here.

Wednesday, December 21, 2016

Is Most of the Intelligence in the Universe Non-Conscious AI?

In a series of fascinating recent articles, philosopher Susan Schneider argues that

(1.) Most of the intelligent beings in the universe might be Artificial Intelligences rather than biological life forms.

(2.) These AIs might entirely lack conscious experiences.

Schneider's argument for (1) is simple and plausible: Once a species develops sufficient intelligence to create Artificial General Intelligence (as human beings appear to be on the cusp of doing), biological life forms are likely to be outcompeted, due to AGI's probable advantages in processing speed, durability, repairability, and environmental tolerance (including deep space). I'm inclined to agree. For a catastrophic perspective on this issue, see Nick Bostrom. For a Pollyannaish perspective, see Ray Kurzweil.

The argument for (2) is trickier, partly because we don't yet have a consensus theory of consciousness. Here's how Schneider expresses the central argument in her recent Nautilus article:

Further, it may be more efficient for a self-improving superintelligence to eliminate consciousness. Think about how consciousness works in the human case. Only a small percentage of human mental processing is accessible to the conscious mind. Consciousness is correlated with novel learning tasks that require attention and focus. A superintelligence would possess expert-level knowledge in every domain, with rapid-fire computations ranging over vast databases that could include the entire Internet and ultimately encompass an entire galaxy. What would be novel to it? What would require slow, deliberative focus? Wouldn’t it have mastered everything already? Like an experienced driver on a familiar road, it could rely on nonconscious processing.

On this issue, I'm more optimistic than Schneider. Two reasons:

First, Schneider probably underestimates the capacity of the universe to create problems that require novel solutions. Mathematical problems, for example, can be arbitrarily difficult (including problems that are neither finitely solvable nor provably unsolvable). Of course AGI might not care about such problems, so that alone is a thin thread on which to hang hope for consciousness. More importantly, if we assume Darwinian mechanisms, including the existence of other AGIs that present competitive and cooperative opportunities, then there ought to be advantages for AGIs that can outthink the other AGIs around them. And here, as in the mathematical case, I see no reason to expect an upper bound of difficulty. If your Darwinian opponent is a superintelligent AGI, you'd probably love to be an AGI with superintelligence + 1. (Of course, there are other paths to evolutionary success than intelligent creativity. But it's plausible that once superintelligent AGI emerges, there will be evolutionary niches that reward high levels of creative intelligence.)

Second, unity of organization in a complex system plausibly requires some high-level self-representation or broad systemic information sharing. Schneider is right that many current scientific approaches to consciousness correlate consciousness with novel learning and slow, deliberative focus. But most current scientific approaches to consciousness also associate consciousness with some sort of broad information sharing -- a "global workspace" or "fame in the brain" or "availability to working memory" or "higher-order" self-representation. On such views, we would expect a state of an intelligent system to be conscious if its content is available to the entity's other subsystems and/or reportable in some sort of "introspective" summary. For example, if a large AI knew, about its own processing of lightwave input, that it was representing huge amounts of light in the visible spectrum from direction alpha, and if the AI could report that fact to other AIs, and if the AI could accordingly modulate the processing of some of its non-visual subsystems (its long-term goal processing, its processing of sound wave information, its processing of linguistic input), then on theories of this general sort, its representation "lots of visible light from that direction!" would be conscious. And we ought probably to expect that large general AI systems would have the capacity to monitor their own states and distribute selected information widely. Otherwise, it's unlikely that such a system could act coherently over the long term. Its left hand wouldn't know what its right hand is doing.

I share with Schneider a high degree of uncertainty about what the best theory of consciousness is. Perhaps it will turn out that consciousness depends crucially on some biological facts about us that aren't likely to be replicated in systems made of very different materials (see John Searle and Ned Block for concerns). But to the extent there's any general consensus or best guess about the science of consciousness, I believe it suggests hope rather than pessimism about the consciousness of large superintelligent AI systems.

Related:

Possible Psychology of a Matrioshka Brain (Oct 9, 2014)

If Materialism Is True, the United States Is Probably Conscious (Philosophical Studies 2015).

Susan Schneider on How to Prevent a Zombie Dictatorship (Jun 27, 2016)

[image source]

Friday, December 16, 2016

Extraterrestrial Microbes and Being Alone in the Universe

A couple of weeks ago I posted some thoughts that I intended to give after a cosmology talk here at UCR. As it happens, I gave an entirely different set of comments! So I figured I might as well also share the comments I actually gave.

Although the cosmology talk made no or almost no mention of extraterrestrial life, it had been advertised as the first in a series of talks on the question "Are We Alone?" The moderator then talked about astrobiologists being excited about the possibility of discovering extraterrestrial microbial life. So I figured I'd expand a bit on the idea of being "alone", or not, in the universe.

Okay, suppose that we find microbial life on another planet. Tiny micro-organisms. How excited should we be?

The title of this series of talks -- written in big letters on the posters -- is "Are We Alone?" What does it mean to be alone?

Think of Robinson Crusoe. He was stranded on an island, all by himself (or so he thought). He is kind of our paradigm example of someone who is totally alone. But of course he was surrounded by life on that island -- trees, fish, snails, microbes on his face. This suggests that on one way of thinking about being "alone", a person can be entirely alone despite being surrounded by life. Discovering microbes on another planet would not make us any less alone.

To be not alone, I’m thinking, means having some sort of companion. Someone who will recognize you socially. Intelligent life. Or at least a dog.

We might be excited to discover microbes because hey, it's life! But what’s so exciting about life per se?

Life -- something that maintains homeostasis, has some sort of stable organization, draws energy from its environment to maintain that homeostatic organization, reproduces itself, is complex. Okay, that's neat. But the Great Red Spot on Jupiter, which is a giant weather pattern, has maintained its organization for a long time in a complex environment. Flames jumping across treetops in some sense reproduce themselves. Galaxies are complex. Homeostasis, reproduction, complexity -- these are cool. Tie them together in a little package of microbial life; that’s maybe even cooler. But in a way we do kind of already know that all the elements are out there.

Now suppose that instead of finding life we found a robot -- an intelligent, social robot, like C3P0 from Star Wars or Data from Star Trek. Not alive, by standard biological definitions, if it doesn’t belong to a reproducing species.

Finding life would be cool.

But finding C3P0 would be a better cure for loneliness.

(Apologies to my student Will Swanson, who has recently written a terrific paper on why we should think of robots as "alive" despite not meeting standard biological criteria for life.)

Related post: "Why Do We Care About Discovering Life, Exactly?" (Jun 18, 2015)

Recorded video of the Dec 8 session.

Thanks to Nalo Hopkinson for the dog example.

[image source]

Monday, December 12, 2016

Is Consciousness an Illusion?

In the current issue of the Journal of Consciousness Studies, Keith Frankish argues that consciousness is an illusion -- or at least that "phenomenal consciousness" is an illusion. It doesn't exist.

Now I think there are basically two different things that one could mean in saying "consciousness doesn't exist".

(A.) One is something that seems to be patently absurd and decisively refuted by every moment of lived experience: that there is no such thing as lived experience. If it sounds preposterous to deny that anyone ever has conscious experience, then you're probably understanding the claim correctly. It is a radically strange claim. Of course philosophers do sometimes defend radically strange, preposterous-sounding positions. Among them, this would be a doozy.

(B.) Alternatively, you might think that when a philosopher says that consciousness exists (or "phenomenal consciousness" or "lived, subjective experience" or whatever) she's usually not just saying the almost undeniably obvious thing. You might think that she's probably also regarding certain disputable properties as definitionally essential to consciousness. You might hear her as saying not only that there is lived experience in the almost undeniable sense but also that the target phenomenon is irreducible to the merely physical, or is infallibly knowable through introspection, or is constantly accompanied by a self-representational element, or something like that. Someone who hears the claim that "consciousness exists" in this stronger, more commissive sense might then deny that consciousness does exist, if they think that nothing exists that has those disputable properties. This might be an unintuitive claim, if it's intuitively plausible that consciousness does have those properties. But it's not a jaw dropper.

Admittedly, there has been some unclarity in how philosophers define "consciousness". It's not entirely clear on the face of it what Frankish means to deny the existence of in the article linked above. Is he going for the totally absurd sounding claim, or only the more moderate claim? (Or maybe something somehow in between or slightly to the side of either of these?)

In my view, the best and most helpful definitions of "consciousness" are the less commissive ones. The usual approach is to point to some examples of conscious experiences, while also mentioning some synonyms or evocative phrases. Examples include sensory experiences, dreams, vivid surges of emotion, and sentences spoken silently to oneself. Near synonyms or evocative phrases include "subjective quality", "stream of experience", "that in virtue of which it's like something to be a person". While you might quibble about any particular example or phrase, it is in this sense of "consciousness" that it seems to be undeniable or absurd to deny that consciousness exists. It is in this sense that the existence of consciousness is, as David Chalmers says, a "datum" that philosophers and psychologists need to accept.

Still, we might be dissatisfied with evocative phrases and pointing to examples. For one thing, such a definition doesn't seem very rigorous, compared to an analytic definition. For another thing, you can't do very much a priori with such a thin definition, if you want to build an argument from the existence of consciousness to some bold philosophical conclusion (like the incompleteness of physical science or the existence of an immaterial soul). So philosophers are understandably tempted to add more to the definition -- whatever further claims about consciousness seem plausible to them. But then, of course, they risk adding too much and losing the undeniability of the claim that consciousness exists.

When I read Frankish's article in preprint, I wasn't sure how radical a claim he meant to defend, in denying the existence of phenomenal consciousness. Was he going for the seemingly absurd claim? Or only for the possibly-unintuitive-but-much-less-radical claim?

So I wrote a commentary in which I tried to define "phenomenal consciousness" as innocently as possible, simply by appealing to what I hoped would be uncontroversial examples of it, while explicitly disavowing any definitional commitment to immateriality, introspective infallibility, irreducibility, etc. (final MS version). Did Frankish mean to deny the existence of phenomenal consciousness in that sense?

In one important respect, I should say, definition by example is necessarily substantive or commissive: Definition by example cannot succeed if the examples are a mere hodgepodge without any important commonalities. Even if there isn't a single unifying essence among the examples, there must at least be some sort of "family resemblance" that ordinary people can latch on to, more or less.

For instance, the following would fail as an attempted definition: By "blickets" I mean things like: this cup on my desk, my right shoe, the Eiffel tower, Mickey Mouse, and other things like those; but not this stapler on my desk, my left shoe, the Taj Mahal, Donald Duck, or other things like those. What property could the first group possibly possess, that the second group lacks, which ordinary people could latch onto by means of contemplating these examples? None, presumably (even if a clever philosopher or AI could find some such property). Defining "consciousness" by example requires there to be some shared property or family resemblance among the examples, which is not present in things we normally regard as "nonconscious" (early visual processing, memories stored but not presently considered, and growth hormone release). The putative examples cannot be a mere hodge-podge.

Definition by example can be silent about what descriptive features all these conscious experiences share, just as a definition by example of "furniture" or "games" might be silent about what ties those concepts together. Maybe all conscious experiences are in principle introspectively reportable, or nonphysical, or instantiated by 40 hertz neuronal oscillations. Grant first that consciousness exists. Argue about these other things later.

In his reply to my commentary, Frankish accepts the existence of "phenomenal consciousness" as I have defined it -- which is really (I think) more or less how it is already defined and ought to be defined in the recent Anglophone "phenomenal realist" tradition. (The "phenomenal" in "phenomenal consciousness", I think, serves as a usually unnecessary disambiguator, to prevent interpreting "consciousness" as some other less obvious but related thing like explicit self-consciousness or functional accessibility to cognition.) If so, then Frankish is saying something less radical than it might at first seem when he rejects the existence of "phenomenal consciousness".

So is consciousness an illusion? No, not if you define "consciousness" as you ought to.

Maybe my dispute with Frankish is mainly terminological. But it's a pretty important piece of terminology!

[image source, Pinna et al 2002, The Pinna Illusion]

Tuesday, December 06, 2016

A Philosophical Critique of the Big Bang Theory, in Four Minutes

I've been invited to be one of four humanities panelists after a public lecture on the early history of the universe. (Come by if you're in the UCR area. ETA: Or watch it live-streamed.) The speaker, Bahram Mobasher, has told me he likes to keep it tightly scientific -- no far-out speculations about the multiverse, no discussion of possible alien intelligences. Instead, we'll hear about H/He ratios, galactic formation, that sort of stuff. I have nothing to say about H/He ratios.

So here's what I'll say instead:

Alternatively, here’s a different way our universe might have begun: Someone might have designed a computer program. They might have put simulated agents in that computer program, and those simulated agents might be us. That is, we might be artificial intelligences inside an artificial environment created by some being who exists outside of our visible world. And this computer program that we are living in might have started ten years ago or ten million years ago or ten minutes ago.

This is called the Simulation Hypothesis. Maybe you’ve heard that Elon Musk, the famous tycoon of PayPal, Tesla, and SpaceX, believes that the Simulation Hypothesis is probably true.

Most of you probably think that Musk is wrong. Probably you think it vastly more likely that Professor Mobasher’s story is correct than that the Simulation Hypothesis is correct. Or maybe you think it’s somewhat more likely that Mobasher is correct.

My question is: What grounds this sense of relative likelihood? It’s doubtful that we can get definite scientific proof that we are not in a simulation. But does that mean that there are no rational constraints on what it’s more or less reasonable to guess about such matters? Are we left only with hard science on the one hand and rationally groundless faith on the other?

No, I think we can at least try to be rational about such things and let ourselves be moved to some extent by indirect or partial scientific evidence or plausibility considerations.

For example, we can study artificial intelligence. How easy or difficult is it to create artificial consciousness in simulated environments, at least in our universe? If it’s easy, that might tend to nudge up the reasonableness of the Simulation Hypothesis. If it’s hard, that might nudge it down.

Or we can look for direct evidence that we are in a designed computer program. For example, we can look for software glitches or programming notes from the designer. So far, this hasn’t panned out.

Here’s my bigger point. We all start with framework assumptions. Science starts with framework assumptions. Those assumptions might be reasonable, but they can also be questioned. And one place where cosmology intersects with philosophy and the other humanities and sciences is in trying to assess those framework assumptions, rather than simply leaving them unexamined or taking them on faith.

[image source]

Related:

"1% Skepticism" (Nous, forthcoming)

"Reinstalling Eden" (with R. Scott Bakker; Nature, 2013)

Tuesday, November 29, 2016

How Everything You Do Might Have Huge Cosmic Significance

Infinitude is a strange and wonderful thing. It transforms the ridiculously improbable into the inevitable.

Now hang on to your hat and glasses. Today's line of reasoning is going to make mere Boltzmann continuants seem boring and mundane.

First, let's suppose that the universe is infinite. This is widely viewed as plausible (see Brian Greene and Max Tegmark).

Second, let's suppose that the Copernican Principle holds: We are not in any special position in the universe. This principle is also widely accepted.

Third, let's assume cosmic diversity: We aren't stuck in an infinitely looping variant of a mere (proper) subset of the possibilities. Across infinite spacetime, there's enough variety to run through every finitely specifiable possibility infinitely often.

These assumptions are somewhat orthodox. To get my argument going, we also need a few assumptions that are less orthodox, but I hope not wildly implausible.

Fourth, let's assume that complexity scales up infinitely. In other words, as you zoom out on the infinite cosmos, you don't find that things eventually look simpler as the scale of measurement gets bigger.

Fifth, let's assume that local actions on Earth have chaotic effects of an arbitrarily large magnitude. You know the Butterfly Effect from chaos theory -- the idea that a small perturbation in a complex, "chaotic" system can make a large-scale difference in the later evolution of the system. A butterfly flapping its wings in China could cause the weather in the U.S. weeks later to be different than it would have been if the butterfly hadn't flapped its wings. Small perturbations amplify. This fifth assumption is that there are cosmic-scale butterfly effects: far-distant, arbitrarily large future events that arise with chaotic sensitivity to events on Earth. Maybe new Big Bangs are triggered, or maybe (as envisioned by Boltzmann) given infinite time, arbitrarily large systems will emerge by chance from low-entropy "heat death" states, and however these Big Bangs or Boltzmannian eruptions arise, they are chaotically sensitive to initial conditions -- including the downstream effects of light reflected from Earth's surface.
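
(To get a feel for how fast small perturbations amplify in even a toy chaotic system, here is a minimal Python illustration -- mine, not part of the original post -- using the logistic map as a stand-in. Two trajectories that start a trillionth apart become completely uncorrelated within a few dozen steps.)

    # Sensitive dependence on initial conditions ("the butterfly effect")
    # in the logistic map x -> r * x * (1 - x), with r in the chaotic regime.
    r = 3.99
    x, y = 0.4, 0.4 + 1e-12   # two starting points differing by one part in a trillion

    for step in range(1, 61):
        x = r * x * (1 - x)
        y = r * y * (1 - y)
        if step % 15 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")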

Okay, that's a big assumption to swallow. But I don't think it's absurd. Let's just see where it takes us.

Sixth, given the right kind of complexity, evolutionary processes will transpire that favor intelligence. We would not expect such evolutionary processes at most spatiotemporal scales. However, given that complexity scales up infinitely (our fourth assumption) we should expect that at some finite proportion of spatiotemporal scales there are complex systems structured in a way that enables the evolution of intelligence.

From all this it seems to follow that what happens here on Earth -- including the specific choices you make, chaotically amplified as you flap your wings -- can have effects on a cosmic scale that influence the cognition of very large minds.

(Let me be clear that I mean very large minds. I don't mean galaxy-sized minds or visible-universe-sized minds. Galaxy-sized and visible-universe-sized structures in our region don't seem to be of the right sort to support the evolution of intelligence at those scales. I mean way, way up. We have infinitude to play with, after all. And presumably way, way slow if the speed of light is a constraint. Also, I am assuming that time and causation make sense at arbitrarily large scales, but maybe that can be weakened if necessary to something like contingency.)

Now at such scales anything little old you personally does would very likely be experienced as chance. Suppose for example that a cosmic mind utilizes the inflation of Big Bangs. Even if your butterfly effects cause a future Big Bang to happen this way rather than that way, probably a mind at that scale wouldn't have evolved to notice tiny-scale causes like you.

Far-fetched. Cool, perhaps, depending on your taste in cool. Maybe not quite cosmic significance, though, if your decisions only feed a pseudo-random mega-process whose outcome has no meaningful relationship to the content of your decisions.

But we do have infinitude to play with, so we can add one more twist.

Here it is: If the odds of influencing the behavior of an arbitrarily large intelligent system are finite, and if we're letting ourselves scale up arbitrarily high, then (granting all the rest of the argument) your decisions will affect the behavior of an infinite number of huge, intelligent systems. Among them there will be some -- a tiny but finite proportion! -- such that the following counterfactual is true: If you hadn't made that upbeat, life-affirming choice you in fact just made, that huge, intelligent system would have decided that life wasn't worth living. But fortunately, partly as a result of that thing you just did, that giant intelligence -- let's call it Emily -- will discover happiness and learn to celebrate its existence. Emily might not know about you. Emily might think it's random or find some other aspect of the causal chain to point toward. But still, if you hadn't done that thing, Emily's life would have been much worse.

So, whew! I hope it won't seem presumptuous of me to thank you on Emily's behalf.

[image source]

Sunday, November 27, 2016

The Odds of Getting Three Consecutive Wars in a Row in the Card Game of War

What better way to spend the Sunday after Thanksgiving than playing card games with your family and then arguing about the odds?

As pictured, my daughter and I just got three consecutive "wars" in the card game of war. (I lost with a 3 at the end!)

What are the odds of that?

Well, the odds of getting just one war are 3/51, right? Here's why. It doesn't matter whether my or my daughter's card is turned first. That card can be anything. The second card needs to match it. With the first card out of the deck, 51 cards remain. Three of them match the first-turned card. So 3/51 = .058824 = about a 5.9% chance.

Then you each play three face down "soldier" cards. Those could be any cards, and we don't know anything about them, so they can be ignored for purposes of calculation. What's relevant are the next upturned cards, the "generals". Here there are two possibilities. First possibility: The first general is the same value as the original war cards. Since there are 50 unplayed cards and two that match the original two war cards, the odds of that are 2/50 = .040000 = 4.0%. The other possibility is that the value of the first general differs from that of the war cards: 48/50 = .960000 = 96.0%.

(As I write this, my son is sleeping late and my wife and daughter are playing with Musical.ly -- other excellent ways to spend a lazy Sunday!)

In the first case, the odds of the second general matching are only one in 49 (.020408, about 2.0%), since three of the four cards of that value have already been played and there are 49 cards left in the deck (disregarding the soldiers). In the second case, the odds are three in 49 (.061224, about 6.1%).

So the odds of two consecutive wars are: .058824 * .04 * .020408 (first war, followed by matching generals, i.e., all four up cards the same) + .058824 * .96 * .061224 (first war, followed by a different pair of matching generals) = .000048 + .003457 = .003505. In other words, there's about a 0.35% chance, or about a one in 300 chance, of two consecutive wars.
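
As a quick sanity check on that figure, here is a small Python simulation (mine, not from the original post). Because the face-down soldier cards are never compared, only the four upturned cards matter, so it is enough to draw four cards at random without replacement:

    import random

    # Monte Carlo check of the two-consecutive-wars probability.
    # Values 0-12 stand for the thirteen ranks, four of each in the deck.
    deck = [value for value in range(13) for _ in range(4)]

    TRIALS = 1_000_000
    hits = 0
    for _ in range(TRIALS):
        a, b, c, d = random.sample(deck, 4)  # two war cards, then two generals
        if a == b and c == d:
            hits += 1

    print(f"Estimated P(two consecutive wars) = {hits / TRIALS:.5f}")  # roughly 0.0035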

If the second war had generals that matched the original war cards, then there's only one way for the third war to happen. Player one draws any new general. The odds of player two's new general matching are 3/47 (.063830).

If the second war had generals that did not match the original war cards, then there are two possibilities.

First possibility: The first new general is the same value as one of the original war cards or previous generals. There's a 4 in 48 (.083333) chance of that happening (two remaining cards of each of those two values). Finally, there's a 1/47 (.021277) chance that the last general matches this one (last remaining card of that value).

Second possibility: The first new general is a different value from either the original war cards or the previous generals. The odds of that are 44/48 (.916667), followed by a 3/47 (.063830) chance of match.

Okay, now we can total up the possibilities. There are three relevantly different ways to get three consecutive wars in a row.

A: First war, followed by second war with same values, followed by third war with different values: .058824 (first war) * .04000 (first general matches war cards) * .020408 (second general matches first general) * .063830 (odds of third war with fresh card values) = .000003 (.0003% or about 1 in 330,000).

B: First war, followed by second war with different values, followed by third war with same values as one of the previous wars: .058824 (first war) * .960000 (first general doesn't match war cards) * .061224 (second general matches first general) * .083333 (first new general matches either war cards or previous generals) * .021277 (second new general matches first new general) = .000006 (.0006% or about 1 in 160,000).

C: First war, followed by second and third wars, each with different values: .058824 (first war) * .960000 (first general doesn't match war cards) * .061224 (second general matches first general) * .916667 (first new general doesn't match either war cards or previous generals) * .063830 (second new general matches first new general) = .000202 (.02% or about 1 in 5000).

Summing up these three paths: .000003 + .000006 + .000202 = .000211. In other words, the chance of three wars in a row is 0.0211% or 1 in 4739.
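
For anyone who wants to double-check the arithmetic, here is a minimal Python sketch (not part of the original post; the variable names are my own) that redoes the same case analysis with exact fractions rather than rounded decimals:

    from fractions import Fraction as F

    p_war1       = F(3, 51)   # second upturned card matches the first
    p_gen1_same  = F(2, 50)   # first general matches the original war-card value
    p_gen2_one   = F(1, 49)   # ...then only one matching card remains for the second general
    p_gen1_diff  = F(48, 50)  # first general is a fresh value
    p_gen2_three = F(3, 49)   # ...then three matching cards remain

    p_two_wars = p_war1 * (p_gen1_same * p_gen2_one + p_gen1_diff * p_gen2_three)

    # The three paths to a third war, labeled A, B, C as above.
    path_A = p_war1 * p_gen1_same * p_gen2_one   * F(3, 47)
    path_B = p_war1 * p_gen1_diff * p_gen2_three * F(4, 48) * F(1, 47)
    path_C = p_war1 * p_gen1_diff * p_gen2_three * F(44, 48) * F(3, 47)

    p_three_wars = path_A + path_B + path_C

    print("P(two wars)   =", float(p_two_wars), "~ 1 in", round(1 / p_two_wars))
    print("P(three wars) =", float(p_three_wars), "~ 1 in", round(1 / p_three_wars))

Worked exactly, the total comes to 207/978775, or about 1 in 4730; the "1 in 4739" above reflects rounding in the intermediate decimals.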

Now for some leftover turkey.

-----------------------------------------------

As it happens we were playing the variant game Modern War -- which is much less tedious than the traditional card game of war! But since it was only the first campaign the odds are the same. (In later campaigns the odds of war increase, because smaller cards fall disproportionately out of the deck.)

Wednesday, November 23, 2016

The Moral Compass and the Liberal Ideal in Moral Education

Here are two very different approaches to moral education:

The outward-in approach. Inform the child what the rules are. Do not expect the child to like the rules or regard them as wise. Instead, enforce compliance through punishment and reward. Secondarily, explain the rules, with the hope that eventually the child will come to appreciate their wisdom, internalize them, and be willing to abide by them without threat of punishment.

The inward-out approach. When the child does something wrong, help the child see for herself what makes it wrong. Invite the child to reflect on what constitutes a good system of rules and what are good and bad ways to treat people, and collaborate in developing guidelines and ideals that make sense to the child. Trust that even young children can come to see the wisdom of moral guidelines and ideals. Punish only as a fallback when more collaborative approaches fail.

Though there need be no neat mapping, I conjecture that preference for the outward-in approach correlates with what we ordinarily regard as political conservativism and preference for the inward-out approach with what we ordinarily regard as political liberalism. The crucial difference between the two approaches is this: The outward-in approach trusts children's judgment less. On the outward-in approach, children should be taught to defer to established rules, even if those rules don't make sense to them. This resembles Burkean political conservativism among adults, which prioritizes respect for the functioning of our historically established traditions and institutions, mistrusting our current judgments about how those institutions might be improved or replaced.

In contrast, the liberal ideal in moral education depends on the thought that most or all people -- including most or all children -- have something like an inner moral compass, which can be relied on as at least a partial, imperfect guide toward what's morally good. If you take four-year-old Pooja aside after she has punched Lauren (names randomly chosen) and patiently ask her to explain herself and to think about the ethics of punching, you will get something sensible in reply. For the liberal ideal to work, it must be true that Pooja can be brought to understand the importance of treating others kindly and fairly. It must be true that after reflection, she will usually find that she wants to be kind and fair to others, even without outer reward.

This is a lot to expect from children. And yet I do think that most children, when approached patiently, can find their moral compass. In my experience watching parents and educators, it strikes me that when they are at their best -- not overloaded with stress or too many students -- they can successfully use the inward-out approach. Empirical psychology also suggests that the (imperfect, undeveloped) seeds of morality are present early in development and shared among primates.

It is I think foundational to the liberal conception of the human condition -- "liberal" in rejecting the top-down imposition of values and celebrating instead people's discovery of their own values -- that when they are given a chance to reflect, in conditions of peace, with broad access to relevant information, people will tend to find themselves revolted by evil and attracted to good. Hatred and evil wither under thoughtful critical examination. So we liberals must believe. Despite complexities, bumps, regressions, and contrary forces, reflection and broad exposure to facts and arguments will bend us toward freedom, egalitarianism, and respect.

If this is so, here's something you can always do: Invite people to think alongside you. Share the knowledge you have. If there is light and insight in your thinking, people will slowly walk toward it.

Related essay: Human Nature and Moral Education in Mencius, Xunzi, Hobbes, and Rousseau (History of Philosophy Quarterly, 2007)

[image source]

Tuesday, November 15, 2016

Three Ways to Be Not Quite Free of Racism

Suppose that you can say, with a feeling of sincerity, "All races and colors of people deserve equal respect". Suppose also that when you think about American Blacks or South Asians or Middle Eastern Muslims you don't detect any feelings of antipathy, or at least any feelings of antipathy that you believe arise merely from consideration of their race. This is good! You are not an all-out racist in the 19th-century sense of that term.

Still, you might not be entirely free of racial prejudice, if we took a close look at your choices, emotions, passing thoughts, and swift intuitive judgments about people.

Imagine then the following ideal: Being free of all unjustified racial prejudice. We can imagine similar ideals for classism, ableism, sexism, ethnicity, subculture, physical appearance, etc.

It would be a rare person who met all of these ideals. Yet not all falling short is the same. The recent election has made vivid for me three importantly distinct ways in which one can fall short. I use racism as my example, but other failures of egalitarianism can be analyzed similarly.

Racism is an attitude. Attitudes can be thought of as postures of the mind. To have an attitude is to be disposed to act and react in attitude-typical ways. (The nature of attitudes is a central part of my philosophical research. For a fuller account of my view, see here.) Among the dispositions constitutive of all-out racism are: making racist claims, purposely avoiding people of that race, uttering racist epithets in inner speech, feeling negative emotions when interacting with that race, leaping quickly to negative conclusions about individual members of that race, preferring social policies that privilege your preferred race, etc.

An all-out racist would have most or all of these dispositions (barring "excusing conditions"). Someone completely free of racism would have none of these dispositions. Likely, the majority of people in our culture inhabit the middle.

But "the middle" isn't all the same. Here are three very different ways of occupying it.

(1.) Implicit racism. Some of the relevant dispositions are explicitly or overtly racist -- for example, asserting that people of the target race are inherently inferior. Other dispositions are only implicitly or covertly racist, for example, being prone without realizing it to evaluate job applications more negatively if the applicant is of the target race, or being likely to experience negative emotion upon being assigned a cooperative task with a person of the target race. Recent psychological research suggests that many people in our culture, even if they reject explicitly racist statements, are disposed to have some implicitly racist reactions, at least occasionally or in some situations. We can thus construct a portrait of the "implicit racist": Someone who sincerely disavows all racial prejudice, but who nonetheless has a wide-ranging and persistent tendency toward implicitly racist reactions and evaluations. Probably no one is a perfect exemplar of this portrait, with all and only implicitly racist reactions, but it is probably common for people to match it to a certain extent. To that extent, whatever it is, that person is not quite free of implicit racism.

Implicit racism has received so much attention in the recent psychological and philosophical literature that one might think that it is the only way to be not quite free of racism while disavowing racism in the 19th-century sense of the term. Not so!

(2.) Situational racism. Dispositions manifest only under certain conditions. Priscilla (name randomly chosen) is disposed sincerely to say, if asked, that people of all races deserve equal respect. Of course, she doesn't actually spend the entire day saying this. She is disposed to say it only under certain conditions -- conditions, perhaps, that assume the continued social disapproval of racism. It might also be the case that under other conditions she would say the opposite. A person might be disposed sincerely to reject racist statements in some contexts and sincerely to endorse them in other contexts. This is not the implicit/explicit division. I am assuming both sides are explicit. Nor am I imagining a change in opinion over time. I am imagining a person like this: If situation X arose she would be explicitly racist, while if situation Y arose she would be explicitly anti-racist, maybe even passionately, self-sacrificingly so. This is not as incoherent as it might seem. Or if it is incoherent, it is a commonly human type of incoherence. The history of racism suggests that perfectly nice, non-racist-seeming people can change on a dime with a change in situation, and then change back when the situation shifts again. For some people, all it might take is the election of a racist politician. For others, it might take a more toxically immersive racist environment, or a personal economic crisis, or a demanding authority, or a recent personal clash with someone of the target race.

(3.) Racism of indifference. Part of what prompted this post was an interview I heard with someone who denied being racist on the grounds that he didn't care what happened to Black people. This deprioritization of concern is in principle separable from both implicit racism and situational racism. For example: I don't think much about Iceland. My concerns, voting habits, thoughts, and interests instead mostly involve what I think will be good for me, my family, my community, my country, or the world in general. But I'm probably not much biased against Iceland. I have mostly positive associations with it (beautiful landscapes, high literacy, geothermal power). Assuming (contra Mozi) that we have much greater obligations to family and compatriots than to people in far-off lands, my habit of not highly prioritizing the welfare of people in Iceland probably doesn't deserve to be labeled pejoratively with an "-ism". But a similar disregard or deprioritization of people in your own community or country, on grounds of their race, does deserve a pejorative label, independent of any implicit or explicit hostility.

These three ways of being not quite free of racism are conceptually separable. Empirically, though, things are likely to be messy and cross-cutting. Probably the majority of people don't map neatly onto these categories, but have a complex set of mixed-up dispositions. Furthermore, this mixed-up set probably often includes both racist dispositions and, right alongside, dispositions to admire, love, and even make special sacrifices for people who are racialized in culturally disvalued ways.

It's probably difficult to know the extent to which you yourself fail, in one or more of these three ways, to be entirely free of racism (sexism, ableism, etc.). Implicitly racist dispositions are by their nature elusive. So also is knowledge of how you would react to substantial changes in circumstance. So also are the real grounds of our choices. One of the great lessons of the past several decades of social and cognitive psychology is that we know far less than we think we know about what drives our preferences and about the situational influences on our behavior.

I am particularly struck by the potentially huge reach of the bigotry of indifference. Action is always a package deal. There are always pros and cons, which need to be weighed. You can't act toward one goal without simultaneously deprioritizing many other possible goals. Since it's difficult to know the basis of your prioritization of one thing over another, it is possible that the bigotry of indifference permeates a surprising number of your personal and political choices. Though you don't realize it, it might be the case that you would have felt more call to action had the welfare of a different group of people been at stake.

[image source Prabhu B Doss, creative commons]

Wednesday, November 09, 2016

Thought for the Day

What you believe is not what you say you believe. It is how you act.

What you desire is not what you say you desire. It is what you choose.

Who you are is how you live.

You know this about other people, but it is very difficult to know this about yourself.

--------------------------------------

Acting Contrary to Our Professed Beliefs (Pacific Philosophical Quarterly, 2010).

Knowing Your Own Beliefs (Canadian Journal of Philosophy, 2011).

A Dispositional Approach to the Attitudes (New Essays on Belief, 2013).

The Pragmatic Metaphysics of Belief (in draft)

Friday, November 04, 2016

Use of "Genius", "Strict", and "Sexy" in Teaching Evaluations, by Discipline and Gender of Professor

Interesting tool here, where you can search for terms in professors' teaching reviews, by discipline and gender.
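Out of curiosity about what such a comparison involves under the hood, here is a minimal sketch in Python -- to be clear, not the tool's own code; the file name ("reviews.csv") and the column names ("gender", "review_text") are made up for illustration. The idea is just to count how often a search term appears per million words of review text, split by professor gender.

# Minimal sketch with hypothetical data: estimate how often a term occurs
# per million words of review text, split by professor gender.
import csv
import re
from collections import defaultdict

def rate_per_million(rows, term):
    """rows: dicts with (hypothetical) 'gender' and 'review_text' fields."""
    term = term.lower()
    total_words = defaultdict(int)   # words of review text, per gender
    term_hits = defaultdict(int)     # occurrences of the term, per gender
    for row in rows:
        words = re.findall(r"[a-z']+", row["review_text"].lower())
        total_words[row["gender"]] += len(words)
        term_hits[row["gender"]] += words.count(term)
    return {g: 1_000_000 * term_hits[g] / total_words[g]
            for g in total_words if total_words[g] > 0}

with open("reviews.csv", newline="") as f:   # hypothetical review dump
    rows = list(csv.DictReader(f))

for term in ["genius", "strict", "sexy", "favorite"]:
    print(term, rate_per_million(rows, term))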

The gender associations of "genius" with male professors are already fairly well known. Here's how they show up in this database:

Apologies for the blurry picture. Click on it to make it clearer!

On the other hand, terms like "mean", "strict", and "unfair" tend to occur more commonly in reviews of female professors. Here's "strict":

How about "sexy"? You might imagine that going either way: Maybe female professors are more frequently rated by their looks. On the other hand, maybe it's "sexier" to be a professor if you're a man. Here's how it turns out:

Update, 10:45.

I can't resist adding one more. "Favorite":

Wednesday, November 02, 2016

Introspecting an Attitude by Introspecting Its Conscious Face

In some of my published work, I have argued that:

(1.) Attitudes, such as belief and desire, are best understood as clusters of dispositions. For example, to believe that there is beer in the fridge is nothing more or less than to be disposed (all else being equal or normal) to go to the fridge if one wants a beer, to feel surprised if one were to open the fridge and find no beer, to conclude that the fridge isn't empty if that question becomes relevant, etc., etc. (See my essays here and here.)

And

(2.) Only conscious experiences are introspectible. I characterize introspection as "the dedication of central cognitive resources, or attention, to the task of arriving at a judgment about one's current, or very recently past, conscious experience, using or attempting to use some capacities that are unique to the first-person case... with the aim or intention that one's judgment reflect some relatively direct sensitivity to the target state" (2012, p. 42-43).

Now it also seems correct that (3.) dispositions, or clusters of dispositions, are not the same as conscious experiences. One can be disposed to have a certain conscious experience (e.g., disposed to experience a feeling of surprise if one were to see no beer), but dispositions and their manifestations are not metaphysically identical. Oscar can be disposed to experience surprise if he were to see an empty fridge, even if he never actually sees an empty fridge and so never actually experiences surprise.

From these three claims it follows that we cannot introspect attitudes such as belief and desire.

But it seems we can introspect them! Right now, I'm craving a sip of coffee. It seems like I am currently experiencing that desire in a directly introspectible way. Or suppose I'm thinking aloud, in inner speech, "X would be such a horrible president!" It seems like I can introspectively detect that belief, in all its passionate intensity, as it occurs in my mind right now.

I don't want to deny this, exactly. Instead, let me define relatively strict versus permissive conceptions of the targets of introspection.

To warm up, consider a visual analogy: seeing an orange. There the orange is, on the table. You see it. But do you really see the whole orange? Speaking strictly, it might be better to say that you see the orange rind, or the part of the orange rind that is facing you, rather than the whole orange. Arguably, you infer or assume that it's not just an empty rind, that it has a backside, that it has a juicy interior -- and usually that's a safe enough assumption. It's reasonable to just say that you see the orange. In a relatively permissive sense, you see the whole orange; in a relatively strict sense you see only the facing part of the orange rind.

Another example: From my office window I see the fire burning downtown. Of course, I only see the smoke. Even if I were to see the flames, in the strictest sense perhaps the visible light emitted from flames is only a contingent manifestation of the combustion process that truly constitutes a fire. (Consider invisible methanol fires.) More permissively, I see the fire when I see the smoke. More strictly, I need to see the flames or maybe even (impossibly?) the combustion process itself.

Now consider psychological cases: In a relatively permissive sense, you see Sandra's anger. In a stricter sense, you see her scowling face. In a relatively permissive sense, you hear the shyness and social awkwardness in Shivani's voice. In a stricter sense you hear only her words and prosody.

To be clear: I do not mean to imply that a stricter understanding of the targets of perception is more accurate or better than a more permissive understanding. (Indeed, excessive strictness can collapse into absurdity: "No, officer, I didn't see the stop sign. Really, all I saw were patterns of light streaming through my vitreous humour!")

As anger can manifest in a scowl and as fire can manifest in smoke and visible flames, so also can attitudes manifest in conscious experience. The desire for coffee can manifest in a conscious experience that I would describe as an urge to take a sip; my attitude about X's candidacy can manifest in a momentary experience of inner speech. In such cases, we can say that the attitudes present a conscious face. If the conscious experience is distinctive enough to serve as an excellent sign of the real presence of the relevant dispositional structure constituting that attitude, then we can say that the attitude is (occurrently) conscious.

It is important to my view that the conscious face of an attitude is not tantamount to the attitude itself, even if they normally co-occur. If you have the conscious experience but not the underlying suite of relevant dispositions, you do not actually have the attitude. (Let's bracket the question of whether such cases are realistically psychologically possible.) Similarly, a scowl is not anger, smoke is not a fire, a rind is not an orange.

Speaking relatively permissively, then, one can introspect an attitude by introspecting its conscious face, much as I can see a whole orange by seeing the facing part of its rind and I can see a fire by seeing its smoke. I rely upon the fact that the conscious experience wouldn't be there unless the whole dispositional structure were there. If that reliance is justified and the attitude is really there, distinctively manifesting in that conscious experience, then I have successfully introspected it. The exact metaphysical relationship between the strictly conceived target and the permissively conceived target is different among the various cases -- part-whole for the orange, cause-effect for the fire, and disposition-manifestation for the attitude -- but the general strategy is the same.

[image source]

Thursday, October 27, 2016

Dispositionalism vs Representationalism about Belief

The Monday before last, Ned Block and Eric Mandelbaum brought me into their philosophy grad seminar at New York University to talk about belief. Our views are pretty far apart, and I got pushback during class (and before class, and after class!) from a variety of directions. But the issue that stuck with me most was the big-picture issue of dispositionalism vs representationalism about belief.

I'm a dispositionalist. By this I mean that to believe some particular proposition, such as that your daughter is at school, is nothing more or less than to be disposed toward certain patterns of behavior, conscious experience, and cognition, under a range of hypothetical conditions -- for example, to be disposed to go to your daughter's school if you decide you want to meet her, to be disposed to feel surprise should you head home for lunch and find her waiting there, and to be disposed, if the question arises, to infer that her favorite backpack is also probably at the school (since she usually takes it with her). All of these dispositions hold only "ceteris paribus" or "all else being equal" and one needn't have all of them to count as believing. (For more details about my version of dispositionalism in particular, see here.) Crucial to the dispositionalist approach (but not unique to it) is the idea that the implementational details don't matter -- or rather, they matter only derivatively. It doesn't matter if you've got a connectionist net in your head, or representations in the language of thought, or a billion little homunculi whispering in thieves' cant, or an immaterial soul. As long as you have the right clusters of behavioral, experiential, and cognitive dispositions, robustly, across a suitably broad range of hypothetical circumstances, you believe.
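Purely as a toy illustration of the shape of the view (in Python; the particular conditions, manifestations, and matching threshold are invented for the example, and nothing here pretends to be a cognitive model): a dispositionalist belief is just a labeled cluster of ceteris-paribus condition-to-manifestation pairs, and an agent counts as believing if it robustly exhibits enough of them, however that profile is implemented.

# Toy sketch: a belief as a cluster of ceteris-paribus dispositions.
# Implementation-neutral: nothing is said about what realizes the profile.
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Disposition:
    condition: str       # e.g., "decides she wants to meet her daughter"
    manifestation: str   # e.g., "goes to the school"

@dataclass
class DispositionalBelief:
    description: str
    cluster: List[Disposition] = field(default_factory=list)

    def held_by(self, profile: Set[Tuple[str, str]],
                threshold: float = 0.75) -> bool:
        """An agent counts as believing (ceteris paribus) if it exhibits
        enough of the dispositions in the cluster -- not necessarily all."""
        hits = sum(1 for d in self.cluster
                   if (d.condition, d.manifestation) in profile)
        return hits / len(self.cluster) >= threshold

daughter_at_school = DispositionalBelief(
    "my daughter is at school",
    [Disposition("wants to meet her", "goes to the school"),
     Disposition("finds her at home at lunchtime", "feels surprise"),
     Disposition("wonders about her backpack", "infers it is at school")])

agent_profile = {("wants to meet her", "goes to the school"),
                 ("finds her at home at lunchtime", "feels surprise"),
                 ("wonders about her backpack", "infers it is at school")}
print(daughter_at_school.held_by(agent_profile))   # True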

On a representationalist view, implementation does matter. On a suitably modest view of what a "representation" is (I like Dretske's account), the human mind uses representations. For example, it's very plausible that neural activity in primary visual cortex is representational, if representations are states of a system that function to track or convey information about something else. (In primary visual cortex, patterns of excitation in groups of neurons function to indicate geometrical features in various parts of the visual field.) The representationalist about belief commits to a general picture of the mind as a manipulator of representations, and then characterizes believing as a matter of having the right sort of representations (e.g., one with the content "my daughter is at school") stored or activated in the right type of functional role in the mind (for example, stored in memory and poised (if all goes well) to be activated in cognitive processing when you are asked, "where is your daughter now?").
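For contrast, here is an equally toy sketch of the representationalist shape (again my own illustration, leaning on the familiar "belief box" metaphor rather than any actual cognitive-science model): believing P is having a representation with content P stored in the right functional role, poised to be retrieved when the question whether P arises.

# Toy sketch: believing P as having a stored representation with content P,
# poised to be activated when the question whether P becomes relevant.
class BeliefBox:
    def __init__(self):
        self._stored = set()          # stored representational contents

    def store(self, content: str) -> None:
        self._stored.add(content)     # e.g., acquired via perception or inference

    def query(self, content: str) -> bool:
        # "Activation": the stored token is retrieved when queried.
        return content in self._stored

mind = BeliefBox()
mind.store("my daughter is at school today")
print(mind.query("my daughter is at school today"))   # True
print(mind.query("Kate is at JFK Elementary now"))    # False

On this caricature, what matters is exactly which content is tokened in the box -- the last line gestures at the content-specification worry that the Messiness Response below makes explicit.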

I interpreted some of the pushback from Block, Mandelbaum, and their students as follows: "Look, the best cognitive science employs a representational model of the mind. So representations are real. Even you don't deny that. So if you want a truly scientific model of the mind instead of some vague dispositionalism that looks only at the effects or manifestations of real cognitive states, you should be a representationalist."

How is a dispositionalist to reply to this concern? I have three broad responses.

The Implementational Response. The most concessive response (short of saying, "oops, you're right!") is to deny that there is any serious conflict between the two positions by allowing that the way one gets to have the dispositional profile constitutive of belief might be by manipulating representations in just the manner that the representationalist supposes. The views can be happily married! You don't get to have the dispositional profile of a believer unless you already have the right sort of representational architecture underneath; and once you have the right sort of representational architecture underneath, you thereby acquire the relevant dispositional profile. The views only diverge in marginal or hypothetical cases where representational architecture and dispositional profile come apart -- but maybe those cases don't matter too much.

However, I think that answer is too concessive, for a couple of reasons.

The Messiness Response. Here's a too-simple hypothetical representationalist architecture for belief. To believe that P (e.g., that my daughter is at school today) is just to have a representation with the content P ("my daughter is at school today") stored somewhere in the mind, ready to be activated when it becomes relevant whether P is the case (e.g., I'm asked "where is your daughter now?"). One problem with this view is the problem of specifying the exact content. I believe that my daughter is at school today. I also believe that my daughter is at JFK Elementary today. I also believe that my daughter is at JFK Elementary now. I also believe that Kate is at JFK Elementary now. I also believe that Kate is in Ms. Salinas' class today. This list could obviously be expanded considerably. Do I literally have all of these representations stored separately? Or is there only one representation stored, from which the others are swiftly derivable? If so, which one? How could we know? This puzzle invites us to reject the simplistic picture that believing P is a matter of having a stored representation with exactly the content P. But once we make this move, we open ourselves up to a certain kind of implementational messiness -- which is plausible anyway. As we have seen in the two best-developed areas of cognitive science -- the cognitive science of memory and the cognitive science of vision -- the underlying architectural stories tend to be highly complex and tend not to map neatly onto our folk psychological categories. Furthermore, viewed from an appropriately broad temporal perspective, scientific fashions come and go: We have this many memory systems, no we have this many; early visual processing is not much influenced by later processing, wait yes it is influenced, wait no it's not after all. Dynamical systems, connectionist networks, patterns of looping activation can all be understood in terms of language-like representations, or no they can't, or maybe map-like representations or sensorimotor representations are better. Given the messiness and uncertainty of cognitive science, it is premature to commit to a thoroughly representationalist picture. Maybe someday we'll have all this figured out well enough so that we can say "this architectural structure, this one, is what you have if you believe that your daughter is at school, we found it!" That would be exciting! That day, I abandon dispositionalism. Until then, I prefer to think of belief dispositionally rather than relying upon any particular architectural story, even as general an architectural story as representationalism.

The What-We-Care-About Response. Why, as philosophers, do we want an account of belief? Presumably, it's because we care about predicting and explaining our behavior and our patterns of experience. So let's suppose as much divergence as it's reasonable to suppose between patterns of experience and behavior and patterns of internal architecture. Maybe we discover an alien species that has outward behavior and inner experiences virtually identical to our own but implemented very differently in the underlying architecture. Or maybe we can imagine a human being whose actions and experiences, not only in her actual circumstances but also in a wide range of hypothetical circumstances, are just like those of someone who believes that P, but who lacks the usual underlying architecture. On an architecture-driven account, it seems that we have to deny that these aliens or this person believes what they seem to believe; on a dispositional account, we get to say that they do believe what they seem to believe. The latter seems preferable: If what we care about in an account of belief is patterns of behavior and experience, then it makes sense to build an account of belief that prioritizes those patterns of behavior and experience as the primary thing, and treats purely architectural considerations as secondary.

----------------------------------------------

Some related posts and papers:

A Phenomenal, Dispositional Account of Belief (Nous 2002).

Belief (Stanford Encyclopedia of Philosophy, 2006 revised 2015).

Mad Belief? (blog post, Nov. 5, 2008).

A Dispositional Approach to Attitudes: Thinking Outside of the Belief Box (in Nottelmann, ed., New Essays on Belief, 2013).

Against Intellectualism About Belief (blog post, July 31, 2015)

The Pragmatic Metaphysics of Belief (essay in draft, October 2016).