DECEMBER 24, 2010, 7:05 PM
On Slurs: A Response By ERNIE LEPORE and LUVELL ANDERSON

The Stone is a forum for contemporary philosophers on issues both timely and timeless.

Thanks to all who read and responded to my post for The Stone, “Speech and Harm.” What follows is a general response to those comments, written with Luvell Anderson.

One clarification at the outset is that what was on offer in the original post was not a proposal aimed at reducing slurring, but rather an account of the offensiveness of slurs. Keep in mind that at least two features of slurs must be explained. First, how can some be more offensive than others (including those targeting the same group)? And second, why can some of us use slurs without being offensive while others of us cannot? The proposal that slurs are expressions whose occurrences are prohibited purports to explain these features better than views that invoke the content of the word, i.e., its meaning. In fact, appeals to meaning don’t seem to be capable of coherently accommodating either of these features. This point cannot be stressed enough, since several commentators seem to conflate the aim of the original post with an entirely separate aim, namely, imposing standards of political correctness. This was not intended.

Further, the proposal does not claim prohibition exhausts a slur’s content. Rather, the suggestion is that prohibition explains the phenomenon of offense. Slurs may very well carry negative content (even though they need not), but prohibitionism claims that content does not explain, for instance, the variation in offensive potency among slurs.

Having made these points, we can now turn to worries expressed about the particular proposal. There are three main ones this post will focus on. The first is the idea that it is the intention or attitude of the speaker that renders the word offensive. The second is that what makes slurs offensive are the ideas a slur’s use seems to permit one to infer about the target. And the third is that though prohibitionism explains some of the phenomena surrounding our use of slurs, it can’t be the whole story.

The first main worry is that we should explain the offense of slurs in terms of the speaker’s hateful attitudes or intentions rather than prohibition. Several commentators expressed this sentiment:

“It is probably less the word itself than the intent (comment 9).”

“…the intent of the user of a slur most often is to castigate the recipient explicitly (comment 22).”

“…the power to hurt in a slur resides in the hatred and contempt that underlies its usage (comment 34).”

“A large part of the offensiveness of slurs comes from the fact that whoever utters one has to deliberately choose to use a term he knows will cause hurt and offense instead of one that is neutral (comment 79).”

These comments represent two strands of thought: one suggests the expression’s meaning depends on what the speaker intends to say and the other suggests that slurs signal something about the speaker’s mental state. The first idea can be captured in the slogan: “A word is only derogatory if I mean it to be.” Those who endorse this view cannot be saying a word means just what its user intends, for we lack that kind of control over the meanings of words. What we can mean by our speech is much more constrained. Those who favor this view must also be taking features of the context of the conversation to play a prominent role in determining word meaning as well, so that a word is derogatory given certain features of its conversational context and the intent of the speaker. But how could that be correct? It certainly didn’t work as an explanation for former Senator George Allen of Virginia. Recall that his Senate campaign was derailed by his “playful” reference to an Indian-American campaign worker for the opposition. His stated intention (sincere or not) wasn’t enough to change perceptions of his chosen expression’s significance.

A second strand of thought exhibited in the comments suggests slurs express a hateful attitude. When someone slurs another, the speaker expresses an adversarial attitude towards the target and the target’s social group. That attitude is not reported in the statement: when one slurs another he or she does not literally say “I hate Xs”, but rather displays hatred towards Xs. It is as if an exclamation mark ends the slurring statement or the speaker is scowling.

This way of explaining slurs’ offensiveness corresponds with the thought expressed by some commentators that slurring says more about the person using the terms than it does about the target. But this view fails for various reasons. First, using slurs flippantly or without any feelings of hatred towards the target is still offensive. In fact, many find uses of slurs by out-group members offensive even if it is clear the speaker does not possess hateful attitudes toward the target.

This is illustrated by a scene in the movie “Rush Hour,” starring Chris Tucker and Jackie Chan. During a visit to a bar, Carter (Tucker) greets the bartender with a slur that, when shared among in-group members, can be taken as a term of endearment. Lee (Chan) follows Carter and essentially mimics what was said (including tone). Needless to say, the greeting was not well received. But how could this be if the offense consisted in the attitude underlying its use? It is clear Lee possesses no hateful attitudes, yet his statement is still offensive.

Explaining offense in terms of attitudes also fails to account for the variation of offense among slurs. Presumably, people who use slurs to refer to others exhibit the same hateful attitudes toward a particular group no matter which expression is used. It is also the case that slurs used by members of subordinate groups towards members of dominant groups generally lack the intensity of slurs used by members of dominant groups towards the subordinate ones. This difference is unlikely to be explained by a difference in hateful attitudes underlying the use of those expressions. For these and other reasons, appealing to the underlying attitudes in slur use is insufficient to explain their offense.

A second alternative view also warrants attention. One commenter wrote:

What’s offensive about slurs is the way that they institute an inference that all members of a group exhibit some negative property. For example, if there’s a slur “z—” directed against members of a group X and alleging exhibition of the negative property F, then any use of “z—” means “…is a member of X and THEREFORE exhibits F.” The key to both the meaning and the harmfulness of the use of the slur here is the unarticulated major premise: “All members of X exhibit negative property F.” Though unarticulated, this is still part of the content of the slur, and it is more or less obvious why this prejudicial assertion is found harmful.

The idea here is that occurrences of slurs permit, in some way, one to infer that members of the targeted group possess negative features, which is offensive. And it is this type of gambit that the non-bigot will want to resist since such assumptions are unsupportable and indefensible. This view is often referred to as “inferentialism.”

However, we should reject this view because it too fails to explain the relevant data. First, the view as stated claims the inference instituted by the slur says all members of the targeted group exhibit the negative property. But we can imagine a bigot who, after being presented with evidence that the victim of his slur does not exhibit the negative property associated with the term, still applies the word to his target. Second, for any given slur, what is the specific assumption one is licensed to infer? Here is a homework assignment: take a slur and try to figure out what everyone who hears it would infer. It is doubtful you will come up with a single idea that everyone who knows the term shares.

Another problem for this view is how it could explain why uses of slurs among in-group members generally fail to give rise to negative inferences. Recall appropriated uses of slur terms. These uses do not provoke offense among in-group users. The fan of inferentialism will need to provide a credible story about the differences between these scenarios.

The final worry is that if we try to explain the offensiveness of slurs only in terms of prohibition, we won’t be able to distinguish the offensiveness of ethnic slurs from that of profanities. Comment 65 says that if we consider content to be irrelevant, “the use of prohibited ethnic slurs would have the same effect as profanities which end in ‘-er’ and characterize a person by an act, typically sexual, which in most cases there is an understanding that the other person actually is not (I’m thinking primarily of an ‘m’ word, but not only that).” The upshot is that there is a “profound difference between profane attribution and ethnic slur attribution,” which prohibition alone cannot capture.

It is appropriate to reiterate the point mentioned earlier: prohibitionism does not say slurs are devoid of content, only that their content does not explain variation in offense or appropriated uses. Further, the proposal does not deny that content can be a reason for prohibiting an expression. However, content is not a necessary feature for prohibiting a word; witness the barroom brawl the meaningless “mook” caused in Martin Scorsese’s “Mean Streets.”

One final challenge for the advocate of a content-based explanation: paraphrases of what a slur supposedly means do not match it in offense. But why this mismatch if content explains the offense? Further, what content does a slur for some groups carry that others lack, such that the former is more offensive than the latter? These are considerations any serious view about slurs must address. To the best of our knowledge, only prohibitionism purports to offer an account of them.


Ernie Lepore, a professor of philosophy and co-director of the Center for Cognitive Science at Rutgers University, writes on language and mind. More of his work, including the study, “Slurring Words,” with Luvell Anderson, can be found here.


DECEMBER 19, 2010, 5:18 PM
A Real Science of Mind By TYLER BURGE

In recent years popular science writing has bombarded us with titillating reports of discoveries of the brain’s psychological prowess. Such reports invade even introductory patter in biology and psychology. We are told that the brain — or some area of it — sees, decides, reasons, knows, emotes, is altruistic/egotistical, or wants to make love. For example, a recent article reports a researcher’s “looking at love, quite literally, with the aid of an MRI machine.” One wonders whether lovemaking is to occur between two brains, or between a brain and a human being.

There are three things wrong with this talk.

First, it provides little insight into psychological phenomena. Often the discoveries amount to finding stronger activation in some area of the brain when a psychological phenomenon occurs. As if it were news that the brain is not dormant during psychological activity! The reported neuroscience is often descriptive rather than explanatory. Experiments have shown that neurobabble produces the illusion of understanding. But little of it is sufficiently detailed to aid, much less provide, psychological explanation.

Second, brains-in-love talk conflates levels of explanation. Neurobabble piques interest in science, but obscures how science works. Individuals see, know, and want to make love. Brains don’t. Those things are psychological — not, in any evident way, neural. Brain activity is necessary for psychological phenomena, but its relation to them is complex.

Imagine that reports of the mid-20th-century breakthroughs in biology had focused entirely on quantum mechanical interactions among elementary particles. Imagine that the reports neglected to discuss the structure or functions of DNA. Inheritance would not have been understood. The level of explanation would have been wrong. Quantum mechanics lacks a notion of function, and its relation to biology is too complex to replace biological understanding. To understand biology, one must think in biological terms.

Discussing psychology in neural terms makes a similar mistake. Explanations of neural phenomena are not themselves explanations of psychological phenomena. Some expect the neural level to replace the psychological level. This expectation is as naive as expecting a single cure for cancer. Science is almost never so simple. See John Cleese’s apt spoof of such reductionism.

The third thing wrong with neurobabble is that it has pernicious feedback effects on science itself. Too much immature science has received massive funding, on the assumption that it illuminates psychology. The idea that the neural can replace the psychological is the same idea that led to thinking that all psychological ills can be cured with drugs.

Correlations between localized neural activity and specific psychological phenomena are important facts. But they merely set the stage for explanation. Being purely descriptive, they explain nothing. Some correlations do aid psychological explanation. For example, identifying neural events underlying vision constrains explanations of timing in psychological processes and has helped predict psychological effects. We will understand both the correlations and the psychology, however, only through psychological explanation.

Scientific explanation is our best guide to understanding the world. By reflecting on it, we learn better what we understand about the world.

Neurobabble’s popularity stems partly from the view that psychology’s explanations are immature compared to those of neuroscience. Some psychology is indeed still far from rigorous. But neurobabble misses an important fact.

A powerful, distinctively psychological science matured over the last four decades. Perceptual psychology, pre-eminently vision science, should be grabbing headlines. This science is more advanced than many biological sciences, including much neuroscience. It is the first science to explain psychological processes with mathematical rigor in distinctively psychological terms. (Generative linguistics — another relatively mature psychological science — explains psychological structures better than psychological processes.)

What are distinctively psychological terms? Psychology is distinctive in being a science of representation. The term “representation” has a generic use and a more specific use that is distinctively psychological. I start with the generic use, and will return to the distinctively psychological use. States of an organism generically represent features of the environment if they function to correlate with them. A plant or bacterium generically represents the direction of light. States involved in growth or movement functionally correlate with light’s direction.

Task-focused explanations in biology and psychology often use “represent” generically, and proceed as follows. They identify a natural task for an organism. They then measure environmental properties relevant to the task, and constraints imposed by the organism’s bio-physical make-up. Next, they determine mathematically optimal performance of the task, given the environmental properties and the organism’s constraints. Finally, they develop hypotheses and test the organism’s fulfillment of the task against optimal performance.
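
To make the shape of this kind of explanation concrete, here is a minimal sketch of a toy ideal-observer analysis; the task, the numbers, and every name in it are illustrative assumptions, not drawn from any particular study:

```python
import numpy as np

# Toy task-focused (ideal-observer) analysis:
# 1. Natural task: estimate a scalar environmental property.
# 2. Environmental statistics: the property is drawn from a known prior.
# 3. Organismic constraint: the organism's sensors add Gaussian noise.
# 4. Optimal performance: the Bayesian (posterior-mean) estimate.
# 5. Test: compare a simple heuristic the organism might use to the optimum.

rng = np.random.default_rng(0)

PRIOR_MEAN, PRIOR_SD = 0.0, 1.0   # assumed environmental statistics
NOISE_SD = 0.5                    # assumed biophysical constraint

true_vals = rng.normal(PRIOR_MEAN, PRIOR_SD, 10_000)
readings = true_vals + rng.normal(0.0, NOISE_SD, true_vals.shape)

# Optimal estimate for a Gaussian prior and Gaussian sensor noise:
# a reliability-weighted compromise between the reading and the prior.
w = PRIOR_SD**2 / (PRIOR_SD**2 + NOISE_SD**2)
optimal = w * readings + (1 - w) * PRIOR_MEAN

heuristic = readings              # an organism that just trusts its sensors

print("ideal-observer error:", np.sqrt(np.mean((optimal - true_vals) ** 2)))
print("heuristic error:     ", np.sqrt(np.mean((heuristic - true_vals) ** 2)))
```

The scientific substance, of course, lies in what the sketch assumes away: measuring the real environmental statistics and the organism’s real constraints.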

This approach identifies systematic correlations between organisms’ states and environmental properties. Such correlations constitute generic representation. However, task-focused explanations that use “representation” generically are not distinctively psychological. For they apply to states of plants, bacteria, and water pumps, as well as to perception and thought.

Explanation in perceptual psychology is a sub-type of task-focused explanation. What makes it distinctively psychological is that it uses notions like representational accuracy, a specific type of correlation.

The difference between functional correlation and representational accuracy is signaled by the fact that scientific explanations of light-sensitivity in plants or bacteria invoke functional correlation, but not states capable of accuracy. Talk of accuracy would be a rhetorical afterthought. States capable of accuracy are what vision science is fundamentally about.

Why are explanations in terms of representational accuracy needed? They explain perceptual constancies. Perceptual constancies are capacities to perceive a given environmental property under many types of stimulation. You and a bird can see a stone as the same size from 6 inches or 60 yards away, even though the size of the stone’s effect on the retina differs. You and a bee can see a surface as yellow whether it is bathed in white or red light, even though the distribution of wavelengths hitting the eye differs.
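
The retinal difference is easy to quantify. As a rough back-of-the-envelope illustration (the 5-centimeter stone is an assumed figure, with 6 inches taken as roughly 15 centimeters and 60 yards as roughly 5,500 centimeters), the visual angle subtended by an object of size s at distance d is 2·arctan(s/2d):

```latex
\[
\theta_{\text{near}} = 2\arctan\!\left(\frac{5}{2 \times 15}\right) \approx 18.9^{\circ},
\qquad
\theta_{\text{far}} = 2\arctan\!\left(\frac{5}{2 \times 5500}\right) \approx 0.05^{\circ}
\]
```

A several-hundred-fold change in the retinal image, with no change in the perceived size of the stone, is exactly the invariance that size constancy names.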

Plants and bacteria (and water-pumps) lack perceptual constancies. Responses to light by plants and bacteria are explained by reference to states determined by properties of the light stimulus — frequency, intensity, polarization — and by how and where light stimulates their surfaces.

Visual perception is getting the environment right — seeing it, representing it accurately. Standard explanations of neural patterns cannot explain vision because such explanations do not relate vision, or even neural patterns, to the environment. Task-focused explanations in terms of functional correlation do relate organisms’ states to the environment. But they remain too generic to explain visual perception.

Perceptual psychology explains how perceptual states that represent environmental properties are formed. It identifies psychological patterns that are learned, or coded into the perceptual system through eons of interaction with the environment. And it explains how stimulations cause individuals’ perceptual states via those patterns. Perceptions and illusions of depth, movement, size, shape, color, sound localization, and so on, are explained with mathematical rigor.

Perceptual psychology uses two powerful types of explanation — one, geometrical and traditional; the other, statistical and cutting-edge.

Here is a geometrical explanation of distance perception. Two angles and the length of one side determine a triangle. A point in the environment forms a triangle with the two eyes. The distance between the eyes in many animals is constant. Suppose that distance to be innately coded in the visual system. Suppose that the system has information about the angles at which the two eyes are pointing, relative to the line between the eyes. Then the distance to the point in the environment is computable. Descartes postulated this explanation in 1637. There is now rich empirical evidence to indicate that this procedure, called “convergence,” figures in perception of distance. Convergence is one of over 15 ways human vision is known to represent distance or depth.
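
For concreteness, the convergence computation can be written down in a few lines. Here is a minimal sketch; the function and the sample numbers are illustrative assumptions, not figures from the vision literature:

```python
import math

def convergence_distance(baseline, angle_left, angle_right):
    """Perpendicular distance from the interocular line to the fixated point.

    baseline    -- separation between the eyes (any unit; sets the result's unit)
    angle_left  -- angle at the left eye between baseline and line of sight (radians)
    angle_right -- same angle at the right eye (radians)

    Two angles and the included side fix the triangle. By the law of sines,
    the left eye's line of sight has length
        baseline * sin(angle_right) / sin(angle_left + angle_right),
    and its component perpendicular to the baseline is that length
    times sin(angle_left).
    """
    return (baseline * math.sin(angle_left) * math.sin(angle_right)
            / math.sin(angle_left + angle_right))

# Example: eyes 6.5 cm apart, verged symmetrically at 85 degrees,
# i.e., fixating a nearby point straight ahead.
print(f"{convergence_distance(6.5, math.radians(85), math.radians(85)):.1f} cm")
# ~37.1 cm -- the same answer simple trigonometry gives: (6.5/2) * tan(85 deg)
```

Descartes needed no code, of course; the point is only that, given the baseline and the two angles, the distance falls out of elementary trigonometry.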

Here is a statistical explanation of contour grouping. Contour grouping is representing which contours (including boundary contours) “go together,” for example, as belonging to the same object. Contour grouping is a step toward perception of object shape. Grouping boundary contours that belong to the same object is complicated by this fact: Objects commonly occlude other objects, obscuring boundary contours of partially occluded objects. Grouping boundaries on opposite sides of an occluder is a step towards perceiving object shape.

To determine how boundary contours should ideally be grouped, numerous digital photographs of natural scenes are collected. Hundreds of thousands of contours are extracted from the photographic images. Each pair is classified as to whether or not it corresponds to boundaries of the same object. The distances and relative orientations between paired image-contours are recorded. Given enough samples, the probability that two photographic image-contours correspond to contours on the same object can be calculated. Probabilities vary depending on the distance and orientation relations among the image-contours. So whether two image-contours correspond to boundaries of the same object depends statistically on properties of image-contours.
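
A minimal sketch of that calculation might look as follows, assuming we already have labeled pairs of image-contours extracted from the photographs (the function name, the binning choices, and the array layout are all illustrative):

```python
import numpy as np

def same_object_probability(distances, rel_orientations, same_object,
                            n_dist_bins=10, n_ori_bins=6):
    """Empirical P(same object | distance bin, relative-orientation bin).

    distances        -- distance between the contours in each pair
    rel_orientations -- relative orientation of each pair, in [0, pi)
    same_object      -- boolean label: do the contours lie on one object?
    """
    d_edges = np.linspace(0.0, distances.max(), n_dist_bins + 1)
    o_edges = np.linspace(0.0, np.pi, n_ori_bins + 1)
    d_idx = np.clip(np.digitize(distances, d_edges) - 1, 0, n_dist_bins - 1)
    o_idx = np.clip(np.digitize(rel_orientations, o_edges) - 1, 0, n_ori_bins - 1)

    totals = np.zeros((n_dist_bins, n_ori_bins))
    hits = np.zeros_like(totals)
    np.add.at(totals, (d_idx, o_idx), 1.0)
    np.add.at(hits, (d_idx, o_idx), same_object.astype(float))

    with np.errstate(invalid="ignore"):
        return hits / totals   # NaN marks bins with no samples
```

Human grouping judgments can then be scored against the resulting table of probabilities, which is essentially what the experiments described next do.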

Human visual systems are known to record contour information. In experiments, humans are shown only image-contours in photographs, not full photographs. Their performance in judging which contours belong to the same object, given only the image-contours, closely matches the objective probabilities established from the photographs. Such tests support hypotheses about how perceptions of object shape are formed from cues regarding contour groupings.

Representation, in the specific sense, and consciousness are the two primary properties that are distinctive of psychological phenomena. Consciousness is the what-it-is-like of experience. Representation is the being-about-something in perception and thought. Consciousness is introspectively more salient. Representation is scientifically better understood.

Where does mind begin? One beginning is the emergence of representational accuracy — in arthropods. (We do not know where consciousness begins.) Rigorous science of mind begins with perception, the first distinctively psychological representation. Maturation of a science of mind is one of the most important intellectual developments in the last half century. Its momentousness should not be obscured by neurobabble that baits with psychology, but switches to brain science. Brain and psychological sciences are working toward one another. Understanding their relation depends on understanding psychology. We have a rigorous perceptual psychology. It may provide a model for further psychological explanation that will do more than display an MRI and say, “behold, love.”

Additional Reading:

C.C. Fowlkes, D.R. Martin, and J. Malik, “Local Figure-Ground Cues Are Valid for Natural Images,” Journal of Vision 7 (2007), 1-9.

W.S. Geisler, “Visual Perception and the Statistical Properties of Natural Scenes,” Annual Review of Psychology 59 (2008), 10.1-10.26.

D. Knill, “Discriminating Planar Surface Slant from Texture: Human and Ideal Observers Compared,” Vision Research 38 (1998), 1683-1711.

S.E. Palmer, Vision Science: Photons to Phenomenology (Cambridge, Mass.: MIT Press, 2002).

D. Vishwanath, A.R. Girshick, and M.S. Banks, “Why Pictures Look Right When Viewed from the Wrong Place,” Nature Neuroscience (2005), 1401-1410.

D.S. Weisberg, F.C. Keil, J. Goodstein, E. Rawson, and J.R. Gray, “The Seductive Allure of Neuroscience Explanations,” Journal of Cognitive Neuroscience 20 (2008), 470-477.

Tyler Burge is Distinguished Professor of Philosophy at U.C.L.A. He is the author of many papers on philosophy of mind and three books with Oxford University Press: “Truth, Thought, Reason: Essays on Frege,” “Foundations of Mind,” and most recently, “Origins of Objectivity,” which discusses the origins of mind in perception and the success of perceptual psychology as a science.





DECEMBER 12, 2010, 3:47 PM
Out of Our Brains By ANDY CLARK


Where is my mind?


The question — memorably posed by rock band the Pixies in their 1988 song — is one that, perhaps surprisingly, divides many of us working in the areas of philosophy of mind and cognitive science. Look at the science columns of your daily newspapers and you could be forgiven for thinking that there is no case to answer. We are all familiar with the colorful “brain blob” pictures that show just where activity (indirectly measured by blood oxygenation level) is concentrated as we attempt to solve different kinds of puzzles: blobs here for thinking of nouns, there for thinking of verbs, over there for solving ethical puzzles of a certain class, and so on, ad blobum. (In fact, the brain blob picture has seemingly been raised to the status of visual art form of late with the publication of a book of high-octane brain images.)

There is no limit, it seems, to the different tasks that elicit subtly, and sometimes not so subtly, different patterns of neural activation. Surely then, all the thinking must be going on in the brain? That, after all, is where the lights are.

But then again, maybe not. We’ve all heard the story of the drunk searching for his dropped keys under the lone streetlamp at night. When asked why he is looking there, when they could surely be anywhere on the street, he replies, “Because that’s where the light is.” Could it be the same with the blobs?

Is it possible that, sometimes at least, some of the activity that enables us to be the thinking, knowing, agents that we are occurs outside the brain?

The idea sounds outlandish at first. So let’s take a familiar kind of case as a first illustration. Most of us gesture (some of us more wildly than others) when we talk. For many years, it was assumed that this bodily action served at best some expressive purpose, perhaps one of emphasis or illustration. Psychologists and linguists such as Susan Goldin-Meadow and David McNeill have lately questioned this assumption, suspecting that the bodily motions may themselves be playing some kind of active role in our thought process. In experiments where the active use of gesture is inhibited, subjects show decreased performance on various kinds of mental tasks. Now whatever is going on in these cases, the brain is obviously deeply implicated! No one thinks that the physical handwavings are all by themselves the repositories of thoughts or reasoning. But it may be that they are contributing to the thinking and reasoning, perhaps by lessening or otherwise altering the tasks that the brain must perform, and thus helping us to move our own thinking along.

“Brain Cloud (2010)” on display at the Metropolitan Museum of Art in New York as part of a show by John Baldessari. (Photo: Hiroko Masuike for The New York Times)

It is noteworthy, for example, that the use of spontaneous gesture increases when we are actively thinking a problem through, rather than simply rehearsing a known solution. There may be more to so-called “handwaving” than meets the eye.

This kind of idea is currently being explored by a wave of scientists and philosophers working in the areas known as “embodied cognition” and “the extended mind.” Uniting these fields is the thought that evolution and learning don’t give a jot what resources are used to solve a problem. There is no more reason, from the perspective of evolution or learning, to favor the use of a brain-only cognitive strategy than there is to favor the use of canny (but messy, complex, hard-to-understand) combinations of brain, body and world. Brains play a major role, of course. They are the locus of great plasticity and processing power, and will be the key to almost any form of cognitive success. But spare a thought for the many resources whose task-related bursts of activity take place elsewhere, not just in the physical motions of our hands and arms while reasoning, or in the muscles of the dancer or the sports star, but even outside the biological body — in the iPhones, BlackBerrys, laptops and organizers which transform and extend the reach of bare biological processing in so many ways. These blobs of less-celebrated activity may sometimes be best seen, I and others have argued, as bio-external elements in an extended cognitive process: one that now criss-crosses the conventional boundaries of skin and skull.

One way to see this is to ask yourself how you would categorize the same work were it found to occur “in the head” as part of the neural processing of, say, an alien species. If you’d then have no hesitation in counting the activity as genuine (though non-conscious) cognitive activity, then perhaps it is only some kind of bio-envelope prejudice that stops you counting the same work, when reliably performed outside the head, as a genuine element in your own mental processing?

Another way to approach the idea is by comparison with the use of prosthetic limbs. After a while, a good prosthetic limb functions not as a mere tool but as a non-biological bodily part. Increasingly, the form and structure of such limbs is geared to specific functions (consider the carbon-fiber running blades of the Olympic and Paralympic athlete Oscar Pistorius) and does not replicate the full form and structure of the original biological template. As our information-processing technologies improve and become better and better adapted to fit the niche provided by the biological brain, they become more like cognitive prosthetics: non-biological circuits that come to function as parts of the material underpinnings of minds like ours.

Many people I speak to are perfectly happy with the idea that an implanted piece of non-biological equipment, interfaced to the brain by some kind of directly wired connection, would count (assuming all went well) as providing material support for some of their own cognitive processing. Just as we embrace cochlear implants as genuine but non-biological elements in a sensory circuit, so we might embrace “silicon neurons” performing complex operations as elements in some future form of cognitive repair. But when the emphasis shifts from repair to extension, and from implants with wired interfacing to “explants” with wire-free communication, intuitions sometimes shift. That shift, I want to argue, is unjustified. If we can repair a cognitive function by the use of non-biological circuitry, then we can extend and alter cognitive functions that way too. And if a wired interface is acceptable, then, at least in principle, a wire-free interface (such as links your brain to your notepad, BlackBerry or iPhone) must be acceptable too. What counts is the flow and alteration of information, not the medium through which it moves.

Perhaps we are moved simply by the thought that these devices (like prosthetic limbs) are detachable from the rest of the person? Ibn Sina (Avicenna), a Persian philosopher-scientist who lived from 980 to 1037 A.D., wrote in the seventh volume of his epic “De Anima (Liber de anima seu sextus de naturalibus)” that “These bodily members are, as it were, no more than garments; which, because they have been attached to us for a long time, we think are us, or parts of us [and] the cause of this is the long period of adherence: we are accustomed to remove clothes and to throw them down, which we are entirely unaccustomed to do with our bodily members” (translation by R. Martin). Much the same is true, I want to say, of our own cognitive circuitry.

The fact that there is a stable biological core that we do not “remove and throw down” blinds us to the fact that minds, like bodies, are collections of parts whose deepest unity consists not in contingent matters of undetachability but in the way they (the parts) function together as effective wholes. When information flows, some of the most important unities may emerge in integrated processing regimes that weave together activity in brain, body, and world.

Such an idea is not new. Versions can be found in the work of James, Heidegger, Bateson, Merleau-Ponty, Dennett, and many others. But we seem to be entering an age in which cognitive prosthetics (which have always been around in one form or another) are displaying a kind of Cambrian explosion of new and potent forms. As the forms proliferate, and some become more entrenched, we might do well to pause and reflect on their nature and status. At the very least, minds like ours are the products not of neural processing alone but of the complex and iterated interplay between brains, bodies, and the many designer environments in which we increasingly live and work.

Please don’t get me wrong. Some of my best friends are neuroscientists and neuro-imagers (as it happens, my partner is a neuro-imager, so brain blobs are part of our daily diet). The brain is a fantastic beast, more than worthy of the massive investments we make to study it. But we — the human beings with versatile bodies living in a complex, increasingly technologized, and heavily self-structured world — are more fantastic still. Really understanding the mind, if the theorists of embodied and extended cognition are right, will require a lot more than just understanding the brain. Or as the Pixies put it:

Where is my mind?


Way out in the water, see it swimming

[Andy Clark's response to the comments on this post can be found here: "Extended Mind Redux: A Response."]

Andy Clark is professor of logic and metaphysics in the School of Philosophy, Psychology, and Language Sciences at Edinburgh University, Scotland. He is the author of “Being There: Putting Brain, Body, and World Together Again” (MIT Press, 1997) and “Supersizing the Mind: Embodiment, Action, and Cognitive Extension” (Oxford University Press, 2008).



DECEMBER 5, 2010, 5:15 PM
Navigating Past Nihilism By SEAN D. KELLY


“Nihilism stands at the door,” wrote Nietzsche. “Whence comes this uncanniest of all guests?” The year was 1885 or 1886, and Nietzsche was writing in a notebook whose contents were not intended for publication. The discussion of nihilism — the sense that it is no longer obvious what our most fundamental commitments are, or what matters in a life of distinction and worth, the sense that the world is an abyss of meaning rather than its God-given preserve — finds no sustained treatment in the works that Nietzsche prepared for publication during his lifetime. But a few years earlier, in 1882, the German philosopher had already published a possible answer to the question of nihilism’s ultimate source. “God is dead,” Nietzsche wrote in a famous passage from “The Gay Science.” “God remains dead. And we have killed him.”

An illustration of Friedrich Nietzsche from around 1880.

There is much debate about the meaning of Nietzsche’s famous claim, and I will not attempt to settle that scholarly dispute here. But at least one of the things that Nietzsche could have meant is that the social role that the Judeo-Christian God plays in our culture is radically different from the one he has traditionally played in prior epochs of the West. For it used to be the case — in the European Middle Ages, for example — that the mainstream of society was grounded so firmly in its Christian beliefs that someone who did not share those beliefs could therefore not be taken seriously as living an even potentially admirable life. Indeed, a life outside the Church was not only execrable but condemnable, and in certain periods of European history it invited a close encounter with a burning pyre.

Whatever role religion plays in our society today, it is not this one. For today’s religious believers feel strong social pressure to admit that someone who doesn’t share their religious belief might nevertheless be living a life worthy of their admiration. That is not to say that every religious believer accepts this constraint. But to the extent that they do not, then society now rightly condemns them as dangerous religious fanatics rather than sanctioning them as scions of the Church or mosque. God is dead, therefore, in a very particular sense. He no longer plays his traditional social role of organizing us around a commitment to a single right way to live. Nihilism is one state a culture may reach when it no longer has a unique and agreed upon social ground.

The 20th century saw an onslaught of literary depictions of the nihilistic state. The story had both positive and negative sides. On the positive end, when it is no longer clear in a culture what its most basic commitments are, when the structure of a worthwhile and well-lived life is no longer agreed upon and taken for granted, then a new sense of freedom may open up. Ways of living life that had earlier been marginalized or demonized may now achieve recognition or even be held up and celebrated. Social mobility — for African Americans, gays, women, workers, people with disabilities or others who had been held down by the traditional culture — may finally become a possibility. The exploration and articulation of these new possibilities for living a life was found in such great 20th-century figures as Martin Luther King, Jr., Simone de Beauvoir, Studs Terkel, and many others.

But there is a downside to the freedom of nihilism as well, and the people living in the culture may experience this in a variety of ways. Without any clear and agreed upon sense for what to be aiming at in a life, people may experience the paralyzing type of indecision depicted by T.S. Eliot in his famously vacillating character Prufrock; or they may feel, like the characters in a Samuel Beckett play, as though they are continuously waiting for something to become clear in their lives before they can get on with living them; or they may feel the kind of “stomach level sadness” that David Foster Wallace described, a sadness that drives them to distract themselves by any number of entertainments, addictions, competitions, or arbitrary goals, each of which leaves them feeling emptier than the last. The threat of nihilism is the threat that freedom from the constraint of agreed upon norms opens up new possibilities in the culture only through its fundamentally destabilizing force.

There may be parts of the culture where this destabilizing force is not felt. The Times’s David Brooks argued recently, for example, in a column discussing Jonathan Franzen’s novel “Freedom,” that Franzen’s depiction of America as a society of lost and fumbling souls tells us “more about America’s literary culture than about America itself.” The suburban life full of “quiet desperation,” according to Brooks, is a literary trope that has taken on a life of its own. It fails to recognize the happiness, and even fulfillment, that is found in the everyday engagements with religion, work, ethnic heritage, military service and any of the other pursuits in life that are “potentially lofty and ennobling.”

There is something right about Brooks’s observation, but he leaves the crucial question unasked. Has Brooks’s happy, suburban life revealed a new kind of contentment, a happiness that is possible even after the death of God? Or is the happy suburban world Brooks describes simply self-deceived in its happiness, failing to face up to the effects of the destabilizing force that Franzen and his literary compatriots feel? I won’t pretend to claim which of these options actually prevails in the suburbs today, but let me try at least to lay them out.

Consider the options in reverse order. To begin with, perhaps the writers and poets whom Brooks questions have actually noticed something that the rest of us are ignoring or covering up. This is what Nietzsche himself thought. “I have come too early,” he wrote. “God is dead; but given the way of men, there may still be caves for thousands of years in which his shadow will be shown.” On this account there really is no agreement in the culture about what constitutes a well-lived life; God is dead in this particular sense. But many people carry on in God’s shadow nevertheless; they take the life at which they are aiming to be one that is justifiable universally. In this case the happiness that Brooks identifies in the suburbs is not genuine happiness but self-deceit.

What would such a self-deceiving life look like? It would be a matter not only of finding meaning in one’s everyday engagements, but of clinging to the meanings those engagements offer as if they were universal and absolute. Take the case of religion, for example. One can imagine a happy suburban member of a religious congregation who, in addition to finding fulfillment for herself in her lofty and ennobling religious pursuits, experiences the aspiration to this kind of fulfillment as one demanded of all other human beings as well. Indeed, one can imagine that the kind of fulfillment she experiences through her own religious commitments depends upon her experiencing those commitments as universal, and therefore depends upon her experiencing those people not living in the fold of her church as somehow living depleted or unfulfilled lives. I suppose this is not an impossible case. But if this is the kind of fulfillment one achieves through one’s happy suburban religious pursuit, then in our culture today it is self-deception at best and fanaticism at worst. For it stands in constant tension with the demand in the culture to recognize that those who don’t share your religious commitments might nevertheless be living admirable lives. There is therefore a kind of happiness in a suburban life like this. But its continuation depends upon deceiving oneself about the role that any kind of religious commitment can now play in grounding the meanings for a life.

But there is another option available. Perhaps Nietzsche was wrong about how long it would take for the news of God’s death to reach the ears of men. Perhaps he was wrong, in other words, about how long it would take before the happiness to which we can imagine aspiring would no longer need to aim at universal validity in order for us to feel satisfied by it. In this case the happiness of the suburbs would be consistent with the death of God, but it would be a radically different kind of happiness from that which the Judeo-Christian epoch of Western history sustained.

Herman Melville (Library of Congress)

Herman Melville seems to have articulated and hoped for this kind of possibility. Writing 30 years before Nietzsche, in his great novel “Moby Dick,” the canonical American author encourages us to “lower the conceit of attainable felicity”; to find happiness and meaning, in other words, not in some universal religious account of the order of the universe that holds for everyone at all times, but rather in the local and small-scale commitments that animate a life well-lived. The meanings that one finds in a life dedicated to “the wife, the heart, the bed, the table, the saddle, the fire-side, the country” — these are genuine meanings. They are, in other words, completely sufficient to hold off the threat of nihilism, the threat that life will dissolve into a sequence of meaningless events. But they are nothing like the kind of universal meanings for which the monotheistic tradition of Christianity had hoped. Indeed, when taken up in the appropriate way, the commitments that animate the meanings in one person’s life — to family, say, or work, or country, or even local religious community — become completely consistent with the possibility that someone else with radically different commitments might nevertheless be living in a way that deserves one’s admiration.

The new possibility that Melville hoped for, therefore, is a life that steers happily between two dangers: the monotheistic aspiration to universal validity, which leads to a culture of fanaticism and self-deceit, and the atheistic descent into nihilism, which leads to a culture of purposelessness and angst. To give a name to Melville’s new possibility — a name with an appropriately rich range of historical resonances — we could call it polytheism. Not every life is worth living from the polytheistic point of view — there are lots of lives that don’t inspire one’s admiration. But there are nevertheless many different lives of worth, and there is no single principle or source or meaning in virtue of which one properly admires them all.


Melville himself seems to have recognized that the presence of many gods — many distinct and incommensurate good ways of life — was a possibility our own American culture could and should be aiming at. The death of God therefore, in Melville’s inspiring picture, leads not to a culture overtaken by meaninglessness but to a culture directed by a rich sense for many new possible and incommensurate meanings. Such a nation would have to be “highly cultured and poetical,” according to Melville. It would have to take seriously, in other words, its sense of itself as having grown out of a rich history that needs to be preserved and celebrated, but also a history that needs to be re-appropriated for an even richer future. Indeed, Melville’s own novel could be the founding text for such a culture. Though the details of that story will have to wait for another day, I can at least leave you with Melville’s own cryptic, but inspirational comment on this possibility. “If hereafter any highly cultured, poetical nation,” he writes:


Shall lure back to their birthright, the merry May-day gods of old; and livingly enthrone them again in the now egotistical sky; on the now unhaunted hill; then be sure, exalted to Jove’s high seat, the great Sperm Whale shall lord it.



Sean D. Kelly is chair of the department of philosophy at Harvard University. He is the co-author, with Hubert Dreyfus, of “All Things Shining: Reading the Western Classics to Find Meaning in a Secular Age,” to be published in January by Free Press. He blogs at All Things Shining.
