Monday, July 26, 2010

Dear NYT, please stop writing stories about Memphis politics.

Somebody, for the love of God, please make it stop.

Once again, the New York Times has given us embarrassingly reductive, bordering on cartoonish and condescending, reportage about Memphis politics. This time, it's an article provocatively entitled "Black Candidate Brings Race Into a Primary in Memphis." (I'm shocked that they were able to refrain from putting "black" in ALL CAPS or from putting "race" in scare quotes.) The last time I sighed in exasperation over the NYT's misguided foray into Memphis politics was just a month ago, when they reported on the so-called "new poor" (read: blacks) in our admittedly less-than-fair city. Given their usually subtle, probative and complex treatments of cities like Baltimore and New Orleans, it's just a mystery to me why the country's most respected newspaper continues to give Memphis short shrift.

Let me be clear. I'm glad to see things like the effects of the housing crisis in Memphis or the heated contest for the 9th Congressional District race, both of which are heavily inflected with racial politics, covered by a national media outlet. But, I mean, c'mon... can't we do better than stupidly simple stories that basically amount to "guess what? blacks and whites don't get along in the South!"? And why does the NYT feel obligated to preface every story about Memphis with a reminder that this is the place where MLK, Jr. was assassinated? Who doesn't know that? (Okay, okay, I'll reluctantly concede that maybe there are people who don't know that, but still, that's no justification for pretending like you can frame the racial issues of 2010 as if they were the same as those of 1968.) One of the more interesting things about politics in Memphis, in my view, is the fact that-- largely because of our history-- we serve as a kind of petri dish for racial politics in America. Unlike the rest of the country, nobody here bothers to pretend like race isn't a main ingredient in everything political. There's no "race card." The whole freaking deck is raced. (Seriously, check out your deck. You'll notice that it's divided into colors, and that they don't all carry the same weight.) So, the most recent NYT article, which is little more than a j'accuse! directed at former mayor Willie Herenton for "playing the race card," is about as unreflectively carpetbagger as it gets.

Unlike most (white) Memphians, I don't loathe Willie Herenton, who, in my opinion, doesn't get enough credit for some of the good things he did do, like revitalizing the downtown entertainment district. But I'm voting for (actually, I've already voted for) Steve Cohen, who I think is the better candidate in this race. Before I cast my vote, I thought long and hard about the racial politics in Memphis-- I hope all Memphians do the same-- but none of that long and hard thinking involved anything even remotely resembling what was presented in the NYT article.

Seriously, please make it stop.

Saturday, July 24, 2010

Solicitation

Just today, I posted a solicitation for book recommendations as my Facebook status. I asked for fiction recs-- proscribing Stieg Larsson in advance-- and almost immediately received a host of literary endorsements from my many bibliophile friends (and, a pleasant surprise, from my students as well). I was happy to see that most of the suggestions were titles that were on my short list of must-reads anyway, confirming my longstanding suspicion that shared literary preferences make for good friendships. Even the recommendations that I knew I would ultimately reject for idiosyncratic reasons, like Thomas Mann's Dr. Faustus or Jose Saramago's The Double*, still made sense to me as "good" recommendations. As the suggestions came pouring in, I tried to clarify my solicitation with the following: "I'm basically looking for something to knock my socks off like Diaz's The Brief, Wondrous Life of Oscar Wao did." To which one of my good friends, anotherpanacea (aka, Joshua Miller), replied: "Haven't read Oscar Wao: what's appealing about it that you'd like to replicate?"

That's a great question. And so very hard to answer. My first instinct is to say that anyone interested in surveying my literary tastes should consult my blog post "Dr. J's Top 25 Books List," but the truth is that most of the books on that list are not fiction, so I'm not sure that it would be all that helpful in this case. After reviewing that list myself, I'm not even sure that the fiction pieces on that list are especially illustrative of what I like in fiction. So, I'm going to try to take on Anotherpanacea's question more directly here, and say what it was about The Brief, Wondrous Life of Oscar Wao that I found so appealing, and that I would like to replicate. (I mentioned to a friend recently that I would describe my current pleasure-reading phase as "post-Diaz," so using Oscar Wao as a kind of touchstone for my literary tastes is probably apropos.) There are a few preliminary caveats that I need to make first, including (1) that most of my non-fiction reading and research involves human rights violations and crimes against humanity, (2) that, against my better judgment, I am often drawn to fiction that replicates the same themes (like Oscar Wao, The Reader, and A Sunday at the Pool in Kigali), and (3) that, well, my professional life is that of a philosopher and, consequently, I'm partial to high-concept fiction.

Let me begin this way: I would list my favorite "contemporary" fiction authors as the following:
-- Philip Roth
-- Milan Kundera
-- J.M. Coetzee
-- Jonathan Safran Foer

Runners-up to this list (by which I mean authors of whom I have only read one or two novels) are:
-- Ann Patchett
-- Jonathan Franzen
-- Junot Diaz

That said, here's what I loved about Oscar Wao and would like to see replicated:
(1) big, comprehensive, national and transnational political themes, situated as they are in a particular racial or ethnic history
(2) a stark, searing, and probative consideration of human frailty
(3) a quick, piercing and true dialogue-driven plot
(4) the exposition and exposure of our collective weaknesses, told through the particular trials of a particular character's (universalizable) weaknesses-- on this score, Roth's The Human Stain rivals Oscar Wao for top billing-- and finally, perhaps obviously...
(5) PHILOSOPHY MASKED AS FICTION. (For my literary tastes, it must be philosophical, and it must be masked.)

So, for those of you who are my FB friends, feel free to offer your suggestions there. For the rest of my blog-readers, here's your opportunity to contribute recommendations.
------------------

* I think one of the signs of a mature literary sensibility is the ability-- and, more importantly, the willingness-- to admit what one doesn't like. Literary taste, like all tastes, can be a fickle and idiosyncratic thing. It took me a long, long time to get to the point where I could put down a novel that I didn't like without finishing it. At the risk of overgeneralization-- and there are always exceptions, of course-- I am at the point where I can say with some confidence that I don't especially enjoy the following: Latin American fiction (which includes a lot of magical realism), fiction by German writers or about German themes, Beat Generation lit or its stylistic and thematic heirs (among which I include Chuck Palahniuk), and sci-fi. I can appreciate the literary value of many of the great works in each of these genres... it's just that I don't like to read them.

Thursday, July 22, 2010

Let The Right One In

If you've been stuck under a rock for the last couple of years, you may not be aware that vampires are all the rage right now. Since I'm not a huge fan of scary movies, scary monsters, or scary things in general, I've managed to sidestep any real exposure to the recent vampiremania, though a few weeks back I was hustled into seeing the most recent installment in the Twilight series at the movies with my 11-yr-old niece. As it turns out, American tween girls are not the only ones driving this movement; fascination with the undead is an international phenomenon. Last night, after dinner with a few friends and colleagues, we watched the 2008 Swedish film Låt den rätte komma in ("Let the Right One In"), based on the novel of the same name by John Ajvide Lindqvist. Like the Twilight saga, the story centers on a young vampire-- or, at least, "young" in her bodily form. (She's been 12 years old for the last 200 years.) And like the Twilight saga, it's part murder mystery, part bildungsroman, part love story, and a whole lot of blood. But unlike Twilight, it's not overwrought or melodramatic, and it doesn't have a cloud of global doom (are vampires or werewolves going to take over?) hanging over everything and threatening the existence of the world as we know it. Let the Right One In is, in its own way, just as erotic (and homoerotic) as its American counterpart, but it trains its focus on the subtleties, the complexities and the intensities of just one young relationship, which is as awkward as any other young relationship, only with the superadded complication of involving a vampire.

Let the Right One In is an intensely quiet film. It's set in a working-class suburb of Stockholm, and the glistening, snowy landscape is the perfect mise en scène, projecting as it does the feeling that everything is dead, and yet somehow not, at the same time. The two main characters, Eli (the young vampiress, played by Lina Leandersson) and Oskar (her would-be paramour, played by Kåre Hedebrant), are also very quiet kids, and the sparse dialogue that they're given in the film is almost awkwardly direct and succinct. Both of them emanate vulnerability through huge, expressive eyes, which mask a hidden rage (behind Oskar's) and a hidden violence (behind Eli's). And because so very little is actually said between them, the love-bond that they develop on the basis of their shared misery and loneliness as outcasts becomes like a third character, melding them together and yet somehow maintaining their difference both from each other and from the relationship that they share. The sexual tension between Oskar and Eli-- only a "pre-sexual" tension really, exploratory and hesitant and insecure-- is simultaneously innocent and illicit, provoking the kind of uneasiness on the viewer's part that we are all trained to feel at the shameful spectacle of prepubescents coming into their own. Because this is a vampire film, there is violence, though the worst violence is not the murderous bloodsucking of Eli, but rather the psychic, physical and emotional violence suffered by Oskar, whose slight frame and gentle demeanor make him the daily target of a truly vicious schoolyard bully. Because she grows to love him, Eli tries to impart some of her killer instinct to Oskar-- an instinct that she sees him already harboring deep within, though he is unable to actualize it-- and that fundamentally protective gesture is what allows Oskar to look past the incommensurable natural divide that separates them.

I don't want to give away any spoilers, so I won't tell you the real meaning behind the film's title. But it won't spoil anything for me to say that the injunction "let the right one in" should be heeded in all relationships, living or (un)dead. Especially when we're young, or vulnerable, or scared, or feeling alone, finding the "right one" to let in is a precarious undertaking. And especially in those cases, mistakes can be-- literally or figuratively-- deadly.

I never thought I'd say this about a vampire flick, but Let the Right One In is one of the most surprisingly tender and touching films I've seen since, I don't know, maybe Lost in Translation. My hosts last night told me that there is a remake in the works here in the U.S., to be released sometime in the fall. I'm sure it won't be as good as the original, at least in part because many of the more controversial sub-themes doubtlessly will be left out so as not to offend the delicate sensibilities of American audiences. That's too bad, really, because it's those gray areas that are the emotional heart of the story. So try to see the original if you can. Here's the trailer:

Wednesday, July 21, 2010

Inception

"What's the most resilient parasite? An idea."
-- Dominic Cobb, protagonist in Inception

If you buy the basic premise of the new film Inception, most of our ideas (perhaps all of them) begin deeply in our subconscious, in our dreams. There, they are born and grow, fertilized and fed in a world in which they are bound-- not by logic, not by physical laws, not by morality, nor by possibility or practicality, nor even by death-- but only by the limits of our imagination. In the futuristic sci-fi world of Inception, technology has advanced to the stage where the secret ideas that we keep locked away in our minds can be stolen by thieves who visit our dreams. But the film suggests that to have the deepest recesses of our minds invaded and plundered is not the most grave danger. Worse than having an idea stolen is having an idea planted, a complex and difficult task called "inception." At least according to the film, our minds are hard-wired to detect ideas of foreign origin, and the challenge of inception is to plant the simplest seed of an idea so deeply in the target's subconsciousness that he or she comes to believe that it was self-generated.

The film's creator and director, Christopher Nolan-- who brought us Following (1998), Memento (2000), Insomnia (2002), The Dark Knight (2008), and one of my favorite films of all time, The Prestige (2006)-- plies his trade in (more or less) high-concept, sci-fi adventure movies, and Inception is entirely in keeping with what we have seen from him before. Inception is reminiscent of the 1984 thriller Dreamscape, which was also centered on the idea of invading people's dreams for nefarious purposes (and which scared the living bejesus out of me when I watched it as a kid). Unlike Dreamscape, though, no one needs special psychic powers in Inception, as advanced technology and pharmaceuticals make it all possible. Nolan's dreamworld is complex, multi-layered and labyrinthine-- at times, too much so-- but it has its own rules and its own logic that remain, however precariously, just this side of plausible. To its credit, Inception is a visually stunning film and is well paced for its 2-1/2 hour length. There are a few too many shoot-em-up action sequences for my taste, and the characters can be a bit one-dimensional, but the real problem with the film is... well... Nolan himself.

Here's what I like about Nolan: he's got some really good ideas. All of his films, including Inception, are in some way explorations of the fragility of the human mind. Nolan has a knack for pressing on those weak spots in our gray matter-- memory, pathology, dreams, obsessions, imagination, delusion-- and seeing what balance of order and chaos ensues. His two best films, Memento and The Prestige, straightforwardly dramatize interesting philosophical questions. We're given characters, a story, a host of twists and turns, and we are left to parse the meaning of it all. In his worst film, The Dark Knight, Nolan stepped in and ham-handedly attempted to parse all the meaning for us, as if he were afraid his audience might be too dense to get the (overestimated) brilliance of his concept. Everything is made obnoxiously, redundantly, explicit in The Dark Knight, even to the point of staging-- and then didactically narrating-- a straight-out-of-the-textbook moral dilemma. (The ferry boat scene in that film is a variation on the prisoner's dilemma, a staple of every introductory Ethics class.) Unfortunately, Inception suffers from the same just-in-case-you-don't-get-it Nolan hubris. It's terribly over-narrated, unnecessarily so, and the frequency with which the characters feel compelled to engage in explanatory dialogue feels forced and more than a little patronizing. What's worse, Nolan really gives us two separate and poorly integrated films here, both of them more or less dramatizing the same idea. The story of the protagonist, Dominic Cobb (Leonardo DiCaprio), attempting to plant an idea in the mind of his target, Robert Fischer (Cillian Murphy), is a complete and coherent story in itself. The story of Dominic's planting of an idea in the mind of his dead wife (Marion Cotillard) is another. Nolan desperately needed to choose one or the other, as the simultaneous development of both stories felt like another just-in-case-they-don't-get-it insurance policy.
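
For readers who haven't seen the prisoner's dilemma set out formally, here is the textbook payoff matrix, offered only as a sketch: the entries are the standard illustrative prison sentences in years (lower is better) for players A and B, and they are not meant to map onto the specifics of Nolan's ferry scene.

\[
\begin{array}{c|cc}
 & \text{B stays silent} & \text{B confesses} \\
\hline
\text{A stays silent} & (1,\ 1) & (3,\ 0) \\
\text{A confesses} & (0,\ 3) & (2,\ 2)
\end{array}
\]

Each player does better by confessing no matter what the other does, yet mutual confession (2, 2) leaves both worse off than mutual silence (1, 1); the textbook lesson, and the lesson the ferry scene dramatizes in its own way, is that individually rational choices can produce a collectively worse outcome.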

I don't think Inception was a "bad" film. It was a good film done poorly, in my estimation. But one thing that I find interesting about the particular manner in which it was done poorly is that Nolan's mis-execution of his idea did (ironically) serve as an effective demonstration of that same idea. The premise of the film is that "inception" is a very difficult, if not impossible, task. To "plant" an idea in someone's mind such that it feels home-grown requires circumventing all of the security features of human consciousness that are designed to reject infiltration by foreign assailants. And so, despite my incessant sighing and eye-rolling at the limp implementation of Nolan's concept in his film, I nevertheless still found myself compelled to think that Inception was, at its heart, radical and visionary. Almost as if someone planted that idea in my head! Whoa.

Go see Inception. It's a bit like someone telling you that they're about to tell you the funniest joke ever, and then laughing hysterically after they deliver the punchline, such that you can't help but think to yourself: yeah, that was hilarious. For all of its faults, Inception is a perfect metaphor for itself.

Sunday, July 18, 2010

I Do Not Think That Word Means What You Think It Means

I'm not ashamed to admit that philosophers can be quite persnickety in our insistence upon precision in language. Much of what we do, after all, involves precisely defining things, concepts, meanings, values, processes, systems, states of being and the like. That's not to say that we are actually settled on precise definitions for everything; in fact, most of the definitions we trade in are perennially, and often hotly, contested. Some strains of philosophical inquiry-- like deconstruction, for example-- even focus the lion's share of their energies on bringing to light the many imprecisions in our concepts and vocabularies. But still, any philosopher worth his or her salt would certainly agree that we are not at liberty to make up any old meaning for the words we use. Not if we want to make sense, anyway.

I recently found myself in a conversation with a PhD in Accounting about the nature and value of "Business Ethics" courses. Now, I'm very skeptical of the value of "applied" ethics courses in general, especially when they are offered/taken independent of ethical theory courses, as they often are. Most applied ethics courses (not all, but most) end up being little more than a survey of case studies, in which students consider various ethical dilemmas and are expected to make some judgment about "right conduct" in those cases. The problem here is that, absent some familiarity with ethical theory, students' judgments end up being merely intuitive at best, grossly arbitrary at worst. It's like trying to teach the game of baseball to students by having them watch a bunch of baseball games. They probably will be able to intuit some of the rules of baseball that way, but they'll miss many more than they get, and the ones they get won't be formulated in their minds as general principles. So, they might be able to identify the exact same infraction in a different baseball game, but since they're unable to provide the reasoning behind the rules that proscribe certain infractions, any variation in the transgression will appear like an entirely new case to judge. Not that big of a deal, I suppose, if we're only talking about baseball... but a pretty big deal when we're purporting to teach future businessmen and -women about how to conduct themselves ethically in business.

Anyway, back to my conversation with Dr. Accounting. I basically relayed my skepticism (above) to her and said something like "ergo, business students should be required to take 'ethical theory' classes before they take 'business ethics' classes." I gave her the same baseball analogy and said: "you see, the problem is that you can't really learn the game of baseball by watching a bunch of baseball games. The rules of the game are independent of their application or misapplication in any particular game. 'Ethics' is the same way. There are rules, and a reasoning behind those rules, and it's the rules that need to be learned, which can't be done with any number of case studies." To which Dr. Accounting replied: "Yeah, but each person decides their own ethical 'rules,' right?"

Sigh.

I replied: "But if each person makes up his or her own rules, then it's not really 'ethics.' It's just preference." And before I could get those three beautifully satisfying letters past my lips-- Q. E. D. -- I was slapped in the face with a stare as blank as business is morally vacant.

It won't come as a surprise, I guess, that we never reached a reconciliation in our conversation. But what surprised and frustrated me the most was not only that what we meant when we said "ethics" was so dramatically different, but also that my effort at clarifying my meaning (by making a distinction between "ethics" and "preference") was completely lost on my interlocutor. I remarked later to my (philosopher) colleague, also present for the conversation, that this was one of those cases in which I was painfully reminded that there is sometimes a vast abyss in meaning between certain words as we philosophers employ them and that same (otherwise shared) vocabulary as non-philosophers employ it. That realization, all by itself, is not all that disheartening, as every discipline has specialized vocabularies or specialized uses of common vocabularies. What was truly disheartening, though, was the utter disappointment I felt upon realizing that my interlocutor simply could not see the difference between a rule-based system for judgment and the utterly non-systematic application of arbitrary preferences in judgment. And worse, that she described the latter as "ethics."

Ethics. Dr. Accounting, as Inigo Montoya once said, I do not think that word means what you think it means.

Thursday, July 15, 2010

Bon Mots: Coetzee on the Origin of the State

From his most recent novel, Diary of a Bad Year (Viking, 2007, p.3), J.M. Coetzee's chief narrator, Señor C, speculates:

Every account of the origins of the state starts from the premise that "we"-- not we the readers but some generic we so wide as to exclude no one-- participate in its coming into being. But the fact is that the only "we" we know-- ourselves and the people close to us-- are born into the state; and our forbears too were born into the state as far back as we can trace. The state is always there before we are...

...If, despite the evidence of our senses, we accept the premise that we or our forbears created the state, then we must accept its entailment: that we or our forbears could have created the state in some other form, if we had chosen; perhaps, too, that we could change it if we collectively so decided. But the fact is that, even collectively, those who are "under" the state, who "belong to" the state, will find it very hard indeed to change its form; they-- we-- are certainly powerless to abolish it.

It is hardly in our present power to change the form of the state and impossible to abolish it because, vis-à-vis the state, we are, precisely, powerless. In the myth of the founding of the state as set down by Thomas Hobbes, our descent into powerlessness was voluntary: in order to escape the violence of internecine warfare without end (reprisal upon reprisal, vengeance upon vengeance, the vendetta), we individually and severally yielded up to the state the right to use physical force (right is might, might is right), thereby entering the realm (the protection) of the law. Those who chose and choose to stay outside the compact become outlaw.

The Not-Long-Enough Arm of the Law

Earlier this week, the International Criminal Court (ICC) issued its second arrest warrant for the President of Sudan, Omar al-Bashir, for war crimes and crimes against humanity committed on his watch in the course of the ongoing conflict in Darfur. Included in the charges against Bashir is the crime of genocide, which the ICC claims it has "reasonable grounds to believe him responsible" for committing against the Fur, Masalit and Zaghawa ethnic groups in Sudan. (If you're interested to know what criteria the ICC uses in its execution of warrants for crimes proscribed by the Rome Statute, you can download a .pdf explanation of the "Elements of Crimes.") President Bashir is the first sitting head of state ever to be indicted by the ICC, and the first to be charged with genocide. It is alleged that he hoped to evade the ICC warrant by holding and winning a "legitimate" presidential election, though many question the legitimacy of the vote that resulted in his landslide victory in April.

The ICC will deliver to Sudan-- or has already delivered, it's unclear-- the new warrant for Bashir, which Sudan will doubtlessly ignore. Like the United States, Sudan is a signatory to the Rome Statute, but never ratified it. According to the law of treaties, a state that has signed but not ratified a treaty is obliged to refrain from "acts that would defeat the object and purpose of the treaty." However, these obligations no longer hold if the state later makes it clear that it has no intention to become a party to the treaty. Three states-- the United States, Israel, and Sudan-- have done exactly this, effectively "unsigning" the Rome Statute and consequently signalling that they have no legal obligations arising from their original signatures on the statute. I want to be very careful about drawing too-close parallels between the United States, Israel and Sudan on this issue, but the indictment of President Bashir certainly draws attention to the precarious relationship between sitting heads of state and an International Criminal Court that might bring them to account for atrocities committed by or in their countries.

Consider the parallels in the following:

Obviously, Sudan's ratification of the Rome Statute would obligate it to turn over Bashir to The Hague. The Darfur conflict, during which most of the crimes Bashir is accused of were committed, began in February 2003, roughly three years after Sudan initially signed the treaty establishing the ICC. So, it is difficult to imagine noble reasons for Sudan's subsequent reluctance to ratify the Rome Statute. Clearly, this is an attempt to secure impunity for its head of state and to avoid the investigation and prosecution of human rights violations committed under his administration.

The United States' "unsigning" was performed by George W. Bush in 2002-- again, about two years after its original signing (by Bill Clinton), but more importantly after the September 11 attacks and the beginning of the U.S. "war on terror." Coinciding with Bush's nullification of the U.S.'s signature to the treaty, also in 2002, Congress passed the American Service-Members' Protection Act (ASPA), which contained a number of provisions, including prohibitions on the U.S. providing military aid to countries that had ratified the treaty establishing the court (exceptions granted), and permitting the President to authorize military force to free any U.S. military personnel held by the court. (The ASPA was dubbed by its opponents the "Hague Invasion Act.") Also in 2002, the United States threatened to veto the renewal of all United Nations peacekeeping missions unless its troops were granted immunity from prosecution by the Court, arguably in an attempt to intimidate countries that ratified (or intended to ratify) the treaty for the ICC. In a compromise move, the United Nations Security Council passed Resolution 1422, granting immunity to personnel from ICC non-States Parties involved in United Nations established or authorized missions for a renewable twelve-month period. This was renewed for twelve months in 2003, but the Security Council refused to renew the exemption again in 2004, after pictures emerged of US troops abusing Iraqi prisoners at Abu Ghraib, and the US withdrew its demand.

There is, of course, much debate over whether or not President George W. Bush should be held responsible for war crimes and crimes against humanity committed under (or at the behest of) his administration. But it is abundantly clear that, even if that administration is not in the end responsible for the alleged violations, it is at the very least primarily concerned with securing its own impunity from prosecution. Much like, well, Sudanese President Bashir.

The interesting question is: why did the ICC issue a warrant for Bashir and not for Bush (or Israeli President Peres)? Arguably, the evidence against the Bush administration for complicity in human rights violations is greater, more easily accessible, more widespread and more universally accepted as damning. The African Union (AU) and many other international organizations have accused the ICC of unfairly "targeting" Africa, a concern that has received far too little serious consideration. At any rate, it will be interesting to see how this case against Bashir proceeds, and whether or not it is able to generate enough support for the ICC to give the Rome Statute some real teeth.

Tuesday, July 13, 2010

Bon Mots: Arendt on the Rights of Stateless People

From The Origins of Totalitarianism (Harcourt/HBJ, 1979, p. 295-6), Hannah Arendt tracks the coincidence of statelessness and rightlessness:

The calamity of the rightless is not that they are deprived of life, liberty, and the pursuit of happiness, or of equality before the law and freedom of opinion-- formulas which were designed to solve problems within given communities-- but that they no longer belong to any community whatsoever. Their plight is not that they are not equal before the law, but that no law exists for them; not that they are oppressed but that nobody wants to even oppress them. Only in the last stage of a rather lengthy process is their right to live threatened; only if they remain perfectly "superfluous," if nobody can be found to "claim" them, may their lives be in danger. Even the Nazis started their extermination of Jews first by depriving them of all legal status (the status of second-class citizenship) and cutting them off from the world of the living and by herding them into ghettos and concentration camps; and before they set the gas chambers into motion they had carefully tested the ground and found out to their satisfaction that no country would claim these people. The point is that a condition of complete rightlessness was created before the right to live was challenged.

Friday, July 09, 2010

The Uncanny Valley 4: Magic, Miracles, and the Necessary Third

As many of you know, I was a tad bit obsessed with a certain theory in robotics known as "the uncanny valley" several months back. I even delivered a philosophy paper this past Spring using the uncanny valley as one way of explaining our aversion to racial passing. (You can read my series of posts on the uncanny valley here.) But, as often happens with passing interests, my affection for the topic eventually waned-- much to the delight of my friends, I think, who had been listening to me prattle on about it for too long. But just last week, I was hosting a couple of houseguests, and one morning over coffee I had a conversation with one of them that re-sparked my uncanny valley curiosity. (I'll let that friend remain nameless, for fear of his suffering reprisal from those who've heard their fill on this whole issue.) My guest described a project he was working on that dealt with mirror neurons and whether or not they were really the cause of the affective phenomenon we call "empathy." At any rate, in the course of describing our (and other animals') apparent ability to "experience" or "feel" things empathetically-- that is, without experiencing those things first-hand-- my interlocutor described the empathizer's experience as a kind of "simulation." That is, he speculated that when we feel (empathetic) pain upon witnessing another's injury, for example, what is happening is that our brain is neurologically "mirroring" the experience of the other, in effect manufacturing a simulation of that experience that convinces us, physically and emotionally, that we are experiencing it as well.

Now, what I found interesting about this, and what led me back down the rabbit-hole of the uncanny valley, was that it again raised the fundamental question: is it possible to have the experience of a "perfect" simulation? The theory of the uncanny valley, of course, suggests that it is not possible. At least when it comes to simulations of the human form, the uncanny valley phenomenon seems to show that our brains are structured in such a way as to generate extreme aversion to, and ultimately to reject, simulations that too closely approximate the "real." In my earlier posts on the subject, I elaborated upon this further, speculating that this is because we are cognitively "hard-wired" to protect the distinction between reality and simulation, much in the same way that we need the distinction between "true" and "false," or between a proposition and its negation. Implicit in my earlier musings on this topic, but not quite explicated, was the suggestion that, although it may be (technologically) possible to manufacture something like a perfect simulation (in the sci-fi, "virtual reality" sense), it would be impossible to experience a perfect simulation qua "perfect simulation." As I said earlier, if a simulation is perfect, then my experience of it will be indistinguishable from my experience of the "real" thing that it is simulating. Hence, any first-person phenomenological account I give of that experience will NOT be an account of a "simulation" but of (what I take to be, albeit mistakenly) an experience of the "real." (This is why, I speculated, if one were really being deceived by a perfect robot-simulation of a human, the phenomenon of the uncanny valley would disappear. That phenomenon requires that one be at least minimally cognizant of one's experience as the experience of a simulation. The resulting aversion to the too-close approximation of reality, I argued, was an aversion to being deceived.) So, on my account anyway, the really interesting question is not whether reality can be perfectly simulated or not, but whether or not we could ever have the experience of a perfect simulation qua simulation. So, here's a claim that I want to add to my earlier musings on the uncanny valley:

Any possibility for an account of the "perfect simulation" requires, necessarily, a third.

The person experiencing a perfect simulation, if it is indeed "perfect," does not experience it as a "simulation," and consequently will have no conceptual or linguistic tools at his or her disposal for giving an account of it as a "perfect simulation." (This is, of course, the age-old Cartesian worry, played out ad nauseam in brain-in-a-vat thought experiments and The Matrix movies.) I'm not sure that the recent neuroscientific experiments on mirror neurons or the psycho-physiology of empathy really disrupts this claim, given that even those experiments recognize that the first-person experience of pain or joy is categorically different from the empathetic experience of the same. In order for a simulation to be a simulation, it must be recognizably distinct from that which it is simulating. (Otherwise, it would just be "real," right?) So, if there is to be any account of a "perfect simulation"-- again, qua simulation-- it requires that there be an observer, independent of the person experiencing the simulation, who can still experience, understand, and articulate the difference between the simulation and reality. (Someone like the "Morpheus" character in The Matrix... or someone like the observing scientist in the laboratory full of brains-in-vats.) But, of course, if the observer-- who still recognizes the "perfect" simulation as a simulation-- is the only one who can give an account of it, then... alas, we still do not really have an account of the "experience" of the perfect simulation. We have an account of the experience of "the experience of the perfect simulation." Why? Because, again, we need the ability to distinguish between a thing and its opposite, the basic law of noncontradiction, in order to give a sensible account of anything we experience. And the person experiencing the perfect simulation no longer has that ability. What that person has is, in "reality," the simultaneous experience of A (a real thing) and not-A (a simulation of a real thing, that is, an unreal thing)... only that is a thoroughly irrational experience, both unthinkable and unsayable.
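
To put that structural point in rough notation (a sketch only; the predicates E and D are my own shorthand, not drawn from the uncanny valley literature): let E(x) mean "the subject experiences x as a simulation" and D(x) mean "the subject can distinguish x from the real."

\[
\forall x\,\big(E(x) \rightarrow D(x)\big), \quad \forall x\,\big(\text{perfect}(x) \rightarrow \neg D(x)\big) \ \vdash\ \forall x\,\big(\text{perfect}(x) \rightarrow \neg E(x)\big)
\]

If the simulation is perfect, the subject cannot distinguish it from the real and so cannot experience it as a simulation; only someone else, who still retains the distinction, can give the account, which is why what we end up with is an account of "the experience of the experience of the perfect simulation."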

[Why, you may be asking, does this require a "third"? Why not simply a "second"? Well, I formulate it that way because I'm trying to stay within the parameters of the uncanny valley theory, which is not about any-old simulated experience, but the simulation of an experience between two human beings, or a human being and a very-human-like robot. So, in that scenario, I think we need three: the primary subject, the "other" whom he or she is experiencing (which may or may not be a simulated human being), and the "third" observer.]

One last thing, which may or may not make this clearer. (And which, not unrelatedly, hearkens back to a post I did a long time ago entitled Anatomy of an Illusion, inspired by the excellent film The Prestige.) I think the difference between the experience of a perfect simulation, on the one hand, and the experience of the experience of a perfect simulation, on the other hand, highlights something structurally similar to the difference between the experience of "magic" (with an understanding here that "magic" is really illusion and trickery) and the experience of a "miracle." A perfect magic trick aims to entirely veil the part of it that is a "trick," to produce the illusion of not being an illusion, and thus to make it appear in our experience as a miracle. (Material objects don't just disappear and reappear with an 'abracadabra' and a wave of a wand! You can't saw a woman in half and then put her back together! Natural laws cannot be suspended! It must be a miracle!) If we ever really had the experience of a perfect magic trick, one that perfectly masked its illusion, we would, in effect, have the experience of a miracle. But, in fact, when we watch a magician do his trick, even if he is very, very good at it, we are experiencing it as an illusion, as a trick, as a simulation of something miraculous. That is why we can give an account of our experience of it as magic, and not miracle. Compare that to the experience of watching a young child in his or her first encounter with a really good magician. The child experiences the trick as real; in fact, the child does not experience it as a "trick." We adults, looking on and knowing that there is no such thing as "real" magic, can observe the total assimilation of reality and illusion that the child is experiencing, but neither we nor the child is in fact having the experience of a "perfect illusion."

It's either an illusion, in which case it is not perfect, or it's perfect, in which case it is no longer an illusion. In the parlance of The Matrix, you either take the blue pill or the red pill. The consequent implications of that choice in your experience are mutually exclusive. That's why there's no uncanny valley in the perfect simulation and, further, why there may be no "experience" of the perfect simulation at all.

Thursday, July 08, 2010

Bon Mots: Agamben on Ausnahmezustand

In State of Exception (University of Chicago Press, 2003, p.86-7), Giorgio Agamben writes:

The aim of this investigation-- in the urgency of the state of exception "in which we live"-- was to bring to light the fiction that governs this arcanum imperii [secret of power] par excellence of our time. What the "ark" of power contains at its center is the state of exception-- but this is essentially an empty space, in which a human action with no relation to law stands before a norm with no relation to life...

...Of course, the task at hand is not to bring the state of exception back within its spatially and temporally defined boundaries in order to then reaffirm the primacy of a norm and of rights that are themselves ultimately grounded in it. From the real state of exception in which we live, it is not possible to return to the state of law [stato di diritto], for at issue now are the very concepts of "state" and "law." But if it is possible to attempt to halt the machine, to show its central fiction, this is because between violence and law, between life and norm, there is no substantial articulation. Alongside the movement that seeks to keep them in relation at all costs, there is a countermovement that, working in an inverse direction in law and in life, always seeks to loosen what has been artificially and violently linked. That is to say, in the field of tension of our culture, two opposite forces act, one that institutes and makes, and one that deactivates and deposes. The state of exception is both the point of their maximum tension and-- as it coincides with the rule-- that which threatens today to render them indiscernible. To live in the state of exception means to experience both of these possibilities and yet, by always separating the two forces, ceaselessly to try to interrupt the working of the machine that is leading the West toward global civil war.

Closer than Gitmo, Further from Humane

In case any of you were still operating under the illusion that our prisons at home are somehow better than our secret prisons abroad, Louisiana has stepped in to disabuse you of that misconception. Sheriff Jack Strain of St. Tammany Parish is currently under scrutiny for his practice of confining "suicidal" inmates to 3'x3' metal cages (also known as "squirrel cages"), half-naked, with no bed, blanket or toilet facilities, and in full view of (and under constant ridicule from) guards and other inmates. If you're keeping score at home, those cages are exactly HALF the size of the cages required by Louisiana state law for dogs.

Bon Mots: Derrida on the future of the Humanities

From "The University Without Conditions" in Without Alibi (Stanford University Press, 2002, p.231), Jacques Derrida describes "the Humanities of tomorrow":

These new Humanities would treat the history of man, the idea, the figure, and the notion of "what is proper to man." They will do this on the basis of a nonfinite series of oppositions by which man is determined, in particular the traditional opposition of the life form called "human" and of the life form called "animal." I will dare to claim, without being able to demonstrate it here, that none of these traditional concepts of "what is proper to man" and thus of what is opposed to it can resist a consistent scientific and deconstructive analysis.

The most urgent guiding thread here would be the problematization (which does not mean disqualification) of the powerful juridical performatives that have given shape to the modern history of this humanity of man. I am thinking, for example, of the rich history of at least two of these juridical performatives: on the one hand, the Declarations of the Rights of Man-- and of women (since the question of sexual difference is not secondary or accidental here; we know that these Declarations of the Rights of Man were constantly being transformed and enriched from 1789 to 1948 and beyond: the figure of man, a promising animal, an animal capable of promising, as Nietzsche said, remains still to come)-- and, on the other hand, the concept of "crime against humanity," which since the end of the Second World War has modified the geopolitical field of international law and will continue to do so more and more, commanding in particular the scene of worldwide confession and of the relation to the historical past in general. The new Humanities will thus treat these performative productions of law or right (rights of man, human rights, the concept of crime against humanity) where they always imply the promise and, with the promise, the conventionality of the "as if."