Tuesday, February 1, 2011

Can A Place Be the Future?

In a January 26th New York Times op-ed, "25 Years of Digital Vandalism," William Gibson reflects on the Stuxnet attack on Iran's nuclear facilities. As a genuine futurist, Gibson looks to Stuxnet as a sign of the times--and a bellwether for the future. He confesses, "I briefly thought that here, finally, was the real thing: a cyberweapon purpose-built by one state actor to strategically interfere with the business of another." But in the end he's disappointed to find that Stuxnet is really just another virus--albeit one perhaps appropriated by one government against another. He is ambivalent about what this means for the future of nuclear security.

One of Gibson's strengths is his restless, global search for sites of the future. Here he looks to Iran, though he is best known for his (highly selective) evocations of Japanese postmodernity. It is a never-ending quest--the future proves elusively peripatetic. As he commented in a 1989 interview, “I think that at one time the world believed that America was the future, but now the future’s gone somewhere else, perhaps to Japan, it’s probably on its way to Singapore soon but I don’t think we’re it” anymore.

But is this an ultimately pointless quest? To what extent is it useful to think of the future as another place? On the one hand, in an era of globalization, there's a certain temporal relativism at work. One way of thinking of financial arbitrage (and other financial instruments) is precisely that: the exploitation of pricing irregularities that are a function of temporal distance. After a relatively short time, those differences disappear into the more homogeneous time of globalized capital. But they are necessarily fleeting temporal distortions.
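To make the arbitrage analogy concrete, here is a minimal, purely illustrative sketch in Python (the prices, the fee, and the sequence of quotes are all invented for the example): it measures the per-unit profit available while two venues quote different prices for the same thing, a gap that shrinks to nothing as the quotes converge.

# Toy illustration of arbitrage as a temporal gap: the "opportunity" exists
# only while two venues' prices have not yet converged. All numbers here are
# invented for the example.

def arbitrage_profit(price_a: float, price_b: float, fee: float = 0.001) -> float:
    """Per-unit profit from buying at the cheaper venue and selling at the
    dearer one, net of a proportional fee on each leg."""
    buy, sell = min(price_a, price_b), max(price_a, price_b)
    return sell * (1 - fee) - buy * (1 + fee)

# A short, fleeting window: the quotes drift back together over a few ticks.
quotes = [(100.00, 100.80), (100.20, 100.55), (100.40, 100.42)]
for tick, (a, b) in enumerate(quotes):
    print(f"tick {tick}: gap={abs(a - b):.2f}, profit per unit={arbitrage_profit(a, b):.4f}")

The toy only makes the temporal point: the profit lives in the lag between the two quotes, and once they synchronize it evaporates.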

In a sense, thinking of Iran, Japan or Singapore as "the future" is no more credible than looking to other places as representative of the past--a familiar tactic in 19th-century anthropology, and still part of racist, ethnocentric depictions of non-Western peoples as "caught" in the "primitive past." Here, we're just reversing the gaze: now, because of culture, politics or economy, the other place is thought to exist on an accelerated time horizon, and looking at its "present" is said to grant us some insight into our future.

But our more quotidian moments are more obdurately Newtonian--or perhaps the better way to think of it is "more Taylorist." That is, after the work of F.W. Taylor, time for us is parsed out in a unified, commodified form, ultimately synchronized to the monolithic, mechanical timepiece of global capital.

Still, there is a real point to looking past the U.S. or Europe for the future--and not because it opens some magic window onto the next big thing. Call it "cultural arbitrage": the gap that opens up between global modernity and the kinds of hopes and expectations people have for their lives. Looking somewhere else doesn't mean that our lives will become more like theirs. But it does open up the possibility of reflecting on similar conditions in the US. That is, the "gap" opens onto our own contradictory experiences and expectations and forces us to question the course of our own futures.

We'll be doing this in August of this year with our study abroad course in Seoul, South Korea: Harmony of Modernity and Tradition. We'll be reflecting on exactly those tensions that open up between people's lives and the modernity that we all share. We'll be visiting temples, shrines, factories, shopping meccas and nightclubs. Along the way to making sense of it all, we'll reflect on what it means for us as well. Seoul not as a window onto the future, but as a means for thinking about our mutual futures.


References

Gibson, William (1989).  Interview (February) with Terry Gross on "Fresh Air."  Washington, D.C.: National Public Radio.   

Thursday, January 13, 2011

Technologies of Waiting


I've been reading Orvar Löfgren and Billy Ehn's The Secret World of Doing Nothing (University of California Press, 2010) in preparation for the Spring semester. It's the first time I've used a work of ethnology (i.e., a comparison across cultures) in the classroom, as opposed to the conventional, in-depth monographs that are the bread and butter of US anthropology.

Löfgren and Ehn explore the cultural and social life of non-events, i.e., those parts of our lives that we ordinarily "bracket" as irrelevant--the times we wait in line or idly stare out a window. This raises interesting questions, especially with regard to methodology: how do you do anthropology when no one thinks the topic is even worth talking about?

Not surprisingly, they find that our experiences of these kinds of phenomena are culturally variable, and that "our" (US and European) expectations for non-events are very much conditioned by a modernity which 1) sets up a variety of institutions to organize people into spaces that contain "empty" time (waiting rooms, departure gates) and 2) places a premium on "productive" time while making it immoral to "waste" time.

One of the results is an in-built tension between "using" and "wasting" time--a double bind which places people in situations where they must surrender to the "empty" time of waiting while at the same time craving the productive, commodified time of the Protestant work ethic. If modernity replaced meaningful time (Biblical, moral, mythological) with time as an empty variable, then it is not surprising that people would find this unsettling. Accordingly, many technologies have been developed to solve the dilemma of empty time:
The accelerated pace of everyday life in the Western world is often said to have influenced the way people feel about waiting.  A whole industry has been built up around diminishing delays. (28)
One of the major successes, of course, has also been the most Pyrrhic: the automobile has both sped life up and slowed it down--first raising expectations for speed, then crushing them with the multiplication of sprawl around the world. Thanks to this effort to speed transportation (and the concomitant spread of suburbs), commute times are high: the average commute where I live (Maryland, USA) is 31 minutes; China's average commute is 42 minutes; Tokyo workers average 60 minutes. This hasn't stopped the desire for faster transportation at all. Indeed, based on The Secret World, one would have to prognosticate a future filled with still more devices for acceleration.

Still, thinking about waiting and technology, I can imagine other desires besides acceleration. For one thing, many of the information technologies we use have little to do with "saving" time--in fact, they introduce a number of time effects of their own: different ways of parsing out time, the frisson of sudden time dilation, the rhythm of turn-taking, and so on. This has been a major draw in gaming: the introduction of "game time" (Tychsen and Hitchens 2009). Other IT introduces different time effects, the point being less that they add "more" speed than that they demand the user enter into a new pace. Social network technologies aren't about speeding up or slowing down along a linear continuum so much as the introduction of different temporal rhythms. Aren't these temporalizations another reason for their popularity?
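As a rough illustration of what a distinct "game time" can mean, here is a minimal sketch in Python (my own generic example, not the model from Tychsen and Hitchens 2009): a clock that advances at its own scaled, pausable rate, a temporal rhythm deliberately decoupled from wall-clock time.

import time

class GameClock:
    """A toy 'game time': it advances at its own rate and can pause, a rhythm
    decoupled from wall-clock time. A generic sketch, not Tychsen and
    Hitchens's model."""

    def __init__(self, scale: float = 1.0):
        self.scale = scale            # how fast game time runs relative to real time
        self.game_time = 0.0          # accumulated game seconds
        self.paused = False
        self._last = time.monotonic()

    def tick(self) -> float:
        """Advance game time by the scaled real time elapsed since the last tick."""
        now = time.monotonic()
        if not self.paused:
            self.game_time += (now - self._last) * self.scale
        self._last = now
        return self.game_time

# Example: one real second counts as one in-game minute.
clock = GameClock(scale=60.0)
time.sleep(0.1)
print(f"game seconds elapsed: {clock.tick():.1f}")

The design point is the one made above: the technology does not simply make things faster; it asks the player to inhabit a different pace altogether.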

To take this back to Löfgren and Ehn's book, the growing blight of "empty time" in the form of commuting and bureaucratization may give rise to various technologies of speed (in Virilio's sense), but it will also stimulate the development of technologies that introduce new time effects. "Empty time" acts as an abstract table upon which variously commodified, variously meaningful time effects can be overlaid--e.g., the rhythm of text messaging and the dialectic of anticipation and expectation produced in the space of that temporalization. But it's the difference that's important here, not necessarily the speed.

References
Tychsen, Anders and Michael Hitchens (2009). "Modeling and Analyzing Time in Multiplayer and Massively Multiplayer Games." Games and Culture 4(2).

Tuesday, December 21, 2010

The Future of Mind

The New York Times has been adding blog content to its website. One of the most interesting (and most surprising) additions to the unfortunately named "Opinionator" section has been "The Stone," a forum edited by Simon Critchley, chair of the philosophy department at the New School in New York, that began in May. It's a philosophy blog--a welcome addition, especially compared to the blogged content at other newspapers (sports, crime, consumer news, entertainment).

Over the past couple of weeks, the columns have turned to critiques of neuroscience--or, should I say, critiques of popular representations of neuroscience, in which every culture and behavior has its materialist correlate, measured in the release of dopamine or the firing of neurons. Which, of course, is on one level entirely true--we are biological creatures, after all. But the results of neuroscience that trickle down into etiolated newspaper articles present the materialist reduction as "explaining" our complex lives--violence, love, etc.--in a way that seems calculated to shut down curiosity about science by suggesting that everything is on the brink of final explanation.

But "mind", like "body," is instead a perpetual work-in-progress, with room for sociological or even (gasp) anthropological speculations on what may emerge next.  In other words, the study of cognition is inherently future-oriented. 

A couple of the most recent columns come from one of the best-known cognitive scientists out there, Andy Clark. He's a popularizer, certainly, but one who has always argued for a more complex model of thinking. In his December 12th column, "Out of Our Brains," he recapitulates the arguments for "distributed cognition" (somewhat disingenuously described as a "current" movement even though it's been around for decades).

But he extends those arguments to ICTs--information and communication technologies:

If we can repair a cognitive function by the use of non-biological circuitry, then we can extend and alter cognitive functions that way too.  And if a wired interface is acceptable, then, at least in principle, a wire-free interface (such as links in your brain to your notepad, BlackBerry or iPhone) must be acceptable too.  What counts is the flow and alteration of information, not the medium through which it moves. 

This is not exactly a revolutionary idea. The example James McClelland and his co-authors gave in their seminal 1986 paper was a simple arithmetic problem: multiplying two three-digit numbers. How many of us can do it in our heads? And how many need a "tool" (e.g., pencil and paper) to "think" the problem through to a solution? (A toy sketch of this kind of offloading appears below.) And if we accept that the boundary of cognition can be drawn to encompass the environment around us (in this case, the pencil and paper), then there is little reason not to consider the information technologies we use as part of those processes as well. Extrapolating from this to the future of cognition, we can safely predict that new tools will bring new, complex forms and configurations of cognition. As Clark concludes:

At the very least, minds like ours are the products not of neural processing alone but of the complex and iterated interplay between brains, bodies, and the many designer environments in which we increasingly live and work.  

Fine. Thank you, Andy Clark, for the observation!
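To make the pencil-and-paper example concrete, here is a toy sketch in Python (my own illustration, not McClelland and Rumelhart's model, and the numbers are arbitrary): the long multiplication is carried out by writing partial products to an explicit, external "scratchpad," so that part of the "thinking" lives in the external medium rather than in working memory.

def multiply_with_scratchpad(x: int, y: int):
    """Long multiplication that offloads intermediate results to an external
    'scratchpad' list: a toy illustration of extended cognition, where the
    medium (pencil and paper, or a list) carries part of the cognitive work."""
    scratchpad = []
    total = 0
    for place, digit in enumerate(reversed(str(y))):
        partial = x * int(digit) * (10 ** place)   # one written-down partial product
        scratchpad.append(partial)                 # "write it on the paper"
        total += partial
    return total, scratchpad

result, workings = multiply_with_scratchpad(417, 382)
print(workings)   # [834, 33360, 125100] -- the "pencil marks"
print(result)     # 159294

Erase the scratchpad and the head alone has to hold every partial product at once, which is exactly the difference the extended-mind argument trades on.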

But where I begin to become more interested is with the idea that the "interplay" may go the other way as well. We take it as axiomatic that--however far our cognition extends into the cell phones we deploy--"cognition" extends from the "I" outward, a Cartesian intentionality in which "I" am the master of my many tools. But couldn't it happen the other way? Couldn't we be the "tool" of some machine cognition--a pawn, as it were, in the connectivity of our hand-helds? We don't, I think, need to stoop to Hollywood science fiction to imagine this--indeed, this is the concern of a whole branch of science and technology studies (Actor-Network Theory and its many spin-offs). Our machines "exert" some of their own priorities onto us, and, rather fittingly, we become more "machine-like" in our thinking. The moment you've moved outside of a room to get a better cell phone connection is the moment you've done your machine's "bidding"! But how has this impacted our conversations and relationships with each other?

We can see this in Andy Clark's blog entry itself--"What counts is the flow and alteration of information, not the medium through which it moves." He already conceives of cognition along the lines of information technologies--as quanta of information sent and received. He has become (as have all of us) more "computer-like" in his cognition, just as our current profusion of social networking platforms has made our social life more "network-like," and just as the universality of Graphical User Interfaces has made us capable (or incapable) of "multi-tasking." That is, not just adding a new word ("multi-tasking") but enabling people to consider cognitive actions as discrete "applications" that can be undertaken simultaneously, like opening multiple windows on a computer screen.

For the future, these are the interesting, unanswered questions: if we're doing "cell phone" thinking today, what kinds of cognitions will we be embedded in tomorrow?  What machines will we invent to help us think?  And how will those machines "think" with us?

References
McClelland, J.L., D.E. Rumelhart and the PDP Research Group (eds) (1986). Parallel Distributed Processing. Cambridge, MA: MIT Press.

Tuesday, November 30, 2010

Parasitic Twittering at the Anthropology Conference

I posted this at www.wfs.org as well . . .


I’m back from the American Anthropological Association Annual Meeting in New Orleans, Louisiana. As expected, 6,000 of us shuttled between two huge, corporate hotels on Canal Street, soaking up hundreds of panels, poster sessions, round tables and workshops organized according to our association's unique calculus—unpopular panels (like mine) should be held in cavernous banquet halls, while popular topics should be granted a room the size of a bargain berth on a Carnival cruise.
  
But there was also Twitter: by all accounts, a few thousand tweets from a handful of people before, during, and after our conference. You can see them all archived under the #aaa2010 hashtag.

There was “Kerim” (as he is known at the anthropology blog Savage Minds [savageminds.org]), alerting anthropologists to the “Twitter Meetup” at a restaurant near the hotel. “Ethnographic Terminalia” announcing a party at Du Mois Gallery (uptown). The jazz funeral for Walter Payton, the celebrated New Orleans bassist. A book signing at an uptown bookstore. Hints on getting around town; kvetching about the water “boil alert” (from Friday to Sunday).

Not exactly South By Southwest, was it?  It depends on what you were expecting.

Last year, there was an avalanche of blogging about the political power of Twitter in Tehran—later (and rather embarrassingly for journalists who ought to have been more skeptical) revealed to be far less of a revolution than originally depicted. But it’s par for the course for our society, where technologies are regularly accorded tremendous power to effect social and political change. Malcolm Gladwell critiqued this tendency toward hyperbole in a recent New Yorker article. He warns,

"It shifts our energies from organizations that promote strategic and disciplined activity and toward those which promote resilience and adaptability.  It makes it easier for activists to express themselves, and harder for that expression to have any impact.  The instruments of social media are well suited to making the existing social order more efficient.  They are not a natural enemy of the status quo.  If you are of the opinion that all the world needs is a little buffing around the edges, this should not trouble you.  But if you think that there are still lunch counters out there that need integrating it ought to give you pause." (Gladwell 2010)

In many ways, Gladwell is spot-on in his critique. Too many essayists and academics write about Twitter the way people write about iPads or cell phones or whatever—as pivotal, ultimately deterministic technologies that are going to change the world in some beneficial way. This is where marketing and scholarship meet: sales hype finds its hyperbolic echo in academic writing. When the reality proves less than game-changing, you’d think these kinds of proclamations would become less common. But the same commentators just move on to the next social medium.

Ultimately, this distracts us from considering what social media actually do, and what they might do in the future. Looking back at the modest Twitter presence at the anthropology meetings, it would be hard to suggest that Twitter represented an alternative to the main conference. Nothing of the sort, really—most of the tweets were commentary, summaries or advertising for papers and presentations at the conference. But the tweets that got retweeted the most were announcements for off-site events: little challenges to the monopoly of the conference site in the form of meet-ups, gallery showings and book signings. In other words, nothing that represented an actual alternative to the conference (not a new way to conference), but little nudges to conference attendees to consider supplemental events outside.

Here, Twitter reminds me of Michel Serres on “parasite logic,” the way that an outside, third party (or medium) intercedes in a dyadic communication and opens the possibility for new meanings or new action. As Brown (2002:16-17) writes,

“In information terms, the parasite provokes a new form of complexity, it engineers a kind of difference by intercepting relations. All three meanings then coincide to form a ‘parasite logic’–analyze (take but do not give), paralyze (interrupt usual functioning), catalyze (force the host to act differently). This parasite, through its interruption, is a catalyst for complexity. It does this by impelling the parties it parasitizes to act in at least two ways. Either they incorporate the parasite into their midst–and thereby accept the new form of communication the parasite inaugurates–or they act together to expel the parasite and transform their own social practices in the course of doing so.”

Twitter’s power lies in its ability to interrupt, supplement and catalyze different kinds of behavior: a medium that impels people to (briefly) diverge from their expected scripts at the conference and, say, take a trolley uptown. This is a powerful potential—one that people like Clay Shirky have made a career of extrapolating upon.

But it is, ultimately, a parasite technology, one that requires the presence of more monolithic institutions to function. That is, it supplements the school, the meeting, the demonstration, rather than moving to replace them. More than that, its ontology rests on the presence of these more permanent, more powerful structures. This hardly represents some grand failure on the part of social media—it’s just a reminder to look to the social contexts of media rather than to media themselves.

Doing so can also free us to imagine other parasite technologies—cascades of social media that nudge, prod, intrude, implore.  We move to a future where social technologies will consistently fail to be transcendent—will fail to utterly transform the way we exist and communicate. But ultimately, the parasitic itself can prove transformative.

References

Brown, Steven D. (2002). “Michel Serres.” Theory, Culture & Society 19(3):1-27.
Gladwell, Malcolm (2010). “Small Change.” New Yorker, October 4, 2010: 42-49.

Wednesday, September 29, 2010

The Anthropological RPG

While looking for the European journal Anthropos, I stumbled across another Anthropos--this one an anthropologically informed RPG start-up made up of a PhD student in anthropology (Calvin Johns) and a linguistics/literature PhD (Travis Rinehart). It looks like they'll be releasing "Early Dark" soon--although I can't tell whether it will get any kind of distribution or whether it will be strictly print-on-demand (POD). It's a typical tabletop RPG, but with an anthropological twist.

What does it mean to have an anthropologically informed RPG? In a July interview with Park Cooper (posted in the Comics Bulletin column "The Park and Bob Show"), Rinehart describes their goal as creating "a world that as accurately as possible represents an anthropologically correct vision of human reality (besides magick)," while Johns adds, "We take influence from cultures traditionally demonized, feminized, stereotyped or homogenized in other games." Moreover, players move across a culturally heterogeneous landscape--"each nation in the game (there are no races, because any intelligent person realizes that race is a mythic category that wasn't even an issue in the world until the last 400 years or so) is a blend of at least two other cultures." Basically, anthropology old (the emphasis on systematic generalization) and new (a multicultural, pluralist vision).

Sunday, September 26, 2010

How to avoid staying at the corporate hotel . . .

I blogged a bit about my multi-agent systems-informed theories for de-centralized convention planning at the World Future Society . . . This, as the American Anthropological Association again prepares to meet at a non-union venue.

Sunday, September 12, 2010

Blogging . . . Somewhere else

These days, I've been blogging a bit at the World Future Society.  I'm joined there by other future-oriented bloggers . . .
