Tuesday, December 21, 2010

The Future of Mind

The New York Times has been adding blog content to its online site.  One of the most interesting (and most surprising) additions to the unfortunately named "Opinionator" section has been "The Stone," a forum edited by Simon Critchley, chair of the philosophy department at the New School in New York, that began in May.  It's a philosophy blog--a welcome addition, especially compared to the blog content on other newspapers' sites (sports, crime, consumer news, entertainment).

Over the past couple of weeks, the columns have turned to critiques of neuroscience--or, should I say, critiques of popular representations of neuroscience, where every facet of culture and behavior has its materialist correlate, measured in the release of dopamine or the firing of neurons.  Which, of course, is on one level entirely true--we are biological creatures, after all.  But the results of neuroscience that trickle down into etiolated newspaper articles present the materialist reduction as "explaining" our complex lives--violence, love, etc.--in a way that seems calculated to shut down curiosity about science by suggesting that everything is on the brink of final explanation.

But "mind", like "body," is instead a perpetual work-in-progress, with room for sociological or even (gasp) anthropological speculations on what may emerge next.  In other words, the study of cognition is inherently future-oriented. 

A couple of the most recent columns come from one of the better-known cognitive scientists out there, Andy Clark.  He's a popularizer, certainly, but one who has always argued for a more complex model of thinking.  In his December 12th column, "Out of Our Brains," he recapitulates the arguments for "distributed cognition" (somewhat disingenuously described as a "current" movement even though it's been around for decades).

But he extends those arguments to ICTs--information and communication technologies:

If we can repair a cognitive function by the use of non-biological circuitry, then we can extend and alter cognitive functions that way too.  And if a wired interface is acceptable, then, at least in principle, a wire-free interface (such as links in your brain to your notepad, BlackBerry or iPhone) must be acceptable too.  What counts is the flow and alteration of information, not the medium through which it moves. 

This is not exactly a revolutionary idea.  The example James McClelland and his co-authors gave in their seminal 1986 book was a simple arithmetic problem--multiplying two three-digit numbers.  How many of us can do that in our heads?  And how many need a "tool" (e.g., pencil and paper) to "think" the problem through to a solution?  And if we accept that the boundary of cognition can be drawn to encompass the environment around us (in this case, the pencil and paper), then there is little reason not to include the information technologies we use in those processes as well.  Extrapolating from this to the future of cognition, we can safely predict that new tools will bring new, complex forms and configurations of cognition.
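To make the example concrete, here is a toy sketch of my own (the function, the names, and the sample numbers are mine, not anything from the PDP book): the familiar written algorithm for long multiplication, with the "paper" made explicit as a data structure holding the partial products that unaided working memory cannot retain.

    # A toy illustration of the pencil-and-paper example (my sketch, not
    # code from McClelland et al.): the "paper" is an explicit list that
    # stores the partial products working memory cannot hold on its own.
    def long_multiply(a: int, b: int) -> int:
        scratchpad = []  # the external medium: each row "written down"
        for place, digit in enumerate(reversed(str(b))):
            # one row of the written algorithm: a times one digit of b,
            # shifted into its column
            scratchpad.append(a * int(digit) * 10 ** place)
        return sum(scratchpad)  # the final addition down the columns

    # 347 x 268 produces the rows 2776, 20820, and 69400; few of us can
    # hold all three in mind at once, but the scratchpad can.
    assert long_multiply(347, 268) == 92996

The cognitive work is distributed between the loop (the rote digit-by-digit procedure we learn in school) and the scratchpad (the paper); delete the list and the "thinking" collapses, which is precisely the point about extended cognition.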

As Clark concludes:

At the very least, minds like ours are the products not of neural processing alone but of the complex and iterated interplay between brains, bodies, and the many designer environments in which we increasingly live and work.

Fine.  Thank you, Andy Clark, for the observation!

But where I begin to get more interested is with the idea that the "interplay" may go the other way as well.  We take it as axiomatic that--however far our cognition extends into the cell phones we deploy--"cognition" extends from the "I" outward, a Cartesian intentionality where "I" am the master of my many tools.  But couldn't it happen the other way?  Couldn't we be the "tool" of some machine cognition--a pawn, as it were, in the connectivity of our hand-helds?  We don't, I think, need to stoop to Hollywood science fiction to imagine this--indeed, a whole branch of science and technology studies (Actor-Network Theory and its many spin-offs) is devoted to it.  Our machines "exert" some of their own priorities on us, and we, rather fittingly, become more "machine-like" in our thinking.  The moment you've moved outside of a room to get a better cell phone connection is the moment you've done your machine's "bidding"!  But how has this affected our conversations and relationships with each other?

We can see this in Andy Clark's blog entry itself--"What counts is the flow and alteration of information, not the medium through which it moves."  He already conceives of cognition along the lines of information technologies--as quanta of information sent and received.  He has become (as have all of us) more "computer-like" in his cognition, just as our current proliferation of social networking platforms has made our social life more "network-like", and the universality of Graphical User Interfaces has made us capable (or incapable) of "multi-tasking".  That is, the GUI has not just added a new word ("multi-tasking") but has taught people to treat cognitive actions as discrete "applications" that can be undertaken simultaneously, like opening multiple windows on a computer screen.

For the future, these are the interesting, unanswered questions: if we're doing "cell phone" thinking today, what kinds of cognitions will we be embedded in tomorrow?  What machines will we invent to help us think?  And how will those machines "think" with us?

References
McClelland, J.L., D.E. Rumelhart and the PDP Research Group (eds) (1986).  Parallel Distributed Processing: Explorations in the Microstructure of Cognition.  Cambridge, MA: MIT Press.
