
Robert Fletcher on Cory Doctorow

There's a nice piece on Cory Doctorow by Robert Fletcher in the current issue of Science Fiction Studies (SFS). It surprised me a little to find it there, since Doctorow is not exactly in the sf canon yet. But, as I have blogged about here, there is really no better example of the current "structure of feeling" than Doctorow: he's right there, blogging constantly, writing for any magazine that will have him, putting a Creative Commons license on everything while insisting on the profitability of the whole enterprise. In short, it would be hard to find a literary figure who does a better job exploring the tensions and contradictions of the neoliberal moment, especially when it comes to the fluidity of information, the role of the state, the constitution of the individual and, in general, the contradictions of a monolithic yet simultaneously superannuated capitalist system. It's that aspect of his fiction that I find interesting, even when it doesn't quite hold together: the accelerated heteroglossia of a networked era.

As Fletcher (81) writes, like Dickens, Doctorow's "competing roles as artist, advocate, and entrepreneur tell us something about his novels' relations to changing modes of cultural production and to the social organization they entail." And as Doctorow continues to write past the dot-com crash into the depths of our information-saturated, Orwellian state, we'll see more in his work that chronicles the contradictions of our times. One of the best parts of what Fletcher identifies as Doctorow's "networked" identity is the perspective it gives us onto the messiness of figuring things out in the global present. Drawing on the diverse discourses around him to form occasionally refractory assemblages of ideas, and then working those ideas back and forth over the course of several essays, novels, and short stories, is not only a symptom but also a synecdoche of the neoliberal present.
