
Manufacturing the Alien

I've been thinking on and off about aliens these days. One reason must be that I'm on the CONTACT! listserv, which is fairly chock-a-block with speculations on Earth-like planets in other solar systems. The other has to do with my research on other "aliens," those non-human agents that are more and more part of our everyday life.

Of course, it's odd to think about these "agents" (software or hardware) as "aliens" at all, but this is exactly what Morton Klass did in a 1983 essay of his I just re-read, "The Artificial Agent: Transformations of the Robot in Science Fiction" (Annals of the American Academy of Political and Social Science 470: 171-179). Klass spent much of his career as Professor of Anthropology at Barnard College (Columbia University). But his early career was one saturated in science fiction. The brother of William Tenn (the pen name of Philip Klass), Morton Klass contributed several sf stories in the 1950s and early 1960s, several of which were subsequently reprinted in anthropological science fiction collections like Leon Stover's Apeman, Spaceman.

In this essay, he tries to conjoin those two otherwise distinct careers in a bit of speculative, cultural analysis on why we feel more comfortable with the alien we've manufactured (the alien we know?) than with the one we don't:

The robot in science fiction was portrayed at first as an alien and as a threat, but the danger was perceived as primarily an economic one--apart, that is, from the theological danger. The robot may drive us from our jobs and otherwise destroy our economic well being, it was felt; it may even threaten to destroy the world as we know it; it may endanger our collective soul. But we have never believed it would dishonour or corrupt us, something we have always assumed that our aliens wanted most of all to do. Perhaps not surprisingly then we seem to be able to live with whatever threat, economic or theological, the robots represent; we do not exhibit horror or revulsion, or even very much trepidation.


What strikes me about this passage is the fate of the robot today. Is it considered alien at all? Perhaps this is one of the reasons I found the movie version of I, Robot so unsatisfying: the robot today is hardly a figure of fear (at least to those people not being bombed by drones). I would even go further and say that the robot isn't really figured as a robot at all, if by that we mean some anthropomorphic, Čapek-inspired robot. Instead, we have a wide variety of hardware and software agents that have seamlessly(?) extended our cognition, perception, and sociality without actually demanding that we consciously recognize their alien autonomy from us. Of course, robotics labs manufacture extremely life-like robots, but these are not the ones that we encounter in our everyday practice. Our robots have faded into the (human) woodwork--as tools we use. Or perhaps it's the case that we have become more alien, multiply supplemented by the artificial and hence no longer distinct from some intelligent 'Other'.
