Showing posts with label Robots. Show all posts

Tuesday, January 29, 2019

Work Out of Joint: Our Future Lives With Robots and Intelligent Agents

Wired magazine: mostly hagiographies of Silicon Valley entrepreneurs, capitalist porn, vague reassurances about the future from the uber-wealthy, $500 headphones.  But Senior Associate Editor Jason Kehe was "weary with dystopian prediction of nefarious robots taking jobs from humans," so he challenged seven sf writers to "imagine a world in which the gig economy and automation have redefined the daily grind" (7).

The results?  A collection of stories--"The Next 25 Years: What'll We Do?"--from a stellar group of writers: Laurie Penny, Ken Liu, Charles Yu, Charlie Jane Anders, Nisi Shawl, Adam Rogers, and Martha Wells.  And only one killer robot (from Martha Wells), which, to be fair, isn't killing anyone.  There's still much here that is dystopian--but from the next 25 years?  Of course, these aren't futurist prognostications; like any good sf, they're descriptions of our present, which is dystopian enough.  Or, as China Mieville has written, "We live in utopia, it just isn't ours" (Mieville 2015).

What I found fascinating about this collection was the way the writers highlight our service to robotic and digital agents--the way, in other words, that we supplement their agency by discounting our own.  In Laurie Penny's "Real Girls," an unemployed writer becomes a simulation of an AI girlfriend:

"Niall explained that a lot of lonely people liked the idea of having a robot girlfriend who was always on call and had no feelings of her own, a remote algorithm that could shape itself to your particular needs--they'd seen it on TV.  But the technology wasn't there yet.
     Hence the front company.  All over the world, Niall said, broke millennials who needed cash fast were signing NDAs and signing on to pretend to be robots" (Penny 2019: 62).
Similarly, Charles Yu's "Placebo" has an actor playing a doctor in order to give a human face to end-of-life decisions being made by a software agent:

"The human in the room is not in charge.  The thing is.  As it should be.  Brad barely made it through a year of junior college.  The black cube in the corner, on the other hand, is a $10 million doctor in a box, running trillions of calculations per second, simulations within simulations within whatever" (Yu 2019: 67).

And a journalist in Charlie Jane Anders's "The Farm" re-edits his story until it can satisfy a convocation of super-charged, robotic trolls: "a virtual machine populated with copies of a few trillion different bots, scraped from the internet, living inside a fake social network" (Anders 2019: 70).  Anything remotely objectionable--anything that might pierce the veil of the phantasmagoria of media news--is summarily rejected.  Yet they still need the human writer, at least for the moment.

I agree with Jason Kehe: we’re missing something in concentrating on the ways robots could be taking (or are taking) jobs away from people.  After all—that cat’s already out of the bag: automation has long been a management tool for the subjugation of labor.  But robots (and intelligent agents) are much more than smarter, more autonomous versions of automated systems from the 1950s and 1960s.  Our interactions with robots are all about shifting agency back and forth from the human to the non-human.

As I described in my (paywalled) essay, "Working for the Robocracy":
“But while the Mechanical Turk certainly exploits the reserve army in its apportionment of low-paid, menial tasks, I would argue that it creates an additional reserve army—this one a robot army that exists at some point in the future.  That is, workers on MTurk (Amazon’s platform) are essentially placeholders for tasks that robots will do later when they’ve acquired the skills in pattern recognition, natural language processing and translation.  This is, in other words, the repetition of a process that began with industrialization: first, reduce the worker to repetitive, machine-like tasks, and then replace them with a machine.  Automated phone calls have a similar quality.  While few consumers prefer automated service calls to person-to-person, the intelligent agent processing the phone call is based on the real (but robotic) work of decades of human workers who have been reduced to an algorithm of scripts in order to sell more product.  That is, the work presupposes the robot, and the robot is therefore able to replace the worker because the worker has already been replaced: forced to become a reified simulacrum of themselves in order to maintain employment, not only in terms of technical operation, but also in intellect and affect.”

The moments when we grant robots agency, or when robots "give" us robotic agency: these are diluvial events happening right now that may tell us a lot about our human-robot futures.  The people in these stories aren't being replaced by machines, precisely: they're being reduced to algorithmic shadows of themselves in order to serve non-human agencies that are supposed to replace them altogether at some middle point when humans become more robot-like and robots become more human-like.  After all, another way to pass the Turing Test is to lower the bar by making us less human than we are now.  When we are forced to simulate non-human agency--when we interact with phone trees, ATMs, and security systems--the first things to go are the skein of affect and discourse that characterizes even rudimentary social interactions.  To talk to the machine, we will have to become the machine. 

There's one more story that could fit into this fascinating collection: Philip K. Dick's Time Out of Joint (1959).  True to the Dickian oeuvre, Time Out of Joint is a novel of paranoia, of madness and, ultimately, one that interrogates reality.  Dick's protagonist, Ragle Gumm, spends his time winning newspaper contests and drinking beer, but that reality gradually unravels to reveal another, in which the newspaper contest is a psychological cover for the mathematics of predicting nuclear strikes in a war against lunar colonists battling for independence. 

There’s a lot in Time Out of Joint (and in many other Dick novels) about the ultimate reality of our lives, but the relevance of the novel to the future of work lies in the triviality of Gumm’s labor.  His job--as the sole person capable of predicting nuclear strikes--is suppressed under the triviality of the newspaper contest, “Where Will the Little Green Man Be Next?”  He spends all day pursuing a pleasure that looks suspiciously like work. 

Indeed: through the magic of neoliberalism, much of our labor goes under the guise of pleasure.  Social media mine our quotidian lives in order to connect us to products and services, and to harvest our connections with others.  Like Dick’s Ragle Gumm, we spend hours each day laboring for a cause we know little about, nor one that we would necessarily agree with were we cognizant of the fate of our data.  This doubling has become axiomatic in late capitalism: our pleasure is simultaneously a labor, while efforts to coat labor in a veneer of pleasure fail to ameliorate its exploitative dimensions.  On some level, then, it’s work all the way down. 

If the Wired stories dwell on service to the algorithm, and on the reduction of the human to the capacity to simulate robotic agents, then our contemporary “work out of joint” harnesses our pleasure in the service of capitalist algorithms.  Our suspicions--our paranoia--about this subtended labor do little to undo it.  One phantasmagoria erodes to reveal another. 

Take Facebook’s recent “10 Year Challenge”: was it, people wondered, innocent pleasure or an experiment to train Facebook’s facial recognition algorithms (O’Neill 2019)?  Facebook dismissed these suspicions as paranoid fantasies, but, of course, Facebook runs on the subterfuge of pleasure-as-work.  If this is our present, what future, phantasmagoric palaces will be built to conceal our complicity in the exploitation of ourselves and others in the name of corporate profits that we will never share? 


References

Anders, Charlie Jane (2019).  “The Farm.”  Wired (January): 68-71.

Collins, Samuel Gerald (2018).  “Working for the Robocracy.”  Anthropology of Work Review 39(1).

Dick, Philip K. (1984 [1959]).  Time Out of Joint.  NY: Bluejay. 

Mieville, China (2015).  “The Limits of Utopia.”  Salvage Zone 1.  Retrieved from http://salvage.zone, November 4, 2017. 

O’Neill, Kate (2019).  “Facebook’s ’10 Year Challenge’ Is Just a Harmless Meme—Right?”  Wired.com.  Retrieved January 17, 2019. 

Penny, Laurie (2019).  “Real Girls.”  Wired (January): 60-63.

Yu, Charles (2019).  “Placebo.”  Wired (January): 66-67.

Tuesday, July 29, 2008

robots and agents

The robot-gone-awry has been a theme in literature and popular culture since at least Goethe. The 20th-century variant generally revolves around advances in robotic technologies that lead to robots displacing humans altogether--basically the Braverman thesis (after Harry Braverman) followed to its natural asymptote. But can the same thing be said of other kinds of non-human agents? I mean not the anthropomorphic robots produced by various research groups to simulate human feelings, speech, perceptions or cognition, but those agents that swarm in and out of our lives as vaguely intelligent, vaguely autonomous search engines, routers, global positioning systems, spyware, etc. What about these? The difference between these and more anthropomorphic agents is in a way similar to what Andy Clark (in Natural Born Cyborgs) terms "transparent" versus "opaque" technologies:

A transparent technology is a technology that is so well fitted to, and integrated with, our own lives, biological capacities, and projects as to become (as Mark Weiser and Donald Norman have both stressed) almost invisible in use. An opaque technology, by contrast, is one that keeps tripping the user up, requires skills and capacities that do not come naturally to the biological organism, and thus remains the focus of attention even during routine problem-solving activity. (37)


I think I would re-work Clark to include, in the list of "opaque" technologies, agents that emulate human behavior and thus make human-like demands upon our attention and concentration--a politics of recognition for robots, as it were, that doesn't exist with more transparent technologies, which simply reflect back upon the self to the ultimate amplification of ego.

Monday, June 23, 2008

Manufacturing the Alien

I've been thinking on and off about aliens these days. One of the reasons must be because I'm on the CONTACT! listserve, which is fairly chock-a-block with speculations on Earth-like planets in other solar systems. The other has to do with my research on other "aliens," those non-human agents that are more and more part of our everyday life.

Of course, it's odd to think about these "agents" (software or hardware) as "aliens" at all, but this is exactly what Morton Klass did in a 1983 essay of his I just re-read, "The Artificial Agent: Transformations of the Robot in Science Fiction" (Annals of the American Academy of Political and Social Science 470: 171-179). Klass spent much of his career as Professor of Anthropology at Barnard College (Columbia University). But his early career was one saturated in science fiction. As the brother of William Tenn (aka Philip Klass), Morton Klass contributed several sf stories in the 1950s and early 1960s--several of which were subsequently reprinted in anthropological science fiction collections like Leon Stover's Apeman, Spaceman.

In this essay, he tries to conjoin those two, otherwise distinct careers in a bit of speculative cultural analysis on why we feel more comfortable with the alien we've manufactured (the alien we know?) than with the one we don't:

The robot in science fiction was portrayed at first as an alien and as a threat, but the danger was perceived as primarily an economic one--apart, that is, from the theological danger. The robot may drive us from our jobs and otherwise destroy our economic well being, it was felt; it may even threaten to destroy the world as we know it; it may endanger our collective soul. But we have never believed it would dishonour or corrupt us, something we have always assumed that our aliens wanted most of all to do. Perhaps not surprisingly then we seem to be able to live with whatever threat, economic or theological, the robots represent; we do not exhibit horror or revulsion, or even very much trepidation.


What strikes me about this passage is the fate of the robot today. Is it considered alien at all? Perhaps this is one of the reasons I found the movie version of I, Robot so unsatisfying: the robot today is hardly a figure of fear (at least to those people not being bombed by drones). I would even go further and say that the robot isn't really figured as a robot at all, if by that we mean some anthropomorphic, Capek-inspired robot. Instead, we have a wide variety of hardware and software agents that have seamlessly(?) extended our cognition, perception and sociality without actually demanding that we consciously recognize their alien autonomy from us. Of course, robotics labs manufacture extremely life-like robots, but these are not the ones that we encounter in our everyday practice. Our robots have faded into the (human) woodwork--as tools we use. Or, perhaps it's the case that we have become more alien, multiply supplemented by the artificial and hence no longer distinct from some intelligent "Other."

Cybernetics and Anthropology - Past and Present

I continue to wrestle with the legacy of cybernetics in anthropology - and a future premised on an anthropological basis for the digital.  ...