One of the many benchmarks for AI is the
“Turing test,” Alan Turing’s adaptation of the “imitation game” where an
interrogator must decide which of two respondents is a computer. It is, as many
have pointed out, a strangely indirect test, one that depends on the credulity
of the human interrogator and the capacity of the machine to deceive (Wooldridge
2020). Will they believe the computer? And will the computer be a good enough
liar? As Pantsar (2024) comments, “For the machine to pass the test, it needs
to impersonate a human successfully enough to fool the interrogator. But this
is puzzling in the wide context of intelligence ascriptions. Why would
intelligence be connected to a form of deception?”
On the one hand, measuring AI through its
deceptive power has the benefit of avoiding the idiocy of attempting to
establish a measure of intelligence, a task deeply imbricated in racial
eugenics (Bender and Hanna 2025; Wooldridge 2020). On the other, generative AI applications seem to have been developed with deception in mind: deception from all parties. The application designers want to present outputs as just as good as (if not better than) the products of human work. The humans who use generative AI often seek to present those outputs as their own work. And even though ethical practice demands that we acknowledge the use of AI, the goal is that people consuming AI material will be unable to tell the difference, to mark where the machine ends and the human begins. So even though Turing
tests may be a poor measure of machine “intelligence,” they seem to fit the
moment.
But there’s another part of the Turing test that is especially relevant: the organization of the “imitation game” itself.
Let’s go back to Turing’s 1950 article. As Sterrett points out in a series of
articles, there are two games described in Turing’s landmark essay, an
"Original Imitation Game” and a “Standard Turing Test” (the terms are
Sterrett’s) (Sterrett 2020: 469). The first one describes two rooms, one for
the interrogator, and the other for a man and woman, who communicate with the
interrogator via a teletype.
In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as “I am the woman, don't listen to him!” to her answers, but it will avail nothing as the man can make similar remarks. We now ask the question, “What will happen when a machine takes the part of A in this game?” Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, “Can machines think?”
(Turing 1950: 434)
While people have certainly looked at the gender component here, most of the attention has been on the game itself and its overall intention: deception (Patterson et al. 2018). I want to focus on the arrangement itself.
The game “works” through a series of limitations. The interrogator can’t go into the other room, nor can the respondents burst into the interrogator’s room. The interrogator is also prevented from
hearing them. Communication, Turing tells us, is best accomplished through
teletype. Of course, one would need such an arrangement with a computer. Yet game rules aren’t just arbitrary, and Turing’s test tells us much about social
hierarchies and workplace organization. By the time Turing wrote his test, the
office had taken its contemporary form: a spatialized organizational chart made up of offices communicating with each other through a variety of technologies (mail, telephone, teletype, pneumatic tube). Inter-office communication was also rendered through a variety of intermediaries and, as the “organization man” developed, cybernetic systems of communication and decision-making came to dominate managerial processes (Whyte 1956).
In other words, Turing’s “imitation game”
rules are a description of contemporary work. Turing’s readers would have
easily conjured up an image of an organization where people don’t communicate
face-to-face with each other. This has only become more marked in the intervening decades, in which people are duped into long-term relationships without ever meeting their con artist. More to the point, we are more and more called upon to judge between human- and computer-generated outputs, a task at which humans have proven occasionally successful. But perhaps only under special conditions.
One of the theses that I have suggested (both on this blog and in published work) is that the triumph of automation, algorithms, and AI is initially built not upon technological change, but upon behavioral and cognitive change. Before business owners can begin to replace workers with algorithmic processes and generative AI, before, in other words, those outputs can be accepted as “just as good” or “better” than human work, humans and their labor must be constrained, delimited, and “de-skilled.” And more than this: everyone has to be convinced that these narrowly defined outputs are as good as we humans get. With the Turing test, human communication is reduced to lines on a teletype. With algorithmic analysis, applying for a job, getting an apartment, or reading a mammogram are reduced to scores that can be ranked, patterns that can be assigned probabilities, etc.
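To make that reduction concrete, here is a minimal sketch, in Python, of how an application-screening system might collapse a person into a single rankable number. The fields, weights, and scoring logic are my own invented illustrations, not any real vendor's system.

```python
# A minimal sketch (not any real vendor's system) of how algorithmic
# screening might collapse a job application into a single rankable
# number. The fields and weights here are invented for illustration.

WEIGHTS = {"years_experience": 0.5, "degree": 0.3, "keyword_hits": 0.2}

def score(applicant: dict) -> float:
    """Reduce a person to a weighted sum: the 'reduction' in the text."""
    return sum(weight * applicant.get(field, 0)
               for field, weight in WEIGHTS.items())

applicants = [
    {"name": "A", "years_experience": 4, "degree": 1, "keyword_hits": 7},
    {"name": "B", "years_experience": 9, "degree": 0, "keyword_hits": 2},
]

# The ranking "sees" only the score; everything else about the
# applicant has already been discarded.
for a in sorted(applicants, key=score, reverse=True):
    print(a["name"], round(score(a), 2))
```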
As I explained in an earlier essay, this
process of labor alienation unfolds across several steps. First, human labor is
parsed out into a series of constituent functions, consistent with the
Taylorization that transformed factory labor. Second, those functions are
reconceptualized as algorithms: steps in a chain of operations that proceed
from inputs to outputs. These might be scripts for telemarketing, decision
trees for insurance claims, procedures for reporting inventory loss, etc. Next,
workers are confined to those algorithmic choices, and part of the de-skilling process is penalizing workers for deviating from the script. Finally,
automation (AI applications and AI-infused platforms) replaces workers.
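As a toy illustration of the second and third steps, consider the kind of decision tree an insurance-claims worker might be confined to. This is only a sketch; every threshold, category, and routing label here is hypothetical.

```python
# A toy sketch of the decision tree described above: the claims
# handler's judgment is replaced by a fixed chain of branches.
# Every threshold, category, and routing label here is hypothetical.

def triage_claim(amount: float, documented: bool, prior_claims: int) -> str:
    if not documented:
        return "deny: missing documentation"
    if amount < 1_000 and prior_claims == 0:
        return "auto-approve"
    if amount < 10_000:
        return "route to adjuster script B"
    return "escalate to review"

# The worker (and, later, the platform that replaces the worker) can
# only arrive at one of these four outputs; anything outside the tree
# counts as a penalized "deviation from the script."
print(triage_claim(amount=750.0, documented=True, prior_claims=0))
```

The point is not the particular branches but the confinement: once the work has been expressed this way, swapping in an automated system is trivial.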
[Image produced through ChatGPT]
The important part here is the human side of
the transformation. Humanity must be reduced, and people must be convinced that
algorithmic processes are interchangeable with the etiolated human. In other
words, the Turing test only works if we stay in our room, if we don’t shout, if
we don’t bang on the walls with our fists. And there are two levels of deception: one in convincing (or forcing) people to reduce themselves to narrowly defined outputs, the other in misrecognizing those algorithmic products as the total of human possibility.
Does generative AI involve a similar series of
reductions? Of course it does. In all kinds of professions, our labor has been
reduced to the production of “content”: bland and repetitive text,
stereotypical images, boilerplate scripts. People produce like this in accordance with the demands of late capitalism. The only way, after all, to monetize that YouTube channel is to update it constantly with new material. And that new material must fit the algorithmic desires embedded in the platform. So: you start making a lot of content, and you align it with the algorithm. The next step is, of course, to replace you with generative content. ChatGPT may have come as something of a shock to us educators in 2022, but it must have come as no surprise to platform laborers, who have been human generative AI for several years now.
References
Bender, Emily and Alex Hanna (2025). The AI
Con. NY: Harper.
Collins, Samuel Gerald (2018). “Welcome to Robocracy.” Anthropology of Work Review.
Pantsar, Markus (2024). “Intelligence is not
deception.” AI & Society.
Patterson, W., J. Boboye, S. Hall, and M. Hornbuckle (2018). “The Gender Turing Test.” In D. Nicholson (ed.), Advances in Human Factors in Cybersecurity (AHFE 2017), Advances in Intelligent Systems and Computing, vol. 593. Cham: Springer. https://doi.org/10.1007/978-3-319-60585-2_26
Sterrett, Susan G. (2020). “The Genius of the ‘Original Imitation Game’ Test.” Minds and Machines 30(4): 469–486. https://doi.org/10.1007/s11023-020-09543-6
Turing, Alan M. (1950). “Computing Machinery and Intelligence.” Mind LIX(236): 433–460.
Whyte, William H. (1956). The Organization
Man. NY: Simon and Schuster.
Wooldridge, Michael (2020). A Brief History of
Artificial Intelligence. NY: Flatiron Books.