One of the many benchmarks for AI is the “Turing test,” Alan Turing’s adaptation of the “imitation game” where an interrogator must decide which of two respondents is a computer. It is, as many have pointed out, a strangely indirect test, one that depends on the credulity of the human interrogator and the capacity of the machine to deceive (Wooldridge 2020). Will they believe the computer? And will the computer be a good enough liar? As Pantsar (2025) comments, “For the machine to pass the test, it needs to impersonate a human successfully enough to fool the interrogator. But this is puzzling in the wide context of intelligence ascriptions. Why would intelligence be connected to a form of deception?” On the one hand, measuring AI through its deceptive power has the benefit of avoiding the idiocy of attempting to establish a measure of intelligence, a task deeply imbricated in racial eugenics (Bender and Hanna 2025; Wooldridge 2020). On the other, generative AI applicat...