
Posts

Showing posts with the label NodeXL

Stuck in the Turing Matrix - arXiv Pre-print

Stuck in the Turing Matrix: Inauthenticity, Deception and the Social Life of AI
Samuel Gerald Collins

The Turing test may or may not be a valid test of machine intelligence. But in an age of generative AI, the test describes the positions we humans occupy. Judging whether something is human- or machine-produced is an everyday condition for many of us, one that involves taking a spectrum of positions along what the essay describes as a Turing Matrix, combining questions of authenticity with questions of deception. Utilizing data from Reddit postings about AI in broad areas of social life, the essay examines positions taken in the Turing Matrix and describes the complex negotiations Reddit posters undertake as they strive to make sense of the AI world in which they live. Even though the Turing Test may not tell us much about the achievement of AGI or other benchmarks, it can tell us a great deal about the limitations of human lif...

Abstract for a paper-in-progress: quarantine and sentiment analysis.

A Beautiful Day in the Neighborhood: sentiment analyses of new connections and communities in a COVID world

Quarantine re-makes the city around us, re-defining “inside” and “outside,” “home” and “neighborhood.” “Staying home” means complying with a socially and politically constructed bubble that delimits not only who or what can move from one side to another, but the protocols to be followed when that barrier is breached. Moreover, transitioning from one to the other is not just a matter of spatial movement; it also involves a shift in identity, from the one quarantined to the one not quarantined. Finally, quarantine is a temporal state: fourteen days, or until the city lifts the quarantine measures. Under these conditions, what does “home” mean? What does “inside” mean? And when one is quarantined, what do more collective identities like “community” and “neighborhood” mean? U...

#AMANTH2016 WRAP-UP

The American Anthropological Association Annual Meeting is over, and, with it, the brief spurt of Twitter traffic that marks the event. Here's a graph of Twitter traffic over the course of the week, created in NodeXL through a Twitter search for the hashtag #amanth2016. And some statistics on the graph:

Vertices: 1746
Unique Edges: 4090
Edges With Duplicates: 6825
Total Edges: 10915

Here are the 50 most popular Twitter accounts by betweenness centrality: americananthro culanth biellacoleman omanreagan thevelvetdays cmcgranahan aba_aaa berghahnanthro ericalwilliams7 allergyphd michelleakline amreese07 jasonantrosio fatimatassadiq anthroboycott peepsforum anthrofuentes teachingculture hilaryagro aaa_cfhr dukepress afeministanthro anandspandian anthrocharya aunpalmquist shahnafisa jahkarta nolan_kline elena_sesma savageminds stanfordpress girlhoodstudies transfo...
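Betweenness centrality, the measure used to rank these accounts, scores a node by how often it sits on the shortest paths between other pairs of nodes. NodeXL computes it inside Excel; as a rough illustration of the same calculation, here is a minimal sketch using Python's networkx on an invented toy edge list (the account names are borrowed from the list above purely as labels, not the actual #amanth2016 data):

```python
import networkx as nx

# Toy edge list; the names are placeholders, not the real #amanth2016 graph.
edges = [
    ("americananthro", "culanth"),
    ("culanth", "biellacoleman"),
    ("americananthro", "biellacoleman"),
    ("biellacoleman", "omanreagan"),
    ("omanreagan", "thevelvetdays"),
]

G = nx.Graph(edges)

# Score each account by how often it lies on shortest paths between others.
bc = nx.betweenness_centrality(G)
ranked = sorted(bc, key=bc.get, reverse=True)
print(ranked[0])  # → biellacoleman, the account bridging the two clusters
```

In this toy graph, the account joining the triangle to the chain gets the highest score, which is exactly why betweenness tends to surface "broker" accounts in conference Twitter networks.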

Defining anthropological community through #anthroboycott

Back on my PC--and here's my whole visualization for #AAA2015. It's the largest set of tweets I've ever mapped from AAA: 21,879 edges, 3,543 nodes. I ran it when I got to my office on Monday, November 23, and it covers the whole 8-day window that includes some pre- and post-meeting tweets. I used the Clauset-Newman-Moore clustering algorithm to group the tweets--said to be particularly effective at revealing community structure in large networks. Finally, each identified "group" is arranged in its own box, courtesy of the Harel-Koren Fast Multiscale layout algorithm. Nice! That said, it's hard to beat Marc Smith, who mapped out the network on Saturday, November 21. He's got a neater graph than mine--it's his software, after all! But I still wanted to work through my own data. In many ways, the graph is typical of associations. Marc Smith et al. (2014) might call this an example of a "tight crowd": "highly interconnected people w...
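For readers who want to try the grouping step outside NodeXL: networkx ships the Clauset-Newman-Moore greedy modularity algorithm as greedy_modularity_communities. A minimal sketch on an invented two-cluster graph (not the AAA2015 data):

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Two invented triangles joined by a single bridge edge (not the #AAA2015 data).
G = nx.Graph([
    ("a", "b"), ("b", "c"), ("a", "c"),
    ("x", "y"), ("y", "z"), ("x", "z"),
    ("c", "x"),  # the bridge
])

# Clauset-Newman-Moore greedy modularity maximization, as in NodeXL's grouping.
groups = list(greedy_modularity_communities(G))
for g in groups:
    print(sorted(g))
```

The algorithm greedily merges the pair of groups that most increases modularity and stops when no merge helps, so the two triangles emerge as separate groups despite the bridge, which is the same logic behind the boxed clusters in the visualization.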

Anthropology on the Long Tail

Small Big Data? Of the many hyperbolic predictions in bestselling books devoted to big data, none is more astounding than Mayer-Schönberger’s and Cukier’s claim that big data will eliminate the need for sampling (why sample when you’ve got all the data?). But here’s the thing: we don’t have all of the data. Let’s look at Twitter. First, people who tweet are not a representative sample of the population. Second, like most commercial platforms, Twitter has moved toward more proprietary policies on the data it has mined from us. Most of us can only access up to 1% of relevant tweets for a given query. That can still be a lot of tweets, and that data is, for the moment, free. But is that big data? In other words, we’ve got sampling bias. If you can detect it, though, you can correct for it: Morstatter et al. recommend bootstrapping the data in order to correct for the biased sample. But it may not be so easy with some of the work we do. For example, t...
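As a rough illustration of the bootstrapping idea (a generic percentile bootstrap, not Morstatter et al.'s exact procedure), here is a minimal sketch in Python on an invented 1,000-tweet sample; the data and the statistic are hypothetical:

```python
import random

random.seed(42)

# Hypothetical 1% Twitter sample: 1 = tweet matches our query theme, 0 = not.
sample = [1] * 120 + [0] * 880  # observed proportion: 0.12

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05):
    """Percentile bootstrap confidence interval for a sample statistic."""
    boots = sorted(
        stat([random.choice(data) for _ in data]) for _ in range(n_boot)
    )
    return boots[int(alpha / 2 * n_boot)], boots[int((1 - alpha / 2) * n_boot) - 1]

lo, hi = bootstrap_ci(sample, lambda xs: sum(xs) / len(xs))
print(round(lo, 3), round(hi, 3))  # the interval should bracket 0.12
```

Resampling with replacement gives a sense of how much the statistic would wobble across alternative 1% samples; it quantifies the sampling variability, though it cannot by itself remove a systematic bias in who tweets.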