In a plot twist that surprises exactly no one, Twitter (sorry, X) updated its privacy policy today to reflect that they might use your public posts to train Elon Musk’s next toy: xAI.
X can now:
… use the information we collect and publicly available information to help train our machine learning or artificial intelligence models for the purposes outlined in this policy.
(From section 2.1 if you were curious.)
Shocker.
xAI’s stated purpose is:
…to understand the true nature of the universe.
How a bunch of tweets (sorry, X posts) will do that is anybody’s guess. By now, it’s pretty clear that large language models have their issues. There doesn’t seem to be any true understanding under the virtual hood.
Let’s also not forget that Twitter (damn, I can’t keep doing it. X, of course) may not provide the most uplifting training material for an LLM. Sure, there are great people on there who share interesting thoughts and perhaps even the occasional innovative idea. There are also armies of bots who promise you crypto heaven, intolerant asswipes who spit vicarious vitriol, AI-generated influencers…
Understanding the universe’s greatest mysteries, here we come. Sarcasm alert.
Your tweets becoming AI food is not the only thing that changed in X’s privacy policy. This Mashable article has a good overview. Summary:
Compared to the old privacy policy, there are now several very important new types of user data that Twitter collects, including employment and educational history, as well as biometric data. The company also plans to use that data in new ways, most importantly to train AI.
Big Brother, your new name is X?
But it’s okay. Click, consent, and log on. It’s almost too easy.
Which brings us to technology creep. In their (highly recommended) book Re-Engineering Humanity, law scholar Brett Frischmann and philosopher of technology Evan Selinger take a closer look at techno-social engineering: how technology becomes ever more deeply integrated into our lives, and how big (and small) tech companies try to make that integration so seamless that we don’t notice.
In their book, they discuss several types of ‘technology creep’ — ways in which usually small tweaks culminate in users handing over more information, data, and attention to technological services.
The latest X development illustrates two kinds of creep: surveillance and contract creep.
Surveillance creep is the phenomenon in which many of the algorithmically mediated services we use collect a little bit more of our data with each update. Netflix knows what you watch, your GPS/Strava app knows where you are, Amazon knows what you read, and, thanks to your loyalty card/app, your supermarket knows your spending and eating patterns better than you do. Up to a point, this might improve the service on offer, but this overeager data collection sometimes jumps well past that point. If data is the new oil, a lot of tech companies are drilling down hard.
All these platforms/apps/etc. are legally required to tell you about changes in their privacy policy and user agreement. That’s where contract creep comes in. Make no mistake: every time you make an account somewhere, you’re signing an electronic contract. It doesn’t feel that way because that’s how it’s been designed. Terms of service and the like are deliberately obtuse and easy to sign with just a click. Who bothers reading them? I don’t want to trawl through thousands of words of legalese; I want an account on this platform/to download this app/… Most electronic contracts are designed to be too annoying to read and too easy to ‘sign’. It takes only a click.
Frischmann and Selinger see three mitigation strategies for this ‘techno-social dilemma’: challenge conventional wisdom (what does the platform/app really do? Does all social media need to look the same?); create gaps (do all apps need to interact? Does logging in have to happen automatically?); and introduce transaction costs and inefficiencies (if you had to pay for the service, would you still use it? If not, is it truly worth your time?).
I’m still on X, even though Musk’s latest capricious decision and the dwindling engagement mean I’m a lot less active on there than I once was. While I don’t want to spread myself even thinner across different platforms, I am exploring Bluesky, which started as an initiative within the old Twitter and looks a lot like it. I hope its promises of a decentralized social network protocol and its open-source ideal won’t turn out to be castles in the sky…