AI agents are coming for your privacy, warns Meredith Whittaker in this article from The Economist (September 9th, 2025).
“An AI agent is a complex system including AI models, software and cloud infrastructure. For the system to do its thing—summarising your email or spending your money—it needs near-total access to your digital life. This is not the familiar request for permission to see your contacts; it is akin to giving ‘root’ access to your entire device. Your browser history, credit-card details, private messages and location data are all poised to become AI fodder—heaped in an unsecure pile of undifferentiated data ‘context’,” she says.
It’s important to have voices like hers to explain things.
There is still sooooo much hype around AI, agentic and otherwise—not surprisingly, according to a study I read recently, the less people know about it, the bigger the hype. She speaks about the “application” layer. A friend explained to me the different levels of protection in the privacy/security/anonymity game (three concepts which are related but distinct): surface (like when you are browsing, or on social media, etc.), application, and system. GrapheneOS (for Android mobiles) offers protection at the system level, but it requires a minimum of knowledge on the part of the user.
Unfortunately, I think it will get worse before it gets better. We have just turned the corner in the development cycle of a new technology (about 20-25 years into the cycle), the point when people start to smarten up and discover the harms and ills that come with it. As Paul Virilio said: “when you invent the ship, you invent the shipwreck.”
It took a whole century to (1) realise the harms created by the industrial revolution—mass production and mass consumerism—and (2) start to do something about them—consume more consciously, recycle, etc. Hopefully, we won’t take as long with the digital, because if we do, by the time we wake up, we (i.e., humanity) will live in a dystopia that only the most pessimistic sci-fi writers could have imagined.
In my mind, one bright light is that, today, we DO hear critical voices, voices that provide convincing arguments to inform and educate. During my PhD, in the mid-to-late 2010s, I started to become aware of the underlying workings of the digital ecosystem. I got discouraged, because apart from some small pockets of academic researchers, everyone was so incredibly excited about the development of digital technologies, and most people could not fathom any reality other than the hyped-up image that was presented.
I felt that what Aldous Huxley described in “Brave New World”—a humanity running towards the cliff singing and dancing—was becoming reality. Ten years later, I can see that this is no longer the case, and that gives me hope.