Datafication, Phantasmagoria of the 21st Century

Tag: Digital Ecology (Page 1 of 3)

Agentic AI & Privacy

AI agents are coming for your privacy, warns Meredith Whittaker in this article from The Economist (September 9th, 2025).

An AI agent is a complex system including AI models, software and cloud infrastructure. For the system to do its thing—summarising your email or spending your money—it needs near-total access to your digital life. This is not the familiar request for permission to see your contacts; it is akin to giving “root” access to your entire device. Your browser history, credit-card details, private messages and location data are all poised to become AI fodder—heaped in an unsecured pile of undifferentiated data “context”, she says.

It’s important to have voices like hers to explain things.

There is still sooooo much hype around AI, agentic and otherwise—not surprisingly: according to research I read recently, the less people know about it, the hyper the hype. She speaks about the “application” layer. A friend explained to me the different levels of protection in the privacy/security/anonymity game (three different concepts which are related but separate): surface (when you are browsing, on social media, etc.), application, and system. GrapheneOS (for Android mobiles) offers protection at system level, but it requires a minimum of knowledge on the part of the user.

Unfortunately, I think it will become worse before it becomes better. We have just turned the corner in the development cycle of a new technology (about 20-25 years into the new cycle) when people start to smarten up and discover the harms and ills that come with the new technology. As Paul Virilio said: “when you invent the ship, you invent the shipwreck.”

It took a whole century to (1) realise the harms created by the industrial revolution—mass production and mass consumerism—and (2) start to do something about it: consume more consciously, recycle, etc. Hopefully, we won’t take as long with the digital, because if we do, by the time we wake up, we (i.e., humanity) will live in a dystopia that only the most pessimistic sci-fi writers could have imagined.

In my mind, one bright light is that, today, we DO hear critical voices, voices that provide convincing arguments to inform and educate. During my PhD, in the mid-late 2010s, I started to become aware of the underlying functioning of the digital ecosystem. I got discouraged, because apart from some small pockets of academic researchers, everyone was so incredibly excited about the developments of digital technologies, and most people could not fathom any other reality than the hyped up image that was presented.

I felt that what Aldous Huxley described in “Brave New World”—a humanity running towards the cliff singing and dancing—was becoming reality. Ten years later, I can see that it’s not the case anymore, and that gives me hope.

Predictive AI Fails At… Predicting

You would think that, with all the hype around “AI” (in quotation marks because the word has become a catch-all bag covering a whole range of poorly defined realities) and our civilisation’s enduring blind faith in the omniscience of digital technologies, the technology would at least perform its function remarkably well.

I mean wouldn’t you?

Well, it seems not.

The Markup is “a nonprofit newsroom that challenges technology to serve the public good.” (Check here if you want to know more, I have been following them for years, they do remarkable work.)

This is what they found out (see below).

A software company sold a New Jersey police department an algorithm that was right less than 1% of the time. Read the whole article here.

It is NOT a blip. It is NOT an exception, an anomaly, a special case. It is another day in the office for predictive AI. And those issues will NOT go away with the next model iteration.

They are here to stay because they are an intrinsic feature of the technology. As a technology of quantification, AI (or whatever name we want to give the Digital) does NOT and, in fact, can NOT reliably handle qualitative aspects of life.

This is why the likes of Facebook employ human content moderators to detect and remove gore, violence and generally harmful content from the platform. (By the way, those people are often sub-contracted, so they do not appear on the main companies’ annual reports, and their contracts contain a clause stating they won’t sue if they get PTSD on the job, which they often do. Read here about what happened when they did.)

So, despite all the hype, “a rose by any other name would smell as sweet.” When it comes to the social, predictive AI mostly fails at predicting.

OpenAI Wins US$200mn Government Contract

Yesterday (June 16th, 2025), Reuters announced that OpenAI was awarded a US$200mn contract to provide AI tools to the U.S. Defense Department.

Reuters reports: “Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Pentagon said.

So, let me make sure I understand this. One of the most powerful militaries in the world is going to hand control of its critical national security to a tech company whose systems routinely hallucinate, distort or make up basic facts—not because of a lack of data, but because of an essential deficiency in their ability to understand the REAL world? 😳🙄🙄

This is going to be interesting. Welcome to “The Greatest Show On Earth” by Ringling Bros. and Barnum & Bailey Circus!

This is what leading GenAI critic Gary Marcus tells us about AI in his 2023 TED talk.

Gary Marcus TED Talk 2023

A bit old, 2023, but still totally relevant two years later. Two years is a millennium in tech time, and the fact that what he says is still relevant shows that no progress has been made on the most crucial claims about LLMs.

👆 from Marcus’ TED talk above. The imagination boggles at the recipes OpenAI could concoct when in control of the US national security narrative.

By the way, watch the Q&A with TED’s Chris Anderson at the end of Marcus’ TED talk.

With the possibility of government-led regulation becoming more remote due to the geo-political and military strategic importance of digital tech, a global, non-profit organisation to regulate tech may be increasingly needed. Or increasingly utopian. Or most probably both.

Resources for Digital Privacy

A hacker friend sent me a number of resources that introduce and explain digital privacy clearly and simply. I am sharing these here without much comment.

General Resource

A good general resource: https://www.privacyguides.org/en

Why Privacy Is Important

Very short description of why privacy is important (I get SO MANY questions about why it’s important!) https://www.privacyguides.org/en/basics/why-privacy-matters

This is a blurb on why privacy is important by Mullvad VPN: https://mullvad.net/en/why-privacy-matters

NB: the pdf version is available here: https://mullvad.net/pdfs/Total_surveillance.pdf

Threat Modelling

These three articles explain the concept of threat modelling: understanding your own situation in order to know what to do and what not to do.

https://www.privacyguides.org/en/basics/threat-modeling
https://privsec.dev/posts/knowledge/threat-modeling
https://opsec101.org

Common Threats

A little bit more detail on what kinds of threats most people think about when threat modelling: https://www.privacyguides.org/en/basics/common-threats

Then, once the person has thought about their threat model and has a rough idea about it, comes the part of choosing and deploying countermeasures.

Tools

This is a question people often ask me: what tools can I use? Here are some references for tools that can be used, depending on the threat model one has identified: https://www.privacyguides.org/en/tools

It is important to remember that it’s difficult to prescribe a one-size-fits-all solution, because each person’s threat model will be different.

Someone who is only concerned with surveillance capitalism will need to approach things differently vs. a high net worth individual or celebrity concerned about their physical and digital security vs. a political dissident or whistleblower.
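To make that concrete, here is a toy sketch (my own illustration, not something from Privacy Guides) of how a threat model maps to different categories of countermeasures; every profile name and tool category below is hypothetical shorthand:

```python
# Toy illustration: different threat models call for different
# countermeasure categories. All names here are made-up examples.
THREAT_PROFILES = {
    "surveillance_capitalism": [
        "tracker-blocking browser",
        "privacy-respecting search engine",
        "e-mail aliases",
    ],
    "high_net_worth_individual": [
        "hardware security keys",
        "compartmentalised devices",
        "physical security review",
    ],
    "whistleblower": [
        "anonymity network (e.g. Tor)",
        "air-gapped machine",
        "metadata-stripping tools",
    ],
}

def countermeasures(profile: str) -> list[str]:
    """Return countermeasure categories for a given threat profile."""
    return THREAT_PROFILES.get(
        profile, ["start with a general threat-modelling exercise"]
    )

print(countermeasures("whistleblower"))
```

The point of the sketch is simply that the lookup key comes first: without a profile, there is no sensible list of tools to recommend.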

Hope this helps!

DATAFIED (Video presentation for the Capra Course Alumni)

DATAFIED: A Critical Exploration of the Production of Knowledge in the Age of Datafication

This presentation by Hélène Liu introduces the main findings of her PhD critical research on the profound epistemological shift that accompanies the digital age. To a large extent, civilisations can be understood by the kind of knowledge they produce, and how they go about knowing what they know.

Inspired by The Arcades Project, the seminal work of early 20th-century philosopher and social critic Walter Benjamin, “DATAFIED” asks what civilisation is emerging at the dawn of the 21st century. The spread of algorithms (based on quantified, discrete, computer-ready data bits) to all qualitative aspects of life has far-reaching consequences.

The fanfare around the novelty of social media and, more recently, of AI obfuscates the old-paradigm ideology of quantification underlying the development of those technologies. The language used since its inception anthropomorphises digital technology and conceals a fundamental difference between datafied and human ways of knowing. As we embark on a new wave of increasingly inescapable digital architectures, it has become more urgent and more crucial to critically investigate their problematic epistemological dimension.

The video begins with an introduction of Hélène Liu and is followed by her talk that concludes with pointers toward a more regenerative ecology of knowing deeply inspired by the knowledge and insights shared during the Capra course (capracourse.net). After her presentation we hear reactions and reflections by Fritjof Capra, the teacher of the Capra Course and co-author of The Systems View of Life.

Presenter: Hélène Liu 
Helene holds Master’s degrees from the Institut d’Etudes Politiques de Paris-University of Paris (Economics and Finance) and the University of Hong Kong (Buddhist Studies), and a PhD from the School of Design at The Hong Kong Polytechnic University. She is a long-term meditator and student of Vajrayana Buddhism. She recently produced and is releasing her first music album, The Guru Project (open.spotify.com/artist/3JuD6YwXidv7Y2i1mBakGY), which emerged from a concern about the divisiveness of the algorithmic civilisation. The album brings together the universal language of mantras with music from a diversity of geographies and genres, as a call to focus on our similarities rather than our differences.

NB: The link to the Vimeo is https://vimeo.com/839319910

Regulating Big Tech

Read this article by Joseph Stiglitz (see bio below) in Project Syndicate about the nascent steps to protect data privacy in the US. In February 2024, the Biden administration published an executive order to ban the transfer of certain types of “sensitive personal” data to some countries.

This is a drop in the ocean, and the US is way behind in protecting its citizens’ data from being exploited by the players in the data economy (compared to the EU, for example). However, it is probably the beginning of a trend toward increased protection against a predatory system that has created too many anti-competitive practices and social harms to be listed here. Admittedly, the US is walking on eggshells, because regulating the digital seems directly at odds with the US competitive advantage in this domain.

The firms that make money from our data (including personal medical, financial, and geolocation information) have spent years trying to equate “free flows of data” with free speech. They will try to frame any Biden administration public-interest protections as an effort to shut down access to news websites, cripple the internet, and empower authoritarians. That is nonsense.

Over the past 20-25 years, the narrative about digital technology has been consistently driven by Big Tech to hide the full extent of what was really happening. The idealistic beliefs of democratisation, equality, friendship and connection from the early internet served as a smokescreen for the development of a behemoth of a fundamentally exploitative data industry that pervades all areas of the economy and society.

Today, large tech monopolies use indirect ways to try to quash attempts to change the status quo and counter Big Tech abuses.

Tech companies know that if there is an open, democratic debate, consumers’ concerns about digital safeguards will easily trump concerns about their profit margins. Industry lobbyists thus have been busy trying to short-circuit the democratic process. One of their methods is to press for obscure trade provisions aimed at circumscribing what the United States and other countries can do to protect personal data.

The article details previous attempts to ban any provisions preserving executive and congressional power over data regulation, and to establish special clauses in trade pacts granting secrecy rights (an ironic state of affairs considering that the early Internet was developed on exactly the opposite values). It is important to realise that most efforts are spent on surreptitious (INDIRECT) ways to limit any possibility of regulation—through trade agreements, for example—what Stiglitz calls “Big Tech’s favoured ‘digital trade’ handcuffs”.

Stiglitz’s concluding remark reminds us that the stakes are high: ultimately, the choices made today have the potential to impact the democratic order.

Whatever one’s position on the regulation of Big Tech – whether one believes that its anti-competitive practices and social harms should be restricted or not – anyone who believes in democracy should applaud the Biden administration for its refusal to put the cart before the horse. The US, like other countries, should decide its digital policy democratically. If that happens, I suspect the outcome will be a far cry from what Big Tech and its lobbyists were pushing for.

Joseph E. Stiglitz, a Nobel laureate in economics and University Professor at Columbia University, is a former chief economist of the World Bank (1997-2000), chair of the US President’s Council of Economic Advisers, and co-chair of the High-Level Commission on Carbon Prices. He is Co-Chair of the Independent Commission for the Reform of International Corporate Taxation and was lead author of the 1995 IPCC Climate Assessment.

Leaving Traces Online: Identifiers

Visit this website (or copy and paste https://www.deviceinfo.me) and it will show you a long list of all the identifiers that every website you visit can find out about you: your location, your device, etc. All these different data points are then used to create a “fingerprint” of your web browser, allowing the rest of your web activity on that same browser/device to be tracked.

NB: You can visit this website from any of your devices (mobile or desktop/laptop).
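As a rough sketch of the mechanism (my own illustration, not how any specific site implements it), fingerprinting boils down to hashing a stable serialisation of many such data points: each attribute is innocuous on its own, but together they are close to unique. The attribute names below are made-up examples:

```python
import hashlib
import json

def fingerprint(attributes: dict) -> str:
    """Hash a canonical (sorted) serialisation of collected attributes."""
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()[:16]

# Hypothetical data points of the kind a site can read from a browser.
visitor = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "screen_resolution": "2560x1440",
    "timezone": "Asia/Hong_Kong",
    "language": "en-GB",
    "installed_fonts_hash": "ab12cd34",
}

# Identical attributes yield the same ID on every site that computes it.
print(fingerprint(visitor))
```

Note that no cookie is involved: as long as the attributes stay stable, the same identifier can be recomputed on every visit, which is what makes fingerprinting hard to opt out of.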

A Lost Civilisation

My PhD research on the epistemological shifts that accompany the rise of the digital age has alerted me to the high value the digital civilisation puts on turning the non-quantifiable aspects of our lives and our experience into quantified, computer-ready data. I have become more aware of the numerous deleterious consequences of this phenomenon.

Qualitative dimensions of life such as emotions, relationships or intuitive perceptions (to name a few) which draw upon a rich array of tacit knowing/feeling/sensing common to all of life are undermined and depreciated. In many areas of decision-making at the individual and collective levels, the alleged neutrality of the process of digital quantification is put forward as an antidote to the biases of the human mind, unreliable as it is encumbered by emotions and prejudices. While certain areas of the economy lend themselves to quantitative measurement, most crucial aspects of the experience of living do not.

There is a logical fallacy in the belief that digital data are neutral. They are produced in and by social, cultural, economic, historical (etc.) contexts and consequently carry the very biases present in those contexts. There is a logical fallacy in the belief that algorithms are neutral. They are designed to optimise certain outcomes and fulfil certain agendas which, more often than not, do not align with the greater good.

Far from being a revolution, the blind ideological faith in digital data is directly inherited from the statistical and mechanistic mindset of the Industrial Revolution and supports the positivist view that all behaviours and sociality can be turned into hard data. The enterprise of eradicating uncertainty and ambiguity under the guise of so-called scientific measurement is such an appealing proposition for so many economic actors that we have come to forget what makes us human.

A civilisation that has devalued and forgotten the humanness of being human is a lost civilisation.

UK Police to Double The Use of Facial Recognition (The Guardian) & Fawkes

This is an article published by The Guardian on October 29, 2023.

https://www.theguardian.com/technology/2023/oct/29/uk-police-urged-to-double-use-of-facial-recognition-software

The UK policing minister encourages police departments throughout the country to drastically increase their use of facial recognition software, and to include passport photos in the AI database of recognisable images.

Excerpts:

“Policing minister Chris Philp has written to force leaders suggesting the target of exceeding 200,000 searches of still images against the police national database by May using facial recognition technology.”

“He also is encouraging police to operate live facial recognition (LFR) cameras more widely, before a global artificial intelligence (AI) safety summit next week at Bletchley Park in Buckinghamshire.”

“Philp has also previously said he is going to make UK passport photos searchable by police. He plans to integrate data from the police national database (PND), the passport office and other national databases to help police find a match with the ‘click of one button’.”

If the widespread adoption of facial recognition software (which can recognise and identify a face even when it is partially covered) concerns you, you may want to consider using FAWKES, an image cloaking tool developed at the SAND Lab at the University of Chicago.

The latest version (2022) includes compatibility with Apple M1 chips.

http://sandlab.cs.uchicago.edu/fawkes

This is what the SAND Lab website says:

The SAND Lab at University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models out of their publicly available photos. At a high level, Fawkes “poisons” models that try to learn what you look like, by putting hidden changes into your photos, and using them as Trojan horses to deliver that poison to any facial recognition models of you. Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, “uncloaked” image of you (e.g. a photo taken in public) to the model, the model will fail to recognize you.
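To give a sense of the constraint involved, here is a toy sketch in Python. It is emphatically NOT the Fawkes algorithm, which computes carefully optimised perturbations against facial-recognition feature extractors; this only illustrates the idea that every pixel change stays within a tiny budget, too small for the human eye to notice:

```python
import random

# Conceptual sketch only -- NOT the Fawkes algorithm. It merely shows
# the kind of constraint a cloak obeys: every per-pixel change is
# bounded by a small epsilon, so the image looks unchanged to a human.
def cloak(image: list[list[int]], epsilon: int = 3, seed: int = 0) -> list[list[int]]:
    """Apply bounded per-pixel perturbations to a greyscale image."""
    rng = random.Random(seed)  # seeded, so the result is reproducible
    return [
        [min(255, max(0, px + rng.randint(-epsilon, epsilon))) for px in row]
        for row in image
    ]

original = [[120, 130], [140, 150]]  # toy 2x2 greyscale "image"
cloaked = cloak(original)

# No pixel moves by more than epsilon:
assert all(
    abs(a - b) <= 3
    for row_o, row_c in zip(original, cloaked)
    for a, b in zip(row_o, row_c)
)
```

The real tool replaces the random noise above with perturbations chosen to maximally mislead a recognition model's feature space, which is why its cloaks survive model training while remaining invisible.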

I downloaded FAWKES on my M1 MacBook, and while a bit slow, it works perfectly. You may have to tweak your privacy and security settings (in System Settings) to allow FAWKES to run on your computer. I also recommend using the following method to open the app the first time: go to Finder > Applications > FAWKES, right-click on the app name and select “Open”.

Be a bit patient: it took 2-3 minutes for the software to open when I first used it, and it may take a few minutes to process photos. But all in all, it works very well. Please note that it only seems to work on M1 MacBooks, not iMacs.
