Datafication, Phantasmagoria of the 21st Century

Author: admin

Agentic AI & Privacy

AI agents are coming for your privacy, warns Meredith Whittaker in this article from The Economist (September 9th, 2025).

An AI agent is a complex system including AI models, software and cloud infrastructure. For the system to do its thing—summarising your email or spending your money—it needs near-total access to your digital life. This is not the familiar request for permission to see your contacts; it is akin to giving “root” access to your entire device. Your browser history, credit-card details, private messages and location data are all poised to become AI fodder, heaped, she says, into an insecure pile of undifferentiated data “context”.

It’s important to have voices like hers to explain things.

There is still sooooo much hype around AI, agentic and otherwise—not surprisingly: according to research I read recently, the less people know about it, the bigger the hype. She speaks about the “application” layer. A friend explained to me the different levels of protection in the privacy/security/anonymity game (three concepts which are related but separate): surface (when you are browsing, on social media, etc.), application, and system. GrapheneOS (for Android phones) offers protection at the system level, but it requires a minimum of knowledge on the part of the user.

Unfortunately, I think it will become worse before it becomes better. We have just turned the corner in the development cycle of a new technology (about 20-25 years into the new cycle) when people start to smarten up and discover the harms and ills that come with the new technology. As Paul Virilio said: “when you invent the ship, you invent the shipwreck.”

It took a whole century to 1. realise the harms created by the industrial revolution—mass production and mass consumerism, and 2. start to do something about it—consume more consciously, recycle etc. Hopefully, we won’t take as long with the digital, because if we do, by the time we wake up, we (i.e., humanity) will live in a dystopia that only the most pessimistic Sci-Fi writers could have imagined.

In my mind, one bright light is that, today, we DO hear critical voices, voices that provide convincing arguments to inform and educate. During my PhD, in the mid-late 2010s, I started to become aware of the underlying functioning of the digital ecosystem. I got discouraged, because apart from some small pockets of academic researchers, everyone was so incredibly excited about the developments of digital technologies, and most people could not fathom any other reality than the hyped up image that was presented.

I felt that what Aldous Huxley described in “Brave New World”—a humanity running towards the cliff singing and dancing—was becoming reality. Ten years later, I can see that this is no longer the case, and that gives me hope.

Predictive AI Fails At… Predicting

You would think that, with all the hype around “AI” (in quotation marks because the word has become a catch-all, covering a whole range of poorly defined realities) and our civilisation’s enduring blind faith in the omniscience of digital technologies, the technology would at least perform its function remarkably well.

I mean wouldn’t you?

Well, it seems not.

The Markup is “a nonprofit newsroom that challenges technology to serve the public good.” (Check here if you want to know more, I have been following them for years, they do remarkable work.)

This is what they found out (see below).

A software company sold a New Jersey police department an algorithm that was right less than 1% of the time. Read the whole article here.

It is NOT a blip. It is NOT an exception, an anomaly, a special case. It is another day in the office for predictive AI. And those issues will NOT go away with the next model iteration.

They are here to stay because they are an intrinsic feature of the technology. As a technology of quantification, AI (or whatever name we want to give the Digital) does NOT and, in fact, can NOT reliably handle qualitative aspects of life.

This is why the likes of Facebook employ human content moderators to detect and remove gore, violence and generally harmful content from the platform. (By the way, those people are often sub-contracted, so they do not appear in the parent companies’ annual reports, and their contracts contain a clause stating that they won’t sue if they get PTSD on the job, which they often do. Read here about what happened when they did.)

So, despite all the hype, “a rose by any other name would smell as sweet.” When it comes to the social, predictive AI mostly fails at predicting.

OpenAI Wins US$200mn Government Contract

Yesterday (June 16th, 2025), Reuters announced that OpenAI had been awarded a US$200mn contract to provide AI tools to the U.S. Defense Department.

Reuters reports: “Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Pentagon said.

So, let me make sure I understand this. One of the most powerful militaries in the world is going to hand control of its critical national security to a tech company whose systems routinely hallucinate, distort or make up basic facts, and not because of a lack of data, but because of an essential deficiency in their ability to understand the REAL world? 😳🙄🙄

This is going to be interesting. Welcome to “The Greatest Show On Earth” by Ringling Bros. and Barnum & Bailey Circus!

This is what leading GenAI critic Gary Marcus tells us about AI in his 2023 TED talk.

Gary Marcus TED Talk 2023

A bit old, from 2023, but still totally relevant two years later. Two years is a millennium in tech time, and the fact that what he says still holds shows that no progress has been made on the most crucial claims about LLMs.

👆 from Marcus’s TED talk above. The mind boggles at the recipes OpenAI could concoct when in control of the US national security narrative.

By the way, watch the Q&A with TED’s Chris Anderson at the end of Marcus’s TED talk.

With the possibility of government-led regulation becoming more remote due to the geo-political and military strategic importance of digital tech, a global, non-profit organisation to regulate tech may be increasingly needed. Or increasingly utopian. Or most probably both.

Schneier on AI mistakes

Bruce Schneier is a fellow and lecturer at the Harvard Kennedy School, sits on the board of the EFF, and is generally recognised as an expert on digital security (see his bio here). His blog, “Schneier on Security”, is a must-read to learn about and keep abreast of tech security.

He recently published a post on a topic that is still largely unexplored but of growing importance. AI (let’s call it “AI” for now, even though the word covers a wide range of different realities) makes mistakes. Despite the hype and the corporate narrative, anyone who has ever used ChatGPT or other LLMs will have been confronted with this stark fact. But a question begs to be asked: how does it make mistakes, and how is this mistake-making process similar to, or different from, the human one?

Well, that’s exactly the question Schneier asks in this post. The comments are worth reading as well.

A Powerful Metaphor For Life

A friend sent this to me.

“Yoann Bourgeois Captivates Audience with Powerful Performance About Life”

Many people seem to have seen it. It first appeared on TikTok apparently.

In any case, it’s a beautiful metaphor for life. Of course, falling and bouncing back up. That’s the obvious.

But it’s not only that. It’s the grace. The grace with which he falls and bounces back. As if falling and bouncing back was just the natural flow of life. The NATURAL flow of life.

To say “fall and bounce back” is a way to arrange reality by emphasising specific points in the flow of life: the point when we fall and the point when we bounce back. It’s giving a shape to a reality that could take a completely different shape if the narrative was different.

Emptiness is form, form is emptiness.

Something to meditate upon.

Feats of Innovation

A friend sent me the photo on the left, marvelling at human ingenuity.

I also admired the photo on the left.

And then I reflected…

Everyday, nature performs feats of innovation that we will never be able to replicate.

What does this have to do with a blog about digital technologies?

It is all about what we (as a civilisation) consider as valid knowledge, what ways of knowing we TRUST, what we hold as true, what we admire.

Musings On A Post-Truth World

I was reading the French news. On the front page there was a headline which read « en direct : la guerre en Ukraine » (“live: the war in Ukraine”). Of course, nowadays nobody bats an eyelid when they see that kind of headline in a newspaper. We have come to see the live reporting of wars, atrocities, tragedies and death as a completely natural phenomenon. But if you sit for a second and reflect upon this very simple headline, you open up a whole new way of understanding our civilisation.

Browsing the headlines in the French press spurred a reflection on how, in just one century, our media have moved from the certainty of modernity to a post-modern world of radical contextualisation. A hundred years ago, mainstream valid knowledge was scientific, linear, and absolutist (I say “mainstream” because quantum physics had of course opened a whole new realm in terms of ways of knowing, but the science taught in schools was still Newtonian). I mean this in the sense of Clare Graves’ level four (the blue level): there was one truth.

Today we have opened up to diversity to such an extent that we have swung to the other extreme. Anything goes. The notions of right and wrong have been fully turned into contextual assessments. At the peak of modernity’s trust and faith in the so-called scientific method (which was really a mechanistic worldview and a belief in positivism), the world was a simple aggregation of cause and effect. In that context it naturally became necessary to counterbalance with post-modernity, the view that things were not so straightforward (to put it simply) and that context actually plays a major role in the complexity of life. Today we have moved to the other extreme, where universal laws no longer exist. This slide into extreme post-modernity has led to the tribalisation of societies, and social platforms have largely contributed to this phenomenon.

I was also thinking about the vital importance of explaining that we need to become aware of how we frame what we see. What I mean is: if we started really to see and experience social platforms not as neutral means of communication or connection but as environments, and therefore as highly designed architectures, we would probably behave differently when we are online. In fact, we can do this as we lead our lives online and offline. Proprioception and phenomenology, i.e. awareness of self and of experience (or rather, knowing the world through embodied experience), are tools that can help us do this. The awareness that built environments carry with them a manipulative agenda is the crux of the matter here.

I am not using the word “manipulative” in a deprecating sense. Design is by nature a manipulative discipline. But manipulation happens at all levels of communication. To live as a social being means to manipulate in one way or another: to “manipulate” our environment, to “manipulate” others. Understood in the most primal sense of the word (the Latin “manus” means “hand”), this kind of manipulation can also be called relationship. Manipulation can mean “manipulating” someone into taking their medicine every day, thereby enabling them to live with increased well-being. The question is: what is the intention behind the design, the architecture, the manipulation? As I write this, I am thinking that another word for design could be manipulation. Architecture and architectural choices embody both the manipulation and the intention behind it.

So I was thinking that maybe an interesting provocation could be to reflect on the passage from modernity to post-modernity, and how each of us is positioning ourselves in this very long term trend in the evolution of knowledge production. Are we aware of what’s going on; what meaning do we give to what’s happening in the world at the moment?

Resources for Digital Privacy

A hacker friend sent me a number of resources that introduce and clearly but simply explain digital privacy. I am sharing these here without much comment.

General Resource

A good general resource: https://www.privacyguides.org/en

Why Privacy Is Important

Very short description of why privacy is important (I get SO MANY questions about why it’s important!) https://www.privacyguides.org/en/basics/why-privacy-matters

This is a blurb on why privacy is important by Mullvad VPN: https://mullvad.net/en/why-privacy-matters

NB: the pdf version is available here: https://mullvad.net/pdfs/Total_surveillance.pdf

Threat Modelling

These three articles explain the concept of threat modelling: understanding your own situation in order to know what to do and not do.

https://www.privacyguides.org/en/basics/threat-modeling
https://privsec.dev/posts/knowledge/threat-modeling
https://opsec101.org

Common Threats

A little bit more detail on what kinds of threats most people think about when threat modelling: https://www.privacyguides.org/en/basics/common-threats

And then, once the person has thought about their threat model and has a rough idea about it, then comes the part about choosing and deploying countermeasures.
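For readers who like to see the idea made concrete: none of the linked guides prescribe this, but as a purely hypothetical sketch, a personal threat model can be written down as a mapping from assets to an adversary, rough risk estimates, and candidate countermeasures:

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    adversary: str                 # who might go after the asset
    likelihood: int                # rough 1-5 guess: how likely is an attempt?
    impact: int                    # rough 1-5 guess: how bad if it succeeds?
    countermeasures: list = field(default_factory=list)

    def risk(self) -> int:
        # crude risk score: likelihood times impact
        return self.likelihood * self.impact

# a toy model: assets mapped to the main threat against each
model = {
    "email account": Threat("credential thieves", likelihood=4, impact=5,
                            countermeasures=["unique password", "2FA"]),
    "browsing habits": Threat("ad trackers", likelihood=5, impact=2,
                              countermeasures=["tracker-blocking browser"]),
}

# prioritise: address the highest-risk assets first
for asset, threat in sorted(model.items(), key=lambda kv: -kv[1].risk()):
    print(f"{asset}: risk {threat.risk()} -> {', '.join(threat.countermeasures)}")
```

The numbers are guesses, and that is the point: threat modelling is about making your own rough estimates explicit so you can prioritise a few countermeasures instead of trying to defend against everything at once.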

Tools

This is a question people often ask me: what tools can I use? Here are some references for tools that can be used, depending on the threat model one has identified: https://www.privacyguides.org/en/tools

It is important to remember that it’s difficult to prescribe a one-size-fits-all solution, because each person’s threat model will be different.

Someone who is only concerned with surveillance capitalism will need to approach things differently vs. a high net worth individual or celebrity concerned about their physical and digital security vs. a political dissident or whistleblower.

Hope this helps!

The Nature of (Digital) Reality

Bruce Schneier’s blog “Schneier on Security” often presents thought-provoking pieces about the digital. This one directly relates to the core question of my PhD about the shifting nature of reality in the digital age.

A piece worth reading. You can also browse through the comments on his blog.

Schneier’s self intro on his blog: “I am a public-interest technologist, working at the intersection of security, technology, and people. I’ve been writing about security issues on my blog since 2004, and in my monthly newsletter since 1998. I’m a fellow and lecturer at Harvard’s Kennedy School, a board member of EFF, and the Chief of Security Architecture at Inrupt, Inc.”
