Datafication, Phantasmagoria of the 21st Century

Category: AI

The Nature of (Digital) Reality

Bruce Schneier’s blog “Schneier on Security” often presents thought-provoking pieces about the digital. This one directly relates to the core question of my PhD: the shifting nature of reality in the digital age.

A piece worth reading. You can also browse through the comments on his blog.

Schneier’s self-introduction on his blog: “I am a public-interest technologist, working at the intersection of security, technology, and people. I’ve been writing about security issues on my blog since 2004, and in my monthly newsletter since 1998. I’m a fellow and lecturer at Harvard’s Kennedy School, a board member of EFF, and the Chief of Security Architecture at Inrupt, Inc.”

DATAFIED (Video presentation for the Capra Course Alumni)

DATAFIED: A Critical Exploration of the Production of Knowledge in the Age of Datafication

This presentation by Hélène Liu introduces the main findings of her critical PhD research on the profound epistemological shift that accompanies the digital age. To a large extent, civilisations can be understood by the kind of knowledge they produce and how they go about knowing what they know.

Inspired by The Arcades Project, the seminal work of early 20th-century philosopher and social critic Walter Benjamin, “DATAFIED” asks what civilisation is emerging at the dawn of the 21st century. The spread of algorithms (based on quantified, discrete, computer-ready data bits) to all qualitative aspects of life has far-reaching consequences.

The fanfare around the novelty of social media and, more recently, of AI obfuscates the old-paradigm ideology of quantification underlying the development of these technologies. The language used since the inception of digital technology anthropomorphises it and conceals a fundamental difference between datafied and human ways of knowing. As we embark on a new wave of increasingly inescapable digital architectures, it has become more urgent and more crucial to critically investigate their problematic epistemological dimension.

The video begins with an introduction of Hélène Liu, followed by her talk, which concludes with pointers toward a more regenerative ecology of knowing, deeply inspired by the knowledge and insights shared during the Capra Course (capracourse.net). After her presentation we hear reactions and reflections from Fritjof Capra, the teacher of the Capra Course and co-author of The Systems View of Life.

Presenter: Hélène Liu 
Hélène holds Master’s degrees from the Institut d’Etudes Politiques de Paris-University of Paris (Economics and Finance) and the University of Hong Kong (Buddhist Studies), and a PhD from the School of Design at The Hong Kong Polytechnic University. She is a long-term meditator and student of Vajrayana Buddhism. She recently produced and is releasing her first music album, The Guru Project (open.spotify.com/artist/3JuD6YwXidv7Y2i1mBakGY), which emerged from a concern about the divisiveness of the algorithmic civilisation. The album brings together the universal language of mantras with music from a diversity of geographies and genres, as a call to focus on our similarities rather than our differences.

NB: The link to the Vimeo is https://vimeo.com/839319910

UK Police Urged to Double Use of Facial Recognition (The Guardian) & Fawkes

This is an article published by The Guardian on October 29, 2023.

https://www.theguardian.com/technology/2023/oct/29/uk-police-urged-to-double-use-of-facial-recognition-software

The UK policing minister is encouraging police departments throughout the country to drastically increase their use of facial recognition software, and to include passport photos in the AI database of recognisable images.

Excerpts:

“Policing minister Chris Philp has written to force leaders suggesting the target of exceeding 200,000 searches of still images against the police national database by May using facial recognition technology.”

“He also is encouraging police to operate live facial recognition (LFR) cameras more widely, before a global artificial intelligence (AI) safety summit next week at Bletchley Park in Buckinghamshire.”

“Philp has also previously said he is going to make UK passport photos searchable by police. He plans to integrate data from the police national database (PND), the passport office and other national databases to help police find a match with the ‘click of one button’.”

If the widespread adoption of facial recognition software (which can recognise and identify a face even when it is partially covered) concerns you, you may want to consider using FAWKES, an image-cloaking tool developed at the SAND Lab at the University of Chicago.

The latest version (2022) includes compatibility with Apple M1 chips.

http://sandlab.cs.uchicago.edu/fawkes

This is what the SAND Lab website says:

The SAND Lab at University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models out of their publicly available photos. At a high level, Fawkes “poisons” models that try to learn what you look like, by putting hidden changes into your photos, and using them as Trojan horses to deliver that poison to any facial recognition models of you. Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, “uncloaked” image of you (e.g. a photo taken in public) to the model, the model will fail to recognize you.
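To make “tiny, pixel-level changes that are invisible to the human eye” concrete, here is a minimal toy sketch in Python (using numpy and Pillow). It merely adds a small, bounded random perturbation to an image; the real Fawkes tool instead optimises its perturbation against a facial-feature extractor, so treat this purely as an illustration of how a visually imperceptible change can be written into a photo. The file names are hypothetical.

    # Toy illustration of an "invisible" pixel-level change. This is NOT the
    # Fawkes algorithm: Fawkes optimises its perturbation in feature space,
    # whereas this sketch just adds small, bounded random noise.
    import numpy as np
    from PIL import Image

    def toy_cloak(in_path, out_path, epsilon=3):
        """Shift every pixel by at most +/- epsilon intensity levels."""
        img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
        noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
        cloaked = np.clip(img + noise, 0, 255).astype(np.uint8)
        # Save losslessly (PNG): JPEG re-compression would alter the noise.
        Image.fromarray(cloaked).save(out_path)

    toy_cloak("portrait.jpg", "portrait_cloaked.png")  # hypothetical file names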

I downloaded FAWKES on my M1 MacBook, and while it is a bit slow, it works perfectly. You may have to tweak your privacy and security settings (in System Settings) to allow FAWKES to run on your computer. I also recommend using the following method to open the app the first time: go to Finder > Applications > FAWKES, right-click on the app name and select “Open”.

Be a bit patient: it took 2-3 minutes for the software to open when I first used it, and it may take a few minutes to process photos. But all in all, it works very well. Please note that it only seems to work on M1 MacBooks, not iMacs.
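If you prefer the command line to the Mac app, the SAND Lab also distributes Fawkes as a Python package (pip install fawkes). Below is a minimal sketch of driving it from a script; the -d and --mode flags follow the project README at the time of writing, but options can change between versions, so check fawkes --help first. The directory name is hypothetical.

    # Minimal sketch: run the Fawkes command-line tool over a folder of
    # photos. Assumes `pip install fawkes` has succeeded; the flags (-d for
    # the image directory, --mode for the protection level) are taken from
    # the project README and may differ in your version.
    import subprocess

    def cloak_directory(image_dir, mode="low"):
        """Cloak every image in image_dir; Fawkes writes cloaked copies."""
        subprocess.run(["fawkes", "-d", image_dir, "--mode", mode], check=True)

    cloak_directory("./my_photos")  # hypothetical directory name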

Siri Beerends, AI Makes Humans More Robotic

Siri Beerends is a cultural sociologist and researches the social impact of digital technology at media lab SETUP. With her journalistic approach, she stimulates a critical debate on increasing datafication and artificial intelligence. Her PhD research (University of Twente) deals with authenticity and the question of how AI reduces the distance between people and machines.

Her TEDx talk caught my attention because, as a sociologist of technology, she looks at AI with a critical eye (and we need MANY more people doing this nowadays). In this talk, she gives three examples illustrating how AI does not work for us (humans); rather, we (humans) work for it. She shows how AI changes how we relate to each other in very profound ways. Technology is not good or bad, she says; technology (AI) changes what good and bad mean.

Even more importantly, AI is not a technology, it is an ideology. Why? Because we believe that social and human processes can be captured as computer data, and we forget about the aspects that data cannot capture. AI is also based on a very reductionist understanding of what intelligence means, namely what computers are capable of: an understanding that forgets about consciousness, empathy, intentionality, and embodied intelligence. Additionally, contrary to living intelligence, AI is very energy-inefficient and has an enormous environmental impact.

AI is not a form of intelligence but a form of advanced statistics. It can beat us in stable environments with clear rules; in other words, NOT the world we live in, which is contextual, ambiguous and dynamic. In the messy REAL world, AI at best performs very (VERY!) poorly and at worst creates havoc, because it cannot adapt to context. Do we want to make the world as predictable as possible? Do we want to become data-clicking robots? Do we want to quantify and measure all aspects of our lives, she asks. Her response is a resounding no.

What then?

Technological progress is not societal progress, so we need to expect less from AI and more from each other. AI systems can help solve problems, but we also need to look into the causes of those problems: the flaws in our economic systems that trigger them again and again.

AI is also fundamentally conservative. It is trained on data from the past and reproduces patterns from the past. That is not real innovation. Real innovation requires better social and economic systems, and we (humans) have the potential to reshape them. Let’s not waste that potential by becoming robots.

Watch her TEDx talk below.

The Dark Side of AI

I came across this article in the Financial Times yesterday (March 19, 2022) on the Dark Side of Using AI to Design Drugs, by Anjana Ahuja.

Scientists at Collaborations Pharmaceuticals, a North Carolina company using AI to create drugs for rare diseases, experimented with how easy it would be to create rogue molecules for chemical warfare.

As it happens, the answer is: VERY EASY! The model took only six hours to spit out a set of 40,000 destructive molecules.

And it’s not surprising. As French cultural theorist and philosopher Paul Virilio once said, “when you invent the ship, you invent the shipwreck”. Just as social platforms can be used both to connect with long-lost friends AND to spread fake news, AI can be used both to save lives AND to destroy them.

This is a chilling reminder about the destructive potential of increasingly sophisticated technologies that our civilisation has developed but may not have the wisdom to use well.