Datafication, Phantasmagoria of the 21st Century


Resources for Digital Privacy

A hacker friend sent me a number of resources that introduce and explain digital privacy clearly and simply. I am sharing these here without much comment.

General Resource

A good general resource: https://www.privacyguides.org/en

Why Privacy Is Important

A very short description of why privacy is important (I get SO MANY questions about why it’s important!): https://www.privacyguides.org/en/basics/why-privacy-matters

This is a blurb on why privacy is important by Mullvad VPN: https://mullvad.net/en/why-privacy-matters

NB: the pdf version is available here: https://mullvad.net/pdfs/Total_surveillance.pdf

Threat Modelling

These three articles explain the concept of threat modelling: understanding your own situation in order to know what to do and what not to do.

https://www.privacyguides.org/en/basics/threat-modeling
https://privsec.dev/posts/knowledge/threat-modeling
https://opsec101.org

Common Threats

A little bit more detail on what kinds of threats most people think about when threat modelling: https://www.privacyguides.org/en/basics/common-threats

Once a person has thought about their threat model and has a rough idea of it, the next step is choosing and deploying countermeasures.

Tools

This is a question people often ask me: what tools can I use? Here are some references for tools that can be used, depending on the threat model one has identified: https://www.privacyguides.org/en/tools

It is important to remember that it’s difficult to prescribe a one-size-fits-all solution, because each person’s threat model will be different.

Someone who is only concerned with surveillance capitalism will need to approach things differently from a high-net-worth individual or celebrity concerned about their physical and digital security, or from a political dissident or whistleblower.

Hope this helps!

DATAFIED (Video presentation for the Capra Course Alumni)

DATAFIED: A Critical Exploration of the Production of Knowledge in the Age of Datafication

This presentation by Hélène Liu introduces the main findings of her critical PhD research on the profound epistemological shift that accompanies the digital age. To a large extent, civilisations can be understood by the kind of knowledge they produce and how they go about knowing what they know.

Inspired by The Arcades Project, the seminal work of early 20th-century philosopher and social critic Walter Benjamin, “DATAFIED” asks what civilisation is emerging at the dawn of the 21st century. The spread of algorithms (based on quantified, discrete, computer-ready data bits) to all qualitative aspects of life has far-reaching consequences.

The fanfare around the novelty of social media and, more recently, of AI obfuscates the old-paradigm ideology of quantification underlying the development of those technologies. The language used since the inception of these technologies anthropomorphises them and conceals a fundamental difference between datafied and human ways of knowing. As we embark on a new wave of increasingly inescapable digital architectures, it has become more urgent and more crucial to critically investigate their problematic epistemological dimension.

The video begins with an introduction of Hélène Liu, followed by her talk, which concludes with pointers toward a more regenerative ecology of knowing, deeply inspired by the knowledge and insights shared during the Capra Course (capracourse.net). After her presentation we hear reactions and reflections from Fritjof Capra, the teacher of the Capra Course and co-author of The Systems View of Life.

Presenter: Hélène Liu 
Hélène holds master’s degrees from the Institut d’Etudes Politiques de Paris-University of Paris (Economics and Finance) and the University of Hong Kong (Buddhist Studies), and a PhD from the School of Design at the Polytechnic University of Hong Kong. She is a long-term meditator and student of Vajrayana Buddhism. She recently produced and is releasing her first music album, The Guru Project (open.spotify.com/artist/3JuD6YwXidv7Y2i1mBakGY), which emerged from a concern about the divisiveness of the algorithmic civilisation. The album brings together the universal language of mantras with music from a diversity of geographies and genres, as a call to focus on our similarities rather than our differences.

NB: The link to the Vimeo is https://vimeo.com/839319910

Regulating Big Tech

Read this article by Joseph Stiglitz (see bio below) in Project Syndicate about the nascent steps to protect data privacy in the US. In February 2024, the Biden administration published an executive order to ban the transfer of certain types of “sensitive personal” data to some countries.

This is a drop in the ocean, and the US is way behind in terms of protecting its citizens’ data from being exploited by the players in the data economy (compared to the EU, for example). However, it is probably the beginning of a trend toward increased protection against a predatory system that has created too many anti-competitive practices and social harms to be listed here. Admittedly, the US is walking on eggshells, because regulating the digital sphere seems directly at odds with the US competitive advantage in this domain.

The firms that make money from our data (including personal medical, financial, and geolocation information) have spent years trying to equate “free flows of data” with free speech. They will try to frame any Biden administration public-interest protections as an effort to shut down access to news websites, cripple the internet, and empower authoritarians. That is nonsense.

Over the past 20-25 years, the narrative about digital technology has been consistently driven by Big Tech to hide the full extent of what was really happening. The idealistic beliefs of democratisation, equality, friendship and connection from the early internet served as a smokescreen for the development of a behemoth of a fundamentally exploitative data industry that pervades all areas of the economy and society.

Today, large tech monopolies use indirect ways to try to quash attempts to change the status quo and counter Big Tech abuses.

Tech companies know that if there is an open, democratic debate, consumers’ concerns about digital safeguards will easily trump concerns about their profit margins. Industry lobbyists thus have been busy trying to short-circuit the democratic process. One of their methods is to press for obscure trade provisions aimed at circumscribing what the United States and other countries can do to protect personal data.

The article details previous attempts to insert provisions into trade pacts that would pre-empt executive and congressional power over data regulation and grant special secrecy rights (an ironic state of affairs considering that the early Internet was developed on exactly the opposite values). It is important to realise that most of the effort goes into surreptitious (INDIRECT) ways of limiting any possibility of regulation, through trade agreements for example, what Stiglitz calls Big Tech’s favoured “digital trade” handcuffs.

Stiglitz’s concluding remark reminds us that the stakes are high: ultimately, the choices made today have the potential to impact the democratic order.

Whatever one’s position on the regulation of Big Tech – whether one believes that its anti-competitive practices and social harms should be restricted or not – anyone who believes in democracy should applaud the Biden administration for its refusal to put the cart before the horse. The US, like other countries, should decide its digital policy democratically. If that happens, I suspect the outcome will be a far cry from what Big Tech and its lobbyists were pushing for.

Joseph E. Stiglitz, a Nobel laureate in economics and University Professor at Columbia University, is a former chief economist of the World Bank (1997-2000), chair of the US President’s Council of Economic Advisers, and co-chair of the High-Level Commission on Carbon Prices. He is Co-Chair of the Independent Commission for the Reform of International Corporate Taxation and was lead author of the 1995 IPCC Climate Assessment.

Leaving Traces Online: Identifiers

Visit this website (or copy and paste https://www.deviceinfo.me) and it will show you a long list of all the identifiers that every website you visit can find out about you, your location, your device, etc. All these different data points are then used to create a “fingerprint” of your web browser, allowing the rest of your web activity on that same browser/device to be tracked.

NB: You can visit this website from any of your devices (mobile or desktop/laptop).
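To make the idea of a “fingerprint” concrete, here is a minimal sketch (in Python, with entirely hypothetical attribute values) of how a tracker could combine individually unremarkable identifiers into a single, near-unique ID. Real fingerprinting happens in JavaScript inside your browser and draws on many more signals, so treat this purely as an illustration of the principle.

```python
import hashlib

# Hypothetical identifiers of the kind deviceinfo.me surfaces; real trackers
# collect these in the browser via JavaScript (navigator, screen, canvas, fonts...).
attributes = {
    "user_agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) ...",
    "screen_resolution": "1512x982",
    "timezone": "Asia/Hong_Kong",
    "language": "en-GB",
    "installed_fonts": "Arial;Helvetica;Times New Roman",
    "canvas_hash": "a3f9c2",  # e.g. a hash of a hidden test image the page drew
}

# Serialise the attributes in a stable order and hash them. Each value is
# common on its own; the combination is often close to unique.
serialised = "|".join(f"{key}={value}" for key, value in sorted(attributes.items()))
fingerprint = hashlib.sha256(serialised.encode()).hexdigest()

print(fingerprint[:16])  # a stable ID that can follow this browser across websites
```

Because the hash is deterministic, the same browser produces the same fingerprint on every site that runs the same collection script, which is what makes this kind of tracking possible without cookies.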

A Lost Civilisation

My PhD research on the epistemological shifts that accompany the rise of the digital age has alerted me to the high value the digital civilisation puts on turning the non-quantifiable aspects of our lives and our experience into quantified, computer-ready data. I have become more aware of the numerous deleterious consequences of this phenomenon.

Qualitative dimensions of life such as emotions, relationships or intuitive perceptions (to name a few), which draw upon a rich array of tacit knowing/feeling/sensing common to all of life, are undermined and depreciated. In many areas of decision-making at the individual and collective levels, the alleged neutrality of the process of digital quantification is put forward as an antidote to the biases of the human mind, unreliable as it is, encumbered by emotions and prejudices. While certain areas of the economy lend themselves to quantitative measurement, most crucial aspects of the experience of living do not.

There is a logical fallacy in the belief that digital data are neutral. They are produced in and by social, cultural, economic, historical (etc.) contexts and consequently carry the very biases present in those contexts. There is a logical fallacy in the belief that algorithms are neutral. They are designed to optimise certain outcomes and fulfil certain agendas which, more often than not, do not align with the greater good.

Far from being a revolution, the blind ideological faith in digital data is directly inherited from the statistical and mechanistic mindset of the Industrial Revolution and supports the positivist view that all behaviours and sociality can be turned into hard data. The enterprise of eradicating uncertainty and ambiguity under the guise of so-called scientific measurement is such an appealing proposition for so many economic actors that we have come to forget what makes us human.

A civilisation that has devalued and forgotten the humanness of being human is a lost civilisation.

UK Police to Double The Use of Facial Recognition (The Guardian) & Fawkes

This is an article published by The Guardian on October 29, 2023.

https://www.theguardian.com/technology/2023/oct/29/uk-police-urged-to-double-use-of-facial-recognition-software

The UK policing minister encourages police departments throughout the country to drastically increase their use of facial recognition software, and to include passport photos in the AI database of recognisable images.

Excerpts:

“Policing minister Chris Philp has written to force leaders suggesting the target of exceeding 200,000 searches of still images against the police national database by May using facial recognition technology.”

“He also is encouraging police to operate live facial recognition (LFR) cameras more widely, before a global artificial intelligence (AI) safety summit next week at Bletchley Park in Buckinghamshire.”

“Philp has also previously said he is going to make UK passport photos searchable by police. He plans to integrate data from the police national database (PND), the passport office and other national databases to help police find a match with the “click of one button”.”

If the widespread adoption of facial recognition software (which can recognise and identify a face even when it is partially covered) concerns you, you may want to consider using FAWKES, an image-cloaking software tool developed at the SAND Lab at the University of Chicago.

The latest version (2022) includes compatibility with Apple M1 chips.

http://sandlab.cs.uchicago.edu/fawkes

This is what the SAND Lab website says:

The SAND Lab at University of Chicago has developed Fawkes, an algorithm and software tool (running locally on your computer) that gives individuals the ability to limit how unknown third parties can track them by building facial recognition models out of their publicly available photos. At a high level, Fawkes “poisons” models that try to learn what you look like, by putting hidden changes into your photos, and using them as Trojan horses to deliver that poison to any facial recognition models of you. Fawkes takes your personal images and makes tiny, pixel-level changes that are invisible to the human eye, in a process we call image cloaking. You can then use these “cloaked” photos as you normally would, sharing them on social media, sending them to friends, printing them or displaying them on digital devices, the same way you would any other photo. The difference, however, is that if and when someone tries to use these photos to build a facial recognition model, “cloaked” images will teach the model a highly distorted version of what makes you look like you. The cloak effect is not easily detectable by humans or machines and will not cause errors in model training. However, when someone tries to identify you by presenting an unaltered, “uncloaked” image of you (e.g. a photo taken in public) to the model, the model will fail to recognize you.
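To give a rough sense of what “tiny, pixel-level changes that are invisible to the human eye” means, here is a toy sketch in Python. To be clear, this is not the Fawkes algorithm (Fawkes computes carefully optimised, model-aware perturbations); it only illustrates how small a bounded pixel change can be while still touching every pixel of a photo. The file names are hypothetical, and it assumes the Pillow and NumPy libraries are installed.

```python
# Toy illustration only: this is NOT the Fawkes algorithm. Fawkes optimises its
# perturbations against facial recognition feature extractors; here we simply
# add a random, tightly bounded change to every pixel to show how small
# "invisible to the human eye" can be.
import numpy as np
from PIL import Image

def toy_cloak(in_path: str, out_path: str, max_change: int = 3) -> None:
    """Add a bounded, imperceptible pixel-level perturbation to a photo."""
    pixels = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)

    # Each channel value moves by at most +/- max_change (out of 255),
    # far below what the eye notices on an ordinary photograph.
    noise = np.random.randint(-max_change, max_change + 1, size=pixels.shape)
    cloaked = np.clip(pixels + noise, 0, 255).astype(np.uint8)

    Image.fromarray(cloaked).save(out_path)

toy_cloak("portrait.jpg", "portrait_cloaked.jpg")  # hypothetical file names
```

The real tool replaces the random noise with a perturbation deliberately aimed at what a face-recognition model learns about your features, which is what makes the “poisoning” described above effective.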

I downloaded FAWKES on my M1 MacBook, and while a bit slow, it works perfectly. You may have to tweak your privacy and security settings (in System Settings) to allow FAWKES to run on your computer. I also recommend using the following method to open the app the first time you use it: go to Finder > Applications > FAWKES, then right-click on the app name and select “Open”.

Be a bit patient: it took 2-3 minutes for the software to open when I first used it, and it may take a few minutes to process photos. But all in all, it works very well. Please note that it only seems to work on M1 MacBooks, not on iMacs.

Algorithmic Technology, Knowledge Production (And A Few Comments In Between)

So, digital technologies are going to save the world.

Or are they?

Let’s have a no-nonsense look at how things really work.

A few comments first.

I am not a Luddite.

[Just a side comment here: the Luddites were English textile workers in the 19th century who reacted strongly against the mechanisation of their trade, which put them out of work and left them unable to support their families. Today they have become the poster child of anti-progress, anti-technology grumpy old bores, and “you’re a Luddite” is a common insult directed at techno-sceptics of all sorts. But the Luddites were actually behaving quite rationally. Many people in the world today react in a similar fashion in the face of the economic uncertainty brought about by technological change.]

That being said, I am not anti-technology. I am extremely grateful for the applications of digital technology that help make the world a better place in many ways. I am fascinated by the ingenuity and the creativity displayed in the development of technologies to solve puzzling problems. I also welcome the fact that major technological shifts have brought major changes in how we live in the world. This is unavoidable; it is part of the impermanent nature of our worlds. The emergence of the new is to be welcomed rather than fought against.

But I am also a strong believer in using discrimination to try to make sense of new technologies, and to critically assess their systemic impact, especially when they have become the object of such hype. The history of humanity is paved with examples of collective blindness. We can’t consider ourselves immune to it.

The focus of my research (and of this post) is Datafication, i.e., the algorithmic quantification of purely qualitative aspects of life. I mention this because there are many other domains that comfortably lend themselves to quantification.

I am using simple vocabulary in this post. This is on purpose, because words can be deceiving. Names such as Artificial Intelligence (AI) or Natural Language Processing (NLP) are highly evocative and misleading, suggesting human-like abilities. There is so much excitement and fanfare around them that it’s worth going back to the basics and calling a cat a cat (or a machine a machine). There is a lot of hype around whether AI is sentient or could become sentient, but as of today, there are many simple actions that AI cannot perform satisfactorily (recognise a non-white-male face, for one), not to mention the deeper issues that plague it (bias in the data used to feed algorithms, the illusory belief that algorithms are neutral, the lack of accountability, the data surveillance architectures… just to name a few). It is just too easy to dismiss these technical, political and social issues in the belief that they will “soon” be overcome.

But hype time is not a time for deep reflection. If the incredible excitement around ChatGPT (despite repeated calls for caution from its founder) is any indication, we are living through another round of renewed collective euphoria. A few years ago, the object of this collective rapture was social media. Today, knowing what we know about the harms they create, it is becoming more difficult to feel deliciously aroused by Facebook and co., but AI has grabbed the intoxication baton. The most grandiose claims are claims of sentience, including from AI engineers who undoubtedly have the ability to make the machines, but whose expertise in assessing their sentience is highly debatable. But in the digital age, extravagant assertions sell newspapers, make stocks shoot up, or bring fame, so it may not be all that surprising.

But I digress…

How does algorithmic technology create “knowledge” about qualitative aspects of life?

First, it collects and processes existing data from the social realm to create “knowledge”. It is important to understand that the original data collected is frequently incomplete, and often reflects the existing biases of the social milieu from which it is extracted. The idea that algorithms are neutral is sexy but false. Algorithms are a set of instructions that control the processing of data. They are only as good as the data they work with. So, I put the word “knowledge” in quotation marks to show that we have to scrutinise its meaning in this context, and use discrimination to examine what type of knowledge is created, what function it carries out, and whose interests it serves.

Algorithmic technology relies on computer-ready, quantified data. Computers are not equipped to handle the fuzziness of qualitative, relational, embodied, experiential data. But a lot of the data produced in the world every day is warm data. (Nora Bateson coined that term, by the way; check the Bateson Institute website to learn more, it is well worth a read.) It is fuzzy, changing, qualitative, not clearly defined, and certainly not reducible to discrete quantities. But computers can only deal with quantities, discrete data bits. So, in order to be read by computers, the data collected needs to be cleaned and turned into “structured data”. What does “structured” mean? It means that it has to be transformed into data that can be read by computers; it needs to be turned into bits; it needs to be quantified.

This raises the question: how is unquantified data turned into quantified data? Essentially, through two processes.

The first one is called “proxying”. The logic is: “I can’t use X, so I will use a proxy for X, an equivalent”. While this sounds great in theory, it has two important implications. Firstly, a suitable proxy may or may not exist, so the relationship of similarity between X and its proxy may be thin. Secondly, someone has to decide which quantifiable equivalent will be used. I insist on the word “someone”, because it means that someone has to make that decision, a decision that is far from neutral, highly political and potentially carrying many unintended social consequences. In many instances, those decisions are made not by the stakeholders who have a lived understanding of the context where the algorithmic technology will be applied, but by the developers of the technology, who lack such understanding.

Some examples of proxied data: assessing teachers’ effectiveness through their students’ test results; ranking “education excellence” at universities using SAT scores, student-teacher ratios and acceptance rates (that’s what the editors at US News did when they started their university ranking project); evaluating an influencer’s trustworthiness by the number of followers she has (thereby creating unintended consequences, as described in the New York Times investigative piece “The Follower Factory”); using creditworthiness to screen potential new corporate hires. And more… Those examples come from a fantastic book by math-PhD-data-scientist turned activist Cathy O’Neil called “Weapons of Math Destruction”. If you don’t have the time or the inclination to read the book, Cathy also distils the essence of her argument in a TED talk, “The era of blind faith in big data must end”.
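To make the mechanics of proxying concrete, here is a minimal sketch in Python, with invented names and numbers. It shows the first example above: the qualitative notion “teacher effectiveness” being replaced by a quantifiable stand-in (the average change in students’ test scores), a choice that someone had to make and that any downstream system then treats as the thing itself.

```python
# Invented data: "teacher effectiveness" proxied by the average year-on-year
# change in students' test scores. The choice of this proxy is a human,
# far-from-neutral decision; the code below just applies it mechanically.

students_of_ms_lee = [
    {"score_last_year": 62, "score_this_year": 58},
    {"score_last_year": 71, "score_this_year": 74},
    {"score_last_year": 55, "score_this_year": 57},
]

def effectiveness_proxy(students: list[dict]) -> float:
    """Average score change, used as a stand-in for 'teacher effectiveness'."""
    changes = [s["score_this_year"] - s["score_last_year"] for s in students]
    return sum(changes) / len(changes)

print(f"Proxy 'effectiveness' of Ms Lee: {effectiveness_proxy(students_of_ms_lee):+.2f}")

# Everything the proxy cannot see (mentoring, classroom climate, a difficult
# cohort, what happened in the students' lives that year) simply does not
# exist for any system that ranks or evaluates teachers on this number.
```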

While all of the above sounds like a lot of work, there is data that is just too fuzzy to be structured and too complex to be proxied. So the second way to treat unstructured data is quite simple: abandon it. Forget about it! It never existed. Job done, problem solved. While this is convenient, of course, it becomes clear that this leaves out A LOT of important information about the social, especially because a major part of the qualitative data produced in the social realm falls into this category. It also leaves out the delicate but essential qualitative relational data that weaves the fabric of living ecosystems. So in essence, after the proxying and the pruning of qualitative data, it is easy to see how the so-called “knowledge” that algorithms produce is a rather poor reflection of social reality.

But (and that’s a big but), algorithmic technology is very attractive, because it makes decision-making convenient. How so? By removing uncertainty (of course I should say, by giving the illusion of removing uncertainty). How so? Because it predicts the future (of course I should say, by giving the illusion of predicting the future). Algorithmic technology applied to the social is essentially a technology of prediction. Shoshana Zuboff describes this at length in her seminal book published in 2019, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”. If you do not have the stomach to read through the 500+ pages, just search “Zuboff Surveillance Capitalism”; you will find a plethora of interviews, articles and seminars she has given since its publication. (Just do me a favour and don’t use Google and Chrome to search, but switch to cleaner browsers like Firefox and search engines like DuckDuckGo.) She clearly and masterfully elucidates how Google’s and Facebook’s money machines rely on packaging “prediction products” that are traded on “behavioural futures markets” which aim to erase the uncertainty of human behaviour.

There is a lot more to say on this (and I may do so in a later post), but for now, suffice it to say that just as the regenerative processes of nature are being damaged by mechanistic human activity, life-enhancing tacit ways of knowing are being submerged by the datafied production of knowledge. While algorithmic knowledge creation has a place and a usefulness, its widespread use overshadows and overwhelms more tacit, warm, qualitative, embodied, experiential, human ways of knowing and being. The algorithmisation of human experience is creating a false knowledge of the world (see my 3-minute presentation at TEDx in 2021).

This increasing lopsidedness is problematic and dangerous. Problematic because, while prediction seems to make decision-making more convenient and efficient, convenience and efficiency are not life-enhancing values. Furthermore, prediction is not understanding, and understanding (or meaning-giving) is an important part of how we orient ourselves in the world. It is also unfair because it creates massive asymmetries of knowledge and therefore a massive imbalance of power.

It is dangerous because while the algorithmic medium is indeed revolutionary, the ideology underlying it is dated and hazardous. The global issues and the potential for planetary annihilation that we are facing today arose from a reductionist mindset that sees living beings as machines and a positivist ideology that fundamentally distrusts tacit aspects of the human mind.

We urgently need a pendulum shift to rebalance algorithmically-produced knowledge with warm ways of knowing in order to create an ecology of knowledge that is conducive to the thriving of life on our planet.

Airports at Christmas: Why AI Cannot Rule The World

It is the week leading up to Christmas. I’m at the airport waiting for someone to arrive and, as I observe what’s happening here, I can’t help thinking about the place we have allowed digital technology to take in our lives. In 2022, AI pervades decision-making in all areas of human experience. What this means is that the deepest qualitative dimensions of being alive on this planet are being reduced to computer data, and those data are then fed to algorithms, designed by computer scientists, which have become the ultimate decision-makers in how life is lived on planet Earth.

My contention is that the blind faith that we, the “moderns”, have in algorithms and in what we call AI (often without really knowing what that means) is misplaced. There is a place for algorithmic decision-making, but we need to learn to value the qualitative, embodied, experiential dimension of being alive in a human body, with a human mind.

To understand why AI cannot rule the world, go to the arrivals level of an airport at Christmas time, and observe. See the reunions between people who love each other, who have missed each other, the smiles on their faces, the tears of joy at finding each other after weeks, months or even years of absence, the excitement, the laughter, the warm hugs… And you will realise why the cold logic of AI cannot capture the reality of the experience of being human, of life.

I have little patience for those who profess that the laws of pure logic rule the social and that we can sort everything out with cold data. What about the rich, warm, relational dimension of being human? Those people go around claiming that logic and science are all we need, but the irony is that they fail to see that they are surrounded by networks of other people who provide love, care and warm attention.

Feminine & Masculine Ways of Knowing – A Deep Imbalance

The following post is inspired by Safron Rossi’s interview on her book about Carl Jung’s views and influence on modern astrology. In the interview, she says:

“One way to approach this point (Jung’s unique contribution) is why is Jung’s work significant in the field of psychology. And for me, I would say that it has to do with the way he attempted to meld together the wisdom of the past with modern psychological understanding and methods of treatment.

The Jung psychology is one that grows organically from traditional understandings, particularly in the realms of spirituality, religion, mythology, and comparative symbolism. And in an era where psychology was becoming increasingly behavioural and rationalistic, Jung insisted on the importance of a spiritual life because that has been the core of the human experience from time immemorial. Why all of a sudden would the spiritual life really not be so important? It’s a really big question.”

What she mentions is central to the argument of my PhD. Suddenly, in the 19th century, at the time of the Industrial Revolution, the tacit experience and understanding of living became not so important, or rather, not so reliable as a way of knowing. The belief that emotions cloud the (rational) mind and that the machine is more reliable than humans because it has no messy emotions became the mainstream ideology.

But tacit knowing (i.e. the qualitative knowing that results from embodied experience, which can also be called intuitive knowing) is a fundamentally feminine way of knowing. With the Industrial Revolution, it was replaced by faith in masculine ways of knowing, so-called scientific but in fact more “mechanistic” than “scientific”.

As Michael Polanyi argues in his books Personal Knowledge (1958) and The Tacit Dimension (1966), tacit knowing is fully part of science. What I call the statistical mindset is a reductionist, mechanistic way of knowing that puts its faith solely in explicit and, importantly, measurable knowledge.

Here, Rossi says that Carl Jung gave (feminine) tacit knowing a place in modern psychology at a time (the time of the Industrial Revolution) when disciplines such as psychology and sociology were overwhelmed by the statistical mindset that values measurability above all. Examples of this are the behavioural school in psychology and, in sociology, Auguste Comte and positivism.

In Europe, the 19th century was the century when women were believed to be too irrational to make important decisions (like voting, for example); it was also the century when purely statistical, measurable pseudo-sciences (e.g., the dark science of eugenics) were born; and it was the time when the factory line became the model for everything: mass production, but also the health system, the economy, psychology, education, etc.

It is important to realise that the rationalisation of the social sciences was not in and of itself a “bad” thing. In a way, it was also a way to bring some degree of rigour to the field and, more importantly, to experiment with what can and cannot be measured. Walter Benjamin talked about the Phantasmagoria of an age, i.e., the belief system that underlies the development of thought during that period of time. Measuring, fragmenting the whole into parts, analysis, control over the environment were all part of the phantasmagoria of the Industrial Revolution and the Modern Age. All disciplines went through this prism (including Design; I may do a post on this later). Jung melded WISDOM into MODERN PSYCHOLOGY, which was very unusual at the time.

Statistical knowledge is predictive knowledge. We use statistics to know something about what will happen in the future, like the likelihood of a weather event, market movements, or the usage of public transport, etc. It is the best knowledge we have to OPTIMISE, when the values of EFFICIENCY and convenience are primordial (as in urban or business planning, for example). It is founded on the masculine-principle trait of linear logic (if A and B, then C), and on the equally masculine-principle trait of goal orientation (Jung’s definition of masculinity: know what you want and how to go and get it).

This is not in and of itself bad or good; there is no value judgement here. Again, it is not a matter of superiority (which is a masculine concept, i.e., fragmenting and analysing by setting up hierarchies), but of BALANCE. Today, we live in a world (more specifically, the geographies at the centre of power) where feminine ways of knowing, which emphasise regeneration, intuitive insights, collaboration, inter-dependencies and relationality, are not trusted and are suppressed, often in the name of science.

Living systems function on the principles of feminine ways of knowing. But it is not really science itself that smothers feminine ways of knowing; it is the reductionist, mechanistic mindset (and the values of efficiency and optimisation) applied to areas of life and of living experience where it has nothing to contribute.

As I argue in the PhD, while digital technologies are indeed revolutionary in terms of the MEDIUM they created (algorithmic social platforms), from the point of view of the belief system that underlies them, they in fact perpetuate an outdated mindset (described above) which serves the values of efficiency and optimisation with a disregard for life.
