Datafication, Phantasmagoria of the 21st Century

Category: Datafication

The Nature of (Digital) Reality

Bruce Schneier’s blog “Schneier on Security” often presents thought-provoking pieces about the digital. This one directly relates to the core question of my PhD about the shifting nature of reality in the digital age.

A piece worth reading. You can also browse through the comments on his blog.

Schneier’s self-introduction on his blog: “I am a public-interest technologist, working at the intersection of security, technology, and people. I’ve been writing about security issues on my blog since 2004, and in my monthly newsletter since 1998. I’m a fellow and lecturer at Harvard’s Kennedy School, a board member of EFF, and the Chief of Security Architecture at Inrupt, Inc.”

DATAFIED (Video presentation for the Capra Course Alumni)

DATAFIED: A Critical Exploration of the Production of Knowledge in the Age of Datafication

This presentation by Hélène Liu introduces the main findings of her PhD critical research on the profound epistemological shift that accompanies the digital age. To a large extent, civilisations can be understood by the kind of knowledge they produce, and how they go about knowing what they know.

Inspired by The Arcades Project, the seminal work of early 20th-century philosopher and social critic Walter Benjamin, “DATAFIED” asks what civilisation is emerging at the dawn of the 21st century. The spread of algorithms, based on quantified, discrete, computer-ready data bits, to all qualitative aspects of life has far-reaching consequences.

The fanfare around the novelty aspect of social media and, more recently, of AI obfuscates the old-paradigm ideology of quantification underlying the development of those technologies. The language used since their inception anthropomorphises digital technology and conceals a fundamental difference between datafied and human ways of knowing. As we embark on a new wave of increasingly inescapable digital architectures, it has become more urgent and more crucial to critically investigate their problematic epistemological dimension.

The video begins with an introduction of Hélène Liu and is followed by her talk that concludes with pointers toward a more regenerative ecology of knowing deeply inspired by the knowledge and insights shared during the Capra course (capracourse.net). After her presentation we hear reactions and reflections by Fritjof Capra, the teacher of the Capra Course and co-author of The Systems View of Life.

Presenter: Hélène Liu 
Hélène holds Master’s degrees from the Institut d’Etudes Politiques de Paris-University of Paris (Economics and Finance) and the University of Hong Kong (Buddhist Studies), and a PhD from the School of Design at the Hong Kong Polytechnic University. She is a long-term meditator and student of Vajrayana Buddhism. She recently produced and is releasing her first music album, The Guru Project (open.spotify.com/artist/3JuD6YwXidv7Y2i1mBakGY), which emerged from a concern about the divisiveness of the algorithmic civilisation. The album brings together the universal language of mantras with music from a diversity of geographies and genres, as a call to focus on our similarities rather than our differences.

NB: The link to the Vimeo is https://vimeo.com/839319910

A Lost Civilisation

My PhD research on the epistemological shifts that accompany the rise of the digital age has alerted me to the high value the digital civilisation puts on turning the non-quantifiable aspects of our lives and our experience into quantified, computer-ready data. I have become more aware of the numerous deleterious consequences of this phenomenon.

Qualitative dimensions of life such as emotions, relationships or intuitive perceptions (to name a few), which draw upon a rich array of tacit knowing/feeling/sensing common to all of life, are undermined and depreciated. In many areas of decision-making at the individual and collective levels, the alleged neutrality of the process of digital quantification is put forward as an antidote to the biases of the human mind, unreliable as it is, encumbered by emotions and prejudices. While certain areas of the economy lend themselves to quantitative measurement, most crucial aspects of the experience of living do not.

There is a logical fallacy in the belief that digital data are neutral. They are produced in and by social, cultural, economic, historical (and other) contexts and consequently carry the very biases present in those contexts. There is a logical fallacy in the belief that algorithms are neutral. They are highly designed to optimise certain outcomes and fulfil certain agendas which, more often than not, do not align with the greater good.

Far from being a revolution, the blind ideological faith in digital data is directly inherited from the statistical and mechanistic mindset of the Industrial Revolution and supports the positivist view that all behaviours and sociality can be turned into hard data. The enterprise of eradicating uncertainty and ambiguity under the guise of so-called scientific measurement is such an appealing proposition for so many economic actors that we have come to forget what makes us human.

A civilisation that has devalued and forgotten the humanness of being human is a lost civilisation.

Siri Beerends, AI Makes Humans More Robotic

Siri Beerends is a cultural sociologist and researches the social impact of digital technology at media lab SETUP. With her journalistic approach, she stimulates a critical debate on increasing datafication and artificial intelligence. Her PhD research (University of Twente) deals with authenticity and the question of how AI reduces the distance between people and machines.

Her TEDx talk caught my attention because, as a sociologist of technology, she looks at AI with a critical eye (and we need MANY more people to do this nowadays). In this talk, she gives three examples illustrating how AI does not work for us (humans); rather, we (humans) work for it. She shows how AI changes how we relate to each other in very profound ways. Technology is not good or bad, she says; technology (AI) changes what good and bad mean.

Even more importantly, AI is not a technology, it is an ideology. Why? Because we believe that social and human processes can be captured in computer data, and we forget about the aspects that data cannot capture. Also, AI is based on a very reductionist understanding of what intelligence means (i.e., what computers are capable of), one that forgets about consciousness, empathy, intentionality, and embodied intelligence. Additionally, contrary to living intelligence, AI is very energy-inefficient and has an enormous environmental impact.

AI is not a form of intelligence, but a form of advanced statistics. It can beat us in stable environments with clear rules, or in other terms, NOT in the world we live in, which is contextual, ambiguous and dynamic. AI at best performs very (VERY!) poorly, and at worst creates havoc in the messy REAL world, because it can’t adapt to context. Do we want to make the world as predictable as possible? Do we want to become data-clicking robots? Do we want to quantify and measure all aspects of our lives, she asks? And her response is a resounding no.

What then?

Technological progress is not societal progress, so we need to expect less from AI and more from each other. AI systems can help solve problems, but we need to look into the causes of these problems, the flaws in our economic systems that trigger these problems again and again.

AI is also fundamentally conservative. It is trained with data from the past and reproduces patterns from the past. It is not real innovation. Real innovation requires better social and economic systems. We (humans) have the potential to reshape them. Let’s not waste our potential by becoming robots.

Watch her TEDx talk below.

Algorithmic Technology, Knowledge Production (And A Few Comments In Between)

So, digital technologies are going to save the world.

Or are they?

Let’s have a no-nonsense look at how things really work.

A few comments first.

I am not a Luddite.

[Just a side comment here: Luddites were English textile workers in the 19th century who reacted strongly against the mechanisation of their trade, which put them out of work and left them unable to support their families. Today, they have become the poster children of anti-progress, anti-technology grumpy old bores, and “you’re a Luddite” is a common insult directed at techno-sceptics of all sorts. But the Luddites were actually behaving quite rationally. Many people in the world today react in a similar fashion in the face of the economic uncertainty brought about by technological change.]

That being said, I am not anti-technology. I am extremely grateful for the applications of digital technology that help make the world a better place in many ways. I am fascinated by the ingenuity and the creativity displayed in the development of technologies to solve puzzling problems. I also welcome the fact that major technological shifts have brought major changes in how we live in the world. This is unavoidable; it is part of the impermanent nature of our worlds. Emergence of the new is to be welcomed rather than fought against.

But I am also a strong believer in using discrimination to try to make sense of new technologies, and to critically assess their systemic impact, especially when they have become the object of such hype. The history of humanity is paved with examples of collective blindness. We can’t consider ourselves immune to it.

The focus of my research (and of this post) is Datafication, i.e., the algorithmic quantification of purely qualitative aspects of life. I mention this because there are many other domains that comfortably lend themselves to quantification.

I am using a simple vocabulary in this post. This is on purpose, because words can be deceiving. Names such as Artificial Intelligence (AI) or Natural Language Processing (NLP) are highly evocative and misleading, suggesting human-like abilities. There is so much excitement and fanfare around them that it’s worth going back to the basics and calling a cat a cat (or a machine a machine). There is a lot of hype around whether AI is sentient or could become sentient, but as of today, there are many simple actions that AI cannot perform satisfactorily (recognise a non-white-male face, for one), not to mention the deeper issues that plague it (bias in the data used to feed algorithms, the illusory belief that algorithms are neutral, the lack of accountability, the data surveillance architectures… just to name a few). It is just too easy to dismiss these technical, political and social issues in the belief that they will “soon” be overcome.

But hype time is not a time for deep reflection. If the incredible excitement around ChatGPT (despite the repeated calls for caution from its founder) is any indication, we are living through another round of renewed collective euphoria. A few years ago, the object of this collective rapture was social media. Today, knowing what we know about the harms they create, it is becoming more difficult to feel deliciously aroused by Facebook and co., but AI has grabbed the intoxication baton. The most grandiose claims are claims of sentience, including from AI engineers who undoubtedly have the ability to make the machines, but whose expertise in assessing their sentience is highly debatable. But in the digital age, extravagant assertions sell newspapers, make stocks shoot up, or bring fame, so it may not all be so surprising.

But I digress…

How does algorithmic technology create “knowledge” about qualitative aspects of life?

First, it collects and processes existing data from the social realm to create “knowledge”. It is important to understand that the original data collected is frequently incomplete, and often reflects the existing biases of the social milieu from where it is extracted. The idea that algorithms are neutral is sexy but false. Algorithms are a set of instructions that control the processing of data. They are only as good as the data they work with. So, I put the word “knowledge” in quotation marks to show that we have to scrutinise its meaning in this context, and use discrimination to examine what type of knowledge is created, what function it carries out, and whose interests it serves.

Algorithmic technology relies on computer-ready, quantified data. Computers are not equipped to handle the fuzziness of qualitative, relational, embodied, experiential data. But a lot of the data produced in the world every day is warm data. (Nora Bateson coined that term, by the way; check The Bateson Institute website to know more, it is well worth a read.) It is fuzzy, changing, qualitative, not clearly defined, and certainly not reducible to discrete quantities. But computers can only deal with quantities, discrete data bits. So, in order to be read by computers, the data collected needs to be cleaned and turned into “structured data”. What does “structured” mean? It means that it has to be transformed into data that can be read by computers; it needs to be turned into bits; it needs to be quantified.
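To make this concrete, here is a minimal, purely illustrative sketch in Python (all field names, scales and values are invented, not taken from any real system) of what “structuring” warm, qualitative data tends to look like: a rich, ambiguous account is squeezed into a handful of discrete, quantified fields, and whatever does not fit the schema simply disappears.

```python
# Purely illustrative sketch: a rich qualitative account is "structured" into
# discrete, computer-ready fields. All field names, scales and values are invented.

qualitative_account = (
    "I felt torn all week: proud of the project, anxious about my team, "
    "and oddly nostalgic after the call with my mother."
)

# Hypothetical structuring step: whatever the schema cannot hold is discarded.
structured_record = {
    "user_id": 48213,         # identity reduced to a number
    "sentiment_score": 0.1,   # mixed feelings collapsed into one float in [-1, 1]
    "stress_level": 3,        # "anxious" mapped onto a 1-5 ordinal scale
    "topic": "work",          # "family" and "nostalgia" fall outside the schema
}

# The structured record is all the algorithm will ever "know" of the account.
print(structured_record)
```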

This raises the question: how is unquantified data turned into quantified data? Essentially, through two processes.

The first one is called “proxying”. The logic is: “I can’t use X, so I will use a proxy for X, an equivalent”. While this sounds great in theory, it has two important implications. Firstly, a suitable proxy may or may not exist, so the relationship of similarity between X and its proxy may be thin. Secondly, someone has to decide which quantifiable equivalent will be used. I insist on the word “someone”, because it means that “someone” has to make that decision, a decision that is far from neutral, that is highly political, and that potentially carries many (unintended) social consequences. In many instances, those decisions are made not by the stakeholders who have a lived understanding of the context where the algorithmic technology will be applied, but by the developers of the technology, who lack such understanding.

Some examples of proxied data: assessing teachers’ effectiveness through their students’ test results; ranking “education excellence” at universities using SAT scores, student-teacher ratios, and acceptance rates (that’s what the editors at US News did when they started their university ranking project); evaluating an influencer’s trustworthiness by the number of followers she has (thereby creating unintended consequences, as described in the New York Times investigative piece “The Follower Factory”); using creditworthiness to screen potential new corporate hires. And more… Those examples come from a fantastic book by math-PhD-data-scientist turned activist Cathy O’Neil called “Weapons of Math Destruction”. If you don’t have the time or the inclination to read the book, Cathy also distills the essence of her argument in a TED talk, “The era of blind faith in big data must end”.
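As a minimal sketch of the logic of proxying (the function, numbers and scale below are hypothetical, not taken from O’Neil’s book), consider the first example above: the unmeasurable quality “teaching effectiveness” is replaced by a measurable stand-in, the change in students’ test scores, and the substitution quietly becomes the definition.

```python
# Illustrative sketch of proxying: the unmeasurable quality "teaching effectiveness"
# is replaced by a measurable stand-in, the average change in students' test scores.
# All names and numbers are hypothetical.

def effectiveness_proxy(scores_before, scores_after):
    """The proxy: mean test-score gain stands in for 'effectiveness'."""
    gains = [after - before for before, after in zip(scores_before, scores_after)]
    return sum(gains) / len(gains)

# "Someone" chose this proxy; the choice is far from neutral.
teacher_a = effectiveness_proxy([40, 45, 50], [48, 50, 58])   # struggling class
teacher_b = effectiveness_proxy([80, 85, 90], [82, 88, 93])   # well-prepared class

print(f"Teacher A: {teacher_a:.1f}  Teacher B: {teacher_b:.1f}")
# Mentoring, care, context: everything the proxy cannot see drops out of the "knowledge".
```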

While all of the above sounds like a lot of work, there is data that is just too fuzzy to be structured and too complex to be proxied. So the second way to treat unstructured data is quite simple: abandon it. Forget about it! It never existed. Job done, problem solved. While this is convenient, of course, it becomes clear that this leaves out A LOT of important information about the social, especially because a major part of the qualitative data produced in the social realm falls into this category. It also leaves out the delicate but essential qualitative relational data that weaves the fabric of living ecosystems. So in essence, after the proxying and the pruning of qualitative data, it is easy to see how the so-called “knowledge” that algorithms produce is a rather poor reflection of social reality.

But (and that’s a big but), algorithmic technology is very attractive, because it makes decision-making convenient. How so? By removing uncertainty (of course I should say, by giving the illusion of removing uncertainty). How so? Because it predicts the future (of course I should say, by giving the illusion of predicting the future). Algorithmic technology applied to the social is essentially a technology of prediction. Shoshana Zuboff describes this at length in her seminal book published in 2019, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”. If you do not have the stomach to read through the 500+ pages, just search “Zuboff Surveillance Capitalism”; you can find a plethora of interviews, articles and seminars she has given since the publication. (Just do me a favour and don’t use Google and Chrome to search, but switch to cleaner browsers like Firefox and search engines like DuckDuckGo.) She clearly and masterfully elucidates how Google’s and Facebook’s money machines rely on packaging “prediction products” that are traded on “behavioural futures markets”, which aim to erase the uncertainty of human behaviour.

There is a lot more to say on this (and I may do so in a later post), but for now, suffice it to say that just as the regenerative processes of nature are being damaged by mechanistic human activity, life-enhancing tacit ways of knowing are being submerged by the datafied production of knowledge. While algorithmic knowledge creation has a place and a usefulness, its widespread use overshadows and overwhelms more tacit, warm, qualitative, embodied, experiential, human ways of knowing and being. The algorithmisation of human experience is creating a false knowledge of the world (see my 3-minute presentation at TEDx in 2021).

This increasing lopsidedness is problematic and dangerous. Problematic because while prediction seems to make decision-making more convenient and efficient, convenience and efficiency are not life-enhancing values. Furthermore, prediction is not understanding, and understanding (or meaning-giving) is an important part of how we orient ourselves in the world. It is also problematically unfair because it creates massive asymmetries of knowledge and therefore a massive imbalance of power.

It is dangerous because while the algorithmic medium is indeed revolutionary, the ideology underlying it is dated and hazardous. The global issues and the potential for planetary annihilation that we are facing today arose from a reductionist mindset that sees living beings as machines and a positivist ideology that fundamentally distrusts tacit aspects of the human mind.

We urgently need a pendulum shift to rebalance algorithmically-produced knowledge with warm ways of knowing in order to create an ecology of knowledge that is conducive to the thriving of life on our planet.

Datafied. A Critical Exploration of Knowledge Production in The Digital Age (PhD)

This is a short abstract of my PhD research. I will post more details in the coming days and weeks.

I first look at the epistemological processes behind datafied knowledge and contrast them with the processes of tacit knowledge production. I extract five principles of tacit knowledge and contrast them with five principles of datafied knowledge, and I contend that datafied knowledge is founded on a reductionist ideology, a reductionist logic of knowledge production and reductionist data, and therefore produces a reductionist type of knowledge. Instead of helping us to understand the world we inhabit in more systemic, holistic and qualitative ways, it relies essentially on quantitative, disembodied, computationally structured, computer-ready data and algorithmically optimised processes.

Through the filter of Walter Benjamin’s work “The Arcades Project”, I argue that datafication (defined as the quantification of the qualitative aspects of human experience) is a Phantasmagoria, a dream image, a myth, a social experience anchored in a culture of commodification. The digital production of knowledge is supported by a need to reduce uncertainty and increase productivity and efficiency. It essentially serves a predictive purpose. It does not help us to understand the intricate, beautiful, fragile, qualitative, embodied experience of being alive in a deeply interconnected and interdependent world, an experience that, to a great extent, defines humaneness and life in general. In this sense, datafied knowledge is hostile to life.

Finally, I call for a rebalance between tacit and datafied ways of knowing, and a shift to a more regenerative ecology of knowledge based on the principles of living systems.

Helene Liu – PhD Thesis Visual Map

Feminine & Masculine Ways of Knowing – A Deep Imbalance

The following post is inspired by Safron Rossi’s interview on her book about Carl Jung’s views and influence on modern astrology. In the interview, she says:

“One way to approach this point (Jung’s unique contribution) is why is Jung’s work significant in the field of psychology. And for me, I would say that it has to do with the way he attempted to meld together the wisdom of the past with modern psychological understanding and methods of treatment.

The Jung psychology is one that grows organically from traditional understandings, particularly in the realms of spirituality, religion, mythology, and comparative symbolism. And in an era where psychology was becoming increasingly behavioural and rationalistic, Jung insisted on the importance of a spiritual life because that has been the core of the human experience from time immemorial. Why all of a sudden would the spiritual life really not be so important? It’s a really big question.”

What she mentions is central to the argument of my PhD. Suddenly, in the 19th century, at the time of the Industrial Revolution, the tacit experience and understanding of living became not so important, or rather, not so reliable as a way of knowing. The belief that emotions cloud the (rational) mind and that the machine is more reliable than humans because it has no messy emotions became the mainstream ideology.

But tacit knowing (i.e. the qualitative knowing that results from embodied experience, which can also be called intuitive knowing) is a fundamentally feminine way of knowing. With the Industrial Revolution, it was instead replaced by faith in masculine ways of knowing, so-called scientific but in fact more “mechanistic” than “scientific”.

Yet, as Michael Polanyi argues in his books Personal Knowledge (1958) and The Tacit Dimension (1966), tacit knowing is fully part of science. What I call the statistical mindset is a reductionist, mechanistic way of knowing that has faith solely in mechanistic, explicit and, importantly, measurable knowledge.

Here, Rossi says that Carl Jung gave (feminine) tacit knowing a place in modern psychology at a time (the era of the industrial revolution) when disciplines such as psychology and sociology were overwhelmed by the statistical mindset that values measurability above all. Examples of this are the behavioural school in psychology and, in sociology, Auguste Comte and positivism.

In Europe, the 19th century was the century when women were believed to be too irrational to make important decisions (like voting, for example), and it was also the century when purely statistical, measurable pseudo-sciences (e.g., the dark science of eugenics) were born; it was the time when the factory line became the model for everything: mass production, but also the health system, the economy, psychology, education and so on.

It is important to realise that the rationalisation of the social sciences was not in and of itself a “bad” thing. In a way, it was also a way to bring some degree of rigour to the field and, more importantly, to experiment with what can and cannot be measured. Walter Benjamin talked about the Phantasmagoria of an age, i.e., the belief system that underlies the development of thought during that period of time. Measuring, fragmenting the whole into parts, analysis, and control over the environment were all part of the phantasmagoria of the Industrial Revolution and the Modern Age. All disciplines went through this prism (including Design; I may do a post on this later). Jung melded WISDOM into MODERN PSYCHOLOGY, which was very unusual at the time.

Statistical knowledge is predictive knowledge. We use statistics to know something about the future, like the likelihood of a weather event, or market movements, or usage of public transport. It is the best knowledge we have to OPTIMISE, when the values of EFFICIENCY and convenience are primordial (as in urban or business planning, for example). It is founded on the masculine-principle trait of linear logic (if A and B, then C), and on the equally masculine-principle trait of goal orientation (Jung’s definition of masculinity: know what you want and how to go and get it).

This is not in and of itself bad or good; there is no value judgement here. Again, it is not a matter of superiority (which is a masculine concept, i.e., fragmenting and analysing by setting up hierarchies), but of BALANCE. Today, we live in a world (more specifically, the geographies at the centre of power) where feminine ways of knowing, which emphasise regeneration, intuitive insights, collaboration, inter-dependencies and relationality, are not trusted and are suppressed, often in the name of science.

Living systems function on the principles of feminine ways of knowing. But it is not really science itself that smothers feminine ways of knowing, it’s the reductionist mechanistic mindset (and the values of efficiency and optimisation) which is applied to areas of life and of living experience where it has nothing to contribute.

As I argue in the PhD, while digital technologies are indeed revolutionary in terms of the MEDIUM they created (algorithmic social platforms), from the point of view of the belief system that underlies them, they in fact perpetuate an outdated mindset (described above) which serves the values of efficiency and optimisation with a disregard for life.

Web 3.0 Data Ownership, Solution to the Excesses of the Data Economy?

There is much hope at the moment that web 3.0 will provide solutions to the problems brought about by the data economy (by the way, I just realised that with just one sleight of hand, hope becomes hype, and vice-versa). Web 3.0 proposes that users own their own data, instead of leaving it to other actors to use freely. The reasoning is that they can then decide what they want to do with that data, and who they want to release it to and when. We often hear the expression “paradigm shift” when it comes to Web 3.0. Is it? It proposes to solve the issues of surveillance capitalism by shifting data ownership from companies to the users themselves (i.e. users own their own data and the problem will be solved). But are we in fact trying to solve problems with the same tools that created them in the first place?

Karl Polanyi in The Great Transformation (1944) explored how capitalism creates fictitious commodities. Capitalism commodifies nature into land, human activity into work, and exchange into money. Nature, human activity and exchange are not tradable. Land, work and money are. The word “fictitious” is important here. It suggests that commodification creates tradable products out of something that is not tradable. In the 19th and 20th centuries, industrial capitalism commodified nature, the environment we live in. In the 21st century, surveillance capitalism is commodifying human life.

We have the environmental problems we have today because, fundamentally, we see nature as an object to be grabbed, sold and exploited. Similarly, human life today is grabbed (i.e., datafied), traded or used as raw material to create valuable behavioural products that are traded on behavioural markets for profit (see Zuboff, “The Age of Surveillance Capitalism”). The concept of ownership and property is solidly anchored in the capitalist idea that everything out there can be owned and turned into a tradable commodity. First, during the Industrial Revolution, it was nature that was divided, sold and exploited for its resources. Today, under surveillance capitalism, it is human life. Two different objects, but the same process. When we talk about paradigm shift, we need to explore whether the avenues we are embarking on right now (such as ownership of one’s own data) truly represent a paradigmatic shift or whether we need to review our assumptions.

Furthermore, users’ ownership of their own data is a neat idea in principle, but its application raises many complex questions, because real life is not neat. Ownership of data is not a clear-cut category. The concept of ownership is structured (a yes-or-no proposition); life is not. Ownership does not necessarily provide the type of structure that accurately reflects life. There is a large dimension of life that happens outside of this paradigm.

First, having ownership of our own data does not mean that we will have the wisdom to use it well. We have been trained and conditioned for the past 20 years to value convenience above all other things when it comes to using digital technologies. But convenience is not the value most conducive to sustainability. It is more convenient to throw garbage through the window rather than recycle, but in doing so, we create an ecological crisis. By choosing convenience in our digital lives, we also create an ecological crisis. How many of us prefer to visit a website rather than use an app (apps have many more prying capacities)? How many of us take the time to change our phone settings to increase privacy, to review them regularly, or to delete apps downloaded once and never used again? How many of us read through privacy policies? How many of us just click yes out of convenience when a website asks us whether to accept all cookies (instead of spending a few minutes customising them)? Not that many. So when given the choice between releasing all data or customising, how many people would actually take the time to choose which data to release and which not to?

Also, releasing one’s data “according to what’s needed” presupposes that we understand very clearly how it is being used, what the consequences of releasing it are, and what is truly needed and what is not. Say I own the data I produce and I can choose to release it or not. That does not solve the issue of what is done with aggregated data once it is released. If that data is transformed slightly in one way or another, is it still mine? Or can someone else trade it or turn it into a product to be traded?

There are tricky questions that pertain to the ambiguous nature of the digital terrain. How do we treat ownership of metadata (the data about data)? How do we treat data that is not about a person but about a group of people, or communities? Who owns what in this case, and who decides? And who decides who decides? Who owns data that is recorded by someone but includes someone else (police patrols, for example, or when I post a photo of myself on Instagram but my friends are also in it)? And what happens to the zillions of terabytes of data that are already “out there”, irretrievable? How do we put in place those infrastructures against the backdrop of a probable huge pushback from those who benefit from the data economy? And how do we make sure that the data released is used to perform what it is supposed to perform and not used in another way? Blockchain mechanisms promise absolute certainty and privacy, but this also presupposes that absolutely everything happens on the blockchain.

How, as a caring society, do we protect the vulnerable? How about children? Or those who are not digitally literate (probably 99% of the world, because knowing how to use a smartphone does not equate to digital and technical literacy and awareness)? How about those who live at the fringe of society or at the fringe of the power centres of the world? It’s all very good to say that all our data is in a little box on our phone, but that presupposes that all have physical access to it, and the means to get the phone and the little box. Do we think about this from the point of view of an Indian mother in a village, or a Mexican child, or anyone who is not part of the 1%, or are we (AGAIN) going to develop the next-generation technology through the eyes of a white male from a developed country?

Then, there is the essential question of translation. The digital is a translation of real life; it is not real life itself. It is just a map. From the beginning of AI, data science has been trying to create a language that could adequately reflect life, but so far it has not succeeded. For historical and technical reasons, the digital language that is used today has been developed along the lines of information theory. Information theory is based on Shannon’s linear communication model. Humans, and life in general, do not communicate like this. The digital has not been able to domesticate and integrate tacit knowledge. This is seen when data science uses proxies for aspects of life that cannot be turned into discrete data, such as using US zip codes as a proxy for wealth or education.

Furthermore, data is not information. Data is a way to classify. Classifications and standards are imbricated in our lives. They operate invisibly, but they create social order (Bowker & Star). Despite all the hype (and the hope) about the digital revolution, the digital is still trying to fit the messiness of life into the clear-cut categories of the linear world of the industrial revolution. Data creates classifications, but data is not information. The enterprise of datafication (i.e., turning human life into discrete computer-ready data) is essentially a reductionist enterprise; it does not create real knowledge but, as Bernard Stiegler once put it, “stupid” knowledge. This is the issue with algorithms today. Ownership of data does not address the fundamental fact that datafication creates a world that is not fit for humans, because it denies and destroys that which makes us human, i.e., tacit knowing.

Finally, as mentioned above, datafication is a process of commodification of human life. For all the benefits of Web 3.0, the decentralised blockchain-based web anchors this process even more strongly into the fabric of society.

TEDx Open Mic Follow Up: What Can we Do?

At the end of the open mic yesterday, you asked me a really important question and I do not think that my on-the-spot answer was explicit enough. The question was: “so what can we do?” It is a complex question. I thought about it last night and today and I would like to add a few more words which I hope will shed a clearer light on this complex matter. Here are some avenues for possible answers.

One way to think about an answer is to look at two possible levels of action: the systemic (policy) level and the individual level. At the policy level, regulation is coming. The EU has been the most aggressive so far (GDPR), but it will take time, because this phenomenon is complex and unprecedented, which means that at the moment there are no suitable laws to frame it and we do not even really understand how it works. Regulation will also likely be watered down by powerful networks of influence. In the US, the fact that Facebook and Google WANT the Federal government to come up with regulation clearly shows that they are confident they have the power to lobby and influence the end result.

However, an increasing number of smart voices are putting forward creative propositions that could easily and quickly be put into action. One of them, for example, is Paul Romer (co-recipient of the Nobel Memorial Prize in Economic Sciences in 2018), who advocates a Digital Tax (a digital ad tax) as an incentive for good behaviour. Compelling initiatives are coming from the arts world as well. Adam Harvey has done great work revealing the hidden datasets that feed the rise of AI-driven facial recognition. Manuel Beltrán’s Cartographies of Dispossession discloses the forms of systematic data dispossession. Taken individually, none of those propositions will make things right, but they all contribute to creating a more sustainable system.

The other level of action is individual; here we ask the question: “what can I do?”

As I said yesterday, I think that right now what is most urgently required is for us to become aware of what this all really means. The different debates around digital platform technology at the moment (privacy, fake news, misinformation, anti-trust, etc.) are all parts of the same whole. The datafication of human experience is not only a technological issue; it is not only a social, economic or political issue; it is an ecological issue. Which means that we are dealing with a complex system. Complex systems present dilemmas rather than problems. Because they do not lend themselves readily to linear solutions, they ask for a change of mindset. They need to be tackled from different angles at the same time; they need time, flexibility and vision; and they demand that we do something that humans usually find most challenging: change our existing patterns of behaviour.

We need to change our behaviours. How? To be honest, as users, at the moment, we do not have much leverage. The digital universe we live in has emerged from a legal void and has largely been shaped by the major actors of the digital economy to serve their interests. We can’t opt out of the terms and conditions of the social platforms we use every day and keep using their services. Behavioural economics has revealed what psychologists have known about human nature for a long time: as emotional beings, we are easily manipulated. Behavioural economist Richard Thaler (2017 Nobel Prize recipient) and Cass Sunstein call this nudging and wrote a book on the topic. For the past 30 years or so, BJ Fogg, from the Behaviour Design Lab at Stanford University, has been teaching students how to use technology to persuade people. He calls this discipline by a really interesting name: “captology”. Today, captology helps keep users captive.

However, we are not helpless. We do not have a wide array of choices, but it does not mean we have none. We do have one power: the power of voting with our feet. This means we need to change our behaviours. To say “I can’t leave this platform because everyone is there” is the digital equivalent of saying “I will start recycling when the planet is clean”. Google is not the only search engine (try DuckDuckGo), Chrome not the only browser (try Firefox), Gmail not the only email provider (try ProtonMail), WhatsApp not the only messaging app (try Signal or Telegram).

We also need to seriously (SERIOUSLY) reassess the personal values that underlie our consumption of digital technologies.

It’s convenient. We are creatures of habit, so convenience has been baked into the design of social tech to make us complacent and lazy. But convenience is not a value that yields the greatest results in terms of ecological sustainability. Today, we understand that our patterns of consumption (food, clothing, etc.) affect our environment. So, while it would be more convenient to throw garbage out of the window, we recycle, despite the effort required. As informed and conscious consumers, we take great pains to consume consciously. And in doing so, we influence the companies that create the products we consume. Why don’t we adopt the same behaviours when it comes to the digital?

It’s free. Would you really expect to go to the supermarket, pile food up in a trolley and leave without paying a cent? Would you find it completely natural to enter a [Prada] shop (fill in the name of your preferred brand), pick up a few handbags, a jacket or two and some small leather goods, and leave with a dashing smile on your face and your credit card safely in your bag? Last time I checked, those behaviours were called “stealing” and were punished by law. As a rule of thumb, we need to remember the most important theorem of the digital age: “when it’s free, it’s not free”. Plus, to go back to the environment analogy, we also considered nature a free resource to be pilfered for our own profit. See how well we did with that? Just to put things in perspective, a paid account with the most secure email service in the world, ProtonMail, costs US$50 a year. This is what you would spend on 8 mocha Frappuccinos at Starbucks (and ProtonMail is much better for your health). So don’t be shy, pay for sustainable, clean technology! This requires a major change of mindset, but we will all be better off in the end.

In his book “WTF? What’s the Future and Why It’s Up to Us”, Tim O’Reilly says that the master algorithm encoded by the targeted advertising business is optimised to be hostile to humanity. So one last thought. Today, we are still in the social media era, but what about tomorrow? The technologies in the making carry with them a level of intensity, a potential for behaviour modification and control, and a possibility for destruction unequalled in the history of humanity (see Jaron Lanier). It took us 60 years to wake up to the slaughtering of our natural environment; we won’t be given so much time to react to the slaughtering of human experience.
