Datafication, Phantasmagoria of the 21st Century


Algorithmic Technology, Knowledge Production (And A Few Comments In Between)

So, digital technologies are going to save the world.

Or are they?

Let’s have a no-nonsense look at how things really work.

A few comments first.

I am not a Luddite.

[Just a side comment here: the Luddites were English textile workers in the 19th century who reacted strongly against the mechanisation of their trade, which put them out of work and left them unable to support their families. Today, they have become the poster child of anti-progress, anti-technology grumpy old bores, and “you’re a Luddite” is a common insult directed at techno-sceptics of all sorts. But the Luddites were actually behaving quite rationally. Many people in the world today react in a similar fashion in the face of the economic uncertainty brought about by technological change.]

That being said, I am not anti-technology. I am extremely grateful for the applications of digital technology that help make the world a better place in many ways. I am fascinated by the ingenuity and the creativity displayed in the development of technologies to solve puzzling problems. I also welcome the fact that major technological shifts have brought major changes in how we live in the world. This is unavoidable; it is part of the impermanent nature of our worlds. The emergence of the new is to be welcomed rather than fought against.

But I am also a strong believer in using discrimination to try to make sense of new technologies, and to critically assess their systemic impact, especially when they have become the object of such hype. The history of humanity is paved with examples of collective blindness. We can’t consider ourselves immune to it.

The focus of my research (and of this post) is Datafication, i.e., the algorithmic quantification of purely qualitative aspects of life. I mention this because there are many other domains that comfortably lend themselves to quantification.

I am using simple vocabulary in this post. This is on purpose, because words can be deceiving. Names such as Artificial Intelligence (AI) or Natural Language Processing (NLP) are highly evocative and misleading, suggesting human-like abilities. There is so much excitement and fanfare around them that it’s worth going back to basics and calling a cat a cat (or a machine a machine). There is a lot of hype around whether AI is sentient or could become sentient, but as of today, there are many simple tasks that AI cannot perform satisfactorily (recognising a non-white-male face, for one), not to mention the deeper issues that plague it (bias in the data used to feed algorithms, the illusory belief that algorithms are neutral, the lack of accountability, the data surveillance architectures… just to name a few). It is just too easy to discard these technical, political and social issues in the belief that they will “soon” be overcome.

But hype time is not a time for deep reflection. If the incredible excitement around ChatGPT (despite repeated urgings of caution from its founder) is any indication, we are living through another round of renewed collective euphoria. A few years ago, the object of this collective rapture was social media. Today, knowing what we know about the harms they create, it is becoming more difficult to feel deliciously aroused by Facebook and co., but AI has grabbed the intoxication baton. The most grandiose claims are claims of sentience, including from AI engineers who undoubtedly have the ability to make the machines, but whose expertise in assessing their sentience is highly debatable. But in the digital age, extravagant assertions sell newspapers, make stocks shoot up, or bring fame, so perhaps it is not so surprising.

But I digress…

How does algorithmic technology create “knowledge” about qualitative aspects of life?

First, it collects and processes existing data from the social realm to create “knowledge”. It is important to understand that the original data collected is frequently incomplete, and often reflects the existing biases of the social milieu from which it is extracted. The idea that algorithms are neutral is sexy but false. Algorithms are a set of instructions that control the processing of data. They are only as good as the data they work with. So, I put the word “knowledge” in quotation marks to show that we have to scrutinise its meaning in this context, and use discrimination to examine what type of knowledge is created, what function it carries out, and whose interests it serves.

Algorithmic technology relies on computer-ready, quantified data. Computers are not equipped to handle the fuzziness of qualitative, relational, embodied, experiential data. But a lot of the data produced in the world every day is warm data. (Nora Bateson coined that term, by the way; check The Bateson Institute website to know more, it is well worth a read.) It is fuzzy, changing, qualitative, not clearly defined, and certainly not reducible to discrete quantities. But computers can only deal with quantities, discrete data bits. So, in order to be read by computers, the data collected needs to be cleaned and turned into “structured data”. What does “structured” mean? It means the data has to be transformed into a form computers can read: it needs to be turned into bits; it needs to be quantified.
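As a toy illustration (every field name, category and answer below is invented for the example), here is roughly what that transformation looks like: a warm, qualitative answer goes in, and computer-ready, discrete values come out.

```python
# A hypothetical illustration of "structuring": a warm, qualitative
# answer is collapsed into computer-ready, discrete values.
# All field names and categories are invented for this example.

raw_answer = (
    "I mostly feel supported by my team, though since the reorganisation "
    "I sometimes hesitate to speak up in meetings."
)

# What the computer eventually stores: discrete, quantified bits.
structured_record = {
    "sentiment_score": 0.6,   # a number standing in for a feeling
    "engagement_level": 3,    # a 1-5 scale standing in for a relationship
    "flagged_keywords": ["reorganisation", "hesitate"],
}

# The hesitation, the "though", the history behind it (the warm data)
# has no column to live in, so it is silently dropped.
print(structured_record)
```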

So this raises the question: how is unquantified data turned into quantified data? Essentially, through two processes.

The first one is called “proxying”. The logic is: “I can’t use X, so I will use a proxy for X, an equivalent”. While this sounds great in theory, it has two important implications. Firstly, a suitable proxy may or may not exist, so the relationship of similarity between X and its proxy may be thin. Secondly, someone has to decide which quantifiable equivalent will be used. I insist on the word “someone” because it means that someone has to make that decision, a decision that is far from neutral, that is highly political, and that potentially carries many unintended social consequences. In many instances, those decisions are made not by the stakeholders who have a lived understanding of the context where the algorithmic technology will be applied, but by the developers of the technology, who lack such understanding.

Some examples of proxied data: assessing teachers’ effectiveness through their students’ test results; ranking “education excellence” at universities using SAT scores, student-teacher ratios, and acceptance rates (that’s what the editors at US News did when they started their university ranking project); evaluating an influencer’s trustworthiness by the number of followers she has (thereby creating unintended consequences, as described in the New York Times investigative piece “The Follower Factory”); using creditworthiness to screen potential new corporate hires. And more… Those examples come from a fantastic book by the math-PhD-data-scientist turned activist Cathy O’Neil called “Weapons of Math Destruction”. If you don’t have the time or the inclination to read the book, Cathy also distills the essence of her argument in a TED talk, “The era of blind faith in big data must end”.
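To make proxying concrete, here is a minimal sketch loosely inspired by the teacher-scoring example above. The formula and numbers are invented for illustration (real value-added models are far more elaborate), but the logic is the same: a quantifiable stand-in replaces the quality it claims to measure.

```python
# A minimal, invented sketch of "proxying": teacher effectiveness is
# replaced by a quantifiable stand-in, average student test-score gains.

def teacher_effectiveness_proxy(scores_before, scores_after):
    """Score a teacher by the average test-score gain of their students."""
    gains = [after - before
             for before, after in zip(scores_before, scores_after)]
    return sum(gains) / len(gains)

# Two classrooms: the numbers say nothing about mentoring, trust,
# curiosity sparked, or a difficult year in a student's life.
print(teacher_effectiveness_proxy([60, 70, 55], [65, 72, 58]))  # labelled "effective"
print(teacher_effectiveness_proxy([80, 85, 90], [79, 84, 91]))  # labelled "ineffective"
```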

While all of the above sounds like a lot of work, some data is just too fuzzy to be structured and too complex to be proxied. So the second way to treat unstructured data is quite simple: abandon it. Forget about it! It never existed. Job done, problem solved. While this is convenient, of course, it clearly leaves out A LOT of important information about the social, especially because a major part of the qualitative data produced in the social realm falls into this category. It also leaves out the delicate but essential qualitative relational data that weaves the fabric of living ecosystems. So in essence, after the proxying and the pruning of qualitative data, it is easy to see how the so-called “knowledge” that algorithms produce is a rather poor reflection of social reality.

But (and that’s a big but), algorithmic technology is very attractive because it makes decision-making convenient. How so? By removing uncertainty (of course, I should say: by giving the illusion of removing uncertainty). How so? Because it predicts the future (again, I should say: because it gives the illusion of predicting the future). Algorithmic technology applied to the social is essentially a technology of prediction. Shoshana Zuboff describes this at length in her seminal 2019 book, “The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power”. If you do not have the stomach to read through the 500+ pages, just search “Zuboff Surveillance Capitalism”; you will find a plethora of interviews, articles and seminars she has given since its publication. (Just do me a favour: don’t use Google and Chrome to search, but switch to cleaner browsers like Firefox and search engines like DuckDuckGo.) She clearly and masterfully elucidates how Google’s and Facebook’s money machines rely on packaging “prediction products” that are traded on “behavioural futures markets” which aim to erase the uncertainty of human behaviour.

There is a lot more to say on this (and I may do so in a later post), but for now, suffice it to say that just as the regenerative processes of nature are being damaged by mechanistic human activity, life-enhancing tacit ways of knowing are being submerged by the datafied production of knowledge. While algorithmic knowledge creation has its place and usefulness, its widespread use overshadows and overwhelms more tacit, warm, qualitative, embodied, experiential, human ways of knowing and being. The algorithmisation of human experience is creating a false knowledge of the world (see my 3-minute presentation at TEDx in 2021).

This increasing lopsidedness is problematic and dangerous. Problematic because while prediction seems to make decision-making more convenient and efficient, convenience and efficiency are not life-enhancing values. Furthermore, prediction is not understanding, and understanding (or meaning-giving) is an important part of how we orient ourselves in the world. It is also profoundly unfair, because it creates massive asymmetries of knowledge and therefore a massive imbalance of power.

It is dangerous because while the algorithmic medium is indeed revolutionary, the ideology underlying it is dated and hazardous. The global issues and the potential for planetary annihilation that we are facing today arose from a reductionist mindset that sees living beings as machines and a positivist ideology that fundamentally distrusts tacit aspects of the human mind.

We urgently need a pendulum shift to rebalance algorithmically-produced knowledge with warm ways of knowing in order to create an ecology of knowledge that is conducive to the thriving of life on our planet.

Airports at Christmas: Why AI Cannot Rule The World

It is the week leading to Christmas. I’m at the airport waiting for someone to arrive, and as I observe what’s happening here, I can’t help thinking about the place we have allowed digital technology to take in our lives. In 2022, AI pervades decision-making in all areas of human experience. What this means is that the deepest qualitative dimensions of being alive on this planet are being reduced to computer data; those data are then fed to algorithms, designed by computer scientists, which have become the ultimate decision-makers in how life is lived on planet Earth.

My contention is that the blind faith that we, the “moderns”, have in algorithms and in what we call AI (often without really knowing what that means) is misplaced. There is a place for algorithmic decision-making, but we need to learn to value the qualitative, embodied, experiential dimension of being alive in a human body, with a human mind.

To understand why AI cannot rule the world, go to the arrivals level of an airport at Christmas time, and observe. See the reunions between people who love each other, who have missed each other; the smiles on their faces; the tears of joy at finding each other after weeks, or months, or even years of absence; the excitement, the laughter, the warm hugs… And you will realise why the cold logic of AI cannot capture the reality of the experience of being human, of life.

I have little patience for those who profess that the laws of pure logic rule the social and that we can sort everything out with cold data. What about the rich, warm, relational dimension of being human? Those people go around claiming that logic and science are all we need, but the irony is that they fail to see that they are themselves surrounded by networks of other persons who provide love, care and warm attention.

Feminine & Masculine Ways of Knowing – A Deep Imbalance

The following post is inspired by Safron Rossi’s interview on her book about Carl Jung’s views and influence on modern astrology. In the interview, she says:

“One way to approach this point (Jung’s unique contribution) is why is Jung’s work significant in the field of psychology. And for me, I would say that it has to do with the way he attempted to meld together the wisdom of the past with modern psychological understanding and methods of treatment.

The Jung psychology is one that grows organically from traditional understandings, particularly in the realms of spirituality, religion, mythology, and comparative symbolism. And in an era where psychology was becoming increasingly behavioural and rationalistic, Jung insisted on the importance of a spiritual life because that has been the core of the human experience from time immemorial. Why all of a sudden would the spiritual life really not be so important? It’s a really big question.”

What she mentions is central to the argument of my PhD. Suddenly, in the 19th century, at the time of the Industrial Revolution, the tacit experience and understanding of living became not so important, or rather, not so reliable as a way of knowing. The belief that emotions cloud the (rational) mind, and that the machine was more reliable than humans because it had no messy emotions, became the mainstream ideology.

But tacit knowing (i.e., the qualitative knowing that results from embodied experience, which can also be called intuitive knowing) is a fundamentally feminine way of knowing. With the Industrial Revolution, it was instead replaced by faith in masculine ways of knowing, so-called scientific, but in fact more “mechanistic” than “scientific”.

As Michael Polanyi argues in his books Personal Knowledge (1958) and The Tacit Dimension (1966), tacit knowing is fully part of science. What I call the statistical mindset is a reductionist, mechanistic way of knowing that has faith solely in mechanistic, explicit and, importantly, measurable knowledge.

Here, Rossi says that Carl Jung gave (feminine) tacit knowing a place in modern psychology at a time when disciplines such as psychology and sociology were overwhelmed by the statistical mindset that values measurability above all. Examples of this are the behavioural school in psychology and, in sociology, Auguste Comte and positivism.

In Europe, the 19th century was the century when women were believed to be too irrational to make important decisions (like voting, for example), and it was also the century when purely statistical, measurable pseudo-sciences (e.g., the dark science of eugenics) were born; it was the time when the factory line became the model for everything: mass production, but also the health system, the economy, psychology, education, etc.

It is important to realise that the rationalisation of the social sciences was not in and of itself a “bad” thing. In a way, it was also a way to bring some degree of rigour to the field and, more importantly, to experiment with what can and cannot be measured. Walter Benjamin talked about the phantasmagoria of an age, i.e., the belief system that underlies the development of thought during that period of time. Measuring, fragmenting the whole into parts, analysis, control over the environment: these were all part of the phantasmagoria of the Industrial Revolution and the Modern Age. All disciplines passed through this prism (including Design; I may do a post on this later). Jung melded WISDOM into MODERN PSYCHOLOGY, which was very unusual at the time.

Statistical knowledge is predictive knowledge. We use statistics to know something about the future, like the likelihood of a weather event, or market movements, or the usage of public transport, etc. It is the best knowledge we have to OPTIMISE, when the values of EFFICIENCY and convenience are primordial (as in urban or business planning, for example). It is founded on the masculine principle trait of linear logic (if A and B, then C), and on the equally masculine principle trait of goal orientation (Jung’s definition of masculinity: know what you want and how to go and get it).

This is not in and of itself bad or good, there is no value judgement here. Again, it is not a matter of superiority (which is a masculine concept, i.e., fragmenting and analysing by setting up hierarchies), but of BALANCE. Today, we live in a world (more specifically, the geographies at the centre of power) where feminine ways of knowing, which emphasise regeneration, intuitive insights, collaboration, inter-dependencies and relationality are not trusted and are suppressed, often in the name of science.

Living systems function on the principles of feminine ways of knowing. But it is not really science itself that smothers feminine ways of knowing, it’s the reductionist mechanistic mindset (and the values of efficiency and optimisation) which is applied to areas of life and of living experience where it has nothing to contribute.

As I argue in the PhD, while digital technologies are indeed revolutionary in terms of the MEDIUM they created (algorithmic social platforms), from the point of view of the belief system that underlies them, they in fact perpetuate an outdated mindset (described above) which serves the values of efficiency and optimisation with a disregard for life.

Datafied. A Critical Exploration of the Production of Knowledge in the Age of Datafication

This is the abstract of my PhD, submitted in August 2022.

As qualitative aspects of life become increasingly subjected to the extractive processes of datafication, this theoretical research offers an in-depth analysis of how these technologies skew the relationship between tacit and datafied ways of knowing. Given the role tacit knowledge plays in the design process, this research seeks to illuminate how technologies of datafication are impacting designerly ways of knowing and what design can do to recalibrate this imbalance. In particular, this thesis is predicated on four interrelated objectives: (1) to understand how the shift toward the technologies of datafication has created an overreliance on datafied (i.e., explicit) knowledge; (2) to comprehend how tacit knowledge (i.e., designerly ways of knowing) is impacted by this increased reliance; (3) to critically explore technologies of datafication through the lens of Walter Benjamin’s work on the phantasmagoria of modernity; and (4) to discover what design can do to safeguard, protect and revive the production of tacit knowledge in a world increasingly dominated by datafication.

To bring greater awareness to what counts as valid knowledge today, this research begins by identifying the principles that define tacit knowledge and datafied ways of knowing. By differentiating these two processes of knowledge creation, this thesis offers a foundation for understanding how datafication not only augments how we know things, but also actively directs and dominates what we know. This research goes on to examine how this unchecked faith in datafication has led to a kind of 21st-century phantasmagoria, reinforcing the wholesale belief that technology can be used to solve some of the most perplexing problems we face today. As a result, more tacit processes of knowledge creation are increasingly being overlooked and side-lined. To conclude, this discussion offers insights into how the discipline of design is uniquely situated to create a more regenerative relationship with technology, one that supports and honours the unique contributions of designerly ways of knowing.

Fundamental principles framing Grounded Theory are used as a methodological guide for structuring this theoretical research. Given the unprecedented and rapid rate at which technology is being integrated into modern life, this methodological framework provided the flexibility needed to accommodate the evolving contours of this study while also providing the necessary systematic rigour to sustain the integrity of this PhD.

Keywords: datafication, tacit knowledge, phantasmagoria, regeneration, ecology of knowledge

Chris Jones – Designing Designing

A few words from John Thackara (who wrote the afterword to Chris Jones’s “Designing Designing”) on Chris Jones’s mission and philosophy (the full post can be found on Thackara’s blog):

As a kind of industrial gamekeeper turned poacher, Jones went on to warn about the potential dangers of the digital revolution unleashed by Claude Shannon.

Computers were so damned good at the manipulation of symbols, he cautioned, that there would be immense pressure on scientists to reduce all human knowledge and experience to abstract form.

Technology-driven innovation, Jones foresaw, would undervalue the knowledge and experience that human beings have by virtue of having bodies, interacting with the physical world, and being trained into a culture.

Jones coined the word ‘softecnica’ to describe ‘a coming of live objects, a new presence in the world’. He was among the first to anticipate that software, and so-called intelligent objects, were not just neutral tools. They would compel us to adapt continuously to fit new ways of living.

In time Jones turned away from the search for systematic design methods. He realized that academic attempts to systematize design led, in practice, to the separation of reason from intuition and failed to embody experience in the design process.

All of the above rings very true today: the reductionist approach to knowledge, the general disdain for the richness of human knowledge and experience, the widespread contempt for embodied knowledge, the radical separation of reason and intuition, the hidden shaping of a new belief system around the superiority of rational machines, the invisible but violent bending of human-friendly ways of living to fit machine-dominated new ways of living.

Regulation & Regeneration

In the context of an economic environment deficient in self-regulation (also called wisdom), is there a space for outer regulation in Regenerative spaces?

This question was triggered by Musk’s purchase of Twitter. In regenerative communities, we like to use metaphors from nature. So consider the take-over of a global platform that carries a massive chunk of the global public debate, whose algorithms are opaque, and which is known to influence the results of elections, by a self-professed libertarian billionaire who has clearly indicated that he wants to restore “free speech” (whatever that means) on the platform and who is known to use it for self-serving purposes. It is a bit like a human-produced toxic algae bloom spreading over living water habitats and killing all life. No metaphor seems quite strong enough to qualify the initiative appropriately.

So, in this context, I was wondering about regulation. Living systems, when left to their own devices, self-regulate. This is what I would call “inner regulation”, or in human terms, “wisdom”. I don’t think it would be overly pessimistic to say that inner regulation is found in (very) limited quantities right now in our social and economic environments.

So what about outer regulation? There are many ways outer regulation functions, from traditional prescriptive approaches to softer ones that involve sway and incentives. Design as a discipline employs the latter all the time. I think this is an important discussion to have in the context of a community focused on Regenerative Economics, because many projects start with the best of intentions and fall prey to unintended consequences.

And I am also interested to hear from those of us who have direct experience in designing regulatory frameworks for those complex systems that are purpose-driven online communities. Do we combine incentives for inner regulation with outer regulation, and if so, how? Do we leave it to the invisible hand? I would love to hear different voices chime in on this topic.

Web3 Analysis by Moxie Marlinspike

A must-read blog post by Moxie Marlinspike, founder of Signal, sharing his thoughts on Web3.

The basic argument is that although the Web3 concept is about decentralising the internet away from platforms, in practice it has simply reverted to Web2 (the centralised internet) with only superficial trappings of decentralisation.

His points:
1) Blockchain and “crypto” (as it is now commonly used to mean blockchain/cryptocurrency rather than the original meaning, “cryptography”, a.k.a. encryption) are discussed in terms of “distributed”, “trustless” and “leaderless”. One might think that this means that every USER involved is a peer in the chain. But in practice it’s not about USERS, it’s about SERVERS. The distributed nature is based on SERVERS, not on what Moxie calls “clients” (a.k.a. YOUR computer, YOUR phone, YOUR device). So the blockchain concept is supposed to follow distributed, trustless and leaderless methods between SERVERS. The problem is that your phone is not a server. Your computer is not a server. Your devices are not servers. All of your devices are END-USER devices. Very few people will actually set up, run and maintain their own server: it is difficult, requires technical knowledge, is time-consuming, and costs money.

So what actually ends up happening is that the whole interface of Web3 turns into: Blockchain <-> Servers <-> End-user client devices. And the problem with Web3 so far is that all the end-user interaction with the blockchain has consolidated onto very few servers, i.e., a return to the phenomenon of platformisation (which describes how Web2 platforms spread their APIs throughout the entire web in the 2010s in order to centralise data back to their servers). As of now, most Web3 “decentralised apps” interact with the blockchain through two companies, Infura and Alchemy, which run the servers between the blockchain and end-user client devices. So if you do something with your cryptocurrency wallet in MetaMask, MetaMask will basically communicate with Infura or Alchemy, which then communicate with the actual blockchain.

His two sub-complaints are:
A) Nobody is verifying the authenticity of the information that comes from Infura / Alchemy. There is currently no system in place on the client side (i.e., in MetaMask on the user’s device) to ensure that the information Infura / Alchemy returns to the end-user is actually what is on the blockchain. Theoretically, if you have 5 ETH in your wallet on the blockchain and you load up MetaMask to query your balance, MetaMask might contact Infura / Alchemy requesting your balance, and Infura / Alchemy can respond that you have 0.1 ETH. MetaMask won’t verify whether that’s actually true; the answer is simply taken at face value (a minimal sketch of this blind trust follows below).
B) Privacy concerns with routing all requests via Infura / Alchemy. Moxie’s example is: imagine every single web request you make is first routed through Google before being routed to your actual intended destination.
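To make sub-complaint A concrete, here is a minimal sketch of the kind of call a wallet-style client makes to a hosted provider. The endpoint URL and address are placeholders (not a real Infura/Alchemy project), though eth_getBalance is the standard Ethereum JSON-RPC method; the point is that nothing in the exchange proves the answer reflects the chain.

```python
# A minimal sketch of the trust problem: a client asks a hosted
# provider (an Infura/Alchemy-like endpoint) for a balance and
# simply believes the answer.

import requests

RPC_URL = "https://mainnet.example-provider.io/v3/YOUR_PROJECT_ID"  # placeholder

def get_balance(address: str) -> int:
    payload = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "eth_getBalance",     # standard Ethereum JSON-RPC method
        "params": [address, "latest"],
    }
    response = requests.post(RPC_URL, json=payload).json()
    # No proof accompanies this value: no Merkle proof, no header check.
    # Whatever the server returns is taken as the state of the chain.
    return int(response["result"], 16)  # balance arrives as hex-encoded wei

print(get_balance("0x0000000000000000000000000000000000000000"))
```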

2) He gives the example of how NFTs are in fact just URLs stored on the blockchain, URLs which point to servers hosting the actual content. So when you buy an NFT, you only own the URL on the blockchain that DIRECTS to the artwork, NOT the “artwork” itself. He did an exercise where he made an NFT that looks like a picture when viewed through OpenSea, but looks like a poo emoji when accessed via someone’s crypto wallet: ultimately, the server hosting the image (to which the URL on the blockchain points) is in control of the artwork.
Even worse, his NFT ended up being deleted by OpenSea. But somehow his NFT ALSO stopped appearing in his wallet. How is this possible? Even if OpenSea deletes the NFT from their website, the NFT should still be on the blockchain, right? Why doesn’t it still show up in his wallet? Because of the centralisation of supposedly “decentralised” apps, his wallet was in fact communicating not with the blockchain directly, but through a few centralised platforms (one of which is OpenSea). So when OpenSea deleted his NFT, his wallet also stopped showing it. It doesn’t matter that the NFT still belongs to him on the blockchain if the whole end-user system is totally divorced from the blockchain and reliant instead on the middle servers.
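As a toy illustration of point 2 (all names below are invented; this only mimics the shape of ERC-721-style metadata), what the chain records is little more than a token id mapped to a URI, while everything behind the URI stays under the server’s control:

```python
# A toy, invented illustration: the on-chain record for an NFT is
# essentially a token id and a URI, not the artwork itself.

on_chain_record = {
    "token_id": 42,
    "token_uri": "https://api.example-marketplace.io/token/42",  # just a URL
}

# The server behind that URL decides what the "artwork" is today...
server_response_today = {"image": "https://cdn.example-host.io/art.png"}

# ...and can serve something else tomorrow, or nothing at all,
# while the on-chain record stays exactly the same.
server_response_tomorrow = {"image": "https://cdn.example-host.io/poo-emoji.png"}

print(on_chain_record["token_uri"])
```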

3) Finally, he says that Web3 as we know it today is really just Web2 with some fancy “Web3” window dressing. And the window dressing actually makes the whole system run worse than if it had just stuck to pure Web2. So why force the window dressing? Simply to sell the whole thing as a next-generation Web3 package, as part of what he calls a gold-rush frenzy over Web3.

The Dark Side of AI

I came across this article in the Financial Times yesterday (March 19, 2022) by Anjana Ahuja, on the dark sides of using AI to design drugs.

Scientists at Collaborations Pharmaceuticals, a North Carolina company using AI to create drugs for rare diseases, experimented with how easy it would be to create rogue molecules for chemical warfare.

As it happens, the answer is VERY EASY! The model took only 6 hours to spit out a set of 40,000 destructive molecules.

And it’s not surprising. As French cultural theorist and philosopher Paul Virilio once said, “when you invent the ship, you invent the shipwreck”. Just like social platforms can be used both to connect with long lost friends AND to spread fake news, AI can be used both to save lives AND to destroy them.

This is a chilling reminder about the destructive potential of increasingly sophisticated technologies that our civilisation has developed but may not have the wisdom to use well.

TEDx Open Mic Follow Up: What Can we Do?

At the end of the open mic yesterday, you asked me a really important question, and I do not think that my on-the-spot answer was explicit enough. The question was: “So what can we do?” It is a complex question. I thought about it last night and today, and I would like to add a few more words which I hope will shed clearer light on this complex matter. Here are some avenues for possible answers.

One way to think about an answer is to look at two possible levels of action: the systemic (policy) level and the individual level. At the policy level, regulation is coming. The EU has been the most aggressive so far (GDPR), but it will take time, because this phenomenon is complex and unprecedented, which means that at the moment there are no suitable laws to frame it and we do not even really understand how it works. Regulation will also likely be watered down by powerful networks of influence. In the US, the fact that Facebook and Google WANT the federal government to come up with regulation clearly shows that they are confident they have the power to lobby and influence the end result.

However, an increasing number of smart voices are putting forward creative propositions that could easily and quickly be put into action. One of them, for example, is Paul Romer (co-recipient of the Nobel Memorial Prize in Economic Sciences in 2018), who advocates a digital ad-tax as an incentive for good behaviour. Compelling initiatives are coming from the arts world as well. Adam Harvey has done great work on revealing the hidden datasets that feed the rise of AI-driven facial recognition. Manuel Beltrán’s Cartographies of Dispossession discloses the forms of systematic data dispossession. Taken individually, none of those propositions will make things right, but they all contribute to creating a more sustainable system.

The other level of action is individual; here we ask the question: “what can I do?”

As I said yesterday, I think that what is most urgently required right now is for us to become aware of what this all really means. The different debates around digital platform technology at the moment (privacy, fake news, misinformation, anti-trust, etc.) are all parts of the same whole. The datafication of human experience is not only a technological issue; it is not only a social, economic or political issue; it is an ecological issue. Which means that we are dealing with a complex system. Complex systems present dilemmas rather than problems. Because they do not lend themselves readily to linear solutions, they ask for a change of mindset. They need to be tackled from different angles at the same time; they need time, flexibility and vision; and they demand that we do something humans usually find most challenging: change our existing patterns of behaviour.

We need to change our behaviours. How? To be honest, as users, at the moment we do not have much leverage. The digital universe we live in has emerged from a legal void and has largely been shaped by the major actors of the digital economy to serve their interests. We can’t opt out of the terms and conditions of the social platforms we use every day and keep using their services. Behavioural economics has revealed what psychologists have known about human nature for a long time: as emotional beings, we are easily manipulated. The behavioural economist Richard Thaler (2017 Nobel Prize recipient) and Cass Sunstein call this nudging and wrote a book on the topic. For the past 30 years or so, BJ Fogg, of the Behavior Design Lab at Stanford University, has been teaching students how to use technology to persuade people. He calls this discipline by a really interesting name: “captology”. Today, captology helps keep users captive.

However, we are not helpless. We do not have a wide array of choices, but that does not mean we have none. We do have one power: the power of voting with our feet. This means we need to change our behaviours. To say “I can’t leave this platform because everyone is there” is the digital equivalent of saying “I will start recycling when the planet is clean”. Google is not the only search engine (try DuckDuckGo), Chrome not the only browser (try Firefox), Gmail not the only email provider (try ProtonMail), WhatsApp not the only messaging app (try Signal or Telegram).

We also need to seriously (SERIOUSLY) reassess the personal values that underlie our consumption of digital technologies.

It’s convenient. We are creatures of habit, so convenience has been baked into the design of social tech to make us complacent and lazy. But convenience is not a value that yields the greatest results in terms of ecological sustainability. Today, we understand that our patterns of consumption (food, clothing, etc.) affect our environment. So, even though it is more convenient to throw garbage out of the window, we make the effort to recycle. As informed and conscious consumers, we take great pains to consume consciously. And in doing so, we influence the companies that create the products we consume. Why don’t we adopt the same behaviours when it comes to digital?

It’s free. Would you really expect to go to the supermarket, pile food up in a trolley and leave without paying a cent? Would you find it completely natural to enter a [Prada] shop (fill in the name of your preferred brand), pick up a few handbags, a jacket or two and some small leather goods, and leave with a dashing smile on your face and your credit card safely in your bag? Last time I checked, those behaviours were called “stealing” and were punished by law. As a rule of thumb, we need to remember the most important theorem of the digital age: “when it’s free, it’s not free”. Plus, to go back to the environment analogy, we also considered nature a free resource to be pilfered for our own profit. See how well we did with that? Just to put things in perspective, a paid account with ProtonMail, one of the most secure email services in the world, costs US$50 a year. This is what you would spend on eight mocha Frappuccinos at Starbucks (and ProtonMail is much better for your health). So don’t be shy, pay for sustainable, clean technology! This requires a major change of mindset, but we will all be better off in the end.

In his book “WTF? What’s the Future and Why It’s Up to Us”, Tim O’Reilly says that the master algorithm encoded by the targeted-advertising business is optimised to be hostile to humanity. So, one last thought. Today, we are still in the social media era, but what about tomorrow? The technologies in the making carry with them a level of intensity, a potential for behaviour modification and control, and a possibility for destruction unequalled in the history of humanity (see Jaron Lanier). It took us 60 years to wake up to the slaughtering of our natural environment; we won’t be given that much time to react to the slaughtering of human experience.
