Datafication, Phantasmagoria of the 21st Century

Category: Digital Ecology

A Not-So-Dematerialised Internet? Undersea Cables (from “The Conversation”)

Undersea cables are the unseen backbone of the global internet.

Special ships lay data cables across the world’s oceans. Stefan Sauer/picture alliance via Getty Images

Robin Chataut, Quinnipiac University

Have you ever wondered how an email sent from New York arrives in Sydney in mere seconds, or how you can video chat with someone on the other side of the globe with barely a hint of delay? Behind these everyday miracles lies an unseen, sprawling web of undersea cables, quietly powering the instant global communications that people have come to rely on.

Undersea cables, also known as submarine communications cables, are fiber-optic cables laid on the ocean floor and used to transmit data between continents. These cables are the backbone of the global internet, carrying the bulk of international communications, including email, webpages and video calls. More than 95% of all the data that moves around the world goes through these undersea cables.

These cables are capable of transmitting multiple terabits of data per second, offering the fastest and most reliable method of data transfer available today. A terabit per second is fast enough to transmit about a dozen two-hour, 4K HD movies in an instant. Just one of these cables can handle millions of people watching videos or sending messages simultaneously without slowing down.
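As a rough check of that figure (assuming a two-hour 4K movie comes to about 10 GB; actual sizes vary with encoding):

```latex
% Back-of-the-envelope: movies per second over a 1 Tbit/s link,
% assuming roughly 10 GB per two-hour 4K movie (sizes vary with encoding)
1~\text{Tbit/s} = \frac{10^{12}~\text{bits/s}}{8~\text{bits/byte}} = 125~\text{GB/s},
\qquad
\frac{125~\text{GB/s}}{\approx 10~\text{GB per movie}} \approx 12~\text{movies per second}.
```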

About 485 undersea cables totaling over 900,000 miles sit on the ocean floor. These cables span the Atlantic and Pacific oceans, as well as strategic passages such as the Suez Canal and isolated areas within oceans.

A world map of cable routes: undersea cables tie the world together. TeleGeography, CC BY-SA

Laying cables under the sea

Each undersea cable contains multiple optical fibers, thin strands of glass or plastic that use light signals to carry vast amounts of data over long distances with minimal loss. The fibers are bundled and encased in protective layers designed to withstand the harsh undersea environment, including pressure, wear and potential damage from fishing activities or ship anchors. The cables are typically as wide as a garden hose.

The process of laying undersea cables starts with thorough seabed surveys to chart a route that avoids natural hazards and minimizes environmental impact. Following this step, cable-laying ships equipped with giant spools of fiber-optic cable navigate the predetermined route.

As the ship moves, the cable is unspooled and carefully laid on the ocean floor. The cable is sometimes buried in seabed sediments in shallow waters for protection against fishing activities, anchors and natural events. In deeper areas, the cables are laid directly on the seabed.

Along the route, repeaters are installed at intervals to amplify the optical signal and ensure data can travel long distances without degradation. This entire process can take months or even years, depending on the length and complexity of the cable route.

Video: How undersea cables are installed. https://www.youtube.com/embed/yd1JhZzoS6A?wmode=transparent&start=0

Threats to undersea cables

Each year, an estimated 100 to 150 undersea cables are cut, primarily accidentally by fishing equipment or anchors. However, the potential for sabotage, particularly by nation-states, is a growing concern. These cables, crucial for global connectivity and owned by consortia of internet and telecom companies, often lie in isolated but publicly known locations, making them easy targets for hostile actions.

The vulnerability was highlighted by unexplained failures in multiple cables off the coast of West Africa on March 14, 2024, which led to significant internet disruptions affecting at least 10 nations. Several cable failures in the Baltic Sea in 2023 raised suspicions of sabotage.

The strategic Red Sea corridor has emerged as a focal point for undersea cable threats. A notable incident involved the attack on the cargo ship Rubymar by Houthi rebels. The subsequent damage to undersea cables from the ship’s anchor not only disrupted a significant portion of internet traffic between Asia and Europe but also highlighted the complex interplay between geopolitical conflicts and the security of global internet infrastructure.

Protecting the cables

Undersea cables are protected in several ways, starting with strategic route planning to avoid known hazards and areas of geopolitical tension. The cables are constructed with sturdy materials, including steel armor, to withstand harsh ocean conditions and accidental impacts.

Beyond these measures, experts have proposed establishing “cable protection zones” to limit high-risk activities near cables. Some have suggested amending international laws around cables to deter foreign sabotage and developing treaties that would make such interference illegal.

The recent Red Sea incident shows that help for these connectivity challenges might lie above rather than below. After cables were compromised in the region, satellite operators used their networks to reroute internet traffic. Undersea cables are likely to continue carrying the vast majority of the world’s internet traffic for the foreseeable future, but a blended approach that uses both undersea cables and satellites could provide a measure of protection against cable cuts.

Robin Chataut, Assistant Professor of Cybersecurity and Computer Science, Quinnipiac University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Regulating Big Tech

Read this article by Joseph Stiglitz (see bio below) in Project Syndicate about the nascent steps to protect data privacy in the US. In February 2024, the Biden administration published an executive order to ban the transfer of certain types of “sensitive personal” data to some countries.

This is a drop in the ocean, and the US is way behind in protecting its citizens’ data from being exploited by the players in the data economy (compared to the EU, for example). However, it is probably the beginning of a trend toward increased protection against a predatory system that has created too many anti-competitive practices and social harms to be listed here. Admittedly, the US is walking on eggshells because regulating the digital sphere seems directly at odds with the US competitive advantage in this domain.

The firms that make money from our data (including personal medical, financial, and geolocation information) have spent years trying to equate “free flows of data” with free speech. They will try to frame any Biden administration public-interest protections as an effort to shut down access to news websites, cripple the internet, and empower authoritarians. That is nonsense.

Over the past 20-25 years, the narrative about digital technology has been consistently driven by Big Tech to hide the full extent of what was really happening. The idealistic beliefs of the early internet (democratisation, equality, friendship, connection) served as a smokescreen for the development of a behemoth: a fundamentally exploitative data industry that pervades all areas of the economy and society.

Today, large tech monopolies use indirect ways to try to quash attempts to change the status quo and counter Big Tech abuses.

Tech companies know that if there is an open, democratic debate, consumers’ concerns about digital safeguards will easily trump concerns about their profit margins. Industry lobbyists thus have been busy trying to short-circuit the democratic process. One of their methods is to press for obscure trade provisions aimed at circumscribing what the United States and other countries can do to protect personal data.

The article details previous attempts to rule out any provisions that would preserve executive and congressional power over data regulation, and to write special clauses into trade pacts granting secrecy rights (an ironic state of affairs considering that the early Internet was developed on exactly the opposite values). It is important to realise that most efforts are spent on surreptitious (INDIRECT) ways to limit any possibility of regulation (through trade agreements, for example), what Stiglitz calls Big Tech’s favoured “digital trade” handcuffs.

Stiglitz’s concluding remark reminds us that the stakes are high: ultimately, the choices made today have the potential to impact the democratic order.

Whatever one’s position on the regulation of Big Tech – whether one believes that its anti-competitive practices and social harms should be restricted or not – anyone who believes in democracy should applaud the Biden administration for its refusal to put the cart before the horse. The US, like other countries, should decide its digital policy democratically. If that happens, I suspect the outcome will be a far cry from what Big Tech and its lobbyists were pushing for.

Joseph E. Stiglitz, a Nobel laureate in economics and University Professor at Columbia University, is a former chief economist of the World Bank (1997-2000), chair of the US President’s Council of Economic Advisers, and co-chair of the High-Level Commission on Carbon Prices. He is Co-Chair of the Independent Commission for the Reform of International Corporate Taxation and was lead author of the 1995 IPCC Climate Assessment.

Privacy Guides – Restore Your Online Privacy

Privacy Guides is a collection of cybersecurity resources and privacy-focused tools to protect yourself online.

Start your privacy journey here. Learn why privacy matters, the difference between Privacy, Secrecy, Anonymity and Security, and how to determine the threat model that best corresponds to your needs.

For example, here are some common threats. You may want to protect against some of them but not care much about others.

  • Anonymity – Shielding your online activity from your real identity, protecting you from people who are trying to uncover your identity specifically.
  • Targeted Attacks – Being protected from hackers or other malicious actors who are trying to gain access to your data or devices specifically.
  • Passive Attacks – Being protected from things like malware, data breaches, and other attacks that are made against many people at once.
  • Service Providers – Protecting your data from service providers (e.g. with E2EE, which renders your data unreadable to the server).
  • Mass Surveillance – Protection from government agencies, organisations, websites, and services which work together to track your activities.
  • Surveillance Capitalism – Protecting yourself from big advertising networks, like Google and Facebook, as well as a myriad of other third-party data collectors.
  • Public Exposure – Limiting the information about you that is accessible online—to search engines or the general public.
  • Censorship – Avoiding censored access to information or being censored yourself when speaking online.

Here, you can read about Privacy Guides’ recommendations for a whole range of online privacy tools, from browsers to service providers (cloud storage, email services, email aliasing services, payment, hosting, photo management, VPNs etc.), software (sync, data redaction, encryption, file sharing, authentication tools, password managers, productivity tools, communication such as messaging platforms etc.) and operating systems.

You can also learn about some common misconceptions about online privacy (think: “a VPN makes my browsing more secure”, “open source is always secure” or “complicated is better”, amongst others).

You can also find valuable information about account creation: what happens when you create an account, understanding Terms of Service and Privacy Policies, how to secure an account (password managers, authentication software, email aliases etc.). And just as important (maybe more), about account deletion (we leave A LOT of traces in the course of our digital life, and it’s important to become aware of what they are and how to reduce their number).

AND MUCH MORE!

I can’t recommend this website enough. Visit it, revisit it, bookmark it and share it with friends and enemies. 🙂

Siri Beerends, AI Makes Humans More Robotic

Siri Beerends is a cultural sociologist and researches the social impact of digital technology at media lab SETUP. With her journalistic approach, she stimulates a critical debate on increasing datafication and artificial intelligence. Her PhD research (University of Twente) deals with authenticity and the question of how AI reduces the distance between people and machines.

Her TEDx talk caught my attention because, as a sociologist of technology, she looks at AI with a critical eye (and we need MANY more people to do this nowadays). In this talk, she gives three examples illustrating how AI does not work for us (humans); rather, we (humans) work for it. She shows how AI changes how we relate to each other in very profound ways. Technology is not good or bad, she says; technology (AI) changes what good and bad mean.

Even more importantly, AI is not a technology, it is an ideology. Why? Because we believe that social and human processes can be captured in computer data, and we forget about the aspects that data cannot capture. AI is also based on a very reductionist understanding of what intelligence means, namely what computers are capable of, one that forgets about consciousness, empathy, intentionality, and embodied intelligence. Additionally, contrary to living intelligence, AI is very energy inefficient and has an enormous environmental impact.

AI is not a form of intelligence, but a form of advanced statistics. It can beat us in stable environments with clear rules, in other words, NOT the world we live in, which is contextual, ambiguous and dynamic. In the messy REAL world, AI at best performs very (VERY!) poorly and at worst creates havoc, because it can’t adapt to context. Do we want to make the world as predictable as possible? Do we want to become data-clicking robots? Do we want to quantify and measure all aspects of our lives, she asks. And her response is a resounding no.

What then?

Technological progress is not societal progress, so we need to expect less from AI and more from each other. AI systems can help solve problems, but we need to look into the causes of these problems, the flaws in our economic systems that trigger these problems again and again.

AI is also fundamentally conservative. It is trained with data from the past and reproduces patterns from the past. That is not real innovation. Real innovation requires better social and economic systems. We (humans) have the potential to reshape them. Let’s not waste that potential by becoming robots.

Watch her TEDx talk below.

Algorithmic Bias in Education. Case Study From The MarkUp.

The MarkUp (an investigative publication focusing on tech) has released an investigation into Wisconsin’s state algorithm used to predict which middle school students will drop out before graduating from high school.

Read the whole story here.

The algorithm is called the Dropout Early Warning System (DEWS). Student dropout is an important issue that needs to be addressed. Improving the chances of students staying in school and graduating from high school is a laudable goal. The question is: are algorithms reliable tools for doing so? As it happens, it seems that they are not.

DEWS has been used for over a decade. The data used to create scores includes test scores, disciplinary records, and race. Students scoring below 78.5% are marked as High Risk (and a red mark appears next to their name). The MarkUp reports that comparisons between DEWS predictions and state records of actual graduations show the system is wrong three-quarters of the time, especially for Black and Hispanic students. In other words, the system defeats the very purpose for which it exists.
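To make the mechanics concrete, here is a minimal, purely illustrative Python sketch of how such a threshold flag works. This is not DPI’s actual code; the function and the example scores are hypothetical, and only the 78.5 cutoff comes from the reporting.

```python
# Illustrative sketch of a threshold-based early-warning flag.
# Hypothetical code: only the 78.5 cutoff reflects the figure
# reported by The MarkUp; the model producing the score is not shown.

HIGH_RISK_CUTOFF = 78.5  # predicted probability (%) of on-time graduation

def risk_label(predicted_graduation_pct: float) -> str:
    """Map a predicted graduation probability to a DEWS-style flag."""
    if predicted_graduation_pct < HIGH_RISK_CUTOFF:
        return "HIGH RISK"  # the red mark next to a student's name
    return "LOW/MODERATE RISK"

print(risk_label(71.2))  # -> HIGH RISK
print(risk_label(82.0))  # -> LOW/MODERATE RISK
```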

Even more telling: the Department of Public Instruction (DPI) ran its own investigation into DEWS and concluded that the system was unfair. That was in 2021. In 2023, DEWS is still in use. Does this mean that our over-reliance on algorithmic systems has created a situation where we know they fail us, but we have no credible alternative, so we keep using them?

I am reminded of the seminal book by Neil Postman “Technopoly”. He says that in a Technopoly, the purpose of technology is NOT to serve people or life. It justifies its own existence merely by existing. In a Technopoly, technology is not a tool, “people are the tools of their tools” (p68). More importantly, and more problematically, “once a technology is admitted, it plays out its hand; it does what it is designed to do. Our task is to understand what that design is” (p128). It is safe to say that digital technologies have been admitted, but do we really have any understanding of what their design is?

Algorithmic Technology, Knowledge Production (And A Few Comments In Between)

So, digital technologies are going to save the world.

Or are they?

Let’s have a no-nonsense look at how things really work.

A few comments first.

I am not a Luddite.

[Just a side comment here: Luddites were English textile workers in the 19th century who reacted strongly against the mechanisation of their trade which put them out of work and unable to support their families. Today, they have become the poster-child of anti-progress, anti-technology grumpy old bores, and “you’re a Luddite” is a common insult directed at techno-sceptics of all sorts. But Luddites were actually behaving quite rationally. Many people in the world today react in a similar fashion in the face of the economic uncertainty brought about by technological change.]

That being said, I am not anti-technology. I am extremely grateful for the applications of digital technology that help make the world a better place in many ways. I am fascinated by the ingenuity and the creativity displayed in the development of technologies to solve puzzling problems. I also welcome the fact that major technological shifts have brought major changes in how we live in the world. This is unavoidable, it is part of the impermanent nature of our worlds. Emergence of the new is to be welcomed rather than fought against.

But I am also a strong believer in using discrimination to try to make sense of new technologies, and to critically assess their systemic impact, especially when they have become the object of such hype. The history of humanity is paved with examples of collective blindness. We can’t consider ourselves immune to it.

The focus of my research (and of this post) is Datafication, i.e., the algorithmic quantification of purely qualitative aspects of life. I mention this because there are many other domains that comfortably lend themselves to quantification.

I am using a simple vocabulary in this post. This is on purpose, because words can be deceiving. Names such as Artificial Intelligence (AI) or Natural Language Processing (NLP) are highly evocative and misleading, suggesting human-like abilities. There is so much excitement and fanfare around them that it’s worth going back to the basics and calling a cat a cat (or a machine a machine). There is a lot of hype around whether AI is sentient or could become sentient but as of today, there are many simple actions that AI cannot perform satisfactorily (recognise a non-white-male face for one), not to mention the deeper issues that plague it (bias in data used to feed algorithms, the illusory belief that algorithms are neutral, the lack of accountability, the data surveillance architectures… just to name a few). It is just too easy to discard these technical, political, social issues in the belief that they will “soon” be overcome.

But hype time is not a time for deep reflection. If the incredible excitement around ChatGPT (despite the repeated calls for caution from its founder) is any indication, we are living through another round of renewed collective euphoria. A few years ago, the object of this collective rapture was social media. Today, knowing what we know about the harms they create, it is becoming more difficult to feel deliciously aroused by Facebook and co., but AI has grabbed the intoxication baton. The most grandiose claims are claims of sentience, including from AI engineers who undoubtedly have the ability to build the machines, but whose expertise in assessing their sentience is highly debatable. But in the digital age, extravagant assertions sell newspapers, make stocks shoot up, or bring fame, so it may not all be so surprising.

But I digress…

How does algorithmic technology create “knowledge” about qualitative aspects of life?

First, it collects and processes existing data from the social realm to create “knowledge”. It is important to understand that the original data collected is frequently incomplete, and often reflects the existing biases of the social milieu from where it is extracted. The idea that algorithms are neutral is sexy but false. Algorithms are a set of instructions that control the processing of data. They are only as good as the data they work with. So, I put the word “knowledge” in quotation marks to show that we have to scrutinise its meaning in this context, and use discrimination to examine what type of knowledge is created, what function it carries out, and whose interests it serves.

Algorithmic technology relies on computer-ready, quantified data. Computers are not equipped to handle the fuzziness of qualitative, relational, embodied, experiential data. But a lot of the data produced in the world every day is warm data. (Nora Bateson coined that term, by the way; check The Bateson Institute website to learn more, it is well worth a read.) It is fuzzy, changing, qualitative, not clearly defined, and certainly not reducible to discrete quantities. But computers can only deal with quantities, discrete data bits. So, in order to be read by computers, the data collected needs to be cleaned and turned into “structured data”. What does “structured” mean? It means the data has to be transformed into a form that computers can read; it needs to be turned into bits; it needs to be quantified.
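A toy sketch of what that transformation does (the coding scheme below is entirely made up) shows how a warm, qualitative answer gets squeezed into computer-ready fields, and how much is shaved off along the way:

```python
# Toy example: turning a qualitative survey answer into "structured",
# computer-ready data. The coding scheme is invented for illustration.

SENTIMENT_CODES = {"negative": 0, "neutral": 1, "positive": 2}  # hypothetical codes

raw_answer = ("I mostly enjoy the course, though lately I've felt anxious "
              "and a bit isolated from the group.")

def structure(answer: str) -> dict:
    # A crude keyword rule stands in for whatever coding rule gets chosen.
    sentiment = "positive" if "enjoy" in answer else "negative"
    return {
        "sentiment_code": SENTIMENT_CODES[sentiment],  # now machine-readable
        "word_count": len(answer.split()),
        # The anxiety, the isolation, the ambivalence: no field for them,
        # so they simply disappear from the record.
    }

print(structure(raw_answer))  # {'sentiment_code': 2, 'word_count': 17}
```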

So this begs the question: how is unquantified data turned into quantified data? Essentially, through two processes.

The first one is called “proxying”. The logic is: “I can’t use X, so I will use a proxy for X, an equivalent”. While this sounds great in theory, it has two important implications. Firstly, a suitable proxy may or may not exist, so the relationship of similarity between X and its proxy may be thin. Secondly, someone has to decide which quantifiable equivalent will be used. I insist on the word “someone”, because it means that “someone” has to make that decision, a decision that is far from neutral, is highly political, and potentially carries many unintended social consequences. In many instances, those decisions are made not by the stakeholders who have a lived understanding of the context where the algorithmic technology will be applied, but by the developers of the technology, who lack such understanding.

Some examples of proxied data: assessing teachers’ effectiveness through their students’ test results; ranking “education excellence” at universities using SAT scores, student-teacher ratios, and acceptance rates (that’s what the editors at US News did when they started their university ranking project); evaluating an influencer’s trustworthiness by the number of followers she has (thereby creating unintended consequences, as described in the New York Times investigative piece “The Follower Factory”); using creditworthiness to screen potential new corporate hires. And more… Those examples come from a fantastic book by the math-PhD-data-scientist turned activist Cathy O’Neil called “Weapons of Math Destruction”. If you don’t have the time or the inclination to read the book, Cathy also distills the essence of her argument in a TED talk, “The era of blind faith in big data must end”.
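A hedged sketch of the first example (the formula, the weights and the numbers below are invented for illustration; no real district scores teachers exactly this way):

```python
# Hypothetical "value-added" proxy: average student test-score gains
# stand in for the unmeasurable quality "teacher effectiveness".
# Formula and numbers are invented for illustration only.

def effectiveness_proxy(score_gains: list[float]) -> float:
    """Average test-score gain of a teacher's students, used as the proxy."""
    return sum(score_gains) / len(score_gains)

# Someone decided these gains stand in for "effectiveness"; class size,
# student hardship and curriculum changes are nowhere in the number.
ms_lopez = effectiveness_proxy([4.2, -1.0, 3.5, 0.8])
mr_okafor = effectiveness_proxy([1.1, 1.3, 0.9, 1.2])

print(f"Lopez: {ms_lopez:.2f}, Okafor: {mr_okafor:.2f}")
# A ranking built on this proxy looks objective, but it inherits
# every choice baked into the proxy itself.
```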

While all of the above sounds like a lot of work, there is data that is just too fuzzy to be structured and too complex to be proxied. So the second way to treat unstructured data is quite simple: abandon it. Forget about it! It never existed. Job done, problem solved. While this is convenient, of course, it becomes clear that this leaves out A LOT of important information about the social, especially because a major part of the qualitative data produced in the social realm falls into this category. It also leaves out the delicate but essential qualitative relational data that weaves the fabric of living ecosystems. So in essence, after the proxying and the pruning of qualitative data, it is easy to see how the so-called “knowledge” that algorithms produce is a rather poor reflection of social reality.

But (and that’s a big but), algorithmic technology is very attractive, because it makes decision-making convenient. How so? By removing uncertainty (of course I should say by giving the illusion of removing uncertainty). How so? Because it predicts the future (of course I should say by giving the illusion of predicting the future). Algorithmic technology applied to the social is essentially a technology of prediction. Shoshana Zuboff describes this at length in her seminal book published in 2019, “The Age of Surveillance Capitalism: The Fight for a Human Future in the New Frontier of Power”. If you do not have the stomach to read through the 500+ pages, just search “Zuboff Surveillance Capitalism”; you can find a plethora of interviews, articles and seminars she has given since its publication. (Just do me a favour and don’t use Google and Chrome to search, but switch to cleaner browsers like Firefox and search engines like DuckDuckGo.) She clearly and masterfully elucidates how Google’s and Facebook’s money machines rely on packaging “prediction products” that are traded on “behavioural futures markets” which aim to erase the uncertainty of human behaviour.

There is a lot more to say on this (and I may do so in a later post), but for now, suffice it to say that just like the regenerative processes of nature are being damaged by mechanistic human activity, life-enhancing tacit ways of knowing are being submerged by the datafied production of knowledge. While algorithmic knowledge creation has a place and usefulness, its widespread use overshadows and overwhelms more tacit, warm, qualitative, embodied, experiential, human ways of knowing and being. The algorithmisation of human experience is creating a false knowledge of the world (see my 3-minute presentation at TEDx in 2021).

This increasing lopsidedness is problematic and dangerous. Problematic because while prediction seems to make decision-making more convenient and efficient, convenience and efficiency are not life-enhancing values. Furthermore, prediction is not understanding, and understanding (or meaning-giving) is an important part of how we orient ourselves in the world. It is also problematically unfair because it creates massive asymmetries of knowledge and therefore a massive imbalance of power.

It is dangerous because while the algorithmic medium is indeed revolutionary, the ideology underlying it is dated and hazardous. The global issues and the potential for planetary annihilation that we are facing today arose from a reductionist mindset that sees living beings as machines and a positivist ideology that fundamentally distrusts tacit aspects of the human mind.

We urgently need a pendulum shift to rebalance algorithmically-produced knowledge with warm ways of knowing in order to create an ecology of knowledge that is conducive to the thriving of life on our planet.

Datafied. A Critical Exploration of Knowledge Production in The Digital Age (PhD)

This is a short abstract of my PhD research. I will post more details in the coming days and weeks.

I first look at the epistemological processes behind datafied knowledge and contrast them with the processes of tacit knowledge production. I extract 5 principles of tacit knowledge and contrast them to 5 principles of datafied knowledge, and I contend that datafied knowledge is founded on a reductionist ideology, a reductionist logic of knowledge production, reductionist data and therefore, produces a reductionist type of knowledge. Instead of helping us to understand the world we inhabit in more systemic, holistic and qualitative ways, it relies essentially on quantitative, disembodied, computationally structured computer-ready data, and algorithmically optimised processes.

Through the filter of Walter Benjamin’s work “The Arcades Project”, I argue that datafication (defined as the quantification of the qualitative aspects of human experience) is a Phantasmagoria, a dream image, a myth, a social experience anchored in a culture of commodification. The digital production of knowledge is supported by a need to reduce uncertainty and increase productivity and efficiency. It essentially serves a predictive purpose. It does not help us to understand the intricate, beautiful, fragile, qualitative, embodied experience of being alive in a deeply interconnected and interdependent world, an experience that to a great extent defines humaneness and life in general. In this sense, datafied knowledge is hostile to life.

Finally, I call for a rebalance between tacit and datafied ways of knowing, and a shift to a more regenerative ecology of knowledge based on the principles of living systems.

Helene Liu – PhD Thesis Visual Map

[HOW TO] Protect From Data Theft? (Privacy)

Many people ask me how to protect themselves from data theft by Big Tech. This is a really important question, so I asked a digital security expert friend of mine. This is his (unedited) reply. Some of these steps are more directly actionable than others. I will regularly add to the list.

Use a *trustworthy* VPN on all devices, like Mullvad or ProtonVPN (or Tor/I2P for truly sensitive things), with reliable DNS protection. But be aware that a VPN has its own risks; strictly, it only masks your true IP, masks your web activity from your ISP, and provides a more secure connection on public or unsafe networks (see the quick check sketched after this list).

Use Linux on desktop (or any open-source, privacy-friendly operating system; just avoid macOS, Windows and ChromeOS).

Use de-googled Android (GrapheneOS) on mobile. Neither Android phones nor iPhones are safe. A mobile phone is the most invasive and privacy-leaking device in our lives.

Delete all social media and big tech accounts.

Replace the services/apps one uses with open source/libre software alternatives. Email, contacts, calendar, cloud storage, apps on phone etc… Especially avoid any products or services by big tech (e.g. Google docs, Gmail, drive, youtube, search, Chrome, WhatsApp etc…).

Use a privacy-friendly web browser (I recommend Brave) with telemetry disabled, and with tracking-blocking and fingerprinting-resistance settings set to maximum.

Use a privacy-friendly search engine (DuckDuckGo is OK); do not use Google Search, Microsoft Bing, etc.

Understand how internet and web infrastructure works (networking basics), as this is key to managing your own data trail and emissions. A key part is understanding that every single action taken on the internet, or on anything digital, leaves a permanent record, a digital trail of breadcrumbs. Learn to get by using alias information when possible, and be extremely judicious about providing any true personal data in any digital context. It doesn’t matter that you use the most private and secure computer system if you then share your personal life story and details by posting them on the internet. Disclose as little as possible online, and if needed use false/alias data.

Use end-to-end encrypted and metadata-minimising methods of online communication (e.g. Signal; it is not perfect, but probably the best balance between privacy/security and usability/widespread adoption).

Generally, opt for software and services that rely on well-implemented encryption technology, with *end-to-end* and *zero-knowledge* encryption wherever possible.

Do not use regular phone calls or SMS (use secure Wi-Fi calling or messaging via secure apps instead).
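To make the first recommendation above concrete, here is a minimal sanity check, a sketch rather than a definitive tool: compare the public IP address the outside world sees with the VPN disconnected and then connected. It assumes the public ifconfig.me echo service; any similar “what is my IP” endpoint works the same way.

```python
# Quick sanity check: which public IP does the outside world see?
# Run once with the VPN disconnected and once connected; the two
# addresses should differ if traffic really leaves through the tunnel.
# Assumes the public echo service ifconfig.me (any "what is my IP"
# endpoint can be substituted).

import urllib.request

def public_ip() -> str:
    req = urllib.request.Request(
        "https://ifconfig.me/ip",
        headers={"User-Agent": "curl/8.0"},  # plain-text response for curl-like clients
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    print("Public IP as seen from outside:", public_ip())
```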

Airports at Christmas: Why AI Cannot Rule The World

It is the week leading up to Christmas. I’m at the airport waiting for someone to arrive, and as I observe what’s happening here, I can’t help thinking about the place we have allowed digital technology to take in our lives. In 2022, AI pervades decision-making in all areas of human experience. What this means is that the deepest qualitative dimensions of being alive on this planet are being reduced to computer data; those data are then fed to algorithms designed by computer scientists, algorithms that have become the ultimate decision-makers in how life is lived on planet Earth.

My contention is that the blind faith that we, the “moderns”, have in algorithms and in what we call AI (often without really knowing what that means) is misplaced. There is a place for algorithmic decision-making, but we need to learn to value the qualitative, embodied, experiential dimension of being alive in a human body, with a human mind.

To understand why AI cannot rule the world, go to the arrivals level of an airport at Christmas time, and observe. See the reunions between people who love each other, who have missed each other; the smiles on their faces; the tears of joy at finding each other after weeks, months or even years of absence; the excitement, the laughter, the warm hugs… And you will realise why the cold logic of AI cannot capture the reality of the experience of being human, of life.

I have little patience for those who profess that the laws of pure logic rule the social and that we can sort everything out with cold data. What about the rich, warm, relational dimension of being human? Such people go around claiming that logic and science are all we need, but the irony is that they fail to see that they themselves are surrounded by networks of other people who provide love, care and warm attention.
