Datafication, Phantasmagoria of the 21st Century

Tag: AI

Don’t Trust AI’s Health Advice (Do I Really Need to Mention This?)

Gary Marcus is a professor emeritus of psychology and neural science at NYU, an early AI researcher and coder, and a fierce critic of the current hype surrounding AI.

I highly recommend subscribing to his Substack “Marcus on AI” here. You will certainly learn a thing or two. His critique is rooted in science and comes from an inner understanding not only of the technology itself, but of the interaction between technology and … well, us! We often read or think about the digital in terms of the technology itself, but very seldom do we actually consider what’s really important: our relationship with it.

His latest post, “Please don’t trust your chatbot for medical advice”, does exactly that, reminding us that despite confidently spitting out information on just about every topic in the world, LLM chatbots routinely hallucinate, make up facts, invent citations and studies, and are generally hopelessly unreliable when it comes to real facts (Marcus calls this “authoritative bullshit”).

Sometimes they get it right, sometimes they don’t. So, short of doing the research yourself–which of course defeats the purpose of asking in the first place–there is no way to know whether following their advice is a great idea or a suicidal one.

In this post, Marcus refers to four serious, reliable studies published in journals of the scientifically rigorous, peer-reviewed type (you remember those?) that clearly tell us: NO, YOU CAN’T TRUST LLMs with health advice specifically, or with our lives generally (did we REALLY need to be reminded, is the question that springs to mind, but never mind that).

He also relates the personal experience of a friend whose recently deceased father decided to trust AI rather than his doctors for advice on how to treat his leukemia. You can guess the outcome.

He states: “Four studies in four journals published in the space of a few months reaching essentially the same conclusion is a crystal clear indicator that chatbots, especially when used by amateurs, simply cannot be trusted.”

“Crystal clear”. Don’t say you have not been warned!

Siri Beerends, AI Makes Humans More Robotic

Siri Beerends is a cultural sociologist and researches the social impact of digital technology at media lab SETUP. With her journalistic approach, she stimulates a critical debate on increasing datafication and artificial intelligence. Her PhD research (University of Twente) deals with authenticity and the question of how AI reduces the distance between people and machines.

Her TEDx talk caught my attention because, as a sociologist of technology, she looks at AI with a critical eye (and we need MANY more people doing this nowadays). In this talk, she gives three examples illustrating how AI does not work for us (humans), but we (humans) work for it. She shows how AI changes the way we relate to each other in very profound ways. Technology is not good or bad, she says; technology (AI) changes what good and bad mean.

Even more importantly, AI is not a technology, it is an ideology. Why? Because we believe that social and human processes can be captured as computer data, and we forget about the aspects that data cannot capture. AI is also based on a very reductionist understanding of what intelligence means, i.e. what computers are capable of, one that forgets about consciousness, empathy, intentionality, and embodied intelligence. Additionally, contrary to living intelligence, AI is very energy-inefficient and has an enormous environmental impact.

AI is not a form of intelligence, but a form of advanced statistics. It can beat us in stable environments with clear rules, or in other terms, NOT the world we live in, which is contextual, ambiguous, and dynamic. In the messy REAL world, AI at best performs very (VERY!) poorly and at worst creates havoc, because it can’t adapt to context. Do we want to make the world as predictable as possible? Do we want to become data-clicking robots? Do we want to quantify and measure all aspects of our lives, she asks. And her response is a resounding no.

What then?

Technological progress is not societal progress, so we need to expect less from AI and more from each other. AI systems can help solve problems, but we need to look into the causes of these problems, the flaws in our economic systems that trigger these problems again and again.

AI is also fundamentally conservative: it is trained on data from the past and reproduces patterns from the past. That is not real innovation. Real innovation requires better social and economic systems, and we (humans) have the potential to reshape them. Let’s not waste that potential by becoming robots.

Watch her TEDx talk below.