Gary Marcus is a professor emeritus of psychology and neural science at NYU, an early AI researcher and coder, and a fierce critic of the current hype surrounding AI.

I highly recommend subscribing to his Substack “Marcus on AI” here. You will certainly learn a thing or two. His critique is rooted in science and comes from an inner understanding not only of the technology itself, but of the interaction between technology and … well, us! We often read or think about the digital in terms of the technology itself, but very seldom do we actually consider what’s really important: our relationship with it.

His latest post “Please don’t trust your chatbot for medical advice” does exactly that, reminding us that despite confidently spitting out information on just about every topic in the world, LLM chatbots routinely hallucinate, make up facts, invent citations and studies, and are generally hopelessly unreliable when it comes to real facts (Marcus calls this “authoritative bullshit”).

Sometimes they get it right, sometimes they don’t. So, short of doing the research yourself (which of course defeats the purpose of asking in the first place), there is no way to know whether following their advice is a great idea or a suicidal one.

In this post, Marcus refers to four serious, reliable studies published in journals of the scientifically rigorous, peer-reviewed type (you remember those?) that clearly tell us: NO, YOU CAN’T TRUST LLMs with health advice specifically, or with our lives generally (did we REALLY need to be reminded? is the question that springs to mind, but never mind that).

He also relates the personal experience of a friend whose recently deceased father decided to trust AI rather than his doctors for advice on how to treat his leukemia. You can guess the result from the previous sentence.

He states: “Four studies in four journals published in the space of a few months reaching essentially the same conclusion is a crystal clear indicator that chatbots, especially when used by amateurs, simply cannot be trusted.”

“Crystal clear”. Don’t say you haven’t been warned!