You would think that with all the hype around “AI” (in quotation marks because the word has become a catch-all term, covering a whole range of poorly defined realities), and given our civilisation’s enduring blind faith in the omniscience of digital technologies, the technology would at least perform its function remarkably well.

I mean wouldn’t you?

Well, it seems not.

The Markup is “a nonprofit newsroom that challenges technology to serve the public good.” (Check here if you want to know more; I have been following them for years, and they do remarkable work.)

This is what they found out (see below).

A software company sold a New Jersey police department an algorithm that was right less than 1% of the time. Read the whole article here.

It is NOT a blip. It is NOT an exception, an anomaly, a special case. It is just another day at the office for predictive AI. And these issues will NOT go away with the next model iteration.

They are here to stay because they are an intrinsic feature of the technology. As a technology of quantification, AI (or whatever name we want to give the Digital) does NOT and, in fact, can NOT reliably handle qualitative aspects of life.

This is why the likes of Facebook employ human content moderators to detect and remove gore, violence and generally harmful content from the platform. (By the way, those moderators are often sub-contracted, so they do not appear in the main companies’ annual reports, and their contracts contain a clause stating that they will not sue if they develop PTSD on the job, which they often do. Read here about what happened when they did.)

So, despite all the hype, “a rose by any other name would smell as sweet.” When it comes to the social, predictive AI mostly fails at predicting.