Image processing, chatbots, and “AI”

Chatbots and image-processing systems are both often referred to as “AI.” But at least as of a few years ago, the cutting edge in image processing was mostly convolutional neural networks (CNNs), while chatbots are generative language models refined with human-in-the-loop reinforcement learning. These are very different things.

Until recently, image processing in medicine went by the name computer-aided diagnosis (CAD). People were excited about it, and it worked remarkably well for some tasks, yet it never attracted much hype (perhaps because it was rarely called “AI”). Chatbots, which only began working well quite recently, have driven much of the current “AI” hype.

Now both image processing and generative chatbots are being called “AI,” and lumping them under one term blurs the distinction between them. That blurring is a problem. It invites the inference that the success of CAD mammography, which far exceeds chatbots in clinical reliability, is evidence that “AI” chatbots are ready for the clinic. Or it invites the reverse inference: that the human-like flexibility of chatbots carries over to “AI” image-processing systems, inflating expectations that they can read scans autonomously. This confusion has consequences for funding, for regulation, and for the public’s trust in science more broadly.

Don’t even get me started about calling risk models “AI.”
