Summary: information retrieval (and summarization) is not the same as decision making.
OpenAI leadership recently suggested that chatbots, such as GPT-5, “can help you understand your healthcare, and make decisions on your journey.”
Although a better understanding of healthcare can lead to better decisions, it's important to be careful here. Say you go to the doctor and find you have high cholesterol. Asking a chatbot (or Google, or PubMed) for strategies to reduce cholesterol might be OK. Even asking it to pull up and summarize guidelines might be OK (provided the summary is accurate). However, asking a chatbot whether to start a statin is another matter.
The latter is a treatment decision, with pros and cons. Any such decision involves optimizing expected utility, i.e., trying to identify the choice that maximizes one's health on average. This requires estimating probabilities, combining them with one's preferences, and performing an optimization. Large language models, the technology behind chatbots, do not have the structure to do these things. Hence, there is no guarantee that a chatbot will give a good treatment decision.
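To make the required structure concrete, here is a minimal sketch of expected-utility maximization for the statin example. Every number below is a made-up placeholder, not a clinical estimate; the point is only that the decision is an explicit calculation over probabilities and preferences.

```python
# Minimal sketch of expected-utility maximization for a binary treatment
# decision. All probabilities and utilities are hypothetical placeholders,
# not clinical estimates.

# P(outcome | action): outcome probabilities under each action.
probs = {
    "start_statin": {"cardiac_event": 0.08, "side_effects": 0.10, "fine": 0.82},
    "no_statin":    {"cardiac_event": 0.12, "side_effects": 0.00, "fine": 0.88},
}

# Patient-specific utilities for each outcome (elicited preferences),
# on an arbitrary 0-100 scale.
utilities = {"cardiac_event": 0.0, "side_effects": 70.0, "fine": 100.0}

def expected_utility(action: str) -> float:
    """Combine outcome probabilities with preferences: E[U | action]."""
    return sum(p * utilities[outcome] for outcome, p in probs[action].items())

# The optimization step: pick the action with the highest expected utility.
best = max(probs, key=expected_utility)
for action in probs:
    print(f"{action}: EU = {expected_utility(action):.1f}")
print(f"Best action under these made-up numbers: {best}")
```

With these particular numbers the two options come out nearly tied (89 vs. 88), which is exactly why the probability estimates and the patient's preferences matter: each step is an explicit computation, and none of these steps is something a next-token predictor performs by construction.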
Honestly, there is no guarantee a healthcare provider will give a good treatment decision, either. However, the human brain, especially after a lot of training and experience, is able to estimate probabilities, elicit preferences, and (approximately) perform optimizations. Healthcare providers also learn how to help patients navigate disease despite the medical community's limitations in always making optimal treatment choices (limitations that are reflected in the guidelines themselves); to this end, providers often involve multiple parties (e.g., different specialties, social workers, and educators).
In my opinion, however, chatbots should not be disregarded. It is interesting that, given a medical case, they can efficiently pull up, interpret, and summarize guidelines (assuming this is done correctly). I think it's not unreasonable for patients to use a chatbot during a healthcare journey to search the literature (Google or PubMed can be used for this, too) or as a sort of interactive notebook or journal.
Overall, though, it is important to be careful about using chatbots to make decisions. Even when done right, a chatbot's major medical-decision-making deliverables, information retrieval and summarization, do not in and of themselves equate to decision making. Don't follow a chatbot's treatment recommendations blindly. Actually, don't follow anyone's treatment recommendations blindly, but especially don't follow a chatbot's!
I give more support for the above in this manuscript and Substack post (and, for a quick summary, see this blog post).