Richard Sutton has thought a lot about AGI (I read his textbook some years ago, and still return to it). The views he expresses in his recent interview are similar to my own perspective on generative chatbots and medical decisions.
Essentially, Sutton argues that imitation will not get us to AGI; there has to be some reward maximization. In my manuscript, I make the parallel argument: imitating text, or even clinician behavior, will not get us to medical decision making; there has to be some patient utility maximization.
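To make the distinction concrete, here is a minimal toy sketch (all numbers and actions are hypothetical, not from my manuscript): an imitation policy copies the action clinicians most often took, while a utility-maximizing policy picks the action with the highest expected utility for this particular patient, and the two can disagree.

```python
from collections import Counter

# Hypothetical clinician actions observed in similar past cases.
clinician_actions = ["test", "test", "test", "treat", "test"]

# Imitation: choose the action clinicians most often took.
imitation_choice = Counter(clinician_actions).most_common(1)[0][0]

# Utility maximization: choose the action with the highest expected
# patient utility under an (assumed) model of this patient.
p_disease = 0.7  # assumed probability this patient has the disease
expected_utility = {
    # E[u] = P(disease) * u(action | disease) + P(no disease) * u(action | no disease)
    "test":  p_disease * 0.6 + (1 - p_disease) * 0.9,  # 0.69
    "treat": p_disease * 0.9 + (1 - p_disease) * 0.5,  # 0.78
}
utility_choice = max(expected_utility, key=expected_utility.get)

print(imitation_choice)  # "test": the majority clinician behavior
print(utility_choice)    # "treat": higher expected utility for this patient
```

The point of the sketch is only that matching average clinician behavior and maximizing an individual patient's expected utility are different objectives, and can yield different decisions.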