When we conduct medical research studies, it is important that we design our studies well, analyze the data correctly, and communicate the results accurately. To do these things, we inevitably have to address concepts like experimentation, observation, and, perhaps most of all, causality. These topics have been studied across computer science, statistics, clinical trials, reinforcement learning, control theory, epidemiology, and other fields.
Today, some great points on this topic were made here. I agree that causality is the default in trials (and in online reinforcement learning) and that causal frameworks such as do-calculus and directed acyclic graphs are useful (see my views here). From a notational and conceptual standpoint, I would personally advocate merging directed acyclic graphs with a more policy-centric approach (as I describe here). That approach aligns with work in control theory and reinforcement learning, and, through its focus on probability, with clinical trials. Because such a framework is expressed in probabilities, it is also a natural fit for clinicians, who become accustomed to probabilistic thinking in order to wade through the uncertainty that is medicine. Either way, it is good to see that the concept of causality, and the frameworks developed to assess and discuss it, are being discussed.
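The gap between observing and intervening that do-calculus formalizes can be made concrete with a toy simulation. The sketch below is purely illustrative (not from any of the linked posts): a hypothetical confounder Z (say, disease severity) influences both a treatment X and an outcome Y, so the observational quantity P(Y | X=1) differs from the interventional quantity P(Y | do(X=1)), which severs Z's influence on X. All variable names and probabilities are made up for the example.

```python
import random

random.seed(0)

def sample(do_x=None):
    """Draw one (Z, X, Y) from a toy structural causal model.

    Z -> X and Z -> Y (confounding); X -> Y (treatment effect).
    Passing do_x overrides X's mechanism, i.e. the do() intervention.
    """
    z = random.random() < 0.5                       # confounder, e.g. severity
    if do_x is None:
        x = random.random() < (0.8 if z else 0.2)   # treatment depends on Z
    else:
        x = do_x                                    # do(X = do_x): cut Z -> X
    y = random.random() < (0.3 + 0.2 * x + 0.4 * z) # outcome depends on X and Z
    return z, x, y

n = 100_000

# Observational: P(Y=1 | X=1), biased because Z drives both X and Y.
obs = [sample() for _ in range(n)]
treated = [(z, x, y) for z, x, y in obs if x]
p_obs = sum(y for _, _, y in treated) / len(treated)

# Interventional: P(Y=1 | do(X=1)), the causal effect of forcing treatment.
itv = [sample(do_x=True) for _ in range(n)]
p_do = sum(y for _, _, y in itv) / n

print(f"P(Y=1 | X=1)     ~ {p_obs:.2f}")  # inflated by confounding (~0.82)
print(f"P(Y=1 | do(X=1)) ~ {p_do:.2f}")   # true causal quantity (~0.70)
```

The observational estimate overstates the benefit of treatment because treated patients are disproportionately the high-Z ones; the intervention, like randomization in a trial, removes that dependence.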
Moving forward, it will be paramount to take the best from each framework: the intuitive visual qualities of directed acyclic graphs, the rigor and transparency of do-calculus and potential outcomes, the focus on study design and statistics in clinical trials, and the action- and policy-focused perspectives of fields like control theory and reinforcement learning.
Advances here will ultimately help us, together, make better sense of evidence and better decisions for patients.