Dr. Corwin Zigler
You don’t have to look very far in the news these days to see that there is a lot of action in the realm of environmental health policy, mostly due to the sharp change in perspective that came with the Trump administration’s EPA appointees. Lurking in the details is a debate surrounding “causal inference” and its specific relation to setting policies that I think would be of interest to members of HPSS.
The importance of air pollution policy and the role of causality has been a topic of increased discussion for the past 10 years or so. A useful summary of the importance of the problem is available here. A focus on causal inference in this context has motivated quite a few papers – some by members of HPSS – designed to follow the trend of other fields by incorporating rigorous causal inference methods into studies of air pollution policy. In fact, the profile of causal inference methods has risen rapidly in air pollution epidemiology over the past several years, with more researchers adopting formal causal perspectives and causal inference featuring ever more prominently at conferences and workshops.
But the importance of inferring causality in these settings (and its inherent challenges) is now being reoriented by Trump appointees to undercut air pollution regulations. Industry-friendly appointees have replaced scientists on advisory boards. A key example is the Clean Air Scientific Advisory Committee, which is charged with reviewing all available science about pollution and health and making recommendations for policy. The debate of the moment is the review of the National Ambient Air Quality Standards (NAAQS) for fine particulate matter (PM2.5), which must be reviewed on a regular schedule.
As an aside, the EPA recently created a rule barring anyone who had received EPA research funding from serving on this committee, effectively disqualifying some of the most credible scientists working in the area.
These efforts are being conducted in parallel with proposed rules on transparency in scientific research that are similarly disingenuous: worthwhile virtues are being touted with the ultimate goal of disqualifying decades-old research. Here’s a statistics-related quote from that linked report, which should give a flavor:
“In addition, this proposed regulation is designed to increase transparency of the assumptions underlying dose response models. As a case in point, there is growing empirical evidence of non-linearity in the concentration-response function for specific pollutants and health effects. The use of default models, without consideration of alternatives or model uncertainty, can obscure the scientific justification for EPA actions. To be even more transparent about these complex relationships, EPA should give appropriate consideration to high quality studies that explore: A broad class of parametric concentration-response models with a robust set of potential confounding variables; nonparametric models that incorporate fewer assumptions; various threshold models across the exposure range; and spatial heterogeneity. EPA should also incorporate the concept of model uncertainty when needed as a default to optimize low dose risk estimation based on major competing models, including linear, threshold, and U-shaped, J-shaped, and bell-shaped models.”
This reads to me more like a laundry list of things that are hard about doing statistics, abusing the fact that some methods are sometimes inappropriate to imply that all methods are frequently inappropriate. The disingenuousness parallels that of the transparency rules: robust confounding adjustment, model uncertainty, flexible parametric specifications, nonparametric methods, spatial heterogeneity, and the rest are all worthwhile in an abstract sense, but to imply that every one of them must be done before a study can be considered high quality seems like an effort to prevent any study from being considered high quality. For example, the passage seems to demand that a single study include both a robust set of parametric models and nonparametric methods, which is close to contradictory.
The causality-specific arguments here are being led in part by Tony Cox, a career consultant with close ties to polluting industries who now chairs the Clean Air Scientific Advisory Committee. He claims to have developed new approaches to causal inference that prove air pollution does not cause the level of harm established over the past several decades. He bundles existing R packages and deploys them to show that there are no causal associations between pollution and health. He essentially sidesteps the issues that members of HPSS would be familiar with when it comes to causal inference – carefully defining potential outcomes to clarify the question/estimand, detailed reasoning about measured and unmeasured confounding, sensitivity analysis, etc. – in favor of an (apparently) black-box approach that, as far as I can tell, bears very little resemblance to the type of causal inference analysis familiar to the bulk of the statistical and epidemiological community.
Dr. Cory Zigler is Associate Professor of Statistics and Data Sciences at the University of Texas at Austin and Dell Medical School. He specializes in methods for the analysis of complex observational studies, and has spent the past several years working to evaluate the health impacts of air pollution regulatory policies. (The opinions and views represented here are the author’s own and do not reflect those of any group with which the author is associated.)