In their well-known article "The Cult of Statistical Significance," Ziliak and McCloskey write:
"William Sealy Gosset (1876-1962) aka “Student,” working as Head Experimental Brewer at Guinness’s Brewery, took an economic approach to the logic of uncertainty. Fisher erased the consciously economic element, Gosset's "real error." We want to bring it back….Statistical significance should be a tiny part of an inquiry concerned with the size and importance of relationships."
For those unfamiliar with the history of statistics, Gosset was the one who came up with the well-known t-test so many of us run across in any basic statistics class. The paper addresses a lot of issues, but to me one related theme is the importance of 'practical' or what I might call 'actionable' statistics. And context matters. Are the results relevant for practical consideration? Is the context realistic? Are there proper controls? What about identification? For instance, not long ago I wrote about a study that attempted to link proximity to farm fields (and hence pesticide exposure) to autism, a study that has been criticized on several of these grounds even though its results were statistically significant. The same goes for another study claiming that proteins from Bt (read "gmo") corn were found in the blood of pregnant women. And not to forget the famous Séralini study that claimed to connect Roundup herbicide to cancer in rats, which was so flawed it was retracted. Context, and economics (how people behave in real-world decision-making scenarios), really matter. Take, for instance, California's consideration of adding Roundup to its list of known carcinogens, a move that could cause environmental harms orders of magnitude worse than anything Roundup itself ever could.
So what does this all have to do with bacon? Well, recently you might have heard a headline like this: “Processed meats rank alongside smoking as cancer causes – WHO.”
This is a prime example of the importance of putting science into perspective: statistical significance, effect sizes, context (like the baseline risks behind the WHO headline above), and practical significance. Millions of people have heard this headline, taken the science at face value, and either acted on it or given it far more credence and lip service than it deserves. At a minimum, every time they have a piece of bacon for the rest of their lives they might think: wow, this could be almost as bad as smoking, or worse.
Economist Jayson Lusk has a really nice post related to this, with quotes from a number of places, and I'm going to borrow a few here. From an article he links to in The Atlantic:
"the practice of lumping risk factors into categories without accompanying description—or, preferably, visualization—of their respective risks practically invites people to view them as like-for-like. And that inevitably led to misleading headlines like this one in the Guardian: “Processed meats rank alongside smoking as cancer causes – WHO.”
“One thing rarely communicated in these sorts of reports is the baseline level of risk. Let's use Johnson's example and suppose that eating three pieces of bacon every day causes cancer risk to increase 18%. From what baseline? To illustrate, let's say the baseline risk of dying from colon cancer (which processed meat is supposed to cause) is 2%, so that 2 out of every 100 die from colon cancer over their lifetime (this reference suggests that's roughly the baseline lifetime risk for everyone, including those who eat bacon). An 18% increase means your risk is now 2.36%, a 0.36 percentage point increase in risk. I suspect a lot of people would accept a less-than-half-a-percentage-point increase in risk for the pleasure of eating bacon….studies that say that eating X causes a Y% increase in cancer are unhelpful unless I know something about what my underlying, baseline probability of cancer is without eating X.”
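To make that arithmetic concrete, here is a minimal sketch in Python of how a headline's relative risk increase translates into an absolute change in risk. The 2% baseline and 18% relative increase are just the illustrative numbers from the quote above, not real epidemiological estimates:

```python
# Convert a relative risk increase (as reported in headlines) into an
# absolute change in risk, given an assumed baseline.

def absolute_risk(baseline, relative_increase):
    """Return the new absolute risk after a relative increase.

    baseline          -- lifetime risk without the exposure (e.g. 0.02 for 2%)
    relative_increase -- reported relative increase (e.g. 0.18 for "18% higher risk")
    """
    return baseline * (1 + relative_increase)

baseline = 0.02           # assumed ~2% lifetime risk of dying from colon cancer
relative_increase = 0.18  # the "18% higher risk" from the headline

new_risk = absolute_risk(baseline, relative_increase)
print(f"New risk: {new_risk:.2%}")                      # 2.36%
print(f"Absolute increase: {new_risk - baseline:.2%}")  # 0.36 percentage points
```

The point is the same as Lusk's: the scary-sounding "18%" works out to roughly a third of a percentage point once you anchor it to a plausible baseline.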
The real cult of statistical significance (and in effect all of the so-called science that follows from it) is the cult-like belief and following by multitudes who hear about this study or that, overly dramatized by media headlines (even if it is a solid study, it is potentially taken out of context and misinterpreted to fit a given agenda or emotive response), and then synthesized into corporate marketing campaigns and, unfortunately, public policies. Think gmo labeling, gluten free, antibiotic free, climate change policy, ad nauseam.
Great post! I wonder if we'll ever see a published article discussing the effect size of a beta that is not statistically significant. Some would, perhaps rightly, consider this a victory for econometrics.