
Monday, December 05, 2022

Canceling Science and Monetizing Outrage

If we maintain the fantasy of a puritan separation of science and business, then innovation will dry up and die. There will be no one left to block and tackle for science, or to help us navigate the valley of death that lies between a scientific discovery and a cure, product, or better policy. The negative epistemic valence cast by digital and mainstream media is polluting the commons of science communication, hindering the public's ability to distinguish fact from fiction. The implications for health, climate, democracy, and human welfare are tremendous.


Background

A recent article in the New York Times puts the intersection of business and science at the center of a debate over ongoing climate research by Dr. Frank Mitloehner at UC Davis. This parallels a prior article from some years ago about Dr. Kevin Folta and his work in agricultural biotechnology and science communication and outreach.

The article seems to assert that Mitloehner's industry connections and collaborations are compromising his integrity and his research on the relationship between livestock and GHG emissions.

Below are some of the most critical comments from the article:

“Industry funding does not necessarily compromise research, but it does inevitably have a slant on the directions with which you ask questions and the tendency to interpret those results in a way that may favor industry,” said Matthew Hayek, an assistant professor in environmental studies at New York University.

“Almost everything that I’ve seen from Dr. Mitloehner’s communications has downplayed every impact of livestock,” he said. “His communications are discordant from the scientific consensus, and the evidence that he has brought to bear against that consensus has not been, in my eyes, sufficient to challenge it.”

Communicating the Science

Assertions are made, but no evidence is offered as to how Mitloehner's research is compromised or in what ways his work contradicts any scientific consensus. But it certainly puts his communications about his research on the chopping block. This is a big risk of doing science communication and outreach, as I have discussed before here. In attempting to simplify complex scientific ideas for a broader audience, communicators are at risk of being called out for any particular nuance they failed to include. It also creates enough space for any critic to write an entire thesis about why you are wrong. As I stated previously:

"Usually this is about how they didn't capture every particular nuance of the theory, failed to include a statement about certain critical assumptions, or over simplified the complex thing they were trying to explain in simple terms to begin with. This kind of negative social harassment seems to be par for the course when attempting to communicate on social media ... A culture that is toxic toward effective science communication becomes an impediment to science itself and leaves a void waiting be filled by science deniers, activists, policy makers, decision makers, and special interests."

One example called out in the NYT article was the production of a video called Rethinking Methane:



The article states: 

“The message of the five-minute video is that, because methane is a relatively short-lived greenhouse gas (once it’s in the atmosphere, it becomes less potent as the years go by), cattle would not cause additional warming as long as their numbers did not grow.”

“The argument leans on a method developed by scientists that aims to better account for the global-warming effects of short-lived greenhouse gases like methane. However, the use of that method by an industry ‘as a way of justifying high current emissions is very inappropriate.’”

When considering sources of GHG emissions, understanding how methane behaves is fundamental to understanding climate change and to personal and policy decisions related to mitigating future warming. Understanding this can help direct attention to the areas where we can make the biggest difference in terms of impacting climate change. As discussed in Allen et al. (2018):

"While shorter-term goals for emission rates of individual gases and broader metrics encompassing emissions’ co-impacts remain potentially useful in defining how cumulative contributions will be achieved, summarising commitments using a metric that accurately reflects their contributions to future warming would provide greater transparency in the implications of global climate agreements as well as enabling fairer and more effective design of domestic policies and measures."

But instead of diving into the meat (pun intended) of the science, the second statement about this video makes an assertion about using this science to justify high current emissions. 

Is that what Dr. Mitloehner is doing in his many communications, or is it actually the case that when we estimate the impact of climate change he thinks we should be using metrics that do a better job capturing the dynamics of different GHGs?  If his science really led him to dismiss the 'current high emissions' related to methane then why would he be spending time and energy researching and communicating about ways to reduce GHG emissions related to methane via feed additives and other management practices? 

And when we talk about current high emissions related to livestock what do we mean - compared to what?  The article states:

"scientific research has long shown that agriculture is also a major source of planet-warming emissions, ranking below the leading causes — the burning of coal, gas and oil — but still producing almost 15 percent of global emissions, the United Nations estimates."

That is a nice factoid, but it conflates all global emissions from agriculture with livestock emissions. It also commits a kind of ecological fallacy if we attribute that global number to the GHG emissions of livestock in a specific country, particularly when the audience here is U.S. consumers who mostly eat beef produced in the U.S. (where in fact the contribution to total global GHG emissions is less than 1/2 of 1%).
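The gap is worth quantifying. A quick back-of-the-envelope sketch using the two figures cited above (the UN's roughly 15 percent for all of global agriculture and the roughly 0.5 percent for U.S. beef, both as shares of total global emissions):

```python
# Shares of total global GHG emissions, using the figures cited above (approximate).
global_agriculture_share = 0.15   # all of agriculture, worldwide (UN estimate)
us_beef_share = 0.005             # U.S. beef production (< 1/2 of 1%)

# Attributing the global agriculture number to U.S. beef overstates its
# contribution by roughly:
print(round(global_agriculture_share / us_beef_share))  # 30, i.e. a ~30x overstatement
```

Reading the first number as if it described the second overstates the relevant contribution by an order of magnitude or more.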

The fact about global numbers is relevant to Mitloehner's work only in the sense that his research could have a much greater impact in developing countries, where the GHG footprint of livestock production may be 10X greater (EPA GHG Emissions Inventory; Rotz et al., 2018). As stated in a recent article in Foreign Policy:

"Generalizations about animal agriculture hide great regional differences and often lead to diet guidelines promoting shifts away from animal products that are not feasible for the world’s poor....A nuanced approach to livestock was endorsed in the latest mitigation report of the U.N. Intergovernmental Panel on Climate Change (IPCC).....there is great room for improvement in the efficiency of livestock production systems across developing countries" 

But these nuances are lost in the NYT article along with recognition that there are multiple margins to consider when thinking about the tradeoffs related to food production and consumption. Policy should consider the numerous choices consumers and producers make in a modern and global economy in relation to nutrition, energy, and climate.  

Parallels with the Past

When reflecting on this NYT article and the prior article focused on Dr. Kevin Folta I see at least three parallels:

1) An appeal to the nirvana fallacy of a perfect separation between science and business. While this is not explicitly stated, both stories paint a picture of malfeasance and industry influence connected with the work of these scientists, without providing evidence that their research or findings are biased or conflict with any major consensus. They simply imply that any industry connection is questionable. Guilt by association alone.

2) Having established a theatre of doubt around the integrity and character of these scientists in the minds of readers, the next step involves a kind of ad hominem reasoning: because of these industry connections and the researchers' questionable integrity, anything they claim must be false, misleading, or contrary to the mainstream scientific consensus.

With the first two parallels in place, the public is then set up to make a third mistake in reasoning:

3) Argument by intimidation. The implication here is that anyone who references or leans on the work of these scientists must also have questionable integrity or character. This can be invoked as a way to bypass debate and avoid discussing the actual science or evidence supporting the claims one may be making. I'm not saying that the NYT article does this explicitly, but it pollutes the science communication environment in a way that makes this more likely to happen.

This leads me to ask - why would mainstream media follow this kind of recipe when producing stories?

Changing Business Models for Modern Media

In his book The Constitution of Knowledge, Jonathan Rauch discusses how, in the old days of print media, economies of scale supported the production of real news, or reality-based content. But new business models have been built on information rather than knowledge and are geared toward monetizing eyeballs and clicks. This new business model favors "professionals in the arts of manipulative outrage: the kinds of actors who were more skilled at capturing attention not persuasion and who were more interested in dissemination than communication."

Rauch observes: "By the early 2020s high quality news was struggling to stay in business, while opinion, outrage, derivative boilerplate, and digital exhaust (personal data generated by internet users) enjoyed a thriving commercial market."

Quoting one digital media pundit: "you can't sell news for what it costs to make."

As mainstream media has adopted social and digital media strategies it may not be surprising to see patterns like those above emerge.

In 2020, former President Barack Obama said in The Atlantic:

"if we do not have the capacity to distinguish what's true from what's false, then by definition the marketplace of ideas doesn't work. And by definition our democracy doesn't work. We are entering into an epistemological crisis." 

Communicating science is challenging enough. The battle with misinformation and disinformation did not begin or end with the COVID pandemic. It doesn't help when major media outlets would rather cash in on eyeballs and outrage than communicate science.

Related Readings and Resources

Allen, M.R., Shine, K.P., Fuglestvedt, J.S. et al. A solution to the misrepresentations of CO2-equivalent emissions of short-lived climate pollutants under ambitious mitigation. npj Clim Atmos Sci 1, 16 (2018). doi:10.1038/s41612-018-0026-8

C. Alan Rotz et al. Environmental footprints of beef cattle production in the United States, Agricultural Systems (2018). DOI: 10.1016/j.agsy.2018.11.005 

Facts, Figures, or Fiction: Unwarranted Criticisms of the Biden Administration's Failure to Target Methane Emissions from Livestock. https://ageconomist.blogspot.com/2021/12/facts-figures-or-fiction-unfair.html  

The Ethics of Dietary Nudges and Behavior Change Focused on Climate and Sustainability. https://ageconomist.blogspot.com/2022/10/the-ethics-of-dietary-nudges-and.html

Will Eating Less U.S. Beef Save the Rainforests? http://realclearagriculture.blogspot.com/2020/01/will-eating-less-us-beef-save.html


Tuesday, June 28, 2022

The Role of Identity Protective Cognition in the Formation of Consumer Beliefs and Preferences

Background

People pick and choose their science, and often they do it in ways that seem rationally inconsistent. One lens through which we can view this is Bryan Caplan's idea of rational irrationality, along with Borland and Pulsinelli's concept of social harassment costs.

According to Caplan:

 "...people have preferences over beliefs. Letting emotions or ideology corrupt our thinking is an easy way to satisfy such preferences...Worldviews are more a mental security blanket than a serious effort to understand the world."

This means that:

"Beliefs that are irrational from the standpoint of truth-seeking are rational from the standpoint of utility maximization."

In a previous post, I visualized social harassment costs that vary depending on one's peer group. If these costs exceed a certain threshold (k), consumers might express preferences that would otherwise seem irrational from a scientific standpoint. For example, depending on peer group, a consumer might embrace scientific evidence related to climate change but, due to strong social harassment, reject the views of the broader medical and scientific community on the safety of genetically engineered foods.
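For readers who want the mechanics, here is a minimal sketch of that threshold logic. The functional form and parameter names are my own illustrative choices, not Caplan's or Borland and Pulsinelli's formal models: the consumer voices the peer group's view whenever the social harassment cost of dissenting exceeds the personal value of accuracy by more than the threshold k.

```python
# Minimal sketch of belief expression under social harassment costs.
# Functional form and parameters are illustrative assumptions, not a
# formal model from Caplan or Borland and Pulsinelli.

def expressed_belief(accuracy_value, harassment_cost, k=0.0):
    """Which view does a utility-maximizing consumer voice?

    accuracy_value: personal benefit of holding the evidence-based view
    harassment_cost: peer-group cost of voicing that view
    k: threshold beyond which harassment dominates
    """
    if harassment_cost - accuracy_value > k:
        return "peer-group view"   # irrational w.r.t. truth, rational w.r.t. utility
    return "evidence-based view"

# The same consumer, facing different harassment costs on different issues:
print(expressed_belief(accuracy_value=1.0, harassment_cost=0.2))  # climate: evidence-based view
print(expressed_belief(accuracy_value=1.0, harassment_cost=3.0))  # GE foods: peer-group view
```

The point of the sketch is that nothing about the consumer changes between the two calls; only the peer-group cost does.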



See here, and here.

Identity Protective Cognition

In Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition (Kahan, 2017), the concept of identity protective cognition adds more to the picture. Below are four aspects of identity protective cognition:

  1. What people accept as factual information is shaped primarily by their values and identity
  2. Identity is a function of group membership, i.e. it's tribal in nature
  3. If people choose to hold beliefs that are different from what the 'tribe' believes, then they risk being ostracized (i.e. they face social harassment costs)
  4. As a result, individual thinking and thought patterns evolve to express group membership, and what is held to be factual information is really an expression of 'loyalty to a particular identity-defining affinity group.'

Additionally, Kahan discusses some important implications of this sort of epistemic tribalism. Additional education and more accurate information aren't necessarily effective tactics for addressing the problems of misinformation and disinformation. In fact, what Kahan's and others' research has shown is that these tactics can actually make the problem worse.

'those highest in science comprehension use their superior scientific-reasoning proficiencies to conform policy-relevant evidence to the position that predominates in their cultural group....persons using this mode of reasoning are not trying to form an accurate understanding of the facts in support of a decision...with the benefit of the best available evidence....Instead they are using their reasoning to cultivate an affective stance that expresses their identity and their solidarity with others who share their commitments.' 

In this way, identity protective cognition creates a sort of spurious relationship between what may be perceived as facts and the beliefs we adopt or choices we make. It gives us the impression that our beliefs are being driven by facts when the primary driver may actually be cultural identity. 
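A toy simulation (entirely my own illustration, not from Kahan's paper) makes the word 'spurious' concrete: let cultural identity drive both which 'facts' a person accepts and which policy beliefs they hold, with no direct link between the two, and the two will still be strongly correlated.

```python
# Toy simulation: identity drives both accepted 'facts' and policy beliefs,
# producing a spurious facts-beliefs correlation. Entirely illustrative.
import random

random.seed(42)

facts, beliefs = [], []
for _ in range(10_000):
    identity = random.choice([0, 1])                 # affinity-group membership
    facts.append(identity + random.gauss(0, 0.5))    # depends on identity only
    beliefs.append(identity + random.gauss(0, 0.5))  # depends on identity only

# Pearson correlation between accepted facts and policy beliefs
n = len(facts)
mf, mb = sum(facts) / n, sum(beliefs) / n
cov = sum((f - mf) * (b - mb) for f, b in zip(facts, beliefs)) / n
sf = (sum((f - mf) ** 2 for f in facts) / n) ** 0.5
sb = (sum((b - mb) ** 2 for b in beliefs) / n) ** 0.5
print(round(cov / (sf * sb), 2))  # ~0.5, though neither variable causes the other
```

Neither series was generated from the other, yet they move together because identity sits behind both.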




This sort of tribalism can result in a sort of tragedy of the science communication commons - similar to the tragedy of the commons in economics where what seems rational from an individual standpoint (adopting the beliefs of the group to avoid punishment) is irrational from the standpoint of accuracy of beliefs and has negative consequences for society at large. As a result:

'citizens of a pluralistic democratic society are less likely to converge on the best possible evidence on threats to their collective welfare.'

Of course this has consequences for elections, the regulatory environment, and decisions by businesses and entrepreneurs in terms of what products to market and where to invest capital and resources. Ultimately this impacts quality of life and our ability to thrive in a world with a changing climate and bitter partisanship and social unrest. 

The Problem and the Solution

As discussed above, this sort of tribal epistemology is not easily corrected by providing correct information or education. In fact, it drives one to seek out misinformation in support of one's identity while ignoring what is factually correct. Kahan speaks broadly about the role that 'pollutants' or 'toxins' in the science communication environment play in promoting this tribal mentality. One form of social harassment cost that may be driving this is cancel culture, or call-out culture. Cancel culture works like an immune system that scans the network of believers, seeks out non-conforming views, and tags them to be attacked by others in the group. This drives even the brightest to seek out misinformation instead of avoiding it.




Another pollutant in the science communication environment is troll epistemology and related efforts to produce a 'firehose of falsehood' (see Paul and Matthews, 2016). Whether intentional or not, modern media technology provides the infrastructure to produce an effect similar to modern propaganda techniques pioneered in Russia. This approach emphasizes flooding the science communication environment with high-volume, multichannel, rapid, continuous, and repetitive false or unsubstantiated claims, with no commitment to objective reality or logical consistency.


Kahan concludes:

'the most effective manner to combat the effect of misconceptions about science and outright misinformation is to protect the science communication environment from this distinctive toxin.'

Related Posts and References

Borland, Melvin V. and Robert W. Pulsinelli. Household Commodity Production and Social Harassment Costs. Southern Economic Journal. Vol. 56, No. 2 (Oct., 1989), pp. 291-301.

Frimer, J. A., Skitka, L. J., & Motyl, M. (2017). Liberals and conservatives are similarly motivated to avoid exposure to one another's opinions. Journal of Experimental Social Psychology, 72, 1–12. https://doi.org/10.1016/j.jesp.2017.04.003

Kahan, Dan M., Misconceptions, Misinformation, and the Logic of Identity-Protective Cognition (May 24, 2017). Cultural Cognition Project Working Paper Series No. 164, Yale Law School, Public Law Research Paper No. 605, Yale Law & Economics Research Paper No. 575, Available at SSRN: https://ssrn.com/abstract=2973067 or http://dx.doi.org/10.2139/ssrn.2973067

Paul, Christopher and Miriam Matthews, The Russian "Firehose of Falsehood" Propaganda Model: Why It Might Work and Options to Counter It. Santa Monica, CA: RAND Corporation, 2016. https://www.rand.org/pubs/perspectives/PE198.html.

Friday, June 17, 2022

The Limits of Nudges and the Role of Experiments in Applied Behavioral Economics

In a recent article in Nature, Evidence from a statewide vaccination RCT shows the limits of nudges, the authors found that eight different nudges previously shown to be effective for encouraging flu and COVID vaccination failed to show impact when tested on more reluctant populations. A knee-jerk reaction is that maybe nudges aren't effective ways to increase vaccination after all. But that completely misses a very important aspect of nudges: they work in most cases because humans are sensitive to context, and applied behavioral design processes work to understand this context and test interventions to know if they are effective in a given context. At the highest level, I think this paper is less about the ineffectiveness of nudges per se and more about the important role of context in behavior.

I'd like to unpack this further by focusing on the following:

  • The importance of testing (see the sketch after this list). You can’t blindly chuck nudges over the fence at your customers and simply assume they will be effective just because they worked in prior published studies or in other businesses. In this paper the authors tested eight different nudges. If they had scaled any or all of these without testing, we would not have learned anything about effectiveness or the other lessons that follow about why they may not have worked. And vice versa: just because something failed to replicate in one context doesn't invalidate prior work or imply it won't work in yours. We just know findings are not generalizable across all contexts. The only way to really know about your context is to test. That is part of the value of business experiments.
  • Context matters. In the paper they discussed important differences in context between late stage COVID vaccination and vaccination earlier in the pandemic, as well as differences between flu and COVID.
  • The utility of behavioral personas and behavioral mapping to guide our thinking about why a given nudge may or may not work. To take context a bit further, the authors discussed differences in populations (age), different challenges for flu vs. COVID vaccination, and the differential impact of how logistical and psychological barriers were addressed in different populations and contexts with different designs. All of these are things we can think about in the framework of behavioral mapping. Other issues related to 1) different kinds of hesitancy and changing norms over time, and 2) whether some participants may have already been vaccinated (and, though not mentioned, perhaps how prior infection may have changed the sense of effectiveness or urgency). These things may relate more to the kinds of personas a given nudge may speak to. Although the paper doesn't discuss behavioral mapping or developing personas, their utility here seems palpable.
  • Behavioral design frameworks. Additionally, authors discussed the impact of things like message saturation and novelty effects in addition to timing. These are things that I tend to think about in the context of Stephen Wendel’s CREATE action funnel as a design framework that speaks to issues like the importance of Cue and Timing. (Actually every aspect of CREATE speaks to almost all of the aspects of this messaging in some way).
  • The importance of operationalizing applied behavioral science through repeatable iterative cycles of learning. Even if one constructed behavioral maps and personas in the design of these nudges, the findings in this paper (and in many instances where we leverage experiments to test impact) dictate that we go back and revise our maps and personas based on learnings like these.
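To ground the first point about testing, here is a minimal sketch of what testing a nudge in your own context can look like: a two-proportion z-test comparing vaccination rates between a nudged arm and a control arm. The counts are made up for illustration; this is not data from the Rabb et al. study.

```python
# Minimal sketch of testing one nudge with a two-sample proportion z-test.
# All counts are hypothetical illustration, not data from Rabb et al. (2022).
from math import sqrt, erf

def two_proportion_z_test(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal approximation
    return z, p_value

# Hypothetical experiment: text-message nudge arm vs. control arm
z, p = two_proportion_z_test(x1=540, n1=5000,   # nudge arm:  10.8% vaccinated
                             x2=500, n2=5000)   # control:    10.0% vaccinated
print(f"z = {z:.2f}, p = {p:.3f}")  # z = 1.31, p = 0.190: no detectable effect here
```

An effect that looked impressive in a published study can easily wash out like this in a new population, which is exactly why scaling without testing is risky.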

There has also been some recent discussion about the failure of nudges because they focus too much on individual behavior (the i-frame) vs. larger systemic issues (the s-frame). It seems to me that best practices in the 'diagnosis' phase of the behavioral design process would be helpful in both of these areas if the behavioral lens is widened to include deeper thinking about the broader system (s-frame). In The Constitution of Knowledge: A Defense of Truth, Jonathan Rauch discusses the challenges of changing behavior when beliefs and identity become tightly braided together. Sometimes people first have to be moved to a 'persuadable place emotionally,' and their 'personal opinions, political identities, and peer group norms' have to be 'nudged and cajoled simultaneously, which is a long slow process.' To quote Jim Manzi, you can't test your way out of a bad strategy. None of this means we should give up on leveraging applied behavioral science to make a positive change in society, but it does make understanding the larger ecosystem in the implementation of nudges all the more critical.

As discussed in a recent article in The Behavioral Scientist:

"Our efforts at this stage will determine whether the field matures in a systematic and stable manner, or grows wildly and erratically. Unless we take stock of the science, the practice, and the mechanisms that we can put into place to align the two, we will run the danger of the promise of behavioral science being an illusion for many—not because the science itself was faulty, but because we did not successfully develop a science for using the science." 

The authors follow with six guidelines, echoing some of the sentiments above, that are well worth reading.

Reference: 

Rabb, N., Swindal, M., Glick, D. et al. Evidence from a statewide vaccination RCT shows the limits of nudges. Nature 604, E1–E7 (2022). https://doi.org/10.1038/s41586-022-04526-2

Chater, Nick and Loewenstein, George F., The i-Frame and the s-Frame: How Focusing on Individual-Level Solutions Has Led Behavioral Public Policy Astray (March 1, 2022). Available at SSRN: https://ssrn.com/abstract=4046264 or http://dx.doi.org/10.2139/ssrn.4046264


Monday, June 13, 2022

Agricultural Economics in the Healthcare Space

During the pandemic, it wasn't too uncommon to hear the criticism that economists should stay in their lane when it comes to issues related to health. So I thought I would write a short piece discussing what role I have had as an applied (agricultural) economist working in the healthcare space for almost a decade now. 

Economics is the study of people's choices and how they are made compatible. At a high level, agricultural economics focuses on choices related to food, fiber, natural resources, and energy production and consumption. This makes the intersection of food, health, and the environment an interesting space in agricultural economics. 

How do choices in this space impact health? What factors lead individuals to make healthy choices? In graduate school I specifically focused on why people seem to pick and choose their science and on the role of evidence in food choices and attitudes toward food technology. What is the role of information and disinformation in the formation of consumer preferences and the choices they make? How can we design better policies, products, services, interventions, or choice architectures for better outcomes? How can we communicate science and risk more effectively? And what are the best approaches in experimental design and causal inference to measure impact in these areas? How do we bring this all together to make better decisions as individuals, business leaders, and as a society? At the applied level, which is where I work, this is not so much about making a novel contribution to the literature or advancing the field as it is about implementation: applying the principles of economics to develop solutions or provide frameworks to solve or better understand questions and problems in this space.

This line of reasoning has value not just in the context of food choices but for a myriad of behaviors related to healthcare at both the patient and provider level. From a business perspective, this is about how to identify opportunities to move resources from a lower to a higher valued use, and how we monetize behavior change. Of course applying this economic lens also requires bringing an ethical perspective to the table as well, which is important when we consider all of the tradeoffs involved in human decision making. 

When we are faced with wicked problems that may have alternative solutions, we can't just jump directly from the science to a cure, better policy, product, or service. We learned from the pandemic the difference between having a vaccine and having people get vaccinated. At the end of the day there are really no solutions, only tradeoffs, and we need a framework for understanding those tradeoffs so we can make better decisions about food and health. That is squarely in the lane of theoretical and applied economists.

Related Posts and Readings

Why Study Economics / Applied Economics 

The Convergence of AI, Life Sciences, and Healthcare

The Economics of Innovation in Biopharma

Science Communication for Business and Non-technical Audiences 

The Value of Business Experiments

Statistics is a way of thinking not a toolbox

Causal Decision Making with Non-Causal Models

Rational Irrationality and Behavioral Economic Frameworks for Combating Vaccine Hesitancy 

Consumer Perceptions, Misinformation, and Vaccine Hesitancy

Using Social Network Analysis to Understand the Influence of Social Harassment Costs and Preferences Toward Biotechnology

Fat Tails, The Precautionary Principle, and GMOs

Innovation, Disruption and Low(er) Carbon Beef

Examining Changes in Healthy Days After Health Coaching. Cole, S., Zbikowski, S. M., Renda, A., Wallace, A., Dobbins, J. M., & Bogard, M. American Journal of Health Promotion. (2018)

Intrapersonal Variation in Goal Setting and Achievement in Health Coaching: Cross-Sectional Retrospective Analysis. Wallace A.M., Bogard M.T., Zbikowski S.M. J Med Internet Res 2018;20(1):e32

Saturday, April 03, 2021

Consumer Perceptions, Misinformation, and Vaccine Hesitancy

In graduate school I focused on how consumer consumption patterns signal social viewpoints, and on the role of information and misinformation in the process. Particularly interesting was the observation that some consumers held strongly science-based views on some issues while simultaneously holding other views inconsistent with those of the larger scientific community. What could explain this? I hypothesized a utility-maximizing model involving worldviews and social harassment costs, consistent with the idea that viewpoints that are irrational by the standard of scientific truth and evidence can be rational from the standpoint of personal utility maximization. This isn't so different from the idea of coherence in Kahneman's Thinking, Fast and Slow, where he argues that the coherence of the story matters more than the quality of the evidence.

In 'Finding a vaccine for misinformation,' the author addresses the challenges of misinformation as it relates to vaccine hesitancy and leverages some of the same behavioral economic frameworks. The article explains:

"A coherent story works because our minds don't just encode facts and events into memory...we also store bottom line meaning or 'gist' and it is the stored gist, not the facts, that typically guides our beliefs and behaviors"

They go on to explain that our worldview (pre-existing internal stories based on our mental tapestry of culture, knowledge, beliefs, and life experiences) determines which gist is stored and resonates.

Part of their strategy for dealing with this is 'inoculating' consumers through gamification so that they are less susceptible to misinformation. I'm not sure gamification is the answer, but at the least, what can be learned from this research could lead to progress on this front:

"Introne believes that he can use this approach to target the weakest links in false narratives and bring people closer to changing their minds. He says that if he can deliver information that doesn’t conflict with a person’s belief state but still brings them around to a more accurate point of view, “then I’ve got a pretty powerful thing.”

This reflects a lot of what we have learned over the years. Simply presenting facts and evidence, telling people they are wrong on the internet so to speak, isn't going to change minds or behavior. Our communication has to be much more strategic, with laser-like intent.

References:

News Feature: Finding a vaccine for misinformation. Gayathri Vaidyanathan. Proceedings of the National Academy of Sciences Aug 2020, 117 (32) 18902-18905. DOI: 10.1073/pnas.2013249117. https://www.pnas.org/content/117/32/18902

Related References:

Information Avoidance and Image Concerns. Exley, Christine L. and Kessler, Judd B. National Bureau of Economic Research. Working Paper No. 28376, January 2021. doi: 10.3386/w28376. http://www.nber.org/papers/w28376


Sunday, September 16, 2018

Don't Throw Good Science Out With the Dirty COI Bathwater

There has been a lot of conversation lately regarding conflicts of interest in research. This recent tweet by Andrew Kniss resonated a lot with me:

No doubt conflicts of interest are important to be aware of and understand. We definitely want to maintain the highest integrity with regard to science communication and research. However, it is important that COI 'labels' don't become the new 'free-from' label; i.e., we don't want a conflict of interest to be conflated with bad science any more than we would want a GMO label to be conflated with unhealthy or unsustainable. We don't want COI to become the red herring for smear campaigns, political correctness, or a disincentive to doing good science.

About a year ago, during one of his excellent Talking Biotech podcasts (specifically the one about the movie Food Evolution), Kevin Folta made an excellent point about COI and bias in research:

"I've trained for 30 years to be able to understand statistics and experimental design and interpretation...I'll decide based on the quality of the data and the experimental design....that's what we do."

COIs involve a delicate balance. I think the recent controversy around COIs highlights how important it is for producers and communicators of science to get this right. There's also a challenge for consumers of science: knowing how to recognize good science without throwing it out with the COI bathwater.