Saturday, June 04, 2016

Left vs. Right: Science vs. Risk vs. Propensity to Regulate

Jayson Lusk has an interesting post on his blog about an article in the Journal of Agricultural and Resource Economics that finds a relationship between left-leaning voters and their willingness to support GMO labeling initiatives:



“One distinction, which I think is missing, is the greater willingness of those on the left to regulate on economic issues, such as GMOs, than those on the right. Stated differently, there are questions of science: what are the risks of climate change or eating GMOs. And then there are more normative questions: given said risk, what should we do about it? Even if the left and the right agreed on the level of risk, I don’t think we should expect agreement on political action.”


If I understand this correctly, it implies the following: suppose both those on the left and the right agreed that some 'day after tomorrow' scenario (in terms of climate change) would warrant some type of government intervention, and they agreed that the science says there is a 3% chance of it happening without the intervention. Those on the right might still object to the intervention at that level of risk, while a more left-leaning person would support it. A right-leaning person might suggest market-based alternatives or taking the gamble, but perhaps if the risk were higher, they would support doing more. In other words, there might be different thresholds for the level of risk required to support a given policy intervention across the political spectrum.

Of course, the scientific consensus on climate change may not really be strong enough to know for sure, i.e. the science isn't settled on exactly which scenarios are likely to play out or the probabilities that they will occur. There's a lot of science to support a wide range of probabilities and scenarios based on a number of assumptions (see here, here, here, and here). So really, I think even the science, risk, and potential outcomes or scenarios are largely based on perceptions, and these might differ significantly across the political spectrum. Maybe it's really about perceived risk.

Thinking about this a little more: what if we specified a model of preferences toward government intervention like the one below (this is more an illustration than a serious attempt to look at this empirically):

Pr(SUPPORT POLICY) = B0 + B1 PERCEIVED RISK + B2 KNOWLEDGE

So suppose we estimated simple linear probability models as specified above for Democrats and Republicans (as shorthand for political preferences). According to the story line above, B1 would be higher for Democrats than Republicans. (I'm ignoring interaction terms on purpose for simplicity.) I wonder if this would also be true for B2: for a given level of knowledge, would B2 be higher for Democrats/liberals? I also wonder if PERCEIVED RISK is really a function of KNOWLEDGE. Maybe a different specification would look something like:

Pr(SUPPORT POLICY) = B0 + B1 PERCEIVED RISK(KNOWLEDGE)

where PERCEIVED RISK = f(KNOWLEDGE)

So in this case perhaps B1 would still be higher for those with more left-leaning politics. Still I wonder: besides this effect, what if the level or mean of PERCEIVED RISK is in general higher for those on the left? Then you would have a greater inclination toward government intervention for a given level of PERCEIVED RISK (via B1), but also a population of left-leaning voters with PERCEIVED RISK levels that are on average some magnitude higher. Both of these effects would likely increase the propensity to support government intervention.
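To make the illustration concrete, here is a minimal simulation sketch of that story (the variable names, effect sizes, and group differences are hypothetical assumptions for illustration only, not estimates from any real survey). The left-leaning group gets both a larger B1 and a higher mean level of PERCEIVED RISK:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 5000

def simulate_group(b1, risk_mean):
    """Simulate one group's responses and fit a linear probability model."""
    knowledge = rng.normal(0, 1, n)
    # PERCEIVED RISK = f(KNOWLEDGE) plus noise; risk_mean shifts the group's level
    risk = risk_mean + 0.3 * knowledge + rng.normal(0, 1, n)
    # latent propensity to support the policy, clipped to a valid probability
    p = np.clip(0.2 + b1 * risk, 0, 1)
    support = rng.binomial(1, p)
    X = sm.add_constant(np.column_stack([risk, knowledge]))
    return sm.OLS(support, X).fit()

# hypothetical: left gets a larger B1 and a higher mean PERCEIVED RISK
left = simulate_group(b1=0.15, risk_mean=0.5)
right = simulate_group(b1=0.05, risk_mean=0.0)

print("left  B1 estimate:", round(left.params[1], 3))
print("right B1 estimate:", round(right.params[1], 3))
```

Both channels (the larger B1 and the higher average PERCEIVED RISK) push the left-leaning group's predicted probability of supporting the policy upward, which is the point of the thought experiment.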

Consider also: if PERCEIVED RISK = f(KNOWLEDGE), is the level of KNOWLEDGE about GMOs or climate change the same for those on the left and right, and is this really what is partly determining different levels of PERCEIVED RISK? I'm not sure. How often do we hear arguments from the left that drastic actions or mitigating policies to combat climate change are necessary because of the scientific consensus on climate change, when in fact the consensus, such as it is, is pretty weak? Too weak to offer much guidance on actions, or very precise estimates of actual risks (again see here, here, here, and here). And even some of the world's leading experts in risk modeling tend to have ideas about GMO risks that can be seriously questioned (see here). There was a really good book a few years back discussing voter preferences and systematic bias regarding economic policy that addressed similar issues (see The Myth of the Rational Voter).

If preferences toward policy can be modeled in this way, an interesting and maybe promising feature is that perhaps the level of knowledge feeding into PERCEIVED RISK can be altered. We often hear that science and evidence rarely change minds when it comes to biotechnology or climate change. However, in a paper recently published in the FASEB Journal (the journal of the Federation of American Societies for Experimental Biology), Jayson Lusk and Brandon McFadden observed the following:

1) consumers, as a group, are unknowledgeable about GMOs, genetics, and plant breeding and, perhaps more interestingly

 2) simply asking these objective knowledge questions served to lower subjective, self-assessed knowledge of GMOs (i.e., people realize they didn't know as much as they thought they did) and increase the belief that it is safe to eat GM food. 

I'm not a PhD economist or psychometrician, but I would think an approach similar to the structural equation modeling framework I discussed before (depicted below) might get closer to specifying and measuring all of the causal paths and connections between latent constructs around risk perception and the policy environment for GMOs or climate change. Of course, that would also require a solid data set and valid survey instruments. Jayson's work seems to be leading the way. These are just my initial thoughts prior to even reading the Lusk and McFadden article or the JARE article mentioned above, and honestly I have not reviewed much of the actual literature or survey analysis related to risk perceptions or policy preferences since graduate school. Maybe a lot of this has been done already.

[Figure: structural equation modeling framework for risk perception and policy preferences]
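For what it's worth, here is a rough sketch of what estimating such a model might look like in code, assuming the third-party semopy package and entirely hypothetical latent constructs and survey items (this is an illustration, not anything from the papers mentioned above):

```python
import numpy as np
import pandas as pd
from semopy import Model  # third-party SEM package

rng = np.random.default_rng(0)
n = 1000

# simulate hypothetical latent variables and noisy survey items
knowledge = rng.normal(size=n)
perceived_risk = 0.5 * knowledge + rng.normal(size=n)
support = 0.6 * perceived_risk + rng.normal(size=n)

df = pd.DataFrame({
    "knowledge": knowledge,
    "risk_q1": perceived_risk + rng.normal(scale=0.5, size=n),
    "risk_q2": perceived_risk + rng.normal(scale=0.5, size=n),
    "support_q1": support + rng.normal(scale=0.5, size=n),
    "support_q2": support + rng.normal(scale=0.5, size=n),
})

# measurement model (=~) plus structural paths (~), lavaan-style syntax
desc = """
PerceivedRisk =~ risk_q1 + risk_q2
PolicySupport =~ support_q1 + support_q2
PerceivedRisk ~ knowledge
PolicySupport ~ PerceivedRisk
"""

model = Model(desc)
model.fit(df)
print(model.inspect())  # loadings and path coefficients
```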



Sunday, November 01, 2015

The Cult of Statistical Significance...or...no, bacon is not really as bad as or worse than cigarettes!

In their well-known article "The Cult of Statistical Significance," Ziliak and McCloskey write:

"William Sealy Gosset (1876-1962)  aka “Student,” working as Head Experimental Brewer at Guinness’s Brewery, took an economic approach to the logic of uncertainty. Fisher erased the consciously economic  element, Gosset's "real error."  We want to bring it back….Statistical significance should be a tiny part of an inquiry concerned with the size and importance of relationships."

For those unfamiliar with the history of statistics, Gosset was the one who came up with the well-known t-test so many of us run across in any basic statistics class. A lot of issues are addressed in this paper, but to me one related theme is the importance of 'practical' or what I might call 'actionable' statistics. And context matters. Are the results relevant for practical consideration? Is the context realistic? Are there proper controls? What about identification? For instance, not long ago I wrote about a study that attempted to correlate distance from a farm field and exposure to pesticides with autism, which has been criticized on a number of these grounds even though the results were found to be statistically significant. As well as this one claiming that proteins from Bt (read "GMO") corn were found in the blood of pregnant women. And, not to forget, the famous Séralini study that claimed to connect Roundup herbicide to cancer in rats, which was so bad that it was retracted. Context, and economics (how people behave in the context of real-world decision-making scenarios), really matter. Take, for instance, California's potential consideration of putting Roundup on a list of known carcinogens, which might actually cause environmental harms in a number of ways magnitudes worse than Roundup itself ever could.
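Ziliak and McCloskey's point is easy to demonstrate: with a large enough sample, even a practically meaningless effect will clear the p < 0.05 bar. A minimal simulated example (all numbers invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# a practically trivial true effect: +0.2 on a scale with sd = 15
control = rng.normal(loc=100.0, scale=15, size=200_000)
treated = rng.normal(loc=100.2, scale=15, size=200_000)

t, p = stats.ttest_ind(treated, control)
d = (treated.mean() - control.mean()) / 15  # effect size in sd units

print(f"p-value: {p:.1e}")    # 'highly significant'
print(f"Cohen's d: {d:.3f}")  # ...and practically negligible
```

The p-value alone says nothing about whether the effect is large enough to matter, which is exactly the 'size matters' argument.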

So what does this all have to do with bacon? Well, recently you might have heard a headline like this: “Processed meats rank alongside smoking as cancer causes – WHO.”

This is a prime example of the importance of putting science and statistical significance, effect sizes, context (like baseline risks, in the case of the WHO headline above), and practical significance into perspective. Millions of people have heard this headline, taken the science at face value, and either acted on it or given it way more credence and lip service than it deserves. At a minimum, every time for the rest of their lives they have a piece of bacon, they might think: wow, this could be almost as bad as or worse than smoking.

Economist Jayson Lusk has a really nice post related to this, with quotes from a number of places; I'm going to borrow a few here. From an article he links to in the Atlantic:

"the practice of lumping risk factors into categories without accompanying description—or,  preferably, visualization—of their respective risks practically invites people to view them as like-for-like. And that inevitably led to misleading headlines like this one in the Guardian: “Processed meats rank alongside smoking as cancer causes – WHO.”

“One thing rarely communicated in these sorts of reports is the baseline level of risk.  Let's use Johnson's example and suppose that eating three pieces of bacon everyday causes cancer risk to increase 18%.  From what baseline?  To illustrate, let's say the baseline risk of dying from colon cancer (which processed meat is supposed to cause) is 2% so that 2 out of every 100 die from colon cancer over their lifetime (this reference suggests that's roughly the baseline lifetime risk for everyone including those who eat bacon).  An 18% increase means your risk is now 2.36% for a 0.36 percentage point increase in risk.  I suspect a lot of people would accept a less-than-half-a-percentage point increase in risk for the pleasure of eating bacon….studies that say that eating X causes a Y% increase in cancer are unhelpful unless I know something about what my underlying, baseline probability of cancer is without eating X.”
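The arithmetic in that quote is worth spelling out, since it is the whole point (the 2% baseline and 18% relative increase are the quote's illustrative numbers, not measured values):

```python
def absolute_risk_change(baseline, relative_increase):
    """Convert a relative risk increase into an absolute one."""
    new_risk = baseline * (1 + relative_increase)
    return new_risk, new_risk - baseline

# illustrative numbers from the quote: 2% lifetime baseline, 18% relative increase
new_risk, delta = absolute_risk_change(0.02, 0.18)
print(f"new risk: {new_risk:.2%}, increase: {delta:.2%} points")
# -> new risk: 2.36%, increase: 0.36% points
```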

The real cult of statistical significance (and in effect all of the so-called science that follows from it) is the cult-like belief and following by multitudes who hear about this study or that, overly dramatized by media headlines (even if a given study is solid, it can be taken out of context and misinterpreted to fit an agenda or emotive response), and then synthesized into corporate marketing campaigns and, unfortunately, public policies. Think GMO labeling, gluten free, antibiotic free, climate change policy, ad nauseam.

Thursday, October 29, 2015

Applied Microeconomics: The Normative Representative Consumer and Welfare Analysis

In a previous post, I pondered some questions related to using market demand functions to make welfare statements, following broadly Microeconomic Theory by Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green (MWG).

Making welfare statements about aggregate demand revolves around a few key concepts including a positive representative consumer, a wealth distribution rule, and a social welfare function. At a high level, these concepts seem to represent the technical assumptions and characteristics that need to hold in order to make most of the basic analysis of an intermediate microeconomics course mathematically sound or tractable for applied work. Here is a shot at some high level explanations:

positive representative consumer - at a high level, a hypothetical consumer whose UMP (utility maximization problem), facing society's budget constraint, generates a market's or economy's aggregate demand function

wealth distribution rule - for every level of aggregate wealth, a rule that assigns individual wealth levels. This rule or function is what allows us to write aggregate demand as a function of prices and wealth, in order to move forward with the rest of our discussion about welfare analysis.

Examples given in MWG include wealth distribution rules that are a function of shareholdings of stocks and commodities, which make wealth a function of the market's price vector.

social welfare function (SWF) - assigns a welfare value to the vector of utilities for all I consumers in an economy or market: W(u1,...,uI), or written in terms of indirect utilities, W(v1,...,vI).

Maximizing Social Welfare and Defining the Normative Representative Consumer

The wealth distribution rule is assumed to maximize society's social welfare function subject to a given level of aggregate wealth. The optimal solution yields a particular indirect utility function v(p,w).

Normative Representative Consumer - a positive representative consumer is a normative representative consumer relative to a social welfare function W(.) if, for every (p,w), the distribution of wealth maximizes W(.). The optimized v(p,w) is the indirect utility function of the normative representative consumer.

For v(p,w) to exist, we are assuming a SWF, and assuming it is maximized by an optimal distribution of wealth according to some specified wealth distribution rule.

An example from MWG: when v(p,w) is of the Gorman form and the SWF is utilitarian, an aggregate demand function can always be viewed as being generated by a normative representative consumer.
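To make the construction concrete, the wealth distribution rule solves the problem below, and in the Gorman/utilitarian special case the representative consumer's indirect utility falls out directly (standard notation following MWG; a_i and b are the generic Gorman coefficients):

```latex
% wealth distribution rule: allocate aggregate wealth w to maximize welfare
\max_{(w_1,\dots,w_I)} \; W\big(v_1(p,w_1),\dots,v_I(p,w_I)\big)
\quad \text{s.t.} \quad \sum_{i=1}^{I} w_i = w

% Gorman form: indirect utility linear in own wealth, with common slope b(p)
v_i(p,w_i) = a_i(p) + b(p)\,w_i

% with a utilitarian SWF, W = \sum_i v_i, welfare is independent of the
% distribution of wealth, and the normative representative consumer has
v(p,w) = \sum_{i=1}^{I} a_i(p) + b(p)\,w
```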

Tuesday, October 27, 2015

Applied Microeconomics: The Strong Axiom of Revealed Preference, Aggregation, and Rational Preferences

Professionally, most of my focus has been empirical and, to a great extent, agnostic when it comes to micro theory. Take data mining and machine learning, social network analysis, text mining, or issues related to big data, for instance. Or many of the other issues I have taken up at EconometricSense. A lot of what I have worked on has been more about data, algorithms, and experimental design than the nuts and bolts of microeconomic theory.

However, there are some theoretical issues in microeconomics that I either have forgotten, or never really understood that well.

In particular, these issues have to do with the strong axiom of revealed preference, the aggregate market demand function, and welfare analysis as discussed in one of my graduate texts, Microeconomic Theory by Andreu Mas-Colell, Michael D. Whinston, and Jerry R. Green (MWG). From that text I basically get the following:
  • The strong axiom (SA) of revealed preference is equivalent to the symmetry and negative semi-definiteness of the Slutsky substitution matrix
  • Violations of the SA mean cycling of choices, i.e. violations of transitivity
  • If observed demand satisfies the SA, then preferences that rationalize demand can always be recovered
  • It is impossible to find preferences that rationalize a demand function when the Slutsky matrix is not symmetric
That means that, for an individual's observed demand function, if the Slutsky matrix is not symmetric, you can't make welfare statements based on the area beneath the demand curve.
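For reference, the Slutsky substitution matrix in question is the standard one, with (i, j) entry:

```latex
s_{ij}(p,w) = \frac{\partial x_i(p,w)}{\partial p_j}
            + \frac{\partial x_i(p,w)}{\partial w}\,x_j(p,w)

% roughly: the weak axiom corresponds to negative semidefiniteness of S(p,w);
% the strong axiom adds symmetry, which is what recovering (integrating back
% to) rationalizing preferences requires
```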

What happens when we aggregate individual demands to get a market demand function? It seems to me that the data of interest in most applied work are going to be related to an aggregate market demand curve. Based on MWG:
  • the chances of the SA "being satisfied by a real economy are essentially zero"
  • If we allow individuals in an economy to have different preference relations/utility functions, then when we aggregate to get a market demand function, the negative semi-definiteness of the Slutsky matrix (equivalent to the weak axiom of revealed preference) might hold, but "symmetry will almost certainly not."
  • While positive predictions about an equilibrium might hold, without symmetry the SA does not, and we therefore cannot make statements about consumer welfare based on the area beneath an observed empirical market demand function
This last conclusion leaves me with a lot of questions to ponder:
  1. What does that imply with regard to empirical work? It seems not to matter for positive predictions (for instance, a conclusion that a wage set above equilibrium causes a surplus of labor). 
  2. But how much does it matter that I can't use an empirical demand function to calculate changes in consumer surplus for a change in prices? Maybe it only matters if I am interested in calculating some amount. 
  3. For any individual, if the SA holds (which is possible), we certainly know a price increase would reduce consumer surplus, put them on a lower indifference curve, and make them worse off. Regardless of the conclusions above, wouldn't that hold for all consumers represented by the aggregate market demand curve? Can't we make a normative statement (in a qualitative, directional sense, even if we can't calculate total surplus) about all consumers even if the SA fails in the aggregate but holds for each individual?
Now, the MWG text mentioned above does go on in later chapters to discuss the notions of a positive and normative representative consumer, as well as social welfare functions, wealth distribution mechanisms, and the implications for welfare analysis. But I'd really like to know about #3: can we make directional statements about changes in welfare even when we know that any attempt at quantifying or calculating surplus would be invalid due to violations of the SA?

Is this a case where one should just take the example of Milton Friedman's pool players, who behave as if they know physics? Maybe all of the assumptions (like the SA) fail to hold for a market demand function, but we still feel confident making directional or qualitative welfare statements about price changes because everything else about the model predicts so well?

Any thoughts from readers?

I found it interesting that the issues in the bulleted statements related to the MWG text were never addressed, as far as I can tell, in any of my undergraduate principles or intermediate micro texts, nor even in Nicholson's more advanced graduate text. These texts just seem to jump from individual demand to market demand as a horizontal summation of individuals' demand curves, and go straight to welfare analysis and discussions of consumer surplus without discussing these issues related to the SA.

Note: I definitely spent some time with the issue of consumer surplus calculations based on compensated vs uncompensated demand curves. I don't think that is the issue here at all.

****Updated October 29, 2015

Wednesday, September 30, 2015

Big Data, Ag Finance, and Risk Management

I was recently reading an article on AgWeb, "How the Fed's interest rate decision affects farmers," and the following remarks stood out to me:

“You need to plan for higher rates. Yellen said in her remarks that the expectation is that the federal funds rate will rise to 1.5% by late 2016, 2.5% in late 2017, and 3.5% in 2018, so increases are coming. You can manage those hits by improving your efficiency and productivity in your fields and in your financials, which will allow you to provide detailed cost projections and yield estimates to your banker. “Those farmers who are dialing those numbers in will be able to negotiate a better interest rate, simply by virtue of all that information,” Barron says.”
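As a back-of-the-envelope illustration of what 'plan for higher rates' means in dollars (the loan size and the 3-point spread over fed funds are assumptions I made up; the rate path is from the quote):

```python
# hypothetical: interest cost on a $500,000 operating note if borrowing
# rates track the quoted fed funds path plus an assumed 3-point spread
principal = 500_000
spread = 0.03
fed_funds = {2016: 0.015, 2017: 0.025, 2018: 0.035}  # path from the quote

for year, rate in fed_funds.items():
    cost = principal * (rate + spread)
    print(f"{year}: borrowing rate {rate + spread:.1%}, interest ~ ${cost:,.0f}")
```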


So to me this brings up some interesting questions. How interested are lenders in knowing about farmers' data management and how they are leveraging the data generated across their enterprises? Does your farm need an IoT strategy? Or will these things work themselves out in the financials lenders already look at, regardless?

Regardless of what lenders are after, it would make sense to me that producers would want to make the most of their data to manage productivity and efficiency in both good and bad times. Firms like FarmLink come to mind.

From a research perspective, I would have some additional questions:
  1. Is there a causal relationship between producers' leveraging of IoT and big data analytics applications and farm output/performance/productivity?
  2. How do we quantify the outcome? Is it some measure of efficiency or some financial ratio?
  3. If we find improvements in this measure, is it simply a matter of selection? Are great producers likely to be productive anyway, with or without the technology? (See the sketch below.)
  4. Among the best producers, is there still a marginal impact (i.e. treatment effect) for those that adopt a technology/analytics based strategy?
  5. Can we segment producers based on the kinds of data collected by IoT devices on equipment, apps, financial records, GPS, etc. (maybe not much different than the TrueHarvest benchmarking done at FarmLink), and are there differentials in outcomes, farming practices, product use patterns, etc. by segment?
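To illustrate questions 3 and 4, here is a toy simulation in which an unobserved 'ability' variable drives both technology adoption and yields, so a naive comparison of adopters versus non-adopters overstates the treatment effect (everything here is made up):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 10_000

# unobserved producer ability drives both adoption and yield (selection)
ability = rng.normal(size=n)
adopt = (ability + rng.normal(size=n) > 0).astype(float)
yield_bu = 150 + 8 * ability + 3 * adopt + rng.normal(scale=5, size=n)

# naive difference in means: biased upward by selection
naive = yield_bu[adopt == 1].mean() - yield_bu[adopt == 0].mean()

# regression adjusting for ability (feasible only because we simulated it)
X = sm.add_constant(np.column_stack([adopt, ability]))
adjusted = sm.OLS(yield_bu, X).fit().params[1]

print(f"true effect: 3.0, naive: {naive:.1f}, adjusted: {adjusted:.1f}")
```

In practice ability is not observed, which is why the selection question is hard; the sketch just shows the direction of the bias.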

See also:
Big Ag Meets Big Data (Part 1 & Part 2)
Big Data- Causality and Local Expertise are Key in Agronomic Applications
Big Ag and Big Data-Marc Bellemare
Other Big Data and Agriculture related application posts at the Causal Inference and Experimental Design Roundup

Friday, September 25, 2015

EconTalk: Matt Ridley, Martin Weitzman, Climate Change and Fat Tails

In a recent EconTalk podcast with Russ Roberts, Matt Ridley comments on a previous discussion with Martin Weitzman regarding the tail risk associated with climate change:

"the fat tail on the distribution, the relatively significant even if small possibility of a really big warming has got a heck of a lot thinner in recent years. This is partly because there was a howling mistake in the 2007 IPCC Report, the AR4 Report (Fourth Assessment Report: Climate Change, 2007), where a graph was actually distorted. And a brilliant scientist named Nick Lewis pointed this out later. It's one of the great, shocking scandals of this, that a graph--and I'm literally talking about the shape of the tail of the graph--was distorted to make a fatter tail than is necessary. When you correct that, the number gets smaller. When you feed in all these 14 papers that I've been talking about, all the latest observational data, 42 scientists involved in publishing this stuff, most of the mainstream scientists--I'm not talking about skeptics here--when you feed all that in and you get the average probability density functions for climate sensitivity, they turn out to have much thinner tails than was portrayed in the 2007. And that Martin Weitzman is basing his argument on. So the 10% chance of 6 degrees of warming in 100 years becomes much less than 1% if you look at these charts now."

Very interesting, because I thought Weitzman's discussion of tail risk was compelling, unlike Nassim Taleb's characterization of tail risk and GMOs. I think a key to policy analysis must revolve around getting the distribution correct, particularly the tails, and then getting the discount rate correct as well. Will there ever truly be a consensus in relation to climate change policy?
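To see why the shape of the tail matters so much, here is a toy comparison. The distributions and parameters below are purely illustrative, chosen only to roughly reproduce the '10% versus less than 1%' contrast in the quote; they are not from the IPCC or the papers Ridley cites:

```python
from scipy import stats

# two hypothetical climate sensitivity distributions (degrees C of warming),
# with similar medians but different tail thickness
fat = stats.lognorm(s=0.65, scale=2.6)   # fatter right tail
thin = stats.lognorm(s=0.35, scale=2.6)  # thinner right tail

for name, dist in [("fat tail", fat), ("thin tail", thin)]:
    print(f"{name}: median = {dist.median():.1f}C, "
          f"P(warming > 6C) = {dist.sf(6):.1%}")
```

Same median warming, but the probability of the catastrophic outcome differs by an order of magnitude, which is why Weitzman's argument is so sensitive to the tail.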


EconTalk: Matt Ridley on Climate Change Consensus

In a fairly recent EconTalk podcast with Russ Roberts, Matt Ridley discusses the consensus about climate change:

"if it's true that 97% of scientists are all of a particular view about climate, then let's go and ask what that view is. And if you go and look at the origin of that figure, it was that a certain poll--of 79 scientists, by the way, an extraordinarily small sample--said that, 97% of them agreed that human beings had influenced climate and that carbon dioxide was greenhouse gas....it's not referring to a consensus about dangerous climate change. It's referring to a consensus about humans' ability to affect the climate."

This is similar to what I wrote back in 2008 after actually reading the IPCC 4th Assessment Report. More recently, I have commented on how difficult it may be to solve the knowledge problem and actually attempt to price carbon (for which there is no consensus). Given this consensus view, from a policy perspective the science just might not support doing anything drastic to try to stop climate change (i.e. carbon taxes, CAFE standards, other regulations).

So I continue to think that you don't necessarily have to be a climate change skeptic or 'denier' to be a denier on climate policy (or at least to push back a little).

Friday, September 11, 2015

Does California's EPA really have an 'intent' to put Glyphosate on its list of 'known' carcinogens?

There have been some headlines lately about California's EPA expressing an 'intent' to put glyphosate on its list of 'known' carcinogens.

Here is one example: http://www.ewg.org/agmag/2015/09/california-moves-protect-citizens-monsanto-s-gmo-weed-killer

Yes, a subgroup of the WHO did suggest not long ago that glyphosate was a 'probable' carcinogen, but I wonder if hairdressers, or third and swing shift workers, are going to get a warning printed on their payroll slips telling them that, along with Roundup herbicide, their professions are known to the state of California to cause cancer?

Here's more:

"In recent years use of glyphosate has exploded from 10 million pounds in 1993 to 280 million pounds in 2012. More than 90 percent of soybeans grown in the United States are genetically modified to withstand Roundup, which ends up in the beans themselves. More glyphosate is found in genetically modified soybeans than non-GMO varieties....The widespread use of this toxic herbicide in GMO food production is one reason more than 90 percent of Americans want foods containing genetically engineered ingredients to be labeled. Americans should have the same right as consumers in 64 other countries around the world when it comes to knowing what’s in their food."

'Widespread use of this toxic herbicide?' That is a very interesting statement. Sure, toxic might make sense in comparison to a pure source of crystal clear mountain spring water. But we are not going to sustainably feed the world on rainbows, fresh cut flowers, and crystal clear water. 

Of all of the chemicals used in modern agriculture, Roundup is one that should be most applauded by those with environmental and health concerns, not stigmatized. When you consider its relative toxicity compared to a number of chemistries it has replaced, its prominent and complementary role in GMO crops, and the associated drastic reductions in greenhouse gas emissions, increased practice of no-till, and reduced runoff and groundwater pollution (i.e. nitrates in groundwater and algal blooms, among other things), you might consider the Roundup + Roundup Ready technology one of the 'greenest' technologies ever put on the market.

Of course, maybe there is some inherent rent seeking going on behind the scenes: special interests pushing labeling, and others, might see the success of a sustainable technology like this as a huge barrier to their political agenda or business strategy (think Chipotle). The more this can be stigmatized in the media and through political means (like labeling or California's Prop 65 list), the better they sit strategically in advancing their agenda. Of course, it also (at least short term) doesn't hurt the other manufacturers of more toxic chemicals, and might help them get back some market share! I'm sure those happy about the California news would never consider it, but I think a world without Roundup (or glyphosate in general) would be a world with more toxic, chemical-intensive agriculture.

Oh yeah, and Americans deserve the same right as citizens in 64 other countries, and the world for that matter: a food and regulatory system based on sound science and rigorous economic policy analysis.

See also:

Modern Sustainable Agriculture

Public Choice Theory for Agvocates