Thursday, May 2, 2013

Literature reviews: why they rule, how to judge them

Consider the Bayesian definition of evidence: an observation is evidence for a hypothesis if it is more likely to occur when the hypothesis is true than when it is false. The less likely the observation is under a false hypothesis, the stronger the evidence.
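In symbols, this is just the odds form of Bayes' theorem (standard notation, nothing specific to this post): the strength of evidence E for hypothesis H is the likelihood ratio, and it multiplies whatever prior odds you started with.

    \[
      \frac{P(H \mid E)}{P(\neg H \mid E)}
      \;=\;
      \frac{P(H)}{P(\neg H)} \times \frac{P(E \mid H)}{P(E \mid \neg H)}
    \]

A likelihood ratio near 1 means the observation barely discriminates between the hypotheses; a large ratio means strong evidence.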

Many sciences are messy, social sciences especially. Real-world data is often so noisy that economists and sociologists completely ignore the size of the effects found in a study and focus entirely on the sign (negative vs. positive correlations). In terms of Bayesian evidence, noisy data is weaker evidence for any given hypothesis, because there is a higher chance that some other hypothesis produced the same results.
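A quick simulation makes the point (the setup and numbers here are mine, chosen purely for illustration): hold the true effect and sample size fixed, crank up the noise, and the same design produces estimates whose sign is frequently wrong and whose likelihood ratio against the no-effect hypothesis shrinks toward 1.

    # Hypothetical setup: a true effect of +0.2 estimated from samples of
    # 100 observations, at increasing levels of noise.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    true_effect, n = 0.2, 100

    for noise_sd in (0.5, 1.0, 2.0, 4.0):
        se = noise_sd / np.sqrt(n)                 # standard error of the estimate
        estimates = true_effect + rng.normal(0, se, 10_000)
        # How strongly does each estimate favor "effect = 0.2" over "effect = 0"?
        lr = norm.pdf(estimates, true_effect, se) / norm.pdf(estimates, 0.0, se)
        print(f"noise sd {noise_sd}: wrong sign {np.mean(estimates < 0):.1%}, "
              f"median likelihood ratio {np.median(lr):.3g}")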

Worse still, publication bias means many more false results get published than would appear if studies were selected at random. Surprising results get published, and often they do not replicate well. Sometimes experiments (or worse, replications) don't get published because they are boring: "We looked and we found nothing" is expected for most topics, so who cares?
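Here is a hedged sketch of the mechanism, with made-up numbers: suppose the true effect in some field is tiny, studies are noisy, and only results significant at p < 0.05 see print. The published record then shows a much larger average effect than actually exists.

    # Made-up numbers for illustration: tiny true effect, noisy studies,
    # and a journal filter that only accepts p < 0.05.
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    true_effect, se, n_studies = 0.05, 0.10, 5_000

    estimates = true_effect + rng.normal(0, se, n_studies)
    p_values = 2 * norm.sf(np.abs(estimates) / se)     # two-sided p-values
    published = estimates[p_values < 0.05]

    print(f"true effect:            {true_effect}")
    print(f"mean of all estimates:  {estimates.mean():.3f}")
    print(f"mean of published only: {published.mean():.3f}")   # substantially inflated
    print(f"share published:        {len(published) / n_studies:.1%}")

Any replication drawn from the full distribution will, on average, disappoint relative to the published estimates.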

(Side note: In economics, you cannot get published in a good journal by performing a replication. You can replicate and add a few details to see what changes, and publish a replication as a side note to a new experiment, but there is no incentive whatsoever to simply perform a replication. This is one of the largest problems in economics: individual studies have a solid chance of being garbage, and when we discourage replication we encourage the propagation of garbage.)

What's a Bayesian to do? Depending on how likely you think it is that a result arose by chance or from some other hypothesis, you should distrust many individual studies. You should give higher weight to well-designed studies, of course, but even those can be quite messy. You should give lower weight to studies with surprising findings. If an experiment strongly contradicts a reasonable theory, be suspicious. Most of us don't have time to thoroughly examine the stats in many studies, and publication bias means we can't fully trust peer review. Publication bias is likely to be higher in pseudo-journals published by think-tanks.
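In Bayesian terms, "be suspicious of surprising findings" just means letting the prior do its job. A back-of-the-envelope illustration (the specific odds and likelihood ratio are invented for the example):

    # If a claim starts out unlikely and a single noisy study only shifts the
    # odds by a modest likelihood ratio, the posterior stays unimpressive.
    prior_odds = 1 / 20       # you thought the surprising claim had ~5% probability
    likelihood_ratio = 3      # evidential strength of one noisy study, taken at face value
    posterior_odds = prior_odds * likelihood_ratio
    print(f"posterior probability: {posterior_odds / (1 + posterior_odds):.0%}")   # ~13%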

But that doesn't mean we have to be completely agnostic on so many topics. There's a solution for those who care about rigor: literature reviews and metastudies. After many studies on a subject have been published, inevitably some folks will take up the task of summarizing what they say on average, and the good ones will even sort studies by the quality of their methods.
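The statistical core of a metastudy is simple pooling. A minimal sketch, with invented numbers, of the fixed-effect (inverse-variance-weighted) version:

    # Invented point estimates and standard errors from five hypothetical studies.
    import numpy as np

    effects     = np.array([0.30, 0.10, -0.05, 0.22, 0.15])
    std_errors  = np.array([0.15, 0.08,  0.20, 0.10, 0.05])

    weights   = 1.0 / std_errors**2            # more precise studies count for more
    pooled    = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))

    print(f"pooled effect: {pooled:.3f} +/- {1.96 * pooled_se:.3f}")

Real metastudies add refinements (random-effects models, heterogeneity tests, funnel plots for publication bias), but the weighting idea is the same.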

This is why, when I try to make an empirical point, I rely almost exclusively on lit reviews. (Metastudies are not very common in economics because most econ studies are just not statistically comparable.) They are especially useful when discussing political issues, because you can avoid the charge of cherry-picking, but be wary of where the review is published. Many think-tanks will do lit reviews that are themselves completely cherry-picked.

Other tips for judging lit reviews: Bigger is better; the more studies included, the more representative the review is likely to be. Be wary if the summary in the abstract deviates too much from the results reported in the body; some reviewers will paint a rosier or gloomier picture based on their political biases rather than the actual literature. Put more weight on a review if you know the political tendencies of the authors point in the opposite direction of the review's summary. And finally, not all studies are created equal: put more weight on lit reviews that explain why they find some studies stronger than others.

You will have to know some statistics to properly judge some research designs and decide if the authors of a lit review are right about the strengths and weaknesses of studies. I'll explain which methods I find most convincing in a later post. (Spoiler: I hate instrumental variables.)

And finally, don't forget your Bayesian priors. There may be other evidence for or against a hypothesis that the literature on that topic is completely ignoring. I'll discuss this in a later post as well.
