Reading Research

We’re in an age of information overload, and it’s everywhere. While the free and open exchange of information is a wonderful thing, it also demands a set of skills for evaluating trustworthiness.

First, here’s how to understand what’s behind the headline. When the results of scientific studies are discussed in the popular press, they’re usually presented out of context and often overblown. If it sounds too good to be true, it’s time to start asking questions.


  • Start with: who were the participants? Sometimes what’s reported is actually an animal study, which is useful for advancing the science but definitely not ready for prime time. If it was on humans, which ones? Surprisingly often, it will be something like seven male German college students. How similar are you to the study sample? Then consider how many. The larger and more diverse the sample, the more compelling the result. Are there data from 1,000 people of different ages, races, and genders? Now I’m pretty interested.

  • Then think about what the study measured: what the outcomes were, and whether they matter. Is the outcome relevant to health, performance, well-being, or something else important, or is it something really only meaningful in a lab?

  • Then think about whether the reported outcomes are applicable to your life. Study outcomes are generally reported in terms of statistical significance, which is not the same as clinical significance. A 0.1-point change on a 10-point scale could be statistically significant, but if your pain went from 8.9 to 8.8, who cares? (There’s a quick simulated example of this right after the list.)

  • I also suggest looking at how the study was funded. While industry-funded studies aren’t necessarily wrong, take the findings for what they are and apply common sense and caution. Studies funded by the NIH or other government or university bodies are generally less likely to suffer from conflicts of interest. But don’t forget that the published literature as a whole is biased towards positive findings: studies that didn’t find an effect are much less likely to be published (this is called publication bias, or the file-drawer problem).
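
To make the statistical-versus-clinical-significance point concrete, here’s a minimal sketch in Python. It’s my own illustration, not from any particular study: the group means (8.9 vs. 8.8 on a 10-point pain scale), the spread, and the sample size are all made-up numbers. With enough participants, that tiny 0.1-point difference comes out “statistically significant” even though no patient would feel it.

```python
# Minimal sketch: statistical vs. clinical significance.
# All numbers below are invented for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n = 10_000                                        # participants per group (a big sample)
control = rng.normal(loc=8.9, scale=1.2, size=n)  # mean pain 8.9 on a 10-point scale
treated = rng.normal(loc=8.8, scale=1.2, size=n)  # mean pain 8.8 -- a 0.1-point "improvement"

t_stat, p_value = stats.ttest_ind(treated, control)

print(f"difference in means: {treated.mean() - control.mean():+.2f} points")
print(f"p-value: {p_value:.2e}")  # at this sample size, typically far below 0.05
```

The p-value only tells you the difference is unlikely to be pure chance; it says nothing about whether a 0.1-point change matters to anyone. That gap is exactly the difference between statistical and clinical significance.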


Another thing to chew on when you’re looking at research: it’s important to distinguish between absence of evidence and evidence of absence. In the former, the topic hasn’t been studied (or hasn’t been studied much); we can’t draw conclusions because we simply don’t have the data. In the latter, studies have been done and found no effect or relationship. People make all kinds of errors with this distinction, and sometimes folks use it to try to obscure the truth, too. So read carefully, and ask yourself: absence of evidence, or evidence of absence?
