Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.
John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, and selective reporting and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.
So very, very true. (Although I would argue with "less than 50%", because if that were the case then we would be fine: just take the opposite of whatever the report found to be true. No, I think what he means is that there are errors in more than 50% of the papers, errors which may *or may not* lead to a wrong conclusion.)
Traditionally a study is said to be "statistically significant" if the odds are only 1 in 20 that the result could be pure chance. But in a complicated field where there are many potential hypotheses to sift through - such as whether a particular gene influences a particular disease - it is easy to reach false conclusions using this standard. If you test 20 false hypotheses, one of them is likely to show up as true, on average.
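The multiple-testing trap the article describes is easy to see in a quick simulation (my own illustrative sketch, not from the article): test 20 hypotheses that are all actually false at the conventional 0.05 threshold, and on average about one of them comes up "significant" by pure chance.

```python
import random

random.seed(0)

ALPHA = 0.05
N_HYPOTHESES = 20
N_TRIALS = 10_000

total_false_positives = 0
for _ in range(N_TRIALS):
    # Under a true null hypothesis, a p-value is uniform on [0, 1],
    # so each test crosses the 0.05 threshold with probability 0.05.
    p_values = [random.random() for _ in range(N_HYPOTHESES)]
    total_false_positives += sum(p < ALPHA for p in p_values)

avg = total_false_positives / N_TRIALS
print(f"Average false positives per batch of 20 tests: {avg:.2f}")
# Expected value is 20 * 0.05 = 1.0: roughly one spurious
# "finding" per batch, exactly as the article says.
```

The 1-in-20 standard controls the error rate per test, not per batch of tests, which is why sifting many hypotheses at once produces false conclusions.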
The odds get even worse for studies that are too small and for studies that find only small effects (for example, a drug that works for only 10% of patients).
ummm... no... properly conducted small experiments have the same Type I error rate as properly conducted large experiments... Now, the power of a large test is better, but that is a different issue...
So what this article is saying is that people sometimes report wrong conclusions based on a poor understanding of statistics, while this article itself has no clue. So was this article just an example of that problem?