2015 May

The Trouble with Scientists

“I just received the following note from one of our Inner Circle members.  Below the note is my response.”


Scientists suffer from cognitive biases just like everybody else. The scientific method is supposed to catch and correct such biases, but it is failing to do so. Why? Because to advance as a professional scientist you must publish; learned journals want to publish positive results, not negative ones, so confirmation bias is reinforced; and peer review slows the rate at which wrong claims can be contradicted. (3,160 words)



Mike’s response

Good article


One reply on “The Trouble with Scientists”

This is great. Thanks for sharing, Brian. Growing awareness of these problems, both in how scientific knowledge is disseminated and in what success as a researcher requires, is leading to a growing pool of tech-enabled innovators in academic publishing. Websites like this one are part of that, and these issues are being grappled with by the powers that be as well.

Where I work, we’re trying a few things. Some (manuscript preparation) have gotten a lot of traction; others (independent peer review) more attention than traction. Our sense is that the biggest shifts will be generational and will come from a groundswell among those under-served by the current systems. In general, a movement away from a system centered on large publishers toward one that is researcher- or lab-centered, paired with the shrinking cost of sharing information, should lead to innovations that ameliorate some of the problems that come with human bias. Another constraint is funding sources and the requirements that come with accepting money from a private or public funding body.

From the piece that Brian shared:

Oransky believes that, while all of the incentives in science reinforce confirmation biases, the exigencies of publication are among the most problematic. “To get tenure, grants, and recognition, scientists need to publish frequently in major journals,” he says. “That encourages positive and ‘breakthrough’ findings, since the latter are what earn citations and impact factor. So it’s not terribly surprising that scientists fool themselves into seeing perfect groundbreaking results among their experimental findings.”
Nosek agrees, saying one of the strongest distorting influences is the reward systems that confer kudos, tenure, and funding. “To advance my career I need to get published as frequently as possible in the highest-profile publications possible. That means I must produce articles that are more likely to get published.” These, he says, are ones that report positive results (“I have discovered …”, not “I have disproved …”), original results (never “We confirm previous findings that …”), and clean results (“We show that …”, not “It is not clear how to interpret these results”). But “most of what happens in the lab doesn’t look like that”, says Nosek—instead, it’s mush. “How do I get from mush to beautiful results?” he asks. “I could be patient, or get lucky—or I could take the easiest way, making often unconscious decisions about which data I select and how I analyze them, so that a clean story emerges. But in that case, I am sure to be biased in my reasoning.”

Alicia Parr
