John Staddon devotes a Martin Center column to the way scientific findings ought to be judged.

The standard system for admitting a manuscript to a scientific journal (or awarding money to a supplicant researcher) is for the editor to submit the work to a small group of experts. Most journals keep a pool of scientists they consult. For each submission, the editor picks a couple of "relevant" experts and sends each a copy of the manuscript. Then, usually after considerable delay (these reviewers are typically anonymous, unpaid, and have other commitments), they send in comments and criticisms and recommend acceptance, acceptance with changes, or rejection.

This is the famous peer review "gold standard" followed by most reputable scientific journals. The system evolved in a day when science was a vocation for a small number of men who were either of independent means, like Charles Darwin, or employed in ways that did not seriously compete with their scientific interests. …

… All this has changed in the 21st century. The number of scientists has vastly increased, along with their dependence on external sources of funds. Modern scientists need to publish, and the number of potential publications is correspondingly large. On the other hand, since the advent of the internet, the cost of publication has become negligible. So what is the problem? Why is so much review necessary?

The problem is that scientific publication is what economists call a positional good. Nature and Science are top journals because the papers they publish are, for the most part, very good. The papers are good because these journals are perceived as the best and can therefore attract the best submissions, a positive-feedback loop. And "best" is quantified by something called the impact factor, which rates a journal according to the number of citations (mentions in other published articles) that its articles receive.
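The citation-counting behind the impact factor can be made concrete. The standard two-year version divides the citations a journal receives in a given year (to items it published in the previous two years) by the number of citable items it published in those two years. A minimal sketch, with hypothetical figures (the journal name and numbers below are invented for illustration):

```python
def impact_factor(citations_to_prior_two_years: int,
                  items_prior_two_years: int) -> float:
    """Two-year impact factor: citations in year Y to articles
    published in years Y-1 and Y-2, per article published."""
    return citations_to_prior_two_years / items_prior_two_years

# Hypothetical journal: 210 articles over the prior two years
# drew 1,680 citations this year.
print(round(impact_factor(1680, 210), 1))  # 8.0
```

The metric rewards journals whose recent articles are cited often, which is exactly what feeds the positional loop described above: a high number attracts strong submissions, which sustain the high number.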