Daniel Tenreiro writes for National Review Online about a key problem plaguing scientific research.
Despite Silicon Valley’s public-relations efforts, which tout the transformative potential of new software, more and more thinkers argue that we are experiencing technological stagnation. Citing disappointing productivity numbers and the comparatively low impact of recent information-technology innovations, Peter Thiel, Tyler Cowen, Larry Summers, and others have made this case in recent years, but theories abound as to why it is happening. On one popular view, expressed most comprehensively by Robert Gordon of Northwestern University, Western researchers have picked all the technological “low-hanging fruit,” such as indoor plumbing, automobiles, and air travel. According to this theory, there are diminishing returns to science; once you’ve discovered fire and electricity, all future innovations will pale in comparison.
Economists Jay Bhattacharya and Mikko Packalen push back on this view in a new paper. “New ideas no longer fuel economic growth the way they once did,” they acknowledge, but they argue that the dearth of new ideas results not from the laws of physics but from the incentives scientists face.
Because academic papers are evaluated by how many citations they receive, scientists choose low-risk projects that are certain to get attention rather than novel experiments that may fail. Academics cluster into crowded fields because papers in such fields are guaranteed to be read by a large number of researchers.
This is a relatively new phenomenon, as citation analysis of scientific research was introduced only in the 1950s and did not become common until the 1970s. Eugene Garfield, who developed the idea of using citation counts to evaluate the impact of journals, came to regret their use as a performance indicator for individual researchers.