Goodhart's law, named after economist Charles Goodhart, states: “When a measure becomes a target, it ceases to be a good measure.” Similarly, Campbell's law holds: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.”
In this sense, two recent articles in The Economist (September 24th and November 26th, 2016) call attention to an “incentive malus” that may affect the quality of scientific publishing.
The Economist asks why research papers have so many authors and why bad science persists: scientific publications are getting more and more names attached to them, but does this have an impact on quality?
The first of these articles (http://www.economist.com/news/science-and-technology/21707513-poor-scientific-methods-may-be-hereditary-incentive-malus) reports on a recent study by two researchers, Paul Smaldino of the University of California, Merced, and Richard McElreath of the Max Planck Institute for Evolutionary Anthropology in Leipzig, showing that published studies in psychology, neuroscience and medicine often replicate results that are already published instead of pointing to new ones.
The authors focused in particular on incentives within science, such as the prestige or funding that result from publications, which might lead even honest researchers to produce poor work unintentionally. They found that labs which expended the least effort to eliminate junk science prospered and spread their methods throughout the virtual scientific community. In their model, a successful replication would boost the reputation of the lab that published the original result, while a failure to replicate would carry a penalty. Ultimately, therefore, the way to end the proliferation of bad science is not to nag people to behave better, or even to encourage replication, but for universities and funding agencies to stop rewarding researchers who publish copiously over those who publish fewer, but perhaps higher-quality, papers. This, Dr Smaldino concedes, is easier said than done. Yet his model amply demonstrates the consequences for science of not doing so.
The second article, All Together Now (http://www.economist.com/news/science-and-technology/21710792-scientific-publications-are-getting-more-and-more-names-attached-them-why), points to another misaligned incentive: “One thing that determines how quickly a researcher climbs the academic ladder is his publication record. The quality of this clearly matters—but so does its quantity. A long list of papers attached to a job application tends to impress appointment committees, and the resulting pressure to churn out a steady stream of articles in peer-reviewed journals often leads to the splitting of results from a single study into several “minimum publishable units”, to the unnecessary duplication of studies and to the favoring of work that is scientifically trivial but easy to publish.”
“There is another way,” the article continues, “to pad publication lists: co-authoring. Say you write one paper a year. If you team up with a colleague doing similar work and write two half-papers instead, both parties end up with their names on twice as many papers, but with no increase in workload. Find a third researcher to join in and you can get your name on three papers a year. And so on.”
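The arithmetic behind this padding can be made explicit. The following sketch (a hypothetical illustration, not The Economist's own analysis) shows how pooling one paper's worth of work among k collaborators multiplies each CV by k while leaving the actual workload per researcher unchanged:

```python
# Hypothetical sketch of the co-authorship arithmetic described above:
# each of k collaborators does one paper's worth of work per year, but
# the group splits it into k jointly authored papers.

def cv_papers(collaborators: int, work_per_author: float = 1.0) -> float:
    # Total papers produced equals total work done by the group...
    total_papers = collaborators * work_per_author
    # ...and every collaborator's name appears on all of them.
    return total_papers

def work_per_paper(collaborators: int, work_per_author: float = 1.0) -> float:
    # Each author's real contribution per listed paper shrinks as 1/k.
    return work_per_author / collaborators

for k in (1, 2, 3):
    print(f"{k} collaborator(s): {cv_papers(k):.0f} papers per CV, "
          f"{work_per_paper(k):.2f} papers' worth of work each per paper")
```

With three collaborators, every CV lists three papers a year even though each researcher still does only one paper's worth of work, which is exactly the "And so on" the article gestures at.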
To investigate the matter, The Economist reviewed data on more than 34m research papers published between 1996 and 2015 in peer-reviewed journals and conference proceedings. These are some of the findings:
- Over the period in question, the average number of authors per paper grew from 3.2 to 4.4. At the same time, the number of papers divided by the number of authors who published in a given year (essentially, the average author’s overall paper-writing contribution) fell from 0.64 to 0.51.
- One particular trend behind these numbers is the rise of “guest authorship”, in which a luminary, such as the director of a research centre, is added as an author simply as a nod to his position, or in the hope that this signals a study of high quality. That can lead to some researchers becoming improbably prolific. For example, between 2013 and 2015 the 100 most published authors in physics and astronomy from American research centres had an average of 311 papers each to their names. The corresponding figure for medicine, though lower, was still 180.
- Indeed, it is so easy to add a co-author that some have honoured their pets. Sir Andre Geim, who won the 2010 Nobel Prize in physics, listed H.A.M.S. ter Tisha as co-author of a paper he published in 2001 in Physica B, a peer-reviewed journal!
- Another trend is that the meaning of authorship in massive science projects is getting fuzzier. (…) A genomics paper on Drosophila, a much-studied fruitfly, also published in 2015, has 1,014 authors, most of them students who helped with various coding tasks. Such studies are paragons of scientific collaboration and the exact opposite of creating minimum publishable units. But they list as authors people who have contributed only marginally to the success of the project—roles that, in the past, were simply acknowledged in a thanks-to-all sentence but are now the bricks from which careers may be built.
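The “average contribution” metric in the first bullet above is simply the number of papers published in a year divided by the number of distinct authors publishing that year. A minimal sketch, using placeholder paper and author counts chosen only to reproduce the published ratios (The Economist does not report the underlying totals):

```python
# Per-year "average contribution" metric from the bullet list above:
# papers published in a year / distinct authors publishing that year.
def avg_contribution(papers: int, distinct_authors: int) -> float:
    return papers / distinct_authors

# Placeholder counts that reproduce the ratios The Economist reports
# (0.64 in 1996, falling to 0.51 in 2015); the raw totals are assumed.
print(round(avg_contribution(640, 1000), 2))  # 1996-style ratio
print(round(avg_contribution(510, 1000), 2))  # 2015-style ratio
```

Note that the metric falls both when co-authoring spreads (more names per paper) and when occasional authors, such as the coding students on the Drosophila paper, enter the denominator, so the two trends in the list push it in the same direction.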