Is science a popularity contest?
By Gustavo Starvaggi França
During a 1964 lecture at Cornell University, Richard Feynman, Nobel laureate in physics and one of the greatest minds of our time, talked about the core principles of the scientific method. In his uniquely playful style, he described what makes a new scientific theory valid. According to Feynman, a new theory first emerges from a guess. Predictions based on that guess are then compared with nature, or tested in an experiment. “If it disagrees with the experiment,” Feynman argued, “it’s wrong. In that simple statement is the key to science. It doesn’t make any difference how beautiful your guess is, it doesn’t matter how smart you are, who made the guess, or what his name is….”
Feynman’s point was that the central pillars of science are the ideas that persist over time, despite attempts to falsify them. In principle, scientists should decide to accept or reject a new theory based on its ability to explain natural phenomena, regardless of who proposed it.
The progress of science, however, does not happen only because of discoveries per se. A successful scientific endeavor depends on the effective communication of a new idea’s meaning and implications, usually through publishing in specialized journals and sharing the results with the general public.
Social media activity seems to undermine Feynman’s claim. In a recent blog post, Lior Pachter, professor of computational biology at Caltech, pointed out that behavior on social media, like downloads and commentary about preprints, can boost the popularity of research papers. This pursuit of popularity may cause distortions in academic publishing, if journals and scientists lean on popularity criteria — and not strictly on research quality — to decide how significant a discovery really is.
Scientists have made extensive use of social media to promote the latest research advances. In 2015, 34.7% of scientists in the American Association for the Advancement of Science reported using social media to discuss science and follow new findings; the number today is probably much higher. Social media brings people together, overcomes geographical barriers and accelerates information sharing. But it is also fertile ground for self-promotion.
How does activity on social media affect publishing and scientists’ careers? First we have to recognize that the metrics used to quantify the impact of a researcher’s contribution are interpreted by many scientists and institutions as evidence of success. The “h-index,” for example, quantifies an author’s output from articles and citations: it is the largest number h such that the author has published h articles cited at least h times each. It is supposed to measure both the quality and quantity of a scientist’s work. Even though the h-index calculation has inconsistencies — it disregards that citation rates vary widely between fields, and it does not account for an author’s position in the author list — achieving a high h-index may represent a significant career accomplishment.
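For readers curious about the arithmetic, the h-index can be computed directly from a list of per-paper citation counts. Here is a minimal sketch in Python (the citation counts are invented purely for illustration):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    # Sort citation counts from highest to lowest
    cites = sorted(citations, reverse=True)
    h = 0
    # Walk down the ranked list: the i-th paper counts toward h
    # only if it has at least i citations
    for i, c in enumerate(cites, start=1):
        if c >= i:
            h = i
        else:
            break
    return h

# A hypothetical author with five papers cited 10, 8, 5, 4 and 3 times
# has an h-index of 4: four papers with at least 4 citations each.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

Note how a single highly cited paper moves the index very little — an author with one paper cited 1,000 times still has an h-index of 1 — which is part of why the metric is read as blending quantity with quality.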
Journal articles are now also tagged with scores. The “attention score,” provided by Altmetric, ranks an article’s popularity on Twitter, Facebook, blog posts and other sources. The score is displayed with articles in many journals, as well as on bioRxiv, an open-access preprint repository for the biological sciences that has seen an explosion in submissions from researchers who want to spread their work before it gets stuck in the painfully slow publishing process.
In his post, Pachter suggested that social media behavior can artificially inflate the attention score. He referred to a bioRxiv preprint entitled “Tracking the popularity and outcomes of all bioRxiv preprints” by Richard Abdill and Ran Blekhman from the University of Minnesota. They analyzed the ultimate fate of preprints once they were published in peer-reviewed journals. The authors found that preprints with more downloads are more likely to be published in journals with higher impact factors (the average number of citations to recently published articles in that journal).
As Pachter noted, the question is whether preprints with many downloads end up in prestigious journals because they reflect legitimately impactful research, or because they attract the attention of journal editors and reviewers, improving their chances of acceptance on the strength of perceived citation potential. (One of the main goals of a scientific journal, of course, is to increase its impact factor.)
The latter scenario would reveal a very problematic bias in publishing. First, there are ways to artificially inflate popularity metrics. People can deliberately download articles and preprints multiple times or even use bots (“robots” that automatically interact over the Internet) to maximize their scores, as exemplified in Pachter’s post. Second, highly influential people on social media are network hubs, and the content they share usually gathers a lot of attention. Third, publishing would likely favor whatever is “trendy,” which does not necessarily imply groundbreaking research.
There is no doubt that social media is a valuable resource to speed up science communication. It will become a concern if self-promoting behavior harms the capacity of scientists and journals to recognize big but non-trendy discoveries. Or if publishing turns into a popularity contest where research from the loudest social media voices gets over-hyped.
Science should be driven by curiosity. I still believe that, in the long term, the relevance of our work will speak for itself. But in a world that promptly rewards top-ranked careerists, researchers have been forced to become marketers, too. Playing the numbers game can betray the very purpose of science, which is to understand how Nature works. Sometimes the path to top rankings diverges from the path of doing something genuinely interesting to us. We all know which path Feynman would have taken, given the choice. As he might put it: Nature doesn’t care what your Twitter handle is.
Gustavo Starvaggi França is a postdoctoral researcher at NYU Langone Health. He investigates mechanisms of drug resistance in cancer.