As one of the top research universities in the world, Stanford houses a program that takes a meta approach to research itself — the Meta-Research Innovation Center at Stanford (METRICS).
METRICS was launched in 2014 and conducts, essentially, research on research. The interdisciplinary program studies the logistics of conducting research across all fields, examining specifically the methods, reporting, evaluation, reproducibility and incentives of research.
To study incentives, METRICS develops ways in which researchers can be recognized for their work, with less emphasis on the number of papers they have published and more on the processes they employ. Steven Goodman, co-founder of METRICS and professor of epidemiology and population health and of medicine, said the School of Medicine will be releasing a framework to recognize research progress on curricula vitae (CVs) even if a paper has yet to be completed.
“We’ve developed a model for a new CV that the School of Medicine will be putting online in the next few months … if (researchers) take these extra steps to make their work transparent and particularly rigorous, they can show that on their CV … so that’s just one of many incentives,” Goodman said.
In scientific research, reproducibility is one way to establish the reliability of a paper’s results. It is therefore one of the main pillars that METRICS studies, according to the program’s website.
Postdoctoral scholar Amanda Kvarven said she joined METRICS after she tried, while writing her undergraduate thesis, to replicate the findings of “a well-published and highly-cited study” from its dataset and was surprised to find that she could not.
“No matter what I did, I could not get the same results as the original paper,” Kvarven said. “I had always believed that science was reliable and that all scientific results could be trusted, so for me, this replication failure came as quite a shock.”
According to Goodman, the center focuses on studying methods to ensure and enforce honesty during the research process. “METRICS is very much about studying … this general understanding of what makes public research more or less reliable” and then actually making those changes, he said.
John Ioannidis, a METRICS co-director and professor of medicine and of epidemiology and population health, said various conflicts of interest and other biases plague the publishing process. According to him, this is especially true of the peer review process, in which reviewers and their comments play a substantial role in whether a paper will be published.
With this authority, there is leeway for bias in the process, with certain scientific perspectives being promoted over others, Ioannidis said.
“Peer review is replete with biases: There’s financial conflicts, sometimes there is allegiance bias, there [are] just preferences that someone may have based on what they have published and what they have found,” Ioannidis said.
He added there are “some very egregious practices that are infiltrating the system,” which can include fake reviews.
“We have seen that one could create fake reviewers or could create reviewers that are actually the same people as the authors or some of their affiliates,” Ioannidis said. “The papers are then funneled to them and they get accepted.”
Ioannidis’s research has also found that many research results are exaggerated. To him, a major issue is that many journals push to publish more and more articles.
“They want to publish more papers and ideally, they want to publish papers with extreme results, with extravagant results,” Ioannidis said. “That filtering of the literature creates a literature that is exaggerated and probably not accurate.”
Conventional quantitative measures such as the h-index (the largest number h for which a researcher has h papers each cited at least h times) and the journal impact factor define the credibility of many journals and researchers. METRICS, however, operates on the idea that science should be measured more qualitatively.
Science is far more than quantitative measures, Ioannidis said. “You change the world … For people in medicine, do you save lives or improve quality of life? Do you produce technology that represents progress?”
According to Ioannidis, recent high-profile cases of research fraud — including in labs previously led by former University President Marc Tessier-Lavigne — highlight the importance of research authenticity and conduct. But he said that cases like Tessier-Lavigne’s are “just the tip of the iceberg.”
“We have about 14 million scientists. We have 200 million scientific papers and scholarly documents that have been published. We have more than six million published every year,” Ioannidis said. “Science is a community effort. It’s a cumulative effort.”
He said anything that can be done “to improve the efficiency of science, its transparency, the rigor, the reproducibility, the accuracy, the credibility [and] the trustworthiness, and diminish errors, biases and of course fraud, is a major gain. It’s something that [has] multiplicative impact.”
Mario Malicki, an epidemiology and population health researcher, said it was important for standardized research protocols and methods to become part of better practices in the scientific community.
“High-profile cases of research that slipped through the quality control mechanisms in science, and research that had detrimental effects on lives, happen,” Malicki said. “It is normal that both the general public and scientific community call for changes so that such things, if possible, never happen again.”
For the future of research, METRICS leaders hope meta-research will explore ways to align science with its purported core — being credible and objective.
“Being able to be transparent … is very essential from the very beginning,” Ioannidis said. “Now we have reached a situation where we don’t have just a handful of scientists, we have millions of scientists, and therefore that self-reflective methods oriented approach becomes even more important.”