Byron Spice | Friday, September 11, 2020
The science community has responded to the COVID-19 pandemic with such a flurry of research studies that it is hard for anyone to digest them all. This conundrum underscores a long-standing need to make scientific publication more accessible, transparent and accountable, two artificial intelligence experts assert in a data science journal.
The rush to publish results has resulted in missteps, say Ganesh Mani, an investor, technology entrepreneur and adjunct faculty member in Carnegie Mellon University's Institute for Software Research, and Tom Hope, a post-doctoral researcher at the Allen Institute for AI. In an opinion article in today's issue of the journal Patterns, they argue that new policies and technologies are needed to ensure that relevant, reliable information is properly recognized.
Potential solutions include combining human expertise with AI to keep up with a knowledge base that is expanding geometrically. AI might be used to summarize and collect research on a topic, while humans curate the findings, as in the sketch below.
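As a rough illustration of that human-in-the-loop idea, the sketch below pairs an off-the-shelf summarization model with a manual approve/reject step. It assumes the Hugging Face transformers library; the abstracts are placeholders, and the input() prompt stands in for whatever curation interface a real system would use. It is a minimal sketch of the workflow the authors describe, not their implementation.

```python
# Minimal human-in-the-loop sketch: a summarization model condenses each
# paper's abstract, and a human curator approves or rejects each machine
# summary before it enters a shared digest.
# Assumes the Hugging Face `transformers` library; abstracts are placeholders.
from transformers import pipeline

summarizer = pipeline("summarization")  # loads a default summarization model

abstracts = [
    "We evaluate candidate therapies for a novel coronavirus in a "
    "randomized controlled trial across twelve hospitals...",
    "We model the effect of quarantine measures on self-reported "
    "depression using longitudinal survey data...",
]

digest = []
for text in abstracts:
    # AI step: machine-generated summary of one paper
    summary = summarizer(text, max_length=60, min_length=10)[0]["summary_text"]
    # Human step: a curator vets the summary before it is published
    verdict = input(f"Keep this summary? [y/n]\n{summary}\n> ")
    if verdict.strip().lower() == "y":
        digest.append(summary)

print(f"{len(digest)} curated summaries ready for the shared digest.")
```

The division of labor is the point of the design: the model handles the volume that no human team could read, while the human curator supplies the judgment the model lacks.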
"Given the ever-increasing research volume, it will be hard for humans alone to keep pace," they write.
In the case of COVID-19 and other new diseases, "you have a tendency to rush things because the clinicians are asking for guidance in treating their patients," Mani said. Scientists certainly have responded. By mid-August, more than 8,000 preprints of scientific papers related to the novel coronavirus had been posted in online medical, biology and chemistry archives. Even more papers had been posted on such topics as quarantine-induced depression and the impact on climate change from decreased transportation emissions.
At the same time, the average time to peer review and publish new articles has shrunk; in virology, it dropped from 117 days to 60.
This surge of information is what the World Health Organization calls an "infodemic" — an overabundance of information, ranging from accurate to demonstrably false. Not surprisingly, problems such as the hydroxychloroquine controversy have erupted as research has been rushed to publication and subsequently withdrawn.
"We're going to have that same conversation with vaccines," Mani predicted. "We're going to have a lot of debates."
Problems in scientific publication are nothing new, he said. As a grad student 30 years ago, he proposed an electronic archive for scientific literature that would better organize research and make it easier to find relevant information. Many ideas continue to circulate about how to improve scientific review and publication, but COVID-19 has exacerbated the situation.
Some of the speed bumps and guard rails that Mani and Hope propose are new policies. For instance, scientists usually emphasize experiments and therapies that work; highlighting negative results, on the other hand, is important for clinicians and discourages other scientists from going down the same blind alleys. They also explore other ideas such as identifying the best reviewers; sharing review comments; and linking papers to related papers, retraction sites or legal rulings.
The authors also call for greater use of AI to digest and consolidate research. Previous attempts to use AI for this have failed in part because of the often figurative and sometimes ambiguous language humans use, Mani noted. It may be necessary to write two versions of research papers: one crafted to draw the attention of human readers, and another written in a plain, uniform style that machines can reliably parse. The sketch below suggests what such a machine-readable companion might contain.
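The paper does not prescribe a format for the machine-readable version, but as a hedged illustration, such a companion record might encode each finding as structured fields rather than narrative prose. Every field name and value below is hypothetical, invented for this sketch rather than drawn from any published standard.

```python
# A sketch of a machine-oriented companion to a paper: the same findings
# expressed as structured fields instead of narrative prose.
# All field names and values are hypothetical, not a published standard.
import json
from dataclasses import dataclass, asdict, field

@dataclass
class MachineReadableFinding:
    claim: str                      # one unambiguous sentence per result
    evidence_type: str              # e.g. "randomized_trial", "observational"
    outcome: str                    # "positive", "negative", or "null"
    sample_size: int
    retracted: bool = False         # flips to True if the paper is withdrawn
    related_papers: list = field(default_factory=list)  # DOIs of linked work

record = MachineReadableFinding(
    claim="Drug X did not reduce 28-day mortality versus standard care.",
    evidence_type="randomized_trial",
    outcome="negative",  # negative results stated as plainly as positive ones
    sample_size=4321,
    related_papers=["10.1000/example.doi"],
)

# Serialize for indexing, deduplication and automated review tooling.
print(json.dumps(asdict(record), indent=2))
```

A structured record like this would also make some of the policy ideas mentioned above mechanical: negative results become a first-class field rather than a buried caveat, and retractions or links to related papers can be propagated automatically.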
Mani said he and Hope have no illusions that their paper will settle the debate about improving scientific literature, but they hope it will spur changes in time for the next global crisis.
"Putting such infrastructure in place will help society with the next strategic surprise or grand challenge, which is likely to be equally, if not more, knowledge intensive," they concluded.
Byron Spice | 412-268-9068 | bspice@cs.cmu.edu