
Is There a Solution to Low Quality Research in Science?

Molecular biologist Henry Miller and statistician Stanley Young explain why statistical techniques like meta-analysis won’t solve the basic problem

Recently, molecular biologist Henry Miller and statistician Stanley Young published a thoughtful essay at the Genetic Literacy Project, asking why trust in scientific research is “at an all time low.” Instead of merely blaming the public, they honestly assess the reasons for the loss of trust.

They’ve now followed it up with a second essay, this time at the American Council on Science and Health (ACSH), “The Validity Of Much Of Published Scientific Research Is Questionable (Part 2),” delving further into the issue.

Can meta-analysis reduce the effect of poor-quality research?


The problem Miller and Young set out at ACSH is that there is no simple, one-size-fits-all solution to questionable research findings. For example, a common approach to addressing conflicting research findings is a meta-analysis, a statistical analysis of a large number of studies. It often works, they say — especially when further study consistently supports the meta-analysis results.

So we can find out, for example, whether the consensus opinion is that organic foods are better (controversially, apparently not). Or whether taking zinc helps a cold (probably). Or whether stress is a factor in high blood pressure (yes).

These consensus opinions are not The Truth, of course. Rather, their broad acceptance among researchers provides a social or even legal justification for relying on them when making decisions. But now, here’s what can go wrong, according to Miller and Young:

How are meta-analyses executed? A computer search finds published articles that address a particular question — say, whether taking large amounts of vitamin C prevents colds. From those studies considered methodologically sound, the data are consolidated and carried over to the meta-analysis. (Usually, the person(s) performing the meta-analysis does not have access to the raw data used in the individual studies, so summary statistics from each individual study are carried over to the meta-analysis.) If the weight of evidence, based on a very stylized analysis, favors the claim, it is determined to be accurate and often canonized.

The problem is that there may not be safety in numbers because many individual papers included in the analysis could very well be exaggerated or wrong, the result of publication bias and p-hacking (the inappropriate manipulation of data analysis to enable a favored result to be presented as statistically significant).

“The Validity Of Much Of Published Scientific Research Is Questionable (Part 2),” American Council on Science and Health, February 27, 2024

In short, if a meta-analysis of 100 papers includes 25 papers that used questionable techniques like p-hacking, the fact that the researchers doing the meta-analysis are honest does not mean that the results are an honest survey of the scene. Serious reform won’t come from sophisticated statistical techniques; it requires tackling the honesty problems at the root.
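To see why an honest pooling procedure cannot undo bias baked into the studies it is fed, here is a minimal illustrative sketch — not from Miller and Young’s article — of a fixed-effect, inverse-variance-weighted meta-analysis over per-study summary statistics. The true effect, standard error, and the 25-of-100 split between honest and exaggerated studies are invented for illustration.

```python
import random

random.seed(1)

# Hypothetical setup: the true effect is zero, but 25 of 100 studies
# report inflated estimates, as p-hacking or publication bias might produce.
TRUE_EFFECT = 0.0
N_HONEST, N_BIASED = 75, 25
SE = 0.10       # assumed common standard error of each study's summary estimate
BIAS = 0.25     # assumed exaggeration added to the biased studies

honest = [random.gauss(TRUE_EFFECT, SE) for _ in range(N_HONEST)]
biased = [random.gauss(TRUE_EFFECT + BIAS, SE) for _ in range(N_BIASED)]

def pooled_estimate(effects, se):
    """Fixed-effect meta-analysis: inverse-variance-weighted mean of study effects."""
    weights = [1.0 / se**2] * len(effects)
    return sum(w * e for w, e in zip(weights, effects)) / sum(weights)

print("Honest studies only:", round(pooled_estimate(honest, SE), 3))
print("All 100 studies:    ", round(pooled_estimate(honest + biased, SE), 3))
# With equal weights, the pooled effect shifts toward the biased studies
# (here by roughly a quarter of the exaggeration), even though the
# meta-analysts themselves did everything by the book.
```

The sketch simply makes the arithmetic of the point visible: the meta-analysis faithfully averages whatever summary statistics it is given, so bias in the inputs passes straight through to the pooled result.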

Then there are the paper mills

In closing, Miller and Young point to a news article in Nature from last November, which noted, “An unpublished analysis suggests that there are hundreds of thousands of bogus ‘paper-mill’ articles lurking in the literature.”

An unpublished analysis shared with Nature suggests that over the past two decades, more than 400,000 research articles have been published that show strong textual similarities to known studies produced by paper mills. Around 70,000 of these were published last year alone (see ‘The paper-mill problem’). The analysis estimates that 1.5–2% of all scientific papers published in 2022 closely resemble paper-mill works. Among biology and medicine papers, the rate rises to 3%.

Richard van Noorden, “How big is science’s fake-paper problem?,” Nature, November 6, 2023

Paper mills are a different type of problem from fiddling with statistics to get a desired result (p-hacking, for example); their output is just plain bogus. The bogus output may be a small percentage of the published total, but if it happens to be concentrated in some fields and goes undetected, it could also skew the results of, say, a meta-analysis.

It doesn’t sound as though any solution that sidesteps the basic honesty problem is likely to work. Meanwhile, the public should not be blamed for its doubts.

You may also wish to read: Scientists attempt an honest look at why we trust science less now. Contemplating the depressing results of a recent Pew survey, a molecular biologist and a statistician take aim at growing corruption in science. The article, unfortunately, doesn’t address the way the panic around COVID leaned heavily on claims about “the science” — which likely discredited science.


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of the forthcoming The Immortal Mind: A Neurosurgeon’s Case for the Existence of the Soul (Worthy, 2025). She received her degree in honors English language and literature.