
Why Do Universities Ignore Good Ideas?

Funding agencies see if the researcher is tenured or has already received funding. It's a vicious cycle.

Here is a recent assessment of 2023 Nobel Prize winner Katalin Karikó:

“Eight current and former colleagues of Karikó told The Daily Pennsylvanian that — over the course of three decades — the university repeatedly shunned Karikó and her research, despite its groundbreaking potential.”

‘Not of faculty quality’: How Penn mistreated Nobel Prize-winning researcher Katalin Karikó | The Daily Pennsylvanian (thedp.com)

Another article claims that this occurred because she could not get the financial support to continue her research.

Why couldn’t she get financial support? “You’re more likely to get grants if you’re a tenured faculty member, but you’re more likely to get promoted to tenure if you get grants,” said Eric Feigl-Ding, an epidemiologist at the New England Complex Systems Institute and a former faculty member and researcher at Harvard Medical School. “There is a vicious cycle,” he added.

Interesting. So, the idea doesn’t matter. What matters to funding agencies is that you have previously obtained funding or are a tenured professor. Really? Are funding agencies this narrow-minded?

Does the Idea Even Matter?

Feigl-Ding also said, “Universities also tend to look at how much a researcher publishes, or how widely covered by the media their work is, as opposed to how innovative the research is.” But why couldn’t Karikó get published?

Science magazine tells the story of her key 2005 paper with Drew Weissman. After Nature rejected it within 24 hours: “It was similarly rejected by Science and by Cell, and the word incremental kept cropping up in the editorial staff comments.”

Incremental? More than two million papers are published each year, and this research, for which Karikó and Weissman won a Nobel Prize, was deemed incremental? If it had been rejected for flawed methods or for findings too implausible to believe, I think most people could understand the rejection. But incremental?

Obviously, most of the two million papers published each year really are incremental. Yet one of the few papers that we can all agree was not incremental was rejected for being exactly that.

Furthermore, this is happening in a system of science in which even Nature admits that “disruptive science has declined” and that it doesn’t understand why, while few science-based technologies are being successfully commercialized.

What is the obvious strategy for funding agencies if disruptive science has declined and fewer science-based technologies are being successfully commercialized? Do something different. Try a new approach. As the saying often attributed to Albert Einstein goes, “Insanity is doing the same thing over and over and expecting different results.” Another remark attributed to him: “If at first the idea is not absurd, then there is no hope for it.”

But how many funding agencies are funding something different, and how many professors are proposing something different? Unfortunately, our current method of evaluating faculty, departments, and universities, whether it is done by a funding agency or not, suggests that the answer to both questions is “very few.”

An age-old academic aphorism is that “your dean can’t read but he can count.” Some academic research confirms that internal evaluations depend more on quantity than quality.

Quantity Over Quality

A related factor is Goodhart’s Law, named after the British economist Charles Goodhart, which states, “When a measure becomes a target, it ceases to be a good measure.”

Counting the number of publications has led to reproducibility problems and to paper mills, where you can pay someone to write papers for you and pay journals to publish your research. And the obsession of most top journals with quantitative statistics, which rarely leads to Nobel Prize-winning research, has led to so-called “p-hacking”: massaging analyses to generate a p-value just below the typical threshold of 0.05.
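To see why a hard 0.05 cutoff invites gaming, here is a minimal, illustrative simulation in Python of one common form of p-hacking: running many noise-only tests and reporting whichever one happens to cross the threshold. The group sizes and the figure of 20 tests per study are arbitrary assumptions for the sketch, and it assumes numpy and scipy are installed.

```python
# Sketch: if a "study" runs 20 independent tests on pure noise, the chance
# that at least one comes out "significant" is about 1 - 0.95**20, or ~64%.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
trials, tests_per_study = 1_000, 20
studies_with_false_positive = 0

for _ in range(trials):
    for _ in range(tests_per_study):
        # Both groups come from the SAME distribution: there is no real effect.
        a = rng.normal(size=30)
        b = rng.normal(size=30)
        _, p = ttest_ind(a, b)
        if p < 0.05:
            # A "publishable" result manufactured from noise.
            studies_with_false_positive += 1
            break

print(f"Studies reporting a spurious p < 0.05: "
      f"{studies_with_false_positive / trials:.0%}")  # roughly 64%
```

In other words, a researcher who keeps testing until something clears 0.05 will almost always find a “result,” which is exactly what Goodhart’s Law predicts once the p-value becomes the target.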

H-indices take this insanity further. An h-index counts publications that have reached a matching citation threshold: an h-index of 50 means 50 papers, each with at least 50 citations.
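Since the definition is just a counting rule, here is a minimal sketch in Python of how an h-index is computed from a list of per-paper citation counts; the function and the example numbers are illustrative, not taken from any real bibliometric database.

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that h papers each have at least h citations."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= `rank` citations
        else:
            break
    return h

# Five papers with these citation counts give an h-index of 3:
# three papers have at least 3 citations each, but not four with at least 4.
print(h_index([10, 8, 5, 3, 1]))  # -> 3
```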

Back in 2011, someone noticed that Albert Einstein and Richard Feynman had h-indices of 44 and 37 respectively, indices that would prevent them from getting top university jobs today. Apparently their h-indices have since risen, courtesy of embarrassed physicists, but so what? Einstein and Feynman, both Nobel laureates, clearly made a bigger contribution than the 500 computer scientists who had h-indices greater than 40 in 2011 (I couldn’t find comparable data for physicists). By the way, Katalin Karikó currently has an h-index of 63.

Funding agencies seem to emphasize h-indices and publication counts as much as universities’ internal evaluations do. At least the signatories of a pan-European agreement seem to think so, and they want this to change, as the title of this Nature op-ed suggests: Grants and hiring: will impact factors and h-indices be scrapped?

What is the end result of an emphasis on h-indices and publication counts? It pushes scientists to forgo risky projects that might not yield publications and to focus instead on incremental projects where the chance of publication is high.

Nobel laureates say similar things. A biochemist, a molecular biologist, and a physicist claim they could not have gotten funding for their research under today’s emphasis on less risky projects; the physicist claims he could not even get a job today. Another scientist turned policymaker argues that in today’s climate every project must succeed, so people study only marginal, incremental topics where the path forward is clear and a positive result is virtually guaranteed.

Good Ideas Rarely Find Funding

Katalin Karikó’s Nobel Prize didn’t prove that universities don’t fund good ideas. It merely reminded us that they rarely do, and with little attention paid to this problem, it won’t be solved in the near future.

Coming up with a better system should be an area of intense research for social scientists. But they are embedded in the same academic system described above: they must publish their articles, and thus they do incremental projects that are highly quantitative. Few researchers look at the whole system, and those who do aren’t proposing a new one; they merely analyze a small part of the current one.

What do I recommend? Rethinking the recent past, as I have done in this article, would help, and it should be done in much more detail. Which ideas were most important, which solved problems, and which led to useful products and services? These are questions social scientists need to ask. With so few new science-based technologies being commercialized, isn’t it time to reassess how we measure the performance of researchers and how universities choose whom to hire, retain, or give tenure?

