Science funding is a mess. Could grant lotteries make it better?

Science has a funding crisis. Nearly all academic researchers in the sciences rely on outside grants to pay salaries and buy equipment.

But that pool of funding is shrinking, grant approval rates are dropping, and researchers are stuck spending more and more of their time and energy applying for grants. Many scientists are saying that the system is broken and the consequences could be disastrous.

By some estimates, many top researchers spend 50 percent of their time writing grants. Interdisciplinary research is less likely to get funding, meaning critical kinds of research don’t get done. And scientists argue that the constant fighting for funding undermines their work, by encouraging researchers to overpromise and engage in questionable practices, overincentivizing publication in top journals, disincentivizing replications of existing work, and stifling creativity and intellectual risk-taking.

So here’s an unconventional idea: What if we gave up on the whole grant application process and distributed grant money by lottery?

Yes, that’s a serious proposal. It was first put forward in 2016 in mBio, a journal published by the American Society for Microbiology.

Here’s the idea: Our current grant review process doesn’t select the best proposals, by a long shot. One study found very little correlation between how a grant was scored and whether the research it produced was cited. Another, looking at high-quality proposals, found there was virtually no agreement on their merits — two different researchers might come to vastly different conclusions about whether the grant should be approved. Another analysis looked at successful grants and found that 59 percent of them could have been rejected due to random variability in scoring. Clearly, above some threshold, the process is deeply subjective and not a real measure of quality.

So what if reviewers were merely responsible for deciding whether a proposal clears the threshold for immediate rejection? Proposals below that threshold get rejected. Proposals above it enter the lottery, and grants are awarded at random.
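
To make the mechanics concrete, here is a minimal sketch in Python of how such a two-stage process could work. The `lottery_allocation` helper, the proposal names, and the scores are all invented for illustration; the mBio proposal doesn’t specify any of these details.

```python
import random

def lottery_allocation(proposals, threshold, num_awards, seed=None):
    """Toy two-stage allocation: screen against a threshold, then draw at random.

    `proposals` maps a proposal ID to a single screening score (illustrative;
    a real review would involve more than one number). Anything below
    `threshold` is rejected outright; the rest enter a pool from which
    `num_awards` winners are drawn by lottery.
    """
    rng = random.Random(seed)
    eligible = [pid for pid, score in proposals.items() if score >= threshold]
    rejected = [pid for pid, score in proposals.items() if score < threshold]
    winners = rng.sample(eligible, k=min(num_awards, len(eligible)))
    return winners, eligible, rejected

# Illustrative run with made-up scores.
scores = {"A": 8.1, "B": 6.7, "C": 3.2, "D": 7.5, "E": 5.9}
winners, eligible, rejected = lottery_allocation(scores, threshold=5.0, num_awards=2, seed=42)
print("cleared the bar:", eligible)    # ['A', 'B', 'D', 'E']
print("funded at random:", winners)    # any 2 of those 4, chosen by the draw
```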

It’s a bizarre idea. But it could solve some of the most urgent problems with the modern grant system.

The way we do grants is broken

If you’re a tenured professor doing academic research in many of the sciences, you’ll be responsible for running a lab. That means hiring graduate students and postdocs, as well as paying for equipment, supplies, and publication fees.

Universities pay a share of these expenses, but the bulk is expected to come from research grants. In the US, most of those grants come from the federal government: biomedical research is funded largely by the National Institutes of Health (NIH), and much other research by the National Science Foundation (NSF).

That means that in order to keep the lights on in your lab and keep paying the salaries of your grad students and postdocs, you need to get grants approved.

This has been the process for decades. But the share of grant applications that are approved has been falling steadily. Last year, the NIH approved 18 percent of the 54,000 requests it considered.

Scientists argue that the process that worked when a much larger percentage of grant applications were approved is failing now that a much smaller percentage are greenlit. The harder it is to get a grant approved, the more time researchers have to spend on writing proposals — and the more they have to change the focus and scope of their research toward whatever will win them money.

“Overall, our funding system is turning scientists into entrepreneurs and managers, and forcing them into roles they have not trained for, never wanted as a career, and which requires a very different mindset than doing science,” an article in the Canadian Journal of Kidney Health and Disease argued.

Some researchers, starved of public funding, are soliciting funding from private industry instead. That, of course, can create its own problems. Last year, the NIH halted a study into the health benefits of alcohol after learning that the researchers and officials involved had approached the alcohol industry for funding, implying that the results would be favorable to alcohol consumption. Now the agency is scrambling to set guidelines for scientists who take industry funding, to ensure they’re still conducting honest science.

The difficulty of getting grants creates a culture where researchers feel that their ability to keep doing science at all — and pay the employees in their lab — depends on their success at squeezing results out of their data. That makes it hard to admit when the data is ambiguous and rewards researchers who are more willing to leap to conclusions or cut corners. “As it stands, too much of the research funding is going to too few of the researchers,” Gordon Pennycook, a PhD candidate in cognitive psychology at the University of Waterloo, wrote to my colleague Julia Belluz for an article about what’s wrong with science. “This creates a culture that rewards fast, sexy (and probably wrong) results.”

There’s a growing awareness, in academia and in public, that science is in the middle of a replication crisis — many published results don’t stand up to scrutiny. It’s possible to use common statistical methods to find exciting “results” even in data that’s just noise. A lot of things drive the replication crisis, but the endless race for funding is certainly one of them. You usually can’t get grants to replicate research, so scientists have to do something new rather than checking on important existing results that might be wrong.

But you can’t do anything too new, either — interdisciplinary research (collaboration across academic departments) is harder to get funding for, which has researchers shying away from it. In general, the pressure to get funding creates incentives for researchers to use shoddy methods and overstate what they can do.

Grant applications are also consuming researchers’ lives. We put more than a decade into training a new PhD in the sciences, and often three decades into training a scientist who is at the top of their field and running a research lab. Their time is really valuable, and we’d like to see them spend it doing science.

Instead, they increasingly spend it writing grant applications. One observational study looked at Australian scientists to check how much time they spend on grants. “An estimated 550 working years of the researchers’ time was spent preparing proposals for Australia’s major health and medical funding scheme,” the study found. “As success rates are historically 20–25%, much of this time has no immediate benefit to either the researcher or society, and there are large opportunity costs in lost research output.”

Furthermore, all that time and effort doesn’t even help the best grants rise to the top. Among grant proposals that are already pretty good, ratings are highly subjective — two scientists will arrive at profoundly different evaluations of the same grant. That means whether a proposal is approved or rejected is mostly a matter of chance. One study evaluated this by asking peer reviewers to score high-quality NIH grant applications as if they were making a real funding decision. The researchers then computed the inter-rater reliability of the reviewers — that is, how strongly their judgments were correlated. An inter-rater reliability above 0.7 is considered pretty good. The inter-rater reliability for grant evaluations? Near zero.
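
As a rough illustration of what near-zero inter-rater reliability looks like, here is a small hypothetical sketch in Python. It uses a plain Pearson correlation as a simplified stand-in for the intraclass-correlation measures such studies actually report, and the reviewer scores are made up for the example.

```python
import numpy as np

# Hypothetical scores two reviewers might give the same eight strong
# applications. These numbers are invented purely to illustrate the
# statistic; they are not taken from any study.
reviewer_1 = np.array([2, 5, 3, 4, 2, 5, 3, 4])
reviewer_2 = np.array([4, 4, 2, 2, 3, 3, 5, 5])

# Pearson correlation as a simplified proxy for inter-rater reliability:
# values near 1 mean the reviewers rank proposals similarly, values near 0
# mean one reviewer's score tells you almost nothing about the other's.
r = np.corrcoef(reviewer_1, reviewer_2)[0, 1]
print(f"agreement between reviewers: r = {r:.2f}")  # r = 0.00 for these scores
```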

“The available evidence suggests that the system is already in essence a lottery without the benefits of being random,” conclude Ferric C. Fang and Arturo Casadevall, the authors of the proposal in mBio.

Can a lottery really solve the problems?

Fang and Casadevall’s idea is pretty straightforward — they think that the NIH, which is the primary source of biomedical research funding in the US and makes $30 billion in grants a year, should replace its peer review program with an abbreviated peer review system that just decides whether an application is too weak for consideration in the lottery.

“We think that scientists are pretty good at ranking proposals into two groups: ‘meritorious’ and ‘non-meritorious,’” Casadevall told me in an email, “but there is a problem when you ask them to stratify the meritorious proposals, since they can’t do that reliably. Hence, the current system is already a lottery except that it is not random.”

In their proposal, grants are then randomly distributed among the applications strong enough for consideration. If this process improved outcomes for NIH grants, other funders might adopt it.

The authors hope the proposal could tackle many of the problems with grant funding. If all an application needs to do is clear a certain, reasonable bar, then hopefully researchers will spend less of their time refining and resubmitting grant proposals. And if the success of their next grant doesn’t depend on landing a flashy publication this time, they might be able to spend more time on sound methodology.

Nonetheless, it’s still a fringe idea. When the success of your lab and your research depends on funding, it can be galling to leave it up to chance. The current system may be broken, but a lot of people have spent their careers successfully navigating it, and many of them are likely to oppose drastic changes like this one.

Additionally, there are other concerns — would an NIH lottery just make researchers more dependent on industry funds, which often come with expectations about which results to publish? How would it change the kinds of projects researchers pursue — would those projects deliver more social value, or less?

There are other, more measured proposals. For one thing, we could just increase research grant funding, which (thanks to inflation) has been slipping in real dollar terms since 2000. (The political landscape unfortunately doesn’t make this likely.) We could adjust grant application requirements to make grants less time-consuming without making them any more random.

We could set aside grant money specifically for longer-term, more complex, less flashy research with potentially enormous social value. Similarly, we don’t need to overhaul the process to start rewarding replications, and in fact, many fields are already beginning to view them more favorably. Researchers in Canada have argued that one Canadian peer review and approval process costs more to run than simply funding every qualified application would, so in some fields a lottery might not even be necessary.

And if the entire system is being overhauled, a lottery isn’t the only way. One team has proposed “collective allocation” — letting scientists vote on funding. Others have said that grants should be awarded based on track record and a one-page summary.

A lottery may, in the end, not be the best possible way to run an overhauled grant process that prioritizes enabling important research. What’s striking is that the current system is so fundamentally broken that a lottery could conceivably be an improvement. That ought to spur us to start debating big changes.
