Ask a Program Director: Why did my proposal do worse, not better, than last time?

IOS in Focus will occasionally feature an IOS Program Director answering frequently asked questions. Here is the first, from Dr. Tamra Mendelson, a program director in the Behavioral Systems Cluster.

Why did my proposal do worse, not better, than last time?
As a principal investigator, the email I most dread opening is the one announcing that my proposal isn’t going to be funded. And the new pre-proposal system only adds new dimensions to the frustration. A pre-proposal might not be invited for a full proposal (ouch). Or, a pre-proposal might be invited for a full proposal, fare well but not get funded (ouch again), and then, in the next round of pre-proposals, not even be invited again for a full submission (What?!). Or, a pre-proposal could get a unanimous round of Excellents, only to tank as a full proposal. How does that happen? How does a proposal fare worse from one submission to the next, especially since it presumably benefits from the reviews of several experts?

As a rotating program director at NSF, I can now see much more clearly the factors that influence the success of a proposal. Those factors change from one submission to the next, and, unfortunately, the change doesn’t always go the way of the PI. I can think of at least four factors, not necessarily limited to the pre-proposal system, that can explain a decline in ranking from one submission to the next.

The panel. The composition of the review panel changes from year to year. As it should—imagine if we had the same 15 people reviewing our science year after year! Often a full proposal panel is composed of some proportion of the previous pre-proposal panel, but every new pre-proposal panel is almost entirely distinct from the previous year’s. That’s obviously important to keep our fields from canalizing around the ideas of an elite few judges.

But it also seems unfair. It’s a moving target. If your pre-proposal isn’t invited the first time, you use those reviews to make the proposal better. But in the next round, those panelists are long gone and you’re trying to please a whole new crop of people. It can also be a matter of luck. You might get lucky one year and get someone who really understands your stuff and can appreciate its impact, but the next year you might get someone on the panel who happens to have a real bugaboo about your method or your question.

But if you’ve ever served on a panel, at least like those I’ve seen, you know how dedicated and careful the reviewers are, how engaging, deep, and considered the panel discussions can be, and how the best proposals simply and easily rise to the top. Yes, the panel can change, and yes, this year you might get stuck with Dr. Bugaboo. But that could have happened any year, and it could just as easily go the other way, faring poorly one year and better the next. It’s not systematically biased.

The science. Science changes too, at least we hope so. New evidence arises, old puzzles are solved, new questions are posed. So the question and methods you propose in one year may not be as exciting in the next. Perhaps a year is too short a time scale to explain a dip in the reviews from one year to the next. But, if you’ve submitted proposals as many times as I’ve submitted some of mine, the changing scientific landscape can become a real problem.

The proposal. It’s actually against NSF rules to submit the exact same proposal from one year to the next, so the proposal necessarily changes. Usually it changes in response to suggestions from the panel and external reviews. Sometimes the interpretation of those suggestions is spot on and the resubmission is significantly stronger, but sometimes a proposal doesn’t change for the better. In response to a request for more preliminary data, you might add weak results that only undermine the case. Or you might overemphasize a reviewer comment that actually didn’t carry much weight in the decision, and ignore the concerns that really held things up. (Note: That’s why it’s so important to call your program director after you recover and read through the reviews. Your program director will have a candid discussion with you about which elements of the panel summary and reviews were the biggest factors in making their decision.) For whatever reason, each submission is a different proposal that might, or might not, have improved.

The pool of competitors. Every time you submit a proposal, it’s thrown into a population of competitors against which it’s judged. That population changes from year to year, and from the pre-proposal panel to the full-proposal panel. In a sense, there is no absolute fitness of a proposal in the NSF merit review system, only relative fitness. Proposals are evaluated in a group, and they’re compared to one another as much as they’re judged on their own merits. So the ultimate placement of a proposal on “the ranking board” (in IOS: high, medium, or low quality, or not competitive) is due in large part to how it compares to others in the pool. In the end, resources are finite (too finite) and competition is stiff. The ideas in any given proposal can be important, the methods can be solid, the proposal can improve, and the panel can be stacked in its favor. But if other, stronger proposals suddenly appear, the relative fitness of any proposal can take a hit.

Selection via relative fitness is pretty effective—it explains the persistence and diversity of life on earth, it created humanity and its capacity for science, and it underlies the most widely respected merit review system in the world. That’s not to say there’s not room for improvement in the system; there is, and it’s a topic of constant discussion among the tireless forces at NSF. But given the ground-breaking developments I see on a daily basis coming out of NSF-funded research, something must be working.

P.S. Call your program director.