A Review of Funder Instructions and Grant Reviewer Practices for Assessing the Intellectual Merit and Other Impacts of Research
Key Findings
- Although funders provide criteria by which intellectual merit and broader impacts should be assessed, the criteria are not defined precisely, and the relative emphasis placed on them is left to the reviewer’s discretion.
- Criteria for intellectual merit usually direct reviewers to consider research methods, potential outcomes, and the applicant’s professional experience. In contrast, funders’ criteria for broader impacts usually direct reviewers to consider potential outcomes and are less likely to require input on the proposed methods to achieve outcomes or the applicants’ relevant previous experience.
- Across six large studies of research grant proposals, the overall reliability of reviewer scores is low. At the same time, the score differences between funded and unfunded proposals are quite small, suggesting that which reviewers are assigned to a proposal can play an important role in its outcome.
- Research has identified many sources of variability across reviewers. Some variability reflects genuine differences in opinion and contributes to a more complete understanding of a grant proposal’s strengths and weaknesses. Other variability stems from errors and oversights in the review process and decreases the reliability of reviewer scores.
Researchers and policymakers have noted the challenges associated with assessing the broader impacts of scientific research. This report reviews the existing literature on how reviewers at institutions outside of the U.S. National Science Foundation (NSF) assess nonmedical basic and use-inspired research and science, technology, engineering, and mathematics (STEM) education. In doing so, it provides evidence about the criteria used to assess grants and factors having important, unimportant, or unstudied impacts on the evaluation of the intellectual merit and broader impacts of grant proposals. We found that when evaluating intellectual merit, funders and grant reviewers seem to consider both the potential for scientific discovery and the plan for scientific inquiry. In contrast, when evaluating broader impacts, funders and reviewers seem to focus on outcomes, paying less attention to the methods by which these outcomes might be realized. The empirical literature shows that reviewers often do not agree about the merit of grant proposals, though they seem to become more consistent with experience. Several ideas on how to support, modify, or replace peer review have been proposed in the literature, but evidence about the efficacy of these ideas is limited. The findings of this literature review will inform the design of a process evaluation that will assess how NSF applies its Broader Impacts review criterion across its work.