Research Article

Low agreement among reviewers evaluating the same NIH grant applications

Elizabeth L. Pier, Markus Brauer, Amarette Filut, Anna Kaatz, Joshua Raclaw, Mitchell J. Nathan, Cecilia E. Ford, and Molly Carnes
  a. Center for Women’s Health Research, University of Wisconsin–Madison, Madison, WI 53715;
  b. Department of Educational Psychology, University of Wisconsin–Madison, Madison, WI 53706;
  c. Department of Psychology, University of Wisconsin–Madison, Madison, WI 53706;
  d. Department of English, West Chester University, West Chester, PA 19383;
  e. Department of English, University of Wisconsin–Madison, Madison, WI 53706;
  f. Department of Sociology, University of Wisconsin–Madison, Madison, WI 53706;
  g. Department of Medicine, University of Wisconsin–Madison, Madison, WI 53792


PNAS March 20, 2018 115 (12) 2952-2957; first published March 5, 2018; https://doi.org/10.1073/pnas.1714379115
Edited by Susan T. Fiske, Princeton University, Princeton, NJ, and approved February 5, 2018 (received for review August 23, 2017)


Significance

Scientific grant peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination in allocating funding, little research has explored how reviewers derive their assigned ratings for the applications they review or whether this assessment is consistent when the same application is evaluated by different sets of reviewers. We replicated the NIH peer-review process to examine the qualitative and quantitative judgments of different reviewers examining the same grant application. We found no agreement among reviewers in evaluating the same application. These findings highlight the subjectivity in reviewers’ evaluations of grant applications and underscore the difficulty in comparing the evaluations of different applications from different reviewers—which is how peer review actually unfolds.

Abstract

Obtaining grant funding from the National Institutes of Health (NIH) is increasingly competitive, as funding success rates have declined over the past decade. To allocate relatively scarce funds, scientific peer reviewers must differentiate the very best applications from comparatively weaker ones. Despite the importance of this determination, little research has explored how reviewers assign ratings to the applications they review and whether there is consistency in the reviewers’ evaluation of the same application. Replicating all aspects of the NIH peer-review process, we examined 43 individual reviewers’ ratings and written critiques of the same group of 25 NIH grant applications. Results showed no agreement among reviewers regarding the quality of the applications in either their qualitative or quantitative evaluations. Although all reviewers received the same instructions on how to rate applications and format their written critiques, we also found no agreement in how reviewers “translated” a given number of strengths and weaknesses into a numeric rating. It appeared that the outcome of the grant review depended more on the reviewer to whom the grant was assigned than the research proposed in the grant. This research replicates the NIH peer-review process to examine in detail the qualitative and quantitative judgments of different reviewers examining the same application, and our results have broad relevance for scientific grant peer review.

  • peer review
  • social sciences
  • interrater reliability
  • linear mixed-effects models
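The keywords above point to interrater reliability estimated with linear mixed-effects models. As a rough illustration of what "no agreement" means statistically, the sketch below computes a one-way intraclass correlation coefficient, ICC(1), one common interrater-reliability statistic, from a small table of ratings. The ratings are invented for illustration; this is not the study's data, and the authors' actual analysis (mixed-effects models over 43 reviewers and 25 applications) is more elaborate.

```python
# Hypothetical example: one-way intraclass correlation, ICC(1), for grant ratings.
# The ratings below are invented; they are NOT data from the study.

def icc1(ratings):
    """One-way random-effects ICC(1) for a ratings table.

    ratings: one row per application; each row holds the scores that
    k different reviewers assigned to that application.
    """
    n = len(ratings)       # number of applications
    k = len(ratings[0])    # reviewers per application
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-application and within-application mean squares (one-way ANOVA).
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(ratings, row_means) for x in row
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Ratings on the NIH 1 (best) to 9 (worst) scale; 3 reviewers per application.
agree = [[2, 2, 3], [5, 5, 6], [8, 7, 8], [3, 3, 2]]
disagree = [[2, 8, 5], [7, 2, 4], [3, 9, 1], [6, 1, 8]]
print(round(icc1(agree), 2))     # high ICC: reviewers rank applications alike
print(round(icc1(disagree), 2))  # low or negative ICC: the rating depends
                                 # mostly on which reviewer scored the grant
```

An ICC near 1 means an application's score barely changes across reviewers; an ICC near (or below) zero, as the paper reports, means between-reviewer variance swamps between-application variance, so the assigned reviewer, not the proposal, drives the outcome.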

Footnotes

  • To whom correspondence should be addressed. Email: epier@wisc.edu.
  • Author contributions: A.K., C.E.F., and M.C. designed research; A.K., J.R., C.E.F., and M.C. performed research; E.L.P. and A.F. coded data with input from M.J.N.; E.L.P. and M.B. analyzed data; and E.L.P. and M.B. wrote the paper with input from all coauthors.

  • The authors declare no conflict of interest.

  • This article is a PNAS Direct Submission.

  • This article contains supporting information online at www.pnas.org/lookup/suppl/doi:10.1073/pnas.1714379115/-/DCSupplemental.

Published under the PNAS license.


Copyright © 2021 National Academy of Sciences. Online ISSN 1091-6490