Study Groups and Spiritualism
Agencies in the USA, such as the National Institutes of Health (NIH) and the National Science Foundation (NSF), name their peer-review committees "Study Groups." In each area of study a dozen or so study group members ("experts") sit around a table and, in a day or so, allocate millions of taxpayer dollars.
In 2002 I politely declined an invitation to participate in a USA National Science Foundation bioinformatics study group. I would feel more inclined to accept an invitation to sit around a table at a spiritualist session, making contact with deceased loved ones, than to sit around a table at a peer-review study session. The spiritualist at least gives comfort to the gullible and to his/her own financial dependents, while not greatly harming others. However, as set out in these web-pages and elsewhere (e.g. see the chapter on diphtheria in my book), study groups unwittingly delay progress towards solutions to major problems in the biomedical sciences.
Nevertheless, the following year the invitation was repeated and I accepted, just to remind myself how little progress had been made in system reform. Eighteen peer-reviewers sat around a table and, in two days, examined 34 applications for research funds. The upper limit on funding for each application was $100,000 and, remarkably, each applicant found that his/her proposed work would require exactly that amount. Each reviewer had been sent abstracts in advance and asked to indicate the applications that he/she felt fell within his/her area of expertise. Then 4-5 applications were allocated to each reviewer, who read those, but not the other applications.
At the meeting, the reviewers responsible for each application were asked to speak about it (5-10 minutes) and give one of three ratings - high, medium, or low. On several occasions, two reviewers would give an application a high rating while two others would give the same application a low rating. Even after discussion, reviewers tended to stick to their original ratings. It so happened that the highest-scoring application was in my area of expertise. Three reviewers, admitting that they had not fully understood the underlying science, gave it a high rating. I pointed out the flaws in the application and gave it a medium rating. The other three reviewers did not change their ratings.
At the end of the second day the NSF officer running the meeting required us to rank the highest-scoring applications, since there were funds for only six (there was no sliding scale of funding). Each of us had read only a small fraction of the applications; some of us had read none of the highest-rated ones, and others only a few. There was little objective basis for the rankings. A mathematician in the group soon pointed out various flaws in the ranking procedure the officer was proposing. We were tired, and there were planes home to be caught. With much hand-waving a ranking was arrived at. Applications just above the cut-off point received $100,000. Applications just below the cut-off point received zero.
Donald Forsdyke. April 2003
Last edited 05 Apr 2003 by D. R. Forsdyke