Media Briefings

EVALUATING RESEARCH QUALITY: How peer review panels should make their REF assessments

  • Published Date: August 2013

With more than five billion pounds of taxpayers’ money due to be doled out to UK universities next year, two economists suggest that the government needs to make its decisions using the ideas of an English vicar who died in the 1700s.

In the run-up to the 2014 Research Excellence Framework (REF), Professors Daniel Sgroi and Andrew Oswald of the University of Warwick have developed a method, drawing on mathematical ideas from the Reverend Thomas Bayes, for evaluating objectively the research submitted for assessment by each university.

Sgroi and Oswald’s proposal, published in the August 2013 issue of the Economic Journal, boils down to a simple idea. First, form an initial estimate of each journal article’s quality by looking only at the quality of the journal in which it was published.

Next, from the article's date of publication, use data on citations – the number of times that other researchers mention the article – to update that initial estimate gradually, forming a more considered assessment of the article's quality.

The more an article is cited by others, the more it can be said to be having an important influence. Articles that sink quietly without trace in the ocean of scientific publications can be said to have had little influence.

To help the government's REF peer review panels, Sgroi and Oswald offer mathematical weighting formulae to do all this. They argue that, over time, less and less weight should be put on a journal's quality and more and more on the citations an article has accumulated.
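The updating rule the briefing describes can be sketched as a standard precision-weighted (normal-normal) Bayesian update. This is a minimal illustration, not the authors' actual formulae: the function name, the quality scale, and the variance parameters below are all assumptions chosen for the example.

```python
def updated_quality(journal_quality, citation_signals,
                    prior_var=1.0, signal_var=4.0):
    """Precision-weighted average of a journal-based prior estimate of an
    article's quality and a set of citation-based quality signals.

    journal_quality  -- prior mean, taken from the quality of the journal
    citation_signals -- one quality reading per period of citation data
    The weight on the journal prior shrinks as citation evidence accrues,
    mirroring the paper's argument that journal quality should matter
    less and less over time.
    """
    prior_precision = 1.0 / prior_var
    signal_precision = len(citation_signals) / signal_var
    total_precision = prior_precision + signal_precision
    mean_signal = (sum(citation_signals) / len(citation_signals)
                   if citation_signals else 0.0)
    weight_on_journal = prior_precision / total_precision
    return (weight_on_journal * journal_quality
            + (1 - weight_on_journal) * mean_signal)
```

With no citation data the estimate is simply the journal-based prior; as citation readings accumulate, the estimate is pulled steadily towards what the citations say, which is exactly the "gradual updating" idea above.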

Across the world, there is growing interest in how to judge the quality of universities. Various rankings have sprung up: the Times Higher Education global rankings of universities; the Jiao Tong world league table; the US News and World Report ranking of elite US colleges; the Guardian ranking of UK universities; and others. Students (and their parents) want to make informed choices, and politicians wish to assess the value that citizens get for taxpayers’ money.

Rightly or wrongly, the UK has been a world leader in formal ways to measure universities’ research performance. For more than two decades, the country has held ‘research assessment exercises’, in which panels of senior researchers have been asked to study the work being done in UK universities and provide quality scores.

The next such exercise is the REF. Peer review panels have recently been appointed and submissions will be made at the end of 2013.

The 2014 REF will allow universities to nominate four outputs (usually journal articles) per member of each department. These nominations will be graded by the panels into categories from a low of 1* up to a high of 4*.

The assigned grades will contribute 65% of the panel's final assessment of the research quality of a given department in a given university. The remaining part of the assessment will be based on the research's non-academic, practical impact and on the research environment of the academic institution.
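As a rough arithmetic illustration of how these components combine: the 65% weight on outputs is stated above, but the split of the remaining 35% between impact and environment used below is an assumption for the example, as are the function and parameter names.

```python
def overall_quality(outputs, impact, environment,
                    weights=(0.65, 0.20, 0.15)):
    """Weighted average of the three REF sub-profiles, each scored on
    the 1*-4* scale. The 0.65 outputs weight is from the briefing;
    the 0.20/0.15 split of the remainder is illustrative only."""
    w_out, w_imp, w_env = weights
    return w_out * outputs + w_imp * impact + w_env * environment
```

So a department graded 4* on outputs but 2* on both impact and environment would, under these assumed weights, score 0.65 × 4 + 0.20 × 2 + 0.15 × 2 = 3.3 overall.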

In many academic disciplines, the REF will allow peer review panels to put some weight on the citations that work has already accrued. Citations data in this context are data on the number of times that articles and books are referenced in the bibliographies of later published articles.

But the imperfection of citations data is well known. In a discipline like Economics, it is likely that only a small number of articles or books will have acquired more than a few dozen citations in the five-year window of the REF process. Occasionally, an important contribution – such as Louis Bachelier’s thesis, published in 1900 – lies un-cited for years.

Alongside citations data, there are also data on the quality of the journals where the outputs are published. Most researchers are aware of journal rankings and would like their work to appear in highly rated journals.

Paradoxically, the rubric of the REF ostensibly requires members of the peer review panels not to use information on the quality of journals. But individual members of a peer review panel do not have to prove that they ignored journal rankings, and it seems possible that peer review panel members will see such a rubric as unenforceable.


Notes for editors: ‘How Should Peer Review Panels Behave?’ by Daniel Sgroi and Andrew Oswald is published in the August 2013 issue of the Economic Journal.

Daniel Sgroi and Andrew Oswald are at the University of Warwick.

For further information: contact Daniel Sgroi on 02476-575557; Andrew Oswald on 02476-523510; or Romesh Vaitilingam on +44-7768-661095 (Twitter: @econromesh).