Media Briefings


  • Published Date: August 2013

The average number of citations by other scholars that each paper in an economics journal attracts is an incomplete measure of the quality of a newly published article in that journal. According to Professor David Laband, writing in the August 2013 issue of the Economic Journal, other aspects of journal-level citations provide considerably more information and, taken together, are likely to be more useful in evaluating research quality.

His study provides a range of information about the citations to papers published in 248 economics journals during the period 2001-05. This will be useful for readers – including the peer review panels of the UK’s 2014 Research Excellence Framework (REF) – when attempting to forecast the citations impact of a newly published article.

For academics, the ‘quality’ of journals in which they publish has very important, highly personal short- and long-term consequences. Salary increments are based, in part, on the quality of journals in which they have published their research papers. So too are university promotion and tenure decisions. In some cases, as in the UK in recent years, allocation of public funds to an academic unit may be affected by the ranking of journals that constituent faculty members have published in over a defined period of time.

Within a given academic institution, the distribution of money for salary increases is invariably a zero-sum game – what faculty member A commands in extra money comes at the incremental expense of salary increases to the other n-1 faculty members at that institution and/or in his/her academic unit.

Thus, each individual academic has an intense, personal interest in being able to represent to the relevant bureaucratic authority that his research papers are more important than the research papers published by his internal and external colleagues. Scaling up, the same argument applies in the context of inter-university competition for a fixed pool of public funds.

Journal rankings are of interest because the true quality of an individual’s scholarship will only be revealed over an extended period of time – typically several years at a minimum. This time frame is inconveniently incompatible with the relative frequency with which salary, promotion and budget allocation decisions are made.

Consequently, administrators typically base their assessment of the relative impact of a research paper on what amounts to a ‘prediction’. That prediction is based, in turn, on the ‘reputation’ of the journal in which the paper was published.

This ‘reputation’ might be based on the perceptions of a set of individuals who may or may not be very knowledgeable or it might be based on factual information about the papers published previously in that journal. The prevailing standard is to rank academic journals based on the average number of citations each paper attracts from other scholars over some defined period of time. Such citations reflect ‘impact’.

How useful is the average citation calculation for a given journal as a forecast of the citations that a newly published article is likely to attract? Consider two possible circumstances. Journal A’s citation average is driven by publication of a lot of rather good papers; thus, there is relatively low dispersion of citations around the mean.

In contrast, Journal B’s citation average (which is identical to that of Journal A) is driven by the extremely high citation count of a single stellar paper plus a few citations each to the other, mostly mediocre, ones. In this latter case, the average citation count would be characterised by a relatively high dispersion. Information about this dispersion surely would affect one's view of the accuracy and usefulness of any forecast drawn from the journal ranking.
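The contrast between the two journals can be made concrete with a small numerical sketch. The citation counts below are hypothetical, chosen only so that both journals share the same per-paper average while differing sharply in dispersion:

```python
from statistics import mean, pstdev

# Hypothetical citation counts per article for two journals.
# Both averages work out to 10 citations per paper.
journal_a = [8, 9, 10, 10, 11, 12]   # many solidly cited papers, low spread
journal_b = [1, 2, 2, 3, 3, 49]      # one stellar paper, the rest mediocre

print(mean(journal_a), mean(journal_b))    # identical means
print(pstdev(journal_a))                   # low dispersion (Journal A)
print(pstdev(journal_b))                   # high dispersion (Journal B)
```

Despite identical averages, a newly published paper in Journal B is far less predictable: the mean is a poor forecast when it is propped up by a single outlier.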

There are several aspects of journal-level citations that, taken together, provide considerably more information than average citations per article. This information is likely to be more useful than the average alone in helping readers form accurate expectations of the quality of a given article published in a journal.

This study provides data that both illustrate this point and offer a range of information about the citations to papers published in 248 economics journals during the period 2001-05. Quite deliberately, the author provides no ranking of these journals. Rather, he merely provides information that readers may find useful when attempting to forecast the citations impact of a newly published article.


Notes for editors: ‘On the Use and Abuse of Economics Journal Rankings’ by David Laband is published in the August 2013 issue of the Economic Journal.

David Laband is at the Georgia Institute of Technology.

For further information: contact Romesh Vaitilingam on +44-7768-661095 (email:; Twitter: @econromesh); or David Laband via email: