Annual Conference Report

The Society's Annual Conference took place this year at the University of Nottingham from 18 to 20 April. This report on the event is written by Chris Giles of the Financial Times.

What would I make of academic economics after nearly a decade in journalism? I wondered to myself as I travelled to Nottingham in mid-April for the Royal Economic Society’s annual conference. I had, of course, kept in touch with much of the research coming out of British universities and research centres, but had not experienced such a concentration of academic economists since the mid-1990s.

Would the discipline be ahead of current policy debates? Would its insights be useful? Would researchers be speaking to themselves in their own exclusive codes or would there be an attempt to reach out and influence the wider world? Was the RES conference even relevant for academics and researchers themselves? More personally, I wondered whether I would find ideas for features or news stories and, nagging at the back of my mind, I feared I might be out of my depth, unable to cope with the rigours of academic argument.

Let’s cut to the chase. Three days later, I thought the RES conference in 2006 was, as it has often been described in the past, a mixed bag.

Excellence was in abundance. Imaginative collection and use of data dominated the papers presented; Steven Levitt had better watch out. The invited lectures and the special sessions were a treat. There were easily enough interesting papers to satisfy my demand for new ideas. And I left reassured that economics departments and research units across the country were actively trying to advance economics: helping explain how the world works and how (sometimes) to make it a better place. From discussing the conference with as many people as I could grab, I also found that it served many of its own objectives: it stimulated participants and provided ample opportunities for networking, some new ideas and plenty of scope for gossip.

But there were also frustrations and irritations that I seemed to share with quite a few participants. For too many people presenting papers, it seemed simply an opportunity to tick the box ‘I’ve presented at the RES conference’, leading to lacklustre presentations delivered to small and rather uninterested audiences. Too much research, particularly in the empirical papers, bordered on the irrelevant, with presenters seeming genuinely embarrassed when they had to explain why they were presenting their paper. The general sessions, even though they were billed as having a unifying theme, tended to consist of three disparate papers. They were delivered in rapid succession to meet the rigorous half-hour time limit and offered little more than an opportunity for others to score points with tricky technical questions. So, for much of my time in the general sessions, I wondered what the purpose of the conference was.

One answer, of course, is that it provides an opportunity for many of the younger and less experienced participants to see the masters at work, presenting the invited lectures. And the fact that there were around 300 participants in Nottingham on each day of the conference guaranteed a large enough audience to attract top-notch speakers to the UK. The invited lectures alone made the RES conference worthwhile, and that is to the great credit of Gavin Cameron, this year’s programme chair.

My personal highlight was the Sargan Lecture, delivered by David Card of Berkeley. Card had always been, in my eyes, the man who caused controversy a decade ago with his studies of burger flippers, which showed that higher minimum wages didn’t necessarily cost jobs. This lecture placed his abilities in a much broader context. It provided a historical sweep of the development of empirical economics, highlighting the development of data, modelling and computing power up to the current state of the art. It was never a story of linear progress, and had as many ups and downs as the British economy before 1992.

Empirical economics before 1970 tended to rest on small macroeconomic models, hamstrung by a lack of data, quantitative techniques and computer power. The revolution occurred in the 1970s, with the development of all three. One of the high points of this era was Daniel McFadden’s multinomial logit model of 1974, which successfully combined microeconomic theory with the tools to evaluate discrete choice in real-life situations. It was enough to win McFadden a share of the 2000 Nobel Prize. But, Card reminded his audience, the march of empirical economics came to an abrupt halt in the early 1980s as many empirical papers were found not to be replicable: errors had been made in calculations and the results did not hold when others used slightly different techniques or data. People worried that econometrics was just a ‘con’. This led to the search in the late 1980s for better techniques and design-driven approaches to ensure that published results reflected reality.
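As an aside for readers who have never met it, the multinomial logit’s closed form is worth recalling (a textbook statement, not something quoted from Card’s lecture). If the utility of alternative \(j\) in choice set \(C\) is \(U_j = V_j + \varepsilon_j\), with \(V_j\) the observable part and the errors drawn independently from an extreme-value distribution, the probability of choosing alternative \(i\) is
\[
P(i \mid C) = \frac{e^{V_i}}{\sum_{j \in C} e^{V_j}},
\]
a form simple enough to estimate on real choice data with 1970s computing power, which is much of why it proved so influential.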

The current state of empirical economics has learnt much from past mistakes, Card said, and he was right to say it has become the driving force behind economic research today. He shared four lessons from his historical sweep. First, you do not need theoretical underpinnings for good empirical work, but a good design for the statistical work, preferably with some randomisation, is essential to claim causal links between two variables. Second, you need to pre-specify an outcome to test rather than mine the data for the best story. Third, any empirical work needs serious testing of the statistical models to ensure the results are not dependent on the particular statistical form or techniques used. And fourth, the data must be made available for others to try to replicate the results.

Card outlined the categories into which most current empirical research falls, along a spectrum from deeply theoretical models, in which data are appended in a calibration exercise, to implicit modelling, which uses lots of data and clever techniques to answer questions without a formal structural model behind the analysis. Although each has its advantages and disadvantages, he saw the future of empirical economics in bringing more structure into the current fashion for designed approaches to specific questions. This would stop the tendency for empirical economics to end up as ‘just a big scatter of individual programme analyses’.

In foreseeing a world with structural models sitting alongside less theoretical approaches, Card must have brought joy to anyone from the Bank of England, since that is exactly the type of approach it has adopted over the past few years in its new macromodel of the UK economy. Card’s analysis of the current state of empirical work was spot on. If the RES conference was any guide, much more work is at the data and design end of his spectrum, but few papers had any general applicability to wider questions.

Empirical economists should also take note of Card’s final recommendations for the subject. Remember to be modest, he said, because economics does not explain a lot of observed behaviour. Unexplained behaviour still dominates most models.

If David Card’s lecture talked to the profession about what empirical economics can and cannot do, John Sutton’s lecture as President of the Royal Economic Society and Colin Camerer’s Hahn Lecture captured two alternative methods of making economic arguments. Sutton of the London School of Economics made use of powerful anecdotes to underpin his theory that the process of globalisation comes through a competition in capabilities, while Camerer of the California Institute of Technology revealed what we can and cannot deduce from neuroscience to aid our understanding of economics.

From the moment Sutton stood up it was clear that the President’s lecture was exquisitely prepared. He merged economic theory, insights from management theory, trade theories, anecdote and video clips to argue that globalisation can have very diverse effects on countries, much more diverse than standard theory relating production and trading performance to productivity differences would suggest.

His premise was that capabilities — the combination of productivity and quality — determined the outcome of a process of globalisation. Sutton defined two theoretical propositions: first, that high-quality products can drive out low-quality products completely, whatever price the low-quality producer sets; and second, that competition in capability building will limit the number of viable firms in an industry.
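To see the logic of the first proposition, consider a minimal vertical-differentiation sketch (my illustration of the logic, not Sutton’s formal model). Suppose each consumer has a taste for quality \(\theta > 0\) and buys whichever product offers the higher surplus \(\theta q - p\). A high-quality firm with quality \(q_H\) and unit cost \(c_H\) can then displace a low-quality rival with quality \(q_L < q_H\) completely whenever
\[
\theta \, (q_H - q_L) > c_H,
\]
because pricing at \(p_H = c_H\) gives every consumer \(\theta q_H - c_H > \theta q_L - p_L\) for any feasible price \(p_L \ge 0\); even giving the low-quality product away cannot save it.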

Incorporating these propositions in a model with an open trading environment, Sutton showed that when capabilities are similar between two companies in different jurisdictions, both countries can produce both goods. When capabilities differ markedly, however, all production will shift to the higher-quality location; falling wages in the low-quality zone would have no effect if the capability of the firm or product is not something consumers demanded.

From this he concluded that opening a country to global competition tends to have three phases. First, there is a destructive phase, in which the jobs of low-quality companies are eliminated. This was all too plain to see in Eastern Europe in the early 1990s, where no degree of devaluation or real wage reduction was likely to keep those poor-quality products in production. Welfare in poorer countries declines in this phase as they suffer not just a decline in relative wages but also a complete loss of production in many industries.

Second is the phase of transferring existing capabilities towards poorer countries. In this phase, which he said characterised much of the world today, some industries, such as motor vehicle components, would rapidly locate in poorer countries since the capabilities are easy to transfer. In these industries relative wages would be very important in determining production location decisions. In other industries where transferring capabilities is more difficult, for example aircraft or machine tools, the transfer of capabilities is much more limited. Citing numerous examples of Indian and Chinese car plants, he claimed they could reach world-class standards of product reliability within eight years, which was why motor component production was now rapidly moving to low-wage economies.

In his third phase of globalisation, he said companies would compete on the development of future capabilities and the outcome would depend on which companies could maintain an advantage in capabilities in their products or, at least, prevent their products falling too far behind. The most profitable companies would not necessarily survive if they suddenly found themselves lagging or unable to develop new capabilities in tune with changing demand. Sutton concluded that this model fitted the patterns of trade and development much better than traditional theories, and that the convincing case for globalisation rested on the second phase, in which wealth and capabilities are spread, rather than on a more traditional comparative-advantage story.

Colin Camerer delivered a tour de force on neuroeconomics, concluding that a better understanding of how the brain works can offer useful insights into human behaviour. He was careful not to dismiss traditional economics or ‘oversell’ neuroeconomics, but he claimed that some of the recent developments challenged accepted fundamental principles of economics.

In particular he listed many experiments on humans and animals that attempted to understand which parts of the brain are used in certain situations, and showed conclusive evidence that preferences, usually assumed to be fixed, are in fact dependent on the circuitry of the brain, which is not necessarily hard-wired. He showed that people with damage to areas of the brain associated with evaluating unfamiliar situations tended to act more rationally in experiments than people with healthy brains. Drugs and electrical interference with parts of the brain could also change preferences.

Why does this matter? Well, neuroscience speaks to three propositions fundamental to economics, Camerer said. First, that rationality is fundamental to animal and human behaviour as it is ‘highly sculpted by evolution’. Second, that there is considerable support for the propositions of behavioural economics: people are more willing to take gambles in familiar situations than in unfamiliar circumstances, even if the odds are the same. And third, that while traditional economics says preferences are fundamental, neuroeconomics says brain circuitry is the first step. The lecture showed amazing cross-disciplinary knowledge, and was rather intimidating for those of us who think brain surgery is difficult, but it was a powerful description of the limitations of traditional economics.

The first invited lecture of the conference, the Economic Journal Lecture, was delivered by Narayana Kocherlakota of the University of Minnesota on household heterogeneity and real exchange rates. I went into this lecture knowing I was likely to be out of my depth, and so it proved. But there was no doubt that it was an impressive piece of theoretical work, which had the appealing result that representative-agent models, however complicated, are useless at explaining real exchange rate movements.

In contrast, Kocherlakota concluded that it was possible to develop a model with heterogeneous agents in command of private information, in a Pareto-optimal setting, that had some explanatory power over real exchange rates. Intuition wasn’t the paper’s strong point, and I left thinking that, though it was a formidable piece of reasoning, I couldn’t quite see currency traders in London and New York using the insights to increase their profits and improve the efficiency of the market.

The difficulty of reporting the special sessions was that they ran in parallel, so I could only go to two out of eight possible events. Whether I chose well in picking intergenerational mobility and macroeconomic stability, I have no idea, but they were both highly enjoyable events, producing many insights.

The session on what we can learn from comparisons of intergenerational mobility did exactly what it said it would. For far too long, research in this area has had the rather nasty problem that it is hard to tell whether the transmission of income between generations is, to put it crudely, big or small.

The papers addressed this question in a variety of ways. The first paper, by Robin Naylor et al, compared intergenerational mobility across countries, showing the dangers of using a simple summary statistic to describe a host of differences between countries. While the Nordic countries appeared to have much greater mobility than the UK, with the US in last place, the detail revealed a much more nuanced picture. In all countries, for example, over a third of the sons of fathers in the top income quintile were also in the top quintile. The rich in every country were able to defend their status down one generation to the same degree.

But where there were differences, they were stark. In the UK and the US, sons of rich fathers were much less likely than elsewhere to fall to the bottom of the income distribution, while in the US, the land of the American Dream, sons of the poor were the least likely to rise to the top.

Jo Blanden of LSE compared two UK cohorts, those born in 1958 and 1970, and found that mobility had fallen. Part of the explanation, she showed, was that non-cognitive skills — the behaviour of children, a winning smile, politeness and so on — could explain the persistence in income patterns in the 1970 cohort but not in the 1958 cohort. Rich sons, it seems, are better behaved and get more education than poor sons, leading to a greater persistence of income differences now.

The third paper, from Jason Long and Joseph Ferrie, gave a brilliant insight into why the myth of the American Dream has itself been so persistent. Using the fine work of the Mormon Church, which has digitised the UK 1851 census and the US 1850 census, they traced a sample of individuals through the censuses taken 30 years later to find out how the same people had fared. What it showed was that occupational mobility in the US was huge compared with Britain in the nineteenth century, but not in recent decades. And even though US mobility is no longer higher than in other countries, the historical accuracy of the myth allows it to persist.

The three papers provoked lively discussion, ably moderated by Stephen Aldridge of the Cabinet Office, who confirmed that the work had huge relevance to educational policy and suggested that early intervention, if successful, is best at promoting greater intergenerational mobility.

The session on macroeconomic stability took as its premise the great moderation in the volatility of economic growth and inflation since the mid-1980s in most industrial countries. Was it good luck or good policy, it asked. At the end of a lively session, the consensus in the room was that it is still awfully difficult to know, but that good policy, not necessarily monetary policy, was likely to have been crucial.

The session was chaired by Stephen Nickell, a member of the Bank of England’s Monetary Policy Committee who is soon to move to Nuffield College, Oxford. The first two papers used structural dynamic general equilibrium models to try to answer the question. First, Alejandro Justiniano of Northwestern University and Giorgio Primiceri of the Federal Reserve used a state-of-the-art model to show that it was a reduction in shocks to the economy, not a reduction in policy shocks, that caused the moderation. Next, Fabio Canova, Luca Gambetti and Evi Pappa of the Universitat Pompeu Fabra showed in an international comparison that there were many differences between the US, Euroland and the UK, and that a simple story did not hold in each country.

But both papers were overshadowed by the work of Luca Benati and Paolo Surico of the Bank of England, which used a similar model but restricted the results to allow only policy shocks. Since their model could still replicate the data, they concluded that these types of models were not helpful in answering the question.

In discussions afterwards, a general consensus emerged that the models used were not good at distinguishing between policy and luck, and that it was probable that policy had been more influential. Stephen Nickell summed up the proceedings by casting his mind back to the UK in 1979. He suggested that even with the current structure of monetary policy, setting interest rates then ‘would have been a hateful experience’ because indexation was so deeply embedded in UK institutions, preventing relative price changes and ensuring that inflation shocks were transmitted directly into wage rises.

If the special sessions showed the pretty face of the knowledge frontier, with top-class research presented in a relevant forum alongside complementary papers, the general sessions showed the isolation of much academic work. Sparsely attended, they featured papers of varying quality, many of which could have been much improved had the presenters listened to David Card and taken on board his comments about what economics can and cannot say.

The thing I found most noteworthy was the general tone of friendliness and cohesion in the theoretical sessions, contrasting with an undercurrent of sniping and pedantry in the empirical sessions. My particular gripes were the lack of coherent motivation for doing the research; the tendency to find empirical results ‘reassuring’ if they conformed with a model rather than to examine the implications of those results; a lack of realism about the policies that governments can choose; and calibrated theoretical models that failed adequately to question the model’s applicability.

But my main area of genuine concern was that there have recently been countless examples of cleverly designed policies that have gone spectacularly wrong through unintended consequences or administrative complexity. Yet a high proportion of the empirical micro work seemed to be concerned with how government could tweak policy further. If ever there was a need for research into the benefits of simplicity, it is now, but I saw little such work in evidence. That was sad.

On the train back to London, it made me dwell on how the RES conference could be improved. There is no point suggesting an end to the general sessions, since the presenters of their papers make up most of the audience for the special sessions and the invited lectures. That will always be the case. But there is a strong argument for trying to build more debate, interaction and time for questions and discussion into the general sessions, alongside an attempt to create a unifying theme in each session to arouse greater interest.

For the conference as a whole, there would be great demand for sessions based less around papers and structured more around arguments, whether theoretical, empirical or technical. That would be the best way of generating more relevant discussion and raising levels of interest for participants and the outside world alike. Economics lives at the centre of public discourse in Britain, but the RES conference is largely irrelevant to it. That is the unfortunate truth.

I left to fly directly to Washington to attend the G7, International Monetary Fund and World Bank spring meetings. I expected them to be much worse than the RES, since we tend to sit in a Washington bunker, waiting for a bland communiqué and listening to spin from finance ministers and central bankers. But to everyone’s surprise, the 2006 meetings broke the habit of a lifetime and agreed a big reform of the IMF. The world is divided on its relevance, but the event, at least, was pleasingly different.

That should be the challenge to the organising committee for the 2007 RES conference in Warwick: spring some surprises to lift a good conference onto a higher level.