The Annual Conference

The Society’s Annual Conference took place this year at the University of Warwick, 17th to 19th March. This report of the event is written by Tim Harford, a Financial Times columnist and author of The Logic of Life and The Undercover Economist.

On the train from London to Coventry, the signs were unmistakable: I was surrounded by scruffy people reading the Financial Times, or poring over documents composed in LaTeX. There was no escaping the conclusion that the cream of British economics was riding the train with me.

I idly wondered whether we economists would be much missed if there was a catastrophic accident. More missed than usual, was my guess. Not only has the ‘Freakonomics’ boom been slow to fade — although make no mistake, it is fading — but the credit crunch has made bread-and-butter economic ideas seem more important than they do in happier times. Both the freaky and the traditional were on display at this year’s Royal Economic Society conference, and I found much to enjoy in both areas.

My initial impressions, though, were anthropological. Surely only economists would devise an incentive-compatible mechanism for making sure sessions ran on time? By nominating the final presenter to take care of timekeeping, the organisers produced a theoretically appealing procedure that worked nicely in practice. (I was as well placed as anyone to notice, since I skipped from one session to another. The number of sessions I caught running late: zero.) The rest of the logistics ran smoothly enough to be invisible, leaving plenty of time to focus on the substance of the conference.

Frank Hahn Lecture: Susan Athey
Economics can be difficult, and difficult ideas do not always get the credit they deserve. When Tom Schelling and Robert Aumann shared the Nobel memorial prize for economics back in 2005, journalists — myself included — focused overwhelmingly on Schelling’s contribution rather than Aumann’s. Why? Because we didn’t really understand what it was that Aumann was supposed to have achieved, whereas Schelling had done all kinds of amazing things, such as fall out with Henry Kissinger and help write the script to Dr Strangelove.

Similarly, when Susan Athey picked up the John Bates Clark medal for the best American economist under the age of 40, it was hard not to notice the contrast with the most famous recent winner, Steven Levitt. Levitt had shown that abortion was an effective crime-control policy and that sumo wrestlers cheat; Athey had shown… well, explaining that to any non-economist was always going to be a tough job. (My own news story on Professor Athey’s award, hastily penned on a Sunday, did not make it into the Financial Times.)

I wasn’t surprised to find Professor Athey’s lecture living up to both sides of her reputation: it was impressive, but also mathematically demanding.

The most intuitive application of Athey’s analysis is to describe the problems that firms have when they are trying to collude with each other. Such firms are attempting to depart from a one-shot competitive equilibrium, to find some way of sharing the market at much more profitable prices than the competitive market rate.

Fortunately for their customers, this task is not easy. Binding agreements are usually not available, so collusion must be sustained with the threat of punishment in later periods. And given that each firm’s costs or local demand conditions are likely to be different, vary over time and be private information to only one firm, there is no obvious way of dividing the collusive spoils.

Colluding firms face a basic trade-off: they can use their hidden information to seek the most efficient division of profits over time, or they can ignore the private information and rely on a simple rule such as ‘equal prices, constant market shares’.
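The value of that hidden information is easy to convey with a toy calculation. The sketch below is emphatically not Professor Athey’s model; it is an invented illustration in which two firms draw private unit costs each period, and we compare the cartel’s average cost under a rigid equal-shares rule with its cost when the low-cost firm serves the whole market.

    import random

    random.seed(1)
    T = 100_000
    rigid = efficient = 0.0
    for _ in range(T):
        # each firm's unit cost is private information, redrawn every period
        c1, c2 = random.random(), random.random()
        rigid += 0.5 * (c1 + c2)   # the 'equal shares' rule ignores the draws
        efficient += min(c1, c2)   # the low-cost firm serves the market
    print(f"average unit cost, rigid rule:     {rigid / T:.3f}")      # ~0.50
    print(f"average unit cost, efficient rule: {efficient / T:.3f}")  # ~0.33

The catch, of course, is that each firm has every incentive to claim that its costs are low, so the efficient scheme must be sustained with transfers or promises of future market share, and a rigid rule may prove less trouble.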

Professor Athey pointed out that the same basic analysis could be applied to many other situations. For instance, monetary policy faces a similar trade-off: should the monetary authority use discretion (an attempt to use private information efficiently) or simple rules?

Indeed, many ongoing relationships are attempts to divide surplus fairly — or at least, in a way that keeps all sides happy — in a situation where the costs and benefits of action vary over time and are private information. (Should you put the children to bed, or should your spouse? It is fair to take turns, but you’re particularly tired and stressed this evening… how can you prove that, and make up the favour when the time comes?)

As Athey explained, these models promise to answer important questions about the design of institutions. In the case of collusion, for example, the model should clarify the effectiveness of anti-trust rules restricting side-payments or banning communication, or auction design details such as whether the winning bid should remain secret after an auction has closed. Could such rules foil collusion, or simply make it more wasteful?

It turns out that it is possible to design sustainable surplus-sharing mechanisms which allow a lot of discretion — for example, colluding firms can trade off market share today against market share tomorrow in an attempt to shift supply to the lowest-cost producer. However, Professor Athey’s broad conclusion was that this is often more trouble than it is worth, and that rules-based systems will often be preferred by colluding firms. What is more, anti-trust policies run the risk of reducing the efficiency of the cartel — by tipping the cartel towards a simpler rule-based system — without winning lower prices for customers.


Economic Journal Lecture: Hyun Shin
Professor Shin’s Economic Journal lecture was a master class in timing, style and clear, focused thinking. The timing was self-evident: who, after all, would not have wanted to hear about the credit crunch as Bear Stearns was collapsing? The style was distinctive: a deadpan commentary over (someone else’s) obscenity-packed cartoons brought guffaws from the audience.

And the thinking? It is very easy to take a subject like the credit crunch and tell a plausible story based on hand-wringing, scary-sounding numbers and prognostication. Hyun Shin instead laid out very clearly why he thought the standard ‘pass the hot potato’ story about securitised sub-prime lending didn’t seem to fit some basic facts. The ‘hot potato’ story is that banks took bad loans, repackaged them to look safe, and then sold them on to an unsuspecting dupe. But as Shin pointed out, if that is the way the system has been working, why have the most informed participants, the banks, taken such huge losses?

Professor Shin began by distinguishing between the practice of selling a bad loan, and the practice of issuing liabilities backed by an income stream from the bad loan. Asset-backed security issuers — as the name suggests — do the latter, keeping the original bad loan. In other words, the hot potato stays in the financial system; it is not passed to the final investor. And although final investors can and do make losses, the securitising bank is on the front line, and losses can wipe out the bank’s equity. If this is a system where the hot potato ends up burning the greatest fool, the banking system is that fool: Shin described his recent, widely-cited calculations (with three co-authors) that banks and other leveraged institutions stand to suffer about two-thirds of the losses from sub-prime defaults.

The challenge, then, is to answer three questions. First, why do apparently sophisticated banks act as the greatest fool? Second, what are the economic conditions that tend to allow bubbles to form? And third, what are the crisis dynamics on the way up and down? Professor Shin analysed the strings of promises through which financial crises can cascade. His conceptual framework presented three types of actor: end-user borrowers, such as house-buyers; financial intermediaries; and external backers of the financial system, who have provided debt or equity finance to the banks. He solved the end-game — that is, who would get what if the entire system were subject to some kind of bankruptcy procedure? — and then worked backwards to solve for the ex-ante values of different financial claims in the system, all of which needed to be consistent in equilibrium.

Shin identified the economy’s vulnerability to stochastic shocks as a critical parameter in his model. When this parameter, j, is large, all assets in the economy tend to be subject to simultaneous shocks. When j is small, the shocks are more independently distributed, and the law of large numbers ensures that default rates are highly predictable. That, in turn, makes a dramatic difference to the amount of capital banks need to hold to guard against bankruptcy caused by a cluster of debts going bad. When j is small, banks can vastly expand their balance sheets.
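Professor Shin did not burden the audience with the full mathematics, but the flavour of the argument can be mimicked with a standard one-factor credit model: the classic Vasicek large-portfolio approximation, with a correlation parameter rho standing in for j. The default probability and the percentile below are my own invented numbers, not Shin’s.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    pd_ = 0.02                        # assumed default probability per loan
    z = rng.standard_normal(100_000)  # common shock, one draw per scenario
    for rho in (0.01, 0.30):          # rho plays the role of Shin's j
        # default rate in a large loan portfolio, given the common shock
        loss = norm.cdf((norm.ppf(pd_) - np.sqrt(rho) * z) / np.sqrt(1 - rho))
        print(f"rho={rho:.2f}: expected loss {loss.mean():.3f}, "
              f"99.9th percentile {np.quantile(loss, 0.999):.3f}")

When the shocks are nearly independent, the 99.9th percentile of losses barely exceeds the expected loss, so little capital is needed; when they are strongly correlated, the tail loss is many times larger. On this logic, a fall in the perceived j is an invitation to expand the balance sheet.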

This model, explained Shin, matched the inflation of the credit bubble. The ‘great moderation’ in inflation and macroeconomic volatility during the 1990s reduced the estimated value of j, meaning banks wished to expand their balance sheets and cast around for loans to make, even on assets of dubious quality.

‘It’s like trying to inflate a balloon’, he explained. ‘If you want it to expand, you have to fill it with something.’ The hot air that filled banks’ balance sheets was subprime debt instruments.

Professor Shin concluded by looking at the history of some regional foreclosure crises. ‘Looking at history,’ he said, ‘we’re just in the foothills of a very, very long foreclosure crisis.’


Denis Sargan Lecture: Ernst Fehr
Ernst Fehr set himself the task of explaining some of his recent laboratory experiments, designed to investigate how situations with the potential for moral hazard unfolded, given both the opportunity for self-interested reputation-building and the presence of non-selfish motives such as a desire for fairness or a taste for reciprocity.

Professor Fehr outlined the basic theory: a few ‘reciprocators’ — who, in contrast to the stalwart rational self-interested utility maximiser, are willing to spend resources punishing transgressions and rewarding good deeds — can have a disproportionate effect in a repeated game, because selfish utility-maximisers will mimic them in order to enjoy the benefits of the reciprocator’s reputation. The idea is firmly established in theory, but where is the evidence?

The question matters because, as Fehr observed, markets with moral hazard problems don’t work the same way as markets with perfect information. In trying to solve moral hazard problems, sensible people will tend towards some kind of long-term relationship, in which reputation plays an important role. But these mechanisms, while they can solve moral hazard problems, lead to difficulties of their own: rents are not competed away, while wages and prices do not respond enough to exogenous shocks.

Fehr outlined his experimental design: a laboratory experiment in which principals and agents needed to pair up to earn rewards, and where there was always either excess supply of or excess demand for agents. Agents could exploit principals by shirking after being paid, but frequently they did not, either because they were instinctive reciprocators, or because they were selfish but concerned to preserve their reputation as a non-shirker. As Fehr pointed out, looking at behaviour in the final round — where no reputational concerns applied — was a simple way for the experimenter, with hindsight, to distinguish between reciprocators and selfish agents.
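That identification logic is simple enough to simulate. The stylised sketch below is not Fehr’s design in detail; the share of reciprocators and the number of rounds are invented. Selfish agents maintain a cooperative reputation right up to the final round, when reputation ceases to have any value.

    import random

    random.seed(42)
    N_AGENTS, ROUNDS, P_RECIP = 1_000, 10, 0.3
    selfish = [random.random() >= P_RECIP for _ in range(N_AGENTS)]
    for t in range(1, ROUNDS + 1):
        final_round = (t == ROUNDS)
        # reciprocators honour the wage in every round; selfish agents shirk
        # only once a reputation for diligence has no further value
        shirkers = sum(1 for s in selfish if s and final_round)
        print(f"round {t:2d}: shirking rate {shirkers / N_AGENTS:.2f}")

The jump in shirking in the last round, from zero to roughly the selfish share of the population, is what lets the experimenter sort the reciprocators from the mimics after the fact.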

Fehr found that even where reciprocity was a weak tendency, the interaction of reciprocity and reputation provided a powerful disincentive to shirking.

These results are fairly well known — thanks to earlier work from Fehr and others — but Fehr went on to show that the reciprocal norms that developed in the laboratory also created wage rigidity. Whether agents were in excess supply or excess demand, rents had to be shared between agents and principals to give agents some incentive not to shirk. Wages, then, turned out not to respond much to excess supply and excess demand.

I confess to being a sceptic about the generalisability of these laboratory experiments, but Fehr made a robust case, not only by discussing replication (the record is mixed) but by demonstrating the precision of the questions that could be asked in the laboratory.


Reflections
The measure of a successful conference is not the average quality of the presentations but the quality of the best. By that standard I felt the RES conference was certainly a success. As well as the three main lectures, I could have highlighted some excellent special sessions, and singled out individuals who had something useful and original to say, and said it well.

From my selfish point of view, my three days in Warwick were days well spent. Several ideas I picked up at the conference made their way into the pages of the Financial Times in short order. With permission, I have given two examples here. I firmly intend to be back again next year to seek further inspiration.

Economic Forecasting - first published in FT Weekend Magazine, 17 May 2008. Reprinted with permission.

Economic forecasting is a long-standing joke, but the laughter has turned harsh and bitter in the wake of the credit crisis. The conventional wisdom now seems to be that economic forecasting is impossible, and that economic forecasters are charlatans.

‘In that case,’ asked Professor David Hendry in a spring lecture at the Royal Economic Society, ‘why am I wasting my time on this?’ For one of Britain’s most respected economists, Hendry gives the strong impression of a man ploughing a lonely furrow.

His choice of field — the theory of economic forecasting — is to blame. It is viewed with scepticism not only by laymen but by most academic economists, too. But his research, a heady mix of bewildering computer-assisted mathematics and straightforward common sense, has convinced me that economic forecasting shouldn’t be consigned to the realm of quackery quite yet.

There is a simple reason why most economic forecasts are useless, which is that forecasting is hard. We don’t fully understand the underlying economic processes that produce the results we wish to forecast (growth, inflation, house prices), nor can we measure all the variables accurately, nor anticipate the sudden shifts caused by politics or technological change. Some forecasts — notably of the price of shares and other assets — are intrinsically self-defeating, because if it was obvious that share prices would rise, they would have risen already.

But one of Hendry’s insights — developed with co-author Michael Clements — is that not all of these difficulties produce bad forecasts. What really screws up a forecast is a ‘structural break’, which means that some underlying parameter has changed in a way that wasn’t anticipated in the forecaster’s model.

These breaks happen with alarming frequency, but the real problem is that conventional forecasting approaches do not recognise them even after they have happened. Oil price forecasters have been predicting that the oil price will fall since 2000; all the while it has been climbing. The reverse problem applied during the 1980s: oil prices collapsed but the expert consensus was that the price would recover soon. That consensus persisted for years. The pound appreciated sharply in 1997; for the next eight years, forecasters predicted that the appreciation would soon be reversed.

In all these cases, the forecasts were wrong because they had an inbuilt view of the ‘equilibrium’ oil price or sterling exchange rate. In each case, the equilibrium changed to something new, and in each case, the forecasters wrongly predicted a return to business as usual, again and again. The lesson is that a forecasting technique that cannot deal with structural breaks is a forecasting technique that can misfire almost indefinitely.

Hendry’s ultimate goal is to forecast structural breaks. That is almost impossible: it requires a parallel model (or models) of external forces, anything from a technological breakthrough to a legislative change to a war. Some of these structural breaks will never be predictable, although Hendry believes forecasters can and should do more to try to anticipate them.

But even if structural breaks cannot be predicted, that is no excuse for nihilism. Hendry’s methodology has already produced something worth having: the ability to spot structural breaks as they are happening. Even if he cannot predict when the world will change, his computer-automated techniques can quickly spot the change after the fact.

That might sound pointless. In fact – given that traditional economic forecasts miss structural breaks all the time – it is both difficult and useful.
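Hendry’s automated procedures were not presented in enough detail to reproduce here, but a textbook change-point monitor gives the flavour of what spotting a break quickly means in practice. The sketch below, which is plainly not Hendry’s methodology, runs a one-sided CUSUM test on an invented series whose mean jumps partway through; the break is flagged within a handful of observations.

    import numpy as np

    rng = np.random.default_rng(7)
    # a series with a structural break: the mean jumps from 0 to 2 at t = 150
    y = np.concatenate([rng.normal(0, 1, 150), rng.normal(2, 1, 100)])
    mu, sigma, k, h = 0.0, 1.0, 0.5, 5.0  # in-sample mean/sd, slack, threshold
    s = 0.0
    for t, obs in enumerate(y):
        s = max(0.0, s + (obs - mu) / sigma - k)  # one-sided CUSUM statistic
        if s > h:
            print(f"break flagged at t = {t}")  # a few steps after t = 150
            break

A forecaster equipped with even this crude monitor would not keep predicting a return to the old equilibrium for years on end.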

Talking to Hendry, I was reminded of one of the most famous laments when the credit crisis first broke in the summer. ‘We were seeing things that were 25-standard deviation moves, several days in a row,’ said Goldman Sachs’s chief financial officer. One day should have been enough to realise that the world had changed.
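It is worth pausing on that lament. Under the normal model implicit in a ‘25-standard deviation’ claim, a single such day is not merely rare but essentially impossible, as a one-line calculation shows.

    from scipy.stats import norm

    p = norm.sf(25)  # P(Z > 25) for a standard normal variable
    print(f"probability of a single 25-sigma day: {p:.1e}")  # ~3.1e-138
    # at one observation per trading day, the expected wait for one such move
    # vastly exceeds the age of the universe; several in a row means the
    # model, not the world, has broken

In other words, the data were not freakish; the model was wrong, and a structural break had arrived.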

Digital piracy - first published in FT Weekend Magazine, 5 April 2008. Reprinted with permission.

What should top record labels, software giants and other media companies do about digital piracy? There are two obvious options: get tough and defend intellectual property rights with every legal and technological trick in the book, or tolerate some illegal copying in the hope of generating buzz and making money in some other way.

This is a debate that generates strong opinions, and where you stand would seem to depend on whether you’re an industry accountant or a new economy guru. (Chris Anderson, editor-in-chief of Wired, coined the phrase ‘Freeconomics’ to describe giving cheap things away for free in order to create buzz.)

But look closer and you realise that the corporate suits aren’t all adopting the same strategy. The music industry doesn’t seem to be able to make up its mind, first turning a blind eye to traditional mix-tape piracy, then cracking down on illegal file-sharing while raising the price of CDs, and finally slashing the prices of CDs in an attempt to compete head-on with downloads, legal and illegal.

Even more perplexing, Microsoft seems to hold two opinions at once: doing its best to prevent piracy on the Xbox console, but (as far as this outsider can tell) accepting that piracy of its Office suite of software is a fact of life.

Karen Croxson is a young economist at Oxford University who claims that there is method in the madness. She argues that there will never be a single correct trade-off between sales lost to piracy and sales generated by the buzz from pirated copies in circulation. That is because there are different kinds of potential consumer in different markets, or even in the same market at different times. A company’s most profitable response to piracy depends on what sort of consumers it is facing.

For example, the consumers who would pay for console games if given no alternative are probably the type of consumers who are happy to use pirated copies: tech-savvy youngsters. That means that an extra pirated copy in the console market is quite likely to mean a lost sale.

But the customers who will pay most for corporate software are, well, corporations. They won’t want to risk being caught and sued for piracy, so an extra pirated copy in the corporate software market probably isn’t a lost sale at all. The guilty party isn’t a customer, but a home user or a student who would never have stumped up full price. Thanks to piracy, though, that home user is now learning how to use Word and PowerPoint and making legal copies of Microsoft Office more valuable.

Croxson can even make sense of the record industry’s apparent volte-face on the pricing of CDs. When Napster was starting up and piracy was still a marginal activity, it made sense for record labels to write off a few cheapskate customers as a marketing expense and raise average prices for everyone else — presumably the older, more prosperous customers who were willing to pay for legal music. But as piracy spread even to those customers, the best strategy was to fight back by slashing prices.

In Croxson’s world, then, ‘promotional piracy’ is an alternative to discounted pricing. Both approaches are a way for companies to advertise their products or expand their user base. And as with discounted pricing, promotional piracy only makes sense as a strategy if there is a decent supply of customers who will eventually pay full price, which is not always true.
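The logic can be made concrete with a toy profit comparison. Everything in the sketch below is invented, the prices and the displacement and ‘buzz’ parameters included; it is not Croxson’s model, merely an encoding of her point that the profitability of tolerating piracy turns on who the pirates are.

    # each pirated copy either displaces a full-price sale (probability d)
    # or adds promotional value v; paying customers buy at full price anyway
    def profit(payers, price, pirates, d, v):
        return price * (payers - d * pirates) + v * pirates

    baseline = 50 * 100  # profit with no piracy at all
    # console games: pirates are mostly would-be buyers, so displacement is high
    console = profit(payers=100, price=50, pirates=40, d=0.8, v=2)
    # corporate software: pirates are home users who would never pay full price
    office = profit(payers=100, price=50, pirates=40, d=0.05, v=10)
    print(baseline, console, office)  # 5000, 3480, 5300

On these invented numbers the console-maker should fight piracy while the office-software vendor should shrug, which is roughly the pattern we observe.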

Corporations may be able to do more to maximise the gains or minimise the losses from piracy. Why not offer two versions of the product: a cheap-to-pirate, lower-quality product, and a high-end offering incorporating tight security? If Croxson is right, for some industries piracy is a wonderful distribution channel.

