Media Briefings

A BOT IN THE ECHO CHAMBER: Evidence that fake #Brexit news spread by fake users helped to drive the two sides apart

  • Published Date: April 2017

Automated software agents – ‘bots’ – used to spread either ‘leave’ or ‘remain’ social media stories during and after the Brexit referendum drove the two sides of the debate further apart. That is the central finding of research by Yuriy Gorodnichenko, Tho Pham and Oleksandr Talavera, to be presented at the Royal Economic Society's annual conference at the University of Bristol in April 2017.

The authors analyse more than 28 million tweets sent between 24 May and 17 August 2016 with a #Brexit hashtag. They find that stories biased to either leave or remain spread by bots shifted human opinions, provided that the humans were already sympathetic to the content. So a bot supporting leaving the EU had a stronger effect on a leave supporter than on a remain supporter – the ‘echo chamber’ effect. Leave supporters were also more likely to be influenced by bots than remain supporters were.

‘Social bots spread and amplify misinformation, thus influencing what humans think about a given issue’, the authors say. ‘Policy-makers and social media should seriously consider mechanisms to discourage the use of social bots to manipulate public opinion.’


The rise of the internet has changed the way people acquire and communicate information. Indeed, social media like Facebook or Twitter are now key means to access and share information for hundreds of millions of people. The intensity and volume of information flows on social media is incredibly high and, during high-impact events, public engagement through social media platforms can quickly become overwhelming.

Furthermore, given the ‘fog of war’ surrounding public debates, amplified by the flood of news and interpretations on Facebook, Twitter and the like, public opinion may be influenced by factors that were not previously considered important.

For example, not all social media users are real. Some, if not many, of them are actually automated agents, so-called social bots. And real users, or humans, can be deceived by bots.

For example, bots could be used to influence social network users’ opinions by inflating the popularity of certain topics or spreading fake news. Many of us (real users) have direct, or indirect, interaction with bots without knowing their real identity. In this new setting, could public opinion in social media be manipulated by bots?

We provide new insights into information dissemination in social media during high-impact events using Twitter data related to the EU Referendum between 24 May 2016 and 17 August 2016. Our data set contains 28.6 million tweets with the ‘#Brexit’ hashtag. We document properties of tweets (pro-leave or pro-remain tweets) and users (bot agents or human agents) and track the information flows through interactions among users.
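As a concrete illustration of the tweet-labelling step, the sketch below assigns a stance to a tweet by counting partisan hashtags. This is a minimal stand-in, not the authors' actual classification method; the hashtag lists and the `classify_stance` function are assumptions chosen for illustration only.

```python
# Illustrative sketch (not the authors' pipeline): label a tweet pro-leave
# or pro-remain from the partisan hashtags it contains.
# The hashtag lists below are assumptions, not taken from the paper.

PRO_LEAVE = {"#voteleave", "#leaveeu", "#takecontrol"}
PRO_REMAIN = {"#strongerin", "#voteremain", "#remain"}

def classify_stance(text: str) -> str:
    """Return 'leave', 'remain', or 'mixed/neutral' from hashtag counts."""
    tags = {w.lower().rstrip(".,!?") for w in text.split() if w.startswith("#")}
    leave_hits = len(tags & PRO_LEAVE)
    remain_hits = len(tags & PRO_REMAIN)
    if leave_hits > remain_hits:
        return "leave"
    if remain_hits > leave_hits:
        return "remain"
    return "mixed/neutral"

print(classify_stance("#Brexit is coming #VoteLeave #TakeControl"))  # leave
```

In practice, stance classification over 28.6 million tweets would need a richer approach (sentiment models, user-level features), but a hashtag rule of this kind conveys the basic idea of splitting the corpus into pro-leave and pro-remain content.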

We find that information about Brexit spread quickly among different types of agents (for example, bots versus humans). Most of the reaction happened within 50-70 minutes, suggesting that for issues of critical importance to people, or issues widely covered in the media, information rigidity could be very low.
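One simple way to quantify the speed of reaction described above is to measure the delay between an original tweet and each response to it, then summarise the distribution. The sketch below does this over hypothetical timestamps; the data and the 65-minute result are invented for illustration and are not figures from the study.

```python
# Illustrative sketch: median reaction delay between an original tweet and
# its retweets. The (original, retweet) timestamp pairs are hypothetical;
# real work would draw them from the Twitter data set.
from datetime import datetime
from statistics import median

pairs = [
    (datetime(2016, 6, 24, 8, 0), datetime(2016, 6, 24, 8, 45)),
    (datetime(2016, 6, 24, 8, 0), datetime(2016, 6, 24, 9, 5)),
    (datetime(2016, 6, 24, 8, 0), datetime(2016, 6, 24, 9, 10)),
]

delays_min = [(rt - orig).total_seconds() / 60 for orig, rt in pairs]
print(f"median reaction delay: {median(delays_min):.0f} minutes")
```

A low median delay across many cascades would be consistent with the 50-70 minute reaction window reported in the briefing.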

We also observe that a shock to a given agent type (for example, a ‘remain’ supporter) had a persistent effect on that same type (for example, other ‘remain’ supporters). This response is stronger than the responses to shocks emanating from other agent types (for example, ‘leave’ supporters or ‘remain’ bots).

Another important finding is that human opinions about Brexit could be manipulated by bots. The degree of influence depends on whether a bot provides information consistent with the priors of a human.

More specifically, a bot supporting leaving the EU has a stronger effect on a ‘leave’ supporter than on a ‘remain’ supporter. Further investigation shows that ‘leave’ supporters are more likely to be influenced by bots than ‘remain’ supporters are.

These results suggest that the dissemination of information is consistent with ‘echo chambers’ – that is, a situation in which people select themselves into groups of like-minded people, so that their beliefs are reinforced and information from outsiders may be ignored. As a result, information becomes fragmented rather than uniform across people.

In summary, social media could indeed affect public opinion in new ways. Specifically, social bots could spread and amplify misinformation thus influencing what humans think about a given issue.

Moreover, social media users are more likely to believe (or even embrace) fake news that is in line with their opinions. At the same time, these users distance themselves from reliable information sources reporting news that contradicts their beliefs. As a result, information polarisation increases, which makes reaching consensus on important public issues more difficult.

Obviously, regulating information flows is an extremely delicate business in a democratic society characterised by diverse views and tolerance for that diversity. But cherishing diversity does not mean that one should allow lies and manipulation to be dumped into the public sphere to the extent that the public cannot make well-informed decisions.

Given this, policy-makers and social media platforms should seriously consider mechanisms to discourage the use of social bots to misinform the public and to manipulate public opinion.


‘Social network, sentiment and political outcomes: Evidence from #Brexit’
Yuriy Gorodnichenko (UC Berkeley), Tho Pham (Swansea University) and Oleksandr Talavera (Swansea University)

Tho Pham
Swansea University