Could Twitter cause a recession? The Bank of England turns to social media monitoring.
The Bank of England’s chief economist has announced that the bank will be monitoring social networks to better understand the state of the economy.
According to Sky News, Andy Haldane has been tasked with using “unconventional sources” of data to make economic predictions. The thinking is that monitoring, say, how many people are searching for new jobs could help flag up a downturn before the “official figures” pick up on it. This is a good idea – it means that conceivably the ups and downs of boom and bust could be handled more smoothly, with the Bank and the government able to deploy their economic tools earlier, heading off bigger problems. For example, if mortgage data showed a sudden spike in defaults, the government could do something about the inaccurate rating of mortgage bonds a bit sooner, and hopefully avoid something like the 2008 crisis.
With the plan to monitor social networks like Twitter and Facebook, though, I can’t help but worry: unless the BoE is really sure what it is doing, it should approach this with extreme caution.
You can understand why monitoring social media is an attractive option: collecting official data is really hard. For example, one of our key economic indicators is the rate of inflation – but you don’t just figure out what it is by sticking an inflationometer into a vat of economic activity. Essentially, it is a very carefully calculated estimate, based on tracking the prices of certain goods and services. These goods and services change all of the time too, because they are handpicked in an attempt to be representative of the sorts of things that people buy. In 2014, video on demand services and DSLR cameras were added.
This is the same for many official statistics – and choosing what to track can be as important as the tracking itself. In other words, there’s an element of art to the science.
So really, Twitter is no different. The way social media monitoring works is by using complex “semantic” algorithms to try to figure out the meaning behind messages. For example, counting tweets that contain phrases like “made redundant”, “overdrawn” or “pay rise” might, in aggregate, be a good indicator of the health of the economy.
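At its crudest, the approach can be sketched as simple phrase-counting. This is a minimal illustration only – the phrase lists are made up for the example, and real systems would use far more sophisticated semantic models than substring matching:

```python
from collections import Counter

# Hypothetical phrase lists for illustration -- not any real system's vocabulary.
NEGATIVE_PHRASES = ["made redundant", "overdrawn", "lost my job"]
POSITIVE_PHRASES = ["pay rise", "new job", "promotion"]

def economic_sentiment(tweets):
    """Tally negative vs positive economic phrases across a batch of tweets."""
    counts = Counter()
    for tweet in tweets:
        text = tweet.lower()
        counts["negative"] += sum(phrase in text for phrase in NEGATIVE_PHRASES)
        counts["positive"] += sum(phrase in text for phrase in POSITIVE_PHRASES)
    return counts

tweets = [
    "Just been made redundant, awful day",
    "Got a pay rise at last!",
    "Overdrawn again before payday...",
]
print(economic_sentiment(tweets))  # Counter({'negative': 2, 'positive': 1})
```

Even this toy version hints at the difficulty: “made redundant” in a retweeted joke counts exactly the same as a genuine job loss.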
However, sentiment analysis is a new field, and by its very nature is hard to do. Just ask former Scottish First Minister Alex Salmond, who, according to this Scotsman article, thought he was on track to win the independence referendum massively in the days leading up to it. Even though all of the conventional polls suggested Scotland would remain part of the United Kingdom, and nearly every expert agreed, Salmond maintained his belief on the strength of analysis from a Canadian company that was monitoring social media and noticing seemingly greater support for independence amongst Twitter users. Despite the social media noise, though, the reality was rather different.
You don’t have to be a genius to see why this approach is flawed. Whereas traditional polls are demographically weighted, what is popular on Twitter is dominated by who can shout loudest. The majority of the older people who voted No probably don’t use social media at all. And in terms of sheer volume, if you’re a strong partisan on either side of the issue you’re going to be tweeting about it much more than the people who are less engaged… even though your votes are worth exactly the same amount. (The Scottish National Party’s digital ‘Cybernat’ supporters have also become infamous for their social media zealotry, so if anyone was endlessly banging on about the Yes vote in the run-up to polling day, it was them.)
Even if you wanted to weight your social media sample demographically, this is also hard to do: sure, you can guess someone’s details – such as their age, socioeconomic group and gender – from their profile, but these will only ever be guesses. This, combined with the practice of trying to read meaning from millions of tweets in the first place, is full of multiplying uncertainties. And even if your hypothetical system has correctly identified that this person is a 70-year-old, low-income, working-class pensioner from Glasgow who is enthusiastically voting Yes, doesn’t the fact that she is using Twitter in the first place, unlike the rest of her cohort, make her weird and unrepresentative?
When analysing social media there is also the problem of contagion. This wouldn’t be a problem if you’re trying to monitor, say, how many people are tweeting about the X-Factor, but when it comes to economic indicators, how reliable could it conceivably be? Imagine if Stephen Fry tweeted one night “Tell me about the time you lost your job” and received thousands of replies about job losses – would this set off an alarm at the Bank of England due to spikes in the use of the phrase “job loss”? How far into organising an emergency meeting of the Monetary Policy Committee would they get before someone checks Twitter to discover it was just a false alarm?
Heck, could Twitter users even conspire to crash the economy? What if a service like Thunderclap was used by North Korean hackers, using thousands of dummy or compromised accounts, to tweet urging everyone to withdraw their cash from RBS? We see social media “storms” every other day as virtual pitchforks are wielded – what if this was all directed at causing a run on the banks? Noting the Twitter frenzy, the Bank of England could start hiking interest rates to disincentivise the withdrawal of cash, and before you know it there is economic instability because nobody knows what is going on.
Obviously this all seems a bit ludicrous, and any social media monitoring will presumably be done in tandem with many other data sources. Perhaps I’m just naive – perhaps the social media monitors have a plan to combat all of these layers of uncertainty? I hope the Bank of England knows what it is doing.