
REF: 2021

Impact case study

Improving election polling methodologies


"The Joint Inquiry … after the 2015 General Election identified that the main cause of polling error was unrepresentative samples. As such, one of the most pressing issues for the industry to tackle has been to improve the quality of their sample, an undertaking pursued across the membership of the British Polling Council."

Evidence submitted by polling companies to the House of Lords' inquiry into the politics of polling


Research by Professor Jouni Kuha, Department of Statistics

Statistical research by LSE identified why polling during the 2015 general election campaign was inaccurate, leading to improvements in opinion polling across the industry.

What was the problem?

In the UK, public confidence in political opinion polls was badly damaged by the polls' failure to correctly forecast the outcome of the 2015 general election. Throughout the campaign, polls by all major companies consistently placed the Conservative and Labour parties neck and neck, with an average vote share of 34 per cent each. In the event, on 7 May 2015 the Conservatives won 38 per cent of the vote in Great Britain and Labour 31 per cent, giving the Conservatives under David Cameron a clear parliamentary majority, with 330 seats to Labour's 232.

Previous elections had seen polling inaccuracies of a similar magnitude. However, this result drew particularly negative attention because the level of inaccuracy meant the polls failed to identify the winning party. By suggesting the result would be a dead heat, the polls shaped party strategies and much of the media coverage ahead of the election, which focused on hung parliaments and possible configurations for another coalition government.

In the aftermath of the election, the polling industry's reputation was severely damaged, and there were calls for state regulation of polling, or for a ban on the reporting of polls during election campaigns.

What did we do?

Immediately after the election, the British Polling Council (BPC) and Market Research Society (MRS) set up an independent inquiry into the performance of the polls, led by Professor Patrick Sturgis (then University of Southampton, now LSE).

Professor Jouni Kuha was invited to join the inquiry panel as its sole statistician, based on his previous work on survey methodology, including election polling, and his collaborative research with social scientists. 

The inquiry examined the methodology of the opinion polls, considering possible causes for the industry-wide errors, including a late swing in party support, problems with question wording, differences between online and telephone surveys, and errors in how pollsters weight their samples of respondents. Given the similarity of all the final polls, the panel also addressed whether "herding" could be a factor, whereby pollsters make poll-design and reporting decisions in the light of other published polls, causing their results to cluster together.

The panel examined aggregate and raw polling data provided by the polling companies for their pre-election polls, and for post-election surveys in the several cases where pollsters had re-contacted participants. They compared these data with post-election data from the 2015 rounds of the British Election Study (BES) and the British Social Attitudes (BSA) surveys, which use a random-probability sample design. This is the gold standard for recruiting respondents but is time-consuming and expensive, so polling companies instead use quota sampling to find respondents for their telephone or online surveys. Pollsters then weight the raw data so that their sample is representative of the population by, for example, age, gender, and income; they also apply a further weight to account for each respondent's likelihood of voting.
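To make the weighting step concrete, the sketch below implements raking (iterative proportional fitting), the kind of procedure used to align a quota sample with known population margins. The respondents, categories, and target proportions are invented for illustration and are not any pollster's actual data or method.

```python
# A minimal raking (iterative proportional fitting) sketch.
# All respondents, categories, and target proportions are invented
# for illustration; they are not any polling company's figures.

from collections import defaultdict

# Each respondent: demographic categories plus stated vote intention.
respondents = [
    {"age": "18-34", "gender": "F", "vote": "Lab"},
    {"age": "18-34", "gender": "M", "vote": "Lab"},
    {"age": "35-54", "gender": "F", "vote": "Con"},
    {"age": "35-54", "gender": "M", "vote": "Lab"},
    {"age": "55+",   "gender": "F", "vote": "Con"},
    {"age": "55+",   "gender": "M", "vote": "Con"},
]

# Known population margins (e.g. from the census) for each variable.
targets = {
    "age":    {"18-34": 0.28, "35-54": 0.34, "55+": 0.38},
    "gender": {"F": 0.51, "M": 0.49},
}

weights = [1.0] * len(respondents)

# Rake: repeatedly rescale the weights so that each variable's
# weighted distribution matches its population margin.
for _ in range(50):
    for var, margin in targets.items():
        totals = defaultdict(float)
        for r, w in zip(respondents, weights):
            totals[r[var]] += w
        grand = sum(totals.values())
        for i, r in enumerate(respondents):
            weights[i] *= (margin[r[var]] * grand) / totals[r[var]]

# Weighted vote shares after raking.
shares = defaultdict(float)
for r, w in zip(respondents, weights):
    shares[r["vote"]] += w
total = sum(shares.values())
for party in sorted(shares):
    print(f"{party}: {shares[party] / total:.1%}")
```

The inquiry's point, in these terms, was that no amount of such reweighting can repair a sample that is unrepresentative on variables the pollster does not weight by.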

The BES and BSA surveys proved much more accurate in 2015, and so provided a point of comparison against which to test the pollsters' weighting methodologies. This comparison indicated that the flaws in opinion polling could be attributed to sampling practice, and the panel's research was able to rule out the other potential causes of error, such as postal voting and late swing.

The inquiry found that the systematic errors lay largely in the Labour and Conservative vote shares, and in the gap between them (smaller parties' vote shares were largely accurate). The primary cause was that pollsters had unrepresentative samples of Labour and Conservative supporters, a problem that the weighting procedures applied to the raw data did not correct. Professor Kuha worked principally on the inquiry's analysis of the representativeness of the samples and of the weighting methods.

What happened?

The inquiry's report made a series of recommendations. The British Polling Council and Market Research Society, which are membership bodies, swiftly adopted several of these recommendations on the transparency of reporting. The BPC introduced rules specifying that its members should publish the variables they use to weight their data, and disclose any changes they make to their methodology during the course of an election campaign, helping to improve trust in polling.

Given that the main cause of error was the quota sampling, the inquiry also recommended measures to improve pollsters' methodologies. Where polling companies are unable to fund the costlier random-sampling methods, they should endeavour to increase the pool of respondents, for example through longer fieldwork periods. The inquiry also suggested that pollsters review their processes for weighting samples, including considering new variables to account for under-sampled groups. Polling companies have used the inquiry's rigorous, industry-wide analysis of polling methodologies to take steps to improve their practice, making changes to their sampling, weighting, and turnout adjustment procedures; for example, YouGov has since introduced weighting by political interest and education.
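As a concrete illustration of one of these adjustments, the sketch below applies a simple turnout weighting: each respondent's demographically weighted vote intention is scaled by a turnout probability derived from a self-reported likelihood-to-vote score. The 0-10 scale, its mapping to a probability, and the data are assumptions for illustration, not any company's actual procedure.

```python
# A minimal turnout-adjustment sketch: scale each respondent's
# (already demographically weighted) vote intention by a turnout
# probability. The 0-10 score and its mapping are illustrative
# assumptions, not any pollster's actual method.

from collections import defaultdict

# (vote intention, demographic weight, likelihood to vote, 0-10 scale)
sample = [
    ("Con",   1.10, 10),
    ("Lab",   0.90,  8),
    ("Lab",   1.00,  5),
    ("Con",   1.20,  9),
    ("Lab",   0.85, 10),
    ("Other", 1.05,  7),
]

shares = defaultdict(float)
for vote, weight, ltv in sample:
    shares[vote] += weight * (ltv / 10.0)  # treat the score as P(votes)

total = sum(shares.values())
for party in sorted(shares):
    print(f"{party}: {shares[party] / total:.1%}")
```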

The panel further recommended improving how uncertainty, statistical significance, and margins of error for polls are calculated and reported, and discouraging media outlets from over-reporting small or statistically insignificant changes in the polls. In 2018, the BPC introduced a new requirement for its members to publish a statement about the level of uncertainty in poll estimates of parties' vote shares.
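For a sense of scale, the sketch below computes the textbook 95 per cent margin of error for a single vote-share estimate, under the optimistic assumption of simple random sampling; quota sampling and weighting make the true uncertainty larger, which is one reason the inquiry pressed for clearer reporting. The figures are illustrative.

```python
# Back-of-the-envelope 95% margin of error for a poll's vote-share
# estimate, assuming simple random sampling. Real quota samples with
# weighting have larger effective errors, so this is a lower bound.

import math

def moe_95(p: float, n: int) -> float:
    """Half-width of a 95% normal-approximation interval for a proportion."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

p, n = 0.34, 1000  # e.g. a 34 per cent share from a 1,000-person poll
print(f"{p:.0%} +/- {moe_95(p, n):.1%}")  # prints: 34% +/- 2.9%
```

Even on this optimistic calculation, a 1,000-person poll carries an interval of roughly plus or minus three points; the fact that virtually all the 2015 polls erred in the same direction pointed to systematic sampling error rather than random noise.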

When the House of Lords' Select Committee on Political Polling and Digital Media produced its report into political polling in 2018, it relied on the inquiry's work for its technical analysis of what went wrong during the 2015 election. The inquiry's research influenced the Lords' own recommendations, which included an expanded oversight and advisory role for the BPC, but not government regulation.

The polling industry's efforts appeared to bear fruit ahead of the 2019 general election, for which the companies further refined their methodologies within the framework outlined in the inquiry's report. The polls correctly predicted the result, and the BPC described their predicted vote shares as "more accurate … than in any contest since 2005".

Election polling garners uniquely high-profile media coverage, so these improvements also bolster the reputation of the profession and, in turn, the commercial success of the UK polling industry.

By helping to improve the quality of polling, the inquiry has improved the accuracy of the information available to parties, the media, and the public, as well as the media's reporting of polls, and has thereby contributed to the better conduct of democratic politics.
