One last post in English, and then I will not say anything more about this year’s presidential election. I do not even plan to write a “post-mortem” of the first round, given that my predictions are nothing exceptional (far from it) and that the most interesting answers will come after the results – and the study of vote transfers – of the legislative elections.
If you have read either of my two previous blog posts on the respective electoral prospects of Marine Le Pen and Emmanuel Macron in the upcoming French presidential election, you might have noticed that I did not use either polling or survey data to produce my predictions, relying instead entirely on past elections.
This is because I hold strong views about both opinion polls and electoral survey data when it comes to France. I detail these views below, as a kind of methodological appendix to my previous posts.
Note that my electoral predictions are nothing but semi-educated guesses, and the same goes for my methodological views: I have no scientific credentials in the fields of electoral studies, opinion polling or survey methodology.
Like the previous ones, this post is written in English because it is addressed to friends who do not read French, but who keep asking about the French presidential election and the risk that it might turn into a local equivalent of the Brexit “Leave” vote or Donald J. Trump’s election to the U.S. presidency.
Update, 24-25 March: added a section on non-random polls, including a link to (and several citations from) the UK 2015 “polling miss” report, as well as a link to the French Polls Committee.
French opinion polls
As in any other wealthy representative democracy, opinion polls provide a large share of the electoral horse race spectacle. And just like anywhere else, these opinion polls have sometimes been awfully wrong, even though their average historical record is, by and large, accurate (more on that later).
There are lots of reasons why one might be skeptical of opinion polls in general, and French opinion polls in particular. The classical Bourdieusian critique applies here, and French political scientist Alain Garrigou has done lots to popularise it: polls are partly performative, insofar as they frequently ask questions that respondents had not thought about before. In the context of a two-round election, this particularly applies to hypothetical “second-round” voting scenarios.1
Another problem, which is perhaps more prevalent in French opinion polling than elsewhere, has to do with sampling. In recent years, the issue of whether opt-in Internet sampling creates representative samples has attracted a lot of discussion, but an older and, in my view, more problematic issue has to do with quota sampling, another non-probabilistic method that is widely used by French polling institutes, principally because it is much less expensive than proper random sampling.
The main problem with quota sampling is that it creates a layer of opacity around the precise re-weighting methods used to make the poll as representative as possible of the target population – an operation called “redressement” in the jargon of French polling institutes, which develop their own “in-house” methods for doing so, and then sit on them like some kind of industrial secret.2
This is a huge issue: even when quota sampling seems to be able to “emulate” simple random sampling, we have no idea how that happens. To me, this makes the defence of quota sampling hopeless, perhaps even more than the trivial observation that the standard error of a quota sample has no formal relationship to sample size, which makes the presentation of error margins in quota-based polling results highly misleading to the public.3
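To see what the last point means in practice: the “plus or minus two to three points” error margins reported alongside poll results come from the textbook simple random sampling formula, which only holds for a probability sample of size n – precisely what a quota sample is not. A minimal Python sketch of that computation (purely illustrative, and not taken from any polling institute’s documentation):

```python
from math import sqrt

def srs_margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p estimated from a simple
    random sample of size n. No comparable closed-form expression
    exists for a quota sample, whatever its size."""
    return z * sqrt(p * (1 - p) / n)

# A candidate polled at 25% in a sample of 1,000 respondents:
print(round(100 * srs_margin_of_error(0.25, 1000), 1))  # about 2.7 points
```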
Troubles with non-random polls
Allow me to elaborate for a few paragraphs on the point made above.
The idea of “emulating” simple random sampling is, at its core, fundamentally flawed. Nature cannot be fooled: pseudo-randomness is non-randomness. Even simple random samples will contain non-random (missing data) components, which are hard enough to deal with. Adding non-randomness “at the root” of the sampling procedure is just madness in the present context.
And yet this is something that polling companies seem to be okay with, and not just in France: just next door, the British polling market seems to be configured exactly the same way, and even though they have fewer things to predict over there, since the party system contains fewer (national) elements, they also, from time to time, get it completely wrong, precisely because their samples are non-random.
Here’s the conclusion of the committee that investigated the serious polling miss that occurred at the 2015 UK general election, which I mentioned earlier as the true starting point of the “Brexit-Trump” series of failed predictions:
… the primary cause of the polling miss in 2015 was unrepresentative samples… The statistical adjustment procedures applied to the raw data did not mitigate this basic problem to any notable degree. The other putative causes can have made, at most, only a small contribution to the total error. (p. 4, emphasis in original)
And a few pages down in the same document:
… all GB pollsters in 2015 took a common approach to sampling and estimation: they assembled a quota sample of eligible individuals, which was then weighted to known population totals. They asked sample members their vote intention and likelihood of voting, derived a predicted sub-sample of voters, and produced weighted estimates of vote intention for this sub-sample. (p. 22).
And then the report goes on to explain how the BES and BSA surveys are fundamentally different, because they use simple random sampling, and because “both surveys produced good post-election estimates of the difference in vote shares between the Conservatives and Labour” (p. 27; the important word here is “post”: the surveys were run after the election, so considerations about turnout and voting uncertainty did not apply).
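For readers who prefer code to prose, here is a toy version of the pipeline described in the report quote above: assemble a quota sample, weight it to known population totals, keep the likely voters, and compute weighted vote-intention estimates. Every value, column name and threshold below is invented for the example; real pollsters use many more quota variables and far more elaborate weighting schemes.

```python
import pandas as pd

# Invented quota sample: age band, vote intention, self-reported
# likelihood of voting (0 to 10).
sample = pd.DataFrame({
    "age_band":   ["18-34", "18-34", "35-59", "35-59", "60+", "60+"],
    "intention":  ["A", "B", "A", "B", "A", "B"],
    "likelihood": [9, 4, 10, 8, 10, 10],
})

# Known population shares for each age band (census-style totals).
population_shares = {"18-34": 0.30, "35-59": 0.45, "60+": 0.25}

# Cell weighting: each respondent gets population share / sample share.
sample_shares = sample["age_band"].value_counts(normalize=True)
sample["weight"] = sample["age_band"].map(
    lambda band: population_shares[band] / sample_shares[band]
)

# Keep only "likely voters" (here, likelihood >= 8), then compute
# weighted vote-intention shares within that sub-sample.
likely = sample[sample["likelihood"] >= 8]
print(likely.groupby("intention")["weight"].sum() / likely["weight"].sum())
```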
Issues with non-random quota samples are discussed in detail in the Francophone Wikipedia entry for opinion polls. The entry has several problems with its sources and the balance of its sections, but it otherwise makes a good read on the topic of why opinion polling is a problematic source of information, especially when it comes to voting behaviour. And if you need an example of a recent opinion poll to illustrate how black-boxed polling methodology is, here’s one.4
What can electoral polls be used for?
The flaws outlined above do not mean that opinion polls cannot produce either interesting forecasts or useful information. Voting uncertainty, for instance, can be measured quite accurately through polls, although generalising such measures to the electorate remains a very risky exercise in the absence of detailed information on the composition and re-weighting/redressement of poll samples. Still, compared to past elections, voting uncertainty currently seems to be very high, especially on the left.
Another interesting electoral use of opinion polls consists in aggregating as many polls as possible, re-weighting their results according to what we know of their past performance and overall reliability, and then using the aggregate to forecast election results. This style of electoral forecasting has been popularised by Nate Silver’s forecasts of U.S. elections, some of which were highly accurate, some of which were far off the mark.5
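As a rough illustration of the idea – and nothing more than that, since real aggregators, Silver’s included, also model house effects and pollster track records – here is a minimal Python sketch that averages made-up polls for a single candidate, weighting each poll by its sample size and discounting it by its age:

```python
import numpy as np

# Made-up polls for one candidate: (reported share, sample size, age in days).
polls = [
    (0.260, 1000, 30),
    (0.240, 1500, 20),
    (0.255,  900, 10),
]

def aggregate(polls, half_life=15.0):
    """Weighted average of poll shares, with weights proportional to
    sample size and halved every `half_life` days of poll age."""
    shares  = np.array([share for share, _, _ in polls])
    weights = np.array([n * 0.5 ** (age / half_life) for _, n, age in polls])
    return np.average(shares, weights=weights)

print(round(aggregate(polls), 3))  # about 0.25
```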
There is currently one attempt to build a Silver-style, “polls-only” forecast of the 2017 French presidential election, which came to my attention thanks to a tweet by François Massot. The forecast is produced by Victor Poughon and published on the “Depuis 1958” (“Since 1958”) website. The current result of the forecast, which is based on transparent polling data, unambiguously predicts that Emmanuel Macron will win the election:6
The unequivocal result of the 2017 forecast is, to me, less interesting than the “retrofit” of the model to previous elections. The notes on the forecast’s Bayesian methodology show the model fit to the 2012 presidential election, with the difference between the actual (red) and predicted (black) results:
I also find this kind of exercise interesting because it produces a rough estimate of how much of the vote might go to first-tier (vote shares above 20%), second-tier (10-20% vote shares) and third-tier (vote shares of 5% or less) candidates. In 2012, these tiers represented around 55%, 38% and 7% of the first-round vote respectively. For 2017, the model predicts 44%/44%/12% vote shares, which is consistent with estimates of higher voting uncertainty:
Final observations: note how wide the credible intervals around the predicted vote shares shown above are. Also note that the model does not say anything about turnout, which might be lower this year. Last, note that the model cannot inform us about who votes for whom, i.e., about voter realignments (which can also differ between presidential and legislative elections). For all these reasons, we need other forms of electoral data to supplement what polls might provide.
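As an aside, for readers wondering what credible intervals over predicted vote shares look like in practice, here is a toy polls-only Bayesian model in Python. It is emphatically not Victor Poughon’s model, which is documented on the “Depuis 1958” website: it simply pools made-up poll respondents into multinomial counts, puts a flat Dirichlet prior on the first-round vote shares, and reads 95% credible intervals off the posterior.

```python
import numpy as np

rng = np.random.default_rng(2017)

# Made-up pooled respondents across several polls, one count per candidate.
candidates = ["Macron", "Le Pen", "Fillon", "Mélenchon", "Hamon"]
pooled_counts = np.array([780, 770, 580, 540, 330])

# Flat Dirichlet(1, ..., 1) prior + multinomial counts -> Dirichlet posterior.
posterior = rng.dirichlet(alpha=1 + pooled_counts, size=10_000)

# Posterior mean and 95% credible interval for each candidate's share.
for name, draws in zip(candidates, posterior.T):
    low, high = np.percentile(draws, [2.5, 97.5])
    print(f"{name:10s} {draws.mean():.3f}  [{low:.3f}, {high:.3f}]")
```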
French electoral surveys
Perhaps especially in a multi-party system, highly detailed and sophisticated electoral surveys are much, much more important than polls for getting the data generating process – i.e. what makes people cast a vote, and for whom – right. Do we have (access to) such surveys in France?
The answer is: kind of. Here are, for instance, some results from the most recent CEVIPOF/Ipsos/Steria electoral survey, the Enquête Électorale Française 2017, which also show high rates of voting uncertainty, especially when it comes to the left-wing candidates:
What credit might we lend to that survey? There is, in fact, no way to tell, except for CEVIPOF or Ipsos/Steria insiders. The survey data are not public, and the sketchy technical explanations provided in the media suggest that even for the official partners of the survey, French newspaper Le Monde and the Jean-Jaurès Foundation, the exact methodology that produced the data is a black box.
Lack of transparency from private polling institutes is one thing; but partly publicly funded research is different, or at least should be, in my view. The conclusions that insiders are drawing from the Enquête Électorale Française 2017 are highly valuable, but they do not allow outsiders to learn anything from the survey data itself, or to reuse the data in different ways. In short, French electoral data, just like many other forms of data, suffer from serious availability and cumulativity issues.
Further problems with survey data
There are several other problems with “flat microdata” (repeated cross-sections), which make up most of polling and electoral survey data.
First, to answer some questions, we need proper longitudinal (panel) data, that is, data that are produced by asking repeated questions to the same respondents at different points in time. Second, for many other questions, we need relational (network) data, of the kind produced by family-wide sampling (when the entire household is sampled). Last, in the many cases (including voting) where geography matters, we often need spatial (geolocated) data.
A few years ago, my co-blogger Joël called for the launch of a longitudinal French National Election Study, suggesting that the survey should allow multilevel analysis—that is, a form of analysis that can observe and control for the existence of hierarchical (e.g. local/regional/national) patterns:
Such a survey exists in the United States. France has no equivalent survey; the French national statistics agency does not seem to be interested in running one; and the European Social Survey, which has many of the qualities that such a survey would call for, can only imperfectly fit the bill. Last, the French Ministry of the Interior cannot produce such a survey, for fear of being accused of producing a partisan one.
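To make “multilevel analysis” slightly more concrete, here is a minimal sketch of the kind of model such a survey would support, written in Python with statsmodels. The data file, variable names and grouping variable are all hypothetical, and a binary outcome such as turnout would call for a multilevel logistic model in practice; the linear version below is only meant to show the varying-intercepts structure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical respondent-level survey file, one row per respondent.
df = pd.read_csv("election_study.csv")

# Linear mixed model: political interest (0-10) explained by age and
# education, with a varying intercept for each région, so that local
# and national patterns can be separated instead of averaged away.
model = smf.mixedlm("interest ~ age + education", data=df, groups="region")
result = model.fit()
print(result.summary())
```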
As a result of the many limitations outlined above, there is no satisfactory data source to turn to when one wants to analyse voting intentions in France. Sure, there are polls and electoral surveys, but neither is currently transparent enough to allow for independent reanalysis. Instead, what we have in France is a small club of (French) election specialists, working on disparate data sources that only very imperfectly cumulate.
Things might be getting better: France has a data repository, the Réseau Quételet, which lets researchers access some of the existing electoral studies produced under the supervision of various research teams. Note, however, that the current “French Election Studies,” a by-product of the Comparative Study of Electoral Systems (CSES), are limited to presidential-election years, and are available only for 2007 and 2012, even though they seem to have been running for longer than that.
Further reading
This blog has many posts about opinion polls, some of which focus on (older) French presidential elections: see here, here and here.
For a fuller assessment of French survey data, take a look at the appendix of Vincent Tiberj’s latest book on cohort and generational effects among French citizens. The book itself is absolutely excellent, and the appendix provides a useful list of surveys – many of them cross-national – that contain data on French social and political attitudes.7
1. And outside of electoral periods, one might argue that polls co-produce the presidential approval ratings that they claim to measure. Nevertheless, I do not agree with Garrigou that prohibiting electoral polls is the way to go. [↩]
2. Furthermore, there is no French equivalent of either the ICPSR or the Pew Research Center, which means that there is no repository of polling data to turn to in order to inspect raw survey data. As a consequence, the entire system of French opinion polls is effectively black-boxed. [↩]
3. The committee in charge of regulating polls has stated that polls should come with an empirical “margin of uncertainty” rather than a theoretical one, which actually makes things worse, since “empirical” can mean pretty much anything here. [↩]
4. The linked example characteristically provides none of the methodological explanations that should come with any polling result: how the sample was collected, how nonresponse was handled, what denominators were used for population/electorate estimates, and so on. [↩]
5. For another example of aggregated-polls-based forecasting, see Chris Hanretty’s forecast of the 2013 Italian general election. [↩]
6. Note that the 2017 forecast goes beyond the first round of the election, and actually attempts to predict second-round results. Also note that Macron has always scored high in approval-rating polls, which likely played a role in his candidacy. In past elections, Alain Poher and Édouard Balladur also ran for the French presidency (in 1969 and 1995 respectively) with similarly high approval ratings in the polls. Both lost. [↩]
7. Namely: the EB, EVS, ESS, EES, CEVIPOF and TriÉlec 2012 surveys, the CNCDH Racism Barometer, the Insee Participation Survey, the Ined TeO survey, and the 1967 French National Election Survey by Philip E. Converse and Roy Pierce. One might consider adding the ISSP and the Insee EDP (census extract) to the list. [↩]