Look, suffice it to say, I’m hardly thrilled by the events of Tuesday. But unlike so many whose minds I truly can’t begin to grasp, I’m not unfriending anyone who might have actually chosen to vote for That Guy. I’m far more judgmental of his character and intellect, largely based on the testimony of people who actually know and have worked with him, and that’s not even counting his former cabinet members, who seem to have been coming out of the woodwork of late in the hopes of selling a book. I haven’t bought many books lately, and based upon my monthly cash flow I’m not likely to be doing so any time soon.
But I’m a data-driven realist first and foremost, and there’s simply too much overwhelming evidence from the dissection of Tuesday’s events pointing to one clear conclusion: no matter how unintelligent the winner may indeed be, he’s got an awful lot of geniuses (evil or not) surrounding him who produced a far superior game plan and more impactful messaging than did She. The ever-engaging Chris Cillizza dropped this extremely compelling narrative in his THE MORNING newsletter just minutes ago as concrete evidence:
This chart, from the Financial Times, tells that story in stark terms: Trump improved on his 2020 showing in 48 of the 50 states. And in many heavily Democratic states — California, New York, Illinois — he improved by LARGE numbers.
(I)f you go even more local, the numbers are equally eye-popping. This, from my friend Derek Thompson at The Atlantic, is remarkable:
This is a moment of soul-searching for urban progressives. Chicago, Houston, and Dallas shifted ~10 points right vs ’20. Miami moved 19 pts right. Queens 21 pts right. The Bronx: 22 pts.
All of this is now producing two overwhelming reactions from those who are just coming out of a self-inflicted blackout and/or food coma: How the F could this happen? And why the F didn’t anyone see this tsunami coming?
The former, of course, can be answered (or at least attempted) by far wiser pundits than moi. The latter, well, that’s a bit more up my alley and, frankly, it’s impacting me at the moment way more than the spectre of 47 triumphantly returning to Washington. Upon further review, perhaps some of us in that line of work should have been paying as much attention as did my longtime research comrade in arms David Giles, who yesterday shared a telling piece from SCIENTIFIC AMERICAN’s Allison Parshall that dropped late last week and proved to be eerily prescient:
Polls are a staple of preelection coverage and postelection scrutiny in the U.S. The results of these political surveys drive news cycles and campaign strategy, and they can influence decisions of potential donors and voters. Yet they are also growing more and more precarious.
“These days, we are using this technique that’s very vulnerable” to making huge mistakes, says Michael Bailey, a professor of American government at Georgetown University and author of the recent book Polling at a Crossroads: Rethinking Modern Survey Research.
Those mistakes may be familiar for those who followed the last two presidential elections, when polls underestimated Trump’s support. Pollsters are hoping to learn from their mistakes, but their results are still largely a judgment call. Here’s why.
People don’t respond to polls anymore
For decades, pollsters have been dealing with an “ongoing crisis” of falling response rates, Karpf says. Polls are only as good as their sample: the wider, more representative swath of the public that responds to polling calls, the better the data. The ubiquity of the landline telephone in the latter half of the 20th century was a unique gift to pollsters, who could rely on around 60 percent response rates from randomly dialed phone numbers to hear from a representative slice of the population, Bailey explains.
Today technological changes—including caller ID, the rise of texting and the proliferation of spam messages—have led very few people to pick up the phone or answer unprompted text messages. Even the well-respected New York Times/Siena College poll gets around a 1 percent response rate, Bailey points out. In many ways, people who respond to polls are the odd ones out, and this self-selection can significantly bias the results in unknowable but profound ways.
“The game’s over. Once you have a 1 percent response rate, you don’t have a random sample,” Bailey says.
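To make Bailey’s point concrete, here’s a minimal sketch in Python with entirely invented numbers (a dead-even race, a 1.2 percent response rate on one side and 0.8 percent on the other); it’s not any pollster’s model, just the arithmetic of self-selection:

```python
import random

random.seed(2024)

# Entirely invented numbers: a dead-even race where one candidate's
# supporters are slightly more willing to answer the phone.
POPULATION = 1_000_000
TRUE_SHARE_A = 0.50        # candidate A's true share of the electorate
RESPONSE_RATE_A = 0.012    # A's supporters respond 1.2% of the time
RESPONSE_RATE_B = 0.008    # B's supporters respond 0.8% of the time

voters_a = int(POPULATION * TRUE_SHARE_A)
voters_b = POPULATION - voters_a

# Simulate who actually picks up and answers the poll.
respondents_a = sum(1 for _ in range(voters_a) if random.random() < RESPONSE_RATE_A)
respondents_b = sum(1 for _ in range(voters_b) if random.random() < RESPONSE_RATE_B)

overall_rate = (respondents_a + respondents_b) / POPULATION
polled_share_a = respondents_a / (respondents_a + respondents_b)

print(f"Overall response rate: {overall_rate:.1%}")    # roughly 1%
print(f"True share for A:      {TRUE_SHARE_A:.1%}")    # 50%
print(f"Polled share for A:    {polled_share_a:.1%}")  # roughly 60%
```

A 50/50 race reads as roughly 60/40, and nobody had to lie to anyone; the self-selection alone does the damage.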
As a result, predictive modeling and weighting (essentially what panels that measure media consumption do) are increasingly being employed to make up for these sampling gaps. Which leads to an even more Captain Obvious conclusion:
The assumptions in these models could easily be wrong
Pollsters are generally making defensible, good-faith decisions about how to stretch and compress their data into the shape of the voting electorate. But these are still educated guesses, and reasonable minds may differ. “Even though they are all reasonable assumptions, they are different ones. Which assumptions are right, we don’t know,” says David Karpf, who researches technology and elections at George Washington University.
The 2020 election showed that there were aspects of Trump’s support that could not be fully accounted for with the demographic variables that pollsters had come to rely on. So this year many are using a blunter technique to compensate: weighting respondents’ answers based on who they say they voted for last time around, a method called recall-vote weighting. This makes the 2024 polls conform to 2020’s turnout—and, in practice, inflates Trump’s support.
Pollsters are “leaning hard” into recall-vote weighting this time around, Bailey says. But this technique has a few key limitations. First, it’s not clear that the electorate in 2024 will look like 2020.
I dare say that the findings Cillizza shared prove that point.
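For the data nerds among us, here’s a minimal sketch of what recall-vote weighting actually does, with a handful of entirely made-up respondents and a rough 52/48 two-party target for 2020; it illustrates the mechanics, not any pollster’s actual model:

```python
from collections import Counter

# Ten entirely made-up respondents: (recalled 2020 vote, current 2024 preference).
respondents = [
    ("Biden", "Harris"), ("Biden", "Harris"), ("Biden", "Trump"),
    ("Biden", "Harris"), ("Biden", "Harris"), ("Biden", "Harris"),
    ("Trump", "Trump"), ("Trump", "Trump"), ("Trump", "Harris"),
    ("Trump", "Trump"),
]

# The target the sample gets reweighted toward: a rough 52/48 two-party split in 2020.
target_2020 = {"Biden": 0.52, "Trump": 0.48}

n = len(respondents)
recalled_counts = Counter(recalled for recalled, _ in respondents)

# Each recalled-vote group gets weight = target share / observed share.
weights = {group: target_2020[group] / (count / n) for group, count in recalled_counts.items()}

# Apply those weights to the current-preference question.
weighted = Counter()
for recalled, current in respondents:
    weighted[current] += weights[recalled]

total = sum(weighted.values())
for candidate in ("Harris", "Trump"):
    raw_share = sum(1 for _, c in respondents if c == candidate) / n
    print(f"{candidate}: raw {raw_share:.0%}, weighted {weighted[candidate] / total:.1%}")
```

Notice how the weighted result nudges toward Trump relative to the raw sample; that’s the “inflates Trump’s support” bit, and it only helps if 2024’s electorate (and people’s memories of their 2020 vote) actually resemble 2020.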
And Giles added his own takes based on his own experience:
I think there’s another issue at play. Our social discourse has become so fractured. I think people are simply much less likely to tell the truth to a stranger when a pollster reaches out.
And especially when a pollster, human or virtual, asks a question in a way where a deflection or a lie can easily be offered in return. I, for one, designed far less consequential surveys about TV pilots by asking for five-point scale assessments of various elements of the show. We would rarely ask the direct question “would you watch?” but, rather, derive from the degree of investment in those elements the likelihood that, all else being equal, they’d at least be open to doing so. Given his experience with the likes of NBCU and Viacom, I’m certain Giles has a few experiences of his own to support his view.
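If you’ve never seen how that kind of derivation works, here’s a minimal sketch in Python; the element names, weights, and the reading of the score are strictly illustrative, not anything I or any network actually fielded:

```python
# Illustrative element weights only; no network's actual scoring model.
ELEMENT_WEIGHTS = {
    "premise": 0.30,
    "lead_character": 0.30,
    "cast_chemistry": 0.25,
    "pacing": 0.15,
}

def derived_openness(ratings: dict) -> float:
    """Collapse five-point element ratings (1-5) into a 0-to-1 'openness to watch'
    score, instead of asking the direct (and easily deflected) 'would you watch?'"""
    return round(
        sum(ELEMENT_WEIGHTS[element] * (ratings[element] - 1) / 4 for element in ELEMENT_WEIGHTS),
        2,
    )

# One hypothetical respondent who rates the pieces but never gets the direct question.
respondent = {"premise": 5, "lead_character": 4, "cast_chemistry": 4, "pacing": 3}
print(derived_openness(respondent))  # ~0.79, which would read as open to sampling the show
```

The respondent never gets the chance to deflect a direct “would you watch?”; the signal comes from how invested they are in the pieces.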
Which is why I’m at least rattling my own saber enough to strongly suggest that those in the political research world need to step up and start looking at how we media types have dealt with these kinds of issues. Some are complex, and solutions aren’t cheap. But some are more easily addressed.
For example, that one percent response rate becomes a far bigger issue when surveys are conducted as a series of one-time exercises. Given the variety of variables at hand, well beyond age and sex, the notion that any two such surveys can be compared to track progress over time becomes pure assumption.
Panels like Nielsen’s might not be perfect, but they at least produce results from the same respondents over a period of time, anywhere from six months to two years. Yes, people are incentivized, but nowhere near enough to be truly impactful. And Nielsen does seem to be open to including credible first-party data to buttress its own information. Since the media business has troubles of its own, perhaps Nielsen should consider partnering with a political research company, making its sample available not only to capture answers to questions but also to correlate viewing of content with self-described political leanings. When national parties are spending north of a billion dollars on advertising in the hopes of changing people’s minds, wouldn’t some sort of barometer of how effective those campaigns actually are, built on behavioral rather than survey data, be worth something? And I for one would love to see results from the same person before and after consequential events like debates or, yes, changes in candidates.
I’d further offer that firms like Comscore and VideoAmp, which already have ways in place to capture behavior across different devices, should be open to such alliances as well. As more and more viewing of video content occurs on the same device where websites are being read, the potential to create a subset that measures content consumption from a single respondent is strong, which in turn can open up additional ways to verify impact and engagement.
Nearly two decades ago I worked with a fledgling company, eventually acquired by Nielsen, which offered an alternative to traditional dial-testing by measuring rapid eye movement from respondents. I am confident that some evolution of that methodology could readily be integrated into an app through which an opt-in sample would grant permission to track what they are engaging with on their devices and for how long.
You would be correct that there are obvious workarounds to these sorts of concepts, and one could argue that someone who chooses to use a VPN to keep anyone from tracking how and where they consume media would be less likely to be part of such a sample. But if the price of eggs can rise to the top of some of their decision-making chains, perhaps being better able to afford them might just be enough to get at least a representative sample of them to participate?
There are probably even better solutions out there, and likely there are folks more well-heeled than moi already working on them. This sure looks like an opportune week to be doing so.
So here’s my message to pollsters under attack: there are plenty more like David Giles and yours truly out there, and it sure seems like you need some fresh ideas to stay relevant. And you’re now officially on the clock for 2028.
Looking forward to further conversation,
SL
Until next time…