US election polls look good for Joe Biden. But can they be trusted?
After Donald Trump’s surprise victory in 2016, some
Democrats are reluctant to believe the numbers
Tom McCarthy
@TeeMcSee
Mon 28 Sep 2020 08.30 BST
Last modified on Mon 28 Sep 2020 08.31 BST
https://www.theguardian.com/us-news/2020/sep/28/us-election-2020-are-polls-accurate
The numbers
are looking pretty good for Democrats in Arizona. The party’s Senate candidate
is up more than five points in polling averages, and Joe Biden leads Donald
Trump by more than three points in the same averages. If those numbers hold,
the state could hand Democrats the Senate and Biden the White House in one fell
swoop.
But a key
Democratic organizer in the state can’t say whether he thinks the numbers will
hold – because he does not believe the numbers exist in the first place.
“I would
say the polls are a mirage,” said Larry Bodine, president of the Democrats of
Greater Tucson group. “After 2016, I decided from then on, I was just not going
to rely on what the polls had to say, and instead rely on what my fellow
Democratic volunteers encounter out in the field.”
Inside Democratic
party offices from coast to coast, and under a good number of roofs where
anti-Trump voters dwell, polling results that show promise for Biden and
down-ticket Democrats are being handled with a similar mix of arm's-length
trepidation and not-today-Satan refusal. Feeling they were misled by polling to
believe that Hillary Clinton was a shoo-in in 2016, only to be ambushed by
Trump’s win, many progressives in 2020 vow that they’re done with the numbers
game.
“I’m really
active in Democratic circles and pretty much nobody talks about the polls,”
said Bodine. “I believe that all the positive polls do is give a false sense of
security.”
That
attitude seems to have little downside for political organizers. But the
question of the reliability of polling has broader implications for campaigns,
for public policy – and ultimately for daily American life, on issues ranging
from racial discrimination in interactions with police to skepticism about a
potential coronavirus vaccine.
Ultimately,
the health of polling is bound up in the health of the democracy, analysts say.
Asking people what they think is, among other things, an expression of faith
that what the American people think matters – a notion that can seem even more
worthwhile amid Trump’s demand to “get rid of the ballots” in November.
With those
stakes hanging overhead, and under intense public scrutiny on the eve of a
watershed election, pollsters across the country have made adjustments to
address their mistakes of 2016 and are working hard to capture an accurate
snapshot of 2020.
The picture
is not simple. While some key state polls were off in 2016, the national polls
in aggregate were right on target, showing Clinton three points ahead at the
end; she won the popular vote by two points but lost in the electoral college.
The
mistakes last time, according to a full buffet of postmortem analyses,
included: pollsters did not have an eye on educational attainment as a
potential fault line in the electorate; they were foiled by an unusual wave of
undecided voters breaking for Trump at the last minute; there was too little
polling in key swing states to really know what was going on; conclusions
extrapolated from that paucity of data were broadcast with far too much
certainty; and there might have been some “shy” Trump voters who didn’t want to
say they were supporting him.
The results
remain a political shock. In the final days of the 2016 election, the “average”
of the scant polling in Wisconsin had Clinton ahead 6.5 points. In Michigan, Clinton’s
average “lead” was 3.6 points, while in Pennsylvania it was what looks in
retrospect like an extremely tenuous 2.1 points. Yet Trump won all three states
and with them the White House. The immediate criticism of the Clinton campaign
was that it had failed to visit the upper Midwest, taking the voters of
Michigan and Wisconsin for granted, lulled by the siren song of reassuring
polls.
But have
the polls improved since then?
Changes since last time
The
well-known and widely followed Franklin & Marshall College poll based in
Lancaster, Pennsylvania, had Clinton ahead by double digits in the state in its
last poll before the 2016 election. Trump won the state by a razor-thin margin of
fewer than 50,000 votes, or less than a percentage point.
The
director of the poll, G Terry Madonna, said an unusual wave of late-deciding
voters mostly breaking in the same direction – toward Trump – created the
polling blind spot.
“In some
cases, including ours, we were out of the field, meaning we completed the
interviews, 10 days before the election,” Madonna said. “What we found in exit
polls was that in the last 10 days 20-some per cent of voters made up their
mind or they changed their mind and then went for Trump far more than for
Clinton.”
Madonna said
this year the poll would stay in the field longer – and he sees fewer undecided
voters this time.
“When you
have 85-90% of Republicans saying they approve of the job Trump’s doing, and
Democrats are in single digits – people are locked in in this race,” Madonna
said. “There’s a relatively small number of undecided voters. And it may turn
out that what they do might make a difference, but it’s probably more important
for the campaigns to get out their base of voters.”
For readers
who have decided not to ignore the polls: Franklin & Marshall released a
poll on Thursday morning that showed Biden up by 6 points in the must-win
state, where averages have Biden ahead by 4.1 points.
Other
state-level pollsters, including the Muhlenberg College Institute of Public
Opinion in Bethlehem, Pennsylvania, have expanded their methodologies since the
final days of 2016.
Christopher
Borick, the poll director, said that this year the poll was not only looking to
the most familiar categories for insights on voter behavior – gender, age,
region, party and race – but had also added one more category: educational
attainment.
“We had
never included educational attainment as a variable in our weighting formula,
largely because it never mattered,” Borick said. “When you went back
historically over time, if you had weighted, it didn’t do much – it was a wash.
But now we’re seeing more of a divide, where the upper-level attainment are
voting one way and the lower are voting the other way. And that really started
to emerge in this decade and blossom in 2016, to the point where I think it’s
silly to ignore that.
“And so we
are now weighting with education attainment, and it does slightly move the
polls. So our last poll had Joe Biden up four, with an educational weight built
in. If we had not done that, like we did not in 2016, his lead would have been
six points.”
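As a rough illustration of how such a weight changes a topline, the Python sketch below applies hypothetical education targets to an invented sample. The group shares and vote splits are made up, chosen only to echo the six-point versus four-point contrast Borick describes; the scheme is a simplification, not Muhlenberg's actual weighting formula.

```python
# A minimal sketch of weighting a poll topline by educational attainment.
# All numbers below are invented for illustration; only the idea, not the
# data, comes from the article.

# Respondent groups: (education level, share of raw sample, Biden share, Trump share)
sample = [
    ("college",     0.50, 0.58, 0.42),  # college graduates, over-represented vs the electorate
    ("non_college", 0.50, 0.48, 0.52),
]

# Hypothetical share of the actual electorate in each education group
education_targets = {"college": 0.42, "non_college": 0.58}

def margin(groups, weights):
    """Weighted Biden-minus-Trump margin, in percentage points."""
    biden = sum(weights[name] * b for name, _, b, _ in groups)
    trump = sum(weights[name] * t for name, _, _, t in groups)
    return 100 * (biden - trump)

raw_weights = {name: share for name, share, _, _ in sample}

print(f"Unweighted margin: Biden {margin(sample, raw_weights):+.1f}")                # about +6
print(f"Education-weighted margin: Biden {margin(sample, education_targets):+.1f}")  # about +4
```

The mechanics are simple: each education group's candidate split is averaged using the electorate's composition rather than the sample's, so an over-sampled, Biden-leaning group counts for less in the final number.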
Communicating the limits of polling
Of 453
pollsters ranked by the FiveThirtyEight data analysis website, the Muhlenberg
College Institute of Public Opinion survey is one of only six to be awarded the
top rating of A+. The poll gets the top rating based on its reliable track
record, minimal observable bias, methodological rigor and the fact that it does
the expensive, difficult, time-consuming kind of polling, meaning live
telephone interviews, including calling cellphones.
Muhlenberg’s
last poll of the presidential field in Pennsylvania before the 2016 election
had Clinton with a narrow, single-digit lead. The actual result – Trump won the
state by less than 1% – was within the poll’s margin of error. Statistically
speaking, the poll was not wrong.
But Borick
points out that when the gap between a poll number and an election result falls
across the line separating winner from loser, it’s impossible to tell anyone
the poll was not wrong.
“If you
looked at 2012, the polls were just about as off with Barack Obama against Mitt
Romney, and they understated Obama’s performance that year,” Borick said. “But
no one really cared at the end, if Obama won by one point or four points or
five points, because it was all on the same side of the ledger. The error was
going in the direction of the person that won it anyway.”
“If you
cross over to the other side, by even one vote, two votes, half a per cent,
whatever – it changes the whole outcome, but the math isn’t all that
different.”
This
election cycle could prove unusually challenging for pollsters because of a
significant climb in the number of voters casting ballots early and via mail,
said Nate Silver, founder of FiveThirtyEight. Silver forecast all 50 state
results correctly in the 2012 presidential election and was one of the few
polling analysts to articulate clearly on the eve of the 2016 election that
Trump had a path to victory.
“I do think
the transition to high rates of mail voting is one of the bigger potential
sources of polling error, especially with the mail vote likely to be
disproportionately Democratic, although it’s hard to know in which direction
the error might occur,” Silver tweeted this week.
Borick said
it was down to pollsters, especially academic pollsters, to explain what their
results mean to a public that does not always think about election outcomes in
terms of percentage likelihood and margin-of-error.
“I think
pollsters, people that do public opinion research, want to be able to give citizens
a sense of where the broader public is on issues, on races, and to do that
accurately. And to also educate about the limits of the very things we do,”
Borick said.
“We sample.
We take small groups to make inferences about big groups. And that inherently
has error involved in it. And trying to communicate the reality of what we do
is a big role for us.”
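For readers who want the arithmetic behind that caveat, the short Python sketch below uses the textbook 95% margin-of-error formula for a simple random sample. The sample size and poll numbers are illustrative assumptions, not Muhlenberg's actual figures; the election-day shares approximate Pennsylvania's 2016 result. It shows how a poll with a narrow Clinton lead and the eventual sub-one-point Trump win can both sit inside the same error band.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error, in points, for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n) * 100

n = 420  # hypothetical sample size; state polls often interview a few hundred voters
moe = margin_of_error(n)
print(f"Sample of {n}: roughly +/-{moe:.1f} points on each candidate's share")  # about +/-4.8

# Illustrative poll numbers (a narrow Clinton lead) checked against shares close
# to the actual 2016 Pennsylvania result (Trump 48.2%, Clinton 47.5%).
poll   = {"Clinton": 48.0, "Trump": 44.0}
result = {"Clinton": 47.5, "Trump": 48.2}
for name in poll:
    inside = abs(poll[name] - result[name]) <= moe
    print(f"{name}: poll {poll[name]} vs result {result[name]} -> within margin of error: {inside}")
```

Both candidates' poll numbers land within the error band even though the winner flipped, which is the gap between being statistically defensible and being experienced by readers as wrong.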
Bodine, the
Arizona organizer, says those lessons are not for him.
“Democrats
should not take anything for granted, and my advice for them is to call up
their local Democratic party and get active,” he said. “Stop yelling at the TV,
stop complaining about what they see on the news and get out there and do
something about it.”