By now, most hard-core political junkies have heard all about the polling shot heard round the world, last week's stunning success of Ann Selzer and the Des Moines Register "Iowa Poll."
For those who missed it, here is a brief synopsis: The final Register poll of likely caucus-goers released on New Year's Eve gave Barack Obama a seven-point lead over Hillary Rodham Clinton (32 percent to 25 percent). That margin surprised in part because no other poll showed Obama doing as well, but mostly because the poll also forecast "a dramatic influx of first-time caucus-goers, including a sizable bloc of political independents."
Among the Iowa pollsters who disclosed their results to Pollster.com last fall, most reported that first-time caucus-goers were somewhere between 20 and 30 percent of their samples, although the range went as high as 43 percent. Selzer's result reset the curve: Sixty percent of her respondents said 2008 would be their first caucus. Within hours, pollsters working for Hillary Rodham Clinton's campaign were questioning the result as unreliable and "unprecedented." The only way the poll could be accurate, Edwards consultant Joe Trippi said, was if "220,000 people vote."
When the dust cleared on Thursday night, Obama had won, 239,000 voters had participated in the Democratic caucuses, and the network exit polls reported that 57 percent were first-time caucus attendees.
So how did she do it?
At the root of this story is a fundamental conflict among pollsters about the best way to select and model "likely voters." How much should a pollster depend on past history in designing a pre-election survey?
As little as possible, the classic philosophy says. Design the survey so that it makes the fewest possible assumptions and let the answers of the voters determine the makeup of the likely electorate. More importantly, this philosophy says, make no prior assumptions about the demographics of the likely voters, because the turnout in the next election may differ from the last.
Campaign pollsters, on the other hand, have been more willing to mine past vote statistics to help guide the designs of their surveys. Most, for example, have long stratified their samples to control for turnout. If, say, Ohio's Cuyahoga County accounted for 12 percent of the state's voters in 2000 and again in 2004, it is a safe bet that Cuyahoga will account for roughly 12 percent of the vote again in 2008. So a campaign pollster will usually build that assumption into the survey's design.
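The arithmetic behind that kind of geographic stratification is simple: allocate interviews to each region in proportion to its share of the vote in past elections. Here is a minimal sketch; the county names and vote shares are invented for illustration, not real Ohio figures.

```python
# Hypothetical illustration of stratifying a statewide sample by past
# vote share. All shares below are made up for the example.

def allocate_sample(total_interviews, past_vote_shares):
    """Assign an interview quota to each stratum in proportion to its
    share of the vote in prior elections."""
    return {region: round(total_interviews * share)
            for region, share in past_vote_shares.items()}

shares = {"Cuyahoga": 0.12, "Franklin": 0.10,
          "Hamilton": 0.07, "Rest of state": 0.71}

# An 800-interview poll would set a quota of 96 interviews (12 percent)
# for Cuyahoga County, and so on for each stratum.
print(allocate_sample(800, shares))
```

The design choice is the assumption itself: quotas fixed from past turnout keep the sample's geography stable, but they also lock in history.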
Over the years, however, that willingness to make more conservative assumptions about the regional distribution of likely voters has morphed into more of an anything-goes approach. Many campaign and media pollsters now routinely apply turnout "models" that impose demographic weighting targets culled from voter history or exit polls from past elections. Weighting on the basis of party identification has become commonplace.
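Weighting by party identification works the same way in miniature: each respondent's weight is scaled so that the weighted party mix matches targets taken from voter files or past exit polls. A minimal sketch, with all counts and targets invented for illustration:

```python
# Hypothetical sketch of party-identification weighting. A respondent's
# weight is the target share for their party divided by that party's
# unweighted share of the sample. All numbers are invented.

def party_weights(sample_counts, targets):
    """Return one weight per party: target share / sample share."""
    n = sum(sample_counts.values())
    return {party: targets[party] / (sample_counts[party] / n)
            for party in sample_counts}

counts = {"Democrat": 400, "Republican": 250, "Independent": 350}
targets = {"Democrat": 0.35, "Republican": 0.30, "Independent": 0.35}

# Democrats are 40 percent of this sample but only 35 percent of the
# target, so each Democratic respondent is weighted down (below 1.0),
# while Republicans are weighted up.
print(party_weights(counts, targets))
```

This is exactly the move Selzer declined to make: had the newcomers in her sample been weighted down to match historical targets, the "dramatic influx" would have been weighted away.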
The desire to place such controls on the data is understandable, given the difficulty of conducting telephone polls in an environment of declining response and coverage rates. The more hands-on approach may work well in most ordinary elections with reasonably predictable turnout patterns. Problems arise when things stop being ordinary.
And of course, the 2008 Iowa caucuses were anything but ordinary. And that brings us back to Ann Selzer.
It is not so much the poll numbers that were remarkable. As one of her colleagues put it on the listserv of the American Association for Public Opinion Research (AAPOR), Selzer's results were "very much in line with what one would expect from any good polling organization." The short story is that she adhered to the more classic, hands-off philosophy of selecting likely caucus-goers and thus picked up the coming deluge of caucus newcomers rather than weighting it away.
Make no mistake: The challenges were enormous. The turnout may have been huge by historical standards, but even at 239,000 voters, it amounted to not quite 11 percent of Iowa's voting-age population. Identifying those voters would have been hard enough under ordinary conditions, but Selzer faced an added twist: The caucuses would be held on January 3, so she would have to conduct her final poll between Christmas and New Year's Day, when pollsters usually shut down for fear of missing holiday travelers.
A month before the caucuses, Selzer was asked how she would handle the challenge of interviewing so close to the holidays. "We've been doing the most important thing a pollster can do," she said, "and that is worrying." That is what good pollsters do: Sweat every detail, anticipate every challenge and then put faith in the data produced by their methodology.
No, what was most impressive was the integrity and courage Selzer displayed in sticking by her numbers in the face of nearly universal skepticism. As I wrote on my blog last week, if Selzer had wanted to play it safe, she could have weighted her results by party or past participation, bringing them into line with other polls. Few at the Register or in the political world would have questioned her judgment. Instead, she stood her ground and put her reputation on the line because she believed in her method and her numbers.
And that says a lot.
-- Mark Blumenthal is editor and publisher of Pollster.com. His e-mail address is email@example.com.