So how bad was it for the pollsters Tuesday night? Bad. "The biggest losers of the night were the pollsters," wrote Arianna Huffington, echoing a chorus of network talking heads. "The media needs to learn to not trust polls," read the headline over the media analysis by MSNBC's Steve Adubato. Even two of the polling world's most respected voices piled on. Andrew Kohut called it "one of the most significant miscues in modern polling history" in today's New York Times, and ABC News polling director Gary Langer dubbed the failure "unprecedented" and a "fiasco" in his online column.
Some of my pollster colleagues will remind us that the miscues of Tuesday night have more precedent than we might want to admit. After all, just eight years ago the National Council on Public Polls (NCPP) labeled the problems of accurately polling primary races as "almost insurmountable." But whatever the underlying explanation, we might want to follow the advice that Republican pollster Dave Sackett frequently offers his clients: "The truth is what people believe... no matter how hard you try to reframe the problem, the public believes what it believes."
As such, the polling industry would be well advised to use this moment as a learning opportunity, both for itself and for all those who depend on our data, and draw a lesson from the polling debacle of 1948.
That year, despite pre-election polls showing Thomas Dewey leading Harry Truman by margins of 5 to 15 percentage points, Truman won by a surprising 4.4 points. The failure set up the unforgettable photo of a beaming Truman holding a copy of the Chicago Tribune with its premature "Dewey Defeats Truman" headline.
The 1948 debacle created a crisis for the major pollsters, as newspaper subscribers canceled their service. "It looked like the end of the world," recalled Burns W. "Bud" Roper, who later became chairman of the company founded by his father, Elmo Roper. "Our company was going down the tubes."
What had gone wrong? The private pollsters could not provide a definitive answer. So a week after the election, with the cooperation of virtually every prominent public pollster, the independent Social Science Research Council (SSRC) convened a panel of academics to assess the pollsters' methods. After "an intensive review carried through within the span of five weeks," their Committee on the Analysis of Pre-election Polls and Forecasts issued a report that would ultimately reshape public opinion polling as we know it.
The report led pollsters to move away from antiquated "quota sampling" methods in favor of more scientifically sound random probability sampling. The findings also inspired pollsters to continue tracking races all the way through Election Day, to develop better methods of identifying "likely voters," to write questions that better simulate preferences "if the election were held today," and to do a better job of pushing and probing "undecided" voters.
Yet as much as the SSRC report helped reshape polling, the irony is that it never fully resolved the debate over what had gone wrong in 1948. Many argue that quota sampling methods and the premature use of telephone interviewing produced a Republican bias. Yet the major pollsters of the day believed their biggest error was that they stopped polling too soon, a few weeks before the election.
The most useful lesson that the 1948 example offers for the New Hampshire polling calamity of 2008 is the urgency and transparency with which the SSRC committee addressed the problem. Members moved quickly, as their report explains, out of a sense that "extended controversy regarding the pre-election polls ... might have extensive repercussions upon all types of opinion and attitude studies."
The American Association for Public Opinion Research "commended" the SSRC effort and urged its member organizations to cooperate. "The major polling organizations," most of which were commercial market researchers competing against each other for business, "promptly agreed to cooperate fully, opened their files and made their staffs available for interrogation and discussion." The committee published its report in an "incomplete and unpolished form" that included detailed statistics and specifics on the methods used by the various pollsters, in the hopes that it would "encourage more detailed and penetrating assessments of the election polling experience of 1948."
Such an approach would have obvious benefits today, but with an Internet twist. In 1948, the SSRC had to publish a book to disseminate its findings. Today the "incomplete and unpolished" data -- including questionnaires, cross-tabulations and even raw respondent-level data -- could be put online immediately, allowing survey researchers, statisticians and methodologists around the world to collaborate in a "wiki"-like assessment.
Is such cooperation and disclosure likely in the hypercompetitive environment of 2008? Perhaps not, but consider how far we have come from the spirit of transparency that guided Gallup, Roper, Crossley and their colleagues in the aftermath of the 1948 fiasco. On Tuesday afternoon, while waiting for vote returns, I checked the releases of the 13 polls conducted in the final days before the New Hampshire primary. Only five had published the party composition of their samples (the percentage of registered independents); two more quickly responded to my e-mail requests. But half of the New Hampshire pollsters disclosed nothing about this most elementary characteristic of their samples, and only two -- both academic institutions -- routinely released reports that included the demographic composition of their "likely voter" populations.
If modern polls were bulletproof, if their "likely voter" screens and models were identical, and if the public had total confidence in the results, then the "you-can-say-we-did-a-poll" school of disclosure might suffice. But when data consumers are ready to stop trusting polls, something needs to give. If the pollsters of 2008 are not willing to follow the example of the giants of 1948, then the media companies that foot the bill for most of the public polling we consume need to step up and make it so.
-- Mark Blumenthal is editor and publisher of Pollster.com. His e-mail address is email@example.com.