For those obsessed with the Democratic presidential race (and I presume that's just about everyone reading these words), the poll story of the week is the apparently dramatic turn in North Carolina as measured in two polls conducted by a little-known company called Public Policy Polling (or PPP).
A new survey shows Obama surging to a 21-point lead in North Carolina.
On March 17, at the peak of the feeding frenzy over the controversial sermons by the Rev. Jeremiah Wright and on the eve of Barack Obama's speech on the subject, PPP conducted a survey [PDF] showing Obama running just one percentage point, a statistically meaningless margin, ahead of Hillary Rodham Clinton (44 percent to 43 percent) in North Carolina. Three other polls done earlier in March, including another from PPP, had shown Obama ahead by margins of 3 points to 8 points.
Then on Monday of this week, PPP conducted a follow-up survey [PDF] showing Obama surging ahead to a 21-point lead (55 percent to 34 percent).
Did the Obama speech work that well? As is often the case with this year's crop of pre-election polls, the answer is not simple.
One complication is that PPP uses an automated methodology that continues to give the willies to many traditional live-interviewer pollsters. The interactive voice response (IVR) methodology involves a recorded voice that asks respondents to answer questions by pressing keys on their touch-tone phones. By eliminating live interviewers, IVR pollsters can conduct surveys at lower cost, but the technology involves trade-offs: They cannot choose a random respondent within the household, cannot validate the age or gender of the respondent based on the sound of their voice, and are limited to very brief questionnaires, on the theory that few voters will stay on the phone for long without a real person on the other end of the line.
On the other hand, IVR pollsters argue that they are able to deliver accurate results because the automated call more closely replicates the mechanics of the secret ballot. Without an interviewer involved, they argue, voters are less apt to exaggerate their intent and more likely to reveal their true preferences. And they have some empirical support for that claim. University of Wisconsin professor Charles Franklin and I have compared IVR pre-election polls to those conducted with live interviewers in 2004 and 2006 and found no clear difference in their predictive accuracy. The automated pollster SurveyUSA routinely puts out "report cards" showing that its surveys compare favorably to those of other pollsters.
Still, the traditional polling establishment remains highly skeptical of the compromises inherent in automated telephone polling. ABC News does not consider IVR polling "air-worthy," and the New York Times [PDF] flatly concludes that results from IVR polls are "not reliable." (My own take, published in the pages of Public Opinion Quarterly, is more positive.)
While many will attribute the recent gyrations in the PPP North Carolina polls to the automated methods used, a more straightforward issue is likely to have exaggerated the apparent change. On the most recent survey, PPP changed the way it sampled likely Democratic primary voters.
Some background: Most media pollsters use a random digit dial (RDD) sampling methodology that gives every telephone household a chance to be selected. Most campaign pollsters now sample from lists of registered voters, especially in primary elections. They miss many voters without listed phone numbers but claim to improve on accuracy by using vote history data available for each voter on the lists (more on the RDD vs. list sample debate here and here).
PPP's methodology is a good example of how pollsters use the information on voter lists to identify "likely voters." According to Tom Jensen, communications director at PPP, their traditional primary polling method is to sample from the universe of registered Democrats, and those with no party affiliation, who voted in either the 2006 or 2004 primary election. They also look at the demographics of registered Democrats and previous primary electorates (using past exit polls and statistics gleaned from the voter lists) and use those to set demographic weighting targets for gender, age and race.
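The weighting step Jensen describes amounts to post-stratification: compare each demographic group's share of the sample to its target share of the expected electorate, and weight respondents up or down accordingly. Here is a minimal one-variable sketch of that idea; the age categories and target shares are hypothetical illustrations, not PPP's actual figures.

```python
from collections import Counter

def poststratify(respondents, targets):
    """Assign each respondent a weight so that the weighted sample
    matches the target share for that respondent's category."""
    counts = Counter(respondents)
    n = len(respondents)
    # weight = (target share of electorate) / (share of the raw sample)
    return [targets[cat] / (counts[cat] / n) for cat in respondents]

# Hypothetical example: the raw sample is 70 percent age 45+, but the
# pollster's assumed electorate is only 55 percent 45+.
sample = ["45+"] * 70 + ["under45"] * 30
targets = {"45+": 0.55, "under45": 0.45}
weights = poststratify(sample, targets)

print(round(weights[0], 3))   # 45+ respondent weighted down: 0.55/0.70
print(round(weights[-1], 3))  # under-45 respondent weighted up: 0.45/0.30
```

In practice pollsters weight on several variables at once (gender, age and race here), but the logic is the same: the "wisdom of those judgments" the next paragraph mentions lives entirely in the choice of target shares.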
The strength of this approach, in theory, is a more accurate representation of actual primary voters. The weakness is that it depends much more on subjective judgments made by the pollsters, and the wisdom of those judgments determines the ultimate accuracy of the poll.
The PPP experience so far this year is a case in point. While the company had good success with this procedure in previous years' primaries, according to Jensen, they found that they were "completely understating Barack Obama's support" in polls conducted through Feb. 5. So they made a change: They started including Democrats and unaffiliated voters without any primary vote history who had cast ballots in the 2006 general elections, and they also adjusted their weighting targets to allow for a younger mix of voters.
In North Carolina, PPP held off on making this change until this week's survey. The rationale was that the company is North Carolina-based (with a greater focus on local elections), and they had assumed until very recently that the presidential race would be resolved before the North Carolina primary. As such, Jensen says, they had been expecting a more typical turnout, "until last week when the [presidential] candidates started to visit the state."
So the bottom line is that the March 17 PPP survey (and those conducted earlier in North Carolina) sampled from a universe of 444,247 registrants with a history of voting in past Democratic primaries. The most recent PPP poll -- the one showing Obama with a 21-point lead -- sampled from a larger universe of 874,224 that also includes non-Republican general election voters from 2006.
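The two universes can be thought of as two filters applied to the state voter file. This sketch expresses the selection logic described above; the record layout and field names are invented for illustration, and only the filtering criteria follow the article.

```python
def old_universe(voter):
    """Pre-change universe: registered Democrats and unaffiliated
    voters with a 2004 or 2006 primary vote on record."""
    return (voter["party"] in ("DEM", "UNA")
            and (voter["voted_2004_primary"] or voter["voted_2006_primary"]))

def new_universe(voter):
    """Post-change universe: the old universe, plus non-Republican
    2006 general-election voters with no primary history."""
    return (voter["party"] in ("DEM", "UNA")
            and (voter["voted_2004_primary"]
                 or voter["voted_2006_primary"]
                 or voter["voted_2006_general"]))

# A voter picked up only under the new criteria: an unaffiliated 2006
# general-election voter who has never cast a primary ballot.
newcomer = {"party": "UNA", "voted_2004_primary": False,
            "voted_2006_primary": False, "voted_2006_general": True}
print(old_universe(newcomer), new_universe(newcomer))  # False True
```

By the figures in the article, the expanded frame added roughly 430,000 names (874,224 minus 444,247), so voters with no primary history make up nearly half of the new universe.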
In both cases, PPP relies on an initial instruction at the beginning of the call that asks sampled voters who "do not intend to vote in the primary" to "please hang up." But the company was unable to provide data on how many chose to opt out of the survey at that point in the call.
The original sampling universe was arguably too narrow, given that 544,922 Democrats voted in the largely uncontested primary in May 2000, and 691,875 voted in the primary in May 1992. However, it is impossible to determine how much of the apparent increase in Obama's support comes from the change in the selection criteria. Surprisingly, PPP did not anticipate the need to tabulate the new results among the subgroup of respondents who would have been interviewed using the old sample selection method. Jensen promises they will do so on their next polls, so stay tuned.
-- Mark Blumenthal is editor and publisher of Pollster.com. His e-mail address is email@example.com.