April 11, 2018
The following addresses the question of the scientific validity of the National Citizen Survey, commissioned by the City of Coronado and conducted by the National Research Center, as it relates to the potential for multiple responses from the same person or for responses from people not targeted to take the survey.
National Research Center uses best research practices and ensures that survey results are unbiased and that its findings can be trusted. The central concern is the chance that the survey could be completed more than once by the same person, thereby undermining the credibility of findings and creating the appearance that the survey is not scientific.
National Research Center President and CEO Tom Miller, who earned a Ph.D. in research and evaluation methods from the University of Colorado, Boulder, appreciates the opportunity to respond:
All questions about survey methods, including the question raised about the possibility of multiple responses, boil down to a single important question: Are the results likely to be a valid representation of the opinions of the adult population of Coronado? A corollary question is: Does the survey process itself create the optimal likelihood of garnering a representative set of community opinions?
National Research Center's survey methods are based on years of research on survey data collection. Company principals have written two books published by the International City/County Management Association and have published research in scholarly and lay journals, and the company's methods have been tested in the field for more than 20 years. The essence of the company's approach is to follow a set of practices proven to maximize the chances of accurate findings:
- Complete coverage of all dwelling units in a jurisdiction
- Random/systematic sampling of housing units
- Unbiased selection of adult residents in selected housing units
- Anonymity of responses promised whenever feasible
- Self-addressed and stamped reply envelope
- Unbiased questions targeted for local government use that keep the survey as short and simple as possible
- Easy-to-follow layout
- Multiple contacts to remind residents to respond and to offer new response materials. Importantly, contacting potential respondents multiple times encourages responses from people who may have different opinions or habits than those who would respond after only a single prompt.
- Multiple data collection modes (mail, web) as needed
- Signed cover letter from the City Manager to evoke civic responsibility
- Adherence to the American Association for Public Opinion Research Transparency Initiative requirements to make public the key survey methods
The biggest challenge in survey research these days is getting residents to respond. Response rates have fallen in all locations, for every purpose and by every data collection mode. The threat of multiple responses in most local government surveys is minimal because of the time required to complete the survey and the low stakes of the survey questions. Optimal survey methods for garnering the largest and broadest response include making it very easy for residents to participate and ensuring that responses will be anonymous so that answers are honest. Sometimes residents forget or are too busy to complete their survey, or they misplace it, so the best survey practice is to give residents multiple opportunities to respond. To keep results anonymous, no code that would be required to respond is affixed to the questionnaire, and there are no secret identifying marks on the survey, so there is no way to link responses to individuals.
Residents know that secret codes can be used to identify them. In National Research Center's experience, such codes, even when affixed to mailed surveys in locations that are not readily visible, often are removed by residents, and codes required for entry to a web or online survey reduce response rates and change answers because responses no longer are given in anonymity.
Multiple survey responses are unlikely to have any noticeable effect on the results. In National Research Center's research lab, the company has tested the possibility that residents would accidentally or intentionally respond more than once. By using identifying codes, the company determined that, on average, about 0.5% to 1% of respondents have appeared twice in a sample. But imagine that as many as 5% of a sample comprised duplicate respondents in Coronado, a magnitude never seen by National Research Center. That would mean that 7 or 8 residents out of 300 responding residents ignored the clear instructions to respond only once and instead responded twice. Such an example of "ballot stuffing" would have no noticeable effect on results. Below is an example of that point.
Imagine that the question about public art found that 50% of 300 respondents, or 150 respondents, felt there was "too much" public art - with a margin of error ranging from 44% to 56%.
Now assume that eight residents completed the survey twice, accounting for 16 of the 300 responses (about 5%), that all of them indicated there was "too much" public art in Coronado, and that their eight duplicate responses were then removed. With 142 of 292 respondents believing that there was too much public art, the new figure would be 48.6% of respondents saying "too much," with a margin of error from 42.6% to 54.6%. Even with this unlikely magnitude of "double voting," there is no meaningful difference in results. A few "extra" responses do no harm and offset the decrease in response rate expected if a code were required to respond. But what about an individual or group that seeks to submit large numbers of duplicate responses to the anonymous survey? If there were a mass intention to undermine the instructions of the survey - written and signed by the City Manager - the timing and volume of scores of unsought duplicate responses would reveal a signature of untypical dimensions that would be spotted easily by researchers. Such a pattern has not appeared.
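For readers who want to check the arithmetic, below is a minimal sketch of the calculation. It is illustrative only and not part of the survey analysis itself; the function name and the use of the standard normal-approximation formula for a proportion's 95% margin of error are assumptions added for this example (the 44% to 56% range above corresponds to rounding the computed margin of roughly 5.7 points up to about 6 points).

```python
import math

def proportion_with_moe(count, n, z=1.96):
    """Sample proportion and its approximate 95% margin of error,
    using the normal approximation: z * sqrt(p * (1 - p) / n)."""
    p = count / n
    return p, z * math.sqrt(p * (1 - p) / n)

# All 300 responses counted, including the eight hypothetical duplicates.
p_all, moe_all = proportion_with_moe(150, 300)
print(f"With duplicates:    {p_all:.1%} +/- {moe_all:.1%}")     # 50.0% +/- ~5.7%

# The eight duplicate responses removed: 142 of 292 say "too much".
p_dedup, moe_dedup = proportion_with_moe(142, 292)
print(f"Duplicates removed: {p_dedup:.1%} +/- {moe_dedup:.1%}")  # 48.6% +/- ~5.7%
```

Under either count, the two intervals overlap almost entirely, which illustrates the point above: a handful of duplicate responses does not change the conclusion.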
Below are excerpts from two articles that support National Research Center's point that giving potential survey respondents multiple opportunities to respond is survey research best practice.
1. In: "Survey Completion Rates and Resource Use at Each Step of a Dillman-Style Multi-Modal Survey" by Andrea Hassol et al., Abt Associates Incorporated; article submitted for publication to "Public Opinion Quarterly":
"In designing data collection strategies, survey researchers must weigh available resources against expected returns…
Considerable research has been conducted on methods for improving response rates to surveys. Several key factors are known to affect response rates, including salience of the survey (Heberlein and Baumgartner, 1978), form of mailing and monetary incentives (Dillman, 1991), and multiple contacts (Linsky, 1975; Dillman, 1991). A response-maximizing approach to multi-modal surveys, as best articulated by Dillman (1978), includes:
- A respondent-friendly questionnaire
- Up to five contacts with recipients of the survey: a brief pre-notice letter sent a few days prior to the arrival of the questionnaire; a questionnaire mailing that includes a detailed cover letter; a thank-you/reminder postcard; a replacement questionnaire; and a final contact, possibly by telephone
- Inclusion of stamped return envelopes
- Personalized correspondence
- A token financial incentive
2. In: "The Effects of Multi-Wave Mailings on the External Validity of Mail Surveys" by Michael Dalecki et al. Journal of the Community Development Society, 19(1), June 2010. University of Delaware:
Abstract: Survey data, particularly mail questionnaires, are very useful in community development work. With relatively low cost, a practitioner can obtain valid information to determine community needs, support for programs and general attitudes and opinions of local citizens. Low response rates, however, can have serious effects on the validity of the data. Previous research has shown that follow-up mailings are essential to obtaining a high response rate to mail surveys. This paper examines the potential for sample bias if the number of mailings is reduced. Differences between groups responding to three waves of mailings to a statewide Pennsylvania survey (N = 9,957) are examined via log-linear techniques, using continuation ratio models. The results indicate that initial respondents differ from laggard respondents on five demographic characteristics, but differences diminish between early laggard and late laggard respondents. The implication is that single mailings of questionnaires could cause serious threats to the validity of the data. Multiple mailings and other methods to maximize response rates are necessary to improve the quality of survey data for community development work.