The PIE Center conducts four public opinion surveys each year that focus on critical issues in Florida: water quantity and quality, immigration reform, endangered and invasive species and food production practices. The surveys are designed to represent the attitudes, behaviors and experiences of all Floridians and are repeated annually to track changes in opinion over time. By understanding and conversing about public opinion, agricultural and natural resources leaders can address misconceptions and promote extended discussions on possible solutions.
PIE Center Associate Director Alexa Lamm answers some common questions about how the PIE Center constructs and analyzes the surveys.
[accordion]
[acc_item title="Q: How does the PIE Center get people to take the surveys?"]A: We use what is called non-probability sampling. Non-probability sampling targets specific types of people in order to fill quotas of interest. To get people to take our surveys, we work with a company called Qualtrics that provides incentives to people who sign up to give their opinions about a variety of topics. When we have a survey ready to go, we ask Qualtrics to target specific demographics that align with the general Florida population. We ask them to get as close as they can to the 2010 census data as it applies to age, race, ethnicity, educational background and geographical location across the state. Qualtrics sends our survey to specific people based on their profiles. As the responses come in and fill the demographic quotas we require, Qualtrics closes the survey. (A simplified sketch of this quota logic appears at the end of this answer.)
Probability sampling, on the other hand, involves randomly contacting individuals from a list of every phone number or address. Researchers who conduct surveys over the telephone take all the home phone numbers in the state of Florida, randomly select a number and call it. If they do not get a response from that number, they randomly select another, and they continue until they get 500 responses.
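To make the quota idea concrete, here is a minimal sketch of how quota filling generally works. It is not Qualtrics’ actual system, and the age groups and shares are invented for illustration; real quotas also cover race, ethnicity, education and location.

```python
# Hypothetical sketch of quota-filling logic (not Qualtrics' actual system).
# Quotas are derived from census shares of the population we want to mirror.
TOTAL_RESPONSES = 500
census_shares = {"18-34": 0.29, "35-54": 0.33, "55+": 0.38}  # made-up age shares
quotas = {group: round(share * TOTAL_RESPONSES) for group, share in census_shares.items()}
counts = {group: 0 for group in quotas}

def accept(age_group):
    """Accept a respondent only while their demographic quota is still open."""
    if counts[age_group] < quotas[age_group]:
        counts[age_group] += 1
        return True
    return False  # quota full: further respondents from this group are turned away

# Once 145 respondents aged 18-34 have finished, additional 18-34 sign-ups are declined.
```

Once every quota is full, the survey closes, which mirrors the process described above.[/acc_item]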
[acc_item title="Q: What are the benefits and drawbacks of the two sampling methods?"]A: Non-probability sampling is not random. People have to be willing to do this, and they need to opt in or sign up. There are always certain populations that have better access to the web. For example, individuals living in rural areas with dial-up Internet are probably not going to sign up or spend the time it would take them to complete a survey, so there are limitations. But there are limitations to probability sampling, too. Not everybody has a home phone anymore, and those who do often do not want to respond; call screening has become a major issue in reaching respondents by phone. That’s where the weighting comes in, to help minimize our limitations.[/acc_item]
[acc_item title="Q: Does the fact that people volunteered themselves to take the survey change how representative the sampling is?"]A: It may, but these are limitations in any type of survey methodology. In our case, a certain type of person may opt to complete surveys of this kind, which allows some selection bias to come into play. But there is also selection bias when it comes to individuals who are willing to talk to someone on the phone these days. People who are at home for a variety of reasons are more available and may be more willing to answer a phone survey than someone who works full time or primarily uses a cell phone. So there is selection bias whether we use probability sampling or non-probability sampling; what differs is the type of selection bias and how we handle it. Any type of social science research has limitations. Social science research is not like doing research in a lab setting; it is not controlled. Weighting helps us compensate for the sampling technique.[/acc_item]
[acc_item title="Q: What is weighting?"]A: Weighting fine-tunes our data to match the census demographics even more closely. Certain demographic groups might be slightly overrepresented in our sample compared with the state as a whole, so those responses are counted a little less. The groups that are underrepresented, then, count for a little bit more. The weighting is never drastic, usually less than 5 points (out of 100). Weighting is not anything new, and it is not that big of a mystery. We just use simple math.
Sometimes, for example, we have a few more Asian Americans in our sample than are in Florida’s actual population. Or sometimes fewer African Americans respond to the survey than what we would consider representative of the state. Our results are never wildly far off, but the weighting allows us to fine-tune the demographics so the data is truly representative of Florida’s population.[/acc_item]
[acc_item title="Q: How do you weight?"]A: We weight the data before we calculate anything. We weight on gender, ethnicity, race, age and rural-urban continuum codes. Each of Florida’s ZIP codes is assigned a rural-urban continuum code, one through eight. The code describes how rural the area is based on population and distance to a major metropolis.
As for the weighting process, let’s use an example. Say one of our respondents is a Hispanic male. His answers, unweighted, count as a 1.00, just like any other respondent to our survey. We might find that the Hispanic population is a little bit overrepresented in our sample, so we would weight his responses to count at a 0.93. However, if the male population is slightly underrepresented in our sample, this respondent would get an extra fraction of a point, and his responses would be weighted at a 0.97. We do this for every demographic category for every respondent. We use a program called SPSS to do this automatically and remove any chance of human error.
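The arithmetic behind this example can be sketched in a few lines. This is not the PIE Center’s SPSS procedure; it is one plausible way such adjustments could combine (a multiplicative, raking-style calculation), and the population and sample shares below are invented so the numbers land near the 0.93 and 0.97 mentioned above.

```python
# Illustrative demographic weighting (not the PIE Center's SPSS routine).
# Overrepresented groups get a ratio below 1; underrepresented groups get a ratio above 1.
population_share = {"hispanic": 0.230, "male": 0.490}  # hypothetical census targets
sample_share     = {"hispanic": 0.247, "male": 0.470}  # hypothetical survey shares

def weight_for(groups):
    """Multiply the population/sample ratio for each group the respondent belongs to."""
    w = 1.0
    for group in groups:
        w *= population_share[group] / sample_share[group]
    return w

print(round(weight_for(["hispanic"]), 2))          # about 0.93: Hispanics slightly overrepresented
print(round(weight_for(["hispanic", "male"]), 2))  # about 0.97: the male adjustment pulls it back up
```

In practice the same kind of adjustment is applied across all of the weighting variables (gender, ethnicity, race, age and the rural-urban codes) before any results are calculated.[/acc_item]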
[acc_item title="Q: Why does the PIE Center get 500 responses?"]A: Five hundred is the standard number of responses needed to be representative of a state. As you increase your sample size, error is reduced. There does come a point, though, where the gain is less than the effort exerted to get those responses. Is 1,000 responses better than 500? Absolutely. Is it necessary for us to survey the entire population? No.
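To illustrate the diminishing returns, the textbook margin-of-error formula for a proportion can be computed directly. This is a generic, simple-random-sampling approximation at a 95 percent confidence level, not a description of the PIE Center’s own error calculations.

```python
# Approximate 95% margin of error for a proportion, assuming simple random sampling
# and the worst case p = 0.5. Quota sampling and weighting change the real picture.
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    return z * sqrt(p * (1 - p) / n)

for n in (500, 1000, 2000):
    print(n, round(100 * margin_of_error(n), 1))
# 500 respondents  -> about +/- 4.4 points
# 1000 respondents -> about +/- 3.1 points
# 2000 respondents -> about +/- 2.2 points
```

Doubling the sample from 500 to 1,000 shrinks the error by only about a point, which is the trade-off described above.[/acc_item]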
[acc_item title="Q: If the responses are weighted, why do you include demographics in the report?"]A: We have been reporting the raw information that shows who physically took our survey, but that’s not necessarily how their answers are counted. We will likely not include this raw information in future reports to avoid the confusion.[/acc_item]
[acc_item title="Q: What can we expect from the surveys in 2014?"]A: We will add interesting and timely questions based on new developments in each issue area. For example, this year’s water survey asked about the “Water War” going on between Florida and Georgia. We also added some questions that measure Floridians’ awareness of different policies, which is very similar to the food survey we did. This will allow us to be a bit more consistent across the four topics. Also, we are adding questions that measure Floridians’ willingness to change their behavior when associated with costs. We have seen that residents are interested in conserving water, for example, but will they still be interested if it means their lawns are not as green?[/acc_item]
[/accordion]