2022 Collaborative Midterm Survey FAQs

We will continue to update the FAQ list as we receive questions, adding the most recently asked questions to the top of each grouping.

Questions?

View the recording of our informational webinar or peruse frequently asked questions sorted thematically below. Email our team at midtermsurvey@cornell.edu with additional questions.

Budget:

What can the $225,000 budget cover? For example, is this the budget just to conduct the survey, or does it also cover other costs?

The budget of up to $225,000 per team covers any costs associated with the team’s data collection and deliverables. We understand that specific expenditures may differ across teams.

We believe we may be "under budget" with n=6,400. Do you have a preference between "stick to n=6,400 and propose the budget associated with it" or "boost the n to what you feel would work best for your portion of the research as long as it's $225K or less"?

Given this specific tradeoff, “boost the n to what you feel would work best for your portion of the research as long as it’s $225K or less” seems preferable. 

Proposal Details:

Can you provide some thoughts on how you want the proposals to look?

You can preview the questions asked in the proposal, along with the response options, here.

Are you looking for the selected team to provide only data collection support, or also to incorporate various adjustment and estimation strategies after data collection?

In addition to collecting data, teams will provide post-stratification weights and/or adjustments. Teams should detail their weighting/adjustment strategies in the proposal.
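
To make the expectation concrete, post-stratification reweights completed interviews so that the weighted sample matches known population benchmarks on chosen characteristics. Below is a minimal sketch in Python; the adjustment cells, benchmark shares, and variable names are hypothetical illustrations, not project specifications.

```python
# Minimal post-stratification sketch (illustrative only; each team's actual
# weighting/adjustment strategy is whatever its proposal describes).
import pandas as pd

# Hypothetical respondent-level data with one adjustment variable.
sample = pd.DataFrame({
    "resp_id": range(1, 9),
    "age_group": ["18-34", "18-34", "35-64", "35-64", "35-64", "65+", "65+", "65+"],
})

# Hypothetical population benchmarks, expressed as shares.
population_share = pd.Series({"18-34": 0.30, "35-64": 0.50, "65+": 0.20})

# Post-stratification weight = population share / sample share within each cell.
sample_share = sample["age_group"].value_counts(normalize=True)
sample["weight"] = sample["age_group"].map(population_share / sample_share)

# After weighting, the cell shares match the population benchmarks.
print(sample.groupby("age_group")["weight"].sum() / sample["weight"].sum())
```

In practice a team might use many more cells, rake across several margins, or apply model-based adjustments; the sketch only shows the basic idea.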

Survey Interview and Fielding Details:

Is it expected that different questions will be asked of pre- vs. post-election completes?

No, it is not expected that questions will differ pre/post-election, although teams may propose some different questions if they wish. 

Is a longitudinal design (i.e., collect pre-election completes and then re-interview them post-election, as in the CES or ANES) expected?

No re-interviews are expected.

Are there expectations for the percentage of completes that will be pre- vs. post-election?

There are no specific expectations. If a team proposes a specific allocation of responses pre vs. post, we will be interested in the rationale. We will also work with the selected teams to coordinate anticipated interview dates as much as possible.

What organization(s) would be identified to respondents as the study sponsor(s)?

We will emphasize the overall survey and sponsor, not specific organizations.

How are you thinking about fieldwork timing, especially for methodologies with differing field-period requirements (e.g., online non-probability ~2-3 days vs. ABS ~3-4 weeks)?

We will work with teams to maximize overlap of field dates. That said, we recognize that different methodologies may necessitate some differences. 

Is it a problem if respondents are being compensated in either the probability or non-probability samples?

Not a problem at all. It is completely fine to provide compensation to respondents.

Additional Methodological Questions:

Do you have an ideal array of methodologies you would want included within the 3 vendors / $675k?

We do not have a predetermined array of methodologies. We will be most interested in the rationale for the proposed methods (or combination of methods) and why the team believes they are ideally suited to accomplish the survey goals. Changing patterns of technological engagement, declining and differential response rates, and rapidly evolving survey methods mean that past performance of methods no longer ensures future performance, so we do not have preconceptions about which combination of methods is best.

There is growing interest in disaggregating to the local level (including FIPS codes). Will we be able to do that?

If a team wants to include specific local-level indicators in their sampling strategy, that's fine.

Weighting:

Is it expected that pre- and post-election completes will be weighted separately to allow for separate pre- and post-election analyses?

We envision a single weight for all completes. That said, a team could propose to provide pre- and post-election weights for those specific analyses.

Questionnaire:

The RFP states that 25 of the questions will differ across surveys – that is, there will be a core questionnaire of 55 questions, plus three modules of 25 questions per module. Will the awardee have input into the choice of questions (e.g., to suggest certain items that may be useful for calibrating nonprobability data) or will they be chosen wholly by the funder?

Yes, the awardee will have input on these questions.

How will it be decided who receives each survey version?

Since awardees will have input, we will work with awardees to determine which 25-question module goes with which team.

Will the awardee be permitted to include quality control questions in the survey instrument?

Yes.

Will the awardee have input about mode-specific question verbiage – for example, minor text differences to ask the same question in an online mode vs. a phone mode?

Yes.

Will the questionnaire consist of only closed-ended questions, or will there be full open-ends as well? If full open-ends are included, what level of code development/coding for text responses will be required for survey responses?

There will be some open-ended questions. We would expect limited coding, such as recoding misspellings or different capitalization into the same word. Additional specifics would be discussed with the teams.
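
As a rough illustration of that light-touch coding, the Python sketch below lowercases responses, trims whitespace, and maps a few misspellings to a single value; the example terms and corrections are hypothetical, not project coding rules.

```python
# Illustrative normalization of open-ended responses: collapse capitalization,
# stray whitespace, and known misspellings into one canonical value.
# The terms and corrections here are hypothetical examples only.
MISSPELLINGS = {
    "infaltion": "inflation",
    "inflaton": "inflation",
}

def recode_open_end(raw: str) -> str:
    """Lowercase, trim whitespace, and map known misspellings to one value."""
    cleaned = raw.strip().lower()
    return MISSPELLINGS.get(cleaned, cleaned)

responses = ["Inflation", "infaltion ", "INFLATION", "crime"]
print([recode_open_end(r) for r in responses])
# -> ['inflation', 'inflation', 'inflation', 'crime']
```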

Are the study PIs / Senior Advisors looking for support from each awardee to cognitively pretest questions for respondent comprehension before the start of data collection on October 26? Or will the final survey instrument provided to each firm by the PIs and Senior Advisors only include questions that have already been field tested for respondent comprehension?

We anticipate that the majority of questions will have been previously field-tested, but awardees are welcome to provide input into question wording and to conduct further pretesting.

Will (or can) the questionnaire include fields that would allow linking to voter files (name, address, etc.)?

Yes. If you have plans to link to voter files or other auxiliary data, include this in the proposal. (Question 23 asks for this type of detail.)

Could the modules have an experimental component?

Yes. 

Do you have a sense of what will be asked in the 55-question core portion of the survey? We want to make sure we are not redundant in what we propose.

Project teams will definitely have input on the additional questions, and you should feel free to reference proposed content and ideas for the additional questions as appropriate, but this will not be a focus of the proposal evaluations.

Sampling and Sample Size:

Are there minimum sample size and subgroup requirements for each survey version?

The only requirements are a minimum probability sample size (1,200) and a minimum overall sample size (6,400).

Will California, Wisconsin, and Florida samples use one of the three survey versions, or will each state have a set of state-specific questions?

Each state will get each of the three versions.

Are proposals all supposed to oversample those three states in particular?

We’re interested in the most compelling strategies to yield accurate survey results at the national level and in these three states in particular. Oversamples could be one strategy for the states but we are not requiring that strategy.

Will the selected research teams be coordinating on sample/potential panels (e.g., if teams use web/text panels) to limit double-dipping?

Yes, avoiding sampling approaches that could result in unintentionally re-interviewing the same respondent(s) will be a priority.

Do you envision the population of interest to be more the general adult population in the United States or the voting population in the United States? 

Because this is an election survey, we are most interested in identifying likely and eligible voters. That said, a team could propose to interview the general adult population while including information that identifies voters, allowing analyses of both the general adult population and the voting population.

Do you have a recommendation for what state sample sizes would be reasonable? 

We are intentionally not making specific recommendations on sample size. Instead, we are interested in each proposal’s rationale for the sample sizes they propose.

Hard-to-Reach Respondents:

We understand one of the proposal evaluation criteria will be how a firm will strategize to include hard-to-reach populations like non-English speakers. Will each firm separately be responsible for translating the final survey instrument into non-English languages? Or will there be a coordinated effort to use the same translations for consistency across firms if the firms’ study design includes the same proposed languages?

We will work with teams to help coordinate a common strategy regarding languages.

Deliverables:

In addition to final weighted data and study documentation, what other deliverables are expected by each firm (for example, cross-tabulations, topline, written analysis, data visualization)?

Only a topline report and full methodological details.

What interim deliverables are expected during data collection, and how frequently are they needed?

Interim data delivery is not strictly required. However, one interim data delivery with methodological information during data collection would help speed up the data archiving process at the Roper Center. We also anticipate that some teams may wish to share results of their soft launch with us.

What are the preferred or compatible data format or formats for the PIs and Senior Advisors (for example, SAS, SPSS, STATA, Excel)?

For archiving purposes, CSV, SPSS portable, and Stata would be ideal, but we can work with any format.
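
For illustration, a data delivery could export the same respondent-level file in each of those formats. The sketch below uses pandas for CSV and Stata; the pyreadstat call for the SPSS portable file is an assumption about one possible library, not a project requirement.

```python
# Export one respondent-level file in the archive-friendly formats noted above.
import pandas as pd

df = pd.DataFrame({"resp_id": [1, 2, 3], "weight": [0.9, 1.1, 1.0]})

df.to_csv("midterm_survey.csv", index=False)          # CSV
df.to_stata("midterm_survey.dta", write_index=False)  # Stata
# pandas does not write SPSS portable (.por) files directly; a library such as
# pyreadstat could be used instead (assumption, not a requirement), e.g.:
# import pyreadstat; pyreadstat.write_por(df, "midterm_survey.por")
```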

Data Launch/Hackathon:

For firms attending the data launch and Hackathon in NYC, what presenting role, if any, would the firms play in the event? Or would firms be attendees/observers only?

We are planning for firms/teams to have a presentation role, but we have not finalized the schedule.

Teams:

Is it up to 3 researchers per proposal or in total?

Up to three teams will be selected to collect data. There is no limit to the number of individuals on each team. It could be a single person or a large team.

Our team was already planning on conducting a midterm survey. Can we submit a proposal based on this already-planned project, or do proposals need to be proprietary to this project?

Yes, as long as the original project can meet the requirements of this project (e.g., field dates and sample sizes, use of the project questionnaire, archiving the data and all methodological details with the Roper Center and the project).

Are firms that typically work with a political party eligible to apply?

Yes. We expect proposals from a range of potential sources, which might include university researchers, students, market research firms, traditional survey firms, political polling firms, non-profit organizations, etc. Proposals will be evaluated anonymously, so the project team will not have any information about the past clients a particular proposal team has worked with.

General:

Just curious—why Wisconsin as opposed to other states?

We acknowledge that other states play a critical role in the midterms. One reason we chose WI is that the state seemed especially challenging for pollsters in 2020. For example, a Washington Post-ABC News poll from October 20-25, 2020 (a random sample of 809 likely voters in Wisconsin, 71% cell phone, 29% landline) showed Biden up by 17 percentage points when he won by less than one percentage point. Given our goal of understanding the implications of various methodological approaches for survey accuracy, we thought including WI would be especially valuable.

Who is eligible to apply to the Data Innovation Award? 

Undergraduate and graduate students are eligible to apply, as well as early-career professionals (within six years of their highest degree).
