
Screening Question Design — How to Let Only the Right Respondents Through

A practical guide to designing screening questions (SC) for surveys. Covers incidence rate (IR), quota planning, common failure patterns, and the design rules that prevent off-target respondents from polluting your data.

"We collected N=500, but when I ran the analysis, almost none of them were our actual target." — a self-inflicted disaster that still happens routinely. Eight times out of ten, the root cause is a poorly designed screening section (SC). And yet, screening is treated as a warm-up to the main survey and rarely gets the time it deserves — a pattern industry articles call out repeatedly.

This article walks through how screening questions should actually be designed: the basic structure, common failure modes, a five-step design process, the math behind incidence rate (IR) and quotas, and the rules you cannot skip in operations. If you take quantitative research seriously, this is a piece of survey design that quietly determines whether your data is worth analyzing.

1. What screening questions are

Screening questions (SC) are the cluster of questions placed before the main survey to verify whether the respondent matches your target population. Respondents who don't qualify are terminated immediately; only qualified respondents move into the main questionnaire (MQ).

What screening does

  • Filters down to the target — confirms demographics, behaviors, attitudes
  • Allocates to quotas — manages cell-level completion targets
  • Optimizes cost — in panel research, minimizes payouts to non-qualifiers
  • Protects data quality — keeps off-target answers from contaminating analysis

Typical structure

[SC1] Gender: Male / Female / Other
[SC2] Age: numeric input (auto-bins into age bands)
[SC3] Region / state
[SC4] Did you purchase any product in category X in the last 3 months? Yes / No
[SC5] Purchase frequency: at least monthly / less than monthly / never

→ Anyone who said "No" to SC4 or "never" to SC5 is terminated
→ Qualified respondents continue to [MQ Q1]
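
To make the routing concrete, here is the terminate/qualify logic above as a minimal Python sketch — the question IDs and answer labels mirror the example structure, but the function and data shapes are ours, not any tool's API:

```python
# Minimal sketch of the screening routing above. SC4/SC5 IDs and
# answer labels follow the example; everything else is illustrative.

def screen(responses: dict) -> str:
    """Return 'TERMINATE' or 'QUALIFY' for one respondent's SC answers."""
    # SC4: purchased in category X in the last 3 months?
    if responses.get("SC4") == "No":
        return "TERMINATE"
    # SC5: purchase frequency — "never" disqualifies
    if responses.get("SC5") == "never":
        return "TERMINATE"
    return "QUALIFY"  # continue to MQ Q1

print(screen({"SC4": "Yes", "SC5": "at least monthly"}))  # QUALIFY
print(screen({"SC4": "No",  "SC5": "at least monthly"}))  # TERMINATE
```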

Academically, Couper (2008) Designing Effective Web Surveys frames screening as the mechanism that minimizes the gap between your "intended population" and your "achieved sample". When screening fails, the assumptions underlying every downstream statistic fail with it.

2. Why screening matters — what goes wrong when it fails

Failure 1: off-target respondents pollute the data

You set out to survey "people who visited a café in the last six months," loosen the screener, and end up with people guessing based on faint memories. Before any statistical analysis, the responses themselves are no longer reliable.

AAPOR's Standard Definitions defines eligibility verification as a foundational requirement for controlling sampling and coverage error — and the SC is what carries out that verification.

Failure 2: quotas blow up and you have to re-field

You set quotas of "100 women in their 20s / 100 in their 30s / 100 in their 40s," start fielding, and the 20s cell fills up almost immediately while the 40s cell never completes. If you backfill only the 40s, you compound bias on top of bias.

Failure 3: misjudging IR vaporizes the budget

You target "subscription coffee delivery users" assuming an IR of 10%, then discover the actual IR is 2% — and your cost is 5× the plan. Without an IR estimate up front, panel projects routinely run into budget disasters.

Pew Research Center's methodology documentation treats IR estimation and sample design as a single, inseparable planning step.

3. The five-step screening design process

Field practice converges on five steps.

Step 1: define population and target precisely

Write out, in one sentence, who you're trying to represent: e.g., "U.S. women aged 20–49 who purchased cosmetics online in the last 3 months." Every screening question is justified against this sentence.

Step 2: enumerate the conditions to verify

Decompose the target sentence into checkable conditions:

  • Demographic conditions — gender, age, region, occupation, income
  • Behavioral conditions — purchase, usage, frequency, recency
  • Attitudinal conditions — category interest, brand awareness

Step 3: order the questions

The default heuristic is broad → narrow, harmless → sensitive, recall-light → recall-heavy.

Position | Example | Why
Top | Gender, age | Coarse attributes for the whole sample
Middle | Region, occupation | Mid-grained filters
Later | Usage, purchase frequency | Sensitive or recall-heavy
End | Detailed conditions | Last-mile checks for qualifiers only

Step 4: design quotas

Set cell-level completion targets and back-calculate required contacts from IR. This is where our guide to determining sample size becomes the input for the screener.

Step 5: pilot before going live

Run a small pilot (N=30–100) before the main fielding. Measure actual IR, completion time, and where respondents drop out. Skipping the pilot virtually guarantees that the main fielding will surface a problem you could have caught in a day.
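
As a sketch of the feedback loop (the pilot counts below are invented for illustration), the pilot's observed IR and completion rate can be fed straight into the required-contacts calculation from section 5:

```python
# Sketch: turn pilot results into an updated fielding plan.
# The counts are made-up pilot numbers for illustration only.

pilot_contacts = 80       # respondents who entered the screener
pilot_qualified = 6       # passed all SC conditions
pilot_completed = 5       # qualified AND finished the MQ

observed_ir = pilot_qualified / pilot_contacts            # 7.5%
observed_completion = pilot_completed / pilot_qualified   # ~83%

target_completes = 500
required_contacts = target_completes / (observed_ir * observed_completion)
print(f"IR {observed_ir:.1%}, completion {observed_completion:.1%} "
      f"-> ~{required_contacts:,.0f} contacts needed")
# IR 7.5%, completion 83.3% -> ~8,000 contacts needed
```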

4. Five rules to follow when designing the SC

Rule 1: one condition per question (MECE)

Don't compress multiple conditions into one question. Asking "Are you a woman in her 20s who purchased cosmetics in the last 3 months?" makes the routing logic unworkable. Split conditions into separate questions and combine them in skip logic.
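
A minimal sketch of what "combine them in skip logic" means in practice — each question carries exactly one condition, and only the routing predicate joins them (question IDs follow the earlier example; the code is illustrative, not any tool's syntax):

```python
# Each SC question checks exactly one condition; only the routing
# predicate combines them. Answer values are illustrative.

def qualifies(r: dict) -> bool:
    is_woman   = r["SC1"] == "Female"         # one condition per question
    in_20s     = 20 <= r["SC2"] <= 29
    bought_3mo = r["SC4"] == "Yes"
    return is_woman and in_20s and bought_3mo  # combined here, not in the wording

print(qualifies({"SC1": "Female", "SC2": 24, "SC4": "Yes"}))  # True
print(qualifies({"SC1": "Female", "SC2": 35, "SC4": "Yes"}))  # False
```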

Rule 2: always include a "none of these" answer

Provide "None of the above" or "I have not purchased it" explicitly. Krosnick (1991) Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys shows that without an opt-out, respondents tend toward satisficing — picking something plausible to keep going — and slip past your screener.

Rule 3: distinguish "currently" from "in the past"

"Do you use product X?" is ambiguous between current and past use. Use explicit timeframes: "currently using," "used in the past 3 months," "used in the past 12 months," "never used."

Rule 4: place sensitive items at the end of the SC

Income, health, religion, political views — these belong near the end of the screener, not the top. Front-loading sensitive items spikes drop-off.

Rule 5: don't leak the "right answer"

Never preface the survey with "this study is for users of product X." It creates an incentive for non-qualifiers to claim qualification, which destroys validity at the root. In industry circles, telegraphing the target is described as a self-inflicted wound — that level of basic.

5. The math: incidence rate and quota design

Use this formula at the design stage to keep quotas from collapsing.

Required contacts

Required contacts = Target completes / (IR × Completion rate)

Example: target 500 completes, IR 10%, completion 80% → 500 / (0.10 × 0.80) = 6,250 contacts.
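
The same arithmetic as a small helper (a sketch; the function name is ours). Rounding up matters, since you can't recruit a fraction of a contact:

```python
import math

def required_contacts(target_completes: int, ir: float, completion_rate: float) -> int:
    """Contacts to feed into the screener, rounded up to a whole respondent."""
    return math.ceil(target_completes / (ir * completion_rate))

print(required_contacts(500, 0.10, 0.80))  # 6250, matching the example above
```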

Per-cell management

A typical women × age-band quota plan:

Cell | Target | Estimated IR | Completion rate | Required contacts
Women, 20s | 100 | 12% | 80% | 1,042
Women, 30s | 100 | 10% | 80% | 1,250
Women, 40s | 100 | 8% | 75% | 1,667
Women, 50s | 100 | 5% | 70% | 2,857
It is normal to vary IR and completion estimates per cell — vendor benchmarks from Centiment and Pollfish repeatedly observe that older cells have lower IR and lower completion than younger cells.
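
Applying the same back-calculation per cell (the IR and completion figures are the table's estimates) reproduces the required-contacts column:

```python
# Per-cell back-calculation using the formula above; IR and
# completion figures are the table's estimates.
cells = [
    ("Women, 20s", 100, 0.12, 0.80),
    ("Women, 30s", 100, 0.10, 0.80),
    ("Women, 40s", 100, 0.08, 0.75),
    ("Women, 50s", 100, 0.05, 0.70),
]
for name, target, ir, completion in cells:
    print(f"{name}: ~{target / (ir * completion):,.0f} contacts")
# ~1,042 / ~1,250 / ~1,667 / ~2,857 — matching the table
```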

6. Editorial view — five rules that make a real difference

Tracking industry reports and public case studies, here are five things we'd push hard on.

1. Spend as much time on the SC as on the main questionnaire. The SC isn't a warm-up — it's a separate design discipline that determines whether the rest of the project is salvageable. Teams that spent 30 minutes on the SC and got blindsided in analysis show up in industry articles again and again. Allocate at least one hour, ideally two. That time isn't optional polish; it's a precondition for usable data.

2. Don't underestimate self-report bias — it will eat you alive. When you ask "in the past 3 months," human memory tends to inflate the recency window by 1.5–2× — forward telescoping, a pattern repeatedly demonstrated in survey-response research (Tourangeau et al., 2000). If your screener says "in the past 3 months," design the project assuming you're effectively asking "in the past 6 months."

3. Skipping the pilot will cost you a week. A pilot of N=50 catches 70–80% of the problems that would otherwise blow up your main fielding. Pilot in one day, fix in another, and you've spent two days. Skip the pilot and you'll lose one to two weeks reworking the field. The ROI is so lopsided there's no defense for skipping.

4. Fear "false qualifies" more than "false disqualifies." A loose SC lets non-target respondents through and corrupts the data. A tight SC drops some real targets and raises cost. The first failure mode is far more expensive — wrong decisions cost orders of magnitude more than recruiting cost. When in doubt, screen tighter, not looser.

5. Don't sequence the SC purely for quota convenience. Asking gender and age up front is efficient for routing. But putting sensitive items or recall-heavy items at the top spikes drop-off. Optimize for total completion rate, not for the screener's own routing efficiency. This trade-off shows up in nearly every methodological guide for a reason.

7. Screening in the Survey Tool Kicue

Kicue ships every component you need for serious screening operations.

The SCREEN question type

The SCREEN type lets you mark each option as "qualify" or "terminate" directly. Compared to the workaround of bolting skip logic onto a generic single-answer (SA) question, the design intent is explicit and easier to review and modify later.

Skip logic and display logic

When qualification depends on a combination of SC questions, use skip logic and display logic together. Conditions like "Continue only if SC1 = A AND SC3 = B" are straightforward to express.

Quota management with real-time monitoring

The quota module tracks per-cell targets against live completion counts. Once a cell (e.g., women in their 20s) hits its target, the system can auto-screen-out further qualifiers in that cell.

URL parameters for panel handoff

When traffic comes from an external panel, URL parameters carry over the panel ID, gender, age band, and any other attributes you've already verified — so you don't need to re-ask them in the SC.
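
For example (a sketch — the base URL and parameter names here are hypothetical, not Kicue's actual scheme), a panel handoff link might be built like this:

```python
from urllib.parse import urlencode

# Hypothetical handoff URL: base URL and parameter names are illustrative.
base = "https://survey.example.com/s/abc123"
params = {"panel_id": "P-001234", "gender": "F", "age_band": "30-39"}
print(f"{base}?{urlencode(params)}")
# https://survey.example.com/s/abc123?panel_id=P-001234&gender=F&age_band=30-39
```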

Choosing the right tool — Free plan limits, branching support, AI capabilities, and CSV export vary widely across tools. See our free survey tool comparison to find the right fit for this approach.

Summary

Checklist for designing and operating screening questions:

  1. The SC is not a warm-up — it's a separate design domain. It minimizes the gap between intended and achieved samples.
  2. Eight in ten failures originate in the SC — off-target leakage, quota collapse, IR misjudgment.
  3. Five steps: define population → enumerate conditions → order questions → design quotas → pilot.
  4. Five rules: one condition per question, always include "none," distinguish current vs past, sensitive items last, never leak the right answer.
  5. Required contacts = target completes / (IR × completion rate) — estimated per cell.
  6. Teams that take the SC seriously win in analysis. It's the lifeline of the entire project.

In our experience reading industry reports, "teams that cut corners on the SC end up cutting corners on analysis too." Whether you treat the screener as an independent design object is the fork in the road that determines survey quality.


References

Academic and methodological

  • Couper, M. P. (2008). Designing Effective Web Surveys. Cambridge University Press.
  • Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology, 5(3), 213–236.
  • Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The Psychology of Survey Response. Cambridge University Press.

Standards bodies and methodology centers

  • AAPOR. Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys.
  • Pew Research Center. Survey methodology documentation.

Vendor benchmarks (treated as industry observations)

  • Centiment — panel benchmarks on incidence rates and completion rates.
  • Pollfish — panel benchmarks on incidence rates and completion rates.


Want to design and run screened surveys end-to-end? Try the free survey tool Kicue. The SCREEN question type, skip logic, and live quota management ship out of the box, so your screening design carries straight through to analysis.
