
How to Calculate Survey Sample Size — Confidence Levels, Formulas, and the 384 Rule

Survey sample size is determined by three things: confidence level, margin of error, and population size. At 95% confidence and a ±5% margin of error, the answer is 384 respondents, the industry standard. This guide covers the lookup tables, the formulas, subgroup adjustments, and the practical sweet spot, in five steps.

The bottom line: at 95% confidence and ±5% margin of error, you need 384 respondents when the population is large enough. That's the industry baseline derived from Cochran's formula. The reason is simple: once the population exceeds about 10,000, the required sample size mathematically converges to a near-constant. Whether your population is 1 million or 100 million, you need roughly 400 respondents.

But "384 and you're done" isn't quite right. Subgroup analysis, IR (incidence rate), and response quality can each push the practical requirement to 1.5–3× larger. This piece walks through deciding the right sample size in five steps.

Step 1: Decide the goal with three numbers (2 min)

Sample size is determined by three parameters:

① Confidence level

"If you ran the same survey 100 times, in how many of those runs would the computed interval contain the true value?"

| Confidence | Use case | Z-value |
| --- | --- | --- |
| 90% | Quick reads, exploratory | 1.645 |
| 95% (standard) | Industry standard | 1.96 |
| 99% | Medical / regulatory | 2.576 |

95% is the default for academic papers, market research, and operational decisions.
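For confidence levels not in the table, the Z-value can be computed directly instead of looked up; a minimal sketch using only the Python standard library (the two-tailed quantile of the standard normal distribution):

```python
from statistics import NormalDist

def z_value(confidence: float) -> float:
    """Two-tailed Z-value for a confidence level, e.g. 0.95 -> 1.96."""
    alpha = 1 - confidence
    return NormalDist().inv_cdf(1 - alpha / 2)

for level in (0.90, 0.95, 0.99):
    print(f"{level:.0%}: Z = {z_value(level):.3f}")
# 90%: Z = 1.645, 95%: Z = 1.960, 99%: Z = 2.576
```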

② Margin of error

"How many percentage points of error are you willing to accept around your reported %?"

| Margin of error | Use case | Required N |
| --- | --- | --- |
| ±10% | Rough or pilot | ~100 |
| ±5% (standard) | Industry standard | ~384 |
| ±3% | High-precision research | ~1,067 |
| ±1% | National polling, census-grade | ~9,604 |

Required N scales with the inverse square of the margin of error: cutting the margin in half quadruples the sample size. Even the smaller step from ±5% to ±3% nearly triples N, from 384 to 1,067.

③ Population size

| Population | Required N (95% confidence, ±5% error) |
| --- | --- |
| 100 | 80 |
| 500 | 217 |
| 1,000 | 278 |
| 5,000 | 357 |
| 10,000 | 370 |
| 100,000+ | ~384 |

The required sample converges as population grows. For surveys of "all consumers" or "all SNS users," just remember 384 and you're set.

Step 2: Use the formula (only when needed, 3 min)

For most cases the lookup table above is enough, but knowing the formula expands your judgment.

Cochran's formula (infinite population)

n = (Z² × p × (1−p)) / e²
  • Z: Z-value for confidence (1.96 at 95%)
  • p: Expected proportion (use 0.5 if unknown — that's the maximum)
  • e: Margin of error (0.05 at ±5%)

For 95% confidence, ±5% error, p=0.5:

n = (1.96² × 0.5 × 0.5) / 0.05² = 3.8416 × 0.25 / 0.0025 = 384.16 → 385
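The worked example can be checked in a few lines; a minimal sketch of Cochran's formula, which also confirms the inverse-square behavior from Step 1 (halving e quadruples n):

```python
import math

def cochran_n(e: float, z: float = 1.96, p: float = 0.5) -> float:
    """Cochran's formula: required sample size for an infinite population."""
    return z**2 * p * (1 - p) / e**2

print(round(cochran_n(0.05), 2))   # 384.16
print(math.ceil(cochran_n(0.05)))  # 385
# Halving the margin of error (0.05 -> 0.025) quadruples the requirement:
print(cochran_n(0.025) / cochran_n(0.05))  # 4.0
```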

Finite population correction

If your population is ≤10,000, you can reduce the requirement:

n_adj = n / (1 + (n − 1) / N)

Example: Population = 1,000. Apply the correction to the unrounded n = 384.16: 384.16 / (1 + (384.16 − 1)/1,000) = 384.16 / 1.383 ≈ 278.
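The correction is easy to script against the population table from Step 1; a minimal sketch (rounded up, so values may sit within one respondent of lookup tables that round differently):

```python
import math

def fpc_n(n0: float, population: int) -> int:
    """Apply the finite population correction to an uncorrected n0, rounding up."""
    return math.ceil(n0 / (1 + (n0 - 1) / population))

n0 = 1.96**2 * 0.25 / 0.05**2          # 384.16 at 95% confidence, ±5% error
for pop in (100, 1_000, 10_000):
    print(pop, fpc_n(n0, pop))
# 100 -> 80, 1,000 -> 278, 10,000 -> 370
```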

Online calculators

Don't want to do the math? SurveyMonkey's calculator and others compute this in seconds.

Step 3: Adjust for subgroup analysis (5 min)

This is where generalist how-to articles and specialist guidance diverge, and it is the most commonly missed step.

If you collect 384 for the overall picture and then split by gender × age group, each cell has 30–50 respondents. Subgroup numbers are no longer reliable at that depth.

Cell sizing for subgroups

| Analysis depth | Required total N |
| --- | --- |
| Overall only (single GT) | 384 |
| Gender × age (8 cells) | 50 × 8 = 400–800 |
| Gender × age × region (24 cells) | 30 × 24 = 700–1,500 |
| 4-way crosses (48 cells) | 1,500–3,000 |

For subgroup analysis, N≥30 per cell is the floor. Below that, subgroup means and percentages become essentially uninterpretable.
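The floor is simple multiplication, but writing it down keeps scoping honest; a minimal sketch (assumes perfectly even cell fill, which real quotas never achieve, hence the ranges in the table):

```python
def min_total_for_cells(n_cells: int, min_per_cell: int = 30) -> int:
    """Lower bound on total N when every cell must reach min_per_cell.
    Real fieldwork needs headroom on top, since cells never fill evenly."""
    return n_cells * min_per_cell

print(min_total_for_cells(8, 50))   # gender x age at 50 per cell -> 400
print(min_total_for_cells(24))      # gender x age x region at the N=30 floor -> 720
```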

A common failure: "we'll just analyze it deeper too"

A client asks, "Can we also look at age × occupation × region?" and you agree while collecting only 384. The result: roughly 5 respondents per cell, and the cross-tabs are noise. If you take on the request, scope to N=1,500–2,000 for a real 3-way subgroup analysis.

Step 4: Factor in IR and response rate (3 min)

In the field, "how many to invite" and "how many will reply" are two different numbers.

Formula

Required invitations = Required N ÷ IR ÷ Completion rate
  • IR (incidence rate): Of those invited, the share that actually qualifies for the survey
  • Completion rate: Of those who started, the share that finished

Example: 384 women in their 30s who use grocery delivery

  • IR: among 30s women, ~40% use grocery delivery → IR = 0.4
  • Completion rate (typical): 70% → 0.7
Required invitations = 384 / 0.4 / 0.7 = 1,371.4 → 1,372

So you need to screen-distribute to ~1,400 people to land 384 qualified completes. See the screening question design and operations guide for the full mechanics.
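The reverse calculation is a one-liner; a minimal sketch, rounding invitations up since you can't invite a fraction of a person:

```python
import math

def required_invitations(n: int, ir: float, completion: float) -> int:
    """Invitations needed to land n qualified completes, rounded up."""
    return math.ceil(n / ir / completion)

print(required_invitations(384, ir=0.4, completion=0.7))  # 1372
```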

When IR is unknown

Use prior internal data or industry reports (Statista, trade associations). If neither exists, run a N=30–50 pilot to measure IR before committing to the main fielding. See the pilot testing guide.

Step 5: The practical sweet spot (3 min)

Theoretical N and field N differ. In practice, most projects land between N=200 and N=500.

Field rules of thumb

| Scenario | Recommended N |
| --- | --- |
| Internal team survey | 30–100 |
| Customer satisfaction (overall trend) | 200–400 |
| Customer satisfaction (with subgroup analysis) | 500–1,000 |
| Brand / market research | 1,000–3,000 |
| Public policy / large-scale stats | 5,000+ |

Add a "comfortable buffer"

  • Estimate completion rate low (use 60% rather than 70%)
  • Account for cleaning losses (5–10% will be excluded post-cleaning)
  • Leave room for unexpected subgroup analysis (design for the largest case)

Plan to collect 1.2–1.5× the theoretical minimum as practical buffer.
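The buffered target can be sketched the same way; the 1.3× default below is an assumption picked from the middle of the 1.2–1.5× range above:

```python
import math

def planned_n(required_n: int, buffer: float = 1.3) -> int:
    """Practical collection target: theoretical minimum times a buffer."""
    return math.ceil(required_n * buffer)

print(planned_n(384))        # 384 x 1.3 -> 500
print(planned_n(384, 1.5))   # upper end of the range -> 576
```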

Three common pitfalls

1. Stopping at "384 is enough." For overall trends only, 384 works. For subgroup analysis it doesn't. Before agreeing to the sample plan, confirm the cross-tab axes the team plans to use. "Age × occupation × region" requires 1,500+, not 384.

2. Treating required N as required invitations. Assuming "invite 384, get 384 responses" is the most common mistake. Completion rates vary widely by channel (panel: 60–80%, email lists: 5–15%, social: 1–5%). Always reverse-calculate: invitations = N ÷ IR ÷ completion rate.

3. Skipping the pilot when IR is unknown. Going to main fielding with "I'll assume 40% IR" — only to discover actual IR is 10% — quadruples your invitation costs. A 30–50 pilot to measure IR prevents this kind of cost disaster.

Choosing the right tool — Free plan limits, branching support, AI capabilities, and CSV export vary widely across tools. See our free survey tool comparison to find the right fit for this approach.

Summary — 5 steps

| Step | Task | Time |
| --- | --- | --- |
| 1 | Decide goal with three numbers (confidence, error, population) | 2 min |
| 2 | Use the formula or lookup table | 3 min |
| 3 | Adjust for subgroup analysis | 5 min |
| 4 | Calculate invitations from IR and completion rate | 3 min |
| 5 | Add a practical buffer (×1.2–1.5) | 2 min |
| Total | | 15 min |
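The five steps chain together; a minimal end-to-end sketch (the function name, parameter names, and defaults are illustrative, with p = 0.5 and a 1.3× buffer assumed):

```python
import math
from statistics import NormalDist

def plan_sample(confidence=0.95, margin=0.05, population=None,
                cells=1, min_per_cell=30, ir=1.0, completion=0.7,
                buffer=1.3):
    """Steps 1-5 in sequence: returns (collection target, invitations)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)   # Step 1: Z-value
    n = z**2 * 0.25 / margin**2                          # Step 2: Cochran, p = 0.5
    if population:                                       # Step 2: finite correction
        n = n / (1 + (n - 1) / population)
    n = max(math.ceil(n), cells * min_per_cell)          # Step 3: subgroup floor
    target = math.ceil(n * buffer)                       # Step 5: practical buffer
    invitations = math.ceil(target / ir / completion)    # Step 4: invitations
    return target, invitations

print(plan_sample())                   # overall trend only
print(plan_sample(cells=24, ir=0.4))   # 3-way subgroup analysis at 40% IR
```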

The answer to "how many respondents do I need?" is 384 for overall trends, 500–1,500 for subgroup analysis in most practical cases. For the full statistical foundations, see the sample size design guide.


After you've sized the sample

Once N is set, revisit it during aggregation and significance testing. Both "N too small to detect a real difference" and "N so big that trivial differences become statistically significant" are common field traps. See survey aggregation and significance testing for the analysis-time view.


Try Kicue — a free survey tool: upload a questionnaire file and AI auto-generates the web survey form. Screening, quota management, and live response monitoring ship as standard, so the sample design carries cleanly into operations.


Ready to create your own survey?

Upload your survey file and AI generates a web survey form in 30 seconds.

Get started for free