
Designing Past 'What People Say They Do' — Social Desirability Bias in Surveys

A research-grounded guide to social desirability bias (SDB) in surveys. Covers when and why respondents give 'socially acceptable' answers instead of honest ones, classic mitigation methods, and five design rules to get closer to truth.

When you ask "What's your annual income?", the share of people who report it accurately is generally 60–80%. The rest inflate it, deflate it, leave it blank, or pick something nearby — all of which open gaps between what people say and what is actually true. This isn't laziness; it's Social Desirability Bias (SDB), a psychologically predictable mechanism. The everyday English phrase for it is "what people say they do vs. what they actually do." This article puts that gap on a research-grounded footing.

This piece walks through the structure of SDB, the cognitive mechanisms behind it, the question types where it shows up most, classic mitigation techniques, and design-level moves you can make. Beyond the obvious cases (CSAT/NPS), this matters in HR pulse surveys, health research, and social studies where decisions hinge on the data — so the focus here is what to be deliberate about so your numbers come closer to truth.

1. What SDB really is

Social Desirability Bias (SDB) is the tendency for respondents to answer in ways that present them favorably, rather than reporting their actual opinions, behaviors, or attributes. The everyday phrase "saying what's expected of me" captures the same thing.

The academic definition

Crowne & Marlowe (1960), "A New Scale of Social Desirability Independent of Psychopathology", introduced the Marlowe-Crowne Social Desirability Scale (MCSDS) to measure SDB, and the construct has sat at the center of survey methodology and psychometrics for over 60 years. Paulhus (1984) later split SDB into two separate dimensions — "impression management" and "self-deception" — codified in the Balanced Inventory of Desirable Responding (BIDR).

Two directions

SDB pushes responses in two directions:

| Direction | What happens | Examples |
| --- | --- | --- |
| Inflation (over-reporting) | Desirable behaviors / attributes reported above reality | Voting, exercise, donations, income |
| Deflation (under-reporting) | Undesirable behaviors / attributes reported below reality | Drinking, smoking, prejudice, absenteeism |

Tourangeau & Yan (2007) "Sensitive Questions in Surveys", the canonical methodological review in Psychological Bulletin, summarizes: "For sensitive questions, the gap between expressed and actual behavior commonly runs 10–30%."

2. Why "saying what's expected" creeps in — the cognitive mechanisms

SDB isn't capricious. It comes from predictable structures in the way people respond.

Mechanism 1: impression management

Respondents are aware of how their answer might be seen by others. Even on anonymous surveys, self-image management kicks in toward the surveyor, the survey company, and society at large, pulling answers away from the truth. The sociological foundation is Goffman (1959) The Presentation of Self in Everyday Life (Anchor Books).

Mechanism 2: self-deception

Sometimes respondents believe their own façade. Someone earnestly thinks "I'm an environmentally aware person" while their actual behavior says otherwise. They aren't consciously lying, which makes self-deception the hardest mechanism to address through design.

Mechanism 3: conformity to social norms

When certain values (health, environment, fairness) are framed as "correct", responses drift in that direction. Even hint at the norm in the survey's preamble or wording, and the answers move — repeatedly demonstrated in Schuman & Presser (1981) Questions and Answers in Attitude Surveys (Academic Press).

Mechanism 4: avoiding cognitive load

Verbalizing the truth is cognitively expensive. Picking the "safe" answer is a way to avoid that load — overlapping with the satisficing behavior described in Krosnick (1991) Response Strategies for Coping with the Cognitive Demands of Attitude Measures in Surveys, and especially common in the latter part of long surveys.

3. Where it shows up most

SDB doesn't spread evenly. It concentrates in specific topics.

Where SDB hits hard

Synthesizing the Tourangeau & Yan (2007) review:

| Domain | Typical bias |
| --- | --- |
| Health behaviors | Exercise / vegetable intake over-reported; smoking / drinking under-reported |
| Money | Income / savings over-reported; debt / spending under-reported |
| Politics & civic | Voting / donations / volunteering over-reported |
| Prejudice & bias | Awareness of bias under-reported (asking directly drives unconscious answer adjustment) |
| Sex & illegal behavior | Number of partners often inflated by men, deflated by women |
| Workplace & HR | Overtime / time-off use / boss ratings drift toward the polite answer |

Where SDB is weak

The effect is small in some areas:

  • Personal preferences (food, entertainment taste)
  • Functional product evaluation (usability, design)
  • Demographics (age band, gender, region)

CSAT-style "satisfaction with our service" sits in the middle. Industry literature has noted repeatedly that "the politeness norm of not complaining" can mix in significant SDB-style bias in many cultures, especially East Asian ones.

4. Classic mitigation methods

Academia has built a toolkit for reducing SDB.

4-1. Anonymity assurance

The most basic move. Stating up front that "responses can't be tied to individuals" lowers the impression-management drive. DeMaio (1984) "Social Desirability and Survey Measurement: A Review" (in Turner & Martin (Eds.), Surveying Subjective Phenomena, Vol. 2. Russell Sage Foundation) reports that guaranteed anonymity improves both response rate and accuracy on sensitive items.

In practice:

  • Up-front explanation: "We won't use this in any way that identifies you. Aggregate analysis only."
  • Architectural separation of respondent ID and content (relevant when handling URL parameters in Kicue)
  • Separate survey administrator from data consumer (the client team can't read individual responses)

4-2. Indirect questioning

Instead of "Do you…?", ask "Do people you know…?" or "Generally, do people…?" Respondents talk about others while unconsciously projecting themselves. Fisher (1993) Social Desirability Bias and the Validity of Indirect Questioning demonstrates the effectiveness empirically.

4-3. Randomized response technique (RRT)

Warner (1965) Randomized Response: A Survey Technique for Eliminating Evasive Answer Bias introduced the classic version: a coin flip, unseen by the surveyor, decides whether the respondent answers the sensitive statement or its negation. Because no single "yes" can be deciphered, respondents are free to be honest, and the researcher recovers the prevalence statistically. Implementation is complex, so it's rare in web surveys, but it's used in sensitive political research.
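To make the estimator concrete, here is a minimal Python simulation of Warner's design. The trait prevalence (20%), coin probability (0.7), and sample size are made-up values for illustration:

```python
import random

def warner_estimate(yes_share, p):
    # With probability p the respondent answers "I have trait A";
    # otherwise "I do NOT have trait A". Observed yes-share:
    #   lam = p*pi + (1-p)*(1-pi)  =>  pi = (lam + p - 1) / (2p - 1)
    if abs(2 * p - 1) < 1e-9:
        raise ValueError("p = 0.5 makes the design uninformative")
    return (yes_share + p - 1) / (2 * p - 1)

# Simulate 100,000 respondents with true prevalence 20% and coin p = 0.7
random.seed(42)
true_pi, p, n = 0.20, 0.7, 100_000
yes = 0
for _ in range(n):
    has_trait = random.random() < true_pi
    truth_question = random.random() < p   # which statement the coin picked
    yes += has_trait if truth_question else not has_trait
print(round(warner_estimate(yes / n, p), 3))  # close to 0.20
```

No individual answer reveals the trait, yet the aggregate recovers the prevalence. The price is added variance, which is one reason RRT needs large samples.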

4-4. List experiment / item count technique

"How many of the following apply to you?" — the respondent reports a count only, with no item-by-item disclosure. Nobody has to admit which item applies, which lowers SDB. Glynn (2013) What Can We Learn with Statistical Truth Serum? reviews the design and analysis of the technique.
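The technique's estimator is just a difference in means between a control list and a treatment list that adds the sensitive item. A toy sketch with made-up counts (three innocuous items):

```python
from statistics import mean

def list_experiment_estimate(control_counts, treatment_counts):
    # Control respondents count how many of K innocuous items apply;
    # treatment respondents see the same K items plus one sensitive item.
    # The mean difference estimates the sensitive item's prevalence.
    return mean(treatment_counts) - mean(control_counts)

control   = [1, 2, 0, 3, 1, 2, 1, 2]   # mean 1.5
treatment = [2, 2, 1, 3, 2, 3, 1, 2]   # mean 2.0
print(list_experiment_estimate(control, treatment))  # 0.5 → ~50% prevalence
```

With real data you would also report a standard error; the point here is that no respondent ever discloses the sensitive item directly.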

4-5. Concurrent measurement of self-presentation

Embed the MCSDS or BIDR alongside the main survey, and adjust responses statistically by SDB score. Standard in academic studies, but in practical surveys the trade-off against survey length usually rules it out.
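As a sketch of the "adjust statistically by SDB score" idea (not a full MCSDS/BIDR workflow), one can partial a linear SDB component out of the raw scores. All numbers are illustrative:

```python
def sdb_adjusted(scores, sdb_scores):
    # Fit scores = a + b * sdb by ordinary least squares, then remove
    # the SDB-predicted shift from each response.
    n = len(scores)
    mx = sum(sdb_scores) / n
    my = sum(scores) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(sdb_scores, scores))
         / sum((x - mx) ** 2 for x in sdb_scores))
    return [y - b * (x - mx) for x, y in zip(sdb_scores, scores)]

# Four respondents: satisfaction ratings and their MCSDS-style SDB scores
print(sdb_adjusted([7, 6, 5, 4], [10, 8, 6, 4]))  # [5.5, 5.5, 5.5, 5.5]
```

In this toy case the entire spread in satisfaction tracks the SDB score, so the adjusted values collapse to the mean — a reminder that heavy-handed adjustment can erase real signal along with the bias.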

5. Five design-level moves

The strict academic methods (4-3, 4-4, 4-5) often don't translate to operational surveys. Here are five practical design moves that lower SDB without that overhead:

Move 1: assert anonymity twice — at the start and right before sensitive items

State "we won't identify you individually" both at the survey opener and immediately before each sensitive question. People forget the opening message; the in-context reminder is what works.

Move 2: explicitly say "there's no right answer"

"There are no right answers — please share your candid view" / "Negative feedback is welcome" lowers the psychological cost of choosing the "undesirable" answer. Industry articles return to this point regularly.

Move 3: visually symmetric scales

Make sure "very satisfied" and "very dissatisfied" carry equal weight in label, color, and placement. Highlighting only "very dissatisfied" in red ironically makes people avoid it — keep the two ends visually symmetrical. Detailed scale design lives in our Likert scale design guide.

Move 4: pair direct and indirect questions

For example, in leadership evaluation:

  • Direct: "Rate your manager's leadership on a 7-point scale."
  • Indirect: "How do you think your team as a whole rates your manager's leadership?"

Collect both, and flag SDB whenever direct and indirect diverge significantly.
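The divergence check is easy to automate. A minimal sketch; the one-point threshold is an assumed cutoff to calibrate per instrument, not a standard:

```python
def flag_sdb(direct, indirect, threshold=1.0):
    # Compare a direct self-rating with its indirect (projected) twin.
    # Returns the gap and whether it exceeds `threshold` scale points.
    gap = round(direct - indirect, 2)
    return gap, gap > threshold

# Leadership example from the text: direct 6.2 vs indirect 4.8 on a 7-point scale
print(flag_sdb(6.2, 4.8))  # (1.4, True) → likely SDB inflation
```

A flagged item doesn't prove bias on its own, but it tells you which questions deserve a qualitative follow-up.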

Move 5: place sensitive items later

Putting sensitive questions at the top of a survey primes wariness across all subsequent items (order effect). Place them after some trust has built — mid-survey to late — as covered in our question order effects guide.

6. Editorial view — five rules for getting closer to truth

From tracking industry reports and public cases, five things we'd push hard on.

1. Don't try to eliminate SDB. Design with it. SDB is wired into how people respond, so driving it to zero isn't possible. A more realistic posture: estimate how much bias each question carries, and account for it when interpreting the data. "We assume 5–10% inflation on the satisfaction question" is a more useful stance than "the data is the truth."

2. Teams that "trust the survey" overinterpret the results. Industry articles are full of confident "We learned X from N=1,000!" claims that under-count SDB in the analysis. Surveys are useful, but they get close to truth only when paired with behavioral data and qualitative interviews. Teams that refuse to make important decisions on a survey alone end up making better calls.

3. Don't let "anonymity" become boilerplate. "Your privacy is protected" reads as throwaway language. Be specific about who sees what, and who doesn't. "Your individual response is never shared with the team — only aggregate numbers" beats "data is handled securely" by a wide margin.

4. Always pilot sensitive items. A pilot of N=30–50 surfaces non-response rates and midpoint clustering (answers piling up on the neutral option). Going straight to the main field and discovering 30% blanks is a textbook design failure the pilot would have caught.

5. SDB varies by industry, country, and generation. Don't lift overseas benchmarks blindly. SDB expression differs sharply across cultures, as noted by Johnson & van de Vijver (2003) "Social Desirability in Cross-Cultural Research" (in Cross-Cultural Survey Methods. Wiley). In some markets ("complaining is rude," "humility is virtuous"), scores trend systematically lower than European/American benchmarks. Importing CSAT/NPS benchmarks from another region and concluding "we're underperforming" is dangerous.
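The pilot checks in rule 4 can be scripted. A minimal sketch, assuming pilot responses arrive as a list with None for blanks on a 7-point item:

```python
def pilot_diagnostics(responses, scale_max=7):
    # Blank rate over all responses; midpoint share over answered ones.
    n = len(responses)
    blanks = sum(1 for r in responses if r is None)
    answered = [r for r in responses if r is not None]
    mid = (scale_max + 1) / 2              # 4 on a 7-point scale
    mid_share = sum(1 for r in answered if r == mid) / len(answered)
    return blanks / n, mid_share

# Toy pilot of N=10 (real pilots: N=30-50 per the text)
blank_rate, mid_share = pilot_diagnostics([4, None, 4, 5, 4, None, 3, 4, 4, 2])
print(blank_rate, mid_share)  # 0.2 0.625
```

A 20% blank rate or a majority sitting on the midpoint would both argue for rewording before the main field.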

7. Bias mitigation in the Survey Tool Kicue

Kicue ships components for SDB and broader response-bias mitigation as standard.

Anonymity in practice

The public survey URL lets you collect responses anonymously as long as you don't pass an individual ID through URL parameters. If you instead pass a CRM customer ID or email through URL parameters, responses become linkable to individuals — in which case respondents must be told how their data will be handled. If you say "we'll aggregate anonymously," design the run without personal IDs in the URL — that's the precondition for SDB mitigation.
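One way to keep that precondition honest is a pre-distribution check on the survey URL. The parameter names below are illustrative guesses, not Kicue-specific keys — adapt the set to your own setup:

```python
from urllib.parse import urlparse, parse_qs

# Parameter names that would tie a response to a person (illustrative list)
IDENTIFYING_KEYS = {"email", "customer_id", "crm_id", "user_id"}

def is_anonymous_url(survey_url):
    # True if the distribution URL carries no identifying query parameters
    params = parse_qs(urlparse(survey_url).query)
    return IDENTIFYING_KEYS.isdisjoint(k.lower() for k in params)

print(is_anonymous_url("https://example.com/s/abc?segment=trial"))  # True
print(is_anonymous_url("https://example.com/s/abc?email=a@b.com"))  # False
```

Running a check like this on every distribution link keeps the "we aggregate anonymously" promise verifiable rather than aspirational.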

Sequencing sensitive items via skip and display logic

Combine skip logic and display/conditional logic to build a funnel from demographics → general questions → sensitive questions. The order-effects argument is detailed in our question order effects guide.

SDB mitigation overlaps tightly with other survey-design topics. See also our Likert scale design guide, matrix question design, screening question design, open-ended question design, and question order effects.

Choosing the right tool — Free plan limits, branching support, AI capabilities, and CSV export vary widely across tools. See our free survey tool comparison to find the right fit for this approach.

Summary

Checklist for handling social desirability bias:

  1. SDB is built into human response. Don't expect to eliminate it — design with it.
  2. It hits hardest in health, money, politics, prejudice, and sex — but 5–10% inflation in CSAT/NPS is also normal.
  3. Classic mitigations: anonymity assurance / indirect questioning / RRT / list experiment.
  4. Five design moves: anonymity twice / "no right answer" notice / symmetric scale design / direct-indirect pair / sensitive items later.
  5. Don't import overseas benchmarks blindly — many cultures show systematically lower scores.
  6. Don't make important calls on a survey alone — pair with behavior data and qualitative work.

The "what people say vs. what they do" gap is a domain where the intelligence to recognize the limits of measurement and design with them is what matters. Teams that accept SDB and bake it into design and interpretation, instead of trying to stamp it out, end up with better decisions.


References

Academic and methodological

  • Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology.
  • Paulhus, D. L. (1984). Two-component models of socially desirable responding. Journal of Personality and Social Psychology.
  • Tourangeau, R., & Yan, T. (2007). Sensitive questions in surveys. Psychological Bulletin.
  • Goffman, E. (1959). The Presentation of Self in Everyday Life. Anchor Books.
  • Schuman, H., & Presser, S. (1981). Questions and Answers in Attitude Surveys. Academic Press.
  • Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude measures in surveys. Applied Cognitive Psychology.
  • DeMaio, T. J. (1984). Social desirability and survey measurement: A review. In C. F. Turner & E. Martin (Eds.), Surveying Subjective Phenomena (Vol. 2). Russell Sage Foundation.
  • Fisher, R. J. (1993). Social desirability bias and the validity of indirect questioning. Journal of Consumer Research.
  • Warner, S. L. (1965). Randomized response: A survey technique for eliminating evasive answer bias. Journal of the American Statistical Association.
  • Glynn, A. N. (2013). What can we learn with statistical truth serum? Design and analysis of the list experiment. Public Opinion Quarterly.
  • Johnson, T. P., & van de Vijver, F. J. R. (2003). Social desirability in cross-cultural research. In Cross-Cultural Survey Methods. Wiley.


Want to design and run surveys closer to truth, end to end? Try the free survey tool Kicue. With 15+ question types, URL parameter-based attribute handoff, and granular skip and display logic, anonymity-first design and bias-aware sequencing carry directly into operations.
