Blog
Latest updates, guides, and tips from Kicue.
Survey Data Cleaning — Detecting Careless Responding and Setting Exclusion Thresholds
Survey data quality is decided during post-fielding processing. This guide walks through the detection metrics for careless responding (straight-lining, speeding, IRV, Mahalanobis distance) and how to set exclusion thresholds based on the academic literature.
Read more

Pilot Testing for Surveys — How Far to Validate Before Going Live
Skip the pilot, and a wording defect that surfaces in main fielding becomes a 1–2 week rework. This guide covers what N=30–100 can and can't tell you, how to combine cognitive interviews with quantitative pretests, and the operational loop to run pilot → fix → main fielding cleanly.
Read more

Survey Question Wording — Double-Barreled, Leading, and the 7 Pitfalls That Distort Your Data
Survey question wording can shift response distributions by 10–30 points. This guide covers the 7 high-risk patterns (double-barreled, leading, double negatives), Tourangeau's 4-stage cognitive model, and rewriting rules with before/after examples.
Read more

Designing Past 'What People Say They Do' — Social Desirability Bias in Surveys
A research-grounded guide to social desirability bias (SDB) in surveys. Covers when and why respondents give 'socially acceptable' answers instead of honest ones, classic mitigation methods, and five design rules to get closer to truth.
Read more

Likert Scale Design Guide — 5-Point vs 7-Point vs 9-Point and the Midpoint Question
A research-grounded guide to designing Likert scales. Covers how to choose the number of points, whether to include a neutral midpoint, label design, and the long-running statistical debate — the foundational measurement device behind CSAT, NPS, and CES.
Read more

Designing Open-Ended Survey Questions — How to Get Both Quality and Quantity
A research-grounded guide to designing open-ended (OA / FA) survey questions. Covers how question wording, text-area size, and probes affect response quality, with practical rules for designing items respondents will actually answer.
Read more

Question Order Effects in Surveys — How Earlier Items Bias Later Answers
A research-grounded guide to question order effects in surveys. Covers primacy, recency, anchoring, and question-order effects, with five design rules and operational judgment on when to randomize and when to fix.
Read more

Matrix Question Design — 5 Pitfalls That Quietly Distort Your Data
How to design matrix (grid) questions that don't wreck your data. Cognitive load, straight-lining, optimal grid size, and the design patterns that protect quality — backed by academic research and field practice.
Read more

Screening Question Design — How to Let Only the Right Respondents Through
A practical guide to designing screening questions (SC) for surveys. Covers incidence rate (IR), quota planning, common failure patterns, and the design rules that prevent off-target respondents from polluting your data.
Read more

Quantitative vs Qualitative Research: When to Use Surveys, Interviews, and Focus Groups
When to run a survey, an in-depth interview, or a focus group — the structural differences, practical decision criteria, and how mixed methods combine them.
Read more