"We added open-ended questions and got back mostly blanks or 'nothing in particular.'" That's one of the most common complaints in survey operations. Open-ended (free-text) questions are the only way to capture customers' actual words, but a poorly designed one quickly produces three pains at once: nobody writes / responses are too short / nothing can be analyzed downstream.
This article walks through what open-ended questions are actually for, why they're skipped, when to use them vs. closed-ended items, the wording rules, and the UX choices that determine whether respondents engage. The naïve belief that "more open-ends = deeper insight" is only half right — the focus here is on what to be deliberate about so people actually write.
1. What an open-ended question is
An open-ended question (also abbreviated OA or FA, for "free answer") gives the respondent no predefined options and asks them to type an answer in their own words.
What open-ended items are good at
- Catching what you didn't anticipate — surfaces options or framings the designer never considered
- Getting the why behind a score — supplements numeric ratings with reasoning
- Preserving the customer's actual words — useful for marketing copy and internal storytelling
- Filling gaps in closed-ended coverage — captures what people meant by "Other"
Typical structure
Q. How would you rate our service overall? (10-point scale)
→ Score = 6
Q. Please tell us why you gave that score. (open text)
→ "Pricier than competitors, but the support quality makes up for it."
Schuman & Presser (1979), "The Open and Closed Question," framed open-ended items as "the way to surface salient issues that closed-ended items can't see." At the same time, the same line of research has noted for 40+ years that leaning on open-ends sharply increases cognitive load, which spikes blanks and one-word answers.
2. Why people don't write — the cognitive load problem
The single biggest enemy of open-ended questions is cognitive load. Picking from options is "choose what's there." Open-ends require "verbalize this from scratch," which is far more expensive cognitively.
Empirical evidence on response rate and quality
Smyth, Dillman, Christian & McBride (2009) "Open-Ended Questions in Web Surveys" is the reference paper on how open-ends behave in web surveys. Key observations:
- Position effects — open-ends near the start of a survey see sharply higher non-response; open-ends late in the survey get shorter answers.
- Length distribution — most answers run 30–100 characters; fewer than 10% exceed 200.
- "Nothing" / "N/A" rate — 15–30% of all open-ended responses are functionally non-answers (one word, negation).
The four-stage burden
What's actually happening in the respondent's head can be decomposed into four stages:
- Comprehending the question — what am I being asked?
- Retrieving from memory — what relevant experiences come to mind?
- Verbalizing — converting thought into text (the heaviest step)
- Typing — physically inputting characters
Mapped onto the comprehension → retrieval → judgment → reporting model of Tourangeau, Rips & Rasinski (2000, The Psychology of Survey Response), closed-ended items mostly tax only reporting; open-ends tax all four stages.
3. Open vs. closed — when to use which
The reflex of "more open-ends will deepen our customer understanding" is half wrong. The two formats have different jobs.
How to choose
| Goal | Best format | Why |
|---|---|---|
| Quantify and compare | Closed (SA/MA) | Statistical analysis presupposes structured data |
| Discover unknown options | Open-ended | Comes into its own in hypothesis formation |
| Get reasons behind a score | Score + open-ended | Numeric + qualitative pair |
| Capture trends at scale (large N) | Closed | Reading 1,000 free-texts is impractical |
| Few but deep voices | Open-ended (interview supplement) | When N is small, open-ends shine |
"If a closed item can do it, use closed"
Geer (1991) "Do Open-Ended Questions Measure Salient Issues?" found that about 80% of the information from an open-ended item can be captured by a well-designed closed-ended item. Conversely, the remaining 20% — the unexpected discoveries — is what open-ends are for. Drop the assumption that "all-open-ended = deeper understanding."
4. Five wording rules
Five wording rules on the question side that get respondents to actually write.
Rule 1: name the specific target in the question
❌ "Do you have any feedback?" — too abstract; respondents don't know what to write. ✅ "If there's anything you'd like our support team to improve, please describe it specifically." — target (support) and angle (improvement) are explicit.
Rule 2: keep the question 1–2 sentences
Long question text scatters attention before respondents finish reading; keep the essential ask to roughly 40–80 characters. Holland & Christian (2009), "The Influence of Topic Interest and Interactive Probing," shows that verbose question text directly raises non-response.
Rule 3: distinguish "why" from "how"
"Why did you say that?" pulls reasons / background. "How could we improve it?" pulls solutions / requests. Mixing them confuses respondents about what to write — and that confusion is a major source of "nothing in particular."
Rule 4: connect to the previous item
Tie the open-end to the prior score or selection: "Please tell us why you gave that score..." / "If you chose 'Dissatisfied' in Q3...". Vendor case studies we've reviewed suggest this can roughly double response rates. Without that context, respondents don't know what they're being asked about, or why, and lose the motivation to write.
Rule 5: provide examples or placeholders
A blank input is psychologically intimidating. Placeholders and helper lines like "e.g., delivery was late, the product description differed from what arrived" make it easier to start writing. But strong examples bias responses toward those examples, so when you use them, present multiple directions (both positive and negative), as in the sketch below.
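A minimal sketch of this rule in plain DOM TypeScript; the container id and placeholder copy are illustrative, and the point is that the examples span both positive and negative directions:

```ts
// Rule 5 sketch: placeholder examples in both directions, so the hint lowers
// the barrier to writing without steering answers toward one sentiment.
const field = document.createElement("textarea");
field.rows = 4;
field.placeholder =
  'e.g., "delivery was late", "the description didn\'t match", ' +
  '"support resolved my issue quickly"';
field.setAttribute("aria-label", "Please tell us why you gave that score.");
// "#question-reason" is a hypothetical container id for this sketch.
document.querySelector("#question-reason")?.appendChild(field);
```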
5. UX rules
Question wording is half the battle; the input field design decides the other half.
Text-area size shapes response length
Israel (2010) "Effects of Answer Space Size on Responses to Open-Ended Questions in Mail Surveys" demonstrated that the size of the input space scales with the length of answers. The same effect holds in web surveys.
| Input field | Likely length | Use case |
|---|---|---|
| Single-line input (OA) | 5–20 chars | Short answers like product name, company name |
| 3–4 lines | 30–80 chars | Concise reasons or requests |
| 5–8 lines | 100–300 chars | Detailed experience narratives, improvement ideas |
| Auto-expanding | unlimited | UGC / review-style collection |
A perverse pattern: "let's just give them lots of room" with a 10-line box backfires, because the empty space looks intimidating and non-response goes up. Size the field deliberately to the length you actually want, as in the sketch below.
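A sketch of deliberate sizing, assuming the row counts from the table above (adjust for your form's line width); the auto-expanding variant grows with content instead of opening as a wall of empty space:

```ts
// Deliberate sizing: map the answer length you want to a field size.
// Row counts follow the table above and are assumptions, not fixed rules.
function sizeField(target: "short" | "reason" | "narrative"): HTMLTextAreaElement {
  // Single-line "short" answers often use <input> instead of a textarea.
  const rows = { short: 1, reason: 3, narrative: 6 }[target];
  const field = document.createElement("textarea");
  field.rows = rows;
  return field;
}

// Auto-expanding variant for review-style collection: start small, grow as typed.
function autoExpand(field: HTMLTextAreaElement): void {
  field.rows = 2;
  field.addEventListener("input", () => {
    field.style.height = "auto";                    // reset so the box can shrink
    field.style.height = `${field.scrollHeight}px`; // then fit current content
  });
}
```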
Required vs optional
- Required → response rate goes up, but "none," ".", "n/a" inputs slip through and don't actually improve quality.
- Optional → only people with something to say write. Higher-quality content but smaller N.
The mainstream operational pattern is a conditional-required setup: open-ends optional by default, but required for outliers (e.g., very low scorers). A validation sketch follows.
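A validation sketch of that conditional-required pattern; the outlier threshold (3 or below on a 10-point scale) and the 10-character minimum are assumptions to set against your own scale:

```ts
// Conditional-required sketch: the open-end stays optional unless the score is
// an outlier. Threshold and minimum length are assumptions -- tune for your scale.
interface Answer {
  score: number;   // 1-10 rating from the preceding closed item
  reason: string;  // the free-text follow-up
}

function validate(answer: Answer): string | null {
  const isOutlier = answer.score <= 3;
  if (isOutlier && answer.reason.trim().length < 10) {
    return "Since you rated us low, please tell us briefly what went wrong.";
  }
  return null; // valid -- the open-end is optional for everyone else
}

validate({ score: 2, reason: "" }); // returns the prompt -- answer required
validate({ score: 8, reason: "" }); // returns null -- optional, accepted
```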
Mobile input cost
Soft-keyboard typing on mobile is a real physical burden; vendor reports typically put mobile open-ended response length 30–50% shorter than desktop. If your audience is heavily mobile, expect open-ended yield to be far lower than desktop testing would suggest.
6. Designing for downstream AI analysis
LLM-powered analysis of open-ends (coding, sentiment, summarization) is now mainstream, and designing with downstream analysis in mind changes operational efficiency dramatically.
What plays well with AI analysis
- A clearly-scoped target — items like "about our support team" classify cleanly with high precision.
- One topic per item — combining "price + quality + delivery" into one open-end forces the model to disentangle mixed themes.
- Score + reason as a pair — pairing a numeric score with its reason makes "reasons behind high vs. low scores" straightforward to analyze; a minimal data model follows this list.
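A minimal data model for that pairing, sketched in TypeScript; the field names and the topic union are illustrative, not a prescribed schema:

```ts
// One record per open-end: the reason travels with the score it explains and
// the single topic the item was scoped to. Names here are illustrative only.
interface OpenEndResponse {
  questionId: string;                      // e.g., "q2-reason"
  topic: "support" | "price" | "delivery"; // one topic per item
  score: number;                           // the paired numeric rating
  text: string;                            // the respondent's own words
  tags?: string[];                         // added later by the classifier
}

// Downstream, "reasons behind low scores about support" becomes a plain filter:
const lowSupport = (rs: OpenEndResponse[]) =>
  rs.filter((r) => r.topic === "support" && r.score <= 4);
```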
What plays poorly with AI analysis
- Abstract questions — "do you have any feedback?" stumps models and humans alike.
- Multi-topic items — tagging breaks down.
- Items that only get ultra-short answers — under-30-character responses are usually noise even for LLMs.
For methodology details, see our guide to AI analysis of open-ended responses.
7. Editorial view — five rules that move the needle
From tracking industry reports and public cases, five things we'd push hard on.
1. Cap open-ends at 2–3 per survey. Teams that "want more customer voice" sometimes ship surveys with 5 or 6 open-ends — and we see this pattern in industry articles repeatedly. In practice, the more open-ends you add, the more steeply the later ones decay in quality. Pick the most important 1–2 and replace the rest with closed items. That's the realistic balance for yield and quality.
2. If "nothing in particular" exceeds 30%, redesign the item. A 30%+ non-response rate is a design signal. Either the question is too abstract, badly placed, or disconnected from the prior item. Don't shrug and conclude "people just aren't interested" — it's normal to recover 5+ points purely by rewording.
3. Avoid required open-ends as a default. Required items raise response counts but also the rate of "." / "x" / "none" garbage. Data quality typically gets worse, not better. High-quality 60% from optional > mixed-quality 95% from required is the operational consensus we keep seeing.
4. Don't size the text area on hope. "Let's give them lots of room and they'll write more" is a wish, not a rule. Box size does shape length (Israel 2010), but an oversized box full of empty space intimidates and can shrink responses instead. Pick a size that matches the length you actually want.
5. In an AI-analysis era, design items to be classifiable from the start. "We'll throw the data into an AI later and figure it out" runs into a hard truth: abstract questions produce un-tag-gable answers, even for LLMs. At design time, write down 5–10 tags you'd want to apply to the responses, and confirm the question can produce answers that fit those tags; that single discipline cuts downstream analysis cost roughly in half. A tag-schema sketch follows this list.
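A sketch of that design-time tag discipline; the tag names are illustrative examples, and the untagged-rate check is one way to spot an over-abstract question after a pilot:

```ts
// Write the tag set down before fielding the survey. Tag names below are
// illustrative examples -- replace them with the themes you actually expect.
const TAGS = [
  "response-speed", "staff-attitude", "pricing", "product-quality",
  "delivery", "documentation", "onboarding", "bug-report",
] as const;
type Tag = (typeof TAGS)[number];

// However classification happens (LLM or human coder), each response should
// reduce to this shape; an empty tags array means "un-tag-gable".
interface CodedResponse {
  text: string;
  tags: Tag[];
}

// Pilot-phase sanity check: a high untagged rate usually indicts the question,
// not the model.
const untaggedRate = (rs: CodedResponse[]) =>
  rs.filter((r) => r.tags.length === 0).length / Math.max(rs.length, 1);
```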
8. Open-ends in the Survey Tool Kicue
Kicue ships the components you need for serious open-ended operation as standard.
OA / FA question types
Open-ended question types come in two flavors: OA, a single-line input suited to short answers, and FA, a multi-line field for long-form, detailed narratives. Required/optional status and min/max character counts can be configured independently for each.
Related design articles
Open-ended design connects tightly to other survey-design topics. See also our matrix question design, screening question design, question order effects, and methods for AI analysis of open-ended responses.
Choosing the right tool: free plan limits, branching support, AI capabilities, and CSV export vary widely across tools. See our free survey tool comparison to find the right fit for this approach.
Summary
Checklist for designing and operating open-ended questions:
- Open-ends are for "unexpected discovery." Numeric analysis and large-scale trend reading belong to closed items.
- Non-response of 15–30% is the baseline. Design improves it sharply but cannot make it zero.
- Five wording rules: name the target / 1–2 sentences / distinguish why vs. how / connect to prior item / provide examples to start
- Text-area size shapes response length — too-large boxes can backfire by intimidation
- 2–3 open-ends per survey is the practical ceiling — quality of later open-ends decays steeply
- In an AI era, design for classifiability up front — retrofitting at analysis time is expensive
The naïve belief that "more open-ends = deeper customer voice" is often counterproductive. Use closed where closed will do, and concentrate open-ends on the 2–3 items that genuinely need them — that's the core of getting both quantity and quality from free text.
References
Academic and methodological
- Schuman, H., & Presser, S. (1979). The Open and Closed Question. American Sociological Review.
- Smyth, J. D., Dillman, D. A., Christian, L. M., & McBride, M. (2009). Open-Ended Questions in Web Surveys: Can Increasing the Size of Answer Boxes and Providing Extra Verbal Instructions Improve Response Quality? Public Opinion Quarterly.
- Holland, J. L., & Christian, L. M. (2009). The Influence of Topic Interest and Interactive Probing on Responses to Open-Ended Questions in Web Surveys. Public Opinion Quarterly.
- Israel, G. D. (2010). Effects of Answer Space Size on Responses to Open-Ended Questions in Mail Surveys. Journal of Official Statistics.
- Geer, J. G. (1991). Do Open-Ended Questions Measure Salient Issues? Public Opinion Quarterly.
- Tourangeau, R., Rips, L. J., & Rasinski, K. (2000). The Psychology of Survey Response. Cambridge University Press.
Want to design open-ended items end-to-end with collection and aggregation under one roof? Try the free survey tool Kicue. OA (single-line) and FA (multi-line) question types are built in, so you can match the field size to the depth of answer you need to maximize quality and yield.
