"Delight your customers." That slogan was CX orthodoxy for a decade — until a 2010 Harvard Business Review article, based on 75,000 customer interactions, flipped it. "Stop trying to delight your customers. Reduce their effort instead." That counterintuitive claim is the origin of what we now call CES (Customer Effort Score).
This guide walks through what CES is, where it came from, how to calculate it, how it compares with NPS and CSAT, what benchmarks look like, and the operational pitfalls teams run into. CES captures something CSAT and NPS can miss — the friction customers silently carry — and it's particularly powerful in support and customer success contexts.
1. What CES Is — The Original HBR Research
CES originated in Matthew Dixon, Karen Freeman, and Nicholas Toman's 2010 HBR article "Stop Trying to Delight Your Customers".
What the 75,000-interaction study found
Dixon and colleagues analyzed more than 75,000 customer interactions across support channels — both rep-assisted and self-service. Their findings:
- "Delighting" customers barely moves loyalty (repurchase, spending, advocacy)
- Reducing customer effort predicts loyalty better than CSAT or NPS in service contexts
- In support situations, "the easy solution" wins over "the memorable experience" by a large margin
This research directly challenged the then-dominant "wow the customer" school of thought.
The original question
The original CES question was a single item:
"How much effort did you personally have to put forth to handle your request?" 1 (Very Low Effort) to 5 (Very High Effort)
One question. One number. Strong predictive power for loyalty behavior — the appeal of CES.
The modern form — 7-point agreement scale
The current standard phrasing evolved from the original. CEB (now Gartner) revised it in 2013 to what's most widely used today:
" made it easy for me to handle my issue." 1 (Strongly Disagree) to 7 (Strongly Agree)
Phrasing the question as agreement with a positive statement (rather than asking about effort directly) reduces respondent strain and is now the default in most implementations. Note that the revision also flips the scale's polarity: on the original effort question a lower score was better, while on the 7-point agreement scale a higher score is better. The calculation methods below all assume the higher-is-better convention.
2. Calculation Methods and Scale Design
Vendor-published guidance converges on three common calculation approaches.
Method 1: Average Score
The simplest — average all responses:
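CES (average) = sum of all scores ÷ number of responses

For example, four responses of 6, 7, 4, and 5 give (6 + 7 + 4 + 5) ÷ 4 = 5.5.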
On a 7-point scale, 5.5+ is considered good; on a 5-point scale, 4.0+ is the threshold (per commentary from CustomerSure and other vendors). Treat these as widely shared industry reference values, not rigorously validated benchmarks.
Method 2: Percentage of "Easy"
Take the share of respondents who picked 5, 6, or 7 on a 7-point scale (analogous to CSAT's Top 2 Box):
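CES (% easy) = (responses scoring 5, 6, or 7 ÷ total responses) × 100

For example, 85 "easy" responses out of 100 total give a score of 85%.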
Vendor commentary frequently treats 85%+ as "excellent."
Method 3: Net CES
Borrowing the NPS logic, subtract the share of "difficult" responses (1–3) from the share of "easy" responses (6–7):
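Net CES = (% of responses scoring 6–7) − (% of responses scoring 1–3)

For example, if 60% of respondents score 6–7 and 10% score 1–3, Net CES is +50. Like NPS, the result ranges from −100 to +100.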
Which one should you use?
The most common choice is Method 1 (average) — simple, easy to trend over time. The caveat: averages hide the distribution, so complement it with a count of low-scoring respondents ("N respondents scored below 4") to avoid missing high-effort outliers.
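To make the three methods concrete, here's a minimal Python sketch, assuming responses on the 7-point scale arrive as a flat list of integers (the sample data is illustrative):

```python
# Minimal sketch: the three common CES calculations on a 7-point scale.
responses = [7, 6, 5, 7, 3, 6, 2, 7, 6, 5]  # illustrative sample data

# Method 1: average score
average_ces = sum(responses) / len(responses)

# Method 2: percentage of "easy" (top 3 box: scores of 5, 6, or 7)
pct_easy = 100 * sum(1 for r in responses if r >= 5) / len(responses)

# Method 3: net CES, i.e. % easy (6-7) minus % difficult (1-3)
pct_top = 100 * sum(1 for r in responses if r >= 6) / len(responses)
pct_bottom = 100 * sum(1 for r in responses if r <= 3) / len(responses)
net_ces = pct_top - pct_bottom

# The complement to the average: how many respondents reported high effort?
low_scorers = sum(1 for r in responses if r < 4)

print(f"Average CES: {average_ces:.2f}")      # 5.40
print(f"% Easy: {pct_easy:.0f}%")             # 80%
print(f"Net CES: {net_ces:+.0f}")             # +40
print(f"Respondents below 4: {low_scorers}")  # 2
```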
3. How Is CES Different from NPS and CSAT — Choosing the Right Tool
As covered in the CSAT guide, each metric has a distinct job:
| Metric | Measures | When to ask | Typical use |
|---|---|---|---|
| CES | Effort to complete a task | Right after the task | Support process, self-service, onboarding steps |
| CSAT | Satisfaction with a specific experience | Right after the event | Support quality, onboarding quality |
| NPS | Long-term loyalty / advocacy | Periodic | Executive KPI, brand health |
Where CES excels
CES is the strongest tool for finding process friction:
- After a support interaction — did the resolution process have unnecessary steps?
- After self-service usage (help center / FAQ) — did they find what they needed without struggle?
- After each onboarding milestone — did they get stuck setting up?
- After cancellation/offboarding — was exit unnecessarily complex?
Where CES doesn't fit
CES is the wrong tool for:
- Overall product evaluation — CSAT or NPS is better suited
- Emotional satisfaction — CES is deliberately neutral; "delight" isn't its domain
- Long-term loyalty prediction — NPS is the established tool
CES is a process-improvement metric, not a brand health metric. That distinction matters.
4. CES Benchmarks
Absolute cross-industry CES comparisons are tricky because the scale format, industry, and interaction type all materially affect the number; vendor guidance is consistent on this caveat. That said, here are the commonly cited reference values:
Benchmarks by scale format
| Scale | Good | Excellent |
|---|---|---|
| 5-point (average) | 4.0+ | 4.5+ |
| 7-point (average) | 5.5+ | 6.0+ |
| % of Easy method | 70%+ | 85%+ |
These figures come from Formbricks and Qualtrics, among others. They are industry reference points rather than peer-reviewed numbers, but the multi-vendor convergence makes them useful for target-setting.
Industry variation
- E-commerce / retail — simpler processes push CES higher
- B2B SaaS support — technical complexity pulls CES lower
- Finance / insurance — procedural complexity means industry-wide CES runs lower
"Our CES is 5.0, so we're good" isn't meaningful without same-industry comparison and your own time-series trend.
Gartner's famous stat
Gartner research, frequently cited in industry commentary, reports that 94% of customers with a low-effort experience indicate intent to repurchase, and 88% indicate intent to increase spending. These are vendor-published numbers rather than peer-reviewed findings, but they are widely referenced to motivate CES programs.
5. Common CES Design Pitfalls
Five patterns that recur in public case studies and industry commentary:
1. Sending the survey too late
CES needs to land right after the task completes. A few days later, memory decays, emotions normalize, and a genuinely frustrating experience ends up scoring neutral. Target within one to a few hours of resolution.
2. Inconsistent scale formats
If different touchpoints use 5-point and 7-point scales, or the wording varies between "effort" and "ease," time-series comparison breaks. Standardize the scale and phrasing across the organization.
3. Skipping the follow-up free-text
The score alone is only half the value. The free-text answer explaining why a respondent scored low is where the actionable diagnosis lives. The default setup should route low scorers to a "what was difficult?" prompt (a minimal routing sketch follows this list).
4. Using CES as a standalone KPI
CES measures process friction, not business health. Making CES the sole KPI blinds the team to brand and product issues. It belongs in a stack alongside NPS and CSAT, each doing the job it's designed for.
5. Assuming high CES = loyal customer
Dixon's original finding was that low effort predicts loyalty. That's correlation, not blanket causation. Low effort is necessary but not sufficient — you still need the product to deliver value.
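To ground pitfall 3, here's a minimal sketch of the low-score routing rule, assuming a 7-point scale and a cutoff of 4 or below for triggering the free-text prompt (the cutoff is a common choice, not a universal standard):

```python
# Minimal sketch of low-score routing on a 7-point CES scale.
FOLLOW_UP_THRESHOLD = 4  # assumed cutoff; tune to your own scale and data

def follow_up_prompt(ces_score: int) -> str | None:
    """Return the free-text prompt for low scorers, or None to skip it."""
    if ces_score <= FOLLOW_UP_THRESHOLD:
        return "What was difficult about handling your issue?"
    return None  # high scorers go straight to the thank-you page
```

In most survey tools this lives in a display condition rather than code, but the branching logic is the same: check the score before showing the free-text question.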
6. Editorial Take — Four Rules for Making CES Actually Work
After tracking public cases and industry commentary over time, here are four principles we'd push hard on:
1. Segment CES by touchpoint — never run one global CES. Support CES, onboarding CES, cancellation CES, self-service CES — all different numbers telling different stories. A single company-wide "CES = 5.2" effectively says nothing. Break it out per touchpoint or skip the exercise.
2. Put someone on the low-score free-text queue weekly. Collecting scores and stopping there wastes the most valuable part. Low-score verbatims are where the actionable diagnoses live. Decide who reads them every week, or the program quietly becomes theater.
3. Explicitly assign each metric to a decision. If you're running CES, CSAT, and NPS in parallel, write down which number drives which decision. "Support KPI = CES, product KPI = NPS, interaction quality = CSAT." Without this mapping, none of them ends up driving real calls.
4. Report in trend and segment, not absolute. "CES = 5.4" out of context tells you almost nothing. "+0.3 QoQ, weekday vs weekend delta of 0.5, new-user vs existing-user delta of 1.2" — that's the framing executives can act on.
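As a sketch of that reporting style, here's how per-touchpoint averages and a quarter-over-quarter delta could be computed with pandas, assuming a hypothetical response export with `touchpoint`, `quarter`, and `score` columns (all names illustrative):

```python
import pandas as pd

# Hypothetical CES export: one row per response (column names assumed).
df = pd.DataFrame({
    "touchpoint": ["support", "support", "support",
                   "onboarding", "onboarding", "onboarding"],
    "quarter": ["2025Q1", "2025Q2", "2025Q2",
                "2025Q1", "2025Q2", "2025Q2"],
    "score": [5, 6, 7, 4, 5, 4],
})

# Rule 1: never one global number. Average CES per touchpoint.
per_touchpoint = df.groupby("touchpoint")["score"].mean()

# Rule 4: report the trend. Quarter-over-quarter delta per touchpoint.
per_quarter = df.groupby(["touchpoint", "quarter"])["score"].mean().unstack()
qoq_delta = per_quarter["2025Q2"] - per_quarter["2025Q1"]

print(per_touchpoint)  # onboarding 4.33, support 6.00
print(qoq_delta)       # onboarding +0.50, support +1.50
```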
7. Designing CES in the Survey Tool Kicue
Kicue ships with the features a CES program needs:
- 5-point and 7-point scale questions — supports both the original and CEB-revised formats (scale reference)
- Low-score follow-up by design — use display conditions to show a "what was difficult?" free-text only to low-scoring respondents
- URL parameter integration with external systems — if Zendesk, Intercom, or another support tool appends ticket IDs and agent IDs to the CES public URL, Kicue auto-binds them for segment analysis (URL parameter docs; see the example below)
- GT / cross-tab analytics — counts and percentages per scale step are visualized. Percentage-of-Easy (sum of the top three options 5–7) can be read directly from the GT view or computed after CSV / Excel export
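As an illustration of the URL-parameter pattern above, the survey link might look like this (the domain and parameter names are hypothetical; see the URL parameter docs for the exact format Kicue expects):

```
https://survey.example.com/s/ces?ticket_id=T-12345&agent_id=A-789
```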
Upload a questionnaire file, and the platform auto-generates the CES design — survey structure, branching, and follow-up logic.
Choosing the right tool — Free plan limits, branching support, AI capabilities, and CSV export vary widely across tools. See our free survey tool comparison to find the right fit for this approach.
Recap
A CES operational checklist:
- Academic grounding: Dixon 2010 HBR — 75,000-interaction study
- Original form: single-item effort question; modern form: 7-point agreement scale
- Three calculation methods — average is the most common
- CES measures process friction, not brand health
- Benchmarks vary widely by scale, industry, and interaction type — compare on trend and segment, not absolutes
- Never run CES alone — pair it with NPS and CSAT
CES surfaces something CSAT and NPS miss: the quiet friction that makes customers leave. Deployed per touchpoint with follow-up verbatims captured, it's one of the highest-leverage CX metrics a team can run.
References (8)
Academic and original source
- Dixon, M., Freeman, K., & Toman, N. (2010). Stop Trying to Delight Your Customers. Harvard Business Review.
- MeasuringU: 10 Things to Know about the Customer Effort Score.
Industry benchmarks and vendor commentary
- Qualtrics: Customer Effort Score (CES) & How to Measure It.
- CustomerSure: What is Customer Effort Score (CES)?
- Formbricks: Customer Effort Score (CES): Questions, Formula & Benchmarks (2026).
- SurveyMonkey: Customer Effort Score — What Is CES And How To Measure It.
- Giva: Customer Effort Score (CES): How to Calculate & Improve It.
- Balto: What is a Good Customer Effort Score (CES)?
Related articles
Run CES, CSAT, and NPS programs on one platform with Kicue — a free survey tool that supports the full CX metric stack.
