
Conjoint Analysis in Practice — Designing for Preference and Price Sensitivity

How to use conjoint analysis to measure consumer preference and price sensitivity. Covers the three main formats (full-profile, CBC, MaxDiff), attribute-level design, sample sizing, and share simulations — backed by the academic literature.

"Product A scored 4.2/5, Product B scored 4.1/5. Let's launch A." The meeting wraps. The product that actually sold? B. Anyone who has worked in marketing research has lived this misjudgment. The reason is simple: monadic single-question evaluations don't measure preference. Consumers who rate both A and B as "fine" make their actual purchase based on trade-offs between price, packaging, and features.

The statistical method that measures those trade-offs is conjoint analysis. This piece walks through what conjoint solves, the three main formats and when to use which, attribute-level design, sample sizing, how to interpret results, the editorial pitfalls, and how to implement conjoint in a survey tool. It's a core method in marketing research, yet rigorous practitioner-focused English content on it is surprisingly thin.

1. What conjoint analysis solves

Why monadic evaluation can't capture preference

A 5-point question like "How interested are you in Product A?" measures liking when an item is shown alone — monadic evaluation. The problem is that real purchases aren't made in isolation; they're made through trade-offs.

Example: laptop choice

  • A: high performance, heavy, expensive
  • B: medium performance, light, mid-priced
  • C: low performance, light, cheap

In monadic evaluation, A typically scores highest (everyone "likes" high performance). But add the constraints "+50,000 yen" or "+500g" and preferences shift dramatically.

Why conjoint became the industry standard

Green & Srinivasan (1990) Conjoint Analysis in Marketing systematized conjoint as a method to statistically measure trade-offs across multiple attributes. It's widely used in marketing, economics, and public policy, and predicts actual purchase behavior more accurately than monadic evaluation in repeated empirical comparisons.

Common practical applications:

  • New product concept testing: which attribute combinations get chosen
  • Pricing strategy: price elasticity and optimal price points
  • Package optimization: which package elements matter most
  • Positioning: relative appeal vs. competitors

2. Three main formats — and when to use which

Conjoint comes in three main formats. Pick by purpose and sample size.

Format 1: Full-Profile Conjoint

Show cards (profiles) containing all attributes; respondents rank or rate them. The classical method.

  • Design: 8–16 cards × all attributes
  • Respondent load: medium to high
  • N: 100–300
  • Strengths: intuitive, direct utility estimation
  • Weaknesses: cognitive load grows fast as attribute count increases (5–6 attributes is the practical cap)

Format 2: Choice-Based Conjoint (CBC)

Multiple choice tasks where respondents pick "which would you choose" from 2–4 alternatives. The dominant modern approach.

  • Design: 8–15 tasks × 2–4 alternatives per task
  • Respondent load: medium
  • N: 200–500 (aggregate part-worths), 500–1,000 (HB models)
  • Strengths: closer to real purchasing, supports share-of-preference simulation
  • Weaknesses: design and analysis complex (orthogonal design + hierarchical Bayes)

Louviere, Hensher, & Swait (2000) Stated Choice Methods established the random-utility theoretical foundation.

Format 3: MaxDiff (Maximum Difference Scaling)

Pick "most important / least important" from 4–5 items per task. Specialized for importance measurement.

  • Design: each task presents 4–5 items; respondents pick the best and the worst
  • Respondent load: low
  • N: 200–500
  • Strengths: clean rank scaling, handles many items (10–30)
  • Weaknesses: not suited for price or numerical attribute simulation

When to use which

Goal | Recommended format | Typical N
Predict price sensitivity / purchase intent | CBC | 500
Rank importance across many items | MaxDiff | 300
Utility measurement on few attributes (4–5) | Full-Profile or CBC | 200
Share simulation | CBC + HB | 500–1,000

In practice, CBC accounts for roughly 70–80% of conjoint studies and MaxDiff for 20–30%; full-profile is rare in modern usage.

3. Designing attributes and levels

Conjoint quality is determined by design. Mess up attributes and levels, and no amount of N will rescue the results.

Attribute count

  • CBC: 4–7 attributes is standard; 8+ degrades quality through cognitive load
  • Full-Profile: 4–5 attributes is the cap
  • MaxDiff: 10–30 "items" works

Levels per attribute

2–5 levels per attribute. 3–4 is most common.

Example: smartwatch conjoint

Attribute | Levels
Price | $199 / $299 / $499 / $799
Battery | 1 day / 3 days / 7 days
Health features | Heart rate / + sleep / + sleep + stress
Brand | Brand A / Brand B / Brand C / Our brand

→ 4 × 3 × 3 × 4 = 144 theoretical profiles. Orthogonal design compresses to 12–16 tasks.
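The profile arithmetic is easy to check. A minimal Python sketch using the attribute names from the example (the orthogonal fraction itself would come from a dedicated tool such as Sawtooth or R's AlgDesign, not from this enumeration):

```python
from itertools import product

# Attribute levels from the smartwatch example above.
attributes = {
    "price": ["$199", "$299", "$499", "$799"],
    "battery": ["1 day", "3 days", "7 days"],
    "health": ["heart rate", "+ sleep", "+ sleep + stress"],
    "brand": ["Brand A", "Brand B", "Brand C", "Our brand"],
}

# Full factorial: every combination of one level per attribute.
names = list(attributes)
profiles = [dict(zip(names, combo)) for combo in product(*attributes.values())]

print(len(profiles))  # 4 * 3 * 3 * 4 = 144 theoretical profiles
```

A design tool's job is to pick the 12–16 of these 144 profiles that keep attribute levels as uncorrelated as possible.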

Orthogonal design

Statistical method to minimize correlation between attributes. R's AlgDesign package and Sawtooth Software are the standard tools.

In practice: define attributes and levels, the tool generates the task set automatically. Detailed orthogonality theory isn't required for operation, but paying attention to tool warnings about design integrity is.

Attribute design caveats

  • Independence: if "brand" and "price tier" are correlated in the real market, the conjoint assumption breaks
  • Realistic ranges: price levels far outside market reality produce nonsensical responses
  • Sweet spot: 5–6 attributes × 3–4 levels balances cognitive load and analytical power

4. Sample-size guidance

Because each respondent completes multiple tasks, conjoint's information content scales with tasks × N, so it requires fewer respondents than monadic methods.

CBC sample-size guide

Analysis depth | Recommended N
Aggregate utility values | 200–300
Segment-level utilities (attribute × age, etc.) | 200 per cell → 600–800
Hierarchical Bayes (HB) for individual utilities | 500–1,000
Share simulation (market prediction) | 500–1,500

Orme (2010) Getting Started with Conjoint Analysis and Sawtooth Software's industry rule: N=300 is the practical floor for CBC.
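Sawtooth's commonly cited rule of thumb, n × t × a / c ≥ 500 (t = tasks, a = alternatives per task, c = the largest number of levels on any attribute), can be turned into a quick floor calculation. It often returns numbers well below the N=300 practical floor, in which case the larger of the two governs:

```python
import math

def min_respondents(tasks: int, alternatives: int, max_levels: int) -> int:
    """Rule-of-thumb floor: n * t * a / c >= 500, solved for n,
    where c is the largest number of levels on any single attribute."""
    return math.ceil(500 * max_levels / (tasks * alternatives))

# Smartwatch example: 12 tasks, 3 alternatives per task, largest attribute has 4 levels.
print(min_respondents(tasks=12, alternatives=3, max_levels=4))  # 56
```

Here the formula alone would allow N=56, so the N=300 practical floor dominates; the rule matters mainly when designs have few tasks or many levels.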

Symptoms of insufficient N

  • Part-worth standard errors too large to compare attributes
  • HB model fails to converge
  • High variance in segment-level share simulations

For sample design fundamentals, see how to calculate survey sample size and survey aggregation and significance testing.

5. Interpretation — Part-worths, importance, share simulation

Conjoint outputs translate to practice through three key numbers.

Part-worth utilities

How much each attribute level contributes to preference, quantified.

Example (smartwatch):

Attribute level | Part-worth
Price $199 | +1.2
Price $299 | +0.4
Price $499 | −0.8
Price $799 | −0.8
Battery 1 day | −0.5
Battery 3 days | +0.0
Battery 7 days | +0.5

Sum the part-worths to compute the preference strength of any profile.
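A sketch of that computation in Python, using the part-worths from the table above (only the price and battery attributes shown):

```python
# Part-worths from the table above, keyed by attribute and level.
part_worths = {
    "price": {"$199": 1.2, "$299": 0.4, "$499": -0.8, "$799": -0.8},
    "battery": {"1 day": -0.5, "3 days": 0.0, "7 days": 0.5},
}

def total_utility(profile: dict) -> float:
    """Preference strength of a profile = sum of its level part-worths."""
    return sum(part_worths[attr][level] for attr, level in profile.items())

u = total_utility({"price": "$299", "battery": "7 days"})
print(u)  # 0.4 + 0.5 = 0.9
```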

Relative importance

Each attribute's share of influence on preference. Compute "max utility − min utility" per attribute, normalize to 100%.

Example:

  • Price: 35%
  • Battery: 25%
  • Health features: 20%
  • Brand: 20%

A direct decision input: "Price is most important; health features and brand are equally weighted."
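The computation is one line per attribute. A sketch using only the price and battery part-worths from the earlier table (so the percentages here differ from the four-attribute example above):

```python
# Part-worths per attribute level (from the earlier smartwatch table).
part_worths = {
    "price": {"$199": 1.2, "$299": 0.4, "$499": -0.8, "$799": -0.8},
    "battery": {"1 day": -0.5, "3 days": 0.0, "7 days": 0.5},
}

# Importance = (max − min part-worth) per attribute, normalized to 100%.
ranges = {attr: max(lv.values()) - min(lv.values()) for attr, lv in part_worths.items()}
total = sum(ranges.values())
importance = {attr: round(100 * r / total, 1) for attr, r in ranges.items()}
print(importance)  # price range 2.0, battery range 1.0 -> 66.7% / 33.3%
```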

Share of preference

The choice share each profile would receive when competing against others.

Example: our new product X vs. competitor A vs. competitor B

  • Our X ($499, 5-day battery): 42% share
  • Competitor A: 31%
  • Competitor B: 27%

"Drop our X to $399 and share rises to 55%" — price-share simulations like this are conjoint's most valued output.

Hierarchical Bayes (HB)

Estimates individual-level part-worths rather than aggregate. Train (2009) Discrete Choice Methods with Simulation systematized the methodology.

  • Strengths: better individual share prediction, more precise segmentation
  • Required N: 500–1,000
  • Tools: Sawtooth Software CBC/HB, R bayesm, Python Choice-Models

6. Editorial view — five conjoint pitfalls

From the literature and field practice, the five things we'd push hardest on.

1. "Just throw in more attributes" is the biggest blowup. Attribute count drives respondent cognitive load up exponentially, triggering satisficing — picking the middle option on every task. 4–6 attributes is the realistic CBC ceiling. Resisting "let's also add..." is the single most important discipline for keeping results credible. The same mechanism is documented in our survey wording pitfalls article — high cognitive load breaks data quality.

2. Price levels outside market reality break the result. Pricing at $19 vs. $999 produces only the obvious answer ("price is most important"). Stay within ±20–30% of the real market range and look at segment-level price sensitivity.

3. Watch attribute correlations in the real market. In a market where "brand" and "price tier" correlate (luxury brand = high price), an orthogonal design generates profiles that don't exist in reality (luxury × low price), confusing respondents. Exclude unrealistic combinations at design time or correct via HB post-hoc.

4. Don't read share simulations as absolutes. Share simulation excels at relative comparison, not absolute prediction. "42% conjoint share = 42% market sales" is a misread. Real purchases involve buying frequency, distribution, and ad awareness too. Read shares as "how much shifts from the current state", not as standalone forecasts.

5. Don't use MaxDiff for price sensitivity. MaxDiff is the right tool for importance ranking, but wrong for utility on price or numerical attributes. "Which feature matters most?" → MaxDiff. "What's the price sensitivity?" → CBC. Mixing the two — using MaxDiff to measure price — breaks share simulation.

7. Implementing conjoint in the Survey Tool Kicue

Honest framing: Kicue is not a dedicated conjoint tool. Production-grade conjoint analysis is best run in Sawtooth Software / Conjointly / Qualtrics CoreXM.

That said, simplified conjoint implementation is feasible in Kicue.

Simplified CBC in Kicue

What Kicue can't do natively

  • Orthogonal design generation: pre-generate task sets in Sawtooth or R's support.CEs package, then import into Kicue
  • Hierarchical Bayes estimation: run via bayesm in R / Python after export
  • Share simulators: dashboards built in Tableau / Power BI
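For the post-processing step, a useful first pass that needs no HB machinery is a "counting" analysis: how often each level is chosen when it appears. A sketch over a hypothetical export format (the column names are assumptions for illustration, not Kicue's actual schema):

```python
from collections import defaultdict

# Exported choice records: one row per alternative shown (hypothetical format).
records = [
    {"task": 1, "price": "$199", "chosen": 1},
    {"task": 1, "price": "$499", "chosen": 0},
    {"task": 1, "price": "$799", "chosen": 0},
    {"task": 2, "price": "$499", "chosen": 1},
    {"task": 2, "price": "$199", "chosen": 0},
    {"task": 2, "price": "$799", "chosen": 0},
]

shown = defaultdict(int)  # times each level appeared in a task
won = defaultdict(int)    # times each level was in the chosen alternative
for r in records:
    shown[r["price"]] += 1
    won[r["price"]] += r["chosen"]

# Counting analysis: win rate of each level when it appears.
win_rate = {level: won[level] / shown[level] for level in shown}
print(win_rate)  # {'$199': 0.5, '$499': 0.5, '$799': 0.0}
```

Counting is biased relative to a proper logit model (it ignores what each level competed against), but it is a reasonable sanity check before handing the data to R or Python estimation.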

Tool guidance by use case

Use case | Recommended tool
Academic research / production marketing research | Sawtooth Software CBC/HB
Mid-sized business research | Conjointly / Qualtrics CoreXM
Light-touch / budget-constrained | Kicue + R/Python post-processing

Kicue works well for "conjoint prototype trials", "pre-conjoint attribute screening", and "MaxDiff-style importance ranking alternatives". For full production, point to specialized tools.

Choosing the right tool — Free plan limits, branching support, AI capabilities, and CSV export vary widely across tools. See our free survey tool comparison to find the right fit for this approach.

Summary

Conjoint analysis checklist:

  1. Monadic evaluation can't capture preference — conjoint statistically analyzes trade-offs.
  2. Three formats: CBC (mainstream) / MaxDiff (importance ranking) / Full-Profile (classic).
  3. 4–6 attributes × 2–5 levels is the cognitive-load / analytical-power sweet spot.
  4. CBC sample sizes: 200–300 (aggregate) / 500–1,000 (HB) / 500–1,500 (share prediction).
  5. Three numbers to read: part-worths / relative importance / share simulation.
  6. Five pitfalls: too many attributes / unrealistic prices / attribute correlation / absolute share reading / MaxDiff misuse.
  7. Kicue suits simplified prototypes; production runs on Sawtooth / Conjointly / Qualtrics.

Conjoint is an "80% design, 20% analysis" method. Invest time in attribute-level design and the sampling plan, and even N=300 produces decision-grade numbers. As a core marketing-research skill, it's high-leverage across CX / EX / product development.


References

Academic and methodological

  • Green, P. E., & Srinivasan, V. (1990). Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice. Journal of Marketing.
  • Louviere, J. J., Hensher, D. A., & Swait, J. D. (2000). Stated Choice Methods: Analysis and Applications. Cambridge University Press.
  • Train, K. E. (2009). Discrete Choice Methods with Simulation (2nd ed.). Cambridge University Press.

Industry guides (treated as practitioner observations)

  • Orme, B. K. (2010). Getting Started with Conjoint Analysis: Strategies for Product Design and Pricing Research. Research Publishers.


Want to prototype conjoint? Try Kicue — a free survey tool. SCREEN questions, skip logic, URL parameters, and raw data export ship as standard, so the connection to external tools (R / Python / Sawtooth) is smooth.
