Understanding when to use a monadic versus sequential monadic design is key to getting reliable insights in online surveys. Here’s a simple guide to help you decide which method works best for your study.
What's the difference?
- Monadic: Each respondent evaluates only one concept or product.
- Sequential Monadic: Each respondent evaluates multiple concepts, one after another.
Typically, monadic designs are used for clean, unbiased feedback on very different or complex concepts, and sequential monadic designs are used when concepts are similar, or when resources are limited.
When to Use Monadic Design
Choose a monadic design if your research requires clean, isolated feedback, without the influence of comparisons.
It works best when:
- You must avoid bias from comparison: Respondents see only one concept, so their evaluation isn’t influenced by others.
- Large sample size is available: Since each person sees only one concept, you’ll need more respondents to cover all concepts.
- Testing very different concepts: If concepts vary widely (e.g., different product categories), sequential testing may confuse or fatigue respondents.
- There's a need for isolated feedback: Ideal for early-stage concept testing or when you want pure reactions without context.
- Concepts are content-heavy: If concepts include long text or require deep evaluation with multiple follow-up questions, monadic design ensures respondents aren’t overwhelmed.
When to Use Sequential Monadic Design
Choose a sequential monadic design if your research requires direct comparisons across concepts.
It works best when:
- Comparing concepts directly: Each respondent evaluates multiple concepts, allowing for within-subject comparisons.
- You have limited sample size or budget: Fewer respondents are needed since each person evaluates more than one concept.
- Testing similar concepts: Works well when concepts are variations of the same product (e.g., different packaging designs).
- Controlling for individual differences: Since the same person evaluates multiple concepts, variability due to personal preferences is reduced.
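If it helps to see the mechanics, the sketch below is a minimal, hypothetical illustration in Python (not aytm functionality): monadic assignment gives each respondent exactly one concept with balanced cell sizes, while sequential monadic assignment gives each respondent a random subset of concepts in a random order to spread out order effects.

```python
import random

CONCEPTS = ["Concept A", "Concept B", "Concept C", "Concept D"]

def monadic_assignment(respondent_ids, concepts):
    """Monadic: each respondent evaluates exactly one concept.
    Respondents are shuffled, then concepts are dealt out in rotation
    so the cell size per concept stays balanced."""
    shuffled = random.sample(respondent_ids, len(respondent_ids))
    return {rid: concepts[i % len(concepts)] for i, rid in enumerate(shuffled)}

def sequential_monadic_assignment(respondent_ids, concepts, concepts_seen):
    """Sequential monadic: each respondent evaluates `concepts_seen` concepts,
    one after another; random.sample picks a random subset in a random order.
    (A production design would also balance concept and position frequencies.)"""
    return {rid: random.sample(concepts, concepts_seen) for rid in respondent_ids}

if __name__ == "__main__":
    respondents = list(range(1, 9))  # 8 example respondent IDs
    print(monadic_assignment(respondents, CONCEPTS))
    print(sequential_monadic_assignment(respondents, CONCEPTS, concepts_seen=2))
```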
How Many Concepts Should Respondents See?
When using sequential monadic design, deciding how many concepts each respondent should evaluate depends on several factors:
- Survey length and fatigue: Too many concepts can overwhelm respondents and reduce data quality.
- Complexity of concepts: Simple variations (like packaging) can be tested in larger sets, while complex ideas may require fewer per respondent.
- Budget and sample size: Smaller budgets may push toward fewer respondents seeing more concepts.
- Research goals: If the goal is fine-grained comparison, fewer concepts per respondent may yield more reliable insights.
To calculate the relationship between sample size and concept exposure, you only need three of the following four variables:
- Total sample size
- N per concept
- Concepts seen
- Total concepts
Once three of these numbers are known, you can use the matching formula below to calculate the fourth (a short calculation sketch in code follows the list):

- Total sample size = (N per concept) × (Total concepts) / (Concepts seen)
  - Example: 100 × 4 / 1 = N400
- N per concept = (Total sample size) × (Concepts seen) / (Total concepts)
  - Example: N400 × 1 / 4 = N100 per concept
- Concepts seen = (N per concept) × (Total concepts) / (Total sample size)
  - Example: 100 × 4 / 400 = 1 concept seen
- Total concepts = (Total sample size) × (Concepts seen) / (N per concept)
  - Example: N400 × 1 / 100 = 4 total concepts
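All four formulas are rearrangements of a single relationship: Total sample size × Concepts seen = N per concept × Total concepts. If it helps to automate the arithmetic, here is a minimal sketch (illustrative variable names, not aytm terminology) that solves for whichever value is missing:

```python
def solve_sample_plan(total_sample=None, n_per_concept=None,
                      concepts_seen=None, total_concepts=None):
    """Given any three of the four planning numbers, return all four.
    They are linked by: total_sample * concepts_seen = n_per_concept * total_concepts."""
    if total_sample is None:
        total_sample = n_per_concept * total_concepts / concepts_seen
    elif n_per_concept is None:
        n_per_concept = total_sample * concepts_seen / total_concepts
    elif concepts_seen is None:
        concepts_seen = n_per_concept * total_concepts / total_sample
    elif total_concepts is None:
        total_concepts = total_sample * concepts_seen / n_per_concept
    return dict(total_sample=total_sample, n_per_concept=n_per_concept,
                concepts_seen=concepts_seen, total_concepts=total_concepts)

# Monadic example from above: 4 concepts, N100 per concept, each respondent sees 1.
print(solve_sample_plan(n_per_concept=100, total_concepts=4, concepts_seen=1))
# -> total_sample = 400.0

# Sequential monadic: N400 total, 4 concepts, each respondent sees 2.
print(solve_sample_plan(total_sample=400, total_concepts=4, concepts_seen=2))
# -> n_per_concept = 200.0
```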
What do I use if my goal is to conduct a TURF analysis?
If you need to conduct a TURF analysis with your results, then you must run a sequential monadic test in which every respondent evaluates every item in the set. TURF requires individual-level data on all items to accurately measure both reach (how many people are reached by at least one item) and frequency (how many items reach each individual), which are critical to identifying the product combination that maximizes audience coverage.
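To make reach and frequency concrete, here is a minimal illustrative sketch with made-up data (not an aytm feature). It assumes every respondent rated every item, which is exactly what full sequential monadic exposure provides, and scores every combination of a given size:

```python
from itertools import combinations

# Hypothetical data: one row per item, one column per respondent;
# True = the respondent accepts / would buy that item.
responses = {
    "Flavor A": [True,  False, True,  False, True ],
    "Flavor B": [False, True,  False, False, True ],
    "Flavor C": [False, False, True,  True,  False],
}
n_respondents = len(next(iter(responses.values())))

def turf(items, combo_size):
    """Return (reach %, average frequency) for every combo of `combo_size` items."""
    results = {}
    for combo in combinations(items, combo_size):
        reached = 0  # respondents reached by at least one item in the combo
        hits = 0     # total item acceptances across the combo
        for r in range(n_respondents):
            accepted = sum(responses[item][r] for item in combo)
            hits += accepted
            if accepted:
                reached += 1
        results[combo] = (100 * reached / n_respondents, hits / n_respondents)
    # Sort so the combination with the highest reach comes first
    return sorted(results.items(), key=lambda kv: kv[1][0], reverse=True)

for combo, (reach, freq) in turf(list(responses), combo_size=2):
    print(combo, f"reach = {reach:.0f}%", f"avg frequency = {freq:.1f}")
```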
How do I set up a sequential monadic study with aytm?
There are two main (but related) ways we recommend setting up sequential monadic studies: using Smart Loops to efficiently build a repeating set of questions across ideas, or using Group Logic to manually define question sets to be randomized and assigned using max assignment.
Smart Loops are a tool that allows you to build a question loop “template” within your survey. You only need to set up the questions once and define what makes each loop, called a “run,” unique, such as a concept name or concept image. Smart Loops greatly reduce your programming time and effort by using a special input table and reference logic in the Survey Editor.
Group Logic allows you to randomize and randomly assign any questions or sets of questions within a survey, however you define them. This can be a better option when the question sets to be randomized or assigned are not similar enough in structure to be set up with a Smart Loop.
You can learn more about Smart Loops and Group Logic here in the Help Center, or compare the two tools and learn how to use them in the Lighthouse Academy!