Star Rating vs. Sliders
You'll have to choose whether respondents rate each brand with stars (the default mode on our platform) or with sliders. With star ratings, you can use 5, 7, or 9 stars for each attribute. Sliders give you more flexibility: you can choose from our library of pre-written Likert scales, edit the answers, write your own custom answers, and even adjust the scores (from 1 to 99) that are assigned automatically when an answer is chosen. By default, we assign 10 points to one star or to the lowest answer on your Likert scale, going all the way up to 50, 70, or 90 for a top rating. If you decide to edit a Likert scale or adjust the scoring, please make sure you know exactly what you're doing, since it can drastically affect your model and data visualization.
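The default scoring rule described above (10 points per star, so a top rating of 50, 70, or 90 on a 5-, 7-, or 9-star scale) can be sketched as follows. This is an illustration only, with a hypothetical helper name, not the platform's actual API:

```python
def star_score(stars_given: int, scale_size: int) -> int:
    """Map a star rating to its default score: 10 points per star.

    A 5-, 7-, or 9-star scale therefore tops out at 50, 70, or 90.
    (Hypothetical helper for illustration, not platform code.)
    """
    if scale_size not in (5, 7, 9):
        raise ValueError("star scales come in 5, 7, or 9 stars")
    if not 1 <= stars_given <= scale_size:
        raise ValueError("rating must be between 1 star and the scale size")
    return stars_given * 10
```

So one star is always worth 10 points, and the top of a 9-star scale scores `star_score(9, 9)`, i.e. 90.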
Grouped by entity vs. Grouped by attribute
Results
On the Results page, there are several ways to view your results. We use Bayesian Averages (BA) to calculate
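The exact calculation isn't spelled out here, but a standard Bayesian average pulls each item's sample mean toward a global prior mean with a fixed prior weight, so items with few ratings aren't over-rewarded or over-penalized. A minimal sketch under that standard definition (not necessarily the platform's exact parameters):

```python
def bayesian_average(ratings: list[float], prior_mean: float, prior_weight: float) -> float:
    """Standard Bayesian average: shrink the sample mean toward a prior.

    With few ratings the result stays near prior_mean; as the number of
    ratings grows, it approaches the item's own sample mean.
    (Illustrative sketch; prior_mean and prior_weight are assumptions.)
    """
    n = len(ratings)
    return (prior_weight * prior_mean + sum(ratings)) / (prior_weight + n)
```

For example, with a prior mean of 30 and a prior weight of 5, a single rating of 90 yields (5·30 + 90) / 6 = 40 rather than 90, so one enthusiastic respondent can't dominate the results.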
Respondent View
In Your Results
The last important decision here is how to group these two lists. By default, we group by entity (brands, in our example): each brand is presented as a separate question, with the attributes listed as sub-questions below it. This grouping tends to be easier on respondents, since it helps them recall all of their experiences with a given brand or entity at once, letting them rate it on every attribute you're testing.
If you switch to 'group by attribute', each question asks about one attribute at a time, such as "Food healthiness", and lists all of the compared brands on the same page. This can impose a higher cognitive load on respondents, since they have to recall memories across many brands to answer each question. Even so, in some cases this grouping can be more valuable, since it focuses attention on comparing all brands on a given attribute. Please note that, either way, the experiment will take as many questions as there are items in the list you group by.
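The two grouping modes can be pictured as two ways of arranging the same brand-by-attribute grid into question pages. A small illustration with made-up brand and attribute names (not platform code):

```python
# Hypothetical example lists; in a real study these come from your setup.
brands = ["Brand A", "Brand B"]
attributes = ["Food healthiness", "Value for money", "Service speed"]

# Group by entity (the default): one question per brand,
# with every attribute shown as a sub-question beneath it.
by_entity = {brand: list(attributes) for brand in brands}

# Group by attribute: one question per attribute,
# with every brand listed on the same page.
by_attribute = {attribute: list(brands) for attribute in attributes}

# Either way, the number of questions equals the number of items
# in the list you group by: 2 here for by_entity, 3 for by_attribute.
```

This also makes the closing note above concrete: grouping by entity produces `len(brands)` questions, while grouping by attribute produces `len(attributes)` questions.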