Measuring brand positions with rating scales often leads to poor discrimination and a halo effect, resulting in poorly identified positions and impoverished or incorrect assessments of brand choice drivers. This article compares two alternatives – a comparative scale and brand-anchored Maximum Difference scaling – to the industry-standard Likert scale. The results suggest there is a better alternative.

Background

Brand image research is a staple of applied marketing research. A brand image questionnaire produces a matrix composed of multiple brand ratings on multiple attributes. In addition, a brand study should collect a measure of each respondent's relative preference for the brands (often a brand rating, but ideally brand choice or brand share).

Some brand study key deliverables include:

  • Reports of significant attribute rating differences between brands
  • Perceptual maps of the brands' positions, relative to one another and relative to the attributes
  • Quadrant analysis showing a joint plot of each brand's performance and attribute importance
  • Multinomial logit (MNL) choice modeling that quantifies each attribute's impact on brand choice/share.
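The last deliverable can be made concrete in code. Below is a hypothetical sketch of fitting an MNL choice model by full-batch gradient descent on synthetic data; the attribute roles, weights, and sample sizes are invented for illustration and are not taken from the study.

```python
import numpy as np

# Hypothetical sketch: fit a multinomial logit (MNL) brand-choice model by
# gradient descent on synthetic data. All numbers here are invented.
rng = np.random.default_rng(0)
n_resp, n_brands, n_attrs = 400, 4, 2
X = rng.normal(size=(n_resp, n_brands, n_attrs))  # brand-attribute measures
true_w = np.array([1.5, -1.0])                    # e.g. taste helps, price hurts
util = X @ true_w                                 # deterministic utilities
p_true = np.exp(util) / np.exp(util).sum(axis=1, keepdims=True)
choice = np.array([rng.choice(n_brands, p=p) for p in p_true])

def nll(w):
    """Average negative log-likelihood of the observed brand choices."""
    u = X @ w
    return -(u[np.arange(n_resp), choice] - np.log(np.exp(u).sum(axis=1))).mean()

w = np.zeros(n_attrs)
for _ in range(800):                              # plain full-batch descent
    u = X @ w
    p = np.exp(u) / np.exp(u).sum(axis=1, keepdims=True)
    onehot = np.eye(n_brands)[choice]
    grad = ((p - onehot)[..., None] * X).sum(axis=1).mean(axis=0)
    w -= 0.2 * grad
```

In a real study, X would hold the measured brand positions and choice the brand used most often; with enough data and iterations the fitted weights should recover the signs of the true choice drivers.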

Brand position measures and brand-attribute ratings carry a heavy load: they are the raw material for all four analyses mentioned above.

Commonly used brand research rating scales include:

  • The Likert scale, a 5-point fully word anchored scale ranging from "strongly agree" to "strongly disagree"
  • A performance scale with 5 scale points such as "excellent", "very good", "good", "fair", "poor"
  • A 5- to 10-point endpoint-anchored scale with "describes the brand completely" at one pole and "does not describe the brand at all" at the other.

Unfortunately, brand rating scales suffer from limitations. Primary among these is the well-documented "halo effect": people who like a brand tend to give it higher ratings on all attributes, while those who dislike a brand tend to give it lower ratings on all attributes. When respondents rate this way, all of the ratings end up highly correlated (a condition known as multicollinearity). Multicollinearity causes a variety of problems when running derived importance models, because it makes the effects of the individual attributes hard to separate statistically. The problem can be so severe that sign reversals occur, leading to counterintuitive findings such as low quality or high price appearing to increase the likelihood of brand choice.
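To see why near-lockstep ratings cause trouble, here is a small hand-built numerical sketch (the numbers are invented): two attribute ratings that each correlate positively with overall preference can still receive a negative regression weight once both enter the model.

```python
import numpy as np

# Invented illustration of halo-driven multicollinearity: x1 and x2 move
# almost in lockstep around a shared "halo" component z.
z = np.arange(8, dtype=float)                         # shared halo component
d = np.array([0.1, -0.1, 0.2, -0.2, 0.1, -0.1, 0.2, -0.2])
x1 = z                                                # e.g. "good taste" rating
x2 = z - d                                            # e.g. "good value" rating
y = -1.0 * x1 + 3.0 * x2                              # overall preference

corr1 = np.corrcoef(x1, y)[0, 1]   # strongly positive: x1 alone looks helpful
corr2 = np.corrcoef(x2, y)[0, 1]   # strongly positive as well

# Regress y on both attributes (with an intercept) at once.
A = np.column_stack([np.ones_like(z), x1, x2])
beta = np.linalg.lstsq(A, y, rcond=None)[0]
# beta gives x1 a negative weight despite its positive marginal correlation:
# the "sign reversal" described above.
```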

Additionally, many respondents are "high raters" (they tend to rate all brands positively), and rating scales often are not very sensitive to differences among brands. Thus, researchers must plan for large sample sizes to detect significant between-brand differences, or live with the consequences.

Possible Improvements for Measuring Brand Position

Brand position measurement relies heavily on brand studies; however, the substantial problems that plague brand rating scales open the door to alternative measurements. Described below are two alternative methods of measuring brand position, each compared to standard Likert brand position measures.

A web-based survey was completed by 443 respondents who qualified as at-least-once-a-week users of fast food restaurants. The survey focused on four fast food restaurant brands (Wendy's, McDonald's, Taco Bell, and Subway) and on nine attributes suggested by past experience to be significant drivers of fast food brand choice. All respondents also reported brand usage (both brand share and brand used most often) and various demographics. The Appendix describes how respondents were assigned to the test cells.

The first alternative is a "comparative rating scale," with 5 fully word-anchored points, which anecdotal evidence and past experience suggest may make for a more discriminating rating scale.

We operationalized the comparative rating scale as shown in the corresponding exhibit in the complete article.

The second measurement alternative is "brand-anchored MaxDiff scaling," or B-A MaxDiff. B-A MaxDiff combines two methods – maximum difference scaling and brand-anchored conjoint analysis.

Brand-anchored conjoint analysis (Louviere and Johnson 1990) seeks to measure the brand image of soft attributes by using brand names to anchor attribute levels in conjoint analysis. So, if we want to measure the quality of the taste experience afforded by dining at McDonald's, we anchor the taste attribute with the level McDonald's, like this:


As good tasting as McDonald's
The resulting utilities from conjoint analysis are measures of the brand's positions on the attributes.

Maximum difference scaling (Finn and Louviere 1992; Louviere, Swait and Anderson 1995) uses a unique response format: from an experimentally designed list of objects, respondents identify the one item they like most and the one item they like least (the question format appears as an exhibit in the complete article).

Research on maximum difference scaling has found it to be more discriminating than rating scales and than the method of paired comparisons (Cohen 2003). It seems to work well for general attitude measurement (Sa Lucas 2004) and for a specific type of pricing research (Chrzan 2004). Because it features a strongly constrained response format, maximum difference scaling is also immune to differences in scale use patterns. This makes it ideal both for cross-cultural research and for segmentation research.
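As a concrete illustration of how best/worst choices turn into attribute scores, here is a toy count-based analysis on invented responses. (The study itself estimated utilities with hierarchical Bayes; counting is only a simple first-pass approximation.) The score for each item is (times chosen best minus times chosen worst) divided by times shown.

```python
# Toy best-worst counting analysis on invented responses.
tasks = [
    # (items shown, picked best, picked worst)
    (("A", "B", "C"), "A", "C"),
    (("A", "B", "D"), "A", "D"),
    (("B", "C", "D"), "B", "D"),
    (("A", "C", "D"), "A", "D"),
]

counts = {}  # item -> (times best, times worst, times shown)
for shown, best, worst in tasks:
    for item in shown:
        b, w, n = counts.get(item, (0, 0, 0))
        counts[item] = (b + (item == best), w + (item == worst), n + 1)

# Score each item on a -1 (always worst) to +1 (always best) range.
scores = {item: (b - w) / n for item, (b, w, n) in counts.items()}
```

Here item A, chosen best every time it is shown, scores +1.0, while D, chosen worst every time, scores -1.0, with B and C in between.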

Combining maximum difference scaling with brand-anchored measurement yields B-A MaxDiff. An example of a B-A MaxDiff question from the fast food survey appears as an exhibit in the complete article.

Finally, a Likert scale was used as the control (also shown in the complete article's exhibits).

Analysis and Results

The Likert and comparative ratings are used in the analysis "as is." For the B-A MaxDiff questions we ran CBC/HB to produce respondent-level utilities to serve as brand position measures on the nine attributes. We standardized the resulting utilities across respondents to prevent differences in the MNL scale parameter in the HB utilities from impairing subsequent analyses.
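A minimal sketch of that standardization step, using invented utilities: z-scoring each respondent's utilities removes respondent-level differences in scale while preserving the relative pattern across measures.

```python
import numpy as np

# Invented utilities: respondent 2 has the same preference pattern as
# respondent 1 but a larger response scale (e.g. a larger MNL scale parameter).
U = np.array([
    [2.0, -1.0, 0.5, -1.5],   # respondent 1
    [4.0, -2.0, 1.0, -3.0],   # respondent 2: same pattern, doubled scale
])

# Z-score within each respondent (row): mean 0, standard deviation 1.
Z = (U - U.mean(axis=1, keepdims=True)) / U.std(axis=1, keepdims=True)
```

After standardizing, the two respondents' rows are identical: the scale difference is gone and only the shared preference pattern remains.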

The analyses cover both output types from a brand study – (a) analyses concerning brand positioning and differentiation, and (b) brand choice analyses. They use a variety of statistical methods including discriminant analysis, analysis of variance and multinomial logit (MNL).

Regarding brand positioning analyses and differentiation, we compare:

  • Perceptual complexity – the dimensionality of brand-attribute relationships
  • Brand differentiation – the extent to which the three kinds of brand position data allow us to detect differences among brands
  • Face validity of perceptual maps – the credibility and consistency of the brand positions that result from each of the three brand positioning metrics.

Regarding analysis of brand choice, we have:

  • Goodness-of-fit for brand choice models – how well the three kinds of brand position measurement manage to predict brand choice
  • Face validity of the MNL model parameters – how credible are the coefficients that result from MNL choice models using the three kinds of brand positioning metrics as predictors

The table summarizes the results. Please refer to the complete article to view all exhibits.

Summary

The traditional Likert ratings perform more poorly than the two new measures of brand position, both in terms of discriminant and predictive validity. Since brand studies mostly concern brand position (discrimination) and brand choice (prediction), Likert ratings should not be used for brand image research.

The comparative ratings and B-A MaxDiff measures produce similar outputs concerning brand position, although the B-A MaxDiff method is the more discriminating of the two. The comparative ratings decisively best the B-A MaxDiff with respect to the strength and plausibility of the resulting brand choice model.

Some evidence suggests that using MaxDiff sets containing fewer than all nine attributes might have improved the B-A MaxDiff measurements; until further testing confirms this, however, comparative ratings appear to be the most defensible measurement for brand image research.

Published by Maritz Research
Volume 19, January 2006


About Maritz Research

As one of the world's largest marketing research firms, Maritz Research, a unit of Maritz Inc., helps many of today's most successful companies improve performance through a deep understanding of their customers, employees and channel partners. Founded in 1973, it offers a range of strategic and tactical solutions concentrating primarily in the hospitality, automotive, financial services, telecommunications, retail, pharmaceutical, workplace and technology industries. The company has achieved ISO 9001 registration, the international symbol of quality. It is a member of CASRO and an official sponsor of the American Marketing Association. Based in St. Louis, Maritz Inc. provides market and customer research, communications, learning solutions, incentive initiatives, meetings and event management, rewards and recognition, travel management services, and customer loyalty programs. Maritz has a presence in 42 countries, with key offices in the United States, Canada, the United Kingdom, France, Germany, and Spain. For more information, visit .
