For many organizations, questionnaires are popular tools for gauging internal and external performance. It is understandable that organizations aim to save money on survey initiatives, but collecting employee and market data with poorly designed tools wastes money in both the short and long term.

In the short term, poor questionnaires waste development time and money. In the long term, organizations can make incorrect and costly business decisions based on data from poorly designed questionnaires. It is therefore unfortunate that many organizations create, conduct, and analyze questionnaires on their own, without any guidance from specialists. This can be a serious business mistake.

Readers are treated here to a rare insider conversation with one of the most prominent experts in tests, measurement and questionnaire research. Meet Dr. Rense Lange, a pioneer in applying modern test theory methods to business analytics. In addition to serving on the faculty of the University of Illinois, the Southern Illinois University School of Medicine, and Central Michigan University, Dr. Lange has worked for ten years as the lead psychometrician at the Illinois State Board of Education.

An exclusive member of the 20|20 Skills™ assessment team, Dr. Lange recently fielded some pointed questions from me about the pitfalls of questionnaire studies of which organizations should be aware.

Is survey and questionnaire design a science?

Definitely, yes. Unfortunately, too many surveys are constructed based on “instincts” and whatever else seems important to untrained people. This really is not the way to design good surveys, especially when trying to predict people’s behaviors. All too often, questions are selected based on how interesting or appealing they sound, or on the favorite insights of one’s superiors in the organization.

Instead of a haphazard approach, one needs a systematic, explicit (and correct) theory of people’s behaviors, including factors like respondents’ beliefs, attitudes, and intentions. Even when dealing with the simplest and shortest of questionnaires, one has to understand all of these factors so that the right ones can be included or omitted. I usually follow Fishbein and Ajzen’s Theory of Reasoned Action. This approach is very general, yet it yields excellent validity.

Designing surveys and questionnaires is often regarded as something anyone - even non-specialists - can do. Why is this?

There are all kinds of bad reasons for this, the most important being the assumption that, since we can all talk, we can all write good questions. This reasoning has the same flaw as saying that since we all went to school we are educational experts, or that since we have all been sick we all have medical qualifications.

This will not work for serious questionnaires. I find it amazing to see how million-dollar decisions are quite often made based on shoddy questionnaires, guided mainly by questionable insights and theories that have long been discarded in the scientific psychological literature.

Maybe this is because people do not know how reliably behaviors can be predicted from the right indicators. Also, people tend to over-emphasize their own pet explanations and insights based on anecdotal evidence or mistaken media reports. I have even seen cases where making a questionnaire was treated mostly as a matter of typing, so the secretary was asked to do everything.

What are the biggest mistakes non-specialists make when designing their own questionnaires?

The main lesson is to be humble. In other words, what the question writer thinks is important should be completely secondary to what the intended respondents think. Do not assume that you already know what people think and why they think so. If you really did, then making a survey would be superfluous, right? So, instead, try to really listen and keep an open mind.

For one thing, this means that one should always do a pilot study before doing the main study. Believe me, no matter how well you think you know people, you will be surprised!

Most people - even many survey vendors - analyze questionnaire and survey data with traditional approaches like raw-score sums, percentile rankings or percentages. What is wrong with these standard approaches?

Standard design, analysis, and reporting are often wrong and incomplete at many levels. For instance, take the use of rating scales such as “agree completely,” “agree somewhat,” and so on. Here it is often assumed that (a) using more answer categories is always better, and (b) some “neutral” category is needed to allow people to be non-committal. Both of these “insights” are wrong. Most people cannot handle more than six pieces of information at a time, so do not give them more response categories than that; to be on the safe side, four categories are probably fine. Also, neutral categories are usually counterproductive; they rarely get you the information you want. Often, neutral categories do not reflect uncertainty or indecision; rather, they hide socially undesirable answers. That is, they mean “I do not want to say” or “does not apply.”

The preceding problems are a direct consequence of poor methods of analysis. I believe that our method of analysis should tell us whether middle categories are used inconsistently, just as it should tell us whether someone is giving valid data in the first place. Moreover, middle categories are often seen as a panacea for avoiding missing data that would foil standard statistical procedures.

For these reasons, I rely almost exclusively on the use of Rasch scaling. It is not feasible to go into great detail here, but this approach is unique in that (a) missing data are inherently acceptable, (b) we can clearly judge the quality of the data and the questionnaire from the responses, (c) one obtains linear (i.e., interval-level) measures, and (d) it can be determined whether (and if so, how much) the data are biased by factors such as age, gender, and other demographics. Such information allows organizations to make more targeted and valid business decisions, whereas traditional approaches like raw-score sums, percentages, and percentile rankings are severely limited and can even be misleading.
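To make the Rasch model concrete, here is a minimal Python sketch of its dichotomous form. It is a generic textbook illustration, not the software Dr. Lange uses, and the person abilities and item difficulties are made-up values chosen only for demonstration.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    """Dichotomous Rasch model: probability that a person with ability
    theta endorses (or answers correctly) an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def log_odds(theta: float, b: float) -> float:
    """Log-odds of endorsement: linear in (theta - b), which is why
    Rasch measures lie on an interval (logit) scale."""
    return theta - b

# Illustrative, made-up person abilities and item difficulties, in logits.
persons = {"A": -1.0, "B": 0.0, "C": 1.5}
items = {"item1": -0.5, "item2": 0.5, "item3": 1.0}

for person, theta in persons.items():
    for item, b in items.items():
        p = rasch_probability(theta, b)
        print(f"{person} x {item}: P(endorse) = {p:.2f}, log-odds = {log_odds(theta, b):+.2f}")
```

Because each response enters the model on its own terms, a skipped item simply drops out of the calculation, which is why incomplete data do not break the analysis, the property noted in point (a) above.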

Are there any exciting developments in the field of assessment, surveys and questionnaires?

Surprisingly, professional survey companies almost never use Rasch scaling. Thus, the business advantage for those who apply these methods in their analytics is huge. My personal favorite development is the application called “Action Plans.” Here we build a mathematical model of the data, which allows us to identify statistical outliers. We then feed these outliers into our software to generate an interpretation of the observed misfit. The result is a person-specific, tailor-made diagnostic profile that can be used in a variety of ways.
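The Action Plans implementation itself is proprietary, but the general idea of flagging statistical outliers can be illustrated with a standard person-fit check under the Rasch model: compare each observed response with its model-expected probability and flag responses whose standardized residuals are unusually large. The sketch below is an illustrative assumption about one such check, not the 20|20 Skills™ code, and the threshold of 2.0 and all parameter values are made up.

```python
import math

def rasch_probability(theta: float, b: float) -> float:
    # Dichotomous Rasch model probability (same form as in the earlier sketch).
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def standardized_residuals(responses, theta, difficulties):
    """For each item, compute (observed - expected) / sqrt(variance).
    Large absolute values mark unexpected answers, that is, potential misfit."""
    results = []
    for item, observed in responses.items():
        p = rasch_probability(theta, difficulties[item])
        variance = p * (1.0 - p)
        results.append((item, (observed - p) / math.sqrt(variance)))
    return results

# Made-up example: a fairly able respondent (theta = 1.0) who unexpectedly
# misses a very easy item, the kind of outlier a diagnostic profile targets.
difficulties = {"easy": -1.5, "medium": 0.0, "hard": 1.5}
responses = {"easy": 0, "medium": 1, "hard": 1}   # 1 = endorsed/correct, 0 = not

for item, z in standardized_residuals(responses, 1.0, difficulties):
    note = "  <== unexpected response" if abs(z) > 2.0 else ""
    print(f"{item}: z = {z:+.2f}{note}")
```

In this made-up example, only the surprising miss on the very easy item is flagged, which is the kind of anomaly a tailored diagnostic profile would comment on.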

For instance, Action Plans form the basis for the highly successful 20|20 Skills™ personnel assessment test. We have also applied the approach to educational testing, where experts now tout it as a form of “curriculum-sensitive” testing.

Our teams have recently completed the theoretical work to apply the preceding approach to whole groups as well as individual test or questionnaire takers. Thus, we can now segment markets, organizations, and other populations, and provide mathematically correct profiles for entire groups and subgroups. Exciting stuff indeed!

The Bottom Line

The insights from our conversation with Dr. Lange can be summarized in three key points:

  • Writing surveys is a skill: a specialist is needed to word questions properly and to collect valid data. Most people lack this expertise, so seriously consider investing in professional assistance.
  • Do not assume you already know what your intended audience thinks – maintain an open mind and have the patience to make evidence-based, not belief-based, business decisions.
  • Proper questionnaire analysis is as important as proper questionnaire design. The most specific and valid findings derive from modern test theory methods, like Rasch scaling. Raw-score sums, percentages, and percentile rankings are severely limited and can lead to wrong business decisions.

Keep these points in mind to help avoid pitfalls in your survey initiatives. Doing so will transform your questionnaires from uncertain exercises to highly effective business tools.


About Jim Houran
James Houran holds a Ph.D. in Psychology and is President of 20│20 Skills™ Assessment. He is an 18-year veteran in research and assessment on peak performance and experiences, with a special focus on online testing. His award-winning work has been profiled by a myriad of media outlets and programs, including the Discovery Channel, A&E, BBC, CNN, NBC’s Today Show, Wilson Quarterly, USA Today, New Scientist, Psychology Today, Court TV, Forbes.com and Rolling Stone. For information on the Best Practice 20│20 Skills™ assessment system and industry analytics, contact: James Houran, Ph.D., [email protected], 516.248.8828 x 264

HVS International is a hospitality services firm providing industry skill and knowledge worldwide. The organization and its specialists possess a wide range of expertise and offer market feasibility studies, valuations, strategic analyses, development planning, and litigation support. Additionally, HVS International supplies unique knowledge in the areas of executive search, investment banking, environmental sustainability, timeshare consulting, food and beverage operations, interior design, gaming, technology strategies, organizational assessments, operational management, strategy development, convention facilities consulting, marketing communications, property tax appeals and investment consulting. Since 1980, HVS International has provided hospitality services to more than 10,000 hotels throughout the world. Principals and associates of the firm have authored textbooks and thousands of articles regarding all aspects of the hospitality industry.
