By J. W. Traphagan
Rarely does a week go by in which I don't receive some sort of emailed survey asking me about organizational climate, diversity and equity issues, or how I feel about some product I glanced at online. We live in a data-driven society, and there seems to be an insatiable appetite for data generated through surveys. Platforms like Qualtrics and SurveyMonkey have made it easy for just about anyone to create and disseminate a survey.
Unfortunately, much of the data obtained is garbage.
Why? Because most of these surveys are poorly designed, and poor design of a research instrument inevitably generates misleading, inadequate, and unreliable results. A few examples will help to illustrate this.
When my son was attending a private high school, leaders decided it would be a good idea to survey the parents about their experiences with the school. Of course, they were looking for responses that would support a marketing campaign on which they had embarked, but the survey was deeply flawed. One question asked if I thought my child had received a rigorous education at the school. This seems fine on the surface, but if you think about it, the answer may be quite difficult to provide.
In my case, the answer was not yes or no, which were the only options; it was yes and no. In some classes he received a rigorous education, and in others he didn't. In fact, a couple of classes were quite poor in quality, and I was left questioning what my tuition dollars were supporting. Had the question been designed with a scale, say from 1 to 10, on which I could rate the rigor of my son's education, it would have been much easier to answer. As it was, I could not answer the question. I stopped at that point and did not complete the survey.
Another survey I was recently asked to do focused on diversity, equity, and inclusion (DEI) issues. This is an important topic and we need to understand how people think about DEI in the workplace. One of the first items presented the statement “the organization is doing a good job of providing training and resources needed to support DEI issues.” Answers were to be provided using a Likert scale: agree, somewhat agree, disagree, etc.
This is a classic example of a double-barreled question. One might agree that the organization is doing a good job of providing training but disagree that it is doing so with resources. If one thinks along those lines, the question becomes impossible to answer. I did, so I couldn't answer it. When I tried to move on, the next question was blocked: it was mandatory to answer each question in order to progress to the next one.
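The fix is straightforward: split the statement into two single-focus items, each answered on the same scale. A minimal sketch, using hypothetical rewordings rather than the survey's actual text:

```python
# Hypothetical rewrite: one double-barreled item becomes two single-focus
# items, each answered on the same Likert scale.
likert = ["agree", "somewhat agree", "neither", "somewhat disagree", "disagree"]

double_barreled = ("The organization is doing a good job of providing "
                   "training and resources needed to support DEI issues.")

split_items = [
    "The organization is doing a good job of providing the training "
    "needed to support DEI issues.",
    "The organization is doing a good job of providing the resources "
    "needed to support DEI issues.",
]

# A respondent who agrees about training but disagrees about resources
# can now answer both items honestly, and the analyst can tell the two
# attitudes apart.
for item in split_items:
    print(item)
```

The point is not the code but the structure: every item should measure exactly one thing, or the responses cannot be interpreted.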
The problems here are significant. First, there is no way to accurately interpret the results because the person exploring the data generated by that question won’t be able to tell if people were responding to resources or training or both. Any conclusion drawn from the data is meaningless if we don’t have a clue about what respondents intended with their answers.
Second, because respondents were required to answer the question, there is a good chance that some people just picked something in the middle so that they could move on to the next question. Thus, the results may well not reflect the actual attitudes of some respondents. This particular question also did not offer a neutral option such as "no opinion." I'm sure the designers of the survey thought it was important to get an opinion, but what if one's opinion is neutral? If many people respond with "no opinion" or "neutral," that is telling us something. Perhaps the question is bad, or perhaps there is a great deal of ambiguity in how people feel about the topic. The lack of a clearly stated opinion among respondents can be a valuable insight into how people feel about an issue. But the question must be crafted properly to support that sort of interpretation.
It gets worse. Because the survey prohibited non-answers, anyone who didn’t have an answer either ended up quitting or making up answers just to move on. If it was the former, because the organization was small and, thus, the sample was small, only a few people not completing the survey could significantly skew the results. If it was the latter, the reliability of the responses would be open to serious question.
Why is this important? Because leaders often make policy decisions on the basis of a combination of survey data and other resources. If the results of the survey prove difficult to interpret due to poor survey design, then any interpretation is suspect. And if the survey data are of poor quality, then there is a good chance that decisions will also be of poor quality. This becomes compounded by the fact that the designers of poorly constructed survey instruments often are completely unaware of the deficiencies in their surveys. Thus, they are unaware that the data will also be riddled with problems.
If leaders are going to base decisions on poor survey data, they might just as well consult tarot cards or a Ouija board. The likelihood of making quality, informed decisions is probably about the same.
The good news is that there's a simple solution to this problem. Either hire employees who have training in survey and research design, or make sure the people asked to design surveys get the training necessary to craft good questions using appropriate measures, interpret results correctly, and recognize when a question is so poorly constructed that it needs to be discarded.
That training should come from the social sciences—not marketing—because marketing surveys often are the worst offenders for these types of problems. In fact, hiring a few people with degrees in fields like sociology or anthropology and charging them with leading institutional research efforts is the best way to ensure acquisition of quality data that can help leadership in making sound decisions.