Full disclosure: if you write crappy surveys and work in the Continuing Medical Education (CME) space in any capacity, I probably know about it. If I don't read or revise them myself, colleagues and clients send them to me, often out of frustration or buyer's remorse, holding out hope that a few slim insights can be salvaged from a poorly structured questionnaire.
Or a colleague stumbles upon a turd of a survey quite by accident and sends it in disbelief. Do you want the good news or the bad news first? Okay, the good news: I wouldn't dream of identifying your shoddy work publicly, mostly because you are all doing it. The bad news: I keep all of them in a file. Why? Because I can help you, even if the solution isn't immediately apparent. In fact, the last two came from a professional oncology society and, coincidentally, from a company that somehow self-identifies as a patient education expert, also in the oncology space.
Because we don't sample entire populations when we create surveys or analyze data, it is critical that questions are worded consistently so that every participant is answering the same question. It is also important to avoid ordering questions in a way that introduces bias. I always shuffle the surveys I create to avoid this inherent source of error in survey design, and I suggest you do the same.
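If your survey platform doesn't randomize item order for you, it takes only a few lines to do it yourself. Here is a minimal sketch; the question text and the function name `shuffled_questionnaire` are hypothetical examples, not part of any particular survey tool.

```python
import random

# Hypothetical master list of questionnaire items
QUESTIONS = [
    "How confident are you managing immune-related adverse events?",
    "How often do you discuss clinical trial enrollment with patients?",
    "Which guideline source do you consult most frequently?",
]

def shuffled_questionnaire(questions, seed=None):
    """Return a fresh random ordering of the items for one respondent,
    so that no single question order dominates the collected data."""
    rng = random.Random(seed)   # seed only for reproducible tests/audits
    order = questions[:]        # copy so the master list is untouched
    rng.shuffle(order)
    return order
```

Calling this once per respondent spreads any order effects roughly evenly across the sample instead of baking one ordering into every response.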
Research methodology studies have done the heavy lifting for you. Integrating Likert scales haphazardly, without understanding why certain scales suit certain question types, works to your detriment.
You also don't want to invite survey satisficing. Many respondents will select an acceptable answer at the expense of the accurate one. Lazy survey design makes it easy to pick responses that are 'good enough' but that are rarely the best or most accurate measures of a respondent's behavior or beliefs.
"Satisficing is a decision-making strategy or cognitive heuristic that entails searching through the available alternatives until an acceptability threshold is met." --Wikipedia
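One common satisficing signature in collected data is straightlining: a respondent gives the identical rating to every item in a Likert grid. A simple screen like the sketch below (the function name and threshold are my own illustrative choices, not a published standard) can flag such records for review.

```python
def is_straightliner(responses, min_items=4):
    """Flag a respondent who gave the same rating to every grid item.

    Only grids with at least `min_items` items are considered, since
    identical answers on two or three items can easily be genuine.
    """
    return len(responses) >= min_items and len(set(responses)) == 1
```

Flagged respondents shouldn't be discarded automatically; straightlining is a prompt to inspect the record, not proof of satisficing.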
The book *Online Panel Research: A Data Quality Perspective* lists four important characteristics of auxiliary variables:
- Must be measured in the survey
- Population distributions must be known
- Correlated with all measures of interest
- Correlated with the response probabilities
I will address more of these topics over the next few months, especially as I continue to consult and present workshops on survey design and methodology. Let me end with a soft statistic about response rates, since I am asked about this one ALL the time.
"Journals such as the Journal of the American Medical Association (JAMA) mandate a minimum response rate (60% in this case) to be considered for publication."
Be familiar with this benchmark both when preparing surveys (it signals rigorous methodology and relevance) and when evaluating survey data. It will help you determine how much of the territory is actually being mapped.