The New Year brings new business objectives, fresh insights, and well-intentioned re-alignment as companies look to improve their strategic offerings.
This year, data--and how to get more of it--appears to top many resolutions and skill-gap assessments. But before discussions of data analytics can be of any benefit, we first need to make sure we are collecting high-value information.
The debate that often rages between professional survey design and real-world implementation typically begins with survey length. The concern (understandably) is dwindling response rates, potentially due to participant fatigue, multi-platform usability problems, non-response, drop-off, or questionable response styles.
The result is often a survey front-loaded with low-cognitive-load demographic questions at the expense of the actual outcomes of interest. What we need from a business perspective are detailed, comprehensive analyses--and yes, longer survey instruments.
I am a fan of demographic variables, but I also admit they tend to correlate only weakly with the outcomes of interest.
I look to other industries for innovation in "information gathering" primarily because healthcare is relatively new to the conversation.
That is my only defense of the low-quality survey instruments found all the way down the stakeholder chain.
What if we created blocks of questions, or modules, and assigned them to respondents randomly? Believe it or not, there is a ton of research in this arena, but I will try to share the highlights here--especially my favorite: cyborgs or monsters. The techniques for fusing survey modules come down to respondent matching and data imputation.
Hot deck imputation involves replacing missing values of one or more variables for a non-respondent (called the recipient) with observed values from a respondent (the donor) that is similar to the non-respondent with respect to characteristics observed by both cases.--Rebecca R. Andridge and Roderick J. A. Little
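The donor-matching step described in the quote can be sketched in a few lines. This is a minimal illustration, not a production implementation: respondents are hypothetical dicts, `None` marks questions a respondent was never shown, and donors are matched exactly on a few observed characteristics before one is drawn at random.

```python
import random

# Hypothetical respondent records; None marks an unanswered question.
respondents = [
    {"age_group": "30-39", "region": "west", "q1": 4, "q2": None},
    {"age_group": "30-39", "region": "west", "q1": 5, "q2": 3},
    {"age_group": "40-49", "region": "east", "q1": 2, "q2": 1},
]

def hot_deck_impute(records, match_keys, rng=random.Random(0)):
    """Fill each recipient's missing items with observed values from a
    donor that matches the recipient on all of match_keys."""
    filled = [dict(r) for r in records]
    for rec in filled:
        missing = [k for k, v in rec.items() if v is None]
        for item in missing:
            # Donor pool: records that observed this item and share the
            # recipient's matching characteristics. The recipient itself
            # is excluded automatically, since its value is None.
            donors = [d for d in records
                      if d.get(item) is not None
                      and all(d.get(k) == rec.get(k) for k in match_keys)]
            if donors:
                rec[item] = rng.choice(donors)[item]
    return filled

completed = hot_deck_impute(respondents, match_keys=["age_group", "region"])
```

Here the first respondent's missing `q2` is filled from the second respondent, the only donor who matches on both age group and region; real applications would use softer matching (nearest-neighbor distances, weighting classes) rather than exact equality.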
For example, if there are 500 questions, they may be evenly distributed in 10 blocks with each block containing 50 questions dictated by themes under investigation. This block structure is subsequently used to generate split questionnaire design, whereby the total population of respondents is split into several groups and exposed to different sections of the questionnaire. For example, 2000 total participants in a 10-split design equals 200 participants for each split. With the “between block design” each split consists of selected blocks and participants answer all questions in the block. Here, it is also assumed that splits are distributed randomly and evenly amongst participants.--Halder A, et al.
The trick is to explore and discover new ways to improve the quality of the data you are collecting--perhaps a monster or a cyborg is all you need...