data & donuts
  • Data & Donuts (thinky thoughts)
  • COLLABORATor
  • Data talks, people mumble
  • Cancer: The Brand
  • Time to make the donuts...
  • donuts (quick nibbles)
  • Tools for writers and soon-to-be writers
  • datamonger.health
  • The "How" of Data Fluency


"Maybe stories are just data with a soul." -- Brene Brown

Trends with benefits: improving surveys

1/17/2017

 
The New Year launches new business objectives, insights, and well-intentioned realignment as companies seek to improve their strategic offerings.

This year, data, and how to get some, appears to top many resolutions and skill-gap assessments. Before discussions of data analytics can be of benefit, we first need to make sure we are collecting high-value information.

The debate raging between professional survey design and implementation typically begins at survey length. The concern, understandably, is dwindling response rates, potentially due to participant fatigue, multi-platform usability, non-response, drop-off, or questionable response styles.

The result is often a survey front-loaded with low-cognitive-load demographic questions at the expense of the actual outcomes of interest. What we need from a business perspective are detailed and comprehensive analyses, and yes, longer survey instruments.
I am a fan of demographic variables but also admit they correlate weakly with the outcomes of interest.
I look to other industries for innovation in "information gathering," primarily because healthcare is relatively new to the conversation.

That is my only defense of the low-quality survey instruments found all the way down the stakeholder chain.

What if we create blocks of questions, or modules, and assign them to respondents randomly? Believe it or not, there is a ton of research in this arena, but I will try to share the highlights here, especially my favorite: Cyborgs or Monsters. The two techniques for fusing survey modules are respondent matching and data imputation.

To avoid getting too buried in unfamiliar terms or any one methodology, my goal is to introduce granularity and detailed response without the risks of low-value responses or drop-out. Hot-deck imputation is a method used to fill the gaps between unassigned blocks of questions. Auxiliary variables serve to link answers while preserving correlations.
Hot deck imputation involves replacing missing values of one or more variables for a non-respondent (called the recipient) with observed values from a respondent (the donor) that is similar to the non-respondent with respect to characteristics observed by both cases. -- Rebecca R. Andridge and Roderick J. A. Little
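The idea can be sketched in a few lines of pandas. Everything here is invented for illustration: the `hot_deck_impute` helper, the column names, and the tiny survey frame are my own assumptions, and real hot-deck implementations use far more careful donor pools and matching rules.

```python
import random
import pandas as pd

def hot_deck_impute(df, target, aux_cols, seed=42):
    """For each recipient missing `target`, copy the answer of a
    randomly chosen donor who matches on every auxiliary variable."""
    rng = random.Random(seed)
    out = df.copy()
    donors = out.dropna(subset=[target])
    for idx, row in out[out[target].isna()].iterrows():
        # Donor pool: complete cases sharing the recipient's aux profile.
        match = donors
        for col in aux_cols:
            match = match[match[col] == row[col]]
        if len(match):
            out.at[idx, target] = rng.choice(list(match[target]))
    return out

# Hypothetical mini-survey: two respondents skipped the outcome question.
survey = pd.DataFrame({
    "age_band":  ["18-34", "18-34", "35-54", "18-34", "35-54"],
    "region":    ["NE",    "NE",    "SW",    "NE",    "SW"],
    "outcome_q": [4.0,     5.0,     2.0,     None,    None],
})
filled = hot_deck_impute(survey, "outcome_q", ["age_band", "region"])
```

Because donors are matched on the auxiliary variables, the borrowed answers preserve the correlations between demographics and outcomes rather than flattening them toward a mean.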
The Council of American Survey Research Organizations (CASRO) presented a research paper, Cyborgs vs. Monsters: Assembling Modular Mobile Surveys to Create Complete Data Sets.
The Cyborg and the Monster approaches create powerful insights while managing the limitations of longer survey design, modernizing quantified response for a digital ecosystem that has moved beyond pen-and-paper instruments.
The Cyborg is assembled from parts and pieces that stand in for missing data. Hot-deck imputation conserves the total sample size by creating an answer from a randomly selected but similar respondent, the donor.

We are seeking to determine what the respondent "would have" replied, using synthetic data derived from the characteristics of the respondent.

The second method assumes that certain respondents are effectively identical: the Monsters. The task here is to find similar respondents and "stitch them together" as if they were a single individual. You now have a complete data set, but the sample size is smaller. The desired outcome may be that only actual data is used in the calculations, with no "synthetic parts."
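A minimal sketch of the Monster approach, assuming two splits that share the same demographic questions but answer different question blocks. The frames, column names, and matching rule are all hypothetical; a real implementation would match on a richer auxiliary profile.

```python
import pandas as pd

# Split A answered block 1; split B answered block 2.
# Both groups answered the shared demographic questions.
split_a = pd.DataFrame({
    "age_band": ["18-34", "35-54", "55+"],
    "region":   ["NE",    "SW",    "NE"],
    "q_block1": [3, 4, 2],
})
split_b = pd.DataFrame({
    "age_band": ["18-34", "35-54", "35-54"],
    "region":   ["NE",    "SW",    "NE"],
    "q_block2": [5, 1, 4],
})

# Stitch: pair each split-A respondent with at most one split-B
# respondent who has the same auxiliary profile.
monsters = split_a.merge(
    split_b.drop_duplicates(["age_band", "region"]),
    on=["age_band", "region"],
    how="inner",
)
# Respondents with no match drop out, so the stitched sample is
# smaller, but every value in it is real, observed data.
```

Note the trade-off against the Cyborg: no synthetic values, but the `55+` respondent with no counterpart in split B simply disappears from the combined data set.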


The Proceedings of the Association for Survey Computing share a similar "block" method of breaking surveys into manageable units. Shorter Interview, Longer Surveys: Optimizing the survey participant experience whilst accommodating ever expanding client demands illustrates the block approach, separated by splices in the survey. The green blocks below indicate which data "block" is included in each split.
[Figure: split questionnaire design showing which blocks appear in each split]
For example, if there are 500 questions, they may be evenly distributed in 10 blocks with each block containing 50 questions dictated by themes under investigation. This block structure is subsequently used to generate split questionnaire design, whereby the total population of respondents is split into several groups and exposed to different sections of the questionnaire. For example, 2000 total participants in a 10-split design equals 200 participants for each split. With the “between block design” each split consists of selected blocks and participants answer all questions in the block. Here, it is also assumed that splits are distributed randomly and evenly amongst participants. -- Halder A, et al.
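The arithmetic in the quote can be sketched directly. The choice of three blocks per split and all variable names below are my own assumptions for illustration, not details from the paper.

```python
import random

N_QUESTIONS, N_BLOCKS = 500, 10
N_PARTICIPANTS, N_SPLITS = 2000, 10
BLOCK_SIZE = N_QUESTIONS // N_BLOCKS    # 50 questions per theme block
PER_SPLIT = N_PARTICIPANTS // N_SPLITS  # 200 participants per split

# Block k holds questions k*50 .. k*50+49, grouped by theme.
blocks = [list(range(k * BLOCK_SIZE, (k + 1) * BLOCK_SIZE))
          for k in range(N_BLOCKS)]

# Between-block design: each split sees a subset of whole blocks
# (here, hypothetically, 3 randomly chosen blocks per split).
rng = random.Random(0)
split_blocks = [sorted(rng.sample(range(N_BLOCKS), 3))
                for _ in range(N_SPLITS)]

# Distribute participants randomly and evenly across the splits.
participants = list(range(N_PARTICIPANTS))
rng.shuffle(participants)
splits = [participants[i * PER_SPLIT:(i + 1) * PER_SPLIT]
          for i in range(N_SPLITS)]
```

Each participant then answers every question in their split's assigned blocks and nothing else, which is what later makes the Cyborg or Monster fusion step necessary.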
Skip logic can also be useful for creating a question hierarchy in which certain responses trigger unique algorithms and procedures. Combined with split questionnaire design, it breaks longer surveys into "split," shorter surveys.
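A toy sketch of skip logic, where the question names and section ids are entirely hypothetical: a gating answer routes the respondent past sections that do not apply to them.

```python
def next_section(answers):
    """Return the id of the next survey section given answers so far.

    `answers` maps question ids to responses; the branches below are
    invented examples of routing rules, not a real instrument.
    """
    if answers.get("uses_product") == "no":
        return "non_user_module"      # skip all usage questions
    if answers.get("satisfaction", 0) <= 2:
        return "detractor_follow_up"  # probe low scores in depth
    return "standard_module"

route = next_section({"uses_product": "yes", "satisfaction": 5})
```

Each respondent sees a shorter path through the instrument, which attacks the same fatigue problem the split designs do, but by relevance rather than by random assignment.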

The trick is to explore and discover new ways to improve the quality of the data you are collecting -- perhaps a monster or a cyborg is all you need...



