data&donuts
  • Data & Donuts (thinky thoughts)
  • COLLABORATor
  • Data talks, people mumble
  • Cancer: The Brand
  • Time to make the donuts...
  • donuts (quick nibbles)
  • Tools for writers and soon-to-be writers
  • datamonger.health
  • The "How" of Data Fluency


A map is not the territory...

1/16/2017

 
Alfred Korzybski was a scholar and philosopher of science known for the popular phrase, "A map is not the territory." Although his original intent was mathematical, I apply it specifically to my work in survey design. Quite simply, surveys are only approximations of true behavior, knowledge, or even competence; the actual "truths" lie beyond the limits of verbal or even written description. This explains my general aversion to poor survey design that reduces clinical behavior to a random selection of responses, neglectful of the true complexity of practicing medicine.

Full disclosure: if you are writing crappy surveys and work in the Continuing Medical Education (CME) space in any capacity--I probably know it. If I don't actually read or revise them myself, colleagues and clients send them to me. Often it is out of frustration or buyer's remorse, holding out hope that a few slim insights can be liberated from a poorly structured questionnaire.

Or a colleague stumbles upon a turd of a survey quite by accident and sends it in disbelief. Do you want the good news or the bad news first? Okay, the good news: I wouldn't dream of identifying your shoddy work publicly--mostly because you are all doing it. The bad news: I keep all of them in a file. Why? Because I can help you, even if the solution isn't immediately apparent. In fact, the last two were from a professional oncology society and, coincidentally, a company that somehow self-identifies as patient education experts, also in the oncology space.

Because we don't sample entire populations when we create surveys or analyze data, it is critical that questions are worded consistently so every participant is "answering" the same question. It is also important to avoid ordering questions in a way that introduces bias--I always shuffle the question order in the surveys I create to avoid this inherent source of error. I suggest you do the same.
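Shuffling question order is simple to automate. Here is a minimal sketch of per-respondent randomization; the question wording and the `randomized_survey` helper are invented for illustration, not taken from any real instrument:

```python
import random

# Hypothetical question bank -- names and wording are illustrative only.
questions = [
    "How often do you order molecular testing at diagnosis?",
    "Which guideline do you consult for first-line therapy?",
    "How confident are you interpreting biomarker reports?",
]

def randomized_survey(questions, seed=None):
    """Return a copy of the question list in random order, so
    order effects average out across respondents."""
    rng = random.Random(seed)
    shuffled = questions[:]   # copy; leave the master list intact
    rng.shuffle(shuffled)
    return shuffled

# Each respondent sees an independently shuffled order.
for respondent_id in range(3):
    print(respondent_id, randomized_survey(questions, seed=respondent_id))
```

Seeding by respondent makes each ordering reproducible, which helps when you later want to test for order effects in the responses.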

Research methodology studies have done the heavy lifting for you. Integrating Likert scales haphazardly, without understanding why we select certain scales for certain types of questions, is to your detriment.

You also don't want to encourage survey satisficing. Many respondents will select an acceptable answer at the expense of the correct one. Lazy survey design lets respondents make easy, 'good enough' selections that are usually not the best or most accurate measures of their behavior or beliefs.

"Satisficing is a decision-making strategy or cognitive heuristic that entails searching through the available alternatives until an acceptability threshold is met." --Wikipedia
How many of you are familiar with nonresponse bias? If you aren't measuring or considering its effects, you probably have not been adjusting for it in your data. Undetected nonresponse bias is different from response bias, and it matters especially when it is related to your outcome of interest. In a nutshell: if a percentage of people invited do not respond to the survey--and there are measurable differences between those who did and did not--you likely have nonresponse bias.
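A quick first check is to compare respondents and nonrespondents on a variable known for everyone on the invitation list. A minimal sketch, with entirely invented data (the specialties and response flags are hypothetical):

```python
# Sketch: screen for nonresponse bias by comparing respondents and
# nonrespondents on a frame variable known for everyone invited.
from collections import Counter

def proportions(values):
    """Return the share of each category in a list of labels."""
    counts = Counter(values)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

# Specialty comes from the invitation list; responded is True/False.
frame = [
    ("oncology", True), ("oncology", True), ("oncology", False),
    ("primary_care", False), ("primary_care", False), ("primary_care", True),
    ("oncology", True), ("primary_care", False),
]

respondents    = [spec for spec, responded in frame if responded]
nonrespondents = [spec for spec, responded in frame if not responded]

# Large gaps between the two distributions on a variable related to
# your outcome suggest nonresponse bias rather than benign missingness.
print("respondents:   ", proportions(respondents))
print("nonrespondents:", proportions(nonrespondents))
```

In this toy frame, oncologists respond at a much higher rate than primary care physicians, so any outcome that differs by specialty would be distorted in the raw results.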
The Medical Expenditure Panel Survey (MEPS) describes the process of weighting the findings across its survey respondents. It can be a bit complex, but the same adjustment can nonetheless be made in smaller survey populations by the addition of auxiliary variables.

A book titled Online Panel Research: A Data Quality Perspective lists the four important characteristics of auxiliary variables:

  1. Must be measured in the survey
  2. Population distributions must be known
  3. Correlated with all measures of interest
  4. Correlated with the response probabilities
Relying on your demographic questions is the lazy method. I know these measures are readily available, but they tend to correlate weakly both with measures of interest and with the likelihood of responding to surveys.
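Once you have an auxiliary variable that meets those criteria, the simplest adjustment is a post-stratification weight: weight each respondent so the sample matches the known population distribution. A minimal sketch, assuming a single auxiliary variable ("specialty") with invented population shares and counts:

```python
# Sketch of a post-stratification nonresponse adjustment using one
# auxiliary variable whose population distribution is known and that
# correlates with both the outcome and the probability of responding.
# All numbers are invented for illustration.

population_share  = {"oncology": 0.40, "primary_care": 0.60}  # known frame
respondent_counts = {"oncology": 60, "primary_care": 40}      # observed sample

def poststratification_weights(pop_share, resp_counts):
    """Weight each respondent so the weighted sample matches the
    known population distribution of the auxiliary variable."""
    n = sum(resp_counts.values())
    return {
        group: (pop_share[group] * n) / resp_counts[group]
        for group in resp_counts
    }

weights = poststratification_weights(population_share, respondent_counts)
# Oncology respondents are over-represented, so they are down-weighted:
print(weights)
```

Production surveys like MEPS layer several such adjustments (base weights, raking across multiple variables), but the principle is the same: the weighted respondent totals reproduce the known population margins.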

I will address more topics over the next few months, especially as I continue to consult and present workshops about survey design and methodology. Let me end on a soft statistic about response rates, since I am asked about this one ALL the time.

"Journals such as the Journal of the American Medical Association (JAMA) mandate a minimum response rate (60% in this case) to be considered for publication."

Be familiar with that benchmark when preparing surveys--rigorous methodology and relevance--but also when looking at survey data. It will help you determine how much of the territory is actually being mapped.
