Do you write survey questions? Use them as a measure of knowledge, competency, or behavioral change? Perhaps you are thankful that you don't have to write them, your job is simply to do the analytics.
Your enterprise analytics are set-and-forget. But what if I asked you about your process? I just finished grading assignments in a university class I am facilitating on Understanding Data. Students get lost when asked to describe discrete vs. continuous data types or to distinguish among qualitative ordinal, binomial, and nominal data. It made me want to rewind to data gathering, which I think you will find helpful. If you need hands-on help, sign up for the first tutorial in a series I am launching Thursday, Big Data on a Less Big Budget.

But let's not get ahead of ourselves. We start with data identification, sourcing, and gathering for a specific reason: the types of data we collect determine the types of analyses we can do--and thus, how meaningful our insights might be. If it were up to me, I would like to see more discrete choice modeling analytics using a Python library like ChoiceModels. Discrete choice models, or qualitative choice models, "describe, explain, and predict choices between two or more discrete alternatives."
But many teams want to keep relying on Likert scales or Likert-type questions, so let me explain how this impacts your analytics plan.
Why don't we make sure we are at least doing the best we can?
In my experience, once you get past the murky distinction among nominal, ordinal, and numeric data, little thought goes into how you need to adjust your analytics. The time to make these important decisions is immediately after you create a data question or draft a set of objectives for a needs assessment.
To collect measures of attitude, character, and personality that better assess post-activity behavior of a healthcare provider after an educational intervention, you likely rely on Likert attitudinal scales and apply them to professional learning and/or behavior change.
The problem is you need to distinguish between Likert-type items and Likert scales. Likert-type items are single-response questions that are not specifically related: you list them separately, and you do not intend to combine the responses or aggregate them into a score.
When you analyze Likert-type responses, you are working with an ordinal measurement--there is a directionality, a "greater than" or "lesser than" relationship. We treat this as ordinal because you can't say much about the distance between the measures or distinguish the intensity of the difference. Because of this, you would use descriptive statistics like the mode or median, plus frequencies to assess variability. Analysis might also include a chi-square test.
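To make the ordinal workflow concrete, here is a minimal sketch using pandas and scipy with hypothetical response data (the item, the groups, and all counts are invented for illustration): median, mode, and frequencies for a single Likert-type item, plus a chi-square test comparing the response distributions of two groups.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical Likert-type responses to one item
# (1 = strongly disagree ... 5 = strongly agree)
responses = pd.Series([4, 5, 3, 4, 2, 5, 4, 3, 4, 5])

print("Median:", responses.median())           # central tendency for ordinal data
print("Mode:", responses.mode().tolist())      # most frequent response
print(responses.value_counts().sort_index())   # frequency distribution

# Chi-square test of independence: do two hypothetical provider groups
# differ in how their responses fall into low/mid/high agreement bins?
observed = np.array([[10, 15, 20],   # group A counts
                     [20, 15, 10]])  # group B counts
chi2, p, dof, _ = stats.chi2_contingency(observed)
print(f"chi2={chi2:.2f}, dof={dof}, p={p:.3f}")
```

Note the example deliberately avoids the mean: averaging ordinal codes assumes equal spacing between response options, which is exactly what a single Likert-type item cannot guarantee.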
Likert scales are a collection of related questions from which you create a composite score--a quantitative measure of, say, how a healthcare provider selects a first-line therapeutic option for a patient with "mild to moderate" rheumatoid arthritis. Because the individual items generate a composite or overall measure, we can now use the mean, with the standard deviation as a measure of variability. Appropriate analyses for these interval-scale composites include Pearson's r, t-tests, ANOVA, and regression.
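The composite-score workflow can be sketched the same way. This example uses entirely hypothetical pre/post data for a 4-item scale (one row per respondent): items are averaged into a composite per person, then a paired t-test checks whether composites shifted after the educational intervention.

```python
import numpy as np
from scipy import stats

# Hypothetical 4-item Likert scale (1-5), one row per respondent,
# administered before and after an educational intervention.
pre = np.array([[2, 3, 2, 3],
                [3, 3, 4, 2],
                [2, 2, 3, 3],
                [3, 4, 3, 3],
                [2, 3, 3, 2]])
post = np.array([[4, 4, 5, 4],
                 [3, 4, 4, 5],
                 [4, 5, 4, 4],
                 [5, 4, 4, 5],
                 [4, 4, 3, 4]])

pre_score = pre.mean(axis=1)    # composite score per respondent
post_score = post.mean(axis=1)

print("Pre  mean:", pre_score.mean(), "SD:", pre_score.std(ddof=1))
print("Post mean:", post_score.mean(), "SD:", post_score.std(ddof=1))

# Paired t-test: did composite scores change after the intervention?
t, p = stats.ttest_rel(post_score, pre_score)
print(f"t={t:.2f}, p={p:.4f}")
```

In practice you would also check that the items hang together before averaging them (for example with Cronbach's alpha), since the composite is only meaningful if the items measure the same construct.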
This is why decisions around your data analytics start at the question development stage. What type of information are you hoping to gather? What data types do you need to answer your data question?
You will know this right away if you don't outsource data gathering to PubMed-published articles or pre-existing survey data.

Beware of low-hanging fruit...
A summary of discrete choice analysis (DCA) from the Columbia University Mailman School of Public Health...
Discrete choice models are appropriate for evaluating behaviors that operate within a framework of rational choice. Healthcare professionals, when confronted with a discrete set of options, will opt for maximal benefit or utility: the options most likely to improve patient outcomes, for example, and/or those that fit within a framework of evidence-based medicine.
These decisions are made from both the choices presented at the point of care AND the characteristics of the professional making the final selection. It follows from this assumption that utility of a choice is a function of the characteristics of the possible choices and the characteristics of the person making the choice.
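The utility idea above can be sketched with a toy random-utility (logit) model. Everything here is hypothetical: the therapy options, the attribute columns, the weights, and the provider characteristic are invented to show how option attributes and chooser characteristics combine into choice probabilities; this is not the ChoiceModels API, just plain numpy.

```python
import numpy as np

# Hypothetical attributes per therapy option:
# [efficacy score, safety score, guideline concordance]
options = np.array([
    [0.8, 0.6, 1.0],   # option A
    [0.6, 0.9, 1.0],   # option B
    [0.9, 0.4, 0.0],   # option C (off-guideline)
])

beta = np.array([1.5, 1.0, 2.0])   # assumed attribute weights
risk_aversion = 0.5                # provider characteristic (assumed)

# A chooser characteristic can modify how attributes are weighted:
# a more risk-averse provider weights safety more heavily.
beta_adj = beta.copy()
beta_adj[1] *= (1 + risk_aversion)

utility = options @ beta_adj       # deterministic utility of each option
prob = np.exp(utility) / np.exp(utility).sum()   # logit choice probabilities
print(prob)
```

In a real discrete choice experiment you would estimate the weights from observed choices (e.g., with a conditional or multinomial logit) rather than assume them, which is where a library like ChoiceModels or statsmodels comes in.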
Here is a resource I found extremely helpful--reach out if you are interested in exploring DCA collaboratively.
E. Lancsar, J. Louviere. Conducting discrete choice experiments to inform healthcare decision making: a user's guide. PharmacoEconomics, 26 (2008), pp. 661–677.