In my professional work I often review and analyze continuing medical education (CME) outcomes, and there is one recurring dilemma that I want to help you avoid. Let me qualify this by saying I know how limited CME budgets can be, and how many of the steps involved in survey design or question structure may seem straightforward enough that you attempt to fly solo. That's fine, but I think I can help keep your journey safe and, more importantly, statistically rigorous.

It will be easier to appreciate the guidance if you first appreciate the difference between a problem and an issue. Once you define a specific problem, it becomes easier to create a checklist of concise issues that you have the power to resolve immediately. For example, a limited budget for statistical consulting may undermine your wish list of developing strong outcomes data (a problem), but a limited understanding of a data framework can be resolved with a helping hand or a reference resource (an issue).

Many question designers rely on evaluating percent correct on a knowledge-type question. Perhaps you even integrate a case study or an informative question stem followed by a list of potential decisions or answers. Outcomes data that I recently reviewed had these question types coded as either correct or incorrect. As I read on in the reports, I saw that t-test or ANOVA procedures were proudly cited, but there is one problem: dichotomous data are not normally distributed, and these statistical procedures assume that the data are normally distributed. Inferences from your data are now invalid and do not report the findings you had hoped for. I have encountered numerous occasions where the go-to statistical procedure was the t-test (or ANOVA) without any consideration of whether it is actually the correct test to use. When I design survey instruments I prefer a Likert approach, but again this introduces additional statistical complexity. Why Likert?
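Before getting to Likert scales, it may help to see what an appropriate tool for dichotomous data looks like. A minimal sketch in Python: comparing the proportion of correct responses before and after an activity with a chi-square test, which respects the binary outcome in a way a t-test on 0/1 scores does not. The counts below are invented purely for illustration.

```python
# Sketch: pre- vs post-activity comparison on a dichotomous
# (correct/incorrect) knowledge item, using hypothetical counts.
from scipy.stats import chi2_contingency

# Rows: pre-test, post-test; columns: correct, incorrect
table = [[42, 58],   # pre:  42 of 100 participants answered correctly
         [63, 37]]   # post: 63 of 100 participants answered correctly

# Chi-square test of independence on the 2x2 contingency table.
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```

A two-proportion z-test or logistic regression would be equally defensible choices here; the point is simply that the test should match the measurement level of the data.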
In medicine there is rarely a right-or-wrong question that can be applied to all patients. Because of the nuances of the healing arts, I try to avoid labeling, and therefore analyzing, a response as either correct or incorrect. The most useful advice I can offer when you are designing these elements with limited resources: clarify what you intend to do with your data. If you are interested in showing that scores differ across groups of participants (primary care physicians vs. specialists), you may treat your scores as numeric values, provided they fulfill the usual assumptions about variance (or shape) and sample size. If instead you are interested in highlighting how response patterns vary across subgroups, then you should treat item scores as discrete choices among a set of answer options and look to log-linear modeling, ordinal logistic regression, item-response models, or any other statistical model that can handle polytomous items. Stay tuned for additional insights and discussions...pick your onions carefully!
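For a simple two-group comparison of Likert responses that does not assume the scores are interval-scaled, a rank-based test is one option. A minimal sketch, with invented data and hypothetical group labels for illustration:

```python
# Sketch: comparing Likert responses (1-5) between two hypothetical
# participant groups. All data here are invented for illustration.
from scipy.stats import mannwhitneyu

primary_care = [3, 4, 4, 5, 3, 4, 2, 5, 4, 3]
specialists  = [4, 5, 5, 4, 5, 3, 5, 4, 5, 4]

# Mann-Whitney U uses only the rank order of the responses, which is
# appropriate for ordinal items; if you need covariates, ordinal
# logistic regression (e.g., statsmodels' OrderedModel) is a next step.
u_stat, p_value = mannwhitneyu(primary_care, specialists,
                               alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```

This is a sketch, not a recommendation to default to rank tests; the right model depends on what you intend to do with the data, as argued above.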