I am a recovering continuing medical education professional. I was lucky. In most cases, I worked with a talented team of like-minded thinkers who were not afraid to point out the inconsistencies between what we sold and what we implemented.
I migrated my understanding of outcomes research outside the penumbra of CME and observed what was happening industry-adjacent. Where the stakes are higher, in hospital systems, medical practices, and organizations with an ethical health economics platform, there was much to learn. The first things to go <<Insert happy dance>> were poorly designed learning objectives and pre- and post-tests.
My intent isn't to oversimplify, but in a nutshell: a CME provider or medical education company responds to a request for proposal or writes a grant hoping for pharmaceutical company financial support of an educational activity or intervention. You can read more over here. Obviously these requests are tailored to the interests of funders, often to their own detriment, but that is a topic for another day.
For more thoughtful suggestions and a summary, read Reforming the continuing medical education system, published in 2015 by Steven E. Nissen, Chairman of the Department of Cardiovascular Medicine at the Cleveland Clinic and Professor of Medicine at the Cleveland Clinic Lerner College of Medicine at Case Western Reserve University.
The delivery of medical education relies on the traditional methods for didactic learning such as lectures and seminars in customary settings such as auditoriums or classrooms. The content is largely driven by the educator without any assessment of the true educational needs of the learner. Using these traditional delivery methods, there are few opportunities to assess whether the educational program actually changed physician behavior or resulted in improvement in patient outcomes.--Steven E. Nissen, MD
The majority of programs seek to show increased knowledge or competence post-intervention--at all costs. They often rely on relative rather than absolute change, or on a host of biases only augmented by choice architecture and low-rigor survey methodology. Sample questions are often problematic: asking learners to self-identify barriers to changing behavior, asking how thoughts and attitudes have changed after the educational activity, or leading questions such as
"As a result of the education, you plan to: (Learner checks all that apply)."
Recent research by Michael B. Wolfe and Todd J. Williams, Poor metacognitive awareness of belief change, provides an informative discussion of changing beliefs, finding that recollections of initial beliefs tended to be biased in the direction of subjects' current beliefs.
I suggest you read the research for a granular understanding of the methodology and outcomes, but in summary, there were two experiments. In the first, 128 subjects, prescreened from a total of 548, read a summary of the scientific literature on the effectiveness of spanking. Don't focus on the topic; redirect your attention to how we measure learning in CME. Subjects' post-reading impressions were gathered and evaluated through argument-focused processing and additional methods. Here are the findings as reported:
I do work with data-savvy medical education professionals, but mainly in academic CME programs, where I tend to believe the rigor and methodology are more scientific and less exposed to shortcuts and low-hanging fruit. I know, I know, that sounds harsh, but having worked with dozens of CME departments I have seen my share of suspicious "wrinkled tubes in medicine cabinets"... the shortcuts and cost-cutting combined with biases introduced by attempts to maximize funding and under-deliver on outcomes.
Here are my top tools for measuring provider behavior change--your mileage may vary...
1. Use a decision analysis format. Medicine is not practiced in a multiple-choice format; there are trade-offs, and I have data to prove it. This graphic is from an educational program asking providers to rank considerations at the point of care (1 is the highest priority, 8 is the lowest). We ask multiple questions in this format to help contextualize decisions at the point of care. Think about the granularity of insight when cross-tabulations of these findings are available across a wide variety of therapeutic decisions; a sketch of that kind of cross-tabulation follows this list.
If senior management restricts you to pre- and post-activity assessments, you can still increase relevance.
2. Provide the context of the original response when querying about current behavior (see the second sketch after this list).
3. Learners reach for whatever tools and insights are currently available when making decisions, so reproduce that milieu in the post-activity assessment. Asking learners about their current behavior in a vacuum is dangerous; the brain hates gaps and will fill them with whatever is quickly recalled.
4. This is a small study, I get it. But look at the bibliography of the Wolfe and Williams article. Follow the thread to the foundational research.
5. Listen to the podcast below.
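Here is a minimal sketch of the cross-tabulation mentioned in point 1. The column names, specialties, and rankings are hypothetical, not from any real program.

```python
import pandas as pd

# Hypothetical point-of-care ranking data: each row is one learner,
# 1 = highest priority, 8 = lowest. Values are illustrative only.
responses = pd.DataFrame({
    "specialty":     ["cardiology", "primary care", "cardiology", "primary care", "cardiology"],
    "efficacy_rank": [1, 2, 1, 3, 2],
    "cost_rank":     [4, 1, 5, 2, 6],
})

# Cross-tabulate how the rank assigned to efficacy relates to the rank
# assigned to cost, split by specialty -- the kind of granularity a
# single multiple-choice post-test score cannot provide.
table = pd.crosstab(
    index=[responses["specialty"], responses["efficacy_rank"]],
    columns=responses["cost_rank"],
)
print(table)
```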
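And a sketch of point 2, re-presenting the learner's own baseline answer rather than asking about current behavior in a vacuum; the wording and example response are hypothetical.

```python
def follow_up_item(baseline_answer: str) -> str:
    """Build a post-activity question that shows the learner their original response."""
    return (
        f'At baseline you told us: "{baseline_answer}"\n'
        "Looking at that response today, how would you describe your current "
        "approach, and what, if anything, has changed?"
    )

print(follow_up_item("I rarely order the screening test unless the patient asks."))
```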
In the show, psychologists Michael Wolfe and Todd Williams take us through their new research, which suggests that because brains so value consistency, and are so determined to avoid the threat of decoherence, we hide the evidence of our belief change. That way, the story we tell ourselves about who we are can remain more or less heroic, with a stable, steadfast protagonist whose convictions rarely waver ...
In a world of "evidence-based" medicine I am a bigger fan of practice-based evidence.
Remember the quote by Upton Sinclair...
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”