The way we measure value is often flawed. I am not talking about the squishy definition of value used to estimate low- or high-value outcomes in healthcare; I am referring to how we evaluate physician learning in continuing medical education. I typically work with data provided post-activity, once all the front-end objectives and metrics have been fire-walled and set in stone. That is obviously not ideal, but these datasets offer a systematic glimpse of persistent misconceptions in how we evaluate learning outcomes.
Continuing medical education (CME) professionals tasked with working with outcomes data look for ways to describe significant patterns in informative, narrative formats. Numbers and units combine to yield measurements, but what is the difference between accuracy and precision? How many participants answered this question? How many answered it correctly pre- and post-activity? These are all questions of accuracy: basically, are my results aligned with the truth? It becomes problematic when we make a leap of faith and claim that, if these measurements are accurate, the participants in my program clearly learned something they didn't know before the intervention.
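To make that concrete, here is a minimal sketch of the kind of pre/post tally these datasets support. The participant records and field names are hypothetical, invented for illustration:

```python
# Minimal sketch of a pre/post correct-response tally for a CME activity.
# All data and field names here are hypothetical.
responses = [
    {"participant": "A", "pre_correct": False, "post_correct": True},
    {"participant": "B", "pre_correct": True,  "post_correct": True},
    {"participant": "C", "pre_correct": False, "post_correct": False},
    {"participant": "D", "pre_correct": False, "post_correct": True},
]

n = len(responses)
pre_rate = sum(r["pre_correct"] for r in responses) / n
post_rate = sum(r["post_correct"] for r in responses) / n

print(f"Answered correctly pre-activity:  {pre_rate:.0%}")
print(f"Answered correctly post-activity: {post_rate:.0%}")
# A jump from 25% to 75% is an accurate count of responses.
# It says nothing, by itself, about why the change occurred.
```

The counts can be perfectly accurate while the causal claim built on top of them remains a leap of faith.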
When you measure, you must interpret the measurement against the standard established by the tool. In the process, you put a little bit of yourself into the measurement and, for this reason, the tool you use to measure has a big impact on the result you get. The existence of a measurement, not surprisingly, means that someone actually measured it. There is a natural limit to how well I can measure an object, depending on how well I can 'see' it and how good a tool I am using to see it. (TedEd)
So far this sounds pretty straightforward, but what if we aren't using the right tool, or aren't using the tool properly?
Now we are considering precision. Where accuracy asks whether a measurement is close to the truth, precision asks how fine and repeatable that measurement is. If you have chosen the right tool, one with the specificity and exactness your question demands, then you can report your findings with confidence.
Basically, you can accurately measure before-and-after statistics for your educational interventions and report what you find, but if you want to be precise you need to switch to a more finely incremented tool.
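One way to see the distinction is to simulate two unbiased "tools" measuring the same underlying value. This is a sketch with invented numbers, not real program data:

```python
import random
from statistics import mean, stdev

random.seed(1)
true_value = 0.70  # the "truth" we are trying to measure

# Tool 1: unbiased but coarse (accurate, imprecise)
coarse = [true_value + random.gauss(0, 0.15) for _ in range(100)]
# Tool 2: unbiased and finely incremented (accurate and precise)
fine = [true_value + random.gauss(0, 0.02) for _ in range(100)]

for name, sample in [("coarse", coarse), ("fine", fine)]:
    print(f"{name}: mean={mean(sample):.3f} (accuracy), "
          f"spread={stdev(sample):.3f} (precision)")
# Both tools center on the truth; only the fine tool lets you
# distinguish a 0.68 from a 0.72 with any confidence.
```

Both tools are accurate on average; only the second is precise enough to detect a small real change.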
What can we do to achieve this higher level of granularity? Ask the right demographic questions. Evaluate pre-existing heuristics and biases. Measure exposure to ongoing education: how are clinical questions answered on a daily basis, and what other CME or CE programs were completed during your evaluation window? And don't be afraid to include measures that reflect a readiness to change, as sketched below.
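As one illustration, the same pre/post gain can be stratified by a readiness-to-change measure collected at registration. Again, the data and the "readiness" field are hypothetical:

```python
# Sketch: the same pre/post gain, stratified by a hypothetical
# "readiness to change" field collected at registration.
from collections import defaultdict

responses = [
    {"readiness": "high", "pre_correct": False, "post_correct": True},
    {"readiness": "high", "pre_correct": True,  "post_correct": True},
    {"readiness": "low",  "pre_correct": False, "post_correct": False},
    {"readiness": "low",  "pre_correct": False, "post_correct": True},
]

groups = defaultdict(list)
for r in responses:
    groups[r["readiness"]].append(r)

for level, rs in groups.items():
    pre = sum(r["pre_correct"] for r in rs) / len(rs)
    post = sum(r["post_correct"] for r in rs) / len(rs)
    print(f"readiness={level}: pre {pre:.0%} -> post {post:.0%}")
# The aggregate gain may mask very different stories per subgroup.
```

The point is not this particular variable but the habit: finer increments in what you collect up front buy you precision in what you can claim afterward.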
Be sure to hit the apple every time...
It isn't enough to label everything evidence-based and consider it business as usual. Question the quality of the evidence, the motivation for disseminating it, and who stands to benefit the most from its uptake.
Remember the quote by Upton Sinclair...
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it!”