Meet your continuing medical education (CME) faculty...
I am not sure how likely CME is to influence decisions at the point of care. Even large companies evaluate participants with multiple-choice or Likert questions--not the probabilities or risk/benefit considerations facing health care providers on a per-patient basis.
I explored the reported details from financial disclosure statements on Open Payments Data and noted the following compensation rates for a single CME panel activity.
Here is a typical CME activity in Rheumatology.
Expert Faculty A--BELOW the National Mean by $1,279.50/ABOVE the Specialty Mean by $1,632.60
Expert Faculty B--ABOVE the National Mean by $208,641.34/ABOVE the Specialty Mean by $206,573.10
Expert Faculty C--ABOVE the National Mean by $226,765.21/ABOVE the Specialty Mean by $220,628.17
Expert Faculty D--ABOVE the National Mean by $16,668.49/ABOVE the Specialty Mean by $10,531.45
Expert Faculty E--ABOVE the National Mean by $120,973.39/ABOVE the Specialty Mean by $114,836.35
Now before you consider me a curmudgeon, I did a little research first. I am not targeting the small and scrappy companies--I am after the big companies with deep pockets that honestly should know better.
I took a quick glance at the accompanying slide compendium and noted a lack of effect sizes, a preponderance of p-values, a rogue hazard ratio here and there, and a blatant lack of statistical rigor. The data and findings were based on abstracts only, so we are even deeper in the weeds when trying to make sense of the insights being recommended by the panel.
Here are a few reasons we need effect sizes: p-values require assumptions about the underlying distribution, and they depend on BOTH the size of the effect AND the size of the sample. You can play this out either way--a huge effect in a small sample can fail to reach significance, while a tiny effect in a very large sample can look highly significant.
If your objective is to educate physicians about the latest in clinical research--shouldn't the emphasis be on the significance of the effect, not the sample size?
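To make the sample-size point concrete, here is a minimal sketch (my own illustration, not from the slide compendium) using a normal-approximation two-sample test. The effect sizes and sample sizes are hypothetical numbers chosen to show the contrast:

```python
from math import sqrt
from statistics import NormalDist

def z_test_p(effect_d, n_per_group):
    """Two-sided p-value for a standardized mean difference d
    between two groups of size n (normal approximation)."""
    z = effect_d * sqrt(n_per_group / 2)
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Negligible effect, huge sample: "statistically significant"
tiny_effect_big_n = z_test_p(0.05, 100_000)

# Large effect, tiny sample: fails the p < 0.05 bar
huge_effect_small_n = z_test_p(1.20, 4)
```

The first result is vanishingly small and the second exceeds 0.05, even though the second effect is 24 times larger--exactly why a p-value alone tells a reader nothing about clinical relevance.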
Because I work with a lot of teams that rely on survey data, I am focused on how to generate data with robust and rigorous research methodology. Unfortunately, in CME we seem to be tethered to old ways of thinking and continually box ourselves in with the constraints of existing frameworks.
One of the first sources of a useful data structure (conjoint analysis) came to me from a 1989 paper, Courtyard by Marriott: Designing a Hotel Facility with Consumer-Based Marketing Models.
This looks complicated, but the figure below lays out 50 factors covering hotel features and services, with 167 associated levels categorized under seven facets (External Factors, Rooms, Food, Lounge, Services, Leisure, Security).
You can easily define facets (or attributes) in whatever therapeutic area you are targeting for intervention. Think how we could create metrics if we instead focused on rheumatoid arthritis, for example: combination therapy, route of administration, location of administration (home or infusion clinic), frequency of administration, time needed for infusion, experience with DMARDs, chance of efficacy, onset of action, cost, etc...
I use examples from RA because of the complexity of measuring physician behavior with the current standards of multiple choice or Likert. It is impossible--there is always a heuristic, trade-off, or utility consideration that you are not bothering to measure.
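A few of the RA attributes above can be sketched as a design space. The levels here are hypothetical placeholders (not taken from any cited study), but they show how quickly attributes and levels multiply into candidate profiles:

```python
from itertools import product

# Hypothetical RA treatment attributes and levels (illustrative only)
attributes = {
    "route": ["oral", "subcutaneous", "intravenous"],
    "frequency": ["daily", "weekly", "monthly"],
    "setting": ["home", "infusion clinic"],
    "monthly_cost": ["$50", "$500", "$2,000"],
}

# Full-factorial design: every combination of levels
profiles = [dict(zip(attributes, combo)) for combo in product(*attributes.values())]

print(len(profiles))  # 3 * 3 * 2 * 3 = 54 candidate profiles
```

In practice you would not field all 54 profiles; a fractional design keeps a balanced subset, which is the point of the experimental-design step described below.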
Look at what we can generate if we build data models. I created a data model using information from a paper by Scalone and colleagues, Patients', physicians', nurses', and pharmacists' preferences on the characteristics of biologic agents used in the treatment of rheumatic diseases.
What you are left with are deep insights regarding considerations at the point of care. Discrete choice modeling looks at several factors within a decision RELATIVE to each other, forcing respondents to make tradeoffs among different options.
What we learn from this type of data allows us to gain deeper insight into which factors respondents find most important.
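As a sketch of the mechanics (my own illustration, not the Scalone analysis itself), a multinomial logit is the workhorse here: it converts the utilities of the options in a choice set into choice probabilities, so a change in any one attribute shifts the probabilities of ALL options relative to each other:

```python
from math import exp

def choice_probs(utilities):
    """Multinomial logit: probability of choosing each option in a choice set."""
    weights = [exp(u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Hypothetical total utilities for three treatment profiles (illustrative numbers)
probs = choice_probs([1.2, 0.4, -0.3])
```

The probabilities always sum to one, which is what forces the trade-off: an option can only gain share by taking it from the others.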
Here is how we do it:
•Put together an attribute list--define the competitive space or the preferences for a product or behavior
•Construct the experimental design--combinations of attributes are not correlated but balanced across products or behaviors
•Pre-test--the instrument is tested for clarity, length, and efficacy, followed by an editing and revision cycle that ultimately produces the field survey instrument
•Field the survey with your audience
•Set up the data--choose the variables to include in the modeling process
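Once the model is fit, the payoff is a relative-importance score per attribute. A common convention is to take the range of each attribute's part-worth utilities as a share of the total range. The part-worths below are hypothetical numbers for illustration, not fitted estimates:

```python
# Hypothetical part-worth utilities from a fitted choice model (illustrative only)
part_worths = {
    "route": {"oral": 0.60, "subcutaneous": 0.10, "intravenous": -0.70},
    "frequency": {"daily": -0.40, "weekly": 0.05, "monthly": 0.35},
    "monthly_cost": {"$50": 0.90, "$500": 0.10, "$2,000": -1.00},
}

# Importance = each attribute's utility range as a share of the total range
ranges = {a: max(u.values()) - min(u.values()) for a, u in part_worths.items()}
total = sum(ranges.values())
importance = {a: round(100 * r / total, 1) for a, r in ranges.items()}
```

With these made-up numbers, cost dominates the decision, route comes second, and frequency matters least--the kind of point-of-care insight a multiple-choice quiz cannot surface.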
Agreed, it would be easier to write simplistic multiple-choice questions that isolate respondents from the actual influences at the point of care--but to what end?
That data has no value and does nothing to move the needle on improving the quality of healthcare.
If that troubles you--send me a tweet or email to discuss your data needs. Brainstorming is always free!
Now offering customized webinar learning live...