Measuring Outcomes is the Least We Can Do

Whenever an intervention is applied in a medical interaction it results in outcomes. Mind you, not just one outcome, but many. Take surgery, for example. There are biological outcomes (a new ACL graft, muscle strength, etc.). There are functional outcomes (can now cut and pivot without instability, jumps higher, etc.). There are psychological outcomes (satisfaction, improved quality of life, etc.). And finally there are social outcomes (able to participate with their team, family, etc.). Outcomes measure not only benefits but also costs (money, time, discomfort, etc.).

But these are all patient-specific. What about societal outcomes, the overall costs and benefits to a society or other population as a whole? Those are outcomes too.

All of these outcomes are ways that we can explore the question, “Did the intervention ‘work’?” As you can see, different people can weight these outcomes very differently, which often makes that question very difficult to answer. It also means the word “outcomes,” on its own, tells you almost nothing.

So what are we talking about?

Typically when a medical provider says “outcomes” they are referring to a meaningful change from the patient’s perspective. These are technically called “patient reported outcomes” commonly shortened to “PRO” in the literature. Now, before I go any further, I want to make one thing very clear:

If the treatment is not making a meaningful improvement on patient reported outcomes, then it is not worth doing.

In my clinic we use patient reported outcomes with every single patient. We take them at initial evaluation, at every progress note, and at discharge. We use the ones supported by the literature as the best (or one of the best) for the outcome we are trying to measure. Knee osteoarthritis? WOMAC. Hip pain in an athlete? HOS. Psychological readiness to return to sport after ACLR? ACL RSI. There are so many great ones with so much published research to consider that picking the exact one that is best for your situation can be difficult.
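To make the idea of tracking concrete, here is a minimal sketch (in Python) of how a clinic might log a PRO at each visit and flag whether the change from baseline exceeds a minimal clinically important difference (MCID). The instrument label, scoring direction, and the 10-point MCID below are illustrative placeholders, not published thresholds; substitute the validated values for whichever measure you actually use.

```python
from dataclasses import dataclass, field

@dataclass
class ProTracker:
    """Tracks one patient-reported outcome measure across visits."""
    instrument: str   # e.g. "WOMAC", "HOS", "ACL-RSI"
    mcid: float       # minimal clinically important difference (assumed value)
    scores: list = field(default_factory=list)  # (visit_label, score) pairs

    def record(self, visit_label: str, score: float) -> None:
        self.scores.append((visit_label, score))

    def change_from_baseline(self) -> float:
        return self.scores[-1][1] - self.scores[0][1]

    def meaningful_improvement(self) -> bool:
        """Did the latest score improve on baseline by at least the MCID?"""
        return self.change_from_baseline() >= self.mcid

# Usage: the 10-point MCID is a placeholder, not a published threshold.
tracker = ProTracker(instrument="Example PRO (0-100, higher = better)", mcid=10.0)
tracker.record("initial evaluation", 42.0)
tracker.record("progress note #1", 55.0)
tracker.record("discharge", 68.0)
print(tracker.meaningful_improvement())  # True: +26 points exceeds the assumed MCID
```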

Why does this matter?

Let’s take the example of a rotator cuff repair. You can do the surgery and confirm that the previously torn rotator cuff is now fixated down to the bone with a perfectly healed, super-strong long-term bond. That’s great, and it shows an improved biological outcome. But what about patient reported outcomes?

If, long after the surgery, the patient has the same level of pain and dysfunction as they did before, well then what was the point? In other words, if their DASH score does not show meaningful improvement two years later, the surgery was a waste of everyone’s time and money, REGARDLESS OF HOW PRETTY THEIR ROTATOR CUFF LOOKS NOW ON MRI.

This is not just true of surgeries but of any intervention. For example: if you correct someone’s knee valgus (you probably didn’t, by the way) but it doesn’t change their Kujala score, then it doesn’t matter. This isn’t complicated to understand, but it still surprises me how many providers don’t track these at all.

This is also good science because it is a form of falsification. You can be quite definitive when you say, “No meaningful improvement in patient reported outcomes therefore the intervention is not effective.” The most powerful tool that a scientist has is falsification because it is so absolute.

But there is a funny thing about falsification…

The inverse is NOT NECESSARILY TRUE

So I said that when there IS NO meaningful improvement on patient reported outcomes, the intervention IS NOT worth doing. But that does not mean that when there IS meaningful improvement to patient reported outcomes that the intervention IS worth doing.

When you falsify something in science it slams a door shut. But when you FAIL TO FALSIFY something, it opens additional doors. When you fail to falsify, you must seek alternative explanations.

Why did that patient reported outcome improve? Was it natural history? Was it regression to the mean? Was it something specific to the intervention? What are those possibilities? Was it something non-specific about the intervention? What are those possibilities?
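One of those alternative explanations, regression to the mean, is easy to demonstrate with a toy simulation rather than clinical data. The sketch below (Python, with made-up numbers) enrolls simulated patients only when their day-to-day pain happens to be at its worst, applies no treatment at all, and still shows “improvement” at follow-up.

```python
import random

random.seed(0)

# Each patient has a stable "true" pain level plus day-to-day noise.
def observed_pain(true_level: float) -> float:
    return true_level + random.gauss(0, 1.5)

patients = [random.uniform(3, 7) for _ in range(10_000)]  # true pain, 0-10 scale

# Patients tend to seek care on a bad day: enroll only those whose
# observed pain at intake is high (>= 7). No treatment is applied at all.
enrolled = [(true, intake) for true in patients
            if (intake := observed_pain(true)) >= 7]

intake_mean = sum(intake for _, intake in enrolled) / len(enrolled)
followup_mean = sum(observed_pain(true) for true, _ in enrolled) / len(enrolled)

print(f"mean pain at intake:    {intake_mean:.2f}")
print(f"mean pain at follow-up: {followup_mean:.2f}")
# Follow-up scores look "improved" purely because enrollment selected bad days.
```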

The point of future research is to answer these questions systematically. If we fix the rotator cuff does it matter? If the rotator cuff tears back off does it matter? If we sham the surgery does it matter? If we work on scapular strengthening does it matter? Does “scapular strengthening” do anything different than any other shoulder exercise? Does the amount of time spent with the patient during rehab have as much of an effect as any specific rehab intervention?

And on and on and on and on and…

Yeah, but who cares as long as the patient “feels better”?

That is just a lazy razor, and I thought you were better than that. When you ask nothing more about an intervention than whether or not the patient improved, you fail to ask the deeper questions. You leave yourself open to so many fads and so much pseudoscience.

  • I exercised and my back felt better
  • I did Reiki and I could do more activity afterwards
  • I went on vacation and my shoulder was less achy
  • I spent a day relaxing at the spa and I now feel stronger
  • Since I recovered from surgery my hip feels great!
  • After spending the afternoon with some crystals my knee pain is less

After all of these things, their patient reported outcomes might look better, but does that make these interventions medical? Justified? Necessary? Maybe. Maybe not.

The point is that improvement in patient reported outcomes, in isolation, is not a good way to judge something as being any of these things.

Knowing the reasons why something is having a positive effect can help you as the provider focus on those key components. If the effects are from natural history, simply educate and reassure. If the effects are placebo in nature, focus on directing the placebo towards something actually shown to be effective (or at least something that is activity-based and they can do for themselves), and getting rid of the otherwise ineffective treatment. If the effects are due to the mind/body/soul organizing around load (whatever type or intensity the exercise may be), just load it.

Sometimes what you identify is better addressed by referring out to another provider. They may need a surgeon, a medical work-up, or a social worker. Remember your scope of practice. It is no secret that I believe the best tools within the scope of the physical therapist are education and managing load tolerance. The specifics of how we use them vary.

Yet these treatments remain in the name of “Outcomes”

“We are all capable of believing things which we know to be untrue, and then, when we are finally proved wrong, impudently twisting the facts so as to show that we were right. Intellectually, it is possible to carry on this process for an indefinite time: the only check on it is that sooner or later a false belief bumps up against solid reality, usually on the battlefield.”

– George Orwell

Often what I see happening instead is a provider becoming really attached to a particular treatment. They like doing it, their patients like receiving it, and it defines who they are as a clinician (true for surgeons, physical therapists, everyone). When the research starts to show that their treatment is likely not doing anything very specific and is likely a placebo, they hide behind the fact that patients get better. Something MUST be going on! They look for ANY evidence to support its use. “Placebo is just a placeholder for what we don’t yet understand! I am AHEAD of the science!”

That’s not good enough

Just like when surgery is no better than placebo and injections are no better than placebo, physical therapy interventions shown to be no better than placebo should be abandoned.

“Sure it’s a placebo but it’s cheaper and less dangerous!” is not the best slogan to rally behind…

In summary…

  • All results from an intervention can be described as “outcomes”
  • Patient reported outcomes always matter and should be tracked on every patient
  • Although improvements in patient reported outcomes are necessary, in isolation they are not a good way to judge an intervention
  • “Sure it’s a placebo but it’s cheaper and less dangerous!” is not the best slogan to rally behind

The featured image is “A Participant filling out the course evaluation form” by Katya Boltanova via Flickr.