Creating a Research Agenda

This is Part 2 of my 5-part series detailing my campaign platform for President-Elect of the American Academy of Sports Physical Therapy. Voting opens April 1, 2021, and I would greatly appreciate your support.

My campaign platform was developed through conversations with our members over the past year. One thing I kept hearing from both clinicians and early career researchers (ECRs) was frustration that a lot of what is published in the physical therapy literature seems a bit…disorganized. This perceived lack of organization makes it difficult for researchers to plan their next steps and for clinicians to develop a deeper understanding of the foundations of their practice.

I knew that this idea of creating an Academy research agenda was popular, but I was surprised by the amount of excitement it seemed to generate in the feedback that I received after announcing it.

Are you criticizing researchers?

Not at all. As I said, this is a comment I have also heard from researchers, specifically ECRs and clinician-researchers with limited resources and guidance. They are also aware of the problems exposed by the “publish or perish” environment, which has created well-known issues with reproducibility. One solution is better coordinated, less ambitious studies performed in a systematic fashion.

Don’t get me wrong. Large, ambitious studies are great! We have some landmark ones produced by well-established pioneering labs in sports physical therapy. But many of the questions we have as a profession require many, many small steps to answer well in a clear and systematic way. Sure, some labs are large and organized enough to handle one or two of those tracks internally. But we have a lot of questions where little progress has been made over the years, and our Academy can provide incentives to change that.

I don’t understand what you’re getting at

Ok. An example might help. Pick a diagnosis. It doesn’t matter which one. Patellofemoral pain, ACL rehabilitation, shoulder instability, whatever. Now let’s get a group of clinicians, researchers, and methodologists to collaborate to establish what we know and don’t know. And I don’t mean what we think we know, but what we actually know.

Hang on – this kind of sounds like systematic reviews or Clinical Practice Guidelines (CPGs), which already exist

So far, yes. And these are very necessary, but they are also backward-looking. They show only what previous research has looked like and concluded. What I am talking about is forward-looking: explicitly providing a guide to what future research should look like, so we can fill in the gaps that remain and build a more complete foundation for moving forward.

This group would then set out a research agenda on that specific topic. “Here is what we know; here is what we want to know. The first steps from where we are now look like this.” That would be mapped out step by step moving forward.

The very first step would be well defined. Ideally, the group would describe the actual design of that first study. The collaboration may even conclude its first agenda session with “Her group will work on conducting step one while his group will be ready to conduct a replication of step one. In the meantime, I will get my group set up to be ready to conduct step two, depending on the findings of step one.”

Or it may be simply posting the agenda publicly for other researchers to raise their hand and say, “I’m well set up to conduct step three,” while another says, “I could run the replication study!” It also would allow someone else to say, “I’m not sure this ‘known premise’ is actually well validated. Wouldn’t we need a study with this particular design to draw such a conclusion?” “Very good point! That’s now on the agenda.” This is the collaborative and self-correcting ideal of science at work.

This sounds hard

Oh my yes! I think it’s harder than many people realize. If you want to show that Intervention A is effective for Diagnosis B, you can’t just throw A at B and see if it “works” (contrary to so much that has been published). You need to identify the specific driver of B and how A is supposed to affect that driver. Then you need to show that A actually changes the driver in the way you think it does. Then you have to generate a hypothesis that predicts a specific effect size of A on that driver. Then test that. Then generate a hypothesis that predicts the effect size of the driver change on the presentation of B. Then test that.

Oh, and replicate each step in independent labs. Each step and each replication is at least one study, and can often require multiple studies before moving to the next step. And this is just to establish basic efficacy: whether A works under controlled conditions. Then you need to establish effectiveness: whether it works in real-world practice.

It is also a “divide and conquer” strategy. Spread the grunt work of small foundational studies across multiple small yet coordinated groups, so that a large lab can come in with the final large-cohort effectiveness studies. To use a sports analogy: small labs load the bases so the big labs can bat cleanup.

This doesn’t happen organically. It takes systematic organization at a high level. It requires strategic planning.

For researchers, it highlights a collaborative path forward. For clinicians, it later provides a systematic trail showing how and why we know the things that we know.

You said something about incentives?

Doing one narrowly focused but extremely necessary study in isolation, a study that addresses a single one of these steps, may have a low likelihood of being published because it lacks an obvious immediate application to practice. But if the authors can point to the study as the first step of an Academy research agenda that ultimately does apply to practice, that creates a greater incentive for journals to accept it, assuming it is well done and answers its question well (pre-planning would help ensure that). The authors are saying that this study is the foundation of another upcoming study. Not just “more research is needed,” but rather “this exact study is the next step, and it is on deck to be conducted upon the release of this publication.”

At this point, everything I am talking about from the Academy side is volunteer-based. But what if we put some money into it? What if we set money aside in our budget or set up a fund that members can donate to? We could then start awarding grant money for the “next step” on our research agendas. Money is often the ultimate incentive.

What if researchers have their own agendas that they want to follow?

We would only be providing guidance and incentives to those who want them. As I said before, many research groups already have their own independent agendas, and many are already well funded. Not to mention that there is huge value in “rogue ideas” that no one else is considering. The creation of a research agenda does not take away. It adds. This isn’t a zero-sum game.

How long until we have all the answers?

Hahahaha! The goal is not to have all the answers. Quite the contrary. Grasping at “final answers” too soon usually results in studies that ultimately aren’t that informative. They try to answer so much that they end up answering very little. Or as Sir Isaac Newton put it:

“To explain all nature is too difficult a task for any one person or even for any one age. ’Tis much better to do a little with certainty and leave the rest for others that come after than to explain all things by conjecture without making sure of any thing.”

The goal here is simply for the Academy to do what it can to provide a systematic framework that moves knowledge forward in a way that truly helps our members become better clinicians and better researchers.

This is not for us, but for those who come after us.

Questions/comments about the AASPT developing a research agenda? Contact me!

Look for my post next week, when I will discuss creating more organizational structure within the Academy.