This past week we launched a web app for our network providers that allows our physicians to instantly pull up and analyze their patients’ stem cell results. Sometime in the next one to two months, we’ll add a similar feature to the main website for patients. Both provide complete transparency in the reporting of patient outcomes. Contrast that with some of the fictional outcomes we’re beginning to see reported by other clinics, and it’s clear that I need to write a blog post on being a good consumer of outcome data.
What Is Outcome Data?
The outcome of a procedure is whether it worked or not. There are many different ways to measure outcome for painful conditions—from pain scores to validated functional questionnaires to percentage improvement (called a SANE score, short for Single Assessment Numeric Evaluation). Each has its advantages and disadvantages. For example, a 0–10 pain score can tell you how much pain relief a patient obtained, but it can’t tell you whether the patient can now walk long distances. A functional questionnaire does ask about walking, running, climbing, and so on, but it doesn’t tell you how much impact the patient felt the procedure had on his or her life. A SANE score can give you a quick sense of how well the patient felt the procedure worked with a percentage improvement (e.g., I feel 50% better), but it says nothing about pain or function. As a result, in our patient registry of almost 9,000 stem-cell-treated patients, we have always used all three types of metrics.
How You Collect the Data Matters
We know that how outcome data is collected matters. Many collection methods are in use, and some produce data that looks better than it should. In addition, there are many different questionnaires to choose from.
Common sense tells you that if the physician who performed the procedure is the one collecting the information, this could cause a serious problem. Patients are human and want to make their doctor happy, so they’re likely to report a better result to their doctor than they would to a disinterested third party. Despite this, we still see it happen.
In addition, outcome data needs to be collected only from the patient, as the doctor has a huge incentive to believe that his or her treatment works (which is called “confirmation bias”). You’d be surprised to learn that this last important point is mostly lost on the orthopedic-surgery community, which has the distinction of being the only medical specialty that routinely allows physicians a say in the final outcome. As I’ve blogged before, this is likely one reason that orthopedic-surgery outcomes look so good in case-series studies only to crash and burn in placebo-controlled trials. In the former you can game the system and make sure your surgery results look great; in the latter you don’t know who got the surgery, so there’s nothing to game.
Finally, there are many different outcome questionnaires in use that are “validated.” This simply means that someone has done the research to make sure the questionnaire accurately measures improvement. You’ll also see terms like “MCID,” which stands for minimal clinically important difference. This is the smallest change in score that a patient would associate with a meaningful result.
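For the technically minded, here’s a minimal sketch of how an MCID threshold gets applied in practice. The questionnaire scale and the MCID value below are made up for illustration; real MCIDs are published per questionnaire and per condition:

```python
# A sketch of applying an MCID threshold to a pre/post questionnaire score.
# The 0-100 scale and the MCID of 10 points are hypothetical examples.

def is_meaningful_change(pre_score: float, post_score: float, mcid: float) -> bool:
    """Return True if the improvement meets or exceeds the MCID."""
    return (post_score - pre_score) >= mcid

# Hypothetical functional questionnaire scored 0-100 (higher = better).
print(is_meaningful_change(40, 55, mcid=10))  # 15-point gain: True
print(is_meaningful_change(40, 45, mcid=10))  # 5-point gain: False
```

In other words, a statistically detectable change isn’t automatically a change the patient would actually notice; the MCID is the bar for the latter.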
The Good, the Bad, and the Ugly of Outcome Reporting
I’d like to show you how to be a good consumer of outcome information by first showing you some problems and then giving you some rules to follow. It’s not hard to navigate these waters as long as you’re armed with information and a good “gut sense.”
Our first example refers back to my blog on Stem Cell Institute of America. Yesterday a colleague showed me a statement about outcomes from their website:
“Regenerative Cell Therapy has amazing results on a wide range of conditions. Most people generally feel significant, if not complete relief from this non-surgical procedure. In a recent study from an amniotic manufacturer, they found that in a group of over 60 participants the average pain scale went from an 8 to a 0 in just 5 weeks for all the participants!”
OK, let’s break down the issues:
- “In a recent study”—There is no citation to this study. Almost all studies are listed in the US National Library of Medicine’s database, and if a study isn’t there yet, it would at least have to have been presented somewhere (e.g., at a medical conference).
- “the average pain scale went from an 8 to a 0 in just 5 weeks for all the participants”—This statement has several issues. First, for the average to go to 0, every patient would have to report complete relief of their pain. That’s a 100% success rate! The only problem is that no treatment in the annals of medicine has a 100% success rate for pain relief, and this includes stem cells. Second, the beginning pain score is an 8/10. This one may be harder to appreciate, but for arthritis, that is a very, very high starting pain score. In fact, the real number across the more than 4,000 knee stem-cell-treated patients in our registry is about a 5–6. An 8/10 suggests the patients have a chronic nerve-pain problem, not arthritis. I’ll explain this below in another example. Third, the report appears to be from only 5 weeks out. Countless feel-good therapies work well for a couple of weeks and then fall apart at a couple of months.
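If you want to see why the “average went to 0” claim falls apart, a little arithmetic helps. Pain scores are bounded at 0, so the only way a group average lands at exactly 0 is if every single patient reports a 0. The scores below are hypothetical, purely to illustrate the point:

```python
# Pain scores can't go below 0, so a mean of 0 requires every score to be 0.
# Hypothetical cohorts for illustration only.

scores = [0, 0, 1, 0, 0]  # just one patient with a 1/10 residual pain...
mean = sum(scores) / len(scores)
print(mean)  # ...pulls the average above 0 (here, 0.2)

all_zero = [0] * 60  # the claimed "group of over 60 participants"
print(sum(all_zero) / len(all_zero))  # only an all-zero cohort averages to 0
```

So an average of 0 isn’t just a great result; it’s a mathematical claim that literally every participant had complete relief.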
Conclusions? This report of outcome information is likely fiction or heavily doctored. There is no citation, the average pain going to 0 for a pain treatment is pretty much impossible, the 100% success rate is fantasy, the preprocedure pain score of 8 is out of place in an arthritis study, and we purportedly only have results from 5 weeks.
My Second Example—The 8/10 Knee Pain Fiction
To follow up on the above fantasy report of outcomes, my second example comes from a conference where I was asked to serve as an expert reviewer of the stem cell talks. One report of knee arthritis outcomes stood out. It was submitted by a third-party fat-stem-cell company and attributed to a physician in Florida who used bone marrow concentrate and fat stem cells. What was odd was an 8+/10 preprocedure average pain score for knee arthritis patients. When I asked our biostatistician to pull the average preprocedure pain score for more than 1,000 treated knee arthritis patients, it was a 5–6. I actually knew the consultant this physician had hired to collect the data, so I hunted that person down. I quickly learned that the physician’s staff had neglected to collect most of the preprocedure data as instructed by the consultant. Hence, given the physician’s reported 100% data collection, the 8/10 starting point seemed to be “an estimate” provided by him or his staff. Is that kosher? Nope. So I told the conference organizers that the slide had to go.
This wasn’t the first time I’d had this experience with this physician. Way back in 2009, when he first began using stem cells to treat knees, I found our outcome information on his site. I confronted him, pointing out that he was using a completely different procedure, so our information didn’t apply to his therapy. He then removed our information but replaced it with a general statement that his patients reported about 80% improvement. When I asked him where he got that number, he admitted it wasn’t based on any data he had collected—it was merely his estimate. Arggghhh!
Third Time’s a Charm—Placing Your Finger on the Outcome Scale
This last one I’ve blogged on before, but it fits well here. Regrettably, this is a study published in a journal, but it shows the issues with much of the orthopedic-surgery literature. This study looked at fat stem cells or a fat graft for knee arthritis. A startling 91% of the approximately one thousand treated patients were classified as having more than 50% improvement at one year after the injection. Those results look fantastic, easily beating any other report for any other injectable knee stem cell treatment, until, of course, you take a peek under the outcome covers. When I did, it turned out that the doctors who ran the study used their own assessment of outcome as a whopping 60% of the final outcome score! Hence, the reported results were mostly an optimistic, physician-reported guess, just like the examples discussed above.
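To see how a weighted composite score places a finger on the scale, here’s a quick sketch. The 60% physician weight mirrors the study described above; the individual improvement scores are hypothetical:

```python
# A sketch of a weighted composite outcome: when the physician's assessment
# carries 60% of the weight, an optimistic physician rating can dominate
# even a mediocre patient report. The scores below are hypothetical.

def composite_score(physician: float, patient: float,
                    physician_weight: float = 0.6) -> float:
    """Combine two 0-100 improvement scores with the given weighting."""
    return physician_weight * physician + (1 - physician_weight) * patient

# Physician rates the result 90/100; the patient reports only 40/100.
print(composite_score(90, 40))  # 0.6*90 + 0.4*40 = 70
```

A patient who felt only 40% better still lands in the “more than 50% improvement” column once the physician’s optimism is mixed in, which is exactly how a 91% success rate can be manufactured.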
What We at Regenexx Do Is—as Usual—Very Different
Regenexx is very different in every way, and outcome reporting is no exception. Here’s what we do:
- Our data is collected by a third-party nonprofit.
- We use a true registry format where patients are pinged at set time points and then annually for life.
- Our data is reported “as is,” or you’ll know why patients were excluded.
- We never allow the treating physician or anybody but the patient to report the outcome.
- We use an outcome metric that even allows for negative scores! The modified SANE metric we use allows patients to report if they got worse, something that nobody else does. Why? Because no matter how good you are, you will find a handful of patients who get worse no matter what you do.
- We transparently report our outcome data on our website. As discussed, in the next one to two months, that will go live, allowing patients to slice and dice the outcome data in our nonprofit registry.
- We use our data to make informed candidacy decisions. We place our patients in “Good,” “Fair,” and “Poor” categories. So not everyone is a good candidate for what we do.
How You Can Spot Bad Outcomes
If you want to find out if the data you’re being shown is doctored or real, follow these three simple rules:
- Just like your mother said, if it looks too good to be true, it probably is. The Stem Cell Institute of America data is a great example of this problem. No medical therapy ever devised has a 100% or close to 100% success rate.
- Make sure the data is 100% patient reported. This may not be easy to find out, but unless the clinic can tell you the exact validated questionnaires they use (ones you can look up), run! If the physician has any say in how the outcome is calculated, run!
- Watch for things that seem fishy. While you would need to be an expert to know that 8/10 is too high an average pain score for knee arthritis, you don’t need to be one to see that a claimed 80% success rate is a bit too round and general to be data based. A statement that 79.9% of patients reported success sounds more like real data, but if there’s no definition of “success,” that’s another problem.
The upshot? I hope educating you on outcome data has helped you become a better consumer of healthcare in general and, more specifically, a better consumer of stem cell procedures. We at Regenexx take great pride in the fact that we’ve always tried to do this right, so it’s upsetting to see clinics fabricating or massaging data. As with anything you buy, “caveat emptor.”