The issue at stake was a big one. It asked whether changes to the original protocol of the PACE trial ended up stacking the deck.
It was easy to see how they might have; some of the recovery criteria were so loose as to seem ridiculous. The changes in the study's protocol made it possible, for instance, to be ill enough to get into the study and yet, by one criterion, be considered recovered at the same time. The PACE report's conclusion that CBT and GET produced statistically significant "recovery" rates of 22% also seemed implausible to many people who had tried CBT/GET.
Suspecting that some sort of fix was in, advocates fought for years to get at the raw data. In response, Queen Mary University of London asserted that releasing the data would imperil confidentiality and spark hostile attacks by unhinged patients. Those arguments were swept aside by a UK Tribunal, which took QMUL, not the patients, to task for unprofessional behavior.
Indeed, there's been little evidence of the kind of "professional behavior" patients might expect from the medical establishment. Richard Horton, the editor-in-chief of The Lancet, has seemed at times almost unable to contain himself. He rued the million or so dollars he said the UK government had spent responding to what he deemed irrelevant and vexatious Freedom of Information Act requests. He stated that the fast-tracked study had undergone "endless" rounds of peer review. In his view, the people protesting were nothing more than a hostile clique out to destroy good science.
Horton's dissembling in a 2011 interview turned out to be breathtaking. He suggested that the patient community's ability "to engage in a proper scientific discussion" would be tested by the study. He promised to publish letters in The Lancet and then failed to publish a letter from 42 researchers and doctors regarding flaws in the trial.
In the end, the ME/CFS advocates simply wanted to re-analyze the raw data. Given the rather large changes made in this very prominent trial (the most expensive ever undertaken in ME/CFS), a re-analysis made sense. Changing protocols in midstream is generally considered a no-no (Ampligen got hit hard for that), but the biggest problem wasn't that changes were made; it was that every single change seemed designed to make the trial more successful.
That was a big issue given the media's response. The trial, in fact, wasn't much of a success, but the modest benefits it found were hyped by a media on the lookout for a positive outcome. David Tuller, whose exposé jump-started the PACE issue, accused the PACE trial authors of aiding and abetting the media in their efforts to hype the trial.
Instead of being seen as a very expensive trial that demonstrated how modest the benefits of CBT and GET were for people with chronic fatigue syndrome, the PACE trial came to be viewed as a sort of vindication of CBT/GET. Its results are now used to justify CBT/GET as primary treatments for this disease. The trial is featured prominently and positively, for instance, on UpToDate, a professional website many doctors use to get current information on treatments.
The raw data – obtained by an ME/CFS patient named Alem Matthees – was promptly turned over to two statisticians, Philip B. Stark of the University of California, Berkeley, and Bruce Levin of Columbia University. Their preliminary results were reported in a post titled "No 'Recovery' in PACE Trial, New Analysis Finds" published on The Virology Blog yesterday. (Check out a PDF of the re-analysis here.)
It turns out QMUL spent a chunk of the university's money fighting the release of the data for a good reason: the re-analysis indicated that the altered recovery criteria did indeed dramatically increase recovery rates. How dramatically? By about 400%.
Stark and Levin noted that the altered protocol allowed about 13% of the participants to be classified as having a "significant disability" and yet have "normal" physical functioning according to the recovery criteria.
Instead of the 22% of patients reported to have recovered using CBT and GET, only 7% and 4%, respectively, did under the original criteria. Because those numbers are statistically similar to the 3% and 2% of patients who recovered while getting specialized medical care or pacing, the trial actually indicated that CBT/GET did not significantly contribute to recovery at all.
Julie Rehmeyer reported in an opinion piece, “Bad science misled millions with chronic fatigue syndrome. Here’s how we fought back”, that only 20% of the patients improved under the original protocol compared to the 60% reported to have improved in the original study. Since 10% of the people getting specialist medical care improved as well – and everybody got specialist medical care – the finding actually suggests that CBT and GET may have significantly improved the lives of only about 10% of those getting those therapies.
The Bigger You Are The Harder You Fall
It appears that the PACE trial is about to get bitten by one of its greatest strengths: its size. Large studies are able to pick up small effects that smaller studies cannot. Many a researcher has suggested that if only his or her study had been a bit larger it would have had more positive results. The willingness of the U.K. and the Netherlands to fund large CBT/GET studies is one of the reasons CBT/GET has so dominated the treatment picture in ME/CFS.
The problem with definitively large trials is that they are definitive. The PACE study was so large (n=640) that it should have been able to pick up any possible positive effect. Even with its huge size, though, Stark and Levin couldn't find any effect on recovery.
The Beginning of the End of the PACE Trial and ???
This preliminary reanalysis isn't the end of the PACE trial, but it's probably the beginning of the end. Stark and Levin will continue their analyses of the raw data, and we'll surely see a publication in a journal at some point. It's hard to imagine the PACE trial surviving a published study exposing this massive study – really, the crown jewel of the British attempt to establish CBT/GET – as an object lesson in how studies should not be done.
If that happens, everyone involved with producing, funding and protecting the study will get hurt. The authors will likely take an awful hit given the cost of the study and the many papers based on it, but the funders will have to answer for the $8 million that went down the drain as well.
Then there's Queen Mary University of London and The Lancet.
QMUL Analysis Deepens Questions About the University's Objectivity
QMUL released a reanalysis of the data that indicated just how far this university has gone astray. Essentially, QMUL did what it has always done; instead of a comprehensive and fair analysis of all the issues, it ignored the main questions and focused on other ones.
Its reanalysis focused entirely on whether CBT and GET were more effective than pacing or ordinary medical treatment – a question, quite frankly, that no one had any interest in.
Simon Wessely Strikes Out
In an email exchange with Julie Rehmeyer, Simon Wessely stuck to his talking points. "The message remains unchanged," she said he wrote, calling both treatments "modestly effective." He summarized his overall reaction to the new analysis this way: "OK folks, nothing to see here, move along please."
A ten percent response rate with no effect on recovery, however, is not a "modest" benefit; it's a negligible one. We'll learn more in the final paper, but a ten percent response doesn't sound like a statistically significant benefit at all. If it isn't, the largest CBT/GET study ever done could end up showing no benefit at all.
For his part Peter White is still arguing that the raw data should not have been released.
Time to Dig Deeper
Simon Wessely suggested that it's time to move along, but it's actually time to dig deeper. If the published paper is anything like the preliminary analysis released yesterday, it'll be time to ask some hard questions about the scientific biases at work in the UK and at The Lancet.
What does it say, for instance, about QMUL's commitment to the scientific process that it resisted the release of the raw data so vociferously, and then produced a reanalysis that skips the main issues? Why, under Richard Horton's direction, was The Lancet allowed to fast-track such a flawed paper, and why has it allowed its chief editor to inject himself so emotionally into the subject? Why, in short, has The Lancet allowed Richard Horton to do such damage to its sterling reputation?
Rebecca Goldin, a statistician, tore apart the trial earlier this year, and it was skewered at a statisticians' conference. According to MEAction, Professor Levin stated that their defense of the PACE trial had diminished the respect in which The Lancet and Psychological Medicine (a journal that published a PACE recovery study) are held "worldwide".
Julie Rehmeyer reported that Ron Davis would like to see the paper used as an example of how not to do science.
It looks like the AHRQ panel – which downgraded its CBT/GET recommendations after taking the Oxford definition into account – may be due for another re-analysis as well. The PACE trial made its short list of high-quality studies, but it seems inconceivable that it will remain there.
The PACE trial's end might not be too far off. Stark and Levin did their preliminary re-analysis quickly; it'll take longer to get the final reanalysis done and published. Once that happens, the controversy will bleed further into the scientific community, and calls for retraction will surely mount.
Rehmeyer warns, however, that retractions are rare and can take years to achieve. Whether The Lancet is willing to continue to take the hit it's probably going to get in the press and elsewhere for the paper may be the determining factor. The PACE trial controversy is starting to leak into the mainstream press (see "The Implosion of a Breakthrough Study on Chronic Fatigue Syndrome") but it's nothing compared to what we'll probably see after Levin et al. publish their paper.
The re-analysis is vindication of years of work by advocates such as Tom Kindlon, Alem Matthees and Carly Mayhew (co-authors of the paper), Julie Rehmeyer and many others. It wouldn't have been possible, of course, without the dedication and commitment of David Tuller, whose investigative series surely deserves a prize for medical reporting, and the support of Vincent Racaniello at The Virology Blog.
I doubt the PACE study will be retracted. I think The Lancet is too arrogant. What we need now is a peer-reviewed publication made by scientists, not patients like Alem.
It is really ridiculous that White and Chalder are getting away with it, so it seems. It is fraud. In theory they could go to jail for this.
I think we will get that peer-reviewed publication. The preliminary re-analysis is really complex – really detailed – it looks just like a journal article. Once that gets published, The Lancet is on the clock, I think! (I hope!)
No, it isn’t fraud and they can’t go to jail. The PACE authors exaggerated their findings, but it is equally important that critics not exaggerate what they did wrong. PACE is a flawed and overstated study, but certainly not a criminal one.
The goal now should be to spread the word among medical practitioners in a way that influences the care of patients. Formal retraction would be nice, but it isn’t likely to happen. Many dangerous medical practices have been abandoned over the years, even without retracted studies. That is what needs to happen for GET.
I'll also be surprised if this gets retracted. It seems like the study's results haven't been technically invalidated; it's just much clearer that those results have been so watered down (recovery doesn't require driving, working, walking, or basic physical functioning) that they had no real meaning in the first place. Still, I'm glad the authors have been forced to clarify that the patients didn't meet even the authors' own original criteria for "recovery."
I think Levin et al. can make the case that the study was so poorly designed as to make – as you suggest – its results meaningless. We shall see.
I think Psychological Medicine will retract their recovery publication at some point.
Good timing that patients are coming out and reporting success with antiretrovirals on a reduced number of days, or on lower doses. Get that message into the office of the PACE trial authors. They came up with names like GETSET! What would have come next? GETSETGO…? They need sectioning, literally, not joking!
A recommendation to do exercise as tolerated, leaving some energy left over for healing, is not necessarily a bad thing, because exercise generally improves health in humans. The focus of the study should have been on whether it improved health and whether it was tolerable without worsening symptoms. Positing it as a cure, though, was bad science and an abdication of responsibility, because it assumed causation from the outset rather than seeking, more modestly, to confirm that the health-giving mechanisms of exercise were still operating at least to some degree in CFS. The study is frustrating because it does not seek to further understanding. Even if you think that CFS is psychological, you should then try to conceptualize how this might be so and test for the presence of this concept. A lot of psychological research is deeply flawed and hard to replicate, though. It's not really a science, which is why it tries to avoid discussion of mechanism and pass off correlation as causation.
Not sure this is correct for ME/CFS. I did that for years and still progressively got worse. Personally, I think the benefit, if any, is so negligible as to be worthless in stopping deconditioning, and it may be harmful in the long run.
I know some media outlets have picked this information up, but it needs to be spread to all of them; the health of patients has been hurt, and scientific research and funding drastically delayed, by this study.
People need to know what happened with this study in the UK and how it influenced the world and treatment of ME.
People need to see how sick we are.
I hope that this study will be used as an example of how manipulating data in a study, with the resulting harm, wasted funds and inhuman suffering, can lead to a true retraction of a large study.
This has to be the largest patient community without a single approved treatment, and that delay, in this day and age, is not acceptable.
I would hope that the community can engage a media consultant or firm to begin the process of exposing what has happened to ME patients for over 100 years.
And a final note: a commercial needs to be created to show what real ME looks like, including the severely ill, and that all patients may potentially reach that point. I hope that we can do this, and I firmly believe we need to address this issue by asking for help from a media firm that can assist the process through and through by engaging and unifying our ME organizations.
You cannot deny the severity of Whitney and millions of others stuck in this state of a living death. ME is real and needs to be showcased effectively.
The 4 to 7% recovery rate recorded in the early reanalysis of the data following the original protocol needs to be put into its own context as well:
a) The entry criteria allowed "long-time tired" people who did not have ME/CFS according to stricter criteria to enter.
b) I got a CBT/GET treatment too. Let’s analyze further:
* Before the treatment I could slowly walk on average 6 out of 7 days a week for 1800 meters with a 15 minute rest on a seat in the middle of it. Next to that I walked some small distances during the day. I could also wash my dishes and prepare a basic meal about 4 to 5 times a week. The other days I heated something in the microwave oven.
* At the start of the therapy I could walk 2 times 6 minutes at about 2 km/h or 2 times 200m for a total of 400m. That is less than the 1800m I did before the treatment, but I had to drive to the hospital, park my car, walk from the parking lot to the fitness area… and back too.
* After 1 month, I could walk 2 times 10 minutes at about 2.5 km/h, for a total of over 830m. If you put that into a chart, that is a huge improvement of over 115%, hence a convincingly good result… or not?
* But after a few weeks, I found out how to park closer to the hospital, about 300m away on average (a 600m round trip). So instead of gaining 430m of walking ability, I lost 170m on average. Furthermore, I could only walk the previous 1800m once a week, and I could no longer wash my dishes; my parents had to do it when they visited. I also had to eat nothing but microwave food.
So, the huge 115% gain is actually (assuming getting to the hospital plus walking on the treadmill was worth as much as the 2500m all-inclusive walked previously):
* At the start: 4*2500m + 2*2500m = 15000m a week.
* After 1 month: 1*2500m + 2*(2500-170)m = 7160m a week.
=> Actual total change in walking capacity: -52.3%, ignoring the loss of the ability to wash my dishes or prepare a meal after 1 month!
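For what it's worth, the arithmetic above checks out. Here's a minimal sketch reproducing it; note that the 2500m "all-inclusive" equivalence per day is the commenter's own assumption, not a measured value:

```python
# Sanity check of the weekly walking totals in the comment above.
# The 2500 m per-day equivalence is the commenter's assumption.
EQUIV_M = 2500   # assumed worth of one pre-treatment day's walking, in meters
LOSS_M = 170     # average net loss per therapy day after 1 month

before = 4 * EQUIV_M + 2 * EQUIV_M             # 6 active days/week
after = 1 * EQUIV_M + 2 * (EQUIV_M - LOSS_M)   # 1 walking day + 2 therapy days

change_pct = (after - before) / before * 100
print(f"{before} m/week -> {after} m/week ({change_pct:.1f}%)")
# prints "15000 m/week -> 7160 m/week (-52.3%)"
```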
Two months later, at the end of the therapy, it was a lot worse. I never had to fill in the second questionnaire that was promised in order to measure our improvement. I kept spiraling down for 2 months until I ended up homebound and in a wheelchair.
Am I lucky that this was only a benign piece of therapy…
As one of those who was critical of this study back when critics were still seen, even here, as loony and dangerous: it's important not to go too far the other way. The study was seriously flawed, but the reanalysis does show some minor, possibly statistically significant (we'll know when the final paper is published) benefit. So the questions should be: were these really ME patients, was this possible minor benefit cost-effective (very doubtful), was it worth any harm caused to other patients, and how do you separate those who might benefit from those who are harmed?