The issue at stake was a big one. It asked whether changes to the original protocol of the PACE trial ended up stacking the deck.
It was easy to see how they might have; some of the recovery criteria were so weird as to seem ridiculous. The changes to the study's protocol made it possible, for instance, to be ill enough to get into the study and yet, by one criterion, be considered recovered at the same time. The PACE report's conclusion that CBT and GET produced statistically significant "recovery" rates of 22% also seemed like overkill to many people who had tried CBT/GET.
Suspecting that some sort of fix was in, advocates fought for years to get at the raw data. In response, Queen Mary University of London (QMUL) asserted that releasing the data would imperil confidentiality and spark hostile attacks by unhinged patients. Those arguments were swept aside by a UK Tribunal, which took QMUL, not the patients, to task for unprofessional behavior.
Indeed, there's been little evidence of the kind of "professional behavior" patients might expect from the medical establishment. Richard Horton, the editor-in-chief of The Lancet, has seemed at times almost unable to contain himself. He rued the million or so dollars he said the UK government had spent responding to irrelevant and vexatious Freedom of Information Act requests. He stated that the fast-tracked study had undergone "endless" rounds of peer review. In his view the people protesting were nothing more than a hostile clique out to destroy good science.
Horton's dissembling in a 2011 interview turned out to be breathtaking. He suggested that the patient community's ability "to engage in a proper scientific discussion" would be tested by the study, and he promised to publish letters in The Lancet. He then failed to publish a letter from 42 researchers and doctors detailing flaws in the trial.
In the end the ME/CFS advocates simply wanted to re-analyze the raw data. Given the rather large changes made to this very prominent trial (the most expensive ever undertaken in ME/CFS), a re-analysis made sense. Changing protocols in midstream is generally considered a no-no (Ampligen got hit hard for that), but the biggest problem wasn't that changes were made; it was that every single change seemed designed to make the trial more successful.
That was a big issue given the media's response. The trial, in fact, wasn't much of a success, but the modest benefits it found were hyped by a media on the lookout for a positive outcome. David Tuller, whose exposé jump-started the PACE issue, accused the PACE trial authors of aiding and abetting the media in its efforts to hype the trial.
Instead of being remembered as a very expensive trial that showed how modest the benefits of CBT and GET were for people with chronic fatigue syndrome, the PACE trial came to be viewed as a vindication of CBT/GET. Its results are now used to justify CBT/GET as primary treatments for this disease. The trial is featured prominently and positively, for instance, on UpToDate, the professional website many doctors consult for current information on treatments.
Recovery... What Recovery?
The raw data – obtained by an ME/CFS patient named Alem Matthees – was promptly turned over to two statisticians, Philip B. Stark of the University of California, Berkeley, and Bruce Levin of Columbia University. Their preliminary results were reported yesterday on The Virology Blog in a post titled "No 'Recovery' in PACE Trial, New Analysis Finds". (Check out a PDF of the re-analysis here.)
It turns out QMUL spent a bunch of the university's money fighting the release of the data for a good reason: the re-analysis indicated that the altered recovery criteria had indeed dramatically increased recovery rates. How dramatically? By about 400%.
Stark and Levin noted that the altered protocol allowed about 13% of the participants to be classified as having a "significant disability" and yet have "normal" physical functioning according to the recovery criteria.
Instead of the 22% of patients reported to have recovered using CBT and GET, only 7% and 4%, respectively, recovered under the original criteria. Because those numbers are statistically similar to the 3% and 2% of patients who recovered while getting specialized medical care or pacing, the trial actually indicated that CBT/GET did not significantly contribute to recovery at all.
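To see why those numbers count as "statistically similar," here's a quick back-of-the-envelope sketch – mine, not Stark and Levin's – using Fisher's exact test and assuming roughly 160 participants per arm (the trial's 640 participants split across four groups). The counts are derived from the percentages above, not from the actual trial data.

```python
from scipy.stats import fisher_exact

# A rough significance check - illustrative numbers only, not the trial data
n = 160  # assumed participants per arm (n=640 split across four groups)

cbt_recovered = round(0.07 * n)   # ~11 recoveries with CBT (original criteria)
smc_recovered = round(0.03 * n)   # ~5 recoveries with specialized medical care

table = [[cbt_recovered, n - cbt_recovered],
         [smc_recovered, n - smc_recovered]]
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.2f}")  # comfortably above 0.05: no detectable difference

# The gap between the reported 22% and the original-criteria rates is
# where the "about 400%" inflation figure comes from
print(f"inflation: {0.22 / 0.07:.1f}x (CBT), {0.22 / 0.04:.1f}x (GET)")
```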
Julie Rehmeyer reported in an opinion piece, "Bad science misled millions with chronic fatigue syndrome. Here's how we fought back", that only 20% of the patients improved under the original protocol, compared to the 60% reported to have improved in the original study. Since 10% of the people getting "specialized medical care" improved as well – and everybody got specialized medical care – the finding suggests that CBT and GET may have meaningfully improved the lives of only about 10% of those getting those therapies.
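Put in clinical terms, a 20% improvement rate against a 10% background rate works out to a number needed to treat of about ten. A quick illustrative calculation, again using the percentages above rather than the trial data:

```python
# Illustrative arithmetic from the percentages above (not the trial data)
improved_with_therapy = 0.20  # improved with CBT/GET under the original protocol
improved_with_smc = 0.10      # improved with specialized medical care alone

absolute_benefit = improved_with_therapy - improved_with_smc  # ~10 percentage points
nnt = 1 / absolute_benefit  # ~10 patients treated for one additional improvement
print(f"attributable benefit: {absolute_benefit:.0%}, NNT: {nnt:.0f}")
```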
The Bigger You Are, The Harder You Fall
It appears that the PACE trial is about to get bitten by one of its greatest strengths – its size. Large studies are able to pick up small effects that smaller studies cannot. Many a researcher has suggested that if only their study had been a bit larger it would have produced more positive results. The willingness of the U.K. and the Netherlands to fund large CBT/GET studies is one of the reasons CBT/GET has so dominated the treatment picture in ME/CFS.
The problem with definitively large trials is that they are definitive. The PACE study was so large (n=640) that it should have been able to pick up just about any positive result. Even with its huge size, though, Stark and Levin couldn't find any effect on recovery.

“We argue that if significant differences between groups cannot be detected in sample sizes of approximately n=160 per group, then this may indicate that CBT and GET simply do not substantially increase recovery rates.”
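To put that sample-size argument in concrete terms, here's a rough power sketch – my illustration, not part of the re-analysis – assuming roughly 160 participants per arm. It asks how large a recovery-rate difference a trial this size could reliably detect:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

n_per_arm = 160  # assumed PACE arm size (n=640 split across four groups)
power_calc = NormalIndPower()

# Power to detect the observed 7% vs 3% recovery difference
h = proportion_effectsize(0.07, 0.03)  # Cohen's h for two proportions
power = power_calc.solve_power(effect_size=h, nobs1=n_per_arm, alpha=0.05)
print(f"power for 7% vs 3%: {power:.0%}")  # modest - tiny differences can hide

# Smallest effect size detectable with conventional 80% power
h_min = power_calc.solve_power(nobs1=n_per_arm, alpha=0.05, power=0.80)
print(f"minimum detectable Cohen's h: {h_min:.2f}")
# An h of ~0.31 corresponds to roughly 11% vs 3% recovery - any substantial
# recovery benefit should have been visible at this sample size
```

In other words, only a negligible recovery effect could have slipped through a trial this large, which is exactly the point Stark and Levin make.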
The Beginning of the End of the PACE Trial and ???
This preliminary reanalysis isn't the end of the PACE trial, but it's probably the beginning of the end. Stark and Levin will continue their analyses of the raw data, and we'll surely see a publication in a journal at some point. It's hard to imagine the PACE trial could survive a published study exposing this massive study – really, the crown jewel of the British attempt to establish CBT/GET – as an object lesson in how studies should not be done.
If that happens, everyone involved in producing, funding, and protecting the study will get hurt. The authors will likely take an awful hit given the cost of the study and the many papers that have been based on it, but the funders will have to answer for the $8 million that went down the drain as well.
Then there's Queen Mary University of London and The Lancet.
QMUL Analysis Deepens Questions About the University's Objectivity
QMUL released a reanalysis of the data that indicated just how far the university has gone astray. Essentially, QMUL did what it has always done: instead of a comprehensive and fair analysis of all the issues, it ignored the main questions and focused on other ones.
Its reanalysis focused entirely on whether CBT and GET were more effective than pacing or ordinary medical treatment – a question, quite frankly, that no one had any interest in.
Simon Wessely Strikes Out
In an email exchange with Julie Rehmeyer, Simon Wessely stuck to his talking points. "The message remains unchanged," she said he wrote, calling both treatments "modestly effective." He summarized his overall reaction to the new analysis this way: "OK folks, nothing to see here, move along please."
A ten percent response rate with no effect on recovery, however, is not a "modest" benefit; it's a negligible one. We'll learn more in the final paper, but a ten percent response doesn't sound like a statistically significant benefit at all. If it isn't, the largest CBT/GET study ever done could end up showing no benefit at all.
For his part, Peter White is still arguing that the data shouldn't have been released in the first place.
Time to Dig Deeper
Simon Wessely suggested that it's time to move along, but it's actually time to dig deeper. If the published paper is anything like the preliminary analysis released yesterday, it'll be time to ask some hard questions about the scientific biases at work in the UK and at The Lancet.
What does it say, for instance, about QMUL's commitment to the scientific process that it resisted the release of the raw data so vociferously, and then produced a reanalysis that skips the main issues? Why, under Richard Horton's direction, was The Lancet allowed to fast-track such a flawed paper, and why has it allowed its editor-in-chief to inject himself so emotionally into the subject? Why, in short, has The Lancet allowed Richard Horton to do such damage to its sterling reputation?
Rebecca Goldin, a statistician, tore apart the trial earlier this year, and it was skewered at a statisticians' conference. According to MEAction, Professor Levin stated that the journals' defense of the PACE trial had diminished the respect in which The Lancet and Psychological Medicine (the journal that published a PACE recovery study) are held "worldwide".
Julie Rehmeyer reported that Ron Davis would like to see the paper used as an example of how not to do science.

“The study needs to be retracted,” Davis said. “I would like to use it as a teaching tool, to have medical students read it and ask them, ‘How many things can you find wrong with this study?’”
It looks like the AHRQ panel – which downgraded its CBT/GET recommendations after taking the Oxford definition into account – may be due for another re-analysis as well. The PACE trial made its short list of high-quality studies, but it seems inconceivable that it will remain there.
The PACE trial's end might not be too far off. Stark and Levin did their preliminary re-analysis quickly. It'll take longer to get the final reanalysis done and published. Once that happens the controversy will bleed more into the scientific community, and calls for retraction will surely mount.
Rehmeyer warns, however, that retractions are rare and can take years to achieve. Whether The Lancet is willing to continue taking the hit it's probably going to get in the press and elsewhere may be the determining factor. The PACE trial controversy is starting to leak into the mainstream press (see "The Implosion of a Breakthrough Study on Chronic Fatigue Syndrome"), but it's nothing compared to what we'll probably see after Levin et al. publish their paper.
The re-analysis is a vindication of years of work by advocates such as Tom Kindlon, Alem Matthees, and Carly Maryhew (co-authors of the paper), Julie Rehmeyer, and many others. It wouldn't have been possible, of course, without the dedication and commitment of David Tuller, whose investigative series surely deserves a prize for medical reporting, and the support of Vincent Racaniello at The Virology Blog.