Data Analysis Puts PACE Trial on Slippery Slope to Retraction

Discussion in 'General Discussion' started by Cort, Sep 22, 2016.

  1. Cort

    Cort Founder of Health Rising and Phoenix Rising Staff Member

    The issue at stake was a big one: whether changes to the PACE trial's original protocol ended up stacking the deck.

    It was easy to see how they might have; some of the recovery criteria were so weird as to seem ridiculous. The changes in the study's protocol made it possible, for instance, to be ill enough to get into the study and yet, by one criterion, be considered recovered at the same time. The PACE report's conclusion that CBT and GET produced statistically significant “recovery” rates of 22% seemed like overkill to many people who had tried CBT/GET as well.

    Suspecting that some sort of fix was in, advocates fought for years to get at the raw data. In response, Queen Mary University of London asserted that the release of the data would imperil confidentiality and spark hostile attacks by unhinged patients. Those arguments were swept aside by a UK Tribunal, which took QMUL, not the patients, to task for unprofessional behavior.

    Indeed, there’s been little evidence of the kind of “professional behavior” patients might expect from the medical establishment. Richard Horton, the chief editor of The Lancet, has seemed at times almost unable to contain himself. He rued the million or so dollars he said the UK government had spent responding to irrelevant and vexatious Freedom of Information Act requests. He stated that the fast-tracked study had undergone “endless” rounds of peer review. In his view the people protesting were nothing more than a hostile clique out to destroy good science.

    Horton’s dissembling in a 2011 interview turned out to be breathtaking. Horton suggested that the patient community's ability “to engage in a proper scientific discussion” would be tested by the study. Horton promised to publish letters in The Lancet and then failed to publish a letter from 42 researchers and doctors regarding flaws in the trial.

    In the end the ME/CFS advocates simply wanted to re-analyze the raw data. Given the rather large changes made in this very prominent trial (the most expensive ever undertaken in ME/CFS), a re-analysis made sense. Changing protocols midstream is generally considered a no-no (Ampligen got hit hard for that), but the biggest problem wasn't that changes were made; it was that every single change seemed designed to make the trial more successful.

    That was a big issue given the media's response to the trial. The trial, in fact, wasn't much of a success, but the modest benefits it found were hyped up by a media on the lookout for a positive outcome. David Tuller, whose exposé jump-started the PACE issue, accused the PACE trial authors of aiding and abetting the media in their efforts to hype the trial.

    Instead of being seen as a very expensive trial that showed how modest the benefits of CBT and GET were for people with chronic fatigue syndrome, the PACE trial came to be viewed as a sort of vindication of CBT/GET. Its results are now used to justify CBT/GET as primary treatments for this disease. The trial is featured prominently and positively, for instance, on UpToDate, a professional website many doctors turn to for up-to-date information on treatments.

    Recovery....What Recovery?

    The raw data – obtained by an ME/CFS patient named Alem Matthees – was promptly turned over to two statisticians, Philip B. Stark of the University of California, Berkeley, and Bruce Levin of Columbia University. Their preliminary results were reported in a post titled “No ‘Recovery’ in PACE Trial, New Analysis Finds” posted on The Virology Blog yesterday. (Check out a PDF of the re-analysis here.)

    It turns out QMUL spent a bunch of the University's money fighting the release of the data for a good reason: the re-analysis indicated that the altered recovery criteria did indeed dramatically increase recovery rates. How dramatically? By about 400%.

    Stark and Levin noted that the altered protocol allowed about 13% of the participants to be classified as having a “significant disability” and yet have “normal” physical functioning according to the recovery criteria.

    Instead of the 22% recovery rates reported for CBT and GET, only 7% and 4% of patients, respectively, recovered under the original criteria. Because those numbers are statistically similar to the 3% and 2% of patients who recovered while getting specialized medical care or pacing, the trial actually indicated that CBT/GET did not significantly contribute to recovery at all.
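    The arithmetic behind "statistically similar" can be sketched with a simple two-proportion z-test. This is purely illustrative: the per-arm counts (~160 patients, from n≈640 split across four arms) and the choice of test are assumptions for this sketch, not the method used in the reanalysis itself.

```python
from math import sqrt, erfc

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test: returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)           # pooled recovery proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))         # two-sided normal tail probability
    return z, p_value

# ~7% recovery with CBT vs ~3% with specialist medical care,
# assuming roughly 160 participants per arm (hypothetical counts)
z, p = two_prop_z(11, 160, 5, 160)
print(f"z = {z:.2f}, p = {p:.2f}")  # p > 0.05: difference not statistically significant
```

    With counts this small, the difference between the arms falls short of conventional significance, which is the sense in which the recovery rates are "statistically similar."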

    Julie Rehmeyer reported in an opinion piece, “Bad science misled millions with chronic fatigue syndrome. Here’s how we fought back”, that only 20% of the patients improved under the original protocol compared to the 60% reported to have improved in the original study. Since 10% of the people getting "specialized medical care" improved as well – and everybody got specialized medical care – the finding actually suggests that CBT and GET may have significantly improved the lives of only about 10% of those getting those therapies.

    The Bigger You Are The Harder You Fall

    It appears that the PACE trial is about to get bitten by one of its greatest strengths – its size. Large studies are able to pick up small results that smaller studies cannot. Many a researcher has suggested that if only his/her study had been a bit larger it would have had more positive results. The willingness of the U.K. and the Netherlands to produce large, well-funded CBT/GET studies is one of the reasons CBT/GET has so dominated the treatment picture in ME/CFS.
    The problem with definitively large trials is that they are definitive. The PACE study was so large (n=641) it should have been able to pick up any possible positive result. Even with its huge size, though, Stark and Levin couldn't find any effects on recovery.

    The Beginning of the End of the PACE Trial and ???

    This preliminary reanalysis isn't the end of the PACE trial, but it's probably the beginning of the end. Stark and Levin will continue their analyses of the raw data, and we'll surely see a publication in a journal at some point. It's hard to imagine the PACE trial could survive a published study exposing this massive study – really, the crown jewel of the British attempt to establish CBT/GET – as an object lesson in how studies should not be done.

    If that happens, everyone involved with producing the study, funding the study and protecting the study will get hurt. The authors of the study will likely take an awful hit given the cost of the study and the many papers that have been based on it, but the funders will have to answer for the $8 million that went down the drain as well.

    Then there’s Queen Mary University of London and The Lancet.

    QMUL Analysis Deepens Questions About University's Objectivity

    QMUL released a reanalysis of the data that indicated just how far this University has gone astray. Essentially QMUL did what they have always done; instead of a comprehensive and fair analysis of all the issues, they ignored the main questions and focused on other ones.
    Their reanalysis focused entirely on whether CBT and GET were more effective than pacing or ordinary medical treatment – a question, quite frankly, that no one had any interest in.

    Simon Wessely Strikes Out

    In an email exchange with Julie Rehmeyer, Simon Wessely stuck to his talking points. “The message remains unchanged,” she said he wrote, calling both treatments “modestly effective.” He summarized his overall reaction to the new analysis this way: “OK folks, nothing to see here, move along please.”

    A ten percent response rate with no effect on recovery, however, is not a “modest” benefit; it's a negligible one. We'll learn more from the final paper, but a ten percent response doesn't sound statistically significant at all. If it isn't, the largest CBT/GET study ever done could end up showing no benefit whatsoever.

    For his part, Peter White is still arguing that the data shouldn't have been released in the first place.

    Time to Dig Deeper

    Simon Wessely suggested that it's time to move along but it’s actually time to dig deeper. If the published paper is anything like the preliminary paper released yesterday, it’ll be time to ask some hard questions regarding scientific biases at work in the UK and at The Lancet.

    What does it say, for instance, about QMUL's commitment to the scientific process that it resisted the release of the raw data so vociferously, and then produced a reanalysis that skips the main issues? Why, under Richard Horton's direction, did The Lancet fast-track such a flawed paper, and why has it allowed its chief editor to inject himself so emotionally into the subject? Why, in short, has The Lancet allowed Richard Horton to do such damage to its sterling reputation?

    Rebecca Goldin, a statistician, tore apart the trial earlier this year, and it was skewered at a statisticians' conference. According to MEAction, Professor Levin stated that their defense of the PACE trial had diminished the respect in which The Lancet and Psychological Medicine (a journal that published a PACE recovery study) are held “worldwide”.

    Julie Rehmeyer reported that Ron Davis would like to see the paper used as an example of how not to do science.
    It looks like the AHRQ panel – which downgraded its CBT/GET recommendations after taking the Oxford definition into account – may be due for another re-analysis as well. The PACE trial made its short list of high-quality studies, but it seems inconceivable that it will remain there.

    The PACE trial's end might not be too far off. Stark and Levin did their preliminary re-analysis quickly. It'll take longer to get the final reanalysis done and published. Once that happens the controversy will bleed more into the scientific community, and calls for retraction will surely mount.

    Rehmeyer warns, however, that retractions are rare and can take years to achieve. Whether The Lancet is willing to continue to take the hit it's probably going to get in the press and elsewhere for the paper may be the determining factor. The PACE trial controversy is starting to leak a bit into the mainstream press (see "The Implosion of a Breakthrough Study on Chronic Fatigue Syndrome"), but it's nothing compared to what we'll probably see after Levin et al. publish their paper.

    The re-analysis is a vindication of years of work by advocates such as Tom Kindlon, Alem Matthees, Carly Maryhew (co-authors of the paper), Julie Rehmeyer and many others. It wouldn't have been possible, of course, without the dedication and commitment of David Tuller, whose investigative series surely deserves a prize for medical reporting, and the support of Vincent Racaniello at The Virology Blog.


    Last edited: Sep 23, 2016
    AnIrishGuy and Andrew P like this.
  2. AnIrishGuy

    AnIrishGuy Member

    Small error: there were 641 participants in the PACE Trial not 480. 480 would be the number if you counted up 3 of the 4 groups.
  3. Cort

    Cort Founder of Health Rising and Phoenix Rising Staff Member

    Right...Amazing how big that study was! I can understand them wanting to protect it. This looks like a huge win for the ME/CFS community and a devastating loss for the PACE authors, QMUL and Lancet.
    Last edited: Sep 23, 2016
    Neunistiva likes this.
  4. WhyWhyWhy

    WhyWhyWhy Member

    Please give proper attribution related to the authors of this article as they all appeared to have a hand in working on the statistics:

    UK Margaret likes this.
  5. Cort

    Cort Founder of Health Rising and Phoenix Rising Staff Member

    I guess you just did that. I appreciate that Alem Matthees, Tom Kindlon and Carly Maryhew are co-authors and, in fact, I didn't notice that until you mentioned it. I went off of MEAction's report of the study, which I think does the right thing by highlighting the statisticians involved in the study.

    I think we want to highlight that this is the product of professional statisticians; reading the paper makes it pretty clear that this is a very sophisticated statistical analysis. I will get Tom's and the others' names in there, though – thanks for mentioning them.
    Last edited: Sep 23, 2016
  6. Well written, Cort. Thanks for breaking down the details of a complicated report.

    But what I really want to know is what "standard medical care" is. In Boston, standard medical care is a shrug of the shoulders or (Komaroff) "you're sick but I don't know how to treat you."

    Do share!
    UK Margaret likes this.
  7. Cort

    Cort Founder of Health Rising and Phoenix Rising Staff Member

    :)...If it's a shrug of the shoulders in Boston, it's probably a blank stare in the U.K. I imagine it just means that they were seeing some sort of specialist, which, given the way the UK operates, may very well have been a psychiatrist.
  8. UK Margaret

    UK Margaret Member

    Cort, specialists in ME don't exist in the UK, well at least not in the mainstream NHS. And yes standard care here is hmm - here take this anti-depressant and if it gets really bad we'll give you pain killers. Bye (and preferably don't come back).

    IF we get referred it may be to a neurologist if we display neuro symptoms, but we will just be told it's functional movement disorder, whatever that is, and promptly discharged back to our GP. If we have lots of pain we may get referred to a pain management clinic where the treatment, if you get any, is more painkillers and, yes you guessed it, CBT. If we don't care to "engage" with said CBT we are labelled uncooperative and promptly discharged back to our GP. If we have fibromyalgia as well we get referred to a rheumatologist who says, yes you have fibro but sorry there's nothing I can do, and we are promptly discharged back to our GP.

    So all in all a blank stare would be about as helpful as what we do get.
  9. Cort

    Cort Founder of Health Rising and Phoenix Rising Staff Member

    I'm sorry to hear that but I would have been surprised to have heard anything else. What a system....Good luck with everything!
    UK Margaret likes this.
  10. KME

    KME Member

    In academic writing the order in which authors are listed is meaningful, and generally reflects descending order of contribution i.e. amount of work put in. Matthees is first author, Kindlon is second co-author, Maryhew is third co-author, Stark is fourth co-author, Levin is fifth co-author. This article will be referred to as Matthees et al 2016, or in its long form, Matthees, Kindlon, Maryhew, Stark & Levin 2016, according to academic writing norms. When being discussed within an article, it will be discussed as "Matthees et al found...", not "Stark and Levin found..."

    Authors generally agree the order in which names are listed when preparing a manuscript according to the amount of work each put in. Credit is given for work, not qualifications or seniority. Both Matthees and Kindlon have had academic letters and articles published and have independently demonstrated their ability to use statistical analysis to critique PACE. But most importantly, they are first and second authors of the paper. The authors' decision on who is listed as first author, second etc needs to be respected when referring to the paper. Well done to Matthees and his four co-authors on a careful and compelling reanalysis.
    Last edited: Sep 24, 2016
  11. Cort

    Cort Founder of Health Rising and Phoenix Rising Staff Member

    Actually the first and the last authors are the two most important authors. To my understanding the first author is usually the one who actually did the work and the last author is the senior author overseeing it.

    Alem Matthees has not published any studies – his two PubMed citations are comments on other ME/CFS studies. His background according to ResearchGate is this
    Tom Kindlon has published at least one review and many comments, but he's not a statistician either, and his skills list on ResearchGate does not include statistics.

    They've both played critical roles in the PACE saga and they are both patient advocates – very effective patient advocates – and should be saluted as such. Neither has a degree in statistics or works in the field, however. In this case I focused on the statisticians instead of the advocates.

    I agree that the citation should read Matthees et al., but besides the fact that neither Tom nor Alem has the background to write a paper like that (have you read it?), I think it's far better for us as a community to have the two established statisticians highlighted instead of two patient advocates with no academic background in the pertinent field (statistics).

    One approach gets the paper credibility - the other leads to dismissal. That's why I made sure to highlight Levin and Stark and will continue to highlight their involvement - and I will make sure to cite the paper properly.
    Last edited: Sep 24, 2016
    IrisRV likes this.
  12. KME

    KME Member

    There’s no problem here. I’m delighted that you highlighted this paper, and delighted that you highlighted the statisticians’ co-authorship. I agree that this increases credibility in the medical sphere. However, for true credibility in the medical sphere, publication in a major peer-reviewed journal is required. Hopefully this will follow, whether by these or other authors, based on the subset of PACE trial data that Alem Matthees successfully secured.

    My point is simply that the paper should be referenced as Matthees et al (2016), since he is listed as first author.

    I’m not sure why you ask whether I have read the paper. I have. I am a person with ME, and a former health professional and academic. I’m aware of what Matthees and Kindlon have and have not done. Kindlon, for example, was midway through a mathematics degree when he became too unwell to continue. I would not feel comfortable making assumptions about what they are capable of, or of who contributed what. And I don’t think this matters. It’s a compelling collaboration between intelligent, informed patient advocates and professional statisticians, and thank you for highlighting it to a wider audience.
  13. Cort

    Cort Founder of Health Rising and Phoenix Rising Staff Member

    The paper was so statistically dense that I felt that only a statistician could have produced it and I guess I misunderstood you there.

    I will certainly reference it as Matthees et al., as I should have from the beginning, and will continue to make sure that I acknowledge Matthees, Kindlon, Tuller and the other patient advocates who played such a critical role in getting us to where we are with the PACE trial.

    Thanks for your nice comments :)
  14. KME

    KME Member

    Great. Just noticed that the preamble to the paper on the Virology blog clarified patients’ and statisticians’ respective roles as follows:

    “ME/CFS patients developed and wrote this groundbreaking analysis, advised by two academic co-authors.”

    Tuller wrote in his 22 September 2016 blog: “Last weekend, several smart, savvy patients helped Mr. Matthees analyze the newly available data, in collaboration with two well-known academic statisticians, Bruce Levin from Columbia and Philip Stark from Berkeley.”

    Can't thank all authors of this paper enough.
    IrisRV likes this.
  15. Cort

    Cort Founder of Health Rising and Phoenix Rising Staff Member

    So they did. Wow....That is a dense scientific paper.

    Great blog by Tuller, though! I hadn't seen it before. Lancet and Psychological Medicine are burying their heads in the sand. PM said - just run another trial.

    I'm aghast at the lack of integrity from these journals...It's scary

    I think Tuller is right. The trick was to keep the data out of the public's hands not by using scientific means but by branding ME/CFS patients as unstable

    Powerful stuff!
    Last edited: Sep 26, 2016