Sunday's NYTimes ran an article tackling a complex ethical dilemma in cancer care: the withholding of treatment in clinical trials. Because I was treated in three clinical trials in the 1990s, the topic is close to my heart.
Scientists have advanced the treatment of disease using the scientific method. By that, I mean they test a theory using rigorous methods that give a reliable, reproducible answer, one that is very unlikely to be due to chance.
The gold standard of testing new treatments is the double-blind placebo-controlled randomized clinical trial.
In many medical situations for which effective therapies exist, researchers try to improve on the standard therapies by testing treatments in randomized clinical trials (RCTs). Here, patients are randomly sorted into groups to compare different types of treatment for the same condition. In most cases, patients receive either the therapy being studied or state-of-the-art standard therapy, not a placebo (a "sugar pill" or, more accurately, an inactive intervention).
For decades now, the RCT has been the way scientists have reined in emotions that might bias the results. The RCT has saved millions of patients from being treated with therapies that, in truth, don't work. The RCT has prevented researchers from pursuing blind alleys (lines of investigation that will not yield effective therapies) and prompted them to pursue promising new lines of investigation.
So what is the problem? The problem is that many well-respected scientific researchers, and droves of desperate patients, believe we can do better. They believe that given modern technology, the old gold standard is now slowing progress and, most contentiously, keeping optimal therapies from patients who might benefit.
In my next post, I'll explain why they feel this way.
I think one of the problems with this sort of research has been "individuality." You might have a new therapy where 33% get better, 33% see no change, and 33% get worse. It will not show efficacy, yet the 33% who might do better on it never get a chance. We need to be able to identify who gets better on it and why, and make it available for those groups, regardless of the wider population it might not help.
Posted by: sue c | September 23, 2010 at 12:43 PM
The article was discussed at the LRF conference this weekend in San Francisco. I haven't yet finished my evaluation form. Would you mind if I suggest you as a speaker?
Posted by: Roz | September 28, 2010 at 08:44 AM