Why Oncologists Don’t Like In Vitro Chemosensitivity Tests

In human experience, the level of disappointment is directly proportional to the level of expectation. When, for example, the world was told of the successful development of cold fusion, a breakthrough of seemingly historic proportions, the expectations could not have been greater. Cold fusion, the purported ability to harness the power of nuclear fusion without the heat and radiation, was so appealing that people rushed into a field about which they understood little. Those who remember that episode in the late 1980s will recall the shock and dismay of the scientists and investors who rushed to sponsor and support the venture, only to be left out in the cold when the data came in.

Since the earliest introduction of chemotherapy, the ability to select active treatments before having to administer them to patients has been the holy grail of oncologic investigation. During the 1950s and 60s, chemotherapy treatments were punishing. Drugs like nitrogen mustard were administered without the benefit of modern anti-emetics, and cancer patients suffered every minute of it. The nausea was extreme, the bone marrow suppression dramatic, and the benefits marginal at best. With the introduction of cisplatin in the pre-Zofran/Kytril era, patients experienced a heretofore unimaginable level of nausea and vomiting. With each passing day, medical oncologists wondered why they couldn't use the same techniques that had proven so useful in microbiology (bacterial culture and sensitivity) to select chemotherapy.

And then it happened. In June of 1978, the New England Journal of Medicine (NEJM) published a study involving a small series of patients whose tumors responded to drugs selected by in vitro (laboratory) chemosensitivity. Eureka! Everyone, everywhere wanted to do clonogenic (human tumor stem cell) assays. Scientists traveled to Tucson to learn the methodology. Commercial laboratories were established to offer the service. It was a new era of cancer medicine. Finally, cancer patients could benefit from effective drugs and avoid ineffective ones. At least, it appeared that way in 1978.

Five years later, the NEJM published an update on more than 8,000 patients who had been studied by clonogenic assay. It seemed that amid all the hype and hoopla, one teeny, tiny detail had been overlooked: the clonogenic assay didn't work. Like air rushing out of a punctured tire, the field collapsed on itself. No one wanted to hear about using human cancer cells to predict response to chemotherapy, not ever!

Years earlier, in 1972, a seminal paper in the British Journal of Cancer had described the phenomenon of apoptosis, a form of programmed cell death. Revisited in the wake of the clonogenic assay's collapse, it made evident exactly why that assay didn't work. By re-examining the basic tenets of cancer chemosensitivity testing, a new generation of assays was developed that measured drug-induced programmed cell death, not growth inhibition. Cancer didn't grow too much; it died too little. And these tests proved it.

Immediately, the predictive validity improved. Every time the assays were put to the test, they met the challenge. From leukemia and lymphoma to lung, breast, ovarian, and even melanoma, cancer patients who received drugs found active in the test tube did better than cancer patients who received drugs that looked inactive. Eureka! A new era of cancer therapy was born. Or so it seemed.

I was one of those naive investigators who believed that because these tests worked, they would be embraced by the oncology community. I presented my first observations in the 1980s, using the test to develop a curative therapy for a rare form of leukemia. Then we used this laboratory platform to pioneer drug combinations that, today, are used all over the world. We brought the work to the national cooperative groups, conducted studies and published the observations. It didn't matter. Because the clonogenic assay hadn't worked, however evident the reasons for its failure, no one wanted to talk about the field ever again.

In 1600, Giordano Bruno was burned at the stake for suggesting, among other heresies, that the universe contained other planetary systems. In 1633, Galileo Galilei was tried by the Inquisition and confined to house arrest for promoting the heliocentric model of the solar system. Centuries later, Ignaz Semmelweis, MD, was committed to an insane asylum after he (correctly) suggested that puerperal sepsis was spread by contamination carried on physicians' hands. A century after that, the discoverers of Helicobacter (the cause of peptic ulcer disease) were forced to suffer the slings and arrows of ignoble academic fortune until they were vindicated through the efforts of a small coterie of enlightened colleagues.

Innovations are not suffered lightly by those who prosper under established norms. To disrupt the standard of care is to invite the wrath of academia. The 2004 Technology Assessment published by Blue Cross/Blue Shield and ASCO in the Journal of Clinical Oncology, and ASCO's update seven years later, reflect little more than an established paradigm attempting to escape irrelevance.

Cancer chemosensitivity tests work exactly according to their well-established performance characteristics of sensitivity and specificity. They consistently provide superior response rates and, in many cases, superior time to progression and even survival. They can improve outcomes, reduce costs, accelerate research and eliminate futile care. If the academic community is so intent on putting these assays to the test, why has it repeatedly failed to support the innumerable efforts our colleagues have made over the past two decades to evaluate them fairly in prospective randomized trials? It is time for patients to ask exactly why their physicians do not use these tests, and to demand that those physicians provide data, not hearsay, to support their arguments.
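
For readers unfamiliar with those performance characteristics, the short sketch below shows how sensitivity, specificity and the predictive values of an assay result are conventionally computed from a two-by-two table of assay calls versus observed clinical responses. The counts used here are purely hypothetical placeholders for illustration; none of them come from the studies discussed in this post.

```python
# Minimal sketch of the standard 2x2 performance metrics for a
# chemosensitivity assay. All counts below are hypothetical.

def assay_performance(tp, fp, fn, tn):
    """Compute sensitivity, specificity, PPV and NPV.

    tp: assay called the drug 'sensitive' and the patient responded
    fp: assay called 'sensitive' but the patient did not respond
    fn: assay called 'resistant' but the patient responded
    tn: assay called 'resistant' and the patient did not respond
    """
    sensitivity = tp / (tp + fn)   # responders correctly flagged as sensitive
    specificity = tn / (tn + fp)   # non-responders correctly flagged as resistant
    ppv = tp / (tp + fp)           # chance of response when the assay says 'sensitive'
    npv = tn / (tn + fn)           # chance of non-response when the assay says 'resistant'
    return sensitivity, specificity, ppv, npv

# Hypothetical cohort of 100 assay-tested patients.
sens, spec, ppv, npv = assay_performance(tp=30, fp=10, fn=10, tn=50)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f} NPV={npv:.2f}")
```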

About Dr. Robert A. Nagourney
Dr. Nagourney received his undergraduate degree in chemistry from Boston University and his doctor of medicine at McGill University in Montreal, where he was a University Scholar. After a residency in internal medicine at the University of California, Irvine, he went on to complete fellowship training in medical oncology at Georgetown University, as well as in hematology at the Scripps Institute in La Jolla.

During his fellowship at Georgetown University, Dr. Nagourney confronted aggressive malignancies for which the standard therapies remained mostly ineffective. No matter what he did, all of his patients died. He found this "standard of care" unacceptable, and it inspired him to return to the laboratory, where he eventually developed "personalized cancer therapy."

In 1986, Dr. Nagourney, along with colleague Larry Weisenthal, MD, PhD, received a Phase I grant from a federally funded program and launched Oncotech, Inc. They began conducting experiments to prove that human tumors resistant to chemotherapeutics could be re-sensitized by pre-incubation with calcium channel blockers, glutathione depletors and protein kinase C inhibitors. The original research was a success. Oncotech grew with financial backing from investors who ultimately changed the direction of the company's research. The changes proved untenable to Dr. Nagourney, and in 1991 he left the company he co-founded.

He then returned to the laboratory and developed the Ex-vivo Analysis - Programmed Cell Death® (EVA-PCD) test to identify the treatments that would induce programmed cell death, or "apoptosis." He soon took a position as Director of Experimental Therapeutics at the Cancer Institute of Long Beach Memorial Medical Center, where his primary research project was chronic lymphocytic leukemia. He remained in this position until the basic research program's funding was cut, and in 1995 he founded Rational Therapeutics. It is there that the EVA-PCD test is used to identify the drugs, combinations of drugs or targeted therapies that will kill a patient's tumor, thus providing patients with truly personalized cancer treatment plans. With the desire to change how cancer care is delivered, he became Medical Director of the Todd Cancer Institute at Long Beach Memorial in 2003. In 2008, he returned to Rational Therapeutics full time to rededicate his time and expertise to expanding the research opportunities available through the laboratory.

He is a frequently invited lecturer for numerous professional organizations and universities, and has served as a reviewer and on the editorial boards of several journals, including Clinical Cancer Research, British Journal of Cancer, Gynecologic Oncology, Cancer Research and the Journal of Medicinal Food.

5 Responses to Why Oncologists Don’t Like In Vitro Chemosensitivity Tests

  1. Dr. Nagourney

    Would it be safe to say that one of the reasons medical oncologists don't like in vitro chemosensitivity tests is that they may be in direct competition with the randomized controlled clinical trial paradigm, a fiercely defended relic of our ignorance? Cell culture assays measure the "efficacy" of anti-cancer drugs. The randomized clinical trial also claims to measure the "efficacy" of anti-cancer drugs. And the new molecular testing rates drugs on the strength of population research, rather than on the efficacy of drugs actually tested against an individual's cancer cells.

    The oncologists' trade group, the American Society of Clinical Oncology (ASCO), says oncologists should make chemotherapy treatment recommendations on the basis of published reports of clinical trials and a patient's health status and treatment preferences. But all that rigorous clinical trials identify are the best treatments for the "average" patient (do cancer cells prefer Coke or Pepsi?). Cancer is far more heterogeneous in its response to individual drugs than bacterial infections are. The tumors of different patients have different responses to chemotherapy.

    The ASCO tech assessments say that chemotherapy sensitivity and resistance assays (CSRAs) should not be used outside the confines of a clinical trial setting. Yet the people who maintain that assay-directed therapy should not be used until proven in prospective randomized clinical trials are the same people whose entire careers are utterly dependent upon mega-trials 100% funded by pharmaceutical companies (that, plus the fees from speeches they give for these companies), and the same people who control the clinical trials system, the grant review study sections and the journal editorial boards.

    No wonder ASCO doesn't recommend the use of CSRAs (no matter how good they are) to select chemotherapeutic agents for individual patients outside of the clinical trial setting. The authors of these tech assessments not only try to invent a brand-new criterion for validating a laboratory test; they also insist the tests be confined to clinical trials. The result is tens of thousands of scientists pursuing the tiniest improvements in treatment rather than genuine breakthroughs, a system that fosters redundant work and rewards academic achievement and publication above all else.

    Why is ASCO (along with others) protecting the status of treatments that are only marginally, and inconsistently, effective? This prevents serendipitous and fortuitous discovery. Truly effective treatments don't need prospective randomized trials. Even ASCO points out that, because the number of available chemotherapeutic agents has increased enormously over the past few years, the rationale for these assays has never been stronger. As the number of possible treatment options supported by completed randomized clinical trials increases, the scientific literature becomes an increasingly vague guide for physicians.

    With all these uncertainties, would it be wrong to make a clinical decision based on CSRAs? Should the testing be denied to patients who walk in the door asking for it? Patients who want this testing, after a thorough discussion of the peer-reviewed studies and experience that support it, should not be hindered by restrictive ASCO policy. I have never heard that ASCO was anointed a regulatory agency.

    Until the controlled, randomized trialist approach delivers curative results with a high rate of success, the freedom of physicians and patients to integrate promising insights and methods such as chemoresponse assays remains essential to the progress of this kind of treatment technology.

  2. Michael Castro, MD says:

    Nice comment. In 1992, the Church finally acknowledged its error in condemning Galileo for the "crime" of heliocentrism, a lesson in the slow pace of reconsideration by authoritative bodies. It seems we haven't yet overcome an analogous religious intolerance in medical oncology, and I'm not holding out for an apology from ASCO any time soon, though eventually it may come. Certainly, the insistence on population medicine at a time when the technology for individualized medicine has arrived borders on religious intolerance, not the scientifically curious patient advocacy that patients want and naively expect. I'm afraid this intolerance is buttressed by the economic incentive to give drugs to as many individuals as possible, which makes it a double problem.

  3. Robert Jereski says:

    I get that institutions and people have various incentives to maintain a course of treatment based on clinical trials, even when treatment results are not improving with new trials. But what I don't understand is why those developing an ostensibly better treatment approach would dismiss the value of prospective randomized trials and publishing. A scientific approach would use both to prove the effectiveness of the diagnostic tool advocated here.

    A listing of the randomized trials supportive of cell-death chemosensitivity assays, and publication in peer-reviewed journals of the results of such trials and/or treatment, would serve patients well.

  4. I've been researching cell function analysis for the last 12 years, and even during that period I've come up against resistance from the so-called trialist system time and time again. I've looked into the history of this apprehension; one example comes from the NCI's feeble attempt to study assay-directed therapy of lung cancer some years ago.

    The study was a failure because it was done with established permanent cell lines (instead of fresh cells), which have been conclusively shown to have no predictive value at all with respect to the clinical activity spectrum. The result was a dismal 11% response rate.

    The NCI used cell lines because the major expertise of the investigators who carried out the study was in the creation of cancer cell lines, and they wanted to see whether they could perform assays on these cell lines for use in patient therapy. The results showed they were able to test successfully only 22% of the specimens received, including only 7% of primary lesions.

    This contrasts with the 75% overall success rate reported by earlier investigators who used the same assay system on fresh tumor, and with the >95% success rate routinely obtained using the improved (cell-death) methods available today.

    The NCI spent $15 million on a single-cell suspension fresh-tumor assay with cell proliferation (cell growth) rather than cell death as an endpoint. When that didn't work, they folded their hand and specifically discouraged future applications of cell culture testing in their grant and contract guidelines, dating from the late 1980s.

    They never supported any drug development work based on primary cultures of three-dimensional (3D) cell clusters with cell-death endpoints, which very nicely recapitulate the known disease-specific activity of drugs.

    Later, there were sophisticated programs to discover gene expression microarrays that would predict responsiveness to drug therapy. The NCI had a huge lab working on microarrays, looking for patterns of mRNA and protein expression that are predictive of chemotherapy response. They spent two years trying to find patterns that correlated with drug response, using the NCI's various established ovarian cell lines.

    They thought they had something, but when they started to apply the patterns to fresh tumor specimens, none of the results from the cell lines was applicable to the fresh tumors. Everything they had worked out in the cell lines was worthless, and they had to start over from square one.

    However, the NCI efforts, for all their limitations and non-applicability, failed to recognize that the way to identify informative gene expression patterns is to have a "gold standard," and the cell-death cell culture assays are by far the most powerful, efficient and useful gold standard to have, which only adds to the potential value of the assays for individualizing cancer therapy.

    It was routine for the NCI to append statements to grant and contract initiative announcements that applications relating to cell culture assays were strongly discouraged. Dr. Daniel Von Hoff (after his failed attempt with the old-technology cell-proliferation assays) published a paper around 1990 in which he stated that clinical trials of cell culture assays would never be supported. And the cooperative groups have utterly refused to do the studies. Why should they? Five times as much work for much less (financial) reward.

    There is an enormous amount of published, peer-reviewed research documenting the accuracy of cell culture assays: scores of studies in thousands of patients, based on both response and survival, yet all of it was excluded from the ASCO and insurance-industry reviews. And accuracy data of this kind are the only evidence that exists to validate any other medical test used as an aid in drug selection.

    The reviews disallowed the introduction of published, peer-reviewed evidence documenting accuracy, while allowing the introduction of hearsay: unstated, undocumented, undescribed, unpublished, non-peer-reviewed non-evidence.

    Then there is the fact that "proving" efficacy in one situation would do nothing to prove efficacy in any other situation. This is why the FDA demands clinical trial data showing efficacy for each and every indication for which a drug is approved.

    Let's say a planned assay-directed clinical trial in relapsed non-small cell lung cancer (NSCLC) proves efficacy. All we prove is that the assay improves things for one small indication: relapsed NSCLC, not ovarian cancer. And it gets worse. The year after the close of the study, two new drugs become available, and the assay-directed study only proves efficacy with the old drugs, not the new and improved ones. A constantly moving target.

    So then you say, just go out and get a grant to do another one. Sounds like the Twilight Zone! If the NCI can't do it (witness Von Hoff), nobody can. And you wonder why your NCI-designated hospital isn't doing chemosensitivity tests? Thank goodness for private researchers.

    BTW: in the first head-to-head clinical trial comparing gene expression patterns (molecular gene testing) with personalized cancer cytometric testing (also known as functional profiling or chemosensitivity testing), personalized cancer cytometrics was found to be substantially more accurate (Arienti et al., Journal of Translational Medicine 2011, 9:94).

    What the investigators did was to examine "Target Now"-type molecular targets and compare clinical responses against the results of the functional analyses, establishing that measuring the biology of the disease provides a more robust prediction of response. The term "driver" is less operative here, as these genes are not causative of the disease but of drug resistance.

  5. Pingback: BOOK REVIEW: OUTLIVING CANCER | CANCER STORY
