Chemosensitivity Testing – What It Is and What It Isn’t

Several weeks ago I was consulted by a young man regarding the management of his heavily pre-treated, widely metastatic rectal carcinoma. Upon review of his records, it was evident that under the care of both community and academic oncologists he had already received most of the active drugs for his diagnosis. Although his liver involvement could easily provide tissue for analysis, I discouraged his pursuit of an ex vivo analysis of programmed cell death (EVA-PCD) assay. Despite this, he and his wife continued to pursue the option.

As I sat across from the patient, with his complicated treatment history in hand, I was forced to admit that he looked the picture of health. With a pork pie hat tilted rakishly over his forehead, he showed few outward signs of the disease that ravaged his body. After a lengthy give and take, I offered to submit his CT scans to our gastrointestinal surgeon for an opinion on the ease with which a biopsy could be obtained. I then dropped a note to the patient's local oncologist, an accomplished physician whom I respected and admired for his practicality and patient advocacy.

A week later, I received a call from the patient’s physician. Though cordial, he was puzzled by my willingness to pursue a biopsy on this heavily treated individual. I explained to him that I was actually not highly motivated to pursue this biopsy, but instead had responded to the patient’s urging me to consider the option of performing an EVA-PCD assay. I agreed with the physician that the conventional therapy options were limited but noted that several available drugs might yet have a role in his management including signal transduction inhibitors.

I further explained that some patients develop a process of collateral sensitivity, whereby resistance to one class of drugs (platins, for example) can enhance the efficacy of another class of drugs (such as antimetabolites). Furthermore, patients may fail a drug, then be treated with several other classes of agents, only to manifest sensitivity to the original drug a year or two later.

Our conversation then took a surprising turn. First, he told me of his attendance at a dinner meeting, some 25 years earlier, where Dan Von Hoff, MD, had described his experiences with the clonogenic assay. He went on to tell me how that technique had proven unsuccessful, finding only a very limited role in the elimination of “inactive” drugs and showing no capacity to identify “active” ones. He finished by explaining that these shortcomings were the reason why our studies would be unlikely to provide useful information.

I found myself grasping for a handle on the moment. Here was a colleague, and collaborator, who had heard me speak on the topic a dozen times. I had personally intervened and identified active treatments for several of his patients, treatments that he would have never considered without me. He had invited me to speak at his medical center and spoke glowingly of my skills. And yet, he had no real understanding of what I do. It made me pause and wonder whether the patients and physicians with whom I interact on a daily basis understand the principles of our work. For clarity, in particular for those who may be new to my work, I provide a brief overview.

1.    Cancer patients are highly individual in their response to chemotherapies. This is why each patient must be tested to select the most effective drug regimen.
2.    Today we realize that cancer doesn’t grow too much; it dies too little. This is why older growth-based assays didn’t work and why cell-death-based assays do.
3.    Cancers must be tested in their native state, with the stromal, vascular and inflammatory elements intact. This is why we use microspheroids isolated directly from patients and do not grow or subculture our specimens.
4.    Predictions of response are not based on arbitrary drug concentrations but instead reflect the careful calibration of in vitro findings against patient outcomes – the all-important clinical database.
5.    We do not conduct drug resistance assays. We conduct drug sensitivity assays. These drug sensitivity assays have been shown, with statistical significance, to correlate with response, time to progression and survival.
6.    We do not conduct genomic analyses for there are no genomic platforms available today that are capable of reproducing the complexity, cross-talk, redundancy or promiscuity of human tumor biology.
7.    Tumors manifest plasticity that requires iterative studies. Large biopsies and sometimes multiple biopsies must be done to construct effective treatment programs.
8.    With chemotherapy, very often more is not better.
9.    New drugs are not always better drugs.
10.  And finally, cancer drugs do not know what diseases they were invented for.

While we could continue to enumerate the principles that guide our practice, one of the most important is humility. Medicine is a humbling experience, and cancer medicine even more so. Patients often know more than their doctors give them credit for. Failing to incorporate a patient’s input, experience and wishes into the treatment programs that we design limits our capacity to provide them the best outcome.

With regard to my colleague who seemed so utterly unfamiliar with these concepts, indeed for a large swath of the oncologic community as a whole, I am reminded of the saying “There’s none so blind as those who will not see.”

Reposted from March 23, 2012

Expert Advice – Another Wrinkle

Few dictates of modern medicine could be considered more sacrosanct than the prohibition of excess salt intake in our daily diets. For more than five decades, every medical student has had the principle of dietary salt reduction drummed into his or her head. Salt was the bane of human health, the poison that created hypertension, congestive heart failure, stroke and renal failure, and contributed to the deaths of untold millions in Western society. At least, so it seemed.

Three articles in the 08/14/2014 New England Journal of Medicine raise serious questions about the validity of that heretofore established principle of medical therapeutics.

Two of the articles utilized urinary sodium and potassium excretion as a surrogate for dietary intake to examine impact on blood pressure, mortality and cardiovascular events overall. A third article applied a Bayesian epidemiologic modeling technique to assess the impact of sodium intake on cardiovascular mortality.

The first two articles were unequivocal. Low sodium intake, that is, below 1.5 to 2 grams per day, was associated with an increase in mortality. High sodium intake, that is, greater than 6 grams per day, was also associated with an increase in mortality; but the middle ground, that which reflects the usual intake of sodium in most Western cultures, did not pose a risk. Thus, the sodium intake associated with the Western diet was safe. What is troubling, however, is that very low sodium diets, those promulgated by the most established authorities in the field, are in fact hazardous to our health.

It seems that every day we are confronted with a new finding that refutes an established dogma of modern medicine. I have previously written blogs on the intake of whole milk or consumption of nuts, both of which were eschewed by the medical community for decades before being resurrected as healthy foodstuffs in the new millennium. One by one these pillars of western medicine have fallen by the wayside. To this collection, we must now add the low-salt diet.

Thomas Kuhn, in his 1962 book The Structure of Scientific Revolutions, argued that an established paradigm is only displaced when a new one arises that can replace it. Perhaps these large meta-analyses will serve that purpose for sodium intake and health. One can only wonder what other medical sacred cows should now be subjected to these types of inquiries.

As a researcher in the field of human tumor biology and purveyor of the EVA-PCD platform for the prediction of chemotherapy drug response and oncologic discovery, I am intrigued, and encouraged, by the scientific community’s growing ability to reconsider its most established principles as new data force a re-examination of long-held beliefs. It may be only a matter of time before more members of the oncologic community re-examine the vast data supporting the predictive validity of these Ex Vivo Analyses and come to embrace these important human tumor phenotypic platforms. At least we can hope so.

Toward A 100% Response Rate in Human Cancer

Oncologists confront numerous hurdles as they attempt to apply the new cancer prognostic and predictive tests. Among them are the complexities of gene arrays that introduce practicing physicians to an entirely new lexicon of terms like “splice variant,” “gene rearrangement,” “amplification” and “SNP.”

Although these phrases may roll off the tongue of the average molecular biologist (mostly PhDs), they are foreign and opaque to the average oncologist (mostly MDs). To address this communication shortfall, laboratory service providers offer written addenda (some quite verbose) to clarify and illuminate the material. Some institutions have taken to convening “molecular tumor boards,” where physicians most adept at genomics serve as “translators.” Increasingly, organizations like ASCO offer symposia on modern gene science to the rank and file, a sort of Cancer Genomics for Dummies. If we continue down this path, oncologists may soon know more but understand less than any other medical sub-specialists.

However well intended these educational efforts may be, none of them is prepared to address the more fundamental question: How well do genomic profiles actually predict response? This broader issue lays bare our tendency to confuse data with results and big data with big results. To wit, we must remember that our DNA, originally provided to each of us in the form of a single cell (the fertilized ovum), carries all of the genetic information that makes us, us. From the hair follicles on our heads to the acid-secreting cells in our stomachs, every cell in our body carries exactly the same genetic data, neatly scripted onto our nuclear hard drives.
What makes this all work, however, isn’t the DNA on the hard drive, but instead the software that judiciously extracts exactly what it needs, exactly when it needs it. It’s this next level of complexity that makes us who we are. While it is true that you can’t grow hair or secrete stomach acid without the requisite DNA, simply having that DNA does not mean you will grow hair or make acid. Our growing reliance upon informatics has created a “forest for the trees” scenario, focusing our gaze upon nearby details at the expense of larger trends and insights.

What is desperately needed is a better approximation of the next level of complexity. In biology that moves us from the genotype (informatics) to the phenotype (function). To achieve this, our group now regularly combines genomic, transcriptomic or proteomic information with functional analyses. This enables us to interrogate whether the presence or absence of a gene, transcript or protein will actually confer that behavior or response at the system level.

I firmly believe that the future of cancer therapeutics will combine genomic, transcriptomic and/or proteomic analyses with functional (phenotypic) analyses.

Recent experiences come to mind. A charming patient in her 50s underwent a genomic analysis that identified a PI3K mutation. She sought an opinion. We conducted an EVA-PCD assay on biopsied tissue that confirmed sensitivity to the drugs that target PI3K. Armed with this information, we administered Everolimus at a fraction of the normal dose. The response was prompt and dramatic with resolution of liver function abnormalities, normalization of her performance status and a quick return to normal activities. A related case occurred in a young man with metastatic colorectal cancer. He had received conventional chemotherapies but at approximately two years out, his disease again began to progress.

A biopsy revealed that despite prior exposure to Cetuximab (the antibody against EGFR) there was persistent activity for the small molecule inhibitor, Erlotinib. Consistent with prior work that we had reported years earlier, we combined Cetuximab with Erlotinib, and the patient responded immediately.

Each of these patients reflects the intelligent application of available technologies. Rather than treat individuals based on the presence of a target, we can now treat based on the presence of a response. The identification of targets and confirmation of response has the potential to achieve ever higher levels of clinical benefit. It may ultimately be possible to find effective treatments for every patient if we employ multi-dimensional analyses that incorporate the results of both genomic and phenotypic platforms.

Outliving Hospice

For those of you who have read my book Outliving Cancer, you will recognize the chapter entitled “Outliving Hospice.” It is the description of one of my lung cancer patients.

The saga began in 2005, when this gentleman with metastatic lung cancer, under the care of the Veterans Administration in Los Angeles, presented to our group requesting a biopsy for an EVA-PCD assay to select therapy. Diagnosed some months earlier, he had progressed following first-line platinum-based chemotherapy. He was deemed untreatable and placed on hospice.

At his request, one of our surgical colleagues conducted a biopsy and identified a treatment combination borrowed from work done some years earlier by Japanese investigators. It worked perfectly for a year allowing him to return to a normal life.

At year two however, he relapsed. At that point, we confronted a dilemma – would we accept the inevitability of his progressive disease, fold our tent, and allow the patient to return to hospice care; or conduct yet another biopsy to determine the next line of therapy? If you have read the book, then you know how the story plays out. The new biopsy revealed the unexpected finding that the tumor had completely clocked around to an EGFR-driven cancer, highly sensitive to erlotinib (Tarceva). Placed upon oral Tarceva, he has been in remission ever since.

When I saw Rick two weeks ago at our six-month routine follow-up, he provided a copy of his February 2014 PET/CT scans which, once again, revealed no evidence of progressive disease. With the exception of the skin rashes associated with the therapy, he maintains a completely normal life. During our discussion he apprised me of an interesting fact. His survival, now approaching 10 years, constitutes, according to him, not merely the longest survivorship of any lung cancer patient under the care of the Los Angeles VA, nor of any patient under the care of the VA in California; he is the longest-surviving actively treated metastatic non-small cell lung cancer patient under the care of the Veterans Administration. Period! While I cannot, with certainty, vouch for this fact, I am quite certain that he is among the best outcomes that I have seen.

There are several points to be gleaned. The first is that every patient deserves the best possible outcome. The second is that hospice care is in the eye of the beholder. The third is that patients must take charge of their own care and demand the best possible interventions available. As an aside, you might imagine that a federal agency responsible for the costly care of tens of thousands of lung cancer patients every year would pay attention to results like Rick’s. Might there be other patients who could benefit from Ex-Vivo Analysis for the correct selection of chemotherapeutics?  One can only wonder.

In Cancer – If It Seems Too Good to Be True, It Probably Is

The panoply of genomic tests that have become available for the selection of chemotherapy drugs and targeted agents continues to grow. Laboratories across the United States are using gene platforms to assess what they believe to be driver mutations and then identify potential treatments.

One of the earliest entrants into the field, and one of the largest groups, offers a service that examines patients’ tumors for both traditional chemotherapy and targeted agents. This lab service was aggressively marketed under the claim that it was “evidence-based.” A closer examination of the “evidence,” however, revealed tangential references and cell-line data but little if any prospective clinical outcome data or positive and negative predictive accuracies.

I have observed this group over the last several years and have been underwhelmed by the predictive validity of their methodologies. Dazzled by the science however, clinical oncologists began sending samples in droves, incurring high costs for these laboratory services of questionable utility.

In an earlier blog, I described some of the problems associated with these broad-brush genomic analyses. Among the greatest shortcomings are Type I errors: the identification of signals (or analytes) that do not, in fact, predict a given outcome. These errors occur as signal-to-noise ratios become increasingly unfavorable when large unsupervised data sets are distilled down to recommendations, without anyone taking the time to prospectively correlate those predictions with patient outcomes.
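The arithmetic behind this multiple-testing problem is easy to demonstrate. The short simulation below is only a sketch with made-up numbers (10,000 markers, 50 patients, a 5% significance level), not drawn from any actual genomic dataset: it screens candidate markers that are pure noise against a patient outcome, and roughly 500 of them still appear “predictive” by chance alone.

```python
import numpy as np

rng = np.random.default_rng(0)

n_patients = 50        # patients with a measured outcome
n_markers = 10_000     # candidate "analytes", none truly predictive
alpha = 0.05

# Outcome and markers are independent noise: every association is spurious.
outcome = rng.standard_normal(n_patients)
markers = rng.standard_normal((n_markers, n_patients))

# Sample correlation of each marker with the outcome.
z_out = (outcome - outcome.mean()) / outcome.std()
z_mkr = (markers - markers.mean(axis=1, keepdims=True)) / markers.std(axis=1, keepdims=True)
r = (z_mkr @ z_out) / n_patients

# Approximate two-sided 5% threshold for |r| under the null: ~1.96/sqrt(n).
threshold = 1.96 / np.sqrt(n_patients)
false_hits = int(np.sum(np.abs(r) > threshold))

print(f"'significant' markers found by chance: {false_hits} "
      f"(~{alpha * n_markers:.0f} expected)")
```

This is why an unsupervised screen that nominates hundreds of analytes, absent prospective correlation with patient outcomes, is all but guaranteed to generate Type I errors.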

Few of these companies have actually conducted trials to prove their predictive values. This did not prevent these laboratories from offering their “evidence-based” results.

In April of 2013, the federal government indicted the largest purveyor of these techniques.  While the court case goes forward, it is not surprising that aggressively marketed, yet clinically unsubstantiated methodologies ran afoul of legal standards.

A friend and former professor at Harvard Business School once told me that there are two reasons why start-ups fail.  The first are those companies that “can do it, but can’t sell it.”  The other types are companies that “can sell it, but can’t do it.”  It seems that in the field of cancer molecular biology, companies that can sell it, but can’t do it, are on the march.

Does Chemotherapy Work? Yes and No.

A doctor goes through many stages in the course of his or her career. Like Jacques’ famous “Seven Ages of Man” soliloquy in Shakespeare’s As You Like It, there are similar stages in oncologic practice.

In the beginning, fresh out of fellowship, you are sure that your treatments will have an important impact on every patient’s life. As you mature, you must accept the failures as you cling to your successes. Later still, even some of your best successes become failures. That is, patients who achieve complete remissions and return year after year for follow-up with no evidence of disease suddenly present with a pleural effusion, an enlarged liver or a new mass in the breast, and the whole process begins again.

I met with just such a patient this week. Indeed, when she arrived for an appointment, I only vaguely remembered her name. After all, it had been 13 years since we met. When she reintroduced herself, I realized that I had studied her breast cancer and had found a very favorable profile for several chemotherapy drugs. As the patient resided in Orange County, CA, she went on to receive our recommended treatment under the care of one of my close colleagues, achieving an excellent response to neo-adjuvant therapy, followed by surgery, additional adjuvant chemotherapy, and radiation. Her decade-long remission reflected the accuracy of the assay-directed drug selection. She was a success story, just not a perfect success story. After all, her large tumor had melted away using the drugs we recommended, and her 10-year disease-free interval was a victory against such an aggressive cancer.

A dying leukemia cell

So what went wrong? Nothing, or more likely, everything. Cancer chemotherapy drugs were designed to do one thing very well: stop cancer cells from dividing. They target DNA synthesis and utilization, damage the double helix or disrupt cell division at the level of mitosis. All of these assaults upon normal cellular physiology target proliferation. Our century-long belief that cancer was a disease of cell growth had provided us a wealth of growth-inhibiting drugs. However, in the context of our modern understanding of cancer as a disease of abnormal cell survival (and of the need to kill cells outright to achieve remissions), the fact that these drugs worked at all can now be viewed as little more than an accident. Despite chemotherapy’s impact on cell division, it is these drugs’ unintended capacity to injure cells in ways they cannot easily repair (resulting in programmed cell death) that correlates with response. Cancer, as a disease, is relatively impervious to growth inhibition, but can, in select patients, be quite sensitive to lethal injury. While cancer drugs may have been devised as birth control devices, they work, when they do work at all, as bullets.

There is an old joke about aspirin for birth control. It seems that aspirin is an effective contraceptive. When you ask how this simple headache remedy might serve the purpose, the explanation is that an aspirin tablet held firmly between the knees of a young woman can prevent conception. The joke is emblematic of chemotherapy’s effect on cancer as a drug designed for one purpose, but can prove effective through some other unanticipated mechanism.

Chemotherapy does work. It just does not work in a manner reflective of its conceptualization or design. Not surprisingly, it does not work very well and rarely provides curative outcomes. Furthermore, its efficacy comes at a high price in toxicity, with that toxicity reflecting exactly what the chemotherapy drugs were designed to do: stop cells from growing. The hair follicles, bone marrow, immune system, gastrointestinal mucosa and reproductive tissues are all highly proliferative in their own right. Not surprisingly, chemotherapy exacts a heavy price on these normal (proliferative) tissues. It is the cancer cells, relatively quiescent throughout much of their lives, that escape the harmful effects.

As a medical oncologist in the modern era, I have recognized only too well the shortcomings of conventional cytotoxic drugs. It is for this reason that I use a laboratory platform to select the most effective drugs from among the many badly designed agents. The few good drugs capable of inducing lethal injury, culled from the herd, are the ones that the EVA-PCD assay selects for our patients. Applying this approach, we have doubled responses and prolonged survivals.

Over the past decade we have focused increasingly on the new signal transduction inhibitors and growth factor down regulators. If we can double the response rates and improve survivals using our laboratory assay to select among bad drugs, just imagine what our response rates will be when we apply this approach to good drugs.

Is There a Role for Maintenance Therapy in Cancer Treatment?

There is a long tradition of maintenance therapy in pediatric oncology. Children with acute lymphoblastic leukemia uniformly receive three stages of therapy: induction, consolidation, and finally maintenance. The maintenance stage consists of weekly, or even daily therapies.

The historical experience of relapse in this population led investigators to consistently expose these patients to drugs for a period of years. Despite the apparent success of this approach in childhood cancers, long-term maintenance therapy did not gain popularity in adult oncology. Why?

There are probably several reasons. One reason is that childhood leukemia is among the most chemo-responsive diseases in medicine. As such, there are many active drugs available for treatment and many non-cross-resistant maintenance schedules that can be employed.

A second reason is the relative tolerability of drugs like oral thioguanine or mercaptopurine that are used in chronic maintenance therapy. By contrast, adult tumors rarely achieve complete remissions; the number of active drugs has historically been very limited, and the tolerance of long-term treatment has been characteristically poor.

Despite this, there is an appealing rationale for maintenance therapy. Once we recognized and incorporated the tenets of apoptosis and programmed cell death into cancer management, we were forced to reconsider many of the principles of older treatment protocols.

Conceptually, maintenance allows for a cytotoxic exposure when the cell enters a “chemosensitive” period in its life cycle. Cancer cells that are “out-surviving” their normal counterparts often do so in a quiescent stage (G0/Gx). In order to capture these cells, drugs must be present in the body when they awaken from their dormancy. As we now achieve increasingly durable remissions in diseases like breast cancer, small cell lung cancer and ovarian cancer, we are confronting patients in long-term complete remission. When you add to this newfound population the availability of comparably mild agents, like the low-dose Gemcitabine/Cisplatin doublet, we now have at our disposal active drugs that can be safely continued for long periods of time.

Using laboratory selection to identify first line (induction), second line (consolidation) and finally third line (maintenance) schedules, we can now offer our patients well-tolerated combinations that offer the hope of more durable remissions.

GOG 178, in which continued Taxol dosing provided more durable remissions in ovarian cancer, provided the first inklings of this. Unfortunately, Taxol is toxic, and the more durable remissions came at an increasingly high price: neuropathy, myelosuppression, alopecia, fatigue and malaise, which greatly limited the utility of this approach. Yet this does not diminish the theoretical attractiveness of maintenance as we continue to develop targeted agents with more selective activity and milder toxicity profiles. We anticipate that maintenance therapies will become more widespread.

Based upon our experiences to date, we are successfully using this approach with our patients who achieve good clinical remissions.

Outliving Cancer

You can find more information about our use of maintenance therapy in Chapter 14 of the book Outliving Cancer.

This blog was originally posted in August 2011.

Chemosensitivity Testing Captures Attention of “Nature Biotechnology”

An interesting editorial appeared in the February 2013 issue of Nature Biotechnology titled “Dishing out cancer treatment.” The lead line reads, “Despite their limitations, in-vitro assays are a simple means for assessing the drug sensitivity of a patient’s cancer . . . we think assays deserve a second look.”

The author describes the unequivocal appeal of laboratory analyses that are capable of selecting drugs and combinations for individual patients. At a time when hundreds of new drugs are in development, drug discovery platforms that can mimic human tumor response in the laboratory are becoming increasingly attractive to patients and the pharmaceutical industry. While the author, rooted in contemporary molecular biology, examines the field through the lens of genomic, transcriptomic, proteomic and metabolomic profiling, he recognizes that these analyte-based approaches cannot capture the tumor in its microenvironment, and we now recognize that these micro-environmental influences are critical to accurate response prediction.

As one reads this piece, it is instructive to remember that no other platform can examine the dynamic interaction between cells and their microenvironment. No other platform can examine drug synergy. And no other platform can examine drug sequence.

It is these complexities however, that will guide the next generation of drug tests and ultimately the process of drug discovery. Even the most ardent adherents to genomic profiling must ultimately recognize that genotype does not equal phenotype. Yet, it is the tumor phenotype that we must study.

I am gratified that the editors of so august a journal as Nature Biotechnology have taken the time to reexamine this important field. Perhaps, if our most scientific colleagues are beginning to recognize the importance of functional analyses, it may be only a matter of time before the clinical oncology community follows suit.

The editor’s final line is poignant, “After years spent on the sidelines, perhaps in-vitro screening methods deserve another look.” We couldn’t agree more.

Why Oncologists Don’t Like In Vitro Chemosensitivity Tests

In human experience, the level of disappointment is directly proportional to the level of expectation. When, for example, the world was apprised of the purported development of cold fusion, a breakthrough of historic proportions, the expectations could not have been greater. Cold fusion, the capacity to harness the sun’s power without the heat and radiation, was so appealing that people rushed into a field about which they understood little. Those who remember this episode from 1989 will recall the shock and dismay of the scientists and investors who had rushed to sponsor and support the venture, only to be left out in the cold when the data came in.

Since the earliest introduction of chemotherapy, the ability to select active treatments before having to administer them to patients has been the holy grail of oncologic investigation. During the 1950s and ’60s, chemotherapy treatments were punishing. Drugs like nitrogen mustard were administered without the benefit of modern anti-emetics, and cancer patients suffered every minute. The nausea was extreme, the bone marrow suppression dramatic and the benefits marginal at best. With the introduction of cisplatin in the pre-Zofran/Kytril era, patients experienced a heretofore unimaginable level of nausea and vomiting. With each passing day, medical oncologists wondered why they couldn’t use the same techniques that had proven so useful in microbiology (bacterial culture and sensitivity) to select chemotherapy.

And then it happened. In June of 1978, the New England Journal of Medicine (NEJM) published a study involving a small series of patients whose tumors responded to drugs selected by in vitro (laboratory) chemosensitivity. Eureka! Everyone, everywhere wanted to do clonogenic (human tumor stem cell) assays. Scientists traveled to Tucson to learn the methodology. Commercial laboratories were established to offer the service. It was a new era of cancer medicine. Finally, cancer patients could benefit from effective drugs and avoid ineffective ones. At least, it appeared that way in 1978.

Five years later, the NEJM published an update of more than 8,000 patients who had been studied by clonogenic assay. It seemed that with all the hype and hoopla, this teeny, tiny little detail had been overlooked: the clonogenic assay didn’t work. Like air rushing out of a punctured tire, the field collapsed on itself. No one ever wanted to hear about using human tumor cancer cells to predict response to chemotherapy – not ever!

Meanwhile, a seminal paper published in the British Journal of Cancer in 1972 had described the phenomenon of apoptosis, a form of programmed cell death. All at once it became evident exactly why the clonogenic assay didn’t work. By re-examining the basic tenets of cancer chemosensitivity testing, a new generation of assays was developed that measured drug-induced programmed cell death, not growth inhibition. Cancer didn’t grow too much; it died too little. And these tests proved it.

Immediately, the predictive validity improved. Every time the assays were put to the test, they met the challenge. From leukemia and lymphoma to lung, breast, ovarian, and even melanoma, cancer patients who received drugs found active in the test tube did better than cancer patients who received drugs that looked inactive. Eureka! A new era of cancer therapy was born. Or so it seemed.

I was one of those naive investigators who believed that because these tests worked, they would be embraced by the oncology community. I presented my first observations in the 1980s, using the test to develop a curative therapy for a rare form of leukemia. Then we used this laboratory platform to pioneer drug combinations that, today, are used all over the world. We brought the work to the national cooperative groups, conducted studies and published the observations. It didn’t matter. Because the clonogenic assay hadn’t worked, regardless of its evident deficiencies, no one wanted to talk about the field ever again.

In 1600, Giordano Bruno was burned at the stake for suggesting that the universe contained other planetary systems. In 1633, Galileo Galilei was convicted of heresy and placed under house arrest for promoting the heliocentric model of the solar system. Centuries later, Ignaz Semmelweis, MD, was committed to an insane asylum after he (correctly) suggested that puerperal sepsis was caused by bacterial contamination. A century later, the discoverers of Helicobacter (the cause of peptic ulcer disease) were forced to suffer the slings and arrows of ignoble academic fortune until they were vindicated through the efforts of a small coterie of enlightened colleagues.

Innovations are not suffered lightly by those who prosper under established norms. To disrupt the standard of care is to invite the wrath of academia. The 2004 Technology Assessment published by Blue Cross/Blue Shield and ASCO in the Journal of Clinical Oncology, and ASCO’s update seven years later, reflect little more than an established paradigm attempting to escape irrelevance.

Cancer chemosensitivity tests work exactly according to their well-established performance characteristics of sensitivity and specificity. They consistently provide superior response rates and, in many cases, improved time to progression and even survival. They can improve outcomes, reduce costs, accelerate research and eliminate futile care. If the academic community is so intent on putting these assays to the test, then why has it repeatedly failed to support the innumerable efforts that our colleagues have made over the past two decades to fairly evaluate them in prospective randomized trials? It is time for patients to ask exactly why it is that their physicians do not use them, and to demand that these physicians provide data, not hearsay, to support their arguments.

Do We Already Have the Tools We Need to Cure Cancer?

The rapid-fire sequence of the annual American Association for Cancer Research (AACR) meeting, held in May, followed by the annual American Society of Clinical Oncology (ASCO) meeting, held in June, provides the opportunity to put scientific discoveries into perspective as they find their way from theoretical to practical.

Members of AACR, the basic science organization, ponder deep biological questions. Their spin-offs arrive in the hands of members of ASCO as Phase I and Phase II trials, some of which are then reported at ASCO meetings.

Many of the small molecules my laboratory has studied over the years are now slowly making their way from “Gee Whiz” to clinical therapy. At the ASCO meeting I attended many of the Phase I sessions, where alphabet soup compounds had their first “in-human” trials. As most of these compounds are familiar to me, I was very interested in these early, though highly preliminary, results.

Departing from one Developmental Therapy (Phase I) session, with visions of signal transduction pathways in my head, I attended a poster discussion on triple negative breast cancer. For those of you unfamiliar with the term, it refers to an increasingly common form of breast cancer that does not express the usual estrogen, progesterone, or HER-2 markers. Often occurring in younger patients, this form of breast cancer can be aggressive and unresponsive to some forms of therapy. Much work has gone into defining sub-types of this disease, and slow progress is being made.

As I examined the posters, one caught my eye: “Clinical Characteristics and Chemotherapy Options of Triple Negative Breast Cancer: Role of Classic CMF Regimen” (Herr, MH, et al, abstract #1053, ASCO 2012). What these investigators showed in a series of 826 breast cancer patients was that those treated with the oldest drug combination for breast cancer (CMF) did better than those who received the more modern and more intensive anthracycline- or taxane-based regimens. CMF, originally developed by Italian investigators in the 1970s, was the principal therapy for this disease for two decades before it was replaced, first by anthracycline- and later by taxane-based treatments. What struck me was the unexpected superiority of this old regimen over its more modern, toxic and expensive brethren.

I began to wonder about other modern therapies and their real impact upon cancer outcomes. One study in HER-2 positive patients revealed relative equivalency between weekly Taxol, every-three-week Taxotere, and Abraxane-based therapy. With equivalent efficacy, the cheaper, older, less toxic weekly Taxol regimen proved the better choice. While most of the attendees at the ASCO meeting were considering how the newest VEGF inhibitor, regorafenib, or the addition of aflibercept, might impact their practices, I was somewhat underwhelmed by these statistically significant but clinically marginal survival advantages, all associated with great expense.

As I pondered the implications of the CMF results in triple negatives and those of the Taxol results in HER-2 positives, I considered other old-fashioned therapies with newfound potential. Among them were losartan, the angiotensin antagonist that influences tumor stroma, and itraconazole, a widely available anti-fungal therapy identified in an earlier published study as an inhibitor of the hedgehog pathway. While the pharmaceutical industry promotes the use of vismodegib, a hedgehog inhibitor, for basal cell skin cancer, and dozens of trials examine VEGF and FGF inhibitors, I wondered whether losartan or itraconazole or other simple compounds and combinations might not already provide many of the tools we need. Is it possible that effective treatments for cancer are at hand?

Lacking the tools to decipher the signals and combine the agents to greatest effect, are we destined to continue to blindly administer increasingly expensive, toxic, yet arguably no more effective therapies? With the myriad of drugs and combinations available today, might it be that we “can’t see the forest for the trees”?