What is Personalized Cancer Therapy?

Personalized therapy is the right treatment, at the right dose for the right patient. Like the weather, however, it seems that everyone’s talking about it, but no one is doing anything about it.

In its simplest form, personalized care is treatment designed to meet an individual's unique biological features. Like a key in a lock, the right drug or combination opens the door to a good outcome.

When, over the years, I lectured on the development of the cisplatin/gemcitabine doublet, my two boys were quite young. I would show a slide depicting a doorknob with a key in the keyhole, and I likened our lab's capacity to identify sensitivity to the cisplatin/gemcitabine combination to "unlocking" an individual's response.

At the time, my wife and I would leave the key on the inside of the front door, enabling us to unlock it when going out. We reasoned that our 2-year-old would be neither strong enough nor tall enough to turn the key and let himself outside. We reasoned wrong, for one day our son Alex reached up, turned the key and opened the door right in front of us. Lesson learned: Given the right key, anyone can open a door.

I continued my analogy by saying that even Arnold Schwarzenegger would be unable to open a door given the wrong key, but might, if he continued trying, snap it off in the lock.

The right key is the right treatment, effortlessly unlocking a good response, while the wrong key is the wrong treatment: more often than not too much, too late, akin to a solid tumor bone marrow transplant.

In recent years, personalized care has come to be considered synonymous with genomic profiling. While we applaud the breakthroughs in human genomics, today there is no molecular platform that can match patients to treatments. The objective response rate of just 10 percent, almost all in breast and ovarian cancer patients, in one study (Von Hoff DD et al, J Clin Oncol 2010;28(33):4877-83) suggests that cancer biology is demonstrably more complex than an enumeration of its constituent DNA base pairs. The unilateral focus on this area of investigation over others might be described as "the triumph of hope over experience" (James Boswell, Life of Samuel Johnson, 1791).

But hope springs eternal, and with it the very real possibility of improving our patients' outcomes. By accepting, even embracing, the complexity of human tumor biology, we are at the crossroads of a new future in cancer medicine.

William Withering (1741-1799), the English physician and botanist credited with discovering digitalis as the therapy for dropsy, i.e., congestive heart failure (Withering W, An Account of the Foxglove and Some of Its Medical Uses, 1785), had absolutely no idea what a membrane ATPase was when he made his remarkable discovery. It didn't matter. Cardiac glycosides provided lifesaving relief to those who suffered from this malady for fully two centuries before the Danish scientist Jens Christian Skou identified these membrane-bound enzymes, work for which he was awarded a Nobel Prize in 1997.

Similarly, penicillin, aspirin, and morphine were all in use for decades, centuries, even millennia before their actual modes of action were unraveled. Medical doctors must use any and all resources at their disposal to meet the needs of their patients. They do not need to know "how" something works so much as they (and their patients) need to know "that" it works.

The guiding principle of personalized medicine is to match patients to therapies. Nowhere in this directive is there a prescription of the specific platform to be used. Where genomic signatures provide useful insights for drug selection, as they do in APL (ATRA, arsenic trioxide), NSCLC (EGFr, ROS1, ALK) and CML (Imatinib, Dasatinib), then they should be used.

However, in those diseases where we haven't the luxury of known targets or established pathways, i.e. most human malignancies, more global assessments of human tumor biology should, indeed must, be used if we are to meet the needs of our patients. Primary culture analyses like the EVA-PCD® provide a window onto human tumor biology. They are vehicles for therapy improvement and conduits for drug discovery. Scientists and clinicians alike need to apply any and all available methodologies to advance their art. The dawn of personalized medicine will indeed be bright if we use all the arrows in our quiver to advance clinical therapeutics and basic research.

Reposted from May 2012

Investigators in Boston Re-Invent the Wheel

A report published in Cell from the Dana-Farber Cancer Institute describes a technique to measure drug-induced cell death in cell lines and human cancer cells. The method, "Dynamic BH3 profiling" (DBP), uses a BIM BH3 peptide to gauge the degree to which cancer cells are "primed" to die following exposure to drugs and signal transduction inhibitors. The results are provocative and suggest that in cell lines and some human primary tissues, the method may select for sensitivity and resistance.

We applaud these investigators’ recognition of the importance of phenotypic measures in drug discovery and drug selection and agree with the points that they raise regarding the superiority of functional platforms over static (omic) measures performed on paraffin fixed tissues. It is heartening that scientists from so august an institution as Dana-Farber should come to the same understanding of human cancer biology that many dedicated researchers had pioneered over the preceding five decades.

Several points bear consideration. The first, as these investigators correctly point out: "DBP should only be predictive if the mitochondrial apoptosis pathway is being engaged." This underscores the limitation of the methodology: it measures only one form of programmed cell death, apoptosis. It is well known that apoptosis is but one of many pathways of programmed cell death, which include necroptosis, autophagy and others.

While leukemias are highly apoptosis-driven, the same cannot so easily be said of many solid tumors like colon, pancreas and lung. That is, apoptosis may be a great predictor of response, except when it is not. The limited results with ovarian cancers (also apoptosis-driven) are far from definitive and may better reflect unique features of epithelial ovarian cancer among solid tumors than the broad generalizability of the technique.

A second point is that these "single cell suspensions" do not recreate the microenvironment of human tumors, replete with stroma, vasculature, effector immune cells and cytokines. As Rakesh Jain, a member of the same faculty, and others have so eloquently shown, cancer is not a cell but a system. Gauging the system by only one component may grossly underestimate the system's complexity, bringing to mind the allegory of the blind men and the elephant. Continuing this line of reasoning, how might these investigators apply their technique to newer classes of drugs that influence vasculature, fibroblasts or stroma as their principal modes of action? It is now recognized that microenvironmental factors may contribute greatly to cell survival in many solid tumors. Assay systems must be capable of capturing human tumors in their "native state" to accurately measure these complex contributions.

Thirdly, the ROC analyses consistently show that this 16-hour endpoint highly correlates with 72- and 96-hour measures of cell death. The authors state "that there is a significant correlation between ∆% priming and ∆% cell death" and return to this finding repeatedly. Given that existing short-term (72- to 96-hour) assays that measure global drug-induced cell death (apoptotic and non-apoptotic) in human tumor primary cultures have already established high degrees of predictive validity, with an ROC of 0.89, a 2.04-fold higher objective response rate (p = 0.0015) and a 1.44-fold higher one-year survival (p = 0.02), are we to assume that the key contribution of this technique is a 56-hour time advantage? If so, is this of any clinical relevance? The report further notes that 7/24 (29%) of ovarian cancer samples and 5/30 (16%) of CML samples could not be evaluated, rendering the efficiency of this platform demonstrably lower than that of many existing techniques that provide actionable results in over 90% of samples.
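For readers who want the arithmetic behind these comparisons made explicit, here is a minimal sketch in Python, using only the figures quoted above:

```python
# Figures quoted above: DBP reads out at 16 hours; existing functional
# assays read out at 72 to 96 hours.
dbp_hours, standard_hours = 16, 72
print(f"Time advantage: {standard_hours - dbp_hours} hours")  # 56 hours

# Evaluability: the fraction of samples yielding an interpretable result.
inevaluable = {"ovarian": (7, 24), "CML": (5, 30)}
for tissue, (lost, total) in inevaluable.items():
    print(f"DBP {tissue} evaluability: {(total - lost) / total:.0%}")
# Compare with the >90% actionable-result rate of existing platforms.
```

At roughly 71 and 83 percent evaluability, the time saved comes at a real cost in samples that never produce an answer.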

Most concerning, however, is the authors' lack of recognition of the seminal influence of previous investigators in this arena. One is left with the impression that this entire field of investigation began in 2008. It may be instructive for these researchers to read the first paper of this type in the literature, published in the JNCI in 1954 by Black and Speer. They might also benefit by examining the contributions of dedicated scientists like Larry Weisenthal, Andrew Bosanquet and Ian Cree, all of whom published similar studies with similar predictive validities many years earlier.

If this paper serves finally to alert the academic community to the importance of human tumor primary culture analyses for drug discovery and individual patient drug selection, then it will have served an important purpose for a field that has been grossly underappreciated and underutilized for decades. Mankind's earliest use of the wheel dates to Mesopotamia in 3500 BC. No one today would argue with the utility of this tool. Claiming to have invented it anew, however, is something quite different.

Genomic Profiling for Lung Cancer: the Good, the Bad and the Ugly

Genomic profiling has gained popularity in medical oncology. Using NextGen sequencing platforms, the protein-coding regions of human tumors, known as exomes, can be examined for mutations, amplifications, deletions, splice variants and SNPs. In select tumors the results can be extremely helpful. Among the best examples are adenocarcinomas of the lung, where EGFr, ALK and ROS-1 mutations, deletions and/or rearrangements identified by DNA analysis can guide the selection of "targeted agents" like Erlotinib and Crizotinib.

An article published in the May 2014 issue of JAMA reported results using probes for 10 "oncogenic driver" mutations in lung cancer patients. The investigators screened for at least one gene in 1,007 patients and for all 10 genes in 733. The most common alteration was k-ras at 25%, followed by EGFr in 17% and ALK in 8%. The incidence then fell off, with other EGFr mutations in 4%, B-raf mutations in 2%, and the remaining mutations each found in less than 1%.

Median survival, at 3.5 vs. 2.4 years, was improved for patients who received treatments guided by the findings (Kris MG et al, Using multiplex assays of oncogenic drivers in lung cancers to select targeted drugs, JAMA, May 2014). Do these results indicate that genomic analyses should be used for treatment selection in all patients? Yes and no.

Noteworthy is the fact that 28% of the patients had driver mutations in one of three genes: EGFr, HER2 or ALK. All three of these targets have commercially available agents in the form of Erlotinib, Afatinib and Crizotinib. Response rates of 50% or higher, with many patients enjoying durable benefits, have been observed. Furthermore, patients with EGFr mutations are often younger, female and non-smokers, whose tumors often respond better to both targeted and non-targeted therapies. These factors would explain, in part, the good survival numbers reported in the JAMA article. Today, a large number of commercial laboratories offer these tests as part of standard panels. And, like k-ras mutations in colon cancer or BCR-abl in CML (the target of Gleevec), the arguments in favor of the use of these analyses are strong.


But what of the NSCLC patients for whom no clear identifiable driver can be found? What of the 25% with k-ras mutations, for whom no drug exists? What of those with complex mutational findings? And finally, what of those patients whose tumors are driven by normal genes functioning abnormally? In these patients, no mutations exist at all. How best do we manage these patients?

I was reminded of this question as I reviewed a genomic analysis reported to one of my colleagues. He had submitted a tissue block to an east coast commercial lab when one of his lung cancer patients relapsed. The results revealed mutations in EGFr L858R & T790M, ERBB4, HGF, JAK2, PTEN, STK11, CCNE1, CDKN2A/B, MYC, MLL2 W2006, NFKBIA, and NKX2-1. With a tumor literally bristling with potential targets, what is a clinician to do? How do we take over a dozen genetically identified targets and turn them into effective treatment strategies? In this instance, too much information can be every bit as paralyzing as too little.

Our preferred approach is to examine the small molecule inhibitors that target each of the identified aberrancies in our laboratory platform. We prefer to drill down to the next level of certainty, i.e., cellular function. After all, the presence of a target does not a response make.

In this patient I would conduct a biopsy. This would enable us to examine the drugs and combinations that are active against the identified targets. A "hit" by the EVA-PCD assay would then isolate the "drivers" from the "passengers" and enable the clinician to intelligently select effective treatments. Combining genomic analyses with functional profiling (phenotypic analyses) provides the opportunity to turn speculative observations into actionable events.
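To make the workflow concrete, here is a minimal sketch of how a mutation report might be translated into a functional screening panel. The gene-to-drug pairings below are illustrative examples of known associations, not the panel actually used for this patient:

```python
# Hypothetical illustration: turning a mutation report into an ex vivo
# screening panel. Pairings are examples of known gene-drug associations.
mutation_report = ["EGFR L858R", "EGFR T790M", "ERBB4", "HGF", "JAK2",
                   "PTEN", "STK11", "CCNE1", "CDKN2A/B", "MYC"]

candidate_agents = {
    "EGFR": ["erlotinib", "afatinib"],   # EGFR kinase inhibitors
    "JAK2": ["ruxolitinib"],             # JAK1/2 inhibitor
    "PTEN": ["PI3K/mTOR inhibitors"],    # PTEN loss activates PI3K signaling
    "MYC":  [],                          # no direct inhibitor available
}

panel = []
for finding in mutation_report:
    gene = finding.split()[0]
    for drug in candidate_agents.get(gene, []):
        if drug not in panel:
            panel.append(drug)

# Each candidate is then tested against the patient's living tumor cells;
# only the agents that actually induce cell death (the "hits") go forward.
print("Agents to screen ex vivo:", panel)
```

The point of the exercise: the genomic report nominates candidates, but it is the functional result that separates the drivers from the passengers.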

This is the essence of Rational Therapeutics.

In Cancer – If It Seems Too Good to Be True, It Probably Is

The panoply of genomic tests that have become available for the selection of chemotherapy drugs and targeted agents continues to grow. Laboratories across the United States are using gene platforms to assess what they believe to be driver mutations and then identify potential treatments.

Among the earliest entrants into the field, and one of the largest, is a group that offers a service examining patients' tumors for both traditional chemotherapy and targeted agents. This lab service was aggressively marketed under the claim that it was "evidence-based." A closer examination of the "evidence," however, revealed tangential references and cell-line data, but little if any prospective clinical outcome data or positive and negative predictive accuracies.

I have observed this group over the last several years and have been underwhelmed by the predictive validity of their methodologies. Dazzled by the science, however, clinical oncologists began sending samples in droves, incurring high costs for laboratory services of questionable utility.

In an earlier blog, I described some of the problems associated with these broad-brush genomic analyses. Among the greatest shortcomings are Type I errors: the identification of signals (or analytes) that do not, in fact, predict a given outcome. These errors occur as signal-to-noise ratios become increasingly unfavorable when large, unsupervised data sets are distilled down to treatment recommendations without anyone taking the time to prospectively correlate those predictions with patient outcomes.
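The scale of the problem is easy to demonstrate. Here is a minimal simulation, assuming 20,000 genes that carry no predictive information at all, tested against a response label at the conventional p < 0.05 threshold (all numbers illustrative):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

n_patients, n_genes = 40, 20_000
# Expression data that is pure noise: no gene truly predicts outcome.
expression = rng.normal(size=(n_patients, n_genes))
response = rng.integers(0, 2, size=n_patients).astype(bool)

# Test every gene for a difference between responders and non-responders.
_, p_values = stats.ttest_ind(expression[response], expression[~response], axis=0)

hits = int(np.sum(p_values < 0.05))
print(f"'Significant' genes at p < 0.05: {hits}")  # roughly 1,000: 5% of 20,000
```

With no true signal anywhere in the data, roughly a thousand genes cross the significance threshold by chance alone. Unless such findings are prospectively correlated with patient outcomes, they are indistinguishable from noise.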

Few of these companies have actually conducted trials to prove their predictive values. This did not prevent these laboratories from offering their “evidence-based” results.

In April of 2013, the federal government indicted the largest purveyor of these techniques.  While the court case goes forward, it is not surprising that aggressively marketed, yet clinically unsubstantiated methodologies ran afoul of legal standards.

A friend and former professor at Harvard Business School once told me that there are two reasons why start-ups fail.  The first are those companies that “can do it, but can’t sell it.”  The other types are companies that “can sell it, but can’t do it.”  It seems that in the field of cancer molecular biology, companies that can sell it, but can’t do it, are on the march.

Does Chemotherapy Work? Yes and No.

A doctor goes through many stages in the course of his or her career. Like Jaques' famous "Seven Ages of Man" soliloquy in Shakespeare's "As You Like It," there are similar stages in oncologic practice.

In the beginning, fresh out of fellowship, you are sure that your treatments will have an important impact on every patient's life. As you mature, you must accept the failures as you cling to your successes. Later still, even some of your best successes become failures. That is, patients who achieve complete remissions and return year after year for follow-up with no evidence of disease suddenly present with a pleural effusion, an enlarged liver or a new mass in the breast, and the whole process begins again.

I met with just such a patient this week. Indeed, when she arrived for an appointment, I only vaguely remembered her name. After all, it had been 13 years since we met. When she reintroduced herself, I realized that I had studied her breast cancer and had found a very favorable profile for several chemotherapy drugs. As the patient resided in Orange County, CA, she went on to receive our recommended treatment under the care of one of my close colleagues, achieving an excellent response to neo-adjuvant therapy, followed by surgery, additional adjuvant chemotherapy, and radiation. Her decade-long remission reflected the accuracy of the assay-directed drug selection. She was a success story, just not a perfect success story. After all, her large tumor had melted away using the drugs we recommended, and her 10-year disease-free interval was a victory against such an aggressive cancer.


So what went wrong? Nothing, or more likely, everything. Cancer chemotherapy drugs were designed to do one thing very well: stop cancer cells from dividing. They target DNA synthesis and utilization, damage the double helix or disrupt cell division at the level of mitosis. All of these assaults upon normal cellular physiology target proliferation. Our century-long belief that cancer was a disease of cell growth provided us a wealth of growth-inhibiting drugs. However, in the context of our modern understanding of cancer as a disease of abnormal cell survival (and the need to kill cells outright to achieve remissions), the fact that these drugs worked at all can now be viewed as little more than an accident. Despite chemotherapy's impact on cell division, it is these drugs' unintended capacity to injure cells in ways they cannot easily repair (resulting in programmed cell death) that correlates with response. Cancer, as a disease, is relatively impervious to growth inhibition, but can in select patients be quite sensitive to lethal injury. While cancer drugs may have been devised as birth control devices, they work, when they do work at all, as bullets.

There is an old joke about aspirin for birth control. It seems that aspirin is an effective contraceptive: an aspirin tablet held firmly between the knees of a young woman can prevent conception. The joke is emblematic of chemotherapy's effect on cancer, a drug designed for one purpose that proves effective through some other, unanticipated mechanism.

Chemotherapy does work. It just does not work in a manner reflective of its conceptualization or design. Not surprisingly, it does not work very well and rarely provides curative outcomes. Furthermore, its efficacy comes at a high price in toxicity, with that toxicity reflecting exactly what the chemotherapy drugs were designed to do: stop cells from growing. Hair follicles, bone marrow, the immune system, gastrointestinal mucosa and reproductive tissues are all highly proliferative in their own right. Not surprisingly, chemotherapy exacts a heavy price on these normal (proliferative) tissues. It is the cancer cells, relatively quiescent throughout much of their lives, that escape the harmful effects.

As a medical oncologist in the modern era, I have recognized only too well the shortcomings of conventional cytotoxic drugs. It is for this reason that I use a laboratory platform to select the most effective drugs from among the many badly designed agents. Culling from the herd those few good drugs capable of inducing lethal injury, the EVA-PCD assay selects these agents for our patients. Applying this approach, we have doubled response rates and prolonged survivals.

Over the past decade we have focused increasingly on the new signal transduction inhibitors and growth factor down-regulators. If we can double response rates and improve survivals using our laboratory assay to select among bad drugs, just imagine what our response rates will be when we apply this approach to good drugs.

The High Cost of Cancer Care

An article by Scott Gottlieb, MD, in Forbes (Medicare Nixes Coverage for New Cancer Tests) described Medicare reimbursement for new molecular diagnostics. As many readers are aware, a growing number of diagnostic tests have been developed and marketed over recent years to identify and monitor the progress of cancer. Many of these tests are multiplexed gene or protein panels that identify prognostic groups using nomograms developed from prospective or retrospective analyses. The 21-gene Oncotype DX and the related MammaPrint are among the most widely used. Related tests for lung, colon, and other cancers are in development.

With the explosion of assays designed to personalize cancer care comes the expense of conducting these analyses. Medicare, as the largest provider of medical insurance in the United States, is at the leading edge of cost containment. Not surprisingly, HHS takes a jaundiced view of adding tests without clear evidence of cost-benefit.

The issue is far broader than cost analysis. It goes to the very heart of what we describe as personalized medicine. Every patient wants the right treatment for their disease. Every laboratory company wants to sell its services. Where the supply and demand curves meet, however, is no longer set by market forces. In this instance, third-party reimbursers set the fee, and the companies must then determine whether they can provide their service at that cost.

The problem, as with all economic analysis, is meeting patients' unlimited wants with limited resources. Two solutions can be envisaged. On the one hand, medical care progressively moves to a scenario of haves and have-nots, wherein only wealthier individuals can afford the drugs and interventions that are beyond the price range of most. On the other hand, care is rationed, and only those treatments and interventions that rise to the highest level of evidence are made available.

While the subject of this article was sophisticated diagnostic tests, it will only be a matter of time before these same econometric analyses begin to limit the availability of costly drugs, such as highly expensive targeted agents. In a recent editorial published in Blood, leading leukemia experts pointed out that 11 of the 12 recently approved drugs cost $10,000 or more per month.

As we examine the rather grim prospect of unaffordable or rationed care, a glimmer of hope can be seen. Expensive and relatively insensitive molecular diagnostic tests used to select expensive targeted agents could be replaced by less expensive testing platforms. The dramatic, yet brief, responses observed for many targeted agents reflect the shortcoming of linear thinking applied to manifestly non-linear human biology, characterized by cross-talk, redundancies and unrecognized hurdles. To address these complexities, phenotypic analyses (the phenotype being the end product of genomic, transcriptomic and proteomic events) provide global assessments of tumor response to drugs, combinations and signal transduction inhibitors. These more discriminating results identify cellular response at the level of biology, not just informatics. While it is theoretically possible that high-throughput genomic analyses using neural networks may ultimately provide similar information, it is unlikely that most patients will have ready access to a Cray computer to decipher their results.

We need to stop working hard and start working smart. The answers to the many questions raised by the Forbes article regarding resource allocation in cancer treatment may already be at hand.

Why Oncologists Don’t Like In Vitro Chemosensitivity Tests

In human experience, the level of disappointment is directly proportional to the level of expectation. When, for example, the world was apprised of the successful development of cold fusion, a breakthrough of historic proportions, the expectations could not have been greater. Cold fusion, the capacity to harness the sun's power without the heat and radiation, was so appealing that people rushed into a field about which they understood little. Those who remember this episode, which began in 1989, will recall the shock and dismay of the scientists and investors who rushed to sponsor and support the venture, only to be left out in the cold when the data came in.

Since the earliest introduction of chemotherapy, the ability to select active treatments before having to administer them to patients has been the holy grail of oncologic investigation. During the 1950s and 60s, chemotherapy treatments were punishing. Drugs like nitrogen mustard were administered without the benefit of modern anti-emetics, and cancer patients suffered every minute. The nausea was extreme, the bone marrow suppression dramatic and the benefits marginal at best. With the introduction of cisplatin in the pre-Zofran/Kytril era, patients experienced a heretofore unimaginable level of nausea and vomiting. With each passing day, medical oncologists wondered why they couldn't use the same techniques that had proven so useful in microbiology (bacterial culture and sensitivity) to select chemotherapy.

And then it happened. In June of 1978, the New England Journal of Medicine (NEJM) published a study involving a small series of patients whose tumors responded to drugs selected by in vitro (laboratory) chemosensitivity. Eureka! Everyone, everywhere wanted to do clonogenic (human tumor stem cell) assays. Scientists traveled to Tucson to learn the methodology. Commercial laboratories were established to offer the service. It was a new era of cancer medicine. Finally, cancer patients could benefit from effective drugs and avoid ineffective ones. At least, it appeared that way in 1978.

Five years later, the NEJM published an update of more than 8,000 patients who had been studied by clonogenic assay. It seemed that with all the hype and hoopla, this teeny, tiny little detail had been overlooked: the clonogenic assay didn’t work. Like air rushing out of a punctured tire, the field collapsed on itself. No one ever wanted to hear about using human tumor cancer cells to predict response to chemotherapy – not ever!

Meanwhile, a seminal paper published in the British Journal of Cancer in 1972 had described the phenomenon of apoptosis, a form of programmed cell death. All at once it became evident exactly why the clonogenic assay didn't work. By re-examining the basic tenets of cancer chemosensitivity testing, a new generation of assays was developed that measured drug-induced programmed cell death, not growth inhibition. Cancer didn't grow too much; it died too little. And these tests proved it.

Immediately, the predictive validity improved. Every time the assays were put to the test, they met the challenge. From leukemia and lymphoma to lung, breast, ovarian, and even melanoma, cancer patients who received drugs found active in the test tube did better than cancer patients who received drugs that looked inactive. Eureka! A new era of cancer therapy was born. Or so it seemed.

I was one of those naive investigators who believed that because these tests worked, they would be embraced by the oncology community. I presented my first observations in the 1980s, using the test to develop a curative therapy for a rare form of leukemia. Then we used this laboratory platform to pioneer drug combinations that, today, are used all over the world. We brought the work to the national cooperative groups, conducted studies and published the observations. It didn’t matter. Because the clonogenic assay hadn’t worked, regardless of its evident deficiencies, no one wanted to talk about the field ever again.

In 1600, Giordano Bruno was burned at the stake for suggesting that the universe contained other planetary systems. In 1633, Galileo Galilei was convicted of heresy and confined to house arrest for promoting the heliocentric model of the solar system. Centuries later, Ignaz Semmelweis, MD, was committed to an insane asylum after he (correctly) suggested that puerperal sepsis was caused by bacterial contamination. A century after that, the discoverers of Helicobacter (the cause of peptic ulcer disease) were forced to suffer the slings and arrows of ignoble academic fortune until they were vindicated through the efforts of a small coterie of enlightened colleagues.

Innovations are not suffered lightly by those who prosper under established norms. To disrupt the standard of care is to invite the wrath of academia. The 2004 Technology Assessment published by Blue Cross/Blue Shield and ASCO in the Journal of Clinical Oncology, and ASCO's update seven years later, reflect little more than an established paradigm attempting to escape irrelevance.

Cancer chemosensitivity tests work exactly according to their well-established performance characteristics of sensitivity and specificity. They consistently provide superior response rates and, in many cases, improved time to progression and even survival. They can improve outcomes, reduce costs, accelerate research and eliminate futile care. If the academic community is so intent on putting these assays to the test, then why has it repeatedly failed to support the innumerable efforts that our colleagues have made over the past two decades to evaluate them fairly in prospective randomized trials? It is time for patients to ask exactly why it is that their physicians do not use them, and to demand that these physicians provide data, not hearsay, to support their arguments.
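For readers who want to see how sensitivity and specificity translate into clinical utility, here is a minimal sketch using Bayes' rule. The performance figures and the 30 percent baseline are illustrative assumptions, not values from any specific assay report:

```python
def predictive_values(sensitivity: float, specificity: float, prevalence: float):
    """Convert assay performance into predictive values via Bayes' rule.

    prevalence = fraction of patients whose tumors are truly drug-sensitive.
    """
    ppv = (sensitivity * prevalence) / (
        sensitivity * prevalence + (1 - specificity) * (1 - prevalence))
    npv = (specificity * (1 - prevalence)) / (
        specificity * (1 - prevalence) + (1 - sensitivity) * prevalence)
    return ppv, npv

# Illustrative values: 80% sensitivity, 80% specificity,
# and 30% of patients truly sensitive to the tested drug.
ppv, npv = predictive_values(0.80, 0.80, 0.30)
print(f"Response rate among assay-positive patients: {ppv:.0%}")    # ~63%
print(f"Fold improvement over the 30% baseline: {ppv / 0.30:.2f}")  # ~2.1x
```

Even a modestly accurate test roughly doubles the response rate among assay-positive patients, which is the magnitude of benefit reported in the literature cited above.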

Scientifically-based Functional Profile Under Fire

Winston Churchill once said, “Democracy is the worst form of government, except for all the others that have been tried.” I am reminded of this quote by a “conversation” that recently took place on a cancer patient forum.

A patient wrote that they had requested that tissue be submitted for sensitivity analysis and their physician responded by describing this work as a scam. A scam is defined by the American Heritage Dictionary as slang for a “fraudulent business scheme.”

Continuing Churchill’s thread, we might respond, “that laboratory directed therapies are the worst form of cancer therapy, except for all the others that have been tried.”

Using functional profiling we measure the effect of drugs, radiation, growth factor withdrawal and signal transduction inhibition upon human tumors. Using our extensive database we compare the findings with the results of similar patients – by diagnosis and treatment status – to determine the most active and least toxic drug or combination for each patient.

The test isn't perfect. Some patients' cancer cells (about 5 to 7 percent of the time) do not survive transport and processing, so no assay can be performed at all. Some patients are resistant to all available drugs and combinations. And finally, based on the established performance characteristics of the test, we can only double, or in some circumstances triple, the likelihood of a clinical response. This is all well documented in the peer-reviewed literature.
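Putting those numbers together, a back-of-the-envelope calculation shows what they imply for a cohort; the 25 percent baseline response rate here is an illustrative assumption, while the rest are the figures quoted above:

```python
# Assumed baseline response rate to empirically chosen therapy.
baseline = 0.25
assay_failure = 0.06  # midpoint of the 5 to 7 percent transport/processing losses

for fold in (2.0, 3.0):  # the quoted doubling to tripling of response likelihood
    # Patients with an evaluable assay get the improved rate;
    # the few failed assays default to the baseline.
    expected = (1 - assay_failure) * fold * baseline + assay_failure * baseline
    print(f"{fold:.0f}x benefit -> expected cohort response rate {expected:.0%}")
```

Even after accounting for the samples that never yield a result, the expected response rate for the cohort roughly doubles or triples.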

Despite this, it appears that in the eyes of some beholders these strikingly good results constitute a "scam." So let us, in the spirit of fairness and academic discourse, examine their results.

First, it must be remembered that today, in 2012, only a minority of cancer patients actually show objective responses to available cancer therapies. Five-year survivals, the benchmark of success for advanced disease in oncology (disease that has spread beyond the primary site), have not changed in more than five decades.

The highly lauded clinical trial process, according to a study from the University of Florida, provides a better outcome for a new drug over an old one only once in every seven clinical trials conducted.

More disturbing, only one out of 14 clinical trials provides a survival advantage of 50 percent or greater for the successful treatment group.

According to a study from Tufts University, it takes 11 years and more than $1 billion for a new drug to receive FDA approval.

And in a study published in the New England Journal of Medicine, only 8 percent of drugs that complete Phase I (establishing safety for human use) ever see the light of day as clinical therapies. This is the legacy of NCCN-guided, university-approved, ASCO-authorized clinical therapeutics programs to date.

As a practicing medical oncologist I am only too familiar with the failings of our modern clinical trial system. Having witnessed the good outcomes of our own patients on assay-directed protocols whose benefits derive from the intelligent use of objective laboratory data for the selection of chemotherapy drugs, I for one will NEVER return to business-as-usual oncology, regardless of what moniker the naysayers might choose to attach to this approach.

The Unfulfilled Promise of Genomic Analysis

In the March 8 issue of the New England Journal of Medicine, investigators from London, England, reported disturbing news regarding the predictive validity and clinical applicability of human tumor genomic analysis for the selection of chemotherapeutic agents.

As part of an ongoing clinical trial in patients with metastatic renal cell carcinoma (E-PREDICT), these investigators had the opportunity to biopsy metastatic lesions and compare their genomic profiles with those of the primary tumors. Their findings are highly instructive, though not terribly unexpected. Using exon capture, they identified numerous mutations, insertions and deletions; Sanger sequencing was used to validate the mutations. When they compared biopsy specimens taken from different regions of the same kidney, they found significant heterogeneity from one region to the next.

Similar degrees of heterogeneity were observed when they compared these primary lesions with the metastatic sites of spread. The investigators inferred a branched evolution in which tumors evolve into clones, some spreading to distant sites while others manifest different features within the primary tumor itself. Interestingly, when primary sites were matched with the metastases that arose from them, there was greater consanguinity between a primary and its metastasis than between one primary region and another within the same kidney. Another way of looking at this: your grandchildren look more like you than like your neighbor.

Tracking additional mutations, these investigators found unexpected changes involving histone methyltransferase, histone demethylase and the phosphatase and tensin homolog (PTEN). These findings were perhaps among the most interesting of the entire paper, for they support the principle of phenotypic convergence, whereby similar phenotypic changes arise by Darwinian selection despite arising from precursors with different genomic heritages. This fundamental observation suggests that cancers do not arise from genetic mutation, but instead select advantageous mutations for their survival and success.

The accompanying editorial by Dr. Dan Longo makes several points worth noting. First, he states that "DNA is not the whole story." This should be familiar to those who follow my blogs, as I have said the same on many occasions. In his discussion, Dr. Longo then references Albert Einstein, who said, "Things should be made as simple as possible, but not simpler." Touché.

I appreciate and applaud Dr. Longo's comments, for they echo our sentiments completely. This article is only the most recent in a growing litany of observations that call into question molecular biologists' preternatural fixation on genomic analyses. Human biology is not simple, and malignantly transformed cells are more complex still. Investigators who insist upon using genomic platforms to force disorderly cells into artificially ordered sub-categories have once again been forced to admit that these oversimplifications fail to provide the insights needed to advance cancer therapeutics. Those laboratories and corporations that offer high-priced genomic analyses for the selection of chemotherapy drugs should read this and related articles carefully, as these reports portend a troubling future for their current business model.

Best Chance for Colon Cancer Survival – Don’t Let It Start

Two papers in the February 23, 2012, New England Journal of Medicine reported important findings in the fight against colon cancer. The first paper (Zauber AG et al; Colonoscopic Polypectomy and Long-Term Prevention of Colorectal-Cancer Deaths), conducted by American investigators, establishes the benefit of polyp removal in the prevention of death from colorectal cancer. The study, conducted on 2,602 patients who had adenomas removed, reveals a 53 percent reduction in mortality from colon cancer compared with the expected death rate from the disease in this population.

To put this into perspective – virtually no intervention in the advanced disease setting provides a survival advantage. The best we can usually do once the disease is established is an improvement in time to progression. When we do observe a true survival advantage it is usually in the range of a few percentage points and never of this magnitude. How might we explain this astonishingly positive result?

One way to view this finding is to reexamine the biology of cancer. One of the leading experts in the field, Bert Vogelstein, MD, of Johns Hopkins, explained colon carcinogenesis as a pattern of gene perturbations starting with atypia, progressing to carcinoma in situ and ending with invasive, metastatic disease. According to Dr. Vogelstein, the average colon cancer found at the time of colonoscopy has been present in that person's colon for 27 years. From there it is only a hop, skip and a jump from a one-centimeter adenomatous polyp to metastatic (lethal) disease, all playing out over the last three years of the disease's natural history. Thus, cancer truly is a disease that doesn't grow too much but dies too little, and interrupting this process while it is still slumbering can, it would seem, lead to cures.

What I find surprising is the success of the strategy. Since it is now well established that cancer can metastasize once it has achieved the rather diminutive proportions of 0.125 cubic centimeters (a nodule just half a centimeter on a side) or less, while the average polyp can only be detected at one or more cubic centimeters, it is our good fortune that so many cancers chose not to (or could not) metastasize prior to detection. Reading between the lines, the 12 patients who died of colon cancer, as opposed to the expected 25.4, are presumably those with early metastasizing disease. The next frontier will be the detection of these cancers when they are teenagers and not 20-somethings. It may be that proteomic analyses will provide an avenue for earlier detection in the future.
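The headline figure follows directly from the observed versus expected deaths quoted above; a one-line check:

```python
observed_deaths = 12     # colon cancer deaths in the adenoma-removal cohort
expected_deaths = 25.4   # deaths expected in a comparable unscreened population

smr = observed_deaths / expected_deaths  # standardized mortality ratio, ~0.47
print(f"Mortality reduction: {1 - smr:.0%}")  # ~53%
```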

The second article is a European study (Quintero E et al; Colonoscopy versus Fecal Immunochemical Testing in Colorectal-Cancer Screening) that compared colonoscopy with fecal blood testing in a large cohort of patients. While the rates of detection for colorectal cancer were similar, the rates of detection of both advanced and early adenomas favored colonoscopy (p < .001). This study represents an interesting adjunct to the American study described above. Specifically, if the early detection (and removal) of adenomas confers a survival advantage, then it could be argued that colonoscopy, by virtue of its higher detection rate for these precancerous adenomas, is the preferred screening modality. With over 50,000 deaths attributed to colorectal cancer in the U.S. each year, the public health benefit of colonoscopy becomes an important point of discussion. Until now, yearly fecal occult blood testing or sigmoidoscopy every several years has been considered equivalent to colonoscopy every 10 years starting at age 50. Do we need to move colonoscopy to the front of the line?

What is most interesting about both of these reports is the low-tech nature of the study modalities and the astonishing efficacy of their application. Colonoscopies have been conducted for decades. They are comparatively simple, do not require Affymetrix chips, and yet provide demonstrable benefit that appears to exceed anything offered, to date, by the "genomic revolution." Perhaps we should all keep an open mind about other comparatively low-tech methodologies that can provide survival advantages.