Is Rationed or Rational Medical Care in Our Future?

We are witness to a sea change in medicine. Doctors and nurses are being replaced by “healthcare providers”; medical judgment is being phased out in favor of therapeutic algorithms; and the considered selection of treatments is giving way to rigid therapy guidelines. All the while, the regulatory environment increasingly precludes the use of “off-label” drugs. It is understandable why insurers, governmental entities and hospital chains might welcome these changes. After all, once therapies have been reduced to standardized formulae, one can predict costs, resource allocations and financial exposures to the twentieth decimal place. For many medical conditions, these approaches will provide adequate care for the majority of patients.

But what of the outliers? What of complicated disease entities like cancer, whose complexity and variability challenge even the best minds? How do we force the round peg of cancer therapy into the square hole of formulaic care?

There are several answers. The first is the least attractive: In this scenario, predicated upon cancer’s incidence in an older population at or beyond the end of its productive (and reproductive) years, we simply don’t allocate resources. Most civilized modern societies haven’t the stomach for such draconian measures and will seek less blunt instruments.

The second is a middle-of-the-road approach. In this scenario, standardized guidelines are developed that provide the same treatment to every patient with a given diagnosis. Every medical oncologist knows the drill: FOLFOX for every colon cancer, Cytoxan plus docetaxel for every breast cancer and carboplatin plus paclitaxel for every ovarian cancer. The treatments work adequately well, the schedules are well established, the toxicities are well known and no one is cured. The beauty of this approach is that the average patient has an average outcome with the average treatment. By codifying these regimens into standardized algorithms, we may soon be able to eliminate physicians entirely, first with nurse practitioners and physician assistants and, ultimately, with computers. What is perhaps most surprising about this scenario has been the willingness of the medical oncology community to embrace it, a sort of professional self-induced extinction. At the time of this writing, this is the predominant model, and it is becoming increasingly entrenched under the auspices of the NCCN and related guidelines. The operative term is guidelines, inasmuch as these “guidelines” are rapidly becoming “dictates.”

The final approach, and the one I find most appealing, is that which utilizes the clinical, scientific, laboratory and technical acumen of the physician to the maximum. Combining diagnostic skill with scientific insight, the physician becomes the captain of the ship, who must assume control from the autopilot once the vessel has entered the tempest and use his/her experience and training to guide the patient to a soft landing. This requires the capacity to think and demands an up-to-date knowledge of many disciplines. The judicious application of laboratory-directed approaches further enhances this skill set, introducing objective data that are then used to guide drug and treatment selections. Predicated upon an understanding of the patient’s tumor biology, cancer therapy becomes an intellectual exercise that draws upon the literature and a knowledge of pharmacology and physiology. Adding the wealth of newly developed signal inhibitors to the mix only enhances the odds of a good outcome.

This approach improves responses and eliminates futile care. It provides patients the opportunity to participate in their own management. Correctly delivered, it would make available to every patient any FDA-approved drug. While it might seem to some that this would open the floodgates of drug use, I would strenuously disagree. It would instead limit drug administration to those patients most likely to respond, a goal currently pursued by virtually every major institution, yet accomplished by none. While a handful of targeted approaches have come to fruition in the last few years (erlotinib for EGFR-mutated lung cancer and sunitinib in kidney cancer), most of the molecular profiling being done today doesn’t aid in the selection of therapy but instead provides negative information (e.g., RAS mutation in colon cancer, ERCC1 overexpression in lung cancer), enjoining the physician against the use of a given agent but then leaving unfortunate patients to fend for themselves amidst a panoply of randomly chosen options.

This is the approach that I have chosen to adopt in my own care of cancer patients. Our rapidly growing successes in ovarian, breast and lung cancers, melanoma, the leukemias and other diseases could and should serve as a model for others.

Targeted Therapies for Cancer Confront Hurdles

The September 1 issue of the ASCO Post, a periodical published by the American Society of Clinical Oncology, features an article entitled “Research in Combining Targeted Agents Faces Numerous Challenges.” The contributors to the article, written by Margo J. Fromer, participated in a conference sponsored by the Institute of Medicine. These scientists, representing both public and private institutions, examined the obstacles that confront researchers in their efforts to develop effective combinations of targeted agents.

One of the participants, Jane Perlmutter, PhD, of the Gemini Group, pointed out that advances in genomics have provided sophisticated targeted therapies, but noted, “cellular pathways contain redundancies that can be activated in response to inhibition of one or another pathway, thus promoting emergence of resistant cells and clinical relapse.”

James Doroshow, MD, deputy director for clinical and translational research at the NCI, said that “the mechanisms of action for a growing number of targeted agents that are available for trials are not completely understood.” He went on to say that the “lack of the right assays or imaging tools means an inability to assess the target effect of many agents.” He added that “we need to investigate the molecular effects . . . in surrogate tissues,” and concluded, “this is a huge undertaking.”

Michael T. Barrett, PhD, of TGen, pointed out that “each patient’s cancer could require its own specific therapy.” This was followed by Kurt Bachman of GlaxoSmithKline, who opined, “the challenge is to identify the tumor types most likely to respond, to find biomarkers that predict response, and to define the relationship of the predictors to the biology of the inhibitors.”

When I read this article, I dashed to my phone and waited breathlessly for these august investigators to contact me for guidance. It was obvious that they were describing precisely the work that my colleagues and I have been doing for the past two decades. Obviously, there had been an epiphany. The complexities and redundancies of human tumor biology had finally dawned on these investigators, who had previously clung unwaveringly to their analyte-based molecular platforms.

Eureka! Our day of vindication was at hand. The molecular biologists humbled by the manifest complexity of human tumor biology had finally recognized that they were outgunned and would, no doubt, be contacting me presently. Whole-cell experimental models had gained the hegemony they so rightly deserved. The NCI and big pharma would be beating a path to my door.

But the call never came. Perhaps they lost my number. Yes, that must be it. So let me provide it: 562.989.6455. Remember I’m on Pacific Daylight Time.

Why Some Patients Refuse Chemotherapy – And Why Some of Them Shouldn’t

In the June 13, 2011, issue of Time magazine, Ruth Davis Konigsberg described cancer patients who refuse to take potentially lifesaving therapy. Her article, titled “The Refuseniks – why some cancer patients reject their doctor’s advice,” examined the rationale applied by patients who decline chemotherapy. Many of these patients are rational, articulate, intelligent and capable individuals. While some decline interventions by virtue of religious belief, underlying depression or the loss of loved ones, many of these patients make compelling arguments in favor of their decisions.

When we examine the basis of these patients’ therapeutic nihilism, much of it reflects the uncertainty of benefit combined with the certainty of toxicity. What these patients articulate is the fundamental dilemma confronted by cancer patients, what we might describe as their logical assessment of “return on investment.”

Everything in life is based on probabilities. Will your husband or wife be true? Will you have a boy or a girl? Will you live to see retirement? Will your nest egg be adequate? Cancer medicine is no different.

Will the treatment I’m being offered extend my life long enough to be worth the short- and medium-term toxicities that I will certainly suffer?

While I cannot address this question with regard to surgery or radiation, I feel uniquely qualified to do so in the context of chemotherapy. What, after all, is a chemosensitivity assay? When correctly performed, it is a laboratory test that dichotomizes groups of patients with average likelihoods of response (e.g., 20%, 30% or 40%) into those who are more or less likely to respond based on the results. On average, a patient found sensitive in vitro has a twofold improvement in response, while a patient found resistant has a demonstrably lower likelihood of benefit. We have now shown this to be true in breast, ovarian and non-small cell lung cancers, as well as in melanoma, childhood and adult leukemias, and other diseases.

To address the misgivings of the Refuseniks, we might ask the following question: Would you take a treatment that provided a 30 percent likelihood of benefit? How about 40 percent? 50 percent? 60 percent? 70 percent? Or 80 percent? While many might decline the pleasure of chemotherapy at a 20–30 percent response rate, a much larger number would look favorably upon a 70 percent response rate. On the flip side, a patient offered a treatment with a 50 percent likelihood of benefit (on average), who by virtue of a laboratory study learns that his or her true response rate is closer to 19 percent (based on resistance in vitro), might very logically (and defensibly) decline treatment. These real-life examples reflect the established performance characteristics of our laboratory tests (Nagourney RA. Ex vivo programmed cell death and the prediction of response to chemotherapy. Current Treatment Options in Oncology 2006;7:103-110).
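To make that “return on investment” arithmetic concrete, here is a minimal sketch in Python. The fold-change factors are my own illustrative assumptions, inferred from the twofold improvement described above for assay-sensitive patients and from the 50 percent to 19 percent worked example; they are not published assay parameters.

```python
# A minimal sketch of the "return on investment" arithmetic discussed above.
# The fold-change factors are illustrative assumptions inferred from the text,
# not published assay parameters.

SENSITIVE_FOLD = 2.0   # ~twofold improvement for assay-sensitive patients
RESISTANT_FOLD = 0.38  # implied by the 50% -> 19% example in the text

def adjusted_response_rate(prior: float, sensitive: bool) -> float:
    """Adjust a population-average response rate by an ex vivo assay result."""
    fold = SENSITIVE_FOLD if sensitive else RESISTANT_FOLD
    return min(prior * fold, 1.0)

for prior in (0.20, 0.30, 0.40):
    s = adjusted_response_rate(prior, sensitive=True)
    r = adjusted_response_rate(prior, sensitive=False)
    print(f"average {prior:.0%}: assay-sensitive -> {s:.0%}, assay-resistant -> {r:.0%}")

# The 50% example from the text: resistance in vitro drops the estimate to ~19%.
print(f"average 50%: assay-resistant -> {adjusted_response_rate(0.50, sensitive=False):.0%}")
```

Under these assumptions, a patient facing an average 30 percent response rate would weigh a 60 percent likelihood if found sensitive against roughly 11 percent if found resistant, which is exactly the kind of information a rational Refusenik is asking for.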

Rather than bemoan the uncertainties of treatment outcome, shouldn’t we, as clinical oncologists, be addressing these patients’ very real misgivings with data and objective information? I, for one, believe so.

The False Economy of Genomic Analyses

We are witness to a revolution in cancer therapeutics. Targeted therapies, named for their capacity to target specific tumor-related features, are being developed and marketed at a rapid pace. Yet with an objective response rate of only 10 percent (Von Hoff et al. JCO, Nov 2011) reported for a gene array/IHC platform that attempted to select drugs for individual patients, we have a long way to go before these tests will have meaningful clinical applications.

So, let’s examine the more established, accurate and validated methodologies currently in use for patients with advanced non-small cell lung cancer. I speak of patients with EGFR mutations, for which erlotinib (Tarceva®) is an approved therapy, and those with ALK gene rearrangements, for which crizotinib (Xalkori®) has recently been approved.

The incidence of ALK gene rearrangement among patients with non-small cell lung cancer is in the range of 2–4 percent, while EGFR mutations are found in approximately 15 percent. These are largely mutually exclusive events. So, let’s do a “back of the napkin” analysis and cost out these tests in a real-life scenario.

One hundred patients are diagnosed with non-small cell lung cancer.
•    Their physicians order ALK gene rearrangement testing:    $1,500
•    And EGFR mutation analysis:    $1,900
•    The associated costs: ($1,500 + $1,900) × 100 patients =    $340,000
Remember that only 4 percent will be positive for ALK and 15 percent positive for EGFR, and that about 80 percent of ALK-positive patients respond to crizotinib and about 70 percent of EGFR-positive patients respond to erlotinib.

So, let’s do the math.

We get three crizotinib responses and 11 erlotinib responses: 3 + 11 = 14 responders.
Resulting in a cost per correctly identified patient of $340,000 ÷ 14 ≈    $24,286

Now, let’s compare this with an ex vivo analysis of programmed cell death.

Remember, the Rational Therapeutics panel of 16+ drugs and combinations tests both cytotoxic drugs and targeted therapies. In our soon-to-be-published lung cancer study, the overall response rate was 65 percent. So what does the EVA/PCD approach cost?

Again, one hundred patients are diagnosed with non-small cell lung cancer.
•    Their physicians order an EVA/PCD analysis:    $4,000
•    The associated costs: $4,000 × 100 patients =    $400,000
•    With 65 percent of patients responding, the cost per correctly identified patient is $400,000 ÷ 65 ≈    $6,154
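For readers who want to verify the back-of-the-napkin arithmetic, here is a minimal Python sketch that reproduces both calculations. Every dollar figure, incidence and response rate is taken from the text above; this is illustrative arithmetic, not a formal cost-effectiveness analysis.

```python
# A minimal sketch reproducing the back-of-the-napkin cost comparison above.
# All dollar figures, incidences and response rates come from the text.

PATIENTS = 100

# Molecular testing: every patient receives both the ALK and EGFR assays.
molecular_cost = (1_500 + 1_900) * PATIENTS   # $340,000
alk_responders = 3    # ~4 ALK-positive patients x ~80% response rate
egfr_responders = 11  # ~15 EGFR-positive patients x ~70% response rate
molecular_responders = alk_responders + egfr_responders  # 14

# EVA/PCD functional profiling: every patient receives one assay.
eva_cost = 4_000 * PATIENTS   # $400,000
eva_responders = 65           # 65% overall response rate among 100 patients

for label, cost, n in (("Molecular profiling", molecular_cost, molecular_responders),
                       ("EVA/PCD", eva_cost, eva_responders)):
    print(f"{label}: ${cost:,} / {n} responders = ${cost / n:,.0f} per responder")
```

The printed ratio, roughly $24,286 versus $6,154 per correctly identified patient, is the four-to-one difference discussed next.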

Thus, our approach costs one quarter as much and tests eight times as many options. More to the point, this analysis, however crude, reflects only the cost of selecting drugs, not the cost of administering them. While each patient selected for therapy using a molecular profile will receive an extraordinarily expensive drug, many of the patients who enjoy prolonged benefit from EVA/PCD receive comparatively inexpensive chemotherapeutics.

Furthermore, those patients who test negative for ALK and EGFR are left to the same guesswork that, to date, has provided responses in the range of 30 percent and survivals in the range of 12 months.

While the logic of this argument seems to have escaped many, it is interesting to note how quickly organizations like ASCO have embraced these expensive and comparatively inefficient tests, while continuing to argue against our more cost-effective and broad-based techniques.

No wonder we call our group Rational Therapeutics.