Phar Lap and the Treatment of Leukemia

Phar Lap (1926-1932) was a thoroughbred horse bred in New Zealand. After winning 37 races, including the Melbourne Cup, he set the track record at the Agua Caliente racecourse in Tijuana, Mexico, in 1932.

With each victory, his detractors became more strident. He was even the target of an assassination attempt. To keep him from winning (and thereby disrupting the betting odds), officials would add lead weights to his saddle. In the Melbourne Cup of 1930 he carried 138 pounds, yet won the race. A quote from the Sydney Morning Herald dated Wednesday, November 5, 1930, read, “The question was not which horse could win, but could Phar Lap carry the weight. Could he do what no other horse before him had done?”

It appeared that the one thing race officialdom feared above all else was a horse that could consistently beat the field and win the race.

The tale of Phar Lap was brought to mind after a colleague forwarded a paper published in the journal Leukemia on August 10, 2012: “The use of individualized tumor response testing in treatment selection: second randomization results from the LRF CLL4 trial and the predictive value of the test at trial entry.” (E. Matutes, A.G. Bosanquet, et al., Leukemia, letter to the editor.)

Published as a letter to the editor, the paper describes correlations between the TRAC (tumor response to antineoplastic compounds) assay, a short-term suspension-culture cell-death laboratory assay very similar to our own work, and clinical response, time to progression and overall survival in patients with chronic lymphocytic leukemia (CLL) who received chemotherapy as part of the LRF CLL4 trial, conducted in England between 1999 and 2004.

The initial trial was a blinded correlation between laboratory assay results and patient response to one of three treatment regimens. An examination of the data reveals a clear and statistically significant correlation between drug sensitivity and overall survival (p = 0.0001): the 10-year survival of drug-sensitive patients was 28 percent, versus 12 percent for drug-resistant patients.

Significant correlations with survival were observed for known prognostic factors like 17p and 11q deletion, as well as IGHV mutational status. Correlations were also observed between the TRAC assay results and these prognostic factors.

The report goes on to describe a second randomization that took place at the time of disease progression, defined as either failure of first-line therapy or recurrence within 12 months. In this part of the study, 84 relapsed patients were allocated to standard therapy and their outcomes compared with those of 84 patients allocated to treatment guided by the TRAC assay. The drugs tested in the assay-directed arm included chlorambucil, cyclophosphamide (Cytoxan), methylprednisolone, prednisolone, vincristine, doxorubicin, mitoxantrone, 2-CdA (cladribine), fludarabine and pentostatin. In vitro resistance for a combination was defined as resistance to all of its constituent drugs, while sensitivity was defined as TRAC-assay sensitivity to any one of the drugs in the combination. No discussion of synergy analysis was included.
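The combination rule just described can be expressed as a short decision function. A minimal sketch, assuming a simple sensitive/resistant call per drug (the drug names and calls below are hypothetical illustrations, not data from the trial):

```python
def classify_combination(per_drug_calls):
    """Classify a drug combination from individual in vitro calls.

    per_drug_calls: dict mapping drug name -> "sensitive" or "resistant".
    Per the rule described, a combination is called resistant only when
    every constituent drug tests resistant; sensitivity to any one drug
    makes the whole combination sensitive.
    """
    if all(call == "resistant" for call in per_drug_calls.values()):
        return "resistant"
    return "sensitive"

# Hypothetical three-drug combination: one active constituent suffices.
calls = {"cyclophosphamide": "resistant",
         "vincristine": "sensitive",
         "prednisolone": "resistant"}
print(classify_combination(calls))  # sensitive
```

Note that because a single sensitive constituent classifies the whole combination as sensitive, the rule is conservative about declaring resistance but says nothing about drug interactions, consistent with the absence of synergy analysis noted above.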

In examining this study, I cannot help but be reminded of Phar Lap. First, marshaling a study of 777 CLL patients and conducting 544 TRAC analyses is a phenomenal undertaking for which these authors should be commended.

Second, the significant correlation between laboratory assay results and overall survival, together with the biological implications of this platform’s capacity to correlate with molecular markers, is a demonstrable and noteworthy success, however unheralded.

Where the analogy with poor Phar Lap’s struggles, weighted down with lead, becomes most poignant is the final portion of the study, wherein 84 patients received assay-directed therapy. We must remember that in 2012, drug-refractory CLL remains an incurable malignancy (with the exception of a small subset of successfully transplanted patients) and that no chemotherapy-alone trial has provided a survival advantage in this group. But this only begins to explain the trial’s results.

Among the virtually insurmountable hurdles these investigators were forced to confront was the fact that fully 52 percent of the standard-treatment arm were destined to receive fludarabine. This drug, the current gold standard for previously treated patients who fail chlorambucil (a group constituting 73 percent of the patients in this part of the trial), has an objective response rate of 48 to 52 percent in this population. As fludarabine would likely be identified as active in vitro as well, this pitted the assay arm and the standard arm against one another, frequently using exactly the same treatment.

While this does not mean that the assay arm could not succeed, it has an enormous impact upon the sample size calculations used to determine the number of patients required to achieve significance. No pharmaceutical company would ever allow a registration trial to be conducted against an “unknown” control arm, particularly one using the same therapy as the study arm – not ever! Despite these burdens, the assay-directed arm had superior one-year survival, and virtually all other trends favored the group who received assay-selected therapy. The results of this study are worthy of recognition and further support the clinical relevance, predictive validity and importance of functional analyses. Yet this interesting study in CLL is unceremoniously relegated to the status of a letter to the editor in Leukemia. Perhaps, like Phar Lap, no one really wants to upset the odds.
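The sample-size point can be made concrete with a standard two-proportion power calculation. This is a generic illustration with hypothetical response rates, not figures from the CLL4 study: when overlapping treatment drags the control arm’s outcome toward the assay arm’s, the detectable difference shrinks and the required enrollment balloons.

```python
from statistics import NormalDist

def two_proportion_n(p1, p2, alpha=0.05, power=0.80):
    """Approximate per-arm sample size to detect proportions p1 vs p2
    with a two-sided z-test at the given alpha and power."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

# Hypothetical rates: a 10-point advantage against a truly distinct control...
print(round(two_proportion_n(0.60, 0.50)))
# ...versus a 5-point advantage once the control arm frequently receives the
# same drug -- the required per-arm enrollment roughly quadruples.
print(round(two_proportion_n(0.60, 0.55)))
```

Against that arithmetic, an arm of 84 patients facing a control arm on largely identical therapy was carrying a great deal of lead indeed.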

Stalking Leukemia Genes One Whole Genome at a Time

An article by Gina Kolata on the front page of the Sunday New York Times of July 8, “In Leukemia Treatment, Glimpses of the Future,” tells the heartwarming story of a young physician afflicted with acute lymphoblastic leukemia. Diagnosed in medical school, the patient initially achieved a complete remission, only to suffer a recurrence that led him to undergo a bone marrow transplant. When the disease recurred a second time years later, his options were more limited.

A researcher at Washington University himself, this young physician had access to the most sophisticated genomic analyses in the world. His colleagues and a team of investigators put all 26 of the university’s gene-sequencing machines to work around the clock to complete a whole-genome sequence in search of a driver mutation. The results identified FLT3. This mutation had previously been described in acute leukemia and is a known target of several available small-molecule tyrosine kinase inhibitors. After arranging to procure sunitinib (Sutent, Pfizer Pharmaceuticals), the patient began treatment and had a prompt and complete remission, one that he continues to enjoy to this day.

The story is one of triumph over adversity and exemplifies genomic analysis in the identification of targets for therapy. What it also represents is a labor-intensive, costly, and largely unavailable approach to cancer management. While good outcomes in leukemia have been the subject of many reports, imatinib for CML among them, this does not obtain for most of the common solid tumors, which lack targets for these new silver bullets. Indeed, the article itself describes unsuccessful efforts on the part of Steve Jobs and Christopher Hitchens to probe their own genomes for effective treatments. More to the point, few patients have access to 26 gene-sequencing machines capable of identifying genomic targets. A professor of bioethics at the University of Washington, Wylie Burke, raised additional ethical questions about approaches available only to the most connected and wealthiest of individuals.

While brute-force sequencing of human genomes is becoming more popular, the approach lacks scientific elegance. Pattern recognition that yields clues almost by accident relegates scientists to the role of spectators and removes them from the hypothesis-driven investigation that has characterized centuries of successful research.

The drug sunitinib is known for its inhibitory effect upon VEGFR-1, -2 and -3, PDGFR, c-KIT and FLT3. Recognizing the attributes of this drug, and well aware of the roles of c-KIT and FLT3 in leukemias, we regularly add sunitinib to our leukemia tissue cultures to test for cytotoxic effects in malignantly transformed cells. The insights gained enable us to gauge, simply and quickly, the likelihood that drugs like sunitinib will be effective in a given patient.

Once again we find that expensive, difficult tests seem preferable to inexpensive, simple ones. While the technocrats at the helm of oncology research promise to drive the price of these tests down to a level of affordability, every day we wait, 1,581 Americans die of cancer. Perhaps, while we await perfect tests that might work tomorrow, we should use good tests that work today.

Faster than the Speed of Light

Last week, scientists at CERN, the European particle physics laboratory outside Geneva, Switzerland, conducted an experiment whose results now challenge one of the most fundamental principles of modern physics: Albert Einstein’s 1905 declaration that the speed of light is absolute and that nothing in the universe can travel faster.

E = mc², the principle under which nuclear energy and weapons have been developed, as well as all the corollaries of the theory of relativity, was called into question when subatomic particles known as neutrinos traveled from Switzerland to Italy, arriving some 60 billionths of a second sooner than light would have. What has followed has been a flurry of interest from physics departments all over the world. Confronted with this new finding, investigators will diligently seek to reproduce or refute the results.

This was not the first time that someone challenged the primacy of Einstein’s 1905 theory. Indeed, during the 1930s, for largely political and anti-Semitic reasons, the Nazi party attempted to disprove Einstein. Yet all the political maneuvering, personal vendettas and intellectual jealousy could not unseat Einstein’s guiding principle. That is, until objective evidence in the form of the CERN experiments came to the fore.

Science — however lofty — and scientists — however highly regarded — dwell in the same realm as all the rest of us mere mortals. Their biases and preconceived notions often cloud their vision. Comfortable with a given paradigm, they hold unyieldingly to its principles until they are forced, however unwillingly, to relinquish their belief systems in favor of a new understanding. I write of this in the context of laboratory-based therapeutics – a field of scientific investigation that has provided firm evidence of predictive validity. These technologies have improved response, time to progression and survival for patients with leukemia, ovarian, breast and lung cancers, as well as melanoma and other advanced malignancies. Thousands of peer-reviewed published experiences have established the merit of human tumor primary cultures for the prediction of response. Investigations into the newest classes of targeted therapies are providing new insights into their use and combinatorial potential.

Yet, while the physicists of the world will now rise to the challenge of data, the medical oncologists and their academic counterparts refuse to accept the unimpeachable evidence that supports the validity of assay-directed therapy. Perhaps if our patients were treated at CERN in Geneva, their good outcomes would receive the attention they so richly deserve.

Is There a Role for Maintenance Therapies in Medical Oncology?

There is a long tradition of maintenance therapy in pediatric oncology. Children with acute lymphoblastic leukemia uniformly receive three stages of therapy: induction, consolidation, and finally maintenance. The maintenance stage consists of weekly, or even daily, therapies.

The historical experience of relapse in this population led investigators to consistently expose these patients to drugs for a period of years. Despite the apparent success of this approach in childhood cancers, long-term maintenance therapy did not gain popularity in adult oncology. Why?

There are probably several reasons. One reason is that childhood leukemia is among the most chemo-responsive diseases in medicine. As such, there are many active drugs available for treatment and many non-cross-resistant maintenance schedules that can be employed.

A second reason is the relative tolerability of drugs like oral thioguanine or mercaptopurine that are used in chronic maintenance therapy. By contrast, adult tumors rarely achieve complete remissions, the number of active drugs has historically been very limited, and tolerance of long-term treatment is characteristically poor.

Despite this, there is an appealing rationale for maintenance therapy. Once we recognized and incorporated the tenets of apoptosis and programmed cell death into cancer management, we were forced to reconsider many of the principles of older treatment protocols.

Conceptually, maintenance allows for a cytotoxic exposure whenever a cell enters a “chemosensitive” period in its life cycle. Cancer cells that out-survive their normal counterparts often do so in a quiescent stage (G0). To capture these cells, drugs must be present in the body when they awaken from dormancy. As we achieve increasingly durable remissions in diseases like breast cancer, small cell lung cancer and ovarian cancer, we are confronting more patients in long-term complete remission. Add to this newfound population the availability of comparatively mild regimens, like the low-dose gemcitabine/cisplatin doublet, and we now have at our disposal active drugs that can be safely continued for long periods of time.

Using laboratory selection to identify first line (induction), second line (consolidation) and finally third line (maintenance) schedules, we can now offer our patients well-tolerated combinations that offer the hope of more durable remissions.

The GOG 178 trial, in which continued paclitaxel (Taxol) dosing provided more durable remissions in ovarian cancer, offered the first inklings of this. Unfortunately, paclitaxel is toxic, and the more durable remissions came at a high price: neuropathy, myelosuppression, alopecia, fatigue and malaise, all of which greatly limited the utility of this approach. Yet this does not diminish the theoretical attractiveness of maintenance as we continue to develop targeted agents with more selective activity and milder toxicity profiles. We anticipate that maintenance therapies will become more widespread.

Based upon our experiences to date, we are successfully using this approach with our patients who achieve good clinical remissions.

Why I Do Chemosensitivity Testing

My earliest experience in cancer research came during my first years of medical school. Working in a pharmacology laboratory, I studied the biology and toxicity of a class of drugs known as nitrosoureas. My observations were published in a series of articles in the journal Cancer Research.

The work afforded me the opportunity to interact directly with some of the country’s leading cancer investigators. Many of the fellows with whom I worked went on to famous careers in academia and the biotech industry. I remember the rather dismal outcomes of patients treated in the early 1980s, but I felt confident that there had to be a better way to treat cancer patients than just throwing drugs at them and hoping they worked.

It was then that I decided that testing cancer patients’ cell samples in the laboratory, using the drugs they might receive, could help select the most active agents. Several years later, as an oncology fellow, I had the opportunity to test this hypothesis, and it worked. I reported my first observations in leukemia patients in 1984, a successful study that proved that relapsed leukemia patients could be effectively treated when the drugs were first selected in the laboratory. (Nagourney, R et al, Accurate prediction of response to treatment in leukemia utilizing a vital dye exclusion chemosensitivity technique. Proc ASCO abs # 208, 1984)

Unfortunately, this was an era when the field of in vitro chemosensitivity testing had fallen on hard times. A negative study published in the New England Journal of Medicine, using a growth-based assay endpoint, soured the community on the concept, and our cell-death-based assay results fell on deaf ears. Yet I knew it worked. And, based upon my continued efforts in the field, I developed the EVA/PCD® platform that we use today.

With response rates two to three times higher than national averages, and successes that include contributions to the development of the most widely used treatments for low-grade lymphoma and CLL (Nagourney R, et al., Br J Cancer, 1993), recurrent ovarian cancer (Gynecol Oncol, 2003) and refractory breast cancer (J Clin Oncol, 2000), the question really should be: why doesn’t everyone do assays for their patients?