Investigators in Boston Re-Invent the Wheel

A report published in Cell from the Dana-Farber Cancer Institute describes a technique to measure drug-induced cell death in cell lines and human cancer cells. The method, “Dynamic BH3 profiling” (DBP), uses an oligopeptide derived from BIM to gauge the degree to which cancer cells are “primed” to die following exposure to drugs and signal transduction inhibitors. The results are provocative and suggest that, in cell lines and some human primary tissues, the method may select for sensitivity and resistance.

We applaud these investigators’ recognition of the importance of phenotypic measures in drug discovery and drug selection, and we agree with the points that they raise regarding the superiority of functional platforms over static (omic) measures performed on paraffin-fixed tissues. It is heartening that scientists from so august an institution as Dana-Farber should arrive at the same understanding of human cancer biology that many dedicated researchers had been building over the preceding five decades.

Several points bear consideration. The first, as these investigators so correctly point out: “DBP should only be predictive if the mitochondrial apoptosis pathway is being engaged.” This underscores a limitation of the methodology: it measures only one form of programmed cell death, apoptosis. It is well known that apoptosis is but one of many pathways of programmed cell death, which include necroptosis, autophagy and others.

While leukemias are highly apoptosis-driven, the same cannot so easily be said of many solid tumors such as colon, pancreas and lung. That is, apoptosis may be a great predictor of response, except when it is not. The limited results with ovarian cancers (also apoptosis-driven) are far from definitive and may better reflect unique features of epithelial ovarian cancers among solid tumors than the broad generalizability of the technique.

A second point is that these “single cell suspensions” do not recreate the microenvironment of human tumors replete with stroma, vasculature, effector immune cells and cytokines. As Rakesh Jain, a member of the same faculty, and others have so eloquently shown, cancer is not a cell but a system. Gauging the system by only one component may grossly underestimate the system’s complexity, bringing to mind the allegory of the blind men and the elephant. Continuing this line of reasoning, how might these investigators apply their technique to newer classes of drugs that influence vasculature, fibroblasts or stroma as their principal modes of action? It is now recognized that microenvironmental factors may contribute greatly to cell survival in many solid tumors. Assay systems must be capable of capturing human tumors in their “native state” to accurately measure these complex contributions.

Thirdly, the ROC analyses consistently show that this 16-hour endpoint correlates highly with 72- and 96-hour measures of cell death. The authors state “that there is a significant correlation between ∆% priming and ∆% cell death” and return to this finding repeatedly. Given that existing short-term (72–96 hour) assays that measure global drug-induced cell death (apoptotic and non-apoptotic) in human tumor primary cultures have already established high degrees of predictive validity, with an ROC of 0.89, a 2.04-fold higher objective response rate (p = 0.0015) and a 1.44-fold higher one-year survival (p = 0.02), are we to assume that the key contribution of this technique is a 56-hour time advantage? If so, is this of any clinical relevance? The report further notes that 7/24 (29%) of ovarian cancer samples and 5/30 (16%) of CML samples could not be evaluated, rendering the efficiency of this platform demonstrably lower than that of many existing techniques that provide actionable results in over 90% of samples.

Most concerning, however, is the authors’ lack of recognition of the seminal influence of previous investigators in this arena. One is left with the impression that this entire field of investigation began in 2008. It may be instructive for these researchers to read the first paper of this type in the literature, published in the JNCI in 1954 by Black and Speer. They might also benefit by examining the contributions of dedicated scientists like Larry Weisenthal, Andrew Bosanquet and Ian Cree, all of whom published similar studies with similar predictive validities many years earlier.

If this paper serves to finally alert the academic community of the importance of human tumor primary culture analyses for drug discovery and individual patient drug selection then it will have served an important purpose for a field that has been grossly underappreciated and underutilized for decades. Mankind’s earliest use of the wheel dates to Mesopotamia in 3500 BC. No one today would argue with the utility of this tool. Claiming to have invented it anew however is something quite different.

Cancer Explained – The Role of Cell Death

Following a recent blog, I received an inquiry from one of our readers. The individual asked whether I could better explain my oft-repeated statement that “cancer doesn’t grow too much, it dies too little.” The questioner was puzzled by my assertion that chemotherapy drugs acted to stop cells from growing, while she had come to believe that this was synonymous with killing them. This dichotomy is at the crux of our modern understanding of cancer.

In response, I would like to examine the very basis of what is known as carcinogenesis, the process by which cancer comes to exist.

For more than a century, scientists believed that cancer cells were growing more rapidly than normal cells. They based this on serial measurements of patients’ tumors, which revealed that tumor dimensions increased. A small lump in the breast measuring one-half inch in diameter would be found six months later to be one inch in diameter. And six months after that it was two inches in diameter. This was growth, plain and simple, and so it was reasoned that cancer cells must be growing too much. As such, cancer therapies would, of necessity, need to stop cancer cells from growing if they were to work at all.

And then, in 1972, a paper was published in the British Journal of Cancer that described the phenomenon of apoptosis, a form of programmed cell death. Although it would be almost a decade before cancer researchers fully grasped the implications of this paper, it represented a sea change in our understanding of human tumor biology.

Let’s use the example of a simple mathematical equation. Every child would recognize the principles of the following formula:
Tumor mass = growth rate – death rate
This simple equation represents the principle of modern cancer biology. Where cancer researchers went wrong was that they mistakenly posited that the only way a tumor mass could increase was through an increase in the growth rate. However, as any child will tell you, subtracting a smaller number leaves a larger result. That is, at a given growth rate, the tumor mass can also increase if you reduce the death rate. Thus, the “growth” so obvious to earlier investigators did not reflect an increase in proliferation but instead a decrease in cell attrition. Cancer didn’t grow too much, it died too little, but the end result was exactly the same.
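A minimal numerical sketch may make this balance concrete. The rates and cell counts below are hypothetical, chosen purely for illustration:

```python
# Sketch of the balance: net change per period = growth rate - death rate.
# All rates and counts here are hypothetical, chosen only for illustration.
def tumor_size(n0: float, growth_rate: float, death_rate: float, periods: int) -> float:
    """Project a cell population over discrete periods with fixed rates."""
    n = n0
    for _ in range(periods):
        n += n * (growth_rate - death_rate)  # net change this period
    return n

# Identical growth rate (20% per period); only the death rate differs.
normal = tumor_size(1_000, growth_rate=0.20, death_rate=0.20, periods=10)
tumor = tumor_size(1_000, growth_rate=0.20, death_rate=0.05, periods=10)
# With death matching growth the population is static (1,000 cells);
# cut the death rate and the same growth rate yields roughly a fourfold "growth."
```

Note that the proliferation rate never changes between the two runs; only the attrition term does, yet the second population enlarges steadily.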

It should now be abundantly clear exactly why chemotherapy drugs, designed to stop cells from growing, didn’t work. Yes, the drugs stopped cells from growing, and yes, any population of “growing cells” would suffer the effect. But they didn’t cure cancers, because the cancers weren’t growing particularly fast. Indeed, the fact that chemotherapy works at all is almost an accident. Contrary to our long-held belief that we were inhibiting cell proliferation, chemotherapy drugs designed to damage DNA and disrupt mitosis were actually working (when they worked at all) by forcing the cells to take inventory and decide whether they could continue to survive. If the injury were too extreme, the cells would commit suicide through the process of cell death. If the cells were not severely damaged, or could repair the damage, then they carried on to fight another day. None of this, however, had anything to do with cell growth.

Chemosensitivity Testing: Lessons Learned

Like all physicians and scientists engaged in the study of cancer biology and cancer treatment, I had accepted that cancer was a disease of abnormal cell growth. I remember reading the lead article in the New England Journal of Medicine (NEJM) that described the clonogenic assay (Salmon, S. E., Hamburger, A. W., Soehnlen, B. S., et al. 1978. Quantitation of differential sensitivity of human tumor stem cells to anticancer drugs. N Engl J Med 298:1321–1327).

I sat in a laboratory at Georgetown University reading about a lab test that could accurately predict the outcome of cancer patients, without first having to give patients toxic drugs. It seemed so logical, so elegant, so inherently attractive. Sitting there as a medical student, far removed from my formal cancer training, I thought to myself, this is a direction that I would like to pursue.

But I was only a first year student and there were miles to go before I would treat cancer patients. Nonetheless, selecting drugs based on a laboratory assay was something I definitely wanted to do. At the time I had no idea just how difficult that could prove to be.

After medical school I found myself in California. There I met an investigator from the National Cancer Institute who had recently joined the faculty at the University of California, Irvine. He too had read the NEJM paper. Being several years ahead of me in training, he had applied the clonogenic technique in his laboratory at the National Cancer Institute. Upon his arrival in California, he had continued his work with the clonogenic assay.

All was going along swimmingly until the NEJM published a report documenting the results of five years’ experience with the clonogenic assay. It wasn’t a good report card. In fact, the clonogenic assay got an “F.”

Despite the enthusiastic reception that the assay had previously enjoyed, the hundreds of investigators around the world who had adopted it, and the indefatigable defense of its merits by leading scientists, it seemed that something was very wrong with the clonogenic assay, and I desperately needed to know what that was.

It so happens that in parallel to clonogenic assays, my colleague was working on a simpler, faster way to measure drug effects. Using the appearance of cells under the microscope and their staining characteristics, one could skip the weeks of growth in tissue culture and jump right to the finish line. The simple question to be answered was: Did the drugs and combinations kill cancer cells in the test tube? And if they did kill cancer cells in the test tube, would those drugs work in the patient? The answer was, “YES!”

Despite the clonogenic assay’s supporters, it turned out that killing cancer cells outright in the test tube was a much, much better way to predict patients’ outcomes. It would be years before I understood the depth of this seemingly simple observation and the historical implications it would have for cancer therapy.

In Chapter 7 of my soon-to-be-released book, Outliving Cancer, I examine the impact of programmed cell death on human biology.

What is Cancer?

This is a question that has vexed scientific investigators for centuries, and for the last century our belief was predicated upon the physical observation that cancer reflected altered cell growth. After all, to the untrained eye, or even to the rather sophisticated eye, the mass in the pelvis, or the lymph node under the arm, or the abnormality on a chest x-ray, continued to expand upon serial observation. This was “growth” (at least since the time of Rudolf Virchow); and growth, it was reasoned, represented cell division.

Based upon the cell growth model, cancer therapists devised drugs and treatments that would stanch cellular proliferation. If cells were growing, then cells needed to reproduce the genetic elements found in chromosomes leading to the duplication of the cell through mitosis. If chromosomes were made of DNA, then DNA would be the target of therapy. From radiation to cytotoxic chemotherapy, one mantra rang through the halls of academia, “Stop cancer cells from dividing and you stop cancer.”

As in many scientific disciplines, nothing spoils a lovely theory more than a little fact. And, the fact turned out to be that cancer does not grow too much, it dies too little. Cancer doesn’t “grow” its way into becoming a measurable tumor, it “accumulates” its way to that end.

In 1972, we realized that the most basic understanding of cancer biology up to that point was absolutely, positively wrong.

Working in a laboratory during my fellowships, I began to realize that something was wrong with the principles that guided cancer therapeutics. My first inkling came from the rather poor outcomes that many of my patients experienced despite high-dose, aggressive drug combinations.

Then, it was the failure of the clonogenic assay to predict clinical outcomes that further raised my suspicions. I began to ponder cell growth – cell death, cell growth – cell death. With each passing day, the laboratory analyses that I conducted identified treatments that worked. Using short-term measures of cell death (not cell growth), I could predict which of my patients would get better. All of the complicated and inefficient clonogenic assay investigations could not. Cell growth – cell death – what was I missing?

It was not until years later, when I attended a special symposium on the topic of cell death, that it all became abundantly clear.

My “eureka” moment is captured in Chapter 6 of my soon-to-be-released book, Outliving Cancer.

Why Oncologists Don’t Like In Vitro Chemosensitivity Tests

In human experience, the level of disappointment is directly proportional to the level of expectation. When, for example, the world was apprised of the purported development of cold fusion, a breakthrough of historic proportions, the expectations could not have been greater. Cold fusion, the capacity to harness the sun’s power without the heat and radiation, was so appealing that people rushed into a field about which they understood little. Those who remember this episode in 1989 will recall the shock and dismay of the scientists and investors who rushed to sponsor and support this venture, only to be left out in the cold when the data came in.

Since the earliest introduction of chemotherapy, the ability to select active treatments before having to administer them to patients has been the holy grail of oncologic investigation. During the 1950s and 60s, chemotherapy treatments were punishing. Drugs like nitrogen mustard were administered without the benefit of modern anti-emetics, and cancer patients suffered every minute. The nausea was extreme, the bone marrow suppression dramatic and the benefits – marginal at best. With the introduction of cisplatin in the pre-Zofran/Kytril era, patients experienced a heretofore unimaginable level of nausea and vomiting. With each passing day, medical oncologists wondered why they couldn’t use the same techniques that had proven so useful in microbiology (bacterial culture and sensitivity) to select chemotherapy.

And then it happened. In June of 1978, the New England Journal of Medicine (NEJM) published a study involving a small series of patients whose tumors responded to drugs selected by in vitro (laboratory) chemosensitivity. Eureka! Everyone, everywhere wanted to do clonogenic (human tumor stem cell) assays. Scientists traveled to Tucson to learn the methodology. Commercial laboratories were established to offer the service. It was a new era of cancer medicine. Finally, cancer patients could benefit from effective drugs and avoid ineffective ones. At least, it appeared that way in 1978.

Five years later, the NEJM published an update of more than 8,000 patients who had been studied by clonogenic assay. It seemed that amid all the hype and hoopla, one teeny, tiny detail had been overlooked: the clonogenic assay didn’t work. Like air rushing out of a punctured tire, the field collapsed on itself. No one ever wanted to hear about using human tumor cells to predict response to chemotherapy – not ever!

In the midst of this, the field returned to a seminal paper published in the British Journal of Cancer in 1972 that described the phenomenon of apoptosis, a form of programmed cell death. All at once it became evident exactly why the clonogenic assay didn’t work. By re-examining the basic tenets of cancer chemosensitivity testing, a new generation of assays was developed that measured drug-induced programmed cell death, not growth inhibition. Cancer didn’t grow too much, it died too little. And these tests proved it.

Immediately, the predictive validity improved. Every time the assays were put to the test, they met the challenge. From leukemia and lymphoma to lung, breast, ovarian, and even melanoma, cancer patients who received drugs found active in the test tube did better than cancer patients who received drugs that looked inactive. Eureka! A new era of cancer therapy was born. Or so it seemed.

I was one of those naive investigators who believed that because these tests worked, they would be embraced by the oncology community. I presented my first observations in the 1980s, using the test to develop a curative therapy for a rare form of leukemia. Then we used this laboratory platform to pioneer drug combinations that, today, are used all over the world. We brought the work to the national cooperative groups, conducted studies and published the observations. It didn’t matter. Because the clonogenic assay hadn’t worked, regardless of its evident deficiencies, no one wanted to talk about the field ever again.

In 1600, Giordano Bruno was burned at the stake for suggesting that the universe contained other planetary systems. In 1633, Galileo Galilei was condemned by the Inquisition for promoting the heliocentric model of the solar system. Centuries later, Ignaz Semmelweis, MD, was committed to an insane asylum after he (correctly) suggested that puerperal sepsis was caused by bacterial contamination. A century later, the discoverers of Helicobacter pylori (the cause of peptic ulcer disease) were forced to suffer the slings and arrows of ignoble academic fortune until they were vindicated through the efforts of a small coterie of enlightened colleagues.

Innovations are not suffered lightly by those who prosper under established norms. To disrupt the standard of care is to invite the wrath of academia. The 2004 Technology Assessment published by Blue Cross/Blue Shield and ASCO in the Journal of Clinical Oncology, and ASCO’s update seven years later, reflect little more than an established paradigm attempting to escape irrelevance.

Cancer chemosensitivity tests work exactly according to their well-established performance characteristics of sensitivity and specificity. They consistently provide superior response and, in many cases, time to progression and even survival. They can improve outcomes, reduce costs, accelerate research and eliminate futile care. If the academic community is so intent on putting these assays to the test, then why has it repeatedly failed to support the innumerable efforts that our colleagues have made over the past two decades to fairly evaluate them in prospective randomized trials? It is time for patients to ask exactly why their physicians do not use them, and to demand that those physicians provide data, not hearsay, to support their arguments.

Why Some Patients Refuse Chemotherapy – And Why Some of Them Shouldn’t

In the June 13, 2011, issue of Time magazine, Ruth Davis Konigsberg described cancer patients who refuse to take potentially lifesaving therapy. Her article, titled “The Refuseniks – why some cancer patients reject their doctor’s advice,” examined the rationale applied by patients who decline chemotherapy. Many of these patients are rational, articulate, intelligent and capable individuals. While there are those who by virtue of religious belief, underlying depression, or loss of loved ones, decline interventions, many of these patients make compelling arguments in favor of their decisions.

When we examine the basis of these patients’ therapeutic nihilism, much of it reflects the uncertainty of benefit combined with the certainty of toxicity. What these patients articulate is the fundamental dilemma confronted by cancer patients, what we might describe as their logical assessment of “return on investment.”

Everything in life is based on probabilities. Will your husband or wife be true? Will you have a boy or a girl? Will you live to see retirement? Will your nest egg be adequate? Cancer medicine is no different.

Will the treatment I’m being offered extend my life long enough to be worth the short- and medium-term toxicities that I will certainly suffer?

While I cannot address this question with regard to surgery or radiation, I feel uniquely qualified to do so in the context of chemotherapy. What, after all, is a chemosensitivity assay? When correctly performed, it is a laboratory test that dichotomizes groups of patients with average likelihoods of response (e.g. 20%, 30%, 40%, etc.) into those who are more or less likely to respond based on the results. On average, a patient found sensitive in vitro has a twofold improvement in response, while those found resistant have a demonstrably lower likelihood of benefit. We have now shown this to be true in breast, ovarian, and non-small cell lung cancers, as well as melanoma, childhood and adult leukemias, and other diseases.

To address the misgivings of the Refuseniks, we might ask the following question: Would you take a treatment that provided a 30 percent likelihood of benefit? How about 40 percent? 50 percent? 60 percent? 70 percent? Or 80 percent? While many might decline the pleasure of chemotherapy at a 20-30 percent response rate, a much larger number would look favorably upon a 70 percent response rate. On the flip side, a patient offered a treatment with a 50 percent likelihood of benefit (on average), who by virtue of a lab study realizes that their true response rate is closer to 19 percent (based on resistance in vitro), might very logically (and defensibly) decline treatment. These real-life examples reflect the established performance characteristics of our laboratory tests (Nagourney, RA. Ex vivo programmed cell death and the prediction of response to chemotherapy. Current Treatment Options in Oncology 2006, 7:103-110).
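The dichotomization described above is ordinary conditional probability. A hedged sketch follows; the sensitivity and specificity values are illustrative assumptions for the arithmetic, not the published performance characteristics of any particular assay:

```python
# Hedged sketch: how an assay splits an average response rate into
# post-test probabilities. The 0.80/0.80 figures below are illustrative
# assumptions, not published assay performance characteristics.
def post_test_probabilities(prior: float, sensitivity: float, specificity: float):
    """Return P(response | assay-sensitive) and P(response | assay-resistant)."""
    # P(assay-sensitive) mixes true positives and false positives.
    p_pos = sensitivity * prior + (1 - specificity) * (1 - prior)
    p_resp_given_pos = sensitivity * prior / p_pos
    # P(assay-resistant) mixes false negatives and true negatives.
    p_neg = (1 - sensitivity) * prior + specificity * (1 - prior)
    p_resp_given_neg = (1 - sensitivity) * prior / p_neg
    return p_resp_given_pos, p_resp_given_neg

# A 50% average response rate split by a hypothetical assay:
pos, neg = post_test_probabilities(prior=0.50, sensitivity=0.80, specificity=0.80)
# Assay-sensitive patients move well above the average, assay-resistant
# patients well below it (here 0.80 versus 0.20).
```

With these assumed figures a 50 percent average splits into roughly 80 percent and 20 percent, which is the kind of separation the paragraph above describes.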

Rather than bemoan the uncertainties of treatment outcome, shouldn’t we, as clinical oncologists, be addressing these patients’ very real misgivings with data and objective information? I, for one, believe so.

The False Economy of Genomic Analyses

We are witness to a revolution in cancer therapeutics. Targeted therapies, named for their capacity to target specific tumor-related features, are being developed and marketed at a rapid pace. Yet with an objective response rate of 10 percent (Von Hoff et al., JCO, Nov 2011) reported for a gene array/IHC platform that attempted to select drugs for individual patients, we have a long way to go before these tests will have meaningful clinical applications.

So, let’s examine the more established, accurate and validated methodologies currently in use for patients with advanced non-small cell lung cancer. I speak of patients with EGFR mutations for which erlotinib (Tarceva®) is an approved therapy and those with ALK gene rearrangements for which the drug crizotinib (Xalkori®) has recently been approved.

The incidence of ALK gene rearrangement among patients with non-small cell lung cancer is in the range of 2–4 percent, while EGFR mutations are found in approximately 15 percent. These are largely mutually exclusive events. So, let’s do a “back of the napkin” analysis and cost out these tests in a real-life scenario.

One hundred patients are diagnosed with non-small cell lung cancer.
•    Their physicians order ALK gene rearrangement testing     $1,500
•    And EGFR mutation analysis     $1,900
•    The costs associated: ($1,500 + $1,900) × 100 people =    $340,000
Remember, that only 4 percent will be positive for ALK and 15 percent positive for EGFR. And that about 80 percent of the ALK positive patients respond to crizotinib and about 70 percent of the EGFR positive patients respond to erlotinib.

So, let’s do the math.

We get three crizotinib responses and 11 erlotinib responses: 3 + 11 = 14 responders.
Resulting in a cost per correctly identified patient of $340,000 ÷ 14 =     $24,285

Now, let’s compare this with an ex-vivo analysis of programmed cell death.

Remember, the Rational Therapeutics panel of 16+ drugs and combinations tests both cytotoxic drugs and targeted therapies. In our soon to be published lung cancer study, the overall response rate was 65 percent. So what does the EVA/PCD approach cost?

Again, one hundred patients are diagnosed with non-small cell lung cancer.
•    Their physicians order an EVA-PCD analysis    $4,000
•    The costs associated: $4,000 × 100 people =    $400,000
•    With 65 percent of patients responding, this constitutes a cost per correctly identified patient of $400,000 ÷ 65 =     $6,154
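The back-of-the-napkin arithmetic above can be restated in a few lines, using only the prices, positivity rates, and response rates quoted in the text:

```python
# Cost per correctly identified patient, using the figures quoted above.
def to_patients(x: float) -> int:
    """Round half up, matching the text's tallies of whole patients."""
    return int(x + 0.5)

patients = 100

# Molecular route: test everyone for ALK ($1,500) and EGFR ($1,900).
molecular_cost = (1_500 + 1_900) * patients                  # $340,000
responders = to_patients(patients * 0.04 * 0.80) \
           + to_patients(patients * 0.15 * 0.70)             # 3 + 11 = 14
molecular_cost_each = molecular_cost / responders            # ≈ $24,286 (text truncates to $24,285)

# Functional route: an EVA-PCD analysis ($4,000) for everyone.
functional_cost = 4_000 * patients                           # $400,000
functional_cost_each = functional_cost / (patients * 0.65)   # ≈ $6,154
```

The rounding helper simply mirrors the whole-patient tallies used in the text (3.2 becomes 3 responders, 10.5 becomes 11).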

Thus, our approach is one quarter the cost and capable of testing eight times as many options. More to the point, this analysis, however crude, reflects only the cost of selecting drugs, not the cost of administering them. While each of the patients selected for therapy using molecular profiles will receive an extraordinarily expensive drug, many of the patients who enjoy prolonged benefit using EVA/PCD receive comparatively inexpensive chemotherapeutics.

Furthermore, those patients who test negative for ALK and EGFR are left to the same guesswork that, to date, has provided responses in the range of 30 percent and survivals in the range of 12 months.

While the logic of this argument seems to have escaped many, it is interesting to note how quickly organizations like ASCO have embraced the expensive and comparatively inefficient tests. Yet ASCO has continued to argue against our more cost-effective and broad-based techniques.

No wonder we call our group Rational Therapeutics.

Chronic Lymphocytic Leukemia (CLL) as a Platform for Functional Profiling

Among the most common forms of leukemia in adults is chronic lymphocytic leukemia. This neoplasm usually arises in a subset of lymphocytes known as B-cells. However, T-cell variants also occur. The disease presents clinically as an elevation of the circulating lymphocytes. This may be associated with enlarged lymph nodes, splenomegaly or liver enlargement.

The decision to treat patients is largely based upon clinical staging systems known as the Rai and Binet classifications. Low-risk patients can often be observed without treatment, while more aggressive presentations (such as those associated with anemia and low platelet counts) require intervention. More recently, molecular determinants of aggressiveness have been applied in the prognosis of this disease. These include CD38, VH gene mutation status and ZAP-70. Additional findings include ATM mutations, principally in the T-cell and pro-lymphocytic variants.

For more than 40 years, the treatment of choice for this disease was oral chlorambucil. Although effective, chlorambucil resulted in the development of resistance and was associated with rather significant myelosuppression over time. The introduction of fludarabine (FAMP) and 2-CDA revolutionized the management of this disease —providing high response rates with relatively tolerable toxicities.

The introduction of 2-CDA and fludarabine in the 1980s offered an opportunity for our laboratory to examine drug interactions in CLL patients. Combining the alkylating agents (of which chlorambucil is a member) with 2-CDA revealed synergy (supra-additivity) in 100 percent of the CLL samples we studied (Nagourney, R, et al. British Journal of Cancer, 1993). Based on this observation, we began treating patients with CLL and related lymphoid malignancies with a combination of Cytoxan and 2-CDA, resulting in dramatic and durable remissions.

O’Brien, Keating and other investigators at MD Anderson then took up this work (using fludarabine), providing the most effective therapy for CLL in today’s literature. Unfortunately, a percentage of patients who receive this combination develop deep myelosuppression. Therefore, the administration of this combination requires careful monitoring by the physician.

One of the most interesting aspects of the high activity observed for fludarabine was the capacity of this “anti-metabolite” to induce cell death in short-term cultures of CLL cells. It was well known that CLL cells were not highly proliferative, yet the anti-metabolite class of drugs was specifically designed to stop cell proliferation at the level of DNA synthesis. We realized that 2-CDA and fludarabine had to be killing cells, not preventing their growth. This conundrum provided an opportunity for us to test a related anti-metabolite in this disease. We chose cytarabine (Ara-C), a drug not considered effective for CLL (given the disease’s low proliferative rate, there should have been little DNA synthesis to inhibit and thus no cytotoxicity). To our delight, low doses of Ara-C proved highly effective in controlling even the most advanced cases of CLL, as we then reported.

CLL became one of our favored models for the study of human tumor biology, enabling us to study drug responses at the molecular level. Many of the observations that we made in this hematologic malignancy granted us insights that we continue to apply in solid tumors today.

Forms of Cell Death

Following the description of apoptosis in the British Journal of Cancer in 1972, scientists around the world incorporated the concept of programmed cell death into their cancer research. What is less understood is the fact that apoptosis is not synonymous with programmed cell death. Programmed cell death is a fundamental feature of multicellular organism biology. Mutated cells incapable of performing their normal functions self-destruct in service of the multicellular organism as a whole. While apoptosis represents an important mechanism of programmed cell death, it is only one of several cell death pathways. Apoptotic cell death occurs with certain mutational events, DNA damage, oxidative stress and withdrawal of some growth factors, particularly within the immune system. Non-apoptotic programmed cell death includes programmed necrosis, para-apoptosis, autophagic cell death, death following nutrient withdrawal, and subtypes associated with the misfolded protein response and PARP-mediated cell death. While apoptotic cell death follows a recognized cascade of caspase-mediated enzymatic events, non-apoptotic cell death occurs in the absence of caspase activation.

With the recognition of programmed cell death as a principal factor in carcinogenesis and cancer response to therapy, there has been a growing belief that the measurement of apoptosis alone will provide the insights needed in cancer biology. This oversimplification underestimates the complexity of cell biology and suggests that cancer cells have but one mechanism of response to injury. It has previously been shown that cancer cells that suffer lethal injury and initiate the process of apoptosis can be treated with caspase inhibitors to prevent caspase-mediated apoptosis. Of interest, these cells are not rescued from death. Instead, these cells, committed to death, undergo a form of non-apoptotic programmed cell death more consistent with necrosis. Thus, commitment to death overrides the mechanism of death.

Labs that focus on measurements of caspase activation can only measure apoptotic cell death. While apoptotic cell death is of importance in hematologic cancers and some solid tumors, it does not represent the mechanism of cell death in all tumors. This is why we measure all cell death events by characterizing metabolic viability at the level of cell membrane integrity, ATP content, or mitochondrial function. While caspase activation is of interest, comparatively easy to measure and useful in many leukemias and lymphomas, it does not represent cancer cell death in all circumstances and can be an unreliable parameter in many solid tumors.

Chemosensitivity-Resistance Assay as Functional Profiling

Modern cancer research can be divided into three principal disciplines based upon methodology:

1.     Genomic — the analysis of DNA sequences, single nucleotide polymorphisms (SNPs), amplifications and mutations to develop prognostic and, to a limited degree, predictive information on cancer patient outcome.

2.     Proteomic — the study of proteins, largely at the level of phosphoprotein expressions.

3.     Functional — the study of human tumor explants isolated from patients to examine the effects of growth factor withdrawal, signal transduction inhibition and cytotoxic insult on cancer cell viability.

In contrast to analyte-based genomic and proteomic methodologies that yield static measures of gene or protein expression, functional profiling provides a window on the complexity of cellular biology in real time, gauging tumor cell response to chemotherapies in a laboratory platform. By examining drug-induced cell death, functional analyses measure the cumulative result of all of a cell’s mechanisms of resistance and response acting in concert. Thus, functional profiling most closely approximates the cancer phenotype. Insights gained can determine which drugs, signal transduction inhibitors, or growth factor inhibitors induce programmed cell death in individual patients’ tumors. Functional profiling is the most clinically validated technique available today to predict patient response to drugs and targeted agents.