Type I Error

Scientific proof is rarely proof, but instead our best approximation. Beyond death and taxes, there are few certainties in life. That is why investigators rely so heavily on statistics.

Statistical analyses enable researchers to establish “levels” of certainty. Reported as “p-values,” these metrics offer the reader a level of statistical significance, indicating how likely it is that a given finding is simply the result of chance. A p-value of 0.1 (1 in 10) means there is a 10 percent probability that a result at least this extreme would arise by chance alone. A p-value of 0.05 (1 in 20) puts that probability at 5 percent, and a p-value of 0.01 (1 in 100) puts it at 1 percent. (Strictly speaking, a p-value is not the probability that the finding is “true”; it is the probability of the observed data if there were no real effect.) For an example in real time, we have just reported a paper in the lung cancer literature that doubled the response rate for metastatic disease compared with the national standard. The results achieved statistical significance at p = 0.00015. That is to say, there are only 15 chances in 100,000 that a difference this large is the result of chance.
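To make the idea concrete, here is a minimal Python sketch of what a p-value measures. The coin-toss numbers are invented purely for illustration and have nothing to do with the lung cancer paper:

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): a one-sided p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If a supposedly fair coin lands heads 65 times in 100 tosses, the
# p-value is the chance of a result at least that extreme under fairness.
p_value = binom_tail(100, 65)
print(f"{p_value:.4f}")  # about 0.0018: unlikely to be chance alone
```

A small p-value says only that the data would be surprising if chance alone were at work; it does not, by itself, prove the finding.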

Today, many laboratories offer tests that claim to select candidates for treatment. Almost all of these laboratories are conducting gene-based analysis. While there are no good prospective studies that prove that these genomic analyses accurately predict response, this has not prevented these companies from marketing their tests aggressively. Indeed, many insurers are covering these services despite the lack of proof.

So let’s examine why these tests may encounter difficulties now and in the future. The answer, to put it succinctly, is Type I errors. In the statistical literature, a Type I error occurs when the “null” hypothesis (the premise that there is no real effect) is rejected when it is in fact true: a false positive. A Type II error occurs when the null hypothesis is not rejected when it is in fact false: a false negative.
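The definition can be made concrete with a simulation. This sketch (my illustration, not drawn from any laboratory's data) repeats a study of a true null hypothesis many times and counts how often it is falsely rejected:

```python
import random

random.seed(0)

def heads(n=100):
    """One study of a true null: n tosses of a genuinely fair coin."""
    return sum(random.random() < 0.5 for _ in range(n))

# Reject "the coin is fair" when heads >= 59, a one-sided p of about 0.044.
trials = 10_000
type_i_rate = sum(heads() >= 59 for _ in range(trials)) / trials
print(type_i_rate)  # hovers near the 0.05 significance level
```

By construction, the significance threshold is the Type I error rate: test enough true nulls at p < 0.05 and about one in twenty will be "discovered" anyway.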

Example: The scientific community is asked to test the hypothesis that Up is Down. Dedicated investigators conduct exhaustive analyses to test this provocative hypothesis but cannot refute the premise that Up is Down. They are left with no alternative but to report according to their carefully conducted studies that Up is Down.

The unsuspecting recipient of this report takes it to their physician and demands to be treated based on the finding. The physician explains that, to the best of his recollection, Up is not Down. Unfazed, the patient, armed with this august laboratory’s result, demands to be treated accordingly. What is wrong with this scenario? Type I error.

The human genome comprises more than 23,000 genes. Splice variants, duplications, mutations, SNPs, non-coding DNA, small interfering RNAs and a wealth of downstream events make the interpretation of genomic data highly problematic. The fact that a laboratory can identify a gene does not confer certainty that the gene, mutation or splice variant will determine an outcome. To put it simply, the range of possible inputs overwhelms the capacity of the test to rule the answer in or out.
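The scale of the problem is easy to quantify. Even if none of the 23,000 genes truly predicted outcome, testing each one at the conventional 0.05 threshold would still flag roughly 5 percent of them. A quick sketch, with illustrative assumptions only:

```python
import random

random.seed(1)

GENES = 23_000
ALPHA = 0.05

# Under the null, each gene's p-value is uniform on [0, 1], so even with
# zero truly predictive genes we expect about ALPHA * GENES false "discoveries."
p_values = [random.random() for _ in range(GENES)]
false_hits = sum(p < ALPHA for p in p_values)
print(false_hits)  # on the order of 1,150 spurious "significant" genes
```

This is the multiple-comparisons face of the Type I error: the more candidate markers you screen, the more false positives you are guaranteed to report unless the threshold is corrected.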

Yes, we can measure the gene, and yes, we have found some interesting mutations. But no, those findings alone do not justify rejecting the null hypothesis that they predict nothing about outcome. Thus, other than a small number of discrete events for which the performance characteristics of these genomic analyses have been established and rigorously tested, Type I errors undermine and corrupt the predictions of even the best laboratories. You would think, with all of the brainpower dedicated to contemporary genomic analyses, that these smart guys would remember some basic statistics.

About Dr. Robert A. Nagourney
Dr. Nagourney received his undergraduate degree in chemistry from Boston University and his doctor of medicine at McGill University in Montreal, where he was a University Scholar. After a residency in internal medicine at the University of California, Irvine, he went on to complete fellowship training in medical oncology at Georgetown University, as well as in hematology at the Scripps Institute in La Jolla. During his fellowship at Georgetown University, Dr. Nagourney confronted aggressive malignancies for which the standard therapies remained mostly ineffective. No matter what he did, all of his patients died. While he found this “standard of care” to be unacceptable, it inspired him to return to the laboratory where he eventually developed “personalized cancer therapy.”

In 1986, Dr. Nagourney, along with colleague Larry Weisenthal, MD, PhD, received a Phase I grant from a federally funded program and launched Oncotech, Inc. They began conducting experiments to prove that human tumors resistant to chemotherapeutics could be re-sensitized by pre-incubation with calcium channel blockers, glutathione depletors and protein kinase C inhibitors. The original research was a success. Oncotech grew with financial backing from investors who ultimately changed the direction of the company’s research. The changes proved untenable to Dr. Nagourney and in 1991, he left the company he co-founded.

He then returned to the laboratory, and developed the Ex-vivo Analysis - Programmed Cell Death ® (EVA-PCD) test to identify the treatments that would induce programmed cell death, or “apoptosis.” He soon took a position as Director of Experimental Therapeutics at the Cancer Institute of Long Beach Memorial Medical Center. His primary research project during this time was chronic lymphocytic leukemia. He remained in this position until the basic research program funding was cut, at which time he founded Rational Therapeutics in 1995. It is here where the EVA-PCD test is used to identify the drug, combinations of drugs or targeted therapies that will kill a patient's tumor - thus providing patients with truly personalized cancer treatment plans.

With the desire to change how cancer care is delivered, he became Medical Director of the Todd Cancer Institute at Long Beach Memorial in 2003. In 2008, he returned to Rational Therapeutics full time to rededicate his time and expertise to expand the research opportunities available through the laboratory. He is a frequently invited lecturer for numerous professional organizations and universities, and has served as a reviewer and on the editorial boards of several journals including Clinical Cancer Research, British Journal of Cancer, Gynecologic Oncology, Cancer Research and the Journal of Medicinal Food.

3 Responses to Type I Error

  1. Elaine L. says:

    For me, this was a very enlightening post. Thank-you.

  2. gpawelski says:

    The biomarker-based paradigm will require us to consider the level of evidence necessary to declare true activity. Daniel J. Sargent, PhD, Professor of Cancer Research at the Mayo Clinic, tells us that it may become impossible to perform traditional trials with requirements to achieve a P-value less than 0.05, high statistical power, and an OS advantage.

    When the patient population becomes small, we’re going to have to consider either other endpoints or other statistical philosophies. Should we use a Bayesian strategy, in which we borrow information from other clinical trials to help make decisions? Or do we loosen the P-value requirements, so that a P-value of less than 0.1 or 0.2, for example, is considered a sufficient level of evidence for activity?

    These are active areas of research that need to be fully considered as we enter this era of truly personalized therapy with patient populations that are becoming smaller and smaller. I do know that the Bayesian method is no stranger to the functional profiling platform. It’s what lends credibility to the accuracy of the assay tests.

    The absolute predictive accuracy of cell culture assay tests varies according to the overall response rate in the patient population, in accordance with Bayesian principles. The actual performance of assays in each type of tumor precisely matches predictions made from Bayes’ Theorem.

    Bayes’ Theorem is a tool for assessing how probable evidence makes some hypothesis. It is a powerful theorem of probability calculus, introduced in Bayes’ essay on solving a problem in the Doctrine of Chances, and used here as a tool for measuring propensities in nature rather than the strength of evidence.
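    The dependence of predictive accuracy on response rate follows directly from Bayes' theorem. A minimal sketch, with sensitivity and specificity figures invented purely for illustration (they are not measured properties of any assay):

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same hypothetical assay (85% sensitive, 85% specific) applied to
# tumor types with different underlying response rates:
print(round(ppv(0.85, 0.85, 0.50), 2))  # 50% responders -> PPV 0.85
print(round(ppv(0.85, 0.85, 0.10), 2))  # 10% responders -> PPV 0.39
```

    The same test characteristics yield very different predictive values as the prevalence of response changes, which is exactly the Bayesian point above.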

  3. gpawelski says:

    What Johns Hopkins’ Dr. Gabor D. Kelen stated in ResearchGate, a professional network for scientists and researchers, seems to be entirely consistent with what Dr. Nagourney wrote.

    Hypothesis testing is based on certain statistical and mathematical principles that allow investigators to evaluate data by making decisions based on the plausibility or implausibility of observing the results obtained.

    However, classic hypothesis testing has its limitations, and probabilities mathematically calculated are inextricably linked to sample size.

    Furthermore, the meaning of the p value frequently is misconstrued as indicating that the findings are also of clinical significance.

    Finally, hypothesis testing allows for four possible outcomes, two of which are errors that can lead to erroneous adoption of certain hypotheses:

    1. The null hypothesis is rejected when, in fact, it is false.
    2. The null hypothesis is rejected when, in fact, it is true (type I or alpha error).
    3. The null hypothesis is conceded when, in fact, it is true.
    4. The null hypothesis is conceded when, in fact, it is false (type II or beta error).

    Type I error occurs when you reject the null hypothesis inappropriately, and type II error occurs when you fail to reject it when you should. The other two outcomes are consistent with what you might look upon as true positives and true negatives.

    Sample size is extremely important, for it goes to the next point of all these discussions: when does statistical significance occur and yet prove irrelevant, and when does statistical significance fail to occur and yet the actual finding prove to be of great relevance? Sample size dictates that.
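    The sample-size point can be seen by holding the observed effect fixed and varying only the number of patients. A sketch using a standard two-proportion z-test (normal approximation; the response rates are illustrative, not from any trial):

```python
from math import erfc, sqrt

def two_prop_p(x1, n1, x2, n2):
    """Two-sided p-value for a two-proportion z-test (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return erfc(z / sqrt(2))  # two-sided normal tail probability

# The identical 20% vs. 30% response rates; only the sample size differs.
print(round(two_prop_p(20, 100, 30, 100), 2))    # ~0.10: "not significant"
print(two_prop_p(200, 1000, 300, 1000) < 0.001)  # True: highly "significant"
```

    An effect that is clinically identical can be "significant" or not depending entirely on how many patients were enrolled, which is why the p-value alone says nothing about clinical importance.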
