Workshop on the Design and Analysis of Clinical Trials
(24 - 28 Oct 2011)


~ Abstracts ~

 

Statistical challenges and adaptive design strategies in evaluation of new vaccines
Ivan S. F. Chan, Merck Research Laboratories, USA


Vaccines are biological products that work primarily by introducing antigen or attenuated live virus into the body to trigger an immune response specific to a targeted disease. Unlike drugs, vaccines are typically developed for the prevention of disease in healthy or uninfected subjects and are usually administered in a single series (with a booster in some cases). Because of their biological nature, vaccines typically have more manufacturing variability and are less stable than drugs. These unique characteristics pose special challenges in designing vaccine trials and, depending on the targeted disease, often call for very large studies and long follow-up. In this presentation, we will highlight some of the key statistical challenges in the design and analysis of trials evaluating vaccine efficacy and the predictive value of immunological markers. We will also discuss several adaptive design strategies that aim to improve the efficiency of vaccine clinical development. Several real examples will be used to illustrate the methodologies.


 

Interim analyses that get it wrong
Simon Day, Roche Products Limited, UK


Clinical trials are fragile experiments that do not take kindly to being fooled with. At the same time, the fascination (but also the problem) of statistical investigations is that we use them to answer questions to which we do not (and cannot ever) know the right answer. We cannot use the analogy of shooting arrows at a target and looking at measures of bias and precision. Instead, we shoot arrows blindly into the air and, based on where they land, with some consideration of bias and precision, infer where the target lies.

It is very unusual to be able to re-evaluate an interim analysis to see if we got the right answer. This talk will present three case studies that can, to some degree, do just that. We can answer the elusive question of "what would have happened if...?" And we should worry about the answers we find.


 

An overview of competing risks data, with applications in clinical trials
Jason Fine, The University of North Carolina at Chapel Hill, USA


This talk will survey competing risks, with an eye towards clinical trials applications. Conceptual issues related to endpoint definition will be explored, followed by a discussion of standard analytic methods, including one-sample estimation, two-sample testing, and regression modeling. Issues related to summarizing treatment differences will be examined, including challenges in extending standard approaches for independently censored data to the competing risks setting. A primary focus will be a comparison of analyses based on the cause-specific hazard and on the cumulative incidence functions. The current state of competing risks methodology in clinical trials will be reviewed, with potential areas for further development highlighted, particularly in the area of adaptive designs. Real data examples will illustrate the main points.
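As a rough illustration of the cumulative incidence function mentioned above, it can be estimated nonparametrically in the style of the Aalen-Johansen estimator: the increment at each event time is the all-cause survival just before that time multiplied by the cause-specific hazard increment. The Python sketch below, with hypothetical data, is a minimal assumption-laden version for intuition, not the analyses presented in the talk:

```python
def cumulative_incidence(times, causes, cause=1):
    """Nonparametric cumulative incidence function for one competing risk
    (Aalen-Johansen style). `causes`: 0 = censored, 1, 2, ... = failure cause.
    Returns a list of (time, CIF) points for the requested cause."""
    data = sorted(zip(times, causes))
    n = len(data)
    surv = 1.0        # all-cause survival just before the current time
    cif = 0.0
    at_risk = n
    points = []
    i = 0
    while i < n:
        t = data[i][0]
        d_cause = d_all = censored = 0
        while i < n and data[i][0] == t:       # gather ties at time t
            if data[i][1] == 0:
                censored += 1
            else:
                d_all += 1
                if data[i][1] == cause:
                    d_cause += 1
            i += 1
        cif += surv * d_cause / at_risk        # CIF increment for this cause
        surv *= 1.0 - d_all / at_risk          # Kaplan-Meier step, all causes
        at_risk -= d_all + censored
        points.append((t, cif))
    return points
```

Note that, unlike one minus a cause-specific Kaplan-Meier curve, this estimator never overstates the probability of failing from the cause of interest when competing events are present.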


 

Dose-finding experiments in clinical trials
Nancy Flournoy, University of Missouri, USA


Consider two situations. In the first, toxicity increases with dose. In the second, one considers efficacy in addition and assumes that efficacy also increases with dose; except at the extremes, the probability of efficacy without toxicity (success) then increases with dose up to a point at which toxicity becomes great enough to cause it to turn down. In the first case, one typically seeks to identify a dose with a prescribed toxicity rate; in the second, one seeks the dose that maximizes the probability of success. These goals can be posed in terms of estimation or dose selection, since only a finite number of doses is typically permitted. Two common classes of procedures that differ in many fundamental ways are discussed: up-and-down designs and best intention designs.
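The first situation can be illustrated with a biased-coin up-and-down rule in the spirit of Durham and Flournoy: a toxicity always forces a step down, while escalation after a non-toxicity happens only with a probability chosen so the random walk hovers near the dose whose toxicity rate equals the target. The simulation below uses made-up toxicity probabilities and is only a sketch of the idea, not the exact procedures discussed in the talk:

```python
import random

def biased_coin_up_and_down(tox_probs, target=0.3, n=40, start=0, seed=1):
    """Simulate a biased-coin up-and-down design targeting a toxicity
    rate `target` < 0.5. `tox_probs` gives the (unknown in practice)
    true toxicity probability at each dose level, assumed increasing.
    Returns the sequence of assigned dose levels."""
    rng = random.Random(seed)
    b = target / (1.0 - target)   # escalation-coin bias
    k = start
    path = [k]
    for _ in range(n - 1):
        tox = rng.random() < tox_probs[k]
        if tox:
            k = max(k - 1, 0)                     # toxicity: step down
        elif rng.random() < b:
            k = min(k + 1, len(tox_probs) - 1)    # no toxicity: step up w.p. b
        path.append(k)
    return path
```

The stationary distribution of this walk concentrates around the dose whose toxicity probability is closest to the target, which is what makes dose selection (rather than pointwise estimation) natural here.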


 

Clinical trials for personalized medicine: designs and statistical inference
Feifang Hu, University of Virginia, USA


In a short period of time, advances in genetics have allowed scientists to identify genes (biomarkers) that are linked with certain diseases. Clinical trials play an essential role in translating these scientific findings into real-world products for those who need them (personalized medicine). Personalized medicine is an approach that allows physicians to tailor a treatment regimen to an individual patient's characteristics (which could be biomarkers or other covariates). To develop personalized medicine, we need new designs for clinical trials so that genetic information and other biomarkers can be incorporated to assist in treatment selection. This talk concerns the design of clinical trials and the corresponding statistical inference for personalized medicine.

The talk first provides a brief review of design and statistical inference related to personalized medicine. Personalized medicine raises new challenges for the design of clinical trials: (1) more covariates (biomarkers) have to be considered, and (2) particular attention must be paid to the interaction between treatment and covariates. We then discuss several new families of designs for personalized medicine. New techniques are introduced to study the theoretical properties of the proposed designs, and their advantages are demonstrated through both theoretical and numerical studies. Finally, we discuss further important statistical issues arising from the complex data structure of clinical trials for personalized medicine.


 

Causal effects based on randomized clinical and intervention trials
Alan Hubbard, University of California, USA


Randomization has long been used in therapeutic and intervention trials to form the basis of causal inference about treatment or intervention effects, and there is by now a rich literature on the design and analysis of randomized clinical and intervention trials. Recent interest has focused on various forms of adaptive and sequential designs intended to maximize the utility of trials and increase their efficiency. In addition, there has been renewed interest in the design and analysis of trials where a pure 'intention to treat' comparison may not answer the key questions of scientific interest. This often occurs, for example, when additional factors come into play post-randomization, such as non-compliance with treatment assignment. A different application occurred in the Methods for Improving Reproductive Health in Africa (MIRA) trial, designed to examine the effects of the diaphragm and lubricant gel in reducing sexually transmitted infections, where the role of condom use during follow-up raised similar causal questions. Similar issues arise in a very different situation involving blinded pain trials with treatment-related side effects, due to the potential for unmasking of treatment assignment. We will introduce, motivate and discuss these issues and discuss estimators of parameters based on definitions of the direct effect of the treatment or intervention of interest.


 

Group sequential and adaptive clinical trial designs
Christopher Jennison, University of Bath, UK


I shall survey methods for the design, monitoring and adaptation of clinical trials. I shall describe the group sequential approach to monitoring a trial and its application to a variety of response types, including survival data. I shall present "error spending" tests that offer flexibility to deal with fluctuations in patient accrual or response rates and the uncertainties of survival data.
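As a small illustration of the error-spending idea, a spending function allocates the cumulative type I error to be spent at each information fraction, and the monitoring boundaries are then solved to match these increments; this is what accommodates fluctuating accrual. The sketch below uses the Lan-DeMets spending function of O'Brien-Fleming type as an illustrative choice (one of several in common use):

```python
from statistics import NormalDist

def obf_spending(t, alpha=0.025):
    """Lan-DeMets error-spending function of O'Brien-Fleming type:
    cumulative one-sided type I error spent by information fraction t,
    0 < t <= 1. Spends very little error early, so early-stopping
    boundaries are stringent and the final test is near-nominal."""
    nd = NormalDist()
    z = nd.inv_cdf(1 - alpha / 2)
    return 2 * (1 - nd.cdf(z / t ** 0.5))

# Error spent at three equally spaced looks (cumulative):
schedule = [obf_spending(t) for t in (1 / 3, 2 / 3, 1.0)]
```

By construction the full level alpha is reached exactly at t = 1, and the looks need not be pre-specified: whenever an interim analysis occurs, the observed information fraction determines how much error may be spent.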

I shall describe adaptive methods based on "combination tests" and discuss their application in sample size re-estimation. I shall demonstrate the benefits of adaptation when investigators wish to test multiple hypotheses and illustrate these by "enrichment" designs which allow refinement of the patient population based on interim response data.
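For instance, the weighted inverse-normal combination function merges stage-wise one-sided p-values with weights fixed before the second stage, which is what preserves the overall type I error even after data-dependent adaptations such as sample size re-estimation. A minimal sketch (the equal default weights are an illustrative assumption):

```python
from math import sqrt
from statistics import NormalDist

def inverse_normal_combination(p1, p2, w1=1.0, w2=1.0):
    """Weighted inverse-normal combination of two stage-wise one-sided
    p-values. The weights must be chosen before stage-2 data are seen;
    the combined statistic is then standard normal under the null even
    if the stage-2 sample size was re-estimated at the interim look."""
    nd = NormalDist()
    z = (w1 * nd.inv_cdf(1 - p1) + w2 * nd.inv_cdf(1 - p2)) / sqrt(w1**2 + w2**2)
    return 1 - nd.cdf(z)   # combined one-sided p-value
```

With equal weights and p1 = p2 = 0.05, the combined p-value is about 0.01, smaller than either stage alone, reflecting the accumulation of evidence across stages.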


 

On regulatory statistics
Peter Anthony Lachenbruch, Oregon State University, USA


Drugs in all countries are regulated by agencies charged with ensuring that drugs, biologics and devices are evaluated in an appropriate fashion. The methods vary among countries and by type of product. I will speak of my own experience with the FDA. These remarks are adapted from a book I am writing with Janet Wittes.

The evaluation process goes through several phases: pre-clinical studies, which ensure that the product does not have major issues in laboratory animals; phase I studies, which are the first in humans and are designed to ensure there are no major issues in humans; phase II studies, which establish the appropriate dose and schedule of administration; and phase III studies, which show that the product is efficacious and safe.

After the phase III studies are complete and the drug is approved for marketing, a label must be written that describes the studies, the limitations on use, and the safety issues that have been identified. There is usually a negotiation between the FDA and the sponsor, with the sponsor wanting a broader set of indications and the FDA being concerned about over-interpreting the data.

At all phases of drug development, it is vitally important that the sponsor be in communication with the FDA. I have seen cases in which the sponsor's scientists were forbidden to speak with FDA reviewers without a regulatory affairs officer present or on the phone, for fear that something would be disclosed that the sponsor did not want the FDA to know about. At the same time, some FDA reviewers were over-cautious and restrictive in their conclusions.

We will discuss the contents of the IND and the NDA/BLA, and how to meet with the FDA. The overriding principle is full disclosure and no lying. One VP stated that the FDA had not placed any stipulations on the trials, but quickly backed off when a clinical reviewer produced a letter stipulating conditions on the trials. Such lies create suspicion in the review team and can lead to greater scrutiny of the application.


 

Short course on adaptive methods for clinical trials
Tze Leung Lai, Stanford University, USA


The course is divided into two parts.

Part I. Overview of Literature and a New Approach to Adaptive Designs:
Brief survey, theory of sequential testing, an efficient approach, comparative studies.

Part II. Adaptation beyond Interim Sample Size Determination:
Adaptive design with interim dose selection, seamless phase II-III cancer clinical trials, biomarker-based adaptive design.

Each part is followed by a discussion of software development issues, with illustrations by Prof Balasubramanian Narasimhan, Stanford University, USA.

The course should be accessible to graduate students in Statistics at NUS.


 

Principles for response-adaptive randomization
William Rosenberger, George Mason University, USA


We discuss guiding principles for the use of response-adaptive randomization in clinical trials. First, we describe a set of criteria by which the investigator can determine whether response-adaptive randomization is useful. We then discuss a template for the appropriate selection of a response-adaptive randomization procedure. Such guidance should be useful in designing state-of-the-art clinical trials. In addition, we compare recent designs with respect to these criteria and give their strengths and weaknesses.
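One classical procedure of this kind is the randomized play-the-winner urn, in which each success adds balls favoring the successful arm and each failure adds balls favoring the other, tilting future allocations toward the better-performing treatment. The simulation below uses hypothetical success probabilities and is only a minimal illustration, not a design recommended in the talk:

```python
import random

def play_the_winner(p_success, n=100, alpha=1, beta=1, seed=7):
    """Simulate the randomized play-the-winner RPW(alpha, beta) urn for
    two treatments: a ball is drawn to assign the next patient; a success
    adds `beta` balls of the assigned type, a failure adds `beta` balls
    of the other type. Returns the patient counts on each arm."""
    rng = random.Random(seed)
    urn = [alpha, alpha]             # initial urn composition
    assigned = [0, 0]
    for _ in range(n):
        arm = 0 if rng.random() < urn[0] / (urn[0] + urn[1]) else 1
        assigned[arm] += 1
        success = rng.random() < p_success[arm]
        urn[arm if success else 1 - arm] += beta
    return assigned
```

In the long run the allocation proportion to each arm converges to the relative failure rate of the other arm, so a clearly superior treatment receives most of the patients, which is exactly the ethical appeal (and the inferential complication) of response-adaptive randomization.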


 

Multiple comparisons for multiple endpoints and multiple doses
Ajit C. Tamhane, Northwestern University, USA


Multiple testing problems are omnipresent in modern clinical trials because of the trend by pharmaceutical companies to have large trials that address multiple objectives. Regulatory agencies are requiring more stringent statistical tests for analysis of data from such trials. The first part of this talk will give an overview of the basic multiple testing concepts and procedures. The second part of the talk will focus on gatekeeping procedures for testing hierarchically ordered and logically related null hypotheses that arise in clinical trials involving multiple endpoints, multiple doses, noninferiority-superiority tests and subgroup analyses.
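The simplest gatekeeping-type procedure for hierarchically ordered hypotheses is the fixed-sequence test: hypotheses are tested in a pre-specified order, each at the full level alpha, and testing stops at the first non-rejection. The sketch below (with an illustrative ordering and level) conveys the idea, though the gatekeeping procedures in the talk are considerably more general:

```python
def fixed_sequence(p_values, alpha=0.05):
    """Fixed-sequence multiple test: hypotheses are ordered a priori and
    each is tested at the full level alpha; testing stops at the first
    failure to reject. Controls the familywise error rate in the strong
    sense. Returns one rejection decision per hypothesis."""
    decisions = []
    still_testing = True
    for p in p_values:
        if still_testing and p <= alpha:
            decisions.append(True)
        else:
            decisions.append(False)
            still_testing = False   # gate closed for all later hypotheses
    return decisions
```

Note that a hypothesis late in the sequence cannot be rejected, however small its p-value, once an earlier gate fails; choosing the ordering is therefore a substantive clinical decision.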


 

Bayesian adaptive designs for early-phase oncology trials
Guosheng Yin, The University of Hong Kong, Hong Kong


Phase I oncology trials aim to find the maximum tolerated dose (MTD) for an investigational drug. Phase II trials examine the potential efficacy of the drug by treating patients at the identified MTD. Such early-phase trials involve limited resources and especially small sample sizes. Clinical trials should be designed in an efficient, adaptive and ethical way to save resources, draw correct conclusions earlier, benefit more patients and result in fewer unnecessary toxicities. There has been great interest and extensive development in Bayesian adaptive trial designs, especially for these early-phase trials. We introduce a wide range of statistical methods that are commonly used for designing early-phase clinical trials and interim monitoring from Bayesian perspectives. In particular, we cover the Bayesian model averaging continual reassessment method (BMA-CRM), phase I/II seamless trial designs, Bayesian adaptive randomization, and dose finding in drug-combination trials. We highlight the advantages and disadvantages of each method and give an overview of their broad applications in oncology trials designed at M. D. Anderson Cancer Center. We examine the operating characteristics of these Bayesian adaptive designs through extensive simulation studies.
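As a flavor of the CRM machinery underlying BMA-CRM, the sketch below implements one step of a basic continual reassessment method with a one-parameter power model and a crude grid approximation to the posterior. The skeleton, prior standard deviation, and target rate are illustrative assumptions, not values from the talk:

```python
from math import exp, sqrt, pi

def crm_next_dose(skeleton, doses_given, tox_observed, target=0.25, sigma=1.34):
    """One step of a basic CRM: power model p_i(a) = skeleton[i] ** exp(a)
    with prior a ~ N(0, sigma^2), posterior computed on a grid.
    Recommends the dose whose posterior mean toxicity is closest to
    the target rate."""
    grid = [-5 + 10 * j / 400 for j in range(401)]   # grid over parameter a

    def prior(a):
        return exp(-a * a / (2 * sigma * sigma)) / (sigma * sqrt(2 * pi))

    def likelihood(a):
        lik = 1.0
        for d, y in zip(doses_given, tox_observed):  # y = 1 if toxicity seen
            p = skeleton[d] ** exp(a)
            lik *= p if y else (1 - p)
        return lik

    weights = [prior(a) * likelihood(a) for a in grid]
    total = sum(weights)
    post = [w / total for w in weights]              # normalized posterior
    est = [sum(w * skeleton[i] ** exp(a) for a, w in zip(grid, post))
           for i in range(len(skeleton))]            # posterior mean toxicities
    return min(range(len(skeleton)), key=lambda i: abs(est[i] - target))
```

After each cohort, the posterior is updated with the accumulated toxicity data and the next cohort is treated at the recommended dose; BMA-CRM extends this by averaging over several candidate skeletons to reduce sensitivity to the skeleton choice.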


 

A general framework for sequential and adaptive methods in survival studies
Zhiliang Ying, Columbia University, USA


Adaptive treatment allocation schemes based on interim responses have generated a great deal of recent interest in clinical trials and other follow-up studies. An important application of such schemes is in survival studies, where the response variable of interest is time to the occurrence of a certain event. The first part of this talk reviews existing literature on adaptive treatment allocation and on survival analysis with staggered entry. In the second part, a general framework is introduced that provides a unified approach. The new approach is based on marked point processes with suitably chosen sigma-filtrations. The usual large sample properties are established and applications to adaptive and sequential designs are discussed. This talk is based on joint work with Xiaolong Luo and Gongjun Xu.


 