Chapter 6

Looking at Clinical Trials and Drug Development

In This Chapter

arrow Understanding preclinical (that is, “before humans”) studies

arrow Walking through the phases of clinical studies that test on humans

arrow Checking out other special-purpose clinical trials

Many of the chapters in this book concentrate on specific statistical techniques and tests, but this chapter gives a bigger picture of one of the main settings where biostatistics is used: clinical research (covered in Chapter 5). As an example, I talk about one particular kind of clinical research: developing a drug and bringing it to market. Many a biostatistician may go their entire career without ever being involved in a clinical drug trial. However, clinical research is worth taking a look at for several reasons:

check.png It’s a broad area of research that covers many types of investigation: laboratory, clinical, epidemiological, and computational experiments.

check.png All the statistical topics, tests, and techniques covered in this book are used at one or more stages of drug development.

check.png It’s a high-stakes undertaking, costing hundreds of millions of dollars, and the return on that investment can range from zero dollars to many billions of dollars. So drug developers are highly motivated to do things properly, and that includes using the best statistical techniques throughout the process.

check.png Because of the potential for enormous good — or enormous harm — drug development must be conducted at the highest level of scientific (and statistical) rigor. The process is very closely scrutinized and heavily regulated.

This chapter takes you through the typical steps involved in bringing a promising chemical to market as a prescription drug. In broad strokes, this process usually involves most of the following steps:

1. Discover a chemical compound or biological agent that shows promise as a treatment for some disease, illness, or other medical or physical condition (which I refer to throughout this chapter as a target condition).

2. Show that this compound does things at the molecular level (inside the cell) that indicate it may be beneficial in treating the target condition.

3. Test the drug in animals.

The purpose of this testing is to show that, at least for animals, the drug seems to be fairly safe, and it appears to be effective in treating the target condition (or something similar to it).

4. Test the drug in humans.

This testing entails a set of logical steps to establish the largest dose that a person can tolerate, find the dose (or a couple of doses) that seems to offer the best combination of safety and efficacy, and demonstrate convincingly that the drug works.

5. Continue to monitor the safety of the drug in an ever-increasing number of users after it’s approved and marketed.

remember.eps Throughout this chapter, I use the words effectiveness and efficacy (and their adjective forms effective and efficacious) to refer to how well a treatment works. These words are not synonymous:

check.png Efficacy refers to how well a treatment works in an ideal-world situation, where everybody takes it exactly as prescribed.

check.png Effectiveness refers to how well the treatment works in the real world, where people might refuse to take it as directed (or at all) because of its unpleasantness and/or its side effects. (This is especially relevant in chemotherapy trials, but it comes up in all kinds of clinical testing.)

Both of these aspects of drug performance are important: efficacy perhaps more so in the early stages of drug development, because it addresses the theoretical question of whether the drug can possibly work, and effectiveness more so in the later stages, where the drug’s actual real-world performance is of more concern. Common usage doesn’t always honor this subtle distinction; the terms “safety and efficacy” and “safe and effective” seem to be more popular than the alternatives “safety and effectiveness” or “safe and efficacious,” so I use the more popular forms in this chapter.

Not Ready for Human Consumption: Doing Preclinical Studies

Before any proposed treatment can be tested on humans, there must be at least some reason to believe that the treatment might work and that it won’t put the subjects at undue risk. So every promising chemical compound or biological agent must undergo a series of tests to assemble this body of evidence before ever being given to a human subject. These “before human” experiments are called preclinical studies, and they’re carried out in a progressive sequence:

check.png Theoretical molecular studies (in silico): You can take a chemical structure suggested by molecular biologists, systematically generate thousands of similar molecules (varying a few atoms here and there), and run them through a program that tries to predict how well each variant may interact with a receptor (a molecule often in or on the body’s cells that plays a role in the development or progression of the target condition). This kind of theoretical investigation is sometimes called an in silico study (a term first used in 1989 to describe research conducted by computer simulation of biological molecules, cells, organs, or organisms; it means in silicon, referring to semiconductor computer circuits). These techniques are now routinely used to design new large-molecule drugs (like naturally occurring or artificially created proteins), for which the computer simulations tend to be quite reliable.

check.png Chemical studies (in vitro): While computer simulations may suggest promising large molecules, you generally have to evaluate small-molecule drugs in the lab. These studies try to determine the physical, chemical, electrical, and other properties of the molecule. They’re called in vitro studies, meaning in glass, a reference to the tired old stereotype of the chemist in a white lab coat, pouring a colored liquid from one test tube to another. Don’t forget your goggles and wild hair.

check.png Studies on cells and tissues (ex vivo): The next step in evaluating a candidate drug is to see how it actually affects living cells, tissues, and possibly even complete organs. This kind of study is sometimes called ex vivo, meaning out of the living, because the cells and tissues have been taken out of a living creature. The researchers are looking for changes in these cells and tissues that seem to be related, in some way, to the target condition.

check.png Animal studies (in vivo): After studies show that a molecule undergoes chemical reactions and interacts the right way with the targeted receptor sites, you can evaluate the drug’s effect on complete living organisms to see what the drug actually does for them (how effective it is) and what it may do to them (how safe it is). These studies are called in vivo, meaning in the living, because they’re conducted on intact living creatures.

technicalstuff.eps In vivo studies tend to be more useful for small-molecule drugs than for large-molecule drugs. Animals and humans may react similarly to small molecules, but the actions of antibodies and proteins tend to be very different between species.

Testing on People during Clinical Trials to Check a Drug’s Safety and Efficacy

After a promising candidate drug has been thoroughly tested in the laboratory and on animals, shows the desired effect, and hasn’t raised any serious warning flags about dangerous side effects (see the preceding section), then it’s time to make the big leap to clinical trials: testing on humans. But you can’t just give the drug to a bunch of people who have the condition and see whether it helps them. Safety issues become a serious concern when human subjects are involved, and human testing has to be done in a very systematic way to minimize risks.

Clinical drug development is a heavily regulated activity. Every country has an agency that oversees drug development — the U.S. Food and Drug Administration (FDA), Health Canada, the European Medicines Agency (EMA), the Japanese Ministry of Health, Labour and Welfare, and on and on. These agencies have an enormous number of rules, regulations, guidelines, and procedures for every stage of the process. (Note: In this chapter, I use FDA to stand not only for the U.S. regulatory agency, but for all such agencies worldwide.)

Before you can test your drug in people, you must show the FDA all the results of the testing you’ve done in laboratory animals and what you propose to do while testing your drug on humans. The FDA decides whether it’s reasonably safe to do the clinical trials. The following sections describe the four phases of a clinical trial.

Phase I: Determining the maximum tolerated dose

An old saying (five centuries old, in fact) is that "the dose makes the poison." This adage means that everything is safe in low enough doses (I can fearlessly swallow one microgram of pure potassium cyanide), but anything can be lethal in sufficiently large doses (drinking a gallon of pure water in an hour may well kill me).

So the first step (Phase I) in human drug testing is to determine how much drug you can safely give to a person, which scientists express in more precisely defined terms:

check.png Dose-limiting toxicities (DLTs) are unacceptable side effects that would force the treatment to stop (or continue at a reduced dose). The term unacceptable is relative; severe nausea and vomiting would probably be considered unacceptable (and therefore DLTs) for a headache remedy, but not for a chemotherapy drug. For each drug, a group of experts decides what reactions constitute a DLT.

check.png The maximum tolerated dose (MTD) is the largest dose that doesn’t produce DLTs in a substantial number of subjects (say, more than 5 or 10 percent of them).

The goal of a Phase I trial is to determine the drug’s MTD, which will mark the upper end of the range of doses that will be allowed in all subsequent trials of this drug. A typical Phase I trial enrolls subjects into successive groups (cohorts) of about eight subjects each. The first cohort gets the drug at one particular dose and its subjects are then watched to see whether they experience any DLTs. If not, then the next cohort is given a larger dose (perhaps 50 percent larger or twice as large). This dose-escalation process continues as long as no DLTs are seen.

As soon as one or more DLTs occur in a cohort, the area around that dose level is explored in more detail in an attempt to nail down the best estimate of the MTD. Depending on how many DLTs are observed at a particular dose level, the protocol may specify testing another cohort at the same dose, at the previous (lower) dose level, or somewhere in between.
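If you like to think in code, the dose-escalation logic just described can be sketched in a few lines of Python. Every number here — the starting dose, the 50 percent escalation step, and the made-up DLT counts — is a hypothetical illustration, not a real trial protocol:

```python
def escalate_until_dlt(start_dose, dlt_counts, step=1.5):
    """Walk successive cohorts up the dose ladder until the first
    cohort that experiences any DLTs.

    dlt_counts: number of DLTs observed in each successive cohort
    (made-up data here; in a real trial these come from the clinic).
    Returns (highest clean dose, dose where DLTs first appeared).
    """
    dose, last_clean = start_dose, None
    for dlts in dlt_counts:
        if dlts > 0:
            return last_clean, dose
        last_clean = dose
        dose *= step  # escalate by 50 percent for the next cohort
    return last_clean, None  # no DLTs seen at any tested dose

# Made-up trial: cohorts dosed at 10, 15, 22.5, and 33.75 mg;
# two DLTs show up in the fourth cohort
clean, toxic = escalate_until_dlt(10, [0, 0, 0, 2])
print(clean, toxic)  # 22.5 33.75
```

At that point the protocol would direct further cohorts to doses between 22.5 and 33.75 milligrams to home in on the MTD.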

Phase I trials are usually done on healthy volunteers because they’re mainly about safety. An exception is trials of cancer chemotherapy agents, which have so many unpleasant effects (many of them can actually cause cancer) that it’s usually unethical to give them to healthy subjects.

A drug may undergo several Phase I trials. Many drugs are meant to be taken more than once — either as a finite course of treatment (as for chemotherapy) or as an ongoing treatment (as for high blood pressure). The first Phase I trial usually involves a single dosing; others may involve repetitive dosing patterns that reflect how the drug will actually be administered to patients.

The statistical analysis of Phase I data is usually simple, with little more than descriptive summary tables and graphs of event rates at each dose level. Usually, no hypotheses are involved, so no statistical testing is required. Sample sizes for Phase I studies are usually based on prior experience with similar kinds of drugs, not on formal power calculations (see Chapter 3).

The dose level for the first cohort is usually some small fraction of the lowest dose that causes toxicity in animals. You might guess that tolerable drug doses should be proportional to the weight of the animal, so if a 5-kilogram monkey can tolerate 30 milligrams of a drug, a 50-kilogram human should tolerate 300 milligrams. But tolerable doses often don’t scale up in such a simple, proportionate way, so researchers often cut the scaled-up dose estimate down by a factor of 10 or more to get a safe starting dose for human trials.
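The monkey-to-human arithmetic above, with the factor-of-10 cutback applied, looks like this (the function name and parameters are mine, for illustration only):

```python
def human_starting_dose(animal_dose_mg, animal_kg, human_kg=50, safety_factor=10):
    """Naive weight-proportional scale-up, then divide by a safety factor.

    The factor of 10 follows the rough precaution described in the text;
    real first-in-human dose selection uses more elaborate methods.
    """
    scaled = animal_dose_mg * (human_kg / animal_kg)  # 30 mg -> 300 mg
    return scaled / safety_factor

# 5-kg monkey tolerates 30 mg: scales to 300 mg for a 50-kg human,
# then the safety factor cuts the starting dose to 30 mg
print(human_starting_dose(30, 5))  # 30.0
```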

warning_bomb.eps Even with these precautions, the first cohort is always at extra risk, simply because you’re sailing in uncharted waters. Besides the possibility of nonproportional scale-up, there’s the additional danger of a totally unforeseen serious reaction to the drug. Some drugs (especially antibody drugs) that were perfectly well tolerated in rats, dogs, and monkeys have triggered severe (even fatal) reactions in humans at conservatively scaled-up doses that were thought to be completely safe. So extra precautions are usually put in place for these “first in man” studies.

Besides the primary goal of establishing the MTD for the drug, a Phase I study almost always has some secondary goals as well. After all, if you go to the trouble of getting 50 subjects, giving them the drug, and keeping them under close scrutiny for a day or two afterward, why not take advantage of the opportunity to gather a little more data about how the drug behaves in humans?

With a few extra blood draws, urine collections, and a stool specimen or two, you can get a pretty good idea of the drug’s basic pharmacokinetics:

check.png How fast it’s absorbed into the bloodstream (if it’s taken orally)

check.png How fast (and by what route) it’s eliminated from the body

And, of course, you don’t want to pass up the chance to see whether the drug shows any signs of efficacy (no matter how slight).

Phase II: Finding out about the drug’s performance

After the Phase I trials, you’ll have a good estimate of the MTD for the drug. The next step is to find out about the drug’s safety and efficacy at various doses. You may also be looking at several different dosing regimens, including the following options:

check.png What route (oral or intravenous, for example) to give the drug

check.png How frequently to give the drug

check.png For how long (or for what duration) to give the drug

remember.eps Generally, you have several Phase II studies, with each study testing the drug at several different dose levels up to the MTD to find the dose that offers the best tradeoff between safety and efficacy. Phase II trials are called dose-finding trials; at the end of Phase II, you should have established the dose (or perhaps two doses) at which you would like to market the drug.

A Phase II trial usually has a parallel, randomized, and blinded design (see Chapter 5 for an explanation of these terms), enrolling a few dozen to several hundred subjects who have the target condition for the drug (such as diabetes, hypertension, or cancer).

You acquire data, before and after drug administration, relating to the safety and efficacy of the drug. The basic idea is to find the dose that gives the highest efficacy with the fewest safety issues. The situation can be viewed in an idealized form in Figure 6-1.


Illustration by Wiley, Composition Services Graphics

Figure 6-1: A Phase II trial tries to find the dose that gives the best tradeoff between safety (few adverse events) and efficacy (high response).

Efficacy is usually assessed by several variables (called efficacy endpoints) observed during the trial. These depend on the drug being tested and can include:

check.png Changes in measurable quantities directly related to the target condition, such as cholesterol, blood pressure, glucose, and tumor size

check.png Increase in quality-of-life questionnaire scores

check.png Percent of subjects who respond to the treatment (using some acceptable definition of response)

Also, safety has several indicators, including

check.png The percent of subjects experiencing various types of adverse events

check.png Changes in safety lab values, such as hemoglobin

check.png Changes in vital signs, such as heart rate and blood pressure

tip.eps Usually each safety and efficacy indicator is summarized and graphed by dose level and examined for evidence of some kind of “dose-response” association. (I describe how to test for significant association between variables in Chapter 17.) The graphs may indicate peculiar dose-response behavior; for example, efficacy may increase up to some optimal dose and then decrease for higher doses.

Figure 6-1, in which an overall safety measure and an overall efficacy measure are both shown in the same graph, is useful because it makes the point that (ideally, at least) there should be some range of doses for which the efficacy is relatively high and the rate of side effects is low. In Figure 6-1, that range appears to lie between 150 and 300 milligrams:

check.png Below 150 milligrams, the drug is very safe (few adverse events), but less than half as effective as it is at higher doses.

check.png Between 150 and 300 milligrams, the drug is quite safe (few adverse events) and seems to be fairly effective (a high response rate).

check.png Above 300 milligrams, the drug is very effective, but more than 25 percent of the subjects experience side effects and other safety issues.

The “sweet spot” for this drug is probably somewhere around 220 milligrams, where 80 percent of the subjects respond to treatment and the side-effects rate is in the single digits. The actual choice of best dose may have to be thrashed out between clinicians, businesspeople, bioethicists, and other experts, based on a careful examination of all the safety and efficacy data from all the Phase II studies.

remember.eps The farther apart the two curves are in Figure 6-1, the wider the range of good doses (those with high efficacy and low side effects) is. This range is called the therapeutic range. But if the two curves are very close together, there may be no dose level that delivers the right mix of efficacy and safety. In that case, it’s the end of the road for this drug. The majority of drugs never make it past Phase II.
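The reasoning behind Figure 6-1 can be mimicked with two made-up dose-response curves. The logistic shapes and every number below are hypothetical; the point is only how a therapeutic range falls out of wherever the two curves sit:

```python
import math

def efficacy(dose):
    """Hypothetical response-rate curve (0 to 1), rising with dose."""
    return 1 / (1 + math.exp(-(dose - 120) / 40))

def adverse(dose):
    """Hypothetical side-effect-rate curve (0 to 1), rising later."""
    return 1 / (1 + math.exp(-(dose - 380) / 40))

# Therapeutic range: doses with high efficacy and a low adverse-event
# rate (cutoffs of 70 percent and 10 percent, chosen for illustration)
good = [d for d in range(0, 501, 10)
        if efficacy(d) > 0.7 and adverse(d) < 0.1]
print(good[0], good[-1])  # 160 290
```

The farther apart the two curves sit, the wider this list of "good" doses becomes; push them together and the list can shrink to nothing.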

Between the end of Phase II and the beginning of Phase III, you meet again with the FDA, which reviews all your results up to that point and tells you what it considers an acceptable demonstration of safety and efficacy.

Phase III: Proving that the drug works

If Phase II is successful, it means you’ve found one or two doses for which the drug appears to be safe and effective. Now you take those doses into the final stage of drug testing: Phase III.

Phase III is kind of like the drug’s final exam time. It has to put up or shut up, sink or swim, pass or fail. Up to this point, the drug appears to be safe and effective, and you have reason to hope that it’s worth the time and expense to continue development. Now the drug team has to convince the FDA — which demands proof at the highest level of scientific rigor — that the drug, in the dose(s) at which you plan to market it, is safe and effective. Depending on what treatments (if any) currently exist for the target condition, you may have to show that your drug is better than a placebo or that it’s at least as good as the current best treatment. (You don’t have to show that it’s better than the current treatments. See Chapter 16.)

technicalstuff.eps The term as good as can refer to both safety and efficacy. Your new cholesterol medication may not lower cholesterol as much as the current best treatments, but if it’s almost completely free of the (sometimes serious) side effects associated with the current treatments, it may be considered just as good or even better.

Usually you need to design and carry out two pivotal Phase III studies. In each of these, the drug must meet the criteria agreed upon when you met with the FDA after Phase II.

The pivotal Phase III studies have to be designed with complete scientific rigor, and that includes absolutely rigorous statistical design. You must

check.png Use the most appropriate statistical design (as described in Chapter 5).

check.png Use the best statistical methods when analyzing the data.

check.png Ensure that the sample size is large enough to provide at least 80 or 90 percent power to show significance when testing the efficacy of the drug (see Chapter 3).

If the FDA decided that the drug must prove its efficacy for two different measures of efficacy (co-primary endpoints), the study design has to meet even more stringent requirements.

When Phase III is done, your team submits all the safety and efficacy data to the FDA, which thoroughly reviews it and considers how the benefits compare to the risks. Unless something unexpected comes up, the FDA approves the marketing of the drug.

Phase IV: Keeping an eye on the marketed drug

Being able to market the drug doesn’t mean you’re out of the woods yet! During a drug’s development, you’ve probably given the drug to hundreds or thousands of subjects, and no serious safety concerns have been raised. But if 1,000 subjects have taken the drug without a single catastrophic adverse event, that only means that the rate of these kinds of events is probably less than 1 in 1,000. When your drug hits the market, tens of millions of people may use it; if the true rate of catastrophic events is, for example, 1 in 2,000, then there will be about 5,000 of those catastrophic events in 10 million users. (Does the term class action lawsuit come to mind?)

If the least inkling of possible trouble (or a signal, in industry jargon) is detected in all the data from the clinical trials, the FDA is likely to call for ongoing monitoring after the drug goes on the market. This process is referred to as a risk evaluation and mitigation strategy (REMS), and it may entail preparing guides for patients and healthcare professionals that describe the drug's risks.

technicalstuff.eps The FDA also monitors the drug for signs of trouble. Doctors and other healthcare professionals submit information to the FDA on spontaneous adverse events they observe in their patients. This system is called the FDA Adverse Event Reporting System (FAERS). FAERS has been criticized for relying on voluntary reporting, so the FDA is developing a system that will use existing automated healthcare data from multiple sources.

If a drug is found to have serious side effects, the official package insert may have to be changed to include a warning to physicians about the problem. This printed warning message is surrounded by a black box for emphasis, and is (not surprisingly) referred to as a black-box warning. Such a warning can doom a drug commercially, unless it is the only drug available for a serious condition. And if really serious problems are uncovered in a marketed drug, the FDA can take more drastic actions, including asking the manufacturer to withdraw the drug from the market or even reversing its original approval of the drug.

Holding Other Kinds of Clinical Trials

The Phase I, II, and III clinical trials previously described are part of the standard clinical testing process for every proposed drug; they’re intended to demonstrate that the drug is effective and to create the core of what will become an ever-growing body of experience regarding the safety of the drug. In addition, you’ll probably carry out several other special-purpose clinical trials during drug development. A few of the most common ones are described in the following sections.

Pharmacokinetics and pharmacodynamics (PK/PD studies)

The term pharmacokinetics (PK) refers to the study of how fast and how completely the drug is absorbed into the body (from the stomach and intestines if it’s an oral drug); how the drug becomes distributed through the various body tissues and fluids, called body compartments (blood, muscle, fatty tissue, cerebrospinal fluid, and so on); to what extent (if any) the drug is metabolized (chemically modified) by enzymes produced in the liver and other organs; and how rapidly the drug is excreted from the body (usually via urine, feces, and other routes).

The term pharmacodynamics (PD) refers to the study of the relationship between the concentration of the drug in the body and the biological and physiological effects of the drug on the body or on other organisms (bacteria, parasites, and so forth) on or in the body.

remember.eps Generations of students have remembered the distinction between PK and PD by the following simple description:

check.png Pharmacokinetics is the study of what the body does to the drug.

check.png Pharmacodynamics is the study of what the drug does to the body.

It’s common during Phase I and II testing to collect blood samples at several time points before and after dosing and analyze them to determine the plasma levels of the drug at those times. This data is the raw material on which PK and PD studies are based. By graphing drug concentration versus time, you can get some ballpark estimates of the drug’s basic PK properties: the maximum concentration the drug attains (CMax), the time at which this maximum occurs (tMax), and the area under the concentration-versus-time curve (AUC). And you may also be able to do some rudimentary PD studies from this data — examining the relationship between plasma drug concentrations and measurable physiological responses.
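The ballpark PK estimates described above can be computed directly from a concentration-versus-time table. The numbers below are made up; the trapezoidal rule is the standard simple way to approximate the AUC:

```python
# Made-up plasma concentrations after a single oral dose
times = [0, 0.5, 1, 2, 4, 8, 12]           # hours after dosing
conc  = [0, 3.1, 5.8, 4.9, 2.7, 1.0, 0.3]  # ng/mL (hypothetical)

cmax = max(conc)                 # highest observed concentration
tmax = times[conc.index(cmax)]   # time at which that peak occurred

# AUC by the trapezoidal rule: sum the area of each time slice
auc = sum((conc[i] + conc[i + 1]) / 2 * (times[i + 1] - times[i])
          for i in range(len(times) - 1))

print(cmax, tmax, round(auc, 2))  # 5.8 1 25.95
```

With blood draws clustered tightly around the expected peak, these crude numbers become precise enough to plan the formal PK/PD study.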

But at some point, you may want (or need) to do a more formal PK/PD study to get detailed, high-quality data on the concentration of the drug and any of its metabolites (molecules produced by the action of your body’s enzymes on the original drug molecule) in plasma and other parts of the body over a long enough period of time for almost all the drug to be eliminated from the body. The times at which you draw blood (and other specimens) for drug assays (the so-called sampling time points) are carefully chosen — they’re closely spaced around the expected tMax for the drug and its metabolites (based on the approximate PK results from the earlier trials) and more spread out across the times when nothing of much interest is going on.

A well-designed PK/PD study yields more precise values of the basic PK parameters (CMax, tMax, and AUC) as well as more sophisticated PK parameters, such as the actual rates of absorption and elimination, information about the extent to which the drug is distributed in various body compartments, and information about the rates of creation and elimination of drug metabolites.

A PK/PD study also acquires many other measurements that indicate the drug’s effects on the body, often at the same (or nearly the same) sampling time points as for the PK samples. These PD measurements include:

check.png Blood and urine sampling for other chemicals that would be affected by the drug: For example, if your drug were a form of insulin, you’d want to know glucose concentrations as well as concentrations of other chemicals involved in glucose metabolism.

check.png Vital signs: Blood pressure, heart rate, and perhaps rate of breathing.

check.png Electrocardiographs (ECGs): Tracings of the heart’s electrical activity.

check.png Other physiological tests: Lung function, treadmill, and subjective assessments of mood, fatigue, and so on.

Data from PK/PD studies can be analyzed by methods ranging from the very simple (noting the time when the highest blood concentration of the drug was observed) to the incredibly complex (fitting complicated nonlinear models to the concentrations of drug and metabolites in different compartments over time to estimate reaction rate constants, volumes of distribution, and more). I describe some of these complex methods in Chapter 21.

Bioequivalence studies

You may be making a generic drug to compete with a brand-name drug already on the market whose patent has expired. The generic and brand-name drugs are the exact same chemical, so it may not seem reasonable to have to go through the entire drug development process for a generic drug. But because there are differences in the other ingredients that go into the drug (such as fillers and coatings), you have to show that your formulation is essentially bioequivalent to the brand-name drug. Bioequivalent means that your generic product puts the same (or nearly the same) amount of the drug's active ingredient into the blood as the brand-name product.

A bioequivalence study is usually a fairly simple pharmacokinetic study, having either a parallel or a crossover design (see Chapter 5 for more on design structure). Each subject is given a dose of the product (either the brand-name or generic drug), and blood samples are drawn at carefully chosen time points and analyzed for drug concentration. From this data, the basic PK parameters (AUC, CMax, and so on) are calculated and compared between the brand-name and generic versions. I describe the statistical design and analysis of bioequivalence studies in Chapter 16.
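The comparison is conventionally done on the logarithms of the PK parameters, with the geometric mean ratio required to fall (with 90 percent confidence) inside the commonly cited 80-to-125-percent acceptance range. Here's a minimal sketch; the paired AUC values are made up, and the hard-coded t value (1.895, for 7 degrees of freedom) stands in for a proper lookup:

```python
import math
from statistics import mean, stdev

# Hypothetical paired AUC values from a crossover study (one pair
# per subject: generic "test" product versus brand-name "reference")
auc_test = [102, 95, 110, 98, 105, 93, 99, 101]
auc_ref  = [100, 97, 104, 95, 108, 96, 97, 100]

# Work on the log scale, as is conventional for bioequivalence
logratios = [math.log(t / r) for t, r in zip(auc_test, auc_ref)]
m, s, n = mean(logratios), stdev(logratios), len(logratios)
t_crit = 1.895                      # t for a 90% two-sided CI, 7 df
half = t_crit * s / math.sqrt(n)
lo, hi = math.exp(m - half), math.exp(m + half)

# Bioequivalent if the whole 90% CI sits inside 0.80 to 1.25
print(0.80 <= lo and hi <= 1.25)  # True
```

With these made-up numbers the confidence interval hugs 1.0, so the generic formulation would pass.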

Thorough QT studies

In the mid-1900s, it was recognized that certain drugs interfered with the ability of the heart to "recharge" its muscles between beats, which could lead to a particularly life-threatening form of cardiac arrhythmia called Torsades de Pointes (TdP). Fortunately, warning signs of this arrhythmia show up as a distinctive pattern on an electrocardiogram (ECG) well before it progresses to TdP.

You’ve seen the typical squiggly pattern of an ECG in movies (usually just before it becomes a flat line). Cardiologists have labeled the various peaks and dips on an ECG tracing with consecutive letters of the alphabet, from P through T, like you see in Figure 6-2.



Figure 6-2: The parts of a typical ECG tracing of one heartbeat.

That last bump, called the T-wave, is the one to look at. It depicts the movement of potassium ions back into the cell (called repolarization), getting it ready for the next beat. If repolarization is slowed down, the T-wave will be stretched out. For various reasons, cardiologists measure that stretching out time as the number of milliseconds between the start of the Q wave and the end of the T wave; this is called the QT interval.

The QT interval is usually adjusted for heart rate by any of several formulas, resulting in a “corrected” QT interval (QTc), which is typically around 400 milliseconds (msec). If a drug prolongs QTc by 50 milliseconds or more, things start to get dicey. Ideally, a drug shouldn’t prolong QTc by even 10 milliseconds.
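Of the several correction formulas, one of the oldest and most widely used is Bazett's: divide the QT interval by the square root of the RR interval (the time between beats) in seconds. A small sketch:

```python
import math

def qtc_bazett(qt_msec, heart_rate_bpm):
    """Bazett's correction: QTc = QT / sqrt(RR), with RR in seconds.

    RR is the beat-to-beat interval, i.e., 60 / heart rate.
    """
    rr_sec = 60 / heart_rate_bpm
    return qt_msec / math.sqrt(rr_sec)

# At 60 beats per minute, RR = 1 second, so QTc equals the raw QT
print(qtc_bazett(400, 60))           # 400.0
# A faster heart rate stretches the corrected value upward
print(round(qtc_bazett(380, 80)))    # 439
```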

Data from all preclinical and human drug trials are closely examined for the following so-called signals that the drug may tend to mess up QTc:

check.png Any in-silico or in-vitro studies indicate that the drug molecule might mess up ion channels in cell membranes.

check.png Any ECGs (in animals or humans) show signs of QTc prolongation.

check.png Any drugs that are chemically similar to the new drug have produced QT prolongation.

If any such signals are found, the FDA will probably require you to conduct a special thorough QT trial (called a TQT or QT/QTc trial) to determine whether your drug is likely to cause QT prolongation.

A typical TQT may enroll about 100 healthy volunteers and randomize them to receive either the new drug, a placebo, or a drug that’s known to prolong QTc by a small amount (this is called a positive control, and it’s included so that you can make a convincing argument that you’d recognize a QTc prolongation if you saw one). ECGs are taken at frequent intervals after administering the product, clustered especially close together at the times near the expected maximum concentration of the drug and its known metabolites. Each ECG is examined, the QT and heart rate are measured, and the QTc is calculated.

tip.eps The statistical analysis of a TQT is similar to that of an equivalence trial (which I describe in Chapter 16). You’re trying to show that your drug is equivalent to a placebo with respect to QTc prolongation, within some allowable tolerance, which the FDA has decreed to be 10 milliseconds. Follow these steps:

1. Subtract the QTc of the placebo from the QTc for the drug and for the positive control at the same time point, to get the amount of QTc prolongation at each time point.

2. Calculate the 90 percent confidence intervals around the QTc prolongation values, as described in Chapter 10.

3. Plot the average differences, along with vertical bars representing the confidence intervals, as shown in Figure 6-3.
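Steps 1 and 2 can be sketched in Python for a single time point. The QTc changes, the simple two-sample interval, and the hard-coded t value (1.734, for 18 degrees of freedom) are all hypothetical simplifications; real TQT analyses typically use baseline-adjusted, repeated-measures models:

```python
import math
from statistics import mean, stdev

# Hypothetical QTc changes from baseline (msec) at one time point
drug    = [3, 5, 1, 4, 2, 6, 3, 2, 4, 3]
placebo = [1, 2, 0, 3, 1, 2, 2, 1, 2, 1]

# Step 1: placebo-subtracted QTc prolongation
diff = mean(drug) - mean(placebo)

# Step 2: 90% confidence interval around that difference
se = math.sqrt(stdev(drug) ** 2 / len(drug)
               + stdev(placebo) ** 2 / len(placebo))
t_crit = 1.734  # t for a 90% two-sided CI, 18 df
lo, hi = diff - t_crit * se, diff + t_crit * se

# The drug "passes" at this time point if the upper limit stays
# below the FDA's 10-millisecond threshold
print(round(diff, 1), hi < 10)  # 1.8 True
```

Repeating this at every sampling time point produces the series of points and bars plotted in Step 3.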



Figure 6-3: The results of a TQT study. The comparator should exceed the 10-millisecond threshold; the new drug should not.

To pass the test, the drug’s QTc mean prolongation values and their confidence limits must all stay below the 10-millisecond limit. For the positive control drug, the means and their confidence limits should go up; in fact, the confidence limits should lie completely above 0 (that is, there must be a significant increase) at those time points where the control drug is near its peak concentration.
