
PHARMACOMETRIC TOOLS TO OPTIMIZE CONTROL OF DRUG PK/PD MODELS FOR BEST PATIENT CARE.

Roger Jelliffe MD1, Alan Schumitzky PhD1,2, David Bayard PhD1,3, Robert Leary PhD1,4, Alison Thomson PhD5, Andreas Botnen MS1, Michael Van Guilder PhD1, Aida Bustad BA1, and Michael Neely MD1

1. Laboratory of Applied Pharmacokinetics, USC School of Medicine, Los Angeles CA; www.lapk.org, email [email protected]; 2. Department of Mathematics, USC; 3. Guidance and Control Section, Jet Propulsion Laboratory, Pasadena CA; 4. Present address: Pharsight Co., Cary NC; 5. Strathclyde Institute of Pharmacy and Biomedical Sciences, University of Strathclyde, Glasgow, UK

THE PROBLEM:

Models of pharmacokinetic (PK) and pharmacodynamic (PD) systems, physiologically based PK/PD models, and similar mechanistically based models are for the most part used to understand and describe the behavior of drugs and combinations of drugs in animals and in patients. Models are also made to find factors (descriptors or covariates) associated with patient variability. They are usually made to aid in the process of drug discovery, and to develop a dosage regimen of the drug to be used in patients.

For the most part, these models have been of the parametric type, where the parameter distributions are assumed to be either normal or lognormal. The parameters obtained by those methods are those of the equations that describe such distributions, such as means, medians, covariances, and correlations. The models are published, presented at conferences, and used to develop dosage regimens for patients. The pharmaceutical industry dominates such modeling. Indeed, the discipline of Clinical Pharmacology, as a bedside practice, has greatly declined. Clinical Pharmacologists are now generally thought of as those who are associated with the drug industry.

Although modeling plays an increasingly important part in the design and analysis of clinical trials, the optimal care of patients as specific individuals is usually not considered by the pharmaceutical industry, by medical or pharmacy school curricula, or by most current clinical pharmacologists. Specific techniques for individualizing the initial regimen, monitoring each patient clinically and by measuring serum concentrations and other responses, and adjusting the regimen rigorously and scientifically to best achieve specific desired target goals in individual patients are generally not considered. Therapeutic drug monitoring is usually thought of as an added cost, and therefore hard to justify. Pharmacokinetic principles and modeling methods are usually taught as basic sciences rather than as useful clinical tools. Medical schools usually resist teaching them in any clinically useful way, while pharmacy schools often focus on equations rather than on clinical decision making.

THE TOOLS:

Nevertheless, the clinical need for planning, monitoring, and adjusting doses of potentially dangerous and toxic drugs precisely is widely accepted. However, specific tools to achieve these goals for each individual patient are usually not discussed or used. These very practical clinical software tools are each described briefly below.

1. NONPARAMETRIC (NP) POPULATION PK/PD MODELING.

Rather than compute the estimators of an assumed (but usually not proven) normal, lognormal, or multimodal parametric distribution, NP population models estimate the entire most likely discrete joint parameter distribution [1]. No assumptions are made about the specific shape of the parameter distributions. The shape is totally determined by the data itself, and by its assay and environmental uncertainties. Because of this, it can better detect and describe the frequently unsuspected, often genetically determined, subpopulations of patients present in the population data set.

An ideal population model might be obtained if somehow we could directly observe the exact model parameter values in each subject. The distribution would be a discrete collection of the specific parameter values obtained from each subject. The usual statistical summaries would degrade the richness of the actual distribution found. Since such direct observation is clearly not possible, we must give doses, measure responses such as serum concentrations, and then estimate the distribution of the model parameter values in the population.

The NP approach is the closest to the above ideal. It was first developed by Mallet [1], and the method exists in several versions by various authors. Probably the most available one is in the MM-USCPACK software collection [2,3]. The data of the patient population is resolved into discrete points (support points), up to one for each subject studied in the population. More uncertainty in the data results in fewer support points; less uncertainty leads to more, up to one for each patient studied. Each support point consists of an estimate of each model parameter value, along with an estimate of the probability of that set of estimates. Such a model, first made by Mallet [1], is shown in Figure 1 below. The multiple support points in such NP models make them uniquely suited for developing dosage regimens using the new “Multiple Model” method of dosage design (see section 4 below).
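The support-point idea can be sketched as a plain data structure: a discrete list of parameter sets, each with a probability. This is a minimal illustration, not the MM-USCPACK implementation; the parameter names, values, and probabilities below are hypothetical.

```python
# Minimal sketch of a nonparametric population model: a discrete
# collection of support points, each holding one set of parameter
# values plus its estimated probability.
from dataclasses import dataclass

@dataclass
class SupportPoint:
    k_el: float   # elimination rate constant (1/h) -- illustrative
    v_d: float    # apparent volume of distribution (L) -- illustrative
    prob: float   # estimated probability of this point

# Hypothetical two-subpopulation model (e.g., slow vs fast metabolizers).
model = [
    SupportPoint(k_el=0.10, v_d=20.0, prob=0.4),
    SupportPoint(k_el=0.35, v_d=18.0, prob=0.6),
]

# Probabilities of a valid discrete distribution sum to 1.
assert abs(sum(pt.prob for pt in model) - 1.0) < 1e-9

# A population-weighted mean is one possible summary, although the full
# discrete distribution itself is the model.
mean_k = sum(pt.k_el * pt.prob for pt in model)
print(mean_k)  # 0.10*0.4 + 0.35*0.6 = 0.25
```

Note that no distributional shape is imposed: the two clusters of points are free to sit wherever the data put them, which is exactly what lets subpopulations appear.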


Figure 1. An example of an NP population model made by Alain Mallet. Horizontal axis: a parameter such as the rate constant for elimination. Vertical axis: a parameter such as apparent volume of distribution. Note the clearly non-Gaussian shape of the horizontal parameter, with the central gap. This may well represent two unsuspected subpopulations, such as fast and slow metabolizers of a drug. The size of each point reflects its estimated probability (reproduced by permission).

2. FITTING DATA BY ITS FISHER INFORMATION, NOT BY ASSAY COEFFICIENT OF VARIATION (CV%) OR BY SOME ASSUMED OVERALL ERROR MODEL.

2.1 Consequences of using the assay CV%. Using the percent error of an assay to describe its precision has several significant consequences. With CV%, the apparent precision of an assay drops markedly as the measured concentration becomes lower. An “acceptable” categorical lower limit of quantification is often taken as something like a CV of 10, 15, or 20%; this varies between laboratories. Regulatory bodies often decide what is “acceptable” based on judgment, but, sadly, not on science: an intuitive policy is simply adopted. There is much discussion at meetings about just what CV% reflects “acceptable” precision. Below that value a categorical cutoff is made, and the data is censored. However, even a blank sample is measured with a certain SD. At the blank the CV% is of course infinite, but the SD at the blank is always finite.

2.2 The first major problem with CV% - believing that one has to censor data. Very low measurements are thought of as the signal being “lost in the noise”. Below a selected cutoff, the measurement is not felt to be “precise enough” for acceptable quantification (the lower limit of quantification, or LLOQ), or further down, even for detection (the lower limit of detection, or LLOD). Data below these judgmentally selected cutoffs are censored, and are either not reported or are reported simply as being “less than” some selected LLOQ or LLOD. Such data, reported as “below detectable limits”, often eventually become regarded by physicians (and by their patients as well) as though the substance being measured (a Philadelphia chromosome, an HIV PCR, or a drug concentration, for example) somehow is not really there. This often leads to serious clinical and pharmacokinetic misperceptions, as nondetectable eventually becomes mentally equated with zero. Several special policies have been developed to deal with this problem [4]; none has been successful. The actual measurement, whatever it is, along with its SD, is the best reflection of what is actually there.

2.3 The second major problem with CV% - no way to give correct weighting of measured data for modeling. The other problem, an increasingly important one, is that there is no way to assign a proper quantitative measure of credibility to a data point using CV%. This problem is relatively new to the laboratory community, as data points are now used clinically for therapeutic drug monitoring and Bayesian updating of individual patient pharmacokinetic models. Indeed, in the statistics books one simply does not find CV% as a mathematical or statistical measure of credibility. Instead, one finds the Fisher information of a data point: the reciprocal of the variance with which that data point was measured.

It is also often assumed that the assay SD varies greatly over the assay's operating range. This is often not the case (see Figure 2 below). What is important is to have and use a well known quantitative measure of the relative credibility of a data point. It should not be corrupted by the size of the measurement itself, as the CV% is.

2.4 Fisher Information – the reciprocal of the assay variance.

The Fisher information of a data point is the reciprocal of the variance with which that data point was measured. Take the assay SD at that point. Instead of dividing it by the measurement to obtain the CV%, simply square the SD to obtain the variance, V. Take its reciprocal, 1/V, and use it to weight that assay measurement in the fitting procedure. This is a well known and widely used measure of statistical credibility [5-7].

2.5 Relationship between CV% and Fisher information.

Let us consider a hypothetical assay with a coefficient of variation of 10% throughout its range. Suppose there is a measurement of 10 units. Its SD is 1.0 unit, as its CV is 10%. Because of this, its variance is 1.0, and its Fisher information is also 1.0. Now consider another measurement from another sample, where the value is 20 units. The CV being 10%, the SD is now 2.0. The variance, however, is now 4.0, and the Fisher information is now ¼.
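The arithmetic of this worked example can be checked directly. This is a small sketch; the function name is ours, not from any particular package.

```python
# With a constant 10% CV assay, doubling the measured value doubles the
# SD, quadruples the variance, and so quarters the Fisher information.
def fisher_information(sd):
    """Reciprocal of the measurement variance (1 / SD^2)."""
    return 1.0 / sd ** 2

cv = 0.10
for measurement in (10.0, 20.0):
    sd = cv * measurement
    print(measurement, sd, sd ** 2, fisher_information(sd))
# measurement 10 -> SD 1.0, variance 1.0, Fisher information 1.0
# measurement 20 -> SD 2.0, variance 4.0, Fisher information 0.25
```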


This is the important difference between the Fisher information and the CV%: the variance about a data point is the square of the SD. So if an assay has a constant CV%, doubling the measured value results in a weight of ¼. Also, as an assay result gets lower and approaches zero, the SD usually gets smaller, though not always (see Figure 2 below). In any event, while the assay SD usually gets smaller and the Fisher information becomes greater, the CV%, as everyone knows, becomes greater and eventually becomes infinite, so one may erroneously think that the measurement becomes “lost in the noise”. This is the perceptual problem when using CV%. It is the perception of assay error as CV% that leads people to make artificial categorical cutoffs such as the LLOQ and LLOD, below which data are arbitrarily withheld and censored. This problem is illustrated in Figure 2 below, based on the documented error of the gentamicin assay at the Los Angeles County – USC Medical Center several years ago. At the high end, a value of 12 ug/ml, measured in quadruplicate, had an SD of 1.71 ug/ml and a CV of 14.3%. A value of 8.0 ug/ml, similarly measured in quadruplicate, had an SD of 0.79 ug/ml and a CV of 9.96%. A value of 4.0 ug/ml, again in quadruplicate, had an SD of 0.41 ug/ml and a CV of 10.33%. A value of 2.0 ug/ml, again in quadruplicate, had an almost identical SD of 0.42 ug/ml, but the CV now rose to 21.15%. Finally, a blank measurement, also done in quadruplicate, had an SD of 0.57 ug/ml; its CV%, of course, was infinite.


Figure 2. Relationship between measured concentration (horizontal scale), CV% (right hand scale) and Assay SD (left hand scale). CV% (diamond symbols) increases as shown at low values. On the other hand, the assay SD is always finite at any value, all the way down to and including the blank. Because of this, there is no need to censor any data at all. The measurement and the SD, done in this way, enhance the sensitivity of any assay all the way down to and including the blank, with a well documented statistical measure of credibility.


2.6 Using Fisher Information, there is no LLOQ or LLOD, and no need to censor data. A problem arises when a result is in the gray zone, below the LLOQ but a little above the blank. There has been much discussion about the best thing to do about this. Some have said the result should be set to zero; others, to halfway between the blank and the LLOD. Commonly, laboratories have reported the result simply as being “less than” whatever they judge the LLOQ to be. However, when doing therapeutic drug monitoring or any pharmacokinetic modeling, this is a most unsatisfactory situation: such a measurement simply cannot be used in any procedure to fit data quantitatively or to make a population pharmacokinetic model of a drug in a patient.

It is extremely easy to make both the toxicologists and the pharmacokineticists happy at the same time, by reporting the result both ways. For example, a gentamicin sample might be reported as having a measured concentration of “0.2 ug/ml, below the usual LLOQ of 0.5 ug/ml”. Both parties can easily have what each needs for their work. The assay error polynomial can be stored in software to do the proper weighting and fitting of the data.

It is a good thing that much attention has been paid to determining the error of assays. However, once an assay has been shown to be “acceptably” precise, that error has usually been forgotten or neglected. For example, many error models simply use the reciprocal of the assay result itself, or of its squared value, and ignore the actual error of the assay. Alternatively, they assume a model for the overall error pattern and estimate its parameter values. This is usually done because it is assumed that the assay SD is only a small part of the overall error SD, given the many other significant environmental sources of error. That is clearly not so, as we shall see further on.
2.7 Determining the Assay Error Polynomial

In the USC*PACK software collection [8,9], for example, one is encouraged first to determine the error pattern of the assay quite specifically, by determining several representative assay measurements in at least quintuplicate, and to find the standard deviation (SD) of each of these points, as shown in Figure 3.


Figure 3. Graph of the relationship between serum gentamicin concentrations, measured by our hospital’s assay in at least quadruplicate (the dots), and the standard deviations (SDs) of the measurements. The relationship is captured by the polynomial equation shown at the top: Y = assay SD, X = measured serum concentration, Xsq = square of the serum concentration.

One can measure, in at least quintuplicate (and the more the better – some say 10), a blank sample, a low one, an intermediate one, a high one, and a very high one. One can then fit the relationship between the serum concentration (or any other measured response) and the SD with which it has been measured with a polynomial of up to third order if needed, so that one can then compute the Fisher information associated with any single sample that goes through the laboratory assay system. One can then express the relationship as

SD = A0 + A1*C + A2*C^2 + A3*C^3     (1)

where SD is the assay SD, A0 through A3 are the coefficients of the polynomial, C is the measured concentration, C^2 is the concentration squared, and C^3 is the concentration cubed. A representative plot of such a relationship, using a second order polynomial to describe the error pattern of an assay of gentamicin, is shown in Figure 3.
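The fitting step can be sketched as follows, using the quadruplicate gentamicin data quoted in section 2.5. Here `numpy.polyfit` stands in for whatever fitting routine the laboratory software actually uses; the choice of a second-order polynomial mirrors Figure 3.

```python
# Illustrative fit of the assay error polynomial SD = A0 + A1*C + A2*C^2
# to the quadruplicate gentamicin data (blank, 2, 4, 8, 12 ug/ml).
import numpy as np

conc = np.array([0.0, 2.0, 4.0, 8.0, 12.0])           # ug/ml
sd   = np.array([0.567, 0.423, 0.413, 0.797, 1.719])  # measured assay SDs

# Second-order least-squares fit; polyfit returns highest power first.
a2, a1, a0 = np.polyfit(conc, sd, 2)

def assay_sd(c):
    """Assay SD predicted by the fitted error polynomial."""
    return a0 + a1 * c + a2 * c ** 2

def fisher_weight(c):
    """Fisher information (1/variance) for a sample at concentration c,
    as described in section 2.4."""
    return 1.0 / assay_sd(c) ** 2
```

With these five points the fitted SD stays finite and positive all the way down to the blank, so every measurement, however low, gets a usable weight and nothing needs to be censored.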


2.8 Determining the Remaining Environmental Error

In addition, a parameter we have called gamma, a measure of all the other environmental sources of intra-individual variability, can also be computed by software for population PK modeling. In the USC*PACK and the newer MM-USCPACK BigNPAG program, it is used as a multiplier of each of the coefficients of the assay error polynomial described above. The nominal value of gamma is 1.0, indicating that there is no source of variability other than the assay error pattern itself; gamma is therefore usually greater than 1.0. It includes not only the various environmental errors, such as those in preparing and administering the doses, in recording the times at which the doses were given, and in recording the times at which the serum samples were obtained, but also the errors with which the structural model fails to describe the true events completely (model misspecification), and any changes in the model parameter values over time due to the changing status of the patient during the period of data analysis. Gamma is thus an overall measure of all the sources of intra-individual variability other than the assay error. In this way, one can calculate just how much of the overall SD is due to the assay SD, and how much is due to the remaining environmental SD. Determining gamma helps greatly to explain the impact of the environmental variability found in any fit. If gamma is small (2-4), it suggests that the sum of the environmental sources of noise is small. If it is large (10 or more), it suggests that the overall environmental noise (the total effect of all the other factors mentioned above) is large.

3. EVALUATING CHANGING RENAL FUNCTION IN CLINICAL SETTINGS.
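The gamma idea reduces to a simple scaling of the assay error polynomial. In this sketch the polynomial coefficients are illustrative, not from any fitted model, and the function name is ours.

```python
# Sketch: total intra-individual SD modeled as the assay error polynomial
# scaled by gamma. gamma = 1.0 means "assay noise only"; larger values
# flag additional environmental noise (dose/timing errors, model
# misspecification, changing patient status).
def total_sd(c, gamma, a0=0.567, a1=-0.106, a2=0.0168):
    assay_sd = a0 + a1 * c + a2 * c ** 2   # illustrative coefficients
    return gamma * assay_sd

assay_only = total_sd(8.0, gamma=1.0)   # pure assay noise
with_env   = total_sd(8.0, gamma=3.0)   # environmental noise added
print(with_env / assay_only)  # 3.0 -- gamma scales the whole SD
```

Because gamma multiplies every coefficient, the ratio of total SD to assay SD is gamma at every concentration, which is what lets one partition the overall noise into its assay and environmental parts.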

Most methods for estimating creatinine clearance have been based upon only a single serum creatinine specimen. In 1972 we published a method using a pair of serum creatinine samples instead [10]. It was based on data obtained by others from patients with stable renal function, and on data on the effect of chronic uremia itself upon creatinine production. It calculated the creatinine clearance that would make the serum creatinine rise or fall from a first to a second value over a stated time in a patient of a stated age, gender, height, and weight. It predicted creatinine clearance in postsurgical renal transplant patients, starting with the day of surgery, with a precision not different from that found with the standard 24 hour urine gold standard. This has been a great practical help in individualizing clinical drug therapy. A modification was published in 2002 [11].

Figure 4 shows the relationship between estimated and measured creatinine clearance in patients in the renal transplant unit of the Los Angeles County – USC Medical Center. The data began on the day of surgery and continued thereafter. The scatter in the data is not different from that of the error in the gold standard measurement based on 24 hour urine specimens [11,12]. Similar methods have been described since our original article [13,14].
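The core idea can be sketched as a simple mass balance: find the clearance that makes the serum creatinine move from the first to the second value over the stated interval. This is a heavily simplified illustration only; the published method [10,11] includes corrections (age, gender, body size, the effect of uremia on creatinine production) that are omitted here, and the production rate and distribution volume below are caller-supplied assumptions, not part of the published equations.

```python
# Simplified two-sample creatinine clearance sketch (mass balance).
# Body creatinine pool = SCr (mg/dL) * volume (dL). Over the interval,
# excretion = production - net accumulation; clearance = excretion / SCr.
def ccr_two_samples(scr1, scr2, days, production_mg_per_day, volume_dl):
    """scr1, scr2 in mg/dL; returns an (uncorrected) CCr in ml/min."""
    accumulation = (scr2 - scr1) * volume_dl / days      # mg/day
    excretion = production_mg_per_day - accumulation     # mg/day
    mean_scr = (scr1 + scr2) / 2.0                       # mg/dL
    ccr_dl_per_day = excretion / mean_scr                # dL/day
    return ccr_dl_per_day * 100.0 / 1440.0               # dL/day -> ml/min

# Stable SCr: the estimate reduces to production / SCr.
print(round(ccr_two_samples(1.0, 1.0, 1.0, 1400.0, 420.0), 1))  # ~97.2
```

A rising serum creatinine yields a lower clearance estimate than the stable case, which is the behavior that makes a two-sample method useful in unstable patients.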

Figure 4. Estimated creatinine clearance (horizontal axis, ml/min/1.73 m2) versus measured creatinine clearance (vertical axis, ml/min/1.73 m2). The scatter between estimated and measured values is essentially the same as the error in the classical measurement of CCr using a 24 hr urine specimen.

4. MULTIPLE MODEL (MM) DOSAGE DESIGN.

This new tool develops maximally precise dosage regimens using NP population models. The multiple support points in the NP population model provide multiple predictions of future serum concentrations and other responses, each weighted by the probability of the support point giving that particular prediction. When one selects a specific target goal at a target time, it is then easy to calculate the weighted squared error with which a candidate dosage regimen fails to hit that target, and to find the regimen which specifically minimizes that error. This provides, for the first time, dosage regimens which hit the desired target with maximum precision. In this way, one can develop a maximally precise initial dosage regimen of a drug [15,16]. One can see visual plots of the diversity of the multiple predictions provided by the NP population model and that particular maximally precise MM dosage regimen. The MM approach to Bayesian adaptive control has been widely used in the aerospace industry for flight control and guidance systems.
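The optimization step can be sketched for the simplest case: a one-compartment IV bolus model and a single dose, where each support point predicts a concentration per unit dose and the expected weighted squared error against the target is minimized over the dose. The support points, target, and bolus simplification (the text's example uses a 30 minute infusion) are all illustrative assumptions; because the prediction is linear in the dose, the optimum here has a closed form.

```python
# Multiple-model dosage design sketch: choose the dose that minimizes
# sum_i p_i * (dose * a_i - target)^2, where a_i is support point i's
# predicted concentration per mg of dose at the target time.
import math

# (k_el 1/h, V L, probability) -- hypothetical NP support points.
support = [(0.10, 20.0, 0.3), (0.25, 15.0, 0.5), (0.40, 25.0, 0.2)]
target, t = 12.0, 1.0   # target concentration (mg/L) at time t (h)

# One-compartment IV bolus: concentration per mg of dose at time t.
a = [math.exp(-k * t) / v for k, v, _ in support]
p = [w for _, _, w in support]

# Setting the derivative of the expected squared error to zero gives:
dose = (target * sum(pi * ai for pi, ai in zip(p, a))
        / sum(pi * ai * ai for pi, ai in zip(p, a)))

expected_error = sum(pi * (dose * ai - target) ** 2 for pi, ai in zip(p, a))
print(round(dose, 1), "mg")
```

No single dose can put every support point exactly on target; the MM dose is the one that makes the probability-weighted miss as small as possible, which is the sense in which the regimen is "maximally precise".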


Figure 5. Plot of the multiple predictions of gentamicin serum concentrations over the first four days of an initial regimen generated by the MM-USCPACK clinical software using MM dosage design, for a representative patient: age 65, male, 70 in tall, 68 kg, serum creatinine 1.2 mg/dL. Target goals: 12 ug/mL peak, 0.5 ug/mL trough. Thick lines: the more probable predictions (actually various colors reflecting their probability). Thin lines: predictions of lesser probability. The diversity of the various predictions is shown. Horizontal axis: time into the regimen. Vertical axis: the multiple predicted serum concentrations which achieve the selected target goals with minimum expected weighted squared error (maximum precision).

Figure 5 shows the trajectories over time of the 76 multiple predictions of future serum concentrations generated by the 76 support points in the gentamicin population PK model in the MM-USCPACK clinical software. The software found that a once daily regimen hit the target goals for the stated patient most precisely, given the constraints of q8h, q12h, and q24h dosing intervals. In addition, one can place the cursor at many selected places on the plot. When a pointing finger is shown, one can click the mouse there and see a plot of all the various predictions at that particular time. Figure 6 below shows those predictions for the peak concentrations at the end of the 30 minute infusion of the first dose (½ hr into the regimen). The diversity of the predictions based on the population model is easily seen.


Figure 6. Time slice of the multiple predictions at ½ h into the regimen, the time of the peak serum concentration. Horizontal axis: the multiple predicted serum concentrations. Vertical axis: probability of each prediction. Middle dashed vertical line: predicted weighted average concentration of 11.51 ug/mL (target = 12). Outer dashed lines: 95th percentile bounds (roughly 7 to 17 ug/mL) of the predictions.

5. MONITORING THE PATIENT AND GETTING MM BAYESIAN POSTERIOR JOINT DISTRIBUTIONS.

As serum concentrations and other responses are obtained, the population model is supplemented by this new individual patient data. In contrast to the maximum aposteriori probability (MAP) parametric Bayesian estimator, the MM Bayesian process computes the posterior probability of each population support point given the new data from that individual patient. Those population model support points that fit the individual patient’s new data well acquire greater probability; those that fit it poorly get much less. Usually only a few points, or perhaps only a single point, remain, while the probabilities of most of the population support points become negligible. One can see graphically just how much one has learned from the therapeutic monitoring by the usually much narrower distribution of the estimated serum concentrations. After this, the MM Bayesian posterior joint distribution is used to develop the next dosage regimen to best hit the specific selected target goal, which may or may not be changed, based on clinical judgment. Comparing the patient’s clinical behavior with plots of the patient’s past serum concentrations and the estimates from the patient’s posterior individual model is the best way we know to evaluate a patient’s clinical sensitivity to a drug.
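The update itself is ordinary Bayes' rule over the discrete support points: each point keeps its parameter values, and only its probability changes. This sketch assumes a Gaussian measurement likelihood using the assay SD; the prior probabilities, predictions, and measured level are illustrative numbers, not from the case in the text.

```python
# Multiple-model Bayesian posterior over discrete support points:
# posterior_i proportional to prior_i * likelihood(measured | point i).
import math

def mm_posterior(priors, measured, predicted, sd):
    """priors: prior probabilities; predicted: each point's predicted
    concentration; sd: assay SD at the measured level."""
    like = [math.exp(-0.5 * ((measured - c) / sd) ** 2) for c in predicted]
    unnorm = [p * l for p, l in zip(priors, like)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

priors    = [0.3, 0.5, 0.2]
predicted = [5.0, 9.0, 14.0]   # hypothetical per-point predictions (ug/ml)
post = mm_posterior(priors, measured=5.2, predicted=predicted, sd=0.5)
# Points predicting far from the measured 5.2 ug/ml collapse toward zero.
print([round(x, 3) for x in post])
```

This is why only a handful of the 76 population points survive after a few measured levels: the likelihood of most points' predictions becomes vanishingly small.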

Figure 7 below shows the reconstruction of past events from a real patient having the same age, gender, height, weight, and initial serum creatinine as the simulated patient shown in Figures 5 and 6. It shows the patient’s past doses and their times, his serum concentrations and their times, and his serum creatinine concentrations and their times.

Figure 7. Data of a patient undergoing first intuitive dosage adjustment and then Bayesian reconstruction of his clinical situation. At first he received 80 mg of gentamicin roughly q8h (all times were known exactly). His first serum results were a 3.8 ug/mL peak and a 1.8 ug/mL trough. Because of this, his dosage was intuitively raised to 100 mg q8h. However, the 3rd dose was given intramuscularly, late, as shown, as the IV had infiltrated. A serum sample in the middle of that dose interval was 5.2 ug/mL. A subsequent peak was 9.1 ug/mL, but the trough, almost 15 h after that third dose of 100 mg, was very high (4.1 ug/mL). Because of this, his dosage was intuitively reduced to 80 mg once again, as shown.

At this point the data in Figure 7 was analyzed by the MM-USCPACK clinical software. Note that the patient’s serum creatinine rose from an initial 1.2 mg/dL to 1.5, and then to 2.1 mg/dL. Using the tool for estimating creatinine clearance described in section 3, his creatinine clearance was seen to have been 56 ml/min/1.73 m2 initially. It then fell to 41 as the serum creatinine began to rise from 1.2 to 1.5 mg/dL, and declined further to 27 as the serum creatinine continued to rise from 1.5 to 2.1 mg/dL. Clinically, the therapeutic situation was not at all clear, and it was not possible to visualize intuitively what the proper dosage regimen should be.


At that point, MM Bayesian analysis was used to make an individualized PK model for this patient, fitted both to his serum concentration data and to his changing creatinine clearance over time. The results are shown in Figure 8. A good fit (hard to see in the figure) was obtained. The combination of the method of estimating CCr and the population model, in which the elimination rate constant was linked to CCr as a covariate, was able to track the behavior of the drug in this patient quite well, as shown in the plots.

Figure 8. Plot of the trajectories of estimated serum concentrations produced by MM Bayesian analysis of the patient’s data shown in Figure 7. Instead of 76 estimations as shown in the population PK model in Figures 5 and 6, only 5 support points have survived the Bayesian analysis. Note their very much increased probabilities, up to 84%, (top horizontal of the figure) compared to those of the population model, where the highest probability of the various support points was 13%. The five serum concentrations are shown by the round dots and their nearby numbers. Further, placing the cursor on serum sample number 3, at 23 hours into the regimen in Figure 8, yielded Figure 9, below, which also showed the much greater precision with which the patient’s individual PK model was known, giving further visual confirmation of the amount of learning about the patient conferred by the process of TDM. The clinical software tools thus provide a useful way to see in advance what the various predictions will be, and to develop an initial dosage regimen to hit the selected target goals, for example, 12 ug/ml peak and 0.5 ug/ml trough. It also illustrates the great uncertainties associated with the predictions when the


population model alone is used, before any TDM has been done, and the greatly reduced uncertainties as the TDM data tell us more about the patient as an individual. This individualized model can now be used to develop the subsequent dosage regimen to hit whatever the desired target goals may be with maximum precision.
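The "maximum precision" dosage idea can be sketched numerically: each surviving support point predicts its own concentration for a candidate dose, and the dose is chosen to minimize the probability-weighted squared error from the target. The sketch below is only illustrative, with a hypothetical one-compartment IV-bolus model and made-up support points, not the MM-USCPACK implementation.

```python
import numpy as np

# Hypothetical support points (elimination rate ke [1/h], volume V [L])
# with their Bayesian posterior probabilities.
support = np.array([[0.20, 15.0], [0.25, 18.0], [0.30, 20.0]])
probs = np.array([0.5, 0.3, 0.2])

def peak_conc(dose_mg, ke, V, t=1.0):
    """Concentration t hours after an IV bolus (one-compartment model)."""
    return dose_mg / V * np.exp(-ke * t)

def expected_sq_error(dose_mg, target=12.0):
    """Probability-weighted squared error of the predictions from the target."""
    preds = peak_conc(dose_mg, support[:, 0], support[:, 1])
    return np.sum(probs * (preds - target) ** 2)

# Grid-search the dose that minimizes the weighted squared error.
doses = np.arange(50.0, 1001.0, 1.0)
best = doses[np.argmin([expected_sq_error(d) for d in doses])]
```

The chosen dose is a compromise over all support points, weighted by how probable each is for this patient, which is exactly why sharper posteriors permit more precise regimens.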

Figure 9. Profile of the multiple predictions at the time serum sample number 3 was obtained. Horizontal axis = the multiple predicted serum concentrations. Vertical axis = probability of each prediction. Middle dashed vertical line = predicted weighted-average concentration of 11.51 ug/mL (target = 12). Outer dashed lines = 95th percentile limits (roughly 7 to 17 ug/mL) of the predictions. Note the much narrower 95% band of predictions around the measured serum concentration of 5.2 ug/mL and its weighted-average estimated value of 4.88 ug/mL.

5.1 Hybrid Bayesian (HB) analysis.

The range of the patient's parameter values is that of the NP population model. The values of the NP population support points do not change – only their posterior probability, based on the individual patient's new data. Consequently, an unusual patient's parameter values may be outside the original population range, or may be in an area containing only a few support points.


To address this challenging problem, we have developed a hybrid Bayesian (HB) approach. First, a conventional MAP Bayesian estimate is made. This estimate can reach beyond the original ranges of the population model if needed. Then extra support points (16 to 36, for example) are added near the MAP estimate, augmenting the population model so that it can better receive that patient's data. This individually augmented population model is then used for the patient's MM Bayesian analysis. The HB procedure should yield better adjustments of dosage regimens, and should also be safer. It is currently being implemented.

5.2 Interacting MM Bayesian (IMM) Analysis.

A particular problem is presented by patients who are very unstable clinically, and whose parameter values (volumes of distribution, rate constants, etc.) may change rapidly without any change in any known covariate or descriptor. All other current Bayesian methods obtain the parameter values or distributions which best fit all the data; these parameter values are assumed to be fixed and unchanging throughout. In very unstable patients this assumption may lead to very bad fits.

Because of this, we have developed a sequential interacting MM Bayesian method which permits a patient's posterior support points to change with each new dose or each new serum concentration, if the data make such a change more likely [17]. This approach is well known in the aerospace community, where it is used to track evasive targets. We have implemented it in our clinical MM-USCPACK software. In a simulated patient with a change in parameter values, it tracked drug behavior with less than half the error of either the MAP or the MM Bayesian methods. Further, in a study of over 130 patients on gentamicin and over 130 on vancomycin in a post-cardiac surgery unit in Glasgow, it tracked drug behavior significantly better than the MAP or MM Bayesian methods [18].

5.3. Entering Data on Dosage and Timing Errors

In the clinical software, one can also take into account the uncertainties with which the doses are prepared and administered, and the errors in the timing of the doses and the serum samples. For example, one can enter the overall SD with which the doses were prepared by the pharmaceutical company (pills, for example) or administered (intravenous (IV) drugs, for example). Sometimes this information can be very difficult to obtain from the pharmaceutical company (pills) or from the company and the ward service giving the drug. At any rate, one can make clinical estimates of the SD with which drugs are given to the patient. One can also make clinical estimates of the SD of the difference between the times the doses were ordered to be given and the times they actually were given, and of the SD of the difference between when blood samples were said to have been obtained and when they actually were obtained. It is therefore now possible to consider these usually ignored factors in the clinical situation in which the


patient's therapy takes place. More uncertainty may make it easier to fit the data (more wiggle room), but it also clearly makes the fit less precise (with more support points appearing in the Bayesian posterior joint parameter distributions).
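One hedged way to see the effect of such SDs is direct Monte Carlo simulation: sample the dose amount, dosing time, and sampling time from their assumed error distributions and look at the spread of the predicted level. The model, SDs, and parameter values below are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_levels(n=10000, ke=0.25, V=18.0):
    """Spread of one predicted level when dose amount and times are uncertain."""
    dose = rng.normal(500.0, 25.0, n)     # assumed SD of dose preparation (mg)
    t_dose = rng.normal(0.0, 0.25, n)     # assumed SD of dosing time (h)
    t_samp = rng.normal(6.0, 0.25, n)     # assumed SD of sampling time (h)
    elapsed = np.clip(t_samp - t_dose, 0.1, None)
    # One-compartment IV bolus prediction for each sampled scenario.
    return dose / V * np.exp(-ke * elapsed)

levels = simulate_levels()
```

The width of the resulting distribution shows how much of the apparent "assay error" in TDM data can actually come from dose and timing uncertainty rather than from the assay itself.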

6. LINKED PHARMACODYNAMIC MODELS: DIFFUSION OF DRUGS INTO ENDOCARDIAL VEGETATIONS, AND MODELS OF POSTANTIBIOTIC EFFECT AND BACTERIAL GROWTH AND KILL.

In this section we describe the tools for linking nonlinear pharmacodynamic models to the basic linear pharmacokinetic model: a model of drug diffusion into endocardial vegetations, the simulation of a postantibiotic effect, and the modeling of bacterial growth in the absence of a drug and of its kill by an antibiotic.

6.1 Modeling Drug Diffusion

It is difficult to estimate whether or not a drug is able to kill the organisms all the way into the center of an endocardial vegetation. Because of this, a diffusion model was made.

Figure 10. Diagram of the concentric layers of the spherical model of the endocardial vegetation.

A spherical shape was assumed for the vegetation, as shown in Figure 10. It is modeled as having several concentric layers, each of thickness delta r, with diffusion taking place from layer to layer. The sphere was assumed to be homogeneous, with equal diffusion in all directions and a constant coefficient of diffusion throughout. The diffusion was assumed to depend on the concentration of drug in the surrounding medium, such as the serum concentration, and on its time course. The diameter of endocardial vegetations can be measured by transesophageal echocardiography.


The following equation was used to model the diffusion:

∂C/∂t = (1/r²) × ∂/∂r [D × r² × ∂C/∂r]   (2)

where C represents the concentration in the sphere at time t, at a distance r from the center of the sphere, D represents the coefficient of diffusion in the sphere, and × indicates multiplication. When D is assumed constant, the equation becomes

∂C/∂t = D × [∂²C/∂r² + (2/r) × ∂C/∂r]   (3)
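Equation (3) can be integrated numerically over the concentric layers with an explicit finite-difference scheme. The sketch below is not the USC*PACK implementation: it assumes a constant concentration in the surrounding medium (the clinical tool drives the outer layer with the serum level time course), and the radius, diffusion coefficient, and concentration are in arbitrary illustrative units.

```python
import numpy as np

def sphere_center_conc(R=0.25, D=1e-3, n_layers=20, t_end=50.0, c_serum=1.0):
    """Explicit finite differences for dC/dt = D*(C'' + (2/r)*C').

    R       -- sphere radius (arbitrary units)
    D       -- diffusion coefficient (same length units squared per time)
    c_serum -- constant concentration of the surrounding medium
    Returns the concentration at the center of the sphere at time t_end.
    """
    dr = R / n_layers
    dt = 0.1 * dr * dr / D          # below the stability limit dr^2/(6*D)
    r = np.linspace(0.0, R, n_layers + 1)
    C = np.zeros(n_layers + 1)
    C[-1] = c_serum                 # outer layer in equilibrium with the medium
    for _ in range(int(t_end / dt)):
        lap = np.zeros_like(C)
        # Interior shells: C'' + (2/r)*C' by central differences.
        lap[1:-1] = (C[2:] - 2 * C[1:-1] + C[:-2]) / dr**2 \
                    + (2.0 / r[1:-1]) * (C[2:] - C[:-2]) / (2 * dr)
        # Symmetry at the center: the operator limits to 3*C''(0).
        lap[0] = 6.0 * (C[1] - C[0]) / dr**2
        C[:-1] += dt * D * lap[:-1]  # outer boundary held at c_serum
    return C[0]
```

Because the characteristic diffusion time scales as R²/D, doubling the sphere diameter quadruples the time needed to reach a given center concentration, which is the scaling noted for the vegetation examples below.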

The vegetation is assumed to be continuously immersed in the surrounding medium, and the drug concentration in that medium is assumed to attain a value which results in equilibrium with the outermost layer of the sphere. The medium then undergoes the changes in concentration with time that constitute the serum level time course. This time course is thus presented as the input to the spherical model [19]. The diffusion coefficient found by Bayer, Crowell, et al. for aminoglycosides in experimental endocarditis [20,21] was used. The model has become part of the MM-USCPACK clinical programs for individualizing drug dosage regimens [8]. The model can also be used to simulate behavior inside an abscess, and, by appropriate choice of sphere diameter and diffusion coefficient, to simulate a post-antibiotic effect of a desired duration.

6.2. Examples: Simulated Endocardial Vegetations

Consider a hypothetical 65 year old man, 70 in tall, weighing 70 kg, with a serum creatinine of 1.0 mg/dL. Let us assume that he has a vegetation, seen by echocardiography on his aortic valve, that might be either 0.5, 1.0, or 2.0 cm in diameter. We wish to examine the ability of an amikacin regimen designed to achieve serum peaks of 45 ug/ml and troughs of approximately 5.0 ug/ml to reach effective concentrations within the vegetation in these three cases. Let us apply the findings of Bayer et al. [20,21] to compute the time course of estimated amikacin concentrations in the center of these three vegetations, to assess their possible ability to kill an organism having an estimated minimum inhibitory concentration (MIC) of 8.0 ug/ml, for example. Using the program for amikacin in the USC*PACK collection [8], the above patient's estimated creatinine clearance (CCr) is about 69 ml/min/1.73 M2.

We enter the target peak serum concentration goal of 45 ug/ml and an initial trough concentration of 5.0 ug/ml. The ideal dose interval to achieve that peak


and trough exactly, adjusted for the patient's renal function and employing a planned IV infusion duration of 0.5 hr, turns out to be 10.231 hrs. Let us approximate this in a practical manner by choosing a dose interval of 12 hrs. The dosage regimen to achieve the peak goal with such a dose interval is, when revised to practical amounts, 850 mg for the first dose, followed by 750 mg every 12 hrs thereafter. On this regimen, predicted serum concentrations are 43 ug/ml for the peak and 3.2 ug/ml for the trough. The peak is 542% of the stated MIC, and serum concentrations are predicted to be at or above the MIC for 66% of each dose interval. The AUC/MIC ratio for the first 24 hours is 48.8. The important question now is whether or not this serum concentration profile will result in adequate penetration of the vegetation in each of the three cases, and whether or not the regimen will kill effectively there, as well as in the central (serum level) compartment. Figure 11 shows the predicted amikacin concentrations in the center of the simulated vegetation having a diameter of 0.5 cm. As shown, concentrations rise rapidly above the MIC and stay there, suggesting that the above regimen should probably be able to kill organisms having an MIC of about 8.0 ug/ml fairly promptly in the center of the vegetation. The time lag of concentrations in the center of the sphere behind the serum concentrations is modest, about 3-4 hrs. On the other hand, if the vegetation were 1.0 cm in diameter instead, the drug would take about 12 hours to diffuse to the center and reach the MIC, and the rise and fall of drug concentrations would be much more damped, as shown in Figure 12.
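The interval arithmetic behind such a regimen can be sketched for a one-compartment model: the ideal interval is the infusion time plus the time for the peak to decay to the trough, and the steady-state dose then follows from the target peak. The rate constant and volume below are hypothetical stand-ins, not the patient's fitted values, so the numbers only approximate those in the text.

```python
import math

def ideal_interval(peak, trough, ke, t_inf=0.5):
    """Dose interval (h): infusion time plus decay time from peak to trough."""
    return t_inf + math.log(peak / trough) / ke

def ss_infusion_dose(peak, ke, V, tau, t_inf=0.5):
    """Steady-state infusion dose (mg) giving `peak` at the end of infusion.

    Standard intermittent-infusion relation:
    Cpeak = (R0 / (ke*V)) * (1 - exp(-ke*t_inf)) / (1 - exp(-ke*tau))
    """
    r0 = peak * ke * V * (1 - math.exp(-ke * tau)) / (1 - math.exp(-ke * t_inf))
    return r0 * t_inf

ke, V = 0.22, 18.0                       # hypothetical elimination rate, volume
tau = ideal_interval(45.0, 5.0, ke)      # "ideal" interval for 45/5 ug/ml
dose = ss_infusion_dose(45.0, ke, V, tau=12.0)  # practical 12 h interval
```

Rounding the ideal interval to a practical 12 hrs is what makes the achieved trough (3.2 ug/ml) fall below the 5.0 ug/ml entered as the initial goal.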

Figure 11. Predicted time course (the first 6 days) of amikacin concentrations (dashed line) in the center of a simulated endocardial vegetation of 0.5 cm. Solid line - predicted serum concentrations. Upper horizontal line - target peak goal of 45 ug/ml. Lower horizontal dashed line – estimated organism MIC of 8.0 ug/ml. The predicted endocardial concentrations rise promptly, and are consistently above the estimated MIC of 8.0 ug/ml.


Figure 12. Predicted time course (the first 6 days) of Amikacin concentrations (dashed line) in the center of a simulated endocardial vegetation of 1.0 cm. Solid line - Predicted serum concentrations, and other lines and symbols as in Figure 11. Predicted endocardial concentrations rise more slowly, are more damped, with smaller oscillations from peak to trough, but once the estimated MIC is reached, are consistently above 8.0 ug/ml.

Figure 13. Predicted time course (the first 6 days) of amikacin concentrations (dashed line) in the center of a simulated endocardial vegetation of 2.0 cm. Solid line - predicted serum concentrations; other lines and symbols as in Figure 11. The predicted endocardial concentrations rise much more slowly and are much more damped, with essentially no oscillations from peak to trough. Once the estimated MIC is reached, the concentrations are consistently above 8.0 ug/ml, but two full days are required before the MIC is reached.

Further, if the diameter of the vegetation were 2.0 cm, it would take still longer, as shown in Figure 13. The drug would take about 48 hours to reach the MIC in the center of the vegetation, and significant growth of organisms might well take place before that. For every doubling of the diameter of the sphere, it will take 4 times as long to reach an equal concentration in the center.

6.3 Another Example: Simulating a Post-Antibiotic Effect.

Figure 14 shows computed drug concentrations in the center of a small sphere simulating a microorganism having a diameter of 0.1 micron, 3 simulated layers of diffusion, and a diffusion coefficient of 1.5 x 10^-14. This particular sphere has the property that, in its center, the concentrations of drug fall below the MIC about 6 hrs after the serum levels do, thus simulating (without making any suggestions or conclusions about mechanism of action) a post-antibiotic effect of about 6.0 hrs.


Figure 14. Plot of computed amikacin concentrations (the first 6 days) in the center of a simulated microorganism. Diffusion coefficients in the very small sphere diffusion model are adjusted so that concentrations in the center of the organism lag behind the serum concentrations and, if they fall below the MIC, do so approximately 6 hrs after the serum concentrations do, thus simulating a post-antibiotic effect of about 6 hrs.

We see here that the process of diffusion into and out of spherical porous objects such as endocardial vegetations and small microorganisms can be described with reasonably simple models. The equations describing this process are the same as those for release of drug from a sustained-release formulation.

6.4. Modeling the Relationship between Serum Concentration and Bacterial (or Viral) Kill.

One can also make a Hill model of the relationship between serum drug concentrations and bacterial or viral kill. This has been done by Zhi [22]. We have implemented such a model, in which the rate constant for growth (reflecting the doubling time of the organism) and the rate constant for kill (reflecting the half-time of kill) are combined. Let us assume that an organism is in its logarithmic phase of growth in the absence of any antibiotic. It will have a rate constant for this growth - a doubling time. The killing effect of the antibiotic can be modeled as a Michaelis-Menten or Hill model, which generates a rate constant for this effect. The net rate of growth or kill of an organism depends upon the difference between these two rate constants. The killing effect will be determined by the Emax, representing the maximum possible


rate constant for kill; the EC50, the concentration at which the effect is half maximal; and the time course of the concentrations at the site of the effect achieved with the dosage regimen the patient is given. Both the growth rate constant and the Emax can be found from available data in the literature for various organisms. The general growth versus kill equation is

dB/dt = (Kg − Kk) × B   (4)

and

Kk = Emax × Ctⁿ / (EC50ⁿ + Ctⁿ) ,   (5)

where B is the number of organisms (set to 1.0 relative unit at the start of therapy), Kg is the rate constant for growth, Kk is the rate constant for killing, Emax is the maximum possible effect (rate constant of killing), EC50 is the concentration at which the killing rate is half maximal, n is the Hill or sigmoidicity coefficient, Ct is the concentration at the site of the effect (serum, peripheral compartment, effect compartment, or the center of a spherical model of diffusion) at any time t, and × indicates multiplication. The EC50 can be found from the measured (or clinically estimated) minimum inhibitory concentration (MIC) of the organism. This relationship was developed by Zhi [22], and also independently by Schumitzky [23]. The MIC is modeled as a rate of kill that is equal to but opposite in direction to the rate constant for growth. The MIC thus offsets growth, and at the MIC there is neither net growth nor decrease in the number of organisms. At the MIC,

dB/dt = 0   and   Kk = Kg ,   (6)

and

EC50 = MIC × (Emax/Kg − 1)^(1/n) .   (7)
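A numerical sketch of equations (4), (5), and (7) is straightforward; the growth rate, Emax, Hill coefficient, and MIC below are hypothetical values chosen only for illustration.

```python
import numpy as np

def ec50_from_mic(mic, emax, kg, n):
    """Equation (7): the EC50 at which kill exactly offsets growth at the MIC."""
    return mic * (emax / kg - 1.0) ** (1.0 / n)

def simulate_growth_kill(conc_fn, kg=0.5, emax=4.0, n=1.0, mic=2.0,
                         t_end=48.0, dt=0.01):
    """Euler integration of dB/dt = (Kg - Kk(C)) * B, Kk from Hill equation (5).

    conc_fn -- concentration at the effect site as a function of time (h)
    Returns the relative number of organisms B (starting from 1.0) at t_end.
    """
    ec50 = ec50_from_mic(mic, emax, kg, n)
    b, t = 1.0, 0.0
    while t < t_end:
        c = conc_fn(t)
        kk = emax * c**n / (ec50**n + c**n)   # equation (5)
        b += dt * (kg - kk) * b               # equation (4)
        t += dt
    return b

b_growth = simulate_growth_kill(lambda t: 0.0)   # no drug: unopposed growth
b_mic = simulate_growth_kill(lambda t: 2.0)      # held at the MIC: stasis
b_high = simulate_growth_kill(lambda t: 20.0)    # well above the MIC: kill
```

Holding the concentration exactly at the MIC reproduces the stasis condition of equation (6), while concentrations well above it drive the population down.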

In this way, the EC50 can be found from the MIC, and vice versa. The input to this effect model can be from either the central or the peripheral compartment concentrations of a pharmacokinetic model, or from the center (or any other layer) of a spherical model of diffusion [19]. The sphere may represent an endocardial vegetation, an abscess, or even a small microorganism. In the latter case, one can adjust the sphere diameter and the diffusion coefficient so that the concentrations in the center of the small sphere lag behind the serum concentrations and cross below the MIC about 6 hours after the serum concentrations do, to simulate a post-antibiotic effect of about 6


hours, for example. The effect relationship was modeled by Bouvier D'Ivoire and Maire [24], from data obtained from Craig and Ebert [25], for pseudomonas and an aminoglycoside. Figure 15 below shows the clinical problem when one tries to develop a once daily gentamicin regimen for a simulated healthy young man, age 20, 70 in tall, weighing 70 kg, with a serum creatinine of 1 mg/dL. He has pseudomonas sepsis. A common choice might be to achieve a peak of 12 ug/mL and a trough of 0.5 ug/mL, for example. A reasonable regimen to achieve those goals is 280 mg every 24 hours. Suppose that the MIC of his infecting pseudomonas is 2 ug/mL. As shown below,

Figure 15. Plot of predicted future serum concentrations (thick line and Central scale on left), computed numbers of viable bacteria (dashed line and Effect scale, numbers of organisms, starting with one unit at the start of therapy), and the organism MIC (horizontal dashed line). The organisms are never totally killed. Because of the large scale (up to a one million-fold increase), the organisms do not appear to grow out until after about 40 hours into the regimen, but then grow out of control, reaching the million-fold increase before three days.

It is generally thought that for concentration-dependent drugs it is the peak concentration that correlates best with kill. However, that really depends on the dose frequency. In this case, for a young healthy man whose CCr turned out to be 108 mL/min/1.73 m2, the drug is excreted much more rapidly than in the usual middle-aged man for whom most once daily regimens are useful. We see here that the rapid excretion of the drug leads to concentrations that for approximately 18 hours out of each 24 hour dose interval are below the MIC of 2


ug/mL, and the organisms grow out in an uncontrolled manner. Such a patient will be fortunate if he has good white blood cells and a good immune system. Otherwise, if he were granulocytopenic, he may well be in real trouble with this regimen. The use of such models provides deeper insight into the process of bacterial kill, and into the great need to prevent the emergence of resistant subpopulations among the organisms that at first had the MIC obtained for the overall population. Clinically, one can make estimates of the MIC of these resistant organisms, enter that figure into the software, and examine the probability of achieving killing good enough to prevent the emergence of resistant organisms.

7. OPTIMIZING LEARNING WHILE TREATING – USING THE DOSE AS AN ACTIVE PROBING PARTNER IN ADDITION TO THE MEASUREMENT OF RESPONSE.

We have developed a dual controller that combines, for the first time, the techniques of particle filtering for nonlinear estimation and the Iteration-in-Policy Space (IPS) algorithm for approximating the Stochastic Dynamic Programming (SDP) equations of Bellman. The resulting dual controller is denoted the Particle IPS Algorithm and has been summarized in [26,27]. Dual controllers have the desired property of active learning; that is, they optimally proportion their effort between controlling the patient's PK/PD system and actively probing it to extract more useful information. This probing action is known to systematically improve controller performance compared to controllers that do not probe. Translated into clinical practice, dual controllers should provide still more precise drug dosage regimens with minimum variability in patient response.

We first tested the Particle IPS algorithm by applying it to controlling a physical pendulum model. The pendulum can be regarded as a child's swing that blows in the wind (i.e., process noise) and which is controlled by pulling and pushing on it at allowable times (the doses). The controller goal is defined by having the pendulum achieve a prescribed excursion at a specified time. For optimization purposes, this goal is defined in terms of a quadratic cost function. The problem is made difficult by taking both the length and the mass of the pendulum to be uncertain (i.e., unknown parameters), the sign of the control influence parameter (push or pull) to be unknown, and the pendulum position measurements to be in error (i.e., measurement noise). These complexities make the problem commensurate with that of actively controlling a one- or two-compartment PK model with unknown parameters.


CONTROLLER PERFORMANCE COMPARISON

Control Law    Expected Cost    CPU Time (s)
HCE            18.794           0.05
1-IPS(HCE)     16.536           20
OLF            15.874           0.05
1-IPS(OLF)     14.873           20

Table 1: Simulation results comparing stochastic controller costs incurred on the pendulum control example.

Results of a Monte Carlo study of the control of the pendulum are shown in Table 1. The Heuristic Certainty Equivalence (HCE) controller, using MAP Bayesian adaptive control, incurs an expected cost of 18.8, compared to 16.5 for the 1-IPS(HCE) algorithm, an improvement of 2.3 expected cost units. The 1-IPS(HCE) algorithm uses the HCE control internally as part of its definition, and the relative improvement is ensured by the IPS theory behind its development. On the same example, the MM Bayesian open-loop-feedback (OLF) policy incurs a cost of 15.9, compared to 14.9 for the 1-IPS(OLF) algorithm, a cost improvement of 1 unit. Again, the 1-IPS(OLF) algorithm uses the OLF control internally as part of its definition, and improvement is ensured by the IPS theory.

The pendulum example indicates that the Particle IPS algorithm systematically improves on standard HCE and OLF stochastic control policies. These results are encouraging, and have specific importance in the context of PK applications, for which most current drug dosage control design approaches are either of the HCE (cf. [28]) or OLF (cf. [8,9]) type. Work is now in progress on the Dual Control approach to optimizing aminoglycoside therapy.

For example, now consider a two-horizon (two-cycle) approach to gentamicin therapy. In this scenario, suppose a once daily regimen is given over 2 days. The things to optimize are the doses for each day (each morning), and the optimal time to draw the second level on the first day. The first level will be drawn very shortly after the peak, at the end of the first IV infusion. However, the results do not come back from the lab until sometime after the AM dose is given on Day 2. The true optimal stochastic control is known to have the dual control property: it optimally proportions its resources between controlling the patient and learning about him. Dual control optimizes the controls with respect to simulated future costs, including the effect of the future Bayesian posteriors available from anticipated future measurements. MMDUAL is implemented as a receding two-horizon (R2H) problem, since the duration of the patient's stay is often not known exactly in advance. The R2H assumption is optimal if the patient stays for exactly two horizons, and is a reasonable approximation otherwise. The timing of levels and doses can be considered as "controls" and included in the stochastic optimization.


Figure 16. MM-Dual Determination of the doses for Days 1 and 2, and for the timing of level 2 on Day 1.

Figure 17. MM-Dual Determination of the doses for Horizon B (Days 3 and 4), and for the timing of level 2 (y4) on Day 3.

The sequence of operations is basically as below:

1. Start with the Horizon A dose amounts (Days 1 and 2) and the time for level 2 fixed at initial candidate values.
2. Compute the closed loop expected cost using MC simulation:
   2.1 Draw random subjects from the discrete population prior.
   2.2 Apply the doses and the time for level 2 over Horizon A.


   2.3 Draw random assay noise and simulate the 2 measurements for the first day.
   2.4 Update the Bayesian posterior based on those values.
   2.5 Use conventional open-loop optimal MM dosage to compute the doses for Horizon B (Days 3, 4).
   2.6 Compute the cost over the current realization.
   2.7 Repeat 2.1 – 2.6 many times and take the average of all realized costs. This gives the expected cost associated with step 1 above.
3. Repeat steps 1 and 2 with different doses 1 and 2 (Days 1 and 2) and time for level 2 until the cost is minimized.
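The steps above can be sketched as a nested Monte Carlo loop. Everything here (the one-compartment model, the two-point prior, the cost, and the dose grid) is a toy stand-in for the actual MMDUAL implementation, shown only to make the closed-loop expected cost concrete.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical discrete population prior: (ke [1/h], V [L]) support points.
points = np.array([[0.15, 15.0], [0.30, 20.0]])
prior = np.array([0.5, 0.5])
TARGET, ASSAY_SD = 10.0, 1.0

def level(dose, ke, V, t):
    """One-compartment IV bolus level t hours after the dose."""
    return dose / V * np.exp(-ke * t)

def posterior(dose, t_obs, y):
    """Bayesian posterior over the support points given one noisy level."""
    preds = level(dose, points[:, 0], points[:, 1], t_obs)
    w = prior * np.exp(-0.5 * ((y - preds) / ASSAY_SD) ** 2)
    return w / w.sum()

def horizon_b_dose(post):
    """Open-loop MM dose for Horizon B: minimize weighted squared error."""
    doses = np.arange(50.0, 1001.0, 5.0)
    costs = [np.sum(post * (level(d, points[:, 0], points[:, 1], 4.0)
                            - TARGET) ** 2) for d in doses]
    return doses[int(np.argmin(costs))]

def expected_cost(dose_a, t_obs, n_mc=1000):
    """Closed-loop expected cost of a Horizon A dose and sampling time."""
    total = 0.0
    for _ in range(n_mc):
        ke, V = points[rng.choice(2, p=prior)]    # 2.1 draw a random subject
        y = level(dose_a, ke, V, t_obs) + rng.normal(0.0, ASSAY_SD)  # 2.2-2.3
        post = posterior(dose_a, t_obs, y)        # 2.4 anticipated posterior
        d_b = horizon_b_dose(post)                # 2.5 Horizon B dose from it
        total += (level(d_b, ke, V, 4.0) - TARGET) ** 2   # 2.6 realized cost
    return total / n_mc                           # 2.7 average over realizations

cost = expected_cost(dose_a=300.0, t_obs=4.0)
```

Step 3 would wrap this in an outer search over `dose_a` and `t_obs`; because informative sampling times sharpen the anticipated posterior and thus lower the cost, that outer search is what induces the probing, dual-control behavior.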

Simulating the measurements y1 and y2 for the first day to get the anticipated Bayesian posterior, and using that posterior to compute the doses for Horizon B (Days 3 and 4), leads to a closed loop expected cost in Step 2. This is different from the open loop expected cost obtained when the anticipated Bayesian posteriors are not considered. This optimization of the closed loop expected cost induces the active, probing, learning behavior of the dual controller, which gives us true stochastic optimal control. After this, the optimal policy is to use conventional MM dosage to finish out the control for Horizon B (Days 3 and 4). Further work is being done on this and other clinical scenarios of drug therapy.

8. OUTCOMES.

Tools 1 – 4 and 6 above are currently available in the MM-USCPACK software collection. They are important tools for population modeling and understanding drug behavior in populations of patients, and for optimizing the care of each individual patient by developing the most precise dosage regimens currently available [8,9]. For digitalis glycosides, in the past, with older PK software, therapeutic efficacy was preserved while toxicity was significantly reduced [29]. For lidocaine, better arrhythmia control was achieved with no increase in toxicity [30]. For aminoglycoside antibiotics, survival improved, toxicity was less, hospital stay was reduced by 6 days, and hospital costs were also reduced [31]. For busulfan in children receiving bone marrow transplants, survival improved and veno-occlusive disease was reduced [32]. For cyclosporine in similar children, severe graft versus host disease (GVHD) was greatly reduced, and an average net cost savings of approximately 70,000 Euros was achieved for each episode of severe GVHD prevented [33]. The new MM software does better than the older types [17,18]. In addition, it has been useful in optimizing dosage individualization in children with HIV [34].

9. CONCLUSIONS.

Therapy with dangerous drugs cannot be individualized for each patient simply by looking up the literature or by getting an impression from the raw serum level data and using clinical judgment. It requires a formal structure and specific quantitative tools. These tools are not only for modeling, but also


specifically to best control those models to optimize the care of each individual patient. These tools have improved outcome in many instances (see above).

These tools need to be used, and also to be taught to medical and pharmacy students. The resistance of the medical and clinical pharmacology community to quantitative approaches may well be the reason that clinical pharmacology has so greatly declined as a specialty outside of the pharmaceutical industry. These methods, many of which have been around for decades, have not been incorporated into medical school curricula and practice. We continue to train physicians who have no knowledge of or training in the techniques of optimal drug therapy with potentially dangerous drugs for each individual patient. A change badly needs to be made.

Modeling is not just for research and drug development. It can and should

also lead to clinical action, not just publication. Good PK/PD software is the key tool for individualized therapy for each patient when it is needed. Most hospitals have perceived only the added cost of monitoring and analysis. They have not perceived the shortening of hospital stay and the significant reduction of overall hospital costs achieved by reducing the complications associated with drug therapy. Administrators and the FDA should consider these aspects of health care. ACKNOWLEDGEMENTS: Supported by NIH Grants EB005803 and GM068968. REFERENCES: 1. Mallet A: A Maximum Likelihood Estimation Method for Random

Coefficient Regression Models. Biometrika. 73: 645-656, 1986. 2. Schumitzky A: Nonparametric EM Algorithms for Estimating Prior

Distributions. App. Math and Computation. 45: 143-157, 1991. 3. Leary R, Jelliffe R, Schumitzky A, and Van Guilder M: Nonparametric

Pharmacokinetic/Dynamic Population Modeling with Adaptive Grids. Presented at the Annual Meeting of the American Society for Clinical Pharmacology and Therapeutics, Orlando, FL, March 6-10, 2001, Proceedings, p. P58.

4. Beal S: Ways to Fit a PK Model with Some Data Below the Quantification

limit. J. Pharmacokin. Pharmacodyn., 28: 481-504, 2001. 5. Jelliffe R: Explicit Determination of Laboratory Assay Error Patterns - A

Useful Aid in Therapeutic Drug Monitoring (TDM). Check Sample Series: Drug Monitoring and Toxicology, American Society of Clinical Pathologists Continuing Education Program, Chicago, Il, 10 (4) : pp.1-6, 1990.

Page 28: PHARMACOMETRIC TOOLS TO OPTIMIZE CONTROL OF DRUG … · 1 PHARMACOMETRIC TOOLS TO OPTIMIZE CONTROL OF DRUG PK/PD MODELS FOR BEST PATIENT CARE. Roger Jelliffe MD1, Alan Schumitzky

28

6. Jelliffe R, Schumitzky A, Van Guilder M, Liu M, Hu L, Maire P, Gomis P, Barbaut X, and Tahani B: Individualizing Drug Dosage Regimens: Roles of Population Pharmacokinetic and Dynamic Models, Bayesian Fitting, and Adaptive Control. Ther. Drug Monit., 15: 380-393, 1993.

7. DeGroot M: Probability and Statistics, 2nd ed., Addison-Wesley, 1989, pp. 420-424.

8. Jelliffe R, Bayard D, Schumitzky A, and Botnen A: The Multiple Model Method of Maximally Precise Individualized Dosage Regimens, in the MM-USCPACK Clinical Collection of Computer Programs. Pharmacokinetics Section, XI Annual Meeting of “Chelovek i Lekarstvo” (Man and Drugs), Moscow, Russia, April 20, 2004. Available by license from the author.

9. Jelliffe R, Bayard D, Schumitzky A, Milman M, Jiang F, Leonov S, Gandhi V, Gandhi A, and Botnen A: Multiple Model (MM) Dosage Design: Achieving Target Goals with Maximal Precision. 14th IEEE Symposium on Computer-Based Medical Systems (CBMS'01), July 26-27, 2001.

10. Jelliffe R and Jelliffe S: A Computer Program for Estimation of Creatinine Clearance from Unstable Serum Creatinine Levels, Age, Sex, and Weight. Math. Biosci., 14: 17-24, 1972.

11. Jelliffe R: Estimation of Creatinine Clearance in Patients with Unstable Renal Function, without a Urine Specimen. Am. J. Nephrol., 22: 320-324, 2002.

12. Wheeler L and Sheiner L: Clinical Estimation of Creatinine Clearance. Am. J. Clin. Path., 72: 27-31, 1979.

13. Chiou W and Hsu F: A New Simple and Rapid Method to Monitor the Renal Function Based on Pharmacokinetic Consideration of Endogenous Creatinine. Research Communications in Chemical Pathology and Pharmacology, 10: 316-330, 1975.

14. Brater D: Drug Use in Renal Disease. Balgowlah, Australia, ADIS Health Science Press, 1983, p. 29.

15. Jelliffe R, Schumitzky A, Bayard D, Milman M, Van Guilder M, Wang X, Jiang F, Barbaut X, and Maire P: Model-Based, Goal-Oriented, Individualized Drug Therapy: Linkage of Population Modeling, New "Multiple Model" Dosage Design, Bayesian Feedback, and Individualized Target Goals. Clin. Pharmacokinet., 34: 57-77, 1998.

16. Bayard DS, Jelliffe RW, Schumitzky A, Milman MH, and Van Guilder M: Precision Drug Dosage Regimens Using Multiple Model Adaptive Control: Theory and Application to Simulated Vancomycin Therapy, in Selected Topics in Mathematical Physics, Prof. R. Vasudevan Memorial Volume, ed. by Sridhar R, Rao KS, and Lakshminarayanan V, World Scientific Publishing Co., Madras, 1995.

17. Bayard D and Jelliffe R: A Bayesian Approach to Tracking Patients Having Changing Pharmacokinetic Parameters. J. Pharmacokin. Pharmacodyn., 31 (1): 75-107, 2004.

18. Macdonald I, Staatz C, Jelliffe R, and Thomson A: Evaluation and Comparison of Simple Multiple Model, Richer Data Multiple Model, and Sequential Interacting Multiple Model (IMM) Bayesian Analyses of Gentamicin and Vancomycin Data Collected From Patients Undergoing Cardiothoracic Surgery. Ther. Drug Monit., 30: 67-74, 2008.

19. Maire P, Barbaut X, Vergnaud JM, El Brouzi M, Confesson M, Pivot C, Chuzeville M, Ivanoff N, Brazier J, and Jelliffe R: Computation of Drug Concentrations in Endocardial Vegetations in Patients during Antibiotic Therapy. Int. J. Bio-Med. Comput., 36: 77-85, 1994.

20. Bayer A, Crowell D, Yih J, Bradley D, and Norman D: Comparative Pharmacokinetics and Pharmacodynamics of Amikacin and Ceftazidime in Tricuspid and Aortic Vegetations in Experimental Pseudomonas Endocarditis. J. Infect. Dis., 158: 355-359, 1988.

21. Bayer A, Crowell D, Nast C, Norman D, and Borelli R: Intravegetation Antimicrobial Distribution in Aortic Endocarditis Analyzed by Computer-Generated Model: Implications for Treatment. Chest, 97: 611-617, 1990.

22. Zhi J, Nightingale CH, and Quintiliani R: Microbial Pharmacodynamics of Piperacillin in Neutropenic Mice of Systemic Infection due to Pseudomonas aeruginosa. J. Pharmacokin. Biopharm., 4: 355-375, 1988.

23. Schumitzky A: Personal communication.

24. Bouvier D'Ivoire M and Maire P: Dosage Regimens of Antibacterials: Implications of a Pharmacokinetic/Pharmacodynamic Model. Drug Invest., 11: 229-239, 1996.

25. Craig W and Ebert S: Killing and Regrowth of Bacteria in Vitro: a Review. Scand. J. Infect. Dis. Suppl., 74: 63-70, 1991.

26. Bayard DS: Reduced Complexity Dynamic Programming Based on Policy Iteration. J. Mathematical Analysis and Applications, 170 (1): 75-103, October 1992.

27. Bayard DS and Schumitzky A: Implicit Dual Control Based on Particle Filtering and Forward Dynamic Programming. Technical Report 2007-1, November 19, 2007, Laboratory of Applied Pharmacokinetics, University of Southern California. Submitted for publication to the International Journal of Adaptive Control and Signal Processing, 2008.

28. Sheiner L, Beal S, Rosenberg B, and Marathe V: Forecasting Individual Pharmacokinetics. Clin. Pharmacol. Therap., 26: 294-305, 1979.

29. Jelliffe R, Buell J, and Kalaba R: Reduction of Digitalis Toxicity by Computer-Assisted Glycoside Dosage Regimens, in Computer-Assisted Medical Decision Making, ed. by Reggia J and Tuhrim S, Springer-Verlag, New York, Volume 1, pp. 93-124, 1985. Reprinted, with permission, from Ann. Int. Med., 77: 891-906, 1972.

30. Rodman J, Jelliffe R, Kolb E, Tuey D, deGuzman M, Wagers P, and Haywood L: Clinical Studies with Computer-Assisted Initial Lidocaine Therapy. Arch. Int. Med., 144: 703-709, 1984.

31. van Lent-Evers N, Mathot R, Geus W, van Hout B, and Vinks A: Impact of Goal-Oriented and Model-Based Clinical Pharmacokinetic Dosing of Aminoglycosides on Clinical Outcome: A Cost-Effectiveness Analysis. Ther. Drug Monit., 21: 63-73, 1999.

32. Bleyzac N, Souillet G, Magron P, Janoly A, Martin P, Bertrand Y, Galambrun C, Dai Q, Maire P, Jelliffe R, and Aulagner G: Improved clinical outcome of paediatric marrow recipients using a test dose and Bayesian pharmacokinetic individualization of busulfan dosage regimens. Bone Marrow Transplantation, 28: 743-751, 2001.

33. Martin P, Bleyzac N, Souillet G, Galambrun C, Bertrand Y, Maire P, Jelliffe R, and Aulagner G: Relationship between CsA trough blood concentration and severity of acute graft-versus-host disease after paediatric stem cell transplantation from matched sibling or unrelated donors. Bone Marrow Transplantation, 32: 777-784, 2003.

34. Neely M and Jelliffe R: Practical Therapeutic Drug Management in HIV-Infected Patients: Use of Population Pharmacokinetic Models Supplemented by Individualized Bayesian Dose Optimization. J. Clin. Pharmacol., 48: 1081-1091, 2008.