




Cost-Effective Approach to Managing Laboratory Reference Ranges for Local Laboratories in Clinical Research

Vadim Tantsyura, MS, MA, DrPH(c)1, Imogene Grimes, PhD2,*, Jules Mitchel, MBA, PhD3, Sergiy Sirichenko, MS4, Jim Crowe, MS5, and Deborah Viola, MBA, PhD6

Abstract

The use of a single central laboratory with universal reference ranges is not always a viable option in clinical studies; examples are oncology studies where a rapid turnaround of clinical laboratory results is critical. However, the complexities associated with multiple sites, multiple laboratories, and multiple age and sex groups can lead to logistical nightmares across clinical trials and make handling laboratory data one of the most challenging, labor-intensive, and time-consuming tasks for clinical data managers, especially where different laboratories are used for the same patient. Also, evidence suggests that the reference ranges (RRs) used by local laboratories often create a false sense of precision that is not always supported by science. Managing time-specific, demographic-specific, and site-specific RRs requires significant investment in time and labor. As a result, an alternative approach to management of local laboratory RRs that uses "standard" (sometimes called "published") ranges has been growing in popularity over the past several years. This article compares the pros and cons of this approach relative to the historic ways of handling local laboratory RRs. Scientific, operational, and economic perspectives are also presented.

Keywords

reference ranges, normal ranges, clinical trials, clinical data management, cost-effectiveness

Background: Importance of Reference Ranges in Clinical Research

Laboratory data are critical markers of disease and hence are key measures of safety within clinical trials. Laboratory analytes are among the most common biomarkers used in clinical research and medical care because laboratory abnormalities often precede clinical symptoms and serve as a first indicator of potential adverse events associated with an investigational medicinal product. Because clinical laboratory results are essential to clinical trials, there is a need to collect and ensure the accuracy and integrity of a very large amount of these data. For example, laboratory data account for approximately 30% of the clinical data in a typical oncology phase 2 clinical trial. According to Karvanen, "In many studies and especially in early clinical development phase (such as oncology phase 1 studies), laboratory data constitute 50%-80% of the data to be collected."1 Based on the authors' experience, a typical small oncology biotechnology company deals with 30,000–60,000 laboratory records (vs ~1000–1500 adverse events [AEs]) per investigational product. This order of magnitude is not insignificant. The average new drug application to the FDA contains about 573,000 laboratory test results; the sponsors might be evaluating thousands of results each day.2

Laboratory results are of little value without the ability to analyze them in comparison with baseline or reference values. Laboratory results are typically either compared with other samples taken from the same subject at a different time point (eg, baseline values) or are compared with an established reference range.

1 Independent consultant, Danbury, CT, USA
2 Otsuka Pharmaceutical Company, Rockville, MD, USA
3 Target Health Inc, New York, NY, USA
4 Pinacle21/OpenCDISC, Bedminster, NJ, USA
5 JT Crowe Consulting, Richmond, VA, USA
6 New York Medical College, Valhalla, NY, USA
* Current affiliation: Transtech Pharma Inc, High Point, NC, USA

Submitted 20-Dec-2011; accepted 20-Mar-2012.

Corresponding Author:

Vadim Tantsyura, MS, MA, DrPH(c), 68 Judith Drive, Danbury, CT 06811, USA

Email: [email protected]

Drug Information Journal 46(5) 593-599. © The Author(s) 2012. Reprints and permission: sagepub.com/journalsPermissions.nav. DOI: 10.1177/0092861512446333. http://dij.sagepub.com



Reference ranges (RRs) can also be known as "normal ranges" (although not all populations can be considered truly "normal" or "healthy").i

RRs are established by analyzing a large number of samples and statistically determining the appropriate range of values.ii Because values may differ according to variables such as age, gender, disease processes, time of day, or regional variations, multiple ranges are often established for a given laboratory test. Laboratories may either establish their own set of RRs or obtain ranges from published sources. RRs are usually collected at the beginning of a study, unless there are changes to the specimen collection, instrumentation, or methodology.iii
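As a brief illustration of how such ranges are typically derived (a minimal sketch, assuming the common convention of taking the central 95% of a reference sample; laboratories may instead use parametric methods or different cutoffs, as discussed later in this article):

```python
import numpy as np

def reference_range(values, lower_pct=2.5, upper_pct=97.5):
    """Estimate a reference range as the central 95% of a reference sample.

    `values` holds results from reference (presumed healthy) individuals;
    the bounds are the conventional 2.5th/97.5th percentiles.
    """
    values = np.asarray(values, dtype=float)
    return np.percentile(values, lower_pct), np.percentile(values, upper_pct)

# Hypothetical serum potassium results (mmol/L) from a reference population.
rng = np.random.default_rng(0)
potassium = rng.normal(loc=4.2, scale=0.4, size=500)
low, high = reference_range(potassium)
print(f"Estimated RR: {low:.1f}-{high:.1f} mmol/L")
```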

Three categories of use of RRs in oncology clinical trials are typically considered:3

- Inclusion/exclusion (I/E) criteria are often defined as multiples of the upper limit of normal (ULN), rather than in absolute values
- Protocol-defined serious adverse event (SAE) criteria (eg, laboratory results of aspartate aminotransferase [AST] and/or alanine aminotransferase [ALT] >3× ULN with total bilirubin >2× ULN will be considered an "important medical event" and will be reported to the sponsor as an SAE)
- Standard common terminology criteria (CTC) toxicity grades (in about half of the laboratory analytes collected in oncology clinical trials, the toxicity is graded based on the result as a multiple of the ULN)

Table 1 provides an example of the common terminology criteria for adverse events (CTCAE) by the National Cancer Institute (version 3) using AST as an example.
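To make the ULN-based grading concrete, here is a minimal sketch of how a sponsor might derive a CTC grade for an AST result from the applicable ULN; the cutoffs follow the AST example in Table 1, and grading rules differ for other analytes:

```python
def ctcae_v3_ast_grade(result, uln):
    """Return the CTCAE v3.0 grade for an AST result, given the upper limit
    of normal (ULN) that applies to that result (cutoffs per Table 1)."""
    ratio = result / uln
    if ratio <= 1.0:
        return 0
    if ratio <= 2.5:
        return 1
    if ratio <= 5.0:
        return 2
    if ratio <= 20.0:
        return 3
    return 4

# Example: AST of 90 U/L against a ULN of 36 U/L is 2.5x ULN -> grade 1.
print(ctcae_v3_ast_grade(90, 36))
```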

The costs of maintaining multiple normal laboratory ranges from multiple laboratories are very high. Briefly, for each laboratory, (1) clinical operations determine RRs, (2) sites or data management enter RRs into electronic data capture (EDC) systems, (3) during the monitoring process, clinical operations check that the laboratory units and ranges displayed in the EDC system reflect what is on the patient laboratory report, and (4) laboratories often update RRs during the course of a study, requiring additional entry and subsequent data management. As a result, multitudes of hours are invested each year to maintain and verify RRs.

Why is there a need to maintain site-specific laboratory normal ranges? An informal survey conducted at the 2011 annual DIA meeting revealed that many participants believed that no alternative approach had been published and that no alternatives were explicitly supported by the FDA. This article attempts to fill that gap by proposing a scientifically solid and cost-effective way of handling laboratory RRs in clinical trials.

Inherent Variability of Laboratory RRs and Problems With Interpretation of RRs

The inherent variability of laboratory RRs is a well-documented fact.4,5 Figures 1 and 2 illustrate a typical distribution of the lower and upper limits of normal for potassium across participating laboratories.4 In that report, the lower limit of normal varied from 3.0 to 3.9 mmol/L and the upper limit of normal varied from 4.5 to 6.2 mmol/L.

Table 1. Common terminology criteria for adverse events (CTCAE) version 3.0, aspartate aminotransferase grading example.a

CTCAE v. 3.0 Grade    Normalized Result (× ULN)
0                     ≤ ULN
1                     > 1 to 2.5 × ULN
2                     > 2.5 to 5.0 × ULN
3                     > 5.0 to 20.0 × ULN
4                     > 20.0 × ULN

ULN, upper limit of normal. a Rules vary for each analyte.

Figure 1. Potassium—lower limit of normal frequency. Source: Image adapted from Haag et al.4

Figure 2. Potassium—upper limit of normal frequency. Source: Image adapted from Haag et al.4




Variability of such magnitude is typical across many analytes.

From a statistical perspective, major sources of variability can include the following:

- Patient population (eg, contrast a patient population from a hospital where the proportions of heavy smokers and heavy drinkers are higher than average with a hospital associated with a health-conscious university where there are smaller proportions of smokers and heavy drinkers)
- The site and laboratory environment itself (including equipment, standard operating procedures, and operational environment)
- Attributes of key laboratory and support staff, such as their judgment relative to laboratory mishaps, training, and experience (technical performance affects the ranges)

Ruvuna et al6 list the following 3 major causes of RR variability (and ultimately imprecision): (1) variation in assay specifications used by different laboratories, (2) variation in clinical decisions (in allowable false-positive vs false-negative rates) between laboratories, and (3) variation in mathematical decisions (95% CI vs 90% CI). (Unusual values from the sample may be censored just because they are several standard deviations from the mean and make the curve messy.2) In addition, Thompson et al2 argue that in some cases the environmental and biological sources of variability (diet, age, and national origin) are not compensated for in the calculation of RRs.

This inherent variability of RRs makes interpretation of results more difficult and makes RRs a target of criticism from many perspectives. Solberg and Stamm7 raised the issue of "obtaining a sufficient number of adequate specimens for the production of the reference values." Henny et al8 presented a "need for revisiting the concept of reference values," pointing to the "need for more practical recommendations regarding systematic errors and transferability, regarding the reference population, regarding statistical methods used, regarding reference and decision limits and the question about which percentiles to be used." Hyltoft-Petersen9 points to "the flaws of population-based reference intervals due to the biological individuality presented by all, as the dispersion of values for any individual may span only a small part of the traditional reference interval for many quantities." He also points to "a high percentage of false positive results when the traditional description of reference values as 95% reference intervals is used." Thus, it is reasonable to conclude that the RRs used by local laboratories often create a false sense of precision that is not always supported by science.

Furthermore, when the normal range changes, tracking the significance of a result can be challenging or even compromised. A case was observed in which a patient's ALT values were 51 and 48 at baseline and postbaseline, respectively. When the upper limit of the normal range decreased from 52 to 45, the patient was classified as shifting from normal to above normal, even though the patient's values actually declined.10 These statistical artifacts can occur whenever normal ranges vary during a study for any reason.
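A minimal sketch of how such an artifact arises (the values are those from the case described above; the classification rule is simply the usual "above ULN" convention):

```python
def classify(value, uln):
    """Classify a result against the upper limit of normal in force at that visit."""
    return "above normal" if value > uln else "normal"

# ALT falls from 51 to 48, but the ULN in force drops from 52 to 45,
# so a shift table reports a move from normal to above normal.
print(classify(51, 52))  # baseline: normal
print(classify(48, 45))  # postbaseline: above normal, despite the lower value
```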

The difficulty of interpreting laboratory results is not the only problem caused by the variability of RRs. As shown in the following section, such variability is also a source of operational challenges for clinical and data management professionals.

Inherent Labor Intensiveness of the Current Standard Laboratory RR Management Process

The process of collecting and maintaining local laboratory RRs usually includes the following 6 steps, which require intensive use of resources:

1. Laboratory RRs are received by the site coordinator or contract research organization (CRO)/sponsor
2. RRs are transcribed into the electronic case report form (eCRF)
3. The RR document is validated (quality control) in the eCRF, with appropriate revisions of the RR within the EDC system
4. Documentation is filed in the data management project files
5. The project team is notified of entry of new (or updated) RRs
6. EDC laboratory units are managed and converted (from original laboratory units of measurement) to standard (ie, Système International [SI]) units

There are many data and data quality issues that clinical and data management professionals deal with during the life cycle of laboratory RR management processes. Common examples include the following:

- Multiple sets of normal ranges per site are used, which leads to tracking issues
- Collecting the RRs from the study sites is not as smooth as in the case of a central laboratory because of site-specific resource constraints
- Information for dates, units, ranges, age, and gender may be missing, and typographical errors occur
- Managing units consumes resources when different local laboratories use different units for the same analytes (up to 7, typically 3 or 4)6; unit conversions take significant time, and cleaning local laboratory data becomes much more difficult because of the multiple ranges used by the different sites (a minimal conversion sketch follows this list)
- The large number of laboratory-range-related issues leads to lengthy interactions between the sponsor and the sites, which can lead to frustration; consequently, clinical and data management personnel need to deal




with the "upset sites" when, for example, supporting documentation for revised RRs is not provided
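As a rough illustration of the unit-management burden (a minimal sketch; the analytes, conventional units, and conversion factors shown are standard textbook values chosen only as examples, not taken from the article):

```python
# Conversion factors from common conventional units to SI units.
# Factors are standard ones (e.g., glucose mg/dL x 0.0555 = mmol/L).
TO_SI = {
    ("glucose", "mg/dL"): (0.0555, "mmol/L"),
    ("creatinine", "mg/dL"): (88.4, "umol/L"),
    ("total bilirubin", "mg/dL"): (17.1, "umol/L"),
    ("potassium", "mmol/L"): (1.0, "mmol/L"),  # already SI
}

def to_si(analyte, value, unit):
    """Convert a local laboratory result to SI units, if a factor is known."""
    try:
        factor, si_unit = TO_SI[(analyte, unit)]
    except KeyError:
        raise ValueError(f"No conversion defined for {analyte} in {unit}")
    return value * factor, si_unit

# Example: a local lab reports creatinine in mg/dL; the EDC stores umol/L.
print(to_si("creatinine", 1.2, "mg/dL"))  # -> (106.08, 'umol/L')
```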

To cope with the multitude of laboratory RR issues, sponsor companies employ data visualization tools and expensive processes built around them. Unfortunately, the software alleviates only a portion of the task, leaving data managers with heavily manual, tedious, and labor-intensive work. In addition to being extremely resource consuming, the process is difficult to control for quality, and thus it is inherently error prone, regardless of the assistance of software visualization tools. It can be argued that the current process artificially imposes a significant burden on data managers, and it can be estimated that hundreds of millions of dollars are spent in the United States annually on the maintenance of the existing system (see Table 2 for more details).

One may argue that the lion's share of these resources can be saved. Economic reality gives pharmaceutical professionals an incentive to question the reasonableness of spending significant resources on every tiny detail of questionable importance, especially knowing that 66%–71% of phase 2 and 3 trials in oncology, for example, are negative.11 Additional scientific considerations include statistical anomalies and artifacts that can arise as the result of changes in metainformation about a patient that is unrelated to the patient's disease, which can result in diagnostic changes or conflicting conclusions. Thus, the soundness of the process from the scientific and resource-investment perspectives can benefit from a reevaluation. As a result, we present alternative strategies to managing RRs that could be both scientifically and economically more plausible.

Alternative Strategies to Managing RRs

In addition to the classic approach of using individual local RRs (discussed above and presented as alternative 1 below), a literature review identified 2 previously published approaches ("RR from highest enrollment site"14 and "the phantom laboratory GLN"6). These options are listed as alternatives 2 and 3 below. (Chuang-Stein15 also suggested "constructing disease-specific or protocol-specific RRs"; however, sufficient operational details were not provided, which is why this particular alternative is not discussed further.) Alternative 4 below is a strategy of using "standard" or "published" laboratory RRs that has not previously been considered or discussed in the peer-reviewed literature. These alternatives are summarized in Table 3.

Alternative 1 is the commonly used method wherein local RRs are collected and used for analysis as well as for dose escalation purposes. This approach is considered by many to be the most clinically meaningful. However, it can be debated on scientific grounds. Furthermore, it is expensive.

Alternative 2 is a potential alternative wherein the laboratory with the highest enrollment is used as the standard.6 Ruvuna et al state that "[t]his strategy works, but the drawbacks as defined by Chuang-Stein (1998) include possible chemical, analytical, and mathematical biases associated with estimated normal ranges from a single laboratory data that may render them inappropriate for generalizing across multiple laboratories."

Alternative 3 provides a strategy of pooling and summarizing laboratory values from multiple laboratories by implementing a 2-step process. "Step one involves converting all laboratory units of measurements into the same units of measurements by transforming all units to système international d'unités (SI units). . . . Step two involves creating a 'phantom' laboratory of estimated GLN [generalized lab norms] using the percentile method."6 This method allows pulling the RRs from multiple local sites and creating an unbiased "default" normal RR (labeled as a "phantom laboratory") that can be used for computing statistical summaries or toxicity grading for a trial or a program. However, the "phantom laboratory" method does not reduce the resource consumption associated with the need for local RR collection and maintenance. Quite the contrary: although statistically sound, this method further complicates the process and thus is considered suboptimal.
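Ruvuna et al describe the percentile method only at a high level; the following is a minimal sketch of one plausible reading of step two (pooling each laboratory's limits, after SI conversion, and taking percentiles across laboratories), not the authors' exact algorithm, and the laboratory ranges shown are hypothetical:

```python
import numpy as np

def phantom_lab_limits(lab_ranges, lower_pct=50, upper_pct=50):
    """Derive 'generalized lab norm' limits from per-laboratory reference
    ranges (already converted to SI units).

    `lab_ranges` maps a laboratory ID to its (lower limit, upper limit).
    The phantom limits are taken as percentiles of the pooled limits
    across laboratories (the median by default).
    """
    lowers = np.array([lo for lo, _ in lab_ranges.values()], dtype=float)
    uppers = np.array([hi for _, hi in lab_ranges.values()], dtype=float)
    return np.percentile(lowers, lower_pct), np.percentile(uppers, upper_pct)

# Hypothetical potassium ranges (mmol/L) reported by three local laboratories.
ranges = {"lab_A": (3.3, 5.1), "lab_B": (3.5, 5.3), "lab_C": (3.6, 5.0)}
print(phantom_lab_limits(ranges))  # -> (3.5, 5.1)
```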

Alternative 4 is the strategy of using "standard" RRs, which is supported by this publication. In many cases, using standard RRs (ie, those published in the New England Journal of Medicine16 or other sources) for laboratory analysis is a more scientifically sound and cost-effective alternative to laboratory-specific ranges. More specifically, this process may include the following details:

- Site laboratory ranges are collected for the trial master file (per Good Clinical Practice guidelines according to the International Conference on Harmonisation, these are collected but might not be easily accessible); they are not further processed electronically nor used for any standard statistical analysis.

Table 2. A quick summary of calculations.

- Total spending on clinical trials: US$49.8 billion/year in 2011. The Congressional Budget Office12 cites PhRMA's estimate of US$39 billion in 2006; linear extrapolation using a 5% inflation-adjusted growth rate from 2006 to 2011 gives an estimated annual spending of US$49.8 billion/year. According to Kaitin,13 US pharmaceutical spending reached US$50 billion/year in 2008. The authors selected the more conservative estimate of US$49.8 billion/year in 2011.
- Oncology (primary use of local laboratories) cost = 18%–20% of total = US$9.0–$10 billion
- Data management costs are typically 10%–20% of total budgets for an oncology clinical trial = US$900–$2000 million/year
- Laboratory reference range management is estimated at 20%–30% of total data management costs in oncology = US$180–$600 million/year




- I/E criteria in protocols based on ULN are changed to absolute values (ie, instead of saying a patient is eligible if baseline "AST <2× ULN," we change this to "AST <72 U/L"; see the sketch after this list).
- Sites are instructed to perform dose modifications and grade AEs based on the local laboratory ranges. As a result, when the sponsor company or CRO reviews the results, they might think a dose should have been modified because of the laboratory grade when it was not, or vice versa.
- Summary tables may appear to have minor discrepancies. For example, a laboratory table might report a grade 3 AST rate of 10% versus the AE table report of 11%. Also, all laboratory summary tables showing grades will have a footnote on how they were calculated.
- For documentation, ICH E617 requires documentation at the site,iv and this requirement should be followed. It is anticipated that updates to protocols and statistical analysis plans will be required (if in doubt, seek advice from the regulators).
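As a worked illustration of the I/E rewrite in the first bullet above (a sketch; the ULN of 36 U/L for AST is an assumed published value used only to reproduce the "<2× ULN becomes <72 U/L" example):

```python
def absolute_criterion(multiple_of_uln, published_uln):
    """Turn a ULN-multiple eligibility criterion into an absolute cutoff."""
    return multiple_of_uln * published_uln

# Assumed published ULN for AST of 36 U/L: "AST < 2x ULN" becomes "AST < 72 U/L".
print(absolute_criterion(2, 36))  # -> 72
```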

The alternative 4 methodology allows for significant operational flexibility and adaptability. It also gives sponsor companies the ability to assess and mitigate risks on a case-by-case basis. For instance, if an investigational new drug bears a risk of drug-induced liver injury, close attention might be given to those biomarkers that are indicative of liver injury (such as bilirubin, alkaline phosphatase [ALP], ALT, and AST; see the FDA drug-induced liver injury guidance18 for more details). In addition, those analytes with higher variability of local RRs might be scrutinized more carefully (ie, using the cutoff point of coefficient of variation [CV] > 10% suggested by Ruvuna et al6) when pooling results across laboratories, relative to the analytes of lower variability or those that bear lower risk for the particular patient population.
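A minimal sketch of that screening step, assuming the CV is computed on the ULNs reported by the participating laboratories (the article does not specify the exact computation; the potassium ULNs below are hypothetical values within the spread reported by Haag et al4):

```python
import numpy as np

def cv_percent(values):
    """Coefficient of variation (%) of a set of laboratory-reported limits."""
    values = np.asarray(values, dtype=float)
    return 100 * values.std(ddof=1) / values.mean()

# Hypothetical ULNs for potassium (mmol/L) from participating laboratories.
potassium_ulns = [4.5, 5.0, 5.1, 5.3, 5.5, 6.2]
flag = cv_percent(potassium_ulns) > 10  # scrutinize this analyte more carefully?
print(round(cv_percent(potassium_ulns), 1), flag)
```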

Furthermore, compared with the inherent labor intensiveness of the current practice (alternative 1), it is estimated that approximately 80%–90% of RR management costs can be cut. Therefore, savings of the equivalent of 0.25%–1.1% of the total typical oncology trial budget can be expected (the product of all percentage points in Table 2). Thus, aggregate industry cost savings associated with alternative 4 across all oncology clinical trials in the United States are estimated at US$144–$540 million per year.
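The dollar figure follows directly from the chain of estimates in Table 2; a minimal sketch of the arithmetic (all percentages are the article's own estimates):

```python
# Chain of estimates from Table 2 (US$ billions per year), with the
# 80%-90% avoidable-cost assumption applied at the end.
oncology_spending = (9.0, 10.0)  # 18%-20% of US$49.8 billion total trial spending
dm_costs = (oncology_spending[0] * 0.10, oncology_spending[1] * 0.20)  # 10%-20%
rr_costs = (dm_costs[0] * 0.20, dm_costs[1] * 0.30)                    # 20%-30%
savings = (rr_costs[0] * 0.80, rr_costs[1] * 0.90)                     # 80%-90% cut

print(f"RR management: US${rr_costs[0]*1000:.0f}-{rr_costs[1]*1000:.0f} million/year")
print(f"Potential savings: US${savings[0]*1000:.0f}-{savings[1]*1000:.0f} million/year")
```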

Potential Risks and Criticism of the Proposed Method

Currently, the acceptance of alternative 4, utilizing published RRs, is constrained by perceived potential risks associated with this approach. What are these potential risks, and how much risk is acceptable? According to the FDA,19 "[R]isk management is an iterative process of (1) assessing a . . . benefit-risk balance, (2) developing and implementing tools to minimize its risks while preserving its benefits, (3) evaluating tool effectiveness and reassessing the benefit-risk balance, and (4) making adjustments, as appropriate, to the risk minimization tools to further improve the benefit-risk balance." Also, according to the FDA,20 "There is a growing consensus that risk-based approaches to monitoring, such as focusing on the most critical data elements, are more likely to ensure subject protection and overall study quality." Furthermore, "Sponsors should perform a risk assessment that generally considers the types of data to be collected in a clinical trial, the specific activities required to collect these data, and the range of potential safety and other human subject protection concerns that are inherent to the clinical investigation. Sponsors should consider the findings of the risk assessment when developing a monitoring plan. There is increasing recognition that some types of errors in a clinical trial are more important than others."20 One should take a close look at the following 3 sources of potential "risks" associated with the proposed methodology of handling laboratory RRs.

Table 3. Alternative approaches to laboratory reference range handling: advantages and limitations.

Alternative | Advantages | Limitations
1. Local RRs | Perceived by many as clinically meaningful | The most expensive option/lowest ROI
2. RRs from highest patient enrollment site14 | Single range is easy to use | Not clinically meaningful; resulting summaries may not be comparable among laboratories
3. The "phantom" laboratory generalized lab norms6 | Not subject to violation of normal distribution; cross-sectional representation of all sites | Labor intensive; suboptimal ROI
4. Standard RRs (eg, New England Journal of Medicine16 or other) | Scientifically solid; marginal effort/highest ROI | Nonconventional (ie, will require time to adopt)

RR, reference range; ROI, return on investment.




Clearly, risk to patient safety is the primary concern of clinicians and regulators in clinical trials. Thus, one of the risks frequently mentioned by opponents of the proposed process is that patient safety might be jeopardized by the use of standard/published RRs because of potential misclassification of AEs. However, in this approach, the use of standard/published RRs is limited to the collection and analysis of data by the sponsor companies. It is possible and recommended that the clinical investigators participating in clinical trials continue using local RRs. In essence, the clinical decision criteria at the patient level do not change. Thus, there is no additional risk to the patients from using alternative 4.

The second set of concerns involves risks associated with analysis and reporting of laboratory data, more specifically potential underreporting or overreporting of AEs (because of the small differences in CTC grade calculations using local and standard RRs). For example, one may argue that there is a possibility of "incorrect" calculation of the maximum tolerated dose in dose-escalation studies. In our opinion, these risks are overestimated, because worrying about "incorrect" dosing presumes that one set of normal ranges is correct and another set is incorrect, whereas we think they are merely inconsistent. One could propose that the local laboratory ranges be used for patient care if there is a difference. Many trials use central laboratories as well as local laboratories, and the results will vary. If there are concerns about incorrect dosing presuming that the local laboratory range is correct, then use of a central laboratory would carry the same concern, and most major drug trials use central laboratories.

The third type of risk might come from potential bias when the authors of a study protocol are free to select among multiple published RRs for a particular marker. This risk can be minimized either by setting laboratory RR standards at the industry level or by additional requirements that hold sponsor companies to the question of why one published RR was chosen over another.

Finally, it can be argued that the existing method (alternative 1, local RRs) bears some risks to the patient, because when urgent (nonscheduled) laboratory results are needed and alternative laboratories are used, errors in interpretation may arise.

Conclusion

The evidence above suggests that the RRs used by local laboratories often create a false sense of precision that is not always supported by science. Local RRs are based on the local populations evaluated by the hospital. It is possible but not very likely that the local population of a particular hospital is more representative of the patients in the study than the population used to calculate the standard/published RRs, unless the hospital is a specialty hospital that treats only a narrow set of diseases. Even oncology centers are usually located in general hospitals whose laboratories serve a broad range of patients of different ages, races, and genders, many of whom may be healthy persons who are getting annual physicals. Thus, the fears associated with perceived "risks" of using standard RRs are unfounded. The concern over "incorrect" dosing (or "incorrect" analysis) because of the use of standard/textbook RRs presumes that one set of normal ranges is correct and another set is incorrect, while they may merely be inconsistent. In addition, based on the authors' experience, the differences in grading laboratory AEs using standard RRs versus local RRs are relatively small and clinically insignificant (5%-15%).

Based on the evidence presented, it is recommended that the existing process be altered to the use of standard RRs (ie, those published in the New England Journal of Medicine16) by pharmaceutical companies for data analysis purposes. At the same time, personnel at investigational sites can be instructed to perform dose modifications and grade AEs based on site-specific ranges, if the sponsor company prefers a less controversial approach.

There is no single best uniform approach to every possible situation in managing local RRs in clinical trials, and study-specific approaches should be selected based on study details such as indication, study population, critical analytes, sample size, and so on. In addition, research might be needed to determine the precise sensitivities of each parameter. In approaching any new study, it is important to distinguish between analytes of higher risk/importance and analytes of lower risk/importance. The high-importance analytes need to be scrutinized much more thoroughly than the others. In doing so, clinical trial sponsors will follow the FDA recommendation that "each sponsor [should] design a monitoring plan that is tailored to the specific human subject protection and data integrity risks of the trial."20

There is anecdotal evidence that the standard RR methodology has been successfully used in the United States by a number of reputable pharmaceutical companies. If the standard RR method is adopted, industry-wide cost savings in the United States could approach US$144–$540 million per year. Open support of standard RRs by the FDA and EMA (European Medicines Agency) could assist with changing the status quo and accelerating the wider acceptance of the method in the pharmaceutical industry.

Acknowledgments

The authors thank John Walker (Novartis) and Michael Gusmano, PhD (New York Medical College) for their thoughtful comments.

Declaration of Conflicting Interests

The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.




Funding

The author(s) received no financial support for the research, authorship, and/or publication of this article.

Notes

i. "How is health defined? Often these are observations of a few young, Caucasian, non-obese males who do not smoke, drink ethanol, or abuse drugs."2

ii. Reference ranges typically consist of a high value, a low value, the unit of measurement, and an effective date. Reference ranges can also be age- and gender-specific, necessitating identification of these parameters.

iii. Laboratory relicensure may also trigger the need to update documentation of reference ranges.

iv. The ICH Guidelines for Good Clinical Practice17 in sections 8.2.11 and 8.3.7 recommend the following information be kept in the files of the investigator/institution and Sponsor:

- Reference values or ranges for all medical/laboratory/technical procedures or tests.
- Changes or updates to reference values or ranges for all medical/laboratory/technical procedures or tests.
- Documentation of certification, accreditation, established quality control, or other validation (where required) of all medical/laboratory/technical procedures or tests.
- Documentation of changes or updates relating to certification, accreditation, established quality control, or other validation (where required) of all medical/laboratory/technical procedures or tests.

References

1. Karvanen J. The statistical basis of laboratory data normalization. Drug Information Journal. 2003;37:101–107. http://dij.sagepub.com/content/37/1/101.full.pdf+html.

2. Thompson WL, Brunelle RL, Wilson MG. Performance and interpretation of laboratory tests. Quoted by: Cato A, Sutton L, Cato A III. Clinical Drug Trials and Tribulations. 2nd ed. New York, NY: Marcel Dekker; 2002. Drugs and the Pharmaceutical Sciences.

3. Wallach JB. Interpretation of Diagnostic Tests. 7th ed. Philadelphia, PA: Lippincott Williams & Wilkins; 2000.

4. Haag MDM, Kelly JR, Ho A, Seccombe DW. A study to examine the accuracy of potassium measurements in clinical laboratories across Canada. Clin Biochem. 2000;33(6):449–456. http://www.ceqal.com/files/data/Publications/Methods/Accuracy%20of%20Potassium%20Measurements%20in%20Canada%202001.pdf.

5. Dutta A, Saha C, Johnson CS, Chalasani N. Variability in the upper limit of normal for serum alanine aminotransferase levels: a statewide study. Hepatol. 2009;50:1957–1962. http://onlinelibrary.wiley.com/doi/10.1002/hep.23200/pdf.

6. Ruvuna F, Flores D, Mikrut B, De La Garza K, Fong S. Generalized lab norms for standardizing data from multiple laboratories. Drug Information Journal. 2003;35:61–79.

7. Solberg HE, Stamm D. International Federation of Clinical Chemistry (IFCC) Scientific Division, expert panel on theory of reference values: approved recommendation on the theory of reference values, part 4. Control of analytical variation in the production, transfer and application of reference values prepared for publication. Eur J Clin Chem Clin Biochem. 1991;29:531–535. http://edoc.hu-berlin.de/oa/degruyter/cclm.1991.29.8.531.pdf.

8. Henny J, Petitclerc C, Fuentes-Arderiu X, et al. Need for revisiting the concept of reference values. Clin Chem Lab Med. 2000;38:589–595. Quoted by: Hyltoft-Petersen P. A review of the Clinical Chemistry and Laboratory Medicine special issue on reference values (vol. 42, no. 7, 2004). http://www.westgard.com/the-latest-on-reference-values-and-reference-intervals.htm#interpretlimits.

9. Hyltoft-Petersen P. A review of the Clinical Chemistry and Laboratory Medicine special issue on reference values (vol. 42, no. 7, 2004). http://www.westgard.com/the-latest-on-reference-values-and-reference-intervals.htm#interpretlimits.

10. Grimes I, Kidd M. Memo to file included in Map to NDA. Confidential source. 1997.

11. Henderson L. Oncology clinical trials, keys to approval. Appl Clin Trials Online. http://www.appliedclinicaltrialsonline.com/appliedclinicaltrials/article/articleDetail.jsp?id=730568. Published July 1, 2011.

12. Congressional Budget Office. Research and development in the pharmaceutical industry. 2006. http://www.cbo.gov/ftpdocs/76xx/doc7615/10-02-DrugR-D.pdf.

13. Kaitin KI. Deconstructing the drug development process: the new face of innovation. Clin Pharmacol Ther. 2010;87:356–361.

14. Chuang-Stein C. Laboratory data in clinical trials: a statistician's perspective. Drug Information Journal. 1998;19:167–177. Quoted by: Ruvuna F, Flores D, Mikrut B, De La Garza K, Fong S. Generalized lab norms for standardizing data from multiple laboratories. Drug Information Journal. 2003;35:61–79.

15. Chuang-Stein C. Summarizing laboratory data with different reference ranges in multicenter clinical trials. Drug Information Journal. 1992;26:77–84. Quoted by: Ruvuna F, Flores D, Mikrut B, De La Garza K, Fong S. Generalized lab norms for standardizing data from multiple laboratories. Drug Information Journal. 2003;35:61–79.

16. Kratz A, Ferraro M, Sluss PM, Lewandrowski KB. Laboratory reference values. N Engl J Med. 1998;339:1063–1072.

17. FDA. ICH E6 good clinical practice: consolidated guidance. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/ucm073122.pdf.

18. FDA. Guidance for industry: drug-induced liver injury: premarketing clinical evaluation. July 2009. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM174090.pdf.

19. FDA. Guidance for industry: development and use of risk minimization action plans. March 2005. http://www.fda.gov/downloads/RegulatoryInformation/Guidances/UCM126830.pdf.

20. FDA. Guidance for industry: oversight of clinical investigations—a risk-based approach to monitoring. Draft guidance. August 2011. http://www.fda.gov/downloads/Drugs/GuidanceComplianceRegulatoryInformation/Guidances/UCM269919.pdf.

