Practical and Statistical Considerations for the Design and Analysis of Analytical Bridging Studies
Ken Miller and Harry Yang
CMC Strategy Forum
27 January 2014
Overview
Practical considerations in determining an analytical
transition strategy
Statistical considerations for design and analysis of analytical
bridging studies
Practical considerations in determining a transition
strategy
Where is the product in its lifecycle?
Is it a transition to a new method or an improvement in an existing method?
What type of improvement is it?
– Operational improvement (e.g., time and/or resources) – limited understanding of the impact on assay performance
– Performance improvement – more specific (e.g., cell-based versus binding), less variable (after optimization)
[Product lifecycle: Discovery → Pre-Clinical Development → Early-Stage Clinical Development → Late-Stage Clinical Development → Commercial]
Transition to a new method (binding assay → cell-based assay)
– Transition to a cell-based bioassay is an improvement – better
reflects product mechanism of action (MoA)
– The improvement may, and is expected to, result in a difference because the assays measure different things
– Cell-based bioassay should be treated as a new measure of
potency and need not be statistically bridged to the binding assay
• “Qualify” the cell-based bioassay
• Compare degradation pathways between the cell-based bioassay and the binding assay
• Test historical samples in the cell-based bioassay to make an informal comparison and generate data for setting specifications
Practical considerations in determining a transition
strategy (cont.)
Transition to a new method (HPSEC → gel electrophoresis)
– Methods measure same quality attribute
– Formal bridging might be required
– Ultimately, select the method with the best operating characteristics to measure the quality attribute
Practical considerations in determining a transition
strategy (cont.)
Optimize a current method
– Need to preserve scale and performance characteristics of the
measurement system
– Formal bridging is required
– Bridging acceptance criteria should address program risks
• Risk of concluding the assays are “similar” when they are actually different – may result in bad program decisions
• Risk of concluding the assays are “dissimilar” when they are actually the same – may result in not being able to replace the current assay with the new one
– Statistically based design and analysis used to manage study risks
• Risk of concluding “similar” if different
• Risk of concluding “dissimilar” if the same
Practical considerations in determining a transition
strategy (cont.)
A bridging study is required …
what next?
Common Problem with Data Analysis of Bridging
Study
Goal of bridging study: to demonstrate performance
equivalence (an equivalence hypothesis)
Common practice #1: compare the point estimate of
mean performance with the acceptance criterion
Problem: uncertainty in the point estimate is ignored
– The true value may be outside the acceptable range → acceptance of a poor assay → high rate of OOS results
Common Problem with Data Analysis of Bridging Study (cont.)
Common practice #2: assessment of equivalence is
based on classical hypothesis testing and the associated
p value:
Null hypothesis: The performance of two assays is
equivalent
Research hypothesis: The performance is not equivalent
Decision Rule: Conclude equivalence if the null hypothesis is not rejected at the 0.05 significance level
This method is often referred to as difference testing
Comparison of Confidence Interval vs. p-Value
[Figure: 90% CIs for the mean difference and 95% CIs for the slope difference (test − reference) for assay pairs A–D, plotted against the equivalence interval; −d* to d* defines the equivalence interval.]
Are the assays equivalent?
Assay pair    p-value test    Equivalence test
Pair 1        No              No
Pair 2        Yes             Yes
Pair 3        No              Yes
Pair 4        Yes             No
Root Cause of Difference Test Issue
The wrong hypothesis is being tested – one that aims to demonstrate a difference
Failure to show a significant difference ≠ equivalence
– could be due to lack of power: large variability and/or small sample size (see the sketch below)
A significant difference ≠ outside the acceptance range
– the assay may be so precise that even a small, practically unimportant difference is statistically significant
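As an illustration of the lack-of-power pitfall, here is a minimal sketch with hypothetical % bias values (not taken from the presentation): a classical difference test fails to reject the null even though the observed bias is large.

```python
# Illustrative sketch (hypothetical numbers, not from the presentation):
# a difference test can fail to reject the null hypothesis even when the
# observed bias is large, simply because the study is small and variable.
# "Not statistically different" does not mean "equivalent".
import numpy as np
from scipy import stats

bias = np.array([8.0, -3.0, 12.0])   # hypothetical % bias from three bridging runs
t_stat, p_value = stats.ttest_1samp(bias, popmean=0.0)

print(f"mean bias = {bias.mean():.1f}%, p = {p_value:.2f}")
# p is well above 0.05, yet the mean bias of ~5.7% could still be unacceptable
```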
Take-Aways
If you want to demonstrate similarity, equivalence needs to
be the research hypothesis
Watch out for the common error – “not statistically different,
so equal”
Failure to reject the null hypothesis can mean insufficient evidence (e.g., the study was too small) or that the null is actually true
Confidence Intervals
How to assess equivalence: 90% confidence interval (i.e., two one-sided t-tests)
CI = \bar{X} \pm t_{df}\, s / \sqrt{n}
[Figure: example confidence intervals a, b, and c plotted against the lower and upper acceptance limits (LAL, UAL) around 0.]
The width of the CI gets smaller as the variability, s, decreases or the sample size, n, increases.
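A minimal sketch of this 90% CI (two one-sided t-tests) assessment, assuming hypothetical per-run % bias values and an illustrative equivalence bound of ±5% (both are assumptions, not values from the presentation):

```python
# Minimal sketch: equivalence via a 90% confidence interval for the mean
# difference (two one-sided t-tests). Data and the bound `delta` are
# illustrative assumptions.
import numpy as np
from scipy import stats

diff = np.array([1.2, -0.8, 2.1, 0.5, -1.4, 0.9, 1.7, -0.3])  # % bias per run (hypothetical)
delta = 5.0                                                    # acceptable deviation, % bias

n = diff.size
mean, s = diff.mean(), diff.std(ddof=1)
t_crit = stats.t.ppf(0.95, df=n - 1)        # 95th percentile -> 90% two-sided CI
half_width = t_crit * s / np.sqrt(n)
ci_low, ci_high = mean - half_width, mean + half_width

# Conclude equivalence only if the whole CI lies inside (-delta, +delta)
equivalent = ci_low > -delta and ci_high < delta
print(f"90% CI: ({ci_low:.2f}, {ci_high:.2f}) -> equivalent: {equivalent}")
```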
Confidence Intervals
In addition to the mean, confidence intervals can be constructed for other performance characteristics, such as precision estimates
Sample size (number of runs) should be considered in the
design of a bridging study to ensure a good chance of
success.
Sample size (number of runs):
n \ge \left(t_{\alpha, df} + t_{\beta, df}\right)^2 \hat{\sigma}_{IP}^2 / \Delta^2
– α, β: probabilities of erroneously rejecting a true null hypothesis or accepting a false null hypothesis
– σ̂_IP: estimate of intermediate precision
– Δ: acceptable deviation from the acceptance criterion
Confidence Intervals
Example: mean difference (% bias)
Required number of runs to achieve an 80% chance of success, assuming the true mean % bias = 0

Acceptable        Intermediate Precision
Deviation      σ = 10 %   σ = 20 %   σ = 30 %
Δ = 20 %           4          8         16
Δ = 10 %           8         27         58
Δ = 5 %           27        101        223
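A sketch of how such run counts can be computed from the formula above, assuming α = 0.05, β = 0.20, and a true mean % bias of 0 (my reconstruction, not the authors' code); it approximately reproduces the tabulated values.

```python
# Sketch (a reconstruction, not the authors' code): smallest n satisfying
# n >= (t_{alpha,df} + t_{beta,df})^2 * sigma_IP^2 / Delta^2 with df = n - 1.
from scipy import stats

def runs_required(sigma_ip, delta, alpha=0.05, beta=0.20, n_max=1000):
    """Number of runs for ~80% chance of passing, assuming true bias = 0."""
    for n in range(2, n_max + 1):
        df = n - 1
        t_sum = stats.t.ppf(1 - alpha, df) + stats.t.ppf(1 - beta, df)
        if n >= (t_sum * sigma_ip / delta) ** 2:
            return n
    raise ValueError("no n <= n_max satisfies the criterion")

for delta in (20, 10, 5):
    print(f"Delta = {delta}%:", [runs_required(s, delta) for s in (10, 20, 30)])
# Approximately matches the table, e.g. Delta = 20% -> 4, 8, 16 runs
```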
Summary
Important to look at the stage-criticality of the analytical
method change
Not all analytical method changes are necessarily an “apples-to-apples” comparison
Partner with a biostatistician
Special Thanks To …
Tim Schofield
Mark Schenerman