

On the Characterization of Spread-Out

Bragg Peaks and the Definition of

‘Depth’ and ‘Modulation’

Bernard Gottschalk

Harvard High Energy Physics Laboratory, 42 Oxford St., Cambridge, MA 01238, USA

April 28, 2003

Abstract

We describe a model-independent analysis of measured spread-out Bragg peak (SOBP) data. The three segments (proximal rise, flat top and distal drop) are fit with separate polynomials. In addition to the three sets of polynomial coefficients, the breakpoints between segments, which define the range of each polynomial, are parameters of the fit. The intersections of the best-fit polynomials provide a robust definition of the corners, even when (as with beam gating) the proximal corner is quite rounded. Based on the corners we define d100, the depth of penetration at full dose, and m100, the extent in depth or ‘modulation’ at full dose. The procedure also defines the 100% dose level and other quantities useful in quality assurance (QA). These include the entrance dose, the slope of the SOBP, the range-equivalent depth d80, and the steepness of the distal drop.

The definitions of depth and modulation most widely used at present, using a horizontal line at 90% of the mid-SOBP dose, are d′90 and m′90 in our notation. (The prime denotes a horizontal fiducial line rather than one which follows the slope of the SOBP.) We argue that d100 and m100 are more convenient definitions mathematically, for QA and for the equipment designer, and no less convenient for the treatment planner. To illustrate these points we discuss a retrospective analysis of a large data set from the Northeast Proton Therapy Center (NPTC).


Contents

1 Introduction
2 Depth
  2.1 Fluence Measurements
  2.2 Dose Measurements
  2.3 Definition of Depth
3 Modulation
4 The Polynomial Method
  4.1 Data Conditioning
    4.1.1 CRS Data (Y only)
    4.1.2 HCL and SCX Data (XY)
    4.1.3 Normalization
    4.1.4 Dose Offset
  4.2 Finding the Number of Segments
  4.3 Estimating the Corners
  4.4 Fitting the Polynomials
    4.4.1 General
    4.4.2 Transforming the Independent Variable
    4.4.3 Choice of Polynomial Order
  4.5 Refining the Corners
  4.6 Finding Polynomial Intersections
  4.7 Dealing With Full Modulation
  4.8 Other SOBP Parameters
5 Analysis of NPTC Data
  5.1 Data and Program Organization
  5.2 Time Series
  5.3 Frequency Distributions
  5.4 Modulation Versus Beam Cutoff Time
6 Summary
Bibliography
List of Figures


1 Introduction

To ‘characterize’ a depth-dose scan (for example, Figure 1) is to analyze it and compute certain numbers, notably the treatment depth d and modulation m. These numbers are used by the treatment planner, for quality assurance (QA), to confirm that proton therapy equipment meets specifications, and so on.

In a common analysis procedure for a spread-out Bragg peak (SOBP) one displays the depth-dose scan and marks the region of interest (BC in Figure 1) by locating a cursor. After this interactive step the computer finds the mean dose in BC, or perhaps the dose at the midpoint of BC, to define the 100% dose level. It then draws a horizontal line at 90% and finds its intersections with the data. That defines what we will call d′90 and m′90. The prime denotes the fact that the fiducial line is horizontal rather than following the slope (if any) of the SOBP. Throughout this report d always refers to the projection of the appropriate point on the x axis and m to the projected distance between two such points.

An alternative procedure is as follows. It can be made fully automatic (as could the previous one). Let the SOBP be fit with separate polynomials in the three regions AB (the proximal rise), BC (the flat region) and CD (the distal falloff). In addition to the three sets of polynomial coefficients, let breakpoints B and C be parameters of the fit, adjusted to minimize the overall χ². Assuming a decent fit is obtained, as in Figure 1, we are now in a position to define d100 and m100 using the polynomials, specifically their intersections, projected as always on the x axis. This procedure works even if, as shown, the SOBP has a non-zero slope. It is not based on an a priori choice of a 100% level. Rather, it can be used to define the 100% level as a byproduct.

We propose that d100 and m100 be adopted as the standard definitions of depth and modulation in charged particle radiation therapy. We will argue that they are more convenient than d′90 and m′90 mathematically, for QA, and for the equipment designer, and at least as convenient for treatment planning. Section 2, a rather extended discussion of the distinction between depth and range, makes the case for d100, and Section 3, for m100. Section 4 describes the polynomial fitting method in detail, Section 5 discusses the retrospective analysis of a large data set from NPTC and Section 6 is a summary.

2 Depth

We have intentionally used the term depth rather than range. The ‘range’ of a beam was defined in the early days of particle physics. It is the quantity given in range-energy tables such as [1] and refers to a measurement of fluence versus depth, not dose versus depth.1 We will reserve ‘depth’ for dose measurements. It is the quantity of clinical interest. Nevertheless it is useful to understand the distinction between depth and range because we frequently use both dose- and fluence-measuring devices in proton radiation therapy. Whether ‘depth’ means a beam parameter or the position in a water tank will be obvious from context, as it is in this paragraph.

2.1 Fluence Measurements

The relation between depth and range is illustrated by Figure 2. The initial range r0, or more fully the mean electromagnetic range of a beam of charged particles, is defined as that depth at which the fluence of those particles that suffer only electromagnetic (EM) interactions has fallen to half its initial value. Protons undergo nuclear in addition to EM interactions and must be treated somewhat carefully. The dots in the top frame of Figure 2 show data from an experiment in which a proton beam of small cross section was stopped in a Faraday cup (FC), which effectively ‘counts’ the protons by measuring total charge. Sheets of polyethylene (CH2) were placed in front of the FC. The dots show the charge measured in the FC for a given number of beam monitor units, as a function of the total thickness of CH2. A slow falloff, not quite linear, is followed by a sharp drop. The line is a fit with a quadratic polynomial times an error function.

1 For a given number of protons stopping in, say, a water tank, fluence is protons/cm2 at each depth, whereas absorbed dose is Gray ≡ Joule/kg at each depth.

We define primary protons as ones that suffer only EM interactions, and secondary protons and other particles as ones emerging from nuclear events. In the experiment the FC accepts relatively few secondaries because of their low energies and large angles with respect to the beam. Therefore the plot can be thought of as the output of a number of ‘primary fluence meters’ at various depths, that is, a scan of primary fluence versus depth. The slow fall is due to the attrition of primary fluence as protons suffer nuclear interactions. The sharp drop comes when the surviving primary protons stop. It has a finite width determined by the statistical nature of stopping (range straggling) combined with the energy spread of the incident beam. The ‘half’ in the definition of r0 is taken with respect to the beginning of the sharp drop, not the entrance of the device. At 160 MeV the primary protons surviving to stop by EM interactions constitute ≈ 80% of the incident beam, nearly independent of the stopping material. The experiment just described is sometimes called an integral fluence measurement.

One can also perform a differential fluence measurement by constructing a multi-layer Faraday cup (MLFC), a succession of sheets of material in each of which we measure the charge deposited. The results are shown schematically in the second frame. Now r0 corresponds to the maximum of the peak or, because the peak is very nearly symmetrical, its mean. The nuclear interaction region is barely visible as a buildup before the main peak [2]. Charged nuclear secondaries always stop before the EM peak because they have lower energies and much larger angles with the beam than primary protons.

2.2 Dose Measurements

Now suppose we measure dose rather than fluence, by depth-scanning a small dosimeter in a water tank illuminated by a broad monoenergetic beam. The result is called a Bragg peak.2 The higher the proton energy, the deeper the peak will be. Some point on the Bragg peak should correspond to r0, but which point? Since dose equals fluence times mass stopping power, the Bragg peak can be derived from the fluence peak by convolving it with −(1/ρ)(dE/dx), which is a function of energy and therefore of depth. One must also assume some value for the energy spread of the beam. In the early 1960’s, A.M. Koehler performed the calculation graphically and found that

d80 ≈ r0 (1)

where d80 is the distal 80% point of the Bragg peak.3 This result, since confirmed [3, 4] several times, was independent of the energy spread of the beam. Since it comes from a numerical calculation the ‘80’ is not exact, but it is close enough. Beams of three different energy spreads but the same r0 are shown in Figure 2. As the energy spread increases the Bragg peak widens and shifts upstream but d80 remains the same.

2 The peak in differential fluence versus depth is not a Bragg peak. It would be there even if the rate of energy loss did not increase as the protons slowed down (the Bragg effect).

3 Strictly speaking r0 = d80 only for a pristine Bragg peak. In a spread-out Bragg peak there will be a small contribution to d80 from the second modulator step.

2.3 Definition of Depth

To summarize, if we insist on finding the range of a beam by measuring dose rather than fluence, it is d80 that we should identify with r0. d80 is a good quantity for QA because it disentangles the effects of a change in range, that is, beam energy, from the effects of a change in the beam energy spread. To monitor the latter, Figure 2 suggests that we might look at the slope of the distal edge of the Bragg peak, or the width of the fluence peak if we possess a high-resolution MLFC.

However, d80 is perhaps too far down the dose curve to qualify as a good clinical measure of the depth. At present d′90 is widely used, but the choice of 90% is somewhat arbitrary. If we wish to take the target volume to full dose, d100 would be more appropriate. However, that is not always the clinical goal.

Practically speaking, the differences between d100, d′90 and d80 are a few millimeters, small but just significant clinically [5].

3 Modulation

m100 is a more convenient definition of modulation than m′90 in three ways: mathematically, for QA, and for the equipment designer. For the treatment planner, m100 is at least as convenient as m′90.

Mathematically, with m′90 there is a range of possible SOBP’s (from 90% entrance dose to full entrance dose) for which the modulation is undefined. These SOBP’s are sometimes used, and they are not all clinically equivalent to each other. Therefore one needs to distinguish between them with additional language, e.g. ‘treat to skin’ or ‘95% entrance dose’. m100 is defined for any realizable and clinically useful SOBP.4

As far as QA is concerned, assume we have a system where modulation is adjusted by beam gating, as at NPTC. At least three classes of problems can occur. The beam cutoff time could be wrong (the potentiometer that reads modulator position is badly registered or calibrated). Or the underlying pristine Bragg peak could be wrong (beam grazing, absorbers damaged). Or the relative weights of the Bragg peaks could be wrong (bad beam current modulation file, faulty cyclotron current monitor). m′90 is sensitive both to the cutoff time and the shape of the proximal falloff, and therefore responds to all three effects. A change in m′90 signals a problem, but does not tell us directly what to look for. It is even conceivable that two problems could cancel. m100 only monitors beam cutoff time. Additional SOBP parameters from the analysis can be used to spot other potential problems. Section 5 gives examples.

From the point of view of the equipment designer m100 is preferable for essentially the same reason. m100 only depends on beam cutoff time, whereas m′90 depends both on that and on the shape of the proximal SOBP. In fact m′90 becomes singular as the entrance dose approaches 90% from below, leading to a very strong dependence of m′90 on beam cutoff angle in this region. Section 5.4 is a detailed discussion of this point, with experimental data.

4 One might object that if the proximal rise (AB in Figure 1) is extremely short, m100 becomes undefined. Indeed, the computer has to deal with this situation (see Section 4.7). However, it always assigns a value to m100 and the ‘deadband’ or region of ambiguity is, clinically speaking, negligible.


The argument in favor of m100 over m′90 is weakest for treatment planning. Treatment plans are usually a compromise between various clinical goals. When the overriding goal is to take a well delineated target volume to full dose the advantage of d100 and m100 is obvious. However, it is frequently necessary to allow some dose falloff in the target so as to spare adjacent structures. The planner then needs to consider dose gradients in addition to d and m, however these are defined.

4 The Polynomial Method

The proposed method really boils down to defining depth and modulation with respect to the (x projections of) corners of the SOBP instead of intersections with some horizontal line. There is really nothing obscure about the corners.5 The task before us is to find an objective mathematical procedure to locate them, which we now describe step by step.

4.1 Data Conditioning

In developing and testing the program we used data from several sources: a home-made tank at the Harvard Cyclotron Laboratory (HCL), a Scanditronix (SCX) tank and (principally) a CRS water tank [6]. They all produced ASCII files but the type of data and the formats were different. We first describe how the two basic forms of data were handled and then discuss two issues common to both.

Data conditioning and subsequent steps are governed by parameters defined by an initialization (INI) file. The present discussion, however, is mainly qualitative. Details on each parameter are given in the user guide.

4.1.1 CRS Data (Y only)

The CRS tank provided dose (Y) data only, the depth (X) being implied by a constant step size (0.25 mm) and a wall correction. The CRS electronics averages dosimeter current over a short time interval so the data may be noisy depending on conditions. The data were read and an X value was assigned to each Y value. The dose offset was deduced from the tail of the SOBP (see Section 4.1.4) and subtracted from each Y. The data were then smoothed by convolution with a Gaussian. Since that corrupts the first few points these were dropped. X and Y for the remaining points were averaged by fours to yield 1 mm steps. Finally the Y values were normalized to a maximum value of 1 so that subsequent steps could deal with fairly standard data.
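The conditioning chain just described can be sketched as follows. This is a minimal illustration, not the program’s own routine: the function name, the Gaussian width and the number of dropped edge points are assumptions, while the 0.25 mm step, the averaging by fours and the unit normalization follow the text.

```python
import numpy as np

def condition_crs(y, step=0.25, sigma_mm=0.5, lump=4):
    """Illustrative CRS conditioning chain: smooth by convolution with a
    Gaussian, drop the edge points the convolution corrupts, average in
    groups of `lump` to get 1 mm steps from 0.25 mm data, and normalize
    to a maximum of 1.  (sigma_mm and the edge-drop count are guesses.)"""
    y = np.asarray(y, dtype=float)
    half = int(np.ceil(3 * sigma_mm / step))       # kernel half-width in samples
    t = np.arange(-half, half + 1) * step
    kernel = np.exp(-0.5 * (t / sigma_mm) ** 2)
    kernel /= kernel.sum()                         # unit-area Gaussian
    ys = np.convolve(y, kernel, mode="same")
    ys = ys[half:len(ys) - half]                   # drop corrupted edge points
    n = (len(ys) // lump) * lump
    ylump = ys[:n].reshape(-1, lump).mean(axis=1)  # average by fours
    return ylump / ylump.max()                     # normalize to 1
```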

4.1.2 HCL and SCX Data (XY)

The HCL and SCX tanks both use a step-and-measure technique with current integrators, so the data points are fewer and less noisy. Both devices also made use of variable step size to save time, so X and Y are given for each point. After reading the data we sorted them by increasing X, since the HCL tank allowed one to fill in portions of the scan out of sequence.6 The dose offset was corrected and the data were smoothed as for CRS data, though smoothing was usually turned off. Next, the data could be lumped as for CRS data (usually turned off) or, if data were too sparse in some region, points could be added by linear interpolation. Finally the data were normalized to a maximum value of 1.

5 We gave 18 NPTC staff members an SOBP graph with a fairly rounded proximal corner and asked them to mark the corners as defined by the intersections of smooth lines tangent to each segment. The averaged results for d100 and m100 differed from the program values by about 2 mm.

6 A least-squares fit is indifferent to the order of points, but other parts of the procedure are not.

4.1.3 Normalization

Commercial water tanks appear to have separate ‘scan’ and ‘dosimetry’ modes. However, if the reference chamber is the beam monitor, if the field chamber is calibrated (known pCoul/Gray under standard conditions), and if one keeps track of the absolute charge measurement for both chambers, a scan can also serve to calibrate the beam monitor (that is, to measure the machine’s output factor), avoiding the need for a separate measurement. We have therefore taken care to preserve the normalization constant through the analysis so that the fitted functions can be converted back to original data units.

4.1.4 Dose Offset

The SOBP scan frequently rides on a non-zero vertical offset, either deliberate or caused by dosimeter drift current. If there are enough measured points in the tail of the scan beyond the distal falloff, the offset can be deduced from these. The procedure is governed by three INI parameters. First we look for at least nTail contiguous points below some level tMaxLevel (referred to 1), working backwards through the Y array. We then find the rms spread (deviation from the mean) of all such sets of nTail contiguous points, and select the set with the lowest rms. If that rms is lower than tMaxRms the mean of that set is taken as the offset and subtracted from all Y’s. The procedure can be turned off (among other ways) by setting nTail to 0 or 1, since at least two points are needed.
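The tail search might be sketched as follows. The function name and default values are illustrative; the parameters n_tail, t_max_level and t_max_rms play the roles of the INI parameters nTail, tMaxLevel and tMaxRms described above.

```python
import numpy as np

def tail_offset(y, n_tail=8, t_max_level=0.05, t_max_rms=0.01):
    """Estimate the dose offset from the SOBP tail (illustrative sketch).

    Working backwards, find every window of n_tail contiguous points
    below t_max_level (dose referred to 1); among them pick the window
    with the lowest rms deviation from its mean.  If that rms is below
    t_max_rms, return the window mean as the offset, else None."""
    if n_tail < 2:
        return None                    # procedure disabled: need >= 2 points
    y = np.asarray(y, dtype=float)
    best_rms, best_mean = None, None
    for i in range(len(y) - n_tail, -1, -1):   # backwards through the array
        w = y[i:i + n_tail]
        if np.all(w < t_max_level):
            rms = w.std()              # deviation from the window mean
            if best_rms is None or rms < best_rms:
                best_rms, best_mean = rms, w.mean()
    if best_rms is not None and best_rms < t_max_rms:
        return best_mean
    return None
```

The returned offset would then be subtracted from every Y before normalization.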

4.2 Finding the Number of Segments

An idiosyncrasy of the NPTC QA data set is the presence of partial scans (Figure 3 panels 9, 10, 11) done to save time when only the distal depth is wanted. We fit these with four lines 2/2/2/2 instead of a curve and two lines (say 6/2/2).7 The ‘modulation’ computed by the program is meaningless of course. We need a corner-finding algorithm to distinguish partial scans from full SOBP’s.

The curvature or second derivative (y′′) has an extremum at each corner and therefore the third derivative has a zero-crossing. The numerical y′′ for any three data points (they may be unequally spaced) follows directly from the definition of y′′ and is

y'' \approx \frac{(y_3 - y_2)/(x_3 - x_2) \;-\; (y_2 - y_1)/(x_2 - x_1)}{0.5\,(x_3 - x_1)}    (2)

It is to be associated with point 2. It is undefined for the first and last points and may be set to 0. y′′ computed from measured data using (2) is noisy even when the data are smoothed. We smooth y′′ directly by convolution with a Gaussian. We then normalize y′′ to unit standard deviation. This gives y′′ less variability over a large set of SOBP’s, allowing the same cuts to be used throughout. Normalization is done after smoothing so the rms is not dominated by rogue points. Figure 5 is a plot of smoothed, normalized y′′ for our standard mix of CRS scans.

7 6/2/2 denotes a polynomial of 6 terms and two polynomials of 2 terms.
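Equation (2) translates directly into code. The sketch below (an illustration, not the program’s own routine) evaluates y′′ at every interior point of a possibly unequally spaced scan, with the endpoints set to 0 as described:

```python
import numpy as np

def second_derivative(x, y):
    """Three-point numerical second derivative of Eq. (2) for
    (possibly unequally spaced) data, associated with the middle
    point; the first and last points are undefined and set to 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    d2 = np.zeros_like(y)
    for i in range(1, len(y) - 1):
        left = (y[i] - y[i - 1]) / (x[i] - x[i - 1])    # backward slope
        right = (y[i + 1] - y[i]) / (x[i + 1] - x[i])   # forward slope
        d2[i] = (right - left) / (0.5 * (x[i + 1] - x[i - 1]))
    return d2
```

For quadratic data the formula is exact even on an unequal grid, which is a convenient sanity check.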

7

Page 8: On the Characterization of Spread-Out Bragg Peaks and the ...comes from a numerical calculation the ‘80’ is not exact, but it is close enough. Beams of three different energy

The corner search begins by taking the second derivative, smoothing, taking another derivative, smoothing again and normalizing to unit rms. We then operate on the result (call it td(i)) with

td(i) = SIGN(1.,td(i)*td(i+1))*ABS(td(i+1)-td(i))

that is, we look for zero crossings (negative product of adjacent points), weighted by the magnitude of the change in value of td(i) to suppress ‘minor’ zero crossings. Finally, we again normalize to unit rms but do not smooth. The resulting function is shown in Figure 6. A cut at (say) −1 discriminates nicely between partial scans (four or more hits) and regular SOBP’s. Panel 9 has an extra zero-crossing because the two corners are so close together.
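In Python the Fortran one-liner might be sketched as follows (an illustrative translation; recall that Fortran’s SIGN(1., b) is +1 or −1 according to the sign of b, so a sign change between adjacent points makes the output negative):

```python
import numpy as np

def weighted_zero_crossings(td):
    """Sign changes of the (twice-smoothed, normalized) third
    derivative, weighted by the jump |td(i+1) - td(i)| so that minor
    crossings are suppressed.  out[i] < 0 marks a crossing between
    points i and i+1."""
    td = np.asarray(td, dtype=float)
    out = np.empty(len(td) - 1)
    for i in range(len(td) - 1):
        sign = 1.0 if td[i] * td[i + 1] >= 0 else -1.0
        out[i] = sign * abs(td[i + 1] - td[i])
    return out
```

A cut on the (re-normalized) output at about −1 then counts the candidate corners.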

4.3 Estimating the Corners

Once we have classified the scan as a regular SOBP (fewer than four corners) the next task is to find initial values for the polynomial range endpoints. We work backwards (D, C, B of Figure 1) using a different technique for each. A related question is whether the polynomial ranges should be separate (e.g. points 1-19, 20-42 . . .) or overlapping (1-19, 19-42 . . .). We use separate ranges though the overall procedure works either way. There is no very compelling reason but there are two minor advantages. First, the overall fit is single valued. Between 19 and 20 in the example we define it by a straight line connecting the fits at 19 and 20. Second, each data point is used just once in the fit.

The endpoint at D remains fixed through the fit and is found as follows. Using the data normalized to unit maximum, we define D as the first point, working backwards through the array, that exceeds 0.2 (for example). The objective in fitting the distal edge is to define a tangent line that best expresses the steep part, not so much to obtain the best possible fit. That is why we use a low order polynomial (N = 2) and deliberately shun the lower corner.

The first approximation to C is found with the second derivative (Figure 5). Working backwards through the array we discard the positive peak in y′′ and locate the point where the negative spike crosses the test value (−1.5, INI). This gives a number slightly biased towards the inside of the SOBP.

Even though the proximal corner B may be obvious to the eye, y′′ is usually too small to be useful (Figure 5). Instead, we fit a straight line to AC (Figure 7) and compute the deviation of the data from this line (Figure 8). The deviation peaks at B. We first normalize the deviation to unit rms, then search forwards, discarding points until the deviation becomes negative and only then looking for the first maximum greater than (say) 1. This procedure avoids the occasional maximum near A.
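The corner-B estimate can be sketched as follows. This is an illustration under stated assumptions: the function and index names (i_a, i_c) are ours, the line is a least-squares fit over AC, and the threshold of 1 (in rms units) matches the “greater than (say) 1” rule above.

```python
import numpy as np

def estimate_proximal_corner(x, y, i_a, i_c, threshold=1.0):
    """Illustrative corner-B estimate: fit a straight line to the data
    between indices i_a and i_c, normalize the deviation from the line
    to unit rms, then search forwards, skipping points until the
    deviation has gone negative once, and return the index of the
    first local maximum above `threshold`."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    seg = slice(i_a, i_c + 1)
    a1, a0 = np.polyfit(x[seg], y[seg], 1)       # straight line over AC
    dev = y[seg] - (a0 + a1 * x[seg])
    dev /= dev.std()                             # normalize to unit rms
    seen_negative = False
    for k in range(1, len(dev) - 1):
        if dev[k] < 0:
            seen_negative = True                 # discard the region near A
        elif (seen_negative and dev[k] > threshold
              and dev[k] >= dev[k - 1] and dev[k] >= dev[k + 1]):
            return i_a + k                       # first qualifying maximum
    return None
```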

4.4 Fitting the Polynomials

4.4.1 General

Fitting an N-term polynomial y(x) to a set of M data points xi, yi is a standard procedure in data analysis [7, 8]. For completeness we’ll outline the procedure using N = 3 (a quadratic polynomial) by way of example.8 We assume the measurements have equal weights, as will be true of a depth-dose scan. The polynomial is

y(x) \equiv a_0 + a_1 x + a_2 x^2    (3)

8 It is convenient to focus on the number of terms rather than the ‘degree’ of the polynomial. The degree is (N − 1).


where the a_j are the polynomial coefficients. In a ‘least-squares’ fit we seek to adjust the a_j so as to minimize

\chi^2 \equiv \sum_{i=1}^{M} \left( y_i - y(x_i) \right)^2    (4)

This leads to a set of N simultaneous linear equations for the a_j, which can be written

\alpha \, a = \beta    (5)

\alpha is the known matrix

\alpha = \begin{pmatrix} M & \sum x_i & \sum x_i^2 \\ \sum x_i & \sum x_i^2 & \sum x_i^3 \\ \sum x_i^2 & \sum x_i^3 & \sum x_i^4 \end{pmatrix}    (6)

and \beta is the known column vector

\beta = \begin{pmatrix} \sum y_i \\ \sum y_i x_i \\ \sum y_i x_i^2 \end{pmatrix}    (7)

The sums run from 1 to M. α is symmetric and has only 2N − 1 distinct elements. The rest can be copied. Numerical Recipes [7] describes various ways of solving (5) or the equivalent M × N ‘Vandermonde’ matrix. In our case, the errors in the a’s are not needed. Therefore we don’t need to invert α and we can save some time by using LU decomposition [7].
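As a concrete illustration, the normal equations (5)–(7) for a general N-term fit might be set up and solved as follows. This is a sketch, not the program’s own code; np.linalg.solve uses an LU factorization internally, in the spirit of the choice above (coefficient errors not needed, so α is never inverted).

```python
import numpy as np

def polyfit_normal_equations(x, y, n_terms):
    """Build the normal equations of Eqs. (5)-(7) for an n_terms
    polynomial and solve alpha a = beta for a_0 .. a_{n_terms-1}."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    # alpha[j][k] = sum_i x_i^(j+k): only 2N-1 distinct power sums,
    # the rest of the symmetric matrix is copied.
    power_sums = [np.sum(x ** p) for p in range(2 * n_terms - 1)]
    alpha = np.array([[power_sums[j + k] for k in range(n_terms)]
                      for j in range(n_terms)])
    beta = np.array([np.sum(y * x ** j) for j in range(n_terms)])
    return np.linalg.solve(alpha, beta)   # LU-based solve, no inverse
```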

4.4.2 Transforming the Independent Variable

High-order polynomial fits are tricky because of their sensitivity to roundoff error. That is due to the large range of values of the elements of α in typical problems.9 Suppose x runs from 0 to 15 cm as it well might in an SOBP problem. For N = 6 the lower right element is of order 15^5 ∼ 10^6 whereas the upper left element is M (say 20). Any method of solving (5) ultimately involves subtracting a multiple of one row of α from another row, which involves lining up the decimal points. Therefore precision suffers if the matrix elements differ by many orders of magnitude. (Single precision on a PC is about 7 figures.) One will still get a solution, that is, a set of a’s, but it may not be the best fit or even a reasonable fit. Trouble is indicated if χ² does not decrease monotonically with N, or if it depends on the order of the data points.

There is a simple, but not too well known, solution. Simply transform the x_i to new variables u_i which lie in a smaller range u_1 ≤ u_i ≤ u_M. Use the linear transformation

u_i = S \, (x_i - X_0)    (8)

with constants

S = (u_M - u_1)/(x_M - x_1)    (9)

X_0 = (x_1 u_M - x_M u_1)/(u_M - u_1)    (10)

It goes without saying that, when we later evaluate the fit function, we must use the transformed variable with the transformed polynomial. In other words we need to save S and X_0, and possibly the u_i, as well as the coefficients of the transformed polynomial. An obvious choice for the transformed range is −1 ≤ u_i ≤ 1. Besides improving precision this choice yields coefficients of the transformed polynomial which are all of the order of the y_i instead of varying by orders of magnitude like the a’s of (3). In fact each coefficient simply gives the relative contribution of its term at the end of the data range because |(±1)^j| = 1.

9 A loose way of saying the matrix is ‘ill conditioned’. See [7] for a precise definition of ‘condition’.

The MATLAB Web site [9] suggests an even better transformation than the (-1,1) scheme, namely letting the transformed variables u_i have zero mean and unit standard deviation. Mathematically speaking the MATLAB scheme is attractive because it uses all the x_i, not just the extreme values, to define the transformation. In (8) we must now let X_0 be the mean of the x_i and S be the reciprocal of their rms deviation from X_0. In a test case described below the matrix α turns out to be somewhat better conditioned than with the (-1,1) scheme. u_1 and u_M in the MATLAB scheme will still have opposite signs and, for any reasonable set of x_i’s, will still be of order 1.
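Both transformations are one-liners. The sketch below (function names are illustrative) implements the (-1,1) scheme of Eqs. (8)–(10), for which S = 2/(x_M − x_1) and X_0 = (x_1 + x_M)/2, alongside the MATLAB-style standardization:

```python
import numpy as np

def to_unit_interval(x):
    """(-1,1) transformation of Eq. (8): u_i = S (x_i - X0), with S and
    X0 from Eqs. (9)-(10) so that u runs exactly from -1 to +1."""
    x = np.asarray(x, dtype=float)
    S = 2.0 / (x[-1] - x[0])
    X0 = 0.5 * (x[0] + x[-1])
    return S * (x - X0), S, X0

def to_standardized(x):
    """MATLAB-style alternative: X0 is the mean of the x_i and S the
    reciprocal of their rms deviation, giving zero mean and unit
    standard deviation."""
    x = np.asarray(x, dtype=float)
    X0 = x.mean()
    S = 1.0 / x.std()
    return S * (x - X0), S, X0
```

In either case S and X0 must be saved with the fitted coefficients so the fit can be evaluated later.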

Figure 9 demonstrates the benefit of the transformation, in this instance the (-1,1) version. We fit the proximal side of a measured pristine Bragg peak, a special case of the SOBP problem and a fairly severe test. We plot the rms deviation of the fit

\mathrm{rms} \equiv \sqrt{\chi^2 / M}    (11)

against N. In the test case M = 44. The untransformed fit behaves erratically above N = 6 and crashes with overflow above N = 17. Erratic behavior in the (-1,1) fit only sets in at N = 16. As N increases the residual error approaches a constant value arising from experimental scatter in the measurements.

With singular value decomposition (SVD [7]) plus transformation one can improve the behavior with N even further, out to N = 30.10 We used SVD to confirm our previous statements about the condition number of α since it provides a precise measure of that quantity. However, its use in the SOBP program would be overkill. It is slower, and the transformation technique itself is barely needed for N ∼ 6. We use the slightly inferior (-1,1) scheme because at points in the program it is convenient that the endpoints of the transformed variable be exactly (-1,1) and the midpoint exactly 0.

4.4.3 Choice of Polynomial Order

In choosing N for each of the three segments (say 6/2/2) the object is not to get the best possible fit. That will always drive us to larger N’s. Instead, we want the smallest N’s that give reasonably good and consistent fits over a large family of scans with different depths and modulations. It is helpful to have a program that fits and summarizes a test assortment of SOBP’s as shown in Figures 3 and 4. The reason for choosing N as low as reasonable is that polynomials are badly behaved just outside the fit region, the more so the higher the order. But we may need to use them just outside the fit region to find the intersections.

4.5 Refining the Corners

Though the corners B and C are, strictly speaking, just nonlinear fit parameters, none of the standard methods [7, 8] is helpful because, unlike the polynomial coefficients, the indices defining B and C (kb(1) and kb(2) in the program) are integers, not continuous variables. Instead, we use the simplest possible method: varying C (kb(2)) each way over a small range, computing χ2 for each value, and picking the lowest. Then we repeat with B (kb(1)). The order is important because C is intrinsically much better determined and much less dependent on B than vice versa. The ranges are initialization parameters and must not be too large, especially for B, because the high-order polynomial representing AB has a way of wanting to ‘help out’ in the flat region. In other words, there is always a local minimum near the original guess at B but for some data patterns there may be another deeper minimum some distance downstream.

Footnote 10: Interestingly, with no transformation SVD does very much worse than LU decomposition.
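The refinement reduces to a pair of one-dimensional integer scans. A minimal sketch (hypothetical, not the paper’s Fortran; fit_chi2 stands in for the program’s refit of all three polynomials at the given breakpoints):

```python
# Hypothetical sketch of the breakpoint refinement. fit_chi2(kb1, kb2)
# is assumed to refit all three segments with those integer breakpoints
# and return chi-squared.
def refine_breakpoints(kb1, kb2, fit_chi2, half_width=3):
    """Scan kb2 then kb1 over a small window, keeping the chi2 minimum.

    The order matters: the distal corner C (kb2) is better determined
    and less dependent on B than vice versa, so it is refined first.
    The window (half_width) must stay small, lest the scan wander into
    a deeper but spurious minimum downstream of B.
    """
    best = fit_chi2(kb1, kb2)
    for k in range(kb2 - half_width, kb2 + half_width + 1):
        c2 = fit_chi2(kb1, k)
        if c2 < best:
            best, kb2 = c2, k
    for k in range(kb1 - half_width, kb1 + half_width + 1):
        c2 = fit_chi2(k, kb2)
        if c2 < best:
            best, kb1 = c2, k
    return kb1, kb2, best
```

With a well-behaved χ2 surface the scan simply walks each index to the nearby local minimum, which is the intended behavior.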

4.6 Finding Polynomial Intersections

We now have the data indices kb(1) and kb(2) expressing the corners B and C that give the best overall fit. Used directly these would already give us fairly good values for d100 and m100. Why go further? The main reason is that we can get a very much sharper value, particularly for d100, by looking at the intersections of the best polynomials, and this value is divorced from the particular values at which data points exist. The second reason is that we want a procedure which is general enough to compute traditional measures of modulation and depth such as m′90. That involves the intersection of two lines, and simply quoting the nearest data points would needlessly degrade the analysis. By divorcing the results from particular data points we also leave ourselves free to experiment with sparser scans in the future.

To keep the program general we do not exploit the fact that our polynomials are frequently straight lines. Instead, we seek the root of f(x) = 0 where f(x) is the difference between two general polynomials. We use the ‘false position’ method [7], which is reasonably fast and avoids crashes by keeping the unknown root bracketed.

The subroutine must be called with two values of x which bracket a root (or, strictly speaking, an odd number of roots). For the C corner this is trivial because we have two low-order polynomials (usually, two straight lines) which intersect just once in any reasonable range we might pick. At B the problem is more complicated. If corner B is soft, as with beam gating, the polynomials may intersect once, twice, or not at all, because the high-degree AB polynomial is a fit to a different data set than the BC polynomial, and may drop off fairly quickly outside the fit range. A little thought will convince the reader that, if there are two intersections, the outer (more proximal) one is desired. We therefore start at kb(1) and look for a bracket working down. Failing that, we return to kb(1) and work up. Failing that, we find the distance of closest approach and pass that off as an intersection.

Once we have that machinery, finding (say) m90 or m′90 is easy. We find, respectively, either the intersection with a polynomial that is 0.9× the BC polynomial or a horizontal line drawn at 0.9× the dose at the midpoint of the BC polynomial.
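The intersection machinery can be sketched as follows (our Python illustration, not the paper’s Fortran). Coefficient arrays are in increasing powers of the abscissa, and false position keeps the root bracketed throughout:

```python
# Sketch of the polynomial-intersection machinery: false position on
# the difference of two general polynomials (increasing-power coeffs).
import numpy as np

def false_position(f, a, b, tol=1e-10, itmax=100):
    """Find a root of f in [a, b]; f(a) and f(b) must differ in sign."""
    fa, fb = f(a), f(b)
    assert fa * fb <= 0.0, "root not bracketed"
    for _ in range(itmax):
        # intersect the chord through (a, fa), (b, fb) with the axis
        x = (a * fb - b * fa) / (fb - fa)
        fx = f(x)
        if abs(fx) < tol:
            return x
        if fa * fx < 0.0:          # keep the sub-interval with the sign change
            b, fb = x, fx
        else:
            a, fa = x, fx
    return x

def intersect(p, q, a, b):
    """Intersection of polynomials p and q, bracketed by [a, b]."""
    n = max(len(p), len(q))
    c = np.zeros(n)
    c[:len(p)] += p                # difference polynomial p - q
    c[:len(q)] -= q
    diff = np.polynomial.Polynomial(c)
    return false_position(diff, a, b)

# two straight lines crossing at x = 0.5
print(intersect([0.0, 1.0], [0.5, 0.0], 0.0, 1.0))   # prints 0.5
```

For m′90 one would intersect the distal polynomial with the constant polynomial at 0.9× the mid-SOBP dose; the bracket search at corner B would wrap repeated calls to the same routine with shifting endpoints.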

4.7 Dealing With Full Modulation

Panel 8 of Figure 3 shows an important special case. The modulation is essentially 100% and, as an additional complication, the SOBP has a significant slope. We want the program to give m100 = d100 in this case (footnote 11). The analysis of scans of this type can take either of two branches. If the B corner finder gives a value so small that there aren’t enough points to fit, we define the first segment as having zero length by setting kb(1) to 1 and we set a flag that tells us not to look for a polynomial intersection at B. In other words, we fit the SOBP with two segments instead of three. Alternatively, the B corner finder may give a reasonably large value and there may be a local minimum in χ2 as in the present case (Figure 8). Therefore we check every SOBP to see whether the constant terms a0 of the AB and BC polynomials are the same within (say) ±0.01. (Recall that we normalized the data to a maximum of 1.) If so, we again set the first segment to zero length, refit the entire range to get the right overall slope and other parameters, and set m100 = d100.

Footnote 11: Because the tank has a fairly thick wall (≈ 1 cm) we don’t really know what’s going on in the first centimeter.

4.8 Other SOBP Parameters

SOBP parameters other than d and m are useful, and easy to extract from the polynomials. Once we have the best fit, indeed before finding intersections, we renormalize and refit the data one more time so that a0 of the ‘flat top’ will be exactly 1. That defines the 100% level by implication as the dose at the midpoint (u = 0) of BC. That is the same definition as is frequently used in the semi-interactive method of SOBP analysis.

We have already mentioned the range-equivalent depth d80 as being the best measure of beam energy stability. Strictly speaking this is true only for a pristine Bragg peak. However, when the range modulator is designed to keep the distal falloff as steep as possible (maximum allowable step size), the contribution of the second step to d80 is small.

We may want a measure of the steepness of the distal falloff, equivalent to the beam energy spread. We could use the slope of the CD polynomial, but if we have already computed d100 and d80 it suffices to record them to sufficient accuracy so the small difference (≈ 0.2 cm) is well determined.

The slope of the BC region can tell us if all is well with the modulator step weights and the shape of the pristine Bragg peak. There are several equivalent ways of quoting that parameter: as a dimensionless slope, as a relative change in dose with depth (%/cm), or simply as an end-to-end change (%). In Section 5 we use the latter, calling it the rise.

The entrance dose also reflects the weights and the shape of the pristine peak. It is simply the value of the AB polynomial at u = −1.

Finally, the goodness of fit itself, rms, may contain useful information about operating conditions at the time of the scan. The program computes and preserves rms for each segment separately, as well as rms for the full fit.

5 Analysis of NPTC Data

5.1 Data and Program Organization

At NPTC all scans with the CRS tank [6] are performed with the same computer. At present, different teams take scans during morning QA or calibrations. Other teams may take scans at other times for experimental investigations. Three scans are taken in a typical QA session, one with the standard d′90, m′90 = 16, 10 beam, one with all lollipops out and one with all in (footnote 12). The latter may be partial scans to save time. The scans are analyzed semi-interactively by each team (see Section 1) and the results are put into an Excel data base. This system works well, and we should emphasize that all the effects described later (Section 5.2) were already found by the QA teams prior to the present analysis. However, it tends to tie one down to SOBP parameters one has decided to track from the beginning of the operation. One can always add new things to look at, but checking them retroactively, except for a few sample cases, is very time consuming. Since the analysis described in this report is fast and fully automatic we sought to apply it to a large data set, namely all the CRS scanner data taken since treatments began at NPTC.

Footnote 12: The ‘lollipops’ are a binary set of lead and lexan foils located just upstream of the modulator wheels. They are normally used to trim the depth or multiple scattering. It would be more efficient to verify the ‘all in’ or ‘all out’ conditions with the range verifier [5].

We obtained somewhat over a year’s worth of scans on a CD-ROM. There were 118 Megabytes of data comprising 7236 files in 450 folders. Some of the files were duplicates (copies in more than one folder), some were not scans (Excel files and the like) and some were in scan format but computed from other scans, therefore not independent measurements. File names varied considerably in length and naming conventions tended to be short lived. We wrote a program CRSLIST (footnote 13) to sort the files by date of creation and produce a DOS batch file, which copied them to a single large folder and renamed them 0000NNNN.CRS. In the process we eliminated duplicates, computed scans, and non-scans, and created an ASCII ‘catalog’ file giving the new file name, date/time of creation, and the original name. A fragment is shown below.

00004149 10JAN03 01:11 MorningQASOBP.cal
00004150 10JAN03 07:21 run001.dat
00004151 10JAN03 07:25 run002.dat
00004152 10JAN03 07:30 run003.dat

00004162 10JAN03 19:18 132 120 BCME0502.cal
00004163 10JAN03 19:23 Pristene 132R SD50 current86.4nA.dat
00004164 10JAN03 19:24 Pristene 132R SD50 current86.4nA.cal

There remained 4353 files including full depth scans (QA and calibration), partial depth scans, pristine Bragg peaks, transverse scans, electronic tests, junk, and empty files. The main analysis program, using the procedures of the previous Section, was called CRSFITS. It took about one minute (0.014 sec/file) to analyze the data set. In working through the catalog, the program can also attempt to extract information from the original long file name. This was done for Section 5.4. A program CRSVIEW, which merely plots selected batches of scans, proved very useful in identifying and helping fix sources of program crashes. To be practical, the analysis program must in its final form accept any catalogued file without crashing. Later, one can easily tell by looking at rms and error flags whether a particular result is useful.

To speed up further program development we wrote a data summary file of fitted SOBP parameters. This file gives the analysis date/time and the analysis program parameters. It then lists, for each data file, the file number, number of segments (3 or 4) found, date/time (minutes from the start of 1990), an error flag, rms (equation 11) overall and for the flat segment (%), m100, m′90, d100 and d′90 (cm), the entrance dose relative to the center of the SOBP (%), the relative rise of the ‘flat’ region of the SOBP (%), and the range-equivalent depth d80 (cm). Figures 10 and 11 were obtained with program CRSHIST reading the summary file.

5.2 Time Series

Figure 10 is a time series of five SOBP parameters over 15 months. We have chosen cuts and ranges to display the QA measurements for d, m = 16, 10. We have not used the file names, which are somewhat inconsistent even for the QA runs, to select data files. The repetitive nature of the QA runs makes them stand out over the background of calibration runs which happen to satisfy the same cuts. All five frames have a ‘good fit’ cut 0 ≤ rms ≤ 1.5% (equation 11) in addition to further cuts as described. The upper left-hand box of Figure 11 shows the distribution of rms. The vertical line is at 1%.

Footnote 13: We used Microsoft Fortran PowerStation v1.0. The computer was a Pentium III (850 MHz) running DOS 6.22/Windows 3.11 under a dual-boot system [10]. The Figures were prepared with the scientific graphics program Axum v3.0.

The top frame of Figure 10 shows the range-equivalent depth d80. We have selected SOBP’s satisfying a loose m cut, 7.0 ≤ m100 ≤ 8.5 cm for normal (3-segment) scans, and ignored m100 for partial (4-segment) scans. Because the standard configuration uses just two thin lollipops we see two lines close together (all out, normal) and one at a much smaller depth (all in). We have already commented on the background from calibration and experimental runs which happen to satisfy the m cut. The three daily scans seem very well correlated, giving the impression that the small day-to-day and week-to-week fluctuations are either due to machine parameters or perhaps to water tank setup (changing wall correction). There is a slow downward drift which was also seen in the conventional QA analysis. The cause is still under investigation.

The next two frames show m100 and m′90 with a d cut, 15.7 ≤ d100 ≤ 16.4 cm.

They illustrate how m100 depends only on beam cutoff time whereas m′90 mixes in effects from the shape of the pristine Bragg peak and the beam current modulation file (modulator step weights). m100 shows a steady decline. Shortly after the time of this study the decline was diagnosed as a calibration change in the ADC that measures the instantaneous modulator angle. Replacing the ADC restored m to the higher value. m′90 also shows an overall decline but other effects are superimposed. During December 2002, for instance, m′90 rose significantly. This was caused by the slope of the SOBP, normally ≈ +1%, going negative (frame 4). That effectively raises the proximal segment AB, causing an increase in m′90. The problem turned out to be radiation damage to the ion chamber that monitors cyclotron output, causing erroneous beam current modulation. The chamber was repaired in mid-January 2003, restoring the SOBP slope to its normal value. m100 was unaffected by this incident because it did not involve the beam cutoff time.

Lastly, the bottom frame shows the entrance dose using both the d and m cuts given previously. It shows fluctuations which correlate with those in m′90, some of which may be due to a third effect, changes in the underlying pristine Bragg peak. In summary, Figure 10 demonstrates that hardware problems are better sorted out by tracking m100, the SOBP slope and the entrance dose, than by tracking m′90 alone.

In addition to the parameters shown we also checked the slope of the distal falloff by studying diff ≡ (d80 − d100) to see whether the observed fluctuations had anything to do with gantry room beam energy spread. The time series (not shown) was flat within statistics. Figure 11, column 2 shows histograms of diff for the entire 15 months. We used the ‘good fit’ cut as always. Rows 4, 5, 6 correspond to scans not meeting the QA m cut (i.e. all non-QA scans), QA full scans, and QA partial scans. Each bin is 0.02 mm. For the QA full scans, diff has a mean of 1.79 mm and an rms deviation of 0.093 mm. It is small but very precise.

5.3 Frequency Distributions

Once we have studied time series for systematic effects, frequency distributions (histograms) can yield more quantitative results. A typical set is shown in Figure 11. It is accompanied by an output file (not shown) which gives statistics such as total counts, mean, and rms deviation in the region of interest for each box.

The top 5 rows of columns 3-6 show the labeled quantities stratified by time in 100-day (roughly 3-month) intervals. They are projections of Figure 10 on the y axis. The bottom row in each column shows the sum over the entire time period. The histograms quantify several effects discussed in the previous Section. For instance, the slow downward drift of d80 is clearly seen in rows 1-5 of column 3. This column shows the distribution of d80 for all lollipops in (the lowest line in the top frame of Figure 10). From statistical analysis of these graphs the downward drift in d80 is ≈ 1.5 mm/year. The apparent split in the lowest histogram is probably a consequence of the downward drift being non-uniformly spread over QA runs.

5.4 Modulation Versus Beam Cutoff Time

The data set includes a two-hour modulation study on 29SEP02 with option A4, during which the stop digit (footnote 14) was varied and consistently encoded in the file name. That gave us the opportunity to compare the dependence of m′90 and m100 on stop digit, shown in Figure 12. The line is half computation and half fit in the following sense. m100 measures the distance between the SOBP corners and therefore should be a pure measure of beam cutoff time, independent of the proximal shape of the SOBP. If the Bragg peak pullback were a linear function of time, m100 would be a linear function of stop digit. The pullback versus time depends on modulator design, namely on the thicknesses of lead and plastic and the relative weight of each step. It is not linear, but it is calculable given the modulator design.

In principle, the stop digit zero (the value corresponding to the first edge of the first step) as well as the slope (stop digits per millisecond) are also known, but it is not easy to find them out. Therefore we computed the pullback versus time from modulator data but treated the stop digit zero and slope as fitted parameters. In other words the shape of the line was computed but the stop digit axis was shifted and (uniformly) stretched to obtain the best fit. The fitted slope was 0.579 msec/digit corresponding to 173 digits for the full track, which seems reasonable considering the beam stop block, which was not included in the modulator design file, and the fact that the system does not quite use the full range 0-255. The calculation accounts for the shape of m100 quite well.
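The shift-and-stretch fit can be sketched as follows. All numbers here are invented stand-ins, since the actual modulator design data are not reproduced in this report; only the structure of the fit (a computed curve with an affine map of its abscissa) matches the text.

```python
# Sketch (invented numbers): fit the stop-digit zero and slope.
# The pullback-versus-time curve m_model(t_model) is assumed computed
# from the modulator design; only the affine digit->time map is fitted.
import numpy as np
from scipy.optimize import least_squares

# hypothetical computed curve: modulation (cm) versus cutoff time (ms)
t_model = np.linspace(0.0, 100.0, 201)
m_model = 16.0 * np.tanh(t_model / 40.0)

# synthetic 'measured' m100 at known stop digits
# (true zero = 20 digits, true slope = 0.58 ms/digit)
digits = np.array([40.0, 80.0, 120.0, 160.0])
m_meas = np.interp(0.58 * (digits - 20.0), t_model, m_model)

def residuals(p):
    zero, slope = p
    t = slope * (digits - zero)            # shift and stretch the axis
    return np.interp(t, t_model, m_model) - m_meas

fit = least_squares(residuals, x0=[15.0, 0.55])
print(fit.x)
```

The fitted pair recovers the shift and stretch because the computed curve is nonlinear; a straight-line pullback would leave the two parameters degenerate.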

In summary, if the modulator geometry and the beam cutoff relative to it are known, m100 can be computed. Computing m′90 requires us to know the shape of the Bragg peak as well, and is considerably more involved.

6 Summary

We have described a model-independent way of analyzing SOBP’s that depends only on the generic shape. It consists of fitting separate polynomials to the three segments, adjusting the break points to obtain the best fit, and finding the intersections of the polynomials. This provides an objective definition of the corners of the SOBP, based on which we define the depth and modulation at full dose, d100 and m100.

We have argued that these definitions are more convenient than the more common d′90 and m′90 mathematically, for QA, and for equipment design. For the treatment planner, they are probably at least equally convenient. The fitted polynomials yield other measures of the SOBP. These measures are unbiased if the same program parameters are used for an entire data set, and they are statistically efficient in that they use all the data in a scan.

We have described the mathematical procedures in detail, and we have tested the new analysis on a large data set from NPTC. That test brought out the need for better data organization, within the CRS computer and site-wide. For instance, if the stop digit and other beam line settings in force during every scan were readily available to the analysis, every calibration scan would test the hardware, and those tests would cover the clinical range more completely than selected QA points. That requires careful planning for the synchronization of data from different sources (machine logs and QA computer) and for file archiving procedures. We hope to improve the system in that direction.

Footnote 14: The stop digit is a number 0-255 which determines the modulator track angle at which the cyclotron beam cuts off. It allows one to vary the modulation by beam gating.

In closing, we wish to thank Uwe Titt for supplying us with the NPTC data set, Marc Bussiere and Mike Collier for advice, Thomas Bortfeld for reviewing the draft, and the Harvard High Energy Physics Laboratory for its hospitality.

References

[1] Berger et al., ‘Stopping powers and ranges for protons and alpha particles,’ International Commission on Radiation Units and Measurements (ICRU) Report 49 (1993).

[2] B. Gottschalk, R. Platais and H. Paganetti, ‘Nuclear interactions of 160 MeV protons stopping in copper: a test of Monte Carlo nuclear models,’ Med. Phys. 26 (12) (1999) 2597-2601.

[3] M. Berger, ‘Penetration of proton beams through water: I. Depth-dose distributions, spectra and LET distribution,’ report NISTIR 5226, U.S. Natl. Inst. of Standards and Technology, Gaithersburg, MD 20899 (1993).

[4] T. Bortfeld, ‘An analytical approximation of the Bragg curve for therapeutic proton beams,’ Med. Phys. 24 (12) (1997) 2024-2033.

[5] B. Gottschalk, ‘Calibration of the NPTC range verifier,’ HCL Technical Note 1/16/2001.

[6] Computerized Radiation Scanners Inc., 140 Sopwith Drive, Vero Beach, FL 32968, USA.

[7] William H. Press, Brian P. Flannery, Saul A. Teukolsky and William T. Vetterling, “Numerical Recipes: the Art of Scientific Computing,” Cambridge University Press (1986).

[8] Philip R. Bevington, “Data Reduction and Error Analysis for the Physical Sciences,” McGraw-Hill (1969).

[9] MATLAB web site: http://www.mathworks.com/access/helpdesk/help/techdoc/math anal/dataful4.shtml.

[10] System Commander 2000 v5.04, V Communications Inc., 2290 North First St., San Jose, CA 95131, USA.


List of Figures

1  Model SOBP defining various quantities.
2  Fluence, differential fluence and dose as a function of depth for proton beams of a given energy and different energy spreads, illustrating r0 = d80.
3  Sample depth-dose scans from NPTC. Figures 5, 6 and 8 refer to the same mix of scans.
4  Scans for the HCL modulator wheels. We used a quadratic rather than a linear function for the ‘flat’ region.
5  Smoothed second derivative normalized to unit rms deviation. The reference lines are at ±1.
6  ‘Corner function’ (see text) normalized to unit rms deviation. The reference lines are at ±1.
7  Method of estimating corner B. The distance from a straight line fit to AC has a maximum at B.
8  Distance from straight line normalized to unit rms deviation. The reference lines are at ±1.
9  The rms deviation for a polynomial fit versus the number of terms. Solid circles: x transformed to −1 ≤ u ≤ 1. Empty circles: x not transformed.
10  Time series for five SOBP parameters in the NPTC data for the daily d, m = 16, 10 QA runs. See text for discussion.
11  Frequency distributions (histograms) for the data set shown in Figure 10. See text for discussion.
12  Measured m′90 (open circles) and m100 as a function of stop digit. The line is calculated from modulator geometry, but the start time and time per stop digit are fitted.


Figure 1: Model SOBP defining various quantities.

Figure 2: Fluence, differential fluence and dose as a function of depth for proton beams of a given energy and different energy spreads, illustrating r0 = d80.


Figure 3: Sample depth-dose scans from NPTC. Figures 5, 6 and 8 refer to the same mix of scans.

Figure 4: Scans for the HCL modulator wheels. We used a quadratic rather than a linear function for the ‘flat’ region.


Figure 5: Smoothed second derivative normalized to unit rms deviation. The reference lines are at ±1.

Figure 6: ‘Corner function’ (see text) normalized to unit rms deviation. The reference lines are at ±1.


Figure 7: Method of estimating corner B. The distance from a straight line fit to AC has a maximum at B.

Figure 8: Distance from straight line normalized to unit rms deviation. The reference lines are at ±1.


Figure 9: The rms deviation for a polynomial fit versus the number of terms. Solid circles: x transformed to −1 ≤ u ≤ 1. Empty circles: x not transformed.

Figure 10: Time series for five SOBP parameters in the NPTC data for the daily d, m = 16, 10 QA runs. See text for discussion.


Figure 11: Frequency distributions (histograms) for the data set shown in Figure 10. See text for discussion.

Figure 12: Measured m′90 (open circles) and m100 as a function of stop digit. The line is calculated from modulator geometry, but the start time and time per stop digit are fitted.
