QUALITY ASSURANCE OF TREATMENT PLANNING SYSTEMS PRACTICAL EXAMPLES FOR NON-IMRT PHOTON BEAMS
Ben Mijnheer
Agnieszka Olszewska
Claudio Fiorino
Guenther Hartmann
Tommy Knöös
Jean-Claude Rosenwald
Hans Welleweerd
2004 – First edition
ISBN 90-804532-7
© 2004 by ESTRO
All rights reserved
No part of this publication may be reproduced,
stored in a retrieval system, or transmitted in any form or by any means,
electronic, mechanical, photocopying, recording or otherwise
without the prior permission of the copyright owners.
ESTRO
Mounierlaan 83/12 – 1200 Brussels (Belgium)
AUTHORS:
This booklet could only be written thanks to the enthusiastic co-operation of all members of the QUASIMODO group:
Ben Mijnheer, Amsterdam, The Netherlands (Project leader)
Carlos De Wagter, Gent, Belgium (Co-Project leader)
Rafael Arrans, Sevilla, Spain
Annemarie Bakai, Tübingen, Germany
Joerg Bohsung, Berlin, Germany
Wojciech Bulski, Warszawa, Poland
Claudio Fiorino, Milano, Italy
Sofie Gillis, Gent, Belgium
Germaine Heeren, Brussel, Belgium
Guenther Hartmann, Heidelberg, Germany
Dominique Huyskens, Leuven, Belgium
Tommy Knöös, Lund, Sweden
Agnieszka Olszewska, Amsterdam, The Netherlands
Marta Paiusco, Reggio Emilia, Italy
Bruce Perrin, Manchester, United Kingdom
Jean-Claude Rosenwald, Paris, France
Francisco Sanchez-Doblado, Sevilla, Spain
David Thwaites, Edinburgh, United Kingdom
Hans Welleweerd, Utrecht, The Netherlands
Peter Williams, Manchester, United Kingdom
ACKNOWLEDGEMENTS
Although it is virtually impossible to thank all persons who contributed in one way or
another to the drafting of this booklet, we very much appreciate the co-operation of the following
persons: Maria do Carmo Lopes, Coimbra, Portugal; Maria Gavrilenko, Kiev, Ukraine; Sara
Broggi, Milan, Italy; Corine van Vliet–Vroegindeweij, Amsterdam, The Netherlands; Milàn
Tomsej, Brussels, Belgium; and Sonny La, Lund, Sweden. We would also like to thank the
International Atomic Energy Agency (IAEA) for giving us insight into their forthcoming
report “Commissioning and Quality Assurance of Computerized Treatment
Planning”. The constructive comments from the vendors of treatment planning systems are
also appreciated.
QUASIMODO was funded by the EUROPEAN COMMISSION, Directorate General Health
and Consumer Protection - Europe Against Cancer Programme, and is part of the ESQUIRE
Project: Education, Science and Quality assurance In Radiotherapy in Europe, Grant
Agreements S.12.322029 and SPC.2002480.
CONTENTS:
ESTRO BOOKLET NO. 7: QUALITY ASSURANCE OF TREATMENT PLANNING SYSTEMS
PRACTICAL EXAMPLES FOR NON-IMRT PHOTON BEAMS
AUTHORS V
ACKNOWLEDGEMENTS VII
CONTENTS IX
1. INTRODUCTION 1
1.1 General remarks 1
1.2 Quality assurance of a TPS 2
1.3 Division of QA tasks to be performed by the vendor and the user of a TPS 4
1.4 Incompleteness or redundancy of recommended tests 6
1.5 National and international reports discussing QA of treatment planning systems 7
1.6 Contents of the booklet 9
2. ACCURACY REQUIREMENTS AND TOLERANCE LEVELS 11
2.1 Recommendations for dosimetric and geometric accuracy 11
2.2 How to express deviations between measurements and calculations? 12
2.2.1 Evaluation in low dose gradient areas 14
2.2.2 Evaluation in high dose gradient areas 15
2.2.3 Combined evaluation of dosimetric and spatial deviations 16
2.2.4 Evaluation and reporting deviations for a large number of points 17
2.3 Acceptance criteria for the accuracy of photon beam dose calculations 18
3. ANATOMICAL DESCRIPTION 21
3.1 Basic patient entry 21
3.2 Image input and use 22
3.2.1 Image input 24
3.2.2 Contour input 26
3.2.3 Image use 26
3.2.4 Co-ordinate system of images 27
3.3 Anatomical structures 27
3.3.1 Definition of anatomical structures 27
3.3.2 Automated contouring 28
3.3.3 Manual contouring 28
3.3.4 Manipulation of contours 28
3.3.5 Construction of volumes 30
4. BEAM DESCRIPTION 33
4.1 Beam definition 33
4.1.1 SAD, SSD and field size 33
4.1.2 Gantry rotation 34
4.1.3 Collimator rotation 34
4.1.4 Table movement 35
4.1.5 Jaw definition and beam co-ordinates 35
4.1.6 Multi-leaf collimator definition 35
4.1.7 Wedge and block insertion 35
4.1.8 Consistency check of beam co-ordinate system 36
4.1.9 Warnings and error messages 36
4.1.10 Bolus definition 37
4.2 Beam display 37
4.2.1 Beam’s-eye-view (BEV)-display 37
4.2.2 Beam position and shape 38
4.2.3 Beam position in BEV 38
4.2.4 Block position in BEV 38
4.2.5 MLC-shaped field 38
4.2.6 Bolus position 38
4.3 Beam geometry 39
4.3.1 Automatic block and auto-leaf positioning 39
4.3.2 User-defined block 39
4.3.3 DRR: linearity and divergence 39
4.3.4 Input, change and edit functions 39
5. DOSE AND MONITOR UNIT CALCULATION 41
5.1 Beam characterisation set 42
5.1.1 Data input process 42
5.1.2 Documentation 43
5.2 Dose calculation 43
5.2.1 Open square fields 44
5.2.2 Open rectangular fields 44
5.2.3 Variation in SSD 45
5.2.4 Wedged square field 45
5.2.5 Wedged rectangular fields 45
5.2.6 Field with a central block 45
5.2.7 Blocked field 46
5.2.8 Inhomogeneities 46
5.2.9 Oblique incidence 46
5.2.10 Missing tissue 47
5.2.11 Off-axis square field 47
5.2.12 Off-axis elongated field 47
5.2.13 Wedged off-axis field 47
5.2.14 Off-plane field 48
5.2.15 Square MLC field 48
5.2.16 Off-axis square MLC field 48
5.2.17 MLC-shaped field 49
5.2.18 Block and tray insertion 49
5.3 2-D and 3-D dose verification 51
5.3.1 2-D dose distribution 52
5.3.2 Dose-volume histogram 52
5.4 Monitor unit calculation 54
6. PERIODIC QUALITY CONTROL 55
6.1 Data input process 55
6.1.1 Digitiser 55
6.1.2 Film scanner 56
6.1.3 CT data 56
6.1.4 MR data 56
6.1.5 Integrity of simultaneous input 56
6.2 Software 56
6.2.1 MU calculation 57
6.2.2 Standard treatment techniques 57
6.2.3 MLC-shaped field 57
6.3 Data output process 57
6.3.1 Printing/plotting devices 57
6.3.2 Block cutting device 58
6.3.3 Treatment plan transfer 58
7. EXAMPLES OF TESTS 59
7.1 Tests from chapter 3: Anatomical description 59
7.2 Tests from chapter 4: Beam description 65
7.3 Tests from chapter 5: Dose and monitor unit calculation 68
APPENDIX A.1 Definitions 83
APPENDIX A.2 List of abbreviations and symbols 87
APPENDIX A.3 Categorisation of tests 89
REFERENCES 93
1. INTRODUCTION
1.1. GENERAL REMARKS
A computerised treatment planning system, TPS, is an essential tool in the design of
a radiotherapy treatment of cancer patients and of some groups of patients suffering from
non-malignant diseases. A TPS is typically installed in a radiotherapy network, virtual
or real, in which it functions together with other systems required for patient treatment
(Figure 1.1). Nowadays, image transfer in a hospital is often done via DICOM (Digital
Imaging and Communications in Medicine). The DICOM standard has been developed to
meet the needs of manufacturers and users of medical imaging equipment for interconnection
of devices in networks. A single DICOM file contains both a header (which stores
information about the patient’s name, the type of scan, image dimensions, etc.) and all
of the image data (which can contain information in three dimensions). DICOM RT
(DICOM in radiotherapy) contains the radiotherapy modalities: RT Image (e.g., CT slices),
RT Dose (e.g., dose matrices), RT Structure Set (e.g., target volumes and organs at risk), RT
Plan (e.g., gantry/collimator/couch angles, field sizes, blocks, MLC settings) and RT
Treatment Record. The TPS should be able to share patient data and must be capable of
importing information from imaging devices such as CT and MR. The imported data should
correctly represent both image content (pixel value) and geometric (pixel position) information.
Imaging information may also come from a hospital-based system supplying all
departments with patient data. Although most manufacturers claim to apply DICOM, some
TPSs are not fully DICOM compatible, and differences also exist in the way different
vendors have implemented DICOM in their systems.
Figure 1.1 Radiotherapy network showing imaging devices, treatment units, treatment planning workstations, and supporting (e.g., booking, image archiving, and record-and-verify) systems.
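As a small illustration of how header information determines the interpretation of pixel values, DICOM CT images map stored pixel values to CT numbers via the Rescale Slope and Rescale Intercept header attributes. The Python sketch below is illustrative only and not part of the booklet's test set; the numeric convention shown (slope 1, intercept -1024) is common but scanner-dependent.

```python
def stored_to_hu(stored_value, rescale_slope, rescale_intercept):
    """Convert a stored CT pixel value to Hounsfield units using the
    DICOM rescale relation: HU = slope * stored + intercept."""
    return rescale_slope * stored_value + rescale_intercept

# Spot-checks with a common CT convention (slope 1, intercept -1024):
# a stored value of 0 should map to -1024 HU (air),
# and a stored value of 1024 should map to 0 HU (water).
print(stored_to_hu(0, 1.0, -1024.0))
print(stored_to_hu(1024, 1.0, -1024.0))
```

Recomputing this mapping for regions of known material (air, water) in a transferred image is one simple way to spot-check that pixel values survive the transfer into the TPS.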
For radiotherapy tasks, CT information is of the highest importance, since it is used
not only for target and organ delineation but also for dose calculation. Therefore, geometry data
and Hounsfield numbers must be transferred and interpreted correctly. During commissioning
of the TPS the whole range of input variables of the treatment unit, including movements,
must be entered and verified. If this is not possible, one may have to use in-house written data
transfer tools, with all the problems of maintaining and verifying such tools. Independent
of the type of solution, it is important that the correctness of data transfer is verified prior to
clinical use of both the TPS and the accelerator(s). This procedure includes identifying the
dimension and (asymmetric) position of the collimator opening, gantry and table rotation, and
the co-ordinate systems of these parts of the linac. Adopting the international standard for the
definition of co-ordinates, movements and scales, as given in IEC Report 61217 (IEC 1996),
for all systems involved (linac, TPS, etc.), will facilitate the communication between radiotherapy
equipment, particularly if use is made of DICOM RT. In this way the internal co-ordinate
system of the TPS can be transferred directly to the specific treatment unit, thus
ensuring data integrity and avoiding misinterpretation of data. The use of look-up tables or
similar transfer tools can facilitate this task. A quality assurance, QA, programme of a TPS
must therefore include tests verifying that data integrity is maintained after transfer in a
radiotherapy environment. These tests should be repeated after software upgrades of either
the TPS or the linac.
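A conversion function of the kind such a look-up table would implement can be sketched as follows. The Python example below is purely hypothetical: it assumes a native gantry scale that reads 180° where IEC 61217 reads 0°, with the opposite sense of rotation. The actual relation must always be taken from the machine and vendor documentation; the point of the sketch is the round-trip self-check, which is the kind of consistency test a QA programme should include.

```python
def native_to_iec61217(angle_native):
    """Convert a gantry angle from a *hypothetical* vendor-native scale
    (assumed: 180 deg where IEC 61217 reads 0 deg, opposite sense of
    rotation) to the IEC 61217 scale."""
    return (180.0 - angle_native) % 360.0

def iec61217_to_native(angle_iec):
    """Inverse of the hypothetical mapping above (the mapping happens
    to be its own inverse)."""
    return (180.0 - angle_iec) % 360.0

# Self-check: the round trip must reproduce the original angle.
for a in (0.0, 90.0, 180.0, 270.0):
    assert iec61217_to_native(native_to_iec61217(a)) == a
```

The same pattern, a pair of conversion functions plus a round-trip check over the whole range of movements, applies equally to collimator and table angles and to jaw position scales.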
1.2 QUALITY ASSURANCE OF A TPS
After the installation of a TPS in a hospital, acceptance testing and commissioning of
the system are required, i.e., a comprehensive series of operational tests has to be performed
before the TPS is used for treating patients. These tests, which should be performed partly by
the vendor and partly by the user, not only serve to ensure the safe use of the system in a
specific clinic, but also help the user to appreciate the possibilities of the system and to understand
its limitations. In the past, several irradiation accidents involving patients undergoing
radiation therapy were related to the misuse of a treatment planning system.
Most often these accidents were not the result of system malfunctioning but of a lack of
understanding of how the TPS works. More details on the occurrence of accidents in
radiotherapy can be found in several reports (IAEA 2000, IAEA 2001, ICRP 2001, Cosset
2002). In many of these accidents a single cause could not be identified; usually a
combination of factors contributed to the occurrence of the accident. The most prominent
factors were deficiencies in education and training, and a lack of quality assurance
procedures. Good training, as well as the availability of well-documented quality assurance
procedures, therefore has a large impact in preventing planning errors.
Over recent years, increased attention has been paid to quality assurance of treatment
planning systems by various national and international organisations. Examples include Van
Dyk et al., 1993, Shaw, 1996, SSRPM 1997, Fraass et al., 1998, Mayles et al., 1999, IAEA
2004 and NCS 2004. These reports provide recommendations for specific aspects of QA of
a TPS, such as anatomical description, beam description and dose calculations. However,
contrary to the situation for treatment machines, not many sets of practical recommendations
for commissioning and QA of a TPS exist. Although a lot of information is provided in these
reports, it is difficult for a TPS user to decide which tests are absolutely necessary for an
individual user to perform, and which tests the vendor or users group of a specific system
should perform. Also, the number of tests provided by some of these reports is so
overwhelming that following their recommendations would require a huge investment in
manpower. For those reasons, departments with limited physics staff often
opt for a pragmatic approach, performing only those QA tests they consider of direct
importance for the use of the new TPS in their department. Particularly with respect to the
3-D aspects of planning systems, there are no clear guidelines on which specific tests should be
performed before the clinical use of a 3-D TPS can be started in a safe way. For that reason
it was decided during the 1999 ESTRO Physics Meeting in Göttingen, Germany, that
ESTRO would start activities in the field of QA of a TPS. It was emphasized that ESTRO
would concentrate on those activities not yet covered by other groups or already described
in other reports.
In August 2001 the ESQUIRE project, funded by the European Communities, EC,
started for a period of two years. The aim of that project was to increase the confidence of
clinicians in embracing optimised radiotherapy treatment regimens by making sure these
can be delivered without an increase in severe side effects. One of the actions proposed for
this purpose was to develop QA procedures for optimised radiotherapy planning and
delivery, as outlined in the part of the project called QUASIMODO (QUality ASsurance of
Intensity MODulated radiation Oncology). QUASIMODO will promote the safe introduction
of advanced technology in RT by developing procedures for the QA of treatment planning
systems, and by exploring new methodology for the verification of intensity-modulated
radiation therapy, IMRT.
From the review of national and international documents discussing QA of treatment
planning systems (Section 1.5 of this booklet) it became clear that there is a need for a
minimum set of tests. These tests should not only be suitable for small hospitals with
limited resources, but are also needed by large (university) centres having a high patient load
or limited staff. They should not be too cumbersome to perform and should cover the
most essential parts of a TPS required for the accurate planning of established conformal
radiotherapy techniques. It should be realised, however, that the minimum number of tests to
be performed in a specific institution depends very much on the local clinical practice. It
might well be that some of the tests described in this booklet are not necessary, while for
advanced treatment techniques many more tests should be performed before the TPS can be
applied clinically. The first aim of the QUASIMODO project was therefore to identify a set
of example tests for QA of treatment planning systems, easy to perform by users of different
types of TPS.
Recently a rapidly increasing number of institutions have started the clinical
implementation of IMRT. By varying the beam intensity over the treatment fields it is possible
to deliver the radiation dose more conformally to irregularly shaped target volumes. In this way
it is possible to deliver a higher dose to the tumour while at the same time reducing the dose to
surrounding healthy tissues. For the QA of these advanced techniques, general guidelines
have been formulated by Ezzell et al. (2003). An interesting observation from a survey on
the status of IMRT in Europe in 2002 was that almost every institution applied its own
phantom/dosimetry system for the verification of treatment delivery. Obviously a specific
solution was found in each institution, but no common approach or method had been adopted
at that time. The second aim of the QUASIMODO project is to design tests and provide
guidelines for the verification of IMRT. This second part of the QUASIMODO project is not
only related to QA of treatment planning systems but includes QA of treatment delivery as
well. The results of that part of the project will therefore be the topic of another report and will
not be discussed further in this booklet. It should be noted that a number of tests described
in this booklet are also applicable to treatment planning systems handling IMRT. However,
for the additional problems encountered in IMRT, other tests are required.
1.3 DIVISION OF QA TASKS TO BE PERFORMED BY THE VENDOR AND THE USER OF A TPS
The International Electrotechnical Commission, IEC, published an International
Standard on “Requirements for the safety of radiotherapy treatment planning systems” (IEC
2000). Similar to other IEC documents concerning medical equipment, e.g., linear
accelerators, this International Standard defines a number of requirements with which
vendors of such equipment must comply in order to provide protection against safety hazards
to patients. Compliance with these requirements should be checked by testing by
the manufacturer and demonstrated to the customer.
In this booklet we have adopted the same approach, i.e., a suggestion is made for a
division of QA tests to be performed by the vendor or by an individual user (or a group of
users). As a consequence, a large number of tests can already be performed before the system
is installed in the hospital, i.e., before acceptance testing of the system by the user starts.
In particular, tests demonstrating the accuracy of dose computation, but also other tests
concerning anatomy description and beam description, which may be performed by the
vendor, another user, or a users group, should be made available by the vendor. The results
of these tests should be described in documents accompanying the system, and need only be
spot-checked by an individual user.
In Appendix A.3 each test described in this report is assigned either to the category
“Vendor Acceptance Tests” or to the category “User Commissioning Tests”. In the first
category a large number of tests to be performed by the vendor of a specific system are
presented. As a rule, the manufacturer should perform the generic tests related to the proper
functioning of the system, i.e., tests that will give the same results for all systems,
independently of the local customisation or of the beam data entered into the system in a specific
institution. Such tests will complement those that can be found in, for example, IEC Report
62083 (IEC 2000). On the other hand, it is not the intention of the proposed user tests of the
second category to repeat all checks of the generic accuracy of specific features or performance
characteristics implemented in a TPS. This should have been established within the
development process, and the vendor should be able to demonstrate it. It should be noted that
these generic tests are sensitive to the quality of the generic data, and the vendor should
ensure its accuracy.
However, there cannot always be a strict division between the two sets of tests, since
interaction between the vendor and the user is also required. As an example, prior to
installation it is highly desirable to formulate customer acceptance tests agreed with the vendor,
the satisfactory completion of which should be a contractual requirement. Next, tests are
necessary to ensure that the local implementation, including beam data etc., gives results
consistent with the established generic capability. Further verification of performance is
required following the installation of upgrades, new software releases and changes to
customisation, including the addition or editing of, for example, beam data. Therefore, a QA
programme for new software upgrades should be defined depending on the modules of the
software that have been changed in the new version.
Historically, after installation of a TPS by the vendor and a check that the system is
operational, the system is left to the users, who test it in whatever way they like. User training
may take place on- or off-site, as negotiated in the sales contract, after which the user
proceeds through the acceptance/verification-testing phase on their own. It is the intention of
this booklet that this procedure will change in the future. The vendor should now provide the
results of the tests designated in Appendix A.3 as vendor tests, in combination with agreed
acceptance data, either prior to or after installation of the system. In addition, the vendor
should present, with reference to this booklet, a list of commissioning/acceptance tests that
the user should perform on their own. After a reasonable time period, negotiable based on
circumstances but before first clinical use, the vendor and user should then together discuss the
results of the tests and decide about complete, or perhaps partial, acceptance of the system. In
this way on-site participation of the vendor during the acceptance-testing phase can be
minimized. Although a number of tests described in Appendix A.3 belong to a “standard”
acceptance programme, not all test results may yet be available from each vendor. It may
therefore take some time for the vendors to implement this new procedure and to put
together the acceptance test materials and data sets.
Finally, it is necessary to define tests which should be repeated, for ongoing quality
control purposes, during the lifetime of the TPS. Definition of the local QC programme should
be based on careful consideration of risk, with particular attention given to those aspects of
performance that are likely to change with time and/or use.
We recommend adoption of the tests provided in this booklet as a de facto standard
set of acceptance tests, with users only customizing the acceptance criteria depending on
local clinical practice and intended use. It is therefore hoped that this booklet will not only
provide guidelines for the TPS user to perform a QA programme in the hospital, but that it
will also become part of the user manual of a vendor and be used for customer training.
Based on the experience of both users and vendors, adaptations of the tests may be necessary
in the future.
1.4 INCOMPLETENESS OR REDUNDANCY OF RECOMMENDED TESTS
Generally speaking, quality assurance may be achieved by:
1. Vendor’s performance statements in system specifications based on generic data.
2. Vendor’s demonstration to individual users of features that have been customised by the vendor for the user’s particular installation.
3. User’s tests of features customised by the user on their installed system.
4. User’s investigations to ensure sufficient understanding of the performance and
limitations of the installed system.
5. User’s tests for periodic quality control of the installed system.
According to this list, the tests in this report describe a series of quality assurance
tasks (tests), which are examples of actions necessary to check the capabilities and limitations
of a TPS. For the purpose of this report, tests are defined broadly, to include all actions
required to give assurance that the TPS is fit for its intended purpose according to the list of
QA actions above. However, the set of tests cannot be taken as a complete list, and each user
has to identify and perform additional tests specific to the clinical needs in his/her department.
Some TPS vendors already have a module integrated in their software for the comparison
of measurements and calculations, which makes it easy for a user to perform a relatively
large number of tests. On the other hand, it is not necessary to perform all the tests described
in this booklet locally. Users groups of a specific vendor’s system can in many cases provide
useful information about tests that have already been performed with the system.
Information of this type is published from time to time in the peer-reviewed literature.
Knowledge of the results of tests performed by other users of the same system will not only
reduce the actual time involved in performing the tests described in this booklet, but may also
point to the need for other tests. A vendor might also wish to collect the results of the tests
described in this booklet from various users of the system in a systematic way, and share that
information with other (potential) customers. Such a collection of data gives insight into the
limitations of a specific algorithm and may lead to improvements of the system’s performance.
Some of the tests described in this booklet may appear to duplicate tests
already performed during system development or at a later stage by other users. This
redundancy is, however, still considered very useful, because some tests have an outstanding
training value and prevent individual users from misinterpreting the system behaviour. Moreover,
some performance characteristics already checked by the vendor should be spot-checked by
an individual user.
Finally, it should be noted that not all tests presented in this booklet can be performed
for every type of TPS. Because of their importance for those systems that are able to perform
them, we preferred to keep these tests in the booklet.
1.5 NATIONAL AND INTERNATIONAL REPORTS DISCUSSING QA OF TREATMENT PLANNING SYSTEMS
A large number of papers and reports about quality assurance of a 3-D TPS exist. The
main activities described in these publications concern the verification of dose calculations.
More recently, several national and international organisations have drafted, or are in the
process of finalizing, more general guidelines for the commissioning and quality assurance
of treatment planning systems. It is cumbersome to describe and compare those documents,
because they differ very much in the number of QA issues discussed, the attention paid to
detail and their layout. Also, the processes of commissioning and quality assurance are
sometimes difficult to distinguish. In this section we briefly discuss the main issues of
these documents.
Probably the first report in which performance evaluation of a treatment planning
system was mentioned was presented by a Nordic group (Dahlin et al., 1983). In that report,
user requirements of a CT-based computerized TPS were discussed. At that time it was
already noted that: “The physicist in charge needs a standardized set of tests and test
conditions to control the reliability of the output”. Ten years later a Canadian report (Van Dyk et
al., 1993) described, for the first time in a systematic way, commissioning and quality
assurance of treatment planning systems. It concentrates on the dosimetric aspects of a planning
system. After discussing the accuracy required for dose calculation in several regions of a
photon beam, a number of tests related to dose calculation are described, such as tests of
depth dose characteristics (TPR, TAR, PDD, etc.) and beam profiles.
The first UK report (Shaw, 1996) includes an interesting description of possible
problems and errors that may occur with the hardware and software of a TPS. Part of the
document focuses on checking general hardware (peripheral devices) such as the digitiser,
plotter, visual display system and CT interface. In the part dedicated to external photon beam
software, there is a recommendation to check the algorithms for the reconstruction of measured
(data-set) fields and non-measured (interpolated) fields. However, no details are given on how
to perform such tests.
The Swiss report (SSRPM, 1997) recommends a number of tests, some of which are
given in detail. For instance, it includes a test of the correct inclusion of bolus in the dose
calculation algorithm. Additional tests include checks of the block-cutting device, beam tests
(profiles, TPR, TAR, PDD, TMR, etc.) and checks of some standard irradiation techniques such
as tangential breast fields, the four-field box technique, mantle fields and non-coplanar head &
neck fields.
The American report (Fraass et al., 1998) is comprehensive and contains a
large amount of information on quality assurance of the whole treatment planning process.
It discusses extensively a number of dosimetric and non-dosimetric aspects of a TPS and
describes many associated tests. It provides a general framework for designing a QA
programme for all kinds of TPS, both for external beam therapy and brachytherapy. Because of
its completeness, it might be difficult for a user to choose the tests that are most urgently
needed for the situation in a particular institution.
The second UK report (Mayles et al., 1999) describes the QA of several parts of the
treatment planning process, such as the use of imaging devices (simulator, CT and MR),
target volume definition and dose prescription. Topics to be checked are listed, however,
without details on how to perform these tests.
The recommendations from the American report (Fraass et al., 1998) have been
adapted in a more recent publication (Van Dyk et al., 2003). It describes an approach
analogous to that of the AAPM TG53 Report in evaluating the non-dosimetric components of a
TPS, and gives more detailed dose calculation tests, including the variables to be considered
and the possible range of parameters.
The report of the Netherlands Commission on Radiation Dosimetry (NCS, 2004)
formulates a detailed set of tests to assure the accurate functioning of a TPS. A number of
these tests were the starting point for the tests recommended in this booklet. The NCS report
mainly discusses point dose measurements; only a limited number of 2-D tests, such
as the verification of beam profiles, are given. Tests for these issues are also important and
are therefore included in this booklet.
Quality assurance of brachytherapy treatment planning systems has recently been
discussed in a new ESTRO booklet (Venselaar and Pérez-Calatayud, 2004). In an integrated
TPS, where the external beam therapy software runs on the same platform as the brachytherapy
software, input and output issues are basically identical. However, there are still many special
features of brachytherapy software that need to be addressed in a proper QA programme,
independent of whether the system is stand-alone or part of a larger system. ESTRO
Booklet No. 8 discusses in detail the physicist’s tasks at commissioning and during continued
use of a brachytherapy TPS, the verification of a treatment plan, and the clinical aspects of
quality assurance of a brachytherapy TPS.
A forthcoming IAEA document contains a wealth of information on commissioning
and quality assurance of computerized treatment planning for both external beam therapy
and brachytherapy (IAEA, 2004). It describes a large number of tests, but does not provide
a simple protocol that can be followed step by step by a user in a hospital for the QA of a
TPS. Also, a number of tests presented in the IAEA document refer to testing the system
itself, and are not specific to an individual user. The IAEA document can at this moment be
considered the most complete reference work in the field of QA of treatment planning
systems. It is, however, too comprehensive, making it too difficult for a user to choose the
tests that are most urgently needed for the situation in a specific institution.
1.6 CONTENTS OF THE BOOKLET
After comparing the tests provided in the reports prepared by the different national
and international task groups, it became clear that none of these reports provided a set of tests
which can easily be performed at the hospital level. Also the separation of tests to be per-
formed by the vendor, either before or during the acceptance testing, and by the user during
the commissioning of the system, has not been described in any of these reports. In this
booklet an attempt has therefore been made to provide such a division of QA tests to be per-
formed by an individual user, or by the vendor or a users group. In the following chapters
two sets of examples, based on tests given in some of these reports, will be described in more
detail. These tests can be done without extensive study of the documents described in the
previous section. It should, however, be clear that the tests provided in this booklet should
be considered as practical examples of an elementary part of a QA programme of a TPS. In
many situations more tests are needed before the system can be operated in a reliable way in
the clinic. The other documents discussed in Section 1.5 should then give guidance on what
other tests should be performed either by the vendor or the user. The main topics elucidated
in the following chapters can be summarized as follows:
In Chapter 2 requirements and acceptance criteria for the accuracy of the dosimetric
and geometric aspects of a TPS are discussed. In this booklet a number of tests are presen-
ted to verify these issues. Various methods to express differences between measurements and
results of tests performed by a TPS are discussed in this chapter.
In Chapter 3 tests to verify the basic patient entry data in the TPS are given, to con-
firm that no trivial, but potentially very serious, mistakes can happen. In the same chapter the
anatomical model or description of the patient, one of the most critical issues of a TPS, is
discussed. The tests provided in this part of the chapter should ensure that the data used for
creating an anatomical structure are correct and properly linked to a specific patient. In this
chapter image information of a patient and geometrical elements of a TPS are also tested.
In Chapter 4 beam definition, beam display and beam geometry are tested. The defi-
nition of geometry and co-ordinate system of a specific treatment machine is necessary in
order to use a radiation beam in a TPS for treatment planning purposes. Beam definition and
its use are critical items for the accurate design of a treatment plan and should therefore be
carefully checked.
In Chapter 5 tests to verify the accuracy of the dose calculations performed by a TPS
are presented. The beam data input, dose calculation algorithms and criteria of acceptance
are taken into account. Verification of monitor unit calculation will also be discussed in this
chapter. Special attention will be paid to checking whether the monitor unit calculation
methodology is consistent with the normalisation method used in the TPS.
In Chapter 6 tests are given that should be performed periodically, at specified time
intervals, to verify that nothing has changed in the TPS. Periodic tests are different from the
tests associated with accepting or commissioning a new TPS or a new software release. Some
additional tests will be provided, related to the most common techniques for the treatment of
tumours of the head & neck, lung/esophagus, breast and pelvis/prostate.
In Chapter 7 examples are given of the tests described in Chapters 3 to 5. These tests
have recently been performed on different commercially available treatment planning sys-
tems, and therefore reflect situations that can occur to any user. These examples have been
chosen to illustrate in more detail in which way the tests can be performed in practice. They
also show the usefulness of these tests, because in some cases unexpected limitations of the
systems were discovered.
There are several other aspects of QA of a TPS, such as plan documentation and
export, system management and system security that are important for a user. Potential
problems with respect to plan documentation and export are related to data transfer from the
TPS to additional devices (computer controlled block cutters, MLC, record-and-verify sys-
tem, etc.). Because it is difficult to design tests valid for each TPS in combination with its
specific use in a radiotherapy department, no additional tests are given in this booklet for
these aspects. The reader is referred to the information provided in the other reports on this
topic.
System management is crucial for safe and reliable access to the TPS, which should
be restricted to those who are qualified to use it. Nowadays, a TPS often consists not only of
standard computer hard- and software, but includes also graphics workstations, servers and
other peripheral equipment. System management of a TPS covers: hardware (computer sys-
tem), software (data) and the network configuration based on DICOM connectivity. Another
aspect of QA of a TPS is therefore to take care of the technical performance and integrity
between hard- and software. These issues have been discussed extensively in other reports
and will not be discussed in this booklet.
It should be noted that in this booklet we restrict ourselves to external, non-IMRT,
photon beams, although some parts of it, e.g., Chapters 3 and 4 (Anatomical Description and
Beam Description), are also valid for external electron beam treatments.
2. ACCURACY REQUIREMENTS AND TOLERANCE LEVELS
2.1 RECOMMENDATIONS FOR DOSIMETRIC AND GEOMETRIC ACCURACY
Requirements for the accuracy of a treatment planning system must be seen in the
light of the total uncertainty in the 3-D dose delivery to a patient. For this purpose, all steps
in the planning and delivery process should be considered. An important criterion is the accu-
racy in the absorbed dose distribution required from a clinical/radiobiological point of view.
In an IPEM report (Mayles et al., 1999) an overview is given of the clinical evidence for
accuracy in radiotherapy. From this survey it can be concluded that a difference in absorbed
dose of about 10% is often detectable in tumour control, and that a difference of about 7%
in absorbed dose can be observed for a number of normal tissue reactions. From an exten-
sive review of dose-response data, Brahme et al. (1988) concluded that the standard devia-
tion in the mean dose in the target volume should be at most 3% (one standard deviation, SD)
to have control of the treatment outcome with a 5% tolerance level. This is in agreement with
a recommendation given by Mijnheer et al. (1987) based on a review of steepness of dose-
response curves observed for normal tissue complications, and other clinical observations.
These clinical/radiobiological observations point to the need to deliver the absorbed dose to
within 7-10%. Assuming that this number is equivalent to a confidence level of
95%, the standard deviation in the absorbed dose delivered to the (ICRU) dose specification
point, must be as low as 3-5%.
According to IAEA Report TRS 398 (IAEA 2000), the latest code of practice for the
calibration of high-energy radiation beams, the uncertainty in the absorbed dose to a point
under reference conditions is about 1.5% (1SD). If the treatment planning and treatment
delivery process include all other steps after the calibration of the beam, then a window of
uncertainty of 2.6 to 4.8% (1SD) remains for the rest of the treatment process. Because
several steps of this dosimetry chain do not depend on the use of a computerized TPS, a
much smaller uncertainty or tolerance must be assigned for the TPS as a QA goal. In the past
generally rather simple recommendations were given for the required accuracy of dose cal-
culations, such as 2% or 2 mm in regions of the beam with small or large dose gradients,
respectively (ICRU 1987). Later these recommendations have been refined and adapted to
the accuracy of dose calculations that can be achieved in clinical practice. Recently a sum-
mary of the accuracy requirements given in different reports and by various groups has been
published (Venselaar et al., 2001). The set of recommendations given by Venselaar et al. has
been adopted in this booklet and will be discussed in more detail in Section 2.3.
Almost no recommendations are available with respect to geometric accuracy
requirements in radiation therapy. In the IPEM report a value of 4 mm (1SD) is recommended
for the accuracy of the positioning of field edges and shielding blocks in relation to the
planning target volume (Mayles et al., 1999). This number seems rather large with respect
to current attempts to reduce margins around target volumes, while the shielding of organs
at risk also requires much better precision. In addition, the use of portal imaging and
immobilisation will result in smaller geometric (set-up) uncertainties. Furthermore, these
requirements concern the actual patient irradiation, i.e., the whole treatment process, and
consequently a much smaller uncertainty or tolerance must be used for the TPS. A better approach
might therefore be to compare the geometric accuracy requirements of a planning system
with those of an accelerator, which are of the order of 1 to 2 mm. It should be noted that the
actual geometric accuracy achievable with a TPS depends strongly on items like image res-
olution, grid size and dose matrix geometry. More work is needed to formulate geometric
accuracy requirements for treatment planning systems, in relation to the actual patient treat-
ment.
2.2 HOW TO EXPRESS DEVIATIONS BETWEEN MEASUREMENTS AND CALCULATIONS?
The verification and QC process of a TPS often reveals the problem of how to express
the deviations between measurements and calculations and how to define criteria that must
be fulfilled to use the system clinically. Different methods must be used depending on the
physical quantity to be tested, as well as on the region in the radiation beam to be studied.
Geometry tests involve spatial units, while dosimetric tests involve point, line, or 2-D/3-D
matrix comparisons. In principle, all dosimetric tests are point-based (from a single point to
a large set of data points), so these comparisons can be made voxel by voxel. This approach
is suitable in low dose gradient areas. In high dose gradient areas, e.g., a penumbra, the spa-
tial deviation must also be considered, as will be discussed later. A useful tool that handles
this situation is the dose/distance-to-agreement check (Harms et al., 1998), which has been
further developed into the so-called γ-index (Low et al., 1998, Depuydt et al., 2002, Low and
Dempsey, 2003, Bakai et al., 2003). In this way acceptance criteria can be specified as a
combination of the accepted dose deviation, e.g., 3%, and the accepted distance-to-agreement,
e.g., 3.0 mm. For this purpose recommended tolerance levels can be applied as given,
for instance, in Table 2.1 in this booklet.
The results of comparisons between calculated and measured dose distributions can
be normalised to the local dose value, to the dose at a specific point inside the beam under
consideration, or to the dose in a reference field. The latter procedure puts, for example, dose
points outside the beam edges in relation to the treated volume. Local normalisation in low
dose regions may give deviations of several tens of percent, although these may sometimes
be of little clinical significance. If an organ at risk is present in that region, normalisation to the local
dose value is, however, relevant. Also for IMRT applications accurate calculation of the dose
outside the beam edges is important, but accuracy requirements for these situations are still
under development. When normalising to a point inside the beam, the chosen point may lie at the
depth of dose maximum or at the reference depth (i.e., the depth used during beam calibration). Also the point on the
central beam axis at the same depth has been proposed. In this booklet we provide tolerances
for both options. In regions with a steep dose gradient it is recommended that the dose dif-
ference should be translated in a distance-to-agreement value.
Differences between dose values predicted by a TPS and measured values at the same
point result from limitations of the dose calculation, uncertainties in the measurement
procedure, or fluctuations in the output of the accelerator. In order to obtain the accuracy of
the dose calculation performed by the treatment planning system itself, it is recommended in
this booklet to express the deviation d(i) between calculated (subscript c) and measured (sub-
script m) dose values as the ratio between absolute dose values, after a normalisation to
reference conditions. This means that all dose measurements (single or multiple points)
shall be given as absolute dose, D, per monitor unit, M, normalised to a measurement in a
reference field (Figure 2.1).
The following equation gives the ratio d(i) between calculated and measured dose val-
ues at a point i after such a normalisation:
d(i) = [(Dc(i) / Mc(i)) / (Dc(ref) / Mc(ref))] / [(Dm(i) / Mm(i)) / (Dm(ref) / Mm(ref))]   (Eq. 2.1)

Alternatively the deviation can be expressed as the percent difference d%(i) between
calculation and measurement, normalised to the measured quantity: d%(i) = 100 x [d(i) - 1].

Figure 2.1 The left figure defines the geometry for the dose determination at a specific point i, and the
right figure defines the reference situation (source-skin distance, SSD, field size at isocentre, s, depth,
d). Normalisation is obtained by taking the ratio of the absorbed dose per monitor unit, D(i)/M(i), for
each point i. In practice the same value of M is often taken for all points i, as well as for M(ref). The
figure is valid for both dose measurements (m) and dose calculations (c).
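To make the bookkeeping of Eq. 2.1 concrete, the normalisation might be scripted as below. This is a minimal sketch: the function names and all numerical values are our own illustrative assumptions, not data from this booklet.

```python
# Sketch of the normalisation of Eq. 2.1: every dose is taken as absorbed
# dose per monitor unit and divided by the same quantity for the reference
# field, so that variations in accelerator output cancel.

def normalised_deviation(Dc_i, Mc_i, Dc_ref, Mc_ref,
                         Dm_i, Mm_i, Dm_ref, Mm_ref):
    """Return d(i) of Eq. 2.1: calculated over measured dose per monitor
    unit, each normalised to its own reference-field value."""
    calc = (Dc_i / Mc_i) / (Dc_ref / Mc_ref)
    meas = (Dm_i / Mm_i) / (Dm_ref / Mm_ref)
    return calc / meas

def percent_deviation(d_i):
    """Return d%(i) = 100 x [d(i) - 1]."""
    return 100.0 * (d_i - 1.0)

# Illustrative numbers: 100 MU everywhere, calculated 1.53 Gy versus
# measured 1.50 Gy at point i, and 2.00 Gy in the reference field for both.
d = normalised_deviation(1.53, 100.0, 2.00, 100.0,
                         1.50, 100.0, 2.00, 100.0)
print(round(percent_deviation(d), 1))  # 2.0 (per cent)
```

With the same number of monitor units used throughout, as recommended above, the monitor unit values cancel and d(i) reduces to a ratio of normalised doses.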
In practice, the reference field can be chosen as the one used during calibration of the
treatment machine. Another reference field, which may conveniently be used, is the one
applied for periodic control of the output of the accelerator. Defining the reference beam as
the one used during calibration and following the latest dosimetry protocol, i.e., IAEA
Report TRS 398 (IAEA 2000), means that all calculations and measurements should be nor-
malized to the absorbed dose at 10 cm depth in a 10 cm x 10 cm field. The absorbed dose in
this context is also given per monitor unit, which is in compliance with the discussion above.
This procedure is identical to that used during the commissioning of the accelerator where
all measurements must be normalized to the output in a reference field to account for the
variations in output of the accelerator. The absorbed dose in each test should therefore be
given per monitor unit (similar to dose rate where the time is replaced by monitor units).
Following this concept, the number of monitor units for a certain geometry is implicitly also
checked. It should be noted that in most treatment planning systems the dose is expressed as
dose to water. In some systems several approaches are possible; for instance the dose to
muscle can be chosen, which might be the preferred choice if a Monte Carlo dose calcula-
tion engine is available. The vendor should clearly indicate which dose definition is chosen.
A simple experimental protocol for dose verification is to determine the absorbed dose in
the reference geometry before the points of interest are measured, as well as just afterwards.
The average dose of these measurements can, if they agree to within a certain value, e.g.,
1%, be used to normalise the verification dose measurements. This is also a check of the
stability of the treatment unit during the measurement session. It should be noted that variations
in the output of the accelerator during relative dose measurements are often taken into
account by using a reference chamber placed in a corner of the field. All relative dose
measurements obtained in this way still have to be related to the absolute dose for a certain
number of monitor units.
The procedure for the evaluation of deviations between calculation and measurement
for various situations will now be described in more detail in the following paragraphs. A
number of numerical examples will be presented in Chapter 7.
2.2.1 Evaluation in low dose gradient areas
In areas with low dose gradients it is sufficient to evaluate the dose deviation inde-
pendently of the spatial consideration. The absorbed dose, Dm(i), for a certain number of
monitor units, Mm(i), is determined using some kind of detector (e.g., ionisation chamber,
diode, diamond). In close relation (before and afterwards) to this measurement the dose in
the reference field, Dm(ref), is determined for a delivery of a number of monitor units
Mm(ref). Thus we know Dm(i)/Mm(i) as well as Dm(ref)/Mm(ref) and can determine the meas-
ured normalised dose at point i. To facilitate the process it is advantageous to give the same
number of monitor units in the two cases, i.e., Mm(i)=Mm(ref).
Step two involves the creation of an identical geometry/beam set-up in the TPS. In a
similar way as applied for the measurement situation, both the geometry of interest and the
reference geometry must be defined. An attractive approach is to apply the same number of
monitor units during the calculation and the irradiation, Mm(i)=Mc(i) and Mm(ref)=Mc(ref),
and then letting the program calculate the dose to these two points, Dc(i) and Dc(ref). In this
way the measured and calculated dose at point i relative to that at the reference point can
directly be compared.
However, in some TPSs it is complicated to calculate the dose for a given number of
monitor units, and in some cases even impossible. Instead it is necessary to determine the
number of monitor units for a certain absorbed dose to the point (either i or ref). Thus we
will get two different values Mc(i) and Mc(ref) for the two situations where the absorbed dose
at these two points is identical, Dc(i)= Dc(ref). This procedure has to be repeated for each
point of interest, whereas the reference field has to be calculated only once. When such a
comparison is repeated for many points, recalculating for each point i the number of moni-
tor units Mc(i) for the dose Dc(i) becomes very cumbersome and is unnecessary. Actually,
when it has been done for one point i0, the ratio Dc(i)/Mc(i) used in equation 2.1 is more
directly obtained from the relative dose distribution using equation 2.2.
Dc(i) / Mc(i) = [Dc(i) / Dc(i0)] x [Dc(i0) / Mc(i0)] x [Mc(i0) / Mc(i)] = [Dc(i) / Dc(i0)] x [Dc(i0) / Mc(i0)]   (Eq. 2.2)
where Dc(i)/Dc(i0) is the relative dose normalised to point i0 for the same number of monitor
units applied to all points i, i.e., Mc(i)=Mc(i0).
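A short sketch of this shortcut (Eq. 2.2), with hypothetical point names and illustrative values; only one monitor unit calculation, for the single point i0, is needed.

```python
# Sketch of Eq. 2.2: once Dc(i0)/Mc(i0) is known for one point i0, the dose
# per monitor unit at any other point i follows from the relative dose
# distribution, provided the same number of monitor units applies to all
# points (Mc(i) = Mc(i0)).

def dose_per_mu(rel_dose_i, rel_dose_i0, D_i0, M_i0):
    """Dc(i)/Mc(i) = [Dc(i)/Dc(i0)] x [Dc(i0)/Mc(i0)] (Eq. 2.2)."""
    return (rel_dose_i / rel_dose_i0) * (D_i0 / M_i0)

# Relative dose distribution normalised to point i0 (in per cent):
rel = {'i0': 100.0, 'i1': 85.0, 'i2': 42.5}
D_i0, M_i0 = 2.0, 100.0          # 2.00 Gy calculated for 100 MU at i0

per_mu = {p: dose_per_mu(v, rel['i0'], D_i0, M_i0) for p, v in rel.items()}
print(round(per_mu['i1'], 4))    # 0.017 Gy/MU
```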
2.2.2 Evaluation in high dose gradient areas
Evaluation of deviations between measurements and calculations in high dose gra-
dient areas based on dose differences may result in very large figures, which are very sensi-
tive to geometric uncertainties. Thus a better approach is to quantify these dose differences
as distance–to-agreement. This distance (spatial deviation) is equal to the smallest distance
r(i) between a measurement point rm(i) and a point rc in the calculation volume with the same
absorbed dose D. Interpolation within a given calculated or measured dose matrix may faci-
litate this evaluation. In principle this should be done in three dimensions, (where r, rm and
rc are three-dimensional vectors), but such a procedure can also be applied to a one- or two-
dimensional data set, i.e., for depth doses, profiles, and isodose lines.
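For a one-dimensional data set, such a search might be implemented as below. The helper and its linear interpolation scheme are an illustrative assumption, not a procedure prescribed by this booklet.

```python
# Minimal 1-D distance-to-agreement sketch: for a measured point (x_m, D_m),
# find the smallest distance to a position in the calculated profile where
# the same absorbed dose occurs, interpolating linearly between grid points.

def distance_to_agreement(x_m, D_m, xs, Ds):
    """Smallest |x - x_m| such that the (linearly interpolated) calculated
    dose at x equals D_m; xs and Ds describe the calculated 1-D profile."""
    best = float('inf')
    for x0, x1, D0, D1 in zip(xs, xs[1:], Ds, Ds[1:]):
        if min(D0, D1) <= D_m <= max(D0, D1) and D0 != D1:
            # position where this segment crosses the dose level D_m
            x = x0 + (x1 - x0) * (D_m - D0) / (D1 - D0)
            best = min(best, abs(x - x_m))
    return best

# Calculated penumbra falling from 100% to 20% between x = 0 and 4 mm:
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
Ds = [100.0, 80.0, 60.0, 40.0, 20.0]
print(distance_to_agreement(2.0, 50.0, xs, Ds))  # 0.5 (mm)
```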
2.2.3. Combined evaluation of dosimetric and spatial deviations
A simple method to combine dosimetric and spatial deviations is to calculate both and
to select the smallest value relative to the recommended tolerance value as given, for
instance, in Table 2.1.
A more complex but very elegant concept, that combines dosimetric and spatial devi-
ations into one single figure of merit, is the γ-evaluation method, which was first presented
by Low et al. (1998) and later refined by several groups (e.g., Depuydt et al., 2002, Low and
Dempsey, 2003, and Bakai et al., 2003). The method can be considered as a comparison of
points in the four-dimensional dose-position vector space. The two points to be compared are
the points (rc,Dc) and (rm,Dm), where r is the three-dimensional spatial co-ordinate and D the
absorbed dose co-ordinate. If the base vectors of the co-ordinate system are set equal to the dose
criterion, Δd, and the spatial criterion, Δr, respectively, then agreement is fulfilled if the length
of the normalized vector between these points is less than or equal to unity (Figure 2.2).
Figure 2.2 Dose-distance vector space showing the measured dose Dm at point rm, and the calculated dose Dc at rc. (For clarity the figure is drawn in two dimensions, where the spatial dimensions are reduced to one.)
For all points (rc,Dc) the difference between the measured and calculated dose
d(i)=Dm(i)-Dc has to be determined, as well as the distance between the points r(i)=rm(i)-rc.
The γ-value is then found by scaling with the base vectors according to:
γ(i) = min √[ (r(i)/Δr)² + (d(i)/Δd)² ], where the minimum is taken over all points (rc, Dc)   (Eq. 2.3)
For γ<1 the measured dose at point (rm,Dm) is within the acceptance criteria. The evaluation
should preferably be performed in three dimensions, where r=(rx,ry,rz). The method can
also be applied to dose values along a line (e.g., a depth dose curve or a beam profile), and
two-dimensional data sets. In all cases, the calculated dose data may still be a full 3-dimen-
sional dose matrix. As a matter of fact, lower γ-values are obtained by considering the full
dimensionality of the calculated dose matrix.
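As an illustration only, a one-dimensional γ-evaluation can be sketched as follows; a dense calculation grid stands in for the interpolation step, and all numerical values are invented for the example.

```python
# Sketch of a 1-D gamma evaluation (Eq. 2.3): for a measured point, gamma
# is the minimum, over all calculated points, of the distance in the
# dose-position space normalised by the criteria (delta_r, delta_d).

from math import hypot

def gamma_1d(x_m, D_m, xs_c, Ds_c, delta_r, delta_d):
    """gamma = min over (x_c, D_c) of
       sqrt(((x_c - x_m)/delta_r)**2 + ((D_c - D_m)/delta_d)**2)."""
    return min(hypot((x_c - x_m) / delta_r, (D_c - D_m) / delta_d)
               for x_c, D_c in zip(xs_c, Ds_c))

# Calculated profile on a fine grid (position in mm, dose in per cent),
# with a mild gradient of 2% per mm:
xs_c = [i * 0.1 for i in range(101)]
Ds_c = [100.0 - 2.0 * x for x in xs_c]

# Measured 98% at 2 mm, where the calculation predicts 96%:
g = gamma_1d(2.0, 98.0, xs_c, Ds_c, delta_r=3.0, delta_d=3.0)
print(g < 1.0)  # True: the point passes a 3%/3 mm criterion
```

Because γ takes the minimum over the calculated distribution, evaluating against the full 3-D dose matrix rather than a single profile can only lower the γ-values, as noted above.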
Dm and Dc must have the same unit, i.e., preferably in Gy per MU normalised to the
reference situation. In order to be in agreement with the tolerance values given later in Table
2.1, the dose differences d(i) and Δd should be normalized to some dose value. As will be
discussed in Section 2.3, the normalising dose should be the local dose Dm in most circumstances,
except where this dose is too low to have clinical significance. In such cases, the
normalising dose should preferably be defined inside the measured open field. In any case,
it should be made clear which normalising quantity is used. Because different tolerance val-
ues are valid for different regions in a radiation field, the γ-evaluation method should in prin-
ciple be able to handle these different criteria. These more sophisticated evaluation methods
are particularly useful if inhomogeneous 3-D dose distributions in organs at risk are
analysed, as encountered in IMRT.
In Chapter 7, as an example, the use of the γ-evaluation method is illustrated for a
4-field box technique.
2.2.4. Evaluation and reporting deviations for a large number of points
When the number of comparison points is large, reporting individual deviations between
measurements and calculations quickly becomes impractical, and a method of compiling
these deviations into a single number is required. Standard statistical tests could be a
way to perform these evaluations, e.g., a paired Student's t-test. Other methods have also
been published, for instance the use of the quantity “confidence limit” as discussed by
Venselaar and Welleweerd (2001) and Venselaar et al., (2001). The confidence limit is based
on the average (systematic) deviation between measurement and calculation for a number of
data points in a comparable situation, and the standard deviation (SD) in this average of the
differences. The confidence limit is then defined as the sum of the average deviation and 1.5
SD. The factor 1.5 is based on experience and was a useful choice in clinical practice
(Venselaar et al., 2001). The cited papers applied this concept to dose deviation, but it can
easily be expanded to both the spatial deviation and the gamma concept.
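The confidence limit itself is simple to compute; a sketch with invented deviation values:

```python
# Sketch of the confidence limit of Venselaar and Welleweerd (2001):
# |average deviation| + 1.5 x SD, evaluated for a group of percent dose
# deviations d%(i) obtained in comparable situations.

from statistics import mean, stdev

def confidence_limit(deviations, factor=1.5):
    """Confidence limit = |mean deviation| + factor x standard deviation."""
    return abs(mean(deviations)) + factor * stdev(deviations)

d_percent = [0.8, 1.2, -0.5, 0.9, 1.5, 0.3]   # illustrative d%(i) values
cl = confidence_limit(d_percent)
print(cl < 2.0)  # True: this group would meet a 2% tolerance
```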
When dose data (depth dose curves, profiles or any line between two points, 2-D and
3-D dose matrices) are determined, it is convenient to do all measurements for a certain beam
configuration at the same time. In this case, gain and offset of the scanning system can be
kept constant, thus yielding all line doses in the set related to the same relative output. A
complementary dose measurement with this system, in combination with an absolute dose
measurement, under reference conditions (10 cm depth for a field size of 10 cm x 10 cm),
connects the whole set of line dose measurements with the number of monitor units given. In
this way we have accomplished the same procedure as outlined above. The procedure to get
these data sets from a TPS varies for the different systems, and is too diverse to describe in
this booklet.
This statistical approach can be used in a number of different ways. For example, data
may be grouped according to field size, beam quality, or beam modifier (e.g., wedge filter).
Such a grouping must be done with caution, so that no systematic deviation for a small
subgroup (e.g., very small, very large or off-axis fields) is lost.
2.3 ACCEPTANCE CRITERIA FOR THE ACCURACY OF PHOTON BEAM DOSE CALCULATIONS
Following the discussions above, the acceptance level for the accuracy of dose calcu-
lations of a TPS should be around 2%. This value can be used for areas where the absorbed
dose is rather homogeneous, e.g., inside the central part of a beam. Different acceptance
criteria can, however, be formulated depending on the position in the beam. Figures 2.3 and 2.4
show the various regions that can be defined in a photon beam, incident on a homogeneous
phantom. In principle there are two areas with a homogeneous dose, well inside or far outside
the beam. In between we have the penumbra and build-up regions with a high dose gradient.
Figure 2.3 Definition of different regions in a radiation beam, based on the magnitude of the dose gradient, for which different acceptance criteria for the accuracy of dose calculations, δ, are valid.
Assuming that the penumbra is defined as the distance between the 20 and 80% dose
level, and recognizing that the penumbra width for modern linacs is of the order of 5 mm, we
get a dose gradient of 12% per mm. We can also define the build-up region as a high dose
gradient area. Because of these differences in dose gradient, low and high dose gradient areas
should have different acceptance criteria. Using the concept of Venselaar et al. (2001) we can
divide a photon beam into four regions according to:
1. Points along the central axis of the beam beyond the depth of dose maximum: low
dose gradient area.
2. Points on and off the central axis in the build-up and penumbra region. This region
includes also points in the proximity of interfaces: high dose gradient area.
3. Points inside the beam (e.g., inside 80% of the geometrical beam) but off the central
axis: low dose gradient area.
4. Points outside the geometrical beam or below shielding blocks, jaws, MLC, etc.
where the dose is lower than, for instance, 7% of the central axis dose at the same
depth: low dose gradient area.
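As an illustration only, an off-axis point of a profile might be assigned to regions 2-4 from its dose level and local dose gradient. The thresholds used here (the 7% out-of-field level mentioned above, and a 3% per mm gradient cut-off) are assumptions made for the example; region 1 is identified geometrically rather than from dose values.

```python
# Hypothetical helper assigning an off-axis profile point to one of the
# regions of Venselaar et al. (2001); region 1 (central axis beyond the
# depth of dose maximum) is identified geometrically and is not covered.

def classify(dose_percent, gradient_percent_per_mm,
             out_of_field_threshold=7.0, gradient_cutoff=3.0):
    """Return the delta region (2, 3 or 4) for an off-axis point, given
    its dose (% of central axis dose) and local dose gradient (%/mm)."""
    if dose_percent < out_of_field_threshold:
        return 4        # outside beam edges: low dose, low dose gradient
    if abs(gradient_percent_per_mm) >= gradient_cutoff:
        return 2        # build-up or penumbra: high dose gradient
    return 3            # inside the beam, off axis: high dose, low gradient

print(classify(50.0, 12.0))  # 2 (penumbra)
print(classify(95.0, 0.5))   # 3 (inside the beam)
print(classify(3.0, 0.2))    # 4 (outside the beam edges)
```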
Figure 2.4 Illustration of the different regions where the criteria δ1 - δ4 and RW50 can be applied to compare calculated and measured values of a depth-dose curve (upper panel) and a beam profile (lower panel) (adapted from Venselaar et al., 2001). For the profiles, data is normalised at the central beam axis, although it is recommended in this booklet to use a normalisation to dose per monitor unit under reference conditions.
Acceptance criteria for the accuracy of dose calculations for each region will be noted
as δ1, δ2, δ3, and δ4. In the low dose gradient areas it is convenient to trace dose deviations,
while in the high dose gradient a distance-to-agreement is a better parameter to express
deviations between measurements and calculations. This is consistent with the recommen-
dation above of using the γ-analysis when evaluating dose distributions.
The δ values for these four regions, given as confidence limits as proposed by
Venselaar et al., and adopted in this booklet, are given in Table 2.1. Figure 2.4 shows in
which regions of a depth dose curve or a beam profile the different tolerances can be applied.
The δ values given in Table 2.1 could also serve as guidelines to set the dose criteria, Δd, and
the spatial criteria, Δr, of the gamma index given in Eq. 2.3. The regions of the beam defined
in Figures 2.3 and 2.4 are interconnected in the gamma index concept, but different high and
low dose gradient regions might have different δ values. In addition, δ values can be applied
to any dose distribution, including distributions for multi-beam composite plans. However, it
is still useful to differentiate low and high dose regions, which could be done by setting a
threshold, e.g., 7% of the reference dose in the open field region, below which the dose devi-
ation is normalised to this reference dose rather than to the local dose.
Also given in this table are the radiological width, i.e., the width of a profile measured at
half its height relative to the value at the beam axis, and the beam fringe, i.e., the distance
between the positions of the 50% and 90% values (relative to the maximum of the profile)
in the penumbra.
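How these two parameters might be extracted from a profile is sketched below; the helper functions and the idealised profile are our own constructions for illustration, not part of the booklet's protocol.

```python
# Sketch: radiological width (RW50) and beam fringe (50-90%) obtained from
# a profile (% of central axis dose versus off-axis position in mm) by
# linear interpolation between measured points.

def level_crossings(xs, Ds, level):
    """Positions where the profile crosses the given dose level."""
    out = []
    for x0, x1, D0, D1 in zip(xs, xs[1:], Ds, Ds[1:]):
        if min(D0, D1) <= level <= max(D0, D1) and D0 != D1:
            out.append(x0 + (x1 - x0) * (level - D0) / (D1 - D0))
    return out

def rw50_and_fringe(xs, Ds):
    """RW50: width at 50% of the central axis value.
       Fringe: 50% to 90% distance on the left-hand side of the profile."""
    x50 = level_crossings(xs, Ds, 50.0)
    x90 = level_crossings(xs, Ds, 90.0)
    return max(x50) - min(x50), min(x90) - min(x50)

# Idealised symmetric profile:
xs = [-60, -54, -50, -46, -40, 0, 40, 46, 50, 54, 60]
Ds = [2.0, 20.0, 50.0, 90.0, 100.0, 100.0, 100.0, 90.0, 50.0, 20.0, 2.0]
rw50, fringe = rw50_and_fringe(xs, Ds)
print(rw50, fringe)  # 100.0 4.0
```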
It should be noted that the recommendations for δ4 (outside the beam edges in the low
dose/low dose gradient region), might be too permissive for IMRT applications. Often a con-
siderable part of the dose delivery, either to the target volume or to an organ at risk, results
from the addition of a number of dose distributions outside the beam edges. The dose
calculation for that part of a beam should then be more accurate, depending on the IMRT
technique and the position of the organ at risk relative to the target volume.
Table 2.1 Tolerances δ, given as the confidence limit, for the dose deviation d%(i), for the various
regions in a photon beam (adapted from Venselaar et al., 2001). These data should be
considered as recommendations for good clinical practice and not as absolute values valid
under all circumstances.

Region                                        Homogeneous,      Complex geometry         More complex
                                              simple geometry   (wedge, inhomogeneity,   geometries****
                                                                asymmetry, blocks/MLC)

δ1       Central beam axis data -             2%                3%                       4%
         high dose, low dose gradient
δ2*      Build-up region of central beam      2 mm or 10%       3 mm or 15%              3 mm or 15%
         axis, penumbra region of the
         profiles - high dose, high
         dose gradient
δ3       Outside central beam axis region -   3%                3%                       4%
         high dose, low dose gradient
δ4**     Outside beam edges - low dose,       30% (3%)          40% (4%)                 50% (5%)
         low dose gradient
RW50***  Radiological width - high dose,      2 mm or 1%        2 mm or 1%               2 mm or 1%
         high dose gradient
δ50-90   Beam fringe - high dose,             2 mm              3 mm                     3 mm
         high dose gradient

*    One of the two tolerance values should be used.
**   These figures are normalized to the local dose, to the dose at a point at the same depth on the
     central beam axis, or to the open part of the field in case of blocked fields (in brackets).
***  The percent figure should be used for field sizes larger than 20 cm.
**** More complex geometry is defined as a combination of at least two complex geometries.
3. ANATOMICAL DESCRIPTION
In this chapter tests are given that can be helpful to achieve confidence in the correct
representation of the anatomical description of patients introduced into the TPS. The set of
tasks described in this chapter is dependent on the structure of the specific TPS. Basically all
tests should be performed for all systems, but for some systems they have to be adapted or
may even be non-applicable. Particularly in this chapter, and in Chapter 4 (Beam
Description), a large number of tests that are presented should be performed by the vendor
of a specific system during the acceptance testing (See Appendix A.3). Depending on the
installed configuration and/or connectivity between the various imaging modalities and the
TPS, additional tests should be performed by an individual user. A number of the tests pre-
sented in this chapter concern the verification of features/functionality present in the system
and need not necessarily be performed at the user’s site, but may also be demonstrated
during user training sessions elsewhere.
Before starting the tests described in this chapter, it is assumed that the connectivity
between the various imaging modalities and the TPS is working according to DICOM stan-
dards. Because the infrastructure of each department is different, it is impossible to design
connectivity tests valid for all configurations present in radiotherapy departments, and no
suggestions for such tests are given.
3.1. BASIC PATIENT ENTRY
In this section tests are proposed for checking the response of the system in situations
related to the uniqueness of patient identification, the association of new data with existing
patient records, and the retrieval of patient data from the system. As a minimum we assume
that the following information about the patient is available: ID-number, family name, target
volume and, in the case of a CT scan, a CT ID-number. Test the following situations:
a. Introduce two patients with the same last name but different ID-numbers in the TPS.
Patient 1: Xxxxxx Yyyyyy: 20021972
Patient 2: Xxxxxx Yyyyyy: 11041970
Record the system response: not possible/no warning, not possible/warning,
possible/warning, possible/no warning. In the case of a warning, record the warning.
b. Introduce two patients with different last names but the same ID-number in the same
directory of the TPS.
Patient 1: Xxxxxx Yyyyyy: 20021972
Patient 2: Zzzzzz Wwwww: 20021972
Record the system response: not possible/no warning, not possible/warning,
possible/warning, possible/no warning. In the case of a warning, record the warning.
c. Introduce the same patient twice in the same directory of the TPS.
Patient 1: Xxxxxx Yyyyyy: 20021972
Patient 2: Xxxxxx Yyyyyy: 20021972
Record the system response: not possible/no warning, not possible/warning,
possible/warning, possible/no warning. In the case of a warning, record the warning.
d. Enter a second anatomical description for an existing patient. Verify that the TPS warns
about overwriting and/or associating a new description with an existing case.
Introduce into the system the same patient with two different target volumes.
Patient 1: Xxxxxx Yyyyyy: 20021972 breast
Patient 2: Xxxxxx Yyyyyy: 20021972 cervix
Record the system response: not possible/no warning, not possible/warning,
possible/warning, possible/no warning. In the case of a warning, record the warning.
e. Check how easy (number of steps, questions, warnings from TPS) it is to delete a patient
from the TPS.
f. Check the limitations of the TPS concerning moving/copying patients/plans from one
directory to another.
3.2 IMAGE INPUT AND USE
Image acquisition for treatment planning is usually performed by computed tomo-
graphy (CT) and magnetic resonance imaging (MRI). In special cases, positron emission
tomography (PET) and single photon emission computed tomography (SPECT) are addi-
tionally used (Grosu et al., 2000, Henze et al., 2000, Levivier et al., 1995, Pirotte et al.,
1997). Each imaging modality is applied for specific reasons: the CT dataset may be mapped
to the electron density of the tissue and is needed to calculate dose distribution within the
patient. MRI, on the other hand, provides superior soft tissue contrast and is used to de-
lineate the tumour and the organs at risk. PET and SPECT images can be used additionally
to measure the relative metabolic activity for detecting differences in tumour regions or dif-
ferentiating tumour from necrosis. These complementary aspects can be integrated into treat-
ment planning by correlation of the images from different modalities.
As an essential prerequisite for the treatment planning process and, in particular, for
the correlation process, the images that are transferred into the TPS must reflect the real
geometry of the patient, i.e., possible distortions of the images have to be minimized. In addi-
tion, the accuracy of the correlation also depends on the correct functioning of the multi-
modality registration software included in the treatment planning program such as image
fusion (co-registration). Therefore, appropriate tests are indispensable prior to the first appli-
cation to patients. The aim of this part is to propose some tests mostly related to CT imag-
ing, however also including a geometry test for MRI, PET and SPECT.
For the tests presented in this chapter, related to the correct use of CT data, two types
of phantoms are recommended. Examples of such phantoms are described in this
section, but other phantoms available in a department, having a similar construction and well-defined dimensions, are also suitable for this purpose. It should be noted that CT data are not
only used for describing patient anatomy, but also used to define electron densities of the
various tissues to be irradiated. For that purpose a table that converts HU values into elec-
tron densities is present in the system. Phantoms containing non-water equivalent materials
can then be used for testing the correctness of this curve.
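The HU-to-electron-density table mentioned above is typically a set of measured calibration points between which the TPS interpolates. A minimal sketch of such a piecewise-linear lookup is given below; the calibration points are purely illustrative example values, and each department must determine its own curve from phantom measurements.

```python
# Sketch of a HU -> relative electron density (RED) conversion curve,
# as stored in a TPS. The calibration points are hypothetical examples.
from bisect import bisect_right

# (HU, relative electron density) pairs -- illustrative values only
CALIBRATION = [(-1000, 0.0), (-500, 0.5), (0, 1.0), (1000, 1.6), (3000, 2.7)]

def hu_to_red(hu: float) -> float:
    """Piecewise-linear interpolation on the calibration curve."""
    hus = [p[0] for p in CALIBRATION]
    if hu <= hus[0]:
        return CALIBRATION[0][1]
    if hu >= hus[-1]:
        return CALIBRATION[-1][1]
    i = bisect_right(hus, hu)
    (h0, r0), (h1, r1) = CALIBRATION[i - 1], CALIBRATION[i]
    return r0 + (r1 - r0) * (hu - h0) / (h1 - h0)
```

A phantom-based check of the curve then reduces to comparing `hu_to_red` of the measured HU values against the known relative electron densities of the phantom inserts.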
• Phantom A: Made from PMMA (Perspex, Plexiglas, Lucite) slabs, having outer dimen-
sions of 30cm x 15cm x 20cm, on which lead markers are attached. These spherical lead
markers are partly placed inside and partly sticking out of the phantom. The number of
lead markers should be sufficient to identify the dimensions and the orientation of the
phantom; head-feet and anterior-posterior direction markers are helpful. The
markers can be located at the edges of the phantom or arranged in some geometrical
shape, and should be easy to recognize on a radiographic film or a digitally reconstructed
radiograph (DRR). Figure 3.1 presents an example of a phantom with 12 lead markers
asymmetrically located at the edges of the phantom.
Figure 3.1 Schematic view of PMMA phantom (A) with lead markers
• Phantom B: This phantom contains 3 types of inhomogeneous internal structures, which
are helpful for testing both structure-related and inhomogeneity-related issues: water-
equivalent, lung-equivalent and bone-equivalent. These structures have cylindrical shape
with a diameter of 3cm, 4cm and 3cm, respectively (Figure 3.2). If this phantom is used
for the determination of the relation between CT-number and relative electron density, the
values of the relative electron density of these structures should be known. There are sev-
eral types of these phantoms commercially available. In some treatment planning systems
a more modern approach to dose calculation is to use CT numbers to map to a tissue type,
which fully specifies the chemical composition and density. That information is then used
to determine the cross-section/interaction coefficient data required for dose calculations.
For that option in a TPS it is necessary that the atomic composition and specific density
of the phantom material and its internal structures be known.
Figure 3.2 Schematic view of the inhomogeneous phantom (B)
• An increasing number of phantoms are becoming commercially available to verify the
delivery of advanced treatment techniques such as IMRT. These phantoms can, some-
times, also be used for volume evaluation and beam alignment as described in this chap-
ter (e.g., Craig et al., 2001).
• Another phantom useful for multi-image-modality tests may consist of PMMA cylinders
containing smaller empty cylinders to be used as marker structures such as ‘target points’
or ‘linear tubes’ (Karger et al., 2003; Figure 7.4). These empty cylinders can be filled with
liquids that are specific for the respective imaging modality, e.g., iodine contrast agent for
CT and MRI, and radioactive nuclides for PET and SPECT. Iodine contrast agent is clearly
visible in CT as well as in MRI, so the phantom does not have to be refilled between con-
secutive CT- and MRI-measurements.
3.2.1 Image input
a. Identity consistency of scans. Follow the instructions below and check the results:
• Introduce into the TPS a patient data set with duplicated slices and verify that the TPS
gives appropriate warnings.
• Introduce into the TPS a data set in which the CT field-of-view is changed and verify
that the TPS gives appropriate warnings and/or verify that the dimensions of the phan-
tom are correct in the different slices (e.g., phantom A).
• Introduce into the TPS CT scans with the same names and patient-ID and verify that the
TPS gives appropriate warnings.
b. Scan parameters – varying slice thickness.
Generate at the CT scanner, and introduce into the TPS, scans with varying distances
between slices, for instance: 10, 5 and 7mm. The TPS should warn about these differences
and either prevent further data processing or use the correct distances.
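The spacing check in test b amounts to inspecting the inter-slice distances derived from the slice z-positions. A minimal sketch of such a consistency check, with hypothetical function names and tolerance, might look like this:

```python
# Sketch of a slice-spacing consistency check of the kind a TPS import
# module should perform. Positions are in mm; names are hypothetical.
def check_slice_spacing(z_positions, tol=0.01):
    """Return (uniform, gaps): the inter-slice distances and whether
    they are all equal within tol. Non-uniform spacing should trigger
    a warning in the TPS."""
    zs = sorted(z_positions)
    gaps = [round(b - a, 2) for a, b in zip(zs, zs[1:])]
    distinct = sorted(set(gaps))
    uniform = all(abs(g - distinct[0]) <= tol for g in distinct)
    return uniform, gaps
```

For the 10/5/7mm example above, `check_slice_spacing([0, 10, 15, 22])` reports non-uniform spacing, which is exactly the situation the TPS should flag.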
c. Two different sets of images for the same patient
Introduce into the TPS two different sets of CT slices, with the same z-value but with
different image resolution and/or different field-of-view for the same patient. Check if
the TPS allows the user to select the CT set for importing.
d. Maximum number of CT slices
Check in the documentation the maximum number of CT slices permitted by the TPS.
Check how the system works if the amount of input images is larger than this number,
for instance if there is a possibility of selecting images.
e. Patient orientation
Make CT scans of a phantom with various orientations that are used clinically (head
first, feet first, prone, supine, left and right side).
Import the files into the TPS and check the representation of the patient orientation.
The phantom needs to be properly marked (e.g., phantom A).
f. Integrity of simultaneous input
Check that simultaneous input of data of the same patient from different devices, e.g.,
a digitiser, film scanner, CT scanner or MR scanner, by different users of these channels, does not cause interference in the TPS.
g. Check if the same or different users can work on a patient file that is already open.
h. Geometric integrity of slices
Scan a phantom of well-defined dimensions and check its dimensions in the TPS (e.g.,
phantom A).
i. CT number representation
Check the representation of CT numbers (Hounsfield Unit values). Use a phantom con-
sisting of different densities simulating water-equivalent, lung-equivalent and bone-
equivalent tissues (e.g., phantom B). Import the CT data into the TPS, measure the HU
values and compare them with the original numbers (measured on the CT scanner). CT numbers in
the TPS can be obtained by making a profile, or by defining a small volume in the different regions and determining the corresponding maximum, minimum or average CT
number (e.g., phantom B).
j. Text information
Check if on all graphics, beam displays, and other windows on the screen, the patient’s
last name and the ID-number are shown. Also plan number and trial number (if appli-
cable) and version number of the TPS should be visible. Test how many characters can
be used in the last name of a patient and check if after saving and reloading the same
characters are shown.
3.2.2 Contour input
a. Check the accuracy of the manual digitiser input:
Enter test contours drawn onto a sheet of paper into the TPS and check the position of
all corner points, preferably using the measuring tools provided within the software.
Verify also the dimensions of the plot of the digitised structures.
b. Check the accuracy of the film scanner input:
Enter test contours into the TPS and check the dimensions along the scan direction and
at both sides along the transport direction, preferably using the measuring tools pro-
vided within the software.
Verify also the dimensions of the plot of structures introduced by the scanner into TPS.
It should be noted that the dimensions of the contours of a scanned image are depend-
ent on the choice of gray-scale control setting (window and level).
3.2.3 Image use
a. Geometry of reconstructed images:
Scan the phantom (in normal head-first orientation) and reconstruct sagittal, coronal
and oblique planes and check the geometry by comparison with the distance between
markers placed in the phantom (e.g., phantom A).
b. Orientation of reconstructed images:
Check if the reconstructed planes in the previous test have the correct orientation (use
a beam projection with selected gantry and table angles).
c. Grey-scale representation:
Compare a hard–copy made at the CT-scanner with the display of the TPS using the
same window/level setting (if possible use a film scanner to enter a hard-copy into the
TPS).
d. Grey-scale representation of reconstructed images:
Reconstruct oblique planes (if option available) that are slightly tilted with respect to
coronal and sagittal orientations and check the grey-scale interpolation (phantom A).
e. Windowing and zooming images
Check if windowing and zooming functions work together.
3.2.4 Co-ordinate system of images
• Introduce a phantom co-ordinate system for the phantom that is to be used to test the
geometry. Its origin should be defined in such a way that it is well identifiable in the real
phantom as well as detectable on its images. Use of special markers may be helpful.
• Refer the position of the real objects (markers) to this phantom co-ordinate system.
• Introduce a co-ordinate system for the images of the phantom. This system should have
its origin at the same position as in the real phantom.
• Refer the position of the images of the same objects (markers) to this image co-ordinate
system.
• Determine the vectorial difference between the objects and the images of the objects and
compare it with tolerance values.
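The vectorial-difference step above can be sketched as a short calculation: take each marker position in the phantom co-ordinate system, the corresponding position read from the images, and compare the length of the displacement vector with a tolerance. Function names and the 1 mm tolerance are hypothetical.

```python
# Sketch of the vectorial-difference test: real marker positions vs.
# positions measured on the images, both in the common co-ordinate
# system (mm). Names and tolerance are hypothetical.
import math

def vector_deviation(real_xyz, image_xyz):
    """Length of the displacement vector between a real marker and its image."""
    return math.dist(real_xyz, image_xyz)

def check_markers(real, image, tol_mm=1.0):
    """Per-marker deviations and whether all are within tolerance."""
    devs = [vector_deviation(r, i) for r, i in zip(real, image)]
    return devs, all(d <= tol_mm for d in devs)
```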
3.3 ANATOMICAL STRUCTURES
3.3.1 Definition of anatomical structures
a. Unique identification:
Check how an anatomical structure is related to a specific patient plan and whether there
is a risk of confusion.
Check if you can use different structure sets within the same plan.
Check if it is possible to make an image fusion and if the right structure set is used.
Examples:
– Define a new structure with the name that already exists for another structure.
– Define identical names for two structures on different data sets.
b. Unique properties:
If your TPS needs structures with special properties, like external contour, check the TPS
response to the attempt to define a second structure with a different name but also with
the same property (e.g., phantom B).
c. Maximum number of contours per anatomical structure:
Check that a clinically relevant number of contours (for instance 60) per CT data set can
be defined for each structure. Test for each CT slice that a clinically relevant number of
contours per slice (for instance 25) can be defined. The response of the system when
these numbers are reached or exceeded should be tested (e.g., phantom B).
3.3.2 Automated contouring
Make a CT-scan of a test phantom with well-defined geometry containing materials
with densities similar to lung, bone and soft tissue. Let the TPS automatically generate exter-
nal and internal structures. Contour all structures and check by eye whether each contour follows the structure (e.g., phantom B).
Verify the printed contours of internal structures.
a. Correct geometry in automated contouring
Measure along the axis the diameter of the well-defined structures. Measure left and right
separately to detect shifts.
b. Threshold changing for automated contouring
If there is a possibility to change a threshold in automated contouring option, repeat test
3.3.2.a for different threshold values.
3.3.3 Manual contouring
These tests are both valid for manual contouring on CT images (using the computer
mouse) or via a digitiser.
a. Contouring direction
If contours may be entered clockwise (CW) and counter-clockwise (CCW), enter the
same contour in both directions. Check by visual inspection or volume analysis that the
volumes do not depend on whether the contours are entered CW or CCW, also when
mixed (e.g., phantom B).
b. Maximum number of points
For each contour there will be a maximum number of points that may be defined and the
response of the system when this number is reached or exceeded should be tested.
3.3.4 Manipulation of contours
The tests described in this section concern the verification of the correct handling of
contours of structures in individual planes.
a. Add a margin to a contour
The 2-D option of contour expansion should be verified. A cylindrical (e.g., phantom B)
and/or triangular shape (with a sharp angle defined by 3 points in a single plane) is recommended. Expand a drawn contour by 5mm or 10mm. Measure the distance between the
original and expanded contours.
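Measuring the distance between the original and expanded contours can be done by sampling points on the expanded contour and computing their shortest distance to the original polygonal contour; the deviation from the requested margin is then the error. A minimal sketch, with hypothetical function names and contours given as (x, y) point lists in mm:

```python
# Sketch of the 2-D margin check: distance from points on the expanded
# contour to the original closed polygonal contour. Names hypothetical.
import math

def dist_point_segment(p, a, b):
    """Shortest distance from point p to segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def dist_to_contour(p, contour):
    """Shortest distance from point p to a closed polygonal contour."""
    n = len(contour)
    return min(dist_point_segment(p, contour[i], contour[(i + 1) % n])
               for i in range(n))

def margin_errors(original, expanded, margin_mm):
    """Deviation of each sampled expanded point from the requested margin."""
    return [abs(dist_to_contour(p, original) - margin_mm) for p in expanded]
```

Applied to a contour expanded by 5mm, `margin_errors(original, expanded_points, 5.0)` should return deviations close to zero everywhere (except near sharp corners, where the expansion algorithm's corner handling matters).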
b. 3-D structure expansion
Due to different possibilities of contour expansion algorithms in a TPS, the method of
handling such a 3-D expansion needs to be tested. The structure must be limited in the
cranio-caudal direction (e.g., sphere, pyramid or cone, cylindrical section). Expand a pre-
defined contour using a 5-mm or 10-mm margin in the direction normal to the surface of
that structure. Measure the distance between the original and expanded contours in
various planes and check the correct shape of the surface of the expanded structure in a
qualitative way in 3-D display under various angles of view, and in a quantitative way in
orthogonal planes. If negative margins are allowed, make a contraction after an expansion,
using the same margin, and check that the original structure is restored.
c. Check: correcting, adding, deleting and copying of contours and structures.
d. Validation
Any change made in the contours of structures should be detected by the TPS. Check if
an existing 3-D surface of that structure is invalidated and preferably recalculated auto-
matically. Use the above-defined structure, change on two slices the structure consi-
derably and check on the 3-D view if the 3-D surface is reconstructed properly (e.g., phan-
tom B). If dose computation had been previously performed, check that it is invalidated
after contour modification.
e. Check special TPS tools for example: the point reduction option (e.g., phantom B).
f. 2-D contour transfer
Construct the contours in various planes of another data set, make hardcopy plots
and check the dimension of the contours (phantom B). (This test is only useful when
matching is used clinically). Some new TPSs have a tool where contours can be drawn on
sagittal and coronal views and then derived on the axial slices; this option should then also
be verified.
g. Hidden contours (if option available)
Check the number of hidden contours and the copy/move/delete option from visible to
hidden and vice-versa.
h. Bifurcated structures
Check if the system can handle bifurcated structures, i.e., if it is allowed to contour dif-
ferent areas in the same slice belonging to the same (target) volume, for instance nodes in
H&N treatments.
i. Interpolated structures
Construct the structure using the interpolation option in the TPS and check the dimensions
of the contours. Use any phantom with structures such as sphere, oblique cylinder or
horseshoe-shaped oblique structure.
j. Bolus definition and related options
See section. 4.1.10.
3.3.5 Construction of volumes
The tests described in this section concern the verification of the correct handling of
volumes of structures in a set of CT slices. These 3-D aspects of the TPS are extremely
important in modern radiotherapy, and correct handling of volumes, including the expansion
of structures along the direction normal to the surface of that structure, should therefore be
verified.
a. Volume computation
Let the TPS compute a phantom surface from the derived contours. Check the correct
shape of the surface. Let the TPS compute the volumes inside the structure’s surface and
compare with the exact value. Expand and contract structures with a margin for instance
of 5, 7, 10, 12 and 15mm (e.g., phantom B).
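A simple independent estimate of the volume inside a stack of contours, useful for comparison with the TPS value, is the sum of the planar contour areas (shoelace formula) multiplied by the slice spacing. This is only the simplest slab model, not the surface reconstruction an actual TPS uses; names are hypothetical.

```python
# Sketch of an independent contour-based volume estimate: shoelace area
# per slice times slice spacing. A TPS uses a more refined surface
# model (e.g., with capping); this slab estimate serves as a cross-check.
def polygon_area(points):
    """Shoelace formula for a closed 2-D contour given as (x, y) pairs."""
    n = len(points)
    s = sum(points[i][0] * points[(i + 1) % n][1]
            - points[(i + 1) % n][0] * points[i][1] for i in range(n))
    return abs(s) / 2.0

def stack_volume(contours, slice_spacing_mm):
    """Volume (mm^3) of a structure defined by one contour per slice."""
    return sum(polygon_area(c) for c in contours) * slice_spacing_mm
```

For a phantom structure of known dimensions the slab estimate can be computed exactly and compared with both the TPS value and the analytical volume.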
b. Construction of a volume from a set of CT slices with non-regular spacing
Construct surfaces from contours in planes that are not regularly spaced. For this purpose
delete a number of contours of an original regular set, generate the surface again and com-
pare with the original surface. Use, for instance, the original data set with 1mm space
between slices (e.g., phantom B) and delete some slices. Calculate the volume in all si-
tuations and compare.
c. Construction of a volume from a set of CT slices with non-sequential order of slices
during contouring
Define a cylindrical structure from contours drawn on each second slice and go back to
define the structure on the other slices. Check the computed volume (e.g., phantom B).
d. Construction of a volume from non-axial contours
If possible repeat test 3.3.5a, for orientations other than axial (e.g., phantom B).
e. Capping option (automatic interpolation at the beginning and the end of the available
contours)
Check if the calculated volume agrees with the expected volume in case the distance
between slices is changed.
f. Volume of subtracted regions
Subtract one well-defined volume from another. Compare volumes of a subtracted region
calculated by the TPS with those calculated manually (e.g., phantom B).
g. Boolean option
Check the possibility of adding or subtracting parts of an existing volume to create new
volumes. Verify the dimension of the new volume (e.g., phantom B).
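The expected results of tests f and g can be checked by hand on voxelized structures, where subtraction, union and intersection map directly onto set operations on voxel indices. A minimal sketch, assuming 1 mm^3 voxels and hypothetical box dimensions:

```python
# Sketch of Boolean structure operations on voxelized structures: each
# structure is a set of voxel indices (1 voxel = 1 mm^3, hypothetical).
def make_box(x0, x1, y0, y1, z0, z1):
    """Voxel set for an axis-aligned box."""
    return {(x, y, z) for x in range(x0, x1)
                      for y in range(y0, y1)
                      for z in range(z0, z1)}

outer = make_box(0, 10, 0, 10, 0, 10)   # 1000 voxels
inner = make_box(2, 5, 2, 5, 2, 5)      # 27 voxels, fully inside outer

subtracted = outer - inner              # Boolean subtraction
combined = outer | inner                # Boolean union
overlap = outer & inner                 # Boolean intersection
```

Here the subtracted region must contain 1000 - 27 = 973 voxels, a manual value against which the TPS result can be compared.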
4. BEAM DESCRIPTION
Beam definition and its use are critical items for the accurate design of a treatment
plan. In this chapter a set of performance tests related to beam definition, beam display and
beam geometry are proposed. The vendor of the TPS should do the majority of this set of
tests during the acceptance-testing phase (Appendix 3). Some of these tests depend on how
carefully the customization process was performed. This is the direct responsibility of the
medical physicist in charge of the TPS, whether the customization is done by the physicist
or by the vendor.
4.1 BEAM DEFINITION
4.1.1 SAD, SSD and field size
Position a beam with rectangular shape on a 3-D phantom with its isocentre 10 cm
below the surface, but not at the geometrical centre of the phantom (see Figure 4.1).
Figure 4.1 Schematic view of beam position relative to the phantom.
a. Field size
Measure the field size at the isocentre level in an axial and sagittal plane and verify the
agreement between measured and stated TPS beam co-ordinates. Define at least three dif-
ferent field sizes, for instance: 5cm x 20cm, 7.5cm x 10cm, and 15cm x 30cm. Check the
name given to each field dimension (X or Y). In some TPSs the ruler tool cannot be used
in BEV display. In this case the verification of the field size in both dimensions (X and Y)
can be done using a collimator rotation.
b. Divergence
Check the correctness of divergence by measurement of the field size at the surface, repeat
it for various field sizes, for instance: 5cm x 20cm, 7.5cm x 10cm, and 15cm x 30cm.
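Because the field diverges linearly from the source, the expected field size at the surface follows directly from similar triangles: FS(surface) = FS(isocentre) x SSD / SAD. A one-line sketch (assuming SAD = 100 cm, as for the geometry of Figure 4.1 with the isocentre 10 cm deep):

```python
# Sketch of the divergence check: field size scales linearly with
# distance from the source. SAD of 100 cm is assumed here.
def field_size_at_surface(fs_iso_cm, ssd_cm, sad_cm=100.0):
    """Expected field size at the surface from similar triangles."""
    return fs_iso_cm * ssd_cm / sad_cm

# Isocentre 10 cm deep (SSD = 90 cm): a 20 cm field at the isocentre
# should measure 18 cm at the surface.
```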
c. Isocentre position
Verify the co-ordinates and graphical position of the isocentre relative to the origin of
the phantom. Check the name and direction of all co-ordinate axes and their consistency
with the anatomical co-ordinate system.
d. SSD or depth verification
Verify the SSD or depth as stated by the TPS.
e. Repeat the tests 4.1.1 a, b, c, d for SSD>100 cm (for instance 130cm)
Plot at least three cases from the above-described situations to verify the agreement
between plotted geometries and geometries presented on the screen.
4.1.2. Gantry rotation
Position a beam on a phantom with a gantry angle of 30°, 60° and 135° and verify
the direction of the rotation, and the agreement of the beam angle on the plot with the TPS
beam co-ordinates. Verify also the correspondence between the gantry angle and the orien-
tation of the small patient model on the screen. Suggestion: use an asymmetric field in this
test and display isodose curves as well.
4.1.3. Collimator rotation
Position a beam perpendicularly incident on a phantom and rotate the collimator to
an angle of 30°, 60° and 135°. Verify the collimator angle and its rotation direction using
the beam display in an axial and sagittal plane and the BEV-display (Figure 4.2). Suggestion:
use an asymmetric field in this test and display isodose curves as well.
Figure 4.2 Position of the field for the verification of the field shape in BEV-display and in the axial planes.
4.1.4 Table movement
Position a beam perpendicularly incident on a phantom and rotate the table over an
angle of 30°, 60° and 315°. Verify the table angle and its rotation direction using the beam
display in an axial and sagittal plane and in the BEV-display. Suggestion: use an asym-
metric field in this test and display isodose curves as well.
4.1.5 Jaw definition and beam co-ordinates
Check the consistency of the beam co-ordinates of the separate jaws (i.e., X1, X2, Y1,
Y2) and check their values at the definition level (in most cases at isocentre level), as well
as their setting limitations (including over-travel) for open and wedged fields using the beam
display in an axial and sagittal plane and the BEV-display (Figure 4.3).
Figure 4.3 Jaw setting for the beam co-ordinates test, in BEV-display.
4.1.6 Multi-leaf collimator definition
Check the number of leaves, leaf position and numbering, leaf direction (X or Y), leaf
width, over-travel and maximum leaf position, preferably by using the BEV-display. Verify
the exact definition of the leaf position in case of rounded leaf ends. Plot the position of the
leaves for a number of different leaf settings, including some extreme positions. Check how
the (backup) jaw positions are changed according to modification of leaf positions.
Suggestion: display isodose curves as well.
4.1.7 Wedge and block insertion
Check the possible directions of wedge insertion and the agreement of the wedge dis-
play with the insert direction. Check if the wedge direction is changed after collimator rota-
tion and if the insertion direction remains the same. Suggestion: check consistency with iso-
dose curve display.
Use an asymmetric block arrangement (Figure 4.4) and check the blocking tray rota-
tion as the collimator rotates. Check the position of the field limits defined by the edge of the
blocks in BEV-display on different axial and sagittal slices.
Figure 4.4 Asymmetric block arrangement for block test, in BEV-display.
4.1.8 Consistency check of beam co-ordinate system
Verify the consistent use of beam co-ordinates defined in test 4.1.1 and their limita-
tions throughout all parts of the planning process.
4.1.9 Warnings and error messages
To ensure the safety of the patient, as well as to avoid collisions, check the adequacy
of warnings and error messages following beam input with values that violate the actual
limits of the accelerator:
• Limits for gantry, collimator and table angles.
• Field size limits (x and y) for wedged and non-wedged fields.
• Over-travel for asymmetric fields and MLC.
• Insertion direction for the wedge.
• Anti-collision warnings, especially in case blocks are applied.
Check how they behave when beam parameters are changed (e.g., field size, wedges,
blocks, MLC, etc. when machine, SSD or SAD is changed).
4.1.10 Bolus definition
a. Check whether the generated bolus completely covers the beam aperture projection on the
patient skin.
b. Check the density and height of the bolus throughout the full beam aperture using CT-
number and distance measurements.
c. Check whether the bolus is separated from the body contour of the patient.
d. Check manual bolus insertion.
4.2 BEAM DISPLAY
In the following section a water phantom of relevant patient size is used, e.g., width
40 cm, height 25 cm and length 30 cm, having two cylinders (Figure 4.5) with two different
densities (for instance lung density: 0.3 g/cm3 and bone density: 1.8 g/cm3) symmetrically
placed around the origin. In order to simulate lung and bone, the cylinders should have a
minimum diameter of 5 and 1 cm, respectively.
Figure 4.5 Water phantom with two cylinders having different density placed symmetrically around the origin.
4.2.1 Beam’s-eye-view (BEV)-display
Set up a symmetric co-axial beam (10cm x 10cm) with its isocentre coincident with
the phantom origin. Select a gantry angle of 0o. Verify the BEV-display with respect to the
phantom external contours, the cylinders, field dimensions and field position. This test must
be repeated for a few different source-surface distances.
Set up an asymmetric beam with its isocentre coincident with the phantom origin.
For the next set of tests, select a gantry angle of 30°, co-axial, and a wedge. Perform tests
4.2.2 – 4.2.7 for that geometry.
Verify the plot of the BEV-display of the phantom.
4.2.2 Beam position and shape
Verify the beam position and shape (beam axis, divergence lines and aperture) and
wedge direction in an axial and coronal plane through the isocentre relative to the phantom
external contours and cylinders.
4.2.3 Beam position in BEV
a. Verify the beam position in BEV, 3-D-axonometric and 3-D-projection view relative to
the phantom external contours and cylinders.
b. Verify the agreement of the beam position between the digitally reconstructed radi-
ograph (DRR) and the BEV.
4.2.4 Block position in BEV
a. Define a block manually, using BEV field co-ordinates in the beam. Verify the block pre-
sentation using the BEV display options.
b. Verify the projection on CT slices of blocks inserted by the BEV option.
In both cases plot and verify the block definition in BEV.
4.2.5 MLC-shaped field
Define an MLC field manually, using BEV co-ordinates, and verify the MLC presen-
tation in the various display types. Repeat the tests 4.2.2, 4.2.3, and 4.2.4. Special attention
should be paid to the correctness of the ‘AP/PA’, ‘cranial/caudal’ and ‘left/right’ indicators in
these displays. In addition verify the plotted MLC field.
4.2.6 Bolus position
Verify the display of the bolus on the patient’s surface in BEV, transversal, frontal and
sagittal views, and 3-D displays in case the bolus switch is ‘on’. Verify whether the bolus is
removed if the bolus is switched off. Suggestion: check also the modification of dose distri-
bution as the bolus is switched on and off, globally or for individual beams.
4.3 BEAM GEOMETRY
Define in a geometric phantom (for instance a water phantom defined in the TPS) a
3-D parallelepiped, conic or spherical target volume and check the following functionality:
4.3.1 Automatic block and auto-leaf positioning
Set up a symmetric co-axial beam at 0° gantry angle and define an irregular and/or
“extreme” geometrical (for instance triangle) field shape using a margin defined in BEV.
a. Verify the position of the blocks, in an axial and coronal plane.
b. Verify the position of the leaves of the MLC in an axial and coronal plane.
c. Verify for the MLC, the inner, middle and outer position, or other method (e.g., least
squared) for settings of the leaves.
d. Verify for the MLC the backup jaw position.
4.3.2 User-defined block
Set up a symmetric co-axial beam with gantry, collimator and table angles set to 0°
and define a block manually using predefined co-ordinates. Verify the correct conversion of
these co-ordinates in a BEV-plot at a fixed plot distance and all beam displays mentioned
previously.
4.3.3 DRR: linearity and divergence
Make CT scans of a phantom with well-defined markers, for instance phantom A, as
described in Section 3.2. At least 8 markers should be located on the surface of the phantom.
Create DRR images at different SSD (for instance 90 and 100cm). The markers should be
visible on the DRR images. Measure the distances between the markers. Compare the dis-
tances between the marker positions on the DRR with the actual distances between the mar-
kers. (Note: in some TPSs the ruler tool cannot be used in DRR display).
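The comparison in this test follows from projective magnification: a marker separation in a plane at source distance d_marker projects onto the DRR plane at source distance d_plane enlarged by the factor d_plane / d_marker. A minimal sketch of the expected value, with all distances hypothetical:

```python
# Sketch of the DRR divergence check: a marker separation projects onto
# the DRR plane magnified by the ratio of source distances.
def projected_distance(actual_mm, d_marker_cm, d_plane_cm):
    """Expected marker separation on the DRR from projective magnification."""
    return actual_mm * d_plane_cm / d_marker_cm

# Example: markers on the phantom surface at 90 cm from the source,
# DRR reconstructed at the 100 cm plane -- a 50 mm separation should
# appear as 50 * 100 / 90 mm, i.e. about 55.6 mm.
```

Comparing the distances measured on the DRR against these expected values verifies both linearity and divergence.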
4.3.4 Input, change and edit functions
Verify the accuracy of functions such as: move beam, oppose beam, z-co-ordinate, shift
block, copy block, read block from BEV-file, BEV-beam position, mirror option, BEV lat-
eral/ longitudinal rotation, move of the patient co-ordinate system and the consistency
between results of moves at different TPS levels using co-ordinates and distance measure-
ments. If a dose distribution had been previously computed, check that such modifications
result in proper cancellation of computations that are no longer valid.
5. DOSE AND MONITOR UNIT CALCULATION
Methods for checking and verifying the dose calculations in a TPS are described in
this chapter. The accuracy of the final result depends not only on the dose calculation
algorithm, but also on the quality of the beam data input. Tests of both aspects determining
the final result of a dose calculation will therefore be presented in this chapter. It should be
noted that these tests should be performed in close co-operation with other users and/or the
vendor of the system, to avoid duplication and to understand the observed phenomena.
Suggestions for the division of tests between vendor and user are discussed in Appendix 3.
The results of the tests described in this chapter should be in agreement with the
recommendations on accuracy requirements discussed in Chapter 2.
In some systems it is possible to choose the medium to which the dose should be
specified, for instance a specific type of reference tissue. Also with Monte Carlo calculations
such a choice is possible. In this booklet we will, however, specify all dose values as dose to
water.
The tests described in this chapter are the starting point of a QA programme of the
dose and monitor unit calculation part of a TPS. In many situations a much more extensive
series of tests is necessary to verify the dose calculations of the irradiation techniques applied
in a specific hospital. On the other hand, some tests may not be very relevant for an
institution and can therefore be skipped. It is, however, strongly recommended to start a QA
programme of a TPS with the tests and configurations described in this booklet.
Any verification of dose calculation requires that some beam data have been
previously entered into the TPS. These data could consist of a “generic beam data set”,
generated either from published reports (AAPM 1995, Venselaar and Welleweerd 2001) or
from another set of measured reference data (or Monte Carlo computations). The TPS
vendor should supply the generic beam data set with proper reference to how it was
obtained. It is meant to be used as a demonstration package to prove the possibilities and the
accuracy of dose and monitor unit computation. It must, however, never be used for clinical
purposes; for those, a new set of beam data representative of the user’s beam characteristics
must be obtained.
To check the accuracy of dose computation in the user’s beam for a large range of
situations representative of clinical practice, it would be necessary to perform extensive
measurements. This turns out to be impracticable, and one therefore has to accept the results
derived from the generic data set and assume that they can be extrapolated to the user’s beam
data. Alternatively, the “Quality Index” (QI) methodology provides a solution that is easily
applicable to situations where the dose is modified by some perturbation, such as changes
in the shape of the patient surface (Caneva et al., 2000) or the presence of inhomogeneities.
The QI methodology requires that a correction factor (CF) has been previously obtained
from the ratio of a given situation (e.g., with inhomogeneity) to a reference set-up (e.g., a
water phantom) for a wide range of beam energies expressed by their Quality Index
(the ratio of TPRs at 20 and 10cm depth, respectively, for a fixed source-detector distance).
Provided that the test is properly designed, CFs can be made independent of the type of
linear accelerator and will vary smoothly as a function of QI only. Once this variation is
available, it is easy for individual users to perform a dose calculation with their own beam
data for both the reference and the modified set-up and to extract a CF. This CF value should
coincide with the CF value interpolated at the user’s beam QI. Compared to the use of
generic data, the QI methodology has the advantage that it is applied after beam modelling
and is therefore more representative of the data used clinically.
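The QI methodology can be sketched numerically as follows. The tabulated (QI, CF) pairs and the example dose values below are invented for illustration only; real CF tables must come from published QI data such as Caneva et al. (2000).

```python
# Sketch of the QI methodology: interpolate a published CF-vs-QI curve
# at the user's beam QI and compare with the locally computed CF.
# The (QI, CF) pairs and dose values are hypothetical illustrations.
import numpy as np

qi_table = np.array([0.66, 0.70, 0.74, 0.78])      # Quality Index values
cf_table = np.array([1.042, 1.035, 1.029, 1.024])  # hypothetical CFs


def local_cf(dose_modified, dose_reference):
    """CF from the user's own TPS: modified set-up over reference set-up."""
    return dose_modified / dose_reference


user_qi = 0.72
cf_expected = np.interp(user_qi, qi_table, cf_table)
cf_computed = local_cf(dose_modified=1.87, dose_reference=1.815)  # example doses
print(abs(cf_computed - cf_expected))  # difference should be small
```

A CF computed from the user's own beam data that deviates noticeably from the interpolated curve value would point to a modelling problem for that perturbation.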
5.1 BEAM CHARACTERISATION SET
5.1.1 Data input process
In order to model a photon beam in a TPS, a set of basic beam data, i.e., the beam
characterisation set as specified by the vendor, needs to be determined1 and entered into the
system by the user or with the help of the vendor. If for one reason or another the user’s
measurement procedure deviates from the recommended method, it is recommended to
discuss this issue with the vendor. Immediately after the beam modelling process the
correspondence between calculated and input data should be verified for a limited number
of fields, the beam verification set2. The comparison should include depth doses, beam
profiles, and if applicable the reference (or calibration) dose per monitor unit. A number of
treatment planning systems nowadays have special tools to compare measured data with
calculated results. Besides the agreement between measurements and calculations, the
internal consistency of the measured dataset should also be verified, because errors might
be introduced, for instance, if the accelerators are not tuned properly or if different persons
collect data over a prolonged time period.
In order to prove the correctness of a specific algorithm, the vendor can use a set of
benchmark (generic) data provided by, for instance, the NCS (Venselaar and Welleweerd,
2001) or by a users group of that system. In this way the vendor can demonstrate the global
accuracy of the algorithm, but the user still has to verify the TPS for the specific conditions
(slightly different beam fits and photon beam energy) encountered in his/her clinic.
1 Usually measured beam data, but some data related to machine configuration, different from those required for beam characterisation (Chapter 4), may also be required for the TPS to behave accurately.
2 The beam verification set may be identical to the beam characterisation set, but is usually extended and includes several other situations, e.g., extended SSD and rectangular fields.
5.1.2 Documentation
Proper documentation of the characterisation set is needed so that other users can
reproduce the entire data manipulation process. It should give a detailed description of how
the data were obtained, implemented and stored, and should include:
• A complete written description of the dates of measurements and the names of the
investigators.
• The methods of measurement, the equipment and software used for data collection, and
the methods used for data manipulation.
• A copy of the original data (before manipulation).
• A copy of the final results (printout, plot, digital files, etc.).
• A description of where the data are filed.
• A logbook in which all events, measurements and changes concerning the TPS are
documented.
5.2 DOSE CALCULATION
In a TPS, dose distributions are calculated for complicated irradiation techniques
using algorithms that may sometimes be sophisticated or, on the other hand, deliberately
approximate in order to achieve acceptable calculation times. It is important that the user
makes sure that the dose calculations are accurate for simple situations such as the following:
• Square and elongated fields.
• Different SSD.
• Beam modifiers: wedges, blocks and trays.
• Asymmetric collimator settings.
• MLC-shaped fields.
Therefore a series of measurements should be performed for these conditions (some
of them being part of the commissioning of a new accelerator). This is particularly important
for the beam modifiers, for which accurate results will be obtained only if the proper
parameters have been defined in the TPS library. For elongated fields or for different SSDs,
it may be acceptable to calculate the expected dose values manually, using the equivalent
square approximation or the inverse square law.
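These two manual checks can be written down directly. Both the equivalent-square (area over perimeter) rule and the inverse square law are standard relations, used here only to generate expected values for comparison with the TPS.

```python
# Manual checks for elongated fields and non-standard SSDs:
# - equivalent square via Sterling's 4 * area / perimeter rule
# - dose scaling with distance via the inverse square law


def equivalent_square(length, width):
    """Side (cm) of the equivalent square: 4 * area / perimeter."""
    return 4.0 * (length * width) / (2.0 * (length + width))


def inverse_square(dose_ref, d_ref, d_new):
    """Scale a dose from source distance d_ref to d_new (same units)."""
    return dose_ref * (d_ref / d_new) ** 2


print(equivalent_square(20.0, 5.0))        # 8.0 cm
print(inverse_square(1.0, 100.0, 110.0))   # about 0.826
```

For a 20cm x 5cm field, output data for an 8cm x 8cm square field should therefore give a reasonable expected value.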
The tests described in the following paragraphs are defined to test the limitations of
the dose calculation algorithms in combination with tests of the beam fits in the treatment
planning system. It is not recommended to perform all tests for all energies and all
accelerators existing in a radiotherapy department. The tests described in this chapter
concern percentage depth dose (PDD), beam profile (PRF) and off-axis factor (OAF)
calculations, and should at least be performed at the depth of dose maximum and at 5, 10
and 20cm depth. The depth
of 10cm is considered the reference depth. Generally, other parts of the beam are also of
interest in modern radiotherapy, for instance if organs at risk (OARs) are involved or for
dose-volume histogram (DVH) evaluation. Therefore, performing tests at other depths is
strongly recommended.
It should be emphasized that the tests proposed in this chapter are not meant as an
extensive testing of the algorithms implemented in the system. This is the task of the vendor
and the users group of a specific TPS, as discussed in Chapter 2. The tests given in this
chapter are meant as a minimum number of tests to be performed by the vendor during
acceptance testing, or by an individual user before applying the system clinically in his or
her institution.
The schematic illustrations given for each test are meant to help set up the geometry,
but are not to scale. The dashed line in the centre indicates the middle (0) of the phantom,
which coincides with the central beam axis. In some tests the central beam axis is indicated
by a dotted line.
5.2.1 Open square fields
a Field size 5cm x 5cm, SSD=90cm
b Field size 10cm x 10cm, SSD=90cm
c Field size 30cm x 30cm, SSD=90cm
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF.
5.2.2 Open rectangular fields
a Field size 20cm x 5cm, SSD=90cm
b Field size 5cm x 20cm, SSD=90cm
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF.
5.2.3 Variation in SSD
Field size 10cm x 10cm, SSD=100cm and
extreme values of SSD, for instance 80cm
and 130cm, if possible.
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF.
5.2.4 Wedged square field
Field size 10cm x 10cm, wedge 60°, SSD=90cm
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF (also in
the direction perpendicular to the wedge direction).
5.2.5 Wedged rectangular fields
a. Field size 5cm x 20cm, wedge 60°, SSD=90cm
b. Field size 20cm x 5cm, wedge 60°, SSD=90cm
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF (also in
the direction perpendicular to the wedge direction).
5.2.6 Field with a central block
Field size 15cm x 15cm, central block with
clinically relevant dimensions (e.g., as used
for spinal cord blocking), SSD=90cm
Calculations in the transverse plane through
the central axis of the beam: PRF.
5.2.7 Blocked field
Field size 30cm x 30cm, block of size: 20cm x 30cm
or 10cm x 30cm, SSD=90cm
Calculations in the transverse plane through
the central axis of the beam: PRF.
5.2.8 Inhomogeneities
Lung/air: for instance a cylinder of a low-density material.
Field size: 10cm x 10cm
Bone: for instance a cylinder of a bone-like material.
Field size: 10cm x 10cm
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF.
These materials might be available for other purposes, e.g., for QA of CT scanners. In order
to verify dose calculations inside a lung, it is recommended to apply a solid phantom having
lung-like material in which an ionisation chamber or TLDs can be inserted.
5.2.9 Oblique incidence
Field size 10cm x 10cm, open, SSD=90cm,
gantry angle 315°
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF.
5.2.10 Missing tissue
Field size 10cm x 10cm, open,
SSD=90cm, gantry angle 0°
Part of the beam is outside the phantom.
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF.
5.2.11 Off-axis square field
Field size 10cm x 10cm, open,
SSD=90cm, off-axis 5cm
Calculations in the transverse plane through
the central axis of the beam: PRF, OAF.
5.2.12 Off-axis elongated field
Field size 2cm x 20cm, open,
SSD=90cm, off-axis 10cm
Calculations in the transverse plane through
the central axis of the beam: PRF, OAF.
5.2.13 Wedged off-axis field
Field size 10cm x 10cm, wedged,
SSD=90cm, off-axis 5cm
Calculations in the transverse plane through
the central axis of the beam: PRF, OAF.
5.2.14 Off-plane field
Field size 10cm x 10cm, open,
SSD=90cm, off plane 5cm (i.e., off-axis
in both directions)
Calculations in the transverse plane through
the central axis of the beam: PRF, OAF.
5.2.15 Square MLC field
Field size 10cm x 10cm, open, SSD=90cm
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF.
5.2.16 Off-axis square MLC field
Field size 10cm x 10cm, open,
SSD=90cm, off-axis 5cm
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF.
5.2.17 MLC-shaped field
“Banana-shaped” field enclosed only by leaves, with
the backup jaws opened as far as possible, SSD=90cm
Calculations in the transverse plane through
the central axis of the beam: PDD, PRF.
Several options for this test are suggested:
- to do a scan in the middle of the leaves
- to do a scan at the border of two leaves
- to do a scan perpendicular to the leaf direction
5.2.18 Block and tray insertion
a. The aim of this test is to check whether different methods of block digitising influence
the dose calculation. The same tests can be used for MLC-shaped fields.
Calculations in the transverse plane through the central axis of the beam: PDD, PRF.
The following steps are suggested:
1. Create an open beam with a field size of 15cm x 15cm (or 20cm x 20cm) perpendicular
to a phantom surface.
2. Add four blocks to the beam described above.
The blocks are digitised separately, as
indicated in Figure 5.1.
(• Start of the drawing of the blocks)
Figure 5.1 Four blocks digitised separately.
3. Design the same arrangement of blocks, for the
same field, but now digitise the blocks as a
single block described by two loops in the
same direction (Figure 5.2).
(· Start of the drawing of the blocks).
Figure 5.2 Four blocks digitised as a single block described by two loops in the same direction.
4. Design the same arrangement of blocks, for the
same field, but now digitise the blocks as a
single block described by two loops in the
opposite direction (Figure 5.3).
(· Start of the drawing of the blocks).
Figure 5.3 Four blocks digitised as a single block described by two loops in the opposite direction.
Compare the central axis depth dose distributions and beam profiles at various depths for the
four cases described above.
b. An additional test can be performed to check whether the value of the block transmission
influences the depth dose distribution. Calculate the central axis depth dose distribution
for the following geometries, using blocks of different transmission, e.g., 5, 50 and 90%.
Compare the results for:
- Field size 15 cm x 15 cm, open
- Field size 15 cm x 15 cm, with a small corner block
- Field size 15 cm x 15 cm, with one block
- Field size 15 cm x 15 cm, with two blocks
- Field size 15 cm x 15 cm, with four blocks.
Figure 5.4 Position of blocks to check whether the value of the block transmission influences the depth dose distribution.
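As a first-order sanity check on these results (an assumption for orientation, not the booklet's method), the primary dose beneath a block scales roughly with the transmission value, so the calculated depth doses under the blocked area should order accordingly:

```python
# First-order expectation (primary dose only, scatter neglected):
# beneath a block, the dose is roughly the open-field dose times the
# block transmission, so TPS results for 5, 50 and 90% transmission
# should scale in that proportion under the blocked area.


def expected_blocked_dose(open_dose, transmission):
    """Primary-only estimate of the dose under a block."""
    return open_dose * transmission


open_dose = 2.0  # Gy at the point of interest, illustrative value
for t in (0.05, 0.50, 0.90):
    print(t, expected_blocked_dose(open_dose, t))
```

Deviations from this simple scaling are expected (scatter from the unblocked part of the field contributes under the block), but gross disagreement would signal a problem with the transmission parameter in the TPS library.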
c. Verify that the correct tray factor is entered in the TPS. This is of particular concern if
different types of trays are used in a department.
5.3 2-D AND 3-D DOSE VERIFICATION
Besides the verification of the dose at specific points or along lines (i.e., depth dose
curves or beam profiles), it is essential that dose values are verified over the whole volume
of interest. This can be done by comparing calculated with measured dose distributions in
several relevant planes. The position of isodose lines should also be checked in these planes.
Verification of dose-volume histograms is another way of verifying the 3-D dose
distribution, but it should include verification of the volumes under investigation.
(Panels of Figure 5.4: a. field with a small corner block; b. field with one block; c. field with two blocks; d. field with four blocks.)
5.3.1 2-D dose distribution
For a verification of 2-D dose distributions define the following beams:
• open square fields 5cm x 5cm, 10cm x 10cm and 30cm x 30cm (see 5.2.1)
• open rectangular field 5cm x 20cm (see 5.2.2),
• wedged square field 10cm x 10cm (see 5.2.4),
• blocked field 30cm x 30cm (see 5.2.7),
• MLC-shaped field (see 5.2.17).
Tests should be performed at SSD = 90cm and a depth of 10cm. As a minimum, the 95%,
90%, 80%, 50% and 20% isodoses should be measured in a water phantom using the
“tracking” or “reconstruction from multiple profiles” option. 2-D dose distributions can also
be measured by means of film dosimetry in a water or solid phantom. In this case the
procedures for accurate film analysis, i.e., calibration and conversion from optical density
to dose, should be carefully established.
In order to analyse the results of the tests, as a minimum, manual matching of the two
isodose distributions (measured and calculated) should be performed, and the maximum
deviations between isodoses should be reported in mm. If computerized matching is used,
a function that combines dose differences and distance discrepancies (in regions with a large
dose gradient), such as the gamma evaluation method discussed in Section 2.2.3, can be
used. If γ-value analysis can be performed, it is recommended to apply the criteria presented
in Table 2.1. Mean and maximum γ-values should be reported, together with the fraction of
points showing a γ-value larger than unity (using “gamma-area histograms”). An example of
such a test is given in Chapter 7.
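A minimal 1-D version of the γ evaluation can be sketched as follows. The 3%/3mm criteria are placeholders (use the values of Table 2.1 in practice), and this implementation searches all calculated points rather than a local neighbourhood, which is adequate only for short profiles.

```python
# Minimal 1-D gamma evaluation sketch. Criteria of 3% / 3 mm are
# illustrative only; doses are normalised to 1.0, positions in cm.
import numpy as np


def gamma_1d(x, measured, calculated, dose_crit=0.03, dist_crit=0.3):
    """Gamma index per measured point: minimum generalised distance
    to the calculated curve in (position, dose) space."""
    gammas = []
    for xm, dm in zip(x, measured):
        gam = np.sqrt(((x - xm) / dist_crit) ** 2 +
                      ((calculated - dm) / dose_crit) ** 2)
        gammas.append(gam.min())
    return np.array(gammas)


x = np.linspace(0.0, 10.0, 101)
measured = np.exp(-0.05 * x)
calculated = np.exp(-0.05 * x) * 1.01  # 1% systematic difference
g = gamma_1d(x, measured, calculated)
print(g.max() <= 1.0)  # a 1% dose difference passes a 3% criterion
```

Mean and maximum γ and the fraction of points with γ > 1 can then be reported from the returned array, as recommended above.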
It should be noted that the determination of interpolated isodoses, reconstructed from
a series of measurements using the scanning water phantom software, also introduces
uncertainties. For that reason some companies recommend a profile-by-profile comparison,
in which the values of the calculated profiles at the intersections with the isodoses are
compared with the displayed isodoses at those intersections.
5.3.2 Dose-volume histogram
Several tests have been described to ensure that the 3-D dose distribution over a
structure is accurately binned into a DVH. Craig et al. (1999) applied a manual verification
of a cumulative DVH of the dose distribution of a wedged beam over a cubic volume.
Multiplying the area defined by the intersection of the volume and an isodose line by the
axial length of the volume yields the volume receiving at least the dose of that isodose line.
In the NCS Report (NCS 2004) various tests are described using either a user-defined dose
distribution or a single beam. In a simple test, the DVH is calculated in a rectangular
structure along the beam axis, with a cross-section of 1cm x 1cm, starting at the depth of
dose maximum and with a height of approximately 20cm, irradiated with a photon beam.
The DVH can then be compared with that calculated from the corresponding percentage
depth dose table. Tests are given not only for cube-like structures but also for a sphere,
which is less sensitive to grid-based artefacts.
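The PDD-based check can be sketched as below, with a hypothetical exponential PDD standing in for the measured table; the voxelisation into 1 cm3 bins along the 1cm x 1cm column is an illustrative choice.

```python
# Sketch of the NCS-style check: build a cumulative DVH for a
# 1 cm x 1 cm column along the beam axis directly from a percentage
# depth dose table and compare with the DVH reported by the TPS.
# The exponential PDD below is a hypothetical stand-in for real data.
import numpy as np

depths = np.arange(0.5, 20.5, 1.0)        # cm: 20 voxel centres from dmax
pdd = 100.0 * np.exp(-0.04 * depths)      # hypothetical PDD along the column


def cumulative_dvh(doses, voxel_volume, dose_levels):
    """Volume (cm3) receiving at least each dose level."""
    return [voxel_volume * np.sum(doses >= level) for level in dose_levels]


levels = np.arange(0, 101, 10)            # % dose bins
volumes = cumulative_dvh(pdd, voxel_volume=1.0, dose_levels=levels)
# At 0% the whole 20 cm3 column is counted; the volume then falls
# monotonically with increasing dose level.
```

The TPS-generated cumulative DVH for the same column should reproduce this curve within the chosen dose-bin and sampling resolution.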
The DVH tests presented in this booklet consist of checking the consistency between
the 3-D dose distributions displayed as isodose lines, assumed to be the reference, and the
DVH (Panitsa et al., 1998). Since DVH computations are highly sensitive to parameters
such as the density of sampling points (including slice spacing) and the size of the dose bins,
the tests must be performed after setting these parameters to values close to those used in
clinical practice. It is therefore assumed that the construction of volumes by the TPS has
already been tested in the way described in Section 3.3.5.
Low dose gradient: Construct a homogeneous cubic virtual phantom (e.g., 25 x 25 x
25 cm3) and include a central 5 x 5 x 5 cm3 cubic structure of the same density. Use a 10cm
x 10cm beam normal to the top phantom surface, with its isocentre at the centre of the cubic
structure. Display isodose lines in a plane through the beam axis and in planes perpendicular
to the beam axis through the top and bottom parts of the structure (Figure 5.5 a). Determine
the dose at the centre and at each corner of the (square) section of the structure in each plane.
Then compute the DVH in the structure. The computed volume should be very close to
125 cm3 (depending on the slice spacing), and the minimum and maximum dose of the
DVH should be consistent with those found in the computed planes (maximum in the top
plane, minimum in the bottom plane).
High dose gradient: Create a beam opposed to the previous one and shift the two
beams laterally by 5 cm in order to have the isocentre at the beam edge (Figure 5.5 b).
Measure in the axial plane through the beam axis the distance d (in cm) between the 80%
and 20% isodose line (normalised at beam isocentre). Check the dose uniformity along the
beam edges using a sagittal plane through the centre of the cubic structure. Compute the
DVH in the structure. The DVH should be linear between the 20 and 80% dose values and
the volume difference between these two dose levels should be equal to 25 x d (cm3).
Figure 5.5 a) Example of the geometry for a DVH test in a low dose gradient area; b) example of the geometry for a DVH test in a high dose gradient area.
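The 25 x d relation can be verified numerically with a simple model: a linear dose fall-off of width d across the 5cm x 5cm x 5cm cube, whose cross-section is 25 cm2. The grid resolution and the linear-penumbra shape below are modelling assumptions for illustration.

```python
# Numerical illustration of the 25 x d rule: model the penumbra as a
# linear dose fall-off of width d across the cube and bin the voxel
# doses into a DVH. Grid resolution is an illustrative assumption.
import numpy as np

d = 1.0      # cm, measured 80%-20% distance
side = 5.0   # cm, cube edge (cross-section 25 cm2)
n = 500      # voxels across the cube

x = (np.arange(n) + 0.5) * side / n - side / 2.0   # lateral positions, cm
# Linear penumbra centred on the beam edge (x = 0): 50% at the centre,
# clipped to the 20-80% range outside the fall-off region.
dose = np.clip(50.0 - 60.0 * x / d, 20.0, 80.0)    # % of isocentre dose

voxel_vol = side * side * (side / n)               # cm3 per lateral slab
vol_20_80 = voxel_vol * np.sum((dose > 20.0) & (dose < 80.0))
print(vol_20_80)  # close to 25 * d = 25 cm3
```

The DVH computed by the TPS for this geometry should show the same linear behaviour between the 20% and 80% dose levels.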
5.4 MONITOR UNIT CALCULATION
The calculation of the number of monitor units (MUs) for the individual fields of a
treatment plan is included in most modern TPSs. It is essential for the user of a TPS to
understand the principles of the MU calculation algorithm. Generally the MU calculation is
closely linked to the calculation of the relative dose distribution. In some systems the dose
distribution is directly calculated as dose per MU, or dose per fluence, from which a
representation of relative dose and the number of MUs are derived. As outlined in Section
2.2, the tests of the dose calculation presented in this booklet concern the verification of the
absolute dose, i.e., the dose per MU, after correction for any variation in the output of the
accelerator. Following this concept, the number of monitor units for a certain geometry is
implicitly also checked.
In many institutions a check of the number of MUs, independent of the MU
calculation provided by the TPS, is performed. Such a procedure can be considered an
essential part of a routine clinical QA programme, and several software packages are
commercially available for this purpose. In ESTRO Booklet No. 3 (Dutreix et al., 1997) and
Booklet No. 6 (Mijnheer et al., 2001), the formalism as well as numerical data are provided
for such an independent MU calculation method. In these documents, examples can be
found of how to perform this type of independent MU calculation.
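In the spirit of those booklets, an independent MU check reduces to dividing the prescribed dose by a product of dosimetric factors. The factor names and numbers below are illustrative placeholders, not data from Booklets 3 or 6.

```python
# Very simplified independent MU check for a fixed-SSD set-up.
# All default values (1 cGy/MU calibration, factors of 1.0) are
# illustrative placeholders, not booklet data.


def monitor_units(dose_gy, dose_per_mu=0.01, output_factor=1.0,
                  pdd_fraction=1.0, tray_factor=1.0, wedge_factor=1.0):
    """MU = prescribed dose / (dose per MU at the calibration point,
    corrected by output factor, PDD fraction and modifier factors)."""
    return dose_gy / (dose_per_mu * output_factor * pdd_fraction *
                      tray_factor * wedge_factor)


# Example: 2 Gy at 10 cm depth in a 10cm x 10cm field, PDD(10) = 67%.
print(round(monitor_units(2.0, pdd_fraction=0.67)))  # 299 MU
```

A result from such a hand formalism that differs by more than a few percent from the TPS value warrants investigation before treatment.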
6. PERIODIC QUALITY CONTROL
The sources of error and inaccuracy in the functioning of computerised treatment
planning systems may be divided into several groups: the hardware, the database of basic
dosimetric input data for radiation beams, the geometric input data for treatment machines,
the system for acquiring anatomical patient data, the dose calculation algorithms used in a
particular system, the patient database, and the archiving of the results of a treatment
planning exercise. All elements require thorough testing, but some tests may be performed
only once while others should be repeated periodically.
Periodic quality control of particular parts of a TPS is an important aspect of the
quality assurance process of a TPS, for safety and security reasons. This chapter consists of
tests that should be performed periodically, at regular time intervals. If a TPS is able to
perform a checksum test of its data files, such a test should be performed as often as desired
(Section 6.2); the need for testing separate parts of the software is then less urgent. If it is
not possible to do a checksum test, the frequency at which a specific test should be
performed depends strongly on the local situation. For that reason no general
recommendations are formulated for the frequency of the tests described in this chapter.
Some tests related to the proper functionality of peripheral devices used for data input,
software and output devices are also presented.
In some countries guidelines for periodic QC of a TPS are available, or must even be
followed for legal reasons. The tests described in this chapter are not meant to replace these
national guidelines, but are intended as an addition to them.
6.1 DATA INPUT PROCESS
The correct functionality of all devices related to the data input process and of the
network connections should be checked. Depending on the specific TPS configuration,
different devices need to be tested. Some of the periodic tests proposed in this section have
already been described in Chapter 3: Anatomical Description.
6.1.1 Digitiser
A simple geometrical figure, such as a square, rectangle or triangle, should be entered
via a digitiser and then printed back on a printer or drawn on a plotter. The input and output
figures should be compared and the accuracy of the input-output devices assessed.
Recommended reproducibility: 1mm.
Alternatively, perform test 3.2.2 a. The vertical and horizontal scales should be tested
over 10cm and should agree to better than 1mm.
6.1.2 Film scanner
Objects of specific dimensions on a film, such as a square, rectangle or triangle, are
scanned. The size displayed by the TPS is then compared with the real one.
Recommended reproducibility: 1mm. Alternatively, perform test 3.2.2 b.
6.1.3 CT data
a. Scan a phantom with well-known outer contours and known electron densities, e.g.,
phantom B from Chapter 3. Enter the data into the TPS. Compare the geometries and
electron densities with the actual data.
Recommended reproducibility:
- geometry: 1mm
- if electron density < 1.5 (relative to water): 0.05
- if electron density > 1.5 (relative to water): 0.1
b. Geometric integrity of slices. Scan a phantom of well-defined dimensions and check its
dimensions in the TPS using a central field-of-view (e.g., phantom A).
Recommended reproducibility: 1mm.
6.1.4 MR data
Scan a phantom with well-known outer dimensions including some inhomogeneities,
e.g., the phantoms described in Chapter 3. Enter the data in the TPS. Compare the geometry
with the actual data. Recommended reproducibility: 2mm.
6.1.5 Integrity of simultaneous input
Check that simultaneous input (digitiser, film scanner, CT, MR) by different users on
these channels does not cause interference in the TPS. See test 3.2.1 f.
6.2 SOFTWARE
Most modern treatment planning systems are able to perform a checksum test. Such
a checksum program compares the current executable files with reference files created at
the time of commissioning of the TPS and updated periodically. Performing a checksum
test is therefore one of the most powerful and efficient tests of the software of a TPS. Tests
of TPS software also include tracing problems related to dose calculation. As part of a
periodic quality control programme, verification of treatment plans of standard techniques
is therefore recommended in some institutions, even if a checksum test is available. It should
be noted that the operating system takes care of some of the tests presented in this section.
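A checksum comparison in the sense described above can be sketched as follows, assuming a table of reference digests was recorded at commissioning time; the file paths and the choice of SHA-256 are illustrative.

```python
# Sketch of a checksum comparison for TPS executable and data files.
# Assumption: reference digests were stored at commissioning time;
# SHA-256 and the dict-based reference table are illustrative choices.
import hashlib
from pathlib import Path


def file_checksum(path):
    """SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(reference):
    """reference: dict mapping file path -> digest recorded earlier.
    Returns a dict of path -> True/False for files that still exist."""
    return {p: file_checksum(p) == digest
            for p, digest in reference.items() if Path(p).exists()}
```

Any `False` entry (or a reference file that no longer exists) would then trigger the more detailed software tests described in this section.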
6.2.1 MU calculation
Check the MU calculation for some geometries presented in Chapter 5:
5.2.11 Off-axis square field
5.2.12 Off-axis elongated field
5.2.13 Wedged off-axis field
5.2.14 Off-plane field
6.2.2 Standard treatment techniques
Check the dose distributions for some standard treatment techniques:
a. Head & neck – 2 fields (opposing lateral or oblique, wedged fields; bolus)
b. Lung/oesophagus – 2 or 3 fields (oblique A-P, wedged, open fields)
c. Breast – 2 fields (tangential, wedged fields)
d. Pelvis/prostate – 3-field or box technique (oblique, wedged, open fields).
6.2.3 MLC-shaped field
Check the MLC-shaped field as described in test 5.2.17.
6.3 DATA OUTPUT PROCESS
The following tests are related, or similar, to tests described elsewhere in this booklet.
6.3.1 Printing/plotting devices
a. On each treatment plan, either printed or plotted, two scales are required in both
directions (horizontal and vertical), with centimeter marks over at least 10cm. All the
marks shall be verified with a ruler; they must be at the right place (i.e., no distortion and
no displacement). A test plan can be made for a phantom of well-known geometry (e.g.,
phantom B from Chapter 3). The phantom should be digitised, verified on the screen, then
plotted and verified on paper. Recommended reproducibility: 1mm.
b. Print or draw on a plotter the geometries from test 6.1.1. The input and output figures
should be compared and the accuracy of the input-output devices assessed.
Recommended reproducibility: 1 mm.
c. Plot and verify the BEV display from test 4.2.1. Recommended reproducibility: 1mm.
d. Plot isodose distributions of at least 3 beam geometries and compare them with the dis-
play on the screen. See tests 5.2 and 5.3. Recommended reproducibility: 1mm.
6.3.2 Block cutting device
The correct transfer of information from the TPS to the block-cutting device should
be tested. The most clinically relevant shapes of blocks and compensators, such as squares,
rectangles and triangles, should be sent to the block-cutting device and their sizes examined.
Recommended reproducibility: no difference.
6.3.3 Treatment plan transfer
The correct transfer of information from a TPS to a linear accelerator and other
hardware, such as a record-and-verify system, needs to be tested. Use the plans/fields from
tests 6.2.2 and 6.2.3 to check that all information is sent correctly.
7. EXAMPLES OF TESTS
In this chapter, results will be presented of examples of the tests described in
Chapters 3 to 5. Although the details may differ from those given in these chapters, the
examples show how the tests can be performed in practice, and how they can be analysed
when comparing calculations from a TPS with measurements. Different members of the
QUASIMODO group, having different TPSs and using different measuring equipment,
have recently performed these tests. These examples are therefore representative of the
accuracy of state-of-the-art (non-IMRT) treatment planning systems. They illustrate the
enormous possibilities of a modern 3-D TPS, but also the limitations of the algorithms of
some of these systems.
The examples of tests given in this chapter have the same numbers as used in the
corresponding chapters, where a more complete description of each test can be found.
Although many more examples could be given, we have chosen from each chapter a limited
number of representative tests, which may in addition have some interesting features.
7.1 TESTS FROM CHAPTER 3: ANATOMICAL DESCRIPTION
Test 3.2.1 Image input a: Identity consistency of scans
Different fields-of-view within one CT data set may give wrong dimensions of the
phantom/patient. A CT data set with slices of two different fields-of-view was introduced
into a TPS. The TPS gave no warning. The dimensions of the phantom were measured with
the ruler tool of the TPS. Figure 7.1 shows that, due to the changes in the field-of-view of
slices z=2 and z=3, the system reports wrong dimensions of the phantom in slice z=3.
Figure 7.1 CT data of phantom A with slices having different fields-of-view.
Test 3.2.1 Image input b: Scan parameters – varying slice thickness
System A: A set of CT scans with variable slice thicknesses of 4 and 10mm was
transferred to the TPS. The system did not give any warning or comment. The construction
of the volume was done correctly.
System B: A set of CT scans with variable slice thicknesses of 2 and 5mm was
transferred to the TPS. The system did not give any warning or comment. The construction
of the volume was done correctly.
Test 3.2.1 Image input i: CT number representation
CT data of a cylindrical phantom (AAPM CT Performance Phantom) were imported
into a TPS. CT numbers in the TPS, obtained by making profiles and determining the
maximum, minimum and mean CT number in a small region of interest (ROI), were
compared with the original numbers measured on the CT scanner. In the example shown in
Figure 7.2, good correspondence was observed.
Figure 7.2 CT scan of a cylindrical phantom (AAPM CT Performance Phantom) with the number of Hounsfield Units (HU) in a region of interest (ROI) represented by a TPS. The pixel statistics are: min. 127.0, max. 157.0, mean 139.3, and a standard deviation of 5.5 HU. The number at the CT workstation is 134.
Test 3.2.3 Image use a: Geometry of reconstructed images
CT data of a cylindrical phantom, having a number of parallel inserts, were imported
into a TPS. The distance between these inserts was measured in reconstructed transversal,
sagittal and coronal planes. As shown in Figure 7.3, the distance between the inserts is
maintained in the reconstructed images.
Figure 7.3 Reconstructed transversal, sagittal and coronal planes of CT scans of a cylindrical phantom, as represented by a TPS. Indicated are corresponding distances between inserts.
Test 3.2.4 Co-ordinate system of images
A schematic view of the phantom used for this test is shown in Figure 7.4. This phantom
is made for use in combination with a stereotactic localiser for stereotactic imaging. It
is particularly useful for CT/MR image-modality tests. The PMMA cylinders contain small
cylindrical cavities filled with a liquid, serving as markers (also referred to as ‘target points’).
Iodine contrast agent was used as the liquid for both CT and MR. The five target points
shown in Table 7.1 were used for the comparison of the co-ordinate values. The origin of the
co-ordinate system was that of a stereotactic frame to which the phantom was attached.
Table 7.1 Given co-ordinate values in mm with respect to the origin of a stereotactic frame.
Target point #      X        Y        Z
1                 60.0      0.0     44.1
2                  0.0     60.0     74.1
3                -60.0      0.0    104.1
4                  0.0    -60.0    134.1
5                  0.0      0.0    164.1
Figure 7.4 View of the phantom with markers visible with CT and MR imaging.
The stereotactic localiser was attached to the phantom, and both CT and MR imaging
were performed. With CT imaging, three different settings were used: a) a special scan
for CT angiography with 2-mm slice thickness, b) the same scan mode with 3-mm slice
thickness, and c) a so-called “head” scan with 3-mm slice thickness. With MR imaging, two
different settings were used, one for T1-weighted imaging and one for T2-weighted imaging.
Two different MR systems (MR1 and MR2) were examined.
The co-ordinate values were determined from the different images using the
stereotactic tool of a treatment planning system. The vectorial difference, Δr, between the
given position of a marker point and the corresponding position obtained from the imaging
procedure and stereotactic co-ordinate determination must not exceed 2 mm. The following
results were obtained:
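The tolerance check described above amounts to computing, for each marker, the Euclidean distance between the given and the measured stereotactic co-ordinates. A minimal sketch, using target point 1 from Tables 7.1 and 7.2:

```python
import math

TOLERANCE_MM = 2.0

def delta_r(given, measured):
    """Vectorial difference Δr between given and measured marker positions (mm)."""
    return math.dist(given, measured)

# Target point 1: given position (Table 7.1) vs CT angiography scan (Table 7.2).
dr = delta_r((60.0, 0.0, 44.1), (61.16, 0.46, 44.28))
print(round(dr, 2), dr <= TOLERANCE_MM)  # 1.26 True
```

Applying this to all five markers of each scan reproduces the Δr columns of Tables 7.2 to 7.8.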
CT Imaging
Table 7.2 Results of a special scan for CT angiography with 2-mm slice thickness.
Target point #      X        Y        Z       Δr
1                 61.16     0.46    44.28    1.26
2                  0.77    60.90    73.88    1.20
3                -60.50     0.40   104.29    0.67
4                  0.32   -60.20   134.94    0.92
5                  0.59     0.75   164.94    1.27
Table 7.3 Results of a special scan for CT angiography with 3-mm slice thickness.
Target point #      X        Y        Z       Δr
1                 61.16     0.18    44.18    1.18
2                  0.51    60.62    74.46    0.88
3                -60.22     0.40   104.85    0.88
4                  0.32   -60.20   134.67    0.68
5                  0.59     0.47   164.68    0.95
Table 7.4 Results of a “head”-scan with 3-mm slice thickness.
Target point #      X        Y        Z       Δr
1                 61.15     0.20    43.99    1.17
2                  0.77    60.64    74.25    1.01
3                -60.52     0.42   104.63    0.85
4                  0.28   -60.18   134.41    0.45
5                  0.82     0.78   164.37    1.16
MR Imaging
Table 7.5 Results for MR1 of a T1-weighted scan.
Target point #      X        Y        Z       Δr
1                 59.80    -0.43    44.34    0.53
2                 -0.15    59.51    74.44    0.61
3                -60.96    -0.03   104.77    1.17
4                 -0.74   -60.15   133.55    0.93
5                 -0.63    -0.17   162.64    1.60
Table 7.6 Results for MR1 of a T2-weighted scan.
Target point #      X        Y        Z       Δr
1                 60.25    -0.48    44.73    0.83
2                  0.17    59.62    74.22    0.43
3                -60.86    -0.13   104.19    0.87
4                 -0.72   -60.23   133.70    0.86
5                 -0.75    -0.11   162.69    1.60
Table 7.7 Results for MR2 of a T1-weighted scan.
Target point #      X        Y        Z       Δr
1                 60.81    -0.82    43.96    1.16
2                  0.70    59.23    74.55    1.13
3                -60.14    -0.70   103.68    0.83
4                  0.05   -61.14   135.55    1.85
5                  0.40    -0.96   165.35    1.63
Table 7.8 Results for MR2 of a T2-weighted scan.
Target point #      X        Y        Z       Δr
2                  0.83    59.17    74.24    1.18
3                -60.20    -0.64   104.90    1.04
4                  0.03   -61.23   135.84    2.13
5                  0.37    -1.25   166.46    2.70
The vectorial distance Δr, shown in the last column of each table, was on average
1.0 mm and generally within the tolerance value of 2.0 mm for any CT modality. The same
imaging accuracy is achieved with MR1. However, MR2 (a more recent MR system) shows a
significantly worse accuracy, with the tolerance exceeded for two marker positions. The
differences are not related to the TPS, since the same TPS measuring tool was used in all
cases, but to the non-optimal image quality of the second MR scanner.
A note must be added at this point: for new MR systems there is a tendency to reduce
the length of the magnet coil that supplies the basic magnetic field. This may have
advantages for the diagnostic procedure; however, it is a clear disadvantage for obtaining
the distortion-free images needed for high-precision radiotherapy.
Test 3.3.5 Construction of volumes b: Construction of a volume from a set
of CT slices with non-regular spacing
In the original CT data set, the spacing between slices is 1 mm. The TPS interpolates
between slices to create a volume. For various non-regular slice spacings, the surface
of an object was constructed and the resulting volume calculated. The results in Table 7.9
show that the volume was calculated correctly in all cases.
Table 7.9 Determination of the volume of an object for a set of CT slices with non-regular spacing. “Non-regular 1/3” means that the spacing between the slices was either 1 or 3 mm.
Spacing [mm] Number of contours Volume [cm3]
1                   61                 73.86
2                   31                 73.76
3                   21                 73.69
5                   13                 73.68
Non-regular 1/3     41                 73.82
Non-regular 2/5     21                 73.71
Non-regular 2/3     28                 73.75
Non-regular 1/5     41                 73.78
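The insensitivity of the computed volume to slice spacing can be reproduced with a trapezoidal integration of the contour areas along z, which is a simple model of how a TPS stacks contours into a volume. The sketch below uses a sphere as a stand-in object (the actual test object is not described here), so the analytic volume is known:

```python
import math

def volume_from_contours(z_cm, areas_cm2):
    """Trapezoidal integration of contour areas along z, as a simple model
    of how a TPS builds a volume from stacked contours."""
    return sum(0.5 * (areas_cm2[i] + areas_cm2[i - 1]) * (z_cm[i] - z_cm[i - 1])
               for i in range(1, len(z_cm)))

R = 2.6  # cm; sphere with analytic volume 4/3*pi*R^3 = 73.6 cm3

def circle_area(z):
    """Area of the circular contour of the sphere at height z."""
    return math.pi * max(R * R - z * z, 0.0)

# Regular 1 mm spacing (53 slices) vs non-regular alternating 1/3 mm spacing.
z_regular = [-R + 0.1 * i for i in range(53)]
z_irregular = [-R]
while z_irregular[-1] < R:
    step = 0.1 if len(z_irregular) % 2 else 0.3
    z_irregular.append(min(z_irregular[-1] + step, R))

v_regular = volume_from_contours(z_regular, [circle_area(z) for z in z_regular])
v_irregular = volume_from_contours(z_irregular, [circle_area(z) for z in z_irregular])
```

Both spacings recover the analytic volume to within a few tenths of a cm3, mirroring the behaviour seen in Table 7.9.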
7.2 TESTS FROM CHAPTER 4: BEAM DESCRIPTION
Test 4.1.1 SAD, SSD and field size a: Field size
The size of various fields was measured at isocentre level in transversal and sagittal
planes using the ruler tool of the TPS and compared with the TPS beam co-ordinates. The
results shown in Table 7.10 demonstrate that the measurements are in good agreement with
the stated field sizes. The accuracy depends on the magnification factor used for the
measurements.
Table 7.10 Comparison of stated field size with field lengths measured in the TPS in transversal and sagittal planes.
Field size X x Y [cm]    Transversal / sagittal [cm]
3 x 5                    3.01 / 4.99
10 x 20                  10.001 / 20.02
30 x 10                  29.96 / 10.01
Test 4.1.3 Collimator rotation
For an asymmetric field (4cm in the X-direction and 2cm in the Y-direction) of 10cm
x 12cm, a collimator rotation of 60° was performed. The length of the projection of the
border of the field on a phantom was measured in three planes in a coronal view, and in the
BEV option of the TPS. The measured distances were compared with the actual dimensions and
are shown in Table 7.11.
Table 7.11 Results of measurements in a coronal view of a field projected on a phantom after performing a collimator rotation. No differences were observed.
Position    Actual dimension [cm]    Measurements [cm]    BEV [cm]
Z = -2.0         12.93                    12.92             12.92
Z =  0            8.00                     8.02              8.02
Z =  2.5          2.54                     2.54              2.54
Test 4.1.9 Warnings and error messages
The adequacy of warnings and error messages following beam input with values that
violate the gantry, collimator and table angle ranges was checked for two TPSs.
System A: Gantry, collimator and table rotation is possible over the full range of 0-360°.
After typing a negative gantry angle (-35°), the system does not give a warning; the angle
is set to 325°.
System B: Gantry, collimator and table rotation is possible over the full range of 0-360°.
After typing a negative gantry angle (-35°), the system gives a warning and asks for an
angle in the range 0-360°.
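The two behaviours correspond to two simple input policies: silently wrapping the angle into the machine range (a single modulo operation) or rejecting out-of-range input. A minimal sketch of both, with illustrative function names:

```python
def wrap_angle(angle_deg):
    """System A behaviour: silently map any input angle onto 0-360 degrees."""
    return angle_deg % 360.0

def validate_angle(angle_deg):
    """System B behaviour: refuse values outside the allowed range."""
    if not 0.0 <= angle_deg <= 360.0:
        raise ValueError("gantry angle must be in the range 0-360 degrees")
    return angle_deg

print(wrap_angle(-35.0))  # 325.0
```

From a QA point of view the second policy is preferable, since a silent wrap can hide a data-entry error.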
Test 4.2.1 Beam’s-eye-view (BEV)-display
The BEV-display was checked for a field of 10cm x 10cm irradiating a phantom in
which two inhomogeneous structures, consisting of lung-equivalent and bone-equivalent
material, are positioned. The isocentre coincides with the origin of the phantom. The
centre of the bone inhomogeneity lies in the isocentre plane, while the centre of the lung
inhomogeneity is at 92.7cm SAD. The BEV dimensions refer to 100cm SAD. Figure 7.5 shows
that the distance between the centre of the field and the centre of the bone inhomogeneity
agrees very well between the CT slice and the BEV. The distances between the centre of the
field and the centre and distal border of the lung inhomogeneity in the CT slice and the
BEV also correspond very well after correction for beam divergence.
Figure 7.5 CT slice and BEV of an inhomogeneous phantom irradiated by a 10cm x 10cm field. Indicated in both displays are the distances between the centre of the field and the bone- and lung-inhomogeneity.
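The divergence correction mentioned above is a similar-triangles scaling: a lateral distance measured in a plane at source distance s appears in the BEV reference plane (100 cm SAD) multiplied by 100/s. A sketch with an illustrative off-axis distance:

```python
BEV_SAD_CM = 100.0

def to_bev(lateral_cm, plane_sad_cm):
    """Project a lateral distance measured in a plane at plane_sad_cm onto
    the BEV reference plane at 100 cm SAD (divergent-beam scaling)."""
    return lateral_cm * BEV_SAD_CM / plane_sad_cm

# An example 5.0 cm off-axis distance in the plane of the lung inhomogeneity
# (92.7 cm SAD) appears slightly larger in the BEV:
print(round(to_bev(5.0, 92.7), 2))  # 5.39
```

The same factor, applied in reverse, lets distances read off the BEV be compared with distances measured in a CT slice at another source distance.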
Test 4.2.6 Bolus position
In a TPS, the constant thickness tool was used to define a bolus. A bolus layer of
height 15cm, width 10cm and thickness 2cm was generated on a curved surface. The display
of the bolus on the phantom surface was performed for a field length of 10cm. Figure 7.6
presents the drawing of the bolus on the phantom surface, as well as the results of
measurements of its thickness at several places inside the beam.
The covered surface depends on the height and width values of the bolus, which can
be defined by the user (the width by choosing a cutoff depth in the tested system). In this
system there is no option to create a bolus layer by hand: only automatically generated
bolus layers, without manual modification, are possible. A density value can be chosen,
but no grey values are visible in the CT scans. The bolus becomes visible after performing
the dose calculations and is not separated from the body contour of the patient.
Figure 7.6 Drawing of a bolus with a predefined thickness of 2cm on a curved phantom surface, as well as the results of measurements of the thickness of the bolus at several places inside the beam.
7.3 TESTS FROM CHAPTER 5: DOSE AND MONITOR UNIT CALCULATION
Test 5.2.1 Open square fields (absolute dose)
Test 5.2.2 Open rectangular fields (absolute dose)
Test 5.2.4 Wedged square fields (absolute dose)
Test 5.2.5 Wedged rectangular fields (absolute dose)
Table 7.12 shows results from a point dose study in which 100 monitor units are
delivered and the absorbed dose is measured with an ionisation chamber along the central
beam axis. The same geometry is set up in the TPS and the number of monitor units required
for an absorbed dose of 1.00 Gy is calculated. The last column gives the ratios d(i) and
d%(i) between calculated and measured dose per monitor unit values at point i. Note that
deviations in bold indicate values outside the tolerance levels of 2% and 3% for the open
and wedged fields, respectively, as given in Table 2.1, although these values cannot be
applied directly to single point measurements.
Table 7.12 Data from point measurements using an ionisation chamber positioned at 20cm depth along the central beam axis in a large water phantom, source-skin distance 90cm, irradiated with a beam of 18 MV x-rays.

Field size               Measured                       Calculated
(X x Y cm2)  Type        Dose (Gy)  MU   Dose/MU       Dose (Gy)  MU      Dose/MU    d(i) / d%(i)
5x5          Open        0.557      100  0.00557       1.00       184.78  0.00541    0.971 / -2.9%
             60° Wedge   0.152      100  0.00152       1.00       677.73  0.00148    0.969 / -3.1%
10x10        Open        0.614      100  0.00614       1.00       165.46  0.00604    0.984 / -1.6%
             60° Wedge   0.173      100  0.00173       1.00       579.46  0.00173    1.000 / 0.0%
20x20        Open        0.673      100  0.00673       1.00       149.80  0.00668    0.993 / -0.7%
             60° Wedge   0.197      100  0.00197       1.00       527.66  0.00189    0.964 / -3.6%
30x30        Open        0.694      100  0.00694       1.00       145.35  0.00688    0.991 / -0.9%
             60° Wedge   0.206      100  0.00206       1.00       509.46  0.00196    0.951 / -4.9%
5x20         Open        0.597      100  0.00597       1.00       168.16  0.00595    0.997 / -0.3%
             60° Wedge   0.167      100  0.00167       1.00       593.02  0.00169    1.010 / +1.0%
20x5         Open        0.589      100  0.00589       1.00       172.77  0.00579    0.983 / -1.7%
             60° Wedge   0.164      100  0.00164       1.00       622.97  0.00162    0.979 / -2.1%
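The last column of Table 7.12 follows directly from the two dose-per-monitor-unit columns. A minimal sketch reproducing the 5x5 open-field entry (values taken from the table):

```python
def dose_deviation(dose_per_mu_calc, dose_per_mu_meas):
    """Ratio d(i) and percentage deviation d%(i) = 100*[d(i) - 1]."""
    d = dose_per_mu_calc / dose_per_mu_meas
    return d, 100.0 * (d - 1.0)

# 5x5 open field: measured 0.557 Gy per 100 MU, calculated 1.00 Gy per 184.78 MU.
d, d_pct = dose_deviation(0.00541, 0.00557)
print(round(d, 3), round(d_pct, 1))  # 0.971 -2.9
```

The same two-line computation is applied to every field and wedge combination in the table before comparison with the tolerance levels of Table 2.1.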
Test 5.2.1 Open square fields (profile)
Figure 7.7 shows the result of a comparison between calculations and measurements
of a beam profile of an open square field. Inside the high dose region the difference varies
between -1.6% and +2.7%; the average deviation is +1.0% with a standard deviation of
1.2%. The deviations, summarized as a confidence limit of 2.7%, are within the
recommended tolerance level of 3% as given in Table 2.1.
Figure 7.7 Comparison between calculation (—) and measurement (---) of a profile of a 30cm x 30cm field at 10cm depth.
Test 5.2.1 Open square fields (PDD)
In the following two figures, results from depth dose scans are compared with dose
calculations in a TPS. To facilitate the normalisation process, all scans were obtained
without any change in the electrometer settings of the scanning system. The reference
channel of the electrometer is connected directly to the monitor ionisation chamber of the
linac. The scan for a field size of 10cm x 10cm is included in the set of measurements to
normalise the data to the absolute dose per monitor unit at 10cm depth. In the TPS, the
dose along the central beam axis was exported for a calculation in a water phantom, and
the dose along these lines was divided by the number of monitor units calculated by the
TPS for each field. Figure 7.8 shows the measured and calculated depth dose curves, while
in Figure 7.9 the deviations d(i) between calculated and measured dose per monitor unit
values have been calculated on a point-by-point basis.
Figure 7.8 Depth dose curves of a 6 MV beam for four field sizes, obtained by scanning in a water tank, normalised to the output (measured with an ionisation chamber) at 10cm depth of the 10cm x 10cm field. Red lines are measured data and black lines are calculations performed by a TPS.
Figure 7.9 Ratio, d(i), between calculated and measured dose per monitor unit values, for the 6 MV depth dose data presented in Figure 7.8.
Table 7.13 Statistical evaluation of the deviations between calculated and measured data of the four 6 MV depth dose curves presented in Figures 7.8 and 7.9. The deviations are expressed as d%(i) = 100·[d(i) – 1]. The confidence limit is calculated as the absolute value of the average deviation plus 1.5 times the standard deviation.

Build-up region (0-2 cm)
                          5x5 cm2   10x10 cm2   15x15 cm2   20x20 cm2
Average deviation (%)       3.7        0.7        -1.3        -3.7
Standard deviation (%)      6.2        2.1         2.4         5.1
Confidence limit (%)       13.0        3.9         5.0        11.3

Remaining curve (2-25 cm)
                          5x5 cm2   10x10 cm2   15x15 cm2   20x20 cm2
Average deviation (%)      -0.1        0.2         0.1        -0.6
Standard deviation (%)      0.3        0.3         0.3         0.5
Confidence limit (%)        0.5        0.7         0.6         1.4
Using the concept of Venselaar and Welleweerd (2001), the confidence limits have
been determined for these four sets of depth dose data. Both a high dose gradient (build-up)
region from 0-2 cm and a low dose gradient region from 2-25 cm have been analysed. The
results are given in Table 7.13. Note that the confidence limits for the 5x5 cm2 and
20x20 cm2 fields do not fulfil the recommended 10% accuracy requirement for dose
calculations of a TPS in the build-up region, as presented in Table 2.1, and the
distance-to-agreement criterion should therefore also be investigated. Part of this
discrepancy might, however, be caused by uncertainties in the measurement procedure, which
is not trivial in the build-up region.
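The confidence limit used throughout these tables is |mean deviation| + 1.5 × standard deviation. A minimal sketch, assuming the sample standard deviation is intended; the deviation values below are illustrative, not the actual scan data:

```python
import statistics

def confidence_limit(deviations_pct):
    """Confidence limit per Venselaar and Welleweerd (2001):
    absolute mean deviation plus 1.5 times the standard deviation."""
    return abs(statistics.mean(deviations_pct)) + 1.5 * statistics.stdev(deviations_pct)

# Illustrative set of point deviations d%(i).
devs = [0.4, -0.2, 0.1, 0.3, -0.1, 0.2]
cl = confidence_limit(devs)
```

Feeding the per-point deviations of a depth dose curve or profile into this function reproduces the "Confidence limit (%)" rows of Tables 7.13 to 7.15.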
Test 5.2.4 Wedged square field (profile)
This test, as well as the analysis of the results, is similar to the previous test (5.2.1),
but was performed in another institution with a different TPS, another accelerator and
other measuring equipment.
Table 7.14 Statistical evaluation of the deviations between calculated and measured data of the four 6 MV dose profiles presented in Figures 7.10 and 7.11. The deviations are expressed as d%(i) = 100·[d(i) – 1]. The confidence limit is calculated as the absolute value of the average deviation plus 1.5 times the standard deviation. Note that the confidence limit for the points outside the field does not fulfil the recommended 4% accuracy requirement of dose calculations of a TPS in this region, as presented in Table 2.1. The points inside the field and in the penumbra are within the tolerance levels of 3% and 10%, respectively.

                         Inside field   Penumbra   Outside field
Average deviation (%)       -0.4          0.4          2.0
Standard deviation (%)       0.7          5.4          4.0
Confidence limit (%)         1.5          8.4          8.0
Figure 7.10 Beam profiles of a 20cm x 20cm 6 MV field with a 60° wedge, obtained by scanning in a water tank at five different depths, normalized to the output measured with a calibrated ionisation chamber in a 10cm x 10cm field. The solid lines are the absolute dose profiles calculated by the TPS for 100 MUs.
Figure 7.11 Difference between calculated and measured dose per monitor unit values, for the beam profiles presented in Figure 7.10. The error was calculated relative to the local dose for the points within the field, and relative to the dose on the central beam axis at the depth of measurement for points outside the field.
Test 5.2.6 Field with a central block
Figure 7.12 Beam profiles of a 16cm x 16cm 6 MV field with a central beam block, obtained by scanning in a water tank at five different depths, normalized to the output measured with a calibrated ionisation chamber in the 10cm x 10cm field. The solid lines are the absolute dose profiles calculated by the TPS for 100 MUs.
Figure 7.13 Difference between calculated and measured dose per monitor unit values, for the beam profiles presented in Figure 7.12. The error was calculated relative to the local dose for the points within the field, and relative to the dose in the open part of the beam at the depth of measurement for points under the block and outside the field.
Table 7.15 Statistical evaluation of the deviations between calculated and measured data of the four 6 MV dose profiles presented in Figures 7.12 and 7.13. The confidence limit is calculated as the absolute value of the average deviation plus 1.5 times the standard deviation. The points under the block, in the penumbra, inside and outside the field are all within the tolerance levels of 4%, 10%, 3% and 4%, respectively, as presented in Table 2.1.

                         Central area    Penumbra   Inside field   Penumbra       Outside field
                         (under block)   (block)                   (open field)
Average deviation (%)        1.3           1.4         -0.3           1.1             2.5
Standard deviation (%)       0.6           2.9          0.6           4.8             0.6
Confidence limit (%)         2.2           5.7          1.1           8.3             3.5
Test 5.2.11 Off-axis square field (OAF)
Figure 7.14 Comparison between calculation (–) and measurement (---) of off-axis factors for a 10cm x 10cm 18 MV beam at 10cm depth for various off-axis distances. The difference is a result of a non-perfect beam fit.
Test 5.2.11 Off-axis square field (OAF)
Figure 7.15 Comparison between calculation (–) and measurement (---) of off-axis factors (OAF) for a 3cm x 3cm 18 MV beam, for various off-axis distances (OAD) at depths of 3, 10 and 20cm. The difference is a result of a non-perfect beam fit and limitations of the dose calculation algorithm.
Test 5.2.13 Wedged off-axis field (absolute dose)
Indicated is the difference Δ between calculated and measured dose values for
off-axis fields.
A: 6 MV, field size 9cm x 9cm, 30° wedge:
SSD = 93cm, depth 7cm, 3cm off-axis (in the wedge direction): Δ = 3.1%
B: 18 MV, field size 10cm x 10cm, 60° wedge:
SSD = 90cm, depth 3cm, 2.5cm off-axis (perpendicular to the wedge direction): Δ = 3.3%
Test 5.2.13 Wedged off-axis field (OAF)
Figure 7.16 Comparison between calculation (–) and measurement (---) of off-axis factors (OAF) for a 3cm x 3cm 18 MV beam with a 60° wedge, for different off-axis distances (OAD) at depths of 3, 10 and 20cm. The difference is a result of a non-perfect beam fit and limitations of the dose calculation algorithm.
Test 5.3.2 Dose-Volume Histogram
A mathematically defined (virtual) phantom of size 25cm x 25cm x 26cm, with an
internal structure of 5cm x 5cm x 6cm, was constructed. A 10cm x 10cm beam with an energy
of 18 MV was used for this test (Figure 7.17). The geometry was taken from the test
description. A dose of 200 cGy was delivered to the isocentre, which is located in the
middle of the internal structure and of the virtual phantom.
Low dose gradient area
Figure 7.18 shows the DVH of the internal structure in the centre of the beam, i.e.,
in the low dose gradient area. The computed volume is 150.06 cm3, in agreement with the
expected value of 150 cm3. Figures 7.19 and 7.20 show the dose profiles computed along a
diagonal in the top and bottom planes of the internal structure. The maximum and minimum
dose values of the DVH agree with the corresponding values at the centre of the top plane
and in the corner of the bottom plane, 223.7 and 174.2 cGy, respectively. Dose grid sizes
of 2 mm and 4 mm were used for this analysis. For the low dose gradient area the DVH
calculation does not show any significant discrepancy; the dose grid size has no influence
on these calculations.
Figure 7.17 Virtual phantom, internal structure and beam setting defined in the system for the DVH test; (a) low dose gradient area test; (b) high dose gradient area test.
Figure 7.18 DVH of the internal structure in the low dose gradient area for two dose grid sizes of 2 mm and 4 mm.
Figure 7.19 Dose profile computed along a diagonal in the top plane of the structure. The central beam axis is located at a distance of 4.2 cm.
Figure 7.20 Dose profile computed along a diagonal in the bottom plane of the structure. The central beam axis is located at a distance of 4.2 cm.
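A cumulative DVH of the kind shown in Figure 7.18 is obtained by counting, for each dose level, the voxels of the structure that receive at least that dose. A minimal sketch on a toy dose grid (not the actual 18 MV calculation):

```python
import numpy as np

def cumulative_dvh(dose_cgy, voxel_cm3, bins_cgy):
    """Cumulative DVH: absolute volume receiving at least each bin dose."""
    d = np.asarray(dose_cgy, dtype=float).ravel()
    return np.array([np.count_nonzero(d >= b) * voxel_cm3 for b in bins_cgy])

# Toy 4-voxel structure with 1 cm3 voxels, spanning the min/max doses above.
volumes = cumulative_dvh([174.2, 200.0, 210.0, 223.7], 1.0, [0.0, 180.0, 220.0])
print(volumes)  # [4. 3. 1.]
```

The volume at zero dose equals the total structure volume, which is exactly the 150 cm3 consistency check performed in this test.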
High dose gradient area
Figure 7.21 shows a DVH of the internal structure calculated for dose grid sizes of 2mm and
4mm. In both cases the curves behave linearly between 20% and 80% of the dose at the
isocentre, with a step pattern due to the dose grid size superimposed.
Figure 7.21 DVH of the same structure obtained for two dose grid sizes of 2mm (red) and 4mm (blue).
Figure 7.22 Measurements of the distance d between the 20% (blue) and 80% (pink) isodose lines; (a) 4mm dose grid size; (b) 2mm dose grid size. The two lines in the centre of the figure represent the beam edges.
Table 7.16 Results of the volume calculation between the 20% and 80% isodose surfaces.
Dose grid    V80%    V20%    V20% - V80%    d      30 x d    Difference between V20% - V80%
size [mm]    [cm3]   [cm3]   [cm3]          [cm]   [cm3]     and direct volume calculation
4            64.53   88.53   24.09          0.80   24.00     0.4%
2            66.05   86.03   19.98          0.67   20.10     0.6%

Difference between dose grids: 20.6% (V20% - V80%) and 19.4% (30 x d)
Table 7.16 shows the results of the volume calculations between the 20% and 80%
isodose surfaces in the internal structure for the two dose grid sizes. The difference in
the calculated volume between these isodose surfaces due to the dose grid size is around
20%. Obviously, the dose grid size is an important factor for the accurate calculation
of DVHs in high dose gradient regions.
For the direct volume calculation, the distance d between the 20% and 80% isodose
lines has to be multiplied by the surface of the internal structure in the sagittal plane,
i.e., 30 cm2. Good agreement exists between the volume assessed from the dose-volume
histogram and the direct volume determination using the isodose planes, indicating that
the TPS has no problem with its DVH algorithm. It also shows the usefulness of this DVH
test in regions with a high dose gradient.
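The cross-check in Table 7.16 is simple arithmetic: the DVH-based shell volume V20% − V80% is compared with d × 30 cm2. A sketch reproducing the last column from the tabulated values:

```python
SAGITTAL_SURFACE_CM2 = 30.0  # surface of the internal structure in the sagittal plane

def direct_vs_dvh(shell_volume_cm3, d_cm):
    """Percent difference between the DVH-based shell volume (V20% - V80%)
    and the direct estimate d x 30 cm2."""
    direct = d_cm * SAGITTAL_SURFACE_CM2
    return 100.0 * abs(shell_volume_cm3 - direct) / direct

# (dose grid [mm], V20% - V80% [cm3], d [cm]) from Table 7.16.
for grid_mm, shell, d in [(4, 24.09, 0.80), (2, 19.98, 0.67)]:
    print(grid_mm, round(direct_vs_dvh(shell, d), 1))  # prints: 4 0.4, then 2 0.6
```

The sub-percent agreement for both grid sizes is what indicates that the DVH algorithm itself, rather than the dose grid, is sound.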
Test 6.2.2 Standard treatment techniques: d - pelvis/prostate
As an example of a dose matrix evaluation, the verification of a 4-field box technique
irradiating the CarPet phantom (Gillis et al., 2004) is presented. The calculation was
performed with a treatment planning system using a dose matrix of 0.15 x 0.15 x 0.2 cm3.
The plan was normalised to a value of 2 Gy (100%) at the isocentre, and the resulting dose
distribution in a transversal plane at 1 cm distance from the central plane is given in
Figure 7.23.
Figure 7.23 Calculated dose distribution for a 4-field box technique in a plane at 1 cm distance from the central plane in the CarPet phantom.
To verify the calculation, a radiographic film (Kodak EDR2) was positioned inside
the phantom and irradiated with four 6 MV beams. Prior to the irradiation of the CarPet
phantom, a film calibration from 0 to 3.4 Gy in 10 steps was performed, with films at
10 cm depth in a polystyrene phantom oriented perpendicular to the irradiation beam. The
calibration films were scanned with a VIDAR scanner and converted to optical density.
Next, the treatment film was scanned and converted to absorbed dose; the resulting dose
distribution is shown in Figure 7.24. After careful alignment using punched holes in the
film, the difference between measured and calculated dose was determined (Figure 7.25).
Figure 7.24 Dose distribution measured with a Kodak EDR2 film positioned at 1 cm from the central plane in the CarPet phantom.
Figure 7.25 Dose deviation, presented for each point as the dose calculated by the TPS divided by the dose measured with the film. The bluish regions show areas where the measured values are higher than the calculated data, while the reddish regions are areas where the measurements are lower.
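The optical-density-to-dose conversion described above can be sketched as a monotonic calibration curve inverted by interpolation. The response function below is a made-up saturating curve used only for illustration, not the measured EDR2 response:

```python
import numpy as np

# Hypothetical calibration: 10 dose steps from 0 to 3.4 Gy and an assumed
# saturating net optical density response (illustrative, not measured data).
cal_dose_gy = np.linspace(0.0, 3.4, 10)
cal_od = 2.0 * (1.0 - np.exp(-0.5 * cal_dose_gy))

def od_to_dose(od):
    """Convert a scanned optical density to dose by inverting the
    (monotonically increasing) calibration curve via interpolation."""
    return np.interp(od, cal_od, cal_dose_gy)
```

Applying `od_to_dose` pixel by pixel to the scanned treatment film yields the measured dose matrix that is then compared with the TPS calculation.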
To include a spatial tolerance in the dose comparison, the γ-method, as outlined in
Section 2.2.3, has also been applied to both sets of dose data. The resulting matrix of
γ-values is shown in Figure 7.26. Inside the area where all four fields contribute to the
dose, good agreement between measurement and calculation is found. Good agreement is also
observed in most other areas, except for the penumbra regions. Some problems with the
film/phantom clamping might be the reason for the differences observed at the two lateral
sides.
Figure 7.26 Matrix showing the γ-values, expressed as a percentage where 100% corresponds to gamma = 1, for the 4-field box technique irradiating the CarPet phantom. White areas have γ-values equal to unity; red colours represent areas where the criteria (2% in dose relative to the dose at the isocentre, or 2mm in distance) are not fulfilled, while blue shades get closer to zero as they get darker.
The resulting γ-values can also be expressed as a gamma-area histogram over a
certain area of interest (Figure 7.27). In this way it is possible to compare the accuracy
of the actual delivery of a certain irradiation technique with predefined criteria for
that area. However, the tolerance criteria given in Table 2.1 are valid for single photon
beams, and it is not obvious that the same criteria are valid for all types of composite
plans.
Figure 7.27 Gamma-area histogram showing the γ-values for the 4-field box technique irradiating the CarPet phantom in the analysed region of the film.
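For illustration, the γ evaluation can be sketched in one dimension: for each reference point, γ is the minimum over all evaluated points of the combined normalised dose and distance displacement. The sketch below uses global normalisation to the maximum reference dose with the 2%/2mm criteria quoted in Figure 7.26; a clinical implementation would of course work on the full 2-D film and calculation matrices.

```python
import numpy as np

def gamma_1d(x_mm, dose_ref, dose_eval, dose_crit=0.02, dist_crit_mm=2.0):
    """1-D gamma index with global dose normalisation (2%/2mm by default)."""
    x = np.asarray(x_mm, dtype=float)
    ref = np.asarray(dose_ref, dtype=float)
    ev = np.asarray(dose_eval, dtype=float)
    d_norm = dose_crit * ref.max()
    out = np.empty_like(ref)
    for i in range(ref.size):
        dist2 = ((x - x[i]) / dist_crit_mm) ** 2
        dose2 = ((ev - ref[i]) / d_norm) ** 2
        out[i] = np.sqrt((dist2 + dose2).min())
    return out

x = np.arange(0.0, 10.0, 1.0)
ref = np.ones_like(x)
print(gamma_1d(x, ref, ref * 1.01).max())  # 0.5: a uniform 1% error is half the 2% criterion
```

Points with γ ≤ 1 pass the combined criteria; the gamma-area histogram of Figure 7.27 is simply the distribution of these values over the analysed region.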
It should be noted that when film measurements are used to verify dose calculations
performed with a TPS, the accuracy of the film dosimetry system should be well known and
should not introduce large uncertainties. The same precautions regarding the constancy of
the accelerator output should also be taken as during other dose verification
measurements, i.e., all data should be normalized to the absolute dose per monitor unit
at 10cm depth.
APPENDIX A.1 DEFINITIONS
Acceptance testing
A set of procedures carried out after delivery to confirm that the TPS works according to its specifications as documented at the moment of purchase.
Beam characterisation set
The beam characterisation set consists of the basic beam data required for beam modelling and specified according to the format defined by the vendor. This set needs to be determined and entered into the system. Usually, the set consists of measured dose values, but it may also consist of Monte Carlo computations, if properly benchmarked for the treatment machine. In addition, a complete geometrical characterisation is also required, including all movement ranges and limits.
Beam’s-eye-view (BEV)
A beam’s-eye-view display is a computer-generated image projected at a given distance from the source (usually the SAD), as it would appear to a viewer located at the radiation source and looking toward the irradiated medium. It generally concerns the patient’s anatomy, obtained from previously contoured structures in a 3-D set of images. Like the radiation beam, BEV images follow the beam divergence. BEV images are used to decide which beam orientations yield the best view of the target volume without irradiating too much normal anatomy, and to design shielding blocks to protect normal anatomy from unnecessary irradiation.
Beam verification set
The beam verification set can be identical to the beam characterisation set but is usually extended and includes several other situations, e.g., prolonged SSD and rectangular fields.
Commissioning
A set of procedures required to bring a new TPS or new software release into safe clinical operation. The TPS user should define the details of this procedure. The procedures include the introduction of geometric and dosimetric data into the system to define the treatment machine and its beams, and performing tests to learn how to use the system, to verify the correct functioning of the entire software, and to determine the limits of accuracy of the various calculations.
Confidence limit
A global index calculated from the statistical distribution of the dose deviation among a large number of points. The confidence limit concept is useful to evaluate if the dose deviation between measured and calculated dose distributions is acceptable. It can be extended to the analysis of spatial deviations or of the gamma index distribution.
Digitally reconstructed radiograph (DRR)
A digitally reconstructed radiograph is similar to a BEV, except that a DRR is a computer-generated image, obtained from a 3-D data set of CT (or other modality) images and projected at a given distance from the source. A DRR shows how anatomy would appear to a viewer located at the radiation source and looking towards the isocentre. DRRs are often used as reference images, compared with portal images, to verify that a treatment plan is being delivered with high precision.
Dose-volume histogram (DVH)
Graphical representation of a 3-D dose distribution showing the number of voxels, i.e., the volume or relative volume, of a structure that receives a given dose. In the cumulative form, which is most commonly used, the count of voxels includes all those that receive a dose larger than the given dose value. In the differential form, it represents a histogram in which, for each dose value, the relative frequency of voxels receiving that dose is shown.
Dose deviation (d(i) or d%(i))
The ratio between calculated and measured dose for the same number of monitor units. Alternatively, this deviation can be expressed as a percent difference between calculated and measured dose, with the measured value taken as reference. The relationship between these expressions is: d%(i) = 100·[d(i) – 1].
Electron density, relative electron density
The electron density is the number of electrons per unit volume, while the relative electron density is the electron density for a particular medium divided by the electron density for water. This is important for dose calculations and is typically obtained from CT information.
Gamma index
A figure of merit used for evaluation of the dose calculated by the TPS, which combines the dosimetric deviation and the spatial deviation. The gamma index equals 1 when the normalized vectorial distance between calculation and measurement in position-dose space lies exactly at the predefined tolerances. For a gamma index less than 1, the calculated dose values are within the accepted criteria (e.g., 3% / 3mm).
Generic beam data set
A generic beam data set is a beam characterisation set used by the vendor to test its own TPS and to demonstrate its accuracy. It can be obtained from published reports and it should be supplied with the TPS to facilitate its acceptance. The generic beam data set must not be used for clinical purposes.
Percentage depth dose (PDD)
The ratio of the dose at a given depth to the dose at the depth of maximum dose in a water-equivalent phantom, for a given field size, at a fixed source-skin distance.
Periodic quality control
A set of procedures carried out to verify periodically the correct functioning of the TPS. These tests are repeated with a pre-set frequency; some can be carried out automatically, others manually.
Quality assurance (QA) of a TPS
A set of procedures carried out to determine the accuracy and reliability of the TPS, and to guarantee that the system performs according to previously established specifications.
Spatial deviation (r(i))
The minimum distance between a point rm(i) where a certain dose value has been measured and a point with the same calculated dose value. The spatial deviation is also called “distance-to-agreement”.
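A minimal one-dimensional sketch of this search (the function name, the discrete dose-matching tolerance, and the sample profile are illustrative assumptions; real implementations interpolate between calculated samples):

```python
def distance_to_agreement(x_m, d_m, calc_profile, dose_match=0.001):
    """Minimum distance (mm) from the measured point (x_m, d_m) to a
    calculated sample with (approximately) the same dose value.
    Returns None if no calculated sample matches the measured dose."""
    distances = [abs(x_c - x_m)
                 for x_c, d_c in calc_profile
                 if abs(d_c - d_m) <= dose_match]
    return min(distances) if distances else None

# In this invented penumbra profile, the measured dose value 0.80 is
# found in the calculation 1.0 mm away from where it was measured:
penumbra = [(0.0, 1.00), (0.5, 0.90), (1.0, 0.80), (1.5, 0.70)]
print(distance_to_agreement(0.0, 0.80, penumbra))   # 1.0
```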
Tissue-phantom ratio (TPR)
The ratio of the absorbed dose at a certain depth to that at a reference depth, measured in a large water phantom at the same fixed source-detector distance.
Tolerance (δ)
The maximum acceptable deviation between a calculated and a measured physical quantity. It can be expressed either as a percent deviation or as a spatial deviation, depending on the spatial region being analysed (low vs. high dose-gradient region).
Treatment Planning System (TPS)
A treatment planning system consists of a software package, or a combination of different packages, together with the hardware on which it runs. It enables the input of patient data, anatomy definition, beam set-up, dose distribution calculation, plan evaluation in terms of dose, volume and biological effect, and output for documentation and transfer to other units (simulator, record-and-verify system, treatment machine).
(3-D) Treatment Planning System (3-D TPS)
A modern 3-D treatment planning system offers the functionality of: constructing a 3-D patient model (based on a volumetric CT scan); simulating 3-D configurations of beams (with arbitrary beam orientations); positioning the isocentre and shaping the fields; performing 3-D dose calculations (with algorithms that take the 3-D aspects of patient, beams and interaction physics into account); evaluating and optimising 3-D dose distributions (dose-volume histograms and normal tissue complication probability calculations); and advanced viewing of patient anatomy, beams and dose distribution in their 3-D relationship.
APPENDIX A.2 LIST OF ABBREVIATIONS AND SYMBOLS
Symbol Meaning
A-P Anterior – Posterior
BEV Beam’s-Eye-View
CCW Counter-ClockWise
CW ClockWise
d Depth; Distance between 20% and 80% isodose lines
d(i) or d%(i) Deviation between calculated and measured dose per monitor unit at a
point i expressed either as a ratio or as a percent difference.
CF Correction Factor
CT Computed Tomography
D absorbed Dose
DICOM Digital Imaging and COmmunication in Medicine
DRR Digitally Reconstructed Radiograph
DVH Dose-Volume Histogram
HU Hounsfield Unit
IMRT Intensity-Modulated Radiation Therapy
M number of Monitor units
MLC Multi-Leaf Collimator
MR Magnetic Resonance
MU Monitor Unit
OAF Off-Axis Factor
OAD Off-Axis Distance
OAR Organ At Risk
PDD Percentage Depth Dose
PET Positron Emission Tomography
PRF beam PRoFile
QA Quality Assurance
QC Quality Control
QI Quality Index
r(i) Spatial deviation (or distance-to-agreement) at point rm(i)
ROI Region Of Interest
RW50 Radiological Width
s field size at isocentre
SAD Source-Axis Distance
SD Standard Deviation
SSD Source-Skin Distance
SPECT Single Photon Emission Computed Tomography
TAR Tissue-Air Ratio
TMR Tissue-Maximum Ratio
TPR Tissue-Phantom Ratio
TPS Treatment Planning System
z CT slice number
2-D Two-Dimensional
3-D Three-Dimensional
δ Tolerance for dose deviation d%(i) or spatial deviation r(i)
γ Gamma index
APPENDIX A.3 CATEGORISATION OF TESTS
In this booklet an attempt has been made to provide suggestions for a general division of QA tests between those to be performed by the vendor and those to be performed by an individual user. Particularly in Chapter 3 (Anatomical Description) and in Chapter 4 (Beam Description), a large number of tests are presented that the vendor of a specific system should perform, either before the system is installed in the hospital or during the acceptance testing at the user's site. Some of these vendor tests can be generic tests, the results of which should be shown to and discussed with the user; others need to be performed at the user's site. Such a division depends on the availability of the user's data to the vendor, as well as on the specific situation at the user's location, e.g., the time and resources available for extensive on-site testing. In this Appendix, therefore, no attempt was made to make such a division within the column "Vendor Acceptance Tests". Nevertheless, the user is well advised to repeat certain vendor acceptance tests, in particular those tests that are adequate to spot-check the integrity of the user's input data. A star in brackets denotes such tests. The following tests are numbered according to the corresponding sections in this booklet.
Test #   Short test description   (Vendor Acceptance Tests / User Commissioning Tests)

3        Anatomical description
3.1      Basic patient entry
3.1.a    Two patients with the same last name   *
3.1.b    Two patients with the same ID-number   *
3.1.c    The same patient twice   *
3.1.d    The same patient with two different targets   *
3.1.e    Deleting a patient   *
3.1.f    Moving/copying patients/plans from one directory to another   *
3.2      Image input and use
3.2.1    Image input
3.2.1a   Identity consistency of scan   * *
3.2.1b   Scan parameters - varying slice thickness   * *
3.2.1c   Two different sets of images for the same patient   * *
3.2.1d   Maximum number of CT slices   *
3.2.1e   Patient orientation   * *
3.2.1f   Integrity of simultaneous input   * *
3.2.1g   Opening a patient file more than once   *
3.2.1h   Geometric integrity of slices   * *
3.2.1i   CT number representation   * *
3.2.1j   Text information   * *
3.2.2    Contour input
3.2.2a   Digitiser input   * *
3.2.2b   Film scanner input   * *
3.2.3    Image use
3.2.3a   Geometry of reconstructed images   * *
3.2.3b   Orientation of reconstructed images   * *
3.2.3c   Grey-scale representation   *
3.2.3d   Grey-scale representation of reconstructed images   *
3.2.3e   Windowing and zooming images   *
3.2.4    Co-ordinate system of images   * *
3.3      Anatomical structures
3.3.1    Definition of anatomical structures
3.3.1a   Unique identification   * *
3.3.1b   Unique properties   * *
3.3.1c   Maximum number of contours per anatomical structure   *
3.3.2    Automated contouring
3.3.2a   Correct geometry in automated contouring   * *
3.3.2b   Threshold changing for automated contouring   *
3.3.3    Manual contouring
3.3.3a   Contouring direction   *
3.3.3b   Maximum number of points   *
3.3.4    Manipulation of contours
3.3.4a   Add margin to a contour   * (*)
3.3.4b   3-D structure expansion   * (*)
3.3.4c   Correcting, adding, deleting and copying contours and structures   * *
3.3.4d   Validation   * *
3.3.4e   TPS depending tools   *
3.3.4f   2-D contour transfer   *
3.3.4g   Hidden contours   *
3.3.4h   Bifurcated structures   * (*)
3.3.4i   Interpolated structures   * (*)
3.3.4j   Bolus definition
3.3.5    Construction of volumes
3.3.5a   Volume computation   * *
3.3.5b   Construction of a volume from a set of CT slices with non-regular spacing   *
3.3.5c   Construction of a volume from a set of CT slices with non-sequential order of slices during contouring   *
3.3.5d   Construction of a volume from non-axial contours   * (*)
3.3.5e   Capping option   * (*)
3.3.5f   Volume of subtracted regions   * (*)
3.3.5g   Boolean option   * (*)
4        Beam description
4.1      Beam definition
4.1.1    SAD, SSD and field size   *
4.1.1a   Field size   *
4.1.1b   Divergence   *
4.1.1c   Isocentre position   *
4.1.1d   SSD or depth verification   *
4.1.1e   Test from § 4.1.1 for SSD > 100 cm   *
4.1.2    Gantry rotation   *
4.1.3    Collimator rotation   *
4.1.4    Table movement   * *
4.1.5    Jaw definition and beam co-ordinates   * *
4.1.6    Multi-leaf collimator definition   *
4.1.7    Wedge and block insertion   * *
4.1.8    Consistency check of beam co-ordinate system   * *
4.1.9    Warnings and error messages   *
4.1.10   Bolus definition   * *
4.2      Beam display
4.2.1    Beam’s-eye-view display   *
4.2.2    Beam position and shape   *
4.2.3    Beam position in BEV   *
4.2.4    Block position in BEV   *
4.2.5    MLC-shaped field   *
4.2.6    Bolus position   * *
4.3      Beam geometry
4.3.1    Automatic block and auto-leaf positioning   * *
4.3.2    User-defined block   * *
4.3.3    DRR: linearity and divergence   *
4.3.4    Input, change and edit functions   *
5        Dose and monitor unit calculation
5.1      Basic beam dataset
5.1.1    Data input process   *
5.1.2    Documentation   *
5.2      Dose calculation
5.2.1    Open square fields   * (*)
5.2.2    Open rectangular fields   * (*)
5.2.3    Variation in SSD   * (*)
5.2.4    Wedged square field   * (*)
5.2.5    Wedged rectangular fields   * (*)
5.2.6    Field with a central block   * (*)
5.2.7    Blocked field   * (*)
5.2.8    Inhomogeneities   *
5.2.9    Oblique incidence   * (*)
5.2.10   Missing tissue   * (*)
5.2.11   Off-axis square field   * (*)
5.2.12   Off-axis elongated field   * (*)
5.2.13   Wedged off-axis field   * (*)
5.2.14   Off-plane field   * (*)
5.2.15   Square MLC field   * (*)
5.2.16   Off-axis square MLC field   * (*)
5.2.17   MLC-shaped field   * (*)
5.2.18   Block and tray insertion   * (*)
5.3      2-D and 3-D dose verification
5.3.1    2-D dose distribution   *
5.3.2    Dose-volume histogram   *
5.4      Monitor unit calculation
REFERENCES
AAPM Report #55. Radiation Treatment Planning Dosimetry Verification, Radiation
Therapy Committee Task Group #23, edited by D. Miller. American Institute of Physics,
College Park, MD, 1995.
Bakai A., Alber M. and Nüsslin F. A revision of the γ-evaluation concept for the comparison
of dose distributions. Phys. Med. Biol. 48: 3543-3553, 2003.
Brahme A., Chavaudra J., Landberg T., McCullough E., Nüsslin F., Rawlinson A., Svensson
G. and Svensson H. Accuracy requirements and quality assurance of external beam therapy
with photons and electrons. Acta Oncol. Suppl. 1, 1988.
Caneva S., Rosenwald J-C. and Zefkili S. A method to check the accuracy of dose computa-
tion using quality index: application to scatter contribution in high energy photon beams.
Med. Phys. 27: 1018-1024, 2000.
Cosset J.M. ESTRO Breur Gold Medal Award Lecture 2001: Irradiation accidents – lessons
for oncology? Radiother. Oncol. 63: 1-10, 2002.
Craig T., Brochu D. and Van Dyk J. A quality assurance phantom for three-dimensional ra-
diation treatment planning. Int. J. Radiat. Oncol. Biol. Phys. 44: 955-966, 1999.
Dahlin H., Lamm I.L., Landberg T., Levernes S. and Ulsø N. User requirements on CT-based
computed dose planning systems in radiation therapy. Acta Radiol. Oncol. 22: 397-415,
1983.
Depuydt T., Van Esch A. and Huyskens D.P. A quantitative evaluation of IMRT dose distri-
butions: refinement and clinical assessment of the gamma evaluation. Radiother. Oncol. 62:
309-319, 2002.
Dutreix A., Bjärngard B.E., Bridier A., Mijnheer B.J., Shaw J.E. and Svensson H. Monitor
Unit Calculation For High Energy Photon Beams, ESTRO Booklet No. 3, ESTRO, Brussels,
Belgium, 1997.
Ezzell G., Galvin J.M., Low D., Palta J.R., Rosen I., Sharpe M.B., Xia P., Xiao Y., Xing L.
and Yu C.X. Guidance document on delivery, treatment planning, and clinical implementa-
tion of IMRT: Report of the IMRT subcommittee of the AAPM Radiation Therapy
Committee. Med. Phys. 30: 2090-2115, 2003.
Fraass, B., Doppke, K., Hunt, M., Kutcher, G., Starkschall, G., Stern, R. and Van Dyk, J.
American Association of Physicists in Medicine Radiation Therapy Committee Task Group
53: Quality assurance for clinical radiotherapy treatment planning. Med. Phys. 25: 1773-
1836, 1998.
Gillis S., De Wagter C., Bohsung J., Perrin, B., Williams, P. and Mijnheer B.J. An inter-cen-
tre quality assurance network for IMRT using film dosimetry: preliminary results of the
European QUASIMODO project. Radiother. Oncol. (submitted).
Grosu A-L., Weber W., Feldmann H.J., Wuttge B., Bartenstein P., Gross M.W., Lumenta C.,
Schwaiger M. and Molls M. First experience with I-123-alpha-methyl-tyrosine SPECT in
the 3-D radiation treatment planning of brain gliomas. Int. J. Radiat. Oncol. Biol. Phys. 47:
517–526, 2000.
Harms W.B., Low D.A., Wong J.W. and Purdy J.A. A software tool for the quantitative
evaluation of 3D dose calculation algorithms. Med. Phys. 25: 1830-1836, 1998.
Henze M., Schuhmacher J., Hipp P., Kowalski J., Becker D.W., Doll J., Mäcke H.R.,
Hofmann M., Debus J. and Haberkorn U. PET imaging of somatostatin receptors using
[68Ga]DOTA-D-Phe1-Tyr3-octreotide: first results in patients with meningiomas. J. Nucl.
Med. 42: 1053–1056, 2000.
IAEA Report TRS 398. Absorbed dose determination in external beam radiotherapy. An
international code of practice for dosimetry based on standards of absorbed dose to water.
International Atomic Energy Agency, Vienna, Austria, 2000.
IAEA Report SRS 17. Lessons learned from accidental exposures in radiotherapy.
International Atomic Energy Agency, Vienna, Austria, 2000.
IAEA Report “Investigation of an accidental exposure of radiotherapy in Panama”.
International Atomic Energy Agency, Vienna, Austria, 2001.
IAEA TECDOC-xxx, Commissioning and quality assurance of computerized treatment
planning. International Atomic Energy Agency, Vienna, Austria, 2004.
ICRP Publication 86. Prevention of Accidental Exposures to Patients Undergoing Radiation
Therapy. International Commission on Radiological Protection, Pergamon Press, Oxford, 2001.
ICRU Report 42. Use of computers in external beam radiotherapy procedures with high-
energy photons and electrons. International Commission on Radiation Units and
Measurements, Bethesda, Maryland, USA, 1987.
IEC Report 61217: Radiotherapy equipment – Co-ordinates, movements and scales.
International Electrotechnical Commission, Geneva, Switzerland, 1996.
IEC Report 62083: Medical electrical equipment – Requirements for the safety of radio-
therapy treatment planning systems. International Electrotechnical Commission, Geneva,
Switzerland, 2000.
Karger C.P., Hipp P., Henze M., Echner G., Höss A., Schad L. and Hartmann G.H.
Stereotactic imaging for radiotherapy: accuracy of CT, MRI, PET and SPECT. Phys. Med.
Biol. 48: 211-221, 2003.
Levivier M., Goldman S., Pirotte B., Brucher J-M., Balériaux D., Luxen A., Hildebrand J.
and Brotchi J. Diagnostic yield of stereotactic brain biopsy guided by positron emission
tomography with [18F]fluorodeoxyglucose. J. Neurosurg. 82: 445-452, 1995.
Low D.A., Harms W.B., Mutic S. and Purdy J.A. A technique for the quantitative evaluation
of dose distributions. Med. Phys. 25: 656-661, 1998.
Low D.A. and Dempsey, J.F. Evaluation of the gamma dose distribution comparison method.
Med. Phys. 30: 2455-2464, 2003.
Mayles, W.P.M., Lake, R., McKenzie, A., Macauly, E.M., Morgan, H.M., Jordan, T.J. and
Powley S.K. Physics aspects of quality control in radiotherapy. IPEM Report No.81. The
Institute of Physics and Engineering in Medicine, York, Great Britain, 1999.
Mijnheer B.J., Battermann J.J. and Wambersie A. What degree of accuracy is required and
can be achieved in photon and neutron therapy? Radiother. Oncol. 8: 237-252, 1987.
Mijnheer B.J., Bridier A., Garibaldi C., Torzsok K. and Venselaar J.L.M. Monitor Unit
Calculation For High Energy Photon Beams - Practical Examples. ESTRO Booklet No. 6,
ESTRO, Brussels, Belgium, 2001.
NCS Report-xx, Quality assurance of 3-D treatment planning systems; practical guidelines
for acceptance testing, commissioning, and periodic quality control of radiation therapy
treatment planning systems. The Netherlands Commission on Radiation Dosimetry, Delft,
The Netherlands, 2004. (http://www.ncs-dos.org/draft_01.html)
Panitsa E., Rosenwald J-C. and Kappas C. Quality control of Dose Volume Histogram
computation characteristics of 3D treatment planning systems. Phys. Med. Biol. 43:
2807-2816, 1998.
Pirotte B., Goldman S., David Ph., Wikler D., Damhaut Ph., Vandesteene A., Salmon I.,
Brotchi J. and Levivier M. Stereotactic brain biopsy guided by positron emission
tomography (PET) with [F-18] Fluorodeoxyglucose and [C-11] Methionine. Acta Neurochir.
68 (Suppl.): 133-138, 1997.
Shaw, J.E. A guide to commissioning and quality control of treatment planning systems.
Report No.68. The Institute of Physics and Engineering in Medicine and Biology, York,
Great Britain, 1996.
Siddon R.L. Solution to treatment planning problems using co-ordinate transformations.
Med. Phys. 8: 766-774, 1981.
SSRPM Report 7: Quality control of treatment planning systems for teletherapy. Swiss
Society for Radiobiology and Medical Physics: ISBN 3-908125-23-5, 1997.
Van Dyk, J., Barnett, R.B., Cygler, J.E. and Shragge P.C. Commissioning and quality
assurance of treatment planning computers. Int. J. Radiat. Oncol. Biol. Phys. 26:
261-273, 1993.
Van Dyk, J., Barnett, R.B. and Battista J.J. Computerized radiation treatment planning sys-
tems. Pp. 231-286. In: The modern technology of radiation oncology. J. Van Dyk (Ed.).
Medical Physics Publishing, Madison, WI, USA, 2003.
Venselaar J. and Pérez-Calatayud, J. (Eds.). A practical guide to quality control of
brachytherapy equipment. ESTRO Booklet No. 8, ESTRO, Brussels, Belgium, 2004.
Venselaar J.L.M. and Welleweerd J. Application of a test package in an intercomparison of
the performance of treatment planning systems used in a clinical setting. Radiother. Oncol.
60: 203-213, 2001.
Venselaar J.L.M., Welleweerd J. and Mijnheer B.J. Tolerances for the accuracy of photon
beam dose calculations of treatment planning systems. Radiother. Oncol. 60: 191-201, 2001.