Software Engineering Department
Analysis of PHANTOM images in order
to determine the reliability of
PET/SPECT cameras
Authors: Archil Pirmisashvili (ID: 317881407), Gleb Orlikov (ID: 317478014)
Supervisor: Dr. Miri Cohen Weiss
Table of contents:

1. Introduction
2. Theory
   2.1 Background
      2.1.1 Image registration by maximization of combined mutual information and gradient information [1]
      2.1.2 Multi-modal volume registration by maximization of mutual information [2]
      2.1.3 An automatic technique for finding and localizing externally attached markers in CT and MR volume images of the head [3]
      2.1.4 Use of the Hough transformation to detect lines and curves in pictures [4]
   2.2 Detailed description
      2.2.1 Introduction
      2.2.2 The problem
      2.2.3 Our solution to the problem
   2.3 Expected results
3. Software engineering documents
   3.1 Requirements (use case)
   3.2 GUI
   3.3 Program structure – architecture, design
      3.3.1 UML class diagram
      3.3.2 Sequence diagram
      3.3.3 Activity diagram
   3.4 Testing plan
      3.4.1 Test scenario: Main interface
      3.4.2 Test scenario: Program Option
      3.4.3 Test scenario: Mask Generator
      3.4.4 Test scenario: DICOM images selection
      3.4.5 Test scenario: Manual correction
4. Result and conclusion
   4.1 QA testing process
   4.2 Problems and solutions
      4.2.1 Working with a set of DICOM images
      4.2.2 Creation of the PET/CT mask
      4.2.3 Finding the best slices
      4.2.4 Fitting the mask to the best slice
      4.2.5 Retrieving SUV (standardized uptake values) from DICOM images
   4.3 Running/Simulation
      4.3.1 Simulation 1
      4.3.2 Simulation 2
      4.3.3 Simulation 3
   4.4 Final conclusion
References
1. Introduction
Imaging visualization methods are widely used in modern medicine. These methods make it possible to obtain images of normal and pathological human organs and systems. Besides CT and MRI, nuclear diagnostics is a branch of imaging diagnostics in which multi-modality imaging techniques such as positron emission tomography (PET) and single-photon emission computed tomography (SPECT) are widely used. Both methods use gamma cameras to produce 2D/3D images. Maintaining these cameras requires periodic QA tests, and today this procedure takes at least 4 hours per camera. Our goal is therefore to automate the procedure and reduce this time.
Nuclear medicine encompasses both diagnostic imaging and treatment of disease, and may also
be referred to as molecular medicine or molecular imaging & therapeutics. Nuclear medicine uses
certain properties of isotopes and the energetic particles emitted from radioactive material to
diagnose or treat various pathologies. In contrast to the typical concept of anatomic radiology,
nuclear medicine enables assessment of physiology. This function-based approach to medical
evaluation has useful applications in most subspecialties, notably oncology, neurology, and
cardiology. Gamma cameras are used in e.g. scintigraphy, SPECT and PET to detect regions of
biologic activity that may be associated with disease. A relatively short-lived isotope, such as 123I, is
administered to the patient. Isotopes are often preferentially absorbed by biologically active tissue
in the body, and can be used to identify tumors or fracture points in bone. Images are acquired
after collimated photons are detected by a crystal that gives off a light signal, which is in turn
amplified and converted into count data.
Scintigraphy is a form of diagnostic test wherein radioisotopes are taken internally, for example
intravenously or orally. Then, gamma cameras capture and form two-dimensional images from the
radiation emitted by the radiopharmaceuticals.
Single-Photon Emission Computed Tomography (SPECT) is a 3D tomographic technique that
uses gamma camera data from many projections and can be reconstructed in different planes. A
dual detector head gamma camera combined with a CT scanner, which provides localization of
functional SPECT data, is termed a
SPECT/CT camera, and has shown utility in advancing the field of molecular imaging. In most
other medical imaging modalities, energy is passed through the body and the reaction or result is
read by detectors. In SPECT imaging, the patient is injected with a radioisotope, most commonly
thallium-201 (201Tl), technetium-99m (99mTc), iodine-123 (123I), or gallium-67 (67Ga). Gamma rays are
emitted through the body as the natural decaying process of these isotopes takes place. The
emissions of the gamma rays are captured by detectors that surround the body. This essentially
means that the human is now the source of the radioactivity, rather than an external device such as
an X-ray or CT scanner.
Positron emission tomography (PET) uses coincidence detection to image functional processes.
A short-lived positron-emitting isotope, such as 18F, is incorporated into an organic substance such
as glucose, creating 18F-fluorodeoxyglucose (FDG), which can be used as a marker of metabolic
utilization. Images of activity distribution throughout the body can show rapidly growing tissue, like
tumor, metastasis, or infection. PET images can be viewed in comparison to computed
tomography scans to determine an anatomic correlate. Modern scanners combine PET with a CT,
or even MRI, to optimize the image reconstruction involved with positron imaging. This is
performed on the same equipment without physically moving the patient off of the gantry. The
resultant hybrid of functional and anatomic imaging information is a useful tool in non-invasive
diagnosis and patient management.
Figure 1: Positron annihilation event in PET
Imaging phantoms, or simply "phantoms", are specially designed objects that are scanned or
imaged in the field of medical imaging to evaluate, analyze, and tune the performance of various
imaging devices. These objects are more readily available and provide more consistent results
than the use of a living subject or cadaver, and likewise avoid subjecting a living subject to direct
risk. Phantoms were originally employed for use in 2D x-ray based imaging techniques such as
radiography or fluoroscopy, though more recently phantoms with desired imaging characteristics
have been developed for 3D techniques such as MRI, CT, Ultrasound, PET, and other imaging
methods or modalities.
Figure 2: PHANTOM
A phantom used to evaluate an imaging device should respond in a similar manner to how human
tissues and organs would act in that specific imaging modality. For instance, phantoms made for
2D radiography may hold various quantities of x-ray contrast agents with similar x-ray absorbing
properties to normal tissue to tune the contrast of the imaging device or modulate the patients’
exposure to radiation. In such a case, the radiography phantom would not necessarily need to
have similar textures and mechanical properties since these are not relevant in x-ray imaging
modalities. However, in the case of ultrasonography, a phantom with similar rheological and
ultrasound scattering properties to real tissue would be essential, but x-ray absorbing properties
would not be needed.
Physicists perform phantom studies on PET and SPECT cameras, each study producing a stack of
images that shows the 3D radioactive distribution as reconstructed by the camera. The results can be
measured and compared either to the ideal results or to previous results.
Aim of the QA test: Tomographic image quality is determined by a number of different performance
parameters, primarily the scanner sensitivity, tomographic uniformity, contrast and spatial
resolution, and the process that is used to reconstruct the images. Because of the complexity of
the variation in the uptake of radiopharmaceuticals and the large range of patient sizes and
shapes, the characteristics of radioactivity distributions can vary greatly and a single study with a
phantom cannot simulate all clinical imaging conditions. Cameras produce images simulating
those obtained in a total body imaging study involving both hot and cold lesions. Image quality is
assessed by calculating image contrast and background variability ratios for both hot and cold
spheres. This test allows assessment of the accuracy of the absolute quantification of radioactivity
concentration in the uniform volume of interest inside the phantom.
2. Theory
2.1 Background
The goal of the test is to determine the two "best" slices from the collection of image slices
provided by the camera. The best slice is the image slice that best matches the template of the
ROI (regions of interest). Accordingly, we first need to define the template and then use it to
find the two "best" slices. The template contains the positions of the hot and cold ROI cylinders.
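As an illustration of this selection step, consider the following minimal sketch (the mean-intensity scoring criterion, the array layout, and the function names are assumptions of this sketch, not the project's actual implementation):

```python
import numpy as np

def slice_score(img, mask):
    # Hypothetical criterion: mean intensity inside the binary ROI template
    return float(img[mask].mean())

def best_slices(stack, mask, k=2):
    # stack: 3-D array of 2-D slices; returns indices of the k best matches
    scores = [slice_score(s, mask) for s in stack]
    return sorted(np.argsort(scores)[-k:].tolist())
```

A real template would also encode the hot/cold cylinder positions; here the mask stands in for it.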
There are some algorithms that work with CT and PET images:
2.1.1 Image registration by maximization of combined mutual information and gradient
information [1]:
Mutual information has developed into an accurate measure for rigid and affine mono- and multi-modality image registration. The robustness of the measure is questionable, however; a possible reason for this is the absence of spatial information in the measure. The paper proposes to include spatial information by combining mutual information with a term based on the image gradients of the images to be registered. The gradient term not only seeks to align locations of high gradient magnitude, but also aims for a similar orientation of the gradients at these locations.

Method: The mutual information I of two images A and B combines the marginal and joint entropies of the images in the following manner:
I(A, B) = H(A) + H(B) − H(A, B)

Here, H(A) and H(B) denote the separate entropy values of A and B respectively, and H(A, B) is the joint entropy, i.e. the entropy of the joint probability distribution of the image intensities. Correct registration of the images is assumed to be equivalent to maximization of the mutual information of the images. This implies a balance between minimization of the joint entropy and maximization of the marginal entropies. Recently, it was shown that the mutual information measure is sensitive to the amount of overlap between the images, and normalized mutual information measures were introduced to overcome this problem. Examples of such measures are the normalized mutual information introduced by Studholme:

Y(A, B) = (H(A) + H(B)) / H(A, B)

and the entropy correlation coefficient used by Maes:

ECC(A, B) = 2 I(A, B) / (H(A) + H(B))
These two measures have a one-to-one correspondence.
Image locations with a strong gradient are assumed to denote a transition of tissues, which
are locations of high information value. The gradient is computed on a certain spatial scale.
We have extended mutual information measures (both standard and normalized) to include
spatial information that is present in each of the images. This extension is accomplished by
multiplying the mutual information with a gradient term. The gradient term is based not only on
the magnitude of the gradients, but also on the orientation of the gradients.
The gradient vector is computed for each sample point x ={x1, x2, x3} in one image and its
corresponding point in the other image, x`, which is found by geometric transformation of
x. The three partial derivatives that together form the gradient vector are calculated by
convolving the image with the appropriate first derivatives of a Gaussian kernel of scale σ. The
angle α_x,x′(σ) between the gradient vectors is defined by:

α_x,x′(σ) = arccos( (∇x(σ) · ∇x′(σ)) / (|∇x(σ)| |∇x′(σ)|) )
with ∇x(σ) denoting the gradient vector at point x of scale σ and | · | denoting magnitude.
The proposed registration measure is defined by:

I_new(A, B) = G(A, B) I(A, B)

with

G(A, B) = Σ_{(x,x′) ∈ (A ∩ B)} w(α_x,x′(σ)) min(|∇x(σ)|, |∇x′(σ)|)
Similarly, the combination of normalized mutual information and gradient information is defined:

Y_new(A, B) = G(A, B) Y(A, B)
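A 2-D sketch of the gradient term follows. It assumes SciPy's Gaussian-derivative filters and the weighting function w(α) = (cos(2α) + 1) / 2, which favours parallel or anti-parallel gradients; the scale σ and the small ε guard against division by zero are choices of this sketch:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def _grad(img, sigma):
    # Gaussian-derivative gradient of a 2-D image at scale sigma
    return (gaussian_filter(img, sigma, order=(0, 1)),
            gaussian_filter(img, sigma, order=(1, 0)))

def gradient_term(A, B, sigma=1.5):
    ax, ay = _grad(A, sigma)
    bx, by = _grad(B, sigma)
    magA, magB = np.hypot(ax, ay), np.hypot(bx, by)
    cos_a = (ax * bx + ay * by) / (magA * magB + 1e-12)
    alpha = np.arccos(np.clip(cos_a, -1.0, 1.0))  # angle between gradients
    w = (np.cos(2 * alpha) + 1) / 2               # assumed weighting w(alpha)
    return float((w * np.minimum(magA, magB)).sum())
```

The combined measure is then the product G(A, B) · I(A, B), evaluated at each candidate transformation.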
2.1.2 Multi-modal volume registration by maximization of mutual Information [2]:
This approach works directly with image data; no pre-processing or segmentation is required. The technique is nevertheless more flexible and robust than other intensity-based techniques such as correlation, and it has an efficient implementation based on stochastic approximation. Experiments are presented that demonstrate the approach registering magnetic resonance (MR) images with computed tomography (CT) images and with positron emission tomography (PET) images.

Consider the problem of registering two different MR images of the same individual. When perfectly aligned these signals should be quite similar. One simple measure of the quality of a hypothetical registration is the sum of squared differences between voxel values. This measure can be motivated with a probabilistic argument: if the noise inherent in an MR image were Gaussian, independent and identically distributed, then the sum of squared differences is negatively proportional to the log-likelihood that the two images are correctly registered. Unfortunately, squared difference and the closely related operation of correlation are not effective measures for the registration of different modalities. Even when perfectly registered, MR and CT images taken from the same individual are quite different; in fact MR and CT are useful in conjunction precisely because they are different. This is not to say that MR and CT images are completely unrelated: they are, after all, both informative measures of the properties of human tissue. Using a large corpus of data, or some physical theory, it might be possible to construct a function F(·) that predicts the CT value from the corresponding MR value, at least approximately. Using F we could evaluate registrations by computing F(MR) and comparing it via the sum of squared differences (or correlation) with the CT image. If the CT and MR images were not correctly registered, then F would not be good at predicting one from the other.

While it might theoretically be possible to find such an F and use it in this fashion, in practice the prediction of CT from MR is a difficult and under-determined problem. In the following derivation, the two volumes of image data that are to be
registered as the reference volume and the test volume. A voxel of the reference volume is
denoted u(x), where the x are the coordinates of the voxel. A voxel of the test volume is
denoted similarly as v(x). Given that T is a transformation from the coordinate frame of the
reference volume to the test volume, v(T (x)) is the test volume voxel associated with the
reference volume voxel u(x). Note that in order to simplify some of the subsequent equations
we will use T to denote both the transformation and its parameterization.
We seek an estimate of the transformation that registers the reference volume u and test
volume v by maximizing their mutual information:
(1) T̂ = arg max_T I(u(x), v(T(x)))
Mutual information is defined in terms of entropy in the following way:
(2) 𝐼 (𝑢(𝑥), 𝑣(𝑇(𝑥))) ≡ ℎ(𝑢(𝑥)) + ℎ (𝑣(𝑇(𝑥))) − ℎ(𝑢(𝑥), 𝑣(𝑇(𝑥)))
h(·) is the entropy of a random variable, defined as h(x) ≡ −∫ p(x) ln p(x) dx, while
the joint entropy of two random variables x and y is h(x, y) ≡ −∫ p(x, y) ln p(x, y) dx dy.
Entropy can be interpreted as a measure of uncertainty, variability, or complexity.
The mutual information defined in Equation (2) has three components. The first term on the
right is the entropy in the reference volume, and is not a function of T. The second term is the
entropy of the part of the test volume into which the reference volume projects. It encourages
transformations that project u into complex parts of v. The third term, the (negative) joint
entropy of u and v, contributes when u and v are functionally related.
The entropies described above are defined in terms of integrals over the probability densities
associated with the random variables u(x) and v(T (x)). When registering medical image data
we will not have direct access to these densities.
The first step in estimating entropy from a sample is to approximate the underlying probability
density p(z) by a superposition of functions centered on the elements of a sample A drawn
from z:
(3) p(z) ≈ P*(z) ≡ (1/N_A) Σ_{z_j ∈ A} R(z − z_j)

where N_A is the number of trials in the sample A and R is a window function which integrates
to 1. P*(z) is widely known as the Parzen window density estimate.
Unfortunately, the entropy integral cannot be evaluated exactly; instead it is approximated as a sample mean:

(4) h(z) ≈ −E_z[ln P*(z)] ≈ −(1/N_B) Σ_{z_i ∈ B} ln P*(z_i)

where N_B is the size of a second sample B. The sample mean converges toward the true
expectation at a rate proportional to 1/√N_B.
We may now write an approximation for the entropy of a random variable z as follows:

(5) h(z) ≈ h*(z) ≡ −(1/N_B) Σ_{z_i ∈ B} ln (1/N_A) Σ_{z_j ∈ A} G_ψ(z_i − z_j)

where G_ψ is the Gaussian density function:

G_ψ(z) ≡ (2π)^(−n/2) |ψ|^(−1/2) exp(−(1/2) z^T ψ^(−1) z)
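A scalar (1-D) sketch of the estimator h*(z) of Equation (5), with a Gaussian window of assumed width σ:

```python
import numpy as np

def gaussian(z, sigma):
    # 1-D Gaussian density G_psi with psi = sigma^2
    return np.exp(-0.5 * (z / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)

def parzen_entropy(A, B, sigma=0.25):
    # h*(z) of Eq. (5): A and B are two 1-D samples drawn from z
    diffs = B[:, None] - A[None, :]
    p_est = gaussian(diffs, sigma).mean(axis=1)  # (1/N_A) sum_j G(z_i - z_j)
    return float(-np.log(p_est).mean())          # -(1/N_B) sum_i ln P*(z_i)
```

As expected of an entropy estimate, a widely spread sample yields a larger value than a tightly concentrated one.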
Next we examine the entropy of v(T (x)), which is a function of the transformation T . In order
to find a maximum of entropy or mutual information, we may ascend the gradient with respect
to the transformation T. After some manipulation, the derivative of the entropy may be written
as follows:
(6) (d/dT) h*(v(T(x))) = (1/N_B) Σ_{x_i ∈ B} Σ_{x_j ∈ A} W_v(v_i, v_j) (v_i − v_j)^T ψ^(−1) (d/dT)(v_i − v_j)
Using the following definitions:

v_i ≡ v(T(x_i)),   v_j ≡ v(T(x_j)),   v_k ≡ v(T(x_k))

and

W_v(v_i, v_j) ≡ G_ψv(v_i − v_j) / Σ_{x_k ∈ A} G_ψv(v_i − v_k)
The entropy approximation described in Equation (5) may now be used to evaluate the mutual
information between the reference volume and the test volume [Equation (2)]. In order to seek
a maximum of the mutual information, we will calculate an approximation to its derivative,
(d/dT) I(T) ≈ (d/dT) h*(u(x)) + (d/dT) h*(v(T(x))) − (d/dT) h*(u(x), v(T(x)))
Given these definitions we can obtain an estimate for the derivative of the mutual information
as follows:
dI/dT = (1/N_B) Σ_{x_i ∈ B} Σ_{x_j ∈ A} (v_i − v_j)^T [ W_v(v_i, v_j) ψ_v^(−1) − W_w(w_i, w_j) ψ_vv^(−1) ] (d/dT)(v_i − v_j)

where w_i ≡ [u(x_i), v(T(x_i))]^T denotes a joint intensity sample and ψ_vv is the v-component of the joint covariance ψ_w.
The weighting factors are defined as:

W_v(v_i, v_j) ≡ G_ψv(v_i − v_j) / Σ_{x_k ∈ A} G_ψv(v_i − v_k)

W_w(w_i, w_j) ≡ G_ψw(w_i − w_j) / Σ_{x_k ∈ A} G_ψw(w_i − w_k)
If we are to increase the mutual information, then the first term in the brackets may be
interpreted as acting to increase the squared distance between pairs of samples that are
nearby in test volume intensity, while the second term acts to decrease the squared distance
between pairs of samples whose intensities are nearby in both volumes. It is important to
emphasize that these distances are in the space of intensities, rather than coordinate
locations.
The term (d/dT)(v_i − v_j) will generally involve gradients of the test volume intensities and the
derivative of the transformed coordinates with respect to the transformation.
We seek a local maximum of mutual information by using a stochastic analog of gradient
descent. Steps are repeatedly taken that are proportional to the approximation of the
derivative of the mutual information with respect to the transformation:
Repeat:
    A ← {sample of size N_A drawn from x}
    B ← {sample of size N_B drawn from x}
    T ← T + λ (dI/dT)
The parameter λ is called the learning rate. The above procedure is repeated a fixed number
of times or until convergence is detected. When using this procedure, some care must be
taken to ensure that the parameters of transformation remain valid.
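The loop itself can be sketched as follows. In the real algorithm dI/dT would be the Parzen-based derivative estimate above; here a toy derivative of a simple concave objective stands in, and the learning rate, sample sizes, and iteration count are illustrative assumptions:

```python
import numpy as np

def stochastic_ascent(dI_dT, T0, coords, lam=0.01, iters=200,
                      n_A=50, n_B=50, seed=0):
    # Repeatedly draw samples A, B of voxel coordinates and step T along
    # the estimated derivative of the mutual information
    rng = np.random.default_rng(seed)
    T = float(T0)
    for _ in range(iters):
        A = rng.choice(coords, size=n_A)
        B = rng.choice(coords, size=n_B)
        T += lam * dI_dT(T, A, B)
    return T

def toy_derivative(T, A, B):
    # Stand-in for dI/dT: noisy gradient of the concave objective -(T - 3)^2
    return -2.0 * (T - 3.0) + (A.mean() - B.mean())

T_hat = stochastic_ascent(toy_derivative, T0=0.0,
                          coords=np.linspace(-1.0, 1.0, 1000))
```

Despite the per-step noise, the iterates settle near the maximizer, which is the behaviour the stochastic approximation argument relies on.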
In addition to the learning rate λ, the covariance matrices of the Parzen window functions are
important parameters of this technique. It is not difficult to determine suitable values for these
parameters by empirical adjustment, and that is the method we usually use. Referring back to
Equation (3), ψ should be chosen so that P∗(z) provides the best estimate for p(z). In other
words ψ is chosen so that a sample B has the maximum possible likelihood. Assuming that
the trials in B are chosen independently, the log likelihood of ψ is:
(7) ln Π_{z_i ∈ B} P*(z_i) = Σ_{z_i ∈ B} ln P*(z_i)
This equation bears a striking resemblance to Equation (4), and in fact the log likelihood of ψ
is maximized precisely when the entropy estimator h∗(z) is minimized.
It was assumed that the covariance matrices are diagonal:

(8) ψ = DIAG(σ_1², σ_2², …)
Following a derivation almost identical to the one described above, an equation analogous to
Equation (6) is derived:

(9) (d/dσ_k) h*(z) = (1/N_B) Σ_{z_b ∈ B} Σ_{z_a ∈ A} W_z(z_b, z_a) (1/σ_k) ( [z_b − z_a]_k² / σ_k² − 1 )
where [z]_k is the k-th component of the vector z. In practice both the transformation T and the
covariance ψ can be adjusted simultaneously; so while T is adjusted to maximize the mutual
information, I (u(x), v(T (x))), ψ is adjusted to minimize h∗(v(T (x))).
2.1.3 An Automatic Technique for Finding and Localizing Externally Attached Markers
in CT and MR Volume Images of the Head [3]:
Different imaging modalities provide different types of information that can be combined to aid
diagnosis and surgery. Bone, for example, is seen best on X-ray computed tomography (CT)
images, while soft-tissue structures are seen best on magnetic resonance (MR) images.
Because of the complementary nature of the information in these two modalities, the
registration of CT images of the head with MR images is of growing importance for diagnosis
and for surgical planning. Furthermore, registration of images with patient anatomy is used in
new interactive image-guided surgery techniques to track in real time the changing position of
a surgical instrument or probe on a display of preoperative image sets of the patient. The
definition of registration as the determination of a one-to-one mapping between the
coordinates in one space and those in another, such that points in the two spaces that
correspond to the same anatomic point are mapped to each other.
Point-based registration involves the determination of the coordinates of corresponding points
in different images and the estimation of the geometrical transformation using these
corresponding points. The points may be either intrinsic, or extrinsic. Intrinsic points are
derived from naturally occurring features, e.g., anatomic landmark points. Extrinsic points are
derived from artificially applied markers, e.g., tubes containing copper sulfate. We use external
fiducial markers that are rigidly attached through the skin to the skull. The points used for
registration are called fiducial points or fiducials, as distinguished from "fiducial markers",
and the geometric centers of the markers are picked as the fiducials. Determining the coordinates
of the fiducials, which we call fiducial localization, may be done in image space or in physical
space. Several techniques have been developed for determining the physical-space coordinates of
external markers.
The algorithm finds markers in image volumes of the head. A three-dimensional (3-D) image
volume typically consists of a stack of two-dimensional (2-D) image slices. The algorithm finds
markers whose image intensities are higher than their surroundings. It is also tailored to find
markers of a given size and shape. All of the marker may be visible in the image, or it may
consist of both imageable and nonimageable parts. It is the imageable part that is found by the
algorithm, and it is the size and shape of this imageable part that is important to the algorithm.
Henceforth when we use the term “marker” we are referring to only the imageable portion of
the marker. Three geometrical parameters specify the size and shape of the marker
adequately for the purposes of this algorithm: 1) the radius rm, of the largest sphere that can
be inscribed within the marker, 2) the radius Rm, of the smallest sphere that can circumscribe
the marker, and 3) the volume Vm, of the marker. Cylindrical markers with diameter d and
height h were used for the clinical experiments. For these markers:

r_m = min(d, h) / 2,   R_m = √(d² + h²) / 2,   V_m = π d² h / 4
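These three parameters follow directly from d and h; a one-function sketch:

```python
import math

def marker_params(d, h):
    # Geometric parameters of a cylindrical marker of diameter d, height h
    r_m = min(d, h) / 2                    # radius of largest inscribed sphere
    R_m = math.sqrt(d ** 2 + h ** 2) / 2   # radius of smallest circumscribing sphere
    V_m = math.pi * d ** 2 * h / 4         # cylinder volume
    return r_m, R_m, V_m
```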
First, we must search the entire image volume to find marker-like objects. Second, for each
marker-like object, we must decide whether it is a true marker or not and accurately localize
the centroid for each true one. Therefore, the algorithm consists of two parts. Part one finds
“candidate voxels”. Each candidate voxel lies within a bright region that might be the image of
a marker. The requirements imposed by Part One are minimal with the result that, for the
M markers in that image, there are typically many more than M candidate points identified.
Part Two selects from these candidates M points that are most likely to lie within actual
markers and provides a centroid for each one. Part One is designed so that it is unlikely to
miss a true marker. Part Two is designed so that it is unlikely to accept a false marker.
Part One takes the following input: The image volume of the head of a patient. The type of
image (CT or MR). The voxel dimensions ∆𝑥𝑣, ∆𝑦𝑣, and ∆𝑧𝑣. The marker’s geometrical
parameters rm, Rm and Vm. The intensity of an empty voxel. Part One produces as output a set
of candidate voxels.
Part Two takes the same input as Part One, plus two additional pieces of information: the set
of candidate voxels produced by Part One and the number of external markers M known a
priori to be present in the image. Part Two produces as output a list of M “fiducial points”.
Each fiducial point is a 3-D position (x_f, y_f, z_f) that is an estimate of the centroid of a marker.
The list is ordered with the first member of the list being most likely to be a marker and the last
being the least likely.
Part One operates on the entire image volume.
1. If the image is an MR image, a 2-D, three-by-three median filter is applied within each
slice to reduce noise.
2. To speed up the search, a new, smaller image volume is formed by subsampling. The
subsampling rate in x is calculated as ⌊r_m / Δx_v⌋. The subsampling rates in y and z are
similarly calculated.
3. An intensity threshold is determined. For CT images, the threshold is the one that
minimizes the within-group variance. For MR images, the threshold is computed as the
mean of two independently determined thresholds. The first is the threshold that
minimizes the within-group variance. The second is the threshold that maximizes the
Kullback information value.
4. This threshold is used to produce a binary image volume with higher intensities in the
foreground. Foreground voxels are typically voxels that are part of the image of markers
or of the patient’s head.
5. If the original image is an MR image, spurious detail tends to appear in the binary image
produced by the previous step. The spurious detail is composed of apparent holes in the
head caused by regions that produce weak signal, such as the skull and sinuses. Thus, if
the original image is an MR image, these holes in the binary image are filled. In this step
each slice is considered individually. A foreground component is a two-dimensionally
connected set of foreground voxels. The holes are background regions completely
enclosed within a slice by a single foreground component. This step reduces the number
of false markers.
6. Two successive binary, 2-D, morphological operations are performed on each slice. The
operations taken together have the effect of removing small components and small
protrusions on large components. In particular, the operations are designed to remove
components and protrusions whose cross sections are smaller than or equal to the largest
cross section of a marker. The operations are erosion and dilation, in that order. The
structuring element is a square. The x dimension (in voxels) of the erosion structuring
element is calculated as ⌈2R_m / Δx_v′⌉ (⌈·⌉ is the ceiling function; the prime refers to the
subsampled image). The y dimension is similarly calculated. The size of the dilation
structuring element in each dimension is the size of the erosion element plus one.
7. The binary image that was output by the previous step is subtracted from the binary image
that was input to the previous step. That is, a new binary image is produced in which
those voxels that were foreground voxels in the input image but background in the output
image are set to foreground. The remaining voxels are set to background. The result is a
binary image consisting only of the small components and protrusions that were removed
in the previous step.
8. For the entire image volume, the foreground is partitioned into 3-D connected
components. The definition of connectedness can be varied. We have found that including
the eight 2-D eight-connected neighbors within the slice plus the two 3-D six-connected
neighbors on the neighboring slices works well for both CT and MR images.
9. The intensity-weighted centroid of each selected component is determined using the voxel
intensities in the original image. The coordinates of the centroid position (xc, yc, zc) are
calculated independently as follows:
𝑥𝑐 = ∑𝑖(𝐼𝑖 − 𝐼0)𝑥𝑖 / ∑𝑖(𝐼𝑖 − 𝐼0),  𝑦𝑐 = ∑𝑖(𝐼𝑖 − 𝐼0)𝑦𝑖 / ∑𝑖(𝐼𝑖 − 𝐼0),  𝑧𝑐 = ∑𝑖(𝐼𝑖 − 𝐼0)𝑧𝑖 / ∑𝑖(𝐼𝑖 − 𝐼0)
10. The voxels that contain the points (xc, yc, zc) are identified.
The voxels identified in the last step are the candidate voxels.
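The Step 9 computation can be sketched as follows (a minimal NumPy illustration; the function name and the sample values are ours, not part of the original algorithm description):

```python
import numpy as np

def intensity_weighted_centroid(coords, intensities, i0):
    """Intensity-weighted centroid (Step 9): each voxel position is
    weighted by its intensity minus the background level I_0."""
    w = intensities - i0                              # weights (I_i - I_0)
    return (w[:, None] * coords).sum(axis=0) / w.sum()

# Two equally weighted voxels -> the centroid is their midpoint.
c = intensity_weighted_centroid(
    np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]]),    # (x, y, z) positions
    np.array([10.0, 10.0]),                          # original intensities
    4.0)                                             # background level I_0
```

The subpixel centroid is what gives the technique its localization accuracy; the containing voxel found in Step 10 is only used as the seed for Part Two.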
The steps of Part Two: Part Two operates on a region of the original image around each candidate voxel. It is desirable to use the smallest region possible in order to improve speed. The region must contain all voxels whose centers are closer to the center of the candidate voxel than the longest marker dimension (2Rm), plus all voxels that are adjacent to these voxels. For convenience, we use a rectangular parallelepiped that is centered about the candidate voxel. The x dimension (in voxels) is calculated as 2⌈2𝑅𝑚/∆𝑥𝑣⌉ + 3. The 3 represents the center voxel, plus an adjacent voxel on each end. The y and z dimensions are similarly calculated. For each of these regions Part Two performs the following steps:
1. It is determined whether or not there exists a “suitable” threshold for the candidate voxel.
This determination can be made by a brute-force check of each intensity value in the
available range of intensities. A suitable threshold is defined as follows. For
a given threshold the set of foreground (higher-intensity) voxels that are three-
dimensionally connected to the candidate voxel are identified. The threshold is considered
suitable if the size and shape of this foreground component is sufficiently similar to that of
a marker. There are two rules that determine whether the size and shape of the
component are sufficiently similar.
a) The distance from the center of the candidate voxel to the center of the most distant
voxel of the component must be less than or equal to the longest marker dimension
(2Rm).
b) The volume, Vc, of the component, determined by counting its voxels and multiplying
by the volume of a single voxel 𝑉𝑣 = ∆𝑥𝑣 × ∆𝑦𝑣 × ∆𝑧𝑣, must be within the range
[𝛼𝑉𝑚, 𝛽𝑉𝑚].
2. If no such threshold exists, the candidate point is discarded. If there are multiple suitable
thresholds, the smallest one (which produces the largest foreground component) is
chosen in order to maximally exploit the intensity information available within the marker.
3. If the threshold does exist, the following steps are taken:
a) The intensity-weighted centroid of the foreground component is determined using
the voxel intensities in the original image. The coordinates of the centroid position
(xf, yf, zf ) are calculated as in Step 9 of Part One of the algorithm but with the
foreground component determined in Step 1.
b) The average intensity of the voxels in the foreground component is calculated
using the voxel intensities in the original image.
4. The voxel that contains the centroid (xf, yf, zf) is iteratively fed back to Step 1 of Part Two.
If two successive iterations produce the same centroid, the centroid position and its
associated average intensity are recorded. If two successive iterations have not produced
the same centroid by the fourth iteration, the candidate is discarded.
The centroid positions (xf, yf, zf) are ranked according to the average intensity of their components. The M points with the highest intensities are declared to be fiducial points and are output in order by rank. A candidate with a higher intensity is considered more likely to be a fiducial point.
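The two suitability rules of Part Two, Step 1 can be sketched as follows (a hypothetical helper; the names, units and sample parameters are our own assumptions):

```python
import numpy as np

def threshold_suitable(coords, candidate, spacing, r_m, v_m, alpha, beta):
    """Decide whether the foreground component connected to the candidate
    voxel is sufficiently marker-like (Part Two, Step 1).

    coords: (N, 3) voxel indices of the component; candidate: (3,) voxel
    index of the candidate; spacing: (dx, dy, dz) voxel dimensions;
    r_m: marker radius; v_m: marker volume; [alpha, beta]: volume bounds.
    """
    sp = np.asarray(spacing, dtype=float)
    # Rule (a): the farthest component voxel must lie within the longest
    # marker dimension (2*R_m) of the candidate voxel's center.
    dists = np.linalg.norm((np.asarray(coords) - candidate) * sp, axis=1)
    if dists.max() > 2.0 * r_m:
        return False
    # Rule (b): the component volume V_c must fall in [alpha*V_m, beta*V_m].
    v_c = len(coords) * sp.prod()
    return alpha * v_m <= v_c <= beta * v_m

# A two-voxel component next to the candidate, with unit voxels and a
# marker of radius 2 and volume 2, satisfies both rules.
ok = threshold_suitable([[5, 5, 5], [6, 5, 5]], np.array([5, 5, 5]),
                        (1.0, 1.0, 1.0), r_m=2.0, v_m=2.0,
                        alpha=0.5, beta=1.5)
```

Step 2 would then scan thresholds from low to high and keep the smallest one for which this check passes.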
2.1.4 Use of the Hough transformation to detect lines and curves in pictures [4]:
The set of all straight lines in the picture plane constitutes a two-parameter family. If we fix a parameterization for the family, then an arbitrary straight line can be represented by a single point in the parameter space. For reasons that will become obvious, we prefer the so-called normal parameterization. As illustrated in Fig. 3, this parameterization specifies a straight line by the angle 𝜃 of its normal and its algebraic distance 𝑟 from the origin. The equation of a line corresponding to this geometry is:
𝑥𝑐𝑜𝑠𝜃 + 𝑦𝑠𝑖𝑛𝜃 = 𝑟
If we restrict 𝜃 to the interval [0,π), then the normal parameters for a line are unique. With this restriction, every line in the x-y plane corresponds to a unique point in the 𝜃 − 𝑟 plane. Suppose, now, that we have some set {(𝑥1, 𝑦1), … , (𝑥𝑛 , 𝑦𝑛)} of n figure points and we want to find a set of straight lines that fit them. We transform the points (𝑥𝑖 , 𝑦𝑖) into the sinusoidal curves in the 𝜃 − 𝑟 plane defined by:
(1) 𝑟 = 𝑥𝑖𝑐𝑜𝑠𝜃 + 𝑦𝑖𝑠𝑖𝑛𝜃

It is easy to show that the curves corresponding to collinear figure points have a common point of intersection. This point in the 𝜃 − 𝑟 plane, say (𝜃0, 𝑟0), defines the line passing through the collinear points. Thus, the problem of detecting collinear points can be converted to the problem of finding concurrent curves.
Figure 3: The normal parameters for a line
A dual property of the point-to-curve transformation can also be established. Suppose we have a set of points in the 𝜃 − 𝑟 plane, all lying on the curve:
𝑟 = 𝑥0𝑐𝑜𝑠𝜃 + 𝑦0𝑠𝑖𝑛𝜃
Then it is easy to show that all these points correspond to lines in the x-y plane passing through the point (𝑥0, 𝑦0). We can summarize these interesting properties of the point-to-curve transformation as follows:
1. A point in the picture plane corresponds to a sinusoidal curve in the parameter plane.
2. A point in the parameter plane corresponds to a straight line in the picture plane.
3. Points lying on the same straight line in the picture plane correspond to curves
through a common point in the parameter plane.
4. Points lying on the same curve in the parameter plane correspond to lines through
the same point in the picture plane.
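Property 3 is what makes the transformation usable for line detection: one accumulates votes in a discretized 𝜃 − 𝑟 plane and looks for peaks. A minimal sketch (the bin counts and the r range are illustrative assumptions, not values from [4]):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_r=200, r_max=100.0):
    """Vote in a discretized (theta, r) accumulator: each figure point
    (x_i, y_i) traces the sinusoid r = x_i*cos(theta) + y_i*sin(theta),
    and collinear points meet in one accumulator cell (property 3)."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_theta, n_r), dtype=int)
    for x, y in points:
        r = x * np.cos(thetas) + y * np.sin(thetas)   # sinusoid (1)
        bins = np.round((r + r_max) / (2.0 * r_max) * (n_r - 1)).astype(int)
        ok = (bins >= 0) & (bins < n_r)
        acc[np.arange(n_theta)[ok], bins[ok]] += 1
    return thetas, acc

# Three points on the vertical line x = 5: all three sinusoids pass
# through (theta = 0, r = 5), so the accumulator peak has height 3.
thetas, acc = hough_lines([(5.0, 0.0), (5.0, 3.0), (5.0, 7.0)])
```

The circle-detection variant used later in this project works the same way, but votes in a three-parameter (center x, center y, radius) accumulator instead.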
2.2 Detailed description
2.2.1 Introduction:
Physicians use a phantom in the test in order to simulate human organs. They fill its cylinders with different volumes of radiopharmaceutical (depicted in figure 4).
Figure 4: PET phantom viewed from above
The phantom is placed into the PET camera, and the scan begins. As a result of the scan we get a set of slice images (figure 5).
Figure 5: Image slice received from the PET camera
The best slice is the image slice that has no noise and in which all cylinders are clearly visible. So, the physicians need to select the “best slice” from the set of received slices to work with.
They mark all clearly visible cylinders and get minimum, maximum and mean SUV (Standardized Uptake Value) statistics from the marked regions. This data is needed for later calculations, such as ratios.
At the end of the test they produce a report with an attached hard-copy image slice. If all the results meet the criteria, then the camera has passed the test.
2.2.2 The problem is:
1. Define the template – the template is actually the MASK that is applied on the slice image
in order to define the ROIs and to choose the best slice from the set of images.
Figure 6: Applied template mask
2. Fit the template to the PET slice size (scaling, rotating and moving).
3. According to the template, choose the “best” slice from all slices given by the camera.
2.2.3 Our solution to problem is:
1. Find at least three spots in the CT image using the algorithm described in [3]. This algorithm
provides as a result the centroids of all found cylinders. From the Z-coordinate of the centroids
and the known thickness of the slice image we can get the number of the CT slice, in order to
build a template according to the found centroids.
Having the slice, we now need to find all the circles in it, using the Hough
transformation algorithm [4]. The 8 found circles give us the needed template (figure 6).
2. To fit the template to the PET slice size, the following steps are applied:
a) Extract a slice matching the slice found from the PET camera (the template's slice).
b) Color the inner space of the phantom in the image white.
c) Find the center of this circle (the center of the phantom) using the Hough
transformation algorithm [4].
d) Get the size of this white circle and transform (scale) the template to its size.
3. Check all slices against the template in order to find the “best” slice, the one with the
fewest errors and least noise. Get the needed values from the found ROIs. Then do all the
calculations needed for the report. At the end, provide the report with an attached hard-copy image slice.
2.3 Expected results
To illustrate the expected results (the “best” slice image of the phantom, containing “clear” (best-fitted) information), we show two slices. The first one (figure 7) is bad, and the second one (figure 8) is good enough to be an expected result.
Figure 7: Bad slice image (not selected to be the best)
Figure 8: Good slice image (candidate to be the best slice)
We get the best slice image, needed for the QA test, with marked ROIs (figure 9).
Figure 9: Hard copy of final ROIs
3. Software Engineering documents
3.1 Requirements (Use case)
3.2 GUI
This is the main window of the program with filled-in test parameters:
You can change the application settings using the options window:
There is an option to generate MASKs with the MASK generation application:
The program automatically finds the best slice and fits the selected MASK to it. But there is an option to edit the applied MASK if the user does not like how it was applied:
If more than one series is found in the search directory during loading, the program pops up the “series selection” window:
For problem solving there is a help window with all the explanations:
3.3 Program structure – Architecture, Design
3.3.1 UML class diagram
+ CenterClosingCT(img : Image<Gray, Byte>) : Image<Gray, Byt...
+ ClosingImage(img : Image<Gray, Byte>, erodeElement : IntPtr,...
+ ConvertFromImageCoordinates(img : Image<Gray, Byte>, pnt ...
+ FindBestSlice(slices : List<DicomFile>, mask : CircleMask) : Dico...
+ FitCircleMask(img : Image<Gray, Byte>, msk : CircleMask) : Circ...
+ MakeBinaryImage(img : Image<Gray, Byte>, intensityThreshol...
+ SearchPhantomCenter(image : Image<Gray, Byte>, cannyThre...
+ SearchPhantomRadius(image : Image<Gray, Byte>, cannyThre...
- shapes : List<Shape>
+ CircleMask(shapes : List<Shape>)
+ CircleMask(center : PointF, radius : Single, shapes :...
+ lstReturn : List<Di...
- allList : List<DicomF...
+ ChooseSeries(strLi...
- SortList(list : List<D...
- masks : Dictionary<...
- PETimagesList : List...
- PETimagesList3D : L...
- SortList(list : List<D...
- allList : List<DicomF...
+ SliceFitForm(allList ...
- PETimagesList : List...
- SortList(list : List<D...
- shapes : Dictionary...
3.3.2 Sequence diagram
3.3.3 Activity diagram
3.4 Testing plan
This section presents the test scenarios covering the common user requirements.
3.4.1 Test scenario for: Main interface
# | Taken Action | Expected Results | Pass/Fail
1 | Start the application | An empty (cleared-parameters) GUI opens. The application is ready for use. The “Run test” & “Correct Manually” buttons are disabled. All the other GUI components are enabled. | Pass
2 | “File->Program option” | The program options window opens. All the buttons are enabled. Text fields show the paths that the user has defined. | Pass
3 | “File->Exit” | Closes the application. | Pass
4 | “Mask->Generate Mask” | Opens the MASK generation application. All the GUI components are enabled. | Pass
5 | “Help->About” | Opens the “about” window. All the text fields are correctly shown. The “OK” button is enabled. | Pass
6 | “Help->Help” | Opens the “help” (.chm) window. | Pass
7 | Paths “Browse” button | Opens the browse window. All the GUI components are enabled. After the selection, the full path is shown in the program window. | Pass
8 | Test parameter wrong values or empty fields | Pops up an error message. | Pass
9 | Mask combo box clicked | Opens the combo box dialog with the list of MASKs that exist in the MASKs folder. | Pass
10 | “Load images” button with empty paths or no MASK chosen | Pops up an error message. | Pass
11 | “Load images” button with correctly filled paths + chosen MASK | Loads the images. Updates and shows the test log. A progress bar runs during the loading. After the loading is completed, finds the best slices, fits MASKs to them, shows them in the program main window and disables the “Load images” button. The “Run test” & “Correct manually” buttons are enabled. While the images are loading, the “Clear” button is disabled. If during the loading there is more than one series in the DICOM images folder, pops up the selection window (all GUI parameters are correct; the slider is disabled). | Pass
12 | “Correct manually” button | Opens the fit-the-MASK-manually window. All the GUI parameters are enabled. The best slice is shown in the window with the automatically fitted MASK. | Pass
13 | “Clear” button | At every step of the test the button clears all the test parameters. | Pass
14 | “Run test” button | Opens the test result (.pdf) file. The file is filled with all the correct calculation results. | Pass
15 | Exit program button | Closes the application. | Pass
3.4.2 Test scenario for – Program Option
# | Taken Action | Expected Results | Pass/Fail
1 | “Browse” the path button | Opens the browse window. Text fields are disabled and show the path that the user chose during the installation. All the GUI parameters are shown correctly. After the browse selection, the text fields show the chosen path. | Pass
2 | Exit button | Closes the application. | Pass
3.4.3 Test scenario for – Mask Generator
# | Taken Action | Expected Results | Pass/Fail
1 | “File->Load Background” | Opens a file dialog. All the GUI parameters are correct. Shows the background image once the user has selected it. | Pass
2 | “File->Load Mask” | Opens a file dialog. All the GUI parameters are correct. Shows the mask once the user has selected it. | Pass
3 | “File->Save Mask” | Opens the browse dialog to save the (.msk) file of the created Mask. | Pass
4 | “File->Exit” | Closes the application. | Pass
5 | “Help->About” | Opens the about dialog. All the GUI parameters are correct. The OK button is enabled. | Pass
6 | Selection of ROI objects | Highlights the chosen object. Opens the transformation option for this object. | Pass
7 | Mouse right button | Opens the object transformation dialog (if the object was not selected before, the None option is checked). | Pass
8 | Object selected + transformation not selected + (Up/Down key pressed, or left mouse clicked + cursor moved up and down) | Nothing happens. | Pass
9 | Object selected + transformation selected + (Up/Down key pressed, or left mouse clicked + cursor moved up and down) | Transformation of the chosen object works correctly. | Pass
10 | Orientation check box checked (by default) | Shows the PHANTOM outline circle. | Pass
11 | Orientation check box unchecked | Does not show the PHANTOM outline circle. | Pass
12 | ROIs check box checked (by default) | Shows the ROIs circles. | Pass
13 | ROIs check box unchecked | Does not show the ROIs circles. | Pass
14 | Exit button | Closes the application. | Pass
3.4.4 Test scenario for – DICOM images selection
# | Taken Action | Expected Results | Pass/Fail
1 | Combo box | Drops down the found series. Shows images and enables the slider when a series is selected. | Pass
2 | Slider moving | Shows the series DICOM images. | Pass
3 | Exit/Cancel button | Closes the window. Pops up an error message. Stops the loading. | Pass
3.4.5 Test scenario for – Manual correction
# | Taken Action | Expected Results | Pass/Fail
1 | Slider moving | Shows the DICOM slice images in the list of the chosen series. Shows the number of the image in the “Slice” text field. | Pass
2 | Green direction buttons | Move the Mask on the chosen image, according to each button's direction (Up/Down/Left/Right). | Pass
3 | Purple rotation buttons | Rotate the Mask on the chosen image, according to each button's direction (left = counter-clockwise, right = clockwise). | Pass
4 | Scale selection | Scales the Mask on the chosen image. Up values (>0) increase the Mask size, down values (<0) decrease the Mask size. | Pass
5 | OK button | Saves the current mask position and the chosen image as the best slice. Closes the window. | Pass
6 | Exit/Cancel button | Closes the window. | Pass
4. Results and conclusions
During the work on the project, we dealt with a number of problems. In this chapter we describe them and show our solutions.
4.1 QA Testing Process
1. Generate/Choose the PHANTOM Mask.
2. Load Series of 2D/3D image slices from source directories.
3. Find the best slice in each image slice series (2D/3D). As mentioned above, the
best slice is the slice which contains “clear” (best-fitted) information.
4. Fit the masks.
5. Retrieve test values from the ROIs according to the chosen Masks.
6. Generate report.
4.2 Problems and solutions
4.2.1 Working with set of DICOM images:
Problem:
In order to complete the QA test, the user must select the path to the 2D PET DICOM images and to the 3D PET DICOM images. But the source image directory can contain more than one series of PHANTOM slices.
Solution:
The program will load all image series and ask the user to choose one. The user can run over the slices to see the quality of each set and choose the best one.
Note: If there is only one set of 2D PET DICOM images and one set of 3D PET DICOM images in the folder, the system detects it automatically (it will not show the popup window).
4.2.2 Creation of PET/CT mask:
Problem:
Our test program uses a PHANTOM MASK for choosing the best slices and for calculating the ROIs' SUV values. At the start of our work we had no MASK, so there was nothing to apply.
Solution:
At the RAMBAM medical center the tester works with only one kind of PHANTOM, but that can change in the future. For this MASK and for all future MASKs, we have created a tool for MASK generation.
4.2.3 Find the best slices:
Problem:
According to Part A of our project, we wanted to use the “An Automatic Technique for Finding and Localizing Externally Attached Markers in CT and MR Volume Images of the Head” algorithm [3] in order to obtain the best slice. Unfortunately, the algorithm did not work, so another way to solve the problem was needed.
Solution:
At first we wanted to use the Hough algorithm [4] in order to find all visible circles in the image. But different PET slice series have varying intensities and are noisy, and it is very difficult to determine whether a detected circle is a real feature or noise. So we needed to provide new parameters to the Hough algorithm every time we had a new series. There was no regularity in those parameters, so it was impossible to use this method.
Another idea was to find the slice containing the highest-intensity voxel. The problem is that each slice contains some maximum value, which can be real or caused by noise.
During our attempts, we noticed that changing the image contrast affects the visibility of image parts. So, by setting a specific image contrast we can mark the hot spots only. By counting the hot spots in each slice we can assess the quality of the slice, making it a candidate for the best slice. For each candidate and its neighbors, we determine how many visible circles the slice has (using the Hough algorithm) and the difference in circle count between neighboring slices. We compare the numbers of visible circles and choose as the best slice the one with the maximum circle count (not above 4 in our case) and the lowest difference from its neighboring slices.
Note: The contrast in a DICOM image is defined by two parameters: window width and window center. These parameters define the window of displayed gray levels. There are two PET/CT camera machines in the RAMBAM hospital: GE Discovery 690 (new model) and GE Discovery LS (old model). For marking the hot spots we used the following settings:
D690 (new model) – window width = 1, window center = 4000 + 9085 − the window width provided in the DICOM file (tag (0028, 1051)).
LS (old model) – window width = 1, window center = 40 * (400 − (energy window upper limit (tag (0054, 0015)) − energy window lower limit (tag (0054, 0014)))).
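A simplified sketch of the hot-spot-counting idea (the flood-fill component counter and the thresholding are our own stand-ins; the full procedure above also compares Hough circle counts between neighboring slices):

```python
import numpy as np

def count_hot_spots(slice_px, window_center, window_width=1.0):
    """Apply a very narrow contrast window so that only hot spots remain,
    then count connected groups of hot pixels (4-connectivity)."""
    hot = slice_px >= (window_center - window_width / 2.0)
    seen = np.zeros(hot.shape, dtype=bool)
    count = 0
    for i in range(hot.shape[0]):
        for j in range(hot.shape[1]):
            if hot[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]          # flood-fill one component
                while stack:
                    a, b = stack.pop()
                    if (0 <= a < hot.shape[0] and 0 <= b < hot.shape[1]
                            and hot[a, b] and not seen[a, b]):
                        seen[a, b] = True
                        stack += [(a + 1, b), (a - 1, b),
                                  (a, b + 1), (a, b - 1)]
    return count

def best_slice_index(slices, window_center):
    """Choose the slice with the most visible hot spots."""
    return max(range(len(slices)),
               key=lambda k: count_hot_spots(slices[k], window_center))

# The second slice shows two hot spots, the first only one.
s0 = np.zeros((5, 5)); s0[1, 1] = 10.0
s1 = np.zeros((5, 5)); s1[1, 1] = 10.0; s1[3, 3] = 10.0
best = best_slice_index([s0, s1], window_center=10.0)
```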
4.2.4 Fit the MASK to Best slice:
Problem:
After finding the best slice we need to fit the MASK. Initially, we took a CT image and used image processing (“opening”, “closing”, and filtering); then we used the Hough algorithm to find the “bone cylinder” and the center of the PHANTOM slice, and after that we calculated the scale/translation/rotation factors. But this solution was not good, because all these actions change the source image and caused discrepancies.
After this work we understood that it is impossible to transform a CT-fitted MASK to a PET image, because these images have different sizes. So, we decided to fit the MASK directly on the PET image.
Solution: First we found the highest/lowest/leftmost/rightmost points of the PHANTOM and obtained a bounding square. From the square we got the center of the PHANTOM and a radius to use as the scale factor for the MASK. Then we converted the image to a binary image by applying a threshold and found some of the hot-spot circles (center and radius). From the center of the PHANTOM and the centers of these circles we found the rotation factor for the MASK.
Note: To find the rotation angle we need to determine the angle differences between each hot-spot center and the y-axis (as shown in figure 10). Let us call the angle between the center of a hot spot on the PET image and the y-axis α, and the angle between the center of the same hot spot on the MASK and the y-axis β. The difference between the angles is ∆ = 𝛼 − 𝛽. The rotation angle is the average of all the hot spots' Δ's.
Afterwards, we fit the MASK by applying all the transformations with the found factors.
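The Δ-averaging can be sketched as follows (a hypothetical helper; the coordinate conventions and names are ours):

```python
import math

def rotation_angle(pet_centers, mask_centers, phantom_center):
    """Average Delta = alpha - beta over matching hot spots, where each
    angle is measured from the y-axis around the phantom center."""
    cx, cy = phantom_center

    def angle_from_y(p):
        # atan2(dx, dy) measures the angle from the positive y-axis.
        return math.atan2(p[0] - cx, p[1] - cy)

    deltas = [angle_from_y(p) - angle_from_y(m)
              for p, m in zip(pet_centers, mask_centers)]
    return sum(deltas) / len(deltas)

# A hot spot on the y-axis in the MASK that appears on the x-axis in the
# PET image implies a 90-degree (pi/2) rotation.
ang = rotation_angle([(1.0, 0.0)], [(0.0, 1.0)], (0.0, 0.0))
```

Averaging over several hot spots dampens the effect of any single badly localized circle center.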
Figure 10: Rotation angle
4.2.5 Retrieving SUV (Standardized uptake values) from DICOM image:
Problem:
The values that are stored in a DICOM image are in Bq/ml, but our QA test needs those values in units of SUV, so we need to convert from Bq/ml to SUV.
Solution:
If the original image units are Bq/ml and all necessary data are present, PET images can be displayed in units of SUVs.
If the PET image units field (DICOM tag (0054, 1001)) is set to BQML, then the PET images may be displayed in SUVs or as uptake in Bq/ml. The application must do the conversion from activity concentration to SUV. GE applications (we work only with GE cameras) provide the following SUV types:
1. SUV Body Weight (SUVbw) – this value we need for our test.
2. SUV Body Surface Area (SUVbsa).
3. SUV Lean Body Mass (SUVlbm).
Calculations:
SUVbw = (PET image pixels ∙ weight in grams) / injected dose
PET image pixels and the injected dose are decay-corrected to the start of the scan. PET image pixels are in units of activity/volume. Images converted to SUVbw are displayed with units of g/ml.
Images with initial units of uptake (Bq/ml) may be converted to SUVs and back to uptake or to another SUV type. However if the images are loaded in some units other than uptake, then no conversion shall be allowed. This holds true even if the units are the same as SUV units. This is because there is no way to know exactly how the SUVs were calculated.
SUV computation requires the following DICOM attributes to be filled in:
weight = Patient Weight (0010, 1030)
tracer activity = Total Dose (0018, 1074)
measured time = Radiopharmaceutical Start Time (0018, 1072)
administered time = Radiopharmaceutical Start Time (0018, 1072)
half life = Radionuclide Half Life (0018, 1075)
scan time = Series Date (0008, 0021) + Series Time (0008, 0031)
Note: the Series Date/Time can be overwritten if the original PET images are post-processed and a new series is generated. The software needs to check that the Acquisition Date/Time ((0008, 0023) and (0008, 0033)) is equal to or later than the Series Date/Time. If it is not, the Series Date/Time has been overwritten, and for GE PET images the software should use a GE private attribute (0009x, 100d) for the scan start DATETIME.
Proceed to calculate SUVs as below.
The formulas we use for the SUV factors are:
SUVbw = (pixel ∙ weight) / actual activity
actual activity = tracer activity ∙ 2^(−(scan time − measured time)/half life)
Note: In GE PET images, Total Dose (0018, 1074) = the net activity administered to the patient at the Series Time (0008, 0031).
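The two formulas can be combined in a short sketch (the function name and units are our own assumptions: Bq, grams and seconds):

```python
def suv_bw(pixel_bq_per_ml, weight_g, tracer_activity_bq,
           scan_time_s, measured_time_s, half_life_s):
    """SUVbw from the formulas above: decay-correct the measured tracer
    activity to the scan start, then divide pixel * weight by it."""
    actual_activity = tracer_activity_bq * 2.0 ** (
        -(scan_time_s - measured_time_s) / half_life_s)
    return pixel_bq_per_ml * weight_g / actual_activity

# Scanning exactly one half-life after the dose was measured halves the
# actual activity, so the computed SUV doubles.
v_now = suv_bw(1000.0, 70000.0, 3.7e8, 0.0, 0.0, 6588.0)
v_later = suv_bw(1000.0, 70000.0, 3.7e8, 6588.0, 0.0, 6588.0)
```

In the real application the inputs come from the DICOM attributes listed above rather than from literals.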
4.3 Running/Simulation
4.3.1 Simulation 1
Date of QA test: 03/12/12
Camera: Discovery D690
FOV2:
FOV1:
Test Result: Test was successfully passed.
4.3.2 Simulation 2
Date of QA test: 03/12/12
Camera: Discovery LS
FOV2:
FOV1:
Test Result: Test was successfully passed.
4.3.3 Simulation 3
Date of QA test: 08/05/13
Camera: Discovery D690
We have deliberately rotated the MASK in order to fail the QA test. As you can see in the pictures, the calculated results did not pass the criteria, so the test failed.
FOV2:
FOV1:
Test Result: Test Failed.
4.4 Final conclusion
As we saw during our project, the “An Automatic Technique for Finding and Localizing Externally Attached Markers in CT and MR Volume Images of the Head” algorithm [3] is not applicable to our project. This algorithm is probably fine for working with CT images, but our project focuses on PET images, so we found the best solution for them.
When working with image processing, you need to keep in mind that image processing results are generally not exact; if precision is needed, you should apply additional techniques in order to double-check your results.
In addition, if there are similar images with different quality, you need to adjust the contrast in order to improve the image.
Our work was based on only two kinds of GE cameras, so the project is oriented toward them. Any additions may require further changes to the algorithms and calculations.
References
[1] J. P. Pluim, J. B. A. Maintz, and M. A. Viergever, “Image registration by maximization of combined mutual information and gradient information,” IEEE Trans. Med. Imaging, vol. 19, pp. 809–814, 2000.
[2] W. M. Wells III, P. Viola, H. Atsumi, S. Nakajima, and R. Kikinis, “Multi-modal volume registration by maximization of mutual information,” Med. Image Anal., vol. 1, pp. 35–51, 1996.
[3] M. Y. Wang, C. R. Maurer, Jr., J. M. Fitzpatrick, and R. J. Maciunas, “An automatic technique for finding and localizing externally attached markers in CT and MR volume images of the head,” IEEE Trans. Biomed. Eng., vol. 43, no. 6, June 1996.
[4] R. O. Duda and P. E. Hart, “Use of the Hough transformation to detect lines and curves in pictures,” Technical Note 36, SRI Artificial Intelligence Center, April 1971.