jSNARK v1.0.6

A PROGRAMMING SYSTEM FOR THE RECONSTRUCTION OF IMAGES FROM PROJECTIONS

STUART W. ROWLAND

August 9, 2015






Contents

1 INTRODUCTION 13
1.1 Statement of purpose and history 13
1.2 Acknowledgments 15

2 THE jSNARK FRAMEWORK 17
2.1 Basis concepts 17
2.2 Units and constants in jSNARK 18
2.3 Time varying pictures 18
2.4 Pictures and digitization 18
2.5 Rays and ray sums 19
2.6 Projections 20
2.7 Rotations 20
2.8 jSNARK input/output 21
2.9 Limitations on the sizes of variables and files 21

3 Information Flow 23
3.1 Data generation phase 23
3.2 Initialization and reconstruction phase 25

3.2.1 Initialization subphase 25
3.2.2 Reconstruction subphase 25

3.3 Analysis phase 25

4 jSNARK_GUI 27
4.1 Dimension 27
4.2 Create 28

4.2.1 Shape list 28
4.2.1.1 Spectrum 28
4.2.1.2 Shape 28

4.2.2 Phantom geometry 29
4.2.3 Projection geometry 30

4.2.3.1 Aperture 30
4.2.3.2 Arrangement 30
4.2.3.3 Rays 30
4.2.3.4 Directions 31
4.2.3.5 Accuracy 31
4.2.3.6 Ray sum generation 31

4.3 Execute 34
4.3.1 Input 34

4.3.1.1 Reconstruction Region 35
4.3.1.2 Projection 35

4.3.2 Algorithm 35
4.3.2.1 Image geometry 35
4.3.2.2 Basis element 36



4.3.2.3 Termination 36
4.3.2.4 Ray order 36
4.3.2.5 Post processing 36

4.4 Figure of merit 36
4.4.1 Figure of merit method 36

4.5 Evaluator 37
4.5.1 Evaluator method 37

4.6 Output 37
4.6.1 Output method 37

4.7 Versions and schema checking 37

5 Foundation classes 39
5.1 Globals 39
5.2 JSnarkMath 39
5.3 Fourier 40
5.4 Poisson 40
5.5 Bessel 41
5.6 Axis 41
5.7 Vector 41

5.7.1 Vector2D 42
5.7.2 Vector3D 42

5.8 GridPoint 42
5.9 Ray 42

5.9.1 Ray2D 43
5.9.2 Ray3D 43

5.10 HalfSpace and ClosedHalfSpace 44
5.11 Bounds 44
5.12 Rotation 45

5.12.1 RotationOrder 45
5.13 GenericSet 46

5.13.1 ProjectionSet 46
5.13.2 ImageSet 47

5.14 SetInstance 47
5.15 ReadProjectionFile 47
5.16 The decorator pattern 48

6 Writing user plug-ins 53
6.1 The PlugIn interface 53
6.2 The TwoD and ThreeD interfaces 56
6.3 The Shape interface 56

6.3.1 The ConstantDensityShape class 56
6.4 The ImageGeometry interface 57
6.5 The BasisElement interface 58
6.6 The ImageIterator interface 58
6.7 The Image interface 59
6.8 The Directions interface 61
6.9 The Projection interface 63
6.10 The Arrangement interface 65
6.11 The SliceNoise interface 65
6.12 The ProjectionNoise interface 65
6.13 The QuantumNoise interface 65
6.14 The RayNoise interface 66
6.15 The Algorithm interface 66
6.16 The Termination interface 68


6.17 The RayIterator interface 68
6.18 The RayOrder interface 69
6.19 The PostProcessing interface 69
6.20 The ReconstructionProcessor interface 70
6.21 The FigureOfMerit interface 70
6.22 The Evaluator interface 71
6.23 The Output interface 71

7 Built-in plug-ins 73
7.1 Shapes 73

7.1.1 2D shapes 73
7.1.1.1 Ellipse 73
7.1.1.2 Rectangle 73
7.1.1.3 Hexagon 73
7.1.1.4 Blob 73
7.1.1.5 Mottle 74

7.1.2 3D shapes 74
7.1.2.1 Ellipsoid 74
7.1.2.2 Box 74
7.1.2.3 Rhombic dodecahedron 74
7.1.2.4 Truncated Octahedron 74
7.1.2.5 Blob 74
7.1.2.6 Mottle 74

7.2 Arrangements 76
7.2.1 2D arrangements 76

7.2.1.1 Parallel 76
7.2.1.2 Arc 76
7.2.1.3 Tangent 76

7.2.2 3D arrangements 78
7.2.2.1 ParallelBeam 79
7.2.2.2 ConeBeam 79
7.2.2.3 TangentConeBeam 80
7.2.2.4 ArcConeBeam 80

7.3 Directions 81
7.3.1 2D directions 81

7.3.1.1 SingleAngle 81
7.3.1.2 EquallySpaced 81
7.3.1.3 RandomAngles 81

7.3.2 3D directions 81
7.3.2.1 EulerAngle 81
7.3.2.2 Quaternion 81
7.3.2.3 RandomDirections 82
7.3.2.4 CircleOrbit 82
7.3.2.5 LineOrbit 82
7.3.2.6 HelixOrbit 82

7.4 Projection noise 83
7.4.1 2D projection noise 83

7.4.1.1 Scatter2D 83
7.4.2 3D projection noise 84

7.4.2.1 Scatter3D 84
7.5 Quantum noise 84

7.5.1 2D and 3D quantum noise 84
7.5.1.1 AbstractQuantumNoise 84
7.5.1.2 NoQuantumNoise 85


7.5.1.3 ConstantProjectionQuantumNoise 85
7.5.1.4 ConstantRayQuantumNoise 85
7.5.1.5 VariableQuantumNoise 85
7.5.1.6 PETQuantumNoise 85

7.6 Ray noise 85
7.6.1 AdditiveNoise 86
7.6.2 MultiplicativeNoise 86

7.7 Grids 86
7.7.1 2D grids 86

7.7.1.1 SquareGrid 86
7.7.1.2 HexagonalGrid 86

7.7.2 3D grids 86
7.7.2.1 SimpleCubicGrid 86
7.7.2.2 BodyCenteredCubicGrid 87
7.7.2.3 FaceCenteredCubicGrid 87

7.8 Basis elements 87
7.8.1 2D and 3D basis elements 87

7.8.1.1 BlobBasis 87
7.8.2 2D basis elements 91

7.8.2.1 SquarePixel 91
7.8.2.2 HexagonalPixel 91

7.8.3 3D basis elements 91
7.8.3.1 CubicVoxel 91
7.8.3.2 DodecahedronVoxel 91
7.8.3.3 Truncated Octahedron Voxel 91

7.9 DDA algorithms 91
7.9.1 Square pixel 91
7.9.2 Blob on Hexagonal grid 92
7.9.3 Cubic voxel 93
7.9.4 Blobs on a 3D grid 97

7.10 Image 97
7.10.1 2D image 98

7.10.1.1 SquareGridImage 98
7.10.1.2 HexagonalGridImage 98

7.10.2 3D images 98
7.10.2.1 SimpleCubicGridImage 98
7.10.2.2 BodyCenteredCubicGridImage 98
7.10.2.3 FaceCenteredCubicGridImage 99

7.11 Algorithm 99
7.11.1 2D and 3D algorithms 99

7.11.1.1 DoNothing 99
7.11.1.2 ART 99
7.11.1.3 Block ART 104

7.11.2 2D algorithms 105
7.11.2.1 Back-projection 105
7.11.2.2 Fan beam back-projection 107
7.11.2.3 Parallel beam filtered back-projection 107
7.11.2.4 Fan beam filtered back-projection 109

7.11.3 3D algorithms 109
7.11.3.1 Back-projection 109
7.11.3.2 Weighted Back-projection 110
7.11.3.3 Katsevich 111

7.12 Ray orders 121
7.12.1 2D ray orders 121


7.12.1.1 Efficient 121
7.12.1.2 Sequential 122
7.12.1.3 Shuffle 122

7.12.2 3D ray orders 122
7.12.2.1 Efficient 122
7.12.2.2 Sequential 122
7.12.2.3 Shuffle 122

7.13 Termination tests 122
7.13.1 2D and 3D termination tests 123

7.13.1.1 Iteration 123
7.13.1.2 Variance 123
7.13.1.3 Residual 123

7.14 Post processing operations 123
7.14.1 2D and 3D post processing operations 123

7.14.1.1 Normalize 123
7.14.1.2 Constrain 124
7.14.1.3 Contour 124
7.14.1.4 Smooth 124

7.15 Figure of merit 124
7.15.1 2D and 3D figures of merit 124

7.15.1.1 Pointwise accuracy 124
7.15.1.2 Structural accuracy 125
7.15.1.3 Hit ratio 125

7.16 Evaluator 125
7.16.1 2D and 3D evaluators 125

7.16.1.1 Distance 125
7.16.1.2 Residual 126

7.16.2 2D Evaluators 126
7.16.2.1 Lines 126

7.16.3 3D Evaluators 126
7.16.3.1 Line 126

7.17 Output 126
7.17.1 2D output 127

7.17.1.1 DisplaySinogram 127
7.17.1.2 ImageOutput 127

7.17.2 3D output 127
7.17.2.1 ConvertToChimera 127
7.17.2.2 ConvertToSlicer 127
7.17.2.3 DisplayProjections 127
7.17.2.4 Slices 127

A Mathematical symbols and notation 129

B Installation and running 131
B.1 Windows Installation 131
B.2 Linux Installation (or other UNIX-like system) 131
B.3 Stand alone execution of jSNARK 132
B.4 Plug-ins 132

C Examples 133
C.1 b1 133
C.2 b2 133
C.3 b3 133
C.4 b4 133


C.5 b5 133
C.6 b6 133
C.7 b7 134
C.8 b8 134
C.9 b10 134

C.9.1 2dblobs, 3dblobs_bccgrid, 3dblobs_fccgrid, 3dblobs_scgrid, 3dblobs_mandl 134
C.10 2dGridTests 134
C.11 2dHeadSum 134
C.12 2dHeadOver 134
C.13 3DblobPhantom 134
C.14 3dGridTests 134
C.15 3DSLHeadKakSum 135
C.16 3DSLHeadKakOver 135
C.17 3DSLHeadKatSum 135
C.18 3DSLHeadKatOver 135
C.19 3DSLHeadWangSum 135
C.20 3DSLHeadWangOver 135
C.21 additive 135
C.22 arc 135
C.23 ARTon2Dhead 135
C.24 ARTvariations 135
C.25 BART 135
C.26 circleOrbit 136
C.27 figure4, figure9, figure10ab, figure10cd, figure11, figure12ab, figure12cd 136
C.28 ForbildAbdomen 136
C.29 ForbildHead 136
C.30 ForbildHip 136
C.31 ForbildJaw 136
C.32 ForbildThorax 136
C.33 helix 136
C.34 line 136
C.35 multiplicative 136
C.36 offCenterCircular 137
C.37 offCenterHelix 137
C.38 offCenterLine 137
C.39 PETNoise 137
C.40 quantum 137
C.41 rotatingBox 137
C.42 simple3DAperture 137
C.43 simple3DCone 137
C.44 simple3DParallel 137
C.45 SLwithMottle 137
C.46 tangent 137
C.47 timeVarying 138
C.48 WBP 138
C.49 webpage 138

D jSNARK schema 139
D.1 The snark element 139
D.2 The create element 139
D.3 The shapeList element 139
D.4 The params element 141
D.5 The projectionGeometry element 142
D.6 The accuracy element 142


D.7 The execute element 142
D.8 The input element 142
D.9 The reconstructionRegion element 144
D.10 The projection element 144
D.11 The pseudo element 145
D.12 The algorithm element 145
D.13 The iterations element 147
D.14 The figureOfMerit element 147
D.15 The evaluator element 147
D.16 The output element 148

E jSNARK memory usage 149

F Supplying real data 151

Bibliography 153


List of Figures

3.1 Information flow within jSNARK . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

5.1 The Decorator Pattern . . . . . . . . . . 48
5.2 The Shape classes . . . . . . . . . . 49
5.3 Decorator class problem . . . . . . . . . . 50
5.4 Decorator pattern problem solution . . . . . . . . . . 50
5.5 The Projection classes . . . . . . . . . . 51

7.1 Rhombic Dodecahedron . . . . . . . . . . 75
7.2 Truncated Octahedron . . . . . . . . . . 75
7.3 The parallel geometry . . . . . . . . . . 77
7.4 The divergent geometry case . . . . . . . . . . 78
7.5 The parallel beam geometry . . . . . . . . . . 79
7.6 The cone beam geometry . . . . . . . . . . 80
7.7 Plots of blobs of order 0, 1, 2, and 3 . . . . . . . . . . 90
7.8 Ray through square grid . . . . . . . . . . 92
7.9 Hexagonal grid case 1: 0 ≤ θ < π/6 . . . . . . . . . . 94
7.10 Hexagonal grid case 2: π/6 ≤ θ < π/3 . . . . . . . . . . 95
7.11 Hexagonal grid case 3: π/3 ≤ θ ≤ π/2 . . . . . . . . . . 96
7.12 Intersection of kappa plane with detector plane . . . . . . . . . . 113
7.13 Intersection of kappa planes with a tangent detector . . . . . . . . . . 115
7.14 Tam-Danielsson window for the tangent detector . . . . . . . . . . 117
7.15 Intersection of kappa planes with an arc detector . . . . . . . . . . 119
7.16 Tam-Danielsson window for the arc detector . . . . . . . . . . 120

D.1 XML root element . . . . . . . . . . 140
D.2 The create element . . . . . . . . . . 140
D.3 The shapeList element . . . . . . . . . . 141
D.4 The params element . . . . . . . . . . 142
D.5 The projectionGeometry element . . . . . . . . . . 143
D.6 The accuracy element . . . . . . . . . . 143
D.7 The execute element . . . . . . . . . . 144
D.8 The input element . . . . . . . . . . 144
D.9 The reconstructionRegion element . . . . . . . . . . 144
D.10 The projection element . . . . . . . . . . 145
D.11 The pseudo element . . . . . . . . . . 145
D.12 The algorithm element . . . . . . . . . . 146
D.13 The iterations element . . . . . . . . . . 147
D.14 The figureOfMerit element . . . . . . . . . . 147
D.15 The evaluator element . . . . . . . . . . 148
D.16 The output element . . . . . . . . . . 148



Chapter 1

INTRODUCTION

“Just the place for a Snark!” the Bellman cried,
As he landed his crew with care;
Supporting each man on the top of the tide
By a finger entwined in his hair.

“Just the place for a Snark! I have said it twice:
That alone should encourage the crew.
Just the place for a Snark! I have said it thrice:
What I tell you three times is true.”

Fit the First - The Landing
The Hunting of the Snark (an agony in eight fits)
by Lewis Carroll

The image reconstruction from projections problem can be stated roughly as: given approximations of real ray sums of a picture for a number of rays, estimate the digitization of the picture. All these terms will be defined more precisely in subsequent sections of this manual.

In the area of image reconstruction from projections, researchers often desire to compare two or more reconstruction techniques and assess their relative merits. jSNARK provides a uniform framework in which to implement algorithms and evaluate their performance. jSNARK has been designed to be extensible. It can create phantoms and projection data, and perform reconstructions in 2D, 3D, and even 4D.

1.1 Statement of purpose and history

jSNARK is a programming system designed to help researchers interested in developing and evaluating reconstruction algorithms. It is the next incarnation of the SNARK programs, the first of which was written by Richard Gordon in 1970. This was followed by SNARK77 [30] and SNARK89 [26], which were written in FORTRAN. They were specifically designed to help with the problem of reconstructing cross-sections of the X-ray absorption coefficient distribution inside the body from X-ray projections. SNARK93 extended this capability to include positron emission tomography, PET (where the problem is that of reconstructing cross-sections of isotope concentrations inside the body from γ-ray projections).

The SNARK93 programming system was implemented in FORTRAN77. It was designed to

1. be capable of dealing with many modes of data collection (different geometrical arrangements of X-ray source and detectors, different X-ray spectra, etc.);



2. contain many of the published reconstruction algorithms;

3. be capable of generating mathematically described phantoms that realistically represent various cross-sections of the human body, together with mathematically simulated projection data of these cross-sections reflecting the characteristics (including noise) of various possible tomography devices;

4. contain subroutines to carry out work which appears to be common to many reconstruction algorithms, so as to ease the incorporation of additional (user-defined) algorithms;

5. be capable of a variety of display modes;

6. contain routines for the statistical evaluation of reconstruction algorithms;

7. provide a methodology for testing for statistically significant differences between reconstruction algorithms.

SNARK05 [5] is an updated version of SNARK93. The following are the major advances that are incorporated into the SNARK05 package:

1. SNARK05 is implemented in C++;

2. the file structures for holding projection data, phantoms and reconstructions have been redesigned to match the capabilities of the typical computer environment at the beginning of the third millennium (specifically, XML headers are used in data files);

3. all iterative algorithms are now capable of handling image representations that use “blobs” [36, 37], as well as those that use “pixels”;

4. the efficient data access ordering proposed in [27] is a standard feature of SNARK05;

5. all artificial restrictions on sizes have been removed - the only remaining restrictions are those imposed by the hardware, compiler and operating system.

SNARK09 [8] is an updated version of SNARK05. The following are the major advances that are incorporated into the SNARK09 package:

1. capability of generating mathematically described phantoms that include inhomogeneity;

2. capability of applying beam hardening correction to polychromatic X-ray projection data;

3. the imagewise-region-of-interest figure of merit;

4. up to 10 user implemented algorithms, at most five using blobs;

5. up to five user defined figures of merit.

SNARK14 [9] is an updated version of SNARK09. The following are the major advances that are incorporated into the SNARK14 package:

1. capability of superiorizing iterative algorithms to improve performance;

2. capability of applying the Simultaneous Algebraic Reconstruction Technique (SART) to projection data;

3. added several new options to the statistical evaluation;

4. added several new stopping criteria.

jSNARK is another version of SNARK. It includes the following features:

1. jSNARK is written in Java™;

2. it is object oriented;


3. it is capable of 2D, 3D, and even 4D reconstructions where the extra dimension is time;

4. it is extensible making significant use of plug-ins (see Chapter 6 and Appendix B.4).

Java was chosen for the implementation language for a number of reasons.

• Java is truly portable. There is no need to check whether or not the code will compile and run on every combination of hardware and operating system. The only requirement is that the computer has a Java runtime of the appropriate version (1.8.0 or later in the case of jSNARK).

• jSNARK makes extensive use of plug-ins. A plug-in is a class whose name is not known until runtime. The class name is obtained at runtime, the code for the class is then located and loaded into memory, and one or more instances of the class are created. This capability does not exist in a portable way in C++.

• SNARK has always had the capability of describing an entire experiment in a single input file. In a language like C++, this means that the code for all the phases of SNARK has to be in memory during the entire execution. Java, on the other hand, loads the code for a class when it encounters the first instance of the class. When there are no more instances of the class, the code is garbage collected. Thus, Java provides the advantages of an overlay system with none of the work. The alternative is to have all the various pieces as separate programs and to combine them with the operating system’s command language.1 This approach would severely limit the “mix-and-match” capability of plug-ins.

• Java is available bundled with Netbeans,2 an integrated development environment (IDE). Netbeans syntax checks the code as it is being typed. It offers numerous tips about what should come next and has tools to browse code and automatically fix many errors. The only time the author has a compiler error is when a change to one class creates an error in another. Netbeans has tools to perform common refactorings [12] like renaming, changing method parameters, and moving methods up and down in the class hierarchy.

Java has acquired the undeserved reputation that it is slow. See http://kano.net/javabench/ and http://www.idiom.com/~zilla/Computer/javaCbenchmark.html for some comparison benchmarks. The author attempted to speed up the execution of the ART algorithm in a 3D reconstruction. Performance measurements showed that 95% of the execution time was spent in the subroutine that computed the intersections of a ray with the blobs on the BCC grid. That subroutine was recoded in C and called using the JNI interface. Writing the tightest C code the author is capable of reduced the execution time of a single ART iteration from 4 hours and 20 minutes to 4 hours. Where efficiency is paramount, one should consider moving the compute-intensive code to a high performance processor like NVidia’s Tesla.3

1.2 Acknowledgments

The design and development of jSNARK has been aided and improved greatly by discussions with and suggestions from the DIG group at CUNY.4 Large parts of this manual have been copied and adapted from the SNARK05 manual [5].

1XMIPP, http://xmipp.cnb.csic.es/, is an example of this approach.
2https://netbeans.org/
3http://www.nvidia.com/object/tesla_computing_solutions.html
4http://www.dig.cs.gc.cuny.edu/


Chapter 2

THE jSNARK FRAMEWORK

This chapter is written to acquaint the user with the physical meaning of the concepts used in jSNARK.

2.1 Basis concepts

jSNARK is an extensible framework that is used to link together a number of plug-ins (see Appendix B.4) that describe a 2D, 3D, or 4D reconstruction problem. It comes with a number of plug-ins (see Chapter 7) that have already been implemented. A plug-in is a compiled Java class that implements some interface. It is dynamically located and loaded into memory at the point where it is used. The possible plug-in interfaces are:

Algorithm a reconstruction algorithm;

Arrangement the rays that make up a single projection;

BasisElement the basis element used by a series expansion reconstruction algorithm;

Directions the direction of one or more projections;

Evaluator a class that compares a reconstruction to a phantom and outputs possibly multiple statistics;

FigureOfMerit a class that compares a reconstruction to a phantom and outputs a single value;

Grid the description of the coordinates of the centers of BasisElement-s;

Image a digitized image following some ImageGeometry;

ImageGeometry the combination of the Grid and BasisElement of an Image;

Output a class that transforms an image into some other representation;

Projection the combination of a projection Direction, an Arrangement, and the corresponding ray sums;

ProjectionNoise a class that modifies artificially generated projection data using all the data for that projection;

QuantumNoise a class that modifies artificially generated projection data simulating various forms of quantum statistics;

RayNoise a class that modifies artificially generated projection data for a single ray;

Shape a description of some geometric shape;

SliceNoise a place holder for a future enhancement;

Termination a class that indicates whether or not an iterative reconstruction algorithm should continue.

These are all described in detail in Chapter 6.

17


2.2 Units and constants in jSNARK

jSNARK deals with physically measurable quantities, such as time, the distance between the X-ray source and detector, or the linear attenuation (per unit length) at a point inside an object. The actual values assigned to these quantities are dependent on the assumed units of measurement. While the units of measurement do not have to be specified for jSNARK (in fact, there is no facility for their specification), consistency must be maintained throughout a single jSNARK run.

jSNARK operates in either a two- or three-dimensional coordinate system. In 2D, the point (x, y) represents the physical location whose coordinates are x and y as measured in the assumed unit of length. Similarly, in 3D, the point (x, y, z) represents the physical location whose coordinates are x, y, and z as measured in the assumed unit of length.

There are two particular constants, called Globals.ZERO and Globals.INFIN, which are defined by jSNARK and whose meanings need to be clearly understood (especially since their names are potentially misleading). Globals.ZERO is a very small positive floating point number (10⁻¹⁴ to be exact) and Globals.INFIN is a very large positive floating point number (10¹⁵ to be exact). These values are referred to in later sections. For example, jSNARK requires that some input items must have a value greater than Globals.ZERO, rather than just be positive. Also, they are available to those users who wish to create their own routines; they can, in particular, be used for checking for underflow and overflow conditions. In addition to the Java constants Math.PI and Math.E, other constants available to the user are Globals.TWOPI (2π), Globals.PID2 (π/2), Globals.PID180 (π/180), Globals.SQRT2 (√2), and Globals.SQRT3 (√3). The Globals prefix will be dropped in the remainder of this document.
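For illustration, the documented constants can be mirrored in a small standalone class. This is a sketch only: the values and names follow the manual, but this class (including the `checkPositive` helper) is hypothetical, not the real jSNARK `Globals`.

```java
// Hypothetical mirror of the constants documented above; not the jSNARK class.
final class Globals {
    static final double ZERO = 1e-14;   // "very small" positive bound for inputs
    static final double INFIN = 1e15;   // "very large" value, effectively infinite
    static final double TWOPI = 2.0 * Math.PI;
    static final double PID2 = Math.PI / 2.0;
    static final double PID180 = Math.PI / 180.0;
    static final double SQRT2 = Math.sqrt(2.0);
    static final double SQRT3 = Math.sqrt(3.0);

    // Example use: an input item must be greater than ZERO, not merely positive.
    static void checkPositive(String name, double value) {
        if (value <= ZERO) {
            throw new IllegalArgumentException(name + " must exceed Globals.ZERO");
        }
    }
}
```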

2.3 Time varying pictures

jSNARK extends the capabilities of SNARK05 to include time varying pictures for both 2D and 3D, resulting in 3D and 4D reconstruction problems, respectively. Time varying pictures are assumed to be cyclic with a cycle length of C.

Any float parameter to a shape (see Section 4.2.1.2) may be specified as either a 1-tuple, a 2-tuple, or a 3-tuple.

As a 1-tuple, v, the value of the parameter is the constant v.

As a 2-tuple, b, m, the value of the parameter at time t is given by (b + mt/C)π/180, where b is the constant term and m is the change in the value over the unit interval (one cycle). 2-tuples are needed to describe angle variations. m should be a multiple of 360.1

As a 3-tuple, DC, A, θ, the value of the parameter at time t is given by DC + A sin(2πt/C + θπ/180), where DC is the constant term, A is the amplitude of the variation, and θ is the phase shift in degrees. 3-tuples are used to describe length and position variations.

Note: It is the user’s responsibility to ensure the phantom description has the property that identical phantoms are created at times t and t + C for all t. If this is not the case, inconsistent projection data will be calculated. See Section 4.2.3.6.
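The three tuple forms can be sketched as a small evaluator. The class and method names here are hypothetical; only the formulas (1-tuple constant, 2-tuple (b, m) angle, 3-tuple (DC, A, θ) sinusoid) come from the description above.

```java
// Hypothetical evaluator for time-varying shape parameters; not jSNARK API.
final class TimeVaryingParam {
    private final double[] tuple; // length 1, 2, or 3
    private final double cycle;   // C, the cycle length

    TimeVaryingParam(double cycle, double... tuple) {
        this.cycle = cycle;
        this.tuple = tuple;
    }

    /** Value of the parameter at time t, per the tuple length. */
    double valueAt(double t) {
        switch (tuple.length) {
            case 1: // constant v
                return tuple[0];
            case 2: // angle in degrees: (b + m t / C) * pi / 180
                return (tuple[0] + tuple[1] * t / cycle) * Math.PI / 180.0;
            case 3: // DC + A sin(2 pi t / C + theta pi / 180)
                return tuple[0] + tuple[1]
                        * Math.sin(2.0 * Math.PI * t / cycle + tuple[2] * Math.PI / 180.0);
            default:
                throw new IllegalArgumentException("tuple must have 1 to 3 entries");
        }
    }
}
```

Note that when m is a multiple of 360, the 2-tuple value at t + C differs from the value at t by a multiple of 2π, which is what makes the resulting angle, and hence the phantom, cyclic.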

2.4 Pictures and digitization

jSNARK deals with “pictures” or “images” defined over 2D or 3D in some assumed fixed coordinate system. To be exact, a picture has two components:

1. the picture region, which is a square or cube with side length S whose center is at the origin of the coordinate system and whose sides are parallel to its axes;

2. a function of two or three variables whose value is zero outside the picture region.

Sometimes, when this is not confusing, we shall call just the function the “picture” or “image.” However, identical functions may give rise to different pictures if the picture regions are different.

In the following, a variable with an arrow over it (e.g., x⃗) will represent a point or vector in either R² or R³.

1jSNARK does not enforce this restriction.


We shall refer to the value f(x⃗, t) as the density of f at the point x⃗ at time instant t. Within jSNARK, f(x⃗, t) is approximated by a digitized version, f⁽⁰⁾(x⃗, t). The digitized version is specified by a grid G of points, a basic basis function b, and a coefficient cₙ(t) associated with each grid point. A grid is a set of points, {x⃗ₙ | 0 ≤ n ≤ N − 1}, in the picture region. A basic basis function is a real valued function on R² or R³, whichever is appropriate.

For 0 ≤ n ≤ N − 1, each of the functions

    bₙ(x⃗) = b(x⃗ − x⃗ₙ)    (2.1)

is called a basis function and f⁽⁰⁾(x⃗, t) is defined by

    f⁽⁰⁾(x⃗, t) = Σ_{n=0}^{N−1} cₙ(t) bₙ(x⃗);    (2.2)

i.e., as an expansion into the basis functions bₙ with coefficients cₙ(t). The coefficients, cₙ(t), are chosen such that f⁽⁰⁾(x⃗, t) is close to f(x⃗, t) according to some distance measure.
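The expansion (2.2) can be sketched directly in code. This is an illustration, not jSNARK API: the names are hypothetical, the time argument is fixed (the coefficients are already evaluated at some t), and the basic basis function is assumed to be radially symmetric (as blobs are), so b is given as a function of distance.

```java
import java.util.function.DoubleUnaryOperator;

// Hypothetical sketch of evaluating f(0)(x, t) = sum_n c_n(t) b(x - x_n) in 2D.
final class Expansion {
    static double evaluate(double[][] gridPoints,   // x_n, one (x, y) per row
                           double[] coefficients,   // c_n at a fixed time t
                           DoubleUnaryOperator basisOfDistance, // b as b(||r||)
                           double x, double y) {
        double sum = 0.0;
        for (int n = 0; n < gridPoints.length; n++) {
            double dx = x - gridPoints[n][0];
            double dy = y - gridPoints[n][1];
            sum += coefficients[n] * basisOfDistance.applyAsDouble(Math.hypot(dx, dy));
        }
        return sum;
    }
}
```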

The most common grids are the square grid in 2D and the cubic grids in 3D. All the built-in grids are described in Section 7.7. Given a grid spacing ∆, let m = ⌈S/(2∆)⌉, where ⌈x⌉ denotes the smallest integer that is greater than or equal to the real number x. Then the square grid is the set of points

    {(i∆, j∆) | −m ≤ i ≤ m and −m ≤ j ≤ m}.

For this grid N = (2m + 1)² and n = (j + m)(2m + 1) + i + m. The cubic grid is the set of points

    {(i∆, j∆, k∆) | −m ≤ i ≤ m and −m ≤ j ≤ m and −m ≤ k ≤ m}.

For this grid N = (2m + 1)³ and n = ((k + m)(2m + 1) + (j + m))(2m + 1) + i + m.

The most common basis functions are the Voronoi tessellations of these grids, called the square pixel (picture element) and cubic voxel (volume element), respectively. The generic Voronoi tessellation basis element on any grid will be called a spel, for spatial element.
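The grid indexing formulas above can be sketched as follows (class and method names are illustrative, not jSNARK API):

```java
// Hypothetical helpers for the square/cubic grid indexing described above.
final class GridIndex {
    /** m = ceil(S / (2 * delta)) for region side S and grid spacing delta. */
    static int m(double side, double delta) {
        return (int) Math.ceil(side / (2.0 * delta));
    }

    /** Linear index on the square grid; i, j range over -m..m, N = (2m+1)^2. */
    static int square(int i, int j, int m) {
        return (j + m) * (2 * m + 1) + (i + m);
    }

    /** Linear index on the cubic grid; i, j, k range over -m..m, N = (2m+1)^3. */
    static int cubic(int i, int j, int k, int m) {
        return ((k + m) * (2 * m + 1) + (j + m)) * (2 * m + 1) + (i + m);
    }
}
```

For example, with m = 1 the square grid has N = 9 points, indexed 0 at (−∆, −∆) through 8 at (∆, ∆).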

All reconstruction algorithms produce a digitized image as their output. They can be broadly classified as either transform methods [35] or series-expansion methods [6]. The transform methods solve the reconstruction problem using continuous mathematics. The resulting solution is then evaluated at the grid points and the solution is expressed with those values as the coefficients for the Voronoi basis elements appropriate for the grid. The series-expansion methods discretize the problem immediately, forming a set of simultaneous equations with the measured ray sums, and use iterative methods to approximate a solution to the probably inconsistent set of equations. In either case, we will use the notation

    f⁽q⁾(x⃗) = Σ_{n=0}^{N−1} cₙ⁽q⁾ bₙ(x⃗)    (2.3)

for the reconstruction at the qth iteration for a single time instant. For the transform methods, q = 1. A time sequence of reconstructions will be given by

    f⁽q⁾(x⃗, tᵢ) = Σ_{n=0}^{N−1} cₙ⁽q⁾(tᵢ) bₙ(x⃗),

where the time instant is given by tᵢ.

2.5 Rays and ray sums

A ray in jSNARK is a line. It is specified by a point on the line, p⃗, and a unit vector, u⃗. The ray consists of the set of points given by p⃗ + au⃗ for all real numbers a. If p⃗₀ = p⃗ + a₀u⃗ and p⃗₁ = p⃗ + a₁u⃗, then ∥p⃗₀ − p⃗₁∥ = |a₀ − a₁|.
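This parameterization can be sketched as a minimal class (a hypothetical illustration, not the jSNARK ray class); because u⃗ is a unit vector, parameter differences equal distances along the ray.

```java
// Hypothetical 2D ray: a point p plus a unit vector u; points are p + a u.
final class Ray2D {
    final double px, py, ux, uy;

    Ray2D(double px, double py, double ux, double uy) {
        double len = Math.hypot(ux, uy); // normalize so u is a unit vector
        this.px = px;
        this.py = py;
        this.ux = ux / len;
        this.uy = uy / len;
    }

    /** The point p + a u on the ray. */
    double[] pointAt(double a) {
        return new double[] { px + a * ux, py + a * uy };
    }
}
```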


Given a picture and a ray, the real ray sum is the integral of the picture along the ray. In this terminology, the reconstruction problem that is treated by jSNARK may roughly be stated as follows: given approximations (based on physical measurements) of the real ray sums of a picture for a number of rays, estimate the digitization of the picture.

jSNARK also deals with pseudo ray sums; these are defined only for expansions of the form (2.2). The pseudo ray sum is the real ray sum of the picture defined by the function f⁽⁰⁾ of (2.2).

2.6 Projections

A projection is a collection of rays that have all been collected at the same time instant. The usual groupings are parallel and divergent beams. In the parallel beam case, all the rays are parallel to each other. In the divergent beam case, all the rays intersect at a single point. A number of projection geometries are possible within jSNARK. Section 7.2 describes the built-in projection arrangements.

2.7 Rotations

Rays are typically defined as lines connecting a radiation source with a detector position. Rotations provide the means of specifying the relationship between the source and detector and the reconstruction region. To use a rotation, we must define an unrotated orientation and then whether the rotation is applied to the reconstruction region or the source/detector pair. In jSNARK, the rotation is applied to the source/detector pair so that the sides of the reconstruction region remain parallel to the axes. In two dimensions the unrotated position of the source is on the positive X axis and the detector’s positive direction is the same as the positive Y axis. In three dimensions the unrotated position of the source is on the positive Z axis and the axes of the detector are parallel to the X and Y axes of the reconstruction region.

In two dimensions a rotation is specified by a single angle, θ. Let R_θ : R² → R² be the function that rotates points in R² by the angle θ (or equivalently rotates the axes by the angle −θ). Let p⃗ = (x, y) be a point in R². Then

    p⃗′ = R_θ(p⃗) = (x′, y′)

where

    x′ = x cos θ − y sin θ,
    y′ = x sin θ + y cos θ,

and conversely

    p⃗ = R_θ⁻¹(p⃗′)

where

    x = x′ cos θ + y′ sin θ,
    y = −x′ sin θ + y′ cos θ.
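These two formulas translate directly into code (a sketch with illustrative names, not jSNARK code):

```java
// Hypothetical 2D rotation of a point by angle theta, and its inverse.
final class Rot2D {
    /** R_theta(p): rotate (x, y) by theta. */
    static double[] apply(double theta, double x, double y) {
        return new double[] { x * Math.cos(theta) - y * Math.sin(theta),
                              x * Math.sin(theta) + y * Math.cos(theta) };
    }

    /** Inverse rotation: recover (x, y) from (x', y'). */
    static double[] invert(double theta, double xp, double yp) {
        return new double[] {  xp * Math.cos(theta) + yp * Math.sin(theta),
                              -xp * Math.sin(theta) + yp * Math.cos(theta) };
    }
}
```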

In three dimensions a rotation can be specified as three Euler angles or a unit quaternion. There are 12 possible Euler angle orders. A common Euler angle order is ZYZ, meaning that the first rotation is about the Z axis, the second about the rotated Y axis, and the third about the rotated Z axis again. These are the body oriented Euler rotations. There are an additional 12 Euler angle orders where the axes remain fixed. These are not used in jSNARK.

A unit quaternion is another way to specify a rotation. A quaternion is a four-tuple, (a, b, c, d), a, b, c, d ∈ R, that is an extension of the complex numbers and has the equivalent representation a + bi + cj + dk, where i, j, and k have the relationship i² = j² = k² = ijk = −1. Using these relationships, it is easy to derive the rule for multiplying quaternions. The quaternions form a non-commutative group under multiplication. A unit quaternion is a quaternion where a² + b² + c² + d² = 1. When used as a rotation, the unit quaternion is (cos(θ/2), u sin(θ/2), v sin(θ/2), w sin(θ/2)), where θ is the angle of rotation and (u, v, w), a unit vector in R³, is the axis of rotation. A point p⃗ ∈ R³, p⃗ = (x, y, z), can be represented by the quaternion (0, x, y, z). If R is a unit quaternion and P is the quaternion representation of the point p⃗, then R × P × R⁻¹ is the quaternion representation of the rotation of p⃗ by the angle of rotation about the axis of rotation. We will use the notation R × p⃗ to mean convert p⃗ to its equivalent quaternion, perform the quaternion multiplications (rotate the point), and


convert the product back to a vector in R³. The inverse rotation, R⁻¹, is the quaternion (a, −b, −c, −d). This is also the quaternion conjugate, R̄.
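Quaternion rotation can be sketched as follows. This is a hypothetical illustration, not the jSNARK implementation; it uses the standard half-angle convention for the axis-angle quaternion and rotates a point by conjugation, R × P × R̄.

```java
// Hypothetical quaternion (a, b, c, d) = a + bi + cj + dk used as a rotation.
final class Quat {
    final double a, b, c, d;

    Quat(double a, double b, double c, double d) {
        this.a = a; this.b = b; this.c = c; this.d = d;
    }

    /** Unit quaternion for a rotation by theta about unit axis (u, v, w). */
    static Quat fromAxisAngle(double theta, double u, double v, double w) {
        double s = Math.sin(theta / 2.0);
        return new Quat(Math.cos(theta / 2.0), u * s, v * s, w * s);
    }

    /** Hamilton product, derived from i^2 = j^2 = k^2 = ijk = -1. */
    Quat mul(Quat q) {
        return new Quat(a * q.a - b * q.b - c * q.c - d * q.d,
                        a * q.b + b * q.a + c * q.d - d * q.c,
                        a * q.c - b * q.d + c * q.a + d * q.b,
                        a * q.d + b * q.c - c * q.b + d * q.a);
    }

    /** Conjugate (a, -b, -c, -d); equals the inverse for unit quaternions. */
    Quat conj() { return new Quat(a, -b, -c, -d); }

    /** Rotate the point (x, y, z): embed as (0, x, y, z), conjugate, extract. */
    double[] rotate(double x, double y, double z) {
        Quat p = mul(new Quat(0.0, x, y, z)).mul(conj());
        return new double[] { p.b, p.c, p.d };
    }
}
```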

2.8 jSNARK input/output

The following summarizes the input requirements of jSNARK that we have discussed and some additional input and output possibilities. An exhaustive description is given in the following chapters.

Typical inputs to jSNARK:

• Geometry information

– Number of basis elements along a single dimension

– The grid defining the centers of the basis functions

– Environment for data collection: e.g., parallel or divergent

– Number of projection directions

– List of projection directions

• Test picture (optional)

– This is either taken from a previous jSNARK run or is specified by a description in terms of geometrical figures

• Projection data

– Gathered experimentally or generated by jSNARK (from a test picture)

Typical outputs from jSNARK:

• Comparison of reconstructions with the test picture resulting in a single number (figure of merit)

• Comparison of reconstructions with the test picture resulting in multiple values (evaluator)

• Displayable images of 2D reconstructions or slices of 3D reconstructions

• The clock time used to perform each command and each iteration of an algorithm

2.9 Limitations on the sizes of variables and files

No artificial limitations on sizes are present in jSNARK. The only limits that remain are those imposed by the hardware, compiler, and operating system. The Java runtime defaults are such that only small to medium sized reconstruction problems are possible. jSNARK has facilities for overriding those defaults. If the specified values are larger than the available physical memory of the computer, the operating system will page parts of jSNARK to disk. For very large reconstruction problems, it may not be possible to specify a large enough value to the Java runtime. For those problems, jSNARK will automatically page pictures to disk. The user has control over how much data will be kept in memory. Whether the paging is done by the operating system or Java, these large problems pay a high penalty due to the paging. Details on how to control the Java runtime memory and the jSNARK paging mechanism are given in Appendix E.


Chapter 3

Information Flow

A jSNARK run consists of up to three phases:

1. data generation phase;

2. initialization and reconstruction phase;

3. analysis phase.

Each of these phases requires some input files and produces some output files. Some of the output files are used as the input files of a later (or even the same) phase. Provided that the appropriate input files are available, a single jSNARK run may consist of just one of the three phases, or of the first two phases, or of the last two phases, or of all three phases.

We now proceed with a description of each of the three phases together with the input files required and output files produced by the phase under discussion. The reader should consult Figure 3.1 for an overview. The file descriptions that follow are rather brief. With the exception of the INPUT file, all files are written (and subsequently accessed) by jSNARK, and the user need only be concerned with their formats if they are extending the capabilities of jSNARK. The INPUT file can contain the input for one, two, or three phases of a jSNARK run. The Data Generation Input corresponds to the create element of the jSNARK XML input file. The Initialization and Reconstruction Input corresponds to the execute element of the jSNARK XML input file. The Analysis Input corresponds to the figureOfMerit, evaluate, and output elements of the jSNARK XML input file. The XML input file schema is described in detail in Appendix D.
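The correspondence between phases and XML elements can be pictured as a skeleton of the input file. This outline uses only element names from Appendix D; the exact attributes, nesting, and required children are defined by the schema, so treat it as an illustration, not a valid input file:

```xml
<snark>
  <!-- Data Generation Input: phantom and projection data -->
  <create> ... </create>

  <!-- Initialization and Reconstruction Input -->
  <execute> ... </execute>

  <!-- Analysis Input -->
  <figureOfMerit> ... </figureOfMerit>
  <evaluator> ... </evaluator>
  <output> ... </output>
</snark>
```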

3.1 Data generation phase

During this phase jSNARK generates a test picture (a phantom) and/or projection data of it. The projection data consists of real ray sums of the phantom, possibly contaminated by the types of noise that one may come across in a device used for collecting data for reconstruction.

The Data Generation Input is the section of the file INPUT that contains the commands that cause the generation of the test picture and the projection data. These commands appear within the create element of the XML input file.

*.phan is a file in the DIG standard1 for images that contains the description of the phantom. This description contains information on how the phantom has been specified, as well as the actual density values assigned to the basis elements in the phantom. The actual file name used is determined at run time.

*.proj is a file in the DIG standard2 for projections that contains the description of the projection data. This description contains information on the mathematical phantom and how the projection data was generated. The actual file name used is generated at run time.

The Data Generation Output is the section of the file OUTPUT that echoes the data generation input file and reports on the work carried out by the data generation phase.

1See http://www.dig.cs.gc.cuny.edu/manuals/software_discipline.pdf and http://www.dig.cs.gc.cuny.edu/manuals/schemas/DIG.xsd.
2ibid.



Figure 3.1: Information flow within jSNARK


3.2 Initialization and reconstruction phase

The Initialization and Reconstruction Input is the section of the file INPUT that contains the commands specifying where the initialization information is coming from and what reconstructions are to be carried out. These commands appear in the execute element of the XML input file. The Initialization and Reconstruction Output is the section of the file OUTPUT that echoes the initialization and reconstruction input file and reports on the work carried out by the initialization and reconstruction phase.

This phase can be broken up into two subphases: the initialization subphase and the reconstruction subphase. The latter cannot be performed without the former having been performed in the same run.

3.2.1 Initialization subphase

During this subphase the default grid for the reconstruction region (size of the reconstruction region and number of grid lines) as well as the assumed geometry of data collection are determined. If there is a test picture, it will be read from *.phan.

*.recon is an XML file that contains the names of the phantom file (if present), the projection file, and the names of the images produced in the reconstruction subphase. The actual file name is generated at run time. The image files will be written to a directory with the same name as the *.recon file with the .recon extension removed.

3.2.2 Reconstruction subphase

During this subphase the reconstruction algorithms specified by the initialization and reconstruction input are carried out. At the end of each iteration, the current image is written to disk in the reconstruction directory described above as a *.image file and the name of that file is saved in *.recon.

If additional files are required for any user-written plug-in, they should be explicitly defined and opened by that plug-in (and closed when the work of the plug-in is done).

3.3 Analysis phase

During this phase various displays of the phantom and/or the reconstructions are generated based on the information in *.recon. Statistical analysis of the results as well as comparisons of the reconstructions to the phantom may also be carried out.

The Analysis Input is the section of the file INPUT that contains the commands specifying the displays, analyses, and comparisons to be generated. These commands appear in the figureOfMerit, evaluate, and output elements of the XML input file. They are shown as a single phase in the diagram to simplify the diagram, and are separate elements in the XML because they have different behaviors.

Analysis files is a catch-all for the many different outputs of these commands.

The Analysis Output is the section of the file OUTPUT that echoes the analysis input and reports on the work carried out by the analysis phase, including details of the specified comparisons, analyses, and displays.


Chapter 4

jSNARK_GUI

The input to jSNARK is an XML file that follows the schema shown in Appendix D. Although it is possible to create such an input file using a text editor, it is a very tedious and error-prone activity. jSNARK_GUI is a graphical user interface (GUI) front end to jSNARK that simplifies creating and editing the jSNARK XML input. This GUI reads the schema file and interprets it in order to create the XML file. It also reads the jSNARK jar file, scanning it for the built-in plug-ins. The “Load Plug-ins” button can be used to direct it to read additional jar files and scan them for additional plug-ins. The “Run” button is used to execute jSNARK, passing the current input file to it.

jSNARK creates a number of files in a working directory. The location of the working directory can be changed with the “Working Directory” button. For the first execution of jSNARK_GUI, the default working directory is the directory where jSNARK_GUI was started. After the first execution, the default working directory is recalled from the previous execution, or the user’s home directory if the old working directory was deleted.

As the jSNARK input file is being created, a tree view of the XML will appear in the left panel. Clicking on a node in this panel will cause the corresponding part of the file to appear in the right panel. The right panel will contain parameters and buttons to move up and down in the XML schema. Parameters have a name, a type, and possibly a value. Moving the mouse over a parameter name or a button will cause a tool tip to appear with a short description of the parameter or button. The presentation of the parameter value depends on the parameter type. Integer, float, and some string parameters will have a text box for their values. Some parameters can only take a small number of values; these are shown as either radio buttons or a drop-down list. Some parameters are file names; these are identified by a “Browse” button. In most cases, these can be left blank.

Some parameters are required and some are optional. The required parameters have their type displayed in red followed by the word “required”. The optional parameters of plug-ins are not shown by default; to see them you must click the “Show optional parameters” check box. The right panel will usually contain two buttons labeled “Update” and “Cancel”. The “Update” button is used to move up the XML schema, keeping the changes on the current screen. The “Cancel” button discards any parameter changes on the current screen.

Comments describing the jSNARK input can be entered in the “Comments” text area at the top of every screen. These will be echoed at the appropriate place in the jSNARK output.

Several places in the XML input file allow multiple occurrences of various plug-ins. Whenever this is allowed, an “identifyingString” parameter is present. Sometimes it is a required parameter and other times it is not. In either case, no two plug-ins can have the same identifying string. In the screen that lists the multiple occurrences of the plug-ins, each instance will be shown as its identifyingString if it is specified, or as the plug-in name followed by its parameters. In addition, each instance has an up and a down icon next to it to change the order of the instances.

4.1 Dimension

The main choice in a jSNARK input file is whether this is to be a two-dimensional or three-dimensional reconstruction. This choice must be made before anything else can be done and once made cannot be changed.



4.2 Create

The “Create” button is used to describe a phantom and create a phantom picture and/or projection data. A phantom is a digital image described by the composition of a number of shapes, where a shape is a bounded geometric figure. Example shapes are the ellipse and rectangle in 2D and the ellipsoid and box in 3D.

Every phantom must have a name. It is entered in the phantomName parameter. The name is used to identify the phantom in later operations.

4.2.1 Shape list

The “shapeList” button is used to specify the shapes that make up the phantom.

If the phantom is a time varying phantom, specify the cycle length in some unit of time (see Section 2.3) here. If this field is left blank, it defaults to 1.

There are two popular ways to combine shapes to form an image: summation and overlay. This choice is made here. The summation method was first presented in [44]. Using it, the density at a point in the phantom is the sum of the densities of all the shapes that contain that point. The overlay method was used in the FORBILD project.¹ Using it, the density at a point is the density of the last shape in the shape list that contains the point. Obviously, a key difference between the methods is that the summation method is order independent while the overlay method is order dependent. Another difference is in the kind of shapes that can be used. jSNARK contains two kinds of shapes, constant density and variable density. A constant density shape has a constant density in its interior, while a variable density shape’s density is a function of the point where the density is measured. An ellipse (see Section 7.1.1.1) is an example of a constant density shape while a blob (see Section 7.1.1.4) is an example of a variable density shape. The summation method allows either kind of shape while the overlay method is restricted to constant density shapes.
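The two combination rules can be contrasted with a minimal sketch (not jSNARK code; the shape representation as (density, membership-test) pairs is an assumption made for illustration):

```python
# Shapes are modeled as (density, contains) pairs, listed in phantom order.

def summation_density(shapes, point):
    """Density at `point` is the sum over all shapes containing it."""
    return sum(d for d, contains in shapes if contains(point))

def overlay_density(shapes, point):
    """Density at `point` is that of the LAST listed shape containing it."""
    value = 0.0
    for d, contains in shapes:
        if contains(point):
            value = d          # later shapes override earlier ones
    return value

# Two overlapping unit disks with densities 1.0 and 0.5.
disk = lambda cx, cy: (lambda p: (p[0] - cx)**2 + (p[1] - cy)**2 <= 1.0)
shapes = [(1.0, disk(0.0, 0.0)), (0.5, disk(0.5, 0.0))]

print(summation_density(shapes, (0.25, 0.0)))  # 1.5 (both disks contribute)
print(overlay_density(shapes, (0.25, 0.0)))    # 0.5 (last disk wins)
```

Reversing the shape list changes the overlay result but not the summation result, which is the order-(in)dependence noted above.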

4.2.1.1 Spectrum

Like its predecessor, SNARK05, jSNARK can simulate the energy spectrum of polychromatic radiation for ray sum generation with a limited number (seven) of discrete energy levels. The “Spectrum” button allows the spectrum to be defined. Each energy level is identified with a unique string and has an associated energy fraction and background density. Each energy fraction must be ≥ 0 and the sum of the energy fractions must be > ZERO. The energy fractions are normalized by dividing each of them by their sum. If a spectrum is not specified, the radiation is assumed to be monochromatic at an unspecified energy level and with a background density of 0. In the following, let NERGY be the number of energy levels, e_i be the normalized fraction of the i’th energy level, and B_i be the background absorption at the i’th energy level, for 1 ≤ i ≤ NERGY.
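The normalization rule can be sketched as follows (assumed semantics, not jSNARK source; the value of ZERO here is a stand-in for jSNARK's small positive constant):

```python
ZERO = 1e-10  # stand-in for jSNARK's small positive constant

def normalize_fractions(fractions):
    """Divide each energy fraction by the sum, so the e_i sum to 1."""
    total = sum(fractions)
    if any(f < 0 for f in fractions) or total <= ZERO:
        raise ValueError("fractions must be >= 0 and sum to more than ZERO")
    return [f / total for f in fractions]

print(normalize_fractions([2.0, 1.0, 1.0]))  # [0.5, 0.25, 0.25]
```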

4.2.1.2 Shape

A shape is a plug-in along with a density for each energy level in the spectrum, or a single density if a spectrum is not specified. For those shapes that are subclasses of the ConstantDensityShape class (see Section 6.3.1), the shape may be intersected with any number of half-spaces (see Section 5.10). The half-spaces are always closed. Let NOBJ be the number of shapes, ρ_j(x⃗, t) be the nominal value of the j’th shape at the point x⃗ and time instant t, and d_{j,i}(t) be the density of the j’th shape at the i’th energy level for 1 ≤ j ≤ NOBJ and 1 ≤ i ≤ NERGY at time instant t. If the shape is a constant density shape, ρ_j(x⃗, t) is the characteristic function of the shape. If the shape is a variable density shape, ρ_j(x⃗, t) is the nominal value of the shape at the point x⃗ and time instant t. Typically this will be a value in the range [0, 1]. Any of the float parameters of a shape may be given as a 1-tuple, a 2-tuple, or a 3-tuple as described in Section 2.3.

Each shape can have an optional identifyingString to name it. When this is present, the name can be referenced in the figure of merit (see Section 4.4.1) or evaluation (see Section 4.5.1) screens to eliminate the need of repeating the shape parameters.

¹ http://www.imp.uni-erlangen.de/phantoms/drasim.html


4.2.2 Phantom geometry

The phantomGeometry button is used to specify the geometry that is used to generate a digitized image from the spectrum and shape list.

Phantom file is the name of the file where the phantom image will be written. If omitted, the name will be the name of the input file with the extension ’.snark’ replaced with ’.phan’. If a directory name is not part of the name, it will be written in the working directory. The phantom file follows the DIG standard² for image files.

The number of time bins specifies the number of instances in the cycle where the phantom will be created.

Let N be this number and C be the cycle length given on the shapeList (see Section 4.2.1) screen. Then the phantom will be computed at the time values t_k = (k + 1/2) × C/N, where 0 ≤ k < N. These values of t will be used for the calculation of the shape parameters described in Section 2.3. Note that these times correspond to the centers of the N time intervals.
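A small sketch of this time-bin rule: with cycle length C and N time bins, the phantom is evaluated at the interval centers t_k = (k + 1/2) · C/N.

```python
def bin_centers(C, N):
    """Centers of the N time intervals of a cycle of length C."""
    return [(k + 0.5) * C / N for k in range(N)]

print(bin_centers(1.0, 4))  # [0.125, 0.375, 0.625, 0.875]
```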

Exactly two of the number of basis elements, grid spacing, and image size must be specified. The third is computed from the other two.

Let NELEM be the number of basis elements, ∆ be the grid spacing, s be the specified image size, and S be the actual image size. If NELEM and ∆ are specified, then S = NELEM × ∆. If NELEM and s are specified, then S = s and ∆ = S/NELEM. If ∆ and s are specified, then

NELEM = 2⌊s/(2∆) + 1/2⌋ + 1

and S = NELEM × ∆.

Subsampling, if specified, is a small positive integer. It is used to create a more accurate digital image by averaging multiple positions to get a single image value. Let ss be the value of subsampling, if specified, and 1 otherwise. Let mm = (ss − 1)/2.
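The two-of-three rule can be sketched as follows (assumed rounding semantics; note that when NELEM is derived from ∆ and s it comes out odd):

```python
from math import floor

def grid(nelem=None, delta=None, s=None):
    """Given exactly two of nelem, delta (grid spacing), s (image size),
    return (NELEM, delta, S) with the third quantity derived."""
    if nelem is not None and delta is not None:
        return nelem, delta, nelem * delta
    if nelem is not None and s is not None:
        return nelem, s / nelem, s
    if delta is not None and s is not None:
        nelem = 2 * floor(s / (2 * delta) + 0.5) + 1  # always odd
        return nelem, delta, nelem * delta
    raise ValueError("exactly two of nelem, delta, s must be given")

print(grid(delta=0.5, s=10.0))  # (21, 0.5, 10.5)
```

Note that the actual image size S may differ slightly from the requested size s in the last case.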

Let m = ⌊NELEM/2⌋ and δ = ∆/ss.

In 2D, the image will be generated on the square grid given by

x⃗_{i,j} = ((i − m) × ∆, (j − m) × ∆)

for 0 ≤ i < NELEM and 0 ≤ j < NELEM. The digital image grid is the set of these points. The subsample grid is given by

v⃗_{i,j} = ((i − mm) × δ, (j − mm) × δ)

for 0 ≤ i < ss and 0 ≤ j < ss. Let V = {v⃗_{i,j}}.

In 3D, the image will be generated on a simple cubic grid given by

x⃗_{i,j,k} = ((i − m) × ∆, (j − m) × ∆, (k − m) × ∆)

for 0 ≤ i < NELEM, 0 ≤ j < NELEM, and 0 ≤ k < NELEM. The digital image grid is the set of these points. The subsample grid is given by

v⃗_{i,j,k} = ((i − mm) × δ, (j − mm) × δ, (k − mm) × δ)

for 0 ≤ i < ss, 0 ≤ j < ss, and 0 ≤ k < ss. Let V = {v⃗_{i,j,k}}.

For each shape, let ρ_j(x⃗, t_k) be the characteristic function of constant density shape j at time t_k or the nominal density of variable density shape j at time t_k. The nominal density should lie in the range [0, 1]. Recall that c_n(t_k) is the coefficient for the n’th grid point for the digital image g (2.2) and let x⃗_n be the corresponding grid point.

For the summation model, the value of that coefficient is given by

c_n(t_k) = (1/|V|) Σ_{v⃗∈V} Σ_{j=1}^{NOBJ} ρ_j(x⃗_n + v⃗, t_k) Σ_{i=1}^{NERGY} d_{j,i}(t_k) e_i,   (4.1)

where |V| is the number of elements in the set V. (Recall d_{j,i} is defined in Section 4.2.1.2 and e_i is defined in Section 4.2.1.1.)
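The digitization of the summation model can be sketched in 2D for a single energy level (a simplified illustration, not jSNARK code; the shape representation is an assumption):

```python
def coefficient(point, shapes, ss, delta):
    """Average of summed shape densities over the subsample grid V centered
    on `point`. shapes: (density, contains) pairs; ss: subsampling factor."""
    mm = (ss - 1) / 2
    d = delta / ss
    total = 0.0
    for i in range(ss):
        for j in range(ss):
            p = (point[0] + (i - mm) * d, point[1] + (j - mm) * d)
            total += sum(dens for dens, inside in shapes if inside(p))
    return total / (ss * ss)

# An illustrative half-plane "shape" x <= 0 with density 1: a grid point on
# the boundary gets coefficient 0.5 with subsampling, instead of 0 or 1.
shapes = [(1.0, lambda p: p[0] <= 0.0)]
print(coefficient((0.0, 0.0), shapes, ss=2, delta=1.0))  # 0.5
```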

² See http://www.dig.cs.gc.cuny.edu/manuals/software_discipline.pdf and http://www.dig.cs.gc.cuny.edu/manuals/schemas/DIG.xsd.


For the overlay model, we define for a fixed time t_k, a fixed point x⃗, and each v⃗ ∈ V,

j(x⃗_n + v⃗, t_k) = n, where ρ_n(x⃗_n + v⃗, t_k) = 1 and (∀m)(n < m < N) ρ_m(x⃗_n + v⃗, t_k) = 0; or 0 if there is no such n.

Then, let V_n(t_k) = {v⃗ ∈ V | j(x⃗_n + v⃗, t_k) > 0}. The density assigned to the digital image is then given by

c_n(t_k) = (1/|V_n(t_k)|) Σ_{v⃗∈V_n(t_k)} Σ_{i=1}^{NERGY} d_{j(x⃗_n+v⃗,t_k),i}(t_k) e_i.

4.2.3 Projection geometry

The projectionGeometry button is used to specify the geometry that is used to generate projection data from the spectrum and shape list, where a projection is the collection of ray sums for a single source and detector position.

Projection file is the name of the file where the projection data will be written. If omitted, the name will be the name of the input file with the extension ’.snark’ replaced with ’.proj’. If a directory name is not part of the name, it will be written in the working directory. The projection file follows the DIG standard³ for projection data files.

4.2.3.1 Aperture

Like its predecessor, SNARK05, jSNARK has the capability to simulate a detector with varying sensitivity over its surface. All detectors are assumed to have the same size and the same sensitivity. For a 2D phantom, the detector array is a line segment or an arc. For a 3D phantom, the detector array is a rectangle or a section of a cylinder. The aperture button allows up to 7 weights to be defined. The default is a single weight with value 1. Let N be the number of non-zero elements that are defined. The normalized values of the aperture weights are the aperture values divided by their sum. In the 2D case, the 1D detector element is subdivided into N equally sized subelements and the weight of subelement i is a_i, where a_i is the i’th normalized aperture weight. In the 3D case, the 2D detector element is divided into N · N equally sized rectangular subelements and the weight of subelement i, j is a_i · a_j, where a_k is the k’th normalized weight.

Let

NAPER = N in the 2D case, or N · N in the 3D case,

and

r_n = a_n in the 2D case, or a_i · a_j, where i = ⌊n/N⌋ and j = n mod N, in the 3D case,   (4.2)

for 0 ≤ n ≤ NAPER − 1.
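The weighting scheme above can be sketched as follows (an illustration of Equation (4.2), not jSNARK source):

```python
def subray_weights(aperture, three_d=False):
    """Normalized subelement weights: a_n in 2D; a_i * a_j with
    i = n // N and j = n % N in 3D."""
    n_w = len(aperture)
    a = [w / sum(aperture) for w in aperture]   # normalized weights
    if not three_d:
        return a
    return [a[n // n_w] * a[n % n_w] for n in range(n_w * n_w)]

print(subray_weights([1.0, 2.0, 1.0]))                   # [0.25, 0.5, 0.25]
print(sum(subray_weights([1.0, 2.0, 1.0], three_d=True)))  # 1.0
```

Since the 2D weights sum to 1, the N · N product weights in 3D also sum to 1.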

4.2.3.2 Arrangement

The arrangement button is used to specify the relationship between the source and the detector. An arrangement is a plug-in. Examples of arrangements in 2D are parallel and tangent. Examples of arrangements in 3D are parallel beam and cone beam.

4.2.3.3 Rays

The rays button is used to specify the number of rays in a projection. In 2D this is the number of detectors along the 1D detector strip. In 3D the 2D detector array may be rectangular.

The program button is used to let jSNARK compute the number of rays in a projection. The user specifies the detector spacing and the image size. The number of rays is computed such that with the given arrangement the entire image will be projected onto the detector from any direction.

³ ibid.


The user button is used to explicitly specify the number of rays in a projection. In 2D, three parameters are shown: the number of rays, detector spacing, and detector width. Exactly two of the three must be specified. Let NRAYS be the number of rays, d be the detector spacing, and w be the detector width. If the number of rays and detector spacing are specified, then the detector width is given by NRAYS × d. If the number of rays and the detector width are specified, then the detector spacing is given by w/NRAYS. If the detector spacing and detector width are specified, then the number of rays is given by

NRAYS = 2⌊w/(2d) + 1/2⌋ + 1.

In 3D, two additional parameters are shown: detector height and detector rows. If detector height is not specified, it defaults to the detector spacing. If the number of detector rows is not specified, it defaults to the number of rays. The number of detectors may be given explicitly or computed.

Note: These names may be a bit confusing and may be changed in the future. Detector width is the width of the entire detector array, while detector spacing is the width of a single detector in the array. Detector height is the height of a single detector in the array, and this value times the number of detector rows is the height of the entire detector.

4.2.3.4 Directions

The directions button is used to specify the projection directions. A direction is a plug-in. Examples of directions in 2D are equally spaced angles and a single angle. Examples of directions in 3D are Euler angles and the helix orbit. A time instant is associated with each direction. Time starts at zero. Each direction instance has a start offset. That offset is added to the time when the last direction was completed. Then, for each direction generated, the time instant is increased by the pulse time value. This time value is used to generate a phantom at that time instant.

4.2.3.5 Accuracy

The accuracy button is used to specify a noise model that modifies the computed projection data.

The sliceNoise button is used for distance dependent blurring. Slice noise is a plug-in. Note: This feature has not been implemented yet.

The projectionNoise button is used to modify the projection data where the model uses the entire projection as input to the noise generator. Projection noise is a plug-in. An example is scatter noise (see Section 7.4).

The quantumNoise button is used to modify the projection data based on quantum statistics. Quantum noise is a plug-in. If no plug-in is specified, a default plug-in that simulates perfect detectors is used.

The rayNoise button is used to modify the projection data where the noise model depends only on the value of a single ray. Ray noise is a plug-in. Examples of ray noise are additive and multiplicative Gaussian noise.

4.2.3.6 Ray sum generation

Ray sums (projection data) are created for a number of rays and a number of projections when the projectionGeometry element is included in the jSNARK XML input file.

Projection data are approximations to the real ray sums. A real ray sum is defined as the integral of the density of the picture along a ray (see Section 2.5). A single projection is a set of line integrals from a source to a detector at some orientation in space, all taken at some instant in time. The rays are equally spaced along the detector. In 2D, the detector can be a line segment or an arc. In 3D, the detector can be a rectangle or a section of a cylinder. Each detector element can be further divided into a number of sub-elements as described in Section 4.2.3.1. The source and detector pair have an arbitrarily assigned unrotated position. In 2D, the source lies on the positive X axis and the center of the detector intersects the negative X axis. In 3D, the source lies on the positive Z axis and the center of the detector intersects the negative Z axis. Each particular projection is achieved by rotating the source-detector pair to the actual position. For the line and helix orbits (see Section 7.3.2), an additional translation is also performed.

A ray is specified by a point on the ray and a direction. For projections, the point is a point on the detector. The direction is a normal to the detector for parallel radiation or a unit vector from the point to the source position for fan beam projections.


As we have seen in the last section, jSNARK allows multiple densities associated with each ray (one for each energy level), which makes the definition of real ray sums inapplicable in the polychromatic case.

In the following, we will show how the ray sum is computed for a single ray at a single energy level and a single time instant. The ray is specified by a point p⃗ on the ray and a unit vector u⃗ in the direction of the ray. Later we will show how multiple rays are combined in the aperture, how the multiple energy levels are combined, and how simulated noise is added.

Time Each projection is computed for the phantom at a particular time. Each directions plug-in has two optional values, startTime and pulseTime, that are used to compute the values of t where the projections are taken. If startTime is not specified, it defaults to 0. If pulseTime is not specified, it defaults to 1. Suppose that there are two directions plug-ins being used. Let S_1 and S_2 be the two start times, P_1 and P_2 be the two pulse times, and N_1 and N_2 be the number of projections for each plug-in. Then the times for the projections in the first direction are t_i = S_1 + iP_1 for 0 ≤ i < N_1, and the times for the projections in the second direction are t_i = t_{N_1−1} + S_2 + (i − N_1)P_2 for N_1 ≤ i < N_1 + N_2. These times will be used for the projection data creation as described below.
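This timing rule generalizes to any number of directions plug-ins; a small sketch (illustrative, not jSNARK code):

```python
def projection_times(plugins):
    """plugins: list of (startTime, pulseTime, nProjections) triples.
    Each plug-in's start offset is added to the last time of the previous."""
    times, last = [], 0.0
    for start, pulse, n in plugins:
        t = last + start
        for _ in range(n):
            times.append(t)
            t += pulse
        last = times[-1]
    return times

print(projection_times([(0.0, 1.0, 3), (0.5, 2.0, 2)]))
# [0.0, 1.0, 2.0, 2.5, 4.5]
```

With S_1 = 0, P_1 = 1, N_1 = 3 and S_2 = 0.5, P_2 = 2, N_2 = 2 this reproduces the two formulas above.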

Summation model For the summation model, for each ray given by point p⃗ and direction u⃗, at each energy level i and time instant t, the real ray sum R_i of the phantom is the sum

R_i(t) = Σ_{j=1}^{NOBJ} d_{j,i}(t) ∫_{−∞}^{∞} ρ_j(p⃗ + a u⃗, t) da,   (4.3)

where NOBJ is the number of elemental shapes, ρ_j(x⃗, t) is the value of the j’th elemental shape at the point x⃗ and time t, and d_{j,i}(t) is the density of the j’th elemental shape at energy level i and time t.

Overlay model The situation is more complicated for the overlay model. For each shape in the phantom, the ray intersects the shape boundary at either zero or two points. As the ray progresses through the phantom, it is divided into 2N − 1 segments, where N is the number of shapes with two boundary intersections. Consider two shapes that both intersect a ray twice. Let a_{1,1} and a_{1,2} be the ray parameters that describe the intersection of the ray with the first shape, with a_{1,1} < a_{1,2}. Similarly, let a_{2,1} and a_{2,2} be the intersections with the second shape, where a_{2,1} < a_{2,2} and this shape occurs after the first in the phantom description. There are six cases to consider. The first two cases are where the shapes are disjoint. The second two cases are where one shape is contained inside the other. The last two cases are where the shapes partially overlap. For each case, R_i(t) is the ray sum for that case.

a_{1,1} < a_{1,2} ≤ a_{2,1} < a_{2,2} : R_i(t) = (a_{1,2} − a_{1,1})d_1(t) + (a_{2,2} − a_{2,1})d_2(t).

a_{2,1} < a_{2,2} ≤ a_{1,1} < a_{1,2} : R_i(t) = (a_{2,2} − a_{2,1})d_2(t) + (a_{1,2} − a_{1,1})d_1(t).

a_{1,1} ≤ a_{2,1} ≤ a_{2,2} ≤ a_{1,2} : R_i(t) = (a_{2,1} − a_{1,1})d_1(t) + (a_{2,2} − a_{2,1})d_2(t) + (a_{1,2} − a_{2,2})d_1(t).

a_{2,1} ≤ a_{1,1} ≤ a_{1,2} ≤ a_{2,2} : R_i(t) = (a_{2,2} − a_{2,1})d_2(t).

a_{1,1} ≤ a_{2,1} ≤ a_{1,2} ≤ a_{2,2} : R_i(t) = (a_{2,1} − a_{1,1})d_1(t) + (a_{2,2} − a_{2,1})d_2(t).

a_{2,1} ≤ a_{1,1} ≤ a_{2,2} ≤ a_{1,2} : R_i(t) = (a_{2,2} − a_{2,1})d_2(t) + (a_{1,2} − a_{2,2})d_1(t).

The algorithm for computing the ray sums for the overlay model starts by initializing an array A with the list of intersections of shapes with the given ray, padded with two entries of INFIN, and the array s with the index of the shape corresponding to the intersection with the same index, padded with two entries of -1. The array o is the permutation that sorts the array A into increasing order. Array e is a boolean array the same size as the arrays A and s, initialized to false; this array is used to keep track of whether we are entering or exiting a shape. From these, we get the following pseudo-code algorithm, which is done in parallel for all energies i. In the pseudo-code comments, “earlier” and “later” refer to the ordering of the shapes in the create shapeList element.

4.2. CREATE 33

    R = 0
    // k is the sort order index
    k = o_0
    // a is the current boundary point
    a = A_k
    // si is the index of the current shape
    si = s_k
    if (si < 0) return
    // mark shape si as being entered
    e_si = true
    // mm is the index of the shape containing a
    mm = si
    for (j = 1 to size of A) {
        // move to the next boundary crossing
        k = o_j
        m = s_k
        if (m >= si) {
            // either entering a later shape or exiting the current shape
            // update R
            R = R + (A_k − a) * d_{si,i}
            // update the boundary point
            a = A_k
        }
        if (m == si) {
            // exiting the current shape
            e_si = false
            // find the last shape that contains the current boundary point
            for (n = mm downto 0) {
                if (e_n) {
                    si = n
                    break
                }
            }
            if (n >= 0) {
                // update mm with the shape that was found
                mm = n
            } else {
                // there is no such shape - advance to the next shape
                j = j + 1
                k = o_j
                si = s_k
                if (si < 0) {
                    // reached the end of the list
                    break
                }
                // move to the next boundary point
                a = A_k
                e_si = true
            }
        } else if (m > si) {
            // entering a later shape - mark entry of that shape
            e_m = true
            if (m > mm) {
                mm = m
            }
            si = m
        } else {
            // entering or exiting an earlier shape
            if (m < 0) {
                break
            }
            e_m = !e_m
        }
    }
    return R

The returned value, R, is the ray sum for a single ray at a single energy level and a single instant of time, and corresponds to R_i(t) in the overlay case.
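The pseudo-code above is intricate; the same overlay rule can be expressed much more simply (though less efficiently) by scanning the sorted boundary points and charging each elementary segment to the last listed shape containing it. The sketch below (not jSNARK code) does this for one energy level, with each shape represented as a single interval of ray parameters:

```python
def overlay_raysum(intervals, densities):
    """intervals: (a1, a2) per shape in phantom order; densities: d_j for one
    energy level. Each segment gets the density of the LAST shape covering it."""
    bounds = sorted({a for ab in intervals for a in ab})
    total = 0.0
    for lo, hi in zip(bounds, bounds[1:]):
        mid = (lo + hi) / 2
        d = 0.0
        for (a1, a2), dens in zip(intervals, densities):
            if a1 <= mid < a2:
                d = dens          # last listed covering shape wins
        total += (hi - lo) * d
    return total

# Shape 2 nested inside shape 1 (the third case in the list above):
print(overlay_raysum([(0.0, 4.0), (1.0, 2.0)], [1.0, 5.0]))  # 1 + 5 + 2 = 8.0
```

The result matches the nested-shape case formula: (a_{2,1} − a_{1,1})d_1 + (a_{2,2} − a_{2,1})d_2 + (a_{1,2} − a_{2,2})d_1.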

Aperture and Noise Each detector is divided into NAPER smaller detectors (see Section 4.2.3.1). Let R_{i,n} be the real ray sum for the i’th energy level for the central ray of the n’th subray (defined by Equation (4.3)). The energy absorption coefficient is given by

S = Σ_{n=1}^{NAPER} r_n Σ_{i=1}^{NERGY} p_i × q.raysum(R_{i,n}(t) + B_i),   (4.4)

where r_n is the weight of the n’th subray as given in Equation (4.2), p_i is the weight of the i’th energy level, B_i is the background absorption at the i’th energy level (see Section 4.2.1.1), and q.raysum is the raysum method of the QuantumNoise instance (see Sections 6.13 and 7.5). This value is the expected number of photons detected by the detector. The collection of these values for a single projection is passed to the applyNoise method of the ProjectionNoise instance if one is present (see Section 6.12). Next, the value for each detector is passed to the applyNoise method of the QuantumNoise instance, which restores the value to a raysum corrupted by quantum and projection noise (see Section 6.13). Lastly, the raysum for each detector is modified by the applyNoise method of all the RayNoise instances that have been specified (see Section 6.14).
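The double sum in (4.4) can be sketched for an idealized detector. Here the stand-in for q.raysum maps a raysum r to an expected photon count via Beer's law, exp(−r); this is a simplifying assumption for illustration, not necessarily what a given QuantumNoise plug-in computes:

```python
from math import exp

def detected_photons(R, r, p, B):
    """S of equation (4.4). R[n][i]: subray raysums; r[n]: subray weights;
    p[i]: energy-level weights; B[i]: background absorption."""
    return sum(
        r[n] * sum(p[i] * exp(-(R[n][i] + B[i])) for i in range(len(p)))
        for n in range(len(r))
    )

# One subray, two energy levels with weights 0.6 and 0.4.
print(detected_photons([[1.0, 2.0]], [1.0], [0.6, 0.4], [0.0, 0.0]))
```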

4.3 Execute

The execute button is used to perform one or more reconstructions using one or more reconstruction algorithms.

Reconstruction file is the name of the file where the reconstruction information will be written. If omitted, the name will be the name of the input file with the extension ’.snark’ replaced with ’.recon’. If a directory name is not part of the name, it will be written in the working directory. The reconstruction file is an XML file that references the phantom picture (if present), the projection data file, and the reconstruction images for each iteration of each algorithm. The reconstruction images will be written to a directory in the same directory as the reconstruction file, with the same name as the reconstruction file less the ’.recon’ extension. Each iteration of each algorithm will be written to its own image file. It must be the case that the iterations of each algorithm are in separate files, because jSNARK keeps the images in the algorithm’s internal format and the DIG file structure does not allow for mixed geometries in a single file. It is preferable to have each image in its own file because the DIG structure is to write the image data in binary after the XML header.⁴ This means that if multiple images were to be written to a single file, the images would have to be kept in a temporary file as they were being produced and then copied to the image file after the XML header describing all the images had been written. The copy step is not necessary with a single image.

4.3.1 Input

The input button specifies the inputs for the reconstructions that follow.

⁴ Note that this is a violation of the XML standards. See http://www.w3.org/TR/REC-xml/#sec-well-formed


4.3.1.1 Reconstruction Region

The reconstructionRegion button specifies the reconstruction region. It offers two choices: read phantom and reconstruction geometry. Read phantom requires that the phantom file be specified only if a phantom was not created earlier in the input. Reconstruction geometry requests the values for the cycle length, number of time bins, number of basis elements, basis element separation, and image size. These are treated identically to the values in Section 4.2.2 above.

4.3.1.2 Projection

The projection button specifies where the projection data comes from. It offers two choices: read projections and pseudo. Read projections requires that a projection file be specified only if projection data was not created earlier in the input. Pseudo requests that pseudo projection data is generated from the phantom image.

If a time varying reconstruction is being performed, projection data are binned so that projections from similar points in the cycle are collected together. Let C be the cycle length, N be the number of time intervals, and t_i be the time of projection i. Then this projection will be placed in bin ⌊N t_i/C⌋. The number of projections placed in each bin is reported on the output along with an estimate of the average density for that bin. If it happens that a bin does not contain any projections, that bin will not be used for reconstructions.

In all cases, projections are presented to the algorithms in increasing time order.
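The binning rule can be sketched as follows. The modulo in this sketch is an assumption for projection times beyond one cycle (so that similar points in the cycle share a bin); within the first cycle it reduces to ⌊N t_i/C⌋ exactly:

```python
from math import floor

def bin_index(t, C, N):
    """Time bin for a projection taken at time t, cycle length C, N bins."""
    return floor(N * (t % C) / C)

print([bin_index(t, 1.0, 4) for t in [0.1, 0.3, 0.6, 0.9, 1.1]])
# [0, 1, 2, 3, 0]
```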

4.3.2 Algorithm

The algorithm button adds the execution of a reconstruction algorithm to the run. An algorithm is a plug-in. Every algorithm requires a unique identifying string that is used to identify the output of the algorithm for later operations.

The starting image for the execution can be one of zero, average, continue, or phantom. The zero image is, of course, all zeros. The average image is a constant valued image where the value is the estimate of the average image density based on the projection data values. Continue causes the output of the last reconstruction to be used as the starting point for this algorithm. Phantom uses the phantom as the starting image. Both continue and phantom may require that the format of the image is changed, because those images may have different image geometries and basis elements than those requested for this algorithm (see below). These conversions are only approximations. Users should be aware that conversions from spel basis elements to blob basis elements can be quite expensive.

Sequence handling specifies how time varying projection data should be presented to an algorithm that is not capable of time varying reconstructions. The choices are time averaging and slice by slice. The time averaging choice presents all the projection data to the algorithm at once as if there were no time variation. The slice by slice choice creates a set of reconstruction images and runs the algorithm with separate images for each time bin.

We will use f^(q)(x⃗, t) to denote the value of a reconstruction at iteration q at the point x⃗ and time t.

4.3.2.1 Image geometry

An image geometry is a plug-in. All algorithms are capable of performing reconstructions in a number of geometries. The choices are available here. All of the built-in geometries allow the reconstruction grid to have a number of grid lines and a grid spacing that can be quite different from those of the reconstruction region. If the product of the number of grid lines times the grid spacing is quite different from that of the reconstruction region, a warning is issued. If only one of the number of grid lines and grid spacing parameters is specified, the other will be computed based on the reconstruction region size. Further, if the number of grid lines is not specified, the computed value will have the same parity as the number of grid lines in the reconstruction region. Note that all reconstructions will be resampled to the reconstruction geometry during evaluation and output.


4.3.2.2 Basis element

A basis element is a plug-in. The series expansion reconstruction algorithms are capable of performing reconstructions with a number of basis elements. The choices are presented here. Non-series expansion reconstruction algorithms are restricted to spel basis elements.

4.3.2.3 Termination

A termination is a plug-in that indicates when an iterative reconstruction algorithm should be terminated. If no termination plug-in is specified, the algorithm is terminated after one iteration.

4.3.2.4 Ray order

Ray order is a plug-in that directs the order in which rays are processed for the series expansion reconstruction algorithms. If no ray order plug-in is specified, the Efficient plug-in (see Sections 7.12.1.1 and 7.12.2.1) is used.

4.3.2.5 Post processing

Post processing is a plug-in that performs some operation on a reconstruction before it is written to disk. Zero or more such operations are allowed.

4.4 Figure of merit

The figureOfMerit button is used to add figure of merit evaluation to the run. It outputs an XML file that contains the figure of merit values. jSNARK_experimenter (when written) will read this file to generate the figure of merit evaluation.

Reconstruction file is the name of the reconstruction file. If omitted and an execute element is present in the input file, this defaults to the same name that was used in the execute section; otherwise, it defaults to the name of the input file with the extension '.snark' replaced with '.recon'.

Figure of merit file is the name of the file where the figure of merit XML output will be written. If omitted, it defaults to the name of the input file with the extension '.snark' replaced with '.fom'.

4.4.1 Figure of merit method

A figure of merit is a plug-in that compares a phantom with a reconstruction and results in a single number that represents the "goodness" of a reconstruction according to some criterion. The value should have the property that higher values indicate a better reconstruction according to the criterion. Figure of merit methods may take optional shape (region of interest) parameters. The user is presented with two drop down lists, one for identifyingString and the other for a shape. The identifyingString list is populated with the names of the named shapes in the create section. Picking one of these means that the shape is the same one that was defined in the create section with that name, eliminating the need to duplicate the shape parameters. Alternatively, the user can create a new shape. As in create (see Section 4.2.1.2), the ConstantDensityShape subclasses may be intersected by any number of half-spaces. Since the purpose of the shape is to define a region of interest, it does not make sense to use anything other than a ConstantDensityShape here. In addition to the shape parameters, a region of interest has three additional parameters: lowerBound, upperBound, and clipOrIgnore. These modify how the shape is treated during the figure of merit computation. clipOrIgnore is restricted to the values clip and ignore. If clip is specified, then values in the phantom or reconstruction greater than upperBound are set to upperBound and values less than lowerBound are set to lowerBound. If ignore is specified, then positions where the value of the phantom is outside those limits are ignored. For time varying reconstructions, the parameters of the regions of interest and the bounds are allowed to be time varying parameters.
This allows a region of interest to track a shape that moves in the phantom description. If the phantom is a time varying phantom and the reconstruction was produced using the time averaging option, then an average phantom is calculated from the set of phantom instances and the figure of merit plug-in compares the reconstruction to the average phantom.
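The clip rule described above amounts to moving out-of-range values to the nearest bound. A minimal sketch of that rule follows; the class and method names are illustrative, not part of jSNARK.

```java
// Sketch of the clip behavior for a region of interest: values greater
// than upperBound become upperBound, values less than lowerBound become
// lowerBound, and values inside the range are unchanged.
public class ClipSketch {
    public static double clip(double value, double lowerBound, double upperBound) {
        return Math.min(upperBound, Math.max(lowerBound, value));
    }
}
```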


4.5 Evaluator

The evaluator button is used to add an evaluation to the run.

Reconstruction file is the name of the reconstruction file. If omitted and an execute element is present in the input file, this defaults to the same name that was used in the execute section; otherwise, it defaults to the name of the input file with the extension '.snark' replaced with '.recon'.

4.5.1 Evaluator method

An evaluator method is a plug-in that compares a phantom with a reconstruction and outputs one or more values. Evaluator methods may also take optional region of interest parameters that behave identically to those described in Section 4.4.1 above.

4.6 Output

The output button is used to add an output operation to the run.

Reconstruction file is the name of the reconstruction file. If omitted and an execute element is present in the input file, this defaults to the same name that was used in the execute section; otherwise, it defaults to the name of the input file with the extension '.snark' replaced with '.recon'.

4.6.1 Output method

An output method is a plug-in that generates another representation (often a displayable image) of a phantom or reconstruction.

4.7 Versions and schema checking

While jSNARK is being developed, the XML schema is also changing. Each version of jSNARK has its own schema. Usually there are only minor changes, but occasionally major changes have to be made. Whether minor or major, the changes mean that a *.snark file created under one version of jSNARK will not run in any other version. When reading an existing *.snark file, the jSNARK_GUI program checks that the file version matches the GUI version and that the file conforms to the version schema. It reports an error and will not continue if either of these checks fails. Instead of having to start over, in some cases it may be possible to edit the existing input file to make it acceptable to the new jSNARK version. Open a copy of the file using your favorite text editor. The version number is contained near the end of the first line, just before the dimensions attribute. Change it to match the GUI version. Now attempt to read it with the GUI. Failures are mostly caused by changes in the attributes of some XML element. If the GUI complains that an unexpected attribute is found, locate the XML element containing the attribute and delete the attribute. If the GUI complains that a required attribute is missing, you may try to add that attribute if you know its type; otherwise it is safer to delete the entire element. If the GUI complains that a required element is missing, it is safer to delete the containing element rather than try to add the missing element using the text editor. Deleting elements can be tricky because you have to delete everything between the start tag and the end tag. If this fails, your best bet is to read the file with the matching GUI version and, in a second GUI, recreate the file matching screen for screen.


Chapter 5

Foundation classes

This chapter may be skipped if the reader is not interested in implementing new jSNARK plug-ins.

5.1 Globals

The Globals class contains certain data items that need to be referenced in many different parts of jSNARK. There are no instances of the Globals class. All data and methods are static. Some of these have already been discussed in Section 2.2. Other data items include the image size, default grid spacing, number of time intervals, and number of rays (see Sections 4.2.2 and 4.2.3).

5.2 JSnarkMath

The JSnarkMath class is similar to the Java Math class. It contains methods that are pure functions. There are no instances of the JSnarkMath class. All methods are static. They are:

quadraticRoots(double a, double b, double c): double[] returns a double array that contains the two real roots of the quadratic equation ax² + bx + c = 0, where a, b, and c are the arguments to the method. It returns null if there are no real roots. If two roots are found, the smaller root will be first.
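The behavior just described can be sketched as follows; the class name is illustrative, and this is a re-implementation of the documented contract, not jSNARK's source.

```java
// Sketch of the quadraticRoots contract: real roots of ax^2 + bx + c = 0,
// smaller root first, or null when the discriminant is negative.
// Assumes a is nonzero.
public class QuadraticSketch {
    public static double[] quadraticRoots(double a, double b, double c) {
        double disc = b * b - 4.0 * a * c;
        if (disc < 0.0) {
            return null; // no real roots
        }
        double sq = Math.sqrt(disc);
        double r1 = (-b - sq) / (2.0 * a);
        double r2 = (-b + sq) / (2.0 * a);
        // return the smaller root first
        return new double[] { Math.min(r1, r2), Math.max(r1, r2) };
    }
}
```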

min(double[] value): double returns the minimum value contained in the array passed as an argument.

max(double[] value): double returns the maximum value contained in the array passed as an argument.

primeFactors(int n): int[] returns an int array containing the prime factors of its argument.

findPerm(double[] data): int[] returns the permutation that, when applied to the double array argument, will sort the array into increasing order.

efficientOrder(int size): int[] returns an array of integers representing the efficient order of the given size. The efficient order is described in [27].

sinc(double x): double returns sin(πx)/(πx). (Note: some authors define sinc(x) as sin(x)/x.)

Ci(double x): double returns γ + ln(x) + ∫₀ˣ (cos(t) − 1)/t dt, where γ is Euler's constant.

Cin(double x): double returns ∫₀ˣ (1 − cos(t))/t dt.



5.3 Fourier

The Fourier class contains the forward and inverse fast Fourier transform algorithms and two supporting methods.

Fourier(int dim[]) initializes an instance of the Fourier class, providing an array that defines the dimensions of the discrete data to be transformed. dim is a three-element array that specifies the size of each of the dimensions of the data array. For a one-dimensional array, dim[2] is the size of the data and dim[0]=dim[1]=1. For a two-dimensional array, dim[2] is the size of the X dimension, dim[1] is the size of the Y dimension, and dim[0]=1. If x is a complex array to be transformed, then x.length=2*dim[0]*dim[1]*dim[2] and the real part of x[k,l,m] is stored in vector fashion in a cell with index 2*(k*dim[1]*dim[2]+l*dim[2]+m); the imaginary part is stored in the cell immediately following. Note that the subscript m increases most rapidly, and k least rapidly.
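The storage layout just described can be captured in a small indexing helper; the class and method names here are illustrative, not part of jSNARK.

```java
// Sketch of the interleaved complex storage layout: for a complex array
// with dimensions dim[0] x dim[1] x dim[2], the real part of element
// (k, l, m) lives at 2*(k*dim[1]*dim[2] + l*dim[2] + m) and the
// imaginary part immediately follows it. m varies fastest, k slowest.
public class FourierIndexSketch {
    public static int realIndex(int[] dim, int k, int l, int m) {
        return 2 * ((k * dim[1] + l) * dim[2] + m);
    }

    public static int imagIndex(int[] dim, int k, int l, int m) {
        return realIndex(dim, k, l, m) + 1;
    }
}
```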

fft(double[] x): void computes the discrete fast Fourier transform of a 1-, 2-, or 3-dimensional complex array in place. This algorithm is based on Polge and Bhagavan [42]. That algorithm was written in Fortran; it was translated to C++ for SNARK05 and has now been translated to Java. Its argument, x, is the complex input data as a linear array with a size consistent with the constructor. The data storage is described in the constructor. As with all implementations of the fast Fourier transform, the origin is at x[0,0,0]. Index k>=dim[0]/2 logically corresponds to index k-dim[0]. Similarly, index l>=dim[1]/2 corresponds to index l-dim[1] and index m>=dim[2]/2 corresponds to index m-dim[2].

ifft(double[] x): void computes the discrete inverse fast Fourier transform of a 1-, 2-, or 3-dimensional array in place. Its argument, x, is a complex array as above.

convertToComplex(double[] in, int[] inDim): double[] converts a real array into an equivalent complex array suitable for use with fft above. The input and output dimensions are given in the same form as the argument dim for the fft. The input array is shifted so that the coefficient at the logical origin, [(inDim[0]+1)/2,(inDim[1]+1)/2,(inDim[2]+1)/2], is moved to [0,0,0]. If an output dimension is greater than the corresponding input dimension, the output is padded with zeros. If an output dimension is less than the corresponding input dimension, the excess coefficients are discarded. There are two versions of this routine; this version returns the output array as its result.

convertToComplex(double[] in, int[] inDim, double[] out): void converts a real array into an equivalent complex array suitable for use with fft above. Its output is put in the third argument, out, which must have the appropriate size or an error will be reported.

extractRealPart(double[] in, int[] outDim): double[] converts a complex array into an equivalent real array, discarding the complex part. This routine does the inverse of convertToComplex above. It moves the origin back to the center of the array and pads or discards as needed. There are also two versions of this routine, one that creates the result array and one that puts the result in an argument. This version returns the result array.

extractRealPart(double[] in, int[] outDim, double[] out): void converts a complex array into an equivalent real array, discarding the complex part. Its output is put in the third argument, out, which must have the appropriate size or an error will be reported.

5.4 Poisson

The Poisson class is used to generate instances of a random integer with a Poisson distribution.

Poisson(Random random) creates a new instance of this class using the given instance of the uniform random number generator.

nextPoisson(double lambda): double returns an instance of a random integer from a Poisson distribution with the mean given by its argument.
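As an illustration of what such a class does, here is a minimal Poisson sampler using Knuth's multiplication method. The names mirror the class above, but this is a sketch under that assumption, not jSNARK's implementation, which may use a different (and for large lambda, faster) algorithm.

```java
import java.util.Random;

// Sketch of Poisson sampling by Knuth's method: multiply uniform
// variates until the running product drops below e^(-lambda); the
// number of multiplications, minus one, is Poisson-distributed.
public class PoissonSketch {
    private final Random random;

    public PoissonSketch(Random random) {
        this.random = random;
    }

    public int nextPoisson(double lambda) {
        double limit = Math.exp(-lambda);
        double product = 1.0;
        int count = -1;
        do {
            count++;
            product *= random.nextDouble();
        } while (product > limit);
        return count;
    }
}
```

Note that the expected number of loop iterations is lambda + 1, so this sketch is only practical for modest means.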


5.5 Bessel

The Bessel class implements numerical approximations for the common Bessel functions. Like the JSnarkMath class, all of these methods are static and instances of the class cannot be created.

J(int order, double x): double returns the value of the Bessel function of the first kind ([1], p. 358) of the given order at the given argument.

JHalf(int order, double x): double returns the value of the Bessel function of the first kind ([1], p. 437) of the given order + 1/2 at the given argument.

I(int order, double x): double returns the value of the modified Bessel function ([1], p. 374) of the given order at the given argument.

IHalf(int order, double x): double returns the value of the modified Bessel function ([1], p. 443) of the given order + 1/2 at the given argument.

j(int order, double x): double returns the value of the spherical Bessel function ([1], p. 437) of the given order at the given argument.

5.6 Axis

public final class Axis {
    static public final Axis X = new Axis("x");
    static public final Axis Y = new Axis("y");
    static public final Axis Z = new Axis("z");
    public String toString() { ... }
    static public final Axis valueOf(String axisName) { ... }
}

The Axis class contains three static members, "X", "Y", and "Z", which are used to represent the three dimensions. There is a static member function, valueOf, which, when called with one of the letters "X", "Y", or "Z", will return the corresponding static member. The toString member function returns the name of the axis.

5.7 Vector

public interface Vector extends Cloneable {
    public double norm();
    public double normsq();
    public void addToSelf(Vector v);
    public Vector add(Vector v);
    public void subtractFromSelf(Vector v);
    public Vector subtract(Vector v);
    public void normalizeSelf();
    public double angle(Vector v);
    public void negateSelf();
    public void multiplySelf(double a);
    public double dotProduct(Vector v);
    public double distance(Vector v);
    public Vector midpoint(Vector v);
    public Vector clone() throws CloneNotSupportedException;
}

The Vector interface specifies the operations that all Vector instances must implement. The angle method returns the angle between two vectors. Other than that, the operations should be obvious from their names. Depending on the context, an instance of a Vector may represent either a point or a direction.

5.7.1 Vector2D

The Vector2D class is the implementation of the Vector interface for two-dimensional vectors.

5.7.2 Vector3D

The Vector3D class is the implementation of the Vector interface for three-dimensional vectors. (The Vector3D class was written by Luc Maisonobe, http://www.spaceroots.org/software/mantissa/index.html, and adapted to jSNARK.)

5.8 GridPoint

public class GridPoint {
    public int i;
    public int j;
    public int k;

    public GridPoint() {
    }

    public GridPoint(int i, int j, int k) {
        this.i = i;
        this.j = j;
        this.k = k;
    }

    public GridPoint(int i, int j) {
        this.i = i;
        this.j = j;
    }
}

The GridPoint class represents the grid points of the various ImageGeometries (see Section 6.4). In the 2D case, the k coordinate is always 0.

5.9 Ray

public abstract class Ray implements Cloneable, Serializable {
    public Ray() { }
    public Ray(Vector point, Vector direction) { ... }
    public Vector getPoint() { ... }
    public Vector getDirection() { ... }
    public abstract double distance(Vector p);
    public abstract double distanceSq(Vector p);
    public Vector getPoint(double distance) { ... }
    public void setRaySum(double raySum) { ... }
    public double getRaySum() { ... }
    public double[] intersect(ClosedHalfSpace[] region) { ... }
    public double[] restrict(double[] endPoints, ClosedHalfSpace[] region) { ... }
    public String toString() { ... }
    public Ray clone() throws CloneNotSupportedException { ... }
    public boolean equals(Object obj) { ... }
    public int hashCode() { ... }

    protected Vector point;
    protected Vector direction;
    protected double raySum;
}

The Ray class is an abstract class that represents a line. It consists of two Vectors, a point and a direction. The point is any point on the Ray and the direction is a unit vector in the direction of the ray. The ray consists of all the points of the form p⃗ + ad⃗, where p⃗ is the point, d⃗ is the direction of the ray, and a is a real number. p⃗ and d⃗ must have the same dimensionality. This parameterized form gives us an easy way to calculate the distance between two points along the ray. Let p⃗₁ and p⃗₂ be two points along a ray, where p⃗₁ = p⃗ + a₁d⃗ and p⃗₂ = p⃗ + a₂d⃗; then the distance between them, ∥p⃗₁ − p⃗₂∥, is |a₁ − a₂|. The parameterized form also allows a set of points along the ray to be easily ordered. This property is essential for computing the ray sums of overlay phantoms (see Section 4.2.1).
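The parameterization above can be verified with a few lines of code; the class here is illustrative (a 2D case with the direction assumed to be a unit vector), not jSNARK's Ray.

```java
// Illustration of the ray parameterization: points on the ray are
// p + a*d with d a unit vector, so the Euclidean distance between the
// points at parameters a1 and a2 is simply |a1 - a2|.
public class RayParamSketch {
    // Coordinates of the point at parameter a on the ray through
    // (px, py) with unit direction (dx, dy).
    public static double[] pointAt(double px, double py,
                                   double dx, double dy, double a) {
        return new double[] { px + a * dx, py + a * dy };
    }

    public static double distanceAlong(double a1, double a2) {
        return Math.abs(a1 - a2);
    }
}
```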

5.9.1 Ray2D

public class Ray2D extends Ray implements Cloneable, Serializable {
    public Ray2D() { ... }
    public Ray2D(Vector2D point, Vector2D direction) { ... }
    public Vector2D intersection(Ray2D line) { ... }
    public double distanceSq(Vector2D p) { ... }
    public double distance(Vector2D p) { ... }
    public double distanceSq(Vector p) { ... }
    public double distance(Vector p) { ... }
    public Ray2D clone() throws CloneNotSupportedException { ... }
}

The Ray2D class is the implementation of the Ray class for two-dimensional Rays. It extends the Ray class with a method to find the intersection of two Rays. If the lines are coincident, an arbitrary point on the line is returned. If the lines are parallel, null is returned.

5.9.2 Ray3D

public class Ray3D extends Ray implements Cloneable, Serializable {
    public Ray3D() { ... }
    public Ray3D(Vector3D point, Vector3D direction) { ... }
    public double distanceSq(Vector3D p) { ... }
    public double distance(Vector3D p) { ... }
    public double distanceSq(Vector p) { ... }
    public double distance(Vector p) { ... }
    public Ray3D clone() throws CloneNotSupportedException { ... }
}

The Ray3D class is the implementation of the Ray class for three-dimensional Rays.

5.10 HalfSpace and ClosedHalfSpace

public abstract class HalfSpace implements Serializable {
    public HalfSpace(double distance, Vector direction) { ... }
    public HalfSpace(Vector point, Vector direction) { ... }
    public abstract boolean isInHalfSpace(Vector point);
    public Vector intersect(Ray ray) { ... }
    public double intersectDistance(Ray ray) { ... }
    public void setDistance(double distance) { ... }
}

The HalfSpace class represents a half-space in two or three dimensions. It consists of the Vector that is normal to the half-space and the distance of the half-space from the origin. The vector will be normalized to be a unit vector. Let n⃗ be the normal to the half-space and d the distance. The ClosedHalfSpace class is a derived class where the isInHalfSpace method is implemented such that it contains the points on the plane. Then the closed half-space is the set

{p⃗ | p⃗ · n⃗ ≥ d}.

It is easy to implement an OpenHalfSpace class if such a class is needed.

The HalfSpace class includes a method to find the intersection of a Ray with the boundary of the half-space. If the Ray is contained in the boundary, an arbitrary point on the ray is returned. If it is parallel to the boundary and not contained in the boundary, null is returned.

5.11 Bounds

public class Bounds {
    public Bounds(Vector lower, Vector upper) { ... }
    public Vector getLower() { ... }
    public Vector getUpper() { ... }
    public void setLower(Vector lower) { ... }
    public void setUpper(Vector upper) { ... }
}

The Bounds class consists of two Vector instances that are used to bound some spatially limited geometric figure. Let l⃗ be the lower bound and u⃗ be the upper bound. Then for every point x⃗ contained in the figure, it will be the case that l⃗ ≤ x⃗ ≤ u⃗, where the meaning of a⃗ ≤ b⃗ is that every component of a⃗ is less than or equal to the corresponding component of b⃗. In 2D, a Bounds instance describes a rectangle; in 3D, a box.
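The component-wise ordering just defined reduces containment to a per-coordinate comparison; a sketch with an illustrative class name:

```java
// Sketch of the component-wise containment test: x lies inside the
// bounds when lower[i] <= x[i] <= upper[i] for every component i.
public class BoundsSketch {
    public static boolean contains(double[] lower, double[] upper, double[] x) {
        for (int i = 0; i < x.length; i++) {
            if (x[i] < lower[i] || x[i] > upper[i]) {
                return false;
            }
        }
        return true;
    }
}
```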


5.12 Rotation

public class Rotation implements Serializable {
    public Rotation() { ... }
    public Rotation(double q0, double q1, double q2, double q3) { ... }
    public Rotation(Vector3D axis, double angle) { ... }
    public Rotation(Vector3D u1, Vector3D u2, Vector3D v1, Vector3D v2) { ... }
    public Rotation(Vector3D u, Vector3D v) { ... }
    public Rotation(RotationOrder order, double alpha1, double alpha2, double alpha3) { ... }
    public Rotation(Rotation r) { ... }
    public Rotation revert() { ... }
    public double getQ0() { ... }
    public double getQ1() { ... }
    public double getQ2() { ... }
    public double getQ3() { ... }
    public Vector3D getAxis() { ... }
    public double getAngle() { ... }
    public Vector3D applyTo(Vector3D u) { ... }
    public void applyToSelf(Vector3D u) { ... }
    public Vector3D applyInverseTo(Vector3D u) { ... }
    public void applyInverseToSelf(Vector3D u) { ... }
    public Rotation applyTo(Rotation r) { ... }
    public Rotation applyInverseTo(Rotation r) { ... }
    public boolean equals(Object obj) { ... }
    public int hashCode() { ... }
}

The Rotation class is a quaternion that represents a rotation in three dimensions. (The Rotation class was written by Luc Maisonobe, http://www.spaceroots.org/software/mantissa/index.html, and adapted to jSNARK.) It has a number of constructors, the most useful of which takes an instance of a RotationOrder and the three rotation angles. It has operations to apply either the rotation or its inverse to a Vector3D or another Rotation.

5.12.1 RotationOrder

public final class RotationOrder {
    public String toString() { ... }
    static public final RotationOrder XYZ = new RotationOrder("XYZ");
    static public final RotationOrder XZY = new RotationOrder("XZY");
    static public final RotationOrder YXZ = new RotationOrder("YXZ");
    static public final RotationOrder YZX = new RotationOrder("YZX");
    static public final RotationOrder ZXY = new RotationOrder("ZXY");
    static public final RotationOrder ZYX = new RotationOrder("ZYX");
    static public final RotationOrder XYX = new RotationOrder("XYX");
    static public final RotationOrder XZX = new RotationOrder("XZX");
    static public final RotationOrder YXY = new RotationOrder("YXY");
    static public final RotationOrder YZY = new RotationOrder("YZY");
    static public final RotationOrder ZXZ = new RotationOrder("ZXZ");
    static public final RotationOrder ZYZ = new RotationOrder("ZYZ");
}

The RotationOrder class consists of 12 static instances of itself that represent the 12 body-centered Euler angle possibilities. (The RotationOrder class was written by Luc Maisonobe, http://www.spaceroots.org/software/mantissa/index.html, and adapted to jSNARK.)

5.13 GenericSet

public class GenericSet {
    public GenericSet(String subDir) { ... }
    public void setMaxOccupied(int maxOccupied) { ... }
    void add(SetInstance instance) { ... }
    void checkSize(SetInstance instance) { ... }
    File getPageFile() throws IOException { ... }
}

The GenericSet class is the base class for the ProjectionSet and ImageSet classes. It is responsible for detecting when the sizes of these classes exceed certain limits and paging components of them to disk before that happens. Only the data array is paged. This data must be contained in a SetInstance (Section 5.14) instance. It uses a simple newest-replaces-oldest algorithm.

5.13.1 ProjectionSet

public class ProjectionSet extends GenericSet {
    public ProjectionSet() { ... }
    public ProjectionSet(double cycleLength, int numberOfTimeBins) { ... }
    public void setProjection(int projectionIndex, Projection projection)
        throws JSnarkException { ... }
    public void setProjections(Projection[] projections) throws JSnarkException { ... }
    public Projection getProjection(int timeSlice, int projectionIndex)
        throws JSnarkException { ... }
    public Projection getProjection(int projectionIndex) throws JSnarkException { ... }
    public ProjectionSet getProjectionSet(int timeSlice) throws JSnarkException { ... }
    public int getNumberOfTimeIntervals() { ... }
    public void setAverageDensity(double[] averageDensity) { ... }
    public int getNumberOfProjections(int binIndex) { ... }
    public int getNumberOfProjections() { ... }
    public List<SliceNoise> getSliceNoiseList() { ... }
    public List<ProjectionNoise> getProjectionNoiseList() { ... }
    public List<RayNoise> getRayNoiseList() { ... }
    public QuantumNoise getQuantumNoise() { ... }
}

The ProjectionSet class contains the projections for all the directions and all the time instances.



5.13.2 ImageSet

public class ImageSet extends GenericSet implements Cloneable {
    public ImageSet(BasisElement basisElement, ImageGeometry imageGeometry) { ... }
    public void initImages(double[] averageDensity) throws JSnarkException { ... }
    public void initImages() throws JSnarkException { ... }
    public void writeImages(File imageFile) throws JSnarkException { ... }
    public Image getImage(int timeSlice) throws JSnarkException { ... }
    public Image getImage() throws JSnarkException { ... }
    public int getNumberOfTimeBins() { ... }
    public ImageSet clone() throws CloneNotSupportedException { ... }
    public ImageGeometry getImageGeometry() { ... }
    public ImageSet convertImages(ImageGeometry newGeometry) throws JSnarkException { ... }
}

The ImageSet class contains a set of time varying images. The quantum for paging is the Image (see Section 6.7).

5.14 SetInstance

public class SetInstance implements Cloneable {
    public SetInstance() { }
    public void setParent(GenericSet parent) { ... }
    public void setSize(int size) { ... }
    public double[] getData() { ... }
    public boolean hasData() { ... }
}

The SetInstance class contains a data array that may potentially be paged to disk. Access to the data array is through the getData method, which will fetch the data from disk if it is not currently in memory. The array returned from getData should not be cached; caching it defeats the paging and can result in holding on to a stale copy of the data.

5.15 ReadProjectionFile

public class ReadProjectionFile extends DefaultPhase implements ContentHandler {
    public static ReadProjectionFile getReadProjectionFile(File file, boolean echo)
        throws JSnarkException { ... }
    public ProjectionSet getProjections() throws JSnarkException { ... }
    public String getProjectionSetName() { ... }
}

The projection file is not read during the analysis phase (see Section 3.3) unless it is necessary. The analysis methods are passed an instance of the File class. The projection data can be obtained from this object with the Java statement:


Figure 5.1: The Decorator Pattern

ProjectionSet projections = ReadProjectionFile.getReadProjectionFile(file, false).getProjections();

where file is the File instance for the projection data.

5.16 The decorator pattern

jSNARK makes extensive use of the decorator pattern (see [14], pp. 175-184). The Unified Modeling Language (UML) diagram for the decorator pattern is shown in Figure 5.1. It has a base interface (Component), one or more classes that implement that interface (ConcreteComponent), a decorator class (Decorator), and perhaps one or more classes that extend the decorator (ConcreteDecoratorA, ConcreteDecoratorB). The decorator pattern has two uses. The first use is to solve the M×N problem, where there are M main classes and N variants of them, which would ordinarily require M×N different classes.

This is illustrated in jSNARK with the Shape interface (see Section 6.3). The analysis phase classes use a plain Shape class instance. The phantom creation part of the data generation phase needs a shape with an effective density. The projection data creation part of the data generation phase needs a shape with a density spectrum. The UML for these classes is shown in Figure 5.2.

Figure 5.2: The Shape classes

The other use of the decorator pattern is to add functionality to an existing class without subclassing. In jSNARK this mostly occurs because, at the point where the extra functionality is needed, the class instance has already been created. Moving all the possible functionality into the base class is undesirable because the base class would then become bloated with methods that are only needed in isolated parts of the program. A problem can occur when using this form of the decorator pattern if the base class happens to contain data, as shown in Figure 5.3. In this case, Decorator and all its subclasses have two instances of the data someAttribute: the first through inheritance and the second through the base reference. When coding it is easy to use the wrong instance of the data. The solution to this problem is shown in Figure 5.4.

This is illustrated in jSNARK with the Projection interface (see Section 6.9) shown in Figure 5.5.
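The shape of the pattern can be summarized in a few lines of code. The names below follow the generic UML roles from Figure 5.1 and are illustrative; they do not correspond to actual jSNARK classes.

```java
// Minimal decorator pattern: the decorator implements the same
// interface as the component it wraps and forwards calls to it;
// subclasses add behavior without subclassing the concrete component.
interface Component {
    String describe();
}

class ConcreteComponent implements Component {
    public String describe() { return "base"; }
}

class Decorator implements Component {
    protected final Component base; // the wrapped component

    Decorator(Component base) { this.base = base; }

    public String describe() { return base.describe(); }
}

class ConcreteDecoratorA extends Decorator {
    ConcreteDecoratorA(Component base) { super(base); }

    // Adds behavior on top of whatever the wrapped component does.
    public String describe() { return base.describe() + "+A"; }
}
```

Note that the decorator holds the wrapped instance by reference (base) rather than by inheritance of state, which is exactly the discipline Figure 5.4 advocates.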


Figure 5.3: Decorator class problem

Figure 5.4: Decorator pattern problem solution


Figure 5.5: The Projection classes


Chapter 6

Writing user plug-ins

This chapter may be skipped if the reader is not interested in implementing new jSNARK plug-ins.

A plug-in is any class that implements one of the interfaces that is a subtype of the PlugIn interface. In addition, it must also implement the TwoD or ThreeD (see Section 6.2) interface, or both.

6.1 The PlugIn interface

public interface PlugIn {
    public String getPlugInType();
    public Parameters getParameters();
    public void create(Properties arguments, boolean echo)
        throws JSnarkException;
    public void putPlugIn(PlugIn plugIn);

Every plug-in must have a default constructor and implement the PlugIn interface. This interface has four methods: getPlugInType, getParameters, create, and putPlugIn.

getPlugInType returns a string that identifies the plug-in type so that the GUI will display its name in the correct place. If the GUI had knowledge of all the interfaces, this method would not be necessary; the instanceof operator could have been used instead. The problem is that new interfaces can be created for the embedded plug-ins and the GUI would not be aware of them. The getPlugInType method allows new interface types to be discovered on the fly.

getParameters returns an instance of the Parameters class. This is used by the GUI to present the plug-in parameters to the user.

The create method is used to initialize an instance of the plug-in. The arguments parameter is an instance of the java.util.Properties class, a string-to-string map, that contains the names and values of the plug-in's parameters. The echo parameter is a boolean that indicates whether or not the plug-in should output the values of its parameters.

The putPlugIn method is called when the plug-in has nested plug-ins. Its argument is an instance of a nested plug-in.

If the plug-in accepts multiple nested plug-ins, it is necessary to determine which is being passed in, either by calling its getPlugInType method or by checking the class type using the instanceof operator.
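The dispatch just described can be sketched as follows. The interface and class names here are illustrative stand-ins, not actual jSNARK types.

```java
// Sketch of putPlugIn dispatching on the nested plug-in's type string,
// as described above. A real jSNARK plug-in could equally use the
// instanceof operator against a known interface.
interface PlugInSketch {
    String getPlugInType();
}

class TerminationSketch implements PlugInSketch {
    public String getPlugInType() { return "termination"; }
}

class AlgorithmSketch {
    PlugInSketch termination;

    // Decide which nested plug-in was passed by inspecting its type.
    public void putPlugIn(PlugInSketch plugIn) {
        if ("termination".equals(plugIn.getPlugInType())) {
            termination = plugIn;
        }
    }
}
```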

class Parameters {
    public Parameters(
            String name, NestedPlugIn[] requiredPlugIns,
            NestedPlugIn[] optionalPlugIns,
            Parameter[] required, Parameter[] optional) {
        this.name = name;
        this.requiredPlugIns = requiredPlugIns;
        this.optionalPlugIns = optionalPlugIns;
        this.requiredParameters = required;
        this.optionalParameters = optional;
    }

    public Parameters(
            String name, Parameter[] required,
            Parameter[] optional) {
        this.name = name;
        this.requiredParameters = required;
        this.optionalParameters = optional;
    }

    public Parameters(String name, Parameter[] required) {
        this.name = name;
        this.requiredParameters = required;
    }

    public Parameters(String name) {
        this.name = name;
    }

    public String name;
    public NestedPlugIn[] requiredPlugIns;
    public NestedPlugIn[] optionalPlugIns;
    public Parameter[] requiredParameters;
    public Parameter[] optionalParameters;
}

The Parameters class is contained in the PlugIn interface. It has five attributes: name, requiredPlugIns, optionalPlugIns, requiredParameters, and optionalParameters.

The name attribute is the name of the plug-in that will be displayed by jSNARK_GUI.

A plug-in can reference other plug-ins and can have additional parameters. These may be either required or optional. By default, optional plug-ins or parameters will not be displayed by jSNARK_GUI. The required and optional plug-ins are of type NestedPlugIn. The required and optional parameters are of type Parameter.

There are four constructors for this class that just copy their arguments to the class attributes.

class NestedPlugIn {
    public NestedPlugIn(String type, String documentation) {
        this.type = type;
        this.documentation = documentation;
    }

    public NestedPlugIn(
            String type, String documentation,
            String defaultPlugIn) {
        this.type = type;
        this.documentation = documentation;
        this.defaultPlugIn = defaultPlugIn;
    }

    public String type;
    public String defaultPlugIn;
    public String documentation;
}

The NestedPlugIn class is contained in the PlugIn interface. It has three attributes: type, defaultPlugIn, and documentation.

The type attribute is a string valued argument that names the type of the nested plug-in. It is displayed to the left of the drop-down list containing the candidates for the nested plug-in. Those plug-ins whose getPlugInType method matches this name will be candidates for this plug-in.

The defaultPlugIn attribute is present for those cases when the nested plug-in must exist but the user is not required to specify one.

The documentation attribute is a string that will be displayed as a tool tip when the cursor is placed over the type name. There are two constructors for this class that just copy their arguments to the class attributes.

class Parameter {
    public Parameter(
            String name, String type, String documentation,
            String[] values) {
        this.name = name;
        this.type = type;
        this.documentation = documentation;
        this.values = values;
    }

    public Parameter(String name, String type, String documentation) {
        this.name = name;
        this.type = type;
        this.documentation = documentation;
    }

    public String name;
    public String type;
    public String documentation;
    public String[] values;
}

}

The Parameter class is contained in the PlugIn interface. It has four attributes: name, type, documentation, and values.

The name attribute is a string valued attribute that is the name of the parameter. It will be displayed to the left of the value by jSNARK_GUI. When the parameter is specified, an entry in the PropertyList argument to the create method of the PlugIn will exist with the name as the key and the user's value as the value.

The type attribute is a string valued attribute that is the type of the parameter. It should be one of 'int', 'float', or 'string'. It will be displayed to the right of the value by jSNARK_GUI.

The documentation attribute is a string valued attribute that describes the parameter. It will be displayed as a tool tip when the cursor is placed over the name.

The values attribute is an array of strings that is used when a string valued parameter is restricted to a small number of values. The possible choices are given in the array.
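As an illustration, here is how a hypothetical plug-in might describe itself with these classes. The plug-in name and parameter names (exampleSmoother, iterations, weight) are invented for the example; the Parameter and Parameters classes are reduced copies of the ones listed above, cut down to the pieces the sketch needs.

```java
// Minimal copies of the Parameter and Parameters classes shown above,
// reduced to the constructors this example needs.
class Parameter {
    public Parameter(String name, String type, String documentation) {
        this.name = name;
        this.type = type;
        this.documentation = documentation;
    }
    public String name;
    public String type;
    public String documentation;
}

class Parameters {
    public Parameters(String name, Parameter[] required, Parameter[] optional) {
        this.name = name;
        this.requiredParameters = required;
        this.optionalParameters = optional;
    }
    public String name;
    public Parameter[] requiredParameters;
    public Parameter[] optionalParameters;
}

public class ParametersDemo {
    // Describes a hypothetical "exampleSmoother" plug-in with one required
    // integer parameter and one optional float parameter.
    public static Parameters describe() {
        return new Parameters(
                "exampleSmoother",
                new Parameter[] {
                    new Parameter("iterations", "int",
                            "number of smoothing passes")
                },
                new Parameter[] {
                    new Parameter("weight", "float",
                            "smoothing weight between 0 and 1")
                });
    }
}
```

A real plug-in would return such an object from the method that jSNARK_GUI queries for the plug-in description, so the GUI can lay out the name, parameter fields, and tool tips.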


6.2 The TwoD and ThreeD interfaces

The TwoD and ThreeD interfaces do not implement any methods. They are used by jSNARK_GUI to decide whether the plug-in should be offered for two-dimensional reconstructions, three-dimensional reconstructions, or both.

6.3 The Shape interface

public interface Shape extends PlugIn {
    public boolean isIn(Vector p);
    public double valueAt(Vector p);
    public double lineIntegral(Ray r);
    public Bounds bounds();
}

The Shape interface has four methods: isIn, valueAt, lineIntegral, and bounds.

The isIn method returns true if the given point is in the shape and false otherwise.

The valueAt method returns the value of the shape at the given point. For the constant density shapes, valueAt is either 0.0 or 1.0. For the variable density shapes, valueAt returns a positive real number. It is expected that for every point p such that isIn(p) is false, valueAt(p) is 0.0.

The lineIntegral method returns the integral of the shape along the given line.

The bounds method returns an instance of the Bounds class (see Section 5.11). The Bounds class consists of two points, lower and upper. It is expected that the value of every coordinate of the lower point is less than or equal to the corresponding coordinate of the upper point.
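The Shape contract can be illustrated with a self-contained sketch of a constant density disk. The class below uses plain coordinate pairs instead of jSNARK's Vector and Ray types, and its line integral only handles horizontal lines, so it is an illustration of the contract rather than a usable jSNARK plug-in.

```java
// A sketch of a constant density 2D disk obeying the Shape contract:
// isIn tests membership, valueAt is 0.0 or 1.0, and the line integral
// of a line through the disk is simply the chord length.
public class DiskShape {
    final double cx, cy, radius;

    public DiskShape(double cx, double cy, double radius) {
        this.cx = cx;
        this.cy = cy;
        this.radius = radius;
    }

    // True exactly when the point lies inside (or on) the disk.
    public boolean isIn(double x, double y) {
        double dx = x - cx, dy = y - cy;
        return dx * dx + dy * dy <= radius * radius;
    }

    // Constant density: 1.0 inside, 0.0 outside, so valueAt(p) is 0.0
    // wherever isIn(p) is false, as the contract expects.
    public double valueAt(double x, double y) {
        return isIn(x, y) ? 1.0 : 0.0;
    }

    // Line integral along the horizontal line y = y0: the chord length,
    // or 0.0 when the line misses the disk.
    public double horizontalLineIntegral(double y0) {
        double d = Math.abs(y0 - cy);
        if (d >= radius) return 0.0;
        return 2.0 * Math.sqrt(radius * radius - d * d);
    }
}
```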

6.3.1 The ConstantDensityShape class

public abstract class ConstantDensityShape implements Shape {
    public ConstantDensityShape() {
    }

    public final Vector[] intersections(Ray r) { ... }
    public abstract double[] entryExit(Ray r);
    public abstract boolean isInBaseShape(Vector p);
    public Bounds bounds() { ... }
    public void setHalfSpaces(ClosedHalfSpace[] halfSpaces) { ... }
    public final double lineIntegral(Ray r) { ... }
    public final boolean isIn(Vector p) { ... }
    public final double valueAt(Vector p) { ... }

    protected ClosedHalfSpace[] halfSpaces = new ClosedHalfSpace[0];
}

The ConstantDensityShape class is an abstract class for those convex shapes with a constant density in their interior. It modifies the base shape, allowing it to be intersected by any number of closed half-spaces. It adds three new methods, entryExit, isInBaseShape, and intersections, and implements the lineIntegral, isIn, and valueAt methods of the Shape interface; subclasses need to define the abstract entryExit and isInBaseShape methods.

The entryExit method returns null if the given ray does not intersect the base shape, or an array of two doubles containing the parameters for the two points along the ray that intersect the boundary of the shape. In the latter case, it is expected that the values are in increasing order (see Section 5.9).

Subclasses of this class implement isInBaseShape exactly as they would have implemented isIn. This method is used by the implementation of isIn.


The intersections method returns null if the given ray does not intersect the base shape, or an array containing exactly two Vectors if it does. The two Vectors are the intersection points of the line with the shape as restricted by the array of closed half-spaces.

The isIn method returns true if the given point is in the shape and all the half-spaces and false otherwise.
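The entryExit contract (null on a miss, otherwise two ray parameters in increasing order) can be sketched for a circle centered at the origin. The parametric ray and quadratic solution below are standard geometry, not jSNARK's actual implementation, and the direction vector is assumed to have unit length.

```java
public class EntryExitDemo {
    // entryExit for a circle of the given radius centered at the origin.
    // The ray is p(t) = (ox, oy) + t * (dx, dy) with (dx, dy) a unit
    // vector. Returns null when the ray misses the circle, otherwise the
    // two parameter values in increasing order, as the contract requires.
    public static double[] entryExit(double ox, double oy,
                                     double dx, double dy, double radius) {
        // Substituting p(t) into x^2 + y^2 = r^2 gives t^2 + 2bt + c = 0.
        double b = ox * dx + oy * dy;
        double c = ox * ox + oy * oy - radius * radius;
        double disc = b * b - c;
        if (disc < 0) return null;               // ray misses the circle
        double s = Math.sqrt(disc);
        return new double[] { -b - s, -b + s };  // entry first, then exit
    }
}
```

For a constant density shape, the difference of the two returned parameters is exactly the chord length, which is what the final lineIntegral implementation needs.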

6.4 The ImageGeometry interface

public interface ImageGeometry extends Cloneable, PlugIn {
    public int getNumberOfXGridLines();
    public int getNumberOfYGridLines();
    public double getGridSpacing();
    public double getBasisElementSeparation();
    public void setBasisElement(BasisElement basisElement);
    public BasisElement getBasisElement();
    public double getBasisElementSize();
    public double getImageWidth();
    public double getImageSize();
    public int getImageSizeInGridPoints() throws JSnarkException;
    public void writeXMLHeader(PrintWriter printWriter, Element shapeList,
            Element reconstructionGeometry, Image image)
            throws JSnarkException;
    public void setNumberOfTimeBins(int numberOfTimeBins);
    public int getNumberOfTimeBins();
    public Image createImage() throws JSnarkException;
    public void setImageName(String imageName);
    public GridPoint convertToGridPoint(Vector point);
    public Vector convertToVector(GridPoint gridPoint);
    public boolean isBasisElement(GridPoint gridPoint);
    public ImageGeometry clone() throws CloneNotSupportedException;
    public boolean equals(ImageGeometry imageGeometry);
}

The ImageGeometry interface describes the grid and basis elements that define a digital image. It has twenty methods: getNumberOfXGridLines, getNumberOfYGridLines, getGridSpacing, getBasisElementSeparation, setBasisElement, getBasisElement, getBasisElementSize, getImageWidth, getImageSize, getImageSizeInGridPoints, writeXMLHeader, setNumberOfTimeBins, getNumberOfTimeBins, createImage, setImageName, convertToGridPoint, convertToVector, isBasisElement, clone, and equals.

The getNumberOfXGridLines method returns the number of basis elements in the X and Z directions.

The getNumberOfYGridLines method returns the number of grid lines in the Y direction. Except for the hexagonal geometry (see Section 7.7.1.2), this is the same value as getNumberOfXGridLines.

The getGridSpacing method returns the spacing between grid lines. (For the hexagonal grid, Section 7.7.1.2, this is the spacing between grid lines in the Y direction.)

The getBasisElementSeparation method returns the smallest distance between the centers of basis elements.

The setBasisElement method sets the basis element for the geometry.

The getBasisElement method returns the basis element.

The getBasisElementSize method returns the size (area or volume) of the basis element.

The getImageWidth method returns the actual width of the image. This value may be larger than that specified by the jSNARK input file.

The getImageSize method returns the area of the image in 2D and the volume of the image in 3D.


The getImageSizeInGridPoints method returns the size of the basis element coefficient array of an image using this geometry.

The writeXMLHeader method writes the DIG format XML header.

The setNumberOfTimeBins method sets the number of time intervals and getNumberOfTimeBins retrieves that value.

The createImage method is a factory method (see [14], pp. 107-116) that returns an instance of an Image corresponding to this geometry (see Section 6.7).

The setImageName method sets the name of this image.

The convertToGridPoint method returns the GridPoint corresponding to the basis element center that is closest to the given point.

The convertToVector method returns the Vector that corresponds to the GridPoint.

The isBasisElement method returns true if the given GridPoint corresponds with the center of a basis element and false otherwise.

The clone method performs a deep copy of the image geometry.

The equals method returns true if and only if its argument is an identical image geometry.

The AbstractImageGeometry2D and AbstractImageGeometry3D classes provide default implementations of many of these methods. Additional ImageGeometry subclasses should be subclasses of one of them.

6.5 The BasisElement interface

public interface BasisElement extends PlugIn {
    public void setGridSpacing(double gridSpacing);
    public void setCenter(Vector center);
    public boolean equals(BasisElement basis);
    public double getSupport();
    public double valueAt(Vector point);
    public boolean isIn(Vector point);
    public double lineIntegral(Ray ray);
    public void setBasisType(Element mainHeader);
}

The BasisElement interface has eight methods: setGridSpacing, setCenter, equals, getSupport, valueAt, isIn, lineIntegral, and setBasisType.

The setGridSpacing method sets the size of the basis element. Its single parameter, gridSpacing, is the distance between grid lines of the image geometry (for the hexagonal grid, Section 7.7.1.2, the spacing between grid lines in the Y direction).

The setCenter method is used to set the center point of the basis element. That way a single instance of the basis element can be used for all the positions the basis element may occupy in the image geometry grid.

The equals method returns true if and only if its argument is an equivalent basis element.

The getSupport method returns the support of the basis element. The support is the maximum distance from the center of the basis element to another point in the basis element.

The valueAt, isIn, and lineIntegral methods are the same as the Shape methods of the same name (see Section 6.3).

The setBasisType method sets the appropriate attributes in the XML header for this basis element.

6.6 The ImageIterator interface

public interface ImageIterator extends Iterator<Vector> {
    public int getIndex();
    public double getBasisElementCoefficient();
    public void setBasisElementCoefficient(double value);
    public void updateBasisElementCoefficient(double amt);
    public void scaleBasisElementCoefficient(double factor);
}

The ImageIterator interface is used to step through the basis elements of an image one by one. It inherits the hasNext and next methods of the Iterator interface. It adds five new methods: getIndex, getBasisElementCoefficient, setBasisElementCoefficient, updateBasisElementCoefficient, and scaleBasisElementCoefficient.

The hasNext method returns true if there are any unprocessed basis elements.

The next method returns the position of the center of the next basis element as an instance of either Vector2D or Vector3D.

The getIndex method returns the index associated with the last basis element returned by next.

The getBasisElementCoefficient method returns the basis element coefficient of the last basis element returned by next.

The setBasisElementCoefficient method sets the basis element coefficient to its argument.

The updateBasisElementCoefficient method modifies the basis element coefficient by adding its argument to it.

The scaleBasisElementCoefficient method modifies the basis element coefficient by multiplying it by its argument.

The AbstractImageIterator class provides default implementations of all the ImageIterator methods except next and hasNext. It should be used as the basis for all classes implementing this interface.
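The iterator contract can be mimicked over a plain coefficient array. The class below is a toy stand-in: the real next method returns the basis element center as a Vector, and real implementations should extend AbstractImageIterator instead.

```java
import java.util.Iterator;

// A toy coefficient iterator with the same update operations as
// ImageIterator, but over a plain double[] instead of a jSNARK image.
// next() returns the element's index here, standing in for the Vector
// position that the real interface returns.
public class CoefficientIterator implements Iterator<Integer> {
    private final double[] coefficients;
    private int index = -1;

    public CoefficientIterator(double[] coefficients) {
        this.coefficients = coefficients;
    }

    public boolean hasNext() {
        return index + 1 < coefficients.length;
    }

    public Integer next() {
        return ++index;
    }

    public int getIndex() {
        return index;
    }

    public double getBasisElementCoefficient() {
        return coefficients[index];
    }

    public void setBasisElementCoefficient(double value) {
        coefficients[index] = value;
    }

    public void updateBasisElementCoefficient(double amt) {
        coefficients[index] += amt;
    }

    public void scaleBasisElementCoefficient(double factor) {
        coefficients[index] *= factor;
    }
}
```

A typical iterative algorithm advances with next() and then reads or updates the coefficient of the element just visited, which is the pattern this sketch reproduces.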

6.7 The Image interface

public interface Image extends Iterable, Cloneable, Serializable {
    public void setAll(double value);
    public double getBasisElementCoefficient(int index);
    public double getBasisElementCoefficient(GridPoint gridPoint) throws JSnarkException;
    public double getDensity(int index);
    public double getDensity(GridPoint gridPoint);
    public double getDensity(Vector point);
    public void setBasisElementCoefficient(int index, double value);
    public Vector getPoint(int index);
    public void writeImage(DataOutputStream imageStream) throws JSnarkException;
    public void readImage(DataInputStream input) throws JSnarkException;
    public int getGridWidth();
    public int getGridHeight();
    public int getImageSizeInGridPoints();
    public void setImageSize(int size);
    public void setParent(ImageSet imageSet);
    public double getMax();
    public double getMin();
    public void setBasisType(Element mainHeader);
    public void setGridType(Element mainHeader);
    public void setSamplingRates(Element mainHeader);
    public double[] getBasisElementCoefficients();
    public double getGridSpacing();
    public BasisElement getBasisElement();
    public ImageGeometry getImageGeometry();
    public Image clone() throws CloneNotSupportedException;
    public GridPoint convertToGridPoint(Vector point);
    public Vector convertToVector(GridPoint gridPoint);
    public boolean isBasisElement(GridPoint gridPoint);
    public int convertToIndex(GridPoint gridPoint);
    public ImageIterator iterator();
}

The Image interface is the abstraction of a digital image at a single time instant. (The names of some of its methods contain the strings "Density" and "BasisElementCoefficient"; these mean quite different things. Refer to Section 2.4 for their definitions.) It references an ImageGeometry and a BasisElement and contains the coefficients of the basis elements. It has thirty methods: setAll, getBasisElementCoefficient (two variants), getDensity (three variants), setBasisElementCoefficient, getPoint, writeImage, readImage, getGridWidth, getGridHeight, getImageSizeInGridPoints, setImageSize, setParent, getMax, getMin, setBasisType, setGridType, setSamplingRates, getBasisElementCoefficients, getGridSpacing, getBasisElement, getImageGeometry, clone, convertToGridPoint, convertToVector, isBasisElement, convertToIndex, and iterator.

The setAll method sets the value of all the basis element coefficients to the value of its argument.

The getBasisElementCoefficient method with an integer argument returns the coefficient of the basis element with that index.

The getBasisElementCoefficient method with a GridPoint argument returns the coefficient of the basis element at that grid point and throws a JSnarkException if there is no basis element at that grid point.

The getDensity method with an integer argument returns the density of the image at the coordinates of the basis element with that index.

The getDensity method with a GridPoint argument returns the density of the image at the given grid point.

The getDensity method with a Vector argument (see Section 5.7) returns the value of the image at that point.

The setBasisElementCoefficient method sets the coefficient of the basis element with the given index to the given value.

The getPoint method returns a Vector containing the coordinates of the grid point with the given index.

The writeImage method writes the image on a data stream.

The readImage method can read such an image provided that the image being read has the same ImageGeometry and BasisElement as the Image instance.

The getGridWidth method returns the number of grid lines in the X or Z directions.

The getGridHeight method returns the number of grid lines in the Y direction. Except for the hexagonal grid (see Section 7.7.1.2), this is the same value as getGridWidth.

The getImageSizeInGridPoints method returns the size of the basis element coefficient array.

The setImageSize method sets the size of the basis element coefficient array.

The setParent method is used to attach an image to an ImageSet.

The getMax method returns the maximum value of all the basis element coefficients.

The getMin method returns the minimum value.

The setBasisType method sets the attributes of the basis element in the DIG format XML header.

The setGridType method sets the grid attributes.

The setSamplingRates method sets the sampling rate attributes.

The getBasisElementCoefficients method returns the array of basis element coefficients.

The getGridSpacing method returns the distance between grid lines (for the hexagonal grid, Section 7.7.1.2, the spacing between grid lines in the Y direction).

The getBasisElement method returns the basis element.

The getImageGeometry method returns the image geometry.

The clone method makes a deep copy of the image.

The convertToGridPoint method converts its Vector argument to the indices of the grid point closest to the vector.

The convertToVector method converts the GridPoint argument to the equivalent Vector.

The isBasisElement method returns true if the GridPoint argument is also the center of a basis element.



The convertToIndex method returns the index corresponding to the given grid point or -1 if there is no corresponding basis element.

The iterator method returns an instance of an ImageIterator that is used to step through each of the basis elements making up the image.
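The convertToGridPoint and convertToVector pair can be sketched for a simple square grid. The origin placement and round-to-nearest convention below are illustrative assumptions, not necessarily jSNARK's internal conventions.

```java
// Point <-> grid point conversion on a square grid: grid point (i, j)
// sits at (x0 + i*h, y0 + j*h), where h is the grid spacing.
public class SquareGrid {
    final double x0, y0, h;

    public SquareGrid(double x0, double y0, double h) {
        this.x0 = x0;
        this.y0 = y0;
        this.h = h;
    }

    // Indices of the grid point closest to an arbitrary point
    // (round each coordinate to the nearest index).
    public int[] convertToGridPoint(double x, double y) {
        return new int[] {
            (int) Math.round((x - x0) / h),
            (int) Math.round((y - y0) / h)
        };
    }

    // Coordinates of the given grid point.
    public double[] convertToVector(int i, int j) {
        return new double[] { x0 + i * h, y0 + j * h };
    }
}
```

Note that convertToVector(convertToGridPoint(p)) returns the nearest grid point to p, not p itself; the two methods are inverses only on grid points.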

The AbstractImage class provides default implementations of many of these methods.

The images for all time bins are collected together in an instance of the ImageSet class (see Section 5.13.2). This class will automatically page images to the disk for large reconstructions. See Appendix E for more information on controlling this.

6.8 The Directions interface

public interface Directions extends PlugIn {
    public int size();
    public void setStartTime(double startTime);
    public double getEndTime();
}

The Directions interface specifies the directions of the projections. It has three methods: size, setStartTime, and getEndTime.

The size method returns the number of directions contained in the class. The interface does not provide access to the directions themselves because there is no easy way to combine an angle in 2D and an orientation in 3D in a single method.

The setStartTime method sets the time for the first projection of the directions set.

The getEndTime method gets the time of the last projection plus the pulse time.

User-written Directions classes should be derived from either Directions2D or Directions3D.

public abstract class Directions2D implements Directions, TwoD {
    public String getPlugInType() {
        return "directions";
    }

    public void putPlugIn(PlugIn plugIn) {
    }

    public int size() {
        return numberOfPulses;
    }

    public double getEndTime() {
        return startTime + numberOfPulses * pulseTime;
    }

    public void setStartTime(double startTime) {
        this.startTime += startTime;
        for (int i = 0; i < angles.length; ++i) {
            angles[i].setTime(startTime + i * pulseTime);
        }
    }

    public ProjectionAngle[] getAngles() {
        return angles;
    }

    protected int numberOfPulses = 1;
    protected double startTime;
    protected double pulseTime;
    protected ProjectionAngle[] angles;
}

Directions2D is an abstract class that adds one new method, getAngles, and provides default implementations for getPlugInType, putPlugIn, size, getEndTime, and setStartTime.
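The timing arithmetic in Directions2D (pulse i fires at startTime + i * pulseTime, and the end time is one pulse time after the last pulse starts) can be exercised with a small sketch. The evenly spaced angles over [0, pi) below are a typical choice for parallel projections, not something the class itself prescribes.

```java
// Equally spaced 2D projection angles plus the Directions2D timing
// arithmetic: pulse i fires at startTime + i * pulseTime, and the end
// time is startTime + numberOfPulses * pulseTime.
public class UniformAngles {
    // Angles evenly spaced over [0, pi): a common choice for parallel
    // projection directions (angle and angle + pi give the same rays).
    public static double[] angles(int numberOfPulses) {
        double[] a = new double[numberOfPulses];
        for (int i = 0; i < numberOfPulses; ++i) {
            a[i] = Math.PI * i / numberOfPulses;
        }
        return a;
    }

    // Mirrors the getEndTime implementation shown above.
    public static double endTime(double startTime, int numberOfPulses,
                                 double pulseTime) {
        return startTime + numberOfPulses * pulseTime;
    }

    // Mirrors the per-pulse time assignment made in setStartTime.
    public static double pulseTimeOf(double startTime, int i,
                                     double pulseTime) {
        return startTime + i * pulseTime;
    }
}
```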

public abstract class Directions3D implements Directions, ThreeD {
    public abstract List<ProjectionOrientation>
            getProjectionOrientations(double sourceToOrigin);

    public String getPlugInType() {
        return "directions";
    }

    public void putPlugIn(PlugIn plugIn) {
    }

    public int size() {
        return numberOfPulses;
    }

    public double getEndTime() {
        return startTime + numberOfPulses * pulseTime;
    }

    public void setStartTime(double startTime) {
        this.startTime += startTime;
    }

    protected int numberOfPulses = 1;
    protected double startTime;
    protected double pulseTime;
}

Directions3D is an abstract class that adds one new method, getProjectionOrientations, and provides default implementations for getPlugInType, putPlugIn, and size. A ProjectionOrientation consists of a rotation given by a quaternion (see Section 5.12) and a point (see Section 5.7) that lies on the central ray of the projection.

public interface MacroDirections {
    Element[] getReplacementXML();
}

The MacroDirections interface is needed for those Directions classes that generate different directions each time their create method is called. The random directions classes (see Sections 7.3.1.3 and 7.3.2.3) are instances of this behavior. It contains a single method, getReplacementXML. This method is expected to return an array of XML Element-s that are the equivalent of the initial set of directions.


6.9 The Projection interface

public interface Projection extends Cloneable {
    public int computeNumberOfRays(double imageSize, double detectorSpacing)
            throws JSnarkException;
    public int getNumberOfRays();
    public int getTotalNumberOfRays();
    public double getDetectorSpacing();
    public Ray getRay(int rayIndex) throws JSnarkException;
    public double[] getProjectionData();
    public void setParameters(Element parameters);
    public double getTotalDensity();
    public void setProjectionSet(ProjectionSet projectionSet) throws JSnarkException;
    public double getTime();
    public void setTime(double time);
    public Projection clone() throws CloneNotSupportedException;
}

The Projection interface is the abstraction of a single projection. Every implementation of this interface should be derived from either the AbstractProjection2D or AbstractProjection3D class. The Projection interface has twelve methods: computeNumberOfRays, getNumberOfRays, getTotalNumberOfRays, getDetectorSpacing, getRay, getProjectionData, setParameters, getTotalDensity, setProjectionSet, getTime, setTime, and clone.

The computeNumberOfRays method computes the number of rays along one dimension needed to completely cover an image of the given size with the given detector spacing.

The getTotalNumberOfRays method returns the total number of rays in the projection.

The getRay method returns the ray with the given index. The value of the index must be between 0 and getTotalNumberOfRays()-1, inclusive.

The getProjectionData method returns an array containing the projection data. This is the actual projection data array; the values in it should not be modified.

The setParameters method outputs the projection attributes in the DIG XML header.

The getTotalDensity method returns the sum of the values in the projection data array.

The setProjectionSet method sets the ProjectionSet instance that contains this projection.

The getTime method returns the time for this projection.

The setTime method sets the time for this projection.

The clone method returns a deep copy of the projection.

public interface Projection2D {
    public double getSin();
    public double getCos();
    public double getAngle();
    public double getDetectorSpacing();
    public double project(Vector2D point);
}

The Projection2D interface adds methods that are applicable only for 2D projections. They are getSin, getCos, getAngle, getDetectorSpacing, and project.

The getSin and getCos methods return the sine and cosine of the projection angle, while getAngle returns the projection angle in radians.

The getDetectorSpacing method returns the distance between detectors.

The project method returns the position of the projection of the given point onto the detector.
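The role of the project method can be sketched for a parallel-beam geometry. The detector coordinate used below, -x sin(theta) + y cos(theta), is one common convention; jSNARK's internal convention may differ, so treat this as an illustration of the idea rather than the actual formula.

```java
// Projection of a 2D point onto a parallel-beam detector: the signed
// distance of the point from the central ray, which is the detector
// coordinate where the ray through the point lands.
public class ParallelProject {
    public static double project(double angle, double x, double y) {
        return -x * Math.sin(angle) + y * Math.cos(angle);
    }
}
```

Dividing the result by the detector spacing would give the (fractional) detector index, which is how such a value is typically used during forward projection and backprojection.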


public interface FanBeamProjection extends Projection2D {
    public double computePhi(double detector);
    public void computeXleg(double imageSize, double detectorSpacing) throws JSnarkException;
    public double getSourceToOrigin();
    public double getSourceToDetector();
}

The FanBeamProjection interface adds methods that are applicable only for 2D fan beam projections. They are computePhi, computeXleg, getSourceToOrigin, and getSourceToDetector. Fan beam projection classes should be derived from the AbstractFanBeamProjection class and not this interface.

The computePhi method has a detector position as its argument and returns the angle between the line connecting that detector position with the source and the line connecting the source to the origin.

The computeXleg method takes the image size and detector spacing as arguments. It is expected to set the variables pMin and xLeg defined in the AbstractFanBeamProjection class. pMin is half the diagonal distance of the reconstruction region measured in detector spacing units. xLeg is the distance between the source and a corner of the reconstruction region when that ray is perpendicular to the diagonal.

The getSourceToOrigin and getSourceToDetector methods return the distances between the source and the origin and between the source and the detector, respectively.
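For a flat (linear) detector, computePhi reduces to simple trigonometry. The formula below assumes the detector coordinate is measured along a flat detector from the point where the central ray meets it; this is an illustrative assumption, not jSNARK's actual implementation (an arc detector at the same distance would instead use detector / sourceToDetector directly).

```java
// computePhi sketch for a flat detector: the angle between the ray from
// the source to the detector position and the central ray. The detector
// coordinate is measured along the detector from the central ray.
public class FanBeamDemo {
    public static double computePhi(double detector, double sourceToDetector) {
        return Math.atan(detector / sourceToDetector);
    }
}
```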

The projections for all directions and all time instances are collected together in an instance of the ProjectionSet class (see Section 5.13.1). This class will automatically page projections to the disk for large reconstruction problems. See Appendix E for more information on this.

public interface CreateProjection extends Projection {
    public void createProjection(List<ShapeWithDensitySpectrum> shapeList,
            Spectrum spectrum, double[] aperture, QuantumNoise quantumNoise)
            throws JSnarkException;
    public void applyRayNoise(RayNoise rayNoise);
    public void setDensityModel(DensityModel densityModel);
    public void writeProjection(DataOutputStream projectionFile) throws JSnarkException;
    public void setSubsampling(int subsampling);
    public Ray getRay(int rayIndex, int subIndex) throws JSnarkException;
}

The CreateProjection interface uses the decorator pattern (see Section 5.16). Every implementation of the Projection interface should have a corresponding implementation of the CreateProjection interface, derived from either the AbstractCreateProjection2D or the AbstractCreateProjection3D class, so that it can be used in ray sum calculation. The CreateProjection interface has six methods: createProjection, applyRayNoise, setDensityModel, writeProjection, setSubsampling, and getRay.

The createProjection method computes the values for the projection data using the given shape list, spectrum, aperture, and quantum noise.

The applyRayNoise method modifies each projection value with the ray noise.

The setDensityModel method sets the density model to either the summation or overlay type (see Section 4.2.1).

The writeProjection method writes the projection values on the projection file. The XML header is written elsewhere.

The setSubsampling method sets the subsampling for the data creation. The subsampling is the number of values in the aperture list.

The getRay method returns the ray corresponding to the ray index and subsampling index.

The AbstractCreateProjection class provides implementations of all the methods except getRay. The only other method that may have to be overridden is setSubsampling.


The AbstractCreateProjection class contains a static method, createProjectionFactory. This is a factory method (see [14], pp. 107-116): it takes an instance of a Projection class and returns a corresponding instance of a CreateProjection class. Each new Projection and CreateProjection class pair needs to be added to this factory.

6.10 The Arrangement interface

public interface Arrangement extends PlugIn {
    public Projection getExampleProjection() throws JSnarkException;
    public Projection[] getProjections(List<Directions> directions) throws JSnarkException;
}

An Arrangement is the connector between projections and directions. Each subclass of the Projection interface has a corresponding subclass of the Arrangement interface. The Arrangement interface has two methods, getExampleProjection and getProjections.

The getExampleProjection method is a factory method that returns an instance of a Projection object at an arbitrary direction.

The getProjections method is a factory method that returns an array of Projection objects that correspond to projections in the directions given by the argument.

6.11 The SliceNoise interface

SliceNoise has not been implemented yet. This is only a placeholder.

6.12 The ProjectionNoise interface

public interface ProjectionNoise extends PlugIn {
    public void applyNoise(Projection projection) throws JSnarkException;
}

The ProjectionNoise interface is used to modify the entire projection data array, usually by some sort of convolution operator. It has a single method, applyNoise, that modifies the projection data of its argument.

6.13 The QuantumNoise interface

The approximation to the ray sum produced by jSNARK is a function of p,

    p = -ln(A/C),    (6.1)

where C is the calibration measurement (proportional to the number of photons counted by the detector before the insertion of the object to be reconstructed), A is the actual measurement (proportional to the number of photons counted by the detector with the object of interest in place), and ln denotes the natural logarithm.

It is important that the user has a clear understanding of the intended interpretation of the density value dj,i assigned to the elemental object j at energy level i. jSNARK allows two different interpretations: dj,i is an absolute attenuation coefficient, or dj,i is an attenuation coefficient relative to some standard (background) material, such as water. In the latter case, it is assumed that calibration is done with the background material between the source and the detector, and during the actual measurement the background material is (partially) replaced by the object to be reconstructed. It is also assumed that, for each energy level i, the attenuation by the background material is constant, Bi, for all rays. A special case of this is obtained by assuming that the background material is vacuum and all Bi's are zero. This special case is the same as using absolute attenuation coefficients. The background attenuation was specified when the spectrum was defined (see Section 4.2.1.1).

The calibration measurement C is estimated in a way which depends on the arrangement of the x-ray source and detector(s). In certain scanners, the value of C is constant for all rays of one projection, but varies from projection to projection. In other scanners, the value of C is constant over all projections for a particular ray (say, the fifth ray), but varies from ray to ray. Yet in other cases, C may be different for each ray in each projection. The QuantumNoise subclasses account for these possibilities.
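The noiseless relationship behind equation (6.1) can be written out directly: a ray sum p attenuates the calibration count C to A = C * exp(-p), and p is recovered as -ln(A/C). QuantumNoise subclasses replace the ideal A with a noisy (e.g., Poisson distributed) count whose mean is this value; the sketch below shows only the deterministic part.

```java
// The deterministic relationship of equation (6.1): attenuation of the
// calibration count by the ray sum, and recovery of the ray sum from
// the measured count.
public class QuantumNoiseDemo {
    // Mean measurement A for a ray sum p: A = C * exp(-p).
    public static double measurement(double calibration, double raySum) {
        return calibration * Math.exp(-raySum);
    }

    // Recovered ray sum: p = -ln(A / C).
    public static double raySum(double actual, double calibration) {
        return -Math.log(actual / calibration);
    }
}
```

Because A is an integer photon count in a real simulation, the recovered p is only an approximation of the true ray sum; the larger C is, the smaller the relative noise in p.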

public interface QuantumNoise extends PlugIn {
    public void setNraysSpectrum(int numberOfRays, Spectrum spectrum);
    public void nextProjection();
    public double getCalibration(int numRay);
    public double applyNoise(double raySum, int numRay) throws JSnarkException;
    double raysum(double raysum);
    public double getMean();
    public double getCalibrationMeasurement();
}

The QuantumNoise interface is used to simulate the various kinds of quantum noise. It has seven methods: setNraysSpectrum, nextProjection, getCalibration, applyNoise, raysum, getMean, and getCalibrationMeasurement.

The setNraysSpectrum method sets the number of rays and the Spectrum (see Section 4.2.1.1) for the quantum noise generator.

The nextProjection method indicates when a new projection is being processed.

The getCalibration method returns the calibration measurement, C, for the ray with the given index.

The applyNoise method returns A, the value of the projection after quantum noise has been applied.

The raysum method returns p.

The getMean method returns the mean photon count.

The getCalibrationMeasurement method returns the mean photon count of the calibration measurement. See the built-in plug-ins in Section 7.5 for the various possibilities.

6.14 The RayNoise interface

public interface RayNoise extends PlugIn {

public double applyNoise(double raySum);

}

The RayNoise interface is used to introduce noise into the projection data that does not depend on anything other than the current ray value. It has a single method, applyNoise, that returns the modified value of its argument.
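As an illustration, here is a minimal sketch of a RayNoise implementation that adds zero-mean Gaussian noise to each ray sum independently. The GaussianRayNoise class name and its constructor parameters are hypothetical, and the interface is restated locally (without the PlugIn superinterface or exceptions) so the sketch is self-contained; it is not one of the built-in plug-ins.

```java
import java.util.Random;

// Stand-in for jSNARK's RayNoise contract, restated here only so the
// sketch compiles on its own.
interface RayNoise {
    double applyNoise(double raySum);
}

// Hypothetical additive Gaussian noise: each ray sum is perturbed
// independently, so the result depends only on the method's argument.
class GaussianRayNoise implements RayNoise {
    private final Random rng;
    private final double sigma;

    GaussianRayNoise(long seed, double sigma) {
        this.rng = new Random(seed);
        this.sigma = sigma;
    }

    @Override
    public double applyNoise(double raySum) {
        return raySum + sigma * rng.nextGaussian();
    }
}
```

With sigma set to 0 the ray sum is returned unchanged, which is a convenient way to check the plumbing.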

6.15 The Algorithm interface

public interface Algorithm extends PlugIn {

public boolean isIterative();


public void setProjections(ProjectionSet projectionSet) throws JSnarkException;

public void setRayOrder(RayOrder rayOrder);

public boolean run(int iteration) throws JSnarkException;

}

The Algorithm interface is the abstraction of a reconstruction algorithm. It should not be used directly. Algorithms should instead implement either the TimeVaryingAlgorithm or SingleInstantAlgorithm interface described below. The Algorithm interface has four methods, isIterative, setProjections, setRayOrder, and run.

The isIterative method returns true if and only if the algorithm is an iterative algorithm. Iterative algorithms are assumed to be of the series expansion type and can operate on any grid and basis element combination. Non-iterative algorithms are assumed to compute the digital image on a regular grid and are restricted to spel basis elements.

The setProjections method supplies the projection data for the algorithm.

The setRayOrder method provides an implementation of the RayOrder interface (see Section 6.18) for an iterative algorithm.

The run method is invoked to execute one iteration of an iterative algorithm or to perform the reconstruction using a non-iterative algorithm. An iterative algorithm can return the value true to force termination of the algorithm.

public interface TimeVaryingAlgorithm extends Algorithm {

public void setImage(ImageSet image) throws JSnarkException;

public ImageSet getImage();

}

The TimeVaryingAlgorithm interface is the abstraction for algorithms that directly deal with reconstructions of time varying images. It has two methods, setImage and getImage.

The setImage method passes the initial ImageSet (see Section 5.13.2) to the algorithm.

The getImage method returns the ImageSet after the run method has been called. Usually, this will be the same image set that was passed in with setImage, but it need not be.

public interface SingleInstantAlgorithm extends Algorithm {

public void setTimeInterval(int timeInterval) throws JSnarkException;

public void setImage(Image image) throws JSnarkException;

public Image getImage();

}

The SingleInstantAlgorithm interface is the abstraction for algorithms that deal with single time instant reconstructions. It has three methods, setTimeInterval, setImage, and getImage.

The setTimeInterval method is necessary for those algorithms that maintain state information about the reconstruction in progress so that the information is kept separately for each time interval.

The setImage method sets the image for the reconstruction algorithm. It will be called for each time interval within a single iteration.

The getImage method returns the modified image after one iteration. Usually, this will be the same image instance that was passed in with setImage, but it need not be.

It will almost always be the case that the methods of the Image class are inadequate to implement a particular reconstruction algorithm. It is impossible to anticipate the methods that would be needed to implement any algorithm and any such attempt would result in an Image class containing far too many methods. The solution used here is the decorator pattern (see Section 5.16), putting the methods needed for a particular reconstruction algorithm in the decorator. When the new methods are such that they are different for different grids and basis elements, multiple decorator classes are required and a factory method (see [14], pp. 107-116) to create the correct decorator instance given a particular base instance is useful.


6.16 The Termination interface

public interface Termination extends PlugIn {

public void setProjections(ProjectionSet projections) throws JSnarkException;

}

The Termination interface is the abstraction of the termination test to determine when an iterative algorithm should terminate. It should not be used directly. Instead, use either the TimeVaryingTermination or the SingleInstantTermination interface described below. The Termination interface has a single method, setProjections. The setProjections method supplies the projection data for the termination test.

public interface TimeVaryingTermination extends Termination {

public boolean isDone(ImageSet imageSet, int iteration) throws JSnarkException;

}

The TimeVaryingTermination interface is for those termination tests that directly deal with time varying reconstructions. It has a single method, isDone. This method returns true if the algorithm is to be terminated.

public interface SingleInstantTermination extends Termination {

public void setTimeInterval(int timeInterval);

public boolean isDone(Image image, int iteration) throws JSnarkException;

}

The SingleInstantTermination interface is for those termination tests that deal with single time instant reconstructions. It has two methods, setTimeInterval and isDone.

The setTimeInterval method is necessary for those termination tests that maintain state information about the reconstruction in progress so that the information is kept separately for each time interval.

The isDone method returns true if the algorithm is to be terminated. When used in a time varying reconstruction, the reconstruction will terminate when the termination test is true for all time intervals.
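To make the contract concrete, here is a minimal sketch of a termination test that stops every time interval after a fixed number of iterations. The MaxIterations name is hypothetical, and the interface and Image class are restated as bare stand-ins (without PlugIn, Termination, or exceptions) so the sketch compiles on its own.

```java
// Bare stand-ins for the jSNARK types, declared only so the sketch is
// self-contained.
class Image { }

interface SingleInstantTermination {
    void setTimeInterval(int timeInterval);
    boolean isDone(Image image, int iteration);
}

// Hypothetical test: terminate after a fixed iteration count. The time
// interval is recorded as the interface requires, although this simple
// test keeps no interval-specific state.
class MaxIterations implements SingleInstantTermination {
    private final int limit;
    private int timeInterval;

    MaxIterations(int limit) { this.limit = limit; }

    @Override
    public void setTimeInterval(int timeInterval) {
        this.timeInterval = timeInterval;
    }

    @Override
    public boolean isDone(Image image, int iteration) {
        return iteration >= limit;
    }
}
```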

6.17 The RayIterator interface

public interface RayIterator extends Iterator<Ray> {

public int getProjectionIndex();

public int getRayIndex();

}

The RayIterator interface is the abstraction for providing the rays to a series expansion reconstruction algorithm one by one. The RayIterator interface extends the Iterator interface, which has the hasNext and next methods.

The hasNext method returns true until every ray has been returned once and then it returns false.

The next method returns an object of type Ray (see Section 5.9).

The getProjectionIndex method returns the index of the projection that contains the ray returned by the most recent call of next.

The getRayIndex method returns the index of the ray within that projection.


6.18 The RayOrder interface

public interface RayOrder extends Iterable, PlugIn {

public void setProjections(ProjectionSet projectionSet) throws JSnarkException;

public RayIterator iterator();

public int getTotalNumberOfRays();

}

The RayOrder interface extends the Iterable interface and adds two new methods, setProjections and getTotalNumberOfRays.

The setProjections method supplies the projections to the ray order algorithm. It provides access to the projections (see Section 5.13.1) which in turn supply the rays with the getRay (see Section 6.9) method.

The getTotalNumberOfRays method returns the total number of rays. The value returned by the getRayIndex method of the RayIterator will always be less than this number.

6.19 The PostProcessing interface

public interface PostProcessing extends PlugIn {

}

The PostProcessing interface provides the root for the two post processing variants, TimeVaryingPostProcessing and SingleInstantPostProcessing. It should not be used directly; use the TimeVaryingPostProcessing or SingleInstantPostProcessing interfaces instead. It has no methods.

public interface TimeVaryingPostProcessing extends PostProcessing {

public void setAverageDensity(double[] averageDensity);

public void doPostProcessing(ImageSet imageSet) throws JSnarkException;

}

The TimeVaryingPostProcessing interface is for those post processing operations that operate on time varying images. It has two methods, setAverageDensity and doPostProcessing.

The setAverageDensity method is passed an array that contains the estimated average density of the reconstruction at each time interval.

The doPostProcessing method is passed the ImageSet (see Section 5.13.2) resulting from the execution of a reconstruction algorithm and modifies it in place.

public interface SingleInstantPostProcessing extends PostProcessing {

public void setAverageDensity(double averageDensity);

public void doPostProcessing(Image image) throws JSnarkException;

}

The SingleInstantPostProcessing interface is for those post processing operations that operate on a single time instant image. It has two methods, setAverageDensity and doPostProcessing.

The setAverageDensity method is passed the estimated average density of the reconstruction for the single time instant.

The doPostProcessing method is passed an image resulting from the execution of a reconstruction algorithm and modifies it in place. In the case of a time varying reconstruction, the images from each time interval are processed one at a time.
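As a concrete sketch, here is a hypothetical post-processing operation that clamps negative densities to zero, modifying the image in place as the interface requires. The Image stand-in is just a flat coefficient array; jSNARK's real Image class is richer, so this only illustrates the calling contract.

```java
// Stand-in image holding basis coefficients as a flat array, declared
// only so the sketch compiles on its own.
class Image {
    final double[] values;
    Image(double[] values) { this.values = values; }
}

interface SingleInstantPostProcessing {
    void setAverageDensity(double averageDensity);
    void doPostProcessing(Image image);
}

// Hypothetical operation: enforce non-negativity in place.
class NonNegativityConstraint implements SingleInstantPostProcessing {
    @Override
    public void setAverageDensity(double averageDensity) {
        // the average density is not needed for this operation
    }

    @Override
    public void doPostProcessing(Image image) {
        for (int i = 0; i < image.values.length; i++) {
            if (image.values[i] < 0.0) image.values[i] = 0.0;
        }
    }
}
```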


6.20 The ReconstructionProcessor interface

public interface ReconstructionProcessor extends PlugIn {

public void phantom(String name, ImageSet phantom) throws JSnarkException;

public void projectionFile(File projectionFile) throws JSnarkException;

public void setAlgorithmName(String algorithmName);

public void setReconstructionName(String reconstructionName);

public void reconstruction(int iteration, ImageSet reconstruction) throws JSnarkException;

}

The ReconstructionProcessor interface includes the operations that are common to the FigureOfMerit, Evaluator, and Output interfaces below. It should not be used directly; use the FigureOfMerit, Evaluator, or Output interface instead. It contains the methods phantom, projectionFile, setAlgorithmName, setReconstructionName, and reconstruction.

The phantom method passes the phantom to the reconstruction processor.

The projectionFile method passes a File class instance that points to the projection data file. If the reconstruction processor actually needs the ProjectionSet instance (see Section 5.13.1) for this data, use the Java statement

ProjectionSet projectionSet = new ReadProjectionFile(file, false).getProjections();

where file is the File class instance.

The setAlgorithmName method sets the name of the reconstruction algorithm for the later calls of the reconstruction method.

The setReconstructionName method sets the reconstruction name for the later calls of the reconstruction method.

The reconstruction method is invoked for each reconstruction that is identified for processing by the iterations specification.

6.21 The FigureOfMerit interface

public interface FigureOfMerit extends ReconstructionProcessor {

public boolean allowsShapeList();

public void shapeList(List<TimeVaryingShapeWithBounds> shapeList);

public double figureOfMerit(int iteration, ImageSet reconstruction) throws JSnarkException;

}

The FigureOfMerit interface is used for those reconstruction processor operations that compare a phantom and a reconstruction and produce a single number as output. The FigureOfMerit interface adds two additional methods, shapeList and figureOfMerit.

The shapeList method passes a list of shapes to the figure of merit processor so that it may restrict its attention to the parts of the image contained in those shapes.

The reconstruction method is not called for a figure of merit processor. Instead, the figureOfMerit method is invoked, returning the figure of merit value for the reconstruction. The value returned should have the property that larger values correspond to better reconstructions under the figure of merit criterion.
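As an illustration of the larger-is-better convention, here is a hypothetical figure of merit computed on flat coefficient arrays standing in for the phantom and reconstruction ImageSets: the negated root-mean-square error, so a perfect reconstruction scores 0 and worse reconstructions score increasingly negative values. The NegativeRmse name is illustrative only; it is not one of the built-in processors.

```java
// Hypothetical figure of merit: negated RMSE between phantom and
// reconstruction coefficients, so larger values mean better
// reconstructions as the FigureOfMerit interface requires.
class NegativeRmse {
    static double figureOfMerit(double[] phantom, double[] reconstruction) {
        double sum = 0.0;
        for (int i = 0; i < phantom.length; i++) {
            double d = phantom[i] - reconstruction[i];
            sum += d * d;
        }
        return -Math.sqrt(sum / phantom.length);
    }
}
```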


6.22 The Evaluator interface

public interface Evaluator extends ReconstructionProcessor {

public boolean allowsShapeList();

public void shapeList(List<TimeVaryingShapeWithBounds> shapeList);

public void setIdentifyingString(String identifyingString);

public void closeOutput();

}

The Evaluator interface is used for those reconstruction processor operations that compare a phantom and a reconstruction and produce multiple values as output. The Evaluator interface adds three methods, shapeList, setIdentifyingString, and closeOutput, to the ReconstructionProcessor interface.

The shapeList method passes a list of shapes (regions of interest) to the evaluator processor so that it may restrict its attention to the parts of the image contained in those shapes.

The setIdentifyingString method passes the identifying string attribute to the evaluator plug-in.

The closeOutput method informs the evaluator when it should close its output file if it opened one.

6.23 The Output interface

public interface Output extends ReconstructionProcessor {

}

The Output interface is used to output phantoms, projections, and reconstructions in forms that can be used as input to other programs. The Output interface adds no new methods to the ReconstructionProcessor interface. A difference between output processing and the other reconstruction processors is that the phantom method is only called if the iterations specification includes it.


Chapter 7

Built-in plug-ins

7.1 Shapes

A phantom is a set of shapes. The phantom is generated from the shapes as described in Section 4.2.2. The projection data for the phantom is generated from the shapes as described in Section 4.2.3.

A shape in jSNARK is any geometrical figure that has the following properties:

1. Given any point x⃗, the shape has a computable value at that point, s(x⃗).

2. Given any line l⃗(u) = p⃗ + ud⃗, where p⃗ is a point on the line, u is a real number, and d⃗ is a unit vector in the direction of the line, the integral ∫∞−∞ s(l⃗(u))du is computable.

3. The set {x⃗|s(x⃗) > 0} is bounded and convex.

Most shapes are specified by both a center location and a rotation. The shape is always rotated first and then translated to the center location.

7.1.1 2D shapes

7.1.1.1 Ellipse

An ellipse is specified by the coordinates of its center, the lengths of its semi-major and semi-minor axes, and angle of rotation. The ellipse has the value 1.0 in its interior and zero outside. In phantom creation, this value is multiplied by the density.

7.1.1.2 Rectangle

A rectangle is specified by the coordinates of its center, the half-lengths of its two sides and angle of rotation. The rectangle has the value 1.0 in its interior and zero outside. In phantom creation, this value is multiplied by the density.

7.1.1.3 Hexagon

A hexagon is specified by the coordinates of its center and the length of one of its sides. Two of its sides are parallel to the Y axis. Rotation of hexagons has not been implemented. The hexagon has the value 1.0 in its interior and zero outside. In phantom creation, this value is multiplied by the density.

7.1.1.4 Blob

A blob is a shape that is spatially bounded and essentially band limited [36, 37]. It should only be used as a shape in phantom creation. It is specified by its center, support, and shape. The blob order is fixed at 2. The maximum value of the blob is 1.0 at its center. In phantom creation, the value is multiplied by the density.



7.1.1.5 Mottle

The mottle shape is a complicated shape designed to give a lumpiness to an otherwise uniform region, much like the variations found in human tissue. It should only be used as a shape in phantom creation. The main component of the mottle shape is another shape that provides the boundary of the region. The GUI includes mottle and blob as choices for this embedded shape, but they must not be chosen. A run-time error will occur if they are. Inside this region, the mottle shape consists of a number of blobs centered on a uniform grid with random heights. The origin of that grid is the center of the region. The grid can be either the square grid (see Section 7.7.1.1) or the hexagonal grid (see Section 7.7.1.2). The grid spacing is provided by the separation parameter. The random values of the blobs are selected from either a uniform or Gaussian distribution with a zero mean and standard deviation given by the density parameter. The values of the u and v parameters of the embedded shape must be greater than the support of the blob so that all blobs are entirely contained in the region. A run-time error will occur if this is not the case. If the blob support is smaller than half the grid spacing, the individual blobs will be observed. If the blob support is large relative to the grid spacing, they will overlap so much that there will not be much variation. A support between 1.5 and 2.0 times the grid spacing should work quite nicely.

7.1.2 3D shapes

7.1.2.1 Ellipsoid

An ellipsoid is specified by the coordinates of its center, the lengths of its semi-axes, and a rotation given as ZYZ Euler angles. The ellipsoid has the value 1.0 in its interior and zero outside. In phantom creation, this value is multiplied by the density.

7.1.2.2 Box

A box is specified by the coordinates of its center, the half lengths of its sides, and a rotation given as ZYZ Euler angles. The box has the value 1.0 in its interior and zero outside. In phantom creation, this value is multiplied by the density.

7.1.2.3 Rhombic dodecahedron

A rhombic dodecahedron (Figure 7.1) is specified by the coordinates of its center and its corner to corner diameter. The corner to corner axes of the rhombic dodecahedron will be parallel to the X, Y, and Z axes. Rotation of rhombic dodecahedrons has not been implemented. The rhombic dodecahedron has the value 1.0 in its interior and zero outside. In phantom creation, this value is multiplied by the density.

7.1.2.4 Truncated Octahedron

A truncated octahedron (Figure 7.2) is specified by the coordinates of its center and diameter. The axes connecting the six square faces will be parallel to the X, Y, and Z axes. Rotation of a truncated octahedron has not been implemented. The truncated octahedron has the value 1.0 in its interior and zero outside. In phantom creation, this value is multiplied by the density.

7.1.2.5 Blob

A blob is a shape that is spatially bounded and essentially band limited [36, 38]. It should only be used as a shape in phantom creation. It is specified by its center, support, and shape. The maximum value of the blob is 1.0 at its center. In phantom creation, the value is multiplied by the density.

7.1.2.6 Mottle

The mottle shape is a complicated shape designed to give a lumpiness to an otherwise uniform region, much like the variations found in human tissue. It should only be used as a shape in phantom creation. The main component of the mottle shape is another shape that provides the boundary of the region. The GUI includes mottle and blob as choices for this embedded shape, but they must not be chosen. A run-time error will occur if they are. Inside this region, the mottle shape consists of a number of blobs centered on a uniform grid with random heights. The origin of that grid is the center of the region. The grid can be either the simple cubic grid (see Section 7.7.2.1), the body centered cubic grid (see Section 7.7.2.2) or the face centered cubic grid (see Section 7.7.2.3). The grid spacing is provided by the separation parameter. The random values of the blobs are selected from either a uniform or Gaussian distribution with a zero mean and standard deviation given by the density parameter. The values of the u, v, and w parameters of the embedded shape must be greater than the support of the blob so that all blobs are entirely contained in the region. A run-time error will occur if this is not the case. If the blob support is smaller than half the grid spacing, the individual blobs will be observed. If the blob support is large relative to the grid spacing, they will overlap so much that there will not be much variation. A support between 1.5 and 2.0 times the grid spacing should work quite nicely.

Figure 7.1: Rhombic Dodecahedron

Figure 7.2: Truncated Octahedron

7.2 Arrangements

An arrangement is a class that implements the Arrangement interface (see Section 6.10). It represents a projection where the projection direction is unspecified. It has factory methods (see [14], pp. 107-116) for Projection instances when supplied with the projection directions.

7.2.1 2D arrangements

7.2.1.1 Parallel

The parallel arrangement is the 2D projection where the radiation source is at infinity and the rays are parallel. The detectors lie on a straight line. This is illustrated in Figure 7.3. Let f : R2 → R be a two dimensional image. The Radon transform of f, [Rf](s, θ), is given by

gpar(s, θ) = [Rf](s, θ⃗) = ∫∞−∞ f(sθ⃗ + uθ⃗⊥) du = ∫∞−∞ f(s cos θ − u sin θ, s sin θ + u cos θ) du = ∫∞−∞ f(Rθ(s, u)) du    (7.1)

where s is the position on the detector, θ⃗ = (cos θ, sin θ) is the unit vector at angle θ with the x axis, and Rθ is the rotation operator described in Section 2.7. The detectors consist of an array of N samples of the Radon transform at a sampling distance d. The n-th sample is at s = (n − ⌈N/2⌉)d, for 0 ≤ n < N.

7.2.1.2 Arc

The arc arrangement is the 2D projection where the radiation source is an idealized point source located at some fixed distance from the origin and the detectors lie on an arc with a fixed distance between the source and detector strip. This is illustrated in Figure 7.4. The rays of the arc projection are related to the parallel projection rays by

garc(s, θ) = garc(α, θ) = [Rf ](S sinα, θ + α), (7.2)

where s is the position on the detector, S is the distance from the source to the origin (labeled RADIUS in the figure), D is the distance from the source to the detector (labeled STOD in the figure), and α = s/D.

7.2.1.3 Tangent

The tangent arrangement is the 2D projection where the radiation source is an idealized point source located at some fixed distance from the origin and the detectors lie on a straight line at some fixed distance between the source and detector strip that is tangent to a circle centered at the point source. This is also illustrated in Figure 7.4. The rays of the tangent projection are related to the parallel projection rays by

gtan(s, θ) = [Rf ](S sinσ, θ + σ), (7.3)


Figure 7.3: The parallel geometry


Figure 7.4: The divergent geometry case

where σ = tan−1(s/D), and s, D, and S are as above.

7.2.2 3D arrangements

The 3D arrangements are the parallel beam and the cone beam. The cone beam is further divided into tangent detector and arc detector, where the source position is the center of the arc. Depending on the direction plug-in, the detector has one of two relationships to the 3D coordinate system. Let u⃗ and w⃗ be two orthonormal vectors that define the orientation of the detector and o⃗ be the 3D coordinates of the origin of the detector. In the case of an arc detector, u⃗ is a tangent vector. Then let v⃗ = w⃗ × u⃗. Except for the line and helix orbits (see Sections 7.3.2.5 and 7.3.2.6), it will be the case that the line given by o⃗ + av⃗ intercepts the origin, i.e., (∃a ∈ R) s.t. o⃗ + av⃗ = 0⃗. These orientations can be described by a pure rotation. For the line and helix orbits, it will be the case that w⃗ is parallel to one of the x⃗, y⃗, or z⃗ axes. That is, w⃗ × x⃗ = 0⃗, or w⃗ × y⃗ = 0⃗, or w⃗ × z⃗ = 0⃗. Further, it will be the case that v⃗ is orthogonal to that axis and the line given by o⃗ + av⃗ intercepts it. That is, (∃a, b ∈ R) s.t. o⃗ + av⃗ = bw⃗. These orientations are described by a rotation followed by a translation.


Figure 7.5: The parallel beam geometry

7.2.2.1 ParallelBeam

The parallel beam arrangement is the 3D projection where the radiation source is at infinity and the rays are parallel. This is illustrated in Figure 7.5. The unrotated orientation of the source and detector array is with the source located on the positive Z axis and the detector array parallel to the X-Y plane and intersecting the Z axis at some negative value. The detector's U axis is aligned with the reconstruction's X axis, the W axis is aligned with the reconstruction's Y axis, and the origin of the detector is intersected by the Z axis. Using ZYZ Euler angles, a rotated orientation is described by first rotating the source and detector array around the Z axis by alpha, then a rotation around the new Y axis by beta and finally around the new Z axis by gamma. Given a unit quaternion, R, for the orientation of the source and detectors, the two dimensional projection of f is given by

gp(u, w, R) = ∫∞−∞ f(R × (u, w, z)) dz.    (7.4)

The detector is a rectangular array with N elements in the U direction and M elements in the W direction. The spacing between detector elements in the U direction is du and the spacing in the W direction is dw. Internally to jSNARK, it is a linear array indexed starting at zero at the -u, -w corner going to NM − 1 at the +u, +w corner with the u elements varying fastest. The (i, j)-th detector element, 0 ≤ i < N and 0 ≤ j < M, is at index n = i + jN and has the coordinates (u, w) = ((i − ⌈N/2⌉)du, (j − ⌈M/2⌉)dw).
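The linear indexing and element coordinates above can be sketched as follows (illustrative code, not jSNARK's internal implementation):

```java
class DetectorArray {
    // Linear index of the (i, j)-th element with the u elements varying
    // fastest: n = i + jN.
    static int index(int i, int j, int N) {
        return i + j * N;
    }

    // Coordinates (u, w) = ((i - ceil(N/2)) du, (j - ceil(M/2)) dw).
    static double[] coordinates(int i, int j, int N, int M,
                                double du, double dw) {
        return new double[] {
            (i - Math.ceil(N / 2.0)) * du,
            (j - Math.ceil(M / 2.0)) * dw
        };
    }
}
```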

7.2.2.2 Cone Beam

In general, the cone beam transform is given by [31]

[Df](y⃗, θ⃗) = ∫∞0 f(y⃗ + tθ⃗) dt,    (7.5)


Figure 7.6: The cone beam geometry

where y⃗ is the position of the radiation source (outside the support of f), and θ⃗ is a vector on the unit sphere in R3.

Figure 7.6 illustrates the unrotated configurations of both the tangent and arc cone beam arrangements.

7.2.2.3 TangentConeBeam

A tangent detector is a flat screen at a distance D from the source. The center of the detector is the point that is closest to the source. The two directions on the tangent detector will be called u and w.

We can relate the (y⃗, θ⃗) coordinates to the detector coordinates through three unit vectors, e⃗u(y⃗), e⃗v(y⃗), and e⃗w(y⃗). e⃗w(y⃗) is a unit vector parallel to the W direction of the detector. e⃗u(y⃗) is a unit vector parallel to the U direction of the tangent detector, or the vector tangent to the cylinder passing through the center of the detector and perpendicular to e⃗w(y⃗). If d⃗ is the coordinate of the center of the detector, then e⃗v(y⃗) = (d⃗ − y⃗)/|d⃗ − y⃗| is a unit vector parallel to the vector from the source to the center of the detector.

The tangent detector coordinates corresponding to source position y⃗ and direction θ⃗ are

u = D (θ⃗·e⃗u(y⃗))/(θ⃗·e⃗v(y⃗)) and w = D (θ⃗·e⃗w(y⃗))/(θ⃗·e⃗v(y⃗)).

7.2.2.4 ArcConeBeam

An arc detector is a section of a cylinder with radius D, where the source lies on the axis of the cylinder. The two directions on the arc detector are α (measured in radians) and w, where the W axis is parallel to the cylinder axis and α is the perpendicular direction. The set of points on the detector closest to the source form an arc. The center of the detector is the midpoint of that arc.


The arc detector coordinates corresponding to source position y⃗ and direction θ⃗ are

α = tan−1((θ⃗·e⃗u(y⃗))/(θ⃗·e⃗v(y⃗))) and w = D (θ⃗·e⃗w(y⃗))/√(1 − (θ⃗·e⃗w(y⃗))²).

The tangent and arc coordinates can be related by

(α′, w′) = (sin−1(u/√(D² + u²)), Dw/√(D² + u²)) and (u′, w′) = (D tan α, w/cos α).    (7.6)
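Equation 7.6 can be checked with a small round-trip computation; the class and method names here are illustrative only, not part of jSNARK.

```java
class DivergentDetector {
    // Tangent coordinates (u, w) -> arc coordinates (alpha, w') for a
    // detector at distance D from the source, per equation 7.6.
    static double[] tangentToArc(double u, double w, double D) {
        double r = Math.sqrt(D * D + u * u);
        return new double[] { Math.asin(u / r), D * w / r };
    }

    // Arc coordinates (alpha, w) -> tangent coordinates (u', w').
    static double[] arcToTangent(double alpha, double w, double D) {
        return new double[] { D * Math.tan(alpha), w / Math.cos(alpha) };
    }
}
```

Converting tangent coordinates to arc coordinates and back recovers the original values, which confirms the two formulas are inverses of each other.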

7.3 Directions

A Directions class instance is a set of one or more radiation directions. For divergent radiation, the direction is the direction of the central ray.

7.3.1 2D directions

7.3.1.1 SingleAngle

The single angle Directions class specifies a single projection angle in degrees.

7.3.1.2 EquallySpaced

The equally spaced Directions class specifies a set of equally spaced projection angles. The user specifies the first and last angles in degrees and the number of angles. Let f be the first angle, l be the last angle, and N be the number of angles. Then the i-th angle, ai, is given by

ai = f + i(l − f)/(N − 1),

where 0 ≤ i ≤ N − 1.¹
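A sketch of the angle computation (plain Java, not the plug-in itself):

```java
class EquallySpaced {
    // i-th of N angles spaced evenly from first angle f to last angle l,
    // endpoints included: a_i = f + i * (l - f) / (N - 1).
    static double angle(int i, double f, double l, int N) {
        return f + i * (l - f) / (N - 1);
    }
}
```

For example, five angles from 0 to 180 degrees fall at 0, 45, 90, 135, and 180; note that both endpoints are generated, which is why the footnote below suggests a possible future change.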

7.3.1.3 RandomAngles

The random angles Directions class specifies some number of randomly generated projection angles. It implements the MacroDirections interface (see Section 6.8) and returns the set of generated angles as an array of SingleAngle elements. They are uniformly distributed on the unit circle.

7.3.2 3D directions

7.3.2.1 EulerAngle

The Euler angle Directions class specifies a single projection by its body centered Euler angles in degrees. The angles may be in any of the 12 Euler angle orders.

7.3.2.2 Quaternion

The quaternion Directions class specifies a single projection by a quaternion. The quaternion is specified as an axis (a Vector in 3D) and an angle of rotation about that axis. The axis must not be the zero vector and the resulting quaternion will be normalized to a unit quaternion so it is a pure rotation.

¹A future version of jSNARK may change this to ai = f + i(l − f)/N so that specifying 0,180 for parallel radiation and 0,360 for divergent makes sense.


7.3.2.3 RandomDirections

The random directions Directions class specifies some number of random projection angles. They will be uniformly distributed on the unit sphere. It implements the MacroDirections interface (see Section 6.8) and returns the set of generated directions as an array of Quaternion elements.

7.3.2.4 CircleOrbit

The circle orbit Directions class specifies a number of intervals with the source moving in a circular orbit. Circular orbits are restricted to rotation about the X, Y, or Z axis. The user specifies the axis of rotation, the number of intervals that will be taken, and a starting offset. If the starting offset is 0 and the Z axis is chosen, the source starts on the positive X axis. If the X axis is chosen, the source starts on the positive Y axis. If the Y axis is chosen, the source starts on the positive Z axis. In all cases, the source will rotate about the chosen axis using the right hand rule. The starting offset will change this position by the specified amount. In the cone beam case, one complete revolution is performed. In the parallel beam case, a half revolution is performed. This is because parallel projections 180 degrees apart are mirror images of each other (except for noise). Let N be the number of intervals, o the offset converted into radians, and r the range (2π or π). Then the angles of the source in the plane of rotation are given by

θn = nr/N + o,

where 0 ≤ n ≤ N − 1.

7.3.2.5 LineOrbit

The line orbit Directions class specifies a number of projections with the source moving along a straight line. Line orbits are restricted to lines where the radiation direction is parallel to one of the three axes. Unlike the previous projections, the origin does not necessarily project onto the center of the detector array. The user specifies the number of intervals, the fixed axis, the variable axis, the start position and the end position. The source will be located in the direction of the fixed axis, perhaps at infinity. It can be one of +X, -X, +Y, -Y, +Z, or -Z. The source moves along the variable axis, X, Y, or Z, from the start position to the end position. This must not be the same as the fixed axis. Suppose the fixed axis is +X and the variable axis is Z. Let N be the number of intervals, s be the start position, and e be the end position. Then the position of the radiation source at interval n is (S, 0, s + n(e − s)/N) for 0 ≤ n ≤ N − 1, where S is the distance from the source to the Z axis.

Note: Using the line orbit with parallel radiation is not very useful. Successive source positions will duplicate previous measurements, except for noise.

7.3.2.6 HelixOrbit

The helix orbit Directions class specifies a number of projections with the source moving along a helix. It isa combined circular and line motion. Helix orbits are restricted to a helix around one of the three axes. Likethe line orbit, the origin does not necessarily project onto the center of the detector array. The user specifiesthe number of intervals, helix axis, starting position, ending position, number of turns, and optionally astarting offset. If the starting offset is not specified, it is 0. The helix axis is one of X, Y, or Z. If the startingoffset is 0 and the Z axis is chosen, the source starts on the positive X axis. If the X axis is chosen, thesource starts on the positive Y axis. If the Y axis is chosen, the source starts on the positive Z axis. Inall cases, the source will rotate about the chosen axis using the right hand rule. Suppose it is Z. Let R bethe radius if the helix, N be the number of intervals, s be the start position, e be the end position, T bethe number of turns, and o be the starting offset converted into radians. Then the position of the radiationsource at interval n is

y⃗(θn) = (R cos θn, R sin θn, s + (e − s)(n/N + o/(2πT))) for 0 ≤ n ≤ N − 1,

where θn = 2πTn/N + o. Note that the Z position is modified by the offset in such a way that the helix itself is unchanged by the offset; only the positions on the helix are changed. The pitch of the helix is (e − s)/T.
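As a sanity check on the formula above, the following sketch (illustrative names, not jSNARK code) computes helix source positions about the Z axis.

```java
// Illustrative sketch (not jSNARK code): source positions on a helix orbit
// about the Z axis, following the formula above.
public class HelixOrbitDemo {
    /** Position of the radiation source at interval n. */
    public static double[] position(double R, int N, double s, double e,
                                    double T, double o, int n) {
        double theta = 2.0 * Math.PI * T * n / N + o;   // rotation angle theta_n
        double z = s + (e - s) * (n / (double) N + o / (2.0 * Math.PI * T));
        return new double[] { R * Math.cos(theta), R * Math.sin(theta), z };
    }
}
```

For o = 0 and n = 0 the source starts at (R, 0, s) on the positive X axis, as stated above; for a nonzero offset, z still equals s + (e − s)θn/(2πT), so the positions remain on the same helix.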


Since the source position y⃗ is now a function of θn, we will write e⃗u(θn) as a synonym for e⃗u(y⃗(θn)) and similarly for e⃗v(θn).

For the tangent detector, we relate the detector measurements to the cone beam transform (equation 7.5) by

g(t)(θn, u, w) = [Df](y⃗(θn), ϑ⃗),

where

ϑ⃗ = (u e⃗u(θn) + D e⃗v(θn) + w e⃗w) / √(u² + D² + w²),

u = D (ϑ⃗·e⃗u(θn)) / (ϑ⃗·e⃗v(θn)), and

w = D (ϑ⃗·e⃗w) / (ϑ⃗·e⃗v(θn)).

For the arc detector, we relate the detector measurements to the cone beam transform by

g(a)(θn, α, w) = [Df](y⃗(θn), ϑ⃗),

where

ϑ⃗ = (D sin(α) e⃗u(θn) + D cos(α) e⃗v(θn) + w e⃗w) / √(D² + w²),

α = tan⁻¹((ϑ⃗·e⃗u(θn)) / (ϑ⃗·e⃗v(θn))), and

w = D (ϑ⃗·e⃗w) / √(1 − (ϑ⃗·e⃗w)²).

An important concept relating to the helical trajectory is the Tam-Danielsson window [7, 45], which is the projection of the upper and lower helical trajectories onto the detector relative to the radiation source.

7.4 Projection noise

The projection noise classes add noise to computed projection data where the change to each ray value depends on the values of the neighboring rays.

7.4.1 2D projection noise

7.4.1.1 Scatter2D

The 2D scatter plug-in approximates convolving the projection data with a function approximating

Λ(x) = 1 + p if x = 0,
       1 − |x|/w if 0 < |x| ≤ w,
       0 otherwise.

The user specifies the width of the scattering, w, and optionally a scattering peak, p. If the scattering peak is not specified, it defaults to 0.0. In order to explain the scatter model precisely, let d be the detector spacing and let Ai and A′i denote the value for the i'th detector (0 ≤ i < NRAYS, where NRAYS is the number of rays in a projection) before and after scatter has been introduced.

A′i = ( Σ_{j=0}^{NRAYS−1} v_{i−j} A_j ) / V_i,   (7.7)

where v_k and V_i are calculated as follows. For all integers k, let

v_k = 0 if |k| × d > w,
      1 − (|k| × d)/w if k ≠ 0 and |k| × d ≤ w,
      1 + p if k = 0,   (7.8)

and

V_i = Σ_{j=0}^{NRAYS−1} v_{i−j}.   (7.9)
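Equations 7.7-7.9 amount to a normalized triangular convolution of the ray values, which can be sketched directly (illustrative names, not jSNARK code):

```java
// Illustrative sketch (not jSNARK code) of the Scatter2D model in
// equations 7.7-7.9: a normalized triangular convolution of the ray values.
public class Scatter2DDemo {
    /** Triangular kernel value v_k of equation 7.8. */
    static double v(int k, double d, double w, double p) {
        if (k == 0) return 1.0 + p;
        double x = Math.abs(k) * d;
        return x > w ? 0.0 : 1.0 - x / w;
    }

    /** Apply equation 7.7 to the ray values A. */
    public static double[] scatter(double[] A, double d, double w, double p) {
        int n = A.length;
        double[] out = new double[n];
        for (int i = 0; i < n; i++) {
            double sum = 0.0, V = 0.0;          // numerator and normalizer V_i
            for (int j = 0; j < n; j++) {
                double vk = v(i - j, d, w, p);
                sum += vk * A[j];
                V += vk;
            }
            out[i] = sum / V;
        }
        return out;
    }
}
```

With w smaller than the detector spacing d, every off-center kernel value is zero and the data pass through unchanged (the normalization cancels the peak), which is a convenient sanity check.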


7.4.2 3D projection noise

7.4.2.1 Scatter3D

The 3D scatter plug-in approximates convolving the projection data with a scaled version of the function

Λ(x, y) = 1 + p if x = 0 and y = 0,
          (1 − |x|/w)(1 − |y|/w) if 0 < |x| ≤ w and 0 < |y| ≤ w,
          0 otherwise.

The user specifies the width of the scattering, w, and optionally the scattering peak, p. Let d be the detector spacing. The discrete version of the scatter function is given by

Λ(i, j) = 1 + p if i = 0 and j = 0,
          (1/C)(1 − |id/w|)(1 − |jd/w|) if |id/w| ≤ 1 and |jd/w| ≤ 1,
          0 otherwise,

where

C = Σ_{i=−⌈w/d⌉}^{⌈w/d⌉} Σ_{j=−⌈w/d⌉}^{⌈w/d⌉} (1 − |id/w|)(1 − |jd/w|).

7.5 Quantum noise

The quantum noise classes add noise to the projection data by simulating the quantum effects of the radiation under a number of different conditions.

7.5.1 2D and 3D quantum noise

7.5.1.1 AbstractQuantumNoise

This class provides a common implementation of many of the quantum noise methods. Let

back = Σ_{i=1}^{NERGY} e_i × exp(−B_i),

aback = quantumMean × back,

cback = quantumCalibration × aback,

cbackRoot = √cback,

where quantumMean and quantumCalibration are the values of the parameters with those names, or 0.0 if not specified, and NERGY, e_i, and B_i are the number of energies, their weights, and the background absorptions described in Section 4.2.1.1.

raysum(double raysum) returns exp(−raysum).

getCalibration(int numray) returns 0.0.

applyNoise(double raysum, int numray) returns ln(getCalibration(numray) * nextPoisson(aback) / (raysum * quantumMean)), where ln is the natural log.

nextProjection() does nothing.


7.5.1.2 NoQuantumNoise

The no quantum noise class is a subclass of AbstractQuantumNoise that leaves the projection data unchanged. It overrides the applyNoise method.

applyNoise(double raysum, int numray) returns ln(back/raysum) where ln is the natural log.

7.5.1.3 ConstantProjectionQuantumNoise

The constant projection quantum noise class is a subclass of AbstractQuantumNoise that covers the case where the quantum noise level is constant for all the rays in a single projection, but varies from projection to projection. It overrides the methods nextProjection and getCalibration.

nextProjection() sets the instance variable calibration to nextGaussian(cback, cbackRoot) / nextGaussian(cback, cbackRoot).

getCalibration(int numray) returns calibration.

7.5.1.4 ConstantRayQuantumNoise

The constant ray quantum noise class is a subclass of AbstractQuantumNoise that covers the case where the quantum noise level varies for each ray within a single projection, but is constant for each ray index over all the projections. During initialization, it creates an array, calibration, with the same number of elements as the number of rays in a projection and initializes each with nextGaussian(cback, cbackRoot) / nextGaussian(cback, cbackRoot). It overrides the getCalibration method.

getCalibration(int numray) returns calibration[numray].

7.5.1.5 VariableQuantumNoise

The variable quantum noise class is a subclass of AbstractQuantumNoise that covers the case where the quantum noise varies both from ray to ray in a single projection and from projection to projection. It overrides the method getCalibration.

getCalibration(int numray) returns nextGaussian(cback, cbackRoot) / nextGaussian(cback, cbackRoot).
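The pattern shared by these subclasses, a base class that fixes the common machinery while subclasses decide when the Gaussian-ratio calibration factor is drawn, can be sketched as follows. Everything here is illustrative (with a simplified nextGaussian) and is not the jSNARK source.

```java
import java.util.Random;

// Illustrative sketch (not jSNARK source) of how the quantum noise subclasses
// divide the work: the base class supplies the calibration-ratio draw,
// subclasses choose when a new calibration factor is drawn.
public class QuantumNoiseSketch {
    static abstract class AbstractNoise {
        final Random rng = new Random(42);
        double nextGaussian(double mean, double sd) { return mean + sd * rng.nextGaussian(); }
        double calibrationRatio(double cback, double cbackRoot) {
            return nextGaussian(cback, cbackRoot) / nextGaussian(cback, cbackRoot);
        }
        abstract double getCalibration(int numray);
        void nextProjection() { }                 // default: do nothing
    }

    /** Calibration drawn once per ray index, constant over all projections. */
    static class ConstantRay extends AbstractNoise {
        final double[] calibration;
        ConstantRay(int nrays, double cback, double cbackRoot) {
            calibration = new double[nrays];
            for (int i = 0; i < nrays; i++)
                calibration[i] = calibrationRatio(cback, cbackRoot);
        }
        @Override double getCalibration(int numray) { return calibration[numray]; }
    }
}
```

A VariableQuantumNoise-style subclass would instead call calibrationRatio inside getCalibration, and a ConstantProjectionQuantumNoise-style subclass would call it in nextProjection.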

7.5.1.6 PETQuantumNoise

The PET (positron emission tomography) quantum noise class is a subclass of AbstractQuantumNoise that covers the case where the photon statistics are the result of electron-positron annihilation. It overrides the calculation of aback:

aback = quantumMean × Σ_{i=1}^{NERGY} e_i × B_i.

It also overrides the applyNoise method.

applyNoise(double raysum, int numray) returns raysum + nextPoisson(aback) if Poisson statistics are being used, or raysum + aback otherwise (see Section 5.2).

7.6 Ray noise

Ray noise modifies the projection data with a model where the changed value depends only on the current value of the ray.


7.6.1 AdditiveNoise

The additive noise class adds to each ray value an instance of a Gaussian distributed random number with zero mean and the given standard deviation.

7.6.2 MultiplicativeNoise

The multiplicative noise class multiplies each ray value by an instance of a Gaussian distributed random number with mean 1 and the given standard deviation, or by 0.0 if the instance was negative.

7.7 Grids

Grids are subclasses of the ImageGeometry interface (see Section 6.4) that specify the set of points that are the centers of the basis elements.

7.7.1 2D grids

7.7.1.1 SquareGrid

The N ×N square grid is the set of points

{((i − m)ω, (j − m)ω) | 0 ≤ i < N and 0 ≤ j < N},   (7.10)

where ω is the grid spacing and m = ⌊N/2⌋. The Voronoi shape that tessellates this grid is the square. The default value of ω is s.
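Enumerating the grid of equation 7.10 is a one-liner in code; the class name below is illustrative, not part of jSNARK.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not jSNARK code): enumerate the N x N square grid of
// equation 7.10, centers at ((i - m)w, (j - m)w) with m = floor(N / 2).
public class SquareGridDemo {
    public static List<double[]> points(int N, double w) {
        int m = N / 2;                      // floor(N / 2) for non-negative N
        List<double[]> pts = new ArrayList<>();
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                pts.add(new double[] { (i - m) * w, (j - m) * w });
        return pts;
    }
}
```

For odd N the origin is itself a grid point.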

7.7.1.2 HexagonalGrid

When regular hexagons tile the plane, the grid connecting their centers consists of lines parallel to the X axis, lines that intersect the X axis at an angle of 60°, and lines that intersect the X axis at an angle of 120°. Let η be the spacing between these grid lines. The distance between hexagon centers is 2η/√3. The (x, y) coordinates of the centers of the hexagons is the set of points

{(iη/√3, jη) | i + j ≡ 0 mod 2}.

Given an image size, S, the M × N hexagonal grid is the set of points

{((i − m)η/√3, (j − n)η) | 0 ≤ i < M and 0 ≤ j < N and i + j ≡ p mod 2},   (7.11)

where M = 2⌈S√3/(2η)⌉ + 1, N = 2⌈S/(2η)⌉ + 1, m = (M − 1)/2, n = (N − 1)/2, and p is either 0 or 1. The grid points of a hexagonal grid are a distance of 2η/√3 from their nearest neighbors. The default value of η is s, where s is the default pixel spacing. This results in a hexagonal pixel with the equivalent resolution as the default square pixel.
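The nearest-neighbor distance of 2η/√3 can be checked numerically; the sketch below uses illustrative names and is not jSNARK code.

```java
// Illustrative sketch (not jSNARK code): centers of the infinite hexagonal
// grid, (i * eta / sqrt(3), j * eta) with i + j even, and their spacing.
public class HexGridDemo {
    public static double[] center(int i, int j, double eta) {
        if (((i + j) % 2 + 2) % 2 != 0)
            throw new IllegalArgumentException("i + j must be even");
        return new double[] { i * eta / Math.sqrt(3.0), j * eta };
    }

    /** Euclidean distance between two grid points. */
    public static double dist(double[] a, double[] b) {
        return Math.hypot(a[0] - b[0], a[1] - b[1]);
    }
}
```

The neighbors of (0, 0) with indices (±2, 0) and (±1, ±1) all lie at distance 2η/√3.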

7.7.2 3D grids

7.7.2.1 SimpleCubicGrid

The N ×N ×N simple cubic grid is the set of points

{((i − m)σ, (j − m)σ, (k − m)σ) | 0 ≤ i < N and 0 ≤ j < N and 0 ≤ k < N},   (7.12)

where σ is the grid spacing and m = ⌊N/2⌋. The Voronoi shape that tessellates this grid is the cube. The default value of σ is s.


7.7.2.2 BodyCenteredCubicGrid

The N ×N ×N body centered cubic grid is the set of points

{((i − m)β, (j − m)β, (k − m)β) | 0 ≤ i < N and 0 ≤ j < N and 0 ≤ k < N and i ≡ j ≡ k mod 2},   (7.13)

where β is the grid spacing and m = ⌊N/2⌋. The Voronoi shape that tessellates this grid is the truncated octahedron. This basis element has not been implemented. The default value of β is s/√2, where s is the default voxel spacing. This results in a truncated octahedron with the equivalent resolution as the default voxel.

7.7.2.3 FaceCenteredCubicGrid

The N ×N ×N face centered cubic grid is the set of points

{((i − m)ϕ, (j − m)ϕ, (k − m)ϕ) | 0 ≤ i < N and 0 ≤ j < N and 0 ≤ k < N and i + j + k ≡ 0 mod 2},   (7.14)

where ϕ is the grid spacing and m = ⌊N/2⌋. The Voronoi shape that tessellates this grid is the rhombic dodecahedron. The default value of ϕ is s√3/2, where s is the default voxel spacing. This results in a rhombic dodecahedron with the equivalent resolution as the default voxel.
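The membership fractions behind the storage-efficiency remarks in Section 7.10 can be counted directly from the BCC and FCC parity conditions; the class below is an illustrative sketch, not jSNARK code.

```java
// Illustrative sketch (not jSNARK code): count how many index triples of an
// N x N x N array satisfy the BCC and FCC grid conditions.
// e.g. for N = 4: 16 of 64 triples are BCC, 32 of 64 are FCC.
public class CubicGridCount {
    public static int bccCount(int N) {
        int c = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    if (i % 2 == j % 2 && j % 2 == k % 2) c++;   // i = j = k (mod 2)
        return c;
    }

    public static int fccCount(int N) {
        int c = 0;
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                for (int k = 0; k < N; k++)
                    if ((i + j + k) % 2 == 0) c++;               // i + j + k even
        return c;
    }
}
```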

7.8 Basis elements

A basis element is a shape. In jSNARK_GUI, the Voronoi shapes are referred to as pixel (2D) and voxel (3D), and generically as spel. During the execution of jSNARK, these placeholders are replaced by the actual basis elements. The analytical reconstruction algorithms are restricted to the spel basis elements. The series expansion reconstruction algorithms may use any basis element.

7.8.1 2D and 3D basis elements

7.8.1.1 BlobBasis

The blob basis element is based on a generalization of the Kaiser-Bessel window [10]. It is rotationally symmetric, spatially limited, and effectively band-limited. The SNARK09 blob is given by ([8], eq. 2.5)

b(r) = C (I₂(α√(1 − (r/a)²)) / I₂(α)) (1 − (r/a)²) if 0 ≤ r ≤ a,
       0 otherwise,

where I₂ is the modified Bessel function of the first kind of order 2, r is the distance from the center of the blob, a is the support of the blob, α controls the shape of the blob, and C is a normalizing term. The general form of an n-dimensional blob of order m is given by ([36], eq. A1)

w_n^m(r) = (I_m(α√(1 − (r/a)²)) / I_m(α)) [√(1 − (r/a)²)]^m if 0 ≤ r ≤ a,
           0 otherwise.   (7.15)

The dimensionality enters into the Fourier transform of the blob given below. The SNARK09 blob is the special case of this where n = 2 and m = 2. The Fourier transform of (7.15) is given by ([36], eq. A3)

[Fw_n^m](R) = ((2π)^{n/2} a^n α^m / I_m(α)) I_{n/2+m}(√(α² − (2πaR)²)) / (√(α² − (2πaR)²))^{n/2+m} if 2πaR ≤ α,
              ((2π)^{n/2} a^n α^m / I_m(α)) J_{n/2+m}(√((2πaR)² − α²)) / (√((2πaR)² − α²))^{n/2+m} otherwise,


where J is the Bessel function of the first kind and I is the modified Bessel function of the first kind ([1], ch. 9, 10).

If P is a set of points in Rⁿ, then define

X_P(x) = Σ_{p∈P} δ(x − p),

where δ is the Dirac delta distribution. We will define the infinite versions of the grids above as the sets:

S_ω = {(mω, nω) | m, n ∈ Z} (square grid),
H_η = {(mη/√3, nη) | m, n ∈ Z and (m + n) mod 2 = 0} (hexagonal grid),
HT_η = {(mη, nη/√3) | m, n ∈ Z and (m + n) mod 2 = 0} (hexagonal grid transpose),
C_σ = {(lσ, mσ, nσ) | l, m, n ∈ Z} (simple cubic grid),
B_β = {(lβ, mβ, nβ) | l, m, n ∈ Z and l ≡ m ≡ n mod 2} (body centered cubic grid),
F_ϕ = {(lϕ, mϕ, nϕ) | l, m, n ∈ Z and (l + m + n) mod 2 = 0} (face centered cubic grid).

The Fourier transform of X_P, where P is one of the grids above, is another grid. These are given in the following table.

[FX_{Sω}] = (1/ω²) X_{S_{1/ω}}
[FX_{Hη}] = (1/(√3η²)) X_{HT_{1/η}}
[FX_{Cσ}] = (1/σ³) X_{C_{1/σ}}
[FX_{Bβ}] = (1/(4β³)) X_{F_{1/(2β)}}
[FX_{Fϕ}] = (1/(2ϕ³)) X_{B_{1/(2ϕ)}}

(See [15], Appendix A for a derivation of the Fourier transforms of the 3D grids.) One way to optimize the values of the parameters for the blob basis function is to choose those where zeros of [Fw_n^m](R) fall on the locations of the grid points nearest the origin of the Fourier transform of the grid. (See Benkarroum et al. [2] for a justification of this choice.) The roots of [Fw_n^m] are the roots of J_{n/2+m} because I_{n/2+m} has no roots. The usual notation is for j_{n,s} to be the s-th root of J_n. Let R1 and R2 be the desired roots of [Fw_n^m]. Then we need to find values of a and α that satisfy

√((2πaR1)² − α²) = j_{n,1}   (7.16)

and

√((2πaR2)² − α²) = j_{n,s}   (7.17)

for s > 1. It is easy to show that

a = (1/(2π)) √((j_{n,s}² − j_{n,1}²) / (R2² − R1²))

and

α = √((R1² j_{n,s}² − R2² j_{n,1}²) / (R2² − R1²))

satisfy these equations. The first and fifth zeros of J3 are j_{3,1} = 6.38016 and j_{3,5} = 19.40942 ([1], p. 409). For the hex grid, R1 = 1/η and R2 = √3/η. Substituting these in the above, we get a = 2.0629η and α = 11.2828. The SNARK09 ([8], p. 40) value of a is 1.7865∆, where η = √3∆/2, due to different definitions of the grid spacing. Table 7.1 gives the values of R1 and R2 for all the grids.

The blob plug-in has a number of optional parameters. They are order, root, shape, support, interpolation, and numberSamples. The order parameter specifies the order of the Bessel function used for the blob in equation 7.15. If order is specified, then either root must be specified or both shape and support must be specified. If order is not specified, it defaults to 2. If root is specified, then the values for the shape and support are calculated. The value specified for support is always multiplied by the grid spacing to arrive at the actual support. This is so the user can change the grid parameters and not have to also change the blob parameters.
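The hexagonal-grid numbers quoted above can be reproduced directly from the two closed-form expressions for a and α; the class name below is illustrative, not part of jSNARK.

```java
// Illustrative sketch (not jSNARK code): solve equations 7.16 and 7.17 for
// the blob support a and shape alpha, given two roots of J_n and the desired
// zero locations R1 and R2.
public class BlobParamDemo {
    /** Support a = (1 / 2pi) sqrt((jns^2 - jn1^2) / (R2^2 - R1^2)). */
    public static double support(double jn1, double jns, double R1, double R2) {
        return Math.sqrt((jns * jns - jn1 * jn1) / (R2 * R2 - R1 * R1)) / (2.0 * Math.PI);
    }

    /** Shape alpha = sqrt((R1^2 jns^2 - R2^2 jn1^2) / (R2^2 - R1^2)). */
    public static double shape(double jn1, double jns, double R1, double R2) {
        return Math.sqrt((R1 * R1 * jns * jns - R2 * R2 * jn1 * jn1) / (R2 * R2 - R1 * R1));
    }

    public static void main(String[] args) {
        // Hexagonal grid with eta = 1: R1 = 1, R2 = sqrt(3), roots j3,1 and j3,5.
        // The manual quotes a = 2.0629 eta and alpha = 11.2828 for this case.
        double a = support(6.38016, 19.40942, 1.0, Math.sqrt(3.0));
        double alpha = shape(6.38016, 19.40942, 1.0, Math.sqrt(3.0));
        System.out.printf("a = %.4f, alpha = %.4f%n", a, alpha);
    }
}
```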


grid                  R1        R2
square                1/ω       √2/ω
hexagonal             1/η       √3/η
simple cubic          1/σ       √2/σ
body centered cubic   √2/(2β)   1/β
face centered cubic   √3/(2ϕ)   1/ϕ

Table 7.1: Values of R1 and R2 for the grids

The blob plug-in precomputes a table of values for the blob parameters and interpolates between these values instead of performing the expensive task of computing the blob value every time it is needed. The interpolation parameter is an embedded plug-in that specifies the interpolation method. It has two built-in implementations, nearestNeighbor and linear. The nearest neighbor method uses more memory while the linear method uses more computer time. The default is nearest neighbor interpolation. The numberSamples parameter specifies the number of sample points used by the interpolation method. If it is not specified, it defaults to 512 for linear interpolation and 131072 for nearest neighbor interpolation.

Keep the following in mind when choosing blob parameters.

1. The order of the blob is related to the continuity of the blob. An order 0 blob is discontinuous at the support. An order 1 blob is continuous, but has a discontinuous first derivative. An order 2 blob has a continuous first derivative, but a discontinuous second derivative. And so on. The order 2 blob is typically chosen in the literature. As a practical matter, the blob order is inconsequential. Figure 7.7 shows the plots of blobs of order 0, 1, 2, and 3 at normal resolution (a) and high resolution (b). The magnitude of the discontinuity of the order 0 blob is less than 10⁻⁶ and the discontinuity of the order 1 blob's first derivative is of the same magnitude. The maximum value of order allowed in jSNARK is 10.

2. Choosing which root to place at R2 is the subject of Benkarroum et al. [2]. Briefly, too small an index results in poor representation while too large an index blurs edges. The index should be the smallest index that can accurately represent a constant region. For the order 2 blob on the BCC grid, that is j4.

3. In 2D, the hexagonal grid is more efficient² than the square grid. In 3D, the BCC grid is the most efficient and the SC grid the least efficient.

4. The plug-in reports on the full width at half maximum of the blob. If this value is smaller than the basis element separation, the blob is too small to represent a constant density region. Increase either the blob root or the support, depending on how the blob is specified.

Matej and Lewitt [40] used different criteria for choosing the 3D blob parameters. Instead of finding an a and α that satisfied equations 7.16 and 7.17, they picked a value for a and then solved equation 7.16 for α. They then truncated the value to two decimal places. The corresponding values for shape and support of their order 2 blobs in jSNARK are given in table 7.2. The XMIPP software package³ also uses the parameters for the blob with support 2.00. The shape of blobw has a low value because they used the j3 root instead of the j1 root. Benkarroum et al. [2] show that the shape and support values should be specified to high precision so that the Fourier transform of the blob has its root at R1. An approximate value moves the root away from R1. The high precision values are given in table 7.3 along with the shape value for blobw using the j1 root.

² There are fewer grid points for an equivalent resolution.
³ http://xmipp.cnb.csic.es/twiki/bin/view/Xmipp/WebHome


Figure 7.7: Plots of blobs of order 0, 1, 2, and 3. (a) normal resolution; (b) high resolution.

blob     shape   support (SC)   support (BCC)   support (FCC)
blobnn   3.6     1.25s          1.77s           1.44s
blobn    6.4     1.50s          2.12s           1.73s
blob     10.4    2.00s          2.83s           2.31s
blobw    3.5     2.25s          3.18s           2.60s

Table 7.2: Matej and Lewitt Blob Parameters

blob         shape       support (SC)   support (BCC)   support (FCC)
blobnn       3.585224    1.25s          1.767767s       1.443376s
blobn        6.324179    1.50s          2.121320s       1.732051s
blob         10.444256   2.00s          2.828427s       2.309401s
blobw (j3)   3.496234    2.25s          3.181981s       2.598076s
blobw (j1)   14.068010

Table 7.3: Matej and Lewitt Blob Parameters to 6 significant digits


7.8.2 2D basis elements

7.8.2.1 SquarePixel

The square pixel is the Voronoi basis element for the square grid.

7.8.2.2 HexagonalPixel

The hexagonal pixel is the Voronoi basis element for the hexagonal grid.

7.8.3 3D basis elements

7.8.3.1 CubicVoxel

The cubic voxel is the Voronoi basis element for the simple cubic grid.

7.8.3.2 DodecahedronVoxel

The rhombic dodecahedron is the Voronoi basis element for the face centered cubic grid.

7.8.3.3 TruncatedOctahedronVoxel

The truncated octahedron is the Voronoi basis element for the body centered cubic grid.

7.9 DDA algorithms

There is a strong connection between grids and basis elements even though in jSNARK they are orthogonal concepts. The most important connection is the determination of the set of grid points where a ray intercepts a basis element located at that grid point, and, for each such grid point, determining the length of intersection of the ray with the basis element. In 2D, there are two grids and two basis elements, resulting in four combinations. In 3D, there are three grids and two basis elements, resulting in six combinations, giving a total of 10 combinations. Digital differential analyzer (DDA) methods (see [20], pp. 63-65) are known to rapidly calculate these results, but the author was unwilling to invest the time to create all 10 implementations of the DDA algorithms. Instead, two generic algorithms were created, one for 2D and one for 3D. These algorithms identify those grid points whose distance from the ray is less than the support of the basis element. For each such grid point, the length of intersection of the ray with a basis element centered at the grid point is computed. DDA algorithms for the four most useful grid and basis element combinations, namely square pixels, blobs on the hexagonal grid, cubic voxels, and blobs on the BCC grid, are described below, but only the first three are implemented.

7.9.1 Square pixel

Recall that the square grid is the set of points

{((i − m)ω, (j − m)ω) | 0 ≤ i < N and 0 ≤ j < N},

where ω is the grid spacing and m = ⌊N/2⌋. A square pixel with side length ω is centered at each grid point. Fig. 7.8 illustrates a small example where the grid points lie at the center of each square. We need to find the indices of the pixels that are intersected by a ray and the length of the ray within each pixel. Without loss of generality, we restrict the angle of the ray, θ = ∠dbc, to 0 ≤ θ ≤ π/4. Rays with other angles can be transformed into this case using rotations and reflections. Let τ = ω tan θ and λ = ω/cos θ. Note that in Fig. 7.8, τ is the distance between c and d, i and f, and j and h, and λ is the distance between b and d, d and f, and f and h.

The DDA operates using a control variable χ, such that 0 ≤ χ < ω. χ is initialized with the distance between a and b, and pixel P is initialized to the pixel containing the ray segment between b and d. The DDA operates by repeated application of the following two rules:


Figure 7.8: Ray through square grid

Case 1: χ + τ < ω.

In this case, the length of intersection of the ray with pixel P is λ. χ is replaced with χ + τ and P is replaced with the pixel to the right. The process terminates if there is no such pixel.

Case 2: χ + τ ≥ ω.

In this case, the length of intersection of the ray with pixel P is λ(ω − χ)/τ and P is replaced with the pixel above it. The process stops if there is no such pixel. For this new P, the length of intersection is λ(1 − (ω − χ)/τ). Now χ is replaced with χ + τ − ω and P is replaced with the pixel to the right. The process terminates if there is no such pixel.
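The two rules above can be sketched directly in code; the class and method names are illustrative, not part of jSNARK.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not jSNARK code) of the square-pixel DDA: walk a ray
// with 0 <= theta <= pi/4 across the grid, emitting (i, j, length) triples.
public class SquarePixelDda {
    /** Trace across nCols columns starting in pixel (0, 0) with offset chi0. */
    public static List<double[]> trace(double omega, double theta,
                                       double chi0, int nCols, int nRows) {
        double tau = omega * Math.tan(theta);     // vertical advance per column
        double lambda = omega / Math.cos(theta);  // ray length per column
        double chi = chi0;
        int i = 0, j = 0;
        List<double[]> segs = new ArrayList<>();
        while (i < nCols && j < nRows) {
            if (chi + tau < omega) {              // Case 1: stay in this row
                segs.add(new double[] { i, j, lambda });
                chi += tau;
                i++;
            } else {                              // Case 2: cross into the row above
                double f = (omega - chi) / tau;   // fraction before the crossing
                segs.add(new double[] { i, j, lambda * f });
                if (j + 1 >= nRows) break;
                segs.add(new double[] { i, j + 1, lambda * (1 - f) });
                chi += tau - omega;
                i++;
                j++;
            }
        }
        return segs;
    }
}
```

The per-column segment lengths always sum to λ, so a full traversal of k columns contributes kλ of ray length; checking that invariant is a quick way to validate an implementation.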

7.9.2 Blob on hexagonal grid

Recall that the hexagonal grid is given by the set of points

{(iη/2, jη√3/2) | −M ≤ i ≤ M and −N ≤ j ≤ N and i + j ≡ p mod 2},

where the grid dimensions are (2M + 1) × (2N + 1), and p is either 0 or 1. The ray length through a blob is a function of the distance of the blob center from the ray, so in this case we want the DDA to generate the list of grid points whose centers are less than the support of the blob away from the ray, and for each such grid point we need the distance of the grid point from the ray, which we can then use to compute the ray length. There are three cases to consider: 0 ≤ θ < π/6, π/6 ≤ θ < π/3, and π/3 ≤ θ ≤ π/2. Other angles can be transformed into one of these three cases by reflection about either the X or Y axis.


In Figs. 7.9-7.11, the points with indices (i−2,j), (i,j), (i+2,j), (i−1,j+1), and (i+1,j+1) are the centers of blobs. The basis element separation distance is η. The red line represents a typical ray for that case. Points above the line have a positive distance to the line while points below the line have negative distances. The distance between the ray and grid point (i,j) is Di,j ≤ 0. The other distances are Di−2,j, Di+2,j, Di−1,j+1, and Di+1,j+1. The reader can easily check that the differences in distances between horizontal pairs, i.e. Di+2,j and Di,j, are all equal and the differences between pairs of the type Di−1,j+1 and Di,j are also equal. We will refer to the first difference as L1 and the second as L2.

The algorithm operates as follows:

1. Find a grid point near the line on the left or bottom edge of the reconstruction region, depending on where the ray enters the region. Let (i,j) represent that grid point and let Di,j be the signed distance between the grid point and the line.

2. Use the relationship for the appropriate case to generate the distances to the neighboring grid points, (i−2,j), (i+2,j), (i−1,j+1), and (i+1,j+1), where

Di−2,j = Di,j + L1,
Di+2,j = Di,j − L1,
Di−1,j+1 = Di,j + L1 + L2, and
Di+1,j+1 = Di,j + L2.

3. Repeat this process as long as the absolute value of the distance is less than the blob support and there are more grid points to explore.

Case 1: 0 ≤ θ < π/6

A typical example of this case is shown in Fig. 7.9. For this case, γ = π/3 − θ. Then L1 = η sin θ and L2 = η sin γ.

Case 2: π/6 ≤ θ < π/3

A typical example of this case is shown in Fig. 7.10. For this case, γ = π/3 − θ. Then L1 = η sin θ and L2 = η sin γ.

Case 3: π/3 ≤ θ ≤ π/2

A typical example of this case is shown in Fig. 7.11. For this case, γ = π/3 − θ ≤ 0. Then L1 = η sin θ and L2 = η sin γ ≤ 0.
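The distance recurrences in step 2 can be checked against a direct point-to-line computation, using the convention that points above the ray have positive distance. The sketch below uses illustrative names and is not jSNARK code.

```java
// Illustrative sketch (not jSNARK code): verify that the neighbor-distance
// recurrences D(i+2,j) = D(i,j) - L1 and D(i+1,j+1) = D(i,j) + L2 agree with
// a direct signed point-to-line distance.
public class HexDistanceDemo {
    /** Signed distance of hex grid point (i, j) from a ray through the
     *  origin at angle theta; positive above the ray. */
    public static double distance(int i, int j, double eta, double theta) {
        double x = i * eta / 2.0, y = j * eta * Math.sqrt(3.0) / 2.0;
        return y * Math.cos(theta) - x * Math.sin(theta);
    }

    public static double l1(double eta, double theta) { return eta * Math.sin(theta); }

    public static double l2(double eta, double theta) {
        return eta * Math.sin(Math.PI / 3.0 - theta);   // gamma = pi/3 - theta
    }
}
```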

7.9.3 Cubic voxel

For this case, we need the list of voxels intersected by the ray and the length of intersection of the ray with each voxel. Recall that in 3D a ray is specified by a point on the ray and a direction, which is a point on the unit sphere. The coordinates of the direction are the cosines of the angles the ray makes with the X, Y, and Z axes. Let (cx, cy, cz) be such a direction. By rotations and reflections, we can transform the ray such that cx ≥ cy ≥ 0 and cx ≥ cz ≥ 0. Let λ = σ/cx, τy = σcy/cx, and τz = σcz/cx. Analogous to the 2D case, λ is the distance the ray travels while crossing a voxel in the X direction, τy is the distance it moves in the Y direction, and τz is the distance it moves in the Z direction.

The DDA operates with two control variables, χy and χz, such that 0 ≤ χy < σ and 0 ≤ χz < σ. χy and χz are initialized with the y and z distances where the ray enters the first voxel, V. The DDA operates by repeated application of the following rules:

Case 1: χy + τy < σ and χz + τz < σ.

In this case, the length of intersection of the ray with the voxel V is λ. χy is replaced with χy + τy, χz isreplaced with χz + τz, and V is replaced with the next voxel in the X direction. The process terminates ifthere is no such voxel.


Figure 7.9: Hexagonal grid case 1: 0 ≤ θ < π/6


Figure 7.10: Hexagonal grid case 2: π/6 ≤ θ < π/3


Figure 7.11: Hexagonal grid case 3: π/3 ≤ θ ≤ π/2


Case 2: χy + τy ≥ σ and χz + τz < σ.

In this case, the length of intersection of the ray with the voxel V is λ(σ − χy)/τy and V is replaced with the next voxel in the Y direction. The process terminates if there is no such voxel. For this new V, the length of intersection is λ(1 − (σ − χy)/τy). Now χy is replaced with χy + τy − σ, χz is replaced with χz + τz, and V is replaced with the next voxel in the X direction. The process terminates if there is no such voxel.

Case 3: χy + τy < σ and χz + τz ≥ σ.

In this case, the length of intersection of the ray with the voxel V is λ(σ − χz)/τz and V is replaced with the next voxel in the Z direction. The process terminates if there is no such voxel. For this new V, the length of intersection is λ(1 − (σ − χz)/τz). Now χy is replaced with χy + τy, χz is replaced with χz + τz − σ, and V is replaced with the next voxel in the X direction. The process terminates if there is no such voxel.

Case 4a: χy + τy ≥ σ and χz + τz ≥ σ and (σ − χy)/τy < (σ − χz)/τz.

In this case, the length of intersection of the ray with the voxel V is λ(σ − χy)/τy and V is replaced with the next voxel in the Y direction. The process terminates if there is no such voxel. For this new V, the length of intersection is λ((σ − χz)/τz − (σ − χy)/τy) and V is replaced with the next voxel in the Z direction. The process terminates if there is no such voxel. For this new V, the length of intersection is λ(1 − (σ − χz)/τz). Now χy is replaced with χy + τy − σ, χz is replaced with χz + τz − σ, and V is replaced with the next voxel in the X direction. The process terminates if there is no such voxel.

Case 4b: χy + τy ≥ σ and χz + τz ≥ σ and (σ − χy)/τy > (σ − χz)/τz.

In this case, the length of intersection of the ray with the voxel V is λ(σ − χz)/τz and V is replaced with the next voxel in the Z direction. The process terminates if there is no such voxel. For this new V, the length of intersection is λ((σ − χy)/τy − (σ − χz)/τz) and V is replaced with the next voxel in the Y direction. The process terminates if there is no such voxel. For this new V, the length of intersection is λ(1 − (σ − χy)/τy). Now χy is replaced with χy + τy − σ, χz is replaced with χz + τz − σ, and V is replaced with the next voxel in the X direction. The process terminates if there is no such voxel.
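The four cases above all partition the length λ spent in one X-slab at the points where the ray crosses a Y or Z face. A compact way to implement the same walk, shown here as an illustrative sketch rather than the jSNARK implementation, is to collect the crossing fractions in each slab and sort them:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch (not jSNARK code): cubic-voxel DDA that, for each X
// slab, splits the length lambda at the fractions where a Y or Z voxel face
// is crossed. Equivalent to cases 1-4b above when the fractions differ.
public class CubicVoxelDda {
    /** Segment lengths for nSlabs X-slabs; assumes cx >= cy >= 0 and cx >= cz >= 0. */
    public static List<Double> trace(double sigma, double cy, double cz,
                                     double chiY, double chiZ, int nSlabs) {
        double cx = Math.sqrt(1.0 - cy * cy - cz * cz);
        double lambda = sigma / cx, tauY = sigma * cy / cx, tauZ = sigma * cz / cx;
        List<Double> segs = new ArrayList<>();
        for (int i = 0; i < nSlabs; i++) {
            double fy = chiY + tauY >= sigma ? (sigma - chiY) / tauY : 1.0;
            double fz = chiZ + tauZ >= sigma ? (sigma - chiZ) / tauZ : 1.0;
            double lo = Math.min(fy, fz), hi = Math.max(fy, fz);
            double prev = 0.0;                     // emit the pieces of this slab
            for (double f : new double[] { lo, hi, 1.0 })
                if (f > prev) { segs.add(lambda * (f - prev)); prev = f; }
            chiY = (chiY + tauY) % sigma;          // advance the control variables
            chiZ = (chiZ + tauZ) % sigma;
        }
        return segs;
    }
}
```

As in 2D, the pieces emitted for each slab always sum to λ, which gives a simple correctness invariant.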

7.9.4 Blobs on a 3D grid

The author was unable to find an efficient DDA algorithm for blobs on a 3D grid because the distance between regularly spaced points on a grid line and another line (the ray) is a complicated quadratic.

In addition to the lack of a DDA algorithm, reconstructions using blobs on a 3D grid are slow because a ray typically intersects 10 times as many blobs as voxels. An efficient algorithm has been suggested by Matej and Lewitt [39] for parallel projection data. In that case, the projections of any two blobs are identical except for a translation of the origin. By precomputing this footprint, one can quickly determine the ray integral by finding the projection of the center of a blob on the detector and the intersection of a ray with the detector, and then performing a lookup and interpolation to find the precomputed value. This algorithm has not been implemented in jSNARK. The algorithm does not extend to the cone beam geometry, where each blob has a different footprint.

7.10 Image

An Image class consists of a grid, a basis element, and the set of coefficients of the basis elements at eachgrid point. There is a one-to-one correspondence between the Image subclasses and the Grid subclasses.


7.10.1 2D image

7.10.1.1 SquareGridImage

The square grid image is an image based on the square grid.

7.10.1.2 HexagonalGridImage

The hexagonal grid image is an image based on the hexagonal grid. One way of storing the basis element coefficients for the hexagonal grid is to store a coefficient for every i and j in 7.11 and not use those coefficients that do not meet the modulus condition. An alternative method is to recognize that the hexagonal array consists of two rectangular arrays that are overlaid. Consider the pair (i, j) as the index of one of the points in 7.11. Then the set of indices is given by

{(i, j) | 0 ≤ i < M and 0 ≤ j < N and i + j ≡ p mod 2}.

There are two cases to consider.

Case 1: p = 0. Then

set 1 = {(2k, 2l) | 0 ≤ k < ⌊M/2⌋ and 0 ≤ l < ⌊N/2⌋},

and

set 2 = {(2m + 1, 2n + 1) | 0 ≤ m < ⌊M/2⌋ and 0 ≤ n < ⌊N/2⌋}.

Case 2: p = 1. Then

set 1 = {(2k + 1, 2l) | 0 ≤ k < ⌊M/2⌋ and 0 ≤ l < ⌊N/2⌋},

and

set 2 = {(2m, 2n + 1) | 0 ≤ m < ⌊M/2⌋ and 0 ≤ n < ⌊N/2⌋}.

Earlier versions of jSNARK used the first method of storing the coefficients. This version uses the compact representation. This is an instance of the classic time-space trade-off. The first method is computationally efficient but uses storage inefficiently. The second method is computationally inefficient, but uses storage efficiently. Future versions of jSNARK may include both methods and allow the user to choose the most appropriate method.
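A minimal sketch of the compact two-array layout for the p = 0 case follows; the class and method names are illustrative, not part of jSNARK.

```java
// Illustrative sketch (not jSNARK code): compact storage of hexagonal grid
// coefficients in two rectangular arrays, one per index parity (case p = 0,
// so valid (i, j) pairs are even-even or odd-odd).
public class HexStorageDemo {
    final double[][] even;   // holds (2k, 2l)
    final double[][] odd;    // holds (2m + 1, 2n + 1)

    public HexStorageDemo(int M, int N) {
        even = new double[(M + 1) / 2][(N + 1) / 2];
        odd = new double[M / 2][N / 2];
    }

    /** i + j must be even (the p = 0 case). */
    public void set(int i, int j, double v) {
        if (i % 2 == 0) even[i / 2][j / 2] = v;
        else odd[i / 2][j / 2] = v;
    }

    public double get(int i, int j) {
        return i % 2 == 0 ? even[i / 2][j / 2] : odd[i / 2][j / 2];
    }
}
```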

7.10.2 3D images

7.10.2.1 SimpleCubicGridImage

The simple cubic grid image is an image based on the simple cubic grid.

7.10.2.2 BodyCenteredCubicGridImage

The body centered cubic (BCC) grid image is an image based on the body centered cubic grid. One way of storing the basis element coefficients for a BCC grid image is to allocate an array containing all the possible grid points and only use those that meet the i ≡ j ≡ k mod 2 criterion. This is extremely wasteful; only 1/4 of the locations meet the criterion. An alternative is to allocate two smaller arrays, one for the case where i ≡ j ≡ k ≡ 0 mod 2 and the other for the case where i ≡ j ≡ k ≡ 1 mod 2. Earlier versions of jSNARK used the first approach while the current version uses the latter. Future versions of jSNARK may include both methods and allow the user to choose the most appropriate method.


7.10.2.3 FaceCenteredCubicGridImage

The face centered cubic (FCC) grid image is an image based on the face centered cubic grid. One way of storing the basis element coefficients for an FCC grid image is to allocate an array containing all the possible grid points and only use those that meet the i + j + k ≡ 0 mod 2 criterion. This is wasteful; only 1/2 of the locations meet the criterion. An alternative is to allocate four smaller arrays for the four possible cases.

Case 1: i ≡ j ≡ k ≡ 0 mod 2.
Case 2: i ≡ 0 mod 2 and j ≡ k ≡ 1 mod 2.
Case 3: j ≡ 0 mod 2 and i ≡ k ≡ 1 mod 2.
Case 4: k ≡ 0 mod 2 and i ≡ j ≡ 1 mod 2.

Earlier versions of jSNARK used the first approach while the current version uses the latter. Future versions of jSNARK may include both methods and allow the user to choose the most appropriate method.

7.11 Algorithm

An Algorithm class usually implements a reconstruction algorithm. All of the built-in algorithms are single time instance algorithms. For time varying reconstructions they will be called once for each time bin and presented with the set of projection data for that time bin. Time will be omitted in the description of the algorithms.

7.11.1 2D and 3D algorithms

7.11.1.1 DoNothing

The do nothing algorithm does just that. It can be used as a simple filler for the execute phase or, when combined with the “continue” option, to apply additional post processing to the output of the previous algorithm.

7.11.1.2 ART

This is a general implementation of the additive Algebraic Reconstruction Techniques (ART). It incorporates the features and options described in [17, 18, 19, 21, 24, 25]. In the description we shall follow, as closely as possible, the notation of [23], which provides a survey of ART-type methods. The generality is achieved by having this plug-in have plug-ins of its own. The ART plug-in has three optional numeric parameters, normPower, lowerBound, and upperBound, and five optional plug-ins: tolerance, relaxation, constraint, constraint relaxation, and variant. All the ART plug-ins are subclasses of the ARTPlugIn interface.

public interface ARTPlugIn extends PlugIn {
    public void echo();
    public void setProjectionSet(ProjectionSet projectionSet) throws JSnarkException;
    public void setImage(SeriesExpansionImage image);
    public void setTimeInterval(int timeInterval);
    public void setIteration(int iteration) throws JSnarkException;
    public void nextRay();
}

The echo method is called when the ART plug-in should echo its arguments. This should be used instead of the echo argument to the create method so that the ART plug-in arguments appear after those of the main ART algorithm. The setProjectionSet and setImage methods inform the plug-in where those object instances are located. The plug-in should not modify the data in those objects. The setTimeInterval and setIteration methods inform the plug-in when those change. The nextRay method informs the plug-in when a new ray is being processed.


The Image class is extended with the SeriesExpansionImage abstract class following the decorator pattern. Instances of this class are created from instances of the Image class by the SeriesExpansionImageFactory class, a singleton (see [14] pp. 127-134) that implements a factory method (see [14] pp. 107-116).

The algorithm is an iterative procedure which, starting from an initial estimate of the picture to be reconstructed, updates the estimate through a sequence of steps. A single step is influenced by exactly one ray for which we have an estimate of the ray sum. Only those basis function coefficients which contribute to the associated pseudo ray sum are updated. The updating is done by the addition of a correction term (related to the contribution of the basis function to the pseudo ray sum) to the coefficient of each such basis function, so that after the correction the pseudo ray sum for the ray in question will be nearer to the estimated ray sum. An iteration of ART is completed when the update has been performed once on every ray.

More precisely, let n be the number of basis functions and m be the total number of projection measurements (rays). We use x^(k) to denote the n-dimensional column vector after the k'th step. In particular, x^(0) represents the contents of the array at the beginning of the reconstruction process, as determined by the starting image option of the execute XML element (see Section 4.3). We use p to denote the m-dimensional column vector whose i'th component, p_i, is the value of the i'th ray.

For 1 ≤ i ≤ m, we also define an n-dimensional column vector a_i, where the j'th component of a_i, a_ij, is the integral along the ray of the j'th basis element. Let A be the m × n matrix whose i'th row is a_i^T. The ART algorithms are iterative methods of approximating solutions to the system of equations

Ax ≈ p.

Some versions of the ART algorithm also make use of a set of auxiliary variables, one variable for each ray. We let u^(k) denote the m-dimensional column vector whose i'th component, u^(k)_i, is the auxiliary variable associated with the i'th ray after the k'th step. u^(0) is defined to be the vector whose components are all 0. We use e_i to denote the m-dimensional column vector whose only nonzero component is a 1 in the i'th row.

At the beginning of the (k + 1)'st step of an ART algorithm, x^(k) and u^(k) are stored in memory. A ray is selected by

Ray ray = rayOrder.next();
expansionImage.findIntersections(ray);
double norm = expansionImage.getRayNorm(normPower);

(see Section 7.12). Let i be the index of the ray returned by rayOrder.next(). The findIntersections method of the expansionImage instance computes the values of the a_ij terms. Let l be the value of normPower. Then the value of norm is given by

    ||a_i||_l = Σ_{j=1}^{n} (a_ij)^l.

Then x^(k+1) and u^(k+1) are defined by

    x̂^(k+1) = x^(k) + r^(k) c^(k) a_i,
    x^(k+1) = Ψ(x^(k), x̂^(k+1)),
    u^(k+1) = u^(k) + r^(k) d^(k) e_i.

The basic difference between the different ART algorithms is the way c^(k), d^(k), and r^(k) are chosen. Whenever ||a_i||_l ≤ ZERO, c^(k) = d^(k) = 0, resulting in x^(k+1) = x^(k) and u^(k+1) = u^(k). Otherwise, the following code is executed:

double pseudoRaySum = expansionImage.getPseudoRaySum();
double realRaySum = ray.getRaySum();
double diff = realRaySum - pseudoRaySum;
double relaxation = artRelaxation.relaxation();
double epsilon = artTolerance.tolerance(realRaySum);
double correction = artVariant.correction(diff, epsilon, norm, relaxation,
        rayOrder.getRayIndex(), rayOrder.getProjectionIndex());
expansionImage.additiveUpdate(relaxation * correction);
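Stripped of the plug-in machinery, a single step of the classical additive ART update (normPower l = 2, c^(k) = DIFF/||a_i||², d^(k) = 0, and Ψ the identity) can be sketched as follows. All names are illustrative, not part of the jSNARK API.

```java
import java.util.Arrays;

// One Kaczmarz-style ART step: x <- x + r * (p_i - <a_i, x>) / ||a_i||^2 * a_i.
public class ArtStep {
    public static double[] update(double[] x, double[] ai, double pi, double relaxation) {
        double pseudoRaySum = 0.0, norm = 0.0;
        for (int j = 0; j < x.length; j++) {
            pseudoRaySum += ai[j] * x[j]; // <a_i, x^(k)>
            norm += ai[j] * ai[j];        // ||a_i||_l with normPower l = 2
        }
        double[] next = Arrays.copyOf(x, x.length);
        if (norm <= 1e-12) return next;   // ||a_i||_l near ZERO: skip the ray
        double correction = (pi - pseudoRaySum) / norm;  // c^(k) for classical ART
        for (int j = 0; j < x.length; j++) {
            next[j] += relaxation * correction * ai[j];  // additive update
        }
        return next;
    }

    public static void main(String[] args) {
        // After one unrelaxed step, the ray equation <a_i, x> = 4 is satisfied exactly.
        double[] x = update(new double[]{0.0, 0.0}, new double[]{1.0, 1.0}, 4.0, 1.0);
        System.out.println(Arrays.toString(x));  // [2.0, 2.0]
    }
}
```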


The value of r^(k) is the value returned by the relaxation method of the ARTRelaxation plug-in.

public interface ARTRelaxation extends ARTPlugIn, TwoD, ThreeD {
    public double relaxation();
}

The only built-in implementation of this interface is the constant relaxation plug-in. It has a single optional parameter, the relaxation factor. This must be a value greater than ZERO and less than 2. If the user did not specify this plug-in or did not specify the value of the relaxation factor, a default instance that returns the constant 1.0 is used.

The value of pseudoRaySum is given by

    Σ_{j=1}^{n} a_ij x̄^(k)_j,

where x̄^(k)_j is the result of calling the constrainValue method of the ARTConstraint instance with the argument x^(k)_j.

public interface ARTConstraint extends ARTPlugIn, TwoD, ThreeD {
    public void setLower(double lower);
    public void setUpper(double upper);
    public void setTotalNumberOfRays(int count);
    public void setRelaxation(ARTConstraintRelaxation relaxation);
    public double constrainValue(double value);
    public double boundValue(double oldValue, double newValue);
}

There are four implementations of this interface: no constraint, ART2, bounded constraint, and binary ART. The ART2 implementation of the constrainValue method returns

    x̄^(k)_j =
        lowerBound, if lowerBound is specified and x^(k)_j < lowerBound,
        upperBound, if upperBound is specified and x^(k)_j > upperBound,
        x^(k)_j, otherwise.

All the other implementations return x^(k)_j. If the user did not specify this plug-in, an instance of the no constraint class is used as the default.

The tolerance (epsilon) is the value returned by the tolerance method of the ARTTolerance plug-in.

public interface ARTTolerance extends ARTPlugIn, TwoD, ThreeD {
    public double tolerance(double raySum);
}

There are two built-in implementations of this interface, fixed tolerance and noise tolerance. Both tolerance plug-ins have a single optional parameter, the tolerance. It cannot be negative. The fixed tolerance plug-in always returns that value. Some background is needed to describe the value returned by the noise tolerance plug-in. Let A be the square of the additive noise standard deviation (see Section 7.6.1), if present, and 0 otherwise. Let B be the square of the multiplicative noise standard deviation (see Section 7.6.2), if present, and 0 otherwise. Let

    E =
        Σ_{i=1}^{NERGY} e_i × exp(−B_i), if a spectrum is specified (see Section 4.2.1.1),
        1, otherwise.


Recall that quantumMean and quantumCalibration are the mean and calibration values of the quantum noise plug-in (see Section 7.5). Let

    C =
        1 / (quantumMean × E), if quantum noise is present,
        0, otherwise.

And let

    D =
        2.0 × C / quantumCalibration + C, if quantum noise is present,
        0, otherwise.

The noise tolerance plug-in returns the value toler × √υ, where toler is the value of the tolerance parameter and

    υ = A + B × raysum + C × D × exp(raysum).
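The value returned by the noise tolerance plug-in follows directly from the formulas above. The following is a minimal sketch; the parameter names are invented for the example and are not the jSNARK API.

```java
public class NoiseTolerance {
    // toler * sqrt(A + B*raySum + C*D*exp(raySum)), with A, B, C, D as defined above.
    public static double tolerance(double toler, double a, double b,
                                   double c, double d, double raySum) {
        double variance = a + b * raySum + c * d * Math.exp(raySum);
        return toler * Math.sqrt(variance);
    }

    public static void main(String[] args) {
        // No multiplicative or quantum noise (B = C = D = 0): only the additive term survives.
        System.out.println(tolerance(2.0, 0.25, 0.0, 0.0, 0.0, 1.5));  // 1.0
    }
}
```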

The value of c^(k) is the value returned by the correction method of the ARTVariant plug-in and, if necessary, the u^(k+1) vector is updated as a side effect.

public interface ARTVariant extends ARTPlugIn, TwoD, ThreeD {
    public double correction(double diff, double epsilon, double normSquared,
            double relaxation, int rayIndex, int projectionIndex)
            throws JSnarkException;
}

There are three implementations of this interface: ART3, ART4, and Bayesian.

If the ART3 plug-in is selected,

    c^(k) =
        0, if |DIFF| ≤ ϵ,
        2 × (DIFF − ϵ) / ||a_i||_l, if ϵ < DIFF < 2ϵ,
        2 × (DIFF + ϵ) / ||a_i||_l, if ϵ < −DIFF < 2ϵ,
        DIFF / ||a_i||_l, if 2ϵ ≤ |DIFF|,

    d^(k) = 0.

If the ART4 plug-in is selected,

    c^(k) = mid{ u^(k)_i, (DIFF + ϵ) / ||a_i||_l, (DIFF − ϵ) / ||a_i||_l },

    d^(k) = −c^(k),

where mid{a, b, c} denotes the median of the three numbers a, b, and c.

If the Bayesian plug-in is selected,

    c^(k) = (snr)² (DIFF − u^(k)_i) / (1 + (snr)² ||a_i||_l),

    d^(k) = c^(k) / (snr)².

(Restriction: (snr)² > ZERO.) If the user did not specify this plug-in, an instance of the ART3 plug-in is used as the default.

The function Ψ, which produces the j'th component of x^(k+1) from the j'th components of x^(k) and x̂^(k+1), is incorporated into the additiveUpdate method of the expansionImage, which calls the boundValue method of the ARTConstraint plug-in.
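The three variant corrections above can be sketched as plain functions. This is illustrative code with invented names, not the jSNARK implementation; normL stands for ||a_i||_l.

```java
public class ArtVariants {
    // Median of three numbers, the mid{a, b, c} used by ART4.
    public static double mid(double a, double b, double c) {
        return Math.max(Math.min(a, b), Math.min(Math.max(a, b), c));
    }

    // ART3 correction c^(k) as a function of diff, epsilon, and ||a_i||_l.
    public static double art3(double diff, double eps, double normL) {
        double ad = Math.abs(diff);
        if (ad <= eps) return 0.0;                    // within the tolerance band
        if (ad < 2 * eps) {
            // epsilon < |diff| < 2*epsilon: pull the residual back toward the band
            return diff > 0 ? 2 * (diff - eps) / normL : 2 * (diff + eps) / normL;
        }
        return diff / normL;                          // 2*epsilon <= |diff|
    }

    // ART4 correction: the median of u_i^(k) and the two band edges.
    public static double art4(double diff, double eps, double ui, double normL) {
        return mid(ui, (diff + eps) / normL, (diff - eps) / normL);
    }

    // Bayesian correction; requires snr^2 > 0.
    public static double bayesian(double diff, double ui, double snr2, double normL) {
        return snr2 * (diff - ui) / (1 + snr2 * normL);
    }

    public static void main(String[] args) {
        System.out.println(art3(5.0, 1.0, 1.0));  // 5.0 (|diff| >= 2*eps)
        System.out.println(mid(3.0, 1.0, 2.0));   // 2.0
    }
}
```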


If the no constraint or ART2 plug-in is selected,

    x^(k+1)_j = x̂^(k+1)_j.

If the bounded constraint is selected,

    x^(k+1)_j =
        x̂^(k+1)_j + (cr)^(k) (lowerBound − x̂^(k+1)_j), if lowerBound is defined and x̂^(k+1)_j < lowerBound,
        x̂^(k+1)_j + (cr)^(k) (upperBound − x̂^(k+1)_j), if upperBound is defined and x̂^(k+1)_j > upperBound,
        x̂^(k+1)_j, otherwise,

where (cr)^(k) is the result of calling the relaxation method of the ARTConstraintRelaxation plug-in.

public interface ARTConstraintRelaxation extends ARTPlugIn, TwoD, ThreeD {
    public double relaxation();
}

The only built-in implementation of this interface is the constant relaxation plug-in. It has a single optional parameter, the relaxation factor. This must be a value greater than ZERO and less than 2. If the user did not specify this plug-in or did not specify the value of the relaxation factor, a default instance that returns the constant 1.0 is used.

If binary ART is selected, the definition of x^(k+1)_j makes use of some auxiliary concepts. (Restriction: if binary ART is specified, both lowerBound and upperBound must be specified.) Let

    HALFWAY = (upperBound + lowerBound) / 2,
    ABOVE = HALFWAY + (upperBound − lowerBound) / 1000,
    BELOW = HALFWAY − (upperBound − lowerBound) / 1000.

In addition, we define two sequences of numbers λ^(k) and µ^(k), based on the integer modifier rays, the total number of rays (the product of the number of rays in a projection times the number of projection directions):

    λ^(0) = lowerBound,
    λ^(k+1) = λ^(k) + (HALFWAY − lowerBound) / (rays × 2^iter),
    µ^(0) = upperBound,
    µ^(k+1) = µ^(k) − (upperBound − HALFWAY) / (rays × 2^iter).

Let

    x̃^(k+1)_j =
        x̂^(k+1)_j + (cr)^(k) (lowerBound − x̂^(k+1)_j), if x̂^(k+1)_j < λ^(k),
        x̂^(k+1)_j + (cr)^(k) (upperBound − x̂^(k+1)_j), if x̂^(k+1)_j > µ^(k),
        x̂^(k+1)_j, otherwise,

where (cr)^(k) is the result of calling the relaxation method of the ARTConstraintRelaxation plug-in described above.

    x^(k+1)_j =
        BELOW, if x^(k)_j ≤ λ^(k) and x̃^(k+1)_j ≥ HALFWAY,
        ABOVE, if x^(k)_j ≥ µ^(k) and x̃^(k+1)_j ≤ HALFWAY,
        x̃^(k+1)_j, otherwise.
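The λ^(k) and µ^(k) threshold updates can be sketched as follows; this is illustrative code, not jSNARK source, and iter is assumed to be the current iteration number.

```java
public class BinaryArtBounds {
    // lambda^(k+1): march the lower threshold up toward HALFWAY as rays are processed.
    public static double nextLambda(double lambda, double lowerBound, double halfway,
                                    long rays, int iter) {
        return lambda + (halfway - lowerBound) / (rays * Math.pow(2, iter));
    }

    // mu^(k+1): march the upper threshold down toward HALFWAY.
    public static double nextMu(double mu, double upperBound, double halfway,
                                long rays, int iter) {
        return mu - (upperBound - halfway) / (rays * Math.pow(2, iter));
    }

    public static void main(String[] args) {
        double lower = 0.0, upper = 1.0, halfway = (lower + upper) / 2;
        double lambda = lower;           // lambda^(0) = lowerBound
        for (int k = 0; k < 10; k++) {
            lambda = nextLambda(lambda, lower, halfway, 100, 1);
        }
        System.out.println(lambda);      // lambda after 10 rays of the first iteration
    }
}
```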

The q'th iteration of the ART algorithm is completed when x^(qm) is computed and the values of x^(qm)_n are used as the values of c^(q)_n in Equation 2.3.


7.11.1.3 Block ART

This is a combined implementation of two of the block ART algorithms described in Eggermont et al. [11]. It partitions the rows of matrix A into some number of blocks. The measurement vector p is partitioned in a corresponding way. It then operates on all the rays in a block simultaneously. Two degenerate cases are when each row of A is considered a block and when the entire matrix A is the only block. The block ART plug-in has a required plug-in and two optional plug-ins. The required plug-in specifies the blocking strategy while the optional plug-ins specify the variant and relaxation. The interfaces for all of these plug-ins are subclasses of the ARTPlugIn interface described above. The nextRay method of this interface is not used by block ART.

The matrix A is partitioned by rows into blocks A_1, A_2, . . . , A_l, so that

    A^T = ( A_1^T | A_2^T | · · · | A_l^T ),

where 1 ≤ l ≤ m. The projection vector, p, is partitioned as

    p^T = ( P_1^T | P_2^T | · · · | P_l^T ),

where for 1 ≤ i ≤ l, the number of rows in A_i is the same as the number of components in P_i. The implementation combines algorithms (1.4) and (4.2) of [11] as the steps

    x^(0) = x_0,
    x^(n+1) = x^(n) + λ A_i^T Σ^(n) d^(n),

for n = 0, 1, . . . and i = (n mod l) + 1.

The q'th iteration of block ART is completed when x^(ql) is computed and the values of x^(ql) are used as the values of c^(q) in Equation 2.3.

The difference between the algorithms is the value of λ and how the vector d^(n) is computed. In algorithm (1.4) of [11], λ = 1 and

    d^(n) = P_i − A_i x^(n).
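One block step of algorithm (1.4), with Σ^(n) taken to be a constant diagonal matrix σI, can be sketched as follows (illustrative code, not the jSNARK implementation):

```java
// x^(n+1) = x^(n) + A_i^T * Sigma * d^(n), with d^(n) = P_i - A_i x^(n)
// and Sigma = sigma * I (the constantRelaxation choice described below).
public class BlockArtStep {
    public static double[] update(double[][] ai, double[] pi, double[] x, double sigma) {
        int rows = ai.length, cols = x.length;
        double[] d = new double[rows];
        for (int r = 0; r < rows; r++) {            // d = P_i - A_i x
            double s = 0.0;
            for (int c = 0; c < cols; c++) s += ai[r][c] * x[c];
            d[r] = pi[r] - s;
        }
        double[] next = x.clone();
        for (int r = 0; r < rows; r++)              // x += A_i^T (sigma * d)
            for (int c = 0; c < cols; c++)
                next[c] += ai[r][c] * sigma * d[r];
        return next;
    }

    public static void main(String[] args) {
        double[][] ai = {{1, 0}, {0, 1}};           // a block of two orthonormal rays
        double[] x = update(ai, new double[]{3, 4}, new double[]{0, 0}, 1.0);
        System.out.println(x[0] + ", " + x[1]);     // 3.0, 4.0
    }
}
```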

In algorithm (4.2), the Bayesian variation, “λ represents the levels of confidence we have in x_0 being a good approximation for x, and in p being a good approximation for the line integrals.” We introduce a new variable

    r^T = ( r_1^T | r_2^T | · · · | r_l^T ),

where the number of components of r_i is the same as the number of components of P_i. r^(0) is initialized with all zeros. The algorithm performs the steps

    d^(n) = λ (P_i − A_i x^(n)) − r^(n)_i,
    r^(n+1)_j = r^(n)_j for j ≠ i,
    r^(n+1)_i = r^(n)_i + Σ^(n) d^(n).

The BlockARTVariant interface adds four methods to the ARTPlugIn interface: setBlocker, nextBlock, correction, and updateDual. The setBlocker method passes the ARTBlocks instance to the variant. The nextBlock method signals when a new block is being processed. The correction method is passed the vector P_i − A_i x^(n) and is expected to return the vector d^(n). The updateDual method is passed the vector Σ^(n) d^(n) and is used to update the vector r^(n). It is expected to return the vector λ Σ^(n) d^(n). There are two built-in implementations of this interface, plainBlockART which implements algorithm (1.4) and BayesianBlockArt which implements algorithm (4.2).


public interface ARTBlocks extends ARTPlugIn, Iterable {
    public int size();
}

The ARTBlocks interface also inherits the Iterable interface. Subclasses of this interface implement the iterator method that returns a class instance whose next method will return an array of Ray (see Section 5.9) instances representing a block, A_i, and the corresponding projection estimates, P_i. Different implementations of this interface correspond to different ways of partitioning the matrix A. There are three built-in implementations of this interface: projection, rowBlock, and columnBlock. The projection implementation is available in both 2D and 3D and returns all the rays in a projection as a single block. The rowBlock and columnBlock implementations are available only in 3D. They return the rows and columns, respectively, of the 2D projection array. In all implementations the projections are returned in the efficient order (see Section 5.2), and the rows and columns of the 2D projection arrays in 3D are also returned in efficient order.

public interface BlockRelaxation extends ARTPlugIn {
    public void setBlocker(ARTBlocks blocker) throws JSnarkException;
    public void nextBlock();
    public double[] relax(double[] correction);
}

The BlockRelaxation interface adds three methods to the ARTPlugIn interface: setBlocker, nextBlock, and relax. The setBlocker method informs the BlockRelaxation instance about the blocker instance that will be used during reconstruction. The nextBlock method is called when a new block is about to be processed. The relax method is passed the array of corrections that will be used when processing the current block. It implements the multiplication by the matrix Σ^(n) in the algorithm. It returns an array of the same size containing the modified corrections after processing by the BlockRelaxation instance. This may be the same array that was passed in. There are two built-in implementations of this interface, constantRelaxation and eigenvalueRelaxation. The constantRelaxation implementation has Σ^(n) as a diagonal matrix with a constant value on the diagonal. That constant is an optional parameter to the plug-in. If no relaxation method is specified, or if the constantRelaxation method is specified without a relaxation parameter, the default is the constant relaxation factor 1.0. The eigenvalueRelaxation implementation computes Σ^(n) as a diagonal matrix where the diagonal entries are estimates of

    1 / (λmax(A_i^T A_i) + λmin(A_i^T A_i) + 1),

where λmax(A_i^T A_i) is the maximum eigenvalue of A_i^T A_i and λmin(A_i^T A_i) is the minimum eigenvalue. The algorithm for estimating λmax(A_i^T A_i) is in Herman and Lent [22]. The estimate of λmin(A_i^T A_i) is 0.0. If u^(n) is the upper bound of the estimate of λmax(A_i^T A_i) and l^(n) is the lower bound, the algorithm is terminated when

    1/(l^(n) + 1) − 1/(u^(n) + 1) < 0.01.
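The manual defers the λmax(A_i^T A_i) estimation algorithm to Herman and Lent [22]; as a generic illustration only (not the cited algorithm), a power-iteration estimate of the largest eigenvalue looks like:

```java
public class EigenEstimate {
    // Power iteration on A^T A: returns an estimate of its largest eigenvalue.
    public static double lambdaMax(double[][] a, int iterations) {
        int n = a[0].length;
        double[] v = new double[n];
        java.util.Arrays.fill(v, 1.0);              // arbitrary nonzero start vector
        double lambda = 0.0;
        for (int it = 0; it < iterations; it++) {
            double[] av = new double[a.length];     // A v
            for (int r = 0; r < a.length; r++)
                for (int c = 0; c < n; c++) av[r] += a[r][c] * v[c];
            double[] atav = new double[n];          // w = A^T (A v)
            for (int r = 0; r < a.length; r++)
                for (int c = 0; c < n; c++) atav[c] += a[r][c] * av[r];
            double norm = 0.0;
            for (double w : atav) norm += w * w;
            norm = Math.sqrt(norm);
            lambda = norm;                          // ||A^T A v|| with unit v -> lambda_max
            for (int c = 0; c < n; c++) v[c] = atav[c] / norm;  // re-normalize
        }
        return lambda;
    }

    public static void main(String[] args) {
        // For A = diag(2, 1), A^T A = diag(4, 1), so lambda_max = 4.
        System.out.println(lambdaMax(new double[][]{{2, 0}, {0, 1}}, 60));
    }
}
```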

7.11.2 2D algorithms

7.11.2.1 Back-projection

This is a general implementation of the back-projection algorithm described in Gullberg [16]. The 2D back-projection plug-in has a single optional parameter, weighting, and a single optional imbedded plug-in, interpolator.

Roughly speaking, this algorithm operates as follows. For each projection and for each grid point in the image, the grid point is projected onto the detector and, using some interpolation method, the value of the projection at that point is determined. That value is multiplied by the weight for the projection and added to the basis element coefficient. Only pixel basis elements are allowed.


More precisely,

    c^(1)_n =
        Σ_{j=1}^{NPROJ} g_par(x⃗_n · θ⃗_j, θ_j) Δθ_j for parallel projection data,
        Σ_{j=1}^{NPROJ} g_arc(α, θ_j) Δθ_j for arc projection data,
        Σ_{j=1}^{NPROJ} g_tan(s, θ_j) Δθ_j for tangent projection data,

where NPROJ is the number of projections, x⃗_n is the grid point associated with c^(1)_n,

    α = tan⁻¹[ r sin(θ_j + ϕ) / (S + r cos(θ_j + ϕ)) ],

r = ||x⃗_n||, ϕ = arg(x⃗_n), and s = D tan α. The digitized reconstruction is obtained by evaluating Equation 2.3.
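For parallel projection data, the back-projection loop amounts to the following sketch: project each grid point onto the detector, interpolate, weight, and accumulate. The grid geometry, linear interpolation, and all names are invented for the example; this is not jSNARK source.

```java
public class BackProjection {
    // Accumulate one parallel projection into the coefficient grid.
    // g[d] samples g_par at detector position d*sep - offset along direction theta.
    public static void backProject(double[][] coeff, double[] g, double sep,
                                   double offset, double theta, double weight) {
        for (int iy = 0; iy < coeff.length; iy++) {
            for (int ix = 0; ix < coeff[0].length; ix++) {
                // project the grid point onto the detector: t = x . theta
                double t = ix * Math.cos(theta) + iy * Math.sin(theta);
                double pos = (t + offset) / sep;       // fractional detector index
                int lo = (int) Math.floor(pos);
                if (lo < 0 || lo + 1 >= g.length) continue;   // outside the detector
                double frac = pos - lo;
                double value = (1 - frac) * g[lo] + frac * g[lo + 1];  // linear interpolation
                coeff[iy][ix] += weight * value;       // weight plays the role of dTheta_j
            }
        }
    }

    public static void main(String[] args) {
        double[][] c = new double[2][2];
        backProject(c, new double[]{0.0, 1.0, 2.0, 3.0}, 1.0, 1.0, 0.0, 1.0);
        System.out.println(c[0][0] + " " + c[0][1]);  // 1.0 2.0 (theta = 0, so t = ix)
    }
}
```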

The weighting parameter is restricted to the values equal and trapezoidal. These affect the value of Δθ_j. The equal option gives each projection the same weight while the trapezoidal option assigns a weight using the trapezoidal rule. In both cases, the weights sum to π for parallel projection data and 2π for fan beam projection data. If the parallel projections are equally spaced and cover a 180 degree range, or the fan beam projections are equally spaced and cover a 360 degree range, the two options result in identical weights. If the weighting parameter is not specified, trapezoidal weighting is assumed.

The interpolator plug-in affects how the value of g is estimated at positions between the sampled positions. The interpolator plug-in uses two interfaces.

public interface Interpolator extends PlugIn {
    public void setSize(int size);
    public void setSeparation(double separation);
    public void setData(double[] data);
}

public interface Interpolator2D {
    public double interpolate(double x);
}

In addition, the AbstractInterpolator class provides a common implementation of the Interpolator class methods. Strictly speaking, the setSize method is not necessary for the two dimensional interpolator, but it made coding simpler. It informs the interpolator of the number of samples in the data along one dimension. The setSeparation method sets the spacing between samples in the data. The setData method sets the actual data to be interpolated. In addition, AbstractInterpolator contains a helper method, position, that converts a coordinate in space into the corresponding coordinate along the detector.

The interpolate method of the Interpolator2D class performs the actual interpolation. There are two built-in implementations of this interface, nearestNeighbor and linear. The nearestNeighbor implementation returns the value of the data element that is closest to the converted coordinate position. The linear implementation performs a linear interpolation between the two closest values. If the interpolator plug-in is not specified, linear is assumed.

Note: Interpolator2D is a separate class because its input is a scalar, while the input for Interpolator3D is a two dimensional vector. It would require a one dimensional Vector class (Vector1D) to merge these two methods. It was not worth that complication.

Note: Interpolator2D is a one dimensional interpolator and Interpolator3D is a two dimensional interpolator. They have been given these names so that they correspond to the dimensionality of the reconstruction algorithm rather than the dimensionality of the data.
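A minimal sketch of the linear implementation's behavior follows. The end-of-array clamping is an assumption made for the example (the manual does not specify jSNARK's boundary handling), and the class name is invented.

```java
public class LinearInterpolator {
    private final double[] data;
    private final double separation;   // spacing between samples, as set by setSeparation

    public LinearInterpolator(double[] data, double separation) {
        this.data = data;
        this.separation = separation;
    }

    // Linear interpolation between the two samples bracketing x (clamped at the ends).
    public double interpolate(double x) {
        double pos = x / separation;                  // fractional sample index
        int lo = (int) Math.floor(pos);
        if (lo < 0) return data[0];                   // assumed clamping below the range
        if (lo >= data.length - 1) return data[data.length - 1];
        double frac = pos - lo;
        return (1 - frac) * data[lo] + frac * data[lo + 1];
    }

    public static void main(String[] args) {
        LinearInterpolator li = new LinearInterpolator(new double[]{0.0, 10.0, 20.0}, 2.0);
        System.out.println(li.interpolate(3.0));  // halfway between samples 1 and 2 -> 15.0
    }
}
```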


7.11.2.2 Fan beam back-projection

This is a general implementation of the back-projection step of the divergent convolution reconstruction algorithm described in Herman [20], Section 10.1. It also has the optional parameter, weighting, and the optional imbedded plug-in, interpolator. They are identical to those used by the back-projection reconstruction algorithm described above. It differs from that algorithm in that it is only applicable to fan beam data (either arc (7.2.1.2) or tangent (7.2.1.3)) and that the weight applied during the back-projection is also a function of the image coordinates. The discrete reconstruction coefficients are given by

    c^(1)_n =
        (S / 4π²) Σ_{j=1}^{NPROJ} (1/W²) g_arc(α, θ_j) Δθ_j for arc projection data,
        (S / 4π²) Σ_{j=1}^{NPROJ} (1/W²) g_tan(s, θ_j) Δθ_j for tangent projection data,

where S is the distance from the radiation source to the origin, j is the index of the projection, θ_j is the angle the source makes with the y axis, Δθ_j is the weight applied to the j'th projection,

    α = tan⁻¹( r cos(θ_j − ϕ) / (S + r sin(θ_j − ϕ)) )

is the position on the detector where the line connecting the source and the point x⃗_n hits the detector, r = ||x⃗_n||, ϕ = arg(x⃗_n),

    W = { [r cos(β − ϕ)]² + [S + r sin(β − ϕ)]² }^{1/2},

s = D tan α, and D is the source to detector distance.

Warning: This reconstruction algorithm is only applicable to fan beam projection data and will fail when used with parallel data.

7.11.2.3 Parallel beam filtered back-projection

This is a generalized version of the filtered back-projection (convolution) reconstruction algorithm. It includes the variations described in [43] and [44]. The coefficients of the digitized reconstruction are given by

    c^(1)_n = Σ_{j=1}^{NPROJ} g̃_par(x⃗_n · θ⃗_j, θ_j) Δθ_j,

where

    g̃_par(ks, θ_j) = Σ_{k′=−N}^{N} q(k − k′) g_par(k′s, θ_j),

and s is the sampling distance of g_par, N is the number of rays in a projection, and q is described below.

It uses the back-projection algorithm described in Section 7.11.2.1 above to perform the back-projection step using filtered projection data instead of the raw projection data. In addition to an interpolator plug-in, the algorithm has an optional imbedded plug-in, filter. Its interface is:

public interface Filter extends TwoD, PlugIn {
    public void createFilter(int size);
    public void convolve(double[] input, double[] output) throws JSnarkException;
    public double[] convolve(double[] input) throws JSnarkException;
}

All concrete instances of this filter class should be derived from the AbstractFBPFilter class that implements the common methods of this interface.

The createFilter method precomputes the filter values. There are two methods for performing the convolution step. The first convolve method requires the caller to pass in an output array the same size as the input array. The second method returns a new instance of an output array that is allocated by the filter plug-in.

There are four built-in implementations of the filter interface: bandlimiting, sinc, cosine, and generalized Hamming. Except for the Hamming filter, all of them have an optional parameter, cutoff, that is used to vary the Fourier space cutoff of the filter. cutoff must be greater than ZERO and less than or equal to 1.0. If it is not specified, it defaults to the value 1.0. When the cutoff is 1.0, the bandlimiting filter implements the filter described in [43] and the sinc filter implements the filter described in [44]. The generalized Hamming window has a required parameter, α. If α = 0.5 this is the Hanning window. If α = 0.54 this is the Hamming window. If the filter is not specified, it defaults to the bandlimiting filter with a cutoff of 1.0.

The AbstractFilter class implements a create method that sets the value of the cutoff parameter and implements both of the convolve methods. A user written plug-in derived from this class need only implement the createFilter method. The implementation should allocate the filter array and initialize it with the filter values. Only the positive half (including 0) of the filter is created.

In the following, m will be the index of the filter coefficient, c the value of the cutoff parameter, and δ the distance between detectors. The filter coefficients are given by

    q(m) = ∫_{−∞}^{∞} W(X) |X| exp(−2πimδX) dX

for some windowing function W. Let

    Π(X) =
        1, if |X| < 1/2,
        0, otherwise.

For the bandlimiting filter, W_B(X) = Π(δX/c) and the filter coefficients are given by

    q_B(m) = (c²/4) [ 2 sinc(mc) − sinc²(mc/2) ].
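With sinc(x) = sin(πx)/(πx) (an assumed convention), the bandlimiting coefficients above can be computed directly. The following sketch, with invented names, should reproduce the classical convolver of [43] at cutoff 1.

```java
public class BandlimitingFilter {
    // Normalized sinc: sin(pi x)/(pi x), with sinc(0) = 1.
    public static double sinc(double x) {
        if (x == 0.0) return 1.0;
        return Math.sin(Math.PI * x) / (Math.PI * x);
    }

    // q_B(m) = (c^2 / 4) * (2 sinc(m c) - sinc^2(m c / 2))
    public static double qB(int m, double cutoff) {
        double s = sinc(m * cutoff / 2.0);
        return (cutoff * cutoff / 4.0) * (2.0 * sinc(m * cutoff) - s * s);
    }

    public static void main(String[] args) {
        // With cutoff 1: q_B(0) = 1/4, q_B(m) = 0 for even m != 0,
        // and q_B(m) = -1/(pi^2 m^2) for odd m.
        System.out.println(qB(0, 1.0));  // 0.25
        System.out.println(qB(1, 1.0));
    }
}
```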

For the sinc filter, W_S(X) = sinc(δX/c) Π(δX/c) and the filter coefficients are given by

    q_S(m) = (c²/2π) [ sinc(1/4 + mc/2) cos(π/4 − πmc/2) + sinc(1/4 − mc/2) cos(π/4 + πmc/2) ].

For the cosine filter, W_C(X) = cos(πδX/c) Π(δX/c) and the filter coefficients are given by

    q_C(m) = (c²/4) [ sinc(1/2 + mc) + sinc(1/2 − mc) − (1/2)( sinc²(1/4 + mc/2) + sinc²(1/4 − mc/2) ) ].

For the Hamming filter, W_H(X) = [α + (1 − α) cos(2πδX)] Π(δX) for 0 ≤ α ≤ 1, where α is the parameter for the filter, and the filter coefficients are given by

    q_H(m) =
        α/4 + (α − 1)/π², if m = 0,
        −α/π² − (α − 1)/8, if m = 1,
        ((α − 1)/(2π²)) [ 1/(m − 1)² + 1/(m + 1)² ], if m > 1 and even,
        −α/(π²m²), otherwise.

After the convolve method is called, the projection data are divided by the detector spacing to normalize the result. For fan beam data, the actual detector spacing is scaled as if the detector intersected the origin by multiplying by the source to origin distance and dividing by the source to detector distance. Strictly speaking, the filtered back-projection reconstruction algorithm is not applicable to fan beam data. It will issue a warning when such a reconstruction is attempted.


7.11.2.4 Fan beam filtered back-projection

This is an implementation of the fan beam convolution reconstruction method described in [28]. It uses the fan beam back-projection algorithm described in Section 7.11.2.2 above with the filtered projection data instead of the raw projection data. In addition to an interpolator plug-in, the algorithm has an optional imbedded plug-in, filter. It reuses the filter interface of the filtered back-projection algorithm described in Section 7.11.2.3 above. All concrete instances of this class should be derived from the AbstractDconFilter class that implements the common methods of this interface.

The discrete reconstruction coefficients are given by

    c^(1)_n =
        (S / 4π²) Σ_{j=1}^{NPROJ} (1/W²) g̃_arc(α, θ_j) Δθ_j for arc projection data,
        (S / 4π²) Σ_{j=1}^{NPROJ} (1/W²) g̃_tan(s, θ_j) Δθ_j for tangent projection data,

where S, W, α, θ_j, Δθ_j, and s are as in Section 7.11.2.2.

    g̃_arc(n′λ, θ_j) = λ Σ_{n=−N}^{N} cos(nλ) q^(1)((n′ − n)λ) g_arc(nλ, θ_j)
                    + λ cos(n′λ) Σ_{n=−N}^{N} q^(2)((n′ − n)λ) g_arc(nλ, θ_j), and

    g̃_tan(n′s, θ_j) = λ Σ_{n=−N}^{N} cos(tan⁻¹(ns/D)) q^(1)((n′ − n)s) g_tan(ns, θ_j)
                    + λ cos(tan⁻¹(n′s/D)) Σ_{n=−N}^{N} q^(2)((n′ − n)s) g_tan(ns, θ_j),

where λ is the sampling distance of g_arc in radians, s is the sampling distance of g_tan, and D is the source to detector distance.

There are three built-in implementations of the filter interface: bandlimiting, sinc, and cosine. All have an optional parameter, cutoff, that is used to vary the Fourier space cutoff of the filter. cutoff must be greater than ZERO and less than or equal to 1.0. If it is not specified, it defaults to the value 1.0. When the cutoff is 1.0, the bandlimiting filter implements the filter described in [34].

In the following, m will be the index of the filter coefficient, c the value of the cutoff parameter, and α the distance between detectors divided by the source to detector distance. The pair of filter coefficients are given by

    q^(1)(m) = −mα Q(mα) / sin²(mα),

    q^(2)(m) = ( Q(mα) + mα Q′(mα) ) / sin(mα),

where

    Q(u) = 2π ∫_0^{c/(2α)} w(ξ) sin(2πξu) dξ.

For the bandlimiting filter, w(ξ) = 1. For the sinc filter, w(ξ) = sinc(2ξα/c). For the cosine filter, w(ξ) = cos(πξα/c).

Warning: This reconstruction algorithm is only applicable to fan beam projection data and will fail when used with parallel data.

7.11.3 3D algorithms

7.11.3.1 Back-projection

This is a 3D extension of the back-projection algorithm described in Gullberg [16]. The 3D back-projection plug-in has a single imbedded plug-in, interpolator. It essentially implements the same algorithm as the 2D back-projection described above, but the projections are all given the weight 4π/NPROJ.

The basis element coefficients are given by


    c^(1)_n = (4π / NPROJ) Σ_{j=1}^{NPROJ} g(s⃗_n, θ⃗_j),

where s⃗_n is the position on the detector where the line between the source and the point x⃗_n intersects the detector.

Instead of the Interpolator2D interface, this algorithm uses the Interpolator3D interface.

public interface Interpolator3D {
    public double interpolate(double x, double y);
}

The AbstractInterpolator class described above is also available. The setSize method informs the interpolator of the number of samples in the data along one dimension. The size of the actual data array will be the square of this value.

The interpolate method of the Interpolator3D class performs the actual interpolation (see the notes on interpolator naming in Section 7.11.2.1). There are two built-in implementations of this interface, nearestNeighbor and bilinear. The nearestNeighbor implementation returns the value of the data element that is closest to the converted coordinate position. The bilinear implementation performs a bilinear interpolation between the four closest values. If the interpolator plug-in is not specified, bilinear is assumed.

7.11.3.2 Weighted Back-projection

This is an implementation of both of Radermacher's weighted back-projection reconstruction algorithms in Chapter 8 of [13].

The back-project first variant of the weighted back-projection method consists of computing the three-dimensional back-projection described above, taking its 3D discrete Fourier transform, multiplying it by a weighting function, and taking the inverse Fourier transform. The weighting function is

    W_a(X, Y, Z) = [ Σ_{j=1}^{NPROJ} 2a sinc(2a(X sin θ_j cos ϕ_j + Y sin θ_j sin ϕ_j + Z cos θ_j)) ]⁻¹,

where a is the radius of the object being reconstructed and θ_j and ϕ_j are the first two ZYZ Euler angles of the rotation; thus X sin θ_j cos ϕ_j + Y sin θ_j sin ϕ_j + Z cos θ_j is the distance of the point (X, Y, Z) from the plane passing through the origin parallel to the detector. To avoid division by zero and numbers near zero, when the sum is smaller than some threshold, it is replaced by the threshold, keeping the proper sign.
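The W_a weighting function, including the near-zero clamping described above, can be sketched as follows. This is illustrative code with invented names, and sinc(x) = sin(πx)/(πx) is an assumed convention.

```java
public class WeightedBackProjection {
    public static double sinc(double x) {
        return x == 0.0 ? 1.0 : Math.sin(Math.PI * x) / (Math.PI * x);
    }

    // W_a at frequency (X, Y, Z) for projection angles theta[], phi[] and object
    // radius a, with the denominator clamped at +/- threshold near zero.
    public static double weight(double x, double y, double z,
                                double[] theta, double[] phi, double a, double threshold) {
        double sum = 0.0;
        for (int j = 0; j < theta.length; j++) {
            // distance of (X,Y,Z) from the central plane parallel to detector j
            double dist = x * Math.sin(theta[j]) * Math.cos(phi[j])
                        + y * Math.sin(theta[j]) * Math.sin(phi[j])
                        + z * Math.cos(theta[j]);
            sum += 2 * a * sinc(2 * a * dist);
        }
        if (Math.abs(sum) < threshold) sum = Math.copySign(threshold, sum);
        return 1.0 / sum;
    }

    public static void main(String[] args) {
        // Single projection along Z (theta = 0): at zero frequency the sum is 2a = 1.
        System.out.println(weight(0, 0, 0, new double[]{0}, new double[]{0}, 0.5, 1e-6));
    }
}
```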

The back-project last variant multiplies the 2D discrete Fourier transform of a projection by a slice through the W_a function in the plane passing through the origin parallel to the projection direction, performs the inverse Fourier transform, and back-projects the result.

The plug-in has two required parameters, two optional parameters and an optional imbedded plug-in. The two required parameters are the algorithm type and the filter threshold. The algorithm type parameter has two possible values, backProjectFirst and backProjectLast. This specifies which of the two weighted back-projection algorithms is to be used. The threshold parameter is the lower limit value for the filter. Filter values less than this value are set to this value. The two optional parameters are cutoff and fftSize. Cutoff is the Fourier space cutoff. If the value specified is less than ZERO, it is set to ZERO. Its maximum value depends on whether the type parameter is backProjectFirst or backProjectLast. For backProjectFirst, the maximum value is $\sqrt{3}$. For backProjectLast, the maximum value is $\sqrt{2}$. It specifies the greatest frequencies that will be used in the reconstruction process. FftSize specifies the size of the Fourier transform array. FftSize must be greater than $2^{\lceil\log_2 x\rceil}$, where $x$ is $4\times\text{width}/3$ in the backProjectFirst case and $4\times\text{nrays}/3$ in the backProjectLast case. If it is less than that value, it is set to that value. The optional plug-in is the


7.11. ALGORITHM 111

interpolator that will be used in the back-projection step. It may be either nearestNeighbor or bilinear. If it is not specified, bilinear will be used.
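The fftSize lower bound $2^{\lceil\log_2 x\rceil}$ is simply the smallest power of two that is at least $x$. A minimal sketch (the class and method names are illustrative):

```java
// Illustrative computation of the minimum fftSize bound described above:
// the smallest power of two that is >= x, where x = 4*width/3
// (backProjectFirst) or 4*nrays/3 (backProjectLast).
class FftSizeBound {
    static int minFftSize(int x) {
        int n = 1;
        while (n < x) {
            n <<= 1; // next power of two
        }
        return n;
    }
}
```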

Warning: This reconstruction algorithm is only applicable to parallel projection data and will fail when used with cone beam data.

7.11.3.3 Katsevich

This is an implementation of Katsevich's filtered back-projection algorithm for helical computed tomography [31, 32, 33]. In the following, several theorems will be stated without proof. The proofs are in the cited papers.

The implementation is restricted to helical scan data (see Section 7.3.2.6) with a pitch > 0 and cone-beam projections (see Sections 7.2.2.3 and 7.2.2.4). It is further restricted to the helix axis being on the Z axis, with the helix winding counterclockwise as it advances in the direction of the positive Z axis. The implementation of this algorithm is based on [41] and [47].

The helix trajectory can be given as

y⃗(s) = (R cos(s), R sin(s), sP/2π),

where $\vec{y}(s)$ is the position of the source, $R$ is the radius of the helix and $P$ is the pitch. For every point $\vec{x}$ in the interior of the helix, there exists a unique pair, $s_a$ and $s_b$, such that $s_a < s_b < s_a + 2\pi$ and, for some $t$, $0 < t < 1$, $\vec{x} = (1-t)\vec{y}(s_a) + t\vec{y}(s_b)$. In words, $\vec{y}(s_a)$ and $\vec{y}(s_b)$ are less than one turn of the helix apart and $\vec{x}$ lies on the line segment joining $\vec{y}(s_a)$ and $\vec{y}(s_b)$. Define the PI interval of $\vec{x}$ as the set of real numbers in the interval

IPI(x⃗) = [sa(x⃗), sb(x⃗)]

where $s_a(\vec{x})$ and $s_b(\vec{x})$ are the $s_a$ and $s_b$ that satisfy the above conditions for the point $\vec{x}$.

For $s \in I_{PI}(\vec{x})$, define

$$\vec{\beta}(s,\vec{x}) = \frac{\vec{x}-\vec{y}(s)}{|\vec{x}-\vec{y}(s)|}.$$

$\vec{\beta}(s,\vec{x})$ is a unit vector parallel to the vector from $\vec{y}(s)$ to $\vec{x}$.

Given $s, s_2 \in I_{PI}(\vec{x})$, let $s_1 = (s+s_2)/2$ and define

$$\vec{u}(s,s_2) = \begin{cases}\dfrac{(\vec{y}(s_1)-\vec{y}(s))\times(\vec{y}(s_2)-\vec{y}(s))}{|(\vec{y}(s_1)-\vec{y}(s))\times(\vec{y}(s_2)-\vec{y}(s))|}\,\mathrm{sgn}(s_2-s) & \text{if } s \ne s_2,\\[2ex]\dfrac{\frac{d}{ds}\vec{y}(s)\times\frac{d^2}{ds^2}\vec{y}(s)}{\left|\frac{d}{ds}\vec{y}(s)\times\frac{d^2}{ds^2}\vec{y}(s)\right|} & \text{if } s = s_2,\end{cases}$$

where $\frac{d}{ds}\vec{y}(s)$ is the first derivative of $\vec{y}$ with respect to $s$ and $\frac{d^2}{ds^2}\vec{y}(s)$ is the second derivative. For any $\vec{x}$ in the interior of the helix and any $s \in I_{PI}(\vec{x})$, there is a unique $s_2$ such that $\vec{\beta}(s,\vec{x})\cdot\vec{u}(s,s_2) = 0$. Define $\vec{u}(s,\vec{x}) = \vec{u}(s,s_2)$ for the $s_2$ that satisfies that condition. $\vec{\beta}(s,\vec{x})$ and $\vec{u}(s,\vec{x})$ form an orthonormal pair of vectors. Define $\vec{e}(s,\vec{x}) = \vec{\beta}(s,\vec{x})\times\vec{u}(s,\vec{x})$ and $\vec{\theta}(s,\vec{x},\gamma) = \cos(\gamma)\vec{\beta}(s,\vec{x}) + \sin(\gamma)\vec{e}(s,\vec{x})$. $\vec{\theta}(s,\vec{x},\gamma)$ is a unit vector in the plane generated by $\vec{\beta}(s,\vec{x})$ and $\vec{e}(s,\vec{x})$. Katsevich calls the plane generated by the vectors $\vec{\beta}(s,\vec{x})$ and $\vec{e}(s,\vec{x})$ a kappa plane.
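The helix trajectory introduced above is straightforward to evaluate; a sketch (the class and method names are illustrative):

```java
// Illustrative evaluation of the helix source position
// y(s) = (R cos s, R sin s, s*P/(2*pi)) for radius R and pitch P.
class Helix {
    final double R, P;

    Helix(double R, double P) { this.R = R; this.P = P; }

    double[] y(double s) {
        return new double[]{ R * Math.cos(s), R * Math.sin(s), s * P / (2 * Math.PI) };
    }
}
```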

The Katsevich reconstruction algorithm is

$$f(\vec{x}) = \frac{-1}{2\pi^2}\int_{I_{PI}(\vec{x})}\frac{1}{|\vec{x}-\vec{y}(s)|}\int_0^{2\pi}\frac{\partial_q[Df](\vec{y}(q),\vec{\theta}(s,\vec{x},\gamma))\big|_{q=s}}{\sin(\gamma)}\,d\gamma\,ds \qquad (7.18)$$

where $[Df](\vec{y},\vec{\theta})$ is the cone beam transform defined in equation 7.5. Implementing this algorithm involves first taking the derivative of the projection data, performing a Hilbert transform, and then a weighted back-projection. The description of the implementation that follows is based on [41].

The implementation has two required imbedded plug-ins, derivative and filter, and two optional imbedded plug-ins, unitary and interpolator. The derivative plug-in specifies how the derivative of the projection data

112 CHAPTER 7. BUILT-IN PLUG-INS

is to be taken. It can be either simple or chainRule. The filter plug-in determines the filter that will be applied in the convolution step of the algorithm. The bandLimiting filter is the only filter currently implemented. The unitary plug-in describes how the filtered projection data will be treated near the boundary of the Tam-Danielsson window described in Section 7.2.2.2. The interpolator plug-in specifies how the interpolation will be performed during the derivative and back-projection steps of the algorithm. These are described in more detail below. The two detector configurations, tangent and arc, are discussed separately.

Tangent detector. We can relate the measured projection data to the cone beam transform through the following equations.

$$[Df](\vec{y}(s),\vec{\theta}) = g_t(s,u,w)$$
where
$$\vec{\theta} = \frac{u\,\vec{e}_u(s) + D\,\vec{e}_v(s) + w\,\vec{e}_w}{\sqrt{u^2+D^2+w^2}},\qquad u = D\,\frac{\vec{\theta}\cdot\vec{e}_u(s)}{\vec{\theta}\cdot\vec{e}_v(s)},\qquad w = D\,\frac{\vec{\theta}\cdot\vec{e}_w}{\vec{\theta}\cdot\vec{e}_v(s)},$$

$D$ is the perpendicular distance between the source and the detector, and $\vec{e}_u(s)$, $\vec{e}_v(s)$, $\vec{e}_w$ are the equivalents of $\vec{e}_u(\vec{y})$, $\vec{e}_v(\vec{y})$, and $\vec{e}_w(\vec{y})$ defined in Section 7.2.2 for the source position $\vec{y}(s)$. $\vec{e}_w$ is no longer dependent on the source position because it is parallel to the Z axis. $g_t(s,u,w)$ is known at a finite number of values, $g_t(s_k,u_i,w_j)$, where $s_k = k\Delta s$ for $-K < k < K$, $u_i = i\Delta u$ for $-I < i < I$, and $w_j = j\Delta w$ for $-J < j < J$. The values between the sampled positions are obtained by interpolation. The interpolator plug-in determines whether nearest neighbor or bilinear interpolation is used.

Two methods of approximating the partial derivative have been implemented, the simple derivative and the chain rule derivative.

In the simple derivative, we assume that θ⃗(s, x⃗, γ) is independent of q. We approximate the derivative as

$$g_t^{(1)}(s_{k+1/2}, u_i, w_j) \approx \frac{g(s_{k+1},\vec{\theta}_t) - g(s_k,\vec{\theta}_t)}{\Delta s}$$
where
$$\vec{\theta}_t = \frac{u_i\,\vec{e}_u(s_{k+1/2}) + D\,\vec{e}_v(s_{k+1/2}) + w_j\,\vec{e}_w}{\sqrt{u_i^2+D^2+w_j^2}},\qquad g(s,\vec{\theta}_t) = g_t(s,u',w'),$$
$$u' = D\,\frac{\vec{\theta}_t\cdot\vec{e}_u(s)}{\vec{\theta}_t\cdot\vec{e}_v(s)},\quad\text{and}\quad w' = D\,\frac{\vec{\theta}_t\cdot\vec{e}_w}{\vec{\theta}_t\cdot\vec{e}_v(s)}.$$

This approximates the derivative at source positions half-way between each pair of original positions.

The chain rule derivative takes into account the dependence of $\vec{\theta}(s,\vec{x},\gamma)$ on $q$. It is given by

$$\frac{d}{ds}g_t(s,u,w) = \frac{\partial g_t}{\partial s} + \frac{\partial g_t}{\partial u}\frac{\partial u}{\partial s} + \frac{\partial g_t}{\partial w}\frac{\partial w}{\partial s}$$
where
$$\frac{\partial u}{\partial s} = \frac{u^2+D^2}{D}\quad\text{and}\quad\frac{\partial w}{\partial s} = \frac{uw}{D}.$$


Figure 7.12: Intersection of kappa plane with detector plane

We then approximate the derivative by

$$g_t^{(1)}(s_{k+1/2}, u_{i+1/2}, w_{j+1/2}) \approx \sum_{I=i}^{i+1}\sum_{J=j}^{j+1}\frac{g_t(s_{k+1},u_I,w_J) - g_t(s_k,u_I,w_J)}{4\Delta s}$$
$$+ \frac{u_{i+1/2}^2+D^2}{D}\sum_{K=k}^{k+1}\sum_{J=j}^{j+1}\frac{g_t(s_K,u_{i+1},w_J) - g_t(s_K,u_i,w_J)}{4\Delta u}$$
$$+ \frac{u_{i+1/2}\,w_{j+1/2}}{D}\sum_{K=k}^{k+1}\sum_{I=i}^{i+1}\frac{g_t(s_K,u_I,w_{j+1}) - g_t(s_K,u_I,w_j)}{4\Delta w}.$$

This approximates the derivative at source positions half-way between each pair of original positions and at detector locations half-way between the original locations.

With interpolation, we can approximate

$$\partial_q[Df](s,\vec{\theta}) \approx g_t^{(1)}(s,u,w),$$

where $g_t^{(1)}$ is one of the two derivatives above.

The Hilbert transform is taken along kappa planes, so we reorganize the derivative data so that the rows are the intersections of kappa planes with the detector plane. Using geometric arguments (see Figure 7.12), it can be shown that

$$\frac{d\gamma}{\sin(\gamma)} = \frac{\sqrt{u^2+D^2+w^2}}{\sqrt{u'^2+D^2+w'^2}}\,\frac{du'}{(u'-u)}. \qquad (7.19)$$

Thus
$$\int_0^{2\pi}\frac{\partial_q[Df](\vec{y}(q),\vec{\theta}(s,\vec{x},\gamma))\big|_{q=s}}{\sin(\gamma)}\,d\gamma \approx \int_{-\infty}^{\infty}\frac{\sqrt{u^2+D^2+w^2}}{\sqrt{u'^2+D^2+w'^2}}\,g_t^{(1)}(s,u',w')\,\frac{du'}{(u'-u)}.$$

This is implemented in five steps: pre-weighting the projection data, forward height rebinning, performing the Hilbert transform, backward height rebinning, and post-weighting.


Pre-weighting the projection data is simply

$$g_t^{(2)}(s,u',w') = \frac{g_t^{(1)}(s,u',w')}{\sqrt{u'^2+D^2+w'^2}}.$$

A kappa plane intersects the tangent detector along a straight line. It is defined by the three points $\vec{y}(s)$, $\vec{y}(s+\psi)$, and $\vec{y}(s+2\psi)$. The line connecting $\vec{y}(s)$ and $\vec{y}(s+\psi)$ intersects the detector plane at
$$\left(\frac{D\sin(\psi)}{1-\cos(\psi)},\ \frac{DP\psi}{2\pi R(1-\cos(\psi))}\right).$$

Similarly, the line connecting $\vec{y}(s)$ and $\vec{y}(s+2\psi)$ intersects the detector plane at
$$\left(\frac{D\sin(2\psi)}{1-\cos(2\psi)},\ \frac{2DP\psi}{2\pi R(1-\cos(2\psi))}\right).$$

The line connecting these two points is
$$w_\kappa(u,\psi) = \frac{P\psi}{2\pi R}\left(D + \frac{u}{\tan(\psi)}\right). \qquad (7.20)$$

Figure 7.13 shows a typical set of lines showing the intersection of kappa planes with the detector.

Forward height rebinning is performed by the step

$$g_t^{(3)}(s,u',\psi) = g_t^{(2)}(s,u',w_\kappa(u',\psi))$$

for equally spaced samples of $\psi$ in the range $[-\pi/2 - \alpha_m, \pi/2 + \alpha_m]$, where $\alpha_m = I\Delta u/D$. The sampling in $u'$ is kept as $\Delta u$ so that linear interpolation can be used.

The Hilbert transform is the integral

$$g_t^{(4)}(s,u,\psi) = \int_{-\infty}^{\infty} g_t^{(3)}(s,u',\psi)\,\frac{du'}{(u-u')}.$$

This is an improper integral of both the first and second kind. It is an improper integral of the first kind because the limits of integration are $-\infty$ to $\infty$. This is not an issue here because $g_t^{(3)}(s,u,\psi)$ is 0 for $|u| > \frac{DR_0}{\sqrt{R^2-R_0^2}}$. It is an improper integral of the second kind because of the singularity at $u = u'$.

Let
$$G_t^{(3)}(u) = g_t^{(3)}(s,u,\psi)$$
and
$$H(u) = \frac{1}{u}.$$

Then the last integral can be written as

$$\int_{-\infty}^{\infty} H(u-u')\,G_t^{(3)}(u')\,du' = [H \star G_t^{(3)}](u)$$

where $\star$ is the convolution operator. Taking the Fourier transform of this and applying the convolution theorem (see [4], p. 108), we get

$$\mathcal{F}[H \star G_t^{(3)}] = [\mathcal{F}H][\mathcal{F}G_t^{(3)}].$$

The Fourier transform of H is −iπsgn (see [4], p. 365) where

$$\mathrm{sgn}(x) = \begin{cases} 1 & \text{if } x > 0,\\ 0 & \text{if } x = 0,\\ -1 & \text{if } x < 0.\end{cases}$$


Figure 7.13: Intersection of kappa planes with a tangent detector


| name of window | $B_A(U)$ for $-A/2 \le U \le A/2$ | $\rho_A(u)$ |
|---|---|---|
| band-limiting | $1$ | $\dfrac{\cos(\pi A u) - 1}{u}$ |
| cosine | $\cos(\pi U/A)$ | $\dfrac{A}{2}\left[\dfrac{\cos\pi(uA - 1/2) - 1}{uA - 1/2} + \dfrac{\cos\pi(uA + 1/2) - 1}{uA + 1/2}\right]$ |
| generalized Hamming | $\alpha + (1-\alpha)\cos(2\pi U/A)$ | $\dfrac{\alpha}{u}\left[\cos(\pi A u) - 1\right] + \dfrac{A(1-\alpha)}{2}\left[\dfrac{\cos\pi(uA - 1) - 1}{uA - 1} + \dfrac{\cos\pi(uA + 1) - 1}{uA + 1}\right]$ |

Table 7.4: Definitions of windows and filters.

It is easy to show that
$$H(u) = -2\pi\int_0^{\infty}\sin(2\pi uU)\,dU.$$

We replace $H$ with a band-limited approximation,
$$\rho_A(u) = -2\pi\int_0^{A/2} B_A(U)\sin(2\pi uU)\,dU,$$

where BA is a window function that satisfies the following criteria:

1. $0 \le B_A(U) \le 1$ for all $U$,

2. $B_A(U) = 0$ if $|U| > A/2$,

3. $B_A(U)$ is a monotonically nonincreasing function of $U$, and

4. $\lim_{A\to\infty} B_A(U) = 1$.

Some common window functions are given in Table 7.4. The filter imbedded plug-in implements the windowed filter. They all take a single parameter, the cutoff, which is a value between ZERO and 1.0 and represents the fraction of the cutoff frequency of $1/(2\Delta w)$, that is, $A = \text{cutoff}/\Delta w$. The generalized Hamming window takes the additional parameter $\alpha$. When $\alpha = 0.54$, the generalized Hamming window is called just the Hamming window and when $\alpha = 0.5$ it is called the Hanning window.
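As an illustration, the band-limiting row of Table 7.4 can be evaluated directly, taking care of the removable singularity at $u = 0$, where the limit of $(\cos(\pi Au) - 1)/u$ is 0. This is a sketch of that one table entry, not jSNARK's filter plug-in code:

```java
// Illustrative evaluation of the band-limiting filter of Table 7.4:
// rho_A(u) = (cos(pi*A*u) - 1) / u, with the u -> 0 limit handled.
class BandLimitingFilter {
    static double rho(double A, double u) {
        if (Math.abs(u) < 1e-12) {
            return 0.0; // limit of (cos(pi*A*u)-1)/u as u -> 0
        }
        return (Math.cos(Math.PI * A * u) - 1.0) / u;
    }
}
```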

Backward height rebinning is performed by the step

$$g_t^{(5)}(s,u,w) = g_t^{(4)}(s,u,\psi(u,w))$$

where, given $u$ and $w$, we use Newton's method to solve for the smallest magnitude $\psi$ that satisfies equation 7.20 above. Again, linear interpolation is used.
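A sketch of this Newton solve, using equation 7.20, $w_\kappa(u,\psi) = \frac{P\psi}{2\pi R}(D + u/\tan\psi)$, and its derivative with respect to $\psi$. The starting guess and the fixed iteration count are illustrative choices of this example, not necessarily jSNARK's:

```java
// Illustrative Newton iteration for the backward-height-rebinning solve:
// given u and w, find psi with w_kappa(u, psi) = w.
class BackwardRebinning {
    static double solvePsi(double u, double w, double R, double P, double D) {
        double C = P / (2 * Math.PI * R);
        double psi = w / (C * (D + u)); // rough small-psi starting guess
        for (int iter = 0; iter < 50; iter++) {
            double cot = 1.0 / Math.tan(psi);
            double f = C * psi * (D + u * cot) - w;          // w_kappa - w
            double sin = Math.sin(psi);
            double fp = C * (D + u * cot - u * psi / (sin * sin)); // d/dpsi
            psi -= f / fp;                                    // Newton step
        }
        return psi;
    }
}
```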

Post-weighting the projection data is simply

$$g_t^F(s,u,w) = \sqrt{u^2+D^2+w^2}\;g_t^{(5)}(s,u,w).$$

We now perform the back-projection operation by evaluating

$$f(\vec{x}) \approx \frac{-1}{2\pi}\int_{I_{PI}(\vec{x})}\frac{1}{|\vec{x}-\vec{y}(s)|}\,g_t^F(s,u,w)\,ds$$

where $(u,w)$ is the projection of the point $\vec{x}$ onto the detector provided it lies between the projections of the helix onto the detector. The interpolator plug-in is used for this step. The two projections are given by

$$w_{top}(u) = \frac{P}{2\pi RD}(u^2+D^2)\left(\frac{\pi}{2} - \tan^{-1}\left(\frac{u}{D}\right)\right)\quad\text{and}\quad w_{bot}(u) = -\frac{P}{2\pi RD}(u^2+D^2)\left(\frac{\pi}{2} + \tan^{-1}\left(\frac{u}{D}\right)\right).$$

A typical example of these curves is shown in Figure 7.14.


Figure 7.14: Tam-Danielsson window for the tangent detector

To avoid the effects of the sharp edges of $I_{PI}(\vec{x})$, we introduce a form of unitary function given by

$$\chi_a(u,w) = \begin{cases} 0 & \text{if } w > w_{top}(u) + a\Delta w,\\[0.5ex] \dfrac{w_{top}(u) + a\Delta w - w}{2a\Delta w} & \text{if } w_{top}(u) - a\Delta w < w < w_{top}(u) + a\Delta w,\\[0.5ex] 1 & \text{if } w_{bot}(u) + a\Delta w < w < w_{top}(u) - a\Delta w,\\[0.5ex] \dfrac{w - w_{bot}(u) + a\Delta w}{2a\Delta w} & \text{if } w_{bot}(u) - a\Delta w < w < w_{bot}(u) + a\Delta w,\\[0.5ex] 0 & \text{if } w < w_{bot}(u) - a\Delta w.\end{cases}$$

The back-projection step then becomes

$$f(\vec{x}) \approx \frac{-1}{2\pi}\int_{I_{PI}(\vec{x})}\frac{1}{|\vec{x}-\vec{y}(s)|}\,\chi_a(u,w)\,g_t^F(s,u,w)\,ds.$$

The detector must be large enough to contain the Tam-Danielsson window. This means that the number of detector rows must be greater than
$$2\left\lceil\frac{a - w_{bot}(\text{detectorWidth}/2)}{\Delta w}\right\rceil + 1.$$

Arc detector. We can relate the measured projection data to the cone beam transform through the following equations.

$$[Df](\vec{y}(s),\vec{\theta}) = g_a(s,\alpha,w),$$
where
$$\vec{\theta} = \frac{D\sin(\alpha)\,\vec{e}_u(s) + D\cos(\alpha)\,\vec{e}_v(s) + w\,\vec{e}_w}{\sqrt{D^2+w^2}},\qquad \alpha = \tan^{-1}\frac{\vec{\theta}\cdot\vec{e}_u(s)}{\vec{\theta}\cdot\vec{e}_v(s)},\qquad w = D\,\frac{\vec{\theta}\cdot\vec{e}_w}{\sqrt{1-(\vec{\theta}\cdot\vec{e}_w)^2}},\quad\text{and}$$

$\vec{e}_u(s)$, $\vec{e}_v(s)$, $\vec{e}_w$ are as in the tangent detector case above. $g_a(s,\alpha,w)$ is known at a finite number of values, $g_a(s_k,\alpha_i,w_j)$, where $s_k = k\Delta s$ for $-K < k < K$, $\alpha_i = i\Delta\alpha$ for $-I < i < I$, and $w_j = j\Delta w$ for $-J < j < J$.


The values between the sampled positions are obtained by interpolation. The interpolator plug-in determines whether nearest neighbor or bilinear interpolation is used.

Two methods of approximating the partial derivative have been implemented, the simple derivative and the chain rule derivative.

In the simple derivative, we assume that $\vec{\theta}(s,\vec{x},\gamma)$ is independent of $q$. We approximate the derivative as

$$g_a^{(1)}(s_{k+1/2},\alpha_i,w_j) \approx \frac{g(s_{k+1},\vec{\theta}_a) - g(s_k,\vec{\theta}_a)}{\Delta s}$$
where
$$\vec{\theta}_a = \frac{D\sin(\alpha_i)\,\vec{e}_u(s_{k+1/2}) + D\cos(\alpha_i)\,\vec{e}_v(s_{k+1/2}) + w_j\,\vec{e}_w}{\sqrt{D^2+w_j^2}},$$
$$g(s,\vec{\theta}_a) = g_a(s,\alpha',w_j),\quad\text{and}\quad \alpha' = \tan^{-1}\frac{\vec{\theta}_a\cdot\vec{e}_u(s)}{\vec{\theta}_a\cdot\vec{e}_v(s)}.$$

This approximates the derivative at source positions half-way between each pair of original positions.

The chain rule derivative takes into account the dependence of $\vec{\theta}(s,\vec{x},\gamma)$ on $q$. It is given by

$$\frac{d}{ds}g_a(s,\alpha,w) = \left(\frac{\partial g_a}{\partial s} + \frac{\partial g_a}{\partial\alpha}\frac{\partial\alpha}{\partial s} + \frac{\partial g_a}{\partial w}\frac{\partial w}{\partial s}\right)(s,\alpha,w)$$
where
$$\frac{\partial\alpha}{\partial s} = 1\quad\text{and}\quad\frac{\partial w}{\partial s} = 0.$$

We then approximate the derivative by

$$g_a^{(1)}(s_{k+1/2},\alpha_{i+1/2},w_j) \approx \frac{g_a(s_{k+1},\alpha_i,w_j) - g_a(s_k,\alpha_i,w_j)}{2\Delta s} + \frac{g_a(s_{k+1},\alpha_{i+1},w_j) - g_a(s_k,\alpha_{i+1},w_j)}{2\Delta s}$$
$$+ \frac{g_a(s_k,\alpha_{i+1},w_j) - g_a(s_k,\alpha_i,w_j)}{2\Delta\alpha} + \frac{g_a(s_{k+1},\alpha_{i+1},w_j) - g_a(s_{k+1},\alpha_i,w_j)}{2\Delta\alpha}.$$

This approximates the derivative at source positions half-way between each pair of original positions and half-way between the detector positions in the $\alpha$ direction.

With interpolation, we can approximate

$$\partial_q[Df](s,\vec{\theta}) \approx g_a^{(1)}(s,\alpha,w),$$

where $g_a^{(1)}$ is one of the two derivatives above.

The Hilbert transform is taken along kappa planes, so we reorganize the derivative data so that the rows are the intersections of kappa planes with the detector plane. From equation 7.19 above and the relationship between the tangent and arc detector planes (equation 7.6), it can be shown that

$$\frac{d\gamma}{\sin(\gamma)} = \frac{\sqrt{D^2+w^2}}{\sqrt{D^2+w'^2}}\,\frac{d\alpha'}{\sin(\alpha'-\alpha)}.$$

Thus
$$\int_0^{2\pi}\frac{\partial_q[Df](\vec{y}(q),\vec{\theta}(s,\vec{x},\gamma))\big|_{q=s}}{\sin(\gamma)}\,d\gamma \approx \int_{-\infty}^{\infty}\frac{\sqrt{D^2+w^2}}{\sqrt{D^2+w'^2}}\,g_a^{(1)}(s,\alpha',w')\,\frac{d\alpha'}{\sin(\alpha'-\alpha)}.$$

This is implemented in five steps: pre-weighting the projection data, forward height rebinning, performing the Hilbert transform, backward height rebinning, and post-weighting.


Figure 7.15: Intersection of kappa planes with an arc detector

Pre-weighting the projection data is simply

$$g_a^{(2)}(s,\alpha',w') = \frac{g_a^{(1)}(s,\alpha',w')}{\sqrt{D^2+w'^2}}.$$

Using equation 7.20 and the relationship between the tangent detector and the arc detector (equation 7.6), we get

$$w_\kappa(\alpha,\psi) = \frac{DP}{2\pi R}\left(\psi\cos\alpha + \frac{\psi}{\tan\psi}\sin\alpha\right). \qquad (7.21)$$

Figure 7.15 shows a typical set of lines showing the intersection of kappa planes with the detector.

Forward height rebinning is performed by the step

$$g_a^{(3)}(s,\alpha',\psi) = g_a^{(2)}(s,\alpha',w_\kappa(\alpha',\psi))$$

for equally spaced samples of $\psi$ in the range $[-\pi/2-\alpha_m, \pi/2+\alpha_m]$, where $\alpha_m = \sin^{-1}\left(\frac{I\Delta u}{D}\right)$. The sampling in $\alpha'$ is kept as $\Delta\alpha$ so that linear interpolation can be used.

The Hilbert transform is the integral

$$g_a^{(4)}(s,\alpha,\psi) = \int_0^{2\pi} g_a^{(3)}(s,\alpha',\psi)\,\frac{d\alpha'}{\sin(\alpha'-\alpha)}.$$


Figure 7.16: Tam-Danielsson window for the arc detector

Using the following,
$$\frac{1}{\sin(u)} = \frac{u}{\sin(u)}\cdot\frac{1}{u} \approx \frac{u}{\sin(u)}\,\rho_A(u),$$

where $\rho_A(u)$ is one of the regularized approximations of $1/u$ given in Table 7.4, we get

$$g_a^{(4)}(s,\alpha,\psi) = \int_0^{2\pi}\frac{\alpha'-\alpha}{\sin(\alpha'-\alpha)}\,\rho_A(\alpha'-\alpha)\,g_a^{(3)}(s,\alpha',\psi)\,d\alpha'.$$

Backward height rebinning is performed by the step

$$g_a^{(5)}(s,\alpha,w) = g_a^{(4)}(s,\alpha,\psi(\alpha,w))$$

where, given $\alpha$ and $w$, we use Newton's method to solve for the smallest magnitude $\psi$ that satisfies equation 7.21 above. Again, linear interpolation is used.

Post-weighting the projection data is simply

$$g_a^F(s,\alpha,w) = \sqrt{D^2+w^2}\;g_a^{(5)}(s,\alpha,w).$$

We now perform the back-projection operation by evaluating

$$f(\vec{x}) \approx \frac{-1}{2\pi}\int_{I_{PI}(\vec{x})}\frac{1}{|\vec{x}-\vec{y}(s)|}\,g_a^F(s,\alpha,w)\,ds$$

where $(\alpha,w)$ is the projection of the point $\vec{x}$ onto the detector provided it lies between the projections of the helix onto the detector. The interpolator plug-in is used for this step. The two projections are given by

$$w_{top}(\alpha) = \frac{DP}{2\pi R}\,\frac{\pi/2-\alpha}{\cos^2\alpha}\quad\text{and}\quad w_{bot}(\alpha) = -\frac{DP}{2\pi R}\,\frac{\pi/2+\alpha}{\cos^2\alpha}.$$

A typical example of these curves is shown in Figure 7.16.

7.12. RAY ORDERS 121

To avoid the effects of the sharp edges of $I_{PI}(\vec{x})$, we introduce a form of unitary function given by

$$\chi_a(\alpha,w) = \begin{cases} 0 & \text{if } w > w_{top}(\alpha) + a\Delta w,\\[0.5ex] \dfrac{w_{top}(\alpha) + a\Delta w - w}{2a\Delta w} & \text{if } w_{top}(\alpha) - a\Delta w < w < w_{top}(\alpha) + a\Delta w,\\[0.5ex] 1 & \text{if } w_{bot}(\alpha) + a\Delta w < w < w_{top}(\alpha) - a\Delta w,\\[0.5ex] \dfrac{w - w_{bot}(\alpha) + a\Delta w}{2a\Delta w} & \text{if } w_{bot}(\alpha) - a\Delta w < w < w_{bot}(\alpha) + a\Delta w,\\[0.5ex] 0 & \text{if } w < w_{bot}(\alpha) - a\Delta w.\end{cases}$$

The back-projection step then becomes

$$f(\vec{x}) \approx \frac{-1}{2\pi}\int_{I_{PI}(\vec{x})}\frac{1}{|\vec{x}-\vec{y}(s)|}\,\chi_a(\alpha,w)\,g_a^F(s,\alpha,w)\,ds.$$

The detector must be large enough to contain the Tam-Danielsson window. This means that the number of detector rows must be greater than
$$2\left\lceil\frac{a - w_{bot}(\text{detectorWidth}/(2\times\text{sourceToDetector}))}{\Delta w}\right\rceil + 1.$$

7.12 Ray orders

The series expansion reconstruction algorithms often work on just one ray at a time. The order in which rays are processed can affect the rate at which the algorithm converges. The ray orders are various methods of ordering the rays.

7.12.1 2D ray orders

7.12.1.1 Efficient

The Efficient plug-in orders the projections and rays as described in [27]. The principle behind this ordering is to order the projections and the rays within the projections such that each action is as independent as possible from the previous action.

From [27], let P be some number and let

P = pU × . . .× p2 × p1 (7.22)

be the unique decomposition of $P$ into its prime factors with $p_U \ge \ldots \ge p_2 \ge p_1$. Let $T$ be the set of $U$-dimensional vectors $t$, whose $u$'th component $t_u$ is an integer which satisfies

$$0 \le t_u < p_u, \qquad (7.23)$$

for 1 ≤ u ≤ U . With any such vector t, we associate an integer

$$v(t) = (p_U \times p_{U-1} \times \cdots \times p_2 \times t_1) + (p_U \times p_{U-1} \times \cdots \times p_3 \times t_2) + \cdots + (p_U \times t_{U-1}) + t_U. \qquad (7.24)$$

It is easy to prove that v is a one-to-one mapping of T onto the range [0, P ) of integers.

We now define, inductively, another one-to-one mapping $\tau$ of the range $[0, P)$ of integers onto $T$. First, $\tau(0)$ is the vector of all 0's. Suppose now that $\tau(p-1) = (t_1, t_2, \ldots, t_U)$. Let $v$ be the smallest integer such that $t_v \ne p_v - 1$. Then $\tau(p)$ is the vector whose $u$'th component is 0 for $1 \le u < v$, is $t_u + 1$ for $u = v$, and is $t_u$ for $v < u \le U$. Finally, our permutation $\pi$ of the integers in the range $[0, P)$ is defined by $\pi(p) = v(\tau(p))$.

The 2D Efficient plug-in applies the above permutation to the projections (P = numberOfProjections) and within that to the rays in the projection (P = numberOfRays). In order to achieve efficiency, it is important that the P be decomposable into many (preferably small) prime factors.
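The permutation above amounts to counting in a mixed radix whose digits are the prime factors (with $t_1$ the least significant digit of the counting) and reading the digits back through $v$ with $t_1$ most significant. A sketch, assuming $P \ge 2$; for $P = 8$ it reduces to the familiar bit-reversal order:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative implementation of the "efficient" permutation pi(p) = v(tau(p)).
class EfficientOrder {
    // prime factors of P in ascending order: [p_1, p_2, ..., p_U]
    static List<Integer> primeFactors(int P) {
        List<Integer> f = new ArrayList<>();
        for (int d = 2; d * d <= P; d++) {
            while (P % d == 0) { f.add(d); P /= d; }
        }
        if (P > 1) f.add(P);
        return f;
    }

    static int[] permutation(int P) {
        List<Integer> p = primeFactors(P);
        int U = p.size();
        int[] t = new int[U];   // digits, t[0] = t_1, initially all zero
        int[] pi = new int[P];
        for (int n = 0; n < P; n++) {
            // v(t): Horner evaluation with t_1 most significant
            int v = t[0];
            for (int u = 1; u < U; u++) {
                v = v * p.get(u) + t[u];
            }
            pi[n] = v;
            // tau: increment the counter with t_1 least significant
            for (int u = 0; u < U; u++) {
                if (t[u] != p.get(u) - 1) { t[u]++; break; }
                t[u] = 0;
            }
        }
        return pi;
    }
}
```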


7.12.1.2 Sequential

The 2D Sequential plug-in has two optional parameters, the projection increment and the ray increment. They default to 1 if not specified.

Let P be some positive integer and inc another integer such that 0 < inc < P. The following Java code will list the sequential permutation of the integers [0, P).

for (int i = 0; i < inc; ++i) {
    for (int j = i; j < P; j += inc) {
        System.out.println(j);
    }
}

The 2D Sequential plug-in applies the above permutation to the projections (P = numberOfProjections) and within that to the rays in the projection (P = numberOfRays), where inc is the projection increment or the ray increment.

7.12.1.3 Shuffle

The Shuffle plug-in presents the projections in a random order and within each projection presents the rays in a random order. If there are P projections numbered from 0 to P − 1 and R rays numbered from 0 to R − 1, shuffle will supply the projections as a random permutation of the set [0, 1, · · · , P − 1] and for each projection will supply the rays as a random permutation of the set [0, 1, · · · , R − 1]. The ray order permutation is recomputed for each projection.
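Such random permutations can be generated with a standard Fisher-Yates shuffle; a sketch (whether jSNARK uses exactly this method is not specified here):

```java
import java.util.Random;

// Illustrative Fisher-Yates random permutation of the integers [0, n).
class ShuffleOrder {
    static int[] permutation(int n, Random rng) {
        int[] a = new int[n];
        for (int i = 0; i < n; i++) a[i] = i;
        for (int i = n - 1; i > 0; i--) {
            int j = rng.nextInt(i + 1); // pick a position in [0, i]
            int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
        }
        return a;
    }
}
```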

7.12.2 3D ray orders

7.12.2.1 Efficient

The 3D Efficient plug-in applies the permutation π(p) defined in Section 7.12.1.1 to the projections (P = numberOfProjections) and within that to the y coordinate of the ray (P = detectorRows) and within that the x coordinate of the ray (P = numberOfRays). In order to achieve efficiency, it is important that the P be decomposable into many (preferably small) prime factors.

7.12.2.2 Sequential

The 3D Sequential plug-in has two optional parameters, the projection increment and the ray increment. They default to 1 if not specified.

The 3D Sequential plug-in applies the permutation described in Section 7.12.1.2 to the projections (P = numberOfProjections) and within that to the y coordinate of the ray (P = detectorRows) and within that the x coordinate of the ray (P = numberOfRays), where inc is the projection increment or the ray increment.

7.12.2.3 Shuffle

The Shuffle plug-in presents the projections in a random order and within each projection presents the rays in the detector's Y direction in a random order and within that the rays in the detector's X direction in a random order. Shuffle will supply the projections as a random permutation of the set [0, 1, · · · , numberOfProjections − 1] and for each projection will supply the y coordinate of the ray as a random permutation of the set [0, 1, · · · , detectorRows − 1] and for each such coordinate will supply the x coordinate as a random permutation of the set [0, 1, · · · , numberOfRays − 1]. The ray order permutations are recomputed after each use.

7.13 Termination tests

Most series expansion reconstruction algorithms almost never actually converge and some get close to an approximate solution and then diverge (see [3]). The termination tests are heuristic routines to decide when

7.14. POST PROCESSING OPERATIONS 123

an algorithm should be terminated. For time varying reconstructions, the termination test succeeds when it succeeds for all time slices simultaneously.

7.13.1 2D and 3D termination tests

7.13.1.1 Iteration

This is the simplest possible termination test. It merely runs the algorithm for a fixed number of iterations.

7.13.1.2 Variance

The variance plug-in takes a single required parameter, epsilon. Let $S$ be the set of grid points in the reconstruction, $\rho_q(\vec{p})$ be the basis element coefficient at point $\vec{p} \in S$ at iteration $q$, and
$$\bar{\rho}_q = \frac{1}{|S|}\sum_{\vec{p}\in S}\rho_q(\vec{p}).$$

We define

$$\upsilon_q = \frac{1}{|S|}\sum_{\vec{p}\in S}\left(\rho_q(\vec{p}) - \bar{\rho}_q\right)^2.$$

The variance plug-in stops the iterative process at iteration $q > 1$ if $|\upsilon_q - \upsilon_{q-1}| < \text{epsilon} \times \upsilon_q$ or if $\upsilon_q < ZERO$.
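The variance criterion above can be sketched as follows; the ZERO constant here is an illustrative stand-in for jSNARK's small-number threshold:

```java
// Illustrative variance termination test.
class VarianceTest {
    static final double ZERO = 1e-20; // illustrative stand-in for jSNARK's ZERO

    // population variance of the coefficients
    static double variance(double[] rho) {
        double mean = 0;
        for (double r : rho) mean += r;
        mean /= rho.length;
        double v = 0;
        for (double r : rho) v += (r - mean) * (r - mean);
        return v / rho.length;
    }

    // stop when the variance change is relatively small, or the variance
    // itself is essentially zero
    static boolean shouldStop(double vPrev, double vCur, double epsilon) {
        return Math.abs(vCur - vPrev) < epsilon * vCur || vCur < ZERO;
    }
}
```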

7.13.1.3 Residual

The residual plug-in takes a single required parameter, epsilon. Let $S$ be the set of grid points in the reconstruction, $\rho_q(\vec{p})$ be the basis element coefficient at point $\vec{p} \in S$ at iteration $q$, $R$ be the set of all rays over all projections, $P(\vec{p}, r)$ be the line integral over the ray $r \in R$ with the basis element centered at $\vec{p}$, and $P_r$ be the real raysum for ray $r \in R$. The pseudo raysum of the reconstruction is

$$P_r^q = \sum_{\vec{p}\in S}\rho_q(\vec{p}) \times P(\vec{p}, r).$$

The residual is
$$\frac{1}{|R|}\sqrt{\sum_{r\in R}\left(P_r^q - P_r\right)^2}.$$

The residual plug-in stops the iterative process at iteration q if the residual is less than epsilon.

7.14 Post processing operations

Post processing operations are algorithms that modify a reconstructed image in some way.

7.14.1 2D and 3D post processing operations

7.14.1.1 Normalize

The normalize plug-in modifies a reconstruction so that the average value of the reconstruction is the same as the average value estimated from the projection data. It has one optional parameter, type. The type parameter can be either additive or multiplicative. If additive is specified, the reconstruction is modified by adding a correction term to each basis element coefficient. If multiplicative is specified, the reconstruction is modified by multiplying each basis element coefficient by a correction term. If the type is not specified, additive is assumed.
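A sketch of the two correction types on a flat array of coefficients (the class and method names are illustrative):

```java
// Illustrative additive and multiplicative normalization of coefficients
// toward a target average estimated from the projection data.
class Normalize {
    static void additive(double[] c, double targetAverage) {
        double avg = average(c);
        for (int i = 0; i < c.length; i++) c[i] += targetAverage - avg;
    }

    static void multiplicative(double[] c, double targetAverage) {
        double avg = average(c);
        if (avg == 0) return; // cannot rescale a zero-average image
        for (int i = 0; i < c.length; i++) c[i] *= targetAverage / avg;
    }

    static double average(double[] c) {
        double s = 0;
        for (double x : c) s += x;
        return s / c.length;
    }
}
```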


7.14.1.2 Constrain

The constrain plug-in modifies the reconstruction so that basis element coefficients are constrained to lie between two limits. The plug-in has two optional parameters, lowValue and highValue. Basis element coefficients less than lowValue are set to lowValue and coefficients greater than highValue are set to highValue. Omitting both limits leaves the picture unchanged.

7.14.1.3 Contour

The contour plug-in converts the reconstruction into a two valued picture. It has four required parameters, type, level, lowValue, and highValue. The type parameter specifies whether or not the level parameter will be used. It has two values, averageDensity and contourLevel. If contourLevel is specified, those basis element coefficients smaller than level are set to lowValue; otherwise, they are set to highValue. If averageDensity is specified, level will be computed such that the resulting picture has an average density as close to the estimated average density as possible. In this case, lowValue must be less than the estimated average density and highValue must be greater.

7.14.1.4 Smooth

The smooth plug-in performs a weighted average of a basis element coefficient with that of its neighbors. It has four parameters, threshold, weight1, weight2, and weight3. Let $i$ be the index of some basis element, $N(i)$ be the set of neighbors of that basis element, $C(j)$ be the coefficient of basis element $j$, and $w(j)$ be the weight of the basis element. Let $S(i) = \{j \mid j \in N(i) \text{ and } |C(i) - C(j)| < \text{threshold}\}$. That is, the subset of neighbors whose coefficient is within threshold of $C(i)$. The new value of the basis element is given by

$$\frac{w(i)C(i) + \sum_{j\in S(i)} w(j)C(j)}{w(i) + \sum_{j\in S(i)} w(j)}.$$

w(i) is always weight1. For 2D pictures on a square grid, the four nearest neighbors are given weight2 and the four diagonal neighbors are given weight3. For 2D pictures on a hexagonal grid, the six nearest neighbors are given weight2 and weight3 is not used. For 3D pictures on the simple cubic grid, the six nearest face neighbors are given weight2, the twelve next nearest edge neighbors are given weight3, and the eight corner neighbors are not used. For 3D pictures on the face centered cubic grid, the 12 nearest neighbors are given weight2 and weight3 is not used. For 3D pictures on the body centered cubic grid, the eight nearest neighbors are given weight2 and the six next nearest neighbors are given weight3.
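For a single basis element, the weighted average above can be sketched as follows, with the neighbor weights (weight2 or weight3) passed in per neighbor:

```java
// Illustrative smoothing of one basis element coefficient: a weighted
// average over the neighbors whose coefficients are within threshold.
class Smooth {
    // c0: center coefficient; neighbors: neighbor coefficients;
    // weights: the weight (weight2 or weight3) for each neighbor;
    // w0: weight1, the center weight.
    static double smoothed(double c0, double[] neighbors, double[] weights,
                           double w0, double threshold) {
        double num = w0 * c0, den = w0;
        for (int j = 0; j < neighbors.length; j++) {
            if (Math.abs(c0 - neighbors[j]) < threshold) {
                num += weights[j] * neighbors[j];
                den += weights[j];
            }
        }
        return num / den;
    }
}
```

Neighbors whose coefficient differs from the center by threshold or more simply drop out of both the numerator and the denominator.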

7.15 Figure of merit

A figure of merit is a way of comparing a phantom with a reconstruction that results in a single number that represents the "goodness" of a reconstruction according to some criterion. The value should have the property that higher values indicate a better reconstruction according to the criterion. For the following, $N$ will be the number of regions of interest, $T$ will be the number of time intervals, $k$ will be the $k$th region of interest, $0 \le k < N$, $S_k(t)$ will be the set of grid points in the $k$th region of interest at time $t$, $0 \le t < T$, $f^{(0)}(\vec{p},t)$ is the density of the phantom at the point $\vec{p}$ and time $t$, and similarly $f^{(q)}(\vec{p},t)$ is the value of the $q$th iteration of a reconstruction algorithm, and

$$A^{(q)}_{S_k(t)} = \frac{1}{|S_k(t)|}\sum_{\vec{p}\in S_k(t)} f^{(q)}(\vec{p}, t)$$

is the average density of the image $f^{(q)}$ over the $k$th region of interest at time $t$.

7.15.1 2D and 3D figures of merit

7.15.1.1 Pointwise accuracy

This is the pointwise accuracy figure of merit described in [29]. It requires that a number of shapes (regions of interest) are specified. (See Section 4.4.1 for a description of regions of interest.) For each region of interest,

7.16. EVALUATOR 125

the root mean square distance between the phantom and reconstruction is computed. The pointwise accuracy is the negative average of these values.

More precisely, the pointwise accuracy of a reconstruction at iteration q is given by

$$-\sqrt{\frac{\sum_{t=0}^{T-1}\sum_{k=0}^{N-1}\sum_{\vec{p}\in S_k(t)}\left(f^{(q)}(\vec{p},t) - f^{(0)}(\vec{p},t)\right)^2}{\sum_{t=0}^{T-1}\sum_{k=0}^{N-1}|S_k(t)|}}.$$

7.15.1.2 Structural accuracy

This is the structural accuracy figure of merit described in [29]. It requires that a number of shapes (regions of interest) are specified. (See Section 4.4.1 for a description of regions of interest.) For each region of interest, the absolute value of the difference between the average density of the phantom and reconstruction is computed. The structural accuracy is the negative average of these values.

More precisely, the structural accuracy of a reconstruction at iteration q is

$$-\frac{1}{NT}\sum_{t=0}^{T-1}\sum_{k=0}^{N-1}\left|A^{(q)}_{S_k(t)} - A^{(0)}_{S_k(t)}\right|.$$

7.15.1.3 Hit ratio

This is the hit ratio figure of merit described in [29]. It requires that a number of shapes (regions of interest) are specified. (See Section 4.4.1 for a description of regions of interest.) It combines them in pairs. If there are an odd number of shapes, the last shape is ignored and a warning is issued. For each pair of shapes it computes the average density of the phantom and reconstruction in each of the shapes. If the shape with the higher average density in the phantom is also the shape with the higher density in the reconstruction, a hit occurs. The hit ratio is simply the number of hits divided by the number of pairs. For time varying reconstructions, the calculation is performed over all the time intervals.

More precisely, the hit ratio of a reconstruction at iteration q is given by

$$\frac{|H|}{NT},$$
where
$$H = \left\{kT+t \,\middle|\, A^{(q)}_{S_{2k}(t)} > A^{(q)}_{S_{2k+1}(t)} \text{ and } A^{(0)}_{S_{2k}(t)} > A^{(0)}_{S_{2k+1}(t)}\right\} \cup \left\{kT+t \,\middle|\, A^{(q)}_{S_{2k}(t)} < A^{(q)}_{S_{2k+1}(t)} \text{ and } A^{(0)}_{S_{2k}(t)} < A^{(0)}_{S_{2k+1}(t)}\right\}.$$

7.16 Evaluator

An evaluator is a way of comparing a phantom with a reconstruction that may result in multiple values being output.

7.16.1 2D and 3D evaluators

7.16.1.1 Distance

The distance plug-in computes a number of distance measures comparing the phantom with a reconstruction. It optionally may take a number of shapes (regions of interest). (See Section 4.4.1 for a description of regions of interest.) If no region of interest is specified, the entire reconstruction region is the region of interest. For the phantom and each time interval and each region of interest, the number of grid points in the region, the average density in the region, and the variance and standard deviation of the densities in the region are computed. For each reconstruction the same values are computed along with the distance and relative error measures comparing the reconstruction with the phantom.


Let $f^{(0)}$ be the digitized phantom as defined by equation 2.2 and $f^{(q)}$ be the digitized reconstruction at iteration $q$. Let $S_k(t)$ be the set of grid points that lie in the $k$th region of interest at time $t$. Let $\alpha_k(t) = |S_k(t)|$. Then,

$$\bar{f}^{(q)}_k(t) = \frac{1}{\alpha_k(t)}\sum_{\vec{p}\in S_k(t)} f^{(q)}(\vec{p},t),$$

$$\nu^{(q)}_k(t) = \frac{1}{\alpha_k(t)}\sum_{\vec{p}\in S_k(t)}\left(f^{(q)}(\vec{p},t) - \bar{f}^{(q)}_k(t)\right)^2,$$

$$\sigma^{(q)}_k(t) = \sqrt{\nu^{(q)}_k(t)},$$

$$\delta^{(q)}_k(t) = \begin{cases}\dfrac{1}{\sigma^{(q)}_k(t)}\sqrt{\dfrac{1}{\alpha_k(t)}\displaystyle\sum_{\vec{p}\in S_k(t)}\left(f^{(q)}(\vec{p},t) - f^{(0)}(\vec{p},t)\right)^2} & \text{if } \sigma^{(q)}_k(t) > ZERO,\\[2ex] \sqrt{\displaystyle\sum_{\vec{p}\in S_k(t)}\left(f^{(q)}(\vec{p},t) - f^{(0)}(\vec{p},t)\right)^2} & \text{if } \sigma^{(q)}_k(t) \le ZERO,\end{cases}$$

$$m = \sum_{\vec{p}\in S_k(t)}\left|f^{(0)}(\vec{p},t)\right|,$$

$$R^{(q)}_k(t) = \begin{cases}\left(\displaystyle\sum_{\vec{p}\in S_k(t)}\left|f^{(q)}(\vec{p},t) - f^{(0)}(\vec{p},t)\right|\right)/m & \text{if } m > ZERO,\\[2ex] \displaystyle\sum_{\vec{p}\in S_k(t)}\left|f^{(q)}(\vec{p},t) - f^{(0)}(\vec{p},t)\right| & \text{if } m \le ZERO.\end{cases}$$

The average value is given by $\bar{f}^{(q)}_k(t)$, the variance by $\nu^{(q)}_k(t)$, the standard deviation by $\sigma^{(q)}_k(t)$, the distance by $\delta^{(q)}_k(t)$, and the relative error by $R^{(q)}_k(t)$.

7.16.1.2 Residual

The residual evaluator reports on the unnormalized residual described in Section 7.13.1.3 above.

7.16.2 2D Evaluators

7.16.2.1 Lines

Lines will output the values of the phantom, the reconstruction, and their difference for up to four lines parallel to either the X or Y axis. The position of each line is given as the distance from the origin using the same units as the grid spacing.

7.16.3 3D Evaluators

7.16.3.1 Line

This output class will output the values of the phantom, the reconstruction, and their difference. The line can be parallel to either the X, Y, or Z axis. It is specified by providing the coordinates where the line intersects the X-Y, X-Z, or Y-Z plane. The coordinates are given in the same units as the grid spacing.

7.17 Output

The output classes are ways of displaying phantoms and reconstructions, often with images. All plug-ins that output images take a required parameter imageFormat which specifies the format of the image: GIF, BMP, JPEG, etc. They also take two optional parameters, lowerBound and upperBound. If lowerBound is specified, then image pixels that are smaller than lowerBound are set to that value. Similarly, if upperBound is specified, then image pixels that are greater than upperBound are set to that value. After this processing, the largest and smallest pixel values of the image are determined and the output image is created, where each output pixel is given the value ⌊256 × (x − smallest)/(largest − smallest)⌋, where x is the original pixel value.
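A minimal sketch of this clamp-and-scale step follows. The class name is hypothetical, and clamping the result to 255 is my addition, since a pixel equal to the largest value would otherwise map to 256.

```java
import java.util.Arrays;

// Illustrative sketch of the image-output pixel processing described above.
public class GrayScale {
    public static int[] toGray(double[] pixels, Double lowerBound, Double upperBound) {
        double[] p = pixels.clone();
        // Optional clamping to lowerBound/upperBound, as described in the text.
        for (int i = 0; i < p.length; i++) {
            if (lowerBound != null && p[i] < lowerBound) p[i] = lowerBound;
            if (upperBound != null && p[i] > upperBound) p[i] = upperBound;
        }
        // Determine the smallest and largest pixel values after clamping.
        double lo = Double.POSITIVE_INFINITY, hi = Double.NEGATIVE_INFINITY;
        for (double v : p) { lo = Math.min(lo, v); hi = Math.max(hi, v); }
        int[] out = new int[p.length];
        if (hi == lo) return out; // constant image: map everything to 0
        for (int i = 0; i < p.length; i++) {
            int g = (int) Math.floor(256.0 * (p[i] - lo) / (hi - lo));
            out[i] = Math.min(g, 255); // keep the value in the 8-bit range
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(toGray(new double[]{0.0, 1.0, 2.0}, null, null)));
        // -> [0, 128, 255]
    }
}
```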


7.17.1 2D output

7.17.1.1 DisplaySinogram

This output class creates an image of the projection data where each row of the image is a projection. This works best when the projections are equally spaced and the number of projections is roughly at least as large as the number of rays in a projection.

7.17.1.2 ImageOutput

This output class creates images of a phantom and/or reconstruction in a number of formats.

7.17.2 3D output

7.17.2.1 ConvertToChimera

This output class converts 3D images into a format that can be read by the Chimera⁷ program. Chimera is a program that displays 3D surfaces. Of the many formats accepted by Chimera, I decided to use the SPIDER⁸ format. To display this output with Chimera, click the File=>Open... menu item. In the open dialog box, navigate to the Spider file this command generated and click on it.

7.17.2.2 ConvertToSlicer

This output class converts 3D images into a format that can be read by the 3DSlicer⁹ program. 3DSlicer is a program that displays slices of 3D objects. It is useful for looking inside 3D objects. Of the many formats accepted by 3D Slicer, I decided to use the GE Signa format¹⁰. To display this output with 3DSlicer, click the “File=>Add Volume...” menu item. In the add volume dialog box, navigate to the directory where the slicer files were created and click on the *.001 file.

7.17.2.3 DisplayProjections

This output class converts the 2D projections into images. It reuses the 2D image output methods. As a result, the images have names of the form <base>_pN_tM.<ext>, where <base> is the base name of the projection file, N is the projection number, M is the time interval, and <ext> is the appropriate extension for the image format.

7.17.2.4 Slices

This output class converts slices of a 3D phantom or reconstruction into 2D images. The slices are planes that are perpendicular to the X, Y, or Z axis. The position of a slice is given as its distance from the origin in the same units as the grid spacing.

7. http://www.cgl.ucsf.edu/chimera/
8. http://www.wadsworth.org/spider_doc/spider/docs/spider.html
9. http://www.slicer.org/
10. See http://www.dclunie.com/medical-image-faq/html/part3.html


Appendix A

Mathematical symbols and notation

⌈x⌉ Ceiling. The smallest integer greater than or equal to x.

⌊x⌋ Floor. The greatest integer less than or equal to x.

|S| If S is a number, then |S| is the absolute value of S or if S is a set, |S| is the number of elements in S.

•⃗ (Section 2.4) a point or vector in R2 or R3.

x⃗ · y⃗ The inner product of vectors x⃗ and y⃗.

f(x⃗, t) (Section 2.4) an image or picture.

bn(x⃗) (Equation 2.1) the nth basis function evaluated at the point x⃗.

cn(t) (Equation 2.2) the coefficient of the nth basis element at time t.

f (0)(x⃗,t) (Equation 2.2) a digitized picture.

c(q)n (Equation 2.3) the coefficient of the nth basis element of the qth iteration of a reconstruction method.

f (q)(x⃗) (Equation 2.3) the digitized image produced at the qth iteration of a reconstruction method.

Rθ (Section 2.7) rotation in R2 by the angle θ.

NERGY (Section 4.2.1.1) the number of energy levels in a simulated polychromatic X-ray.

ei (Section 4.2.1.1) the fraction of the energy at energy level i.

Bi (Section 4.2.1.1) the background absorption at energy level i.

NOBJ (Section 4.2.1.1) the number of shapes describing a phantom.

ρj(x⃗, t) (Section 4.2.1.2) nominal value of the jth shape at the point x⃗ and time instant t.

dj,i(t) (Section 4.2.1.2) density of the jth shape for the ith energy level at time instant t.

C (Section 4.2.2) the length of a cycle of the phantom.

tk (Section 4.2.2) the time instant of the center of the kth time interval.

NAPER (Section 4.2.3.1) the number of subdivisions of a detector.

rn (Section 4.2.3.1) the weight of the nth aperture division.

f (q)(x⃗, t) (Section 4.3.2) the qth iteration of some algorithm at the point x⃗ and time instant t.

[Rf ](s, θ⃗) (Equation 7.1) the Radon transform of f.


gpar(s, θ) (Equation 7.1) 2D parallel projection data.

garc(s, θ) (Equation 7.2) 2D arc projection data.

S (Section 7.2.1.2) the source to origin distance.

D (Section 7.2.1.2) the source to detector distance.

gtan(s, θ) (Equation 7.3) 2D tangent projection data.

gp(u,w,R) (Equation 7.4) 3D parallel projection data.

[Df ](y⃗, θ⃗) (Equation 7.5) 3D cone beam projection.

quantumMean (Section 7.5) the mean number of photons for simulating quantum noise.

quantumCalibration (Section 7.5) the mean number of photons during the calibration measurement for simulating quantum noise.

⋆ (Section 7.11.3.3) the convolution operator.

Appendix B

Installation and running

The jSNARK home page is http://jsnark.sourceforge.net/. The latest version of jSNARK can be found on the downloads page.

B.1 Windows Installation

jSNARK_GUI and jSNARK require Java version 1.8.0 or later. Go to http://www.java.com/en/download/installed.jsp to verify that your Java is acceptable. This page can also be used to download the Java runtime environment (JRE) if you need to upgrade your Java. If you want to develop plug-ins for jSNARK, you will need the Java development kit (JDK). It can be downloaded at http://www.oracle.com/technetwork/java/javase/downloads/index.html. Choose the version bundled with Netbeans if you want a nice integrated development environment (IDE).

After verifying that your Java is up to date, download jSNARK from the downloads page. Using the Windows explorer, navigate to where you downloaded jSNARK and then click on jSNARK_1_0_6.zip. Click "Extract All Files" to extract the files. You may want to change the destination of the extracted files. After extracting the files, run jSNARK_GUI by double-clicking the file jSNARK_GUI.jar.

B.2 Linux Installation (or other UNIX-like system)

jSNARK_GUI and jSNARK require Java 1.8.0 or later to run. Enter the command:

$ java -version

to verify that your Java is acceptable. If not and you are running Linux or Solaris, go to http://www.oracle.com/technetwork/java/javase/downloads/index.html to download the latest version of Java. Choose the Java Runtime Environment (JRE) if you only want to run jSNARK. Choose the version bundled with Netbeans if you want to develop plug-ins for jSNARK and want a nice integrated development environment (IDE). If you are running some other variant of UNIX, you will have to go to your vendor to obtain a suitable version of Java.

After verifying that your Java is up to date, download jSNARK from the downloads page. Then unzip the file by running the command:

$ unzip jSNARK_1_0_6.zip

To run jSNARK_GUI, cd to the directory containing jSNARK_GUI.jar and enter the command:

$ java -jar jSNARK_GUI.jar

or you can run the GUI from any directory with the command:

$ java -jar <full_path>/jSNARK_GUI.jar

where <full_path> is replaced with the full path of the directory containing the jar file.


B.3 Stand alone execution of jSNARK

There are two scripts that allow jSNARK to be run from a shell (ksh or bash).¹ They are jSNARK.sh and runall.sh. jSNARK.sh is used to run a single jSNARK file while runall.sh will run all the *.snark files in the current directory one after the other. Both scripts require that the environment variable JSNARK_HOME is set to the directory where jSNARK was unzipped. JSNARK_HOME must be exported to be usable by the scripts. When running under cygwin, JSNARK_HOME cannot start with /cygwin/c/. Instead either use c:/ or a relative path.

To run jSNARK.sh, enter the command:

$ "$JSNARK_HOME/jSNARK.sh" <jsnark_input_file>

where <jsnark_input_file> is the path to the input file you wish to run. The quotes are necessary if there are spaces in the path; otherwise they may be omitted. $JSNARK_HOME/ may be omitted if it is part of your $PATH. The main jSNARK output will appear in your shell window. Other jSNARK outputs will be generated in the current directory unless they were specified with absolute paths. Unlike running jSNARK from the jSNARK_GUI, if your jSNARK script uses your plug-in jar file, you must copy that jar file to the $JSNARK_HOME/lib directory before executing this command. Otherwise, jSNARK will not be able to find your plug-ins. The example plug-in jar file (jSNARK_plugins.jar) is already in this directory.

To run runall.sh, cd to the directory that contains the collection of scripts you want to run as a batch and enter the command:

$ "$JSNARK_HOME/runall.sh"

For each *.snark file contained in the current directory, runall will echo the file name and run jSNARK on that file, sending the output to a file with the same prefix and .snark replaced with .out.

If you are running either of these commands remotely, it is suggested that you run them nohup-ed so that the execution continues even if you are disconnected from the machine.

B.4 Plug-ins

jSNARK makes extensive use of plug-ins. A plug-in is a class that implements a particular interface and that is discovered at run time. The name of the class is read in at run time, the code for the class is located in one of the jar files in the classpath, and an instance of the class is created, cast to the appropriate interface, and used. There are two types of plug-ins. Many plug-ins are included in the jSNARK jar file. jSNARK users may supply their own plug-ins in a separate jar file. The distribution contains a jar file (jSNARK_plugins.jar) as an example of this.
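This discovery mechanism can be sketched with the standard Java reflection calls. The PlugIn interface and Demo class below are hypothetical stand-ins for illustration, not actual jSNARK types.

```java
// Sketch of run-time plug-in discovery: the class name arrives as a string,
// the class is located on the classpath, an instance is created and cast to
// the expected interface.
public class PlugInLoader {
    public interface PlugIn { String name(); }

    public static PlugIn load(String className) throws ReflectiveOperationException {
        Class<?> cls = Class.forName(className);                      // locate the code on the classpath
        Object instance = cls.getDeclaredConstructor().newInstance(); // create an instance
        return (PlugIn) instance;                                     // cast to the interface and use
    }

    // A trivial plug-in so the sketch is self-contained.
    public static class Demo implements PlugIn {
        public Demo() {}
        public String name() { return "demo"; }
    }

    public static void main(String[] args) throws ReflectiveOperationException {
        PlugIn p = load(Demo.class.getName());
        System.out.println(p.name()); // -> demo
    }
}
```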

jSNARK_GUI automatically determines the plug-ins that are contained in the jSNARK jar file. If you are using additional plug-ins, you must inform jSNARK_GUI about them before attempting to use them by clicking on the "Load Plug-ins" button. Some of the examples use plug-ins from the example plug-in jar file. If you attempt to load one of these files without first loading the needed plug-in jar file, you will get an error of the form:

Load plug-in jar file containing XXX and try again.

where XXX is the name of the plug-in that could not be found.

1. CYGWIN (http://www.cygwin.com/) is a Linux simulator that runs on Windows systems. It can be used to run these stand alone scripts.

Appendix C

Examples

The a* examples are SNARK05 examples that have been converted to jSNARK examples. Not all of them are operational at this point. Some of them require loading the jSNARK_plugins.jar file to satisfy references to plug-ins that are not part of jSNARK v0.11. All of those plug-ins are stubs.

C.1 b1

This is the example given in Appendix B.1 of the SNARK09 manual. It uses the triangle and segment plug-ins located in jSNARK_plugins.jar.

C.2 b2

This is the example given in Appendix B.2 of the SNARK09 manual. The user written algorithm has not been converted to jSNARK.

C.3 b3

This is the example given in Appendix B.3 of the SNARK09 manual. It uses the segment and isosceles plug-ins located in jSNARK_plugins.jar.

C.4 b4

This is the example given in Appendix B.4 of the SNARK09 manual. It uses the sector plug-in located in jSNARK_plugins.jar.

C.5 b5

This is the example given in Appendix B.5 of the SNARK09 manual. It uses the sector plug-in along with many plug-ins that represent as yet unimplemented algorithms.

C.6 b6

This is the example given in Appendix B.6 of the SNARK09 manual. It uses the sector plug-in along with many plug-ins that represent as yet unimplemented algorithms.


C.7 b7

This is the example given in Appendix B.7 of the SNARK09 manual. It uses the EMAP plug-in located in jSNARK_plugins.jar.

C.8 b8

This is the example given in Appendix B.8 of the SNARK09 manual. It uses the unimplemented linogram geometry and terminates abnormally.

C.9 b10

This is the example given in Appendix B.10 of the SNARK09 manual.

C.9.1 2dblobs, 3dblobs_bccgrid, 3dblobs_fccgrid, 3dblobs_scgrid, 3dblobs_mandl

These examples demonstrate how well blobs can approximate a constant density. 2dblobs runs a number of tests on both the square and hex grids. 3dblobs_bccgrid, 3dblobs_fccgrid, and 3dblobs_scgrid run a number of tests on the BCC, FCC, and SC grids, respectively. 3dblobs_mandl runs a number of tests using the Matej and Lewitt [40] blob parameters on the BCC, FCC, and SC grids.

C.10 2dGridTests

This example reconstructs the Shepp and Logan head phantom [44] using the ART algorithm on the various combinations of grids and basis elements.

C.11 2dHeadSum

This example reconstructs the head phantom of Herman [20] using a number of reconstruction algorithms.

C.12 2dHeadOver

This is a repeat of the previous example except that it uses the overlay method of phantom creation.

C.13 3DblobPhantom

This example reconstructs a phantom consisting of a number of blobs using the ART algorithm.

C.14 3dGridTests

This example reconstructs a 3D “head” phantom using the ART algorithm on the various combinations of grids and basis elements. The phantom consists of two ellipsoids that approximate the outside and inside of the skull and some additional shapes inside the skull that basically label the top, front, and side of the phantom. The body centered cubic grid with voxel basis elements fails because the truncated octahedron shape is not implemented yet. Warning: this example takes a long time (hours) to run and requires more than the default memory settings. Using maximumImageSetSize=300m, maximumProjectionSetSize=300m, and maximumMemory=900m (see Appendix E) will work.


C.15 3DSLHeadKakSum

This example is the 3D version of the Shepp and Logan head phantom proposed by Kak.¹

C.16 3DSLHeadKakOver

This is a repeat of the previous example except that it uses the overlay method of phantom creation.

C.17 3DSLHeadKatSum

This example is the 3D version of the Shepp and Logan head phantom proposed by Katsevich [32].

C.18 3DSLHeadKatOver

This is a repeat of the previous example except that it uses the overlay method of phantom creation.

C.19 3DSLHeadWangSum

This example is the 3D version of the Shepp and Logan head phantom proposed by Wang [46].

C.20 3DSLHeadWangOver

This is a repeat of the previous example except that it uses the overlay method of phantom creation.

C.21 additive

This example illustrates the use of additive noise.

C.22 arc

This example illustrates the arc projection geometry.

C.23 ARTon2Dhead

This example runs four of the ART variations on the b3 phantom (see Appendix C.3). It uses the segment and isosceles plug-ins located in jSNARK_plugins.jar.

C.24 ARTvariations

This example runs eight variations of the ART algorithm on the Shepp and Logan brain phantom [44]. There is also a file named ARTvariations.in that is the equivalent input file for the SNARK05 program.

C.25 BART

This example runs the binary ART variation (BART) on a small checkerboard phantom. There is also a file named BART.in that is the equivalent input file for the SNARK05 program.

1. Reference lost.


C.26 circleOrbit

This example illustrates the circle orbit. Warning: this example requires more than the default memory settings. Using maximumImageSetSize=300m, maximumProjectionSetSize=300m, and maximumMemory=900m (see Appendix E) will work.

C.27 figure4, figure9, figure10ab, figure10cd, figure11, figure12ab, figure12cd

These examples create the figures in [2]. The outputs of figure12ab and figure12cd will not exactly duplicate the results of the paper due to the random nature of the noise.

C.28 ForbildAbdomen

This example uses the FORBILD abdomen phantom.² The densities have been changed from g/cm³ to cm⁻¹.

C.29 ForbildHead

This example uses the FORBILD head phantom.³ The densities have been changed from g/cm³ to cm⁻¹.

C.30 ForbildHip

This example uses the FORBILD hip phantom.⁴

C.31 ForbildJaw

This example uses the FORBILD jaw phantom.⁵

C.32 ForbildThorax

This example uses the FORBILD thorax phantom.⁶ The densities have been changed from g/cm³ to cm⁻¹.

C.33 helix

This example illustrates the helix orbit.

C.34 line

This example illustrates the line orbit.

C.35 multiplicative

This example illustrates the use of multiplicative noise.

2. http://www.imp.uni-erlangen.de/phantoms/abdomen/abdomen.html
3. http://www.imp.uni-erlangen.de/phantoms/head/head.html
4. http://www.imp.uni-erlangen.de/phantoms/hip/hipphantom.html
5. http://www.imp.uni-erlangen.de/phantoms/jaw/jaw.htm
6. http://www.imp.uni-erlangen.de/phantoms/thorax/thorax.htm


C.36 offCenterCircular

This example illustrates the circular orbit on an ellipsoid that is not centered at the origin.

C.37 offCenterHelix

This example illustrates the helix orbit on an ellipsoid that is not centered at the origin.

C.38 offCenterLine

This example illustrates the line orbit on an ellipsoid that is not centered at the origin.

C.39 PETNoise

This example illustrates generating projection data with quantum noise of the type created by positron emission tomography (PET).

C.40 quantum

This example illustrates generating projection data with the constant projection type of quantum noise with a very low quantum mean.

C.41 rotatingBox

This example generates a time varying phantom of a box rotating in space.

C.42 simple3DAperture

This example generates projection data of a 3D object simulating a 2D aperture function at the detector.

C.43 simple3DCone

This example generates cone beam projection data of a simple phantom.

C.44 simple3DParallel

This example generates parallel beam projection data of a simple phantom.

C.45 SLwithMottle

This example illustrates using the mottle shape with the Shepp and Logan brain phantom [44].

C.46 tangent

This example illustrates the tangent projection geometry.


C.47 timeVarying

This example is an ellipse whose shape varies with time.

C.48 WBP

This is a small example illustrating the weighted back-projection reconstruction algorithm (see Section 7.11.3.2).

C.49 webpage

This example creates the images shown on the jSNARK web page http://jsnark.sourceforge.net/.

Appendix D

jSNARK schema

The jSNARK schema is an XML file that was built and is being maintained by an outdated version of XMLSPY™¹. Since it is computer generated, it does not display well in its raw form. It displays much better in browsers like MS Internet Explorer. Its URL is http://jsnark.sourceforge.net/JSnarkInput_1_00. The following are graphical representations of various parts of the schema, which are probably easier to understand than the textual representation. The images have been produced by XMLSPY.

D.1 The snark element

The snark element, the root element of the schema, is shown in Figure D.1. It has a required integer valued attribute, dimensions, which is restricted to the values 2 or 3. It can optionally contain the elements create, execute, figureOfMerit, evaluator, and output.

D.2 The create element

The create element is shown in Figure D.2. It has a required string valued attribute phantomName. It may optionally contain the elements shapeList, phantomGeometry, and projectionGeometry. The phantomGeometry element has six attributes: phantomFile, numberOfTimeBins, subsampling, numberBasisElements, basisElementSeparation, and imageSize. phantomFile is an optional string valued attribute that contains the name of the file where the phantom will be written. numberOfTimeBins is the number of intervals a cycle will be broken into. An image will be generated corresponding to the center of each interval. subsampling is an optional integer valued attribute that specifies any subsampling that is used during phantom creation. numberBasisElements is an optional integer valued attribute. basisElementSeparation and imageSize are optional float attributes. Two of these three must be specified. The third is calculated as described in Section 4.2.2.

D.3 The shapeList element

The shapeList element is shown in Figure D.3. It has an optional integer valued attribute cycleLength and an optional indicator, densityType. cycleLength defines the length of one cycle of the phantom. It defaults to 1.0. densityType specifies whether the phantom is of the summation or overlay type. It defaults to summation. The shapeList element may optionally contain the spectrum element and zero or more instances of the shape element. The spectrum element must contain at least one and no more than seven occurrences of the energy element. The energy element has three attributes: energyLevel, energyWeight, and background. energyLevel is a required string attribute. energyWeight is a required float attribute. background is an optional float attribute that defaults to 0.0. The shape element references a plug-in. All elements that reference a plug-in have two required string valued attributes, name and snarkClass. name is the name of the plug-in and snarkClass is the class name of the plug-in. All plug-ins may contain the params class, which is where the parameters for the plug-in are stored. The shape class may also contain 0 to 7 instances of the density element or the bounds element. When used within the create element, the number of instances of the density element must be the same as the number of instances of the energy element in the spectrum element, or a single instance if the spectrum element is not present. The density element has a single required float valued attribute called value. The shape element has an optional attribute identifyingString which can be used to name the shape. This name can be referenced as a shape in the figureOfMeritMethod or evaluatorMethod to avoid repeating the shape parameters. When used within the figureOfMeritMethod or evaluatorMethod, the shape is a region of interest and the bounds element can be used to specify how the region is interpreted. This is described in Section 4.4.1. In either case, the shape may contain zero or more instances of the halfSpace element. The halfSpace element is only meaningful if the shape is an instance of a ConstantDensityShape (see Section 6.3.1). It has four attributes: normX, normY, normZ, and distance. In two dimensions, the normX and normY attributes are the X and Y coordinates of the normal to the half-space, while in three dimensions normX, normY, and normZ are the coordinates of the normal. The distance attribute is the distance of the half-space from the origin.

1. XMLSPY: http://www.altova.com/

Figure D.1: XML root element

Figure D.2: The create element

Figure D.3: The shapeList element
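As an illustration only, a point-in-half-space test built from a normal and a distance might look as follows. The sign convention (keeping points with p⃗ · n⃗ ≤ distance) is an assumption made for the sketch, not taken from jSNARK.

```java
// Hypothetical evaluation of a halfSpace element: a point is kept when its
// dot product with the normal does not exceed the distance from the origin.
public class HalfSpace {
    public static boolean contains(double[] normal, double distance, double[] point) {
        double dot = 0.0;
        for (int i = 0; i < normal.length; i++) dot += normal[i] * point[i];
        return dot <= distance; // assumed sign convention
    }

    public static void main(String[] args) {
        // 2D half-space with normal (1, 0) at distance 1 from the origin:
        System.out.println(contains(new double[]{1.0, 0.0}, 1.0, new double[]{0.5, 3.0})); // -> true
    }
}
```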

D.4 The params element

The params element is shown in Figure D.4. It can contain zero or more instances of the plugIn element and zero or more instances of the param element. The plugIn element allows a plug-in to have embedded plug-ins. Embedded plug-ins cannot in turn contain further embedded plug-ins, primarily because XML does not allow recursion. The plugIn element has three string valued attributes: type, name, and snarkClass. The type attribute is used by jSNARK_GUI to identify the plug-ins that are candidates for that operation.


Figure D.4: The params element

The name and snarkClass attributes are as described above. The param element has two string valued attributes, name and value. The name attribute is the name of some parameter of the plug-in and value is its value in string form.

D.5 The projectionGeometry element

The projectionGeometry element is shown in Figure D.5. It has the optional string valued attribute projectionFile that contains the name of the file where the projection data will be written. It may contain zero to seven instances of the aperture element. It must contain the arrangement and rays elements and one or more occurrences of the directions element, and may optionally contain an accuracy element. The aperture element contains a float valued parameter weight. The arrangement element references a plug-in. The rays element contains either a program element or a user element. The program element has two required float attributes, detectorSpacing and imageSize. The user element has three optional attributes: numberRays, detectorSpacing, and detectorWidth. numberRays is an integer valued attribute. detectorSpacing and detectorWidth are float valued. Exactly two of these three must be specified. The directions element references a plug-in. The directions element can contain the optional attribute identifyString to name it.

D.6 The accuracy element

The accuracy element is shown in Figure D.6. It may contain zero or more instances of the sliceNoise element, zero or more instances of the projectionNoise element, an optional instance of quantumNoise, and zero or more instances of rayNoise. Each of these elements references a plug-in. The sliceNoise, projectionNoise, and rayNoise elements may contain the optional identifyingString attribute to name them.

D.7 The execute element

The execute element is shown in Figure D.7. It has the optional string valued attribute reconstructionFile that contains the name of the file where the reconstruction information will be written. It contains the required element input and zero or more instances of the algorithm element.

D.8 The input element

The input element is shown in Figure D.8. It contains the required elements reconstructionRegion and projection.


Figure D.5: The projectionGeometry element

Figure D.6: The accuracy element


Figure D.7: The execute element

Figure D.8: The input element

D.9 The reconstructionRegion element

The reconstructionRegion element is shown in Figure D.9. It contains either the readPhantom or reconstructionGeometry element. The readPhantom element contains the optional string valued attribute phantomFile that contains the name of the phantom file. reconstructionGeometry has five attributes: cycleLength, numberOfTimeBins, numberBasisElements, basisElementSeparation, and imageSize. numberBasisElements is an optional integer valued attribute. basisElementSeparation and imageSize are optional float attributes. Exactly two of these three must be specified. The third is calculated as described in Section 4.2.3.3.

D.10 The projection element

The projection element is shown in Figure D.10. It contains either the readProjections or pseudo element. The readProjections element has an optional string valued attribute projectionFile that is the name of the file containing the projection data.

Figure D.9: The reconstructionRegion element


Figure D.10: The projection element

Figure D.11: The pseudo element

D.11 The pseudo element

The pseudo element is shown in Figure D.11. It is essentially the same as the projectionGeometry element described above.

D.12 The algorithm element

The algorithm element is shown in Figure D.12. It references a plug-in. It has a required string valued attribute, identifyingString, that is used to identify the outputs of this algorithm execution. It has two optional string valued attributes, startingImage and sequenceHandling. startingImage is restricted to the values zero, average, continue, and phantom. sequenceHandling is restricted to the values timeAveraging and sliceBySlice. It can contain the optional elements imageGeometry, basisElement, termination, rayOrder, and zero or more instances of postProcessing. These all reference plug-ins. In addition to the params element, the postProcessing element also contains the iterations element and the optional attribute identifyingString that can be used to name it.


Figure D.12: The algorithm element


Figure D.13: The iterations element

Figure D.14: The figureOfMerit element

D.13 The iterations element

The iterations element is shown in Figure D.13. It is used to decide when certain plug-ins are called. It has two optional boolean attributes, includePhantom and finalIteration. It contains zero or more instances of the elements single, range, and modulo. A phantom is passed to an output plug-in if and only if includePhantom is true. If finalIteration is true, the last iteration of a reconstruction algorithm will always be processed. The single element has a required integer valued attribute, iteration. Reconstructions for this iteration will be processed. The range element has two required integer valued attributes, first and last. Reconstructions with iterations between these two numbers (inclusive) will be processed. The modulo element has two required integer valued attributes, k and n. Reconstructions with iteration i will be processed if i ≡ k (mod n).
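The three selection rules can be sketched as predicates on the iteration number. The class and method names are illustrative, not jSNARK's.

```java
// Sketch of the iteration-selection rules: single, range, and modulo.
public class IterationFilter {
    // single element: process exactly this iteration.
    public static boolean single(int iteration, int i) {
        return i == iteration;
    }

    // range element: process iterations from first to last, inclusive.
    public static boolean range(int first, int last, int i) {
        return first <= i && i <= last;
    }

    // modulo element: process iteration i when i is congruent to k modulo n.
    public static boolean modulo(int k, int n, int i) {
        return Math.floorMod(i, n) == Math.floorMod(k, n);
    }

    public static void main(String[] args) {
        System.out.println(modulo(1, 3, 7)); // -> true, since 7 mod 3 == 1
    }
}
```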

D.14 The figureOfMerit element

The figureOfMerit element is shown in Figure D.14. It has two optional string valued attributes, reconstructionFile and figureOfMeritFile. It has zero or more occurrences of the figureOfMeritMethod element. The figureOfMeritMethod element references a plug-in. It contains an iterations element and optionally one or more instances of the shape element. It may also contain the optional attribute identifyingString to name it.

D.15 The evaluator element

The evaluator element is shown in Figure D.15. It has an optional string valued attribute, reconstructionFile. It has zero or more occurrences of the evaluatorMethod element. evaluatorMethod references a plug-in. It contains an iterations element and optionally one or more instances of the shape element. It may also contain the optional attribute identifyingString to name it.

Figure D.15: The evaluator element

Figure D.16: The output element

D.16 The output element

The output element is shown in Figure D.16. It has an optional string valued attribute, reconstructionFile. It has zero or more occurrences of the outputMethod element. outputMethod references a plug-in. It contains an iterations element. It may also contain the optional attribute identifyingString to name it.

Appendix E

jSNARK memory usage

jSNARK uses the Java Preferences class to contain the values for three parameters that describe the memory limits used in a jSNARK run. The user can set these parameters from jSNARK_GUI through the jSNARK->Set jSNARK options menu item.

maximumMemory specifies the maximum heap space that the program can use. It is a number optionally followed by k or K for kilobytes or m or M for megabytes. It must be greater than 2 megabytes. The default is 500m.

maximumImageSetSize specifies the maximum size each ImageSet may occupy in memory without being swapped to disk. It defaults to 100m. Using the default, if you are doing a 3D reconstruction of a 200x200x200 voxel image with 10 time slices, then each time slice image occupies 64,000,000 bytes (8 bytes per double). The images for two time slices will exceed the maximum, so only one time slice image will be in memory at any one time. If an entire image set cannot be contained in memory, jSNARK only keeps a single time slice image in memory. Setting maximumImageSetSize to a value smaller than the size of a single time slice will not reduce the memory requirement. If the input element contained in the execute element of the input XML specifies a phantom, then both the phantom and the reconstruction will be allowed this maximum. If a reconstruction algorithm needs a workspace the size of an image, then it too will be allowed this maximum. Most figure of merit and evaluation methods will need two image sets: one for the phantom and one for the reconstruction being evaluated.

maximumProjectionSetSize specifies the maximum size a ProjectionSet may occupy in memory. It defaults to 100m. Using the default, if each projection is a 300x300 array and there are 2000 projections, then the projection set occupies 1,440,000,000 bytes, exceeding the maximum. In this case, 145 (100*1024*1024/(300*300*8)) projections will be kept in memory at one time.
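The projection count works out the same way; again a sketch with illustrative names, not jSNARK API calls:

```java
public class ProjectionSetMemory {
    /** Whole projections (rows x cols doubles) that fit in maximumProjectionSetSize. */
    static long projectionsInMemory(long maxBytes, int rows, int cols) {
        long projectionBytes = (long) rows * cols * 8; // 8 bytes per double
        return maxBytes / projectionBytes;             // integer division drops the partial one
    }

    public static void main(String[] args) {
        long max = 100L * 1024 * 1024; // the default 100m
        // 100*1024*1024 / (300*300*8) = 145 projections in memory at one time
        System.out.println(projectionsInMemory(max, 300, 300));
    }
}
```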

For large reconstruction problems, these values can be increased, but when the program size is larger than the available physical memory of the machine, operating system paging (and thrashing) begins. Whether the paging is done by jSNARK or the operating system, these large reconstructions take a significant amount of time. To avoid the "Heap Space Exhausted" error message, maximumMemory should be greater than maximumImageSetSize * maximum number of image sets + maximumProjectionSetSize + 200m. The temporary files used to swap images and projections are kept in the directory $java.io.tmpdir/$user.name. On Windows XP, $java.io.tmpdir is C:/Documents and Settings/$user.name/Local Settings/Temp; on Linux it is /tmp. $user.name is the user's login name. These files are automatically deleted when jSNARK terminates normally, but are left behind when it terminates abnormally. If you experience a number of abnormal terminations, it is a good idea to delete these temporary files manually.
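The heap rule of thumb above is a one-line calculation. In this sketch (illustrative names only), the two image sets assume a run with a phantom plus a reconstruction:

```java
public class HeapBudget {
    /** Lower bound on maximumMemory suggested by the manual, in megabytes. */
    static long minHeapMb(long imageSetMb, int numImageSets, long projectionSetMb) {
        return imageSetMb * numImageSets + projectionSetMb + 200;
    }

    public static void main(String[] args) {
        // Defaults: 100m per image set, 100m projection set, phantom + reconstruction.
        System.out.println(minHeapMb(100, 2, 100) + "m");
    }
}
```

With the default sizes, the bound comes to 500m; adding image sets (e.g. an algorithm workspace) pushes the required maximumMemory above that.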


Appendix F

Supplying real data

A jar file, jSNARK_tools.jar, is included with the jSNARK download. It is a skeleton program that allows a programmer to easily create a jSNARK *.proj file containing user-supplied data.

This program is a command line tool. To run it, enter the command:

$ java -jar <jsnark-path>/jSNARK_tools.jar <conversion-class> <input> <output>

where <jsnark-path> is the path to where you unzipped jSNARK, <conversion-class> is the name of your class that reads the input file containing the real data, <input> is the file containing the projection data, and <output> is where you would like the output stored. jSNARK expects projection files to end in .proj, so <output> should follow that convention.

Four example conversion classes are contained in the src directory. Snark05File11 is a class that reads a SNARK05, SNARK09, or SNARK14 file11 file and creates a jSNARK projection file version of that file. ConeStub3D and ParallelStub3D are stubbed classes that illustrate what is needed to create 3D projection files. HelixStub illustrates how to create a projection file with a helix orbit.

Either subclass one of these files and override the necessary methods, or copy whichever corresponds to your data collection geometry to a new file and edit it so that it reads your input file, extracting the needed data. Compile the code, creating a new jSNARK_tools.jar, and run it as above. If desired, you can compile just your class; in that case, to run the program you need to add the -cp argument to the command line and point it to where your class was built.

The 3D classes require that the projection directions be specified as quaternions. The code necessary to create quaternions from any of the 12 body-centered Euler angle combinations is part of the main jSNARK program. See Sections 5.12 and 5.12.1 for further details about these classes. For an unrotated projection, the source is on the positive Z axis and the detector on the negative Z axis. The 2D projection array is aligned with the X and Y axes and linearized going from -X to +X within -Y to +Y (see Sections 7.2.2.2 and 7.2.2.1). The Rotation is used to rotate this unrotated source and detector geometry into the actual projection geometry. After applying the rotation to the source position, it can be modified with a translation in order to obtain a line or helix orbit.
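The quaternion convention above can be illustrated with a self-contained rotation routine. This is a generic quaternion sketch, not jSNARK's own Rotation class: rotating the unrotated source direction on the positive Z axis by a quaternion for a 90-degree turn about Y moves it onto the positive X axis.

```java
public class QuaternionDemo {
    /** Rotate vector v by unit quaternion (w, x, y, z): v' = v + w*t + q_vec x t, t = 2*(q_vec x v). */
    static double[] rotate(double w, double x, double y, double z, double[] v) {
        // t = 2 * (q_vec x v)
        double tx = 2 * (y * v[2] - z * v[1]);
        double ty = 2 * (z * v[0] - x * v[2]);
        double tz = 2 * (x * v[1] - y * v[0]);
        // v' = v + w*t + q_vec x t
        return new double[] {
            v[0] + w * tx + (y * tz - z * ty),
            v[1] + w * ty + (z * tx - x * tz),
            v[2] + w * tz + (x * ty - y * tx)
        };
    }

    public static void main(String[] args) {
        // Unit quaternion for a 90-degree rotation about the Y axis: w = cos(45deg), y = sin(45deg).
        double h = Math.sqrt(0.5);
        double[] source = { 0, 0, 1 }; // unrotated source direction (+Z)
        double[] r = rotate(h, 0, h, 0, source);
        System.out.println(Math.round(r[0]) + " " + Math.round(r[1]) + " " + Math.round(r[2])); // 1 0 0
    }
}
```

A translation applied to the rotated source position, as the text describes, would then produce a line or helix orbit.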



Index

*.image, 25
*.phan, 23
*.proj, 23
*.recon, 25

AbstractCreateProjection, 65
AbstractCreateProjection2D, 64
AbstractCreateProjection3D, 64
AbstractImage, 61
AbstractImageGeometry2D, 58
AbstractImageGeometry3D, 58
AbstractImageIterator, 59
AbstractProjection2D, 63
AbstractProjection3D, 63
actual measurement, 65
Algorithm, 17
Analysis files, 25
Analysis Input, 23, 25
Analysis Output, 25
aperture, 142
applyNoise, 34, 65, 66, 84, 85
applyRayNoise, 64
Arrangement, 17

background, 28, 65
basic basis function, 19
basis function, 19
BasisElement, 17
basisElement, 145
bounds, 56

calibration measurement, 65
Cancel, 27
clone, 57, 60, 63
closeOutput, 71
coefficient, 19
computeNumberOfRays, 63
computePhi, 64
computeXleg, 64
constants, 18
convertToGridPoint, 57, 60
convertToIndex, 60
convertToVector, 57, 60
coordinate system, 18
create, 23, 53
createImage, 57

CreateProjection, 64
createProjection, 64
createProjectionFactory, 65

Data Generation Input, 23
data generation input, 23
data generation output, 23
DDA, 91
decorator, 48, 64, 67, 100
defaultPlugIn, 55
density, 19
detector height, 31
detector rows, 31
detector spacing, 30, 31
detector width, 31
Digital difference analyzer, 91
direction, 31
Directions, 17
directions, 142
documentation, 55
doPostProcessing, 69

entryExit, 56
equals, 57, 58
Euler angles, 20
evaluate, 23, 25
Evaluator, 17
evaluatorMethod, 141
execute, 23, 25

factory method, 58, 65, 67, 76, 100
figureOfMeritMethod, 141
FigureOfMerit, 17, 70
figureOfMerit, 23, 25, 70
function, 18

getAngle, 63
getBasisElement, 57, 60
getBasisElementCoefficient, 60
getBasisElementCoefficients, 60
getBasisElementSeparation, 57
getBasisElementSize, 57
getCalibration, 66, 84
getCalibrationMeasurement, 66
getCos, 63
getDensity, 60


getDetectorSpacing, 63
getEndTime, 61
getGridHeight, 60
getGridSpacing, 57, 60
getGridWidth, 60
getImage, 67
getImageGeometry, 60
getImageSize, 57
getImageSizeinGridPoints, 60
getImageSizeInPixels, 57
getImageWidth, 57
getMax, 60
getMean, 66
getMin, 60
getNumberOfTimeBins, 57
getNumberOfXBasisElements, 57
getNumberOfYBasisElements, 57
getParameters, 53
getPlugInType, 53
getPoint, 60
getProjectionData, 63
getRay, 63, 64
getSin, 63
getSourceToDetector, 64
getSourceToOrigin, 64
getSupport, 58
getTime, 63
getTotalDensity, 63
getTotalNumberOfRays, 63, 69
Grid, 17
grid, 19

half-space, 28, 44
hasNext, 68

identifyingString, 141
Image, 17
ImageGeometry, 17
imageGeometry, 145
ImageSet, 61, 67
INFIN, 18
Initialization and Reconstruction Input, 23, 25
initialization and reconstruction output, 25
intersections, 56
isBasisElement, 57, 60
isDone, 68
isIn, 56, 58
isInBaseShape, 56
isIterative, 67
iterator, 60

lineIntegral, 56, 58

name, 54, 55
NestedPlugIn, 54, 55

next, 68
nextProjection, 66
number of detectors, 30
number of rays, 30

optionalParameters, 54
optionalPlugIns, 54
Output, 17
output, 23, 25
overlay, 28

Parameter, 54, 55
Parameters, 53, 54
phantom, 23, 70
picture, 18
picture region, 18
PID180, 18
PID2, 18
pixel, 19, 87
plug-in, 15, 17, 53
postProcessing, 145
project, 63
Projection, 17
projection, 20
projection data, 23, 31
projectionFile, 70
projectionGeometry, 31
ProjectionNoise, 17
projectionNoise, 142
projections, 31
ProjectionSet, 63, 64, 70
pseudo ray sums, 20
putPlugIn, 53

QuantumNoise, 17
quantumNoise, 142
quaternion, 20

Radon transform, 76
Ray, 68
RayNoise, 17
rayNoise, 142
rayOrder, 145
rays, 31
raysum, 34, 66, 84
raysums, 31
readImage, 60
real ray sum, 20, 31
reconstruction, 70
reconstruction problem, 20
region of interest, 36, 71, 124, 125, 141
requiredParameters, 54
requiredPlugIns, 54
rotations, 20
run, 67


setAlgorithmName, 70
setAll, 60
setBasisElement, 57
setBasisElementCoefficient, 60
setBasisElementParameters, 58
setBasisType, 58, 60
setCenter, 58
setDensityModel, 64
setGridType, 60
setIdentifyingString, 71
setImage, 67
setImageName, 57
setImageSize, 60
setNraysSpectrum, 66
setNumberOfTimeBins, 57
setParameters, 63
setParent, 60
setProjections, 67–69
setProjectionSet, 63
setRayOrder, 67
setReconstructionName, 70
setSamplingRates, 60
setStartTime, 61
setSubsampling, 64
setTime, 63
setTimeInterval, 67, 68
Shape, 17
shapeList, 71
shapelist, 70
Show optional parameters, 27
SingleInstantAlgorithm, 67
SingleInstantPostProcessing, 69
SingleInstantTermination, 68
singleton, 100
size, 61
slice by slice, 35
SliceNoise, 17
sliceNoise, 142
SNARK05 major advances, 14
SNARK05 run, 23
SNARK09 major advances, 14
SNARK14 major advances, 14
snarkClass, 139
spel, 19, 87
SQRT2, 18
SQRT3, 18
summation, 28

Termination, 17
termination, 145
test picture, 23
ThreeD, 53
time averaging, 35
TimeVaryingAlgorithm, 67

TimeVaryingPostProcessing, 69
TimeVaryingTermination, 68
TwoD, 53
TWOPI, 18
type, 55

unit of length, 18
units, 18
Update, 27

valueAt, 56, 58
values, 55
voxel, 19, 87

working directory, 27
writeImage, 60
writeProjection, 64
writeXMLHeader, 57

ZERO, 18