
Page 1: Python Reconstruction Operators in Neural Networks (PYRO-NN)

CCP PET-MR 24th Software Meeting

Christopher Syben

Page 2: Overview

● Known Operators | CT Reconstruction
● PYRO-NN
● Custom Layers
● Python integration
● Examples

Page 3: Known Operators [1]

● Reduce the error bound
● Reduce the number of parameters
● Gain interpretability
● Gradient flow through different domains
● Gradients or sub-gradients are required!

[1] Maier, Andreas K., et al. "Learning with Known Operators reduces Maximum Training Error Bounds." arXiv preprint arXiv:1907.01992 (2019).

Page 4: CT Reconstruction (1)

● CT reconstruction solves a system of linear equations.

[Figure: 2 x 2 pixel image measured from two views; row sums 5 and 4, column sums 7 and 2.]

x1 + x2 = 5
x3 + x4 = 4
x1 + x3 = 7
x2 + x4 = 2

Solution: x1 = 3, x2 = 2, x3 = 4, x4 = 0
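As a quick sanity check of the system-of-equations view, the 2 x 2 toy example above can be written down directly in NumPy (a sketch for illustration; the rows of A simply encode the row and column sums shown on the slide):

import numpy as np

# Rows of A: the two row sums and the two column sums of the 2x2 image.
A = np.array([[1, 1, 0, 0],   # x1 + x2 = 5
              [0, 0, 1, 1],   # x3 + x4 = 4
              [1, 0, 1, 0],   # x1 + x3 = 7
              [0, 1, 0, 1]],  # x2 + x4 = 2
             dtype=float)
p = np.array([5., 4., 7., 2.])
x = np.array([3., 2., 4., 0.])      # solution given on the slide

print(np.allclose(A @ x, p))        # True: the image explains the measurements
print(np.linalg.matrix_rank(A))     # 3: two views alone leave the system underdetermined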

Page 5: CT Reconstruction (2)

● Typically 512 x 512 x 512 = 134,217,728 unknown variables
● The operator A is large (about 65,000 TB in float precision)
● An efficient solution is required
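The size claim is easy to reproduce with a back-of-the-envelope calculation (assuming, for illustration only, that the number of measurements is of the same order as the number of voxels):

n_voxels = 512 ** 3                          # 134,217,728 unknowns
n_measurements = 512 ** 3                    # e.g. 512 projections on a 512 x 512 detector (assumed)
size_bytes = n_voxels * n_measurements * 4   # float32 entries of A
print(size_bytes / 2 ** 40)                  # 65536.0 TiB, i.e. "about 65,000 TB"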

Page 6: CT Reconstruction (3)

● Efficient solution via filtered back-projection (FBP)
● Three steps:
  ● Convolution along the detector coordinate s
  ● Back-projection over the projection angles θ
  ● Suppression of negative values
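For reference, these three steps correspond to the standard parallel-beam FBP formula (written out here because the slide's equation is only an image): with sinogram p(θ, s) and ramp filter h,

f(x, y) \approx \max\!\Big(0,\ \int_{0}^{\pi} \big(p(\theta,\,\cdot\,) * h\big)(x\cos\theta + y\sin\theta)\,\mathrm{d}\theta\Big)

i.e. filter each projection along s, back-project over θ, and clamp negative values.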

Page 7: CT Reconstruction (4)

● Efficient solution via filtered back-projection
● The filtered back-projection can also be derived in matrix notation
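The matrix form itself is not contained in the extracted text; a common formulation of FBP in matrix notation, as typically used in the PYRO-NN context (stated here as an assumption), is

x \;\approx\; A^{\top} F^{H} K\, F\, p

where p is the measured sinogram, F the discrete Fourier transform along the detector axis, K a diagonal matrix holding the (Ram-Lak) filter, and Aᵀ the back-projection.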

Page 8: CT Reconstruction using Neural Networks

● All three steps can be modeled as a neural network
● Interesting:
  ● The error can be propagated through the entire network
  ● The back-projection layer uses projection algorithms to compute the result of A and Aᵀ

Page 9: PYRO-NN – Python Reconstruction Operators in Neural Networks

● Open source, Apache 2.0 licence
● TF layers: projectors and back-projectors
  ● 2D parallel, fan and 3D cone-beam
● Python API:
  ● Layer abstraction
  ● Geometries
  ● Phantoms
  ● Data generators

Page 10: Base Geometry Class

Volume
● The volume size in [Z, Y, X] order
● The spacing between voxels in [Z, Y, X] order
● The volume center is the world-coordinate origin

Detector
● The shape of the detector in [Y, X] order
● The spacing between detector pixels in [Y, X] order

Acquisition Geometry
● The number of equidistant projections
● The covered angular range

Page 11: 2D Parallel-Geometry Properties

● Trajectory
  ● based on ray vectors
  ● unit vectors with angle θ ∈ [0, angular range]
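A short NumPy sketch of what such a trajectory of unit ray vectors looks like (illustration of the idea only; the actual helper used later in the deck is circular_trajectory.circular_trajectory_2d, whose exact conventions may differ):

import numpy as np

def unit_ray_vectors(number_of_projections, angular_range):
    # One unit vector per equidistant angle theta in [0, angular_range).
    angles = np.linspace(0.0, angular_range, number_of_projections, endpoint=False)
    return np.stack([np.cos(angles), np.sin(angles)], axis=-1).astype(np.float32)

rays = unit_ray_vectors(360, np.pi)
print(rays.shape)   # (360, 2)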

Page 12: 3D Cone-Geometry Properties

● Trajectory
  ● based on projection matrices
  ● calibrated matrices from real systems can be used
● Source-to-isocenter distance (sid)
● Source-to-detector distance (sdd)

Maier, Andreas, et al., eds. Medical Imaging Systems: An Introductory Guide. Lecture Notes in Computer Science, Vol. 11111. Springer, 2018.

Page 13: Layers

● Forward projectors
  ● 2D parallel- and fan-beam
  ● 3D cone-beam
  ● implemented as ray-driven
● Back-projectors
  ● 2D parallel- and fan-beam
  ● 3D cone-beam
  ● implemented as voxel-driven
● Gradients are already registered internally
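Because the gradient of each projector is registered as the matching back-projector (and vice versa), standard TensorFlow differentiation works straight through these layers. A minimal TF 1.x-style sketch, assuming geometry and phantom are set up as in the 2D parallel example later in the deck:

import tensorflow as tf
from pyronn.ct_reconstruction.layers.projection_2d import parallel_projection2d

volume = tf.constant(phantom, dtype=tf.float32)      # [batch, height, width]
sinogram = parallel_projection2d(volume, geometry)   # forward projection A x
loss = tf.reduce_sum(sinogram ** 2)
grad = tf.gradients(loss, [volume])[0]               # computed via the registered back-projector (A^T)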

Page 14: Operators in TensorFlow

Page 15: Operators in TensorFlow

[Figure: TensorFlow architecture (https://www.tensorflow.org/guide/extend/architecture): training and inference libraries, Python and C++ clients on top of the C API, distributed master and dataflow executor, kernel implementations (Const, Var, MatMul, Conv2D, ReLU, Queue, ...), network layer (RPC, RDMA), and device layer (CPU, GPU).]

Page 16: Known Operators in TensorFlow

[Figure: The same TensorFlow architecture extended by PYRO-NN: a Python API on top and additional kernel implementations (cone-beam projector, cone-beam back-projector, ...) next to the built-in TensorFlow kernels.]

Page 17: Custom Ops in TensorFlow

● TensorFlow allows custom algorithms as layers [1]
● Requirements:
  ● C++/CUDA implementation of the operation
  ● C++ class for the op registration (TensorFlow API)
  ● Gradient registration on the Python level

[1] https://www.tensorflow.org/guide/extend/op
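When the kernels are compiled as a standalone shared object (the custom-op route discussed on the next slide), they are loaded with tf.load_op_library; the file name below is hypothetical. PYRO-NN itself instead ships the kernels inside the TensorFlow wheel as the pyronn_layers namespace:

import tensorflow as tf

pyronn_layers = tf.load_op_library('./pyro_nn_layers.so')   # hypothetical .so name
# The op registered as "ParallelProjection2D" is then exposed in snake_case:
# pyronn_layers.parallel_projection2d(...)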

Page 18: Compiling Custom Kernels

● TensorFlow options:
  ● Pure compilation with gcc and nvcc [1]
    ● uses the installed TensorFlow binary; version conflicts (mostly fixed)
    ● the shared object must be loaded manually
  ● Within the TensorFlow sources
    ● the TensorFlow sources must be built
    ● the shared object must be loaded manually
● PYRO-NN [2]:
  ● side by side with the TensorFlow sources
  ● the sources must be built
  ● the shared object is included in the resulting TensorFlow .whl file

[1] https://github.com/tensorflow/custom-op
[2] https://github.com/csyben/pyro-nn-layers

Page 19: PYRO-NN Architecture

[Figure: Build flow. Plain TensorFlow: compile the TensorFlow sources into shared objects, then build the Python package. With PYRO-NN: the PYRO-NN sources are compiled together with the TensorFlow sources, so the resulting TensorFlow Python package contains PYRO-NN in its respective namespace.]

Page 20: OP Registration – Layer description

#include "tensorflow/core/framework/op.h"
#include "tensorflow/core/framework/op_kernel.h"

using namespace tensorflow; // NOLINT(build/namespaces)

#define CUDA_OPERATOR_KERNEL "ParallelProjection2D"

REGISTER_OP(CUDA_OPERATOR_KERNEL)
    .Input("volume: float")
    .Attr("volume_shape: shape")
    .Attr("projection_shape: shape")
    .Attr("volume_origin: tensor")
    .Attr("detector_origin: tensor")
    .Attr("volume_spacing: tensor")
    .Attr("detector_spacing: tensor")
    .Attr("ray_vectors: tensor")
    .SetShapeFn([](::tensorflow::shape_inference::InferenceContext *c) {
        TensorShapeProto sp;
        ShapeHandle sh;
        ShapeHandle batch;
        ShapeHandle out;
        auto status = c->GetAttr("projection_shape", &sp);
        status.Update(c->MakeShapeFromShapeProto(sp, &sh));
        c->Subshape(c->input(0), 0, 1, &batch);
        c->Concatenate(batch, sh, &out);
        c->set_output(0, out);
        return status;
    })
    .Output("output: float")
    .Doc(R"doc(
Computes the 2D parallel forward projection
of the input based on the given ray vectors

output: A Tensor.
  output = A*x
)doc");

● Interesting:
  ● The shape function is used for error checking and makes the op usable as a Keras layer.
  ● The camel-case op name is converted to snake_case (PEP 8) on the Python side: parallel_projection2d.


Page 24: OP Registration – Constructor

class ParallelProjection2DOp : public OpKernel
{
    TensorShape volume_shape;
    int volume_width, volume_height;
    float volume_origin_x, volume_origin_y;
    ...
    Eigen::Tensor<float, 2, Eigen::RowMajor> ray_vectors_;

    explicit ParallelProjection2DOp(OpKernelConstruction *context) : OpKernel(context)
    {
        // get the volume shape from the attributes
        OP_REQUIRES_OK(context, context->GetAttr("volume_shape", &volume_shape));
        volume_height = volume_shape.dim_size(0);
        volume_width = volume_shape.dim_size(1);

        // get the volume origin from the attributes
        Tensor volume_origin_tensor;
        OP_REQUIRES_OK(context, context->GetAttr("volume_origin", &volume_origin_tensor));
        auto volume_origin_eigen = volume_origin_tensor.tensor<float, 1>();
        volume_origin_y = volume_origin_eigen(0);
        volume_origin_x = volume_origin_eigen(1);

        // get the ray vectors from the attributes
        Tensor ray_vectors_tensor;
        OP_REQUIRES_OK(context, context->GetAttr("ray_vectors", &ray_vectors_tensor));
        auto ray_vectors_eigen = ray_vectors_tensor.tensor<float, 2>();
        ray_vectors_ = Eigen::Tensor<float, 2, Eigen::RowMajor>(ray_vectors_eigen);
    }

Page 25: OP Registration – Layer function (1)

    void Compute(OpKernelContext *context) override
    {
        // Grab the input tensor
        const Tensor &input_tensor = context->input(0);
        auto input = input_tensor.flat<float>();

        // Create the output shape: [batch_size, projection_shape]
        TensorShape out_shape = TensorShape(
            {input_tensor.shape().dim_size(0), projection_shape.dim_size(0), projection_shape.dim_size(1)});

        // Check the supported batch size. Batch > 1 is currently not supported.
        OP_REQUIRES(context, input_tensor.shape().dim_size(0) == 1,
                    errors::InvalidArgument("Batch dimension is mandatory! Batch size > 1 is not supported."));

        // Create an output tensor
        Tensor *output_tensor = nullptr;
        OP_REQUIRES_OK(context, context->allocate_output(0, out_shape, &output_tensor));
        auto output = output_tensor->template flat<float>();

        // Call the CUDA kernel launcher
        Parallel_Projection2D_Kernel_Launcher(input.data(), output.data(), ray_vectors_.data(), number_of_projections,
                                              volume_width, volume_height, volume_spacing_x, volume_spacing_y,
                                              volume_origin_x, volume_origin_y, detector_size, detector_spacing,
                                              detector_origin);
    }
};

● Interesting:
  ● Batch size: most TensorFlow layers require a batch dimension, so the op expects one, but it currently only supports a batch size of 1.


Page 30: OP Registration – Layer function (2)

void Parallel_Projection2D_Kernel_Launcher(...)
{
    // Copy the trajectory to the GPU
    auto ray_size_b = number_of_projections * sizeof(float2);
    float2 *d_rays;
    cudaMalloc(&d_rays, ray_size_b);
    cudaMemcpy(d_rays, ray_vectors, ray_size_b, cudaMemcpyHostToDevice);

    // Copy the input to the GPU and bind it to a texture
    cudaArray *volume_array;
    cudaMallocArray(&volume_array, &channelDesc, volume_width, volume_height);
    cudaMemcpyToArray(volume_array, 0, 0, volume_ptr, ..., cudaMemcpyHostToDevice);
    cudaBindTextureToArray(volume_as_texture, volume_array, channelDesc);

    // Call the CUDA kernel
    const unsigned blocksize = 256;
    const dim3 gridsize = dim3((detector_size / blocksize) + 1, number_of_projections);
    project_2Dpar_beam_kernel<<<gridsize, blocksize>>>(out, d_rays, number_of_projections, ...);

    // Cleanup
    cudaFree(d_rays);
}

● Interesting:
  ● TensorFlow manages the GPU memory itself! Every CUDA allocation here happens outside of the TensorFlow-managed segment.
  ● TensorFlow provides its own allocation strategies [1].

Page 31: OP Registration – Memory

● Memory:
  ● Use the TensorFlow allocators [1]
  ● Texture interpolation doubles the memory requirement
  ● Ensure that enough memory remains free outside of the TF context

[1] https://stackoverflow.com/questions/48580580/tensorflow-new-op-cuda-kernel-memory-management

[Figure: GPU memory split into a TensorFlow-managed segment and free memory for allocations outside of TensorFlow.]
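In practice this means capping TensorFlow's share of the GPU so that enough memory remains free for the CUDA allocations made outside of its allocator; the 2D parallel example on Page 34 uses exactly this configuration:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5   # leave memory for the custom kernels
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
    pass   # run the projector / back-projector layers here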

Page 32: Custom Ops in Python

# "pyronn_layers" is the kernel namespace built into the TensorFlow package (see Page 19);
# "ops" refers to tensorflow.python.framework.ops.
def parallel_projection2d(volume, geometry):
    return pyronn_layers.parallel_projection2d(volume,
                                               volume_shape=geometry.volume_shape,
                                               projection_shape=geometry.sinogram_shape,
                                               volume_origin=geometry.tensor_proto_volume_origin,
                                               detector_origin=geometry.tensor_proto_detector_origin,
                                               volume_spacing=geometry.tensor_proto_volume_spacing,
                                               detector_spacing=geometry.tensor_proto_detector_spacing,
                                               ray_vectors=geometry.tensor_proto_ray_vectors)

@ops.RegisterGradient("ParallelProjection2D")
def _project_grad(op, grad):
    reco = pyronn_layers.parallel_backprojection2d(sinogram=grad,
                                                   sinogram_shape=op.get_attr("projection_shape"),
                                                   volume_shape=op.get_attr("volume_shape"),
                                                   volume_origin=op.get_attr("volume_origin"),
                                                   detector_origin=op.get_attr("detector_origin"),
                                                   volume_spacing=op.get_attr("volume_spacing"),
                                                   detector_spacing=op.get_attr("detector_spacing"),
                                                   ray_vectors=op.get_attr("ray_vectors"))
    return [reco]

● Interesting:
  ● Tensor attributes need to be converted to the tensor proto type: tf.contrib.util.make_tensor_proto(member, tf.float32)
  ● The gradient calculation can use the attributes of the respective layer
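A sketch of the conversion mentioned above (the concrete member value is illustrative; the Geometry classes perform this conversion internally and expose the results as the tensor_proto_* properties used in the wrapper):

import numpy as np
import tensorflow as tf

volume_origin = np.array([-127.5, -127.5], dtype=np.float32)   # illustrative value
tensor_proto_volume_origin = tf.contrib.util.make_tensor_proto(volume_origin, tf.float32)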

Page 33: Example: 2D Parallel (1)

● Geometry & Phantom definitions

# Imports
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt
from pyronn.ct_reconstruction.layers.projection_2d import parallel_projection2d
from pyronn.ct_reconstruction.layers.backprojection_2d import parallel_backprojection2d
from pyronn.ct_reconstruction.geometry.geometry_parallel_2d import GeometryParallel2D
from pyronn.ct_reconstruction.helpers.filters import filters
from pyronn.ct_reconstruction.helpers.phantoms import shepp_logan
from pyronn.ct_reconstruction.helpers.trajectories import circular_trajectory

# ------------------ Define Geometry ------------------
# Volume parameters:
volume_size = 256
volume_shape = [volume_size, volume_size]
volume_spacing = [1, 1]

# Detector parameters:
detector_shape = 800
detector_spacing = 1

# Trajectory parameters:
number_of_projections = 360
angular_range = 2 * np.pi

# Create the geometry class
geometry = GeometryParallel2D(volume_shape, volume_spacing,
                              detector_shape, detector_spacing,
                              number_of_projections, angular_range)

# Compute and set the trajectory
trajectory_vectors = circular_trajectory.circular_trajectory_2d(geometry)
geometry.set_ray_vectors(trajectory_vectors)

# Get the phantom
phantom = shepp_logan.shepp_logan_enhanced(volume_shape)
phantom = np.expand_dims(phantom, axis=0)  # add the batch dimension

Page 34: Example: 2D Parallel (2)

● Projection & Reconstruction

# Imports as on Page 33

config = tf.ConfigProto()
config.gpu_options.per_process_gpu_memory_fraction = 0.5
config.gpu_options.allow_growth = True

# ------------------ Call Layers ------------------
with tf.Session(config=config) as sess:
    # Create the sinogram according to the geometry
    result = parallel_projection2d(phantom, geometry)
    sinogram = result.eval()

    # Get the filter
    reco_filter = filters.ram_lak_2D(geometry)

    # Filter the sinogram in the frequency domain
    sino_freq = np.fft.fft(sinogram, axis=-1)
    sino_filtered_freq = np.multiply(sino_freq, reco_filter)
    sinogram_filtered = np.fft.ifft(sino_filtered_freq, axis=-1).real


Page 36: Example: 2D Parallel (3)

● Projection & Reconstruction

# Imports as on Page 33

# ------------------ Call Layers ------------------
with tf.Session() as sess:
    ...  # projection and filtering as on Page 34

    # Reconstruction from the filtered sinogram
    result_back_proj = parallel_backprojection2d(sinogram_filtered, geometry)
    reco = result_back_proj.eval()

    # Visualize the result
    plt.figure()
    plt.imshow(reco, cmap=plt.get_cmap('gist_gray'))
    plt.axis('off')
    plt.savefig('2d_par_reco.png', dpi=150, transparent=False, bbox_inches='tight')

Page 37: PYRO-NN

● Code examples are provided in the PYRO-NN repository: https://github.com/csyben/PYRO-NN
● The layer implementations can be found at: https://github.com/csyben/PYRO-NN-Layers
● More material can be found at: https://www5.cs.fau.de/lectures/free-deep-learning-resources/
● Two examples are provided as a runnable code capsule at: https://codeocean.com/capsule/6772846/

Page 38–39: Gradients – Example

● Learn the filter K
● Objective function: [equation shown on the slide]
● Gradient: [equation shown on the slide]

[Pages 40–44: step-by-step figures for the filter-learning example: reconstruction, error, and backpropagation of the error through the back-projection to layer "l-1"; the equations and images are not reproduced in the extracted text.]
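A minimal TF 1.x sketch of such a filter-learning setup, reusing geometry, phantom and sinogram from the 2D parallel example (Pages 33–34). The exact objective on the slides is not contained in the extracted text, so a plain L2 loss between the reconstruction and the phantom is assumed here, and the gradient reaches the filter through the registered gradient of the back-projection layer:

import numpy as np
import tensorflow as tf
from pyronn.ct_reconstruction.layers.backprojection_2d import parallel_backprojection2d
from pyronn.ct_reconstruction.helpers.filters import filters

# Trainable filter K, initialised with the Ram-Lak filter (assumed to broadcast over the sinogram).
reco_filter = tf.Variable(filters.ram_lak_2D(geometry).astype(np.float32))

sino = tf.constant(sinogram, dtype=tf.float32)             # sinogram from parallel_projection2d (Page 34)
sino_freq = tf.signal.fft(tf.cast(sino, tf.complex64))     # filtering along the detector axis
filtered = tf.math.real(tf.signal.ifft(sino_freq * tf.cast(reco_filter, tf.complex64)))

reco = parallel_backprojection2d(filtered, geometry)       # gradient flows back through A^T
loss = tf.reduce_mean((reco - tf.constant(phantom, dtype=tf.float32)) ** 2)
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss, var_list=[reco_filter])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(100):
        sess.run(train_op)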