
OPHTHALMIC IMAGING APPLICATIONS WITH

OPTICAL COHERENCE TOMOGRAPHY AND

GRAPHICS PROCESSING UNITS

By

Kevin S.K. Wong

B.A.Sc., Simon Fraser University, 2013

Thesis Submitted in Partial Fulfillment of the

Requirements for the Degree of

Master of Applied Science

in the

School of Engineering Science

Faculty of Applied Sciences

© Kevin S.K. Wong 2014

SIMON FRASER UNIVERSITY

Fall 2014

All rights reserved.

However, in accordance with the Copyright Act of Canada, this work

may be reproduced without authorization under the conditions for Fair

Dealing. Therefore, limited reproduction of this work for the purposes

of private study, research, criticism, review and news reporting is likely

to be in accordance with the law, particularly if cited appropriately.


Approval

Name: Kevin S. K. Wong

Degree: Master of Applied Science

Title of Thesis: Ophthalmic Imaging Applications with Optical Coherence

Tomography and Graphics Processing Units

Examining Committee:

Chair: Dr. Ash M. Parameswaran, P. Eng.

Professor, School of Engineering Science

________________________________________

Dr. Marinko V. Sarunic, P. Eng.

Senior Supervisor

Associate Professor, School of Engineering Science

________________________________________

Dr. Yifan Jian

Supervisor

Research Associate, School of Engineering Science

________________________________________

Dr. Mirza Faisal Beg, P. Eng.

Internal Examiner

Professor, School of Engineering Science

Date Defended/Approved: December 5th, 2014


Partial Copyright License


Ethics Statement



Abstract

In ophthalmology, Optical Coherence Tomography (OCT) is becoming one of the dominant

imaging technologies for both clinical diagnostics and vision research. In a bachelor's thesis completed prior to this project, we described a highly optimized Graphics Processing Unit (GPU) implementation of the Fourier Domain OCT (FD-OCT) processing and visualization pipeline that was capable of real-time volume rendering at video rate.

In this thesis, we describe three applications that build upon the GPU-based processing

software: Speckle Variance OCT (svOCT), Compressive Sampling OCT (CS-OCT), and Wavefront Sensorless Adaptive Optics OCT (WSAO-OCT). We first demonstrate that svOCT can be a powerful fundus

imaging technique for visualizing the retinal vasculature network, which may be comparable to

the gold-standard technique called Fluorescein Angiography (FA). The strongest attribute of

svOCT is that it bypasses the use of intravenous fluorescein for vascular contrast enhancement,

and can be highly suitable for longitudinal monitoring of patients with vasculature-related

pathologies in the retina. In our second application, we present a GPU-accelerated CS-OCT

processing pipeline. The principle behind CS-OCT is to take advantage of the redundancy in

biological structures, such as the retina, for reconstructing volumes acquired at a sampling

density below the Nyquist criterion; the purpose is to justify decreasing volume acquisition time

by significantly subsampling the datasets. In our final application, we present a WSAO-OCT

system as a novel technique for cellular resolution imaging of the human photoreceptor layer.

This technique leverages the ultrahigh-speed processing rates of our GPU-based processing software to produce a real-time intensity-based merit function for en face image quality

optimization.


Acknowledgements

Throughout the course of my graduate studies, my senior supervisor, Dr. Marinko Sarunic, has

endowed me with knowledge of biomedical imaging and optics that would otherwise be

challenging to acquire elsewhere, and has patiently trained me to become an

independent engineer in preparation for my future career. For this, I would like to express my

deepest gratitude for his kindness, guidance, and willingness to put up with my half-baked

questions throughout both my undergraduate and graduate degrees.

My other supervisor and mentor, Dr. Yifan Jian, has always been extraordinarily patient

when explaining new concepts to me. For topics with which even he was not familiar, such as GPU programming issues, he was always pleased to sit alongside me and carefully run through the code, deciphering it line by line. Learning from, and with, Dr. Jian was always enjoyable; for this

I am very grateful for his enthusiasm and companionship.

I would also like to thank Dr. Faisal Beg for being on my committee, contributing invaluable comments on my thesis, and providing insightful suggestions for my research.

I would like to thank Dr. Jing Xu and Michelle Cua, who have provided me with so

much support throughout my research and thesis.

One of my greatest motivations to pursue a graduate degree was the opportunity to spend more time with the group of friends I made during my undergraduate internship with

BORG. To all members of the BORG group, thank you all for giving me the motivation to wake

up early in the morning and to take a 3-hour bus ride to and from SFU Burnaby campus just to sit

in a windowless lab for a minimum of 8 hours a day, every day.

Lastly, I want to thank my family for granting me an endless supply of love.


Table of Contents

Approval ......................................................................................................................................... II

Partial Copyright Licence ............................................................................................................. III

Ethics Statement ............................................................................................................................ IV

Abstract ......................................................................................................................................... V

Acknowledgements ....................................................................................................................... VI

Table of Contents......................................................................................................................... VII

List of Figures ............................................................................................................................... IX

List of Acronyms ....................................................................................................................... XIII

Chapter 1 Introduction ............................................................................................................... 1

1.1 Ophthalmic Imaging Techniques ..................................................................................... 2

1.1.1 Volumetric Imaging with OCT ................................................................................. 3

1.2 Fourier Domain Optical Coherence Tomography ............................................................ 4

1.3 Research Motivation ........................................................................................................ 6

Chapter 2 Graphics Processing Units in Optical Coherence Tomography ............................... 8

2.1 State-Of-The-Art Kepler GPUs ........................................................................................ 9

2.2 FD-OCT Processing Pipeline ......................................................................................... 11

2.2.1 Wavelength-To-Wavenumber Resampling ............................................................ 12

2.2.2 DC Subtraction........................................................................................................ 13

2.2.3 Dispersion Compensation ....................................................................................... 14

2.2.4 Fast-Fourier Transform ........................................................................................... 14

2.2.5 Post-FFT Processing ............................................................................................... 15

2.3 GPU-Accelerated FD-OCT Processing Pipeline............................................................ 15

2.3.1 Results ..................................................................................................................... 20

2.4 Summary ........................................................................................................................ 24

Chapter 3 Real-Time Speckle Variance Optical Coherence Tomography .............................. 26

3.1 Introduction .................................................................................................................... 27

3.2 Methods .......................................................................................................................... 28


3.2.1 Human Imaging ...................................................................................................... 29

3.2.2 Processing ............................................................................................................... 30

3.2.3 Dynamic Region of Interest Selection .................................................................... 31

3.3 Results and Discussion ................................................................................................... 33

3.3.1 Human Imaging ...................................................................................................... 34

3.4 Conclusion ...................................................................................................................... 37

Chapter 4 Compressive Sampling Optical Coherence Tomography with GPU ...................... 39

4.1 CS Acquisition of OCT datasets .................................................................................... 41

4.2 GPU-Based CS Reconstruction ...................................................................................... 42

4.2.1 Streamlined OCT Processing and CS Reconstruction ............................................ 46

4.2.2 Results and Discussion ........................................................................................... 47

4.3 Conclusion ...................................................................................................................... 54

Chapter 5 Wavefront Sensorless Adaptive Optics Optical Coherence Tomography .............. 55

5.1 Introduction .................................................................................................................... 56

5.2 Methods .......................................................................................................................... 58

5.2.1 Optical Design ........................................................................................................ 58

5.2.2 Workstation and Software Specifications ............................................................... 60

5.2.3 Image Acquisition and Optimization ...................................................................... 60

5.3 Results ............................................................................................................................ 61

5.3.1 System Resolution .................................................................................................. 61

5.3.2 Human Retinal Imaging with WSAO-OCT ............................................................ 62

5.4 Discussion and Conclusion ............................................................................................ 65

Chapter 6 Conclusions ............................................................................................................. 68

6.1 Future Work ................................................................................................................... 70

6.1.1 Speckle Variance OCT ........................................................................................... 70

6.1.2 CS-OCT .................................................................................................................. 71

6.1.3 WSAO-OCT ........................................................................................................... 72

References ..................................................................................................................................... 73


List of Figures

Figure 1-1 Colour-coded illustration of the raster scanning pattern............................................ 3

Figure 1-2 Optical Coherence Tomography Volume .................................................................. 4

Figure 1-3 Topology of the SD-OCT and SS-OCT systems ..................................................... 5

Figure 2-1 Block diagram of the first generation Kepler GPU, GK104. This GPU includes 8

SMXs (192 cores each), and 512 kB of L2 Cache. ................................................. 10

Figure 2-2 Block diagram of the second generation Kepler GPU, GK110. This GPU includes

15 SMXs (192 cores each), and 1536 kB of L2 Cache. .......................................... 10

Figure 2-3: FD-OCT Processing pipeline, including resampling, DC subtraction, numerical

dispersion compensation, FFT, and post-FFT processing. ...................................... 12

Figure 2-4 Flowchart of the Multithreaded based GPU implementation of the OCT data

transfer and processing pipeline with CUDA’s stream processing approach. This

figure is taken from Jian et al [14]. .......................................................................... 17

Figure 2-5 SD-OCT Processing Kernel, Volume Rendering, and en face generation kernel. The

SD-OCT Processing pipeline included Resampling, DC Subtraction, Hilbert

Transform, Dispersion Compensation, Zero Padding, FFT, and Post-FFT (Modulus,

Log, and Scaling). .................................................................................................... 18

Figure 2-6 SS-OCT Processing Kernel, Volume Rendering, and en face generation kernel. The

SS-OCT Processing pipeline included DC Subtraction, Zero Padding, FFT, and

Post-FFT. ................................................................................................................. 18

Figure 2-7 A comparison of the processing rates between GTX Titan, GTX Titan Black, and

Quadro K6000 graphics cards, all based on the GK110 GPU. ................................ 19

Figure 2-8 Evaluation of the processing rates attained with the unpadded processing pipeline

which reached up to 5.6 MHz with the NVIDIA Quadro K6000. ........................... 20

Figure 2-9 Ultrahigh speed processing and video rate volume rendering with simulated

megahertz acquisition rates. Top left – B-scan at the red-line location. Top right –

En face image of retina. Bottom left – interferogram. Bottom right – volume

rendered image of dataset. ....................................................................................... 21

Figure 2-10 Screen capture of a video demonstrating real time acquisition, processing, and

visualization of the optic nerve head in human retinal imaging with a 100 kHz,


1060 nm acquisition system [28]. Top Left – Image of the subject being imaged.

Top Right – Image of the OCTViewer program being used in real-time imaging

with OCT at the Optic Nerve Head (ONH). Bottom Left – Example of image

tearing artifact due to subject motion. Bottom Right – Visualization of the ONH

and the macular region of the eye. ........................................................................... 22

Figure 2-11 Similar video to Figure 2-10 except this provides visualization of the anterior

chamber in real-time with a 50 kHz 1310 nm acquisition system. .......................... 23

Figure 2-12 Screen capture of a video demonstrating ultrahigh speed volume acquisition and

volume processing as a preliminary demonstration of OCT-guided microsurgery.

The operating instrument represents the surgical tool, while the fingertip represents the patient's region of interest to be operated on. ............................................................ 24

Figure 3-1 A visualization of the speckle variance computation. This demonstrates how

speckles corresponding to moving particles in the retina generate high variance

values (bright pixels). Three B-scans are acquired at a single location and shown on

the left, while the speckle variance image is shown on the right. ........................... 29

Figure 3-2 Motion registration and notch filtering on svOCT en face images. The motion

registration algorithm removes many of the motion artifacts. Severe artifacts that cannot be corrected in this way are removed with a Fourier-based notch filter

mask. ........................................................................................................................ 31

Figure 3-3 An example of the dynamic-user selection of the retinal layers for visualizing the en

face images of the vasculature network. The colours are associated with specific

layers of the retina: red - the NFL/GCL, green – the IPL, and blue – the OPL. The

right image represents the colour-mapped en face images of the three layers. ....... 32

Figure 3-4 Representative NVIDIA Visual profiler timeline for a single-svOCT iteration of the

SDOCT pipeline (Quadro K6000 GPU) is separated into three components: a series

of kernels for SDOCT processing, speckle variance kernel with en face rendering

kernels, and idle time for display. ............................................................................ 34

Figure 3-5 The svOCT B-scan images in a, d, and g illustrate the selected depths of interest for

an area of 2x2 mm2. The intensity en face images at the user-selected region of

interests are shown in b, e, and h. The svOCT en face images at the user-selected

region of interests are illustrated in c, f, and i. The speckle variance B-scan j

presents a colour-coded combination of a, d, and g. k is a combined gray-scale en

face image of all three layers. The superimposed colour coded svOCT en face

image is presented in l. ............................................................................................ 35

Figure 3-6 Comparison between the intensity en face images (a, c, e, g) and the color-coded

svOCT en face images (b, d, f, h), respectively. Images (a–d) were acquired at the

optic nerve head region, and images (e–h) were acquired in the macula. Field-of-

view: (a, b) 2 × 2 mm2, (c, d) 1 × 1 mm2, (e, f) 2 × 2 mm2, and (g, h) 5 × 5 mm2. ............ 36


Figure 4-1 Comparison between a uniformly-spaced fully-sampled raster scan pattern (left),

and a randomly-spaced sparse B-scan raster scan pattern (right). Blue dots represent

the scanning beam. The green arrows represent the forward scan direction. The red

arrow represents the fly-back scan direction. .......................................................... 41

Figure 4-2 Transitional stages between 1.) CS-acquired volume, 2.) Redistributed volume, and

3.) Reconstruction volume with the CS algorithm. ................................................. 43

Figure 4-3 Plot of the IST solver algorithm. ............................................................................. 45

Figure 4-4 A representative screen capture of the GPU-Based reconstruction and visualization

software. Top left – Compressed Volume. Top Middle – Sparsely redistributed

Volume. Top Right – Reconstructed volume. Bottom Left – En face image of

reconstructed volume. Bottom Middle – fast B-scan extracted at the red line on the

en face image. Bottom right – slow B-scan extracted at the green line on the en face

image. ...................................................................................................................... 47

Figure 4-5 A logarithmic plot of the MSE values computed during real-time GPU-FFT-based CS reconstruction. .............................................................. 49

Figure 4-6 Figures comparing quality of en face images at different iterations. A) Before

reconstruction. B) 200 iterations. C) 400 iterations. D) 600 iterations. E) 800

iterations. F) 1000 iterations. These images were extracted from a reconstructed

50% missing data cyst volume. ............................................................................... 49

Figure 4-7 Results comparing quality of slow-scan images at different iterations. A) Before

reconstruction. B) 200 iterations. C) 400 iterations. D) 600 iterations. E) 800

iterations. F) 1000 iterations. This was performed on a 50% subsampled cyst

dataset. ..................................................................................................................... 50

Figure 4-8 A comparison between a) 75% missing data, b) 50% missing data, and c) fully

acquired data. The first row of images, a, b, and c, are slow-scan images at the

horizontal center of the volume. The second row of images, d, e, and f, are en face

images of the reconstructed datasets. A total of 1200 iterations were used for

reconstruction. ......................................................................................................... 52

Figure 4-9 A comparison between the reconstructed B-scans and acquired B-scans from 50%

and 75% missing data. The panels a and b are from 50% missing data volume,

while c and d are from the 75% missing data. ......................................................... 52

Figure 4-10 A comparison of CS reconstruction for 50% subsampled datasets for five different

subjects. Volumes 9-OD-V6, 10-OD-V5, and H011-OD-V6 are datasets taken from

healthy subjects, while volumes 11-OS-V3 and 17-OD-V4 are taken from patients

with age-related macular degeneration. ................................................................... 53


Figure 5-1 Schematic of the WSAO-OCT system: DM, Deformable Mirror; SLD,

Superluminescent Diode; FC, 10/90 reference/sample Fiber Coupler; G1, G2,

horizontal and vertical galvanometer scanning mirrors; PC, polarization controller;

DC, dispersion compensation; L0: f=16 mm; L1, L2: f = 300 mm; L3, L4: f = 200

mm, L5, L6: f = 150 mm; L7: f = 200 mm; L8: f = 250 mm; L9: two lenses f = 200

and f = 300 mm (equivalent to f = 120 mm); L10: two lenses of f = 300 mm

(equivalent to f = 150 mm). All lenses were achromatic doublets. A trial lens plane

is available for inserting corrective lenses for focusing. ......................................... 59

Figure 5-2 Experimental quantification of the lateral resolution using a USAF resolution target

and a 30 mm focal length air-spaced achromatic lens. The achievable lateral

resolution was 2.76 μm with a volume size of 1024x200x80. Scale bar is 50 μm. . 62

Figure 5-3 Comparison of the photoreceptor visibility in the en face image before and after

WSAO optimization. The left column is a comparison of B-scans, and the right

column is a comparison of the en face images generated at the IS/OS layer. NFL,

Nerve Fiber Layer; IS/OS, Inner and Outer Segment; RPE, Retina Pigment

Epithelium; BM, Bruch’s Membrane. Scale bar is 50 μm. A real-time optimization

video has been included in Media 1. ....................................................................... 64

Figure 5-4 Images of the photoreceptor mosaic acquired along the superior meridian in a non-

mydriatic pupil. These include results acquired at four retinal eccentricities for

comparison centered at: 5.1°, 2.2 °, 1.6°, and 1.0°. The field of view is 1.0°x0.4° for

all images. Scale bar is 50 μm. ................................................................................ 65


List of Acronyms

AO Adaptive Optics

CPU Central Processing Unit

CS Compressive Sampling

CT Computed Tomography

CUDA Compute Unified Device Architecture

DM Deformable Mirror

FD Fourier Domain

FLOPS Floating Point Operations per Second

GPU Graphics Processing Unit

MEMS Microelectromechanical System

MSE Mean Squared Error

NA Numerical Aperture

OCT Optical Coherence Tomography

SS Swept Source

SD Spectral Domain

SLD Super Luminescent Diode

svOCT Speckle Variance Optical Coherence Tomography

WFS Wavefront Sensor

WSAO Wavefront Sensorless Adaptive Optics


Chapter 1

Introduction

In medical imaging, one of the greatest challenges faced today is the ever-increasing demand for high-speed processing driven by rapidly growing data acquisition rates. Many medical imaging research fields have turned to new enabling technologies, such as general-purpose graphics processing units (GPUs), to provide significant advancements in computational throughput. These medical imaging techniques include computed tomography [1–3], ultrasonography [4,5], magnetic resonance imaging [6,7], and Optical Coherence Tomography, to name a few. Computed tomography and magnetic resonance imaging have typically been used to acquire images of large structural organs of the body, such as the organs within the chest cavity and the brain. Ultrasonography uses ultrasound to image musculoskeletal structures harmlessly, which lends the technique to frequent diagnostic imaging sessions. Optical Coherence Tomography (OCT), an imaging technique first introduced in 1991

by Huang et al. [8], has become a prominent imaging technology in ophthalmology. OCT is

often described as an optical analogue to ultrasound imaging. These two imaging modalities use


analogous terminology (e.g. A-scans and B-scans) and they can both perform imaging non-

invasively and in vivo. However, their fields of application differ due to their significant

variation in imaging resolution and depth of penetration. In OCT, the depth of penetration is

ultimately determined by the tissue or specimen and the wavelength of the light source being used, and typically ranges from several hundred microns to several millimeters. In this thesis,

we investigate three applications of OCT made possible by leveraging the computational power of the GPU, and describe the contribution that each can potentially offer to ophthalmology and vision research.

1.1 Ophthalmic Imaging Techniques

In recent years, the emergence of OCT has made a significant impact on clinical ophthalmology.

In particular, OCT is used to image the retina, the light sensitive tissue at the back of the eye.

The importance of OCT for clinical applications is due to its complementarity to other imaging

techniques that have been predominantly used in the clinic, including fundus photography and

scanning laser ophthalmoscopy (SLO).

Fundus photography is commonly used in optometry and ophthalmology, and uses a camera attached to a microscope to image the retina through the pupil. The main advantage of fundus photography is its fast image acquisition: the retina is flood-illuminated and the light scattered from the back of the eye is captured in a single exposure. However, this technique has several disadvantages: 1.) the flood illumination can cause discomfort to the patient being imaged due to the significant exposure to visible light, 2.) fundus photography has limited lateral resolution, and 3.) it cannot provide any volumetric information about the retina.

SLO is a non-invasive imaging technique that applies principles from confocal microscopy to generate high lateral resolution images of the retina. However, SLO has limited potential in


capturing axial information. Therefore, SLO is conventionally used for capturing fundus images at lateral resolutions potentially higher than those achievable with fundus photography.

OCT is a technique that uses low coherence interferometry for capturing axial

information representing the retinal layers in the patient's eye while maintaining very high lateral resolution. OCT is also non-invasive, and conventionally uses near-infrared light for imaging to prevent the discomfort caused by exposure to visible light during imaging sessions. These characteristics make OCT well suited for volumetric imaging in ophthalmology.

1.1.1 Volumetric Imaging with OCT

In OCT, the depth profile acquired at a single location is called an A-scan. In order to acquire a

volumetric image, the beam of light is scanned across the sample using a conventional raster scanning pattern, where an A-scan is acquired at discrete positions along the forward scan

direction. This mechanism is very similar to the scan pattern used in cathode ray tube television

screens, as shown in Figure 1-1.

Figure 1-1 Colour-coded illustration of the raster scanning pattern.

This raster scanning pattern is used to scan the focused OCT light beam across the sample to acquire a tomographic dataset. Each discrete position of the scanning beam corresponds to a single depth profile (i.e. an A-scan). A-scans are acquired during the forward scan interval, and acquisition is suspended during the flyback interval. This pattern is repeated until the entire volume within the region of interest has been scanned. Other scanning techniques can be used depending on the application, such as radial scanning for endoscopic OCT.

The reconstruction of an entire volume consists of assembling the A-scans from a single

forward scan interval into a B-scan. This is followed by combining the B-scans from each

forward scan interval and reconstructing the entire volume. A general illustration of the volume

reconstruction of the retinal dataset is shown in Figure 1-2.

Figure 1-2 Optical Coherence Tomography Volume

In OCT, the throughput rates are generally quantified in terms of number of A-scans per

second. For example, 100 000 A-scans per second is equivalent to 100 kHz. In Figure 1-2, the

volume shown has 1024 points per A-scan, 512 A-scans per B-scan, and 128 B-scans per

volume. This is equivalent to a total of 65 536 A-scans, which would require approximately 0.65

seconds to acquire with a 100 kHz system.
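
As a quick worked check of this arithmetic, the short snippet below computes the volume acquisition time from the scan geometry; the values simply mirror the example above and are not tied to any particular system.

    // Sketch: acquisition time of a raster-scanned OCT volume, using the
    // geometry of Figure 1-2 and a 100 kHz A-scan rate as example values.
    #include <cstdio>

    int main() {
        const int    ascansPerBscan  = 512;
        const int    bscansPerVolume = 128;
        const double ascanRateHz     = 100e3;  // 100 kHz system

        const long   totalAscans = static_cast<long>(ascansPerBscan) * bscansPerVolume;  // 65 536
        const double seconds     = totalAscans / ascanRateHz;                            // ~0.655 s

        std::printf("%ld A-scans -> %.3f s per volume\n", totalAscans, seconds);
        return 0;
    }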

1.2 Fourier Domain Optical Coherence Tomography

Fourier Domain Optical Coherence Tomography (FD-OCT) is a subset of OCT that became a

significant breakthrough in ophthalmic imaging allowing for high throughput data acquisition in


real-time (>100 kHz) [9–12]. There are two types of FD-OCT: Spectral Domain (SD) OCT and

Swept Source (SS) OCT. A diagram of both the SD-OCT and SS-OCT systems is presented in

Figure 1-3.

In SD-OCT, a broadband light source launches light into the system via a fiber coupler, where it is divided into the reference arm and the sample arm. In the reference arm, the light is

reflected off of a stationary, silver-coated mirror. In the sample arm, the light is reflected off of

the scattering tissue, such as the retina. The light returning from both arms will interfere in the

fiber coupler. The resulting interference pattern is separated into its constituent wavelengths by a

spectrometer and is called an interferogram. Performing a Fourier-transform on the interferogram

yields a spatial representation of the scattering signature of the sample.

In SS-OCT, a wavelength-swept light source is used instead of a broadband light source; it sweeps through a band of wavelengths over time, which forms the basis of the acquired spectrum. Both the reference arm and sample arm are configured the same as in an SD-OCT system.

The spectrometer, however, is replaced with a photodiode detector capable of capturing the

overall intensity of the light at each given swept wavelength and generating the corresponding

spectrum for analysis.

Figure 1-3 Topology of the SD-OCT and SS-OCT systems


Both SD-OCT and SS-OCT require very similar processing steps that revolve around the

Discrete Fourier Transform (DFT) of the spectrum. However, before the introduction of

massively parallel computing, the high-throughput data acquisition in OCT left the processing rates achievable on conventional general-purpose processors much slower than is

required for real-time visualization.

1.3 Research Motivation

The processing pipeline for OCT is computationally intensive, which has posed many limitations

for real-time imaging. In a previous bachelor’s thesis [13] and a peer-reviewed publication in the

Journal of Biomedical Optics [14], we provided an in-depth description of the GPU-based implementation of the OCT processing and display pipeline. This work included a detailed technical overview of our video-rate volume rendering implementation and results, and demonstrated the enormous potential of GPUs for general-purpose massively parallel computing applications

in real-time visualization for medical imaging. In order to further leverage the power of the

GPUs in ophthalmic imaging with OCT, this research concentrates on three applications for OCT

that benefitted significantly from the state-of-the-art GPUs:

1.) Label free angiography using a technique known as Speckle Variance OCT (svOCT) for

real-time in vivo visualization of the retinal vasculature network

2.) GPU-based accelerated volume reconstruction using a Compressive Sampling OCT (CS-

OCT) processing pipeline, which can accelerate volume acquisition via a sparse subsampling scanning pattern.


3.) High resolution imaging at the cellular level (e.g. photoreceptors) using Wavefront

Sensorless Adaptive Optics OCT (WSAO-OCT) for real-time in vivo imaging of the

photoreceptor layers.

This thesis is organized as follows. Chapter 2 first provides an in-depth description of the FD-

OCT processing pipeline, the GPUs used in this thesis, and a throughput comparison between

these GPUs and the GPUs used previously [13,14]. This chapter includes a background

overview of the GPU implementation of the FD-OCT processing pipeline. Chapter 3 introduces

the GPU implementation of real-time speckle variance OCT, along with the corresponding motion correction algorithm for increasing SNR by reducing motion artifacts [15]. Chapter 4 presents the GPU implementation of the reconstruction of volumes acquired with compressive sampling OCT, and provides a thorough comparison between the results generated on the CPU versus the GPU. Chapter 5 introduces the first implementation of wavefront sensorless adaptive optics OCT for imaging human photoreceptors. Finally, Chapter 6 concludes the thesis with a discussion of the presented results and a general proposal of future possibilities for extending each component of this research.


Chapter 2

Graphics Processing Units in Optical

Coherence Tomography

Before graphics processing units (GPUs) were used for general purpose processing, central

processing units (CPUs) were the predominant microprocessor for performing computational

tasks. However, CPUs are optimized for sequential computational throughput on a small number of processing cores, which fundamentally limits the maximum achievable processing speed for most computationally expensive processing pipelines. In other words, clock frequencies would need to increase by one or even two orders of magnitude beyond modern-day frequencies (~2-4 GHz) for CPUs to match the throughput rates achievable on GPUs.

Since the introduction of the Compute Unified Device Architecture (CUDA) for general purpose

computing with massively parallel processors in 2006 [16], GPUs have been increasingly used

for facilitating real-time imaging in medical imaging systems that have high data throughput and

computationally expensive processing algorithms. To date, NVIDIA has released two complete

generations of GPUs, the Fermi and Kepler architectures, which have been targeted not only at gamers, but also at a wide range of academic and industrial research fields, including medical imaging. A detailed overview of the Fermi and Kepler architectures was provided in a previous


bachelor’s thesis to demonstrate their applicability for computational acceleration in OCT,

including the chips GF110 and GK104 [13]. The bachelor’s thesis also reported on the hardware

components included in the corresponding GPU architectures, and emphasized their advantages and disadvantages for the OCT processing pipeline. For example, the GK104 offers more cores than the GF110 but is not optimized for double-precision floating-point computation. For OCT, only single-precision arithmetic is required to produce sufficient quality for real-time imaging; thus, Kepler-based GPUs are more suitable.

In this research, a new microprocessor based on the second-generation, state-of-the-art Kepler architecture was used: the GK110. The corresponding graphics cards containing the GK110 were the GTX Titan, GTX Titan Black, and Quadro K6000, which will be

described in the following subsections of this chapter. This chapter will first focus on the

improvements of the GK110 over the GK104 GPU, followed by a description of the OCT

processing pipeline, and finally a comparison between the throughput achievable in both

generations of GPUs.

2.1 State-Of-The-Art Kepler GPUs

In NVIDIA’s second generation Kepler GK110 GPU [17], the number of transistors doubled

from that of the GK104 chip [18] (from 3.54 billion transistors to 7.1 billion transistors). This

increase in transistors provided nearly double the number of processors available on each die,

from as many as 1536 cores in GK104, to as many as 2880 cores in the GK110. These

processing cores are partitioned into clusters of processors, called streaming multiprocessors (SMX). The number of operational SMXs on the GPU depends on the

graphics card model. In the GTX 680, the GK104 had 8 operational SMXs with 192 cores per


SMX, as shown in Figure 2-1. The GK110 had a total of 15 SMXs with the same 192 cores per

SMX, equivalent to an 88% increase in cores from GK104, as shown in Figure 2-2.

Figure 2-1 Block diagram of the first generation Kepler GPU, GK104. This GPU includes 8

SMXs (192 cores each), and 512 kB of L2 Cache.

Figure 2-2 Block diagram of the second generation Kepler GPU, GK110. This GPU

includes 15 SMXs (192 cores each), and 1536 kB of L2 Cache.


The overall processing power of a GPU is dependent on two main contributing factors: the total

number of processors, and the core clock rate. Table 2-1 provides a general overview and

comparison between the Kepler-based Graphics cards used in this research.

Table 2-1 A comparison between the specifications of different Kepler-Based GPUs and

their corresponding single-precision floating point operations per second (FLOPS)

From this table, it can be seen that the Quadro K6000 is capable of up to 5.2 single-precision

Tera Floating-Point Operations per Second (TFLOPS). The nominal throughput of each GPU for

single-precision floating point operation is important when determining its potential for

accelerating the OCT processing pipeline. A brief description of the processing steps required in

FD-OCT, as well as their implementation with CUDA version 6.0, is given in section 2.2.
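
As a rough illustration of where such a figure comes from, peak single-precision throughput is commonly estimated as two floating-point operations (one fused multiply-add) per CUDA core per clock cycle. The core count and clock used below approximate the Quadro K6000; the exact boost clock is an assumption and varies by card.

    // Sketch: back-of-the-envelope estimate of peak single-precision FLOPS for a
    // GK110-based card, assuming 2 FLOPs (one fused multiply-add) per core per
    // clock. The ~900 MHz value is an assumed approximate boost clock.
    #include <cstdio>

    int main() {
        const int    cudaCores   = 2880;    // 15 SMXs x 192 cores
        const double coreClockHz = 0.9e9;   // ~900 MHz (assumption)

        const double peakTflops = 2.0 * cudaCores * coreClockHz / 1e12;  // ~5.2 TFLOPS
        std::printf("Estimated peak: %.1f single-precision TFLOPS\n", peakTflops);
        return 0;
    }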

2.2 FD-OCT Processing Pipeline

The standard FD-OCT processing pipeline consists of five major procedures:

1.) Wavelength to wavenumber (λ-to-k) resampling

2.) DC Subtraction

3.) Dispersion Compensation

4.) Fast Fourier Transform

5.) Post-FFT Processing

These five procedures are illustrated in Figure 2-3 with the corresponding data transformation

after each step.


Figure 2-3: FD-OCT Processing pipeline, including resampling, DC subtraction, numerical

dispersion compensation, FFT, and post-FFT processing.

2.2.1 Wavelength-To-Wavenumber Resampling

In FD-OCT, the Fourier transform pair of distance in the spatial domain is wavenumber. In SS-

OCT, the detected interferogram can be sampled linearly in wavenumber through the use of a

fixed-path interferometer that acts as a sample clock. However in SD-OCT, the line-scan CCD

camera captures a spectrum that is non-linear in wavenumber. The captured intensity signal must

be linear in wavenumber prior to performing the Fast-Fourier Transform (FFT), such that the


transformed signal will be linear in the spatial domain. The wavelength-to-wavenumber (λ-to-k)

resampling procedure is performed in order to transform the interferogram into the

corresponding linear-in-wavenumber domain. Alternative methods to perform the Fourier

transform based on non-uniformly sampled spectra have been presented in the literature [19–

21].

In our SD-OCT processing pipeline, the GPU texture memory is an attractive solution for

implementing the resampling procedure. In fact, in modern graphics and video rendering

engines, texture memory is one of the most crucial components of the GPU, as it allows hundreds of billions of texture fetches per second (specified as the texture fillrate) for real-time resampling of realistic and artistic scenery [16]. Each of these texture fetches performs a hardware-based interpolation operation in a time comparable to a standard memory access to DRAM. For this reason, texture memory is well suited to the wavelength-to-wavenumber resampling operation in OCT.
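
A minimal sketch of such a resampling kernel using the CUDA texture path is shown below. It assumes a texture object bound to a float CUDA array holding one raw spectrum with hardware linear filtering enabled, and a precomputed table of fractional wavelength indices; the names and the one-spectrum-at-a-time geometry are illustrative rather than the exact kernel used in our software.

    // Sketch: wavelength-to-wavenumber resampling of one spectrum with the GPU's
    // hardware-interpolated texture fetch. 'spectrumTex' is assumed to be bound to
    // a float CUDA array with cudaFilterModeLinear, and 'resampleIdx[k]' holds the
    // precomputed (fractional) position in the raw, linear-in-wavelength spectrum
    // that maps to output sample k of the linear-in-wavenumber spectrum.
    #include <cuda_runtime.h>

    __global__ void resampleLambdaToK(cudaTextureObject_t spectrumTex,
                                      const float* resampleIdx,
                                      float* resampled,
                                      int samplesPerAscan)
    {
        int k = blockIdx.x * blockDim.x + threadIdx.x;
        if (k < samplesPerAscan) {
            // +0.5f so that integer indices address texel centres under linear filtering.
            resampled[k] = tex1D<float>(spectrumTex, resampleIdx[k] + 0.5f);
        }
    }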

2.2.2 DC Subtraction

The inherent spectrum of the light source acts as a carrier for the interferometric fringes.

However, during signal processing, the spectral shape does not contribute to the signal of the

sample; instead it adds undesirable artifacts. Subtracting this spectral shape from each

interferogram will provide the residual interferometric fringes caused by the pathlength

mismatches between the reflections from the reference arm and sample arm.

To remove these artifacts, the DC component is first estimated by taking the average of

multiple interferograms (on the order of 100s to 1000s of interferograms). In CUDA, the DC

spectral shape estimation is performed in a standard kernel that sums and averages many A-scans


across multiple B-scans. This spectral shape is then subtracted from each raw spectrum to obtain

the residual interferometric fringes.
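
The following CUDA sketch illustrates this two-step procedure; the A-scan-major memory layout, kernel names, and launch geometry are assumptions made for illustration rather than the exact kernels in our pipeline.

    // Sketch: estimate the DC spectral shape by averaging the A-scans of a B-scan,
    // then subtract it from every raw spectrum.
    __global__ void estimateDC(const float* spectra, float* dc,
                               int samplesPerAscan, int ascansPerBscan)
    {
        int k = blockIdx.x * blockDim.x + threadIdx.x;   // index along the spectrum
        if (k < samplesPerAscan) {
            float sum = 0.0f;
            for (int a = 0; a < ascansPerBscan; ++a)
                sum += spectra[a * samplesPerAscan + k];
            dc[k] = sum / ascansPerBscan;
        }
    }

    __global__ void subtractDC(float* spectra, const float* dc,
                               int samplesPerAscan, int totalAscans)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;   // global sample index
        if (i < samplesPerAscan * totalAscans)
            spectra[i] -= dc[i % samplesPerAscan];
    }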

2.2.3 Dispersion Compensation

Dispersion is the effect of the different spectral components of broadband light experiencing different indices of refraction in a medium. In OCT, hardware approaches, such as the addition of glass blocks in the reference arm to

match the dispersion caused by the lenses in the sample arm, can mitigate much of the dispersion

in the system; however, numerical dispersion compensation is still beneficial for fine-tuning

purposes, as hardware methods cannot provide exact index matching easily. The computational

procedure for dispersion compensation is based on the technique introduced by Wojtkowski et al. [22], which uses a Hilbert Transform to generate an analytic representation of the interferometric fringes. This representation allows the user to modify the phase of the signal to numerically compensate for dispersion.

The Hilbert Transform can be performed in software by first executing a forward FFT on

the resampled signal, then scaling the transformed signal by setting the negative spectrum to zero

and doubling the positive spectrum, and finally performing an inverse FFT to obtain the analytic

spectrum. The dispersion phase component can then be added to the analytic interferometric

fringes, and adjusted until a crisp image is observed. An example of a GPU-based

implementation of dispersion compensation was first demonstrated by Zhang et al. [23].
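
A sketch of the two element-wise kernels that sit around the Hilbert-transform FFTs is shown below: one applies the analytic-signal scaling after the forward FFT, and the other multiplies the analytic fringe by a user-tuned polynomial phase. The second- and third-order coefficients a2 and a3, the centred-index convention, and the buffer layout are illustrative assumptions rather than the exact implementation.

    // Sketch: element-wise kernels for the software Hilbert transform and the
    // numerical dispersion correction applied to every A-scan in a buffer.
    #include <cufft.h>
    #include <cuComplex.h>

    // Zero the negative-frequency half and double the positive half of each
    // forward-FFT'd spectrum (analytic-signal scaling, even-length A-scans).
    __global__ void hilbertScale(cufftComplex* data, int samplesPerAscan, int totalAscans)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= samplesPerAscan * totalAscans) return;
        int k = i % samplesPerAscan;                 // position within one A-scan
        float s = (k == 0 || k == samplesPerAscan / 2) ? 1.0f
                : (k <  samplesPerAscan / 2)           ? 2.0f : 0.0f;
        data[i].x *= s;
        data[i].y *= s;
    }

    // Multiply each analytic fringe sample by exp(i*phi(k)), with a polynomial
    // phase phi(k) adjusted by the user until the image appears crisp.
    __global__ void applyDispersionPhase(cufftComplex* fringe, int samplesPerAscan,
                                         int totalAscans, float a2, float a3)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= samplesPerAscan * totalAscans) return;
        float k   = (i % samplesPerAscan) - samplesPerAscan * 0.5f;   // centred index
        float phi = a2 * k * k + a3 * k * k * k;
        cufftComplex w = make_cuComplex(cosf(phi), sinf(phi));
        fringe[i] = cuCmulf(fringe[i], w);
    }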

2.2.4 Fast-Fourier Transform

After performing the aforementioned pre-processing procedures, the interferometric fringes can

be Fourier-transformed to obtain the spatial information corresponding to the relative location of

the scattering media in the sample arm. The GPU implementation uses the FFT operation


provided by the CUDA FFT library (the same one used for the Hilbert Transform), which is based on the Cooley-Tukey algorithm and achieves an efficient O(n log n) complexity for the DFT, as opposed to the standard O(n²) complexity.
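
A minimal host-side sketch of setting up this batched 1D transform with the CUDA FFT (cuFFT) library is given below; in a real-time pipeline the plan would be created once at start-up and reused for every B-scan, and error checking is omitted for brevity.

    // Sketch: batched 1D complex-to-complex FFT over all A-scans of a B-scan.
    // 'd_fringes' is assumed to already hold the pre-processed fringes on the GPU.
    #include <cufft.h>
    #include <cuda_runtime.h>

    void fftBscan(cufftComplex* d_fringes, int samplesPerAscan, int ascansPerBscan,
                  cudaStream_t stream)
    {
        cufftHandle plan;
        cufftPlan1d(&plan, samplesPerAscan, CUFFT_C2C, ascansPerBscan);  // batched 1D plan
        cufftSetStream(plan, stream);                                    // run within a CUDA stream
        cufftExecC2C(plan, d_fringes, d_fringes, CUFFT_FORWARD);         // in-place transform
        cufftDestroy(plan);
    }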

2.2.5 Post-FFT Processing

After performing the FFT, it is often necessary to scale the image to enhance visualization and

contrast of the sample. These operations include modulus, logarithm, and normalization. The

purpose of the modulus operation is to remove all phase components in the dataset, and to only

capture the intensity values of the processed volumes. The logarithm operation compresses the intensity peaks of the linear-scale data so that the dataset is less skewed towards large intensity peaks. Lastly, the normalization operation scales the entire dataset to the range of 0 (black) to 1 (white), which is required when displaying floating-point gray-scale textures with OpenGL.

Rather than performing these three operations in three separate kernels, the

implementation takes into consideration the added overhead for launching kernels, and performs

all three operations in a single kernel.
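
The sketch below shows what such a fused kernel might look like; the decibel display limits minDb and maxDb are assumed user-set parameters rather than values from our implementation.

    // Sketch: fused post-FFT kernel taking the modulus of each complex sample,
    // compressing it logarithmically, and normalizing it into [0, 1] for display.
    #include <cufft.h>
    #include <cuComplex.h>

    __global__ void postFFT(const cufftComplex* spectrum, float* image,
                            int numSamples, float minDb, float maxDb)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= numSamples) return;

        float mag = cuCabsf(spectrum[i]);              // modulus (discard phase)
        float db  = 20.0f * log10f(mag + 1e-9f);       // logarithmic scaling
        float val = (db - minDb) / (maxDb - minDb);    // normalize to [0, 1]
        image[i]  = fminf(fmaxf(val, 0.0f), 1.0f);     // clamp for OpenGL display
    }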

2.3 GPU-Accelerated FD-OCT Processing Pipeline

In a previous publication [14], the GTX 680 was used with CUDA version 4.2. In this thesis, we

used CUDA version 6.0 for development with the GTX Titan, GTX Titan Black, and Quadro

K6000 GPUs. The development environment used was Microsoft Visual Studio 2012 for

programming C++, CUDA, and OpenGL. For performing the FFT, the CUDA FFT library was

used to perform the batched 1D FFT in the processing pipeline [24]. The 2D and 3D FFTs from the CUFFT library were also used in this thesis, as will be elaborated in the following


chapters. Similar to [14], the NVIDIA visual profiler was used for GPU benchmarking and

debugging purposes.

One predominant goal in GPU programming for OCT is to optimize the processing

pipeline such that each kernel can execute as fast as possible while outputting results equal in

quality to that of the corresponding gold-standard implementation, such as with MATLAB. One

major challenge in programming the GPU is memory management. A conventional

implementation for GPU processing in OCT is to perform memory transfers sequentially with

the processing kernels, such as the pipeline described in several publications [25–27]. However,

this implementation imposes a bottleneck on the GPU-accelerated algorithm that can often

render it even slower than the equivalent CPU implementation. Jian et al. [14] demonstrated the first

successful implementation of GPU-accelerated OCT in which the GPU processing pipeline was

fully concurrent with the memory transfer operations. This implementation was capable of

eliminating the bottlenecks imposed by the memory transfers from CPU to GPU, and vice versa.

CUDA provides a functionality called “streams” that essentially allows full concurrency between

memory transfers and kernel operations. Figure 2-4 is a flowchart demonstrating the use of

multiple CPU threads for handling acquisition and GPU operations, and also the use of multiple

CUDA streams for handling memory transfer and the OCT processing pipeline within the GPU.


Figure 2-4 Flowchart of the Multithreaded based GPU implementation of the OCT data

transfer and processing pipeline with CUDA’s stream processing approach. This figure is

taken from Jian et al. [14].
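
The host-side sketch below illustrates the stream-based overlap of transfers and kernels in the spirit of Figure 2-4; it is not the multithreaded architecture of our software, and the placeholder processBscan kernel merely stands in for the full per-B-scan pipeline. Host buffers are assumed to be page-locked (cudaMallocHost) so that the asynchronous copies can overlap with kernel execution.

    // Sketch: overlapping host-to-device copies with kernel execution using two
    // CUDA streams. Alternating B-scans are issued to alternating streams so that
    // the copy of one B-scan overlaps the processing of the previous one.
    #include <cuda_runtime.h>
    #include <cstddef>

    // Placeholder standing in for the full per-B-scan OCT pipeline
    // (resampling, DC subtraction, dispersion compensation, FFT, post-FFT).
    __global__ void processBscan(float* d_bscan, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) d_bscan[i] = d_bscan[i];   // no-op stand-in
    }

    void processVolume(const float* h_pinnedRaw, float* d_raw,
                       int bscanSamples, int numBscans)
    {
        cudaStream_t streams[2];
        for (int s = 0; s < 2; ++s) cudaStreamCreate(&streams[s]);

        for (int b = 0; b < numBscans; ++b) {
            cudaStream_t st = streams[b % 2];
            size_t offset   = static_cast<size_t>(b) * bscanSamples;

            // Queue the copy of the next raw B-scan and its processing kernel in
            // the same stream; work in the other stream proceeds concurrently.
            cudaMemcpyAsync(d_raw + offset, h_pinnedRaw + offset,
                            bscanSamples * sizeof(float),
                            cudaMemcpyHostToDevice, st);
            processBscan<<<(bscanSamples + 255) / 256, 256, 0, st>>>(d_raw + offset,
                                                                     bscanSamples);
        }
        for (int s = 0; s < 2; ++s) {
            cudaStreamSynchronize(streams[s]);
            cudaStreamDestroy(streams[s]);
        }
    }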

To demonstrate a real-time scenario of the flowchart presented in Figure 2-4, screen captures

of the NVIDIA Visual Profiler software for monitoring the GPU processing and volume

rendering pipeline for both SD-OCT and SS-OCT are presented in Figure 2-5 and Figure 2-6.

The pipeline is divided into three major sections: OCT processing, volume rendering, and en face

image generation. For SD-OCT, the processing pipeline consisted of Resampling, DC

Subtraction, Hilbert Transform, Dispersion Compensation, Zero Padding, FFT, and Post-FFT

operations [14]. For our SS-OCT implementations, the resampling was omitted due to the use of

a k-clock-controlled sampling mechanism, and the Hilbert Transform and Dispersion

Compensation were omitted due to sufficient hardware dispersion compensation.


Figure 2-5 SD-OCT Processing Kernel, Volume Rendering, and en face generation kernel.

The SD-OCT Processing pipeline included Resampling, DC Subtraction, Hilbert

Transform, Dispersion Compensation, Zero Padding, FFT, and Post-FFT (Modulus, Log,

and Scaling).

Figure 2-6 SS-OCT Processing Kernel, Volume Rendering, and en face generation kernel.

The SS-OCT Processing pipeline included DC Subtraction, Zero Padding, FFT, and Post-

FFT.


In our first demonstration of the GPU-based video rate volume processing and rendering

for FD-OCT, the attainable processing rate was 1.1 MHz for SD-OCT and 2.2

MHz for SS-OCT [14]. A comparison between the processing rates achieved with GTX 460

(Fermi), GTX 560 (Fermi), and GTX 680 (Kepler) was provided. For this research, a comparison

between the state-of-the-art Kepler GPUs, GTX Titan, GTX Titan Black, and Quadro K6000 is

provided in Figure 2-7. The performance of the GTX Titan Black and the Quadro K6000 is nearly identical because their GK110 GPU specifications differ by less than 1.5% in clock speed, as shown in Table 2-1. Nonetheless, the highest achievable processing rate

with the Quadro K6000 was 3.3 MHz for SS-OCT, and 1.9 MHz for SD-OCT.

Figure 2-7 A comparison of the processing rates between GTX Titan, GTX Titan Black,

and Quadro K6000 graphics cards, all based on the GK110 GPU.

To further increase the OCT processing rate, zero padding can be disabled, which halves the amount of data processed from the FFT kernel onward.
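As a brief illustration (the spectrum length and batch size below are examples rather than the exact system parameters), the two batched cuFFT plans show how omitting the zero padding halves the transform length, and therefore the amount of data handled by the FFT kernel and every kernel after it:

#include <cufft.h>

// Example parameters only: 1024-sample spectra, 300 A-scans per batch.
void createFftPlans(cufftHandle* planPadded, cufftHandle* planUnpadded)
{
    const int nSamples = 1024;
    const int nAscans  = 300;
    // Padded pipeline: spectra zero-padded to 2048 points before the FFT.
    cufftPlan1d(planPadded, 2 * nSamples, CUFFT_C2C, nAscans);
    // Unpadded pipeline: transform the raw 1024-point spectra directly,
    // halving the data handled by the FFT and all post-FFT kernels.
    cufftPlan1d(planUnpadded, nSamples, CUFFT_C2C, nAscans);
}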

Figure 2-8 demonstrates that the maximum-achievable A-scan rate was 5.6 MHz for the

unpadded SS-OCT processing pipeline with a single NVIDIA Quadro K6000. While these


processing rates are unnecessary for most standard OCT applications, they become crucial in

other applications, such as svOCT and CS-OCT, which require more computationally-expensive

image processing techniques.

Figure 2-8 Evaluation of the processing rates attained with the unpadded processing

pipeline which reached up to 5.6 MHz with the NVIDIA Quadro K6000.

2.3.1 Results

The ultrahigh-speed processing rate was first demonstrated with a multi-megahertz acquisition simulation using file-reading. A screen capture of this video is

presented in Figure 2-9, where the upper left window shows an averaged and bilateral-filtered B-

scan, the upper right window is the en face image by sum-voxel projection of the volume, the

bottom left window is the interferogram, and the bottom right window is the volume-rendered

image of the retina.


Figure 2-9 Ultrahigh speed processing and video rate volume rendering with simulated

megahertz acquisition rates. Top left – B-scan at the red-line location. Top right – En face

image of retina. Bottom left – interferogram. Bottom right – volume rendered image of

dataset.

To demonstrate the real-time acquisition and display capability of our GPU program, we

performed an in vivo real-time SS-OCT imaging session where we imaged the human retina and

anterior chamber using the system introduced by Young et al. [28]. This system had a maximum

A-scan acquisition rate of 100 kHz with a 1060 nm center wavelength swept source and was

deployed in the Eye Care Center clinic at the Vancouver General Hospital. A second SS-OCT

system operating at 50 kHz with a 1310 nm center wavelength swept source was specifically

designed for imaging the anterior chamber. Screen captures from two videos captured from an

imaging session of the retina and the anterior chamber are presented in Figure 2-10 and in Figure

2-11, respectively. In these two videos, the subjects were asked to move their eyes gradually to demonstrate real-time alignment.


In Figure 2-10, four screen captures of the first video are presented to demonstrate the

1060 nm SS-OCT system for imaging the retina. These include an image of the subject (top left),

an image of the Optic Nerve Head (top right), an example of the image tearing artifact caused by patient motion (bottom left), and an image partially visualizing the macula and the ONH simultaneously

(bottom right).

Figure 2-10 Screen capture of a video demonstrating real time acquisition, processing, and

visualization of the optic nerve head in human retinal imaging with a 100 kHz, 1060 nm

acquisition system [28]. Top Left – Image of the subject being imaged. Top Right – Image

of the OCTViewer program being used in real-time imaging with OCT at the Optic Nerve

Head (ONH). Bottom Left – Example of image tearing artifact due to subject motion.

Bottom Right – Visualization of the ONH and the macular region of the eye.

In Figure 2-11, another four screen captures of the second video are presented during an imaging

session with the 1310 nm SS-OCT system for visualizing the anterior chamber. The top left is an


image of the subject, while the other three screen captures are images of the anterior chamber at

different alignment positions.

Figure 2-11 A video similar to Figure 2-10, except that it provides visualization of the anterior chamber in real time with a 50 kHz, 1310 nm acquisition system.

Ultrahigh-speed OCT also has the potential to become a surgical guidance tool that can be

more effective than currently used stereo-vision binocular microscopes. A preliminary

demonstration of intraoperative OCT is presented in Figure 2-12, where a needle represents the

surgical tool, and the fingertip represents the operating region of interest for the patient. This

video was acquired with an 830 nm SLD-based system running at approximately 10 volumes per second with a volume size of 1024 × 128 × 128.


Figure 2-12 Screen capture of a video demonstrating ultrahigh speed volume acquisition

and volume processing as a preliminary demonstration of OCT-guided microsurgery. The

needle represents the surgical tool, while the fingertip represents the region of interest to be operated on.

2.4 Summary

This preliminary research has provided the required GPU programming foundation for further

applications in OCT. Although the GPUs used in our previous research [14,26] are no longer state of the art, our GPU-accelerated OCT implementation scaled readily to more powerful graphics cards such as the GTX Titan, GTX Titan Black, and Quadro K6000. The highest processing rates achieved were 1.9 MHz for SD-OCT, 3.3 MHz for SS-OCT, and 5.6 MHz for

unpadded SS-OCT. The following chapters will describe in detail how the state-of-the-art GPUs

were beneficial in implementing svOCT and associated image processing algorithms for imaging


the retinal vasculature, accelerating the reconstruction pipeline used in compressive sampling

(CS) OCT, and providing low-latency and high-throughput closed feedback loop adaptive optics

OCT.


Chapter 3

Real-Time Speckle Variance Optical

Coherence Tomography

In this chapter, we describe a GPU-accelerated processing platform for real-time acquisition and

display of flow contrast images with FD-OCT in human eyes in vivo. This chapter is based on

the work published and presented by Xu, Wong et al. [15]. Motion contrast from blood flow is

processed using the speckle variance OCT technique (svOCT), which relies on acquiring

multiple B-scan frames at the same location and tracking the change of the speckle pattern. Real-

time human retinal imaging using a custom built OCT system with processing and display

performed on the GPU is presented with an in-depth analysis of performance metrics. The display

output included structural OCT data, en face projections of the intensity data, and the svOCT en

face projections of retinal microvasculature; these results compare projections with and without

speckle variance in the different retinal layers to reveal significant contrast improvements. The

capability of performing real-time svOCT imaging of the retinal vasculature may be a useful tool

in a clinical environment for monitoring disease-related pathological changes in the

microcirculation, such as diabetic retinopathy.


3.1 Introduction

OCT is a well-established non-invasive imaging modality for micrometer-resolution cross-

sectional visualization of retinal structures [29]. Specialized extensions, such as visualization of

blood flow, have been developed to enable functional imaging of biological tissues [22]. In

addition to traditional Doppler OCT imaging, which is sensitive to flow rate [30], techniques

have recently been proposed that highlight tissue in motion, but are insensitive to rate; these

include speckle variance (svOCT) [31], phase variance (pvOCT) [32] and optical

microangiography (OMAG) [33]. More details on these techniques are provided in the detailed

review article by Mahmud et al. [34]. Flow contrast has predominantly been computed in post-processing. A few notable exceptions have been presented in [35,36], where

real-time flow contrast was demonstrated during acquisition in 2D as well as in 3D. For effective

volume acquisition of flow contrast data, real-time visualizations of capillary networks via en

face projections of vasculature are highly desirable.

We present GPU-accelerated processing for real-time svOCT acquisition with cross-sectional (B-scan) and en face display of flow contrast in the retina. We integrated real-time svOCT processing software for visualization of the human retinal vasculature into an SS-OCT system, and used this system to demonstrate the potential of svOCT for clinical applications in ophthalmology. We also introduce the GPU implementation of svOCT in our ongoing open-source project [37]. A second svOCT system for mouse imaging with SD-OCT was presented

by Xu, Wong et al. [15] but will not be described in this thesis.


3.2 Methods

Real-time flow contrast imaging requires high speed acquisition and processing. For SS-OCT,

both the resampling and dispersion compensation procedures were omitted due to hardware

compensation, as explained by Jian et al. [14]. For the purposes of this research, the term ‘BM-

scan’ is used to indicate a set of multiple B-scans acquired at the same location. For our svOCT

implementation, each BM-scan consisted of three B-scan frames. The equation used to compute

each speckle variance frame ($sv_{jk}$) from the OCT intensity BM-scans ($I_{ijk}$) is:

$$ sv_{jk} = \frac{1}{N}\sum_{i=1}^{N}\left(I_{ijk} - \frac{1}{N}\sum_{i=1}^{N} I_{ijk}\right)^{2}, \qquad \text{Eq 3-1} $$

where $i$ is the frame index within a BM-scan, $j$ and $k$ are the lateral (width) and axial positions of each

frame, respectively, and N is the number of frames per BM-scan [7]. In our case, the volume

acquisition size was 1024 pixels per A-scan, 300 A-scans per B-scan, and N = 3, for a total of

900 B-scans (300 BM-scans) per volume. A visual example of a single speckle variance cross

sectional image is presented in Figure 3-1. In this figure, the three B-scans acquired at a single

location are inserted into Eq 3-1 as $I_{ijk}$, and the output $sv_{jk}$ is written into the speckle variance cross-sectional image. This speckle variance computation emphasizes particles in the retina that are in motion by assigning high variance values to the corresponding speckles in the image.

For this reason, the blood vessels are easily differentiable from the structural tissue in the retina.
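As a rough sketch of how Eq 3-1 maps onto the GPU, the kernel below assigns one thread per (j, k) pixel; the array names and memory layout are assumptions made for illustration rather than the exact open-source implementation [37]:

// Minimal sketch of Eq 3-1: one thread per (j, k) pixel of a BM-scan.
// Layout assumption: the N frames of one BM-scan are stored contiguously as
// intensity[i * width * depth + j * depth + k].
__global__ void speckleVarianceKernel(const float* intensity, float* sv,
                                      int width, int depth, int N)
{
    int j = blockIdx.x * blockDim.x + threadIdx.x;   // lateral position
    int k = blockIdx.y * blockDim.y + threadIdx.y;   // axial position
    if (j >= width || k >= depth) return;

    int pix = j * depth + k;
    float mean = 0.0f;
    for (int i = 0; i < N; ++i)                      // inner sum of Eq 3-1
        mean += intensity[i * width * depth + pix];
    mean /= N;

    float var = 0.0f;
    for (int i = 0; i < N; ++i) {                    // outer sum of Eq 3-1
        float d = intensity[i * width * depth + pix] - mean;
        var += d * d;
    }
    sv[pix] = var / N;
}

With our acquisition parameters (300 A-scans of 1024 pixels and N = 3), one such launch produces one speckle variance frame per BM-scan.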


Figure 3-1 A visualization of the speckle variance computation. This demonstrates how

speckles corresponding to moving particles in the retina generate high variance values

(bright pixels). Three B-scans are acquired at a single location and shown on the left, while

the speckle variance image is shown on the right.

3.2.1 Human Imaging

Human imaging was performed with a custom-built 1060 nm SS-OCT system with a line rate of 100 kHz, a 500 MSPS digitizer (AlazarTech), and 1024 acquired points per interferogram. The details of this system have been previously reported [28]. The axial resolution was ~6 µm in tissue, and the lateral resolution was ~8.5 µm in the retina using a beam

diameter of 1.3 mm at the pupil. Retinal images in the foveal region were acquired from a

healthy volunteer. These scanning parameters required a total acquisition time for an entire volume (1024 × 300 × 900) of approximately 3.1 seconds, including fly-back time. This long

acquisition time is the main disadvantage of svOCT. Within a span of 3.1 seconds, the acquired

dataset is highly susceptible to involuntary human motions, which can severely deteriorate the

quality of the svOCT volumes due to motion artifacts. A strategy for numerically correcting

motion artifacts is explained as part of the svOCT processing pipeline in the next section.


3.2.2 Processing

We used our custom GPU program, previously presented in [11], as the basis for implementing

svOCT. For development, we used CUDA Toolkit 6.0 and Microsoft Visual C++ 2012 on a 64-

bit Windows 7 operating system. In our OCT processing workstation, we used a Quadro K6000

GPU and two Intel Xeon E-2620 CPUs.

Our previous report described the structure of the program and GPU processing steps for

FD-OCT, including our approach for batch processing [14]. For svOCT processing, we selected

a batch size of 30 frames of raw data (10 BM-scans), transferred the batch to GPU, executed the

FD-OCT batch processing pipeline, and launched the speckle variance kernel, which computed the variance of the speckle intensity for the entire batch [38]. An en face projection

image was extracted from the selected region for visualizing the svOCT data. Previously, the en

face image was generated using a sum-voxel algorithm to visualize a two-dimensional projection

of the retina; in this chapter, the maximum-intensity projection (MIP) algorithm was used

instead, which projects the maximum value, as opposed to the mean value, in each A-scan.
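A maximum-intensity projection maps naturally onto one GPU thread per A-scan; the sketch below (memory layout, names, and the user-selected depth bounds are illustrative assumptions) scans the selected depth range and keeps the largest value:

// Minimal MIP sketch: one thread per A-scan, projecting the maximum value
// within the user-selected depth range [zStart, zEnd).
// Layout assumption: volume[x * numY * depth + y * depth + z].
__global__ void mipEnFaceKernel(const float* volume, float* enFace,
                                int numX, int numY, int depth,
                                int zStart, int zEnd)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= numX || y >= numY) return;

    const float* ascan = volume + ((size_t)x * numY + y) * depth;
    float maxVal = ascan[zStart];
    for (int z = zStart + 1; z < zEnd; ++z)
        maxVal = fmaxf(maxVal, ascan[z]);
    // Replacing the maximum with a running sum (divided by zEnd - zStart)
    // would recover the sum-voxel projection used previously.
    enFace[x * numY + y] = maxVal;
}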

Additionally, the GPU was used for real-time display of the images including the svOCT

B-scan, the intensity en face image, and the svOCT en face image. To enhance the quality of the

blood vessels in the en face images, a Gaussian filter was implemented to smooth the en face

svOCT image. For the target application of retinal imaging, the program extracts flow contrast

data from up to three user-selected depth regions, processes an en face projection for each

region, and combines all three projections into a superimposed, RGB colour-coded, en face

projection.
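Superimposing the three depth-resolved projections amounts to writing each projection into one colour channel of the output image; the sketch below is a minimal illustration under the assumption that each projection has already been normalized to the range [0, 1]:

// Minimal sketch: superimpose three en face projections as the R, G, and B
// channels of an RGBA output image.
__global__ void combineRgbKernel(const float* nflGcl, const float* ipl,
                                 const float* opl, uchar4* rgba, int numPixels)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= numPixels) return;
    uchar4 px;
    px.x = (unsigned char)(255.0f * nflGcl[idx]);  // red   - NFL/GCL projection
    px.y = (unsigned char)(255.0f * ipl[idx]);     // green - IPL projection
    px.z = (unsigned char)(255.0f * opl[idx]);     // blue  - OPL projection
    px.w = 255;                                    // fully opaque
    rgba[idx] = px;
}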


As mentioned in the previous subsection, the motion artifacts can severely deteriorate the

svOCT en face images, which can render the images unusable if left uncorrected. A single-

pixel rigid registration algorithm of the BM-scans was implemented on the GPU to reduce

motion artifact in the svOCT image. This implementation was based on the subpixel motion

registration algorithm presented by Guizar-Sicairos et al. [39]. For larger motion artifacts that

cannot be corrected with the motion registration algorithm, a notch filter was used to remove the

remaining white-stripe artifacts in the en face images. The motion registration and notch filtering

process is presented in Figure 3-2. The exact details of the svOCT implementation are available

in the open source code [37].
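A hedged sketch of the notch-filtering step is shown below: the 2D FFT of the en face image is multiplied element-wise by a precomputed mask that is zero inside the notch and one elsewhere, and then inverse-transformed. The mask design and variable names are assumptions; the exact implementation is in the open-source code [37]:

#include <cufft.h>

// Minimal sketch: attenuate stripe artifacts by multiplying the 2D FFT of the
// en face image by a precomputed notch mask (0 inside the notch, 1 elsewhere).
// The inverse FFT is applied afterwards with cuFFT.
__global__ void applyNotchMask(cufftComplex* enFaceSpectrum, const float* mask,
                               int numPixels)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= numPixels) return;
    enFaceSpectrum[idx].x *= mask[idx];
    enFaceSpectrum[idx].y *= mask[idx];
}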

Figure 3-2 Motion registration and notch filtering on svOCT en face images. The motion

registration algorithm removes many of the motion artifacts; the severe artifacts that cannot be corrected are removed with a Fourier-based notch filter mask.

3.2.3 Dynamic Region of Interest Selection

The generation of the svOCT en face images relies on a dynamic user-selection feature implemented in the svOCT program. The maximum intensity projection

algorithm is used for generating the en face images between the selected regions of interest.

During real-time imaging, the user has the option of selecting any three layers from the volume

for en face image display; this feature is demonstrated in Figure 3-3. On the left is a volume-


rendered image of the fovea, where red, green, and blue enclosing boxes were outlined on the

volume. The different colour boxes were selected specifically to enclose three different layers of

interest in the retina: red – the Nerve Fiber Layer (NFL)/Ganglion Cell Layer (GCL), green – the

Inner Plexiform Layer (IPL), and blue – the Outer Plexiform Layer (OPL). The corresponding en

face images generated at each layer are shown in the middle column. In order to generate a

depth-encoded colour-mapped en face image of the vasculature network, we superimposed the

three en face images into a single RGB en face image shown on the right. This user-selection

approach assumes that the retinal layers are flat throughout the field of view (i.e. negligible

curvature and no motion) such that the rectangular boxes can accurately segment the three layers

of interest. For datasets with large curvatures and gross motion artifacts, a more sophisticated

segmentation algorithm, such as the Graph-Cut segmentation algorithm, is required for precise

segmentation of the retinal layers.

Figure 3-3 An example of the dynamic-user selection of the retinal layers for visualizing the

en face images of the vasculature network. The colours are associated with specific layers

of the retina: red - the NFL/GCL, green – the IPL, and blue – the OPL. The right image

represents the colour-mapped en face images of the three layers.


3.3 Results and Discussion

In our previous report, we presented processing rates for the FD-OCT processing pipeline using

the GeForce GTX 680 GPU [14]. In this research, the Quadro K6000 was used for benchmarking, and provided an increase in the SS-OCT processing rate from 2.2 MHz to 3.3 MHz. To further

increase the processing rate, the zero-padding procedure was omitted from the processing

pipeline described by Jian et al. The achievable processing rate with the unpadded pipeline was

~5.6 MHz.

The entire processing pipeline for the spectral domain svOCT was captured in NVIDIA

Visual Profiler for a single batch and is shown in Figure 3-4. This profiler timeline includes the

standard SS-OCT kernels (i.e. DC subtraction, FFT, and post-FFT), the motion registration

algorithm, the speckle variance calculation, the notch filter, and the device-to-host (D2H)

memory transfer of the en face images. In our previous publication, the processing rate for the

entire motion correction and svOCT pipeline was reported to be 460 kHz [15]. In this research,

after omitting the zero-padding processing procedure and reducing the B-scan size from

1024 × 300 to 512 × 300 pixels, the execution time for a batch of 30 B-scans (10 BM-scans) was ~10.9 ms, which is equivalent to a processing rate of ~820 kHz. The quality of the svOCT en face images was unaffected by omitting zero-padding from the SS-OCT pipeline and reducing the B-scan size, and remained suitable for real-time visualization purposes. For matching ultrahigh

acquisition rates above 1 MHz, a possible method for accelerating the processing pipeline is to implement a multi-GPU solution, in which the volume is partitioned into multiple sections, each processed on a separate GPU [15].


Figure 3-4 Representative NVIDIA Visual Profiler timeline for a single svOCT iteration of the SD-OCT pipeline (Quadro K6000 GPU), separated into three components: a series of

kernels for SDOCT processing, speckle variance kernel with en face rendering kernels, and

idle time for display.

3.3.1 Human Imaging

The representative svOCT images of the human retina in Figure 3-5 were acquired over an area

of 2 × 2 mm². Panels a, d, and g in Figure 3-5 display the svOCT cross-sectional images generated with Eq 3-1. The selected regions of interest correspond to: a – the NFL/GCL, d – the IPL, and g – the

OPL, respectively. The intensity en face images (b, e, and h) were generated at the corresponding

locations selected in a, d, and g. The svOCT en face images (c, f, and i) were generated between

the layers bounded by the red lines selected on the svOCT cross-sectional images. In Figure 3-5

j, all three user-selected regions of interest are indicated on the svOCT cross-sectional image using the red, green, and blue colour-coded lines. Figure 3-5 k is a gray-scale image of the

averaged intensity en face image for all three layers. Figure 3-5 l is the colour-coded en face

image that represents the superimposed svOCT en face projections of all three vascular layers.

Comparison of the intensity en face (center column) with the svOCT en face (right column)

images reveals a significant contrast improvement for blood vessels with svOCT; for example,


capillaries from the NFL/GCL are barely visible in Figure 3-5 e, but are clearly distinguishable

in Figure 3-5 f.

Figure 3-5 The svOCT B-scan images in a, d, and g illustrate the selected depths of interest

for an area of 2 × 2 mm². The intensity en face images at the user-selected regions of interest are shown in b, e, and h. The svOCT en face images at the user-selected regions of interest

are illustrated in c, f, and i. The speckle variance B-scan j presents a colour-coded

combination of a, d, and g. k is a combined gray-scale en face image of all three layers. The

superimposed colour coded svOCT en face image is presented in l.


Representative flow contrast images of the retina acquired from human volunteers are presented in

Figure 3-6. Figure 3-6 a-d) are en face images of the Optic Nerve Disk, while Figure 3-6 e-h) are

en face images of the fovea. In the gray-scale intensity en face images (Figure 3-6 a, c, e, and g),

the larger blood vessels can be visualized, but the small vessels in the vascular network are

indistinguishable from the background structural intensity. The svOCT en face images show well-defined capillary networks in the retina, with depth encoded via colour-coding.

Figure 3-6 Comparison between the intensity en face images (a, c, e, g) and the color-coded

svOCT en face images (b, d, f, h), respectively. Images (a–d) were acquired at the optic

nerve head region, and images (e–h) were acquired in the macula. Field of view: (a, b) 2 × 2 mm², (c, d) 1 × 1 mm², (e, f) 2 × 2 mm², and (g, h) 5 × 5 mm².


3.4 Conclusion

We have demonstrated real-time flow contrast imaging of the human retina. This technology has high potential for angiography in clinical applications. Blood vessels in the NFL and IPL are often difficult to visualize in the OCT intensity images; svOCT enhances the contrast for visualization of the blood vessels. The simplicity of the svOCT processing lends itself to real-time angiography, which is important for studying various diseases affecting the retinal vasculature,

e.g. diabetic retinopathy, ischemia, etc. In addition, more computationally intensive algorithms,

such as pvOCT, can be used in post-processing for retrieving potentially higher contrast images.

We demonstrated visualization of the capillary network in human retinas with our svOCT system over an area of ~2 × 2 mm². In order to increase the field of view, more A-scans need to be

acquired at a faster rate to maintain the same resolution while mitigating motion artifacts. For our

current system, a simple extension of tiled acquisition and mosaicking of adjacent volumes

would also permit acquisition over larger areas [40]. Other simple techniques could also be used

to limit subject motion, such as incorporating a fixation target and a bite bar.

In conclusion, we have demonstrated the use of a Quadro K6000 to perform the overall svOCT processing, motion registration, notch filtering, and en face display at rates of up to 820 kHz

for SS-OCT. The microvasculature in the retina is clearly distinguishable with the addition of the

speckle variance calculation. The ultrahigh speed processing rates that we have demonstrated

provide opportunities to implement GPU-based image processing algorithms to further enhance

visualization quality for the blood vessels in the en face images. The applications of real-time

svOCT are numerous, such as monitoring progressive changes to retinal vessels in diabetic

retinopathy in ophthalmology, and visualizing blood vessel networks in cancer research [41].

The GPU used in this research is inexpensive, and the complete svOCT pipeline can easily be


integrated into practical FD-OCT systems for use in a clinic. The source code, which includes the transfer of interferometric data from the host to the GPU, svOCT processing, and display, is available as part of our open-source project [37].


Chapter 4

Compressive Sampling Optical

Coherence Tomography with GPU

As described in the introductory chapter, OCT is a high throughput imaging modality that

requires stable fixation from the patient. The time needed to acquire a volume with sufficient sample points to provide an accurate representation of the subject’s retina is typically on the order of several seconds. The long acquisition time can pose a challenge

to patients who have difficulty maintaining stable fixation, especially children or patients with dementia.

One common approach for increasing volume acquisition rates in OCT is to integrate high-speed cameras in SD-OCT [33,42], or high-speed swept sources in SS-OCT [43,44]. However, these strategies are not always feasible due to the high costs of state-of-

the-art ultrahigh speed acquisition hardware. A second common approach is to change the

imaging protocol by reducing the volume size of the datasets to increase the volume acquisition rate; however, this approach compromises the imaging resolution achievable with the optics.

An alternative to these techniques is to investigate software solutions that exploit image

sparsity for maintaining comparable image resolution while significantly reducing acquisition

time. Compressive Sampling (CS), first introduced by Candès et al. [45–48], is a computational


approach used for reconstructing highly subsampled data with high fidelity. The combination of

CS with OCT imaging has been shown to reduce volumetric scan times [28,49]. In conventional

signal acquisition, the total number of samples acquired must obey the Nyquist-Shannon

sampling theorem for successful reconstruction of the acquired signal. In CS-OCT acquisition, the technique relies on random subsampling of the signal and leverages the redundancy

present in biological structures that can be represented in the frequency domain with high

compression and minimal loss of information. For example, in CT, the use of CS is highly

desirable for minimizing the dose of harmful ionizing radiation (X-rays) required to capture an entire dataset. In other non-ionizing imaging modalities, such as MRI and OCT, the time

required to acquire sufficient samples in order to accurately represent the region of interest can

often exceed the period of involuntary motion, such as heart pulses, tremors, and movements

triggered by discomfort. Sparse-sampling of the volumetric datasets and reconstructing the

missing data using CS is effective for reducing these involuntary motion artifacts [28].

The drawback of CS reconstruction is that the algorithm is an iterative process that can require from one hour to over ten hours to fully reconstruct a single dataset when performed in MATLAB. The exact amount of time depends on the target volume size and the

desired reconstruction error threshold, which is conventionally quantified by the Mean Squared

Error (MSE). The focus of this chapter is to provide a summary background of CS data

acquisition in OCT (CS-OCT), describe a GPU implementation of the CS reconstruction

algorithm that is capable of providing volumetric reconstruction times on the order of several

minutes, and discuss alternate solutions for volume compression as a method of efficiently

archiving OCT datasets.


4.1 CS Acquisition of OCT datasets

A detailed description of the CS technique and applications to OCT is beyond the scope of this

thesis. Please refer to [49] for an in-depth treatment of the problem formulation and applications of CS. In short, the method proposed by Lebed et al. is to use a predefined pseudo-

random pattern as a method of data acquisition suitable for the CS reconstruction algorithm [49].

Young et al. presented a follow-up publication to demonstrate an OCT system for real-time CS

acquisition of sparsely acquired volume datasets [28]. In this publication, Young et al.

demonstrated a prototype CS-SS-OCT system that used a randomly-spaced raster scanning

pattern and provided a detailed comparison of these results with those acquired with the equi-

spaced pattern used in conventional OCT. This comparison included volumes subsampled at different percentages versus the fully acquired volume. Examples of a fully-sampled and a subsampled raster scan pattern are provided in Figure 4-1, where the fully-

sampled raster scan pattern is presented on the left, and the randomly-spaced B-scan raster scan

pattern is presented on the right.

Figure 4-1 Comparison between a uniformly-spaced fully-sampled raster scan pattern

(left), and a randomly-spaced sparse B-scan raster scan pattern (right). Blue dots represent

the scanning beam. The green arrows represent the forward scan direction. The red arrow

represents the fly-back scan direction.


An undesirable aspect of the CS-OCT method was the long execution time for data

reconstruction after rapid acquisition. Young et al. stated that the reconstruction time for a

512 × 512 × 512 voxel dataset with MATLAB running on a Core i7 CPU was approximately two hours [28]. However, with multiple volumes acquired per patient and tens of patients imaged per day, a two-hour reconstruction time per volume makes routine use of CS highly impractical. Additionally, the CS reconstruction software in MATLAB requires first

executing the MATLAB OCT-processing software to obtain compressed volumetric information,

and then applying the CS algorithm to reconstruct the missing B-scans. This process requires an

in-depth technical knowledge of the processing pipeline, from the raw and compressed OCT data to the reconstructed volumetric retinal datasets. A clinically suitable CS-OCT program for visualizing reconstructed subsampled data should encompass all components of processing and display, and should complete within the time of a patient imaging session, which is nominally less than fifteen minutes.

4.2 GPU-Based CS Reconstruction

Previous GPU-based CS reconstruction implementations were demonstrated on an SD-OCT system by Xu et al. [50,51]. Rather than a sparse B-scan implementation, Xu et al. based their compressive sampling mechanism on a pseudo-random spectral subsampling approach

presented by Liu et al. [52]. The advantage of the implementation proposed by Lebed et al. [49]

is that the backscattered signal from the sample is fully acquired at the detector. In contrast, Liu

et al. discard a portion of their backscattered signal in order to accelerate the line scan rate of

their CCD. However, Lebed et al.’s approach is much more computationally expensive due to its

3D based reconstruction, while Liu et al.’s method requires only one-dimensional reconstruction

of the spectral information. Xu et al. presented a work that aggregated their own spectral CS-


reconstruction algorithm with the 3D-based CS reconstruction algorithm presented by Lebed et

al. to reconstruct datasets with over 80% missing data [53]; however, this publication was a

MATLAB implementation that was not optimized for speed. In this chapter, a GPU

implementation of the 3D-FFT-CS algorithm for reconstructing the missing B-scans is presented

to demonstrate the feasibility of implementing Lebed et al.’s approach for clinical applications.

During CS-OCT acquisition, B-scans were acquired at pre-recorded pseudo-random

locations based on a mask. The first step in the CS reconstruction is to redistribute the B-scans, which are stored contiguously in memory, such that each acquired B-scan is placed into its corresponding location in the volume. Next, the CS algorithm is used to fill in the missing data.

A flow chart showing the appearance of the CS-OCT volume data at three stages of the

reconstruction pipeline is presented in Figure 4-2. On the left is a representative image of the

sparsely acquired volume with the 100 CS-OCT B-scans. After placing the acquired B-scans in

the correct physical location in the volume, the CS reconstruction algorithm is applied to

generate the missing B-scans.

Figure 4-2 Transitional stages between 1) the CS-acquired volume, 2) the redistributed volume, and 3) the volume reconstructed with the CS algorithm.
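A minimal host-side sketch of the redistribution step is given below (the mask representation and names are assumptions): each acquired B-scan is copied to its recorded slow-scan index in a zero-initialized device volume, leaving the missing B-scans as zeroes for the CS algorithm to fill in:

#include <cuda_runtime.h>

// Minimal sketch: scatter the contiguously stored, acquired B-scans into their
// recorded slow-scan positions of a zero-initialized device volume.
// maskIndices[i] holds the slow-scan index of the i-th acquired B-scan.
void redistributeBscans(const float* hostBscans, const int* maskIndices,
                        int numAcquired, int bscanSize,
                        float* devVolume, int numSlowScans)
{
    cudaMemset(devVolume, 0, (size_t)numSlowScans * bscanSize * sizeof(float));
    for (int i = 0; i < numAcquired; ++i) {
        cudaMemcpy(devVolume + (size_t)maskIndices[i] * bscanSize,
                   hostBscans + (size_t)i * bscanSize,
                   bscanSize * sizeof(float), cudaMemcpyHostToDevice);
    }
}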

An iterative soft threshold (IST) solver introduced by Daubechies et al. [54] was used to

perform the CS reconstruction of the missing data. This IST solver takes advantage of objects


that: 1) are spatially smooth, and 2) can be represented in the lower frequencies of the Fourier-

transformed dataset. Mathematically, the IST can be simplified to the following sequence of

iterates:

$$ \chi_{n} = S_{\mu_{n}}\left(\chi_{n-1} + \mathcal{F}\{x_{0} - K x_{n-1}\}\right), \qquad \text{Eq 4-1} $$

where $x_{0}$ is the original sparsely acquired dataset, $x_{n-1}$ is the reconstructed volume from the prior iteration (a volume of zeroes in the first iteration), $K$ is the acquisition mask containing 1's at sampled locations and 0's elsewhere, $\mathcal{F}$ is the 3D Fourier Transform operator, $\chi_{n-1}$ is the Fourier-transformed equivalent of $x_{n-1}$ (i.e. $\chi_{n-1} \equiv \mathcal{F}\{x_{n-1}\}$), $\chi_{n}$ is the Fourier-transformed equivalent of $x_{n}$, and $S_{\mu_{n}}$ is the non-linear soft thresholding operator. As stated by Daubechies et al. [54], $S_{\mu_{n}}$ can be represented using the following non-linear function definition:

$$ S_{\mu_{n}}(x) = \begin{cases} x - \mu_{n} & \text{if } x \geq \mu_{n} \\ 0 & \text{if } |x| < \mu_{n} \\ x + \mu_{n} & \text{if } x \leq -\mu_{n} \end{cases}, \qquad \text{Eq 4-2} $$

where $\mu_{n}$ is the threshold value used for the corresponding iteration of reconstruction. This definition of $S_{\mu_{n}}(x)$ is visualized in the plot shown in Figure 4-3. The total number of thresholds is a user-defined parameter that determines the number of reconstruction iterations; for the purposes of this chapter, the total number of thresholds for CS reconstruction is denoted $N$. The total execution time for reconstruction is directly proportional to $N$.
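Eq 4-2 maps directly onto an element-wise CUDA kernel with one thread per value; the sketch below is a minimal real-valued illustration (in the actual pipeline the operator is applied to the Fourier coefficients of the volume):

// Minimal sketch of Eq 4-2: element-wise soft thresholding with threshold mu,
// one thread per value.
__global__ void softThresholdKernel(float* x, float mu, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    float v = x[idx];
    if (v >= mu)       x[idx] = v - mu;   // x - mu   if x >= mu
    else if (v <= -mu) x[idx] = v + mu;   // x + mu   if x <= -mu
    else               x[idx] = 0.0f;     // 0        if |x| < mu
}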


Figure 4-3 Plot of $S_{\mu_{n}}(x)$ from the IST solver algorithm.

By increasing the number of threshold values, $N$, the CS reconstruction can reach a smaller steady-state reconstruction error, but requires far more time to execute. The threshold values are selected based on the absolute values of the initial Fourier representation, $\chi_{0}$, of the original dataset, $x_{0}$. To do this, the modulus of the Fourier representation, $|\chi_{0}|$, must first be sorted into descending order. In CUDA, a sorting operation from the Thrust library [55], which provides a highly efficient non-comparison radix sort implementation for GPUs, is used to sort the modulus of the Fourier-domain dataset. After sorting $|\chi_{0}|$, the sequence of thresholds, $\mu_{0}$ to $\mu_{N}$, can be extracted and used for computing Eq 4-1 and Eq 4-2.
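A hedged sketch of this threshold selection with the Thrust library is given below; the linear sampling of the sorted magnitudes is an assumption rather than the exact schedule used in our program:

#include <thrust/device_vector.h>
#include <thrust/sort.h>
#include <thrust/functional.h>
#include <vector>

// Minimal sketch: sort |chi0| in descending order on the GPU and pick N
// threshold values from the sorted magnitudes.
std::vector<float> buildThresholds(thrust::device_vector<float>& modChi0, int N)
{
    // Descending sort on the GPU; Thrust selects an efficient backend.
    thrust::sort(modChi0.begin(), modChi0.end(), thrust::greater<float>());
    std::vector<float> mu(N);
    size_t total = modChi0.size();
    for (int n = 0; n < N; ++n) {
        size_t rank = (size_t)(n + 1) * (total / (size_t)(N + 1));
        mu[n] = modChi0[rank];   // element access copies device -> host
    }
    return mu;
}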

We elected to use the FFT as the basis for the sparsifying transform. To execute the 3D

FFT operation from Eq 4-1, the CUDA FFT library was used [24]. In comparison to other types

of sparsifying transforms, such as Wavelets and Surfacelets, the FFT requires significantly more

iterations to reach comparable visualization quality. Despite this disadvantage, we chose the FFT because it is readily available in CUDA and provides a balance between computational efficiency and recovered image quality [11].
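Putting Eq 4-1 together with the 3D FFT, one iteration of the IST solver can be sketched as follows. The kernel and variable names are illustrative, the soft thresholding of complex coefficients by their modulus is an assumed generalization of Eq 4-2, and the cufftHandle is assumed to have been created with cufftPlan3d for the volume dimensions:

#include <cufft.h>

// r = x0 - K .* x  (data-consistency residual; K is 1 on acquired B-scans).
__global__ void residualKernel(const cufftComplex* x0, const float* K,
                               const cufftComplex* x, cufftComplex* r, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    r[i].x = x0[i].x - K[i] * x[i].x;
    r[i].y = x0[i].y - K[i] * x[i].y;
}

// chi += update (element-wise complex addition).
__global__ void addKernel(cufftComplex* chi, const cufftComplex* update, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    chi[i].x += update[i].x;
    chi[i].y += update[i].y;
}

// Soft-threshold each Fourier coefficient by its modulus (assumed
// complex-valued generalization of Eq 4-2).
__global__ void softThresholdComplex(cufftComplex* chi, float mu, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float mag = sqrtf(chi[i].x * chi[i].x + chi[i].y * chi[i].y);
    float scale = (mag > mu) ? (mag - mu) / mag : 0.0f;
    chi[i].x *= scale;
    chi[i].y *= scale;
}

// One iteration of Eq 4-1: chi_n = S_mu( chi_{n-1} + F{ x0 - K * x_{n-1} } ).
void istIteration(cufftHandle plan3d, cufftComplex* chi, cufftComplex* x,
                  const cufftComplex* x0, const float* K,
                  cufftComplex* scratch, float mu, int n)
{
    int threads = 256, blocks = (n + threads - 1) / threads;
    // x = F^{-1}{chi}; cuFFT's inverse is unnormalized, so a 1/n scaling
    // is needed in practice (omitted here for brevity).
    cufftExecC2C(plan3d, chi, x, CUFFT_INVERSE);
    residualKernel<<<blocks, threads>>>(x0, K, x, scratch, n);
    cufftExecC2C(plan3d, scratch, scratch, CUFFT_FORWARD);
    addKernel<<<blocks, threads>>>(chi, scratch, n);
    softThresholdComplex<<<blocks, threads>>>(chi, mu, n);
}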


4.2.1 Streamlined OCT Processing and CS Reconstruction

To make the CS reconstruction pipeline practical for subsampled OCT datasets, the OCT processing and CS reconstruction procedures should be unified into a single program for convenient post-processing and visualization. In this research, we implemented the

new CS-OCT reconstruction pipeline on top of the original CUDA-based SS-OCT processing

pipeline described by Jian et al. [14]. After CS reconstruction, the raycasting algorithm for

volume rendering presented by Jian et al. was used to visualize the compressed dataset, the

sparsely redistributed dataset, and the CS-reconstructed dataset. In addition to the three volume-

rendered images, an en face image of the CS-reconstructed dataset, a B-scan window, and a slow

scan window were also provided for visualizing the quality of the CS reconstruction. The user can select the B-scan or slow scan for display by indicating the corresponding location on the en face image. An example of the GPU-based reconstruction and visualization

program is shown in Figure 4-4.


Figure 4-4 A representative screen capture of the GPU-based reconstruction and visualization software. Top left – compressed volume. Top middle – sparsely redistributed volume. Top right – reconstructed volume. Bottom left – en face image of the reconstructed volume. Bottom middle – fast B-scan extracted at the red line on the en face image. Bottom right – slow B-scan extracted at the green line on the en face image.

4.2.2 Results and Discussion

In this research, we used the MSE as a metric for characterizing the execution time required for the CS algorithm to reach a steady-state reconstruction error. The MSE was used to quantify the differences between the dataset at successive epochs. A single epoch can be user-defined as

any number of iterations; in this chapter, an epoch was defined to be 20 threshold iterations of

the IST algorithm. The experiments were performed on a dataset acquired from a patient with

severe cysts caused by age-related macular degeneration (AMD).
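As a hedged sketch (names are illustrative), the MSE between the reconstructions of two consecutive epochs can be computed directly on the GPU with a Thrust inner product, avoiding a transfer of the full volume back to the host:

#include <thrust/device_vector.h>
#include <thrust/inner_product.h>
#include <thrust/functional.h>

// Squared difference functor for thrust::inner_product.
struct SquaredDiff
{
    __host__ __device__ float operator()(float a, float b) const
    {
        float d = a - b;
        return d * d;
    }
};

// Minimal sketch: MSE between the reconstructions of two consecutive epochs.
float epochMse(const thrust::device_vector<float>& current,
               const thrust::device_vector<float>& previous)
{
    float sumSq = thrust::inner_product(current.begin(), current.end(),
                                        previous.begin(), 0.0f,
                                        thrust::plus<float>(), SquaredDiff());
    return sumSq / current.size();
}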

To demonstrate the rate of convergence, the calculated MSE value after each epoch is plotted in Figure 4-5. The logarithmic MSE values

underwent a rapid decrease in the beginning, where the overall error dropped from a value of


-0.5 to -3.5 within the first 100 iterations, equivalent to three orders of magnitude decrease. After

100 iterations, the intensities of the reconstructed B-scans were visually similar to the adjacently

acquired B-scans; however, they were much blurrier than the acquired B-scans, with many

unresolvable features. This result is consistent with the fact that the lower frequency portion of

the spectrum is reconstructed before the higher frequency portion. Afterwards, the logarithmic

MSE value decreases from -3.5 at the 100th iteration to -5.0 after the 1200th iteration, a further decrease of approximately one and a half orders of magnitude over 1100 iterations. After

reconstructing the higher frequencies, the fine features, such as small blood vessels, became

more apparent. The time required for execution is approximately one minute per 100 iterations; therefore, a total of 1200 iterations required approximately 12 minutes to execute.

Figure 4-6 presents a qualitative comparison of the en face images generated from the

sparsely acquired volume after different numbers of iterations of the IST solver during reconstruction. After 200 iterations, the en face image in b) is already acceptable, but slightly blurry. After 400 iterations, the smaller blood vessels in the en face image become more apparent, as shown in panel c). The sharpness of the small blood vessel features, as outlined

by the red rectangular boxes, incrementally improves from 400 to 600 iterations, but does not

improve qualitatively beyond 600 iterations, as shown in d-f).


Figure 4-5 A logarithmic plot of the MSE values computed during the GPU-FFT-based CS reconstruction.

Figure 4-6 Comparison of the quality of en face images at different iterations. A) Before

reconstruction. B) 200 iterations. C) 400 iterations. D) 600 iterations. E) 800 iterations. F)

1000 iterations. These images were extracted from a reconstructed 50% missing data cyst

volume.


The slow scans extracted from the middle of the same volumetric dataset during reconstruction

are presented in Figure 4-7. The fine details within the retinal structures begin to be visible after

600 iterations. Although the MSE plot in Figure 4-5 indicated that the error decreased by approximately three orders of magnitude within the first 100 iterations, the MSE alone cannot capture the reconstruction of fine structural features in the retina, such as those seen in the choroid layer in Figure 4-7 e) and f).

Figure 4-7 Results comparing quality of slow-scan images at different iterations. A) Before

reconstruction. B) 200 iterations. C) 400 iterations. D) 600 iterations. E) 800 iterations. F)

1000 iterations. This was performed on a 50% subsampled cyst dataset.

The quality of the reconstructed data depends on the number of iterations, as presented above,

and also on the sparsity of the acquired data. The volumes presented above all had 50% missing

B-scans. Using the GPU software, a comparison of the quality of volumes reconstructed from 50% and 75% missing data is presented in Figure 4-8. For each dataset, 1200

iterations were performed to ensure we reached a plateau in the quality of the reconstructed B-


scans. Further increasing the number of iterations, such as 2000 iterations, did not provide

visually higher quality B-scans. As shown in these images, the quality of the volume reconstructed from the 75% missing dataset, shown in a), is inadequate for resolving fine features in the choroid. With a 50% missing dataset, the fine structures in the retina become more apparent than in the 75% missing dataset, as shown in b). Additionally, the vasculature is more visible in e) than in d). Although the reconstructed slow scan in b) has less well-defined features than the

slow scan from the fully acquired data c), most of the fine details within the Nerve Fiber Layer

and the choroid are still visible, while the quality of the image at the cyst is comparable to that of

the original. This comparison suggests that 50% compression can be adequate for diagnostic

purposes, and that 75% may provide insufficient information for proper CS reconstruction of the

missing B-scans. During reconstruction for both the 50% and 75% missing data, we used a total

of 1200 iterations, which provided the maximum quality achievable in each situation and a

plateaued MSE value.

In Figure 4-9, a direct comparison between the reconstructed B-scans and acquired B-

scans from the 50% and the 75% missing data volumes is shown. This figure demonstrates that the quality of the reconstructed B-scan in the 75% missing data volume, shown in panel c), was significantly degraded compared with the original B-scan acquired adjacent to it, shown in panel d). For the reconstructed B-scan from the 50% missing data, shown in panel a), the physiological shape and most retinal features were preserved in comparison to the adjacently acquired B-scan, shown in panel b). Additionally, Figure 4-10 provides

more representative results extracted from five different subjects acquired with a 50% subsampling

mask. These results include the en face images, reconstructed and acquired B-scans, and slow

scans for each acquired volume.


Figure 4-8 A comparison between a) 75% missing data, b) 50% missing data, and c) fully

acquired data. The first row of images, a, b, and c, are slow-scan images at the horizontal

center of the volume. The second row of images, d, e, and f, are en face images of the

reconstructed datasets. A total of 1200 iterations were used for reconstruction.

Figure 4-9 A comparison between the reconstructed B-scans and acquired B-scans from

50% and 75% missing data. Panels a and b are from the 50% missing data volume, while c and d are from the 75% missing data volume.


Figure 4-10 A comparison of CS reconstruction for 50% subsampled datasets for five

different subjects. Volumes 9-OD-V6, 10-OD-V5, and H011-OD-V6 are datasets taken

from healthy subjects, while volumes 11-OS-V3 and 17-OD-V4 are taken from patients

with age-related macular degeneration.


4.3 Conclusion

In this research, a GPU-based implementation of the IST solver was demonstrated to reduce the CS reconstruction time to well below the several hours reported in previous publications [28]. For 1200 threshold iterations of the IST solver on a 400 × 400 × 400 voxel dataset, the time required to complete the CS reconstruction algorithm was

approximately 12 minutes, or 1 minute per 100 iterations. A comparison between the slow scans

of the original dataset and the volumes reconstructed from 75% and 50% missing data was

provided. In this comparison, a reduction in motion artifact was visible but with a major tradeoff

in the quality of the reconstructed B-scans. Therefore, with an appropriate compression

percentage of the B-scans and a GPU-accelerated CS reconstruction implementation, CS-OCT can potentially be an inexpensive tool for accelerating image acquisition while providing comparable image quality for diagnostic purposes.


Chapter 5

Wavefront Sensorless Adaptive Optics

Optical Coherence Tomography

Wavefront sensorless adaptive optics optical coherence tomography (WSAO-OCT) is a novel

technique for in vivo high-resolution imaging that bypasses the challenges encountered with

sensor-based adaptive optics designs. This technique replaces the Hartmann-Shack wavefront

sensor with an image-driven optimization algorithm that leverages our ultrahigh speed GPU

processing platform. WSAO-OCT is especially advantageous for developing a clinical high-

resolution retinal imaging system as it enables the use of a more compact and robust lens-based

adaptive optics design. In this chapter, we describe our WSAO-OCT system for imaging the

human photoreceptor mosaic in vivo. We validated our system performance by imaging the

human retina at several eccentricities, and demonstrated the improvement in photoreceptor

visibility with WSAO compensation.


5.1 Introduction

The use of adaptive optics (AO) is essential for in vivo high-resolution retinal imaging with a large numerical aperture (NA), i.e. a large imaging pupil [58–66]. AO has been combined with various

ocular imaging techniques in order to achieve diffraction limited resolution by correcting optical

aberrations; fundus photography [67] and SLO [68–71] are two examples of biomedical

applications in which AO allows reliable imaging of cone and rod photoreceptor mosaics. OCT has

also been combined with AO to couple the high lateral resolution with the micron-scale axial

resolution of low coherence interferometry [64,68,72–77]. Although conventional imaging

systems have been demonstrated to be capable of imaging photoreceptor cones in healthy

subjects [42,78–80], without application of adaptive optics the cone mosaic cannot be reliably

imaged at retinal eccentricities near the fovea (<2°) or in patients with ocular defects and

aberrations.

Retinal imaging with a large NA inherently amplifies the impact of ocular aberrations in

the subject’s eye, which compromises the maximum-achievable resolution. The goal of AO is to

compensate for the optical aberrations with an adaptive element, such as a deformable mirror

(DM), which is conventionally controlled by a wavefront sensor (WFS) connected in a feedback

AO correction loop. However, aberration correction is limited by the sensitivity of the WFS to

non-common path aberrations, wavefront spot centroiding errors, and wavefront reconstruction

errors. The predominant limitation with using the WFS is its sensitivity to back-reflections. Most

AO systems use spherical-mirror telescopes to mitigate the back-reflections [77,81–84].

However, the use of spherical mirrors in off-axis configurations results in large system

aberrations, which can be reduced by using long focal length mirrors. Additionally, non-planar

folding of the spherical-mirror telescopes is a strategic approach for system aberration


cancellation, but is sensitive to misalignment and requires a large footprint for the overall optical

system [82]. A lens-based AO-SLO design using polarization techniques was presented as an

alternative to spherical mirrors [85]. This approach reduced back-reflections on the WFS and

achieved performance similar to that of non-planar folded telescopes, but with added system

complexity.

Wavefront sensorless adaptive optics (WSAO) has been demonstrated to be a robust

strategy for circumventing the limitations associated with conventional sensor-based AO

systems. WSAO was first demonstrated in microscopy to correct low order aberrations [86–89].

The sensorless adaptive optics technique was first adapted into a human in vivo imaging system

using SLO, and was quantitatively compared with the corresponding wavefront-sensor-based

AO-SLO [90]. The authors used a stochastic optimization method to implement the sensorless

AO. Although the convergence time was relatively long, the authors demonstrated that image-based wavefront sensorless control is capable of producing images of comparable quality to

those acquired using wavefront sensor-based SLO.

In our previous report, we demonstrated a WSAO-OCT system for small animal imaging, which was evaluated on pigmented and albino mice [91]. We developed the

WSAO-OCT technique using a modal control of the DM which enabled a faster convergence

rate in comparison to a stochastic approach. The system had an A-scan acquisition rate of 100

kHz, and the entire optimization process required about 60 seconds. While this acquisition and

optimization rate was sufficient for imaging anesthetized mice, it was not fast enough for reliable

in vivo human retinal imaging due to involuntary eye and head movements, blinks, and the

subject’s fatigue.


In this chapter, we describe our WSAO-OCT system for in vivo imaging of the

photoreceptor layer in the human retina. We first describe the design of our WSAO-OCT optical

system and real-time acquisition processing platform. Next, we describe our WSAO-OCT

optimization process tailored for human retinal imaging. Lastly, we present en face images of the

human photoreceptor mosaic reconstructed from the OCT volumes acquired in vivo near the

optic nerve disc and at various retinal eccentricities along the superior meridian.

5.2 Methods

The details of the hardware used in our WSAO FD-OCT optical system, as well as the software

used for operating the engine, are presented in the following subsections. All research procedures

were performed in accordance with the Declaration of Helsinki. The study protocol was

approved by the institutional review board of Simon Fraser University. Human imaging was

performed on a young (24-year-old), healthy subject with low-degree myopia and a large pupil in a dimly lit environment. The subject gave written informed consent before participating

in the study. The subject’s pupil was not dilated for the imaging sessions performed for this

research. The subject’s non-mydriatic pupil was measured to be ~7 mm in diameter during

imaging conditions.

5.2.1 Optical Design

Figure 5-1 presents a schematic of our lens-based WSAO-OCT system. Our adaptive optics

setup consisted of an SLD (SuperLum, Carrigtwohill, Ireland), a line scan camera (Basler,

Ahrensburg, Germany), and a 5-μm-stroke PT111 MEMS deformable mirror with 37 segments

and 111 actuators (Iris AO, Berkeley, CA). The light source had a center wavelength of 830 nm


with a full width at half maximum (FWHM) of 80 nm (equivalent to an axial resolution of ~2.9

μm in the eye).

Figure 5-1 Schematic of the WSAO-OCT system: DM, Deformable Mirror; SLD,

Superluminescent Diode; FC, 10/90 reference/sample Fiber Coupler; G1, G2, horizontal

and vertical galvanometer scanning mirrors; PC, polarization controller; DC, dispersion

compensation; L0: f = 16 mm; L1, L2: f = 300 mm; L3, L4: f = 200 mm; L5, L6: f = 150 mm;

L7: f = 200 mm; L8: f = 250 mm; L9: two lenses f = 200 and f = 300 mm (equivalent to f =

120 mm); L10: two lenses of f = 300 mm (equivalent to f = 150 mm). All lenses were

achromatic doublets. A trial lens plane is available for inserting corrective lenses for

focusing.

The optical power incident at the eye pupil was ~350 μW, which is below the ANSI limit at this

wavelength. The conjugate plane between telescopes L7/L8 and L9/L10 in Figure 5-1 was used

for placing trial lenses as a means of reducing defocus. A hot mirror was used to couple an

external fixation target into the system. The Gullstrand-LeGrand model of the human eye, which

was first proposed by Gullstrand [92] and later refined by Le Grand et al. [93], was used for

approximating the average focal length to be f_eye = 22.2 mm, or f_air = 16.7 mm. The 1/e² beam


diameter on the pupil was ~5.5 mm. Assuming a refractive index of n=1.33 for vitreous humor at

830 nm, the theoretical NA is 0.16, which results in a 1/e² diameter spot size of 3.2 μm at the

retina and a depth of focus of approximately 25.9 μm. In order to mitigate human motion, we

constructed a bite bar with a dental impression tray to secure the upper jaw, and a forehead rest

for support. This design provided external support for the subject to relax their head throughout

the entire imaging session.
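For reference, the quoted imaging parameters follow from standard Gaussian-beam and low-coherence relations. The short Python sketch below reproduces the stated values (NA of approximately 0.16, ~3.2 μm 1/e² spot diameter, ~25.9 μm depth of focus, and ~2.9 μm axial resolution) from the numbers given in the text; it is a paraxial back-of-the-envelope check, not the design calculation used for the system.

import numpy as np

# Stated system parameters (taken from the text)
lambda0 = 830e-9   # center wavelength [m]
dlambda = 80e-9    # FWHM bandwidth [m]
D       = 5.5e-3   # 1/e^2 beam diameter at the pupil [m]
f_air   = 16.7e-3  # equivalent focal length of the eye in air [m]
f_eye   = 22.2e-3  # focal length of the eye in the medium [m]
n       = 1.33     # refractive index of the vitreous

NA            = n * (D / 2) / f_eye                                  # ~0.16
theta_air     = (D / 2) / f_air                                      # half-angle of convergence in air
w0            = lambda0 / (np.pi * theta_air)                        # Gaussian waist radius at the retina
spot_diameter = 2 * w0                                               # ~3.2 um (1/e^2 diameter)
dof           = 2 * np.pi * w0**2 * n / lambda0                      # ~25.9 um (twice the Rayleigh range)
axial_res     = (2 * np.log(2) / np.pi) * lambda0**2 / dlambda / n   # ~2.9 um in tissue

print(f"NA = {NA:.2f}, spot = {spot_diameter * 1e6:.1f} um, "
      f"DOF = {dof * 1e6:.1f} um, axial resolution = {axial_res * 1e6:.1f} um")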

5.2.2 Workstation and Software Specifications

A modified version of the open source GPU-based FD-OCT program [14,26] was used in this

research. As previously presented, a dynamic depth selection option and system controls for the

DM were added to the software for enabling WSAO [91]. The GPU used during imaging was

the Quadro K6000 (NVIDIA, Santa Clara, California) and was capable of achieving a 1024-point

A-scan processing rate of 1.9 MHz as reported in a previous publication [15]. The ultrahigh

speed FD-OCT processing rate enabled real-time generation of volumetric images and extraction

of maximum intensity projection (MIP) en face images to be used as an image-based metric for

AO optimization.
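As an illustration of the image-based metric described above, the following minimal NumPy sketch (the actual pipeline runs on the GPU) forms a maximum intensity projection over a selected depth range and reduces it to a single brightness value; the array ordering and function name are assumptions made for illustration.

import numpy as np

def en_face_metric(volume, z_start, z_end):
    """Maximum intensity projection over a depth range, reduced to a scalar.

    volume is assumed to be a linear-intensity array ordered (z, x, y),
    i.e. depth first; z_start and z_end select the layer of interest.
    """
    vol = np.asarray(volume)
    mip = vol[z_start:z_end].max(axis=0)   # en face MIP of the selected layer
    return float(mip.sum())                # summed brightness used as the image metric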

5.2.3 Image Acquisition and Optimization

Our OCT engine was configured to acquire at an A-scan rate of 200 kHz. The camera was

triggered to acquire at an 80% duty cycle of the cycloidal raster scanning pattern to eliminate fly-

back artifacts. The volume acquisition size (1024x200x80 voxels) was carefully selected to

obtain a balance between sampling density and acquisition speed with consideration to the

limitations of the speed of the galvanometer-scanning mirrors. This resulted in an acquisition rate


of 10 volumes per second, processed and displayed in real-time. The WSAO optimization

procedure for human imaging was modified from our previous report on mice [91]. For human

imaging, we only optimized Zernike radial orders 2 to 4 (Zernike modes 3 to 14) to maintain a

balance between optimization speed and effective aberration correction. For each Zernike mode,

the optimization was performed by acquiring an OCT volume for 5 different coefficients,

extracting an intensity en face image at the layer of interest from each volume, and selecting the

coefficient that gave the brightest image as the optimized value. For large aberrations in a

particular Zernike mode, the algorithm performed a second search iteration of 5 data points, with

the coefficient range shifted in the direction of the best correction. The entire optimization

process required 6 to 12 seconds to complete, depending on the magnitude of the aberrations in the

subject’s eye.
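The procedure just described amounts to a one-mode-at-a-time coordinate search driven by the en face brightness. The Python sketch below illustrates the loop; the hardware-facing callables (acquire_volume, apply_zernike), the coefficient search range, and the test used to detect a large aberration (best value at the edge of the range) are assumptions made for illustration rather than the thesis' GPU implementation.

import numpy as np

def optimize_zernike_modes(acquire_volume, apply_zernike, metric,
                           modes=range(3, 15), coeff_range=0.15, n_points=5):
    """Modal hill-climbing search over Zernike modes, one mode at a time.

    acquire_volume() grabs one OCT volume with the current DM shape,
    apply_zernike(mode, value) sets that mode's coefficient on the DM while
    keeping previously optimized modes, and metric(volume) returns the
    en face brightness.  All three callables and the numeric defaults are
    placeholders for illustration.
    """
    optimized = {}
    for mode in modes:
        candidates = np.linspace(-coeff_range, coeff_range, n_points)
        scores = []
        for c in candidates:
            apply_zernike(mode, c)                 # set the trial coefficient on the DM
            scores.append(metric(acquire_volume()))
        best_c = candidates[int(np.argmax(scores))]

        # A best value at the edge of the range is taken here to indicate a
        # large aberration; run a second 5-point pass with the range shifted
        # toward the best correction.
        if abs(best_c) >= coeff_range:
            shifted = candidates + np.sign(best_c) * coeff_range
            scores = []
            for c in shifted:
                apply_zernike(mode, c)
                scores.append(metric(acquire_volume()))
            best_c = shifted[int(np.argmax(scores))]

        apply_zernike(mode, best_c)                # leave the mode at its optimized value
        optimized[mode] = best_c
    return optimized

At 10 volumes per second, 12 modes with 5 trial coefficients each corresponds to roughly 6 seconds, and about twice that when second passes are triggered, which is consistent with the 6 to 12 seconds quoted above.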

5.3 Results

5.3.1 System Resolution

To evaluate the quality of our optical design before aberration correction, we placed at the sample arm a phantom eye consisting of a 30 mm focal length air-spaced achromatic lens and a US

Air Force (USAF) resolution target. We imaged groups 6 and 7 with a flattened deformable

mirror. The acquired volumes consisted of 1024x200x80 voxels to simulate imaging conditions.

The spot size of our system for the phantom in air was calculated to be 5.8 μm (1/e² diameter); this

was sufficient for resolving group 7 element 4 (line width of 2.76 μm) shown in Figure 5-2.


Figure 5-2 Experimental quantification of the lateral resolution using a USAF resolution

target and a 30 mm focal length air-spaced achromatic lens. The achievable lateral

resolution was 2.76 μm with a volume size of 1024x200x80. Scale bar is 50 μm.
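Both numbers quoted for the phantom measurement can be checked with the same Gaussian-beam relation used earlier and the standard USAF 1951 bar-target formula; the following Python sketch is purely illustrative.

import numpy as np

lambda0 = 830e-9   # center wavelength [m]
D       = 5.5e-3   # 1/e^2 beam diameter at the lens [m]
f_lens  = 30e-3    # focal length of the phantom-eye lens [m]

# Diffraction-limited 1/e^2 spot diameter in air for a Gaussian beam
spot = 4 * lambda0 * f_lens / (np.pi * D)       # ~5.8 um

# USAF 1951 target: resolution of group g, element e in line pairs per mm
g, e = 7, 4
lp_per_mm = 2 ** (g + (e - 1) / 6)              # ~181 lp/mm
line_width_um = 1000 / (2 * lp_per_mm)          # ~2.76 um

print(f"spot = {spot * 1e6:.1f} um, group {g} element {e} line width = {line_width_um:.2f} um")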

5.3.2 Human Retinal Imaging with WSAO-OCT

As a demonstration of our WSAO-OCT system performance for in vivo human retinal imaging,

we present images acquired before, during, and after the optimization process. Prior to WSAO

optimization, the subject was stabilized with the custom-designed head mount and fixation

target. Lens L10 was manually refocused to maximize the intensity of the photoreceptor layer.

Media 1 presents an example of the real-time optimization process on the photoreceptor layer at

a region near the optic nerve disc. In this video, a ten-second sequence of unoptimized en face

images is presented at the inner and outer segment (IS/OS) and retinal pigment epithelium (RPE)

layers. The optimization then proceeds, where the viewer can observe periodic fluctuations in

intensity with an increasing trend. The Zernike modes being optimized were included in the

video for illustration purposes. The resolution gradually increases throughout the optimization


until individual cone photoreceptors can be clearly resolved as bright circular structures. After

WSAO optimization, a ten-second sequence of real-time optimized en face images is included

for comparison. The shadows of blood vessels are landmarks for determining the motion of the

subject. It is interesting to note that the optimization algorithm is capable of performing

aberration correction even with the subject’s involuntary motion.

A set of B-scan and en face images before and after WSAO optimization from Media 1 is

presented in Figure 5-3. The en face images were generated by performing a maximum intensity

projection on the voxels within the IS/OS layer (layer 2 on the A-scan plots). The improvement

in intensity after optimization is also apparent in the line graphs of A-scan intensity (red and

green boxes). The value of the optimized Zernike coefficient for each mode (representing the

shape of the DM after optimization), and the corresponding increase in merit function are also

presented in Figure 5-3.


Figure 5-3 Comparison of the photoreceptor visibility in the en face image before and after

WSAO optimization. The left column is a comparison of B-scans, and the right column is a

comparison of the en face images generated at the IS/OS layer. NFL, Nerve Fiber Layer;

IS/OS, Inner and Outer Segment; RPE, Retinal Pigment Epithelium; BM, Bruch’s

Membrane. Scale bar is 50 μm. A real-time optimization video has been included in Media

1.

In order to demonstrate our system’s capability of resolving the photoreceptor mosaic near the

fovea, we present the en face images acquired at four different retinal eccentricities. Each image

was acquired after re-optimizing the ocular aberrations at the corresponding retinal location.

Figure 5-4 provides a comparison between unoptimized and optimized en face images acquired

at these locations. In the unoptimized images, the cones are mostly indistinguishable from

random speckle noise. After optimization, the image contrast increased, and the cone mosaic can


be resolved at eccentricities as small as 1.0° from the fovea. In these en face images, the fovea is

located toward the bottom left corner.

Figure 5-4 Images of the photoreceptor mosaic acquired along the superior meridian in a

non-mydriatic pupil. These include results acquired at four retinal eccentricities for

comparison centered at 5.1°, 2.2°, 1.6°, and 1.0°. The field of view is 1.0° × 0.4° for all

images. Scale bar is 50 μm.

5.4 Discussion and Conclusion

We have demonstrated real-time WSAO-OCT for human retinal imaging using a lens-based

optical design and a Zernike modal optimization algorithm of the DM shapes enabled by our

ultrahigh-speed GPU-based processing platform. In this research, experiments with the WSAO-


OCT system were performed on one subject with a large non-mydriatic pupil as a proof-of-

concept; a larger study in collaboration with ophthalmic clinicians on mydriatic patients is left

for future work. For validation, we presented the improvements in the quality of human

photoreceptor mosaic images acquired with OCT after WSAO optimization. These results

demonstrate that WSAO-OCT is capable of circumventing the hardware challenges and

limitations encountered with WFS-based imaging by using an image-based optimization.

WSAO-OCT can be made more suitable for clinical imaging with additional modifications to the

optimization algorithm. For example, the algorithm could readily be extended to include real-

time blink detection. Also, a real-time segmentation algorithm would enable layer-specific

aberration correction, which could be highly beneficial for visualizing retinal layers that do not

have a strong scattering signature on a wavefront-sensor, such as the inner and outer nuclear

layers.

In order to further improve the resolution of the WSAO-OCT, there are several

extensions worth exploring. Increasing the NA is necessary for visualizing smaller cellular

structures in the retina, such as rod photoreceptor cells and the cones at the center of the fovea.

This would require the use of topical agents such as phenylephrine and tropicamide to dilate the

pupil in order to enable imaging with a larger diameter incident beam. An additional benefit of

using tropicamide is that it can induce cycloplegia in the subject’s eye, resulting in a loss of

accommodation reflexes that might otherwise interfere with the optimization algorithm. Imaging

with a higher NA introduces additional ocular aberrations and will require a DM with a larger

stroke and a higher number of segments for successful aberration correction in both lower and

higher order Zernike modes. Alternatively, a woofer-tweeter DM configuration for separating the


lower and higher order Zernike modes to two different DMs could be used to augment the

aberration correction ability of our experimental setup [94–97].

The major advantage with WSAO-OCT imaging over conventional WFS-based AO

systems is that it no longer requires a well-defined conjugate plane as a requisite for successful

wavefront sensing. This underlying characteristic enables WSAO-OCT to reliably image non-

planar retinal structures in high resolution, such as the optic nerve head or retinas with

pathological changes (e.g., choroidal neovascularization). Furthermore, because the image-based optimization uses the imaging path itself rather than a separate sensing arm, WSAO-OCT does not suffer from non-common path aberrations [98]. The WSAO-OCT technique holds great promise for imaging patients with irregular pupils, cataracts, or generally increased opacity in the eye; in such cases it is often impossible to measure reliable wavefront data for AO correction. With WSAO-OCT, as long as the retinal structures can be detected, then

the AO correction can be performed. Another potential advantage of WSAO-OCT is the

possibility of designing a system with variable imaging beam size, which would be of interest for

optimal imaging of large populations of clinical patients.

In summary, we have demonstrated a lens-based approach for WSAO-OCT that is

capable of resolving the cone mosaic at small angles of eccentricity with non-mydriatic pupils

even with a small-stroke DM and an optical power below the ANSI limit at the eye pupil. Most

importantly, the reduced complexity of the lens-based WSAO design can facilitate a robust and

compact imaging system that is highly suitable for clinical applications in ophthalmology.


Chapter 6

Conclusions

In this thesis, we reviewed the GPU-based implementation of the FD-OCT processing and

display pipeline, which to the best of our knowledge is the fastest implementation presented in

the literature [14]. This ultrahigh speed processing pipeline enabled three extensions for FD-

OCT, which would otherwise have been much more challenging to perform. These extensions

include svOCT, CS-OCT, and WSAO-OCT.

In Chapter 3, a GPU-based real-time svOCT system was presented. We described a

program that allowed real-time user-selected layer-based en face image generation of the

vasculature network. This research presented svOCT as a viable non-invasive alternative to other angiographic imaging techniques, such as fluorescein angiography (FA). The main

disadvantage with fluorescein angiography is the injection of fluorescein into the subject prior to

imaging. Common adverse side effects of fluorescein injection include allergic reactions,

nausea, itching, and excessive sneezing [99]. In svOCT, no contrast enhancement agents are

necessary, and the imaging resolution is sufficient to visualize even the smallest blood vessels,

the capillaries, within the retina. This attribute of svOCT is highly desirable for longitudinal


studies of patients with vasculature-related pathologies, such as diabetic retinopathy. This

research enables opportunities for future work involving clinical studies with svOCT, or for

performing an in-depth comparison of the vascular resolution achievable with svOCT versus FA.

In Chapter 4, we described a GPU-accelerated CS-OCT reconstruction pipeline that was

expanded from our ultrahigh speed FD-OCT processing and display program. This CS-OCT

software implementation was used for post-processing and visualization of sparsely subsampled

datasets, and was able to process volumes of 400x400x400 voxels in less than 15

minutes. We also presented the GPU-based CS-OCT reconstruction results for both 50% and

75% randomly subsampled datasets for healthy and diseased retinas. The reconstructed results for

50% subsampling retained sufficient information for reconstructing the missing B-scans with

comparable quality to the acquired B-scans. CS-OCT can potentially enable new areas of research for OCT, such as high-resolution imaging of large fields of view with shorter volume acquisition times, or combination with other imaging modalities that require large volumetric

datasets, such as svOCT.

Lastly, in Chapter 5, we described a novel human imaging technique for in vivo high-

resolution cellular imaging of the photoreceptors in the retina called WSAO-OCT. This

technique utilized the ultrahigh-speed GPU-based FD-OCT processing pipeline to extract summed intensity values from en face images for use as the image-quality merit function in the optimization. By using this optimization scheme, we were able to bypass some of the challenges faced in sensor-based adaptive optics systems, such as the large optical footprints

required in mirror-based AO systems, wavefront reconstruction errors, and deleterious back-

reflections. All of these advantages make WSAO-OCT promising as a viable clinical imaging

modality for volumetric cellular imaging in the human retina.


6.1 Future Work

For each of the three presented OCT applications enabled by state-of-the-art GPUs, there are several opportunities for improving and expanding its capabilities, which could increase its impact in clinical imaging and diagnostics.

6.1.1 Speckle Variance OCT

In svOCT, one of the limiting factors was the slow volumetric acquisition rate, which was

mainly due to the factor-of-three increase in the number of acquired B-scans. For this reason, the

volumes were prone to processing artifacts caused by involuntary motion. Additionally, the

layer-selection operation for generating svOCT en face images also suffered in quality due to the

effects of motion. In Chapter 3, we demonstrated the use of a motion registration algorithm that

was capable of correcting many of the motion artifacts in the speckle variance images.

However, large motions were still deleterious to the acquired datasets.
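For context, svOCT derives flow contrast from the interframe intensity variance over the repeated B-scans acquired at each slow-scan location (three repeats here, given the factor-of-three increase noted above) [31,38], so any bulk motion between those frames appears directly as spurious variance. A minimal NumPy sketch of this computation is shown below, under the assumption of a gated set of N repeated B-scans per location.

import numpy as np

def speckle_variance(bscans):
    """Interframe speckle variance for one slow-scan location.

    bscans: array of shape (N, z, x) holding N repeated structural B-scans
    (N = 3 in the system described here).  Returns a (z, x) variance map;
    axial or lateral bulk motion between the N frames inflates the variance
    and appears as a motion artifact in the en face projection.
    """
    bscans = np.asarray(bscans, dtype=np.float64)
    return bscans.var(axis=0)   # mean over frames of (I_i - mean_i(I))^2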

Possible hardware solutions to increase subject stability would be to incorporate a

fixation target or to install a bite bar. Another solution would be to use a faster swept source to achieve acquisition rates beyond the existing 100 kHz SS-OCT system. Alternatively,

software solutions, such as adding a Graph-Cut segmentation to the svOCT pipeline, could also

increase the quality of the en face images, which may otherwise have been distorted due to

motion. The Graph-Cut segmentation algorithm is a useful but computationally expensive

processing technique for medical imaging that has conventionally been limited to post-

processing. Although the current single-GPU implementation already provides sufficient

processing throughput for real-time svOCT processing with motion registration and notch

filtering, one GPU has insufficient processing power to incorporate Graph-Cut segmentation into


real-time svOCT imaging, as explained in a previous bachelor’s thesis [100]. Instead, a dual- or

multi-GPU implementation was suggested for enabling Graph-Cut segmentation in real-time

imaging with OCT. Real-time Graph-Cut segmentation would enable many possibilities beyond the current dynamic-selection implementation, such as real-time layer thickness

measurements, and real-time en face image generation of layer-specific vasculature networks

from retinas that have large curvatures or pathological deformations.

6.1.2 CS-OCT

In Chapter 4, we presented a GPU-accelerated implementation of the CS-OCT reconstruction

pipeline proposed by Lebed et al. [49]. We further demonstrated this reconstruction pipeline on

subsampled data acquired using the CS-OCT acquisition system presented by Young et al. [28].

However, one operation from the post-processing procedure presented by Young et al. that was excluded from this thesis is volume registration and averaging as a method of noise reduction and feature enhancement. Young et al. acquired six subsampled

volumes at a single location, performed the CS reconstruction for each subsampled volume,

registered all six volumes with each other, and averaged the six volumes into a single

representation of the retina. In their results, the authors demonstrated that the registration and

averaging procedure enhanced the visualization of fine features that were apparent in the fully acquired dataset but less visible in the corresponding frame from a single, un-averaged volume.

One future improvement to our current GPU-based CS-OCT reconstruction algorithm

would be to incorporate the volume registration and averaging into the pipeline. In our

implementation, we concluded that a subsampling rate of 50% was the highest subsampling rate


that still returned reconstructed B-scans of acceptable quality. The addition of the volume

registration and averaging procedure to our GPU implementation should allow us to increase the

subsampling rate without compromising the quality of the reconstructed B-scans [28].
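A minimal sketch of such a register-and-average step is given below; it aligns each reconstructed volume to the first by an integer-voxel cross-correlation shift and then averages. This is only an illustration, and the registration method used by Young et al. [28] may differ (e.g., subpixel or non-rigid registration).

import numpy as np

def register_and_average(volumes):
    """Rigid, integer-voxel registration of repeated volumes followed by averaging.

    volumes: list of equally sized 3-D arrays reconstructed from repeated
    subsampled acquisitions of the same retinal location.  A production
    pipeline would likely use subpixel or non-rigid registration instead.
    """
    reference = np.asarray(volumes[0], dtype=np.float64)
    accum = reference.copy()
    for vol in volumes[1:]:
        vol = np.asarray(vol, dtype=np.float64)
        # Cross-correlation via FFT; the peak gives the translation that
        # best aligns vol with the reference volume.
        corr = np.fft.ifftn(np.fft.fftn(reference) * np.conj(np.fft.fftn(vol))).real
        shift = np.unravel_index(np.argmax(corr), corr.shape)
        shift = [s if s <= dim // 2 else s - dim for s, dim in zip(shift, corr.shape)]
        accum += np.roll(vol, shift, axis=(0, 1, 2))
    return accum / len(volumes)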

6.1.3 WSAO-OCT

In Chapter 5, we described the proposed WSAO-OCT technique for human photoreceptor

imaging, and presented results acquired on a single subject. To demonstrate that WSAO-OCT is

a feasible clinical imaging modality, a study involving a wide range of subjects must be performed, including subjects with ocular defects such as cataracts. This study

would require inducing mydriasis and cycloplegia in the subjects to increase the imaging NA and

to suppress biological factors that could affect image quality, such as accommodation or pupil

contraction. However, increasing the NA also increases the magnitude of ocular aberrations,

which can exceed the magnitude of correction available in a 37-segment 5-μm stroke DM. For

this reason, either a woofer-tweeter configuration or a DM with more segments and a larger stroke is required to correct aberrations in higher-NA imaging.

Another extension to WSAO-OCT worth exploring is to implement a GPU-based layer

tracking algorithm. In the current implementation, the photoreceptor layer has the strongest

scattering signature; therefore, the intensity-based optimization algorithm only optimizes for this

layer. Adding a layer tracking algorithm would allow us to optimize different retinal layers that

have weaker scattering signatures, such as the NFL, the inner plexiform layer (IPL), or the inner nuclear layer (INL). This extension would also

enable axial stitching of volumes optimized at different depths such that multiple layers in the

retina can be visualized with cellular resolution.


References

[1] A. M. Walczak, K. R. Hoffmann, J. Xu, J. J. Corso, and S. Schafer, “Clinical Evaluation

of GPU-Based Cone Beam Computed Tomography.”

[2] P. B. Noël, A. M. Walczak, J. Xu, J. J. Corso, K. R. Hoffmann, and S. Schafer, “GPU-

based cone beam computed tomography.,” Comput. Methods Programs Biomed., vol. 98,

no. 3, pp. 271–7, Jun. 2010.

[3] Y. Okitsu, F. Ino, and K. Hagihara, “High-performance cone beam reconstruction using

CUDA compatible GPUs,” Parallel Comput., vol. 36, no. 2–3, pp. 129–141, Feb. 2010.

[4] D. Liu and E. S. Ebbini, “Real-Time 2-D Temperature Imaging Using Ultrasound,” vol. 57, no. 1, pp.

12–16, 2010.

[5] T. Reichl, J. Passenger, O. Acosta, and O. Salvado, “Ultrasound goes GPU: real-time

simulation using CUDA,” vol. 7261, pp. 726116–726116–10, Feb. 2009.

[6] T. Schiwietz, T. Chang, P. Speier, and R. Westermann, “MR image reconstruction using

the GPU,” vol. 6142, p. 61423T–61423T–12, Mar. 2006.

[7] T. S. Sørensen, D. Atkinson, T. Schaeffter, and M. S. Hansen, “Real-time reconstruction

of sensitivity encoded radial magnetic resonance imaging using a graphics processing

unit.,” IEEE Trans. Med. Imaging, vol. 28, no. 12, pp. 1974–85, Dec. 2009.

[8] D. Huang, E. A. Swanson, C. P. Lin, J. S. Schuman, W. G. Stinson, W. Chang, M. R. Hee,

T. Flotte, K. Gregory, C. A. Puliafito, and J. G. Fujimoto, “Optical Coherence

Tomography,” Science, vol. 254, no. 5035, pp. 1178–1181, 1991.

[9] M. Wojtkowski, R. Leitgeb, A. Kowalczyk, T. Bajraszewski, and A. F. Fercher, “In vivo

human retinal imaging by Fourier domain optical coherence tomography.,” J. Biomed.

Opt., vol. 7, no. 3, pp. 457–63, Jul. 2002.

[10] M. Choma, M. Sarunic, C. Yang, and J. Izatt, “Sensitivity advantage of swept source and

Fourier domain optical coherence tomography,” Opt. Express, vol. 11, no. 18, p. 2183,

Sep. 2003.

[11] R. Leitgeb, C. Hitzenberger, and A. Fercher, “Performance of fourier domain vs time

domain optical coherence tomography,” Opt. Express, vol. 11, no. 8, p. 889, Apr. 2003.


[12] M. Wojtkowski, V. J. Srinivasan, T. H. Ko, J. G. Fujimoto, A. Kowalczyk, and J. S.

Duker, “Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography

and methods for dispersion compensation,” vol. 12, no. 11, pp. 707–709, 2004.

[13] K. Wong, “Fourier Domain Optical Coherence Tomography Real Time High Resolution Volume Rendering and …,” Dec. 2012.

[14] Y. Jian, K. Wong, and M. V Sarunic, “Graphics processing unit accelerated optical

coherence tomography processing at megahertz axial scan rate and high resolution video

rate volumetric rendering.,” J. Biomed. Opt., vol. 18, no. 2, p. 26002, Feb. 2013.

[15] J. Xu, K. Wong, Y. Jian, and M. V Sarunic, “Real-time acquisition and display of flow

contrast using speckle variance optical coherence tomography in a graphics processing

unit.,” J. Biomed. Opt., vol. 19, no. 2, p. 026001, Feb. 2014.

[16] NVIDIA, “NVIDIA CUDA C Programming Guide Version 4.2,” 2012.

[17] NVIDIA, “Whitepaper - NVIDIA’s Next Generation CUDA Compute Architecture:

Kepler GK110,” 2012.

[18] NVIDIA, “Whitepaper: NVIDIA GeForce GTX 680,” 2012.

[19] S. S. Sherif, C. Flueraru, Y. Mao, and S. Change, “Swept Source Optical Coherence

Tomography with Nonuniform Frequency Domain Sampling,” in Biomedical Optics,

2008, p. BMD86.

[20] K. Wang, Z. Ding, T. Wu, C. Wang, J. Meng, M. Chen, and L. Xu, “Development of a

non-uniform discrete Fourier transform based high speed spectral domain optical

coherence tomography system,” Opt. Express, vol. 17, no. 14, p. 12121, Jul. 2009.

[21] K. Zhang and J. U. Kang, “Graphics processing unit accelerated non-uniform fast Fourier

transform for ultrahigh-speed, real-time Fourier-domain OCT.,” Opt. Express, vol. 18, no.

22, pp. 23472–87, Oct. 2010.

[22] M. Wojtkowski, “High-speed optical coherence tomography: basics and applications.,”

Appl. Opt., vol. 49, no. 16, pp. D30–61, Jun. 2010.

[23] K. Zhang and J. U. Kang, “Real-time numerical dispersion compensation using graphics

processing unit for Fourier-domain optical coherence tomography,” Electron. Lett., vol.

47, no. 5, p. 309, 2011.

[24] NVIDIA, “cuFFT Library User’s Guide,” Feb. 2014.

[25] K. Zhang and J. U. Kang, “Real-time 4D signal processing and visualization using

graphics processing unit on a regular nonlinear-k Fourier-domain OCT system.,” Opt.

Express, vol. 18, no. 11, pp. 11772–84, May 2010.


[26] J. Li, P. Bloch, J. Xu, M. V Sarunic, and L. Shannon, “Performance and scalability of

Fourier domain optical coherence tomography acceleration using graphics processing

units.,” Appl. Opt., vol. 50, no. 13, pp. 1832–8, May 2011.

[27] K. Zhang and J. U. Kang, “Graphics Processing Unit-Based Ultrahigh Speed

Real-Time Fourier Domain Optical Coherence Tomography,” vol. 18, no. 4, pp. 1270–

1279, 2012.

[28] M. Young, E. Lebed, Y. Jian, P. J. Mackenzie, M. F. Beg, and M. V Sarunic, “Real-time

high-speed volumetric imaging using compressive sampling optical coherence

tomography.,” Biomed. Opt. Express, vol. 2, no. 9, pp. 2690–7, Sep. 2011.

[29] W. Drexler and J. G. Fujimoto, “State-of-the-art retinal optical coherence tomography.,”

Prog. Retin. Eye Res., vol. 27, no. 1, pp. 45–88, Jan. 2008.

[30] M. Adhi and J. S. Duker, “Optical coherence tomography--current and future

applications.,” Curr. Opin. Ophthalmol., vol. 24, no. 3, pp. 213–21, May 2013.

[31] A. Mariampillai, M. K. K. Leung, M. Jarvi, B. a Standish, K. Lee, B. C. Wilson, A.

Vitkin, and V. X. D. Yang, “Optimized speckle variance OCT imaging of

microvasculature.,” Opt. Lett., vol. 35, no. 8, pp. 1257–9, Apr. 2010.

[32] J. Fingler, R. J. Zawadzki, J. S. Werner, D. Schwartz, and S. E. Fraser, “Volumetric

microvascular imaging of human retina using optical coherence tomography with a novel

motion contrast technique.,” Opt. Express, vol. 17, no. 24, pp. 22190–200, Nov. 2009.

[33] R. K. Wang, L. An, P. Francis, and D. J. Wilson, “Depth-resolved imaging of capillary

networks in retina and choroid using ultrahigh sensitive optical microangiography.,” Opt.

Lett., vol. 35, no. 9, pp. 1467–9, May 2010.

[34] M. S. Mahmud, D. W. Cadotte, B. Vuong, C. Sun, T. W. H. Luk, A. Mariampillai, and V.

X. D. Yang, “Review of speckle and phase variance optical coherence tomography to

visualize microvascular networks.,” J. Biomed. Opt., vol. 18, no. 5, p. 50901, May 2013.

[35] K. K. C. Lee, A. Mariampillai, J. X. Z. Yu, D. W. Cadotte, B. C. Wilson, B. A. Standish,

and V. X. D. Yang, “Real-time speckle variance swept-source optical coherence

tomography using a graphics processing unit.,” Biomed. Opt. Express, vol. 3, no. 7, pp.

1557–64, Jul. 2012.

[36] M. Sylwestrzak, D. Szlag, M. Szkulmowski, I. Gorczynska, D. Bukowska, M. Wojtkowski, and P. Targowski, “Four-dimensional structural and Doppler optical coherence tomography imaging on graphics processing units,” J. Biomed. Opt.

[37] J. Xu, K. Wong, Y. Jian, and M. V Sarunic, “GPU Open Source Code with svOCT

Implementation,” 2013.


[38] A. Mariampillai, B. A. Standish, E. H. Moriyama, M. Khurana, N. R. Munce, M. K. K.

Leung, J. Jiang, A. Cable, B. C. Wilson, I. A. Vitkin, and V. X. D. Yang, “Speckle

variance detection of microvasculature using swept-source optical coherence

tomography,” Opt. Lett., vol. 33, no. 13, p. 1530, Jun. 2008.

[39] M. Guizar-Sicairos, S. T. Thurman, and J. R. Fienup, “Efficient subpixel image

registration algorithms,” Opt. Lett., vol. 33, no. 2, p. 156, 2008.

[40] H. C. Hendargo, R. Estrada, S. J. Chiu, C. Tomasi, S. Farsiu, and J. A. Izatt, “Automated

non-rigid registration and mosaicing for robust imaging of distinct retinal capillary beds

using speckle variance optical coherence tomography.,” Biomed. Opt. Express, vol. 4, no.

6, pp. 803–21, Jun. 2013.

[41] L. Conroy, R. S. DaCosta, and I. A. Vitkin, “Quantifying tissue microvasculature with

speckle variance optical coherence tomography.,” Opt. Lett., vol. 37, no. 15, pp. 3180–2,

Aug. 2012.

[42] B. Potsaid, I. Gorczynska, V. J. Srinivasan, Y. Chen, J. Jiang, A. Cable, and J. G.

Fujimoto, “Ultrahigh speed spectral / Fourier domain OCT ophthalmic imaging at 70,000

to 312,500 axial scans per second.,” Opt. Express, vol. 16, no. 19, pp. 15149–69, Sep.

2008.

[43] W. Wieser, B. R. Biedermann, T. Klein, C. M. Eigenwillig, and R. Huber, “Multi-

Megahertz OCT: High quality 3D imaging at 20 million A-scans and 45 GVoxels per

second,” Opt. Express, vol. 18, no. 14, p. 14685, Jun. 2010.

[44] T. Klein, W. Wieser, L. Reznicek, A. Neubauer, A. Kampik, and R. Huber, “Multi-MHz

retinal OCT.,” Biomed. Opt. Express, vol. 4, no. 10, pp. 1890–908, Jan. 2013.

[45] E. J. Candès and D. L. Donoho, “New Tight Frames of Curvelets and Optimal

Representations of Objects with Piecewise C² Singularities,” vol. LVII, pp. 219–266,

2004.

[46] D. L. Donoho, “Compressed Sensing,” IEEE Trans. Inf. Theory, vol. 52, no. 4, pp.

1289–1306, 2006.

[47] E. J. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, 2006.

[48] E. J. Candès, “The restricted isometry property and its implications for compressed

sensing,” Comptes Rendus Math., vol. 346, no. 9–10, pp. 589–592, May 2008.

[49] E. Lebed, P. J. Mackenzie, M. V Sarunic, and M. F. Beg, “Rapid volumetric OCT image

acquisition using compressive sampling.,” Opt. Express, vol. 18, no. 20, pp. 21003–12,

Sep. 2010.


[50] D. Xu, Y. Huang, and J. U. Kang, “Real-time compressive sensing spectral domain optical

coherence tomography.,” Opt. Lett., vol. 39, no. 1, pp. 76–9, Jan. 2014.

[51] D. Xu, Y. Huang, and J. U. Kang, “GPU-accelerated non-uniform fast Fourier transform-

based compressive sensing spectral domain optical coherence tomography,” vol. 22, no. 12, pp. 4209–4211, 2014.

[52] X. Liu and J. U. Kang, “Compressive SD-OCT: the application of compressed sensing in

spectral domain optical coherence tomography.,” Opt. Express, vol. 18, no. 21, pp.

22010–9, Oct. 2010.

[53] D. Xu, Y. Huang, and J. U. Kang, “Volumetric (3D) compressive sensing spectral domain

optical coherence tomography,” Biomed. Opt. Express, vol. 5, no. 11, p. 3921, Oct. 2014.

[54] I. Daubechies, M. Defrise, and C. De Mol, “An Iterative Thresholding Algorithm for

Linear Inverse Problems with a Sparsity Constraint,” Commun. Pure Appl. Math., vol.

LVII, no. 2004, pp. 1413–1457, 2004.

[55] NVIDIA, “Thrust Quick Start Guide,” Feb. 2014.

[56] A. B. Wu, E. Lebed, M. V Sarunic, and M. F. Beg, “Quantitative evaluation of transform

domains for compressive sampling-based recovery of sparsely sampled volumetric OCT

images.,” IEEE Trans. Biomed. Eng., vol. 60, no. 2, pp. 470–8, Feb. 2013.

[57] E. Lebed, S. Lee, M. V Sarunic, and M. F. Beg, “Rapid radial optical coherence

tomography image acquisition.,” J. Biomed. Opt., vol. 18, no. 3, p. 036004, Mar. 2013.

[58] J. Liang, D. R. Williams, and D. T. Miller, “Supernormal vision and high-resolution

retinal imaging through adaptive optics,” J. Opt. Soc. Am. A, vol. 14, no. 11, p. 2884, Nov.

1997.

[59] R. J. Zawadzki, S. M. Jones, S. S. Olivier, M. Zhao, B. a Bower, J. a Izatt, S. Choi, S.

Laut, and J. S. Werner, “Adaptive-optics optical coherence tomography for high-

resolution and high-speed 3D retinal in vivo imaging.,” Opt. Express, vol. 13, no. 21, pp.

8532–8546, Oct. 2005.

[60] J. Porter, H. M. Queener, J. E. Lin, K. Thorn, and A. Awwal, Eds., Adaptive Optics for

Vision Science. Hoboken, NJ, USA: John Wiley & Sons, Inc., 2006.

[61] P. Godara, A. M. Dubis, A. Roorda, J. L. Duncan, and J. Carroll, “Adaptive Optics Retinal

Imaging: Emerging Clinical Applications,” Optom. Vis. Sci., vol. 87, no. 12, pp. 930–941,

2011.

[62] D. R. Williams, “Imaging single cells in the living retina.,” Vision Res., vol. 51, no. 13,

pp. 1379–96, Jul. 2011.


[63] Y. Geng, A. Dubra, L. Yin, W. H. Merigan, R. Sharma, R. T. Libby, and D. R. Williams,

“Adaptive optics retinal imaging in the living mouse eye,” vol. 3, no. 4, pp. 715–734,

2012.

[64] F. Felberer, J.-S. Kroisamer, B. Baumann, S. Zotter, U. Schmidt-Erfurth, C. K.

Hitzenberger, and M. Pircher, “Adaptive optics SLO/OCT for 3D imaging of human

photoreceptors in vivo.,” Biomed. Opt. Express, vol. 5, no. 2, pp. 439–56, Feb. 2014.

[65] X. Zhou, P. Bedggood, and A. Metha, “Improving high resolution retinal image quality

using speckle illumination HiLo imaging,” Biomed. Opt. Express, vol. 5, no. 8, p. 2563,

Jul. 2014.

[66] G. Palczewska, Z. Dong, M. Golczak, J. J. Hunter, D. R. Williams, N. S. Alexander, and

K. Palczewski, “Noninvasive two-photon microscopy imaging of mouse retina and retinal

pigment epithelium through the pupil of the eye.,” Nat. Med., vol. 20, no. 7, pp. 785–9,

Jul. 2014.

[67] M. Zacharria, B. Lamory, and N. Chateau, “Biomedical imaging: New view of the eye,”

Nat. Photonics, vol. 5, no. 1, pp. 24–26, Jan. 2011.

[68] M. Pircher, B. Baumann, E. Götzinger, H. Sattmann, and C. K. Hitzenberger,

“Simultaneous SLO/OCT imaging of the human retina with axial eye motion correction.,”

Opt. Express, vol. 15, no. 25, pp. 16922–16932, 2007.

[69] J. Carroll, S. S. Choi, and D. R. Williams, “In vivo imaging of the photoreceptor mosaic

of a rod monochromat.,” Vision Res., vol. 48, no. 26, pp. 2564–2568, 2008.

[70] A. Roorda, “Applications of Adaptive Optics Scanning Laser Ophthalmoscopy,” Optom.

Vis. Sci., vol. 87, no. 4, pp. 260–268, 2010.

[71] D. Scoles, Y. N. Sulai, C. S. Langlo, G. a Fishman, C. a Curcio, J. Carroll, and A. Dubra,

“In Vivo Imaging of Human Cone Photoreceptor Inner Segments.,” Invest. Ophthalmol.

Vis. Sci., vol. 55, no. 7, pp. 4244–4251, Jan. 2014.

[72] O. P. Kocaoglu, S. Lee, R. S. Jonnal, Q. Wang, A. E. Herde, J. C. Derby, W. Gao, and D.

T. Miller, “Imaging cone photoreceptors in three dimensions and in time using ultrahigh

resolution optical coherence tomography with adaptive optics.,” Biomed. Opt. Express,

vol. 2, no. 4, pp. 748–63, Jan. 2011.

[73] Y. Zhang, J. Rha, R. Jonnal, and D. Miller, “Adaptive optics parallel spectral domain

optical coherence tomography for imaging the living retina.,” Opt. Express, vol. 13, no.

12, pp. 4792–811, Jun. 2005.

[74] A. G. Podoleanu and R. B. Rosen, “Combinations of techniques in imaging the retina with

high resolution.,” Prog. Retin. Eye Res., vol. 27, no. 4, pp. 464–99, Jul. 2008.


[75] R. S. Jonnal, O. P. Kocaoglu, Q. Wang, S. Lee, and D. T. Miller, “Phase-sensitive imaging

of the outer retina using optical coherence tomography and adaptive optics.,” Biomed.

Opt. Express, vol. 3, no. 1, pp. 104–24, Jan. 2012.

[76] Y. Jian, R. J. Zawadzki, and M. V Sarunic, “Adaptive optics optical coherence

tomography for in vivo mouse retinal imaging.,” J. Biomed. Opt., vol. 18, no. 5, p. 56007,

May 2013.

[77] O. P. Kocaoglu, R. D. Ferguson, R. S. Jonnal, Z. Liu, Q. Wang, D. X. Hammer, and D. T.

Miller, “Adaptive optics optical coherence tomography with dynamic retinal tracking,”

Biomed. Opt. Express, vol. 5, no. 7, p. 2262, Jun. 2014.

[78] M. Pircher, B. Baumann, E. Götzinger, and C. K. Hitzenberger, “Retinal cone mosaic

imaged with transverse scanning optical coherence tomography.,” Opt. Lett., vol. 31, no.

12, pp. 1821–3, Jun. 2006.

[79] T. Schmoll, C. Kolbitsch, and R. a Leitgeb, “Ultra-high-speed volumetric tomography of

human retinal blood flow.,” Opt. Express, vol. 17, no. 5, pp. 4166–76, Mar. 2009.

[80] M. Pircher, E. Götzinger, H. Sattmann, R. a Leitgeb, and C. K. Hitzenberger, “In vivo

investigation of human cone photoreceptors with SLO/OCT in combination with 3D

motion correction on a cellular level.,” Opt. Express, vol. 18, no. 13, pp. 13935–44, Jun.

2010.

[81] R. J. Zawadzki, S. M. Jones, S. Pilli, S. Balderas-Mata, D. Y. Kim, S. S. Olivier, and J. S.

Werner, “Integrated adaptive optics optical coherence tomography and adaptive optics

scanning laser ophthalmoscope system for simultaneous cellular resolution in vivo retinal

imaging.,” Biomed. Opt. Express, vol. 2, no. 6, pp. 1674–86, Jun. 2011.

[82] A. Dubra and Y. Sulai, “Reflective afocal broadband adaptive optics scanning

ophthalmoscope.,” Biomed. Opt. Express, vol. 2, no. 6, pp. 1757–68, Jun. 2011.

[83] A. Dubra, Y. Sulai, J. L. Norris, R. F. Cooper, A. M. Dubis, D. R. Williams, and J.

Carroll, “Noninvasive imaging of the human rod photoreceptor mosaic using a confocal

adaptive optics scanning ophthalmoscope.,” Biomed. Opt. Express, vol. 2, no. 7, pp. 1864–

76, Jul. 2011.

[84] S.-H. Lee, J. S. Werner, and R. J. Zawadzki, “Improved visualization of outer retinal

morphology with aberration cancelling reflective optical design for adaptive optics -

optical coherence tomography.,” Biomed. Opt. Express, vol. 4, no. 11, pp. 2508–17, Jan.

2013.

[85] F. Felberer, J.-S. Kroisamer, C. K. Hitzenberger, and M. Pircher, “Lens based adaptive

optics scanning laser ophthalmoscope.,” Opt. Express, vol. 20, no. 16, pp. 17297–310, Jul.

2012.


[86] M. J. Booth, “Wavefront sensorless adaptive optics for large aberrations,” vol. 32, no. 1,

pp. 2006–2008, 2007.

[87] D. Débarre, E. J. Botcherby, M. J. Booth, and T. Wilson, “Adaptive optics for structured

illumination microscopy.,” Opt. Express, vol. 16, no. 13, pp. 9290–305, Jun. 2008.

[88] N. Ji, D. E. Milkie, and E. Betzig, “Adaptive optics via pupil segmentation for high-

resolution imaging in biological tissues.,” Nat. Methods, vol. 7, no. 2, pp. 141–7, Feb.

2010.

[89] D. E. Milkie, E. Betzig, and N. Ji, “Pupil-segmentation-based adaptive optical microscopy

with full-pupil illumination.,” Opt. Lett., vol. 36, no. 21, pp. 4206–8, Nov. 2011.

[90] H. Hofer, N. Sredar, H. Queener, C. Li, and J. Porter, “Wavefront sensorless adaptive

optics ophthalmoscopy in the human eye.,” Opt. Express, vol. 19, no. 15, pp. 14160–71,

Jul. 2011.

[91] Y. Jian, J. Xu, M. a. Gradowski, S. Bonora, R. J. Zawadzki, and M. V. Sarunic,

“Wavefront sensorless adaptive optics optical coherence tomography for in vivo retinal

imaging in mice,” Biomed. Opt. Express, vol. 5, no. 2, p. 547, Feb. 2014.

[92] A. Gullstrand, “Appendix II,” in Handbuch der Physiologischen Optik, 3rd ed., vol. 1, J. P. Southall, Ed. and trans., Opt. Soc. Am., 1924, pp. 351–352 (original work 1909).

[93] Y. Le Grand and S. G. El Hage, “Physiological Optics,” Springer Ser. Opt. Sci., 1980.

[94] D. C. Chen, S. M. Jones, D. a Silva, and S. S. Olivier, “High-resolution adaptive optics

scanning laser ophthalmoscope with dual deformable mirrors.,” J. Opt. Soc. Am. A. Opt.

Image Sci. Vis., vol. 24, no. 5, pp. 1305–12, May 2007.

[95] R. J. Zawadzki, S. S. Choi, S. M. Jones, S. S. Oliver, and J. S. Werner, “Adaptive optics-

optical coherence tomography: optimizing visualization of microscopic retinal structures

in three dimensions.,” J. Opt. Soc. Am. A. Opt. Image Sci. Vis., vol. 24, no. 5, pp. 1373–

83, May 2007.

[96] C. Li, N. Sredar, K. M. Ivers, H. Queener, and J. Porter, “A correction algorithm to

simultaneously control dual deformable mirrors in a woofer-tweeter adaptive optics

system.,” Opt. Express, vol. 18, no. 16, pp. 16671–84, Aug. 2010.

[97] W. Zou, X. Qi, and S. A. Burns, “Woofer-tweeter adaptive optics scanning laser

ophthalmoscopic imaging based on Lagrange-multiplier damped least-squares

algorithm.,” Biomed. Opt. Express, vol. 2, no. 7, pp. 1986–2004, Jul. 2011.

[98] Y. N. Sulai and A. Dubra, “Non-common path aberration correction in an adaptive optics

scanning ophthalmoscope,” Biomed. Opt. Express, vol. 5, no. 9, p. 3059, Aug. 2014.


[99] K. A. Kwiterovich, M. G. Maguire, R. P. Murphy, A. P. Schachat, N. M. Bressler, S. B.

Bressler, and S. L. Fine, “Frequency of Adverse Systemic Reactions after Fluorescein

Angiography,” Ophthalmology, vol. 98, no. 7, pp. 1139–1142, Jul. 1991.

[100] V. P. K. Wong, “GPU Acceleration of Volume Segmentation for Retinal Thickness,”

Bachelor’s Thesis, 2013.