
AUGMENTED REALITY WITH X-RAY LOCALIZATION FOR TOTAL HIP REPLACEMENT

Authors: Yeo Seng Jin, FRCS(Ed), FAMS Yung Shing Wai Kwoh Chee Keong, PhD, MSc Seah Evan, B A Sc Wong Thong Seng, BEng Ng Wan Sing, PhD, DIC Lond., MEng (NUS'pore) Teo Ming Yeong, SM, B.Sc(Hons)

Attribution: This work is the result of a collaboration between the Department of Orthopaedic Surgery, Singapore General Hospital, and the Computer Integrated Medical Intervention Laboratory, Nanyang Technological University.

Acknowledgment: We are very grateful to Mr. Robert Ng, Manager of the Department of Experimental Surgery, Singapore General Hospital, for making the arrangements for our experiments and assisting us in using the C-arm fluoroscopic machine in the mortuary.

Corresponding Address: A/P Kwoh Chee Keong

Division of Computing Systems, School of Computer Engineering

Nanyang Technological University

Blk N4 #2A-32

Nanyang Avenue

Singapore 639798

Tel: (65) 790 6057

Fax: (65) 792 6559

Meeting: The paper was presented at the Third Annual NTU-SGH Biomedical Engineering Symposium.

ABSTRACT
An approach to the localization of acetabular prosthesis cup placement during total hip replacement (THR) surgery, based on only one X-ray image, is described. The purpose of this project is to assist the surgeon in placing the hip-prosthesis cup at the right orientation, with the ultimate aim of using the procedure intraoperatively. From the X-ray image, the 2D coordinates of points in the image are picked and the 3D world coordinates of the hip are calculated using a mathematical model. The method has been applied in mock-bone and cadaver trials and gave satisfactory results in finding “the center” of the acetabulum cup and the desired orientation of implant insertion (45° of abduction and 15°-20° of anteversion) for implanting the acetabular component. The calculated information is then integrated into a new augmented reality system to provide real-time fusion of video and virtual information for online, real-time visualisations during actual clinical procedures.

Keywords: image-guided surgery, computer aided surgery, augmented reality, X-ray localization, total hip replacement, femoral implant, orthopedics surgery, image intensifier, distortion and calibration, camera tracking.

1. INTRODUCTION
The primary motivation of this research project is to set up an Augmented Reality System for Therapy (ART) for total hip replacement. Total hip replacement (THR) is an operation in which the damaged hip (sometimes due to arthritis, sometimes due to damage caused by an accident) is replaced with an artificial hip consisting of an acetabular cup and a femoral stem. The operation involves a number of steps; our primary aim is to provide assistance in placing the acetabular cup at the right orientation.

In a total hip replacement operation the patient is covered with drapes, which makes it difficult for the surgeon to accurately determine the position of the limb. Without tracking and localizing equipment, surgeons do not have information on the correct orientation of the acetabular cup and implant it based on experience. This limitation may cause dislocation of the artificial hip, making revision surgery necessary. The incidence of dislocation following primary total hip replacement (THR) surgery is between 2-6%, and even higher following revisions [1][2].

The aim of this project is to equip the surgeon with “X-ray eyes” to see through the drape. This is achieved via an augmented reality system, which assists the surgeon in placing the tools, and hence the prosthesis, at the correct position and orientation to achieve the best clinical outcome. Since the patient is completely covered up, Augmented Reality (AR) comes in useful to reveal what is not directly visible by overlaying computer-synthesized images onto the user’s real-world views.

Augmented Reality has been applied in many fields such as medical applications, military training, engineering design, manufacturing, architecture, and maintenance and repair (as shown in Figure 1.1).

Figure 1.1: Different applications of Augmented Reality: (a) mechanical, (b) interior design (only the phone is real), (c) exterior construction [3], (d) breast needle biopsy [4].

2. System overview
AR needs an internal image to superimpose over the real video image. There are various internal imaging techniques for medical purposes, such as X-ray, fluoroscopy (low-intensity X-ray), ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI). For THR the present technique to localize the hip is CT, with the images taken preoperatively for diagnosis and planning [3]. If CT were used during the operation, users would be exposed to a high amount of radiation. Hence, in our system of X-ray localization of the hip, we must minimize the number of X-ray images taken in order to minimize radiation exposure. We therefore explore a system that uses one or two X-ray images to determine the coordinates of points on the screen and then calculates the 3D world coordinates of the hip without actual reconstruction of a 3D model. We also developed a localization and multi-modal image registration method for this application [3].

The advantages of the system are: first, the resulting system is cost effective because it uses a few X-ray images instead of continuous CT; second, radiation dosage to both surgeons and patients can be reduced drastically; third, the new calibration method does not take a geometric model into consideration. Figure 2.1 shows the system overview of our system and the first prototype of the augmented images. The potential of the system lies in direct, fully immersive, real-time multisensory fusion of real and virtual information data streams into online, real-time visualisations available during actual clinical procedures.

Figure 2.1: System overview of our system and the first prototype of the augmented images. The main components are the patient, the tracking cameras and tracking unit, the image overlay unit, the see-through display and the network linking them. In this demonstration, we modeled the alignment tool and the corrected line of action for THR; the computed graphics are augmented onto the video and presented as a single image to the user.

The overall system can be subdivided into the Image Intensifier sub-system, the Tracking Unit sub-system, the Video Cameras sub-system and the Image Overlay Unit. In this paper we first focus our discussion on the Image Intensifier sub-system, where we take X-ray images and determine all the important parameters and information needed to determine the cup size and the ideal line of action and orientation in the given images. With this known information, we then pass the necessary information and marker locations to the Tracking Unit sub-system for tracking. Finally, this information is used to register and fuse the generated images with the real-time video.

3. Image Intensifier sub-system
In the Image Intensifier sub-system, the main objective is to come up with an image intensifier (II) distortion calibration method and an X-ray localization technique for THR. The following summary of the procedure for THR in our system will give the reader a better understanding of the sequence of actions.

• Place six markers on the pelvis, with at least one marker out of plane.

• Acquire an image using the C-arm X-ray machine.

• Find the 3D coordinates of the markers in world coordinates with an ancillary device.

• Find the 2D coordinates of these markers on the screen for the images, in screen coordinates.

• Derive the transformation between the 3D and 2D coordinate systems.

• Select at least three points on the pelvis periphery in the screen projection and find its center in 2D and hence in 3D.

• Show the desired orientation of the tool by rotating first 45° around the Z-axis (abduction) and then 20° around the X-axis (anteversion).

• Pass the information to the 3D tracking device.

• Determine the 3D position and orientation of the pelvis at each frame.

• Generate the correct graphics and fuse them with the stereo video inputs.

3.1 The image intensifier (II) distortion calibration method
When fluoroscopy is used, we must ensure that a rectangular grid appears as it is in the X-ray image; if not, corrections must be made to restore the captured images. It is important to note that in actual usage it is not always possible to use standard gantry angles for oblique fields, particularly where conformal planning (to conform to the size, orientation of an organ, etc.) is employed. Hence, distortion calibration must take this variability into consideration.

Some well-known sources of distortion reported in the literature are as follows: first, the projection of the X-ray image onto the curved surface of the image intensifier front end; second, the electron optics of the image intensifier, interactions with external magnetic fields, and the video component of the fluoroscopic system (including the optical coupling between the output phosphor screen of the image intensifier and the video camera). The most visually apparent of these is the pincushion effect of the projection of the X-ray image onto the curved surface of the image intensifier (I.I.) front end (Figure 3.1 (a)). Rotation and ‘S’ distortion introduced by the electron optics of the image intensifier and interaction with external magnetic fields (specifically the earth’s magnetic field) is shown in Figure 3.1 (b).

Figure 3.1: (a) The pincushion effect of the projection of the X-ray image onto the curved surface of the image intensifier (I.I.) front end. (b) Rotation and ‘S’ distortion introduced by the electron optics of the image intensifier and interaction with external magnetic fields.

It may be sufficient to acquire digital fluoroscopic images with the image intensifier in predetermined positions; in these situations it is feasible to remove distortion by simple warping [4]. However, fluoroscopy during simulation often involves arbitrary panning and scrolling of the I.I. Models that describe distortion as a function of radial distance from the center of the image [5] are capable of removing the pin-cushion component of the distortion, and do so when the image intensifier is centered with respect to the central axis (CA) of the X-ray beam. Fahring [6] described a method that offers very accurate correction based on two-dimensional polynomial warping, but at the cost of an excessively precise calibration, and it ignores the lateral and longitudinal shifts and vertical elevation of the image intensifier. A method that is applicable to arbitrary positioning of the I.I. has been proposed in [7]. This model separates the image distortion into two components: view-dependent distortion (VDD), i.e. the projection of the X-ray image onto the spherical surface of the I.I., and view-independent distortion (VID), i.e. the mapping from the input phosphor to the output phosphor and to the digital image. A geometrical model corrects for the first component and the second is modeled by a linear transformation. In [8], a more accurate method for calibration of arbitrary rotations and shifts of the I.I. is described, which has two extensions compared to [7].

In THR we are interested in the acetabulum image at one position of the I.I.; therefore, we need a fast method which can be used intraoperatively to minimize the system lag. In our system, we use a two-dimensional linear transform followed by a two-dimensional third-order transform. This method does not require a geometric model of the I.I., so there is no need for the I.I. front-end radius of curvature. We find the coefficients for different lateral, longitudinal, vertical and gantry rotations and take the mean of these.

3.2 Methods
For our experiments, we used the SERIES-9600 Mobile Digital Imaging System from OEC Medical Systems Inc., which has a DICOM 3.0 interface. The source-to-image (film) distance (SFD) is 990 mm and the gantry rotation is 360°. A calibration template made of plexiglass is used. It has a rectilinear array of holes, each of 5 mm diameter, with a center-to-center distance between holes of 10 mm. In each hole there is a steel ball of slightly larger diameter than the hole; there are 31 × 31 steel balls.

3.2.1 Distortion model
This method removes the distortion without the need for a geometric model. We perform the two-dimensional transform in two steps: the first step is a linear transform, and the resulting points are then further transformed using a two-dimensional third-order transformation. For the first step we use

$$x_1 = a_0 + a_1 x + a_2 y$$
$$y_1 = b_0 + b_1 x + b_2 y \qquad (3.1)$$

where x, y are the distorted points and x1, y1 are the compensated points after the two-dimensional linear transform. The coefficients a0, a1, a2, b0, b1 and b2 can be found from the known points (x', y') on the calibration template.

For the second step, we have

$$x' = a_0 + a_1 x_1 + a_2 y_1 + a_3 x_1^2 + a_4 x_1 y_1 + a_5 y_1^2 + a_6 x_1^3 + a_7 x_1^2 y_1 + a_8 x_1 y_1^2 + a_9 y_1^3$$
$$y' = b_0 + b_1 x_1 + b_2 y_1 + b_3 x_1^2 + b_4 x_1 y_1 + b_5 y_1^2 + b_6 x_1^3 + b_7 x_1^2 y_1 + b_8 x_1 y_1^2 + b_9 y_1^3 \qquad (3.2)$$

where x' and y' are the undistorted points on the template. This method is straightforward and gave good accuracy for our THR application.
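To make the two-step fit concrete, the sketch below shows how the coefficients of Eqs (3.1) and (3.2) could be estimated by least squares from matched ball centers in the distorted image and the known template grid. This is an illustrative Python/NumPy reading of the method, not the original implementation; the function names and the use of numpy.linalg.lstsq are our own assumptions.

```python
import numpy as np

def _linear_terms(x, y):
    # Design matrix for Eq (3.1): columns [1, x, y]
    return np.column_stack([np.ones_like(x), x, y])

def _cubic_terms(x, y):
    # Monomials of Eq (3.2): 1, x, y, x^2, xy, y^2, x^3, x^2*y, x*y^2, y^3
    return np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y,
                            x**3, x*x*y, x*y*y, y**3])

def fit_distortion(xy_dist, xy_template):
    """Fit the two-step correction of Eqs (3.1)-(3.2) from detected ball centers
    in the distorted image (xy_dist, Nx2) to the known template grid (xy_template, Nx2)."""
    # Step 1: linear transform, distorted -> compensated points (x1, y1)
    A1 = _linear_terms(xy_dist[:, 0], xy_dist[:, 1])
    lin, *_ = np.linalg.lstsq(A1, xy_template, rcond=None)   # columns: (a0..a2), (b0..b2)
    xy_lin = A1 @ lin
    # Step 2: third-order transform, compensated -> undistorted template points
    A2 = _cubic_terms(xy_lin[:, 0], xy_lin[:, 1])
    cub, *_ = np.linalg.lstsq(A2, xy_template, rcond=None)   # columns: (a0..a9), (b0..b9)
    return lin, cub

def apply_correction(xy, lin, cub):
    """Map distorted image points (Nx2) to undistorted coordinates."""
    xy1 = _linear_terms(xy[:, 0], xy[:, 1]) @ lin
    return _cubic_terms(xy1[:, 0], xy1[:, 1]) @ cub
```

In this sketch the coefficients for several C-arm positions could simply be stacked and averaged, as described at the end of Section 3.1.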

3.2.2 Image reconstruction
For each pixel position in the reconstructed image, the corresponding position in the distorted image is calculated in order to avoid “holes” in the reconstructed image. For this we use the inverse transformation, of the same form as the transformation from distorted to undistorted points. The transformations are as follows:

$$x_1' = a_0' + a_1' x' + a_2' y'$$
$$y_1' = b_0' + b_1' x' + b_2' y' \qquad (3.3)$$

$$u = a_0' + a_1' x_1' + a_2' y_1' + a_3' x_1'^2 + a_4' x_1' y_1' + a_5' y_1'^2 + a_6' x_1'^3 + a_7' x_1'^2 y_1' + a_8' x_1' y_1'^2 + a_9' y_1'^3$$
$$v = b_0' + b_1' x_1' + b_2' y_1' + b_3' x_1'^2 + b_4' x_1' y_1' + b_5' y_1'^2 + b_6' x_1'^3 + b_7' x_1'^2 y_1' + b_8' x_1' y_1'^2 + b_9' y_1'^3 \qquad (3.4)$$

where (x', y') is a pixel position in the reconstructed image, (u, v) is the corresponding position in the distorted image, and the primed coefficients are those of the inverse transformation.
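A minimal sketch of this backward mapping is given below, assuming inverse coefficients of the same 3 × 2 and 10 × 2 shape as in the fitting sketch above have already been obtained (for example by swapping the roles of the template and distorted points in the least-squares fit). Nearest-neighbour sampling and the pixel-coordinate convention are our own simplifications; the paper does not specify the interpolation scheme.

```python
import numpy as np

def reconstruct(distorted, lin_inv, cub_inv):
    """Backward-map every pixel of the reconstructed image into the distorted
    image so that no 'holes' appear (Eqs 3.3-3.4). lin_inv is 3x2, cub_inv is 10x2."""
    h, w = distorted.shape
    yp, xp = np.mgrid[0:h, 0:w]                       # pixel grid of the reconstructed image
    x = xp.ravel().astype(float)
    y = yp.ravel().astype(float)
    # Eq (3.3): linear part of the inverse transform
    x1 = lin_inv[0, 0] + lin_inv[1, 0]*x + lin_inv[2, 0]*y
    y1 = lin_inv[0, 1] + lin_inv[1, 1]*x + lin_inv[2, 1]*y
    # Eq (3.4): third-order part, giving the source position (u, v) in the distorted image
    terms = np.column_stack([np.ones_like(x1), x1, y1, x1*x1, x1*y1, y1*y1,
                             x1**3, x1*x1*y1, x1*y1*y1, y1**3])
    u = terms @ cub_inv[:, 0]
    v = terms @ cub_inv[:, 1]
    # Nearest-neighbour sampling, clamped to the image border
    ui = np.clip(np.rint(u).astype(int), 0, w - 1)
    vi = np.clip(np.rint(v).astype(int), 0, h - 1)
    return distorted[vi, ui].reshape(h, w)
```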

3.3 Results And Discussion
We set the image center at 685.6 mm SAD without gantry rotation. After the calibration for distortion correction, the mean error is reduced to ±0.8 mm and the maximum error is ±3 mm. The larger values can be attributed to the few pixels at the edges of the image and to quantisation error. At the center and at ±10 mm of SAD, the accuracy is well within the limit of ±1 mm, which is sufficient for orthopaedic surgery. Figure 3.3 shows distorted and undistorted images of the calibration template at different lateral, longitudinal and vertical shifts.

Figure 3.3: Distorted (left) and compensated (right) images of the calibration template at different lateral, longitudinal and vertical shifts.

4. X-RAY LOCALIZATION FOR TOTAL HIP REPLACEMENT
In order to minimize exposure to radiation, we model 3D point matrices from 2D image(s) [9]-[12]. This amounts to modelling the transformation between 3D coordinates (world coordinates) and 2D coordinates (image coordinates), as shown in Figure 4.1.

4.1 Mathematics
In general, any transformed image can be represented by

h[P*] = [P][Tc]    (4.1)

Here, h is the normalization factor, [P*] is the transformed points matrix, [P] is the original points matrix and [Tc] is the transformation matrix that may include perspective information. Since screen projection [P*] has only x and y components, the third column of [Tc] must be zero.

$$[T_c] = \begin{bmatrix} T_{11} & T_{12} & 0 & T_{14} \\ T_{21} & T_{22} & 0 & T_{24} \\ T_{31} & T_{32} & 0 & T_{34} \\ T_{41} & T_{42} & 0 & T_{44} \end{bmatrix} \qquad (4.2)$$

For a point matrix with one point,

[P] = [x y z 1],  [P*] = [x* y* 0 1]    (4.3)

Combining Eq (4.1) to Eq (4.3) yields

T11x + T21y + T31z + T41 = hx*

T12x + T22y + T32z + T42 = hy*

T14x + T24y + T34z + T44 = h (4.4)

Figure 4.1: The transformation between the world coordinates (X, Y, Z) of a point [P] and its image coordinates (x*, y*) in [P*].

Eliminating h from Eq (4.4) yields

(T11 - T14x*)x + (T21 - T24x*)y + (T31 - T34x*)z + (T41 - T44x*) = 0
(T12 - T14y*)x + (T22 - T24y*)y + (T32 - T34y*)z + (T42 - T44y*) = 0    (4.5)

x* and y* are known values. Since there are 12 unknowns in [Tc], six discrete points with known 3D locations (six markers) are required for reconstruction. With subscripts denoting the individual points, and for a non-trivial solution of Eq (4.5), one of the unknowns must be specified; T44 is used as a scaling factor and therefore T44 = 1. Hence Eq (4.5) can be written in the following form.

For the i-th marker, Eq (4.5) with T44 = 1 gives the pair of equations

$$
\begin{bmatrix}
x_i & 0 & -x_i x_i^* & y_i & 0 & -y_i x_i^* & z_i & 0 & -z_i x_i^* & 1 & 0 \\
0 & x_i & -x_i y_i^* & 0 & y_i & -y_i y_i^* & 0 & z_i & -z_i y_i^* & 0 & 1
\end{bmatrix}
\begin{bmatrix}
T_{11} \\ T_{12} \\ T_{14} \\ T_{21} \\ T_{22} \\ T_{24} \\ T_{31} \\ T_{32} \\ T_{34} \\ T_{41} \\ T_{42}
\end{bmatrix}
=
\begin{bmatrix} x_i^* \\ y_i^* \end{bmatrix},
\qquad i = 1, \dots, 6
\qquad (4.6)
$$

Stacking these pairs of rows for all six markers yields a 12 × 11 system of linear equations in the eleven unknown elements of [Tc].

Because there are zeros on the diagonal of the matrix in Eq (4.6), the system can become singular. We therefore solve this system of linear equations using singular value decomposition (SVD).
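As an illustration of this step, the following Python/NumPy sketch stacks the two rows of Eq (4.6) for each of the six markers and solves the resulting 12 × 11 system in the least-squares sense (numpy.linalg.lstsq is SVD-based). The variable names and array layout are assumptions, not the original code.

```python
import numpy as np

def solve_Tc(world_xyz, screen_xy):
    """Recover the eleven unknown entries of [Tc] (with T44 = 1) from six markers
    with known world coordinates (6x3) and screen coordinates (6x2), per Eq (4.6)."""
    rows, rhs = [], []
    for (x, y, z), (xs, ys) in zip(world_xyz, screen_xy):
        # unknown order: T11 T12 T14 T21 T22 T24 T31 T32 T34 T41 T42
        rows.append([x, 0, -x*xs, y, 0, -y*xs, z, 0, -z*xs, 1, 0])
        rhs.append(xs)
        rows.append([0, x, -x*ys, 0, y, -y*ys, 0, z, -z*ys, 0, 1])
        rhs.append(ys)
    A = np.asarray(rows, dtype=float)
    b = np.asarray(rhs, dtype=float)
    t, *_ = np.linalg.lstsq(A, b, rcond=None)   # SVD-based least-squares solution
    T11, T12, T14, T21, T22, T24, T31, T32, T34, T41, T42 = t
    return np.array([[T11, T12, 0.0, T14],
                     [T21, T22, 0.0, T24],
                     [T31, T32, 0.0, T34],
                     [T41, T42, 0.0, 1.0]])     # T44 fixed to 1 as the scale factor
```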

The resulting solution for the transformation matrix [Tc] is used to determine the world coordinates of points other than the original six that defined the transformation. If the i-th point has the coordinates (xi*, yi*) on the screen projection, its world coordinates can be found from Eq (4.7).

$$
\begin{bmatrix} x_i & y_i & z_i \end{bmatrix}
\begin{bmatrix}
T_{11} & T_{12} & T_{14} \\
T_{21} & T_{22} & T_{24} \\
T_{31} & T_{32} & T_{34}
\end{bmatrix}
=
\begin{bmatrix} x_i^* - T_{41} & y_i^* - T_{42} & 0 \end{bmatrix}
\qquad (4.7)
$$
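Continuing the sketch above, an additional screen point can then be back-projected by solving the 3 × 3 system of Eq (4.7); this is again only an illustration under the same assumed NumPy conventions.

```python
import numpy as np

def back_project(Tc, xs, ys):
    """Solve Eq (4.7) for the world coordinates (x, y, z) of a screen point (xs, ys),
    given a 4x4 [Tc] with zero third column (e.g. as returned by solve_Tc above)."""
    # M collects columns 0, 1 and 3 of the upper 3x3 block of [Tc]:
    # rows [T11 T12 T14], [T21 T22 T24], [T31 T32 T34]
    M = Tc[:3][:, [0, 1, 3]]
    rhs = np.array([xs - Tc[3, 0], ys - Tc[3, 1], 0.0])
    # Eq (4.7) uses a row vector: [x y z] M = rhs  <=>  M^T [x y z]^T = rhs^T
    return np.linalg.solve(M.T, rhs)
```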

4.2 Orientation of the tool
Once we know the 3D location and center of the acetabulum, we must find the orientation of the tool. This is done by rotating a line first 45° around the Z-axis and then 20° around the X-axis at the center of the acetabulum. Eq (4.8) gives the rotation transformation matrix [Tr].

$$
[T_r] =
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
-x_o & -y_o & -z_o & 1
\end{bmatrix}
\begin{bmatrix}
\cos\theta_z & \sin\theta_z & 0 & 0 \\
-\sin\theta_z & \cos\theta_z & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\theta_x & \sin\theta_x & 0 \\
0 & -\sin\theta_x & \cos\theta_x & 0 \\
0 & 0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
x_o & y_o & z_o & 1
\end{bmatrix}
\qquad (4.8)
$$

where (xo, yo, zo) is the center of the acetabulum, θz = 45° (abduction) and θx = 20° (anteversion).
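The composite transform of Eq (4.8) can be written out as in the short Python/NumPy sketch below, keeping the row-vector convention [p 1][Tr] used above and the quoted 45° abduction and 20° anteversion values; the function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def tool_orientation_transform(center, abduction_deg=45.0, anteversion_deg=20.0):
    """Build [Tr] of Eq (4.8): translate the acetabulum center to the origin,
    rotate about Z (abduction) then about X (anteversion), and translate back.
    Points are row vectors, so a point p is transformed as [p, 1] @ Tr."""
    xo, yo, zo = center
    tz, tx = np.radians(abduction_deg), np.radians(anteversion_deg)

    T_to_origin = np.eye(4)
    T_to_origin[3, :3] = [-xo, -yo, -zo]
    T_back = np.eye(4)
    T_back[3, :3] = [xo, yo, zo]

    Rz = np.array([[ np.cos(tz), np.sin(tz), 0, 0],
                   [-np.sin(tz), np.cos(tz), 0, 0],
                   [ 0,          0,          1, 0],
                   [ 0,          0,          0, 1]])
    Rx = np.array([[1, 0,           0,           0],
                   [0, np.cos(tx),  np.sin(tx),  0],
                   [0, -np.sin(tx), np.cos(tx),  0],
                   [0, 0,           0,           1]])
    return T_to_origin @ Rz @ Rx @ T_back

# Example: orient a point one unit along Z above an assumed acetabulum center
center = np.array([100.0, 50.0, 200.0])
Tr = tool_orientation_transform(center)
p = np.append(center + [0.0, 0.0, 1.0], 1.0)   # homogeneous row vector
print((p @ Tr)[:3])                             # point on the desired tool axis
```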

4.3 Experiment
The materials used for the experiment were the markers, a digitizing probe, OPTOTRACK (Northern Digital Inc., CA), a C-arm fluoroscope machine, the mock bone and the cadaver. For the markers we used steel balls, which can be seen clearly in the X-ray projection. In the experiments, we affixed the markers on the mock bone and on the body of the cadaver. The digitizing probe is used to find the 3D orientation of the tool with OPTOTRACK.

We first performed our experiments with the mock bone and then went on to the cadaver trial. Six markers were placed on a Plexiglas plate, which was fixed onto the mock bone. The mock bone was adjusted to a similar orientation as it would have during the operation. Figures 4.2 to 4.4 show the mock-bone experiment, with the C-arm machine set at 0° gantry rotation and the other positions set at the center. Knowing the 3D coordinates of the markers, the transformation coefficients can be determined using the above equations.

Fig (4.2): Setting up markers on the mock bone. Fig (4.3): Finding 3D coordinates using the digitizing probe. Fig (4.4): Mock bone with the C-arm ready to take a shot.

Figures 4.5 to 4.7 show the cadaver experiment. The surgeon first dislocated the femur from the acetabulum, as would be done in a real THR operation. Then six markers were placed on the body. After that, the 3D locations were found using the OPTOTRACK probe. With this information, we can determine the 3D location of the structure of interest as shown in the image.

Fig (4.5): Surgeon dislocating the femur. Fig (4.6): Placing steel-ball markers on the cadaver. Fig (4.7): X-ray image of the cadaver trial.

4.4 Results and Discussions
An X-ray localization technique for the localization of acetabular prosthesis cup placement during total hip replacement surgery has been developed using the above equations. This technique uses only one X-ray image to calculate the size of the cup and the pseudo-3D world coordinates of the hip, in particular the socket, as shown in Figure 4.8.

Figure 4.8: Calculated cup size and pseudo-3D world coordinates of the hip (X-ray localization of the acetabulum cup; X, Y and Z axes in mm, showing the artificial acetabulum, the implant insertion tool and the center of the acetabulum), together with the augmented X-ray image.

5. Conclusion
This paper presented the work done on distortion correction and calibration in the image intensifier sub-system, and the X-ray localization technique used to calculate the cup size and the pseudo-3D position. This information will be used in the subsequent sub-systems for tracking, to accomplish our augmented reality integration.

The main contributions of this paper are: an efficient and robust method for C-arm fluoroscopic image intensifier calibration (this method gives a good accuracy of 0.8 mm - 1.5 mm for our THR application); and a new algorithm for X-ray localization for total hip replacement using only one X-ray image.

REFERENCES

[1] D.E. McCollum and W.J. Gray. Dislocation after total hip arthroplasty. Clinical Orthopaedics, (261), pp. 159-170, Dec. 1990.

[2] B. Jaramaz and A.M. DiGioia III. Pre-operative surgical simulation of an acetabular press fit: assembly strains in bone. Proceedings of the International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 1101-1102, 1993.

[3] B.W. Fei, T.G. Zhuang, J. Hu and F.M. Zhou. Frameless stereotactic localization and multimodal image registration using DSA/CT/MRI. Proceedings of the 20th Annual International Conference of the IEEE/EMBS, Oct. 29 - Nov. 1, 1998, Hong Kong.

[4] W.K. Pratt. Digital Image Processing. 2nd edn, New York: Wiley, 1991.

[5] D.M. Kahler and R. Zura. Evaluation of a computer integrated surgical technique for percutaneous fixation of transverse acetabular fractures. Proceedings of MRCAS, pp. 565-572, 1996.

[6] S. Rudin, D.R. Bednarek and R. Wong. Accurate characterization of image intensifier distortion. Med. Phys., 18, pp. 1145-1151, 1991.

[7] R. Fahring, M. Moreau and D.W. Holdsworth. Three-dimensional computed tomographic reconstruction using a C-arm mounted XRII: correction of image intensifier distortion. Med. Phys., 24, pp. 1097-1106, 1997.

[8] D.P. Chakraborty. Image intensifier distortion correction. Med. Phys., 14, pp. 249-252, 1987.

[9] C.S. Fraser and Q.A. Abdullah. A simplified mathematical model for application of analytical X-ray photogrammetry in orthopaedics. 14th Congress of the International Society of Photogrammetry, V-6, pp. 211-220, 1980.

[10] D.G. Cain. X-ray image enhancement by least-square estimation. Applied Optics, II, pp. 2940-2955, 1972.

[11] B. Jaramaz and A.M. DiGioia III. Computer assisted measurement of cup placement in total hip replacement. Clinical Orthopaedics, no. 354, pp. 70-81, 1998.

[12] B. Ghelman. Radiographic localization of the acetabular component of a hip prosthesis. Radiology, 130, pp. 540-542, 1979.