A Virtual Reality Simulator for Ultrasound-Guided Biopsy Training



Dong Ni, Wing Yin Chan, Jing Qin, Yim Pan Chui, and Yingge Qu, Chinese University of Hong Kong
Simon S.M. Ho, Union Hospital, Hong Kong
Pheng Ann Heng, Chinese University of Hong Kong

A VR-based training system for practicing biopsies simulates ultrasound imagery by stitching multiple ultrasound volumes on the basis of a 3D scale-invariant feature transform algorithm. In addition, a six-degree-of-freedom force model delivers a realistic haptic rendering of needle insertion.

One of the most fundamental, but difficult, skills to acquire in interventional radiology is ultrasound-guided biopsy. A biopsy takes a tissue sample from the body for analysis. This procedure's success depends on the biopsy needle's correct alignment with both the ultrasound probe's scanning plane and the target lesion, which requires extensive training and practice. Realistic presentation of visual and haptic feedback is essential for trainee surgeons to acquire the required hand-eye coordination skills. The ability to precisely insert and place the needle into the target under ultrasound guidance is also the first step in many complex interventional procedures, such as draining bile from the liver.

Traditionally, the training can be practiced only on live patients. However, such training is controversial because it might imperil their health. Besides, access to training scenarios is limited, which makes it difficult to train students or surgeons in a time-efficient manner.

Recently, researchers have developed simulators for teaching ultrasound-guided needle insertion (see the sidebar). However, none of them provide the necessary realism. So, we developed a VR-based simulator for ultrasound-guided needle insertion in the liver that provides realistic visual and haptic feedback.

Using a 3D scale-invariant feature transform (SIFT) algorithm, the simulator stitches together ultrasound volumes that have been scanned by a dedicated ultrasound probe. We believe this is a novel approach for generating an interactive deformable ultrasound volume. To reproduce internal motions, we apply spatial transformations on the resulting stitched volume. By integrating the biomechanical properties of soft tissues, a 6-DOF force model of needle insertion provides realistic haptic feedback. Our method can benefit VR-based medical simulation of diagnostic navigation and percutaneous needle puncture procedures, such as biopsies, for training and education.

The System Framework

Figure 1 shows our system's overall framework. The system first stitches together the ultrasound volumes with different scan angles to generate an ultrasound panorama. It registers the resulting stitched volume with the corresponding computed tomography (CT) volume to obtain a correlation. It extracts anatomic surfaces from the CT volume for both visual and haptic rendering while using the stitched volume to simulate the ultrasound imagery. We model respiratory motions as periodic movements and simulate the corresponding needle bending through an explicit position solver. To provide two-hands training, two haptic devices simulate the transducer and the needle.

To minimize the performance overhead from concurrently running haptic and visual rendering on the same machine, we adopt a client-server architecture to parallelize the rendering. The visualization client first collects the poses from the two haptic devices. It then renders the simulated ultrasound imagery, given by the stitched ultrasound volume, with an overlay of shadows and the needle. Finally, it renders a navigation view showing the virtual tools and virtual anatomy. The two haptic servers can simultaneously perform collision detection and haptic rendering at a relatively higher update rate.
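The article doesn't detail how this split is implemented, so the following is only a minimal single-process sketch of the idea in Python: two high-rate haptic loops publish device poses through shared buffers while a slower visualization loop consumes them. The names (PoseBuffer, haptic_server, visualization_client) are hypothetical, and a real deployment would likely use separate processes or machines rather than threads.

import threading
import time

class PoseBuffer:
    """Lock-protected buffer through which a haptic server publishes its device pose."""
    def __init__(self):
        self._lock = threading.Lock()
        self._pose = (0.0, 0.0, 0.0)          # position only; a real pose would also carry orientation

    def write(self, pose):
        with self._lock:
            self._pose = pose

    def read(self):
        with self._lock:
            return self._pose

def haptic_server(buf, stop, rate_hz=1000):
    """High-rate loop: poll the device, run collision detection and the force model, publish the pose."""
    while not stop.is_set():
        pose = (time.time() % 1.0, 0.0, 0.0)  # stand-in for reading the haptic device
        # ... collision detection and force rendering would run here at roughly 1 kHz ...
        buf.write(pose)
        time.sleep(1.0 / rate_hz)

def visualization_client(bufs, stop, rate_hz=60):
    """Lower-rate loop: collect both poses, then render the ultrasound slice, overlays, and navigation view."""
    while not stop.is_set():
        needle_pose, probe_pose = bufs[0].read(), bufs[1].read()
        # ... rendering of the simulated ultrasound imagery would happen here at roughly 60 Hz ...
        time.sleep(1.0 / rate_hz)

if __name__ == "__main__":
    stop = threading.Event()
    needle_buf, probe_buf = PoseBuffer(), PoseBuffer()
    workers = [threading.Thread(target=haptic_server, args=(needle_buf, stop)),
               threading.Thread(target=haptic_server, args=(probe_buf, stop)),
               threading.Thread(target=visualization_client, args=([needle_buf, probe_buf], stop))]
    for w in workers:
        w.start()
    time.sleep(1.0)                            # let the loops run briefly, then shut down
    stop.set()
    for w in workers:
        w.join()

The point of the sketch is the decoupling: the pose buffers let the haptic loops keep their high update rate regardless of how long a visualization frame takes.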

Related Work on Simulators for Ultrasound-Guided Needle Insertion

Franck Vidal and his colleagues presented a simulator for training in a virtual environment.1 The haptic model integrated into this system allows skin penetration. However, the system can't guarantee the simulated ultrasound images' realism because it produces them by moving an ultrasound scanner on a foam-filled box.

Derek Magee and his colleagues introduced an augmented-reality training system.2 Although this system uses several techniques to improve the simulated ultrasound images' realism, it can't provide realistic force sensation.

Clément Forest and his colleagues presented a simulator with haptic devices for ultrasound examination and needle insertion.3 The simulated ultrasound images' general appearance is based mainly on computed tomography or magnetic resonance imaging, which might differ from real ultrasound. Meanwhile, a three-degree-of-freedom force feedback device (the SensAble Phantom Omni) simulates the needle. This device can't provide resistance torque force to maintain the needle insertion path. However, maintaining that path is important during surgery and thus indispensable for the training system.

Also, none of these simulators take into account the influence of respiration movements, which is crucial during biopsies.

References
1. F.P. Vidal et al., "Developing a Needle Guidance Virtual Environment with Patient-Specific Data and Force Feedback," Proc. Computer Assisted Radiology and Surgery (CARS 2005), Elsevier, 2005, pp. 418-423; www.hpv.cs.bangor.ac.uk/Papers/CARS-2005.pdf.
2. D. Magee et al., "An Augmented Reality Simulator for Ultrasound Guided Needle Placement Training," Medical and Biological Eng. and Computing, vol. 45, no. 10, 2007, pp. 957-967.
3. C. Forest et al., "Ultrasound and Needle Insertion Simulators Built on Real Patient-Based Data," Studies in Health Technology and Informatics, vol. 125, 2007, pp. 136-139.


Figure 1. The framework for a VR-based simulator for ultrasound-guided needle insertion in the liver. The simulator provides realistic visual and haptic feedback.


Volumetric Ultrasound Stitching

Compared with other modalities such as CT and magnetic resonance imaging (MRI), ultrasound imaging has a limited field of view. To enlarge the field of view, our system creates the ultrasound panorama we mentioned earlier. Registration of the ultrasound volumes is the key to this process's success. Unfortunately, traditional intensity-based algorithms, including the sum of squared differences, normalized cross-correlation, mutual information, and the correlation ratio, might not perform well on ultrasound data. This poor performance is due to the low signal-to-noise ratio and the shadows, speckles, and other noise in ultrasound data.

Other problems also hinder the registration of ultrasound volumes. Because the ultrasound probe must be arbitrarily oriented during the acquisition of multiple volumes, such variation of rotation imposes a great challenge to traditional registration methods in feature matching. In addition, a structure's intensity might differ across multiple ultrasound volumes obtained under different scan angles, which also affects registration accuracy.

Recently, David Lowe successfully applied SIFT in 2D space for object recognition.1 Paul Scovanner and his colleagues proposed a 3D SIFT descriptor for video analysis and action recognition based on the similarity of the two tasks' feature representations.2 SIFT features are rotation-invariant and provide robust feature matching across a substantial range of added noise and changes in illumination (intensity). This motivated us to apply SIFT-based registration to ultrasound data.

On the basis of Scovanner and his colleagues' research, we modified a 3D SIFT descriptor to register and stitch the ultrasound volumes. We first preprocess the ultrasound data and detect 3D keypoints from the multiple volumes. We then construct a 3D feature descriptor for stitching.

Artifact Removal

Speckles and shadows in ultrasound images will likely hinder feature point detection and matching. Speckles refer to a random intensity distribution due to the scattering of ultrasound in an inhomogeneous medium. Shadows are caused mainly by the ultrasound signal's attenuation along the beam direction owing to a strong acoustic reflection.

To reduce speckles, you can apply a Gaussian filter (or other similar low-pass filters) to the original ultrasound images. SIFT-based approaches are resistant to noise similar to speckles because they apply a Gaussian kernel during keypoint detection.

A common technique for handling shadow-like artifacts is to detect the regions containing them and apply a mask onto these regions during registration. First, you scan the ultrasound image along the echo direction from the bottom to the transducer surface. Along the beam, you label as shadows all consecutive pixels with an intensity value lower than a predefined threshold. The search terminates once you find a value higher than the threshold. You determine the threshold value on the basis of prior experimental ultrasound scans: radiologists label the shadows' border regions, and the threshold value is the average of the intensities in the labeled regions. The subsequent registration and stitching steps won't process the detected shadow regions.
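To make these two steps concrete, here is a minimal sketch in Python with NumPy and SciPy. It assumes a 2D B-mode image stored as a float array whose rows run from the transducer surface (row 0) downward along the beam, and it takes the intensity threshold as an input; the actual filter parameters and threshold are not reported in the article.

import numpy as np
from scipy.ndimage import gaussian_filter

def reduce_speckle(image, sigma=1.5):
    """Suppress speckle with a Gaussian low-pass filter (sigma is an assumed value)."""
    return gaussian_filter(image, sigma=sigma)

def detect_shadow_mask(image, threshold):
    """Mark shadow pixels along each beam (column), scanning from the bottom of the
    image toward the transducer and stopping at the first value above the threshold."""
    rows, cols = image.shape
    mask = np.zeros(image.shape, dtype=bool)
    for c in range(cols):
        for r in range(rows - 1, -1, -1):      # bottom of the image toward the transducer
            if image[r, c] < threshold:
                mask[r, c] = True               # consecutive low-intensity pixels are shadow
            else:
                break                           # first bright echo ends the shadow region
    return mask

Registration would then simply skip pixels where the mask is true, and the same per-beam scan extends to 3D volumes by applying it along the beam direction of every slice.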

Keypoint Detection and the 3D SIFT Descriptor

We first extend David Lowe's research1 to a 3D version of difference-of-Gaussians (DoG) scale-space extrema computation. Specifically, an input volume I(x, y, z) is first convolved with a 3D Gaussian filter G(x, y, z, kσ) of different scales, kσ, to obtain the scale space S(x, y, z, kσ). That is,

S(x, y, z, kσ) = G(x, y, z, kσ) * I(x, y, z). (1)

Then, we calculate the DoG volumes as

D(x, y, z, kjσ) = S(x, y, z, kiσ) − S(x, y, z, kjσ), (2)

where ki and kj are Gaussian kernels in the neighboring scale space. Figure 2a shows the construction process. We find keypoint candidates using the 3D DoG.
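As a concrete illustration of Equations 1 and 2, the following Python sketch builds a small Gaussian scale space for a 3D volume and differences neighboring scales. The base sigma, the scale factor k, and the number of scales are assumed values chosen for illustration, not parameters reported by the authors.

import numpy as np
from scipy.ndimage import gaussian_filter

def build_dog_volumes(volume, base_sigma=1.6, k=2 ** (1 / 3), num_scales=5):
    """Return the Gaussian scale space S and the DoG volumes D for a 3D array.

    S[i] = G(., k**i * sigma) * I    (Equation 1)
    D[i] = S[i + 1] - S[i]           (Equation 2, differences of neighboring scales)
    """
    scale_space = [gaussian_filter(volume.astype(np.float32), sigma=base_sigma * k ** i)
                   for i in range(num_scales)]
    dog = [scale_space[i + 1] - scale_space[i] for i in range(num_scales - 1)]
    return scale_space, dog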

We compare each sample point to its 26 neighbors in the current volume, the 27 neighbors in the volume of the upper scale space, and the 27 neighbors in the volume of the lower scale space. The sample point becomes a keypoint candidate if its value is a maximum or a minimum among all these neighbors (see Figure 2b). We don't use all the candidates, because some might have poor contrast and others might be poorly localized along an edge. We must eliminate these candidates before proceeding.
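The neighbor test maps directly onto NumPy array slicing. The brute-force sketch below, which assumes three neighboring DoG volumes such as those produced by the previous snippet, flags a voxel as a candidate when its value is the maximum or minimum of the 3 x 3 x 3 blocks taken from the lower, current, and upper scales; an optimized detector would vectorize this loop.

import numpy as np

def find_extrema_candidates(dog_lower, dog_current, dog_upper):
    """Return (x, y, z) indices of voxels whose value is the maximum or minimum of the
    3 x 3 x 3 neighborhoods spanning the current and the two neighboring DoG scales."""
    candidates = []
    nx, ny, nz = dog_current.shape
    for x in range(1, nx - 1):
        for y in range(1, ny - 1):
            for z in range(1, nz - 1):
                value = dog_current[x, y, z]
                blocks = [d[x - 1:x + 2, y - 1:y + 2, z - 1:z + 2]
                          for d in (dog_lower, dog_current, dog_upper)]
                neighborhood = np.concatenate([b.ravel() for b in blocks])
                # the voxel itself is part of the neighborhood, so >= / <= suffices here
                if value >= neighborhood.max() or value <= neighborhood.min():
                    candidates.append((x, y, z))
    return candidates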

To measure edge responses in 3D space, we can use a 3 × 3 Hessian matrix, H, which describes the curvatures at the keypoint. A poorly localized candidate has a large principal curvature along the edge direction and a relatively smaller curvature along the two orthonormal directions. Given the ordered eigenvalues of H, |λ1| < |λ2| < |λ3|, with corresponding eigenvectors (e1, e2, e3), the eigenvectors define an orthogonal coordinate system aligned with the directions of minimal (e1) and maximal (e3) curvature. Then, we consider a keypoint to be valid under the constraint

    r T=