
Vol.11 No.5 J. of Comput. Sci. & Technol. September 1996

Displaying of Details in Subvoxel Accuracy

Cai Wenli, Chen Tianzhou and Shi Jiaoying

State Key Lab of CAD & CG, Zhejiang University, Hangzhou 310027
Email: [email protected]

Received July 4, 1995; revised March 20, 1996.

Abstract

With volume segmentation in voxel space, many details, i.e. fine and thin objects, are ignored. In order to display these details accurately, this paper develops a methodology for volume segmentation in subvoxel space. In subvoxel space, most of the "bridges" between adjacent layers are broken down. Based on the subvoxel space, an automatic segmentation algorithm that preserves details is discussed. After segmentation, the volume data in subvoxel space are reduced to the original voxel space. Thus, details with a width of only one or several voxels are extracted and displayed.

Keywords: Volume rendering, volume segmentation, voxel, subvoxel.

1 Introduction

Three-dimensional segmentation and volume rendering are two fundamental procedures in the display of 3D scalar fields, including CT and MR data sets. Currently, most 3D segmentation algorithms are voxel-based and are extended from 2D image segmentation operators. Interesting regions in a volume data set, such as the brain and the eyeballs, are extracted by edge detection and region growing. The common strategies for edge detection are divided into locating the maxima of the first derivative and detecting zero-crossings of the second derivative. Many papers supplying detailed mathematical investigations and consolidating approaches have appeared[1-3]. With the 3D extension of the Marr-Hildreth operator, the complete human brain was segmented and visualized from MRI by Hoehne[4,5] for the first time. Unfortunately, due to the limitations of data accuracy, these operators are only suited for detecting large objects. They fail to extract thin and fine objects which are only one or several voxels wide in voxel accuracy.

Fig.1 is one of the slices of a 3D MR image of a head data set. Many small, thin and fine objects can be observed in the ring between the outer skin and the skull. In this paper, these small and fine objects are called details. The outer skin and the skull are called the outer circle and the inner circle respectively. According to the similarity of neighboring slices, details that do not belong to noise are identified. If we directly use the traditional edge detection operators, the details form "bridges"[5] between the outer circle and the inner circle, as shown in Fig.2.

This research was supported by the National Natural Science Foundation of China.


Fig.1. One slice of a 3D MR image. Fig.2. Zero-crossing image containing "Bridges".

The existence of bridges becomes the main obstacle to automatic segmentation. Currently, researchers including Hoehne[5,6] have had to remove these bridges manually in order to segment the skull from the brain. As a result, segmentation becomes interactive and details are lost with the deletion of bridges. The purpose of this paper is to supply a way to visualize these details by automatic segmentation in subvoxel accuracy.

First, we generate the zero-crossing image in subvoxel accuracy by magnifying the Marr-Hildreth convolved image. In subvoxel volume space, most of the bridges in voxel space are broken down and become isolated details, or remain connected to only one of the circles, the inner one or the outer one. Then, based on the subvoxel data set, an automatic segmentation is designed. Details are preserved in the segmentation of head, skull, meninx and brain. After that, CSM[7] is applied to the rendering of the segmented data sets. In order to clearly display the details, including their position and size, the opacity and scattering intensity are increased to distinguish them from their neighborhood.

In this paper, all images are generated from a 3D MR head image of size 150 × 200 × 192. This is the first time that the blood vessels and the meninx, which are ignored by most researchers in this field, have been extracted and displayed from MR images.

2 Subvoxel Volume Space

It is very difficult to perform automatic segmentation in voxel volume space due to the "bridges". A natural way to investigate the details in the ring area is to magnify the voxels into subvoxel volume space. In subvoxel accuracy, we can finish the segmentation automatically since the bridges are broken down.

The gray value of each voxel is the sum of the contributions of its neighboring sampling points. In order to obtain zero-crossing images in subvoxel volume space, we have to follow the common way:
(1) reconstructing the data field,
(2) resampling the reconstructed field in subvoxel accuracy,
(3) detecting zero-crossing edges in subvoxel volume space.

The above procedure is very time consuming. Consider that if a voxel is magnified to 5 × 5 × 5 subvoxels, edge detection in subvoxel space will take much more time than in voxel space. So from the viewpoint of computing, this common way is impractical. The practical way is to magnify the zero-crossing image rather than the gray image, i.e. to magnify the image convolved with the Marr-Hildreth operator.

We assume that the gray value G(r) at point (i, j, k) follows a Gaussian distribution:

$$ G(r) = \frac{1}{\sqrt{(2\pi)^3}\,\sigma^3}\exp\!\left(-\frac{r^2}{2\sigma^2}\right) \tag{1} $$

where r is the radius centered at (i, j, k), and σ is the standard deviation.

Convolving the original data f(i, j, k) with G(r), we can reconstruct a continuous data field I(i, j, k).

$$ I(i,j,k) = G(r) * f(i,j,k) \tag{2} $$

In the object space in voxel accuracy, edge detection may be performed by convolving the data field with the Marr-Hildreth operator and evaluating the zero-crossings:

$$ M(i,j,k) = \nabla^2 I(i,j,k) = \nabla^2 G(r) * f(i,j,k) \tag{3} $$

$$ \nabla^2 G(r) = \frac{1}{\sqrt{(2\pi)^3}\,\sigma^5}\left(\frac{r^2}{\sigma^2} - 3\right)\exp\!\left(-\frac{r^2}{2\sigma^2}\right) \tag{4} $$

The zero-crossings of M(i, j, k) are the edges in voxel space, as shown in Fig.2. In this paper, the size of the operator is about 8.5σ, as proposed in Ref.[8], i.e. 99.7% of the excitatory region is included in the operator.
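The voxel-accuracy step above can be sketched in a few lines, assuming the volume is available as a NumPy array; scipy.ndimage.gaussian_laplace is used here as a stand-in for an explicit Marr-Hildreth kernel truncated at about 8.5σ, and the function and parameter names are illustrative rather than the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth_zero_crossings(volume, sigma=2.0):
    """Boolean mask of voxels on zero-crossings of the 3D LoG response M(i,j,k).

    A voxel is marked when the sign of M changes against a 6-connected
    neighbour; the voxel with the smaller |M| (closer to the zero level)
    is the one marked.
    """
    m = gaussian_laplace(volume.astype(np.float64), sigma=sigma)  # cf. Eqs.(3)-(4)

    zx = np.zeros(m.shape, dtype=bool)
    for axis in range(3):
        a = np.take(m, range(m.shape[axis] - 1), axis=axis)   # voxel
        b = np.take(m, range(1, m.shape[axis]), axis=axis)    # its successor
        change = (a * b) < 0
        first = np.abs(a) <= np.abs(b)
        pad_a = [(0, 1) if ax == axis else (0, 0) for ax in range(3)]
        pad_b = [(1, 0) if ax == axis else (0, 0) for ax in range(3)]
        zx |= np.pad(change & first, pad_a)
        zx |= np.pad(change & ~first, pad_b)
    return zx
```

Running this on the head volume would yield an image like Fig.2, bridges included; the following steps refine exactly such a mask.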

We extend the facet model in Ref.[1] to three dimensions. Voxel space is magnified into subvoxel volume space. A set of orthogonal polynomial bases in R × R × R is applied to fit the M(i, j, k) in Eq.(3). Since ∇²G(r) also has a Gaussian-like shape, fitting in the R × R × R volume is more accurate than linear interpolation.

In our work, we adopt a 3 × 3 × 3 volume to compute the coefficients of the polynomial. We assume that the polynomial takes the form:

$$ F(r,c,l) = \sum_{i=0}^{2}\sum_{j=0}^{2}\sum_{k=0}^{2} b_{ijk}\, r^i c^j l^k \tag{5} $$

where r, c, l are the local distances within voxel (i, j, k).

To compute the coefficients in Eq.(5), we extend the facet model for 2D images to volume fitting with tensor products. The set of orthogonal polynomial bases H_n is as follows:


Fig.3. Interpolating subvoxels in [-1, 1] × [-1, 1] × [-1, 1] (slice in subvoxel space).

H0 = 1,  H1 = r,  H2 = c,  H3 = l,
H4 = rc,  H5 = cl,  H6 = rl,
H7 = r² - 2/3,  H8 = c² - 2/3,  H9 = l² - 2/3,
H10 = H1·H8,  H11 = H1·H9,  H12 = H2·H7,  H13 = H2·H9,  H14 = H3·H7,  H15 = H3·H8,
H16 = H7·H8,  H17 = H7·H9,  H18 = H8·H9,
H19 = H1·H18,  H20 = H2·H17,  H21 = H3·H16,
H22 = H4·H9,  H23 = H5·H7,  H24 = H6·H8,
H25 = H1·H2·H3,  H26 = H7·H8·H9          (6)

The polynomial g consists of the 27 orthogonal bases in (6).

$$ g(i+r, j+c, k+l) = \sum_{n=0}^{26} a_n H_n(r, c, l) \tag{7} $$

where (r, c, l) ∈ R × R × R and a_n is the coefficient of H_n. The best fit of the volume is (â_0, â_1, ..., â_26), satisfying Eq.(8):

$$ E(\hat{a}_0, \hat{a}_1, \ldots, \hat{a}_{26}) = \sum_{(r,c,l)} \bigl[F(i+r, j+c, k+l) - g(i+r, j+c, k+l)\bigr]^2 = \min \tag{8} $$

Due to the orthogonality of H_n(r, c, l), computing (â_0, â_1, ..., â_26) is very convenient. In the volume space R × R × R = {-1, 0, 1} × {-1, 0, 1} × {-1, 0, 1}, the 27 templates of size 3 × 3 × 3 corresponding to H_0-H_26 are directly convolved with the 3 × 3 × 3 zero-crossing image centered at (i, j, k). These convolutions generate (â_0, â_1, ..., â_26).

After we evaluate the coefficients (â_0, â_1, ..., â_26), M(i, j, k) in the continuous neighborhood volume [-1, 1] × [-1, 1] × [-1, 1] is interpolated according to Eq.(7), as shown in Fig.3.
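A minimal sketch of the fitting and interpolation, assuming M is available as a NumPy array. The 27 tensor products of the 1D basis {1, x, x² - 2/3} on {-1, 0, 1} are generated in a fixed order (not necessarily the enumeration of (6), which does not affect the fit), and the normalisation by the sum of H_n² follows from the least-squares property of orthogonal bases; the helper names fit_coefficients and evaluate are hypothetical.

```python
import numpy as np
from itertools import product

def _p(order, x):
    """1D orthogonal basis on {-1, 0, 1}: P0 = 1, P1 = x, P2 = x^2 - 2/3."""
    x = np.asarray(x, dtype=np.float64)
    return {0: np.ones_like(x), 1: x, 2: x * x - 2.0 / 3.0}[order]

_ORDERS = list(product(range(3), repeat=3))        # 27 (i, j, k) exponent triples
_GRID = np.array([-1.0, 0.0, 1.0])

def orthogonal_templates():
    """The 27 tensor-product templates as a (27, 3, 3, 3) array."""
    r, c, l = np.meshgrid(_GRID, _GRID, _GRID, indexing="ij")
    return np.stack([_p(i, r) * _p(j, c) * _p(k, l) for i, j, k in _ORDERS])

def fit_coefficients(neighbourhood):
    """Least-squares coefficients of Eq.(7) for one 3x3x3 neighbourhood of M.

    Orthogonality reduces the fit to a normalised correlation per template:
        a_n = sum(H_n * M) / sum(H_n * H_n)
    """
    h = orthogonal_templates()
    num = np.tensordot(h, neighbourhood, axes=([1, 2, 3], [0, 1, 2]))
    den = np.sum(h * h, axis=(1, 2, 3))
    return num / den

def evaluate(coeffs, r, c, l):
    """Interpolate M at a subvoxel offset (r, c, l) in [-1, 1]^3, cf. Eq.(7)."""
    basis = np.array([float(_p(i, r) * _p(j, c) * _p(k, l)) for i, j, k in _ORDERS])
    return float(np.dot(coeffs, basis))
```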

Fig.4 is the zero-crossing image of the Marr-Hildreth operator in voxel accuracy, magnified 4 times. Fig.5 is the zero-crossing image obtained by interpolating g in Eq.(7) with coefficients (â_0, â_1, ..., â_26) at 1/4 voxel accuracy. It is clear that the inner circle and the outer circle are separated with no "bridges" between them, and in the ring area the details are isolated. Fig.5 is reduced to its original voxel space, as shown in Fig.6. Comparing Fig.6 with Fig.2, we observe that the zero-crossing boundary is thinner and finer than in voxel accuracy. After we reduce the magnified image to its original size, the result indicates that edge positions are more accurate in subvoxel space than in voxel space. Based on these accurate edge positions, we designed the automatic segmentation in subvoxel space.

Fig.4. Zero-crossing detected after magnifying 4 scales. Fig.5. Zero-crossing in 1/4 subvoxel accuracy. Fig.6. Zero-crossing in subvoxel space.

To compute the zero-crossings in subvoxel volume space, we carry out the following steps (a sketch of step (4) is given after the list):
(1) Convolve the image f of size R × C × L with a 3D Marr-Hildreth operator of size h × l × k, and obtain the image M of size R × C × L.
(2) Compute a zero-crossing image Mzxings of size R × C × L by locating the zero-crossings in each slice of M.
(3) For each zero-crossing detected in Mzxings and its 3 × 3 × 3 neighborhood, compute the interpolation function g by fitting the polynomial set to the responses of the templates at the corresponding position in M.
(4) Compute a new zero-crossing image Gzxings of size nR × nC × nL by locating the zero-crossings in 1/n voxel volume space.
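The following sketch illustrates step (4), reusing the hypothetical fit_coefficients and evaluate helpers from the previous fragment; the choice of subvoxel sample positions and the sign test are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def subvoxel_zero_crossings(m, mzxings, n=4):
    """Magnify the zero-crossing image by n using the local polynomial fit g.

    `m` is the Marr-Hildreth response M of size R x C x L, `mzxings` the
    voxel-level zero-crossing mask, and the result Gzxings has size nR x nC x nL.
    Only 3x3x3 neighbourhoods of detected zero-crossings are refined (step (3)).
    """
    out = np.zeros(tuple(n * s for s in m.shape), dtype=bool)
    offsets = (np.arange(n) + 0.5) / n * 2.0 - 1.0      # subvoxel centres in [-1, 1]

    for i, j, k in zip(*np.nonzero(mzxings)):
        if min(i, j, k) == 0 or i == m.shape[0] - 1 or j == m.shape[1] - 1 or k == m.shape[2] - 1:
            continue                                    # skip the volume border
        coeffs = fit_coefficients(m[i-1:i+2, j-1:j+2, k-1:k+2])
        g = np.array([[[evaluate(coeffs, r, c, l) for l in offsets]
                       for c in offsets] for r in offsets])
        # A subvoxel lies on the refined edge if g changes sign in its 6-neighbourhood.
        sign = np.sign(g)
        edge = np.zeros_like(g, dtype=bool)
        for axis in range(3):
            d = np.diff(sign, axis=axis) != 0
            edge |= np.pad(d, [(0, 1) if a == axis else (0, 0) for a in range(3)])
            edge |= np.pad(d, [(1, 0) if a == axis else (0, 0) for a in range(3)])
        out[n*i:n*(i+1), n*j:n*(j+1), n*k:n*(k+1)] = edge
    return out
```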

3 Automatic Segmentation in Subvoxel Space

In subvoxel volume space, most of the "bridges" are broken down into Case 1 to Case 3, as shown in Fig.7. Case 4 rarely exists; but in Case 4 the detail is very thin and distributed very sparsely in subvoxel space, so the STOP signal in its neighborhood will be prevented from growing into the inner circle, as we will discuss later in this section. As a result, details in Case 4 remain as in Case 1. We designed the following region growing method to remove the outer circle.

Fig.7. Detail distribution in subvoxel space (inner circle, ring area, outer circle).


(1) Closing the Gaps in Outer Circle

Usually, researchers use a dilation followed by an erosion to close the outer circle[6]. In our algorithm, we use a scanline algorithm instead. On each scanline there are two hit points on the outer circle, called the begin hit point (BHP) and the end hit point (EHP). BHPs and EHPs are classified into confirmed boundary points (CBPs) and suspended boundary points (SBPs). The outer circle without gaps is formed by connecting all neighboring CBPs. A gap appears as two neighboring CBPs with more than one scanline between them.

The current CBP is determined by its displacement and slope relative to the previous CBP. The thresholds of displacement and slope at the current BHP or EHP are evaluated from the width of the outer circle and the gap to the previous CBP at the current scanline: a large width indicates a small displacement threshold, and a large gap corresponds to a small slope threshold. After two scanning passes, in the X and Y directions, all gaps are closed. This scanning algorithm is faster than the commonly used morphological operations.
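A much-simplified sketch of the gap-closing idea for a single slice and a single scan direction (the paper scans both X and Y and adapts the thresholds to the circle width and gap size, which is reduced here to fixed values); names and thresholds are illustrative.

```python
import numpy as np

def close_outer_circle_gaps(boundary, max_disp=3, max_row_gap=5):
    """Scanline closing of gaps in the outer circle on one 2D slice.

    For every scanline (row) the first and last hits on `boundary` are the BHP
    and EHP.  A hit is accepted as a CBP when its column displacement from the
    previous CBP stays below `max_disp`; otherwise it is an SBP and is skipped.
    Consecutive CBPs separated by up to `max_row_gap` empty scanlines are then
    connected by a straight segment.
    """
    closed = boundary.copy()
    for side in (0, 1):                           # 0: begin hit points, 1: end hit points
        prev = None                               # previous CBP as (row, col)
        for row in range(boundary.shape[0]):
            cols = np.nonzero(boundary[row])[0]
            if cols.size == 0:
                continue
            col = int(cols[0] if side == 0 else cols[-1])
            if prev is not None and abs(col - prev[1]) > max_disp:
                continue                          # suspended boundary point (SBP)
            if prev is not None and 1 < row - prev[0] <= max_row_gap:
                for r in range(prev[0] + 1, row): # close the gap between the two CBPs
                    t = (r - prev[0]) / (row - prev[0])
                    closed[r, int(round(prev[1] + t * (col - prev[1])))] = True
            prev = (row, col)
    return closed
```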

(2) Removing the Outer Circle

A region growing method is used to remove the outer circle while maintaining the details attached to it. When the outer circle is removed from outside to inside, the current point does not stop until it hits the background; then this point is marked as a stop point. A stop point is a spreading point, i.e. the STOP signal at a stop point is spread to its neighborhood along the removal frontal surface with increasing speed. Therefore, the circle is removed but the details on it are preserved. There are two cases at a stop point, as discussed below.

Case I (Fig.8(a)). No detail is attached to the outer circle. The removal frontal surface stops when the current point is a stop point; no STOP signal is spread to its neighbors.
Case II (Fig.8(b)). Details are attached to the circle. In this case, the STOP signal should be spread to the neighbors in order to extract the details from the outer circle. The STOP signal is spread on both sides. Consequently, the detail is isolated from the circle to which it is attached.

Fig.8. Removing the outer circle at a stop point: (a) Case I; (b) Case II.
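A simplified 2D sketch of the removal with stop-point spreading, under the assumption that an outside mask marking the exterior background is already available (e.g. from a flood fill started at the image border); the square STOP spread with a radius that grows per front step is only an approximation of the "increasing speed" described above.

```python
import numpy as np
from collections import deque

def remove_outer_circle(mask, outside):
    """Peel away the outer circle while keeping the details attached to it.

    `mask` marks circle/detail pixels, `outside` marks exterior background.
    The removal front eats the circle from outside to inside; a front pixel
    that sees interior background becomes a stop point, and the STOP flag is
    spread around it so that attached detail pixels are left in place.
    """
    h, w = mask.shape
    neigh = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    inb = lambda r, c: 0 <= r < h and 0 <= c < w

    remaining = mask.copy()
    stop = np.zeros((h, w), dtype=bool)
    front = deque((r, c) for r in range(h) for c in range(w) if mask[r, c]
                  and any(inb(r + dr, c + dc) and outside[r + dr, c + dc]
                          for dr, dc in neigh))
    spread = 1
    while front:
        nxt = deque()
        for r, c in front:
            if stop[r, c] or not remaining[r, c]:
                continue
            if any(inb(r + dr, c + dc) and not mask[r + dr, c + dc]
                   and not outside[r + dr, c + dc] for dr, dc in neigh):
                # Stop point: spread the STOP signal around it.
                stop[max(r - spread, 0):r + spread + 1,
                     max(c - spread, 0):c + spread + 1] = True
                continue
            remaining[r, c] = False               # remove this circle pixel
            for dr, dc in neigh:
                rr, cc = r + dr, c + dc
                if inb(rr, cc) and remaining[rr, cc] and not stop[rr, cc]:
                    nxt.append((rr, cc))
        front = nxt
        spread += 1                               # STOP spreads with increasing speed
    return remaining
```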

Fig.9 shows the result of our segmentation algorithm. After segmentation in subvoxel space is finished, the zero-crossing image is reduced to its original size and its corresponding gray image is extracted. In order to enhance the details, they are encoded and indicated as the small bright dots in Fig.9. These small dots are the blood vessels in the head skin, which are regarded as some of the "bridges" between skin and skull and are always removed by interaction in current research. In subvoxel space, these details are preserved by the automatic volume segmentation algorithm. The inner circle can be removed with the same procedure as well.


Fig.9. Slices of the gray image with the outer circle removed.

4 Rendering

The volume rendering model used in this paper is CSM[7]. Because details are thin and fine objects, it is very difficult to display them with highlighting. In CSM, it is very convenient to control the rendering result. We take two approaches to render the details clearly.

(1) Increasing the Opacity of Details
In order to stress the effect of details in the resultant images, we increase the opacity of the details, as maximum value projection does. Being less transparent prevents details from blurring into their neighborhood.

$$ t = \exp(-\sigma_t \cdot \Delta l), \qquad \sigma_t = \sigma_{t,p,g}\,(p + g_p) \tag{9} $$

By increasing σ_{t,p,g}, the transparency is decreased; thus, details become opaque.

(2) Increasing the Surface Scattering Intensity
In order to distinguish details from their neighborhood, we increase the surface scattering intensity by increasing its weight function. With a larger surface scattering intensity, details appear highlighted on the surface.
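CSM itself is defined in Ref.[7] and is not reproduced here; the fragment below only illustrates, on a generic front-to-back compositing loop, how boosting the extinction coefficient of Eq.(9) and the surface-scattering weight of detail samples makes them stand out. All names and scale factors are assumptions.

```python
import numpy as np

def composite_ray(samples, detail_mask, sigma_t, w_scatter,
                  detail_opacity_scale=3.0, detail_scatter_scale=2.0, dl=1.0):
    """Front-to-back compositing of one ray with boosted detail samples.

    `samples` are shaded intensities along the ray, `detail_mask` flags detail
    samples, `sigma_t` and `w_scatter` are per-sample extinction coefficients
    and surface-scattering weights.
    """
    color, transparency = 0.0, 1.0
    for s, is_detail, st, ws in zip(samples, detail_mask, sigma_t, w_scatter):
        if is_detail:
            st *= detail_opacity_scale   # larger sigma_t -> smaller t in Eq.(9) -> more opaque
            ws *= detail_scatter_scale   # stronger surface scattering -> highlighted detail
        t = np.exp(-st * dl)             # transparency of this ray segment
        color += transparency * (1.0 - t) * ws * s
        transparency *= t
    return color
```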

5 Experiment and Discussion

As mentioned in Section 1, the data set is a 3D head MR image of size 150 × 200 × 192. Fig.10 and Fig.11 (side I of the color page) are the rendering results obtained by the above procedures. In Fig.10, image (a) is the image before segmentation. With the CSM model, the blood vessels are enhanced to stand out from their background. Image (b) demonstrates, by segmentation in subvoxel space, that these blood vessels do exist in the skin. This accuracy cannot be reached with the traditional method in voxel volume space.

Fig.11 clearly displays the four layers of the head: skin, skull, meninx and brain, as indicated by medical references. Currently published papers have failed to segment the meninx between the skull and the brain from MR image data sets. Most researchers either ignored the meninx or merged it with the skull, because the meninx is very thin, only one to several voxels wide, and is closely connected with the skull and the brain. But in subvoxel space, the breakdown of the "bridges" between layers makes it possible to segment it accurately.

The total time from the source data set to the resultant volume rendering image consists of the five procedures listed below.
Procedure 1. Solving the zero-crossings with the Marr-Hildreth operator in voxel space.
Procedure 2. Magnifying and refining the zero-crossings with the orthogonal polynomial in subvoxel space.
Procedure 3. Extracting the objects by the growing method in subvoxel space.
Procedure 4. Reducing the objects from subvoxel space to voxel space.
Procedure 5. Displaying the result by volume rendering.

Because the objects in a data set differ, it is difficult to give an exact time estimation. Table 1 lists the time needed to generate image (b) in Fig.10. The workstation used is an IRIS 4D/320 VGX, and the size of the resultant image is 512 × 512.

Table 1. The Approximate Time to Segment the Head MR Data Set and Generate Image (b) in Fig.10

Procedure 1   Procedure 2   Procedure 3   Procedure 4   Procedure 5   Total
0.5 min.      34.6 min.     57.7 min.     36.8 min.     25 min.       154.6 min.

6 Conclusion

The work in this paper indicates the following points.
(1) In subvoxel volume space, zero-crossings are located more accurately than in voxel volume space. As a result, details are separated from the circles.
(2) Automatic segmentation is based on the breaking down of the "bridges".
(3) In the MR image data set, the blood vessels and the meninx are segmented out and displayed accurately for the first time.
(4) In subvoxel volume space, the computing time is increased due to the large volume space, but the interpolation supports parallel processing quite well.

References

[1] Haralick R M. Digital step edges from zero crossings of second directional derivatives. IEEE Trans. PAMI, 1984, 6(1).

[2] Canny J F. A computational approach to edge detection. IEEE Trans. PAMI, 1986, 8(6).

[3] Clark J J. Authenticating edges produced by zero-crossing algorithms. IEEE Trans. PAMI, 1989, 11(1).

[4] Bomans M, Hoehne K H et al. 3D-segmentation of MR-images of the head for 3D-display. IEEE Trans. Med. Imaging, 1990, MI-9(2).

[5] Hoehne K H, Riemer M et al. Viewing operations for 3D-tomographic gray level data. In Proc. CAR'87, Lemke H et al. (eds.), Berlin, 1987.

[6] Hoehne K H, Hanson W A. Interactive 3D segmentation of MRI and CT volumes using morphological operations. J. Comput. Assist. Tomogr., 1992, 16(2).

[7] Cai Wenli, Shi Jiaoying. Composed scattering model based on equation of transfer. Journal of Computer Science and Technology, 1996, 11(5): 433-442.

[8] Marr D, Hildreth E. Theory of edge detection. Proc. Roy. Soc. London, Vol. B207, 1980.


Cai Wenli is an Associate Researcher in the State Key Lab of CAD&CG at Zhejiang University. His research interests are volume rendering, medical image visualization, vector field visualization, and scientific visualization tools. He received his B.S. in 1988 from Zhejiang University, and his M.S. in 1991 from Shanghai University of Science and Technology, both in computer science.

Chen Tianzhou is a Ph.D. candidate in the Department of Computer Science, Zhejiang University. He is interested in image processing and analysis, and medical imaging. He received his B.S. in 1992 from the Department of Computer Science, Zhejiang University.

Shi Jiaoying is a Professor in the Computer Science Department, Zhejiang University. He is also the Director of the State Key Lab of CAD&CG. He is the Vice-Chairman of the China Image and Graphics Association and a member of the CAD&Graphics Society of the China Computer Federation. He is on the editorial boards of the International Journal of Computer and Graphics and the Chinese Journal of CAD and Graphics. His current research interests lie in scientific visualization, distributed computer graphics and virtual environments. He graduated from the Physics Department of Leningrad University, former USSR, in 1960, with a speciality in Nuclear Physics.