
Vis Comput (2011) 27: 677–686
DOI 10.1007/s00371-011-0570-2

ORIGINAL ARTICLE

Shape-enhanced maximum intensity projection

Zhiguang Zhou · Yubo Tao · Hai Lin · Feng Dong · Gordon Clapworthy

Published online: 20 April 2011
© Springer-Verlag 2011

Abstract Maximum intensity projection (MIP) displays the voxel with the maximum intensity along the viewing ray, and this offers simplicity of use, as it does not require a complex transfer function, the specification of which is a highly challenging and time-consuming process in direct volume rendering (DVR). However, MIP also has its inherent limitation, the loss of spatial context and shape information. This paper proposes a novel technique, shape-enhanced maximum intensity projection (SEMIP), to resolve this limitation. Inspired by lighting in DVR to emphasize surface structures, SEMIP searches for a valid gradient for the maximum intensity of each viewing ray, and applies gradient-based shading to improve shape and depth perception of structures. As SEMIP may produce pixel values that exceed the maximum intensity of the display device, a tone reduction technique is introduced to compress the intensity range of the rendered image while preserving the original local contrast. In addition, depth-based color cues are employed to enhance the visual perception of internal structures, and a focus and context interaction is used to highlight structures of interest. We demonstrate the effectiveness of the proposed SEMIP with several volume data sets, especially from the medical field.

Z.-G. Zhou · Y.-B. Tao (✉) · H. Lin
State Key Lab of CAD&CG, Zhejiang University, Hangzhou, China
e-mail: [email protected]

Z.-G. Zhou
e-mail: [email protected]

H. Lin (✉)
e-mail: [email protected]

Y.-B. Tao · F. Dong · G. Clapworthy
University of Bedfordshire, Luton, UK

F. Dong
e-mail: [email protected]

G. Clapworthy
e-mail: [email protected]

Keywords Maximum intensity projection · Phong shading · Tone reduction · Depth-based color · Shape perception

1 Introduction

Direct volume rendering (DVR) is a well-established visualization method for the interactive exploration of volumetric data in a variety of fields, e.g. medicine and geophysics. The usability of DVR largely depends on the transfer function, which defines a mapping from data properties to optical properties. Although a large number of classification techniques have been proposed for the automatic/semi-automatic specification of transfer functions, it is still a tedious and highly challenging task to highlight features of interest and dampen others as the context, even for skilled researchers [12]. Maximum intensity projection (MIP) [26], a variant of direct volume rendering, uses the maximum intensity along the viewing ray to determine the color of the corresponding pixel. Compared to DVR, MIP can generate a feasible visualization result without specifying a well-designed transfer function. Thus, MIP is widely used in the medical area due to its simplicity of use, such as for the extraction of vascular structures in angiography imaging [22]. However, MIP results do not provide sufficient spatial context information, and this makes the depth and shape perception of features difficult.
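The projection itself can be sketched in a few lines of NumPy; the array shapes and the axis chosen as the viewing direction are illustrative assumptions, not from the paper:

```python
import numpy as np

def mip(volume, axis=0):
    """Maximum intensity projection: keep the largest sample
    along each viewing ray (here, rays parallel to `axis`)."""
    return volume.max(axis=axis)

# Tiny example: two structures with different intensities on one ray.
vol = np.zeros((4, 4, 4))
vol[1, 2, 2] = 0.9   # bright voxel (e.g. bone)
vol[3, 2, 2] = 0.5   # dimmer voxel behind it on the same ray
image = mip(vol, axis=0)
# image[2, 2] == 0.9 -- only the maximum survives; depth order is lost
```

The last comment illustrates the limitation the paper addresses: the projection discards where along the ray the maximum occurred.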

This limitation has gained much attention in the visualization community, and several novel techniques have


been proposed to improve the spatial comprehension of features in MIP. Heidrich et al. [11] introduced a depth-based modulation of the data value to enhance depth perception, i.e., a maximum value far away from the viewpoint is displayed much darker than the same value near the viewpoint. Depth-enhanced maximum intensity projection (DEMIP) [6] uses occlusion revealing and depth-based coloring to disambiguate the ordering of internal structures. These depth-based intensity modulations provide the spatial context, but also make structures far away from the viewpoint hard to distinguish due to the lack of contrast. Local maximum intensity projection (LMIP) [23] displays the first local maximum above a user-defined threshold, and it provides spatial continuity for a better understanding of complex structures. Maximum intensity difference accumulation (MIDA) [3] adaptively adjusts the accumulated opacity and intensity to integrate the advantages of DVR and MIP. All these approaches enhance the spatial comprehension of internal structures, especially depth perception. However, local shapes and fine details of features are still barely depicted.

In DVR, gradient-based shading is commonly used to reveal the appearance of surface structures. As the maximum intensity of each viewing ray is located in more or less homogeneous regions, the gradient is easily ruined by the noise caused by relatively weak differences in data values. The gradient at the position of the maximum intensity is therefore not valid for shading in MIP and would result in disturbing illumination effects. To avoid invalid shading effects, MIDA [3] applies a blending between the unshaded color and the shaded color based on the gradient magnitude, but this renders structures much darker than normal lighting and produces limited lighting effects. It is very challenging to combine MIP with lighting so as to better enhance shape and detail perception of features.

This paper proposes a novel shape-enhanced maximum intensity projection (SEMIP) technique, which employs gradient-based shading to provide shape cues and detail

information about internal structures in MIP. The core of SEMIP is to find a valid gradient for the maximum intensity of each viewing ray. The valid gradient should be able to depict the shape and detail of the structural surface, and also to represent the gradient of the feature containing the maximum intensity. This can be achieved by searching for the boundary of the feature along the viewing ray before the position of the maximum intensity. The gradient found at the boundary is regarded as a valid gradient for the maximum intensity, and the Phong local illumination model [17] is then applied to enhance shape perception.

Since MIP already renders each pixel using the maximum intensity along the viewing ray, and these maximum intensities may reach the maximum intensity of the display device, illumination effects alone would not enhance the local contrast of features in bright areas, such as bone and vascular structures. This problem has been well investigated as tone reduction [5] in the computer graphics community. Tone reduction is a technique to map high dynamic range (HDR) images to low dynamic range (LDR) images, such as the range 0–255, and it has been used in volume visualization for HDR volumetric data [28]. Thus, we resort to tone reduction to solve the lighting problem in SEMIP and preserve the local contrast of structures.

In addition, depth-based color cues are added to further improve depth perception, and a focus and context interaction is provided to selectively highlight features of interest with lighting, while preserving other features with the original MIP colors as the context. Figure 1 shows the rendered results of different visualization techniques. It is clearly seen that SEMIP provides abundant depth, shape, and detail information compared to MIP and DEMIP, such as the boundary of the shoulder, the shape of the colon, and the detail on the vertebra.

The paper is structured as follows. The related work is discussed in Sect. 2. Section 3 details SEMIP, including the valid gradient search, MIP lighting, and tone reduction. In

Fig. 1 The rendered results of MIP, DEMIP, SEMIP, and DVR for the ncat_phantom volumetric data (256 × 256 × 256). (a) The traditional MIP result. (b) The DEMIP result. (c) The SEMIP result, which provides more shape cues and detail information than the results of MIP and DEMIP. (d) The DVR result


Sect. 4, we describe depth-based color cues and a focus and context interaction. We show and discuss several results of SEMIP in Sect. 5. Finally, we give concluding remarks in Sect. 6.

2 Related work

The transfer function plays a crucial role in DVR, and it receives continuous attention from researchers in the visualization community. Various derived properties have been proposed, such as the second directional derivative along the gradient direction [14], the curvature [13], and the shape size [4]. These properties introduce additional new dimensions to better separate features of interest, but the specification of an appropriate transfer function is still very complex and time-consuming.

A realistic illumination model can effectively enhance the perception of features in a natural way. In DVR, the most common physically-based optical model is absorption plus emission [17], i.e., each voxel emits light and absorbs incoming light. The Phong local illumination model is one of the most widely used illumination models, and shadow effects were incorporated into DVR by Behrens and Ratering [1]. Kniss et al. [15] presented a volume lighting model and showed various effects, such as volumetric shadows and forward scattering. A thorough overview of advanced illumination techniques for GPU volume ray-casting can be found in [21]. In this paper, we apply gradient-based lighting in MIP to provide the spatial context expressively.

Illustrative techniques are usually used to improve the perception of depth and structure in DVR. Ebert and Rheingans [9] introduced various feature and orientation enhancements, such as boundary enhancement, silhouette enhancement, tone shading, distance color blending, and distance color cues. Nagy et al. [20] added line rendering to DVR, as feature lines are one of the most effective visual abstractions for highlighting features of interest in traditional illustrations. Bruckner and Gröller [2] proposed volumetric halos to enhance depth perception, while Svakhine et al. [25] discussed illustration-inspired effective outlining techniques and selective depth enhancement.

MIP was first proposed by Wallis et al. [26] for the exploration of Positron Emission Tomography (PET) data, and a low-complexity algorithm based on the octree has been proposed for interactive MIP rendering of large volumes [18]. As MIP highlights high-intensity data values, it can produce a feasible visualization result without the specification of complex transfer functions. Due to the missing spatial context and shape information in MIP, many efforts have been made to resolve this limitation. A common approach is for the user to view the volume from different viewpoints by rotation to mentally reconstruct the spatial context [19]. Heidrich et al. [11] proposed a depth-based modulation of the data value along the viewing ray to improve depth perception. Sato et al. [23] suggested displaying the first local maximum rather than the global maximum to provide a clearer spatial context. MIP and DVR were first fused in two-level volume rendering [10] for a single volumetric data set based on pre-classification, and Straka et al. [24] applied a combination of rendering techniques for vascular structures. MIDA, presented by Bruckner and Gröller [3], can be used for a smooth transition between DVR and MIP. Like MIP, MIDA does not require complex transfer functions, but it has the spatial and occlusion context of DVR. Although MIDA combines gradient-based shading to enhance shape perception, the gradient at the position of the maximum value is inaccurate for shading. Our proposed approach searches for a relatively valid gradient to represent the structural surface at the maximum intensity, and integrates the gradient-based shading of DVR into MIP.

Ropinski et al. [22] emphasized depth cues by changing the color of the maximum intensity based on its position (pseudo chromadepth). This color cue allows the user to better judge the spatial context. DEMIP, proposed recently by Díaz and Vázquez [6], searches for similar material before the maximum intensity, and utilizes the depth of the similar material to adjust the color in order to enhance depth perception. Like pseudo chromadepth, DEMIP also uses the color sphere to further distinguish the spatial order of internal structures. Our gradient search method is inspired by DEMIP's similar-material search. In addition, our gradient-based lighting is orthogonal to the depth-based color modulation; both techniques can be used together to enhance shape and depth perception in MIP.

3 Shape-enhanced maximum intensity projection

MIP depicts internal structures inside the volume without complex interaction, and it is especially useful in the field of medicine. The proposed shape-enhanced maximum intensity projection (SEMIP) further provides the spatial context and shape information for users, which enhances shape and depth perception of internal structures for detailed exploration. Figure 2 illustrates the pipeline of the proposed SEMIP. The original MIP is first performed to obtain the value and position of the maximum intensity of each viewing ray. Ray casting is then used to search for a valid gradient before the position of the maximum intensity along the viewing ray, and this gradient should be located at the boundary of the feature containing the maximum intensity. The found valid gradient is referred to as the representative gradient in the rest of this paper. Based on the representative gradients, the Phong local illumination model is applied to produce shading effects for structural surfaces, and then a tone reduction technique is introduced to compress the intensity range


Fig. 2 The SEMIP pipeline. A two-pass ray-casting operation is used to search for the representative gradient of the maximum intensity. Phong shading is then performed to enhance shape perception of internal structures. Moreover, tone reduction is applied to optimize the rendered result. At last, two visual enhancement techniques can be used to improve the visual perception of structures

Fig. 3 The comparison of gradient-based shading based on different found gradients. (a) The MIP result. (b) The rendered result based on the gradient at the maximum intensity. (c) The rendered result based on the gradient of the nearest boundary from the maximum intensity. (d) The rendered result based on the gradient of the farthest boundary from the maximum intensity. The same global threshold is used for the valid gradient search in (c) and (d)

while preserving the local contrast of the rendered image. In addition, depth-based color cues are used to further improve depth perception of internal structures, and the user can use a focus and context interaction to emphasize structures of interest using the SEMIP colors. The details of these steps are discussed in the following subsections.

3.1 Representative gradient search

DVR employs gradient-based shading to improve the visual perception of features. In order to apply such shading in MIP, it is necessary to obtain a valid gradient for the maximum intensity. One simple choice is the gradient at the position of the maximum intensity. However, such a gradient is inaccurate and produces poor lighting effects for visual perception, as shown in Fig. 3(b). This is because the maximum intensity may be located in a homogeneous area, where the gradient tends to zero or deviates from the correct orientation of the structural surface due to relatively weak differences in data values.

As the valid gradient should depict the orientation of the structural surface, it must be located at the boundary of a feature. The valid gradient is selected from the gradients before the position of the maximum intensity along the ray, and it should be near the maximum intensity. Thus, the valid gradient can be obtained by casting a ray back from the position of the maximum intensity toward the viewing plane and taking the first valid gradient at a boundary. Although this gradient is more accurate than the one at the position of the maximum intensity, the lighting effects may not be continuous in the rendered image, as shown in Fig. 3(c). This can be explained by the fact that the positions of the maximum intensity among neighboring rays may present little coherence due to the independent ray casting, and they may lie in different structures of the same feature.

Fig. 4 The representative gradient search along different viewing rays. Yellow areas represent the internal structures. The black points indicate the positions of the maximum intensity of each viewing ray, while the red ones are the positions on the structures' boundaries found in the second ray casting. Green arrows are the gradient directions at the maximum intensities, and blue arrows represent the gradients at the boundaries, which are the representative gradients

Figure 4 illustrates the relationship between the maximum intensity and its valid gradients. If the structure of the maximum intensity is not occluded by other structures of the same feature (such as the bone) from the viewing direction, there is only one valid gradient as discussed above (viewing rays b and c in Fig. 4). Otherwise, there would be several valid gradients available among all gradients before the position of the maximum intensity, and they are all located at the boundaries of different structures of the same feature (viewing ray a in Fig. 4). We take the farthest valid gradient from the position of the maximum intensity, not the nearest one, as the representative gradient of the maximum intensity. In this way, the representative gradients are all located at the boundaries of the structures nearest the viewer, and these continuous gradients present coherent lighting effects, as shown in Fig. 3(d).

The method to find the nearest structure from the viewing plane is inspired by the similar-material search in DEMIP [6]. In order to obtain the representative gradient, a ray is cast from the viewing plane and terminated at the nearest structure of the same feature to which the maximum intensity belongs. The intensity value of the nearest structure is similar to the maximum intensity, and we use a global threshold to check whether the value is in the intensity range of the feature. The user can adjust the similarity of the two structures by changing the global threshold. Due to the difference between the value at the terminating position and the maximum intensity, the terminating position is usually at or near the boundary of the structure, and the gradient at that position depicts the orientation of the structural surface well. The pseudocode for the representative gradient search is as follows.

REPRESENTATIVE GRADIENT SEARCH

1  maxValue ← 0    // the maximum value
2  p ← 0           // the maximum intensity position
3  q ← 0           // the boundary position

   // the first ray casting (MIP)
4  for i ← 0 to steps
5      if Sample[i] > maxValue
6          then maxValue ← Sample[i]
7               p ← i

   // the second ray casting (search for the nearest structure)
8  for j ← 0 to p
       // threshold determines the similarity
9      if Sample[j] > threshold ∗ maxValue
           // record the boundary position and terminate
10         then q ← j; break

3.2 MIP lighting

With the representative gradients of the maximum intensity, the Phong local illumination model can be applied to enhance shape perception of structural surfaces. Formally, the color of a shaded maximum intensity can be expressed as

C = (ka + kd (Nvalid · L)) · CMIP + ks (Nvalid · H)^n,   (1)

where ka, kd, and ks are the ambient, diffuse, and specular lighting coefficients respectively, n is the shininess exponent, CMIP is the color of the maximum intensity, Nvalid is the representative gradient of the maximum intensity, L is the light direction, and H is the normalized half-way direction. The shaded maximum intensity not only preserves the original maximum intensity value for the detection of abnormalities, but also makes the shape and boundary of structures more easily comprehensible in the rendered image. For example, Fig. 3(d) clearly presents the convex shape and smoothness of the skull.
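Equation (1) can be evaluated per pixel as in the following sketch; the coefficient values, the vector layout, and the function name are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np

def semip_shade(c_mip, n_valid, light_dir, view_dir,
                ka=0.3, kd=0.6, ks=0.2, shininess=32):
    """Eq. (1): Phong shading of the maximum-intensity color c_mip,
    using the representative gradient n_valid as the surface normal.
    Coefficient defaults here are illustrative, not from the paper."""
    n = n_valid / np.linalg.norm(n_valid)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)        # half-way vector H
    diffuse = max(float(np.dot(n, l)), 0.0)
    specular = max(float(np.dot(n, h)), 0.0) ** shininess
    return (ka + kd * diffuse) * c_mip + ks * specular

# Normal facing the light: full diffuse term plus specular highlight.
c = semip_shade(c_mip=0.8, n_valid=np.array([0.0, 0.0, 1.0]),
                light_dir=np.array([0.0, 0.0, 1.0]),
                view_dir=np.array([0.0, 0.0, 1.0]))
# c == (0.3 + 0.6) * 0.8 + 0.2 == 0.92
```

Note that a bright CMIP can push the shaded value past the displayable maximum, which is exactly the situation the tone reduction step of Sect. 3.3 handles.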

The proposed MIP lighting complements other MIP enhancement techniques. For instance, CMIP can be replaced with CDEMIP [6], which is obtained by blending the maximum color and the depth of the similar material. This combination further enhances depth perception of the shaded result.

3.3 Tone reduction

Fig. 5 Tone reduction for SEMIP. The left image presents the SEMIP colors without tone reduction. In the middle image, tone reduction is applied with the parameter η = 0.6. On the right, η = 0.9

Based on the proposed MIP lighting, lighting effects are imposed on the maximum intensity of every viewing ray. However, it is difficult to distinguish internal structures in the bright area, because the shaded value may exceed the maximum intensity of the display device, and this reduces the contrast of the rendered image. This problem has been well investigated as tone reduction in computer graphics. Many tone-reduction operators have been developed to effectively map HDR images to LDR display devices while preserving the contrast of HDR images [16]. These operators can be classified into global spatially uniform operators [7] and local spatially varying operators [8]. Yuan et al. [27, 28] introduced the HDR technique into volume visualization, and presented an interactive HDR volume visualization framework. In this paper, we simply choose a global operator, the adaptive logarithmic reduction operator [7], to compress the MIP shaded result, as it is very fast and works interactively for SEMIP. The formula of this operator is

Cnew = (Dmax − Dmin) ∗ [log(C + γ) − log(Cmin + γ)] / [log(Cmax + γ) − log(Cmin + γ)] + Dmin,   (2)

where Cnew is the resultant color value after tone reduction, Cmin and Cmax are the minimum and maximum color values obtained by the MIP lighting, and Dmin and Dmax are the minimum and maximum luminance of the display device. γ = η ∗ (Cmax − Cmin), where η is a parameter that lets users tune the brightness of rendered images. Figure 5 shows the effectiveness of tone reduction in the SEMIP result. Without tone reduction, the shape and boundary of the colon are hard to distinguish on the left of Fig. 5. By tuning the parameter η, tone reduction reduces the intensity of the rendered images and enhances the local contrast of internal structures, especially the colon, as shown in the middle and right of Fig. 5.
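A minimal sketch of the operator in Eq. (2), assuming scalar luminance values and a display range of [0, 1]:

```python
import numpy as np

def adaptive_log_reduction(c, d_min=0.0, d_max=1.0, eta=0.6):
    """Eq. (2): global logarithmic tone reduction mapping shaded
    values `c` into the display range [d_min, d_max]; eta tunes the
    brightness through gamma = eta * (Cmax - Cmin)."""
    c = np.asarray(c, dtype=float)
    c_min, c_max = c.min(), c.max()
    gamma = eta * (c_max - c_min)
    num = np.log(c + gamma) - np.log(c_min + gamma)
    den = np.log(c_max + gamma) - np.log(c_min + gamma)
    return (d_max - d_min) * num / den + d_min

shaded = np.array([0.1, 0.5, 0.9, 1.4])  # values above 1.0 would clip
out = adaptive_log_reduction(shaded)
# out spans exactly [0, 1]; the logarithm compresses bright values
# more strongly than dark ones, preserving contrast in dim regions
```

Being a global operator, it is a single closed-form expression per pixel, which is why it runs interactively on the GPU alongside the two ray-casting passes.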

4 Visual enhancements

With lighting effects, the local shapes and relations of internal structures are well presented for the spatial comprehension of features. Two visual enhancement schemes are presented in this section to further enhance the visual perception of features. One is depth-based color cues generated with the HSV color model. The other is a focus and context interaction to highlight features of interest.

4.1 Color cues

Although lighting effects improve the visual perception of structures, it is still somewhat difficult to distinguish the ordering of complex structures. Depth information is usually represented by color, i.e., different colors correspond to different depths. In this paper, we resort to the HSV color model to produce depth-based color cues and further enhance depth perception of internal structures.
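One possible depth-to-color mapping in the HSV model is sketched below; the particular hue range (red for near, blue for far) and the helper name are illustrative assumptions, not necessarily the paper's exact mapping:

```python
import colorsys

def depth_color(depth, intensity):
    """Map a normalized depth in [0, 1] to a hue from red (near)
    to blue (far) in HSV, keeping the MIP intensity as the value
    channel so the original brightness survives the coloring."""
    hue = 2.0 / 3.0 * depth          # 0 -> red, 2/3 -> blue
    return colorsys.hsv_to_rgb(hue, 1.0, intensity)

near = depth_color(0.0, 0.9)   # (0.9, 0.0, 0.0): red for near structures
far = depth_color(1.0, 0.9)    # blue for far structures
```

Keeping the maximum intensity in the value channel is the design point: the hue encodes depth while the familiar MIP brightness remains readable.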

Figure 6 shows the rendered results using depth-based color cues for the engine data set (256 × 256 × 256). The MIP result in Fig. 6(a) does not reveal the depth order of internal features. Although the SEMIP result in Fig. 6(b) enhances shape perception of features, the depth order of internal features is still hard for users to distinguish. Two depth-based color cues, the HSV color model and the color sphere [6], are used to improve depth perception in parts (c) and (d) of Fig. 6, respectively. As a result, users can easily distinguish the depth order of internal features by color.

4.2 Focus and context

A simple focus and context interaction is provided to highlight features of interest. The user clicks on the feature of interest in the MIP rendered image. The clicked position is taken as the center of a region of interest (ROI) to determine a focus area, and the radius of the focus area can be further adjusted according to the user's requirements. The focus area is emphasized by the MIP lighting or depth-based color cues, while the rest is displayed using the original MIP colors as the context. A focus and context example for the engine data set is shown in Fig. 6(e), where internal features are highlighted by depth-based color cues.
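The focus and context blend can be sketched as a per-pixel selection between the SEMIP and MIP images; the circular ROI test and the helper name are illustrative assumptions:

```python
import numpy as np

def focus_and_context(mip_image, semip_image, center, radius):
    """Show the shaded SEMIP colors inside a circular region of
    interest around `center`, and the plain MIP colors elsewhere
    as context (hypothetical helper, not the paper's code)."""
    h, w = mip_image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = (ys - center[0]) ** 2 + (xs - center[1]) ** 2 <= radius ** 2
    return np.where(inside, semip_image, mip_image)

mip_img = np.full((8, 8), 0.2)     # context: plain MIP intensities
semip_img = np.full((8, 8), 0.8)   # focus: shaded SEMIP intensities
out = focus_and_context(mip_img, semip_img, center=(4, 4), radius=2)
# out[4, 4] == 0.8 (focus area), out[0, 0] == 0.2 (context)
```

In a shader the same selection would be a branch on the distance from the clicked pixel, so adjusting the ROI radius needs no recomputation of either image.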

5 Results and discussion

We have implemented the proposed SEMIP in a GPU-based ray casting volume renderer using fragment shaders in the Cg shading language. All the experiments were performed on an Intel Core 2 Quad Q9550 CPU @ 2.83 GHz with 4.00 GB of RAM. The graphics card is an NVIDIA GeForce GTX 260 GPU with 1.00 GB of RAM. MIP, DEMIP, MIDA, DVR, and our SEMIP are applied to a number of different volume data sets for a visual comparison of how well they communicate internal structures.

Fig. 6 Visual enhancements based on SEMIP for the volumetric engine data. (a) MIP. (b) SEMIP. (c) Depth-based color cues based on the HSV color model. (d) Depth-based coloring based on the color sphere. (e) The focus and context interaction

Fig. 7 The rendered results using (a) MIP, (b) SEMIP, (c) MIDA and (d) DVR for the PELVIX CT data set. MIDA and DVR use the same transfer function

Figure 1 shows the rendered results of the ncat_phantom data set using MIP, DEMIP, SEMIP, and DVR. Compared to MIP and DEMIP, more shape, detail, and depth information is provided in the SEMIP result. Figure 7 shows the rendered results of MIP, SEMIP, MIDA and DVR for a PELVIX data set (512 × 512 × 355). Although MIDA also supports lighting, the MIDA result in Fig. 7(c) is much darker than the SEMIP and DVR results due to the blending between the shaded color and the MIP color. Like MIP, SEMIP considers the maximum intensity along each viewing ray as the structure of interest. Abundant local shape cues are further presented in the SEMIP result by means of gradient-based shading, which is similar to DVR. As SEMIP has the advantages of both MIP and DVR, users can analyze shape-enhanced features in the SEMIP result more efficiently. As can be seen from Fig. 7(b), more perceptible structural shape information is presented and the visual perception of features is largely enhanced. In addition, SEMIP does not need a well-designed transfer function to be specified, and is thus more suitable for medical applications than DVR and MIDA.

Figure 8 shows various rendered results for an aneurism data set (256 × 256 × 256) to compare MIP, DEMIP, MIDA, SEMIP, and DVR. Figure 8(a) is the rendered result of MIP, similar to an X-ray image. DEMIP is designed to improve depth perception of features by blending the MIP color and the depth value of the similar material, but the shape information of the aneurism is not well displayed, as can be seen from Fig. 8(b). The spatial context of the aneurism is clearly revealed in the SEMIP result in Fig. 8(c), and depth-based color cues are added in Fig. 8(d). The rendered results of MIDA and DVR based on the same transfer function are displayed in parts (e), (f) and (g) of Fig. 8. Focus and context is a useful technique to attract the user's attention to the highlighted feature. As shown in Fig. 8(h), the local context and shape information of the aneurism is highlighted, while other blood vessels are preserved as the context.

An MRI scan FELIX volume data set (448 × 576 × 120) is used to further demonstrate the effectiveness of SEMIP. Figure 9 presents the rendered results of MIP, DEMIP, SEMIP, MIDA and DVR. In the MIP result shown in Fig. 9(a), it is hard to analyze the aneurism due to the lack of context information. The DEMIP result in Fig. 9(b) presents the depth order between the aneurism and the other vessels. SEMIP combines the advantages of MIP and DVR


Fig. 8 The rendered results for the volumetric aneurism data set based on various rendering techniques

Fig. 9 The rendered results using (a) MIP, (b) DEMIP, (c) SEMIP, (d) MIDA and (e) DVR for the FELIX MRI data set

to provide clear shape cues of the aneurism and vessels by using gradient-based shading. Compared to MIDA and DVR, users can obtain more local shape cues and important contextual information in the SEMIP result, without the specification of a complex transfer function.

The shading effects of SEMIP largely depend on the validity of the representative gradient. A global threshold is used to determine the similarity between the current sampled value and the maximum intensity. As the maximum intensities of different viewing rays differ, it is hard to set up an optimal global threshold for all situations. Currently, users may need to try several values to find the optimal threshold for some complex data sets, which adds an extra burden. A locally adaptive threshold would be better for the search for the representative gradient. As the representative gradient is calculated from the sampled values of the volume, noise in the volume affects the accuracy of the representative gradient, similarly to shading in DVR. Denoising methods can be used to reduce the influence of noise before applying SEMIP rendering. Another limitation of SEMIP is the inaccurate depth order of overlapping features. As shown in Fig. 1, the lung is behind the bone. This is, in fact, an inherent limitation of MIP, as MIP visualizes only the maximum intensity of each viewing ray. For overlapping features, MIP renders features according to their intensity order and takes this order as the depth order of the features. Features with lower intensity are occluded by features with higher intensity, even if the latter are physically behind the former. Fortunately, depth-based color cues can alleviate this problem to some extent.
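The depth-based color cue can be sketched with Python's standard colorsys module. The warm-to-cool hue ramp and the fixed saturation below are illustrative assumptions, not the paper's exact HSV mapping: the shaded intensity becomes the HSV value channel, and the depth of the maximum-intensity sample drives the hue.

```python
import colorsys

def depth_color(intensity, depth, depth_min, depth_max):
    """Hypothetical depth cue: keep the shaded intensity as the HSV
    value, and shift hue from warm (near) to cool (far) with depth."""
    t = (depth - depth_min) / (depth_max - depth_min)  # 0 = near, 1 = far
    hue = t * (2.0 / 3.0)        # 0 (red) -> 2/3 (blue) in HSV hue space
    return colorsys.hsv_to_rgb(hue, 0.4, intensity)

near = depth_color(1.0, 0.0, 0.0, 10.0)   # reddish: near structure
far  = depth_color(1.0, 10.0, 0.0, 10.0)  # bluish: far structure
```

Two equally bright features at different depths thus receive distinguishable tints, which is how the cue partially compensates for MIP's intensity-based occlusion order.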

Table 1 presents the performance comparison between MIP, DEMIP, DVR, MIDA, and SEMIP. MIP is the fastest among



Table 1 The performance comparison in fps (frames per second)

Method     Ncat_phantom   Aneurism   PELVIX    FELIX

MIP        52.6342        32.8827    15.2737   40.3712
DVR        47.6677        21.1535    15.5505   34.3322
MIDA       38.6829        20.1417    13.7135   34.7621
DEMIP      40.5504        26.7283    13.0363   35.7377
SEMIP      39.8433        20.1883    13.0834   34.7093

these rendering techniques, as it only needs to trace one ray to find the maximum value along each viewing ray. DEMIP traces two rays, to look for the maximum intensity value and the similar material, while SEMIP also traces two rays, to search for the maximum intensity and the representative gradient. In addition, SEMIP finally performs a relatively costly Phong shading operation; thus, the performance of SEMIP is slightly lower than that of DEMIP. Although DVR performs Phong shading for each valid gradient, the early-ray-termination technique saves DVR from traversing the whole volume and improves its rendering efficiency compared to SEMIP for some volume data sets. However, SEMIP can quickly produce a shape-enhanced rendered result without the time-consuming specification of a well-designed transfer function.
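The early-ray-termination behavior that gives DVR this advantage can be illustrated with a minimal front-to-back compositing loop. This is a generic DVR sketch, not this paper's renderer; the 0.95 opacity threshold is an assumed value.

```python
def composite_front_to_back(samples, alpha_threshold=0.95):
    """Front-to-back DVR compositing with early ray termination: once
    the accumulated opacity nears 1, later samples cannot visibly
    change the pixel, so the ray is terminated early.

    samples: iterable of (color, opacity) pairs in front-to-back order.
    Returns (accumulated color, accumulated opacity, samples visited).
    """
    color_acc, alpha_acc, visited = 0.0, 0.0, 0
    for c, a in samples:
        color_acc += (1.0 - alpha_acc) * c * a   # under-operator blend
        alpha_acc += (1.0 - alpha_acc) * a
        visited += 1
        if alpha_acc >= alpha_threshold:
            break  # early ray termination
    return color_acc, alpha_acc, visited

# Ten fairly opaque samples: the ray stops after only two of them.
color, alpha, visited = composite_front_to_back([(1.0, 0.8)] * 10)
```

SEMIP, by contrast, must always reach the global maximum of each ray before it can shade, so it cannot benefit from this optimization in the same way.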

6 Conclusion

We have presented a shape-enhanced maximum intensity projection visualization technique, which can rapidly render internal structures with clear shape cues and local context information. We first resort to two-pass ray casting to search for the representative gradient of the maximum intensity, and then gradient-based Phong shading is applied to enhance the shape perception of structures. As shaded values may exceed the maximum intensity of the display device, a global tone reduction operation is used to compress the intensity range of the rendered image while preserving the original local contrast. Furthermore, depth-based color cues based on the HSV color model are utilized to improve the visual perception of structures, especially depth perception. A focus and context interaction is used to highlight structures of interest, and also improves the rendering efficiency of SEMIP. Compared to traditional rendering techniques, SEMIP presents much more shape and local detail information than DEMIP and MIP. Without the need to specify a well-designed transfer function, SEMIP is simpler and more effective than DVR and MIDA. Thus, SEMIP is well suited to medical applications.
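As a rough illustration of the idea of tone reduction, a simple logarithmic compression in the spirit of Drago et al. [7] can map out-of-range shaded values back into [0, 1]. The operator below is a hedged sketch and deliberately simpler than the local-contrast-preserving operator used in the paper.

```python
import math

def log_tone_reduce(value, v_max):
    """Compress shaded intensities that exceed the display range with a
    simple logarithmic curve: monotone, and maps v_max to exactly 1.0.
    A stand-in for a full tone reduction operator, for illustration."""
    if v_max <= 1.0:
        return value              # nothing exceeds the display range
    return math.log1p(value) / math.log1p(v_max)

# A shaded value of 1.8 on an image whose maximum shaded value is 2.5:
out = log_tone_reduce(1.8, 2.5)
```

Because the curve is monotone, the relative ordering of intensities (and hence the apparent MIP structure) is preserved while the absolute range is brought back within the display's capability.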

In the future, we plan to employ a locally adaptive parameter to replace the current global threshold in the representative gradient search. This will help find more accurate representative gradients for each viewing ray and provide more accurate visual perception of structures. Besides the local Phong illumination model, global illumination models such as ambient occlusion would further improve shape and detail perception. Consequently, we will investigate how to integrate global illumination models into MIP.

Acknowledgements The authors would like to thank the anonymous reviewers for their valuable comments. This work was supported by the NSF of China (Nos. 60873122 and 60903133) and the Open Project Program of the State Key Lab of CAD&CG (Grant No. A1012), Zhejiang University. The data sets are courtesy of VoreenPub, The Visible Human Project, General Electric, and Philips Research.

References

1. Behrens, U., Ratering, R.: Adding shadows to a texture-based volume renderer. In: VVS '98: Proceedings of the 1998 IEEE Symposium on Volume Visualization, pp. 39–46. ACM, New York (1998)

2. Bruckner, S., Gröller, M.E.: Enhancing depth-perception with flexible volumetric halos. IEEE Trans. Vis. Comput. Graph. 13(6), 1344–1351 (2007)

3. Bruckner, S., Gröller, M.E.: Instant volume visualization using maximum intensity difference accumulation. Comput. Graph. Forum 28(3), 775–782 (2009)

4. Correa, C., Ma, K.L.: Size-based transfer functions: A new volume exploration technique. IEEE Trans. Vis. Comput. Graph. 14(6), 1380–1387 (2008)

5. Debevec, P.E., Malik, J.: Recovering high dynamic range radiance maps from photographs. In: SIGGRAPH '97: Proceedings of the 24th Annual Conference on Computer Graphics and Interactive Techniques, pp. 369–378. ACM/Addison-Wesley, New York (1997)

6. Díaz, J., Vázquez, P.: Depth-enhanced maximum intensity projection. In: 8th IEEE/EG International Symposium on Volume Graphics, pp. 93–100 (2010)

7. Drago, F., Myszkowski, K., Annen, T., Chiba, N.: Adaptive logarithmic mapping for displaying high contrast scenes. Comput. Graph. Forum 22(3), 419–426 (2003)

8. Durand, F., Dorsey, J.: Fast bilateral filtering for the display of high-dynamic-range images. ACM Trans. Graph. 21(3), 257–266 (2002)

9. Ebert, D., Rheingans, P.: Volume illustration: non-photorealistic rendering of volume models. In: VIS '00: Proceedings of the Conference on Visualization '00, pp. 195–202. IEEE Computer Society, Los Alamitos (2000)

10. Hauser, H., Mroz, L., Bischi, G.I., Gröller, M.E.: Two-level volume rendering. IEEE Trans. Vis. Comput. Graph. 7(3), 242–252 (2001)

11. Heidrich, W., McCool, M., Stevens, J.: Interactive maximum projection volume rendering. In: Proceedings of the 6th Conference on Visualization '95, pp. 11–18. IEEE Computer Society, Washington (1995)

12. Kindlmann, G.: Transfer functions in direct volume rendering: design, interface, interaction. In: ACM SIGGRAPH Course Notes (2002)

13. Kindlmann, G., Whitaker, R., Tasdizen, T., Möller, T.: Curvature-based transfer functions for direct volume rendering: Methods and applications. In: VIS '03: Proceedings of the 14th IEEE Visualization 2003, pp. 513–520. IEEE Computer Society, Washington (2003)



14. Kniss, J., Kindlmann, G., Hansen, C.: Multidimensional transfer functions for interactive volume rendering. IEEE Trans. Vis. Comput. Graph. 8(3), 270–285 (2002)

15. Kniss, J., Premoze, S., Hansen, C., Shirley, P., McPherson, A.: A model for volume lighting and modeling. IEEE Trans. Vis. Comput. Graph. 9(2), 150–162 (2003)

16. Ledda, P., Chalmers, A., Troscianko, T., Seetzen, H.: Evaluation of tone mapping operators using a high dynamic range display. ACM Trans. Graph. 24(3), 640–648 (2005)

17. Max, N.: Optical models for direct volume rendering. IEEE Trans. Vis. Comput. Graph. 1(2), 99–108 (1995)

18. Mora, B., Ebert, D.S.: Low-complexity maximum intensity projection. ACM Trans. Graph. 24(4), 1392–1416 (2005)

19. Mroz, L., Hauser, H., Gröller, M.E.: Interactive high-quality maximum intensity projection. Comput. Graph. Forum 19(3), 341–350 (2000)

20. Nagy, Z., Schneider, J., Westermann, R.: Interactive volume illustration. In: Proceedings of Vision, Modeling and Visualization 2002, pp. 497–504 (2002)

21. Rezk-Salama, C., Hadwiger, M., Ropinski, T., Ljung, P.: Advanced illumination techniques for GPU volume ray-casting. In: ACM SIGGRAPH 2009 Course Notes (2009)

22. Ropinski, T., Steinicke, F., Hinrichs, K.: Visually supporting depth perception in angiography imaging. In: Lecture Notes in Computer Science, vol. 4073, pp. 93–104 (2006)

23. Sato, Y., Shiraga, N., Nakajima, S., Tamura, S., Kikinis, R.: LMIP: Local maximum intensity projection: Comparison of visualization methods using abdominal CT-angiography. J. Comput. Assist. Tomogr. 22(6), 912–917 (1998)

24. Straka, M., Cervenanský, M., Cruz, A.L., Köchl, A., Srámek, M., Gröller, M.E., Fleischmann, D.: The VesselGlyph: Focus and context visualization in CT-angiography. In: IEEE Visualization 2004, pp. 385–392 (2004)

25. Svakhine, N.A., Ebert, D.S., Andrews, W.M.: Illustration-inspired depth enhanced volumetric medical visualization. IEEE Trans. Vis. Comput. Graph. 15(1), 77–86 (2009)

26. Wallis, J., Miller, T., Lerner, C., Kleerup, E.: Three-dimensional display in nuclear medicine. IEEE Trans. Med. Imaging 8(4), 297–303 (1989)

27. Yuan, X., Nguyen, M.X., Chen, B., Porter, D.H.: High dynamic range volume visualization. In: Proceedings of the Conference on Visualization 2005, pp. 327–334. IEEE Computer Society, Washington (2005)

28. Yuan, X., Nguyen, M.X., Chen, B., Porter, D.H.: HDR VolVis: High dynamic range volume visualization. IEEE Trans. Vis. Comput. Graph. 12(4), 433–445 (2006)

Zhiguang Zhou received his B.Sc. and M.Sc. degrees in Computer Science and Technology from Liaoning Normal University, Liaoning, China, in 2006 and 2009, respectively. Currently he is a Ph.D. candidate in the State Key Laboratory of CAD&CG of Zhejiang University. His research interest is scientific visualization.

Yubo Tao received the B.Sc. and Ph.D. degrees in Computer Science and Technology from Zhejiang University, China, in 2003 and 2009, respectively. He is currently a Postdoctoral Researcher in the State Key Laboratory of CAD&CG of Zhejiang University, and a Research Fellow in the Center for Computer Graphics & Visualization (CCGV) at the University of Bedfordshire. His research interests include data visualization and computational electromagnetics.

Hai Lin received the B.Sc. and M.Sc. in Electrical Engineering from Xidian University, Xi'an, China, in 1987 and 1990, respectively. He received the Ph.D. degree in Computer Science from Zhejiang University, Hangzhou, China, in 2000. Currently, he is Professor of Visual Computing in the State Key Lab of CAD&CG, Zhejiang University. He is also a Visiting Professor in the Department of Computing and Information Systems, University of Bedfordshire, UK. His research interests include computational electromagnetics, computer graphics, and scientific visualization.

Feng Dong is Professor of Visual Computing in the Department of Computing and Information Systems, University of Bedfordshire, UK. His research interests include fundamental computer graphics algorithms, texture synthesis, image-based rendering, medical visualization, volume rendering, human modeling/rendering and virtual reality. He received a Ph.D. in Computer Science from Zhejiang University, China.

Gordon Clapworthy is Professor of Computer Graphics and Head of the Center for Computer Graphics & Visualization (CCGV) at the University of Bedfordshire, Luton, UK. He received a B.Sc. (1st class) in Mathematics and a Ph.D. in Aeronautical Engineering from the University of London, and an M.Sc. (dist.) in Computer Science from the City University in London. He has published nearly 150 refereed papers in journals and international conferences and is a member of ACM, ACM SIGGRAPH and Eurographics. His areas of interest span computer graphics, computer animation, visualization, computer interaction, virtual reality and biomechanics.