
Illustrative Context-Preserving Volume Rendering

Stefan Bruckner, Sören Grimm, Armin Kanitsar, and M. Eduard Gröller

{bruckner | grimm | kanitsar | groeller}@cg.tuwien.ac.at
Institute of Computer Graphics and Algorithms, Vienna University of Technology, Austria

Figure 1: Contrast-enhanced CT angiography data set. (a) Gradient-magnitude opacity-modulation. (b) Direct volume rendering. (c) Direct volume rendering with cutting plane. (d) Context-preserving volume rendering.

Abstract

In volume rendering it is very difficult to simultaneously visualize interior and exterior structures while preserving clear shape cues. Very transparent transfer functions produce cluttered images with many overlapping structures, while clipping techniques completely remove possibly important context information. In this paper we present a new model for volume rendering, inspired by techniques from illustration, that provides a means of interactively inspecting the interior of a volumetric data set in a feature-driven way which retains context information. The context-preserving volume rendering model uses a function of shading intensity, gradient magnitude, distance to the eye point, and previously accumulated opacity to selectively reduce the opacity in less important data regions. It is controlled by two user-specified parameters. This new method represents an alternative to conventional clipping techniques, shares their easy and intuitive user control, but does not suffer from the drawback of missing context information.

Categories and Subject Descriptors (according to ACM CCS): I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism

1. Introduction

Theoretically, in direct volume rendering every single sample contributes to the final image, allowing simultaneous visualization of surfaces and internal structures. However, in practice it is rather difficult and time-demanding to specify an appropriate transfer function. Due to the exponential attenuation, objects occluded by other semi-transparent objects are difficult to recognize. Furthermore, reducing opacities also results in reduced shading contributions per sample, which causes a loss of shape cues, as shown in Figure 1 (a).

One approach to overcome this difficulty is to use steep transfer functions, i.e., those with rapidly increasing opacities at particular value ranges of interest. This may increase depth cues by creating the appearance of surfaces within the volume, but it does so by hiding all information in some regions of the volume, sacrificing a key advantage of volume rendering. Figure 1 (b) shows an example.

Image courtesy of Kevin Hulsey Illustration. Copyright © 2004 Kevin Hulsey Illustration, All rights reserved. http://www.khulsey.com

Figure 2: Technical illustration using ghosting to display interior structures.

Thus, volume clipping usually plays a decisive role in understanding 3D volumetric data sets. It allows us to cut away selected parts of the volume, based on the position of voxels in the data set. Very often, clipping is the only way to uncover important, otherwise hidden, details of a data set. However, as these clipping operations are generally not data-aware, they do not take into account features of the volume. Consequently, clipping can also remove important context information, leading to confusing and partly misleading result images, as displayed in Figure 1 (c).

In order to resolve these issues, we propose to only suppress regions which do not contain strong features when browsing through the volume. Our idea is based on the observation that large regions of high lighting intensity usually correspond to rather homogeneous areas which do not contain characteristic features. While the position and shape of specular highlights, for example, give good cues for perceiving the curvature of a surface, the area inside the highlight could also be used to display other information. Thus, we propose to make this area transparent, allowing the user to see the interior of the volume.

In illustration, when artists want to visualize specific internal structures as well as the exterior, the use of ghosting is common, where less significant items are reduced to an illusion of transparency. For example, rather flat surfaces are faded from opaque to transparent to reveal the interior of an object. Detailed structures are still displayed with high opacity, as shown in the technical illustration in Figure 2. The goal of this illustration technique is to provide enough hints to enable viewers to mentally complete the partly removed structures.

Using our idea of lighting-driven feature classification, we can easily mimic this artistic concept in an illustrative volume rendering model. Our approach allows context-preserving clipping by adjusting the model parameters. The approach of clipping planes is extended to allow feature-aware clipping. Figure 1 (d) shows a result achieved with our method - the blood vessels inside the head are revealed while preserving context information.

2. Related Work

We draw our inspiration mainly from three areas of research in volume visualization: volume clipping, illustrative volume rendering, and transfer functions. The term illustrative volume rendering refers to approaches dealing with feature enhancement, alternative shading and compositing models, and related matters, which are usually called non-photorealistic volume rendering methods. We believe that the adjective illustrative more accurately describes the nature of many of these techniques.

Volume Clipping. There are approaches which try to remedy the deficiencies of simple clipping planes by using more complex clipping geometry. Weiskopf et al. [WEE02, WEE03] presented techniques for interactively applying arbitrary convex and concave clipping objects. Konrad-Verse et al. [KVPL04] use a mesh which can be flexibly deformed by the user with an adjustable sphere of influence. Zhou et al. [ZDT04] propose the use of distance to emphasize and de-emphasize different regions. They use the distance from the eye point to directly modulate the opacity at each sample position. Thus, their approach can be seen as a generalization of binary clipping. Volume sculpting, proposed by Wang and Kaufman [WK95], enables interactive carving of volumetric data. An automated way of performing clipping operations has been presented by Viola et al. [VKG04]. Inspired by cut-away views, which are commonly used in technical illustrations, they apply different compositing strategies to prevent an object from being occluded by a less important object.

Illustrative Volume Rendering. A common illustrative method is gradient-magnitude opacity-modulation. Levoy [Lev88] proposed to modulate the opacity at a sample position using the magnitude of the local gradient. This is an effective way to enhance surfaces in volume rendering, as homogeneous regions are suppressed. Based on this idea, Ebert and Rheingans [ER00, RE01] presented several illustrative techniques which enhance features and add depth and orientation cues. They also propose to locally apply these methods for regional enhancement. Using similar methods, Lu et al. [LME∗02] developed an interactive direct volume illustration system that simulates traditional stipple drawing. Csébfalvi et al. [CMH∗01] visualize object contours based on the magnitude of local gradients as well as on the angle between viewing direction and gradient vector using depth-shaded maximum intensity projection. The concept of two-level volume rendering, proposed by Hauser et al. [HMBG01], allows focus+context visualization of volume data. Different rendering styles, such as direct volume rendering and maximum intensity projection, are used to emphasize objects of interest while still displaying the remaining data as context.

Transfer Functions. Multi-dimensional transfer functions have been proposed to extend the classification space and thus allow better selection of features. These transfer functions take into account derivative information. For example, Kindlmann et al. [KWTM03] use curvature information to achieve illustrative effects, such as ridge and valley enhancement. Lum and Ma [LM04] assign colors and opacities as well as parameters of the illumination model through a transfer function lookup. They apply a two-dimensional transfer function to emphasize material boundaries using illumination. While there are many advantages of multi-dimensional approaches, they also have some issues: it is, for example, difficult to develop a simple and intuitive user interface which allows the specification of multi-dimensional transfer functions. One possible solution was presented by Kniss et al. [KKH01], who introduce probing and classification widgets.

The main contribution of this paper is a new illustrative volume rendering model which incorporates the functionality of clipping planes in a feature-aware manner. It includes concepts from artistic illustration to enhance the information content of the image. Cumbersome transfer function specification is simplified and no segmentation is required, as features are classified implicitly. Thus, our approach is especially well-suited for interactive exploration.

3. The Context-Preserving Volume Rendering Model

Lighting plays a critically important role in illustrating surfaces. In particular, lighting variations provide visual cues regarding surface orientation. This is acknowledged by Lum and Ma [LM04], who highlight material boundaries by using two-dimensional lighting transfer functions. In contrast, in our approach the lighting intensity serves as an input to a function which varies the opacity based on this information, i.e., we use the result of the shading intensity function to classify features.

3.1. Background

We assume a continuous volumetric scalar field f(P). A sample at position P_i is denoted by f_{P_i}. We denote the gradient at position P_i by g_{P_i} = ∇f(P_i). We use ĝ_{P_i} for the normalized gradient and ‖g_{P_i}‖ for the gradient magnitude normalized to the interval [0..1], where zero corresponds to the lowest gradient magnitude and one corresponds to the highest gradient magnitude in the data set.

A discrete approximation of the volume rendering integral along a viewing ray uses the front-to-back formulation of the over operator to compute the opacity α_i and the color c_i at each step along the ray:

α_i = α_{i−1} + α(P_i) · (1 − α_{i−1})
c_i = c_{i−1} + c(P_i) · α(P_i) · (1 − α_{i−1})    (1)

α(P_i) and c(P_i) are the opacity and color contributions at position P_i; α_{i−1} and c_{i−1} are the previously accumulated values for opacity and color.
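As a concrete illustration, the front-to-back compositing of Equation 1 can be sketched in a few lines of Python (a minimal scalar-color version; the function and variable names are ours, not from the paper):

```python
def composite_front_to_back(sample_alphas, sample_colors):
    """Front-to-back compositing along one viewing ray (Equation 1).

    sample_alphas: opacity contributions alpha(P_i) in [0, 1], front to back.
    sample_colors: color contributions c(P_i); scalars here for brevity,
                   but an RGB triple works the same way component-wise.
    Returns the accumulated (opacity, color) for the ray.
    """
    alpha_acc = 0.0  # alpha_{i-1}: previously accumulated opacity
    color_acc = 0.0  # c_{i-1}: previously accumulated color
    for a, c in zip(sample_alphas, sample_colors):
        transmittance = 1.0 - alpha_acc  # uses the *previous* alpha_{i-1}
        color_acc += c * a * transmittance
        alpha_acc += a * transmittance
    return alpha_acc, color_acc
```

Once alpha_acc approaches one, further samples contribute almost nothing to the ray, which is also what gives the 1 − α_{i−1} term of the model in Section 3.2 its meaning.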

For conventional direct volume rendering using shading, α(P_i) and c(P_i) are defined as follows:

α(P_i) = α_tf(f_{P_i})
c(P_i) = c_tf(f_{P_i}) · s(P_i)    (2)

α_tf and c_tf are the opacity and color transfer functions; they map an opacity and color to each scalar value in the volumetric function. s(P_i) is the value of the shading intensity at the sample position.

For the Phong-Blinn model using one directional light source, s(P_i) is:

s(P_i) = c_d · ‖L · ĝ_{P_i}‖ + c_s · (‖H · ĝ_{P_i}‖)^{c_e} + c_a    (3)

c_d, c_s, and c_a are the diffuse, specular, and ambient lighting coefficients, respectively, and c_e is the specular exponent. L is the normalized light vector and H is the normalized half-way vector. For conventional direct volume rendering, the opacity at point P_i is only determined by the scalar value f_{P_i} and the opacity transfer function. Gradient-magnitude opacity-modulation additionally scales the opacity by the gradient magnitude, causing an enhanced display of boundaries. Thus, for gradient-magnitude opacity-modulation α(P_i) is defined in the following way:

α(P_i) = α_tf(f_{P_i}) · ‖g_{P_i}‖    (4)
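For illustration, Equations 3 and 4 can be sketched as follows (Python; the lighting coefficients are arbitrary example defaults, not values from the paper):

```python
def shading_intensity(grad, light, half, cd=0.6, cs=0.3, ca=0.1, ce=32.0):
    """Phong-Blinn shading intensity s(P_i) of Equation 3.

    grad:  normalized gradient at the sample, used as the surface normal.
    light: normalized light vector L.
    half:  normalized half-way vector H.
    cd, cs, ca, ce: diffuse, specular, and ambient coefficients and the
    specular exponent (illustrative defaults, not from the paper).
    """
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    # |L . g| and |H . g|^ce, as in Equation 3
    return cd * abs(dot(light, grad)) + cs * abs(dot(half, grad)) ** ce + ca

def gradient_modulated_alpha(alpha_tf, grad_mag):
    """Gradient-magnitude opacity-modulation of Equation 4."""
    return alpha_tf * grad_mag
```

With grad, light, and half all aligned, the intensity is simply c_d + c_s + c_a, which is a quick sanity check for the implementation.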

3.2. Model

In direct volume rendering, the illumination intensity normally does not influence the opacity at a sample position. Large regions of highly illuminated material normally correspond to rather flat surfaces that are oriented towards the light source. Our idea is to reduce the opacity in these regions. Regions, on the other hand, which receive less lighting, for example contours, will remain visible.

Therefore we choose to use the result of the shading intensity function for opacity modulation. Furthermore, as we want to mimic - to a certain extent - the look and feel of a clipping plane, we also want to take the distance to the eye point into account. For viewing rays having already accumulated a lot of opacity, we want to reduce the attenuation by our model. The total effect should also take the gradient magnitude into account.

Page 4: Illustrative Context-Preserving Volume Rendering...2001/02/05  · practice it is rather difficult and time-demanding to spec-ify an appropriate transfer function. Due to the exponential

4 Bruckner et al. / Illustrative Context-Preserving Volume Rendering

Figure 3: Context-preserving volume rendering of a contrast-enhanced CT angiography data set using different values for κ_t and κ_s. Columns (left to right) use κ_t = 1.5, 3.0, 4.5, 6.0; rows (top to bottom) use κ_s = 0.4, 0.6, 0.8.

These requirements lead us to the following equation for the opacity at each sample position P_i:

α(P_i) = α_tf(f_{P_i}) · ‖g_{P_i}‖^((κ_t · s(P_i) · (1 − ‖P_i − E‖) · (1 − α_{i−1}))^{κ_s})    (5)

‖g_{P_i}‖ is the gradient magnitude and s(P_i) is the shading intensity at the current sample position. A high value of s(P_i) indicates a highlight region and decreases opacity. The term ‖P_i − E‖ is the distance of the current sample position to the eye point, normalized to the range [0..1], where zero corresponds to the sample position closest to the eye point and one corresponds to the sample position farthest from the eye point. Thus, the effect of our model will decrease as distance increases. Due to the term 1 − α_{i−1}, structures located behind semi-transparent regions will appear more opaque. The influence of the product of these three components is controlled by the two user-specified parameters κ_t and κ_s. The parameter κ_t roughly corresponds - due to the position-dependent term 1 − ‖P_i − E‖ - to the depth of a clipping plane, i.e., higher values reveal more of the interior of the volume. This is the one parameter the user will modify interactively to explore the data set. The effect of modifying κ_s is less pronounced - its purpose is to allow control of the sharpness of the cut. Higher values will result in very sharp cuts, while lower values produce smoother transitions.
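A sketch of Equation 5 in Python may make the interplay of the terms concrete (names are ours; the parameter defaults are example values from the range explored in Figure 3):

```python
def context_preserving_alpha(alpha_tf, grad_mag, shading, depth, alpha_acc,
                             kappa_t=4.5, kappa_s=0.6):
    """Opacity alpha(P_i) of Equation 5 (a sketch).

    alpha_tf:  transfer-function opacity alpha_tf(f_{P_i})
    grad_mag:  gradient magnitude ||g_{P_i}||, normalized to [0, 1]
    shading:   shading intensity s(P_i)
    depth:     ||P_i - E||, distance to the eye point, normalized to [0, 1]
    alpha_acc: previously accumulated opacity alpha_{i-1}
    """
    exponent = (kappa_t * shading * (1.0 - depth) * (1.0 - alpha_acc)) ** kappa_s
    return alpha_tf * grad_mag ** exponent
```

The limiting behavior described in Section 4 falls out directly: with kappa_t = 0 the exponent becomes zero, grad_mag ** 0 is one, and the plain transfer-function opacity of direct volume rendering is returned; with kappa_s = 0 the exponent becomes one and Equation 4 (gradient-magnitude opacity-modulation) is recovered.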

Page 5: Illustrative Context-Preserving Volume Rendering...2001/02/05  · practice it is rather difficult and time-demanding to spec-ify an appropriate transfer function. Due to the exponential

Bruckner et al. / Illustrative Context-Preserving Volume Rendering 5

Figure 4: CT scan of a tooth rendered with three different techniques. (a) Gradient-magnitude opacity-modulation. (b) Direct volume rendering with clipping plane. (c) Context-preserving volume rendering.

Figure 3 shows results for different settings of κ_t and κ_s. It can be seen that as κ_t increases, more of the interior of the head is revealed - structures on the outside are more and more reduced to the strongest features. An increase in κ_s causes a sharper transition between attenuated and visible regions. A further property of this model is that it is a unifying extension of both direct volume rendering and gradient-magnitude opacity-modulation. If κ_t is set to zero, the opacity remains unmodified and normal direct volume rendering is performed. Likewise, when κ_s is set to zero, the opacity is directly modulated by the gradient magnitude.

4. Results

We experimented with the presented model using a wide variety of volumetric data sets. We have found that our approach makes transfer function specification much easier, as there is no need to pay special attention to opacity. Normally, tedious tuning is required to set the right opacity in order to provide good visibility for all structures of interest. Using the context-preserving volume rendering model, we just assign colors to the structures and use the parameters κ_t and κ_s to achieve an insightful rendering. Opacity in the transfer function is just used to suppress certain regions, such as background. This contrasts with the usual direct volume rendering approach, where opacity specification is vital in order to achieve the desired visual result. In many cases, however, good results are difficult and laborious to achieve. For example, for structures sharing the same value range, as is often the case with contrast-enhanced CT scans, it is impossible to assign different opacities using a one-dimensional transfer function. If one object is occluding the other, setting a high opacity will cause the occluded object to be completely hidden. Using high transparency, on the other hand, will make both objects hardly recognizable. Our method inherently solves this issue, as it bases opacity not only on data values, but also includes a location-dependent term. Thus, a key advantage of our approach is that it reduces the complexity of transfer function specification. In the following, we present some results achieved with our model in combination with a one-dimensional color transfer function. No segmentation was applied.

Image courtesy of Nucleus Medical Art, Inc. Copyright © 2004 Nucleus Medical Art, All rights reserved. http://www.nucleusinc.com

Figure 5: Comparing context-preserving volume rendering to illustration. (a) Context-preserving volume rendering of a hand data set. (b) Medical illustration using ghosting.

Figure 4 shows the tooth data set rendered with gradient-magnitude opacity-modulation, direct volume rendering using a clipping plane, and context-preserving volume rendering, using the same transfer function. Gradient-magnitude opacity-modulation shows the whole data set, but the overlapping transparent structures make the interpretation of the image a difficult task. On the other hand, it is very difficult to place a clipping plane in a way that it does not cut away features of interest. Using context-preserving volume rendering, the clipping depth adapts to features characterized by varying lighting intensity or high gradient magnitude.

Figure 6: Contrast-enhanced CT scan of a leg rendered using context-preserving volume rendering.

Figure 5 and Figure 6 show CT scans of a hand and a leg rendered using our model. These images have a strong resemblance to medical illustrations using the ghosting technique, as can be seen by comparing Figure 5 (a) and (b). By preserving certain characteristic features, such as creases on the skin, and gradually fading from opaque to transparent, the human mind is able to reconstruct the whole object from just a few hints while inspecting the detailed internal structures. This fact is commonly exploited by illustrators for static images. For interactive exploration the effect becomes even more pronounced and causes a very strong impression of depth.

Finally, Figure 7 shows a CT scan of a human torso. While the image shows many of the features contained in the data set, no depth ambiguities occur, as the opacity is selectively varied.

As some of the strengths of our model are most visible in animated viewing, several supplementary video sequences are available for download at: http://www.cg.tuwien.ac.at/research/vis/adapt/2004_cpvr

5. Discussion

The context-preserving volume rendering model presents an alternative to conventional clipping techniques. It provides a simple interface for examining the interior of volumetric data sets. In particular, it is well-suited for medical data, which commonly have a layered structure. Our method provides a mechanism to investigate structures of interest that are located inside a larger object with similar value ranges, as is often the case with contrast-enhanced CT data. Landmark features of the data set are preserved. Our approach does not require any form of pre-processing, such as segmentation. The two parameters κ_t and κ_s allow intuitive control over the visualization: κ_t is used to interactively browse through the volume, similar to the variation of the depth of a clipping plane; κ_s normally remains fixed during this process and is only later adjusted to achieve a visually pleasing result.

Figure 7: Contrast-enhanced CT scan of a torso rendered using context-preserving volume rendering.

While we have found that these parameters provide sufficient control, a possible extension is to make them data-dependent, i.e., they are defined by specifying a transfer function. This increases the flexibility of the method, but also raises the burden on the user, as transfer function specification is a complex task. Thus, we propose a hybrid solution between both approaches. We keep the global constants κ_t and κ_s, but their values are modulated by a simple function of the scalar value. In Equation 5, κ_t is replaced by κ_t · λ_t(f_{P_i}) and κ_s is replaced by κ_s · λ_s(f_{P_i}). Both λ_t and λ_s are real-valued functions in the range [0..1]. For example, the user can specify zero for λ_t to make some regions impenetrable. Likewise, setting λ_s to zero for certain values ensures pure gradient-magnitude opacity-modulation. If one of these functions has a value of one, the corresponding global parameter remains unchanged. Figure 8 (a) shows the visible human male CT data set rendered using just the global parameters, while in Figure 8 (b) bone is made impenetrable by setting λ_t to zero for the corresponding values.
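As a sketch, the hybrid scheme amounts to scaling the two constants per sample before evaluating Equation 5 (Python; λ_t and λ_s are caller-supplied functions of the scalar value, and the threshold in the example below is a hypothetical one, not from the paper):

```python
def data_dependent_alpha(alpha_tf, grad_mag, shading, depth, alpha_acc,
                         scalar, kappa_t, kappa_s, lambda_t, lambda_s):
    """Equation 5 with kappa_t, kappa_s modulated by lambda_t(f), lambda_s(f).

    lambda_t / lambda_s map the scalar value to [0, 1]; a value of one
    leaves the corresponding global parameter unchanged.
    """
    kt = kappa_t * lambda_t(scalar)  # kappa_t * lambda_t(f_{P_i})
    ks = kappa_s * lambda_s(scalar)  # kappa_s * lambda_s(f_{P_i})
    exponent = (kt * shading * (1.0 - depth) * (1.0 - alpha_acc)) ** ks
    return alpha_tf * grad_mag ** exponent

# Example: make high scalar values ("bone", hypothetical threshold) impenetrable.
bone_lambda_t = lambda f: 0.0 if f > 0.7 else 1.0
```

With λ_t = 0 the exponent vanishes and the sample keeps its full transfer-function opacity, which is exactly the impenetrable-bone effect of Figure 8 (b).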

Figure 8: Context-preserving volume rendering of the visible human male CT data set. (a) Only global parameter settings are used. (b) Bone is made impenetrable by using data-dependent parameters.

Further degrees of freedom of our method are provided by its close connection to the illumination model. By changing the direction of a directional light source, for example, features can be interactively highlighted or suppressed, based on their orientation. Modifying the diffuse and specular factors will result in a variation of directional dependency, while adjusting the ambient component has a global influence. As means of changing these illumination properties are included in every volume visualization system, this additional flexibility will not increase the complexity of the user interface.

Our method does not pose any restrictions on the type of transfer function used. The model could be applied to modulate the opacity retrieved from a multi-dimensional transfer function without changes. Likewise, the modulation functions λ_t and λ_s could be multi-dimensional. For reasons of simplicity, we have only considered simple one-dimensional transfer functions in this paper.

6. Implementation

The implementation of context-preserving volume rendering is straightforward, as it only requires a simple addition in the compositing routine of an existing volume rendering algorithm. The model only uses quantities which are commonly available in every volume renderer, such as gradient direction and magnitude and the depth along a viewing ray. We have integrated this method into a high-quality software volume ray casting system for large data [GBKG04a, GBKG04b]. It could also be used in an implementation using graphics hardware, such as GPU-based ray casting presented by Krüger and Westermann [KW03].

In our implementation, the most costly part of the context-preserving volume rendering model are the exponentiations. However, as with the specular term of the Phong illumination model, it is sufficient to approximate the exponentiation with a function that evokes a similar visual impression. Schlick [Sch94] proposed to use the following function:

x^n ≈ x / (n − n·x + x)    (6)

Thus, we can approximate the two exponentiations used by our model in the following way:

x^((a·y)^b) ≈ x · (b + a·y − a·b·y) / (a·y + b·(x − a·x·y))    (7)

This can be efficiently implemented using just 4 multiplications, 2 additions, 1 subtraction, and 1 division. In our implementation, this optimization reduced the cost from a 15% to a 5% increase compared to normal direct volume rendering.
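To sanity-check the algebra, a possible Python sketch applies Schlick's formula (Equation 6) twice - once to the inner power and once to the outer one - and compares it against a single closed-form expression that is algebraically identical to that composition (the numeric inputs below are arbitrary):

```python
def schlick_pow(x, n):
    """Schlick's approximation x**n ~ x / (n - n*x + x) (Equation 6)."""
    return x / (n - n * x + x)

def approx_double_pow(x, a, y, b):
    """Approximation of x ** ((a*y) ** b), obtained by applying
    Equation 6 to (a*y)**b and then again to x**m, folded into one
    closed-form expression with a single division."""
    return x * (b + a * y - a * b * y) / (a * y + b * (x - a * x * y))
```

Composing the two calls directly, schlick_pow(x, schlick_pow(a * y, b)), yields the same value as the closed form, which is a quick way to verify the derivation.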

7. Conclusions

The focus of our research was to develop an effective alternative to conventional clipping techniques for volume visualization. It preserves context information when inspecting the interior of an object. The context-preserving volume rendering model is inspired by artistic illustration. The opacity of a sample is modulated by a function of shading intensity, gradient magnitude, distance to the eye point, and previously accumulated opacity. The model is controlled by two parameters which provide a simple user interface and have intuitive meaning. Context-preserving volume rendering is a unifying extension of both direct volume rendering and gradient-magnitude opacity-modulation and allows a smooth transition between these techniques. The approach simplifies transfer function specification, as the user only needs to specify constant opacity for structures of interest. Variation of the parameters of the model can then be used to interactively explore the volumetric data set. The achieved results have a strong resemblance to artistic illustrations and, thus, despite their increased information content, are very easy to interpret. Furthermore, as this approach adds little complexity to conventional direct volume rendering, it is well-suited for interactive viewing and exploration.

We believe that research dealing with the inclusion of low-level and high-level concepts from artistic illustration is very beneficial for the field of volume visualization. The continuation of this research will include the further investigation of the presented approach as well as other illustrative volume rendering techniques in the context of an automated intent-based system for volume illustration [SF91, VKG04].


8. Acknowledgements

The presented work has been funded by the ADAPT project (FFF-804544). ADAPT is supported by Tiani Medgraph, Austria (http://www.tiani.com), and the Forschungsförderungsfonds für die gewerbliche Wirtschaft, Austria. See http://www.cg.tuwien.ac.at/research/vis/adapt for further information on this project.

References

[CMH∗01] Csébfalvi B., Mroz L., Hauser H., König A., Gröller M. E.: Fast visualization of object contours by non-photorealistic volume rendering. Computer Graphics Forum 20, 3 (2001), 452–460.

[ER00] Ebert D. S., Rheingans P.: Volume illustration: non-photorealistic rendering of volume models. In Proceedings of IEEE Visualization 2000 (2000), pp. 195–202.

[GBKG04a] Grimm S., Bruckner S., Kanitsar A., Gröller E.: Memory efficient acceleration structures and techniques for CPU-based volume raycasting of large data. In Proceedings of the IEEE/SIGGRAPH Symposium on Volume Visualization and Graphics 2004 (2004), pp. 1–8.

[GBKG04b] Grimm S., Bruckner S., Kanitsar A., Gröller M. E.: A refined data addressing and processing scheme to accelerate volume raycasting. Computers & Graphics 28, 5 (2004), 719–729.

[HMBG01] Hauser H., Mroz L., Bischi G. I., Gröller M. E.: Two-level volume rendering. IEEE Transactions on Visualization and Computer Graphics 7, 3 (2001), 242–252.

[KKH01] Kniss J., Kindlmann G., Hansen C.: Interactive volume rendering using multi-dimensional transfer functions and direct manipulation widgets. In Proceedings of IEEE Visualization 2001 (2001), pp. 255–262.

[KVPL04] Konrad-Verse O., Preim B., Littmann A.: Virtual resection with a deformable cutting plane. In Proceedings of Simulation und Visualisierung 2004 (2004), pp. 203–214.

[KW03] Krüger J., Westermann R.: Acceleration techniques for GPU-based volume rendering. In Proceedings of IEEE Visualization 2003 (2003), pp. 287–292.

[KWTM03] Kindlmann G., Whitaker R., Tasdizen T., Möller T.: Curvature-based transfer functions for direct volume rendering: Methods and applications. In Proceedings of IEEE Visualization 2003 (2003), pp. 513–520.

[Lev88] Levoy M.: Display of surfaces from volume data. IEEE Computer Graphics and Applications 8, 3 (1988), 29–37.

[LM04] Lum E. B., Ma K.-L.: Lighting transfer functions using gradient aligned sampling. In Proceedings of IEEE Visualization 2004 (2004), pp. 289–296.

[LME∗02] Lu A., Morris C. J., Ebert D. S., Rheingans P., Hansen C.: Non-photorealistic volume rendering using stippling techniques. In Proceedings of IEEE Visualization 2002 (2002), pp. 211–218.

[RE01] Rheingans P., Ebert D. S.: Volume illustration: Nonphotorealistic rendering of volume models. IEEE Transactions on Visualization and Computer Graphics 7, 3 (2001), 253–264.

[Sch94] Schlick C.: A fast alternative to Phong's specular model. In Graphics Gems IV, Heckbert P. (Ed.). Academic Press, 1994, pp. 385–387.

[SF91] Seligmann D. D., Feiner S. K.: Automated generation of intent-based 3D illustrations. In Proceedings of ACM Siggraph 1991 (1991), pp. 123–132.

[VKG04] Viola I., Kanitsar A., Gröller M. E.: Importance-driven volume rendering. In Proceedings of IEEE Visualization 2004 (2004), pp. 139–145.

[WEE02] Weiskopf D., Engel K., Ertl T.: Volume clipping via per-fragment operations in texture-based volume visualization. In Proceedings of IEEE Visualization 2002 (2002), pp. 93–100.

[WEE03] Weiskopf D., Engel K., Ertl T.: Interactive clipping techniques for texture-based volume visualization and volume shading. IEEE Transactions on Visualization and Computer Graphics 9, 3 (2003), 298–312.

[WK95] Wang S. W., Kaufman A. E.: Volume sculpting. In Proceedings of the Symposium on Interactive 3D Graphics 1995 (1995), pp. 151–156.

[ZDT04] Zhou J., Döring A., Tönnies K. D.: Distance based enhancement for focal region based volume rendering. In Proceedings of Bildverarbeitung für die Medizin 2004 (2004), pp. 199–203.