EUROGRAPHICS ’99 / P. Brunet and R. Scopigno (Guest Editors), Volume 18 (1999), Number 3

© The Eurographics Association and Blackwell Publishers 1999. Published by Blackwell Publishers, 108 Cowley Road, Oxford OX4 1JF, UK and 350 Main Street, Malden, MA 02148, USA.

Data Intermixing and Multi-volume Rendering

Wenli Cai and Georgios Sakas

Fraunhofer Institute for Computer Graphics, Rundeturmstrasse 6, 64283 Darmstadt, Germany
email: {wcai, gsakas}@igd.fhg.de

Abstract

The main difference between multi-volume rendering and mono-volume rendering is data intermixing. In this paper, we present three levels of data intermixing and their rendering pipelines in direct multi-volume rendering, which discriminate image level intensity intermixing, accumulation level opacity intermixing, and illumination model level parameter intermixing. In the context of radiotherapy treatment planning, different data intermixing methods are applied to three volumes, including CT volume, Dose volume, and Segmentation volume, to compare the features of the different data intermixing methods.

Keywords: Scientific Visualisation, Volume Rendering, Multi-volume Rendering

1. Introduction

Many applications in scientific visualisation require visual output generated from several volumes (called multi-volume) rather than only one volume (called mono-volume). Especially applications in Radiotherapy Treatment Planning (RTP) involve three volumes: CT volume, Dose volume, and Segmentation volume, and require the visualisation of the dose distribution in the patient body (CT volume or Segmentation volume).

Surface rendering (SR) and direct volume rendering (DVR) are the two principal volume visualisation approaches. The main difference lies in the rendering primitive: geometric primitives (including patches and lines) for SR and a volume primitive (the voxel) for DVR. In SR, the objects in the volume are first modelled with lines or patches by iso-surface modelling methods, typically the Marching Cubes algorithm [LoCl87]. Then the geometric objects are rendered using traditional surface rendering techniques (hardware or software). In DVR, no intermediate geometric objects are modelled. Voxels are directly projected to the viewing plane, as in the DVR algorithms in [DrCH88, Kaji84, Levo88].

Assuming the general case of volume visualisation, objects are represented either by an iso-surface mesh (like a triangle mesh or quadrilateral mesh) or by voxels. The different primitives have three possible intermixing options: geometric primitive with geometric primitive (surface-surface), geometric primitive with voxel (surface-voxel), and voxel with voxel (voxel-voxel). The first option is straightforward: merge the geometry data sets and render them. The second intermixing option is discussed and solved by several papers [KaYC90, Frue91]. In this paper we discuss the last option, the voxel-voxel intermixing of direct multi-volume rendering.

In SR, patches and lines usually have no thickness. When two patches intersect each other, they are decomposed into smaller patches which only intersect along lines or points. Then all patches are sorted by depth along the viewing direction to create a patch list projected to each pixel. If two patches share an intersecting surface, the projection order of these two patches is created more or less randomly, either by the modelling order or by floating point error. In Figure 1, we rendered the intersecting surface of two neighbouring cubes with OpenGL [Open92]. Each cube is represented by 6 patches; one cube is blue, the other is yellow. The artefacts on the intersecting surface are very obvious: many yellow spots within the blue surface. Since patches have no thickness, SR uses the patch order list to decide visibility and to calculate the blended colour. Strictly speaking, the two patches are projected to the same pixel based on a certain priority measure rather than intermixed at the intersection point.

Different to SR, DVR maps the voxel intensity to optical parameters (like colour and opacity), then computes its contribution to the final image using different optical models (see [Nelso95] for different volume illumination models). Each voxel has a thickness, occupies a limited space domain, and its value is distributed continuously within the volume. When two voxels intersect, the optical parameters must be intermixed at each point within the intersection volume before they are projected on pixels. As a result, volume intermixing makes multi-volume DVR more complicated than, and different to, multi-volume SR, and creates artefact-free and more realistic images than SR.

In the next sections of this paper, we first present the optical volume illumination model and the rendering pipeline that we use in our application. Based on this model, three levels of data intermixing are discriminated: image level intensity intermixing, accumulation level opacity intermixing, and illumination model level parameter intermixing. Further, the intermixing methods of the different levels are discussed in detail. All of our examples and experiments are based on the application of radiotherapy treatment planning. The different intermixing methods are compared to each other to illustrate their features and their applications.

2. Background and Motivation

The optical properties affecting light passing through a “participating medium” are described by absorption, scattering, or emission of light from small particles [Nelso95]. In medical applications, we are looking for a simple and efficient physical model rather than for realistic rendering effects. Although there are various types of DVR models [Blinn82, KaBe84, Krue88, Sabe88] and algorithms [DrCH88, West90, LaLe94], all DVR models and algorithms implicitly rely on some adaptation or improvement of the simple source-attenuation model introduced by [JaDu80].

The differential equation of the source-attenuation model is

    dI/ds = e(s) - τ(s) I(s)        (Eq. 1)

where s is the length parameter along the ray; I(s) is the light intensity at distance s; e(s) is the source term; τ(s) is the attenuation coefficient. When e(s) is zero, the resulting image is an X-ray-like image.

Based on the source-attenuation model, the discrete front-to-back rendering algorithm is

    I(i) = I(i-1) + Is(i) * (1.0 - α(i-1))
    α(i) = α(i-1) + αs(i) * (1.0 - α(i-1))        (Eq. 2)

with i = 1, ..., n+1, where Is(i) and αs(i) are the local intensity and opacity at sampling point i; I(i) and α(i) are the accumulated intensity and accumulated opacity; the background intensity corresponds to Is(n+1).
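The recurrence of Eq. 2 can be sketched as a short compositing loop (a minimal illustration, not the authors' implementation; the early-termination threshold and the sample values are our additions):

```python
def composite_front_to_back(samples, background=0.0):
    """Accumulate intensity and opacity along one ray (Eq. 2).

    `samples` is a front-to-back list of (local_intensity, local_opacity)
    pairs; the background acts as a final, fully opaque sample Is(n+1).
    """
    I, alpha = 0.0, 0.0
    for Is, als in samples + [(background, 1.0)]:
        I += Is * (1.0 - alpha)       # I(i) = I(i-1) + Is(i) * (1.0 - alpha(i-1))
        alpha += als * (1.0 - alpha)  # alpha(i) = alpha(i-1) + als(i) * (1.0 - alpha(i-1))
        if alpha >= 0.999:            # early ray termination (our addition)
            break
    return I, alpha

# A fully opaque first sample hides everything behind it:
print(composite_front_to_back([(0.8, 1.0), (0.5, 1.0)]))  # -> (0.8, 1.0)
```
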

From the point of view of physics, when rays pass through a medium, such as a gas or liquid, the basic optical properties remain the same no matter whether we regard a mono-volume space (such as one liquid) or a multi-volume space (such as a mixture of several liquids). Both mono-volumes and multi-volumes are mixed media, only with different compounds. Their individual illumination models are basically the same before and after the volume data are intermixed in the volume space. Therefore, the main difference between mono-volume rendering and multi-volume rendering is the data intermixing.

[ChLe96] described a multi-volume rendering algorithm in which the bounding boxes of the different volumes do not intersect each other. In fact, no intermixing between different volumes happens in this situation. Each ray is divided into several line segments, which belong to different volumes. The line segments are calculated from the intersections of the ray with the bounding boxes of the different volumes. In this case, the rendering is the same as mono-volume rendering except that sampling within different line segments is done in different volumes.

[JaRo97] presented a multi-volume DVR aiming at the visual assessment of 3D image registration. The paper is based on the material classification and mixture model [DrCH88]. At each sampling point, the percentages of the different materials in the different volumes are calculated first. Then, the material percentages of the different volumes are intermixed by merging rules designed by the authors, which ensure that the new percentages sum to one. The main restriction of this method is that the volumes must have the same material properties (or physical information), such as in the comparison of two CT data sets. Hence, this method cannot be applied to multi-volumes with different material properties. For example, it is impossible to calculate the percentages of “materials” in an irradiation volume (there is no material percentage in a dose volume, according to its physical background) and intermix them with percentages calculated from a CT volume.

Figure 1: Artefacts in intersected patches

In this paper, the application context is Radiotherapy Treatment Planning (RTP). Usually RTP involves three volumes: CT volume, Dose volume, and Segmented Object volume (which we call Segmentation volume in this paper). The CT volume is the patient CT scan data, the value of which is the CT Hounsfield number. The Dose volume is the radiation distribution calculated by the physicist during RTP; its value is the dosimeter energy. The Segmentation volume contains the objects segmented from the CT scans, such as the tumour, critical organs, or other important tissues; its value is a segmented object ID number corresponding to an object look-up table¹. The three volumes have different physical properties. For simplicity, we define that the three volumes share the same geometry size and position - the same X, Y, Z length and no rotation against each other². But different volumes may have different resolutions; for example, the resolutions of the CT and Dose volumes are usually different.

The purpose of RTP is to deliver an adequate dose to the tumour while minimising damage to surrounding normal tissues. Thus, it requires the physicians to investigate the relationship among tumour, normal tissues or organs, and dose distribution, the visual output of which comes from multi-volumes. Traditional RTP systems are based on SR: all objects are represented by surface meshes, segmented object surfaces, or iso-dose surfaces. In this paper, there is no surface representation in any volume. All images are rendered by multi-volume DVR, including iso-volumes rather than iso-surfaces.

3. Rendering Pipeline

In this paper, the DVR algorithm is the ray casting algorithm based on Eq. 1 and Eq. 2. The main rendering pipeline is similar to the mono-volume rendering pipeline described in [Saka93]. The data flow through three stages: Geometric Transformation, Integration-in-Depth, and Mapping. In the context of this paper, since the different volumes share the same size and position, the geometric transformation and the mapping are the same as in mono-volume rendering. The difference is in the second stage, see Figure 2.

1 A segmented object look-up table contains the ID of the object, its name, colour, opacity, and other visual parameters; the object ID is the index into the table.
2 There is no restriction on the geometric orientation and position of the volumes. If different volumes are translated or rotated against each other, a ray is split into several segments, each belonging to one or several volumes. Within a one-volume segment, sampling is the same as for a mono-volume; within a multi-volume segment, intermixing is done as described later in this paper.

Since the resolutions of the multi-volumes differ in our context, the same start and end points in world coordinates relate to different traversing coordinates in the volume data spaces. In addition, the number of sampling points along the same ray differs between volumes due to the different resolutions. For simplicity, we use the largest number of sampling points as the number of steps of the ray traversal in the multi-volumes, which results in an identical loop for all volumes, although this causes oversampling in the lower-resolution volumes. At each sampling point along the ray, the values of the different volumes are obtained by sampling and interpolation in the corresponding volume data spaces. Thus, the first two steps of the Integration-in-Depth stage become parallel copies of the corresponding steps in mono-volume rendering.
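The common sampling loop described above can be sketched in one dimension (a hedged illustration; the function and volume names are ours, and real rays apply this per axis):

```python
def ray_sample_coords(n_steps, resolutions):
    """Map every sampling step along a ray to each volume's own voxel space.

    All volumes share the same world-space extent, so step i at parameter
    t = i / (n_steps - 1) lands at t * (res - 1) in a volume with `res`
    voxels along the ray; lower-resolution volumes are simply oversampled
    by the common loop.
    """
    return {name: [i / (n_steps - 1) * (res - 1) for i in range(n_steps)]
            for name, res in resolutions.items()}

# n_steps is taken from the highest-resolution volume (256 here, hypothetical):
coords = ray_sample_coords(256, {"CT": 256, "Dose": 64})
```

The coarser Dose volume is visited 256 times as well, so both volumes can be sampled inside one identical loop, at the price of oversampling the Dose data.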

After that, data intermixing may be involved in different steps of the Integration-in-Depth stage: in the illumination step, in the accumulation step, or after accumulation. These correspond to illumination model level intermixing, accumulation level intermixing, and image level intermixing, respectively. Different intermixing levels result in different rendering pipelines. Figure 3 illustrates the three rendering pipelines corresponding to the three intermixing levels discussed in this paper. We use two volumes in our example.

Figure 2: Multi-volume rendering pipeline. The start and end points and the multi-volume data feed the Geometric Transformation stage, followed by Traversing*, Sampling & Interpolation*, Illumination*, Accumulation*, Mapping, and the resulting Pixel Intensity and Opacity*. (* marks steps involved in multi-volume rendering.)


4. Volume Intermixing

Based on these different rendering pipelines, intermixing is discriminated into three levels: illumination model level parameter intermixing, accumulation level opacity intermixing, and image level intensity intermixing. In this section, we present the intermixing methods of the different levels and the different rendering algorithms at each level.

4.1 Image Level Intermixing

The simplest way to multi-volume rendering is to merge the two rendered images. For each volume, one result image (intensity and opacity after Integration-in-Depth) is rendered respectively. Then, the final image is calculated by intermixing the pixel intensity and opacity coming from the two volumes. We call this image level intermixing, see Figure 3(c).

There are different options in image level intermixing:
• intensity intermixing only,
• intensity intermixing with opacity, and
• intensity intermixing with opacity and Z-buffer depth.

4.1.1 Intensity Intermixing Only

Defining an intermixing factor (or weight) w, where I1 and I2 are the intensities coming from the two different images, the result image intensity is the linear combination of the two images, i.e.

    I = w * I1 + (1.0 - w) * I2        (Eq. 3)

The visual effect of Eq. 3 is a fade-in and fade-out of the two images when the weight is changed interactively. With this effect, the physicians can investigate the relationship of objects in the two volumes on the projection plane, see Figure 4(a).

Figure 3: Rendering pipelines of different intermixing modes. (a) Illumination model level intermixing; (b) accumulation level intermixing; (c) image level intermixing. In each pipeline the two volumes are traversed, sampled, and interpolated in parallel; * marks the stage where intermixing is performed (Illumination in (a), Accumulation in (b), Pixel Intermixing in (c)).

4.1.2 Intensity Intermixing with Opacity

Instead of an intensity intermixing factor, another way is to define an opacity intermixing factor (opacity weight). Then, the intermixing is as follows:

    I = w * I1 + (1.0 - w * α1) * I2        (Eq. 4)

where αi is the accumulated opacity of the pixel in image i.

If α1 is 1.0 in the first image, the result is the same as in Eq. 3. But at pixels where α1 is zero (and I1 is zero too), the result pixel intensity I is replaced by the intensity I2. The depth cueing looks more reasonable than linearly scaling the intensity I2 as in Eq. 3, especially where objects do not occlude each other, see Figure 4(b). Although the weight in Figure 4(b) is the same as in Figure 4(a), the eyes and other unoccluded objects are clearer and sharper than in Figure 4(a).

4.1.3 Intensity Intermixing with Opacity and Z-buffer Depth

The main problem of image level intermixing is the lack of correct depth cueing in the overlapping area, where different objects coming from different data sets overlap. For example, in Figure 4(a) and Figure 4(b) it is difficult to determine whether the tumour (the red object) is located inside or outside the iso-volume (the gray object).

We partially solved the problem using the Z-buffer value³ of each pixel. First, for pixels in a non-overlapping area, we just copy the intensity to the result pixel. For the overlapping pixels, before intermixing with Eq. 4, we sort their values according to their depth in the Z-buffer. If d1 and d2 are the Z-buffer depth values of the two images, then the intermixing is

    I = w * I1 + (1.0 - w * α1) * I2 ,  if d1 ≤ d2
    I = w * I2 + (1.0 - w * α2) * I1 ,  if d1 > d2        (Eq. 5)

In Figure 4(c), it is quite clear that the iso-volume is located inside the tumour.
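Per pixel, Eqs. 4 and 5 can be sketched as follows (a hedged illustration; the function and variable names are ours):

```python
def intermix_pixels(I1, a1, d1, I2, a2, d2, w):
    """Image level intermixing with opacity and Z-buffer depth (Eqs. 4/5).

    I, a, d are a pixel's intensity, accumulated opacity, and Z-buffer
    depth in each image. The nearer pixel keeps the weight w; the farther
    pixel shines through the remaining (1.0 - w * alpha) fraction.
    """
    if d1 <= d2:
        return w * I1 + (1.0 - w * a1) * I2   # image 1 in front (Eq. 4)
    return w * I2 + (1.0 - w * a2) * I1       # image 2 in front

# With full weight, an opaque nearer pixel completely hides the farther one:
print(intermix_pixels(0.9, 1.0, 5.0, 0.4, 1.0, 9.0, 1.0))  # -> 0.9
```

Swapping the depth values makes the other image's pixel dominate, which is exactly the depth cue missing from Eqs. 3 and 4 alone.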

3 In volume rendering, the Z-buffer value is defined as the depth along the ray from the view point to the first visible voxel of a certain segmented object.

Image level intermixing is very simple, and we do not need to change the basic rendering algorithm. The drawback is that the results do not provide exact depth cueing between the two volumes. The user can change the mixing factor interactively and view the fade-in and fade-out of the two images to compensate for this disadvantage. Correct depth cueing comes from data intermixing, i.e. intermixing during sampling, presented below.

(a) Intensity intermixing only
(b) Intensity intermixing with opacity
(c) Intensity intermixing with opacity and Z-buffer depth

Figure 4: Image level intermixing

4.2 Accumulation Level Intermixing

Within the ray casting loop of accumulation level intermixing, one ray samples the voxel values of the different volumes at each point and mixes their visual contributions step by step along the ray in the accumulation step of the Integration-in-Depth stage, see Figure 3(b). The accumulation involves opacity and intensity, which are derived from the voxel density by a transfer function in the volume illumination model. Since different volumes have different physical backgrounds and value ranges, their transfer functions are also different. But their opacities and intensities have the same range (0.0 to 1.0) and the same optical properties after the mapping of the transfer functions. Thus, another way is to intermix the opacities and intensities coming from the different volumes during the accumulation step.

In detail, we designed several ways to intermix opacity:
• exclusive opacity,
• inclusive opacity, and
• a mixture of the above two.

4.2.1 Exclusive Opacity

The opacity at one sampling point is one of the opacities calculated from the two volumes,

    opacity = Exclusive(opacity1, opacity2)
            = opacity1 ,  if opacity1 ≠ 0
            = opacity2 ,  otherwise        (Eq. 6)

From Eq. 6, opacity1 has higher priority than opacity2, see Figure 5. If we define that the Segmentation volume has higher priority than the CT volume, the result displays the position of the segmented objects in the CT data, an aid to assess the position of the segmented objects in the anatomical structure and to localise the tumour and other critical tissues.
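As a sketch, Eq. 6 is a simple two-operand rule (the function name is ours); with the Segmentation volume as volume 1, its opacity overrides the CT opacity wherever a segmented object is present:

```python
def exclusive(op1, op2):
    """Exclusive opacity (Eq. 6): wherever volume 1 is present
    (opacity1 != 0) it completely overrides volume 2."""
    return op1 if op1 != 0.0 else op2
```

For example, `exclusive(0.8, 0.3)` keeps the segmentation opacity 0.8, while `exclusive(0.0, 0.3)` falls back to the CT opacity 0.3.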

4.2.2 Inclusive Opacity

Inclusive opacity sums up the accumulated effect caused by the opacities of both volumes and regards it as the opacity at the current point. So, the resulting opacity at the sampling point is,

    opacity = 1.0 - (1.0 - opacity1) * (1.0 - opacity2)
            = opacity1 + opacity2 - opacity1 * opacity2        (Eq. 7)

Then, the intensity is

    I = norm_opacity1 * I1 + norm_opacity2 * I2        (Eq. 8)

with norm_opacityi = opacityi / Σn opacityn, where I is the object colour, I = (R, G, B).

Figure 5: Exclusive opacity

(a) Gradient surface of CT volume and iso-dose volume
(b) Segmentation volume and iso-dose volume

Figure 6: Inclusive opacity


In Eq. 8, we use the normalised opacity instead of the original opacity. It means that a high opacity voxel contributes more to the intensity than a low one.

Although opacity and intensity may be mapped by different methods, for example gradient opacity, iso-value opacity, or attenuation opacity, they can always be intermixed by Eq. 7 and Eq. 8 after mapping them to the corresponding opacity and intensity. In Figure 6(a), the gradient surface [Levo88] of the CT volume is intermixed with the iso-dose volume. In Figure 6(b), the Segmentation volume is intermixed with the iso-dose volume. The colour and opacity of each segmented object are taken from a user-defined lookup table, while the opacity of the iso-dose volume is calculated by a function from [Levo88].

In fact, inclusive opacity can not only be used in volume intermixing; the method has also been used in surface and volume (surface-voxel) intermixing [Frue91]. Generally speaking, it can be applied in most intermixing cases if we can map the voxel value (or any object, like a surface or line) to an opacity and intensity.

In ray casting, other accumulation parameters can also be inclusively intermixed with the normalised opacity as in Eq. 8. For example, the surface gradient, which is used in the rendering model of [Levo88], can be intermixed with the normalised opacity in the same way as the intensity in Eq. 8 to compute the final gradient vector at each pixel. In fact, the gradient in Figure 6(a) is accumulated with normalised gradient intermixing.
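Eqs. 7 and 8 can be sketched per sampling point (a minimal illustration with our own names; colour channels and gradient components intermix in the same way as the scalar intensity here):

```python
def inclusive(op1, I1, op2, I2):
    """Inclusive opacity (Eq. 7) with opacity-normalised intensity (Eq. 8)
    at one sampling point. Returns (opacity, intensity)."""
    op = op1 + op2 - op1 * op2          # Eq. 7: 1 - (1 - op1) * (1 - op2)
    total = op1 + op2
    if total == 0.0:
        return 0.0, 0.0                 # both volumes transparent here
    I = (op1 * I1 + op2 * I2) / total   # Eq. 8 with normalised opacities
    return op, I
```

For example, `inclusive(0.9, 1.0, 0.1, 0.0)` yields an intensity of 0.9: the nearly opaque first sample dominates the intermixed result, as intended by the normalisation.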

4.2.3 Combining Exclusive and Inclusive Opacity

Exclusive and inclusive opacity intermixing can be regarded as two-operand operators. They can therefore be applied recursively to more volumes. For example, two volumes intermixed by exclusive opacity can further be intermixed by inclusive opacity with a third volume, and so on. In our case, we can intermix the three volumes together as follows:

    Volume = Inclusive(Dose, Exclusive(Segmentation, CT))        (Eq. 9)

see Figure 7. The Segmentation volume and the CT volume (encoded as yellow) are intermixed by the Exclusive operation. Then, the iso-dose volume (encoded as gray) is intermixed with the intermediate result by the Inclusive operation. Comparing Figure 6 and Figure 7, in Figure 7 not only the segmented objects but also the other anatomical structures are rendered, so that their relationship with the dose distribution can be viewed.
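Treating the two rules as two-operand operators, the three-volume combination of Eq. 9 can be sketched on (opacity, intensity) samples (a hedged sketch; the function names and sample values are hypothetical):

```python
def exclusive_mix(a, b):
    """Exclusive opacity (Eq. 6) on (opacity, intensity) samples."""
    return a if a[0] != 0.0 else b

def inclusive_mix(a, b):
    """Inclusive opacity (Eqs. 7/8) on (opacity, intensity) samples."""
    (op1, I1), (op2, I2) = a, b
    op = op1 + op2 - op1 * op2
    total = op1 + op2
    I = 0.0 if total == 0.0 else (op1 * I1 + op2 * I2) / total
    return op, I

def intermix_rtp(dose, seg, ct):
    """Eq. 9: Volume = Inclusive(Dose, Exclusive(Segmentation, CT))."""
    return inclusive_mix(dose, exclusive_mix(seg, ct))

# Outside any segmented object (seg opacity 0) the CT sample is used,
# then mixed inclusively with the dose field:
sample = intermix_rtp(dose=(0.5, 0.8), seg=(0.0, 0.0), ct=(0.4, 0.2))
```

Because both operators take and return the same (opacity, intensity) pairs, they compose freely, which is what allows the recursive combination over more than two volumes.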

4.3 Illumination Model Level Intermixing

Accumulation level intermixing in Section 4.2 is the opacity and intensity intermixing within the accumulation step of the rendering pipeline. In this section we discuss the intermixing in the illumination model, see Figure 3(a). In this case, the opacity and intensity at each sampling point are calculated directly from a multi-volume illumination model, rather than intermixed from several opacity and intensity values. The rendering pipeline, including accumulation, is the same as with mono-volume rendering.

4.3.1 Attenuation Coefficient Intermixing

According to the illumination model in Section 2, the transparency of the medium between 0 and s is

    T(s) = exp(-Γ) = exp(-∫₀ˢ τ(t) dt)        (Eq. 10)

where Γ is the optical depth and τ is the attenuation coefficient.

Different volumes have different attenuation coefficients. Instead of intermixing the opacities T(s) of the different volumes, we intermix their τ values. The final attenuation coefficient is regarded as the combination of all individual coefficients of the different volumes. For example, we can define Eq. 11 as the attenuation coefficient intermixing:

    τ(s) = w1 * τ1 + w2 * τ2 + ... + wn * τn        (Eq. 11)

where wn is the weight of τn respectively.

Since the ranges of the gray values in different volumes are different, their transfer functions are also different⁴. In order to keep a visual balance among the multi-volumes, the weights wn are necessary; otherwise, one volume may appear too dark or too bright in the rendered image compared to another one. We can also assign different weights to emphasise different volumes in the result image. If we only consider attenuation, an appropriate example is the X-ray image rendering of the CT and Dose volumes, see Figure 8, which displays the whole dose distribution in the CT volume in the form of an X-ray image familiar to the physicians in RTP. Such a coloured X-ray image with the dose distribution on it is impossible to obtain using normal X-ray equipment.

4 This results in different visual effects (brighter or darker) in the case of X-ray volume rendering. In mono-volume rendering, we usually use contrast and intensity to adjust the final display of the rendered image.

Figure 7: Mixture of exclusive and inclusive opacity

In X-ray volume rendering, the optical length Γ is defined as

    Γ(s) = Σ (Kλ(s) * Δs) = Σ (Kλ,m * g * Δs)        (Eq. 12)

where Kλ is the attenuation coefficient; Kλ,m is the mass attenuation coefficient; g is the gray value; Δs is the length of the sampling step. So, considering Eq. 11 and Eq. 12, the final optical length of the ray is

    Γ(s) = Σ (w1 * Kλ,m1 * g1 + w2 * Kλ,m2 * g2) * Δs        (Eq. 13)

where the subscript (1 or 2) refers to the involved volume.

Using Eq. 13, we can directly calculate the optical length of the ray and convert it to the pixel opacity with Eq. 10. Finally, the opacity and intensity of each ray are calculated without any intermediate calculation of opacity and intensity.

Compared with the opacity intermixing of Section 4.2: if two volumes are rendered by X-ray multi-volume rendering, intermixing via the intermediate opacities calculated by the inclusive opacity method requires the evaluation of an exponential function at each sampling point. Attenuation coefficient intermixing is therefore much more efficient than opacity intermixing in X-ray multi-volume rendering, because the exponential evaluations at each step are avoided.
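The efficiency argument can be sketched as follows (a minimal sketch; the function and parameter names are ours, and treating the mass attenuation coefficients as given constants is our simplification):

```python
import math

def xray_pixel_opacity(gray1, gray2, w1, w2, K1, K2, ds):
    """Attenuation coefficient intermixing for X-ray rendering (Eq. 13).

    gray1/gray2: samples of the two volumes along one ray; K1/K2 stand in
    for the mass attenuation coefficients; w1/w2 are the balance weights;
    ds is the sampling step length. Only a single exp() per ray is needed
    (Eq. 10), instead of one exponential per sampling step as with
    opacity intermixing.
    """
    optical_length = sum((w1 * K1 * g1 + w2 * K2 * g2) * ds
                         for g1, g2 in zip(gray1, gray2))
    return 1.0 - math.exp(-optical_length)   # opacity = 1 - transparency
```

The inner loop is a pure weighted sum; the transcendental function is deferred to the very end of the ray, which is the efficiency gain described above.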

4.3.2 Illumination Property Intermixing

In the volume illumination model, the illumination properties include absorption, emission, scattering, and shadowing, which represent different optical phenomena. This suggests displaying different volume data with different illumination properties. In this section, we discuss the absorption and emission model within the multi-volume illumination model.

In a usual X-ray image (absorption-only model), the intensity is

    I(E) = I0 * (1.0 - T) + Ibackground * T        (Eq. 14)

where T = exp(-∫₀ᴱ τ(s) ds) is the transparency over [0, E] (from 0 to the eye point).

This is an absorption-only illumination model. Further, we can write the above formula as an accumulation of subsequent contributions,

I_i = I_(i-1) + Ī_(i-1) * (1.0 − t_i)
Ī_i = Ī_(i-1) * t_i      Eq. 15

where i = 1..n; I0 is the incoming intensity; t_i is the transparency at the sampling point; I is the accumulated intensity, with I = 0 at i = 0; Ī is the remaining (not yet absorbed) intensity, with Ī = I0 at i = 0.
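Assuming this incremental reading of the absorption-only model, in which a running transparency attenuates the incoming intensity I0 step by step, the loop can be sketched as follows (a hypothetical helper, not the authors' code):

```python
def absorption_only(ts, I0, I_bg=0.0):
    """Incremental absorption-only model (Eq. 15).
    ts: per-sample transparencies t_i along the ray;
    I0: incoming intensity; I_bg: background intensity.
    Returns the pixel intensity of Eq. 14."""
    I = 0.0   # accumulated intensity, I = 0 at i = 0
    T = 1.0   # accumulated transparency so far
    for t in ts:
        I += I0 * T * (1.0 - t)   # remaining intensity I0*T is absorbed
        T *= t
    # adding the attenuated background reproduces Eq. 14:
    # I(E) = I0 * (1 - T) + I_background * T, with T = prod(t_i)
    return I + I_bg * T
```

After the loop, the accumulated term equals I0 * (1 − Π t_i), i.e. the closed form of Eq. 14.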

In the absorption and emission illumination model, we assume that the dose volume is a kind of light-emitting field which illuminates the CT volume. So, according to the incremental formula Eq. 15, the intensity is,

I_i = I_(i-1) + (Ī_(i-1) + I_s) * (1.0 − t_i)
Ī_i = (Ī_(i-1) + I_s) * t_i      Eq. 16

where t_i is the transparency and I_s is the emission intensity at the sampling point; as in Eq. 15, I = 0 and Ī = I0 at i = 0.
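Under the same incremental reading, the absorption-and-emission model injects the dose-derived emission at each sample before attenuation. A minimal sketch (names are illustrative, not the authors' implementation):

```python
def absorption_emission(samples, I0):
    """Incremental absorption-and-emission model (Eq. 16).
    samples: (t_i, Is_i) pairs along the ray, where t_i is the
    transparency (from the CT-based absorption transfer function)
    and Is_i the emission intensity (from the dose-based emission
    transfer function); I0: incoming intensity."""
    I = 0.0          # accumulated pixel intensity, I = 0 at i = 0
    remaining = I0   # Ī: incoming plus injected emission, attenuated
    for t, Is in samples:
        I += (remaining + Is) * (1.0 - t)   # absorbed fraction shows up
        remaining = (remaining + Is) * t    # the rest travels on
    return I
```

With all emissions set to zero this reduces exactly to the absorption-only loop of Eq. 15, which is a useful sanity check.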

A problem is how to set the transfer functions of absorption and emission, since they belong to different volumes and have different value ranges. The transfer functions are critical for balancing the visual contributions of the two volumes. In our example (see Figure 9), the CT volume is related to the absorption property, while the emission property is calculated from the Dose volume independently. We defined two transfer functions: an absorption transfer function, which converts the CT intensity to the attenuation coefficient (τ in Eq. 10, related to t_i in Eq. 16), and an emission transfer function, which converts the dose value to the emission intensity (I_s in Eq. 16).

Figure 8: X-ray image of CT and dose volumes
Figure 9: Absorption and emission model
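The two transfer functions just described can be sketched as simple clamped linear ramps; all ranges and names here are hypothetical placeholders, not the authors' actual functions:

```python
def absorption_tf(ct, tau_max=5.0, ct_max=255.0):
    """Hypothetical absorption transfer function: maps a CT
    intensity linearly onto an attenuation coefficient
    (the tau of Eq. 10), clamped to [0, tau_max]."""
    return tau_max * min(max(ct / ct_max, 0.0), 1.0)

def emission_tf(dose, I_max=1.0, dose_max=100.0):
    """Hypothetical emission transfer function: maps a dose
    value linearly onto an emission intensity
    (the Is of Eq. 16), clamped to [0, I_max]."""
    return I_max * min(max(dose / dose_max, 0.0), 1.0)
```

In practice the shape of both ramps would be tuned interactively, since they balance the visual weight of the CT and Dose volumes against each other.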

Comparing Figure 8 and Figure 9, we can see that the anatomical structures in the beam, or in the dose distribution region, are clearer in Figure 9 than in Figure 8. In Figure 8 the beam shape overrides the anatomical structures: we cannot make out the anatomy in the centre of the beam. In Figure 9, however, the anatomical structures in the beam are quite clear, and even brighter than outside the beam. This is very useful for physicians in RTP, who need to see which anatomical structures lie in the beam, especially in its centre.

5. Comparison and Discussion

In Section 4, we presented three levels of data intermixing and several methods at each level. Each method has advantages and disadvantages compared with the others.

Image level intermixing is the simplest way. Since the intermixing is performed after the ray-casting loop, no change is needed inside the loop. The two images are mixed using the information available after the ray-casting loop, such as intensity, opacity and Z-buffer value. The main disadvantage is the lack of correct depth cueing. Although the Z-buffer value partially solves the problem, it is difficult to generate exact depth cueing using only the final pixel values. On the other hand, the fade-in and fade-out visual effect produced by interactively changing the intermixing weight is very useful for judging the projected positions of objects from different volumes.
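Image level intermixing reduces to a per-pixel operation on the two finished images. The sketch below (hypothetical helpers) shows the weighted blend and a Z-buffer variant; the latter illustrates why depth cueing is only partially restored, since an entire pixel wins or loses regardless of partial occlusion along the ray:

```python
def image_level_intermix(img1, img2, w):
    """Blend two final images pixel by pixel with an interactive
    weight w in [0, 1]; varying w gives the fade-in/fade-out effect."""
    return [w * a + (1.0 - w) * b for a, b in zip(img1, img2)]

def image_level_intermix_z(img1, z1, img2, z2):
    """Z-buffer-assisted variant: at each pixel the image whose
    surface is nearer to the eye wins outright."""
    return [a if za <= zb else b
            for a, za, b, zb in zip(img1, z1, img2, z2)]
```

No change inside the ray-casting loop is needed for either variant, which is exactly the appeal of this level.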

Accumulation level intermixing solves this problem by performing the intermixing at the accumulation level. Correct depth cueing is generated by intermixing opacity and intensity at every sampling point. Different rules can be designed for different requirements. This kind of intermixing can be used in general-purpose applications whenever voxel values can be converted to opacity and intensity at each sampling point, no matter whether it is gradient surface opacity, iso-volume opacity, or X-ray attenuation opacity. Already intermixed volumes can even be intermixed again with another volume. The disadvantage is that at each step we have to calculate the opacity and intensity for each individual volume, so the time consumption is sometimes high, as in the case of X-ray multi-volume rendering. We also have to change the rendering pipeline inside the loop.

At the illumination model level, we can intermix the contributions of different volumes into the illumination parameters. Further, we can use different illumination properties to emphasise individual volume contributions. This level of intermixing is more complicated than the other two cases and the intermixing model must be designed carefully, but the result looks more natural. However, it requires a new illumination model and deeper changes to the rendering pipeline. Table 1 lists the comparison of the three intermixing levels.

In some of the above intermixing methods (see Eq. 3, Eq. 4, Eq. 5, Eq. 11 and Eq. 13), weight factors appear in the equations. The purpose of these weights lies in the following two aspects.

• Since rendering runs at interactive speed, interactively changing the weight generates a fade-in / fade-out effect, which is very useful for telling apart the objects from different volumes that project onto the same overlapped area.

• The visual output requirements of different cases and different users vary considerably. Even for the same data sets, the visual emphasis differs under different visualisation requirements. For example, sometimes the physicians are more interested in the anatomical structures inside the beam, sometimes more in the beam structure itself. We therefore decided to let the users select the best visual effect by changing the weight interactively, instead of automatically.

6. Conclusion

Volume (voxel-voxel) intermixing is different from surface-surface intermixing and surface-voxel intermixing. In this paper, we presented three levels of data intermixing and their corresponding rendering pipelines for voxel-voxel intermixing in multi-volume rendering. The different volume intermixing levels and methods have their own features. From the above discussion we conclude:

Table 1: Comparison of different intermixing levels

Intermixing Level          Change in Rendering Pipeline   Image Quality   Rendering Speed   Application Domain
Image level                Little                         Low             Slow              All
Accumulation level         More                           Middle / High   Slow              Almost all
Illumination model level   More                           High            Fast              Limited


• Image level intermixing is easier to implement than the other two cases, but suffers from incorrect depth cueing.

• Accumulation level intermixing is a general way to intermix all kinds of illumination via opacity and intensity, and requires moderate changes to the rendering pipeline.

• Illumination level intermixing is a high-level intermixing but cannot be applied to all application domains. This method needs special-purpose illumination models and more changes to the rendering pipeline.

• Illumination level intermixing is a promising direction for generating high-quality intermixing results. This area has not yet been fully investigated.

• Although our application context is RTP, the intermixing methods presented here can also be applied in other multi-volume rendering contexts.

7. Acknowledgment

The RTP data (CT, Dose and Segmentation volume data) used in this paper are courtesy of the German Cancer Research Center (DKFZ) in Heidelberg. The authors would like to thank Prof. Dr. Rolf Bendl and Mr. Michael Kieber for the use of the data, as well as the Deutsche Forschungsgemeinschaft (DFG) for partially sponsoring this work. Thanks also go to the reviewers for their valuable suggestions and careful review.

8. References

[Blin82] Blinn, J., Light Reflection Functions for Simulation of Clouds and Dusty Surfaces, ACM Computer Graphics, 16(3):21-29, 1982

[ChLe96] Chen, M., Leu, A., Parallel Multi-volume Rendering on Distributed Memory Architecture, First Eurographics Workshop on Parallel Graphics and Visualisation, Bristol 1996, pp 173-187

[DrCH88] Drebin, R., Carpenter, L., and Hanrahan, P., Volume Rendering, ACM Computer Graphics (ACM SIGGRAPH'88 Proceedings), 22(4):65-74, 1988

[Frue91] Fruehauf, M., Combining Volume Rendering with Line and Surface Rendering, EUROGRAPHICS'91 Proceedings, Elsevier Science Publishers, 1991, pp 21-32

[JaDu80] Jaffery, S., Dutta, K., Digital Reconstruction Methods for 3D Image Visualization, SPIE, Vol. 507, 1984

[JaRo97] Jacq, J., Roux, C., A Direct Multi-volume Rendering Method Aiming at Comparison of 3D Images and Models, IEEE Trans. on Information Technology in Biomedicine, 1(1):30-43, 1997

[KaBe84] Kajiya, J., Von Herzen, B., Ray Tracing Volume Rendering, ACM Computer Graphics, 18(3):143-150, 1982

[Kaji84] Kajiya, J., Ray Tracing Volume Densities, ACM Computer Graphics (ACM SIGGRAPH'84 Proceedings), 18(3):165-174, 1984

[KaYC90] Kaufman, A., Yagel, R., and Cohen, D., Intermixing Surface and Volume Rendering, in: 3D Imaging in Medicine: Algorithms, Systems, Applications, K. H. Hoehne, H. Fuchs, S. M. Pizer, Eds., Springer-Verlag, Berlin, 1990, pp 217-227

[Krue88] Krueger, W., Volume Rendering and Data Feature Enhancement, ACM Computer Graphics, 24(5), 1988

[LaLe94] Lacroute, P., and Levoy, M., Fast Volume Rendering Using a Shear-warp Factorisation of the Viewing Transform, ACM Computer Graphics (ACM SIGGRAPH'94 Proceedings), 28(3):451-459, 1994

[Levo88] Levoy, M., Display of Surfaces from Volume Data, IEEE CG&A, 8(3):29-37, 1988

[LoCl87] Lorensen, W.E., Cline, H.E., Marching Cubes: A High Resolution 3D Surface Construction Algorithm, ACM Computer Graphics (ACM SIGGRAPH'87 Proceedings), 21(4), 1987

[Nels95] Max, N., Optical Models for Direct Volume Rendering, IEEE Trans. on Visualization & Computer Graphics, 1(2):99-108, 1995

[Open92] OpenGL Programming Guide, Addison-Wesley Publishing Company, ISBN 860-0105-002, 1992

[Sabe88] Sabella, P., A Rendering Algorithm for Visualizing 3D Scalar Fields, ACM Computer Graphics, 22(4), 1988

[Saka93] Sakas, G., Interactive Volume Rendering of Large Fields, The Visual Computer, 9(8):425-438, 1993

[West90] Westover, L., Footprint Evaluation for Volume Rendering, ACM Computer Graphics (ACM SIGGRAPH'90 Proceedings), 24(2):367-376, 1990


(A) Intensity intermixing only

(B) Intensity intermixing with opacity

(C) Intensity intermixing with opacity and Z-buffer depth

Figure 4: Image level intermixing

Figure 1: Artefacts in intersected patches

Figure 5: Exclusive opacity

Figure 7: Mixture of exclusive and inclusive opacity


(A) Gradient surface of CT volume and iso-dose volume

(B) Segmentation volume and iso-dose volume

Figure 6: Inclusive opacity

Figure 8: X-ray image of CT and dose volumes

Figure 9: Absorption and emission model