Chapter Two: Motivation and Background

Research in data visualization is best done when motivated by the needs of a specific application. In the final analysis, the usefulness of any visualization technique can only be objectively measured in terms of its ability to enable improved performance on specific tasks relevant to the objectives of the display. In addition to providing guidance and motivation for visualization research, a driving application also provides the basis for evaluating the results of that research. The focus on a driving problem need not limit the applicability of research results; a visualization method whose usefulness has been demonstrated in one domain can be applied with good results in other areas.

2.1: Applications for transparent surface display

There are many applications that require or can benefit from the simultaneous display of multiple superimposed surfaces. In chemistry, for example, where the problem of visualizing molecular docking is relevant for a number of purposes, including drug design, a scientist’s understanding of the fit between two molecular surfaces might be enhanced by a clear and unambiguous display of both surfaces in a single image [Gillilan and Wood 1995]. A similar type of surface matching problem arises in the visual registration of surfaces from medical datasets, possibly obtained from different imaging modalities, such as MR and PET [Lichter et al. 1992], or from different subjects, as in the comparison of a patient’s data with reference data stored in an atlas [Schiemann et al. 1994]. In geology, the need to simultaneously view multiple overlaid surfaces arises in the visualization of such things as ancient bedrock formations underlying the current surface topography [Prueitt and Baldridge 1993] or oil saturation at various depths [Lathrop 1993]. The ability to visualize the outer surface of an object along with its underlying inner structures is helpful in a wide variety of applications including, for example: craniofacial surgery planning, where the goal of the physician is to correct facial deformities by altering the underlying bone structure and it can be enlightening to view the skin surface superimposed over the bony anatomy [Vannier et al. 1983]; confocal microscopy [Bron et al. 1994], where it is often desirable to view intracellular structures within the context of the outline of the cell that contains them; and computer-aided geometrical design, where engineers need to understand the complex spatial relationship between multiple superimposed surfaces of a machine part [Lorensen 1993]. In fluid flow, important insights can be gained from a display of the topology of the multiple numerically computed separation surfaces [Helman and Hesselink 1990].

The display of multiple superimposed isovalue surfaces is a technique frequently used in a variety of applications to visualize the distribution of a smoothly varying quantity defined within a three-dimensional volume. In chemistry, the display of multiple isodensity surfaces of electron charge can be used to clarify appreciation of the molecular structure [Wolfe et al. 1991] [Koide and Doi 1992]. In atmospheric data visualization, multiple isovalue surfaces of air temperature have been used to help clarify the shifting patterns and dependencies of ozone depletion in the atmosphere [Treinish 1993], and isovalue surfaces of water vapor density have been used to visualize storm progression [Upson and Keeler 1988]. In aerodynamic simulation, insight can be gained from the display of isosurfaces of air pressure [Globus et al. 1993]; in finite element analysis, from the display of isosurfaces of pressure [Mayer and Tabatabai 1993], temperature [Gallagher et al. 1993] or stress [Gallagher and Nagtegaal 1989]. Multiple isosurfaces of velocity and pressure can be useful for helping to visualize fluid flow characteristics [Klein et al. 1993].

Figure 2.1: A selection of published images illustrating some of the applications that require or can benefit from the display of multiple superimposed surfaces. Credits (clockwise from top left): [Koide and Doi 1992], [Bron et al. 1994], [Lorensen 1993], [Treinish 1993], [Gallagher et al. 1993], [Mayer and Tabatabai 1993].

2.2: The driving application: radiation therapy treatment planning

Radiation therapy treatment planning has been the driving application for my research into developing visualization techniques to improve intuitive communication of the shape and relative depth of superimposed transparent surfaces.

2.2.1: Overview of the goals and methods of radiation therapy treatment planning

Why is radiation therapy treatment planning an important application, and what are the potential benefits of achieving improved treatment plans?

Radiation therapy is one of the three principal treatments for the control and cure of cancer, with surgery and chemotherapy being the other two. Of the nearly 1,000,000 individuals newly diagnosed with cancer each year, about 500,000 are treated with radiation therapy, over 50% of these with curative intent [Perez and Purdy 1992]. Broadly stated, the goals of radiation therapy are to cure cancer or alleviate its symptoms — while maintaining a high quality of life — by homogeneously irradiating a target area with a prescribed amount of therapeutic dose and at the same time minimizing the amount of radiation delivered to healthy tissue. These goals are sometimes expressed as maximizing the tumor control probability (TCP) while simultaneously minimizing the normal tissue complication probability (NTCP).

Approximately 65% of invasive cancers are confined to their local or local-regional sites at the time of initial diagnosis [Leibel 1994]. But statistics from a 10-year follow-up study published in 1989 indicate that, through surgery and/or conventional (2-D) radiotherapy, permanent control is only achieved in around two-thirds of these cases [Leibel 1994]. About 68% of all patients treated for cancer will die because of failure to control the local-regional tumor, with about half of these deaths attributable to failure at the primary site alone and half due to relapse at the primary site in conjunction with failure at distant locations [Perez and Purdy 1992]. (The correlation between local relapse and metastatic spread is well documented in [Leibel 1994] and [Leibel and Fuks 1993].) While the failure to achieve local control can be attributed to many different factors, Suit and du Bois [1991] eloquently argue the considerable potential benefits of further optimizing radiation treatment plans. Studies have shown that targeting errors occur, on the average, in about 49% of cases [Rosenman et al. 1991], and it has been estimated that tumor control probabilities might be improved by 6% (leading to a 3.5% increase in 5-year survival) if these geometric misses could be avoided [Goitein 1979]. In addition, there are indications that tumor control probabilities might be raised if higher amounts of dose could be more precisely delivered [Leibel et al. 1993].

What role can better visualization techniques play in improving the quality of radiation treatment plans? To answer that question, it is first necessary to understand the basic processes of designing and evaluating a state-of-the-art, three-dimensional radiation therapy treatment plan.

The first step in treatment plan design is to thoroughly examine all available patient data and histories of the past behavior of similar tumors in a matched population, so as to determine the location and extent of the tumor, its histology, most likely routes of spread, and probable dose response. Using this information, the physician defines the clinical target volume (CTV), to encompass the gross tumor, its microscopic extensions and any suspected regions of subclinical disease, and the planning target volume (PTV), to fit the CTV as closely as possible with an additional small margin to ensure full irradiation of the CTV after allowing for any expected internal or external patient movement either within or between treatments. In addition, one or more boost volumes may be defined, containing only gross residual tumor and indicating regions that need to be irradiated at higher dose levels. Along with these target volumes, other volumes of interest that must be defined include the organs at risk — critical normal tissues that should not receive more than a very low amount of radiation — and anatomical landmarks, such as bone or skin surfaces, that are useful for providing a reference context (position, orientation and scale) for the other structures.

The next step is to plan the radiation delivery. After a decision about the treatment aim — cure or palliation — is made, the prescribed dose and fractionation [Fowler 1992] are defined. (In many cases, treatments are scheduled to be given over a period of several days or weeks, among other things to provide an opportunity for normal tissues to recover from the damaging effects of the radiation [Stewart and van der Kogel 1994]; higher doses, and the increased risks of complication that come with them, are more often justified when the goal is curative than when palliation is the aim [Kijewski 1994].) In a typical iterative "design and evaluate" process, the physician defines a beam configuration that he believes will result in an optimal dose distribution (homogeneously delivering the complete prescribed dose to the target volume while minimally irradiating other, normal tissues) by first choosing the number, type (electron, photon, proton or other high linear energy transfer particle), energy ("weight") and orientation of the beams to be used, and then shaping the beam outline to fit the target (using either a multileaf collimator [Mohan 1995] or manufactured lead-alloy shielding blocks) and modulating its intensity (with wedges, manufactured compensators, moving field collimators [Mohan 1995] or a combined set of boost fields [Lane et al. 1991]) to correct for dose inhomogeneities that would otherwise result at the target due to the nonuniformity of densities encountered along the beam path in different areas of the field. The physician then calculates the three-dimensional dose distribution that would result from applying this plan to this patient, either at regularly spaced grid points throughout the volume or at a smaller set of randomly located points [Niemierko and Goitein 1990]. He then examines the results to verify the suitability of the plan or to determine fruitful directions for its modification.
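
The parameter space the physician searches can be summarized schematically. The following Python sketch is a minimal, hypothetical data model of a plan and its "design and evaluate" loop; all names are mine, and the dose engine is a stand-in, since real dose calculation is far more involved:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Beam:
        particle: str                        # electron, photon, proton, ...
        energy: float                        # beam energy ("weight")
        gantry_angle_deg: float              # beam orientation
        aperture: List[Tuple[float, float]]  # beam's-eye-view outline of the shaped field

    @dataclass
    class Plan:
        prescribed_dose_gy: float
        n_fractions: int
        beams: List[Beam] = field(default_factory=list)

    def dose_from_beam(beam: Beam, point: Tuple[float, float, float]) -> float:
        # Hypothetical stand-in for a real dose engine.
        return 0.0

    def evaluate(plan: Plan, sample_points) -> List[float]:
        # Total dose at each sample point: a regular grid, or a smaller random subset.
        return [sum(dose_from_beam(b, p) for b in plan.beams) for p in sample_points]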

The final step in this process is to verify the accuracy of the treatment plan delivery. Portal images, acquired in conjunction with the treatment delivery, either electronically [Meertens et al. 1990] or on film, geometrically indicate the areas actually irradiated by each beam. They are examined, together with registered simulator images or digitally reconstructed radiographs in which the CT data has been reprojected to match the perspective of the portal film [Sherouse et al. 1990], to determine the direction and extent of any internal or external patient movement either between simulation and treatment or, when possible, during the treatment process, or for any unexpected errors in patient setup or treatment delivery. Another technique that is sometimes used to help verify treatment delivery is dosimetry, in which the accumulated dose is measured at selected points in or around the patient and these values are compared with the expected values indicated by the dose calculations [Photon Treatment Planning Collaborative Working Group 1991a].

There are several points in the treatment planning process where techniques for more effective data visualization would be useful. For example, accurate definition of the three-dimensional target volumes might be facilitated if information from various sources, including the different modalities such as computed tomography (CT), magnetic resonance (MR), fluoroscopy, emission tomography (SPECT or PET), etc. could be integrated into a single unified image [Rosenman et al. 1995]. While CT data is needed for dose calculations and clearly defines bony structures and tissues of various distinct densities, separate tissue types of nearly similar density are much more easily differentiated in MR data, and crucial functional information can be derived from emission data such as SPECT or PET. Moreover, with fluoroscopic imaging it is possible to measure the displacement of internal structures resulting from factors such as cardiac motion or respiration. Along with efficient and accurate methods for three-dimensional image registration (excellently reviewed in [van den Elsen et al. 1993]), intuitively meaningful multivariate display techniques, such as those investigated by [Hill et al. 1992], are required to accomplish this task effectively. The treatment planning process might also be enhanced by the introduction of visualization techniques for the explicit display of various inherent uncertainties [Photon Treatment Planning Collaborative Working Group 1991b] such as the expected extent of patient motion, organ motion, setup errors or known margins of accuracy in dose calculations. Techniques for explicitly quantifying and visually representing the separate extents of these various uncertainties have only begun to be addressed [Urie et al. 1991] [Kutcher et al. 1995].

The main role and most important area for visualization research in radiation treatment planning, however, lies in providing a clear, visual representation of the spatial distribution of the three-dimensional dose in the context of the volume of the patient anatomy, enabling physicians to more rapidly and efficiently design an optimal treatment plan, evaluate the suitability of a given treatment plan, or weigh the relative merits of several alternative plans.

Defining an optimal 3D treatment plan for a particular patient can be an arduous and time-consuming task not only because of the enormous search space defined by the large number of parameters that can be varied but, perhaps more importantly, because of the complexity of determining appropriate trade-offs between irradiation of the target volume and irradiation of normal tissues. While in some situations the design of an appropriate plan may be fairly straightforward, it can also be the case that the relative locations of the target volumes and critical structures make it difficult, if not impossible, to achieve a plan that simultaneously maximizes tumor coverage and minimizes the irradiation of normal tissues. A significant amount of research effort is currently being devoted to the development of algorithms for computer-aided selection of optimal treatment parameters, such as beam weights [Niemierko 1992], or beam profiles and directions [Källman 1992]. Although these methods are still in their infancy and have yet to gain widespread clinical acceptance, there seems to be considerable promise in this approach for easing some aspects of the complicated task of treatment plan design [Niemierko et al. 1992] [Mohan et al. 1992].

The evaluation of a treatment plan is ideally based upon an assessment of how well the proposed treatment is expected to satisfy the twin objectives of maximal tumor control and minimal complications due to the radiation-induced damage of non-target tissue. But determining an appropriate trade-off between TCP and NTCP is by necessity a subjective process that requires the consideration of such unquantifiable factors as a patient’s willingness to risk complications. For example, several studies cited by [Leibel 1994] indicate that patients may in some cases prefer a treatment with a lower possibility for cure over one that, while potentially more effective, is also more likely to result in concomitant damage that would substantially reduce the quality of life. In standard practice, the final assessment of the suitability of a plan (which will vary from physician to physician and from institution to institution) is generally based upon a subjective weighing, by an experienced clinician, of the complicated tradeoffs between numerous measurable and immeasurable criteria [Goitein 1992]. Among the criteria most easily quantified are measures such as

- adequacy of coverage of the target by the prescribed dose, either from statistical measures such as highest dose received by at least 95% of a volume or by visually verifying conformity of the fit of specific isointensity levels of dose to the 3D target volume;

- adequacy of the dose distribution over the target volume indicated either by the absence of hot or cold spots or by a minimizing of their volume and/or deviation from the prescribed dosage;

- avoidance or minimization of the size and/or intensity of hot spots in critical non-target volumes;

- minimal integral irradiation of non-target tissues (to avoid the subsequent appearance of radiation-induced cancers).

Individual numerical statistics (such as the percentage of the target volume receiving at least 95% of the prescribed dose, or the highest level of dose received by at least 5% of an organ at risk) can usefully summarize some of the dose-distribution criteria important for treatment plan evaluation [Munzenrider et al. 1991]. Mathematical models for computing TCP based on dose-volume statistics, and estimates of tumor growth rates and expected response to radiation, although not practical for predicting actual tumor control, can be of considerable help in ranking alternative plans by giving a rough indication of some aspects of the clinical significance of the differences in the respective dose distributions. Much less reliable estimates of NTCP can be predicted for a given dose distribution because of the lack of reliable biological models for the radiation response of the various organ and other non-tumor tissues, complicated by the poor understanding of the apparently significant effects of a multitude of cofactors and the considerable inter-patient variation in intrinsic cellular radiosensitivities [Bentzen and Overgaard 1994], combined with a lack of data on the tolerances of various organs to different patterns of partial irradiation [Goitein 1992].

The dose-volume histogram (DVH), which summarizes quantitative (but not spatial) information about the dose distribution over a given volume by conveying, for a progression of dose levels, the percentage of the specified volume that receives at least that amount of dose, is of indispensable value for flagging the presence and indicating the aggregate amount and intensity of hot and cold spots in specified target volumes and critical structures [Drzymala et al. 1991]. DVH statistics cannot form the sole basis for treatment plan evaluation, however, because they do not convey essential information about the location and distribution (number and individual extent) of the regions of excess or insufficient radiation.
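
To make the computation concrete, the following short Python sketch (using NumPy; all function and variable names are mine, for illustration only) derives a cumulative DVH and the kinds of summary statistics mentioned above from per-voxel dose samples:

    import numpy as np

    def cumulative_dvh(dose, structure_mask, levels):
        # Fraction of the structure's volume receiving at least each dose level.
        d = dose[structure_mask]
        return np.array([(d >= level).mean() for level in levels])

    def v_at(dose, structure_mask, threshold_dose):
        # Percent of the structure receiving at least `threshold_dose`.
        return 100.0 * (dose[structure_mask] >= threshold_dose).mean()

    def d_at(dose, structure_mask, percent_volume):
        # Highest dose received by at least `percent_volume`% of the structure.
        return float(np.percentile(dose[structure_mask], 100.0 - percent_volume))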

Because of the quantity and complexity of the information on which the clinical judgment of a plan must be based, a clear display of the complete dose distribution over the target volume and organs at risk, in the context of anatomical landmarks such as skin or bone, is essential for providing an immediate, intuitive appreciation of the complicated geometrical relationship between the full dose distribution and the relevant underlying anatomy.

2.2.2: 2D and "2 12 D" display techniques for treatment planning data

The most common technique for displaying radiation treatment planning data is to superimpose a color-coded representation of the radiation dose intensity onto a two-dimensional slice through the anatomical volume, as shown in figure 2.2. The dose distribution can be illustrated either as a set of nested isointensity curves (figure 2.2-left) or, in a potentially more easily assimilated form, as a "color wash" (figure 2.2-right), in which the hue at any given point in the composite image is determined by the intensity range of the dose at that point, while the luminance is derived from the anatomical data.
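
A color wash of this kind is straightforward to compute. The following Python sketch (using NumPy and matplotlib's HSV conversion; the function name and parameters are illustrative, not any published implementation) derives hue from binned dose and luminance from the CT slice, in the spirit of the discrete color scales discussed below:

    import numpy as np
    from matplotlib.colors import hsv_to_rgb

    def color_wash(ct_slice, dose_slice, dose_bands):
        # Hue from binned dose (one fixed hue per band), luminance from anatomy.
        band = np.digitize(dose_slice, dose_bands)    # 0 .. len(dose_bands)
        hue = band / (len(dose_bands) + 1.0)          # one fixed hue per dose band
        sat = np.where(band > 0, 0.8, 0.0)            # leave sub-threshold regions gray
        val = (ct_slice - ct_slice.min()) / (np.ptp(ct_slice) + 1e-9)
        return hsv_to_rgb(np.dstack([hue, sat, val]))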

Figure 2.2: Two-dimensional display of the dose distribution over anatomy. Left: multiple isodose lines [Photon Treatment Planning Collaborative Working Group 1991b]. Right: dose color-wash [Niemierko et al. 1992].

Color scales consisting of a limited number of distinct hues, each mapped to a specific, fixed range of dose intensities, have been advocated by some investigators [Goitein 1992] to allow a more rapid and intuitive appreciation of the essential qualities of the dose distribution than color scales that vary smoothly across a continuous rainbow of hues. In addition, the standard association of specific colors with specific dose levels is recommended both to facilitate the comparison of alternative dose distributions developed at different institutions, and also as a means of enabling physicians, through experience with the color scale, to rapidly and intuitively obtain a quantitative appreciation of the displayed dose distribution.

Alternative methods for the color-coding of dose distributions have also been investigated. To give an intuitive appreciation of how well the intensities of a particular dose distribution match the desired radiation levels (constraints) specified for various regions across the volume, the hue at any given point can be defined in terms of the signed difference between the actual and desired (or acceptable) radiation dose at that point, to produce what are referred to as "images of regret" [Shalev et al. 1988]. In a similar approach, Goitein [1982] has suggested that "difference images", in which the color-wash values at each point are defined by the difference between the dose values from two competing distributions, can be useful for highlighting the aspects in which alternative dose distributions differ, either to emphasize the trade-offs between rival treatment plans or to provide insight into how various changes in a treatment planning parameter influence the resulting dose distribution.
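
A minimal sketch of such a signed-difference coding, assuming a simple two-hue mapping rather than any published color scale, might look as follows:

    import numpy as np

    def regret_image(actual, desired, tolerance=0.0):
        # Signed difference between delivered and desired dose, mapped to two hues:
        # red marks overdose, blue marks underdose, black means "within tolerance".
        actual, desired = np.asarray(actual), np.asarray(desired)
        diff = actual - desired
        over = np.clip(diff - tolerance, 0.0, None)
        under = np.clip(-diff - tolerance, 0.0, None)
        scale = max(float(np.abs(diff).max()), 1e-9)
        rgb = np.zeros(actual.shape + (3,))
        rgb[..., 0] = over / scale     # red channel: dose above the constraint
        rgb[..., 2] = under / scale    # blue channel: dose below the constraint
        return rgb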

Because of the difficulty of mentally reconstructing from one or more sequentially viewed two-dimensional slices an accurate perception of the geometrically complex three-dimensional dose distribution produced by an unconstrained 3D treatment plan (in which the beam directions are not coplanar with each other or with the transaxial slice on which the dose distribution is superimposed), investigators are more frequently turning to display techniques that extend the dimensionality of the data representation. Probably the simplest of these approaches, illustrated in figure 2.3-left [Photon Treatment Planning Collaborative Working Group 1991b], is to display the dose distribution on multiple, orthogonal slices shown side-by-side in adjacent regions of the screen. The positional correspondence between the multiple slices can be made explicit by displaying the multiple two-dimensional images as planes in three-space, a technique illustrated in figure 2.3-right [Kessler et al. 1994].

Figure 2.3: "2 1

2D" representation in which the dose distribution is displayed onorthogonal 2D slices through the anatomical volume. Left: shown side-by-side [PhotonTreatment Planning Collaborative Working Group 1991b]. Right: shown as intersectingplanes [Kessler et al. 1994].

Full appreciation of the spatial relationship between the three-dimensional distribution of radiation dose and the three-dimensional structures of the patient anatomy cannot, however, be easily drawn from viewing a set of two-dimensional images, even when they are cleverly combined or displayed in rapid succession, as in an interactive "scrolling" through multiple slices along a single axis. With the introduction of faster and more powerful graphics workstations, it is becoming increasingly feasible to display shaded three-dimensional surfaces, such as those illustrated in figure 2.4 [Kijewski 1994] (left), [Rosenman et al. 1989] (center), [Photon Treatment Planning Collaborative Working Group 1991a] (right), at the nearly interactive frame rates required for practical clinical use.

Figure 2.4: Three-dimensional display of radiation beam and anatomical surfaces. Left: opaque rendering [Kijewski 1994]. Center: transparent rendering [Rosenman et al. 1989]. Right: color wash [Photon Treatment Planning Collaborative Working Group 1991a].

Three-dimensional rendering allows an excellent appreciation of the shapes and relative locations of individual non-overlapping surfaces, but difficulties still exist in the visualization of multiple superimposed structures with this method. If all of the relevant surfaces in the data are rendered opaquely, only the outermost surface can be clearly seen; if they are rendered transparently, much of the surface information is lost, and the shape and position are clearly communicated only at or near the silhouette edges. One method for providing an understanding of the spatial relationship between multiple, opaquely rendered overlapping surfaces is to alternately (over time) display and then remove the outermost surface, allowing its shape and position to be committed to memory and then mentally superimposed. Alternatively, various different portions of the outermost surfaces can be selectively removed, to produce images such as the one shown in figure 2.5-left [Höhne et al. 1992]. We will see later that this technique can be understood as a very coarse sort of see-through texture; the goals in either case are very similar, namely to provide enough information about the outer surface, through opaque rendering, so that the observer can mentally interpolate it across the regions over which it is not shown. Another approach, illustrated in figure 2.5-right [Cullip et al. 1993], is to use one of the two-dimensional data display methods described above in combination with an opaquely-rendered set of three-dimensional dose and anatomy surfaces, incorporating aspects of both the 2D and 3D rendering methods and using each to its best advantage. Solutions have also been proposed that sidestep the problem of representing multiple superimposed surfaces of dose and anatomy altogether, illustrating the distribution of radiation dose not as a separate entity but as a color wash over the surfaces of the volumes of interest, as shown in figure 2.4-right [Photon Treatment Planning Collaborative Working Group 1991a].

Figure 2.5: Multiple superimposed opaque surfaces. Left: with cut-outs to reveal layers of structure in a single, integrated image [Höhne et al. 1992]. Right: with a traditional two-dimensional image used as a cut-plane to reveal interior information in the context of the three-dimensional surface rendering [Cullip et al. 1993].

2.2.3: Advantages of using 3D transparent surfaces for data display

Despite the relative success of the various alternative approaches, there remain several important advantages to using transparency to represent multiple superimposed surfaces continuously in both time and space, together in a single image. The first is simultaneity: displaying all relevant external and internal anatomical objects together with one or more isointensity surfaces of radiation dose should enable a clearer and more immediate perception of the complex spatial relationships between these various structures than if each had to be viewed independently and then mentally combined. (Psychophysical experiments [Westheimer and Hauske 1975] indicate, for example, that people are measurably better at estimating the distances between two lines in a flat display when the lines, rather than being shown sequentially, are displayed simultaneously. It appears that our ability to use positional information to estimate distances is substantially degraded when one of the locations must be remembered and then mentally superimposed.) This is a particularly important concern for applications in which we need to convey information not only about what each surface looks like on its own but also about how the two (or more) surfaces are positioned relative to each other in space.

The second advantage is completeness: it has been repeatedly emphasized by researchers in a number of medical and scientific disciplines (for example [Purdy et al. 1993], [Fishman et al. 1991], [Wolfe et al. 1991]) that important features of complicated three-dimensional data distributions can be overlooked when the relevant surfaces must be mentally reconstructed from a sequence of two-dimensional slices; three-dimensional display allows maximum comprehension of three-dimensional form. Finally, there is the advantage of efficiency: if the physician is able to rapidly assimilate more of the information he needs about the data from a single image, he may be able to more rapidly evaluate a particular treatment plan or experiment with adjusting a greater number of different possible treatment parameters [State et al. 1994].

Good visualization techniques can enhance both the accuracy and efficiency of the radiation treatment planning process by enabling a clear, complete and perceptually intuitive understanding of the most important characteristics of the complex three-dimensional distribution of radiation dose in the context of the three-dimensional patient anatomy. But one of the most critical elements in the potential effectiveness of a visualization approach is the speed at which images can be generated. Physicians have made it clear that for a visualization technique to be of practical clinical use, it must allow them to interact with the data at a "reasonable" speed, suffering delays between successive images of no more than one or two seconds at the very most. Pre-processing times are less critical; delays of minutes or even hours between data acquisition and image rendering are often acceptable.

2.2.4: The role of visualization in the context of other treatment planning needs

Additionally, although substantial gains in treatment optimization have been and will continue to be facilitated by better visualization techniques, it must be emphasized that advances in these techniques contribute only one piece to the overall puzzle of improving local control. Further improvements will come not only from optimizing the dose distribution over statically-defined anatomical volumes, but also from the following:

- controlling or at least accurately accounting for the displacement of internal organs during or between treatments (not only to ensure adequate coverage of the target and minimal irradiation of adjacent critical structures, but also to allow greater amounts of therapeutic dose to be safely prescribed);

- increasing the accuracy of target volume definition (differentiating subregions of denser or more resistant cell populations that might require boosted amounts of dose and more precisely identifying areas of microscopic tumor advance), both with the development of more accurate and efficient three-dimensional segmentation techniques and the specific incorporation of information from additional modalities such as MR or emission tomography into the segmentation process;

- improving the relative radiation-sensitivity of tumor cells as opposed to normal tissues through the use of biological modifiers; and

- achieving a greater understanding of the biological processes of radiation response, to provide a more solid foundation for improving the accuracy of dose prescriptions (both total amounts and fractionation).

Nevertheless, visualization has its important role to play, and the remainder of this dissertation will be concerned with efforts to improve the quality and quantity of information that can be effectively communicated in images of multiple superimposed transparently-rendered surfaces.

2.3: A characterization of transparent surface types

The principal difficulty with using transparency to display multiple superimposed surfaces in an image is that it can be very hard, both in actuality and in computer-generated images, to fully appreciate the three-dimensional shape and relative depth of a transparent surface, particularly in the areas where it overlaps an underlying opaque object.

Transparent surfaces, when they are encountered in life, tend mostly to fall into one of two general categories: surfaces of complex, self-contained objects, such as crystal vases or solid sculptures, that are designed primarily to be seen; and smooth, thin surfaces, such as windows or glass jars, that are generally lacking in shape detail and are designed primarily to be seen through. Figure 2.6-left (a photograph of a glass mug with "thumbprint" detailing, half-filled with colored water and set in front of a calendar background to give context to the refractive distortion) shows an example of this first kind of transparency, in which our attention focuses on the object itself. We don’t need or even care to get an accurate view of what’s behind the transparent surface; the surface detailing is specifically designed to maximize the background distortion for æsthetic effect. Figure 2.6-center (a photograph of a jar of jalapeños) shows an example of the second kind of transparency, which is "transparency" in the sense we usually mean — basically invisible. In this case the focus is on the contents and not on the container, whose surface shape in the front-facing area is almost imperceptible.

Figure 2.6: A series of photographs illustrating the three basic types of transparent surfaces encountered in everyday life.

Only rarely in the physical world do we encounter transparent surfaces that are intended simultaneously to be seen and not seen — surfaces that contain relevant visual detail but at the same time permit other surfaces to be seen behind them. Figure 2.6-right (a photograph of a "see-inside" toy train) shows one of these relatively uncommon examples. For diagrammatic purposes, however, this third kind of transparency is exactly what we want for portraying the outer surface of an object together with underlying structures in a single image.

2.4: Photorealistic rendering of transparent surfaces: a brief history

Although many computer-generated images of transparent surfaces, including those I present in the subsequent chapters of this dissertation, continue to be created using simplified lighting models that fall considerably short of representing reality, many more sophisticated approaches are available, and a significant amount of progress has been made over the years in improving the photorealistic quality of transparent surfaces in computer-generated images.

Newell et al. [1972] were perhaps the first in the literature to explicitly address the representation of transparent surfaces in computer-generated images. Attempts to improve the realistic appearance of transparent surfaces began with Phong [1975], who proposed a shading model for realistically representing specular highlights (and illustrated the effects of his technique on a transparent champagne glass). Several years later, Kay and Greenberg [1979] proposed several empirical approximations to improve the realism of computer-generated images of transparent objects, including weighting the transparency coefficient by a function of the surface normal direction to create the effect of increased opacity near the silhouette edges and shifting the background image seen through a transparent surface to create the illusion of refraction. With the introduction of recursive ray tracing [Whitted 1980], significantly more complex reflective and refractive effects could be modeled, at correspondingly greater computational expense. Hall and Greenberg [1983] were among the first to make systematic attempts to increase the photorealistic appearance of transparent surfaces by faithfully simulating the physical behavior of light. They introduced the idea of incorporating such phenomena as the Fresnel relationships for the wavelength and angle-of-incidence dependence of transmitted and reflected light into the rendering model (to accurately represent color), along with terms used to account for the scattering of transmitted light and the attenuation of light intensity as a function of the filter properties of the transparent material through which it passes. With the introduction of generalized radiative transport methods by Kajiya [1986], the range of transparency-relevant optical phenomena that could be accurately modeled was further extended to include such things as the formation of caustics. (The specific modeling of caustics was later pursued in greater depth by [Watt 1990].) Combining elements of both the raytracing and radiosity illumination models, [Wallace et al. 1987] introduced a hybrid rendering approach that enabled, among other things, the simulation of varying degrees of translucency in transparent materials, and [Arvo 1995] showed how irradiance tensors could be used to accurately model the transmission of light through glossy surfaces. Dispersive refraction (the phenomenon responsible for effects such as rainbows) was first modeled in a technique introduced by Thomas [1986] and was also the subject of [Musgrave 1989]. A complete, theoretical and practical description of the principles and techniques of rendering realistic computer images can be found in [Glassner 1995].

Figure 2.7: An illustration of the progression toward achieving photorealism in the modeling and illumination of transparent surfaces. (Clockwise from top left: [Whitted 1980] (refraction), [Hall and Greenberg 1983] (color), [Kajiya 1986] (caustics), [Wallace et al. 1987] (combining diffuse and specular transmission).)
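
Kay and Greenberg's silhouette-opacity idea can be approximated very simply at shading time. The following Python sketch increases opacity as the surface normal turns away from the viewer; the particular blend and exponent are illustrative choices of mine, not their published formula:

    import numpy as np

    def view_dependent_alpha(normal, view_dir, alpha_min=0.1, alpha_max=0.9, power=2.0):
        # Assumes unit vectors: |N.V| is 1 where the surface faces the viewer
        # and 0 at the silhouette, so opacity rises toward the silhouette edges.
        n_dot_v = abs(float(np.dot(normal, view_dir)))
        return alpha_max - (alpha_max - alpha_min) * n_dot_v ** power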

While realism is a worthy goal to strive for and a considerable amount of progress has been made toward achieving this goal, the computational expense of generating physically accurate images of transparent surfaces can be overwhelming. The pursuit of photorealism at an affordable cost is an important research area in computer graphics, and advances continue to be made in developing rendering techniques that incorporate many aspects of the effects described above using approximations that require less of a computational burden ([Pearce 1990] [Lee and Uselton 1991], for example). It is important to remember, however, that for the purposes of scientific visualization, the goal of effectively rendering transparent objects is not an end in and of itself but merely one of many means for communicating information about layered surfaces in a clear and efficient manner. For these applications, it can be argued that photorealism is neither sufficient nor entirely necessary and may even in some aspects hinder the effective communication of relevant aspects of the data.

2.5: Empirical evidence for the insufficiency of photorealism

To illustrate some of the inherent weaknesses in our natural ability to fully and accurately perceive the three-dimensional shape and relative depth of actual transparent surfaces while retaining a clear and undistorted view of the surfaces behind them, I designed a set of simple examples using clear plastic bears and nesting colored pumpkins. Figure 2.8-left shows a photograph of a clear plastic bear superimposed over a child’s alphabet flashcard, photographed in direct sunlight. The refraction of optical rays as they pass through the thin transparent surface and the complex resulting patterns of light attenuation combine to produce an interesting visual effect but do not in any obvious way provide useful shape or distance cues. The detrimental impact on our ability to reconstruct an accurate perception of the underlying surface is quite clear. In the absence of luminance attenuation or color contribution from the transparent surface, the relatively stronger effects of refraction might help localize the silhouette edges, but this is an effect that might also be less deleteriously accomplished through other means, including careful backlighting. It is my impression that, as a whole, the perceptual difficulties introduced by refraction for the visibility of underlying objects outweigh the advantages it offers for localizing or clarifying the shape of the outer surface; for the purposes of illustrating layered structures, it may be best if refraction is excluded from the transparent rendering model. Figure 2.8-center shows a photograph of the same plastic bear, illuminated by primarily indirect light. Elimination of the shadow effects clarifies the relevant information content of the flashcard background, but the surface shape remains largely imperceptible, particularly in the area around the nose. In figure 2.8-right, the bear has been photographed under filtered lighting conditions that increase the incidence of reflected rather than transmitted rays. The shape of the outer surface becomes somewhat more readily apparent as the regions of illumination-induced opacification increase in area and intensity. This provides motivation for opacifying areas of the outer surface but leaves open the question of what type of opaque texturing would yield the best results. The degree to which the effectiveness of each of these images is influenced by the type of lighting used highlights the importance of illumination considerations in the design of effective pictures, an aspect of scene preparation that has long been of central concern in photography and filmmaking.

Figure 2.8: Photographs illustrating the insufficiency of physically accurate rendering as a complete solution to the problem of representing transparent surfaces that can be adequately seen and also seen through. Photographed under direct sunlight (left), indirect light (center), and filtered light (right).

Figure 2.9 shows an object consisting of several layers of nested transparent surfaces, photographed, on the left, under filtered light and, on the right, under direct sunlight. The shape of the outer surface is relatively obvious in both pictures, but its location relative to the underlying surfaces is very unclear. Although the nested shapes are familiar and self-similar, it is difficult to perceive even the number of figures in the set (there are three). The opacities generated by the filtered light in figure 2.9-left almost entirely obscure the view of the interior; even the sharper opacities generated by the specular highlights in figure 2.9-right don’t offer much help in clarifying the inner surfaces, primarily because of the self-similarity of the nested shapes. The structure of this model is only very slightly easier to understand when it is viewed directly. Layered transparent surfaces are simply very difficult to both see and see through, independent of the "photorealism" of the rendering.

Figure 2.9: Photographs illustrating the difficulty of interpreting models constructed from layered transparent surfaces. The image on the left was photographed under filtered light; the image on the right, under direct sunlight.

2.6: The perception of transparency

To improve the effectiveness of computer renderings of transparent surfaces for the purposes of simultaneously representing layered structures, and to help ensure that our efforts are focused in areas that promise maximum potential gain, we must begin with an explicit understanding of the concept of transparency: what does it mean to both see and see through a surface? When we look at a picture of a transparent surface superimposed over some opaque object, do we actually perceive the two distinct surfaces simultaneously, at every point of their overlap, or do we subjectively reconstruct a continuous representation of each surface from the perception of either one of them at any given location, but not both at the same time in the same place?

A number of general conclusions can be drawn both from informal observation and from the results of psychophysical experiments that quantitatively examine various aspects of the perception of transparency. The first observation, and one of the most important, is that transparency is a global rather than a local perception. The global nature of transparency perception, first described by W. Fuchs in 1923, is demonstrated in figure 2.10 [Albert Joseph Moore 1841-1893]. Viewed in isolation, the transparent character of the richly-illustrated material is not perceived, whereas the separate colors of skin and cloth can be easily distinguished when the image fragment is viewed in its larger context.

Figure 2.10: An illustration of the global nature of transparency perception, based on the painting “Silver” by Albert Joseph Moore.

The perception of transparency in two-dimensional images is strongly influenced by figural conditions. Metelli [1974a] designed a series of experiments to explore the nature of these figural influences on transparency perception in two-dimensional images, demonstrating (among other things) how the perception of transparency can be destroyed when certain figural conditions are altered to contradict it as the most ecological explanation, as shown by the upper images of figure 2.11. In subsequent experiments he and others elaborated on this theme [Metelli et al. 1985], presenting evidence that in figurally neutral conditions, such as in the image of figure 2.11-lower left, reports of transparency perception may be elicited from “experienced” (i.e., receptive and highly-motivated) observers, although others have shown that this is not necessarily the first perception for naïve subjects [Beck and Ivry 1988]. While some researchers have suggested that transparency perception might be stimulated by certain luminance conditions at X-junctions (a type of border crossing in which the locally straight edge of a transparent object overlaps with a luminance edge in the underlying object and the visible edges form an “x”) [Watanabe and Cavanagh 1993], evidence for a more global, as opposed to strictly local, definition of transparency-motivating figural conditions is given by Kersten [1991], who shows, using stereo images similar to the one shown monocularly in figure 2.11-lower right, that the presence of X-junctions is not a prerequisite for transparency perception, at least when depth information is available.

Figure 2.11: Images demonstrating the effects of various figural conditions on the perception of transparency, after [Metelli 1974a] (upper right), [Metelli et al. 1985] (lower left), [Kersten 1991] (lower right).

In addition to being a largely global phenomenon, transparency perception is integrally related to the perception of depth. Through a series of experiments demonstrating bistability in the characteristics of perceived motion (rigidly rotating vs. non-rigidly sliding about) due to an interchanging perception of the surface transparency and relative depth when these two attributes are placed in conflict, Kersten et al. [1992] show that transparency and depth from motion seem to be computed cooperatively: the perception of transparency can influence the perception of depth from motion, and vice versa. Nakayama et al. [1989] demonstrate a similar interrelationship between the perception of transparency and the binocular perception of depth through stereo, and Stoner et al. [1990] demonstrate a connection between the perception of transparency and the perception of independent (as opposed to coherent) motion in superimposed transversely-oriented grating (line) patterns.

It has long been known that transparency can be perceived in situations where it doesn’t physically exist and also not perceived in situations where it does. As described by Metelli [1974], Metzger (1955) showed how transparency could be perceived in two-dimensional images constructed from pieces of opaque colored paper, while experiments by Tudor-Hart (1928) showed that transparency cannot be perceived in a flat see-through surface superimposed over a homogeneous background. The sculpture by Croff [1869] shown in figure 2.12 provides a particularly compelling example of the perception of transparency in a perfectly opaque medium. The success of this sculpture, I believe, lies in its ability to sustain the appearance of figural continuity: at any particular point, either the overlying or underlying surface is represented but not both at the same time in the same place. It appears, to me, that our visual system integrates the visible fragments of each surface across the areas in which they are unrepresented, resulting in a perception of two continuous surfaces overlapping at all points. Related information about the phenomena of occlusion and continuity can be found in [Nakayama et al. 1989].

Figure 2.12: A photograph of the marble sculpture Veiled Nun by Giuseppe Croff.

There are two basic optical models for surface transparency. The "additive" model is used to describe the type of transparency observed with dust clouds, haze, or sheer fabric, in which the spaces between the bits of occluding material are too small to be individually resolved by the eye. This kind of transparency effect can be represented mathematically as a linear combination of the intensities I_f, I_b of the occluding (foreground) and occluded (background) surfaces, weighted by the relative concentration α of opaque material in the occluder: I = αI_f + (1 − α)I_b. When there are no opaque elements in the occluding material (α = 0), the final image is determined purely by the background surface; when α = 1, the occluding elements are so densely concentrated as to make the foreground surface completely opaque. In a straightforward implementation of this algorithm, the final intensity computed for a region of overlap will be independent of the depth order of the overlying and underlying surfaces; in practice most lighting models incorporate some amount of "depth-cueing", or relative darkening of a surface in proportion to its distance from the observer, which can compensate somewhat for this distance ambiguity. A more serious problem with the additive model is that it cannot represent the perceptual opacities on a perfectly transparent surface that result from the reflection of high-intensity light rays; it was simply not intended to model these types of transparent materials. Despite its shortcomings, for which a number of work-arounds have been devised (see section 2.4, or [Kay and Greenberg 1979], for more details), this optical model is so widely used that it could possibly be described as the standard method for representing transparent surfaces in computer graphics.
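
In code, the additive model is the familiar alpha-blending rule. The following minimal Python sketch (function names are mine, for illustration only) composites a stack of transparent layers back to front:

    def over(foreground, alpha, background):
        # Additive transparency: I = alpha * I_f + (1 - alpha) * I_b, per channel.
        return tuple(alpha * f + (1.0 - alpha) * b
                     for f, b in zip(foreground, background))

    def composite(layers, background):
        # `layers` is a far-to-near list of (rgb, alpha) pairs; the nearest
        # surface is applied last, exactly as in back-to-front rendering.
        image = background
        for rgb, alpha in layers:
            image = over(rgb, alpha, image)
        return image

    # Example: a half-transparent red surface over a half-transparent green one,
    # on a black background; the result is (0.5, 0.25, 0.0).
    print(composite([((0.0, 1.0, 0.0), 0.5), ((1.0, 0.0, 0.0), 0.5)], (0.0, 0.0, 0.0)))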

The "subtractive" (or "multiplicative") model of transparency describes the type oftransparency observed with light-absorbing filters such as film or colored tape. Foley et al. [1990]give approximately the following equation for this type of transparency:

I = I f + tIb, where t

describes the transmissiveness of the filter in terms of the amounts and wavelengths of light thatpass through it.
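Because t is defined in terms of wavelengths, the filter model is most naturally expressed per color channel. The sketch below is one illustrative reading of the equation, assuming RGB triples, with the three channels standing in for the continuous wavelength dependence:

    # Subtractive (filter) transparency: a sketch, assuming RGB triples.
    # Each channel of t is the fraction of background light the filter
    # passes in (roughly) that band of wavelengths.
    def subtractive_composite(i_fg, i_bg, t):
        return tuple(f + tc * b for f, tc, b in zip(i_fg, t, i_bg))

    # A yellowish filter: passes red and green, absorbs most blue.
    result = subtractive_composite((0.1, 0.1, 0.0), (0.8, 0.8, 0.8), (0.9, 0.9, 0.2))
    print(result)  # approximately (0.82, 0.82, 0.16)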


Further widening the gulf between perception and reality, it has been shown that the perception of transparency, while generally consistent with the laws of optics, is not completely constrained by them. Clearly, certain absolute and relative luminance conditions do favor the perception of transparency. In addition to the luminance relationships described by the models above, it has been shown that transparency is more often reported for a surface superimposed on darker as opposed to lighter backgrounds [Beck and Ivry 1988] and that whiteness is perceptually associated with increased opacity [Gerbino 1994]. However, it is also possible, and fairly common, for people to perceive transparency in images in which the colors and relative brightnesses of the foreground, the background and their overlapping regions are not consistent with any physically realizable optical model.

Metelli [1974a, 1974b] proposes that the perception of transparency might be described by a physical model based on the reflectance of light from a rotating disk with pieces cut out, which he calls the "episcotister model". According to this model, the physical constraints for the perception of transparency in figure 2.13-left are c = α1a + (1 − α1)e and d = α2b + (1 − α2)e, where e is the reflectance of the episcotister and, in the case of "balanced transparency", α1 = α2 = α is the relative amount of surface area that has been removed from it. An assumption of α ≥ 0 leads to the condition i) a > b ⇒ c > d, a < b ⇒ c < d, which states that the order of brightness patterns in a background image cannot be reversed by the superposition of a transparent layer, and an assumption of α ≤ 1 leads to the condition ii) |a − b| > |c − d|, which states that a superimposed transparent surface will reduce the magnitude of the brightness contrast in the areas of the background image over which it lies. It has been shown that additive and subtractive transparency are both physically constrained by these conditions [Beck and Ivry 1988], and numerous experiments with both "experienced" and naïve observers [Metelli 1974a][Beck et al. 1984] have verified that transparency is almost always perceived in images in which these conditions are met and perceived with diminishing frequency as these conditions are more flagrantly violated [Beck and Ivry 1988]. Figure 2.13 gives some examples of images in which these conditions are either both met or individually violated.
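As a compact restatement of the two conditions, the following sketch tests whether a configuration of four reflectances admits a physically valid episcotister interpretation; the function name and the use of strict inequalities are illustrative choices:

    # Metelli's luminance conditions for perceived transparency: a sketch.
    # a, b: reflectances of two background regions seen directly;
    # c, d: the same two regions seen through the putative transparent layer.
    def metelli_conditions_met(a, b, c, d):
        order_preserved = (a - b) * (c - d) > 0     # condition i  (alpha > 0)
        contrast_reduced = abs(a - b) > abs(c - d)  # condition ii (alpha < 1)
        return order_preserved and contrast_reduced

    print(metelli_conditions_met(0.8, 0.2, 0.6, 0.4))  # True: both conditions met
    print(metelli_conditions_met(0.8, 0.2, 0.4, 0.6))  # False: brightness order reversed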

Figure 2.13: An illustration of some of the luminance constraints affecting our perception of transparency. Left: luminance conditions i and ii met. Center: condition ii violated. Right: condition i violated.

Although the episcotister model is useful for describing the luminance conditions that bound our perception of transparency, it is important not to interpret this model too literally. For example, one cannot in general assume α1 = α2, solve for e in terms of a, b, c, d, and then derive conditions bounding the relative values of a, b, c, d based on the range of physically valid reflectance values for e. Beck et al. [1984] present numerous examples of images in which transparency is readily perceived despite various contradictions of reflectance relations derived from the episcotister model. They argue that our perception of transparency does not rely on an accurate interpretation of the physical reflectance and transmission of light by our visual system but rather that this perception is better explained by subjective lightness values than by physically measured brightnesses. They also show that human observers cannot accurately judge the degree of transparency of a surface, and they provide evidence that the perception of the existence of transparency is mediated by a different set of stimulus properties than those determining the degree of transparency perceived [Beck 1985].
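For reference, the inversion cautioned against above (recovering the episcotister parameters from the four reflectances under the balanced assumption α1 = α2 = α) is straightforward algebra on the two model equations; this is a worked restatement, not an additional claim of Metelli's:

\[
c - d = \alpha\,(a - b)
\;\Rightarrow\;
\alpha = \frac{c - d}{a - b},
\qquad
e = \frac{c - \alpha a}{1 - \alpha} = \frac{ad - bc}{(a - b) - (c - d)}.
\]

The point of Beck et al.'s examples is that configurations yielding values of α or e outside the physically valid range [0, 1] can nonetheless look convincingly transparent.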

Nakayama et al. [1989] provide convincing evidence of the achromatic nature of our perception of transparency, which is not surprising in light of the work by Livingstone and Hubel [1987] which describes in great detail the achromatic nature of many other visual phenomena including depth perception. In addition to this apparent insensitivity to color, the subjective perception of transparency also appears to be resistant to luminance inconsistencies. Fukuda and Masin [1994] show that an assumption of “balanced transparency”, in which the perceived opacity α1 of the surface in region c is equal to the perceived opacity α2 of the same surface in region d, cannot in general be supported, nor can it be assumed that a single opacity value α for the overlying surface is perceived based on a weighted combination of α1 and α2. Figure 2.14 attempts to provide examples of these phenomena.

I hope to have shown from the above discussion that a physically accurate implementation of the transmission and reflection of light based on the material properties of a particular transparent medium is not strictly necessary or even sufficient to enable a reasonable understanding of the transparent character of a superimposed surface. To improve the communication of the shape and depth of layered transparent surfaces, our efforts will be better spent in other directions.

Figure 2.14: An illustration of some of the luminance and color contradictions that are tolerated in transparency perception. Left: unbalanced transparency (α1 = 0.2, α2 = 0.8). Right: physically unrealizable color combinations.

2.7: Representations of transparency in art and illustration – basic techniques

If reproducing reality is not the answer, the question then becomes: in what way can we best enhance (add to or subtract from) the surface model or its representation to better meet our goal of effectively communicating the shape and depth of a superimposed transparent surface?

A number of insights can be gained from observations of the modeling and illustration of transparent surfaces by artists in various media. Figures 2.15a and 2.15b give examples of the basic techniques most prevalently used to communicate superimposed surfaces. In figure 2.15a-left [Demarest 1979], the artist has chosen to represent only one of the overlapping surfaces at a time, completely omitting the outer skin surface where the internal organs are visible.


Figure 2.15a: Techniques for the representation of transparency in art and illustration. Left: outer surface omitted [Demarest 1979]. Center: selective shading of object boundaries and ridges [Perez et al. 1984]. Right: silhouette lines and specular highlights along ridges [Cranach 1532].

Figure 2.15b: Texturing techniques for the representation of transparency in illustration. Left: shaded outlines, specular highlights and shadows, texture lines following the intrinsic texture of the represented objects [Perez 1993]. Right: valley lines, texture lines in the direction of maximum curvature [Dodson 1988].


In figure 2.15a-center [Perez et al. 1984], which represents an artist's rendition of nested isointensity surfaces of radiation dose surrounding a target volume in the head, the silhouette edges and ridge regions of the multiple superimposed surfaces are explicitly emphasized, with the remainder of the surface left to be filled in "by default". In figure 2.15a-right [Cranach 1532], the transparent veil is elegantly represented by a few silhouette lines and what appear to be specular highlights in the ridge regions. In figures 2.15b-left [Perez 1993] and 2.15b-right [Dodson 1988], the artists use distributed texture patterns to explicitly communicate the existence of a transparent surface across its full extent. In figure 2.15b-right, the artist explicitly marks the darkened valley regions and orients the line texture approximately in the direction of greatest surface curvature.

The computer graphics literature contains few references to artistic efforts to improve the communication of transparent surfaces. One of the few is an article describing the process of creating a stained glass man for the movie "Young Sherlock Holmes" [Shay 1987]. For this task, the animators concentrated on modeling detail such as scratches, bumps, dirt specks and bubbles, mimicking the natural subtle texturing that sometimes helps us to better see transparent surfaces in reality. They also devoted some effort to manipulating color in physically unrealizable ways to achieve a maximally æsthetic effect.

The addition of opaque elements to a transparent surface displayed in conjunction with an underlying opaque object appears to enable both the shape of the transparent surface and the depth distance between the transparent and opaque surfaces to be more easily and accurately perceived. In the remainder of this dissertation, I will describe the perceptual motivation for and the design and implementation of two shape-based approaches for determining opaque elements that exhibit particular promise for enhancing the comprehensibility of transparent surfaces: for surfaces with well-defined shape-based features, a three-dimensional "line drawing" based upon the valley and ridge lines; for generally smooth surfaces with few shape-based features, a pattern of short strokes in which line orientation is determined by the direction of maximum normal curvature and line length is determined by the magnitude of the normal curvature in this direction.
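As a preview of the second approach in code form, the sketch below generates a single opaque stroke at a sampled surface point; it assumes that the direction and magnitude of maximum normal curvature have already been computed (e.g., by local surface fitting), and the particular mapping from curvature magnitude to stroke length is an illustrative choice rather than the final design described later in this dissertation:

    import numpy as np

    # One stroke per surface sample: a sketch, assuming d1 is the unit
    # direction of maximum normal curvature at p and k1 is the normal
    # curvature magnitude in that direction.
    def curvature_stroke(p, d1, k1, scale=0.02, max_len=0.3):
        """Return the endpoints of a short opaque stroke at point p,
        oriented along d1, with length growing with |k1| (clamped)."""
        length = min(scale * abs(k1), max_len)
        half = 0.5 * length * d1 / np.linalg.norm(d1)
        return p - half, p + half

    # Example: on a cylinder of radius 2, the maximum normal curvature is
    # 1/2 and is directed around the circumference.
    p0, p1 = curvature_stroke(np.array([2.0, 0.0, 0.0]),
                              np.array([0.0, 1.0, 0.0]), 0.5)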




References for section 2.4: Photorealistic rendering of transparent surfaces: a brief history

James Arvo. “Applications of Irradiance Tensors to the Simulation of Non-Lambertian Phenomena”, SIGGRAPH ‘95 Conference Proceedings, pp. 335-342, 1995.

Andrew Glassner. Principles of Digital Image Synthesis, Morgan Kaufmann, 1995.

Roy A. Hall and Donald P. Greenberg. “A Testbed for Realistic Image Synthesis”, IEEE Computer Graphics and Applications, vol. 3, no. 8, pp. 10-20, November 1983.

James T. Kajiya. “The Rendering Equation”, Computer Graphics, vol. 20, no. 4, pp. 143-150, August 1986.

Douglas Scott Kay and Donald Greenberg. “Transparency for Computer Synthesized Images”, Computer Graphics, vol. 13, no. 2, pp. 158-164, July 1979.

Mark E. Lee and Samuel P. Uselton. “More Shadow Attenuation for Ray Tracing Transparent or Translucent Objects”, Graphics Gems II, James Arvo, ed., pp. 283-289, 1991.

F. Kenton Musgrave. “Prisms and Rainbows: a Dispersion Model for Computer Graphics”, Graphics Interface ‘89, pp. 227-234, June 1989.

M. E. Newell, R. G. Newell and T. L. Sancha. “A Solution to the Hidden Surface Problem”, Proceedings of the ACM National Conference, pp. 443-450, 1972.

Andrew Pearce. “Shadow Attenuation for Ray Tracing Transparent Objects”, Graphics Gems, Andrew S. Glassner, ed., pp. 397-399, 1990.

Bui-Tuong Phong. “Illumination for Computer-Generated Pictures”, Communications of the ACM, vol. 18, no. 6, pp. 311-317, June 1975.

Spencer W. Thomas. “Dispersive Refraction in Ray Tracing”, The Visual Computer, vol. 1, no. 2, pp. 3-8, 1986.

John R. Wallace, Michael F. Cohen and Donald P. Greenberg. “A Two-Pass Solution to the Rendering Equation: A Study of Ray Tracing and Radiosity Methods”, Computer Graphics, vol. 21, no. 4, pp. 311-320, July 1987.

Turner Whitted. “An Improved Illumination Model for Shaded Display”, Communications of the ACM, vol. 23, no. 6, pp. 343-349, 1980.

References for section 2.6: The perception of transparency

Jacob Beck, K. Prazdny and Richard Ivry. “The Perception of Transparency with Achromatic Colors”, Perception and Psychophysics, vol. 35, no. 5, pp. 407-422, 1984.

Jacob Beck. “Perception of Transparency in Man and Machine”, Human and Machine Vision II, Azriel Rosenfeld, ed., Academic Press, pp. 1-12, 1985.

Jacob Beck and Richard Ivry. “On the Role of Figural Organization in Perceptual Transparency”, Perception and Psychophysics, vol. 44, no. 6, pp. 585-594, 1988.

Giuseppe Croff. The Veiled Nun (1869), a sculpture in the Corcoran Gallery of Art, Washington, D.C.

James Foley, Andries van Dam, Steven Feiner and John Hughes. Computer Graphics: Principles and Practice, second edition, Addison-Wesley, pp. 754-758, 1990.

Mami Fukuda and Sergio Masin. “Test of Balanced Transparency”, Perception, vol. 23, no. 1, pp. 37-43, 1994.

Walter Gerbino. “Achromatic Transparency”, Lightness, Brightness, and Transparency, Alan L. Gilchrist, ed., Lawrence Erlbaum Associates, pp. 215-255, 1994.

Daniel Kersten, Heinrich H. Bülthoff, Bennett L. Schwartz and Kenneth J. Kurtz. “Interaction between Transparency and Structure from Motion”, Neural Computation, vol. 4, pp. 573-589, 1992.

Daniel Kersten. “Transparency and the Cooperative Computation of Scene Attributes”, in Computational Models of Visual Processing, M. Landy and J. A. Movshon, eds., pp. 209-228, MIT Press, 1991.

Margaret S. Livingstone and David H. Hubel. “Psychophysical Evidence for Separate Channels for the Perception of Form, Color, Movement, and Depth”, The Journal of Neuroscience, vol. 7, no. 11, pp. 3416-3468, November 1987.

Fabio Metelli. “Stimulation and Perception of Transparency”, Psychological Research, vol. 47, pp. 185-202, 1985.

Fabio Metelli, Osvaldo Da Pos and Adele Cavedon. “Balanced and Unbalanced, Complete and Partial Transparency”, Perception and Psychophysics, vol. 38, no. 4, pp. 354-366, 1985.

Fabio Metelli. “The Perception of Transparency”, Scientific American, vol. 230, pp. 47-54, 1974a.


Fabio Metelli. “Achromatic Color Conditions in the Perception of Transparency”, Perception: Essays in Honor of James J. Gibson, Robert B. MacLeod and Herbert L. Pick, Jr., eds., Cornell University Press, pp. 95-116, 1974b.

Albert Joseph Moore. Silver, Fine Art Images, Inc., New York.

Ken Nakayama, Shinsuke Shimojo and Vilayanur S. Ramachandran. “Transparency: relation to depth, subjective contours, luminance and neon color spreading”, Perception, vol. 19, pp. 497-513, 1990.

Ken Nakayama, Shinsuke Shimojo and Gerald H. Silverman. “Stereoscopic Depth: its relation to image segmentation, grouping, and the recognition of occluded objects”, Perception, vol. 18, pp. 55-68, 1989.

G. R. Stoner, T. D. Albright and V. S. Ramachandran. “Transparency and Coherence in Human Motion Perception”, Nature, vol. 344, no. 6262, pp. 153-155, 8 March 1990.

Takeo Watanabe and Patrick Cavanagh. “Transparent Surfaces Defined by Implicit X Junctions”, Vision Research, vol. 33, no. 16, pp. 2339-2346, 1993.

References for section 2.7: Representations of transparency in art and illustration – basic techniques

Lucas Cranach. Venus (1532), in Die Gemälde von Lucas Cranach, Max J. Friedländer and Jakob Rosenberg, Cornell University Press, 1978.

Robert J. Demarest. in “Gallery”, Journal of Biocommunication, vol. 7, no. 3, p. 20, November 1980.

Rebekah Dodson. in “Gallery”, Journal of Biocommunication, p. 12, Winter 1988.

Carlos A. Perez, James A. Purdy and Don Ragan. “Rationale for Treatment Planning in Radiation Therapy”, in Technological Basis of Radiation Therapy: practical clinical applications, Seymour H. Levitt and Norah duV. Tapley, eds., Lea & Febiger, p. 10, 1984.

Vincent Perez. Cover illustration for Science, vol. 260, no. 5110, 14 May 1993.

Jody Duncan Shay. “Young Sherlock Holmes - Anything But Elementary”, Cinefex, Issue #26, pp. 36-55, 1987.