MASTER INNOVA - CHAPTER 33 (Week 33, 30 hours)
3D VISUALIZATION OF CULTURAL HERITAGE
Francesco Gabellone, CNR IBAM
Theoretical content

INDEX
1.0 Introduction
1.1 Technicism and realism in Virtual Reality
2.0 Types of 3D visualization
2.1 Non-photorealistic rendering
    Technical tip. NPR
2.2 Photorealistic rendering
2.3 Illumination of the scene
    Technical tip. Illumination
2.4 Rendering engines and new perspectives
    Technical tip. Remove flickering
2.4.1 Unbiased Rendering
2.4.2 GPU-based Rendering
2.5 Stereoscopic visualisation
2.5.1 Distance between left and right cameras and the objects
2.5.2 Hyper-stereo effect
2.5.3 Hypo-stereo effect
    Technical tip. The set-up of the cameras
3.0 Texture mapping techniques
    Technical tip. Creation of shaders in Cinema 4D
    Technical tip. Mapping of models in Cinema 4D
3.1 Texture mapping in laser scanner applications
3.2 Mapping using camera position information
3.3 Texture baking
4.0 Interactive navigation and cognitive models
4.1 An example of a project for the distance visit of inaccessible artworks
4.2 Survey and contextualization
5.0 Some considerations on graphic engines
6.0 Conclusions

1.0 Introduction

The main focus of Computer Graphics in the last few years has been the search for realism. The old systems of calculation succeeded in creating a sense of depth by combining simple algorithms to visualise solids in perspective, helped only by the use of lines tracing the edges (wireframes). It was a short step from this primitive form of representation to algorithms that made it possible to remove the hidden lines, and subsequently to shading systems that simulated the illumination of the surfaces. The next development was raytracing, which made it possible to manage transparencies, reflections and the calculation of shading. Today, thanks to the introduction of radiosity, synthetic images have reached a high level of realism: with radiosity, the calculation of the illumination benefits from multiple environmental interactions, opening the door to synthetic images that are truly indistinguishable from real ones. Photorealism has now been enthusiastically adopted by all entertainment sectors. Hollywood cinematography and the development of video games both draw on the emotional impact, the "awe effect", that Computer Graphics is able to transmit thanks to hyper-realistic visual effects. For some years now, however, this phenomenon has also been of interest to scientific fields in which research and communication via images are fundamental, involving, for example, the simulation of physical phenomena and the reconstruction of ancient contexts in archaeology. The results of the studies set out below were obtained largely by these techniques, in an attempt to fulfil one of the main aims of modern archaeology: to reconstruct increasingly extensive vistas of ancient civilisations with the highest possible level of reliability and verisimilitude. The study of a monument for the purposes of its reconstruction must however bring together historical and humanistic knowledge with the use of modern digital technologies. Such technologies serve not only to understand and interpret the object in question but also, and above all, to transmit the knowledge acquired to a vast and heterogeneous public, using media that are suitable for all levels of interest and understanding¹. I believe that archaeology is currently experiencing a very exciting period and is attracting the interest of the wider public thanks to the expressive power of the new tools of communication. These have made it possible not only to recreate the shapes and materials of ancient times but also to evoke scenes of everyday life. This is shown by the enormous growth on the web of virtual and thematic museums and virtual collections and galleries, all designed to celebrate and promote knowledge of archaeological and monumental heritage via the use of Computer Vision. In this context 3D images are no longer seen as pure iconic representation, but as tools for bringing together, transferring and channelling in graphic form a large part of the data acquired by scientific research. The result is the representation, via the immediate language of visual signs, of elements that are indispensable for a correct interpretation and reading of scientific information.

1.1 Technicism and realism in Virtual Reality

¹ For an example of the possibilities offered by information technology for the enhancement and use of archaeometric, historical and spatial data, see: GABELLONE F., GIANNOTTA M.T., 2005, Realtime 3d multimedia system for the distance visiting of cultural heritage. A case study on the chamber tombs in via Crispi, Taranto. In the cited example it is possible to search the databases associated with a 3D model to obtain information on constituent materials, grave goods and any other available information on the monument.

This approach to ultra-realistic rendering is part of a trend of the last few years, which have seen considerable progress and growth in the technologies used for the study and visualisation of ancient contexts. The specific solutions produced by software companies, in terms of both realism and Real Time 3D applications, are being taken up by increasing numbers of archaeologists and historians. The use of Virtual Reality enables them to give their research and products a more interesting appearance and to provide non-expert users with communication tools that can have great emotional impact. This approach is not just about seeking the most technologically advanced solution; it is in line with current developments in entertainment and Cultural Heritage communications. Indeed, we are convinced that the effectiveness of Cultural Heritage communications depends to a large extent on freeing representation from the sterile VR interfaces of the 1990s. At that time a narrow technicism was vaunted as evidence of having reached high scientific standards, with results that were defined historically as "cold" – "synthetic images" characterised by their typically computer-generated appearance. The conviction that Virtual Archaeology has no need of extreme realism still persists in some research environments and is often accompanied by reconstructions that may be regarded as merely typological or generic in character. All this in the age of unbiased rendering engines, ultra-realistic Real Time simulators and CG productions that may truly be said to constitute new forms of visual art. Today representation has to aim at realism and the emotional involvement of the spectator, using the same techniques as modern cinematography. In the project presented here, the realism comes in the simple form of a video, which nevertheless has great value from both the emotional and the scientific point of view, making use of laser scanning, camera mapping, particle effects, new rendering engines, image-based modelling and other highly advanced modelling techniques. Communication in this case becomes "spectacular" and a vehicle for high-level content, suitable for all levels of user, but created with tools of great scientific value, founded on interdisciplinary research and dialogue between different forms of knowledge.

2.0 Types of 3D visualization

Since the 1990s, Virtual Reality has made it possible for anyone with a personal computer to access images of heritage sites displayed on the Internet. Recently, VR technologies have seen large improvements worldwide and have become accessible to communities with very different backgrounds, interests and computing platforms. Technologies such as 3D televisions, interactive computer games without physical controllers, 3D movies and cinemas, and web-based virtual tours have become more available and accessible to the public. VR is no longer viewed as a field for specialists; on the contrary, it has become a mainstream medium and a part of global pop culture. The major applications of this technology are in architecture and urbanisation, archaeology and the preservation of cultural heritage sites, the military, visualisation, entertainment, manufacturing, augmented reality, education, tourism, employee training and medicine.
However, the most familiar applications of virtual reality are in the spatial and architectural fields, in which it is used as a tool for presenting historical sites and buildings and for creating walkthrough environments in architectural projects, i.e. for letting the user explore a 3D scene in real time. Virtual reality enables cultural heritage sites, often inaccessible to the public or even no longer existing, to be recreated extremely accurately, so that the recreations can be published in various media. This visualisation allows the "virtual visit" of a site, of heritage buildings and of the objects surrounding them. This capability has brought major advances in the fields of education, tourism and sustainable planning, and thus provides new tools for heritage site interpretation and presentation and for sustainable tourism (Lettelier, 1999).

2.1 Non-photorealistic rendering

The computer graphics field has long been dominated by the desire to produce graphic effects that mimic the look of pictures taken by photographic cameras. This approach, known as photo-realism, is now a mature area of application. Many techniques have been designed that solve the problems of modelling and rendering objects with smooth surfaces and achieving realistic lighting conditions. In contrast, non-photorealistic rendering (NPR) refers to the generation of images that through technological means mimic images created by "the human hand". These types of images include paintings, illustrations, sketches and drawings. This approach is characterised by the use of randomness and arbitrary interpretation of the features being drawn, rather than adherence to realistic properties. Whereas the process of refining a hand-drawn image is one of human creativity, a computer-based approach involves some level of automatic search for optimal features. Sample computer-generated NPR images are shown in the figures given later. All research efforts in NPR share a common characteristic: taking an existing image and producing a new image which possesses some likeness to the original. Like many other research efforts, our rendering techniques fall under the umbrella term Stroke Based Rendering (SBR). A stroke is defined as a function that can be applied to an image structure (Hertzmann, 2003). An image structure, in turn, is defined as a data structure containing a set of pixels that initially represents a "blank canvas". An image structure can subsequently be altered by applying an ordered list of strokes. A variety of SBR algorithms and styles have been identified. These include, but are not limited to, pen-and-ink drawings, tiled mosaics, painterly techniques, stippling techniques and vector-field visualisation. The styles of most interest to us are pen-and-ink drawings, tiled mosaics and painterly techniques. Photorealism is certainly well-suited to the job of documentation, where the task is to record imagery in all its detail for posterity. Crime scene photography is a perfect example: should new evidence or a new theory prompt a reinvestigation, crime scene photos may be probed for new clues. An NPR image, such as a sketch of the scene, could certainly record the relative positions of a purse, garment and fence, but any details missed by the artist at the time would be permanently lost once the items were taken into evidence. Likewise, photorealistic computer graphics are uniquely suited to situations where one seeks live-action imagery (e.g. movies or television) but where it is difficult to stage the actual scene. Here the task is to simulate the desired effect – huge explosions, tidal waves washing over Manhattan, Yoda dropping his cane and grabbing his lightsaber – and a convincing result requires photorealistic computer graphics imagery. Even though we know that these effects are not real, cartoonish NPR could not be employed in these circumstances because it would destroy the illusion necessary for the audience to suspend disbelief and enjoy the story. Depending upon the task at hand, NPR may be the appropriate means of communication. Because computer graphics has traditionally focused on tools for photorealistic rendering, there remain many untapped opportunities for new forms of content for authors.
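As a concrete illustration of the stroke-based rendering idea described above – a "blank canvas" image structure altered by an ordered list of strokes – here is a minimal painterly sketch in Python. It is only a toy under simple assumptions (random circular strokes coloured by sampling a source image; the file names are hypothetical), not any specific published SBR algorithm; it requires the Pillow library.

import random
from PIL import Image, ImageDraw

def painterly(src_path, n_strokes=4000):
    """Approximate a source image with an ordered list of circular strokes."""
    src = Image.open(src_path).convert("RGB")
    w, h = src.size
    canvas = Image.new("RGB", (w, h), "white")          # the "blank canvas" structure
    draw = ImageDraw.Draw(canvas)
    for i in range(n_strokes):
        x, y = random.randrange(w), random.randrange(h)
        colour = src.getpixel((x, y))                   # sample the original image
        radius = max(2, int(12 * (1 - i / n_strokes)))  # coarse strokes first
        draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill=colour)
    return canvas

# painterly("photo.jpg").save("painterly.png")

Because later strokes are smaller, the ordered list refines coarse regions with finer detail, which is the essence of the painterly techniques mentioned above.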
One might even argue that there should be more applications under the NPR heading, because photorealism is fairly narrowly defined. Nonetheless, there remain compelling reasons why computer graphics has focused on realism, not the least of which is the limited set of tools for stylization and abstraction. Of course, there are also various applications uniquely suited to NPR, for example when imagery is used for explanation, as in the technical illustration of a mechanical tool. Since the intended viewer probably has the real tool in front of him, a photographic image is redundant; instead, the artist can better explain the form and function of the item by disambiguating the different components in contrasting colours and suppressing the physically accurate shading that would obscure this clarity. Silhouette and feature lines are also frequently used to further enhance the object's description. Stylization is often also used to illustrate abstract concepts with very succinct imagery: consider an artist communicating an abstract idea to accompany an op-ed piece, originally in The New York Times, concerning the role of higher education in our society. If, instead, we imagine replacing this with a comparable photoreal image, the abstract concept would be lost amid the details; rather, we would be distracted by the absurd image of a tiny man standing astride the mortar boards of two giant college graduates. Finally, storytelling is an example of where stylized and abstract imagery can help to communicate scenes which exist only in the author's imagination. Here NPR is particularly well suited to the fantastical nature of Dr. Seuss's stories; the effect would be wholly different with comparable photoreal imagery. But there is also another reason: children, the intended audience in this case, generally respond best to brightly coloured illustrations, and somehow find them more engaging than photos. If you look down the row of shelves devoted to children's books at your local book store, you'll see that the great preponderance of kids' books use hand-crafted imagery rather than photos.

Technical tip. NPR

The use of non-photorealistic rendering is frequent in the presentation of projects or when there is a need for a precise description of the elements making up a simulation. The most frequently used form of representation in these cases is the sketch style. In general it is sufficient to set a specific shader that describes the ways in which the various parameters are applied. Each software product has its own philosophy of NPR application. In Maxon Cinema 4D everything is highly intuitive and simple: all that is needed to add the Sketch and Toon effect to the scene is a single click on a menu. Before anything else, one has to have an object.
1. Create a cube.
2. Choose Render Settings, click on the "Effect" button and select Sketch and Toon from the menu that appears.

Figures 1 and 2: Set-up of Sketch and Toon rendering in Cinema 4D

In Material Manager you will note that a Sketch and Toon material has also been added. The material that is automatically added when the post-effect is inserted is called the Sketch Material. It controls the line, for example its colour and thickness. In practice, the Sketch Material is used by the post-effect to draw the lines.
3. In Material Manager, double click on the Sketch Material icon to see the settings in the Material Editor.
4. In the Color tab, choose a new colour for the line, for example black. In the Thickness tab, set the value to 4. Render the scene.

Figures 3 and 4: Material Editor and sketch rendering preview

The Sketch and Toon post-effect is the main control centre for the effect. Among other functions, it controls the shading of the post-effect and what materials are used by the effect. In the example below we have personalised the colour of the cube by setting it to pure white, the same as the background.

Figures 5 and 6: Set-up of outline rendering

At this point we need to decide what effect to obtain, i.e. whether we intend to create something geometric, with regular lines, or whether we want to confer a more artistic appearance on our project. For this I advise experimenting with the effects of the various parameters. An example of the final product with a manual drawing effect is shown in Figure 7, in which the backdrop has been pre-treated with a pictorial effect using GMXPhotopainter by Gertrudis Graphics (http://www.gertrudisgraphics.com). The sketch effect described above is then applied in Adobe Photoshop with the various project specifications. In contrast, the subsequent image shows a simulation with NPR in which I have applied very simple Sketch and Toon parameters for the edges, adding cast shadows and object shading in the compositing phase. Also added in this phase are simulations of water jets and human figures, which serve to demonstrate the scale of the project. The same settings are used in the subsequent images, with varying degrees of personalisation.

Figure 7: Example of sketch rendering using multiple effects. Final compositing in Photoshop

Figures 8-10: Various examples of NPR

2.2 Photorealistic rendering

As pointed out at the beginning and also in the previous chapter, despite the great interest in non-photorealistic rendering and its multiple applications, most current efforts in Computer Graphics are focused on the search for realism. In the field of computer graphics, rendering is the process of "production", i.e. the generation of an image based on a mathematical description of a three-dimensional scene, interpreted by algorithms that specify the colour of each point of the digital image. The description is given in a language or a data structure and must contain the geometry, the point of view, information about the optical characteristics of the visible surfaces and the illumination. Research and progress in the field of rendering have been motivated largely by the need to simulate accurately and efficiently the main physical characteristics of materials and, at the same time, the behaviour of objects and of the materials associated with them when they are illuminated. The main physical characteristics and optical phenomena simulated in rendering are:

Shading — variation of the colour and luminosity of a surface depending on the incident light.
Texture mapping — a method for specifying the colour details of a surface via an image that is mapped onto the 3D model.
Bump mapping — a method for simulating irregularities in the shape of a surface by applying an image (bump map) that describes imaginary bumps and wrinkles. It perturbs the surface normals of the object, which are then used in the calculation of the simulated reflection of light.
Normal mapping — a method similar to bump mapping in which the image directly specifies how to perturb the surface normal at any given point.
Displacement mapping — extrusion of a surface along its normals driven by a grey-scale image, producing a real perturbation of the shape of the surface (for example to create a mountain from a flat surface).
Distance fog — attenuation and scattering of light so that objects become progressively more obscured as their virtual distance from the camera increases; a simulation of objects "disappearing into the mist".
Shadows — management of projected shadows.
Soft shadows — partial shadows produced by diffuse light sources.
Reflection — specular or near-specular reflections.
Transparency — transmission of light through an object.
Refraction — deviation of light when passing from one medium to another.
Indirect illumination and Global Illumination — taking account of the inter-reflections of light.
Caustics — accumulation of reflected or refracted light projected in characteristic shapes on other objects (for example the cardioid of light reflected from inside a cylinder, or the irregular shapes moving on the bottom of a swimming pool).
Depth of Field (DoF) — simulation of the progressive loss of sharpness in objects as their distance from the plane of focus grows.
Motion blur — simulation of the blurring of rapidly moving objects, as in photographs.
Subsurface scattering (SSS) — simulation of the behaviour of light that penetrates an object made of translucent material such as wax or human skin.

Ambient occlusion — simulation of the behaviour of light near occluding bodies, where rays of light barely enter and emerge.
Anisotropy — simulation of a material that reflects light differently as it is rotated about its geometric normal.

Photorealistic rendering must necessarily control, accurately and correctly, the characteristics of the materials. The handling of materials is generally governed by a mathematical function known as the BRDF, the "Bidirectional Reflectance Distribution Function". The BRDF gives the reflectance of a target as a function of illumination geometry and viewing geometry. The BRDF depends on wavelength and is determined by the structural and optical properties of the surface, such as shadow-casting, multiple scattering, mutual shadowing, transmission, reflection, absorption and emission by surface elements, facet orientation distribution and facet density. The BRDF simply describes what we all observe every day: that objects look different when viewed from different angles and when illuminated from different directions. It is for this reason that painters and photographers have for centuries explored the appearance of trees and urban areas under a variety of conditions, accumulating knowledge about "how things look", knowledge that today we would call BRDF-related. Like painters, virtual reality programmers also need to be concerned about the BRDFs of the surfaces they use. The BRDF encompasses the main optical properties of an object, such as diffusion and whether it looks metallic, vitreous or transparent. Every parameter of a material's BRDF is linked to all the others, so that if an object is not wrinkly it is probably reflective: a highly polished metallic sphere has no bump effects and will almost certainly be highly reflective. The BRDF parameters are calculated in a physically correct way, and so it is very difficult to create "impossible" surfaces by this method. In contrast, with no control of the BRDF, simple materials can have inconsistent characteristics.

The main types of algorithm used in the rendering process are:
Ray tracing — linked to mathematical probabilities.
Radiosity — linked to the mathematics of finite elements.

Ray tracing produces results similar to ray casting and scanline rendering, but adds the accurate simulation of advanced optical effects, such as reflection and refraction, while remaining efficient enough to be used when a high-quality result is desired. In ray tracing, a ray of light is traced in a backwards direction: we start from the eye or camera and trace the ray through a pixel in the image plane into the scene to determine what it hits. The pixel is then set to the colour values returned by the ray. Rays are tested against all objects in the scene to determine whether they intersect any of them; if a ray misses all objects, its pixel is shaded the background colour. Ray tracing handles shadows, multiple specular reflections and texture mapping in a very easy and straightforward manner. As a rendering method, radiosity was first presented in 1984 by researchers at Cornell University (C. Goral, K. E. Torrance, D. P. Greenberg and B. Battaile) in an article entitled "Modeling the Interaction of Light Between Diffuse Surfaces". The theory had been used in engineering to study the transmission of heat since 1950. One simple method for verifying the effects of radiosity is the Cornell Box.
This is a test aimed at determining the accuracy of rendering software by comparing a rendered scene with an actual photograph of the same scene, and it has become a commonly used 3D test model. It was created by Cindy M. Goral, Kenneth E. Torrance, Donald P. Greenberg and Bennett Battaile at the Cornell University Program of Computer Graphics for the paper cited above, presented at SIGGRAPH '84.

Figure 11: A typical Cornell Box scene for radiosity simulation
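To make the BRDF concept above more tangible, the sketch below evaluates a simple Lambertian diffuse term plus a normalized Blinn-Phong specular lobe. This is a minimal teaching model under stated assumptions, not the full wavelength-dependent BRDF described in the text; all parameter names are illustrative.

import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def brdf(n, wi, wo, kd=0.8, ks=0.2, shininess=50.0):
    """Reflectance for light direction wi and view direction wo at surface normal n."""
    n, wi, wo = normalize(n), normalize(wi), normalize(wo)
    diffuse = kd / math.pi                                   # ideal diffuse (Lambert)
    h = normalize(tuple(a + b for a, b in zip(wi, wo)))      # half-vector
    specular = ks * (shininess + 2) / (2 * math.pi) * max(dot(n, h), 0.0) ** shininess
    return diffuse + specular

# The same surface returns different reflectance as the viewing geometry changes,
# which is exactly the view dependence the BRDF encodes:
print(brdf((0, 0, 1), (0, 0.5, 1), (0, -0.5, 1)))  # viewer near the mirror direction: high
print(brdf((0, 0, 1), (0, 0.5, 1), (0, 0.5, 1)))   # viewer back toward the light: lower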

Radiosity is often identified with Global Illumination (GI), which takes account of the interaction of light between separate objects within a scene. In simple terms, it reproduces the reflection of light onto objects (indirect illumination), as happens in the real world. GI also allows light to reach areas (such as corners) that direct light sources would not reach; in a scene rendered without GI, many such areas remain black because no light source reaches them. Although it seems simple, the process is quite complicated: when a scene is rendered using conventional light sources, light is emitted only by a limited number of sources (the lights themselves). In contrast, when GI is applied, light can also be reflected by other parts of the scene. To simplify, the two main parameters in the calculation of GI (in almost all software implementations) are the number of stochastic samples generated and the depth of diffusion of the rays.

Figure 12: Number of stochastic samples set to 3

When a ray strikes a point on a surface, new rays depart from it in a dome shape. The Stochastic Samples value determines the number of these rays. If your scene has many areas that are not directly struck by a light, such as the space beneath a table or dark corners, you may wish to increase the number of stochastic samples. As soon as the samples strike a surface, they are irradiated in all directions. Consequently, if the light has to bounce around two corners, very few samples will reach those areas, because most are scattered before arriving. By increasing the number of stochastic samples, you can obtain geometry that is appropriately lit by indirect GI.
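The interplay between these two parameters can be sketched with a toy Monte Carlo gather. The "scene" below is purely illustrative and not the algorithm of any particular engine: a ray escapes to a sky of radiance 1.0 with probability 0.3, otherwise it bounces off a grey wall of albedo 0.5. To keep the toy cheap, each stochastic sample continues as a single path rather than branching; a fully branching gather would cost samples ** depth rays, which is why deep diffusion in closed scenes is so expensive.

import random

ALBEDO, ESCAPE_P, SKY = 0.5, 0.3, 1.0       # illustrative scene constants

def trace_path(depth):
    """Follow one stochastic ray for at most 'depth' diffuse bounces."""
    if depth == 0:
        return 0.0                           # ray abandoned once depth is exhausted
    if random.random() < ESCAPE_P:
        return SKY                           # ray leaves the scene and sees the sky
    return ALBEDO * trace_path(depth - 1)    # hit the grey wall: one more bounce

def gather(samples, depth):
    # average over the "dome" of stochastic samples leaving a shading point
    return sum(trace_path(depth) for _ in range(samples)) / samples

random.seed(1)
for depth in (1, 2, 4, 8):
    print(depth, round(gather(samples=20000, depth=depth), 4))
# More samples reduce noise; more depth lets light from further bounces arrive,
# converging toward the analytic value
# SKY * ESCAPE_P / (1 - ALBEDO * (1 - ESCAPE_P)) = 0.4615...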

Figure 13: The diffusion depth

The depth of diffusion is the maximum number of reflections and refractions for each ray of light, together with the relative distance that the ray may travel. Increasing this value may lead to longer rendering times, especially for scenes in closed environments, where the rays may strike many surfaces rather than exit the scene after one or two bounces. For this reason, higher depth of diffusion values should usually be reserved for open-air scenes, where most of the light rays can leave the scene more or less directly. Naturally, today there are numerous rendering engines that can be used autonomously or within the main commercial software packages in the form of plug-ins. A list of the main software applications and their characteristics is provided in the next chapter.

2.3 Illumination of the scene

Visible light is a small portion of the spectrum of electromagnetic waves. Electromagnetic radiation propagates through space at varying wavelengths, giving rise to gamma rays, x-rays, ultraviolet rays, infrared rays, radio waves, etc.

Figure 14: The visible spectrum, the visible portion of the electromagnetic spectrum

"White light" is the combination of all the wavelengths in the spectrum of visible frequencies, and when we perceive a red object, as in the figure below for example, it is because the white light is transformed into red light when it comes into contact with the surface. The material absorbs all the light except red, which is then reflected. We can see an object because the light is reflected from the surface towards our eyes. In practice, all the light that we perceive in nature is reflected; in interstellar space light is perceived only if it strikes an object. From these brief considerations it follows that light does not only serve to make objects visible, but also to emphasise their shape and to represent their characteristics in a more or less real and natural fashion. Expert control of light confers a characteristic "look" on our scenes, giving them a recognisable imprint. It is perhaps the most difficult aspect to manage, since it is precisely the illumination that determines a 3D scene's level of realism.

Figure 15: Colour perception; the subtractive colour model used in colour printing

In the previous chapter we saw that in the case of rendering by raytracing, the inter-reflections of the light, i.e. the contribution of the bouncing light rays and the environment, are not calculated. For this reason an object will be illuminated only if the light strikes it directly. Before the advent of radiosity engines and their operational implementation in 3D software, the effect of the inter-reflection of light had to be simulated indirectly by means of very simple empirical methods. The first of these, also widespread in traditional photography, is the so-called three-point method. The main aim of three-point illumination is to obtain a uniform distribution of light on the subject and to avoid deep shadows: to obtain this result we need three lights. Think of a simple scene: a human figure in a study, a room and three lights.

Technical tip. Illumination

The first light (or lamp) of which we speak is the "key light". As the name suggests, it is the key to the whole illumination set-up. The key light functions as the main source of illumination in our scene. Usually the best way to position the key light is at an angle of 45° to the frame of the video-camera. The reason why it is not positioned directly opposite the subject is that we want to add a certain level of definition to the edges of the face and shoulders; aiming the light straight at the face would give the subject the typical "overexposed" look of a photograph taken with the flash. A crude error that should never be made is to aim the key light in the same direction as the camera. This seems to illuminate the subject well, but does not help in any way to reconstruct the subject's three-dimensional appearance. Never aim a key light in the same direction as the camera! Once you have pointed the key light at the correct angle, you can see that your subject is better defined and that the light tends to envelop the face; even so, you will still see dark shadows on the opposite side of the face. The next step is to fill these shadows, and to do this we shall use the aptly named "fill light". This light should also be positioned at 45° to the video-camera, but this time on the opposite side from the key light. It is also better to make sure that the fill light is less intense than the key light, so that it doesn't compete with it. There are various ways to do this correctly: one is obviously to use a lamp of lower power (lower intensity); another is to position the fill light at a greater distance from the subject (light intensity falls off with the square of the distance, so even moving the lamp a small amount has a considerable effect on the illumination); the third is to use a "neutral density gel".

The key light is still throwing some shadows: indeed the idea is not to eliminate all the shadows, but to soften them considerably, ensuring that they assume a pleasant and natural appearance. Now, with the key light and the fill light we have a homogeneous and pleasant light, but the overall result is a bit flat: the subject tends to "merge" with the background. To make it stand out better from the background we will use a third light, called the "back light": the best place to position it is naturally behind the subject. When setting the angle of illumination of the third lamp it should be remembered that it must be positioned outside the frame, so that the light illuminates only the subject and does not point at the lenses of the video-camera. This light should also be of a rather low intensity. The aim of the back light is to shine on the subject and to create an aura of light around the shoulders and the head. When used together with the other two lights, the back light causes the subject to stand out better from the background and draws attention to where it belongs: the person being represented.
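The placement rules above can be written down as a small geometric sketch: key and fill at 45 degrees either side of the camera-to-subject axis, back light behind the subject, fill dimmer than key. The coordinates and intensity values are illustrative assumptions, not settings from any particular package.

import math

def three_point_setup(subject=(0.0, 0.0), camera=(0.0, -4.0), distance=3.0):
    # angle of the camera-to-subject axis
    base = math.atan2(subject[1] - camera[1], subject[0] - camera[0])
    def place(angle_offset, dist):
        a = base + math.radians(angle_offset)
        # position a light 'dist' away from the subject, on the given bearing
        return (subject[0] - dist * math.cos(a), subject[1] - dist * math.sin(a))
    return {
        "key":  {"pos": place(+45, distance),       "intensity": 1.0},
        "fill": {"pos": place(-45, distance * 1.5), "intensity": 0.5},  # farther and dimmer
        "back": {"pos": place(180, distance),       "intensity": 0.4},  # behind the subject
    }

for name, light in three_point_setup().items():
    print(name, light)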

Figure 16: The three-point illumination technique

The second method of illuminating a scene, when the radiosity calculation is too resource-heavy or problematic, is the "dome light" method. This is still used frequently today and very often produces excellent results.

Figure 17: A “dome light” simulating the GI

There are various ways of creating a dome light. The simplest is to position omni lights near the vertices of a hemisphere. A dome light such as the one shown in Figure 17 contains 74 omni lights with a very low intensity (2.8); soft shadows are activated and are also set to a very low value (shadow resolution 100x100). Naturally these parameters vary between software applications, and both the number of lights and their intensity must be calibrated to fit the scene. There are also numerous plug-ins for the various applications that can be used to generate and manage a dome light for the simulation of GI. For example, 51grEy is a Cinema 4D plug-in by Boris Cargo which generates a lighting set-up based on a Polygon Object's points and Texture Map.
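The hand placement of omni lights on a hemisphere can also be automated. The sketch below spaces the lights with a golden-angle spiral, which gives an even spread; the light count and intensity mirror the 74-light, 2.8-intensity example above, while the radius and the dictionary fields are illustrative assumptions.

import math

def dome_light(n_lights=74, radius=10.0, intensity=2.8):
    golden = math.pi * (3.0 - math.sqrt(5.0))    # golden angle, for even spacing
    lights = []
    for i in range(n_lights):
        z = (i + 0.5) / n_lights                 # height in (0, 1): upper hemisphere only
        r = math.sqrt(1.0 - z * z)
        theta = golden * i
        pos = (radius * r * math.cos(theta), radius * r * math.sin(theta), radius * z)
        lights.append({"pos": pos, "intensity": intensity, "soft_shadows": True})
    return lights

for light in dome_light()[:3]:                   # inspect the first few lights
    print(light)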

Figure 18: Simulation of the GI with a dome light

As you can see in Figure 18, the result is very good, despite the absence of GI and the Key light.

2.4 Rendering engines and new perspectives

As has already been pointed out, the quality of the images in 3D scenes depends on many factors. The most important of these is the quality of the illumination, followed by the quality of the set-ups for reproducing the materials and the quality of the models. The sense of realism is however indissolubly bound to the quality of the rendering engine. It is universally acknowledged that the top spot here has been occupied for many years by Pixar's prize-winning RenderMan, which is used throughout the CGI industry for the creation of stunning visual effects. This software includes numerous high-level characteristics but requires a long training period, associated mainly with achieving the right definition of the materials and setting up the numerous parameters, which can scare off the inexperienced.

Figure 19: RenderMan rendering

Today RenderMan is a hybrid renderer offering the latest advancements in ray tracing, as well as excelling in those areas that have traditionally been RenderMan's strengths: displacement, motion blur, extreme memory efficiency and unmatched flexibility. RenderMan allows these different techniques to be integrated in creative ways. With the recent addition of Physically Plausible Shading, RenderMan delivers highly realistic lighting with minimal set-up, including full support for multi-bounce ray-traced global illumination and ray-traced subsurface scattering. New additions to the shading language and enhancements to the core renderer make these traditionally expensive effects a production reality. There are however many alternatives to RenderMan on the market that are enjoying great success among 3D artists precisely because they are easier to use. Examples include V-Ray by Chaos Group, Mental Ray by Mental Images, Brazil by McNeel and Final Render by Cebas, to cite but a few. The technologies are very similar and the quality of the final product is now broadly the same. All these products are integrated into the main 3D applications, and the main differences, to be borne in mind when choosing between them, come down to a small number of features. The most important are of course ease of use, or rather the learning curve, calculation times and, lastly, overall quality, understood as the absence of artefacts, precision in GI calculation, camera effects and antialiasing. Having used almost all the above-mentioned products, I can definitely say that familiarity is the most important thing: once one has become used to even a complicated application, one forgets the initial complexity, and the repetition of habitual commands makes that software the best one to use.

Technical tip. Remove flickering

One element that should certainly not be underestimated when computing a CG scene is the calculation of GI animations with moving objects and cameras. In this type of scene a defect known as flickering is often present. This anomaly affects scenes in which contiguous frames present small differences, due for example to an imperfect anti-aliasing calculation, to shading defects, to z-buffer problems and, finally, to defects in the GI calculation. Unless you use Pixar's RenderMan, controlling flickering will always be a significant problem in the production of CG animations. The first anomalies are easily removed by adopting the following measures:

Flickering due to antialiasing. This problem is seen when the AA parameters are set too low or are calculated with an inappropriate algorithm. Very often the elements that suffer most from this defect are lines, materials with very detailed textures, textures applied with low sampling, materials with excessive contrast and the edges of elements that are very thin or intricate. This happens above all because in the anti-aliasing calculation each frame is slightly different from the next. The result is an irritating fluttering or flickering in the animation, which is strongly accentuated when the animation is displayed on a television. The solution is thus not to overdo the contrast in textures, to avoid super-detailed surfaces and to increase the AA values. This will produce images that are slightly out of focus but free of flickering. In live-action footage the presence of motion blur, i.e. the blurring of rapidly moving objects, naturally eliminates this defect; all that is needed in CG is to simulate reality, i.e. the behaviour of a live video-camera. The frames must appear soft and the textures will always require MIP-type sampling (multum in parvo, "a lot in a little").

Flickering due to shading defects. This problem is linked to a lack of continuity in surfaces or, more frequently, to the adoption of soft shadows with an inappropriate Bias level. The distance from the object to the shadow also depends on the distance from the light source to the object (so-called relative bias). With relative bias, i.e. bias set without regard to the scale of the scene, the further the light is from the object, the further the shadow will be from the object. When the camera approaches a particularly small object, the gap between the object and its shadow becomes visible (Figure 20a). This error can be corrected by setting a lower value. In certain cases the Bias value may instead be too small (for example with very large objects), which results in the object projecting its shadow onto itself (Figure 20b). In such cases, set the Bias to a higher value.

Figure 20: Some problems of shadow and Bias parameters

Flickering due to z-buffer problems. In three-dimensional computer graphics, z-buffering is a technique for managing the coordinates corresponding to the depth of an object, i.e. its distance from the camera. Z-buffering is one possible strategy for resolving the problem of visibility in three-dimensional rendering, i.e. for establishing which parts of the objects in the scene are visible and which are hidden by other elements in the foreground. Naturally, when two or more coplanar objects are present in a scene, the rendering engine encounters an ambiguity: it cannot know which of the coplanar objects it must render. This ambiguity will almost certainly generate an annoying flicker. The defect is easily eliminated by deleting the duplicate polygons and/or surfaces that are superimposed one on top of the other.

Flickering due to GI problems. The last and most difficult problem to eliminate. In general the solution is to pre-calculate the GI in a cache, but this is often ineffective. There are numerous tutorials on calculating objects in motion without flickering, but based on my experience in complex projects I can highlight some of the issues. Pre-calculation in the cache generates files that store information on illumination and are loaded when the process is concluded; subsequently one can move ahead with the final rendering. This pre-calculation must necessarily be performed on a single machine and subsequently passed to your render farm. If you have to pre-calculate a very heavy scene, then you must expect extremely long processing times. An alternative to pre-calculation could be, for example in V-Ray, a simple calculation with Brute Force + Light Cache: it is sufficient to bring the BF parameters to values around 30 in order to obtain animations without flickering, granularity or GI artefacts.
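For the z-buffer case, the cure named above – deleting duplicate superimposed polygons – is easy to script. The sketch below treats a face as a tuple of vertex indices and drops any face that reuses the same vertices in any order; it is a minimal illustration and does not catch coplanar faces built on different vertices.

def remove_duplicate_faces(faces):
    seen, cleaned = set(), []
    for face in faces:
        key = tuple(sorted(face))      # order-independent signature of the face
        if key in seen:
            continue                   # drop the superimposed duplicate
        seen.add(key)
        cleaned.append(face)
    return cleaned

faces = [(0, 1, 2), (2, 3, 0), (1, 2, 0)]    # the third face duplicates the first
print(remove_duplicate_faces(faces))          # [(0, 1, 2), (2, 3, 0)]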

2.4.1 Unbiased Rendering

In the last few years our aim of representing Virtual Heritage with the highest possible realism – and with the minimum effort – has been partly fulfilled. I refer to the advent of unbiased engines, which have brought about a revolution in the field of photorealistic 3D representation. This approach is now widely used and the last few years have seen the appearance of numerous companies providing engines based on this philosophy. We have seen that calculation of the GI is a rather complex operation which – precisely because of the infinite interactions between objects and environment – is in need of a "simplification", an approximation of the number of stochastic samples. Instead of an infinite number of bouncing rays and different depths of diffusion, a compromise is sought that reduces the accuracy of the final product but also reduces calculation times. Unbiased rendering is a rendering technique that does not introduce systematic errors, or bias, into the radiance approximation. Specifically, biased engines usually make use of algorithms optimised for calculating global illumination, caustic reflections, refraction, sub-surface scattering and so on, introducing artefacts. Unbiased engines, in contrast, use much more accurate physical and optical models and take account of the light interactions between the elements of the scene. Examples of phenomena included in the calculations of unbiased rendering engines are spectral dispersion, Fraunhofer diffraction, Newton's rings, the polarisation of electromagnetic radiation, the "pinhole camera" effect and optical aberration. Only the increasing power of computers, by means of multi-core processors and render farms, has made it possible to implement such complex calculation algorithms and to make the processing times of these engines acceptable. Unbiased engines are used mainly in architecture and when seeking quality without compromises. But to my mind the most interesting aspect is the possibility of correctly simulating light. In the example shown here, a scene I rendered in 2006 with a GI-based engine (the engine embedded in Cinema 4D) is compared with the same scene rendered in 2012 with an unbiased engine (Next Limit Maxwell Render). The differences are considerable, but the most important point about these engines is that it is not the user who decides the balance of the lights; rather, it is the software that meets our need to "verify" the quantity and nature of the illumination in a scene.

Figure 21: Top left, raytracing rendering; top right and below, unbiased rendering (Metapontum Sanctuary)

In the three images shown below it is possible to verify the solar exposure of the temples located in the sanctuary of Metapontum. This is a normal operation even in the “oldest” CAD/BIM applications, but in this case it requires no subjective intervention.

Figure 22: Metapontum Sanctuary, simulation at various hours of the day

Naturally, the correct calculation of light and materials requires considerable precision in terms of the colours represented, an onerous task in digital restoration or in simulations involving works of art, whose reproducibility with reliable colour correspondences is of fundamental importance. But what happens when it is necessary to calculate an animation? These engines are notoriously very slow, precisely because of the considerations discussed above. Unbiased calculation proceeds by successive steps, each of which improves, in a random order (based on a stochastic principle), the previous image. By definition, unbiased calculation converges only as the number of samples tends to infinity. The successive improvements increase the details and reduce the overall noise. This means that it is necessary to establish a level of quality for the rendering at which noise is minimal. The same image calculated twice with an identical Sampling Level (the level of quality of the rendering) displays differences due to random noise; in animations this produces flickering, the number one enemy. Naturally it is always possible to reduce noise in post-production; for example I use the well-known DeNoise by Red Giant as part of the After Effects package, which is very effective and at times even miraculous. It is also possible to use a render farm to distribute the rendering, but in complex scenes you should budget for long calculation times.
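Why a fixed Sampling Level flickers can be seen in a toy unbiased estimator. The integrand below is a stand-in for the light arriving at one pixel (true value 1/3); estimating the same pixel over twenty "frames" with different random seeds shows a mean that stays correct (no bias) and a frame-to-frame spread – the flickering – that falls only as the sample count grows.

import random
import statistics

def estimate_pixel(n_samples, seed):
    rng = random.Random(seed)
    # stand-in integrand: the integral of x^2 over [0, 1] equals 1/3
    return sum(rng.random() ** 2 for _ in range(n_samples)) / n_samples

for n in (16, 256, 4096):
    frames = [estimate_pixel(n, seed) for seed in range(20)]   # 20 "frames"
    print(n, round(statistics.mean(frames), 4), round(statistics.stdev(frames), 5))
# The mean hovers near 1/3 at every sampling level; the spread, which the eye
# perceives as flickering between frames, shrinks roughly as 1 / sqrt(n).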

Figure 23: Apollonion of Syracuse, unbiased rendering

2.4.2 GPU-based Rendering

We all know that in Computer Graphics innovations follow each other in quick succession. In the last year or so the community of 3D software users has seen numerous advances in both real-time visualisation in the viewport (the 3D work window) and the final calculation. GPU-based rendering is clearly the outcome of an old philosophy, the one that gave rise to Real Time simulators: being able to visualise a scene without needing to wait for a rendering process, as normally happens in video games. This is possible thanks to the use of graphics libraries, generally OpenGL, DirectX or Shader Model, which offset the CPU processing load by directing a large part of it to the graphics card, i.e. the GPU (Graphics Processing Unit). To clarify this concept I would like to illustrate a real-life commercial example, the well-known Furryball by Art And Animation studio, which was developed in technical collaboration with Nvidia/AMD ATI.

This engine is integrated exclusively into Autodesk Maya. The new release 3.0 (2012) introduces certain new features that at first glance seem revolutionary: Global Illumination, Area Lights and reflections together, totally in real time. What is truly impressive is that for the first time the concept of "real time" is not used spuriously, as is often the case with rendering engines that achieve this goal only partly and only by means of extremely powerful hardware. Observing the various video tests, the ability to move totally reflective objects inside the scene, with immediate consequences for the surrounding surfaces, is impressive. Also noteworthy is the ability to implement complex shaders for Maya Hair-type filaments without in any way affecting the immediacy of the dynamic response in the animation phase.

The numerous advanced-level characteristics make it possible to render 1920x1080 Full HD images in a few seconds, using Global Illumination, DoF and complex materials such as Sub-Surface Scattering (SSS) and shaders involving substantial amounts of displacement. In the benchmarks declared by the company, tests show that the use of an NVidia GeForce GTX 680 graphics card generates such a striking improvement in performance as to make CPU-based rendering totally obsolete and useless for production purposes. On average, the use of software such as Furryball increases the speed of calculation by 30 to 300 times (depending on the type of image) compared to CPU-based rendering. Another important point is the revolution in terms of the hardware investment necessary to make this technology work properly. The engine can be used only with graphics cards that support DirectX 11, OpenGL 4 and Shader Model 5.0. This means that an NVidia Quadro FX 4800, despite its high-end price, does not satisfy the minimum requirements, while a more modest card (such as a GeForce GTX) gives excellent results in terms of overall performance. The level of efficiency achieved by GPU-based architectures such as this one should prompt reflection on the possibility of using affordable hardware together with intelligent, high-performing software solutions.

2.5 Stereoscopic visualisation

Stereoscopy is the perception of relief in an object that arises from binocular vision. It exploits the fact that, because of the different positions of the two eyes on the human face, each eye sees a slightly different image. Our eyes occupy two different positions in space, about 65 mm apart. The brain thus receives two slightly different images, which it processes to create a single image containing a precise perception of depth and of the other characteristics that determine an object's position, distance and size. For us, seeing a solid object in three dimensions is so natural that we do not even think about it. Just as normal is seeing "flat" images, i.e. those without the third dimension, such as drawings and photographs. Have you ever wondered why this happens? It is so natural and obvious that many have never even thought about it. To try and understand it, do this experiment: while you observe a solid object, such as a bottle or a chair, cover one eye with your hand. In this case too the image loses the third dimension. It is true that clues remain, in the form of shadows and the variation of size with distance, which help to give you an idea of the volume of things and their position in space. However, using just one eye, you will have no true perception of depth: if you attempt to grasp an object, you may misjudge its distance. If you then uncover your eye you will see the third spatial dimension return, and objects become easier to locate. Stereo-photography, or stereoscopy, was first proposed by Wheatstone in 1838, not long after the advent of traditional photography. In 1891 Louis Arthur Ducos du Hauron proposed a method for obtaining stereoscopic images printed on a single support (anaglyphs). Lumière studied and perfected this technique, creating animated anaglyphs which, when observed with special glasses, created the three-dimensional effect in the cinema. 3D photography was widely practised in the last century and there are many such photographs from the First World War. Today the low level of interest in this type of photography is due to the fact that the viewing system is not very practical (requiring glasses, visors, etc.). It is still used in cartography, especially in anaglyphic prints of map contours, produced with a mechanical instrument (the anaglyphograph) or a computer. To create photographs in three dimensions it is thus necessary to obtain two images of the same object, taken from different directions. The angle between the two shots can be similar to that formed by the eyes, or even greater (hyperstereoscopy) for objects with very few surface features. At the professional level, special cameras with two lenses are used. In the absence of equipment of this type, conceived specifically for stereo-photography, a normal camera can also be used: two shots are taken, moving the camera between one shot and the other. When doing this it is necessary to move the camera crab-wise, keeping it pointed at the main subject, and to keep vertical frame variations to a minimum. For moving subjects, two paired cameras operating together are necessary. Even drawings can be seen in 3D: two perspective or axonometric views from two different directions are needed. This operation is simplified when drawing with a computer (CAD), since it is possible to create the two images simply by rotating the drawn object.

2.5.1 Distance between left and right cameras and the objects


To appreciate the stereo effect, it is necessary to position the right and left cameras at an appropriate distance from each other, so as to reproduce the space that separates the human eyes. If the aim is to obtain images that respect reality as much as possible (natural stereoscopy), the rule is to place the cameras 6-7 cm apart (the distance that typically separates the eyes in adults). In some circumstances, however, it is desirable to increase or reduce the distance between the cameras; for example, when seeking a stereoscopic effect on very large objects, such as a whole city. In this case a distance of 6 cm is negligible with respect to the dimensions of the object, and it is better to place the two cameras a few metres apart. Conversely, for a stereoscopic view of a molecule, the distance between the cameras needs to be no more than a few Ångströms.

Figure 24: Top, various stereoscopic effects; bottom, distance between cameras in accordance with the nearest object.

2.5.2 Hyper-stereo effect


There are cases in which one wishes to increase the stereoscopic effect, for example to obtain a greater visual impact. This type of effect is often known as "hyper-stereo", to stress that the perception of depth has been enhanced compared to normal vision. To obtain hyper-stereo images it is sufficient to increase the distance between the cameras. The hyper-stereo effect should however be used with caution, because the increased separation simulates the point of view of a giant: in practice, objects tend to appear smaller than normal. For example, a 3D model of a woman can look like a toy doll if rendered with a pronounced hyper-stereo effect. A good rule is to limit the use of the hyper-stereo effect to objects of large dimensions (entire cities, planets, landscapes).

2.5.3 Hypo-stereo effect

Unlike the previous case, the hypo-stereo effect is obtained by placing the two cameras closer together. The effect is diametrically opposed to that of hyper-stereo images: objects look larger than they would in real life, reproducing in some way the point of view of small animals or insects.

Technical tip. The Set-up of the cameras

From the operational point of view the set-up of the cameras in a 3D scene is fairly simple. However, care must be taken with certain steps, which have been shown empirically to confirm the theoretical notions described above. The first parameter to set is the camera target, i.e. the point of interest, the area on which the attention of the director is focused. The target is extremely important because the following stereoscopic results are obtained with respect to it:

1. The objects in the cameras' plane of focus have no disparity; that is, the right image and the left image coincide, superimposing themselves perfectly. These objects will be coplanar with the plane of projection of the film clip. On a stereoscopic display, they will appear in the same plane as the surface of the display itself;

2. The objects behind the target will appear to be beyond the plane of projection;

3. The objects in front of the target will appear to emerge from the screen, towards the viewer.

The second parameter to check is the distance between the cameras (B in Figure 24), which produces a horizontal disparity in projection. The greater this distance, the more pronounced the effect of stereoscopic depth, but only up to a point: the stereoscopic effect can easily be compromised if this value is too low or too high. We suggest, from empirical observation, that B should correspond to about 1/30 of the distance (L) between the camera and the nearest object, but this value must always be verified, because with the same camera settings different scenes can give different stereoscopic results. The stereoscopic effect must always be checked in the production phase; for this reason in cinematography the filmed material is always output directly to a stereoscopic display on set, which a specialised operator monitors continuously. In general the parallel-camera solution is used only rarely, as it produces less interesting stereoscopic effects than cameras pointing at a target: in real life we tend always to look towards something specific and hardly ever at infinity.
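The 1/30 rule lends itself to a quick calculation before setting up the rig. Below is a minimal Python sketch (function and variable names are our own; the 1/30 ratio is only the empirical starting point discussed above and must still be verified scene by scene):

```python
def interaxial_distance(nearest_object_distance, ratio=1.0 / 30.0):
    """Suggest the camera separation B from the distance L to the
    nearest object, using the empirical 1/30 rule described above.
    Units are whatever L is measured in (here: centimetres)."""
    return nearest_object_distance * ratio

# A subject 2 m away suggests B of about 6.7 cm, close to natural
# stereoscopy; a facade 30 m away suggests B of about 1 m (hyper-stereo).
print(interaxial_distance(200))    # -> 6.666...
print(interaxial_distance(3000))   # -> 100.0
```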


Figure 25: A typical set-up of stereo-cameras in Cinema 4D

The filming set-up with parallel optical axes is interesting because of the physical simplicity of the configuration and the formal simplicity of its geometric properties. It also has interesting properties concerning the elimination of certain distortions. It does however suffer from certain limitations concerning the space perceived in the visualisation phase, which can be resolved with adequate – and simple – intervention in post-production: specifically, it is necessary to shift the images making up the stereoscopic pair horizontally. As we have already pointed out, the configuration with convergent optical axes is the one that most closely resembles the geometry of human vision, and it has interesting properties, such as allowing us to perceive the element of interest with part of the scene visualised "on this side" of the screen and part "on the other side" of it. It thus natively exploits all the perceivable space in the visualisation phase, without requiring post-processing intervention. However, it suffers from some distortions, inevitable precisely because they derive from the geometry of the filming configuration itself. The pros and cons of the different filming configurations are not easily assessed on paper, above all concerning the tolerance of the observer with respect to any distortions.

Figure 25 shows a standard set-up for a stereoscopic scene with the target. Note that the left camera is the "child" of the right camera, while both cameras and the TARGET are "children" of a null object. This arrangement makes it possible to move the two cameras with the target always at the same distance. This is useful for following an object in motion, but it also makes it possible to move the right camera independently. Naturally, movement must be assigned ONLY to the right camera; the left camera will follow the movement always at the same distance "B", which is set in relation to the target and the nearest object.
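For reference, the same hierarchy can also be built through Cinema 4D's Python scripting API. The sketch below is only indicative (the helper and object names are our own; it assumes the standard c4d module available in the Script Manager, where doc is predefined):

```python
import c4d

def build_stereo_rig(doc, b=6.5):
    """A minimal stereo rig as in Figure 25: a null parents the right
    camera and the TARGET; the left camera is a child of the right
    one, offset by the interaxial distance B along X (in cm)."""
    rig = c4d.BaseObject(c4d.Onull)
    rig.SetName("StereoRig")
    target = c4d.BaseObject(c4d.Onull)
    target.SetName("TARGET")
    cam_r = c4d.BaseObject(c4d.Ocamera)
    cam_r.SetName("Camera_R")
    cam_l = c4d.BaseObject(c4d.Ocamera)
    cam_l.SetName("Camera_L")

    doc.InsertObject(rig)
    target.InsertUnder(rig)
    cam_r.InsertUnder(rig)
    cam_l.InsertUnder(cam_r)                   # left camera follows the right
    cam_l.SetRelPos(c4d.Vector(-b, 0.0, 0.0))  # interaxial distance B

    # Aim both cameras at the TARGET with a Target expression tag,
    # so animating only the right camera preserves the convergence.
    for cam in (cam_r, cam_l):
        tag = cam.MakeTag(c4d.Ttargetexpression)
        tag[c4d.TARGETEXPRESSIONTAG_LINK] = target

    c4d.EventAdd()
    return rig
```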

Figure 26: A typical anaglyph (red-blue) set-up in Photoshop. In the DX (right) frame the white output level must be reduced to zero; in the SX (left) frame, the output of the green and blue channels. The two layers (DX and SX) must be mixed with the "Add" blending mode.

3.0 Texture mapping techniques

Why dedicate a chapter to texture mapping in a discussion of 3D visualisation in Virtual Archaeology? In general this phase of the work belongs to the usual process of creating a 3D scene and indeed, it is precisely for this reason that I am dedicating a tutorial to the main issues arising from this phase. Its importance is considerable when realistic models are needed, especially when our objective is the restitution of works of art, including the façades of existing monuments, which require a representation that is not just realistic but metrically correct. In general I argue that the quality of the mapping depends "only" on three factors:

- the quality of the texture;
- the approach to the mapping;
- the management of the overlapping (multiple image blending).

The quality of the texture depends on other factors such as resolution, format (jpg, tga, tiff, hdri, exr, etc.) and naturally its physically correct management; we spoke of this in the context of BRDF materials in Chapter 2. The choice of resolution of the texture depends on the final resolution of the whole frame; ideally, the texture resolution should be greater than the frame resolution, to allow the viewer to zoom in to some degree. In a real-time visualisation project the textures are generally of small to medium dimensions, for obvious reasons of GPU management, but in the planning phase it is possible to adopt measures that allow effective management of textures of large dimensions. The distance from the camera to the texturised object should be assessed carefully, because the texture should never be displayed beyond its native resolution. The quality of the texture also depends on clearly contrasted sources that have not been excessively manipulated and, above all, have been created from a photographic base. The type of mapping is an extremely important component, because only rarely do we have the possibility of mapping pure volumes; we shall return to this topic later in this chapter, with reference to Camera Mapping. The last element that should not be neglected is the merging, or blending, of the textures. We will see below that it is almost impossible to map a complex object with just one texture: the final result is obtained by blending numerous textures, which at the end of the mapping can be merged into a single texture by means of Texture Baking (see below). Currently some software applications automatically generate the blending of different textures. Examples include MeshLab (http://meshlab.sourceforge.net/), software for generating 3D models from photographs such as UZR (http://www.uzr.de/en/), and the Arc3D web service.
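The "never display a texture beyond its native resolution" guideline can be checked with simple frustum arithmetic. A rough sketch (all names and default values below are illustrative):

```python
import math

def texture_magnification(texture_px, surface_width_m, distance_m,
                          hfov_deg=60.0, frame_width_px=1920):
    """Estimate how many screen pixels are drawn per texture pixel for
    a surface of the given width seen head-on at the given distance.
    Values above 1.0 mean the texture is magnified past its native
    resolution and will look soft (illustrative sketch only)."""
    # Width of the view frustum at the surface's distance.
    view_width_m = 2.0 * distance_m * math.tan(math.radians(hfov_deg) / 2.0)
    # How many frame pixels the surface occupies.
    surface_px = frame_width_px * surface_width_m / view_width_m
    return surface_px / texture_px

# A 4096 px texture on a 5 m wall seen from 3 m: the ratio of about
# 0.68 (< 1) means there is still native resolution in reserve.
print(round(texture_magnification(4096, 5.0, 3.0), 2))
```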

Figure 27: Example of texture blending before and after

Although in my profession photography-based textures are by far the most commonly used, there are other techniques for generating a texture. Another approach that should not be neglected is the generation of procedural textures.


Shaders (also known as procedural textures) are definitely more sophisticated than conventional textures. Shaders are calculated on the basis of mathematical formulae, while conventional textures are based on pixels. One advantage of using shaders is that their resolution is independent of the distance from the camera, so they do not lose resolution when brought into the foreground. Shaders make it possible to create textures that cannot be created with texture maps or other methods: anisotropic surfaces, multiple specular highlights, volumetric wood and other features are not obtainable in any other way. Despite their complexity, with a bit of practice and study it is possible to create personalised materials using shading engines. Shaders are procedural, which means that the colour you see is calculated by a program based on location in space and other factors in the scene. This makes it possible to create things that are not possible with texture maps, since shaders take account of elements such as light intensity, the direction of surfaces, the position of the camera and so on; these aspects can thus be exploited to change the surface. Examples of procedural textures are metallic surfaces, water, clouds and mist, some types of wood, ceramics, fabrics, hair and human skin.
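The defining property of a procedural texture is that colour is a pure function of position, so it can be sampled at any density without loss of resolution. A toy example (the "marble" function and its stand-in noise are our own; real shading engines use proper Perlin or simplex noise):

```python
import math

def marble(x, y, z, turbulence=3.0):
    """A toy procedural 'marble' shader: the grey level is a pure
    function of the 3D position, so it can be evaluated at any scale
    (illustrative sketch, not any renderer's actual shader)."""
    # Cheap stand-in for a noise function.
    noise = math.sin(x * 12.9898 + y * 78.233 + z * 37.719)
    value = math.sin(x * 5.0 + turbulence * noise)
    return 0.5 + 0.5 * value  # grey level in [0, 1]

# The same function answers at any sampling density: zooming in simply
# means evaluating it at more closely spaced points.
print(marble(0.1, 0.2, 0.3), marble(0.1001, 0.2, 0.3))
```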

Figure 28: Example of a shader for water simulation

Technical tip. Creation of Shaders in Cinema 4D

Shaders can change the way in which CINEMA 4D renders a surface by changing the illumination, surface normals and other parameters. CINEMA 4D distinguishes between two-dimensional and three-dimensional shaders. 3D shaders take account of the volume of an object, while 2D shaders and texture normals are simply applied to the surface of the object. 3D shaders are independent of the geometry of the object and of the projection of the texture (the only exception is the UVW projection, which can be applied to volume shaders). All the parameters of 2D and 3D shaders can be animated using XPresso or the Timeline, by clicking on the small circle next to the name of the parameter with the right-hand mouse button (or with the left button while pressing Ctrl). If you choose "Shader" instead of "New Material" under "File" in the Material Manager, you will see a list of 3D shader presets. The advantage of these shaders is that you don't have to worry about the mapping of your texture, because a 3D shader is calculated for the 3D space.


Here are some examples described in detail:

Cheen: generates an electronic microscope effect, excellent for the description of bacteria or microscopic features.

Danel: very good for simulating shiny surfaces.

Banzi: makes it possible to paint various types of wood.

Nukei: conceived to enable users to create surfaces that include rusty and age-worn materials that generally have two distinct surfaces with different attributes.


Banji: makes it possible to render translucent illumination from behind a surface together with its shadows. The other main use of Banji is the creation of excellent refracting and transparent materials, such as glass or water.

Technical tip. Mapping of models in Cinema 4D. Main types of projection

When you assign a material to an object, a Texture tag is created and the parameters of the tag appear in the Attribute Manager. These texture parameters determine how the texture is mapped (positioned) on the surface of the object. For example, you can ensure that the texture is repeated, or position a texture on a specific part of the object. You can also have a texture mapped on only one side of the object (decal mapping).


Figure 28: Attribute Manager parameters

In the Name field insert a new name for the Texture tag. This is useful above all when you want to animate the materials, given that the name of the tag will appear in the Timeline. To select which material the Texture tag must use, drag the desired material from the Material Manager and drop it inside this box. If more than one Texture tag is selected, the material will be applied to all of them.
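For completeness, the same assignment can be scripted with Cinema 4D's Python API. A minimal sketch (the helper and its arguments are our own; the tag and parameter IDs are those of the standard c4d module; the optional restriction anticipates the Selection box described below):

```python
import c4d

def assign_material(obj, mat, selection_name=None):
    """Create a Texture tag on `obj`, link `mat` to it and, optionally,
    restrict it to a named polygon selection (illustrative sketch)."""
    tag = obj.MakeTag(c4d.Ttexture)
    tag[c4d.TEXTURETAG_MATERIAL] = mat
    tag[c4d.TEXTURETAG_PROJECTION] = c4d.TEXTURETAG_PROJECTION_UVW
    if selection_name:
        # Same effect as typing the selection name into the Selection box.
        tag[c4d.TEXTURETAG_RESTRICTION] = selection_name
    c4d.EventAdd()
    return tag
```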

Polygon Selection

The Polygon Selection field makes it possible to use different materials on different parts of the same object. This is a convenient method for adding labels to objects, for example. Before anything else, a Polygon Selection tag is needed:

- Select the object, then the Polygon tool and the Live Selection tool, and select a number of polygons (if you select a primitive object, you must first convert it into polygons using the Make Editable tool).


- Choose Select > Set Selection to create a Polygon Selection tag.
- In the Basic Properties section of the Attribute Manager, insert a meaningful name for the Polygon Selection tag.
- If the object does not yet have a texture, apply a new texture (drag the material from the Material Manager and drop it on the name of the object in the Object Manager). The parameters of the Texture tag appear in the Attribute Manager. Insert your parameters, for example the type of projection and the repetition, then insert the name of the selection in the Selection box.

When you are positioning a texture on a selection, you may find it useful to hide the non-selected surfaces. You can hide them using Select > Hide Unselected.

Projection

The Projection parameters in the Texture tag determine how the texture is projected or mapped on to a surface. The projection surface is independent of the object's real surface, although it often has the same basic shape. UVW mapping sets the projection on the points of the surface of an object, so that when the object is deformed (such as a flag in the wind, for example), the texture is deformed with it. The best type of mapping depends on the shape of the object to which it is being applied; for examples and details, see "Types of mapping" below.

Side (decal mapping)

Figures 1-4: decal-mapping examples referenced in the text below.

Suppose you are projecting a texture on to a tube with Flat mapping. If you move the camera around to look at the other side of the surface, you will still see the texture, but in the wrong position. You can resolve this problem by using a decal: a material that is projected on to only one side of the surface. The direction of the surface normals of each polygon plays a fundamental role in deciding which side the texture is mapped on to: Front is the direction of the surface normals and Back is the opposite direction (Figure 1). With Flat projection, a texture is projected on to both the front and the back of an object; consequently the texture is visible even where it should not be, in this case on the front and on the back of the tube (Figure 2). You can avoid these problems with decal mapping: change Side from Both to Front and render the tube again. This time the texture is visible only on the front (Figure 3). If the viewing angle (for example, the camera angle) and the surface normal form an angle of less than 90° to each other, the polygon is a frontal polygon; otherwise it is a posterior polygon (Figure 4). The only exceptions are Frontal and Spatial mapping, where there is an additional criterion: the direction of the Z axis of the projection of the texture. If the points of the texture's Z axis point in the opposite direction to the surface normals, and the viewing angle and the normals form an angle of less than 90° to each other, the polygon is a frontal polygon; otherwise it is a posterior polygon (Figure 4).

Both
The texture is projected in the frontal and posterior directions of each polygon, regardless of the type of mapping selected.

Front
You see the texture where the surface normals point towards the camera; otherwise the material is invisible.

Back
You see the texture where the surface normals point away from the camera; otherwise the material is invisible.
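The frontal/posterior decision just described reduces to the sign of a dot product. A minimal sketch (names are our own):

```python
def polygon_side(normal, view_dir):
    """Decide whether a polygon is 'frontal' or 'posterior' for decal
    mapping: frontal when the surface normal and the direction from
    the surface towards the camera are less than 90 degrees apart,
    i.e. when their dot product is positive (illustrative names)."""
    dot = sum(n * v for n, v in zip(normal, view_dir))
    return "front" if dot > 0.0 else "back"

print(polygon_side((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))   # -> front
print(polygon_side((0.0, 0.0, 1.0), (0.0, 0.0, -1.0)))  # -> back
```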

Offset X, Offset Y, Length X, Length Y
The Offset and the Length determine the position and the dimension of the texture in the X and Y directions. For example, if Length X and Length Y are both set at 100%, the texture completely covers the object.

Repeat X, Repeat Y
The Repeat values represent the number of times that the texture covers the object in the X and Y directions. CINEMA 4D calculates the dimension of a single repetition from the current dimensions of the texture (Length X and Length Y). For example, if you have scaled the texture in such a way as to have a length of 25% in direction X and 50% in Y, the texture fills direction X four times and direction Y twice.

Mix Textures
If this option is enabled, the material will be mixed with the ones underneath it.

Seamless
If this option is enabled, the repetitions will be mirrored to avoid visible seams. The option is rarely used for photographed textures, although it can produce interesting patterns; it is more suitable for patterns such as wood, stone or marble. At times, however, the mirroring causes a flickering effect in the pattern.


Repetitions
If you enable this option for a 2D or 3D shader, the shader is not, strictly speaking, repeated; instead it fills the entire object without interruption. There may or may not be repetitions, depending on the settings of the shader.

If this option is enabled, the image of the texture will be repeated on the surface ad infinitum. The effect becomes visible if you scale the texture or when the geometry of the texture has not been adapted using the Fit to Object command; otherwise, the texture fills the geometry of the object just once. If this option is disabled, the texture map will not be repeated on the surface: whatever material is below will show through in the parts that are not covered by the texture.

P, S, R (Position, Scale, Rotation)
In this section of the Attribute Manager you will find the parameters that control the position, scale and rotation of the texture. You can also change these values interactively in the views (by selecting Texture Axis mode), assuming that UVW mapping is not selected. Figures 1 and 2 show the difference between the Texture tool and the Texture Axis tool. The texture geometry is represented as a blue grid. The texture will be projected in real time if the shading is set to Gouraud Shading or Quick Shading.
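Summing up the texture-placement parameters described above, the combined effect of Offset, Length, the repetition option and Seamless mirroring can be sketched for one axis in a few lines (names are our own, not the CINEMA 4D API):

```python
def wrap_uv(u, offset=0.0, length=1.0, tile=True, seamless=False):
    """Map a raw UV coordinate through Offset/Length/Repeat logic for
    one axis (illustrative sketch). `length` is the Length fraction
    (0.25 -> four repetitions); `tile` is the repetition option;
    `seamless` mirrors every other repetition to hide seams."""
    t = (u - offset) / length          # position in repetition units
    if not tile and not (0.0 <= t < 1.0):
        return None                    # outside: underlying material shows
    cell = int(t // 1.0)
    frac = t % 1.0
    if seamless and cell % 2 == 1:     # mirror the odd repetitions
        frac = 1.0 - frac
    return frac

print(wrap_uv(0.30, length=0.25))                  # 4 repetitions across U
print(wrap_uv(0.30, length=0.25, seamless=True))   # mirrored copy
```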


Types of mapping

To select how a material is placed on a surface, set Projection to the desired mode. The available modes are listed below and explained in what follows.

Spherical applied to a plane. Spherical applied to a cylinder. Spherical applied to a sphere.

Cylindrical applied to a plane. Cylindrical applied to a cylinder. Cylindrical applied to a sphere.

Flat applied to a plane. Flat applied to a cylinder. Flat applied to a sphere.

Cubic applied to a cuboid. Frontal applied to a cuboid. Flat (left) and Spatial (right).

UVW Mapping. Difference between Flat and UVW Shrink Wrapping.

Spherical
Spherical projection is rarely adaptable to flat objects. A distortion is also seen with cylindrical objects.

Cylindrical
Cylindrical projection is rarely adaptable to flat objects. It can produce distortions even when used with spherical objects. When Cylindrical is applied to a cylinder, note how the pixels at the upper and lower ends of the texture map are pulled towards the centre of the caps; the caps should be given separate textures.

Flat
Flat projection tends to be used only with flat objects. The texture is immediately distorted if applied to a sphere or a cylinder, as the examples show.

Cubic
Cubic mapping projects a texture on to all six sides of a cube.


Frontal
The texture is projected on to the object from the position of the camera. This ensures that if you project the texture on to a Polygon object and a Background object, the two textures match perfectly (assuming that the Texture tags of both objects use the same Offset and the same Length). Various special effects can be created with Frontal mapping, and you can even do compositing directly in CINEMA 4D. You have probably seen science-fiction films in which characters or spaceships gradually disappear into the background: use Frontal mapping for this type of effect. For another interesting effect, remove the Background object and move the polygon.

Spatial
Spatial mapping is similar to Flat projection; however, with Spatial mapping the texture is pulled upwards and to the left as it passes through the object. Spatial mapping does cause distortions, and for this reason it is not suited to photographic images; it is more suitable for structural textures such as cement or marble.

UVW Mapping
If an object has UVW coordinates, you can select them as a type of projection. In this case the texture geometry is bound to the surface of the object and is subject to all the movements and deformations applied to the object. An example of UVW mapping is the page of a book being turned: first attach the texture to the page using UVW mapping (for example, text and a nice image), then animate the page being turned with a deformation. The texture folds with the page. All the CINEMA 4D primitives and NURBS objects have UVW coordinates; if you apply a new texture to these objects, the type of projection in the Texture tag will automatically switch to UVW mapping. All Polygon objects with UVW coordinates have a UVW tag in the Object Manager. Primitives and NURBS objects have internal UVW coordinates and no UVW tag in the Object Manager, but you can still use UVW mapping with them; if you convert a Primitive or a NURBS object into a Polygon object, the UVW tag will appear in the Object Manager.

You might be wondering why there are three UVW coordinates. What is the third one for? Conventional textures have two coordinates, one for the horizontal X position and one for the vertical Y position. To make it clear that the coordinates refer to a texture, X is renamed U and Y is called V. The two coordinates (U and V) would be sufficient were they not also used with 3D shaders, which are three-dimensional textures and as such require a third coordinate (W) in order to be attached to the object.

You can apply more than one UVW texture geometry to an object. Create a new Texture tag for the object, then set the projection you want, for example Flat for the texture of a label. Next create new UVW coordinates for the active texture by selecting Generate UVW Coordinates from the Texture menu in the Object Manager. The selected Texture tag will be set to UVW mapping and will be deformed together with the object. A new UVW tag is added to the existing ones every time you use Generate UVW Coordinates. A Texture tag set to UVW mapping always uses the UVW tag directly to its right in the Object Manager; this makes it possible to assign different UVW maps to different Texture tags. If there is no UVW tag to the right of the Texture tag, the first UVW tag will be used.

What is the structure of the UVW coordinates? Imagine a grid with U and V axes (Figure 1 below). The UV field begins at 0,0 and ends at 1,1, where 0,0 describes the top left corner, 0,1 the bottom left, 1,0 the top right and 1,1 the bottom right. A texture is "stretched" between these four coordinates (Figure 2 below).

Figure 1. Figure 2.

But where is the W coordinate in this system? Remember that conventional textures are two-dimensional: the W coordinate is created only when necessary. Once created, the W coordinate behaves just like the UV coordinates.
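To make the 0,0 to 1,1 convention concrete, here is a small helper (our own, not a CINEMA 4D call) converting a UV pair into pixel coordinates on a texture of a given size:

```python
def uv_to_pixel(u, v, width, height):
    """Convert UV coordinates in [0,1] to pixel coordinates, using the
    convention above: (0,0) is the top-left corner of the texture and
    v grows downwards (illustrative helper)."""
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return x, y

print(uv_to_pixel(0.0, 0.0, 1024, 512))  # -> (0, 0): top left
print(uv_to_pixel(1.0, 1.0, 1024, 512))  # -> (1023, 511): bottom right
```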

3.1. Texture mapping in laser scanner applications

For some time, progress in laser scanning technology has focused on two different areas: on the one hand the continuously increasing demand from users for more accurate and faster machines, on the other a tendency to build them in light alloy, easy to handle and more user-friendly. Within a few years the standard has gone from 50,000 pt/sec to 1,000,000 pt/sec, a performance increase of about twenty times. At the same time, these machines have fully entered the standard equipment of small and medium-sized companies, a process which has established, after a gestation period of about twenty years, their full usability even by users other than those historically identified with universities and research centres. This increasing spread of the laser scanner as a tool, encouraged by its relative ease of use, has been matched by a transfer of advanced technologies from research centres to the big manufacturers, improving not just the acquisition of points in 3D space but all the post-processing of the data: the data-fusion process for the alignment and registration of single shots, the creation of meshes and their optimisation, the inspection of models and, finally, the texturing. Several software solutions currently exist that are able to manage this critical stage of the acquisition and restitution of historical and artistic artefacts. Some research groups are now focused on tools that allow the total (or almost total) automation of the post-processing of point clouds, capturing the visible colour of surfaces as Colour-Per-Vertex (CPV) information, i.e. point clouds that directly carry both spatial information and colour information in RGB space, associated with each vertex. This kind of approach carries a not insignificant problem: the colour information associated with the CPV points cannot be reworked or replaced with new colour information. A case in point is laser scanning aimed at documentation for restoration, in which it is extremely useful to compare "ante and post rem" (before and after the restoration), or case studies in which it is useful to replace a colour texture with images deriving from scientific analysis, such as IR observation or the false-colour maps coming from GPR. By contrast, the adoption of a classical texture mapping method allows full flexibility in creating multi-resolution maps – very useful for the development of real-time and gaming applications – and an easy replacement of those maps with scientific information on the various themes that may arise for the artefact studied. Leaving aside the most common mapping systems (cubic, spherical, planar, etc.), the most effective and advanced texture mapping technique available in point-cloud processing software is essentially the point-to-point method [2]. In concrete terms, the operator has to identify the correspondences between the geometry and the texture to be mapped, assigning to each map UVW coordinates that link a point on the surface with a pixel of the texture. This is all the more complex the less the surface is characterised in terms of three-dimensional topology: on a perfectly flat (or smooth) surface it is very difficult to find points corresponding to the pixels of a texture. The search for point-pixel correspondences is equally challenging in complex artefacts, which generally require a subdivision of the total geometry into small parts to be mapped with different textures (Figure 29).
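In essence, the point-to-point method records, for each picked correspondence, which texture pixel a surface point maps to; the resulting UVs are simply those pixel positions normalised to the 0-1 range. A minimal sketch (data layout and names are hypothetical):

```python
def assign_uvs(correspondences, tex_width, tex_height):
    """Point-to-point mapping sketch: `correspondences` links a mesh
    vertex id to the (px, py) texture pixel the operator matched to
    it. Returns per-vertex UVs in [0,1] (illustrative only)."""
    return {vertex_id: (px / tex_width, py / tex_height)
            for vertex_id, (px, py) in correspondences.items()}

# Three picked correspondences on a 2048 x 1024 texture:
uvs = assign_uvs({0: (0, 0), 7: (1024, 512), 12: (2047, 1023)},
                 2048, 1024)
print(uvs[7])  # -> (0.5, 0.5)
```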

Figure 29: Example of multiple patching for the application of many textures (M. delle Croci, Matera)

Figure 30: Realistic interior after the bake texture process; note the use of ambient occlusion and the radiosity algorithm (Tomba dei Festoni, Taranto)


3.2. Mapping using camera position information

The correct mapping of a three-dimensional model acquired through laser scanning is usually one of the most problematic issues in the accurate and verisimilar restitution of an artefact. Very often, important laser scanning works are presented as mere geometric shapes, with simple shaded views and no applied texture. This is certainly a simplified presentation of reality, in which the chromatic values of the surfaces are essential elements for a correct reading of the conservation status, of the characteristics of the constituent materials, of superficial cracks and of other micro-detail characteristics impossible to reproduce with a time-of-flight laser scanner. For this reason, we have tried to refine an old and widely known method of mapping, in an attempt to provide even the beginner with the tools necessary to produce 3D textured models with good accuracy. In this kind of application, the use of quality images at high resolution allows extremely realistic models to be obtained, almost indistinguishable from the real object, to be used productively in documentation for the knowledge of historical buildings, in restoration works, in faithful representations of the current state, in serious games, etc. The basis of this method is the simple mapping according to a camera projection, called in some software Camera Mapping or Projection Mapping.


Figure 31: Realistic interior after the bake texture process (M. delle Croci, Matera)

This technique is generally known and applied in Computer Graphics, especially for making interactive 3D models from two-dimensional images; it is very useful for converting two-dimensional paintings, frescoes and engravings into explorable 3D scenes. The usefulness of this method is even more appreciated in mapping operations starting from photos. It is indeed well known that cylindrical, spherical or cubic projections are applicable only in specific cases and can never be used to overlap photos directly on a scanned object. Planar mapping, in particular, projects the texture on to the object along the plane normal; the plane orientation, and hence the projection direction, is chosen according to the necessities and the kind of object to be textured, but this type of projection does not coincide with a camera take. In many case studies precisely this technique is wrongly used to map complex objects, by assigning small parts of the scanned object to specific photo-textures, with poor results and obvious signs of stretching in the areas with a different projection angle. Strictly speaking, planar mapping should be applied only to a planar object with an ortho-rectified texture, according to the rules of orthogonal projection. A photographic image is instead a perspective view, with a point of view, a target point of the perspective, a visual field and deformations dependent on the quality and nature of the lenses. In theory, then, knowing these parameters exactly and mapping the photographic images according to the rules of perspective – which is what the Camera Mapping method does – one can obtain a mapping with an almost perfect overlap between texture and 3D model. Let us examine the solutions proposed for the definition of each of these parameters: the shooting position (or point of view, the XYZ position of the centre of projection), the target point of the perspective (look-at), and the characteristics of the lenses (focal length and distortion).
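Conceptually, camera mapping assigns each surface point the pixel it projects to in the oriented photograph. A minimal pinhole-projection sketch follows (function and parameter names are our own; lens distortion is assumed to have been removed beforehand, as discussed below):

```python
import numpy as np

def camera_map_uv(points, cam_pos, look_at, up, focal_mm, sensor_w_mm,
                  aspect):
    """Project 3D points into an ideal (distortion-free) pinhole camera
    and return texture UVs in [0,1] (illustrative sketch)."""
    # Build the camera basis: forward, right and true-up vectors.
    fwd = look_at - cam_pos
    fwd = fwd / np.linalg.norm(fwd)
    right = np.cross(up, fwd)
    right = right / np.linalg.norm(right)
    true_up = np.cross(fwd, right)

    rel = points - cam_pos                  # points in camera space
    x = rel @ right
    y = rel @ true_up
    z = rel @ fwd                           # depth along the view axis

    # Perspective divide on to the sensor plane, then normalise to UV.
    sx = focal_mm * x / z
    sy = focal_mm * y / z
    u = sx / sensor_w_mm + 0.5
    v = 0.5 - sy / (sensor_w_mm / aspect)   # v grows downwards
    return np.stack([u, v], axis=1)

pts = np.array([[0.0, 0.0, 5.0], [1.0, 0.5, 6.0]])
uv = camera_map_uv(pts, cam_pos=np.array([0.0, 0.0, 0.0]),
                   look_at=np.array([0.0, 0.0, 1.0]),
                   up=np.array([0.0, 1.0, 0.0]),
                   focal_mm=35.0, sensor_w_mm=36.0, aspect=1.5)
print(uv)  # the first point projects to the image centre (0.5, 0.5)
```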


Figure 32: Church of Santo Spirito in Lecce: multiple textures applied to the South wall, and an example of texture replacement using the same UVW coordinates with a colour-corrected image

There are several criteria for establishing the precise camera position in a 3D scene. The first, widely tested by the Information Technologies Lab of Lecce (ITLab), consists in recognising only the significant points of the scene with digital photogrammetric techniques and then, after the orientation process, recovering the camera position of each shot through a patient work of restitution. However, this technique can also produce significant residual errors and lead to further degrees of uncertainty in the search for the camera positions; moreover, although it is classified as "low cost", it requires a long processing time, which must be added to that needed for the post-processing of the point clouds.


Figure 33: Screenshot of the 3D model with and without texture applied; the look of the final work is very similar to the real object

Figure 34: The rupestrian church of Santo Spirito, Monopoli (Bari), wireframe view with texture applied

The method proposed here consists in capturing the camera positions within the scene with the laser scanner and in overlapping them with their respective virtual cameras in a 3D graphics package, respecting some important procedures. The first consideration concerns the difficulty of reproducing the exact point of view of the camera, which even with a double pass of the scanner set at the highest resolution (about 2 mm) cannot be identified with certainty. For this purpose a metal viewfinder about 8 cm long was built which, mounted on top of the photographic camera, provides a clear position reference that accurately identifies the point of view; it was also useful for a first orientation of the target point of the perspective. This first orientation is to be considered only rough, because a very small rotation error of only a few tenths of a degree at the base of the virtual camera produces significant shifts of the target point on the 3D surface. The target point was therefore determined with certainty by considering the centre of the photographic image: turning the camera around its point of view, now identified with certainty, it is possible to fit the centre of the frame, marked by the intersection of the diagonals of the image, to the corresponding target point of the perspective, identified on the surface of the 3D model. This point must of course be chosen on site, so that it can be recognised during the mapping stages. Where the surface is poorly characterised, the placing of a marker visible to the laser scanner is suggested. The last element, extremely important for the success of the method, is the elimination of lens distortion. "Barrel" or "pincushion" distortions are always present, despite the use of professional aspherical lenses usually advertised as distortion-free; they can easily be removed with specific software, usually supplied by the manufacturers. The Camera Mapping method, together with the identification of the point of view, the target point of the perspective and the elimination of distortions, constitutes in our opinion the best method of mapping complex surfaces. Of course the method can also be applied empirically, moving the camera manually and making attempts until its exact position in space is found, but the results will be obtained only by approximation and will guarantee neither the best accuracy nor the least time taken.

3.3 Texture baking

The process of texture baking is widely used in CG, especially to eliminate problems relating to the portability of models between different applications, but also to simplify the final rendering. In texture baking, for example, what is originally a procedural texture can be recorded as an image. Sometimes the various "channels" of a material can be consolidated into a single image, reducing the number of texture images used. Texture baking can also help with disguising seams on a UV unwrap, a somewhat complicated but very useful process. In normal baking, the mesh normals (which affect how light appears to reflect from the model's surface) can be recorded, which results in very specialised images with RGB values based on normal vectors. There are other types of "baking" employed in physics simulations and animation, where the action of a mesh that is computed frame by frame (which can be very slow) is recorded (baked) to a set of keyframes that play back much faster than the full simulation. In NLA (non-linear animation), a sequence composed of many short "strips" of animation, combined to produce a single full sequence, can be "baked" to a single set of composite keyframes. For our purposes, with this technique it is possible to transform a complex object with many textures into an equivalent object to which a single texture in UVW projection is applied, and on which the shadows, ambient occlusion and radiosity have all been "pre-calculated".
This technique is widely used in video games and above all in RealTime applications, since it is highly useful for managing a scene with pre-calculated shadows and, especially, pre-calculated radiosity. I have often found the baking of textures useful for solving blending problems, but also simply to have a single texture available in the software used for interactive Real Time exploration.
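The idea can be sketched in a few lines: evaluate the (possibly expensive) shading once at every texel of the UV layout and store the result as an ordinary image. This is a conceptual sketch with hypothetical names, not any package's actual baking command:

```python
def bake_texture(width, height, shade):
    """Evaluate `shade(u, v)` (e.g. procedural colour plus pre-computed
    lighting) at the centre of every texel and record it as an image,
    stored here as rows of RGB tuples (illustrative sketch)."""
    image = []
    for j in range(height):
        v = (j + 0.5) / height
        row = [shade((i + 0.5) / width, v) for i in range(width)]
        image.append(row)
    return image

# Bake a simple vertical light falloff: afterwards, the real-time
# engine just samples the image and pays nothing at run time.
baked = bake_texture(256, 256, lambda u, v: (v, v, v))
print(baked[0][0], baked[255][0])
```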


Figure 35: RealTime platform for the distance visit of a rupestrian church (Cripta di S. Antonio Abate, Nardò)

In most 3D software applications texture baking is automated, and it is one of the most advantageous operations in the texturing of complex objects or when many textures are used on a single object. Very often, however, it is necessary to optimise the unwrapping process, i.e. the way in which the individual 'pieces' of texture are arranged on the image frame, and above all it is important to establish the textures' minimum dimensions. The two images shown in Figures 30 and 31 allow the observer to appreciate the results of texture baking performed on a laser scan with high-resolution texture mapping. The Baking command was deployed at the end of the texturing of the individual patches mapped in accordance with the Camera Mapping method described above.


Figure 36: Example of UVW arrangement after the "bake texture" process

4.0 Interactive navigation and cognitive models

The models shown in these pages are entirely explorable in real time, i.e. without the need to calculate every change of position of the camera with long pauses. This is because RealTime technology, relying on the OpenGL and DirectX graphics libraries, removes the burden of calculating the representation of the model from the CPU, directing it to a large extent to the graphics card. Consequently, the three-dimensional model can be manipulated rapidly even on a desktop computer of average power and can be easily distributed on a network. The creation of a virtual navigation platform in RealTime usually implies the use of three-dimensional models with a low number of polygons, in order to guarantee an optimal user experience with reasonable response times. The speed of the system is the indispensable condition for the emotional involvement of the user in a virtual context. The process of simplification and decimation of the data entails a loss of information, which matters most when the object being modelled is a fundamental part of a scientific or teaching process involving the transmission of historical and artistic information. The results visible in these pages derive from a measured equilibrium between the level of modelling and the realism of the textures, which compensate for the deficit in micro-geometric detail. The aim of this work remains however essentially communicative and museum-oriented, since it will prove useful and effective only if it transmits to the final user the greatest amount of information possible. This includes «highlighting» aspects of interest that might be missed by the inexpert user, using suitable methods and communication tools appropriate for the various possible levels of interest and comprehension. The user is free to explore the three-dimensional model from any point of view, but can also activate at any time multiple interactive points or automatic behaviours, from which they can access sources of information that are in theory inexhaustible. This is the start of an integrated journey of discovery which goes beyond the logic of a merely 'aesthetic' virtual experience, typical of the virtual reality applications of the nineteen-nineties. As provided by our navigation platform, the viewing process involves much more than the mere contemplation of the object in itself; rather, it aims to decipher the content of that object, by whatever means. A visiting system organised in this way combines historical and humanistic knowledge with technical and scientific disciplines: as an object of value, the item is analysed in terms of its mineralogical, petrographic, chemical and physical components, as well as its historical and formal aspects and its relationship with the historical context. The virtual space becomes a topological space. Every 3D object included in the scene can be examined and measured, compared and restored to its original context. Even the data from the analyses of the materials can be accessed directly by clicking on the point from which they were sampled. The hierarchical structure of the available data thus becomes the indispensable premise for a virtual visit based on a cognitive model that is modern and flexible.

4.1 An example of project for distance visit of inaccessible artworks

This project was launched about three years ago (2008) by the City of Abriola (Potenza, Italy) and the GAL Basento Camastra, within Axis II of the transnational cooperation project "Cammini d'Europa". It was produced in collaboration with Officina Rambaldi, an international team of professionals led by Carlo Rambaldi, winner of special effects Oscars for the films "King Kong", "Alien" and "E.T.". The project proposes the use and enjoyment of the works of Giovanni and Girolamo Todisco through forms of virtual musealisation obtained using integrated information technology. The particular location, with its consequent difficulty of access, together with the quality of the preserved evidence and the state of conservation of the paintings in the sanctuary, makes the search for new strategies of communication and valorisation particularly challenging for this site, which lies at about 1400 metres above sea level. Precisely to respond to these needs, the Information Technologies Lab of IBAM CNR in Lecce has for many years pursued a line of research that aims to offer virtual visits of ancient contexts with problems of access, obtained through desktop-class technology responding to three basic requirements: good usability, extreme verisimilitude and great technological appeal.
In order to satisfy these requirements it was necessary to draw on technologies of various types, each chosen to respond to specific needs. Within a single, compact environment, the set of these technologies is not fully perceived by the user, who finds himself interacting with 3D models and sensitive points in a very simple and "natural" manner, receiving three-dimensional, historical, artistic and topographical information. Moreover, for those who do not wish to follow the in-depth itineraries, a summary video was made incorporating the main results of the work, narrating the contents in a concise and passive way, i.e. without any intervention required from the user. The technological tools used thus respond to the requirements of precise documentation and communication, and were selected not only for their high level of innovation but for their effectiveness within the overall project. The virtual tour starts from the survey of the three-dimensional structures inside and outside the sanctuary, obtained with the methods and techniques described below. The monument was subsequently contextualised in the surrounding area and integrated with an interactive navigation based on high-resolution spherical panoramas and OpenGL real-time 3D navigation.

Figure 37: Interior of the Monteforte sanctuary, screenshot of the real-time 3D platform

4.2 Survey and contextualization of the Sanctuary

Starting from these assumptions, the intention was to create a multimedia platform, developed in a hybrid environment (html, Flash, Shockwave and real-time 3D), characterised by an interface providing access to the information and able to meet the following requirements:

1. Make available methodologies based on web techniques for the navigation of knowledge;
2. Place the knowledge acquired about the historic building in an environmental context, highlighting the possible relationships between the building and the surrounding area;
3. Integrate the 3D models of the historic buildings into three-dimensional digital models of the terrain, correlating the historical and documentary information.

Starting from this premise, in the initial phases the work addressed the acquisition of basic photographic documentation and the search for techniques suitable for a restitution of the sanctuary at a high level of detail and precision. Considering the aims of the communication project (and the products that would result from it) and the specific surveying problems, these basic requirements were met by restitution techniques based on digital photogrammetry, particularly photo-modelling. The reasons for this choice lie in its greater flexibility and ease of use with respect to other passive techniques, but above all in the possibility it offers of obtaining three-dimensional models of great precision at a reasonable cost. In this specific case, it is important to stress that the surveying difficulties resulting from the height of the buildings, together with the problems of accessibility, mean that traditional photogrammetric surveying techniques would almost certainly have entailed considerable effort, as well as additional costs in terms of the equipment required. Furthermore, the aims and premise of the project ruled out the adoption of any system based on laser scanning, for two main reasons: the first is the currently high cost of long-range laser scanners; the second is the excessive number of polygons produced automatically by a normal scan, clearly superfluous for describing the regular polygonal surfaces that characterise the external geometry of the architecture studied here. By contrast, in architectural surveys conducted with a specific purpose, a fundamental role is played by the person conducting the survey, who must identify, by means of a careful analysis of the architectural features, only the main points necessary for the restitution. The task of identifying and matching corresponding points on the structure appearing in photographs taken from different angles is also a fundamental part of surveying by photo-modelling. In operational terms it is sufficient to identify the vertices of each architectural detail, or to insert targets in the poorly characterised areas of the facade, in order to obtain complete three-dimensional models with textures mapped in UVW projection. The complete survey of the sanctuary entailed the use of about twenty oriented photos and the intervention of a mid-level surveyor for about ten days. The use of commercial software and a simple digital camera for the production of the basic medium (digital photography), necessary for the creation of the various contributions (3D models, QTVR), also represents a useful methodology for small to medium-sized operational situations. From the methodological point of view, the 3D restitution is not only a system of documentation of the current state of the monitored monument, both morphological and textural (colours and conservation of the surfaces), but also the basic structure on which to build the communication product. Thanks to the three-dimensional survey it is possible to activate a full enjoyment of the surveyed structure: users have the opportunity to visit the sanctuary and to observe it from any spatial point of view, inside as well as outside, from top to bottom.


Figure 38: Exteriors of Monteforte sanctuary, screen shot of real time 3D platform The building seen from above will give the opportunity to view the planimetric structure, including non-usual angles, to understand its relationship with the town and the surrounding area. The first expected result will be its contextualization within a region rich in history, with the natural characteristics not yet fully exploited. For this purpose, the virtual tour of the sanctuary extends to a very large portion of territory, embracing not only the town of Abriola, but proceeds to lick the town of Potenza at the North side and the town of Calvello at South-East side. The technology on which the virtual tour system was based is now widely known and used, it uses the VirTools engine distributed by France's Dassault Systems, as the main development environment. The most interesting characteristic of this family of applications is the possibility to manage all programming phases through block diagrams (building blocks), without requirement to write any code. The use of 3D models based on photogrammetry survey, the low number of polygons and textures with good resolution has allows to obtain ultra-realistic scenes, a necessary prerequisite for effective virtual tour, which will reclaim the real space in extremely precise mode relatively to the visible colours and metrically accurate from the geometrical point of view. Particular attention has also been yield to the realistic reproduction of the conditions of indoor and outdoor lighting, simulated in three dimensions environment with the same key of light and the same scattering characteristics that visitors can enjoy during a visit in situ. For this purpose were the widely used radiosity simulation algorithms in conjunction with HDRI maps to a colour depth of 12 bits per channel, through which is possible to illuminate the 3D scenes with the same characteristics as the real scene. The subsequent rescue of the models with bake texture has produced maps complete with shadows and radiosity bounces, which of course are applicable in real time, also on low quality computers. The virtual tour will be not limited only to the aesthetic vision of the architectural aspects of the sanctuary. The virtual platform also aims to inform the visitors about all aspects that characterize the historic and environmental asset. The clearly heterogeneous data-set collected and generated can be received directly into three-dimensional environment, in a


These heterogeneous data are organised in a comprehensive, multi-scale framework that also includes teaching aids useful for understanding the topics covered, especially for the benefit of the general public. Alongside the classical, didactic description of the pictorial cycles, it is possible to navigate inside the 3D scenes depicted on the vault of the main hall. Thanks to camera mapping techniques, the frescoes have been subdivided into planes and are presented with an approach of scenic dramatisation, in which individual figures are isolated and separated from each other, highlighting meanings, symbolism and critical aspects of the work that can thus be grasped and brought into focus more effectively. With this technique, by which a pictorial scene can also be explored in the third dimension, it is possible to bring out the artist's intentions, which in this case saw the use of perspective as a means of involving the observer and directing perception towards a more plastic, rational sense of space. This is a widely shared attitude, constantly pursued in our work in Computer Graphics, which we define as the "attempt at realism".
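Camera mapping, as used here for the frescoes, amounts to computing texture coordinates by projecting the geometry back through the camera that "sees" the photograph: each vertex receives the image coordinates at which it appears in the photo. A minimal sketch follows, assuming a simple 3x4 projection matrix with illustrative values.

    import numpy as np

    def camera_map_uvs(vertices, P):
        """Compute texture coordinates by projecting mesh vertices
        through a camera, as in camera (projective) mapping.
        vertices : Nx3 array of points on the fresco geometry
        P        : 3x4 projection matrix of the camera that 'sees' the photo
        Returns Nx2 image coordinates to be used as UVs.
        """
        homog = np.hstack([vertices, np.ones((len(vertices), 1))])
        proj = homog @ P.T                 # project to homogeneous image space
        return proj[:, :2] / proj[:, 2:3]  # perspective divide -> (u, v)

    # Illustrative camera at the origin looking down +Z, unit focal length.
    P = np.hstack([np.eye(3), np.zeros((3, 1))])
    verts = np.array([[0.0, 1.0, 2.0],
                      [1.0, 1.0, 2.0],
                      [0.5, 2.0, 4.0]])
    print(camera_map_uvs(verts, P))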

Figure 39: The territory near Abriola within the Real Time platform


Figures 40-41: Some examples of Real Time platforms


5.0 Considerations on graphic engines

Obviously there are many approaches and applications for authoring real-time 3D scenes. Some programs share the same working philosophy, such as VirTools by Dassault Systèmes (http://www.3ds.com/products/3dvia/3dvia-virtools/), Quest 3D (http://www.quest3d.com/) and the excellent Demicron Wirefusion (http://www.demicron.com/wirefusion/). Others, such as Adobe's Shockwave platform, have been completely abandoned because they are now obsolete. Both VirTools and Quest 3D follow an approach based on Building Blocks containing pre-set instructions.

Figure 42: Dassault Systèmes, VirTools engine

A given action, behaviour or piece of interactivity is programmed by connecting up the various "Logic Blocks". This type of approach does not require particular programming skills, but it is nevertheless highly complex, because it requires precise knowledge of many blocks. Naturally, the block structures can be reused, and thus prolonged experience and use will surely lead to excellent results.
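To give a sense of this building-block philosophy without reproducing the VirTools API, the following toy sketch (Python, purely illustrative) shows behaviour expressed by wiring blocks together rather than by writing conventional program flow: a key-press block triggers a rotation block, which in turn feeds a logging block.

    class BuildingBlock:
        """Toy analogue of a VirTools-style building block: a unit of
        behaviour whose output can be wired to downstream blocks."""
        def __init__(self, name, action):
            self.name = name
            self.action = action        # callable: value -> value
            self.targets = []           # downstream blocks

        def connect(self, other):
            self.targets.append(other)
            return other                # allows chained wiring

        def fire(self, value=None):
            result = self.action(value)
            for block in self.targets:
                block.fire(result)

    # Wire three blocks: key press -> rotate object -> log message.
    key_press = BuildingBlock("KeyPress", lambda _: "LEFT")
    rotate = BuildingBlock("Rotate", lambda key: f"rotate object ({key})")
    log = BuildingBlock("Log", lambda msg: print(msg) or msg)

    key_press.connect(rotate).connect(log)
    key_press.fire()    # prints: rotate object (LEFT)

In a visual environment the wiring is done graphically, so the author manipulates only the blocks and their connections, never this underlying code.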


In contrast, for those who have a good working knowledge of the C++ programming language, a highly comprehensive environment is provided by OSG (http://www.openscenegraph.org/projects/osg). OpenSceneGraph is an open-source, cross-platform graphics toolkit for the development of high-performance graphics applications such as flight simulators, games, virtual reality and scientific visualisation. It is based on the concept of a scene graph, providing an object-oriented framework on top of OpenGL. This frees the developer from the need to implement and optimise low-level graphics tasks, and it provides many additional utilities for the rapid development of graphics applications.

6.0 Conclusions

Techniques for simulating light via radiance maps or HDRI, the restitution of walls by sub-polygonal displacement and of grass by generating filaments, particle effects for the simulation of fire and water, engines that perform their calculations in real time, advanced rigging and subdivision surfaces for posing and modelling human figures: these are just some of the many features available today for creating high-impact scenes. However, what is most important is that the technology that has made all this possible is today available to everybody. The days when it was necessary to invest large sums in order to be able to produce photorealistic synthetic images are over. Although Virtual Reality seems to coincide and overlap with the technology from which it arose, the "discipline" has always been marked by profound contradictions. Today, researchers in all branches of Virtual Archaeology have an increasing need to establish and verify standards of quality. The objective is to use the technologies purely as a "means" of channelling content, helping to establish standards of quality that raise the scientific value of a discipline which promises to inform, amaze and fascinate with ever greater effectiveness and scientific rigour.

