

Making “The Parthenon”: Digitization and Reunification of The Parthenon and its Sculptures

Paul Debevec Chris Tchou Andrew Gardner Tim Hawkins Charalambos Poullis Jessi Stumpfel Andrew Jones Per Einarsson Marcos Fajardo Philippe Martinez

University of Southern California
Institute for Creative Technologies

13274 Fiji Way, 5th Floor
Marina del Rey, CA 90292

Contact Email: [email protected]

Fig. 1. Virtual model of the modern Parthenon acquired with laser scanning and surface reflectometry. The image is rendered with the Arnold global illumination system using light captured in Marina del Rey, California.

Abstract— In this paper we overview the technology and production processes used to create “The Parthenon”, a short computer animation that visually reunites the Parthenon and its sculptural decorations, separated since the beginning of the 19th century. The film uses a variety of technologies including 3D laser scanning, structured light scanning, photometric stereo, inverse global illumination, photogrammetric modeling, image-based rendering, BRDF measurement, and Monte Carlo global illumination to create the imagery used in the film.

I. INTRODUCTION

Since its completion in 432 B.C., the Parthenon has stood as the crowning monument of the Acropolis in Athens. Built at the height of classical Greece and dedicated to the goddess Athena, the Parthenon was among the most refined structures ever constructed, and featured sculptural decorations that are regarded as among the greatest achievements in classical art. Over time, the structure has been damaged by fire, vandalism, and war as Athens fell under the control of the Roman, Byzantine, Frankish, and Ottoman Empires. In the early 1800s, with Athens under the waning control of the Ottomans, the British Lord Elgin negotiated to remove the majority of the Parthenon’s frieze, metopes, and pediment sculptures from the Acropolis to England; since 1816 these sculptures have been in the collection of the British Museum. Whether the sculptures

should be repatriated to Greece is the subject of a longstanding international debate. With the sculptures far removed from the Parthenon, the possibility of visually reuniting the Parthenon with its sculptures in an animated visualization presented itself as an appropriate challenge for newly developed computer graphics techniques. This paper describes the work of our computer graphics research team to realize this goal.

Our group’s first research in modeling architectural scenes was presented at the SIGGRAPH 96 conference, where Paul Debevec presented results from his University of California at Berkeley Ph.D. thesis regarding a technique for modeling and rendering architecture from photographs [1] using photogrammetry and image-based rendering. Shortly thereafter, Debevec was forwarded an email from the Foundation of the Hellenic World organization in Greece inquiring whether the work could be applied to creating a virtual version of the Parthenon. Since the Parthenon is a geometrically complex ruin, the techniques from the thesis were not fully applicable; they were suited to more modern architecture with regular geometric forms. Nonetheless, the letter inspired research into the Parthenon’s architecture and history, a visit to the Parthenon sculpture cast collection at the University of Illinois at Urbana-Champaign, and initial experiments with 3D laser scanner data for capturing complex geometry. In August 1999, Debevec was introduced to archaeologist Philippe Martinez and learned of Martinez’s previous 3D scanning work, including the sculptural decorations of the circular Tholos temple at Delphi [2], using a MENSI scanner from Électricité de France. From this meeting it became clear that visualizing the Parthenon and its sculptures using computer graphics would be a fruitful endeavor both technologically and artistically.

The work we performed on this project involved extending currently available research techniques in order to: 1) Acquire 3D models of the Parthenon’s sculptures, 2) Acquire a 3D model of the Parthenon, 3) Derive the surface reflectance properties of the Parthenon, and 4) Render the Parthenon under novel real-world illumination. With these techniques in place, our group created a short computer animation, “The Parthenon”, that includes visualizations of the Parthenon and its sculptures, both as they exist in the British Museum and


Fig. 2. Scanning a cast of a Caryatid sculpture in the Basel Skulpturhalle Museum using structured light.

Fig. 3. The virtual Caryatid model included in a virtual reconstruction of the Erechtheion. The model was formed from 3D scans of a cast, augmented by photometric stereo albedo and surface normal measurements derived from photographs of the original.

where they were originally placed, with the goal of providing a visual understanding of their relationship to the architecture of the temple. This paper provides an overview of the techniques used to create the film.

II. SCANNING THE SCULPTURES

The film’s shots are arranged in four sequences. The first sequence shows 3D models of sculptures from the Parthenon’s frieze, metopes, and pediments; these are seen more extensively later in the film. While the majority of these sculptures are in the British Museum, a significant proportion of them remain in Athens, with only about half of them currently on display. To conveniently capture the full set of sculptures, Dr. Martinez arranged a collaboration with Dr. Thomas Lochman at Switzerland’s Basel Skulpturhalle museum, which features high-quality casts of nearly all of the surviving Parthenon sculptures. We designed a custom structured-light 3D scanning system [3] using a desktop video projector and a high-resolution monochrome video camera to efficiently capture the sculptures at an accuracy of between 1 and 2 millimeters. In five days in October of 2001, our team of four people acquired 2,200 3D scans including the Parthenon’s 160 m of frieze, its 52 surviving metopes, the East and West pediment arrangements, and a cast of a Caryatid figure from the Erechtheion (see Fig. 2). In addition, through the support of Jean-Luc Martinez at the Louvre, we scanned the original frieze panel, metope, and pediment head in Paris. These scans of the originals were added to the dataset and also used

to verify the accuracy of the scanned cast models. To assemble the individual 3D scans into complete surface models, we used the MeshAlign 2.0 software [4] developed at CNR Pisa (the Italian National Research Council), which implements a volumetric range scan merging technique [5]. Our 3D scan assembly process is described in [6], and sample models can be found at: http://www.ict.usc.edu/graphics/parthenongallery/.
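The structured-light scanner works by projecting known patterns so that each camera pixel can be matched to a projector column, which is then triangulated into a 3D point. The paper's system is detailed in [3]; as an illustration of one common pattern scheme for such scanners (not necessarily the one used here), the sketch below encodes projector columns as binary Gray-code stripe patterns and decodes them back from thresholded camera bits. The function names and projector width are illustrative assumptions:

```python
import numpy as np

def gray_patterns(width, bits):
    """Binary Gray-code stripe patterns for a projector with `width` columns.
    Returns a (bits, width) array; pattern b holds bit b of each column's
    Gray code, so adjacent columns differ in only one pattern."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                       # binary-reflected Gray code
    return (gray[None, :] >> np.arange(bits)[:, None]) & 1

def decode_column(bit_images):
    """Recover the projector column seen by each camera pixel from the
    thresholded bit images (same order as gray_patterns)."""
    bits = bit_images.shape[0]
    gray = np.zeros(bit_images.shape[1:], dtype=np.int64)
    for b in range(bits):
        gray |= bit_images[b].astype(np.int64) << b  # reassemble Gray value
    binary = gray.copy()
    shift = 1
    while shift < bits:                              # Gray -> binary: prefix XOR
        binary ^= binary >> shift
        shift <<= 1
    return binary
```

Gray codes are preferred over plain binary stripes because a thresholding error at a stripe boundary perturbs the decoded column by at most one.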

In the film, the introductory sculptures were rendered using a photographic negative shader to emphasize the sculptures’ contours and to provide an abstract introduction to the film. The second half of the sequence continues this appearance and chooses new images that hint at the history that connects the Parthenon to the present day. These images include a Byzantine cross (Fig. 5) carved into one of the Parthenon’s columns, a cannonball from one of the site’s battles, and a cannonball impact in one of the cella walls. The carving and the impact were recorded on the site of the Acropolis in November 2003 using a computer vision technique known as photometric stereo, in which the surfaces were illuminated using a wired camera flash unit (Fig. 4) and the appearance of the surfaces under different lighting directions was analyzed to determine the surface orientations and geometry at a fine scale. The cannonball model was scanned in April 2003 from a real cannonball on the Acropolis site using the structured light scanning system.
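Photometric stereo exploits the fact that, for a Lambertian surface, observed intensity is the dot product of the light direction with the albedo-scaled surface normal; three or more known lighting directions therefore determine the normal and albedo at every pixel by a linear least-squares solve. A minimal sketch of the classic formulation, assuming calibrated grayscale images and known flash directions (the paper's capture rig differs in detail):

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classic Lambertian photometric stereo: solve per pixel for the
    vector g = albedo * normal, then split it into albedo and unit normal.

    images:     (k, h, w) grayscale intensities, one image per light
    light_dirs: (k, 3) unit lighting directions
    """
    k, h, w = images.shape
    intensities = images.reshape(k, -1)            # one column per pixel
    # Least-squares solve light_dirs @ g = intensity for all pixels at once
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)
    albedo = np.linalg.norm(g, axis=0)             # |g| is the albedo
    normals = g / np.maximum(albedo, 1e-8)         # normalize to unit normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

The recovered normal field can then be integrated into fine-scale geometry, which is how detail such as the carved cross was reconstructed.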

Fig. 4. Photometric stereo data capture setup, consisting of a digital camera, a hand-held camera flash, and a calibration frame.

Fig. 5. A rendering of the Byzantine Christian inscription from the film; the geometry was recovered using photometric stereo with the device described above.


Fig. 6. The QuantaPoint 3D scanner at the site of the Acropolis.

III. SCANNING AND RENDERING THE PARTHENON

The film’s second sequence shows a time-lapse day of light over the modern Parthenon. The Parthenon was three-dimensionally scanned over a period of five days in April 2003 using a Quantapoint time-of-flight laser range scanner (Fig. 6). Each eight-minute scan covered a 360-degree horizontal by 84-degree vertical field of view and consisted of 57 million points. Fifty-three of the 120 scans acquired were assembled using the MeshAlign 2.0 software and post-processed using Geometry Systems Inc.’s GSI Studio software. The final model comprised 90 million polygons, which for efficient processing was divided into an 8 × 17 × 5 lattice of cubical voxels, each approximately 4 m on a side. The model also included a lower-resolution polygonal model of the surrounding terrain.
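Partitioning a 90-million-polygon model into a fixed voxel lattice is straightforward to sketch: each element is bucketed by which 4 m cell it falls in. The function below is an illustrative sketch only; the grid origin and the use of triangle centroids are assumptions, since the paper does not describe the partitioning code:

```python
import numpy as np

def voxel_index(points, origin, cell_size=4.0, grid_shape=(8, 17, 5)):
    """Assign each 3D point (e.g. a triangle centroid) to a cell of a
    cubical-voxel lattice such as the 8 x 17 x 5 grid of ~4 m voxels used
    to partition the Parthenon model. Returns integer (i, j, k) indices;
    points on the boundary are clamped into the grid."""
    idx = np.floor((np.asarray(points) - origin) / cell_size).astype(int)
    return np.clip(idx, 0, np.array(grid_shape) - 1)
```

Bucketing like this lets each voxel's geometry and textures be loaded and processed independently, which matters when the full model cannot fit in memory.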

The surface colors of the Parthenon were recovered from digital photographs using a novel environmental reflectometry process described in [7]. A Canon EOS 1Ds digital camera was used to photograph the Parthenon from many angles in a variety of lighting conditions. Each photograph shows the surface colors of the Parthenon, but transformed by the shading and shadowing present within the scene. In order to show the Parthenon under new illumination conditions, we needed to be able to determine the actual surface coloration at each surface point on the monument. These values are independent of the lighting in the scene, and are calibrated values such that zero represents a black surface that absorbs all light and one represents a white surface reflecting all light. To determine these colors we constructed a customized light probe device based on that in [8] to measure the incident illumination from the sun and sky on the Acropolis at the same moment that each picture of the Parthenon was taken (Fig. 7). We then applied an iterative algorithm to determine surface texture colors for the Parthenon model such that, when lit by each captured lighting environment, they produced renderings that matched the appearance in the corresponding photographs. The surface reflectance properties were stored in texture maps, one for each voxel of the Parthenon model. An orthographic view of the surface reflectance properties obtained for the front of the Parthenon’s West Facade is shown in Figure 8.
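The full method in [7] renders the scene with global illumination at each iteration; as a much-simplified illustration of the iterative idea, suppose each photograph's per-pixel irradiance were known and shading were Lambertian, so a rendering is just albedo times irradiance. The albedo estimate can then be repeatedly scaled by the ratio of photographed to rendered values. Everything below (function name, Lambertian assumption, known irradiance maps) is an illustrative simplification, not the paper's algorithm:

```python
import numpy as np

def refine_albedo(photos, irradiances, iterations=5):
    """Toy iterative reflectance estimation: scale a per-pixel albedo
    estimate by the ratio of each photograph to a rendering of the
    current estimate, averaged over all photographs. Assumes Lambertian
    surfaces and known per-pixel irradiance maps."""
    albedo = np.ones_like(photos[0])               # start from white surfaces
    for _ in range(iterations):
        ratios = [photo / np.maximum(albedo * irr, 1e-8)
                  for photo, irr in zip(photos, irradiances)]
        albedo = albedo * np.mean(ratios, axis=0)  # multiplicative update
    return np.clip(albedo, 0.0, 1.0)               # physical reflectance range
```

Averaging over photographs taken under different lighting is what makes the recovered colors independent of any one illumination condition.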

Fig. 7. The light probe device in operation near the Parthenon on the Acropolis, used in determining the surface reflectance colors of the Parthenon.

Fig. 8. Recovered surface reflectance colors for the West Facade of the Parthenon.

Once the surface reflectance properties of the Parthenon were recovered, we had the ability to virtually render the Parthenon model from any viewpoint and under any form of illumination. The time-lapse image-based lighting was chosen from one of several days recorded in Marina del Rey, CA using a new high dynamic range photography process [9]. This capture process uses a camera equipped with a fisheye lens pointing at the sky that takes a seven- or eight-exposure high dynamic range image series every minute. Covering the full dynamic range of the sky, from before dawn to the full intensity of the disk of the sun, required using a combination of shutter speed variation, aperture variation, and a factor-of-one-thousand neutral density filter attached to the back of the fisheye lens. The captured image sequence spanned a dynamic range of over a million to one. We rendered the sequence (and the rest of the film) using the Arnold Monte Carlo global illumination rendering system written by Marcos Fajardo. Rendering this sequence virtually allowed us to show the Parthenon without its current scaffolding and tourists and to orchestrate the movement of the camera through the scene during the postproduction process.
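The core of assembling such an exposure series is merging differently exposed images of the same scene into one radiance map. A minimal sketch of exposure merging, assuming linearized pixel values in [0, 1] and a simple "hat" weighting; the actual process in [9] additionally accounts for the neutral density filter and aperture changes:

```python
import numpy as np

def merge_exposures(images, shutter_times):
    """Merge a bracketed series of linear-response images into a single
    high-dynamic-range radiance map. Each pixel's radiance estimates
    (pixel value / shutter time) are averaged with a hat weight that
    favors mid-range values, where the sensor is neither underexposed
    nor clipped."""
    numer = np.zeros_like(images[0], dtype=np.float64)
    denom = np.zeros_like(numer)
    for img, t in zip(images, shutter_times):
        weight = 1.0 - np.abs(2.0 * img - 1.0)  # 1 at mid-gray, 0 at clip points
        numer += weight * img / t               # radiance estimate from this exposure
        denom += weight
    return numer / np.maximum(denom, 1e-8)
```

Long exposures contribute the dim sky before dawn, while the shortest exposures (through the ND filter) contribute the solar disk, which is why the merged sequence can span over a million to one.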

The second sequence ends with a view of the Parthenon seen from a virtual reconstruction of the Caryatid porch of


the Erechtheion (Fig. 3). Our 3D model of the Caryatid from the Basel Skulpturhalle was duplicated several times to form the complete Caryatid porch. We obtained the fine detail of the Caryatid’s face by acquiring a photometric stereo dataset of the original statue in the British Museum, and this detail was added to the lower-resolution 3D model for the close-up of the sculpture.

IV. RECREATING THE BRITISH MUSEUM

The film’s third sequence takes place in a virtual re-creation of the Parthenon Sculptures Gallery in the British Museum. This gallery is the most recent space to exhibit the sculptures since their transport from Athens by Lord Elgin in the early 1800s. The dimensions of the gallery were obtained from digital photographs using the Facade photogrammetric modeling system [1], and details were added to the model using traditional 3D modeling in Alias’s Maya software. Texture maps were created from “unlit” digital photographs, with absolute color and reflectance values determined using a reference chart to correctly simulate indirect light within the museum. Photographs of the real sculptures were projected onto 3D models from the cast collection to produce the virtual models in the museum. The torso of Poseidon, a particularly dramatic sculptural fragment, was especially challenging to visualize since the scanned 3D model from the Basel Skulpturhalle included a restoration of the frontal fragment which remains in Athens. Lacking a useful model of the torso of Poseidon, the final shot in the gallery sequence was created using only a set of still photographs taken in a circle around the torso. Tim Hawkins developed a silhouette-based reconstruction algorithm to derive approximate geometry from the photographs, and then implemented a view-dependent image-based rendering algorithm to create the virtual camera move around the sculpture.
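Silhouette-based reconstruction of this kind can be sketched as carving a visual hull: a candidate 3D point survives only if it projects inside the object's silhouette in every photograph. The interfaces below (projection callables, boolean silhouette masks) are illustrative assumptions, not Hawkins's implementation:

```python
import numpy as np

def visual_hull(points, cameras, silhouettes):
    """Keep only the candidate 3D points that fall inside the object's
    silhouette in every view.

    points:      (n, 3) candidate world-space points
    cameras:     list of functions mapping (n, 3) points to (n, 2) pixel (x, y)
    silhouettes: matching boolean masks of shape (height, width)
    """
    keep = np.ones(len(points), dtype=bool)
    for project, mask in zip(cameras, silhouettes):
        px = np.round(project(points)).astype(int)
        h, w = mask.shape
        in_frame = ((px[:, 0] >= 0) & (px[:, 0] < w) &
                    (px[:, 1] >= 0) & (px[:, 1] < h))
        inside = np.zeros(len(points), dtype=bool)
        inside[in_frame] = mask[px[in_frame, 1], px[in_frame, 0]]
        keep &= inside                     # carve away points outside this view
    return points[keep]
```

A ring of photographs taken around the torso constrains the hull well from the sides, and view-dependent texturing then hides the hull's remaining approximation error.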

V. THE FINAL TRANSITION SEQUENCE

The final sequence of the film shows a continuous camera move with five cross-dissolves between the sculptures in the museum and their original locations on the Parthenon. The matched-motion transitions were made possible by having accurate three-dimensional models for all of the visual elements. These models allowed the viewpoint to be made consistent, and, where it was necessary, the illumination could be matched between the elements as well.

Near the end of the sequence, the view transitions from a panel of the North frieze in the British Museum to its painted appearance on the ancient Parthenon. This coloration is conjectural since essentially no trace of the original polychromy survives; educated theories draw inspiration from contemporary painted artifacts and surviving traces of color on the temple itself. Our team used several references including [10] and [11] to develop our coloration of the frieze, which included a blue background and brightly colored clothing with an emphasis on the heroic red. As the camera pulls back from the frieze, the film shows the ancient Parthenon in its original glory amongst its neighboring buildings on the ancient

Fig. 9. A shot of the torso of Poseidon (left) was created by deriving geometry from points and silhouettes (right).

Fig. 10. Virtual model of the Parthenon Gallery in the British Museum, obtained through photogrammetry and structured light scanning.

Acropolis. Though the sequence is brief, efforts were made to model the Parthenon’s original refinements, such as the entasis of the columns. These final reconstructions were based on architectural plans displaying the results of archaeological research, including the drawings of Prof. Manolis Korres [12].

VI. CONCLUSION

The Parthenon film is the effort of several researchers and artists over the course of four years. As a short animation, it was necessary to be selective in the imagery chosen to tell a story about the temple to a contemporary audience. One principle that guided our choices was to construct the imagery as much as possible from what can be recorded today, which motivated the use of 3D scanning and reflectance measurement technology to obtain accurate geometry and coloration wherever possible. The sequence showing the restored frieze was necessary to connect the ruin to its original appearance, but it is kept brief, perhaps in proportion to what is known of Periclean Athens relative to what has been lost to time. While there are

Fig. 11. Close-up of the painted frieze.


Fig. 12. The painted frieze of the ancient Parthenon.

Fig. 13. The restored Parthenon on the ancient Acropolis.

many stories to tell about the temple, the separation of the architecture and its decorations seemed the most immediate to the monument’s place in modern consciousness. We hope that our film has virtually reunited the Parthenon and its sculptures, not only as datasets on a computer screen, but in the mind of the viewer who may some day visit the British Museum or ascend the Acropolis.

ACKNOWLEDGMENTS

The Parthenon film was created by Paul Debevec, Brian Emerson, Marc Brownlow, Chris Tchou, Andrew Gardner, Andreas Wenger, Tim Hawkins, Jessi Stumpfel, Charis Poullis, Andrew Jones, Nathan Yun, Therese Lundgren, Per Einarsson, Marcos Fajardo, and John Shipley, and produced by Diane Piepol, Lora Chen, and Maya Martinez. We wish to thank Tomas Lochman, Nikos Toganidis, Katerina Paraschis, Manolis Korres, Jean-Luc Martinez, Richard Lindheim, David Wertheimer, Neil Sullivan, and Angeliki Arvanitis, Cheryl Birch, James Blake, Bri Brownlow, Chris Butler, Elizabeth Cardman, Alan Chalmers, Yikuong Chen, Paolo Cignoni, Jon Cohen, Costis Dallas, Christa Deacy-Quinn, Paul T. Debevec, Naomi Dennis, Apostolos Dimopoulos, George Drettakis, Paul Egri, Costa-Gavras, Darin Grant, Rob Groome, Christian Guillon, Craig Halperin, Youda He, Eric Hoffman, Leslie Ikemoto, Peter James, David Jillings, Genichi Kawada, Shivani Khanna, Randal Kleiser, Cathy Kominos, Jim Korris, Marc Levoy, Dell Lunceford, Donat-Pierre Luigi, Mike Macedonia, Brian Manson, Jean-Luc Martinez, Paul Marty, Hiroyuki Matsuguma, Gavin Miller, Michael Moncreif, Chris Nichols, Chrysostomos Nikias, Mark Ollila, Yannis Papoutsakis, John Parmentola,

Fred Persi, Dimitrios Raptis, Simon Ratcliffe, Mark Sagar, Roberto Scopigno, Alexander Singer, Judy Singer, Diane Suzuki, Laurie Swanson, Bill Swartout, Despoina Theodorou, Mark Timpson, Rippling Tsou, Zach Turner, Esdras Varagnolo, Michael Wahrman, Greg Ward, Karen Williams, and Min Yu for their help making this project possible. We also offer our great thanks to the Hellenic Ministry of Culture, the Work Site of the Acropolis, the Basel Skulpturhalle, the Musée du Louvre, the Herodion Hotel, Quantapoint, Alias, CNR Pisa, and Geometry Systems, Inc. for their support of this project. This work was sponsored by TOPPAN Printing Co., Ltd., the University of Southern California office of the Provost, and Department of the Army contract DAAD 19-99-D-0046. Any opinions, findings and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the sponsors.

REFERENCES

[1] P. E. Debevec, C. J. Taylor, and J. Malik, “Modeling and rendering architecture from photographs: A hybrid geometry- and image-based approach,” in Proceedings of SIGGRAPH 96, ser. Computer Graphics Proceedings, Annual Conference Series, Aug. 1996, pp. 11–20.

[2] B. J., “Marmaria, le sanctuaire d’Athéna à Delphes,” École Française d’Athènes, Paris, July 1996.

[3] C. Tchou, “Image-based models: Geometry and reflectance acquisitionsystems,” Master’s thesis, University of California at Berkeley, 2002.

[4] M. Callieri, P. Cignoni, F. Ganovelli, C. Montani, P. Pingi, and R. Scopigno, “VCLab’s tools for 3D range data processing,” in VAST 2003 and EG Symposium on Graphics and Cultural Heritage, 2003.

[5] B. Curless and M. Levoy, “A volumetric method for building complex models from range images,” in Proceedings of SIGGRAPH 96, ser. Computer Graphics Proceedings, Annual Conference Series. New Orleans, Louisiana: ACM SIGGRAPH / Addison Wesley, August 1996, pp. 303–312.

[6] J. Stumpfel, C. Tchou, N. Yun, P. Martinez, T. Hawkins, A. Jones, B. Emerson, and P. Debevec, “Digital reunification of the Parthenon and its sculptures,” in 4th International Symposium on Virtual Reality, Archaeology and Intelligent Cultural Heritage, Brighton, UK, 2003.

[7] P. Debevec, C. Tchou, A. Gardner, T. Hawkins, A. Wenger, J. Stumpfel, A. Jones, C. Poullis, N. Yun, P. Einarsson, T. Lundgren, P. Martinez, and M. Fajardo, “Estimating surface reflectance properties of a complex scene under captured natural illumination,” conditionally accepted to ACM Transactions on Graphics, 2004.

[8] P. Debevec, “Rendering synthetic objects into real scenes: Bridging traditional and image-based graphics with global illumination and high dynamic range photography,” in Proceedings of SIGGRAPH 98, ser. Computer Graphics Proceedings, Annual Conference Series, July 1998, pp. 189–198.

[9] J. Stumpfel, A. Jones, A. Wenger, C. Tchou, T. Hawkins, and P. Debevec, “Direct HDR capture of the sun and sky,” in 3rd International Conference on Virtual Reality, Computer Graphics, Visualization and Interaction in Africa (AFRIGRAPH 2004), August 2004.

[10] J. Neils, The Parthenon Frieze. Cambridge University Press, 2001.

[11] M. Robertson and A. Frantz, The Parthenon Frieze. Phaidon Press, 1975.

[12] M. Korres, The Parthenon and its Impact on Modern Times. Melissa Publishing House, 1994, ch. 2, The Architecture of the Parthenon.