
Master's programme: Visual Computing

Diploma Thesis Presentation

Improved Persistent Grid Mapping

Peter Houska

Technische Universität Wien, Institute of Visual Computing and Human-Centered Technology

Research Unit: Computer Graphics
Advisor: Assoc. Prof. Dipl.-Ing. Dipl.-Ing. Dr.techn. Michael Wimmer

Assistance: Prof. Mag.rer.soc.oec. Dipl.-Ing. Dr.techn. Daniel Scherzer

Motivation & Problem

Terrains are an essential part of applications involving interactive visualizations of outdoor scenes. Terrain surfaces typically cover the entire virtual environment, making it impossible to render the landscape as a single high-detail object at interactive frame rates. A widespread approach is to keep only the terrain data surrounding the camera in video memory and to adapt triangle density based on camera distance, maintaining the illusion of rendering the entire surface at full detail. Consequently, the terrain mesh needs to be re-adapted constantly as the camera roams freely.

Persistent Grid Mapping (PGM) addresses these challenges by projecting the terrain mesh from the camera's point of view, through a second projection camera, onto the ground plane, thereby achieving continuous camera-distance-based adaptation of the mesh's detail level as well as automatic view-frustum culling. After projection, mesh vertices are displaced vertically by the sampled heightmap value.

While PGM is an elegant algorithm, it unfortunately suffers from vertex swimming: peaks and ridges in particular flicker under camera movement. We describe four GPU-based techniques to increase the fidelity of the reconstructed terrain.
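Conceptually, the per-vertex PGM step is a ray/ground-plane intersection followed by a heightmap fetch. The sketch below illustrates this on the CPU; the analytic height function and all names are illustrative stand-ins, not the author's implementation (in practice this runs on the GPU, sampling a heightmap texture per vertex).

// Minimal sketch of the PGM per-vertex step (illustrative, not the
// author's code): a grid vertex is cast as a ray from the projection
// camera onto the ground plane y = 0, then displaced by the heightmap.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

// Stand-in for a heightmap texture fetch.
static float sampleHeight(float x, float z) {
    return 20.0f * std::sin(0.01f * x) * std::cos(0.01f * z);
}

// Intersect the ray origin + t * dir with the plane y = 0.
static Vec3 hitGroundPlane(Vec3 origin, Vec3 dir) {
    float t = -origin.y / dir.y;  // assumes dir.y != 0 (ray points downward)
    return { origin.x + t * dir.x, 0.0f, origin.z + t * dir.z };
}

int main() {
    Vec3 camPos = { 0.0f, 100.0f, 0.0f };    // projection camera position
    Vec3 rayDir = { 0.3f, -1.0f, 0.5f };     // direction through one grid vertex
    Vec3 v = hitGroundPlane(camPos, rayDir); // project onto the ground plane
    v.y = sampleHeight(v.x, v.z);            // vertical displacement
    std::printf("vertex: (%.1f, %.1f, %.1f)\n", v.x, v.y, v.z);
}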

Our Method

While PGM performs view-frustum culling by definition, many vertices still end up off screen, as the grid is projected onto an unnecessarily large area of the ground plane. Grid tailoring shrinks the grid so that every grid vertex is potentially visible after displacement. This is achieved by intersecting the view frustum with the volume that the terrain potentially occupies; setting the height of all intersection-volume vertices to the ground-plane height yields the minimal area on the ground plane that the grid needs to cover (see the sketch below).
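As a rough illustration of the tailoring step, the following sketch flattens the vertices of the frustum/terrain-volume intersection onto the ground plane and bounds them; for brevity it uses an axis-aligned rectangle instead of the exact convex region, and all names are hypothetical.

// Hedged sketch of grid tailoring: the input would be the vertices of
// the clipped polyhedron (view frustum intersected with the slab
// 0 <= y <= maxTerrainHeight). Flattening them to y = 0 and bounding
// them gives the ground area the projected grid must cover.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };
struct Rect { float minX, minZ, maxX, maxZ; };

static Rect tailoredGroundArea(const std::vector<Vec3>& corners) {
    Rect r = { 1e30f, 1e30f, -1e30f, -1e30f };
    for (const Vec3& c : corners) {          // y is ignored: flatten to plane
        r.minX = std::min(r.minX, c.x);  r.maxX = std::max(r.maxX, c.x);
        r.minZ = std::min(r.minZ, c.z);  r.maxZ = std::max(r.maxZ, c.z);
    }
    return r;
}

int main() {
    // Hypothetical intersection-volume vertices (near plane, far plane).
    std::vector<Vec3> corners = {
        {-10, 0, 5}, {10, 0, 5}, {-40, 30, 120}, {40, 30, 120},
    };
    Rect r = tailoredGroundArea(corners);
    std::printf("grid covers [%.0f, %.0f] x [%.0f, %.0f]\n",
                r.minX, r.maxX, r.minZ, r.maxZ);
}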

Grid warping redistributes grid vertices from the camera's vicinity toward the horizon, thereby reducing excessive grid stretching in the distance. The redistribution process is based on the aspect ratio of the projected grid quads.
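The redistribution can be read as an inverse-CDF mapping: output rows are spaced uniformly in cumulative-importance space, matching the (∫ i)⁻¹ mapping in the warping figure below. The sketch assumes per-row importance values, e.g. derived from the aspect ratios of the projected quads; it is illustrative, not the author's shader code.

// Sketch of importance-based grid warping via the inverted cumulative
// importance: rows with high importance receive more grid vertices.
#include <cstdio>
#include <vector>

// Map n uniformly spaced parameters through the inverse of the
// cumulative importance, yielding warped row positions in [0, 1].
static std::vector<float> warpRows(const std::vector<float>& importance, int n) {
    std::vector<float> cdf(importance.size() + 1, 0.0f);
    for (size_t k = 0; k < importance.size(); ++k)
        cdf[k + 1] = cdf[k] + importance[k];      // cumulative importance
    std::vector<float> rows(n);
    for (int j = 0; j < n; ++j) {
        float target = cdf.back() * j / (n - 1); // uniform in CDF space
        size_t k = 0;
        while (k + 1 < cdf.size() && cdf[k + 1] < target) ++k;
        float seg = cdf[k + 1] - cdf[k];
        float frac = seg > 0 ? (target - cdf[k]) / seg : 0.0f;
        rows[j] = (k + frac) / importance.size(); // inverse-CDF lookup
    }
    return rows;
}

int main() {
    // Hypothetical importance per row, e.g. from projected-quad aspect ratios.
    std::vector<float> importance = { 4.0f, 2.0f, 1.0f, 0.5f };
    for (float r : warpRows(importance, 9)) std::printf("%.3f ", r);
    std::printf("\n");  // rows cluster where importance is high
}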

Local edge search moves grid vertices that missed a terrain peak toward the viewer and onto the peak's ridge.
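A minimal 1D illustration of the idea follows; the fixed step size and bounded step count are assumptions, as the poster does not spell out the exact stepping scheme.

// Illustrative sketch of local edge search: if a grid vertex landed just
// behind a peak, step it toward the viewer as long as the sampled terrain
// height keeps rising, so that it snaps onto the ridge.
#include <cmath>
#include <cstdio>

static float sampleHeight(float x) {  // 1D stand-in heightmap: peak at x = 50
    return 30.0f * std::exp(-0.001f * (x - 50) * (x - 50));
}

int main() {
    float x = 80.0f;          // vertex position (viewer sits at x = 0)
    const float step = 1.0f;  // assumed search step toward the viewer
    const int maxSteps = 32;  // bounded search, as one would use on the GPU
    for (int s = 0; s < maxSteps; ++s) {
        if (sampleHeight(x - step) <= sampleHeight(x)) break;  // past the ridge?
        x -= step;            // move toward the viewer, uphill
    }
    std::printf("vertex moved to x = %.1f, h = %.1f\n", x, sampleHeight(x));
}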

Image-based bidirectional temporal coherence (TC) is extended by scattering points on terrain silhouettes (as seen from the camera) from fully rendered I-frames into interpolated B-frames to reduce reconstruction artifacts. Furthermore, whenever the camera stops translating, jittered grid projections are accumulated through unidirectional TC.
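The accumulation part can be pictured as a running average over jittered renders of the same view, as sketched below for a single pixel; the blend rule is an illustrative choice, not necessarily the one used in the thesis.

// Sketch of the unidirectional-TC accumulation idea: while the camera is
// not translating, each new frame is rendered with a slightly jittered
// grid projection and blended into an accumulation buffer, converging
// toward a supersampled image.
#include <cstdio>

int main() {
    float accum = 0.0f;  // one accumulated pixel value
    float samples[] = { 0.9f, 0.2f, 0.6f, 0.5f, 0.4f };  // jittered renders
    int n = 0;
    for (float s : samples) {
        ++n;
        accum += (s - accum) / n;  // running average over jittered frames
        std::printf("frame %d: accumulated = %.3f\n", n, accum);
    }
}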

[Figure: grid tailoring. Basic PGM vs. tailored grid; labels: view frustum of the projection camera, view frustum of the main camera, max terrain height, ground plane.]

[Figure: grid warping. An input grid is redistributed into a warped grid according to per-row importance i, via the inverted cumulative importance (∫ i)⁻¹.]

[Figure: local edge search, front view and top view; grid vertices with height mismatch Δh > 0 or Δh < 0 are moved toward the main camera.]

[Figure: bidirectional temporal coherence timeline over frames t-2 … t+1. The calculation of an I-frame F_t is distributed over four frames and must be finished by the time F_t is displayed; intermediate B-frames (e.g. B_t+0.75) are calculated from the past and future I-frames. Points at depth discontinuities are scattered from the past and future I-frames (F_t, F_t+1) into the current B-frame (B_t+0.5).]

[Figure: unidirectional temporal coherence; accumulation results after frames 5, 20, 50, and 90.]

Results

We compared PGM and our method to a reference solution in a virtual flight over Puget Sound, rendering 500K triangles per frame to a 1600 x 900 screen. The reference solution was generated by rendering a densely tessellated terrain mesh without any kind of resolution adaptation. The quality of each method was quantified with the peak signal-to-noise ratio (PSNR) metric (larger values mean better quality).

Method               Min PSNR   Avg PSNR   Max PSNR
PGM                  25.45 dB   27.69 dB   31.65 dB
Ours (without TC)    29.22 dB   31.25 dB   34.75 dB
Ours (with TC)       26.33 dB   36.41 dB   41.01 dB
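For reference, PSNR compares two images through their mean squared error, PSNR = 10 · log10(MAX² / MSE), in decibels. A minimal sketch for 8-bit samples (not the evaluation code used for the poster):

// Standard PSNR for 8-bit image data: larger values mean the image is
// closer to the reference.
#include <cmath>
#include <cstdio>
#include <vector>

static double psnr(const std::vector<unsigned char>& a,
                   const std::vector<unsigned char>& b) {
    double mse = 0.0;
    for (size_t k = 0; k < a.size(); ++k) {
        double d = double(a[k]) - double(b[k]);
        mse += d * d;                      // accumulate squared error
    }
    mse /= a.size();                       // mean squared error
    return 10.0 * std::log10(255.0 * 255.0 / mse);
}

int main() {
    std::vector<unsigned char> ref = { 10, 200, 40, 90 };  // toy reference
    std::vector<unsigned char> img = { 12, 198, 43, 88 };  // toy rendering
    std::printf("PSNR = %.2f dB\n", psnr(ref, img));
}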

The following images show zoomed-in sample screenshots from the virtual fly-through, along with a difference picture between the reference solution and our method (all four improvements enabled).

[Figure: zoomed-in screenshots; panels: reference, our method at 38.47 dB PSNR, difference image.]

The first three techniques incur virtually no performance penalty, while temporal coherence drops the framerate considerably and also increases memory and bandwidth requirements. We measured the following frame durations on a computer with a 3 GHz dual-core CPU, 3 GB RAM, and a GeForce GTS 450 with 1 GB GDDR5 VRAM (128-bit memory interface).

[Table: measured frame durations.]

Memory requirements for 1080p and 2160p screen resolutions are given below; the temporal-coherence buffers scale with the number of screen pixels, while the remaining techniques have a fixed footprint.

Screen Resolution   Techniques 1-3   Temporal Coherence
1920 x 1080         126.5 KB         77.2 MB
3840 x 2160         126.5 KB         308.6 MB

Contact: peter.houska@gmail.com
