
Engaging Place with Mixed Realities: Sharing Multisensory Experiences of Place Through Community-Generated Digital Content and Multimodal Interaction

Oliver Dawkins¹ and Gareth W. Young²

¹ MUSSI, Maynooth University, Co. Kildare, Ireland. Oliver.Dawkins@MU.ie
² V-SENSE, Trinity College Dublin, Dublin, Ireland. YoungGa@TCD.ie

Abstract. This paper discusses the motivation and potential methodologies for the use of mixed reality and multimodal interaction technologies to engage communities and members of the public with participation in the active creation and use of urban data. This research has been conducted within the context of a wider research program investigating the use of data dashboard technologies and open data to more effectively communicate information to urban authorities and citizens and enable more evidence-based decision making. These technologies have drawn criticism for promoting objectifying, data-driven approaches to urban governance that have proven insensitive to the specificity of place and the contexts of citizens’ daily lives. Within the digital and spatial humanities, there has been growing interest in ‘deep mapping’ as a means for recovering the sense of place and the nuances of everyday life through the incorporation of spatial narratives and multimedia in their mapping practices. This paper considers the ways in which mixed realities can contribute to these efforts, and in particular the unique affordances of virtual reality for evoking an embodied sense of presence that contributes to the communication of a sense of place via rich multisensory experiences. The paper concludes with the discussion of a pilot study conducted with members of the public. This demonstrates the ways in which virtual environments can be created in ways that maintain contextual and affective links to the places they represent as a result of involvement in the ‘hands-on’ activity of mapping through urban sensing and the capture of place-based media.

    Keywords: Mixed reality · Space & place · Deep mapping · Urban sensing · Community engagement

© Springer Nature Switzerland AG 2020. J. Y. C. Chen and G. Fragomeni (Eds.): HCII 2020, LNCS 12191, pp. 199–218, 2020. https://doi.org/10.1007/978-3-030-49698-2_14



    1 Introduction

For more than a decade technology vendors and urban administrations have courted a form of ‘smart’ urbanism that has sought to leverage technological innovation as a means for monitoring, communicating, and addressing urban concerns. These include the provision and availability of services; the movement, comfort, and safety of people; the management, sustainability, and security of utility and transportation infrastructures; and the impact on and of environmental conditions. The solutions offered by technology providers typically involved the use of advanced information and communications technologies (ICTs) to connect urban sensing infrastructures with cloud-based platforms that facilitate the aggregation, analysis, and visualization of urban data, often at multiple scales and aggregations, and in near real-time. Given the growing range of location-based data available to cities, data dashboards and digital maps in particular have become powerful tools for visualizing urban conditions at scale and making spatially informed decisions. In this way, they provide the principal means of enabling ‘the spatialised intelligence of the city to represent itself to itself’ [37]. Providing local governments with a means for planning, displaying, and evaluating the performance of their policy decisions and interventions, they have also become a tool for communicating to other city stakeholders and their wider communities [53].
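To make the shape of such a pipeline concrete, the following minimal sketch (in Python, with purely illustrative names such as SensorReading and aggregate_for_dashboard that come from neither the paper nor any specific platform) shows how a georeferenced sensor observation might be aggregated into the kind of summary a dashboard would display:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from statistics import mean

@dataclass
class SensorReading:
    """One observation from an urban sensing node, georeferenced for mapping."""
    sensor_id: str
    lat: float           # WGS84 latitude
    lon: float           # WGS84 longitude
    observed_at: datetime
    variable: str        # e.g. "pm2_5", "noise_db", "footfall"
    value: float

def aggregate_for_dashboard(readings: list[SensorReading], variable: str) -> dict:
    """Aggregate raw readings into a single dashboard-ready summary.

    Dashboards typically display aggregates per area and time window
    rather than individual observations.
    """
    selected = [r.value for r in readings if r.variable == variable]
    return {
        "variable": variable,
        "count": len(selected),
        "mean": mean(selected) if selected else None,
        "updated": datetime.now(timezone.utc).isoformat(),
    }
```

In practice such aggregation would run continuously over streaming data, but the structure of the record, an identifier, coordinates, a timestamp, and a value, is what makes the dashboard view possible.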

Despite advances in the technology, effective use of dashboards and online maps requires varying degrees of data literacy and familiarity with the relevant visualization conventions [54]. They also pose problems of context for decision making due to the separation they introduce between the phenomena they represent and the unique spatial and temporal contexts in which those phenomena occur. In the critical discourse on smart cities, these technologies have become symbols for wider trends in the ‘datafication’ of society: a process by which the ordinary practices of everyday life become quantified as discrete and abstract ‘data points’ which derive their meaning and value from their position on a map or sequence in a time-series [27]. The concern is that the quantifiable aspects of everyday phenomena take precedence over more nuanced and qualitative understandings of social behavior, otherwise grounded in the unique relational contexts of specific places and practices of everyday life. In the absence of such context, non-experts and outsiders can easily reach false conclusions. Moreover, the people and communities those abstractions represent may feel misrepresented in the absence of the daily sights and sounds of their local streets.

Human geographers and researchers working on pervasive and ubiquitous computing in HCI have been particularly vocal in their calls for more citizen-centric, participatory, and place-based approaches to smart urbanism. From the perspective of human geography, it is our sense of place which frames our cultural understanding of human behavior and our day-to-day activities in geographic space [47]. In the emerging field of the spatial humanities, new methodologies of ‘deep mapping’ are being explored to re-inscribe a sense of place back into our maps and spatial representations through the integration of


    varied place-based media such as written narrative accounts, pictures, and sound recordings:

    “A deep map is simultaneously a platform, a process, and a product. It is an environment embedded with tools to bring data into an explicit and direct relationship with space and time; it is a way to engage evidence within its spatiotemporal context and to trace paths of discovery that lead to a spatial narrative and ultimately a spatial argument; and it is the way we make visual the results of our spatially contingent inquiry and argument.” [4, pp. 2–3]

The concept of the deep map has already informed practical research into the construction of online multimedia mapping platforms such as HyperCities [43]. More recent proposals have advanced deep mapping as a means for understanding smart cities through conceptual archaeology and the practical excavation and mapping of their material and media infrastructures [26]. This earlier research provides the point of departure for our own investigation of deep mapping as an activity which can utilize mixed realities, both as a technical means and conceptual framework, to engage communities in a participatory process that leverages new technologies, while accommodating different modes of participation and different levels of data literacy and technical ability, to viscerally communicate a shared sense of place.
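As an illustration of how place-based media might be kept in explicit relation to space and time, the sketch below models a single deep-map entry in Python; the type and field names are assumptions made for illustration rather than a published schema:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class DeepMapEntry:
    """A place-based media item held in explicit relation to space and time."""
    media_uri: str                  # photo, audio recording, or written narrative
    media_type: str                 # "image", "audio", "text", ...
    lat: float
    lon: float
    captured_at: datetime
    contributor: str                # community member who captured the item
    narrative: str = ""             # free-text account preserving local context
    tags: list[str] = field(default_factory=list)

# Entries accumulate into a deep map: a platform (the collection), a process
# (community capture over time), and a product (the resulting spatial narrative).
deep_map: list[DeepMapEntry] = []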

We propose that Mixed Reality (MR) technologies provide an ideal means for undertaking the construction and presentation of deep maps. MR is inherently spatial and affords the potential for experiences of place that incorporate a wide range of data while selectively engaging the entire sensory spectrum. With the aid of MR, physical reality and digital virtuality can be combined to varying degrees. MR represents a continuum of digitally mediated experience which spans from unmediated experience of the physical environment at one extreme to full immersion in entirely synthetic computer-generated environments at the other [28]. Between these extremes, MR can vary not only the degree and nature of digital content presented to the user, but also the level of interaction between the user, the content presented to them, and the environment in which it is presented. Visual elements can take the form of simple text and image overlays, georeferenced objects and information popups, or even AI characters that respond to the user and the structure of the physical environment. They can also be accompanied by sound and, in some cases, by haptic feedback, engaging multiple senses simultaneously.
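One way to picture such content is as georeferenced overlays whose visual, auditory, and haptic components are optional and can be varied independently; the following sketch uses hypothetical field names and is not tied to any particular MR toolkit:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MROverlay:
    """A georeferenced piece of MR content with optional sensory components."""
    lat: float
    lon: float
    altitude_m: float = 0.0
    text: Optional[str] = None            # simple text or label overlay
    image_uri: Optional[str] = None       # image or 3D object reference
    audio_uri: Optional[str] = None       # spatialised sound clip
    haptic_pattern: Optional[str] = None  # e.g. a named vibration pattern

    def active_modalities(self) -> list[str]:
        """Report which sensory channels this overlay will engage."""
        present = {
            "visual": self.text or self.image_uri,
            "auditory": self.audio_uri,
            "haptic": self.haptic_pattern,
        }
        return [name for name, value in present.items() if value]
```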

Multimodal interactions in MR serve to combine sensory modalities and provide the user with a richer set of interactions [19]. Although multimodality has many different definitions, they can be broadly categorized into three main areas of interest for MR: human-centered, system-centered, and definitions that incorporate human and system-type classifications [51]. As proposed by Moller et al. [31], the latter category of definitions offers the most comprehensive characterization of multimodality for MR, specifically – “systems which enable human-machine interaction through a number [of] media, making use


of different sensoric channels” [31]. Furthermore, embodied, multimodal, multimedia interactions have been demonstrated to enhance dynamic, interactive maps in terms of their flexibility, naturalness, and efficiency in use [15,33]. Multimodal MR technologies can, therefore, communicate a wide range of digital content while selectively engaging any or all of the multiple sensory channels available to potential users. This provides opportunities for MR experiences to be more readily tailored to the requirements of particular user groups and offers greater scope for users to engage with the use and generation of urban data.
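Building on the hypothetical MROverlay sketch above, tailoring an experience to a particular user group could be as simple as filtering which channels of each overlay are rendered; this is an illustrative sketch, not part of any specific MR framework:

```python
def render_for_user(overlay: "MROverlay", enabled_channels: set[str]) -> dict:
    """Return only the overlay components matching the channels a given user
    can or wishes to use (e.g. {"auditory", "haptic"} for a low-vision user)."""
    components = {}
    if "visual" in enabled_channels:
        components["text"] = overlay.text
        components["image"] = overlay.image_uri
    if "auditory" in enabled_channels:
        components["audio"] = overlay.audio_uri
    if "haptic" in enabled_channels:
        components["haptic"] = overlay.haptic_pattern
    return components
```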