The University of Manchester Research

Technology-Induced Human Memory Degradation

Document Version: Final published version

Link to publication record in Manchester Research Explorer

Citation for published version (APA):
Clinch, S., Alghamdi, O., & Steeds, M. (Accepted/In press). Technology-Induced Human Memory Degradation. Paper presented at Creative Speculation on the Negative Effects of HCI Research, Workshop at The ACM CHI Conference on Human Factors in Computing Systems (CHI 2019), Glasgow, United Kingdom.

Citing this paper:
Please note that where the full text provided on Manchester Research Explorer is the Author Accepted Manuscript or Proof version, this may differ from the final Published version. If citing, it is advised that you check and use the publisher's definitive version.

General rights:
Copyright and moral rights for the publications made accessible in the Research Explorer are retained by the authors and/or other copyright owners, and it is a condition of accessing publications that users recognise and abide by the legal requirements associated with these rights.

Takedown policy:
If you believe that this document breaches copyright please refer to the University of Manchester’s Takedown Procedures [http://man.ac.uk/04Y6Bo] or contact [email protected] providing relevant details, so we can investigate your claim.

Download date: 07 Dec 2021



Technology-Induced Human Memory Degradation

Sarah Clinch
[email protected]
School of Computer Science
The University of Manchester
Manchester, UK

Omar Alghamdi
Madeleine Steeds
[email protected]
[email protected]

INTRODUCTION

Humans have long used technologies to extend aspects of their memory [Figure 1]. For as long as such technologies have existed, there have been critics insisting that these augmentations will make humans lazy, leading them to use the technology instead of their brains (e.g. [1]). Indeed, some recent evidence suggests that technologies such as digital cameras [4, 8], social media [8], search engines [7, 14] and digital file storage [7] are negatively impacting our ability to remember.

Figure 1: Example technologies used to extend aspects of human memory: cave painting, written word, photographs, PDAs, World Wide Web, smartphones.

Recent developments in passive sensing, storage, machine learning, and in wearable actuators and displays have led researchers to express new visions for human memory augmentation [2]. The reach of these envisioned systems goes far beyond existing tools: they act as integrated “memory prosthetics” that are as ever-present as our companion smartphones, but that provide an even more seamless experience – transparently augmenting our cognition [6]. Such technologies have obvious benefits, improving recall whilst simultaneously freeing up cognitive resources to be allocated to other tasks. However, these technologies introduce a new point of failure – individuals dependent on a technology may become dysfunctional when it is removed. In the best case, individuals revert to their natural state, losing any “superhuman” abilities enabled by the technology. In the worst case, the technology

CHI4EVIL 2019 @ ACM CHI 2019, May 2019, Glasgow, UK
© 2019 Association for Computing Machinery.
This is the author’s version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of Creative Speculation on the Negative Effects of HCI Research, Workshop at The ACM CHI Conference on Human Factors in Computing Systems.


Technology-Induced Human Memory Degradation CHI4EVIL 2019, May 2019, Glasgow, UK

has irreversibly altered cognition, leading to poorer performance if the technology becomes absent than if it had never been present. In the case of memory, this may manifest as an absence of memory, or as a distortion in remembered events, skills or knowledge.

KEYWORDS

Human memory, cognition, lifelogging.

In this paper, we explore possible futures for memory technologies, highlighting a need for further research into their use. We propose a “white hat” approach to this research, one that seeks to deliberately degrade and manipulate cognition in order to understand and mitigate potential risks.

A VISION OF THE FUTURE

Jeffrey has long been a keen adopter of technology. His first mobile phone (2001, a Nokia 3310) took the pain out of remembering the phone numbers of family and friends, and his Google calendar quickly became a relationship-saving record of birthdays, anniversaries and other important events.

It’s the summer of 2020 and Jeffrey is sorely tempted to purchase the new MemIt device. Launched by startup MEMIN in February the previous year, the device has catapulted its creators into the public eye; a popular celebrity is rumoured to have invited them to holiday with them! MemIt has been featured heavily at tech expos in the last twelve months, and all the hype suggests that the device transforms you into the best version of yourself. “You are your social network!” declare adverts for the new budget model, promoting the device’s ability to help restore memories of even the most distant of acquaintances.

Two months on, and Jeffrey continues to be amazed by his vastly improved memory – he is sure that his recall now surpasses that of his early 20s. Just last week Jeffrey reminded his boss of a passing comment made during a meeting shortly after he bought his new MemIt – it turns out that the flippant suggestion made that day could actually be the perfect approach for their current client. During his lunch break, Jeffrey heads out to a nearby park. On the way MemIt reminds him of a previous trip made on one of the hottest days of the year; Jeffrey makes a quick detour to pick up a Snup soda to accompany his lunch.

Just before Christmas, Jeffrey’s company announces a set of redundancies. Jeffrey’s job is safe, but his close friend Alan is losing his job. Jeffrey takes Alan out for a drink, but after a few beers Alan observes that none of the redundant staff have a MemIt device, despite the devices’ growing popularity at the firm. Jeffrey is hurt by the suggestion that MemIt is the reason he has kept his job; he leaves the bar soon after.

A year on, and Jeffrey has never been more pleased with his MemIt. He can think of at least three occasions today when the device has helped him avoid making a foolish mistake. This morning his boss asked about an email sent last week; Jeffrey is sure he’d not seen an email, but MemIt was quick to find the message so that he could provide a quick verbal response with the promise of something in writing later. Thank goodness MemIt is so reliable. Last week’s brief outage left Jeffrey utterly lost (and almost certainly lost his company a new client); he was so glad that MEMIN restored service within just 25 minutes.

Discussion

Though just one possible scenario, Jeffrey’s encounters with the MemIt device highlight a number of potential negative consequences of memory technologies.


Two months after his initial purchase, Jeffrey’s experiences seem entirely positive. However, the reader should note Jeffrey’s assimilation of the device and its capabilities into the beliefs he holds about his own faculties. Just as individuals today have a tendency to misattribute information gleaned from the Internet to their own knowledge and intelligence [3, 11], so too is Jeffrey “amazed by his vastly improved memory”.

On that summer day, we see an indication that the MemIt device has the potential to shape Jeffrey’s behaviour. In this case, a memory triggered by his device leads Jeffrey to make an unplanned purchase. The (mis)use of memory technologies for advertising has previously been proposed in [2]. In their more innocuous example, an individual is simply reminded of memories associated with a buying need when approaching a store that could fulfil that need; in a more sinister case, the authors note how psychological phenomena such as retrieval-induced forgetting might be used to amplify some memories and attenuate others – potentially allowing big brands to pay to shape the MemIt retrieval algorithm and thus Jeffrey’s memory for products and product experiences.

Later, we see how prosthetics become so ingrained into both personal and social perceptions of ability, and how critical they can become to workplace efficiency, that those without devices are left on the wrong side of a new digital divide. It’s not known whether Alan chooses not to use a MemIt device or whether his circumstances prevent use (e.g. because he cannot afford one), but it is highly likely that the emergence of future memory prosthetics could amplify existing social tensions as well as education and class divides.
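The paid shaping of a retrieval algorithm discussed above can be illustrated with a toy sketch. Everything here is hypothetical – the `Memory` record, the `sponsor_boost` weight and the scoring rule are invented for illustration, and do not describe any real or proposed system:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    description: str
    relevance: float            # match to the user's current context (0..1)
    sponsor_boost: float = 0.0  # paid weight injected by a brand; 0 if unsponsored

def rank(memories: list[Memory]) -> list[Memory]:
    # Sponsored memories score higher and so surface more often; repeated
    # retrieval would then amplify them at the expense of other memories.
    return sorted(memories, key=lambda m: m.relevance + m.sponsor_boost, reverse=True)

candidates = [
    Memory("picnic in the park last July", relevance=0.9),
    Memory("buying a Snup soda on a hot day", relevance=0.7, sponsor_boost=0.4),
]
print(rank(candidates)[0].description)  # -> buying a Snup soda on a hot day
```

Because the boost is invisible to the user, the more contextually relevant but unsponsored memory is quietly outranked – one mechanism by which a brand could shape which experiences Jeffrey re-encounters.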

Towards the end of the scenario we see a case in which the MemIt device is at odds with Jeffrey’s own recollections, leading him to capitulate to the device. Whilst this could simply be a demonstration of the superiority of the device over Jeffrey’s memory, it’s equally possible that technologies could convince their users of the validity of false memories – could it be that the email wasn’t sent when claimed, but that data was simply added retrospectively? If the evidence is there, the human mind may be convinced (as demonstrated through false memory studies using digitally manipulated photographs [10]).

Finally, we see a hint of the problems caused when a technology on which Jeffrey is now highly dependent becomes unavailable, even temporarily. Whilst transient failures are likely to be the cause of only minor annoyance, this moderate outage of 25 minutes meant that Jeffrey found himself unable to complete work tasks to satisfaction, leading him to have “almost certainly lost his company a new client”. Were the technology to disappear more permanently (e.g. should the MEMIN company go bust), Jeffrey and his contemporaries might be at risk of losing a lifetime of personal memories in addition to the knowledge on which their jobs depend.

A CALL FOR "WHITE HAT" HCI RESEARCH

Recent research has begun to explore negative cognitive impacts of the Internet and smartphones, two of our most dominant technologies [5, 7, 8, 12, 13]. As technology begins to deliberately target


cognition, there is a growing need for critical evaluation that focuses on unintended side effects. Research is needed to explore interactions between cognition and technology for two distinct cases: (1) augmentation systems that are primarily benevolent in nature, but that pose a risk to cognitive function, and (2) augmentation systems that deliberately manipulate cognition. In the first case, the core focus of research may simply be the measurement of cognitive function, helping users to understand the unintentional tradeoffs they may be making. Initially, this may take the form of a general disclaimer or warning, but over time systems could provide self-test capabilities and detailed user feedback – "your use of XXX software coincides with a 5% increase in prospective memory, but a 0.34% decrease in phonological working memory". Key research challenges likely centre on cognitive measurement: which capabilities should be monitored, what form and frequency of testing is appropriate, and how results should be interpreted and delivered to users.
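A minimal sketch of the self-test feedback imagined above might look as follows – the `CapabilityScore` record, the capability labels and the baseline scores are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class CapabilityScore:
    capability: str  # e.g. "prospective memory"
    baseline: float  # score measured before adopting the technology
    current: float   # most recent self-test score

def feedback(software: str, scores: list[CapabilityScore]) -> str:
    # Report the percentage change per monitored capability since baseline.
    parts = []
    for s in scores:
        change = (s.current - s.baseline) / s.baseline * 100
        direction = "increase" if change >= 0 else "decrease"
        parts.append(f"a {abs(change):.2f}% {direction} in {s.capability}")
    return f"your use of {software} coincides with " + ", but ".join(parts)

print(feedback("XXX software", [
    CapabilityScore("prospective memory", baseline=100.0, current=105.0),
    CapabilityScore("phonological working memory", baseline=100.0, current=99.66),
]))
```

Even this trivial report presupposes answers to the open questions identified above: validated tests for each capability, a meaningful baseline, and a presentation users can interpret.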

The second class of memory augmentation poses a greater challenge. Inspired by the security domain, we propose a need for “White Hat” or “ethical” memory augmentation research that explicitly sets out to discover and exploit vulnerabilities in human cognition [Figure 2].

“A white hat hacker is a computer security specialist who breaks into protected systems and networks to test and assess their security. White hat hackers use their skills to improve security by exposing vulnerabilities before malicious hackers (known as black hat hackers) can detect and exploit them. Although the methods used are similar, if not identical, to those employed by malicious hackers, white hat hackers have permission to employ them against the organization that has hired them.” – Techopedia [9]

Figure 2: White Hat - Image from XKCD.com (used under a Creative Commons License).

ACKNOWLEDGEMENTS

The authors acknowledge partial funding from the UK EPSRC under grant number EP/N028228/1 (PACTMAN).

The idea of cognitive vulnerabilities as a security concern for human memory was first proposed by Davies et al. [2], with the suggestion that real-time monitoring, similar to current anti-virus software, might be a necessary addition to augmentation technologies. Whilst we agree that safeguards should be added to our emerging cognitive technologies, we go one step further with the security metaphor and identify a need for efforts that build our knowledge of cognitive vulnerabilities in human memory (cf. the virus definitions of common anti-virus software).

Conducting research in this space raises considerable ethical concerns; deception of human participants is a given (e.g. [10]), and the very need for this research centres on our uncertainty about both short- and long-term effects. One option may lie in the development of cognitive simulations as a safe testbed for attacks. However, given the important role of humans in these vulnerabilities, it’s likely that research will still require human subjects. A middle ground may see simulations used as a low-fidelity platform against which a battery of attacks can be trialled; once a potential weakness is found, a minimally-damaging corollary of the simulated attack could be proposed for human testing.

Following identification of a vulnerability, the next challenge centres on the creation of ‘definitions’ to be used by detection and intervention systems. This requires: (1) establishing reliable and measurable physiological or behavioural indicators of the ‘attack’; (2) active detection of target indicators; and (3) identification and deployment of effective countermeasures. Finally, we note that (as in traditional cybersecurity research) uncovering risks without solutions could lead to a window of opportunity for malicious parties. We therefore envisage the emergence of an ethical (or legal) framework that governs white hat research in the cognitive domain, enabling effective countermeasure deployment before publication of an attack.
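Extending the anti-virus analogy, a cognitive vulnerability ‘definition’ pairing target indicators with a detection rule and countermeasures might be structured as below. This is purely a hypothetical sketch – the field names, the indicator, and the threshold value are invented, and real detection would involve far more than a scalar comparison:

```python
from dataclasses import dataclass

@dataclass
class VulnerabilityDefinition:
    name: str                   # the cognitive 'attack' being described
    indicators: list[str]       # measurable physiological/behavioural signals
    threshold: float            # indicator level that triggers detection
    countermeasures: list[str]  # interventions to deploy on detection

def detect(defn: VulnerabilityDefinition, readings: dict[str, float]) -> bool:
    # Flag the attack when any target indicator exceeds the threshold.
    return any(readings.get(i, 0.0) > defn.threshold for i in defn.indicators)

rif = VulnerabilityDefinition(
    name="retrieval-induced forgetting",
    indicators=["repeated-cue retrieval rate"],
    threshold=0.8,
    countermeasures=["prompt unaided recall of suppressed items"],
)
print(detect(rif, {"repeated-cue retrieval rate": 0.9}))  # -> True
```

As with anti-virus signatures, the value of such definitions would lie in a shared, continually updated database – which is precisely what white hat research into cognitive vulnerabilities would need to populate.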


REFERENCES

[1] Nicholas Carr. 2011. The Shallows: What the Internet Is Doing to Our Brains. W. W. Norton & Company.
[2] Nigel Davies, Adrian Friday, Sarah Clinch, Corina Sas, Marc Langheinrich, Geoff Ward, and Albrecht Schmidt. 2015. Security and privacy implications of pervasive memory augmentation. IEEE Pervasive Computing 14, 1 (2015), 44–53.
[3] Matthew Fisher, Mariel K. Goddu, and Frank C. Keil. 2015. Searching for explanations: How the Internet inflates estimates of internal knowledge. Journal of Experimental Psychology: General 144, 3 (2015), 674.
[4] Linda A. Henkel. 2014. Point-and-shoot memories: The influence of taking photos on memory for a museum tour. Psychological Science 25, 2 (Feb. 2014), 396–402. https://doi.org/10.1177/0956797613504438
[5] Kep Kee Loh and Ryota Kanai. 2016. How has the Internet reshaped human cognition? The Neuroscientist 22, 5 (2016), 506–520.
[6] Albrecht Schmidt. 2017. Augmenting human intellect and amplifying perception and cognition. IEEE Pervasive Computing 16, 1 (2017), 6–10.
[7] B. Sparrow, J. Liu, and D. M. Wegner. 2011. Google effects on memory: Cognitive consequences of having information at our fingertips. Science 333, 6043 (Aug. 2011), 776–778. https://doi.org/10.1126/science.1207745
[8] Diana I. Tamir, Emma M. Templeton, Adrian F. Ward, and Jamil Zaki. 2018. Media usage diminishes memory for experiences. Journal of Experimental Social Psychology 76 (May 2018), 161–168. https://doi.org/10.1016/j.jesp.2018.01.006
[9] Techopedia. 2019. White Hat Hacker. https://www.techopedia.com/definition/10349/white-hat-hacker [Accessed: February 15, 2019].
[10] Kimberley A. Wade, Maryanne Garry, J. Don Read, and D. Stephen Lindsay. 2002. A picture is worth a thousand lies: Using false photographs to create false childhood memories. Psychonomic Bulletin & Review 9, 3 (2002), 597–603.
[11] Adrian Frank Ward. 2013. One with the cloud: Why people mistake the Internet’s knowledge for their own. Ph.D. Dissertation.
[12] Adrian F. Ward. 2013. Supernormal: How the Internet is changing our memories and our minds. Psychological Inquiry 24, 4 (Oct. 2013), 341–348. https://doi.org/10.1080/1047840X.2013.850148
[13] Adrian F. Ward, Kristen Duke, Ayelet Gneezy, and Maarten W. Bos. 2017. Brain drain: The mere presence of one’s own smartphone reduces available cognitive capacity. Journal of the Association for Consumer Research 2, 2 (2017), 140–154.
[14] Daniel M. Wegner and Adrian F. Ward. 2013. How Google is changing your brain. Scientific American 309, 6 (Nov. 2013), 58–61. https://doi.org/10.1038/scientificamerican1213-58