
This report has been authored by: Laurens Naudts*, Ingrid Lambrecht*, Pierre Dewitte*, Jef Ausloos*, Oscar Luis Alvarado Rodriguez†, Jeroen Wauman*. Contributors to the report were Aleksandra Kuczerawy* and David Geerts†.

Mapping Legal and HCI Scholarship Interdisciplinary Problem Formulation

ATAP – Work Package 1 – Tasks 1, 2 and 3 Report prepared by KU Leuven CiTiP* and MintLab†

Abstract

Task 1 of WP 1 outlines the general analysis of the state of the art of news recommender systems from an ethical, legal and HCI perspective. After having provided an overview of the functioning of news recommender systems, the deliverable’s first section analyses the societal and normative values at stake where news recommender systems are deployed. The second section provides an overview of the legal frameworks potentially governing news recommender systems. Given the focus of the research on transparency and explanations, particular attention will be paid to the legal instruments which require an explanation to be given to the outside world, including the individuals who receive recommended news items. The third section relates to Human Computer Interaction. At this stage of the process, an overview will be provided of current approaches towards creating transparent algorithmic systems. Finally, a preliminary analysis will be made of the perceived challenges that currently exist in making news recommender systems transparent and explainable. Algorithmic Transparency and Accountability in Practice (ATAP) is an interdisciplinary research project of the Centre for IT & IP Law (CiTiP) and the Meaningful Interactions Lab (MintLab), both part of KU Leuven.


Table of Contents

INTRODUCTION: SCOPE OF THE PROJECT .......... 4
1.1 SCOPE AND STRUCTURE OF THE DELIVERABLE .......... 6

THE ETHICS OF NEWS RECOMMENDER SYSTEMS .......... 8
2.1 INTRODUCTION AND RELATION TO OTHER SECTIONS .......... 8
2.2 NEWS RECOMMENDER SYSTEMS: AN OVERVIEW .......... 8
2.2.1 The Phases of News Recommendation .......... 8
2.2.2 Forms of News Curation .......... 9
2.2.3 Actors of News Curation .......... 9
2.3 RISKS AND CHALLENGES ASSOCIATED WITH AUTOMATED NEWS CURATION .......... 9
2.3.1 Knowledge: Mis- and Disinformation .......... 10
2.3.2 Political Bias .......... 11
2.3.3 User Monitoring and Behaviour Manipulation .......... 11
2.3.4 The ‘Filter Bubble’ .......... 12
2.4 KEY VALUES AFFECTED BY AUTOMATED NEWS CURATION .......... 13
2.4.1 Citizen Participation/Deliberative Discourse .......... 13
2.4.2 Media Pluralism .......... 14
2.4.3 Trust (in Media) .......... 15
2.4.4 Fundamental Rights .......... 15

THE LEGAL FRAMEWORK .......... 17
3.1 ATAP AND DATA PROTECTION LAW .......... 17
3.1.1 General principles .......... 17
3.1.2 General transparency rights and obligations .......... 20
3.1.3 Core transparency provisions .......... 20
3.1.4 Satellite transparency provisions .......... 25
3.1.5 Transparency for Automated Decision-Making .......... 26
3.1.6 News Recommender Systems and the GDPR .......... 29
3.2 ATAP AND CONSUMER PROTECTION LAW .......... 34
3.2.1 eCommerce Directive (Directive 2000/31) .......... 35
3.2.2 Unfair Commercial Practices Directive (Directive 2005/29) .......... 37
3.2.3 Unfair Terms in Consumer Contracts Directive (Directive 93/13) .......... 41
3.2.4 Unfair terms .......... 42
3.2.5 Consumer Rights Directive (Directive 2011/83) .......... 43
3.3 ATAP AND MEDIA LAW .......... 44
3.3.1 Advertising principle of identifiability of commercial content as such .......... 44
3.3.2 Recommender systems for audiovisual news: sponsorship and protection of minors .......... 46
3.4 ATAP AND INTELLECTUAL PROPERTY LAW .......... 49
3.4.1 Traditional intellectual property schemes .......... 49
3.5 TRADE SECRETS .......... 51
3.5.1 Trade Secrets Directive .......... 51
3.5.2 Interplay between the Trade Secrets Directive and the GDPR .......... 51
3.5.3 Trade Secrets Directive and media pluralism .......... 52
3.5.4 Conclusion .......... 52
3.6 ONGOING LEGAL INITIATIVES .......... 52
3.6.1 EU Parliamentary resolution on media pluralism .......... 52
3.6.2 Recommendation of the Committee of Ministers to member States on media pluralism and transparency of media ownership .......... 53
3.6.3 Audiovisual Media Services Directive reform .......... 55
3.6.4 EU Code of Practice on Disinformation .......... 56

HUMAN COMPUTER INTERACTION RESEARCH EXPLORATION FOR NEWS RECOMMENDER SYSTEMS .......... 58
4.1 ALGORITHM AS A CONCEPT .......... 59
4.2 ALGORITHMIC IMAGINARY AND FOLK THEORIES .......... 60
4.3 ALGORITHMS AND DEMOCRACIES .......... 61
4.4 ALGORITHMS, GOVERNMENT AND REGULATIONS .......... 62
4.5 ALGORITHMS AND NEWS .......... 63
4.6 ALGORITHMS AND CULTURE .......... 64
4.7 ALGORITHMS AND POWER .......... 65
4.8 ALGORITHMS AND TRANSPARENCY .......... 65
ALGORITHMS AND DISCRIMINATION .......... 68
4.9 ALGORITHMS AND BIAS .......... 68
4.10 ALGORITHMS AND PRIVACY .......... 69
4.11 ALGORITHMS AND VISIBILITY .......... 70
4.12 ALGORITHMS AND CONTENT DIVERSITY .......... 71
4.13 ALGORITHMS AND THEIR CREATION .......... 71
4.14 ALGORITHMS IN INTELLIGENT SYSTEMS .......... 71
4.15 ALGORITHMS IN SOCIAL MEDIA .......... 72
4.16 ALGORITHMS IN SEARCH ENGINES .......... 73
4.17 CONCLUSIONS .......... 74

CHALLENGES TO EXPLANATIONS AND TRANSPARENCY .......... 75
5.1 TECHNICAL COMPLEXITY .......... 75
5.2 DATA COMPLEXITY .......... 75
5.3 LEGAL COMPLEXITY .......... 75
5.4 DIFFERENT PURPOSES REQUIRE DIFFERENT EXPLANATIONS .......... 76
5.5 DESIGN COMPLEXITY .......... 76
5.6 THE ATAP PROJECT: OBJECTIVES .......... 76

FUTURE RESEARCH .......... 79

ANNEX: OVERVIEW TABLE OF RELEVANT DATA PROTECTION TRANSPARENCY ENABLERS .......... 81


Introduction: Scope of the Project

ATAP will empirically investigate compliance with the data protection right to an explanation. The empirical research data will constitute the basis for pinpointing key issues, providing evidence-based policy guidance and conducting further interdisciplinary research. Moreover, it will inform the co-creation of a concrete prototype for making algorithms that affect our daily lives understandable to the average individual.

With the rise of Artificial Intelligence (AI) and Machine Learning (ML), technology is increasingly mediating the lives of individuals, both online (e.g. media consumption, social networking) and offline (e.g. smart cities). Algorithms are everywhere, and they increasingly affect what we can find, see and say online, how we interact with our homes and public spaces, and our access to jobs, loans, insurance and justice.1 As such, and for better or worse, algorithms have a growing impact on core democratic values. Problematic in this regard is that many of these algorithms currently operate behind closed doors. Even if we were able to see them, algorithms are opaque, complex and constantly changing2, making it even harder to get a grasp on them. Policy-makers and academia are increasingly looking into the effect of algorithms on society, and are struggling to ensure that fundamental norms and values are upheld against this backdrop of exponential economic and technological progress.3 Despite several initiatives to increase transparency and accountability in this context, there is a manifest lack of empirical research.4 More specifically, it is not entirely clear how current legal rules are interpreted and applied in practice. European Data Protection law, notably, includes a ‘right to an explanation’, but there is no academically sound evidence on how it is interpreted and accommodated ‘on the ground’. Trustworthy, empirical data is crucial to better regulation and enforcement, as well as more

1 David Robinson, Harlan Yu and Aaron Rieke, “Civil Rights, Big Data, and Our Algorithmic Future”, 2014, http://centerformediajustice.org/wp-content/uploads/2014/10/Civil-Rights_Big-Data_Our-Future.pdf. 2 Seda Gurses and Joris Vredy Jan van Hoboken, “Privacy after the Agile Turn”, preprint (SocArXiv, 2 May 2017), https://doi.org/10.31235/osf.io/9gy73. 3 Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York, NY, USA: Crown Publishing Group, 2016). 4 In the academic year 2017-2018, CiTiP already conducted a small-scale pilot, testing a legal-empirical research methodology for gathering data on compliance with data subject rights. Led by a senior researcher, three LL.M. students exercised their rights of access and erasure with around 60 online service providers. The interactions were systematically mapped in order to enable a critical evaluation of the general state of affairs of these rights in practice. Importantly, the research pointed to deeper, non-legal issues related to the complexity of algorithms and how to make them understandable to the average individual (for more information on this study and its results, see: Jef Ausloos and Pierre Dewitte, “Shattering One-Way Mirrors – Data Subject Access Rights in Practice”, International Data Privacy Law 8, no. 1 (1 February 2018): 4–28, https://doi.org/10.1093/idpl/ipy001). The empirical analysis of data subject rights inspired CiTiP to take a more holistic approach and seek active collaboration with other disciplines, in particular the field of HCI. During the same period, MintLab had concluded a research project in which algorithms were used to support home care planners through automated scheduling techniques and decision support software (see also: https://soc.kuleuven.be/mintlab/blog/project/haces/). While the human task planners were positive about the received support, it was paramount for them to gain an understanding of how the decisions were being made by the system. Researchers from MintLab want to continue this line of research, but need deeper insight into the legal implications, notably regarding the rights of access and explanation. In this regard, ATAP seeks to establish an intensive cooperation between researchers in both HCI and law. The project aims to analyse to what extent the aforementioned rights can be given meaning in an effort to provide more accountability in today’s complex information society.


informed design of user interfaces to accommodate such rights. In light of this, the present interdisciplinary project contributes to answering one of the key questions in today’s information society: how to make algorithmic transparency and accountability work in practice? The ATAP (Algorithmic Transparency & Accountability in Practice) project aims to provide an empirical foundation for assessing legal tools that enable breaking open so-called algorithmic black boxes.5 Given the ubiquity of algorithms today, however, it was crucial to define a clear and narrow scope for the ATAP project. During an initial workshop and brainstorming session amongst the collaborating research groups, it was decided that, for the purposes of this project, news recommender algorithms would be the focus and subject of the study. News recommender systems were chosen in particular because, as modern curators of the news content provided to citizens, their impact on the democratic discourse and political structuring of society is substantial.6 Yet, despite their impact, uncertainty still exists regarding the way recommender algorithms actually function. With the rise of the internet and the ubiquity of available content, mechanisms have been developed to facilitate navigation through the information overload. While search engines may be the best example of this phenomenon, social media and content providers are also progressively fine-tuning the way in which content is found, ranked and presented. In this context, recommender systems enable the provision of ‘personalised’ recommendations to individuals on the basis of their (or others’) past and current behaviour, user or community preferences, or any other available information deemed relevant to the individuals to whom the recommendation is presented.7,8 Their main goal is to ‘predict as precisely as possible which of the recommendable items will be of interest for and accepted by the user.’9 Applied to news content, recommender systems can be described as the deployment of algorithms by a news content provider in order to provide news content catered to the interests of the individuals receiving the news service. News can be recommended on the basis of the user’s past or current behaviour. Other elements taken into account (can) include the user’s or community’s preferences, or any other information deemed relevant in determining interesting news content for that user. One could say that news recommender algorithms effectively subsume (and automate) the role of editors and/or other trusted parties that were traditionally relied on to select relevant information (e.g. doctors, lawyers, etc.). For the purpose of the ATAP study, three types of news recommender entities have been identified: (a) first-party content providers (e.g. standaard.be); (b) news aggregators (e.g. Google News); and (c) social media platforms (e.g. Facebook).10 These three types will also form the subject of ATAP’s empirical analysis. Whilst news recommender systems will form the subject of analysis, the results

5 Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (Cambridge, MA: Harvard University Press, 2015). 6 See infra Section 2.3: Risks and Challenges Associated with Automated News Curation. 7 Reda Alhajj and Jon Rokne, eds., Encyclopedia of Social Network Analysis and Mining (New York, NY: Springer New York, 2014), https://doi.org/10.1007/978-1-4614-6170-8. It should also be noted that the term ‘personalised’ content can be somewhat misleading. Personalisation is still based on generalisations. For instance, where a community’s preferences are used to recommend items, a general picture of that community’s interests will be used as a basis to provide personal recommendations. The recommendations might be aligned with the generalised community interests, but not necessarily with the interests of the individual community members. 8 Two main classes of news recommender systems can be discerned. On the one hand, news recommendation can be based on collaborative filtering. Such an approach recommends items on the basis of other users’ activity, e.g. it recommends items to a user on the basis of users with similar tastes. On the other hand, recommendations can be content-based, i.e. performed on the basis of an analysis of the content of the items. Of course, similarity, whether in terms of taste or content, needs to be defined too. 9 Alhajj and Rokne, Encyclopedia of Social Network Analysis and Mining, 1502. 10 They form the subject of the empirical study.


sought within ATAP bear relevance for the regulation of other forms of algorithmically guided decision-making. Making these algorithms more transparent and accountable can potentially increase media literacy as well as trust in traditional media. Despite this limited scope, which is needed for such a focused research project, it is expected that the outcome of the research will be relatively easy to transpose to other types of algorithms (e.g. in the context of Smart Cities, Healthcare or the Internet of Things).

1.1 Scope and structure of the Deliverable

The deliverable outlines the main research results related to Work Package 1 of the ATAP Project, i.e. the preparatory phase, which runs from M1 to M4. Under Work Package 1, the following tasks have been performed:

• Task 1.1 Mapping of Legal Scholarship
  o Traditional legal desktop research, mapping and summarising the relevant literature on the right to explanation in EU data protection law.

• Task 1.2 Mapping of HCI Scholarship
  o Literature review of research on the design and evaluation of transparent algorithmic systems, documenting best practices and guidelines as input for WP3, i.e. design research.

• Task 1.3 Interdisciplinary Problem Formulation
  o Combine insights gained in Tasks 1.1 and 1.2 to develop a holistic problem statement.

The main output of D1.1 is the general analysis of the state of the art of news recommender systems from an ethical, legal and HCI perspective. The deliverable mirrors Work Package 1’s task structure. After having provided an overview of the functioning of news recommender systems, the deliverable’s first section analyses the societal and normative values at stake where news recommender systems are deployed. It is important to understand how recommender systems might affect societal values in order to analyse the need for both legal and HCI solutions that can mitigate or remedy the identified risks. The section concludes with a brief analysis of how explanations can be a valuable instrument towards the identification and mitigation of those risks. The second section provides an overview of the legal frameworks potentially governing news recommender systems. Given the focus of the research on transparency and explanations, particular attention will be paid to the legal instruments which require an explanation to be given to the outside world, including the individuals who receive recommended news items. The third section relates to Human Computer Interaction. At this stage of the process, an overview will be provided of current approaches towards creating transparent algorithmic systems. Finally, a preliminary analysis will be made of the perceived challenges that currently exist in making news recommender systems transparent and explainable. As such, the necessity for empirical research can be illustrated, as empirical work allows one to gauge how these perceived challenges play out in practice. Deliverable 1.1 thus provides the appropriate knowledge base for subsequent research tasks within the ATAP Project. More specifically, the literature overview of the ethical, legal and HCI aspects of automated news curation will inform the legal empirical analysis performed under Work Package 2. In addition, the literature study also provides further insight into the contextual elements that shape the design process of automated news curation. As such, the deliverable equally grounds


the design research performed under Work Package 3. Taking into account the results from Work Packages 1 and 2, the latter research phase aims to develop a prototype for the delivery of explanations within the context of news recommender systems, taking into account individuals’ comprehension, acceptance and trust.

[Figure: Work Package interaction within ATAP]


The Ethics of News Recommender Systems

2.1 Introduction and relation to other sections

Technology is not neutral: it engages both actively and passively with its environment. Similarly, the deployment of news recommender systems has changed the way we consume media. Given the pervasive and complex nature of these systems, however, one should remain mindful of the potential changes news recommenders bring to the interests deemed valuable within a democratic society. This section aims to explore the risks and challenges associated with automated news recommendations, and the values that could potentially be negatively affected as a result. The second section also aims to illustrate the need for legal avenues for obtaining an explanation from entities that wish to deploy, or already deploy, automated curation systems for news provision. These legal approaches are discussed in section 3 of the deliverable. Likewise, an ethical mapping of these problems, in combination with a legal analysis, should enable a better understanding of the importance of human computer interaction. However, in order to enable a better understanding of the ethical challenges associated with news curation, an overview will first be provided of the functioning of recommender systems.

2.2 News Recommender Systems: An Overview

2.2.1 The Phases of News Recommendation

In general, the functioning of news recommender systems can be divided into three phases: data collection, system design and system deployment. The impact automated news curation might have on individuals, groups of individuals and society as a whole can differ depending on the phase under discussion. An illustrative example can provide some insight into what will be discussed. Recommender algorithms do not work in a vacuum. First, news recommender systems require (personal) data in order to ‘personalise’ news content. For instance, personalised content might be based upon the news content an individual has favourited in the past. Here, the collection of data on previous online behaviour brings along risks related to the fundamental rights to privacy and data protection. Second, the system’s design, whether knowingly or not, might be biased towards certain political values, e.g. liberal or socialist. As such, it might act against, or instead reinforce, a person’s value system or behaviour, in casu their political preference. For example, the news recommender system might hide the content that a person would prefer in favour of other content. Alternatively, it could reinforce actions online by providing only content that is similar to what the person has liked in the past in order to improve personalisation. Here, the design of the system might come into conflict with media pluralism, as not all online content is shown to the individual. Rather, preference is given to specific content that best caters to the individual’s tastes or inferred interests. Once such a system is deployed, it will further interact with users and its environment, and additional risks may crop up. Over time, the individual might only receive self-affirming content. The individual is then locked into a filter bubble11, where he only comes into contact with like-minded individuals due to the additional personalisation of communities online; he might thus also become part of an echo chamber.12 As a result, the individual ‘radicalises’ in his or her beliefs, negating any future form of

11 In everyday language, and as defined by Merriam-Webster, the filter bubble refers to “an environment and especially an online environment in which people are exposed only to opinions and information that conform to their existing beliefs”. 12 The problem of echo chambers is closely related to that of the filter bubble. The figurative echo chamber refers to the idea that social media compartmentalises public discourse: individuals remain within their online community and are only confronted with people who share their ideas or interests, thus creating an ‘echo chamber’ of those same ideas or interests and negating opposing or different views. See inter alia: Alessandro Bessi,


deliberative discourse (misinformation/pluralism/democracy/trust in media). Consequently, an understanding of the ‘other’ or of ‘multicultural’ values is hollowed out. Of course, such a scenario need not always occur. Indeed, the project’s empirical research might show otherwise. Nevertheless, it points out risks generally associated with news recommendation. Moreover, one can clearly see that certain risks are mainly associated with a given phase of the news recommendation cycle.
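To make the three phases more tangible, the following minimal Python sketch is a purely illustrative toy (the data, function names and scoring logic are invented for this report and do not describe any actual system studied in ATAP). It shows where data collection, system design and system deployment sit in a naive recommendation loop.

```python
# Illustrative toy only: where the three phases of news recommendation sit.
# All data and scoring choices are hypothetical.
from collections import Counter

# Phase 1 - data collection: log which topics a user has favourited before.
click_log = ["politics", "politics", "sports", "politics", "culture"]

# Phase 2 - system design: the designer decides how the user profile is built
# and how candidate items are scored (here: simple topic-frequency matching).
def build_profile(log):
    counts = Counter(log)
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

def score(item_topic, profile):
    return profile.get(item_topic, 0.0)

# Phase 3 - system deployment: the live system ranks incoming candidates for
# the user, and every new click is fed back into the log for the next round.
profile = build_profile(click_log)
candidates = {"Election special": "politics", "Transfer news": "sports",
              "Museum reopens": "culture", "Budget debate": "politics"}
ranking = sorted(candidates,
                 key=lambda title: score(candidates[title], profile),
                 reverse=True)
print(ranking)  # politics items rank first for this hypothetical user
```

Each phase carries its own risks, as described above: the click log raises privacy and data protection concerns, the scoring function can embed (political) bias, and the deployed feedback loop can progressively narrow what the user gets to see.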

2.2.2 Forms of News Curation

There are multiple ways through which automated news curation services can be provided. Generally, three main ways to provide news content can be discerned. Content-based systems cater to the preferences of users on the basis of content previously read, watched or ‘liked’ by the user. Collaborative filtering provides users with recommendations on the basis of items that users with similar tastes have consumed in the past. Finally, community-based13 systems provide recommendations on the basis of the preferences of a user’s network, e.g. friends. Here, the selection of peers is based on an explicit friendship link, and not on the deduction of similarity from patterns of past behaviour.
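The contrast between the three forms can be sketched with a small, hedged Python illustration (all reading histories, friendship links and scores below are invented; real systems are far more sophisticated): content-based scoring compares an item with what the user consumed before, collaborative filtering looks at users with overlapping histories, and community-based filtering only looks at explicit friends.

```python
# Illustrative sketch of the three curation forms; all data is hypothetical.

history = {                      # which articles each user has consumed
    "alice": {"a1", "a2", "a3"},
    "bob":   {"a2", "a3", "a4"},
    "carol": {"a5"},
}
friends = {"alice": {"carol"}}   # explicit friendship links
topics = {"a1": "politics", "a2": "politics", "a3": "sports",
          "a4": "politics", "a5": "culture"}

def content_based(user, item):
    # Score by how often the item's topic appears in the user's own history.
    read_topics = [topics[a] for a in history[user]]
    return read_topics.count(topics[item]) / len(read_topics)

def collaborative(user, item):
    # Score by how many users with overlapping reads also consumed the item.
    return sum(1 for other, reads in history.items()
               if other != user and history[user] & reads and item in reads)

def community_based(user, item):
    # Score by consumption among explicit friends only, not inferred peers.
    return sum(1 for friend in friends.get(user, set())
               if item in history[friend])

for fn in (content_based, collaborative, community_based):
    print(fn.__name__, fn("alice", "a4"), fn("alice", "a5"))
```

For the hypothetical user ‘alice’, the three strategies disagree: content-based and collaborative scoring favour yet another politics piece (a4), while the community-based score favours the culture piece (a5) read by her friend. This illustrates why the chosen form of curation matters for what a user ultimately gets to see.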

2.2.3 Actors of News Curation

A variety of actors make use of filtering algorithms. In the context of the ATAP project, the following three key actors will be subject to the study: traditional content providers, content aggregators and social media platforms. First, there are the traditional content providers. Their main activity is the creation and provision of news content. Here, one can find actors such as newspaper media in digital form, or digital news providers. They are the actors most typically associated with the provision of news content online. Second, news provision can be curated by content aggregators. These aggregators do not create their own content, or at least content creation is not their core function. Rather, they select news content from a variety of sources and present it in an aggregated way. For instance, a website like Bing News brings together the headlines from major news sources. Their role is not so much content creation as simplified content provision. Finally, social media platforms have also increased their presence in the provision of news content. Platforms such as YouTube and Facebook provide timelines to their users where content is presented on the basis of the user’s past behaviour, e.g. subscriptions, or on the basis of the activities of friends and communities. Like news aggregators, these platforms do not create news content themselves. Instead, social media platforms present news content shared or created by their users. However, the aggregation of news content is not their main function: social media platforms aggregate a variety of content, such as music and video clips, but among other things also news content. Given the prevalence of social media platforms in the digital ecosystem, they nonetheless play an active role in news curation, as their algorithms partially determine what users see on the platform.

2.3 Risks and Challenges Associated with Automated News Curation

Technology is neither active nor passive. The deployment of a technology inevitably brings along interactions with the environment in which it is deployed. In this regard, it is said that technology not only

“Personality traits and echo chambers on Facebook”, Computers in Human Behavior 65 (2016): 319–324, https://doi.org/10.1016/j.chb.2016.08.016; Elizabeth Dubois and Grant Blank, “The Echo Chamber Is Overstated: The Moderating Effect of Political Interest and Diverse Media”, Information, Communication & Society 21, no. 5 (4 May 2018): 729–45, https://doi.org/10.1080/1369118X.2018.1428656; Justin Farrell, “Politics: Echo chambers and false certainty”, Nature Climate Change 5, no. 8 (2015): 719–720, https://doi.org/10.1038/nclimate2732. 13 Ansgar Koene et al., “Ethics of Personalized Information Filtering”, in Internet Science, edited by Thanassis Tiropanis et al., vol. 9089 (Cham: Springer International Publishing, 2015), 4, https://doi.org/10.1007/978-3-319-18609-2_10.


takes meaning, e.g. through data collection, but in addition generates new meaning.14 News recommender systems, as a form of technology, have the ability to reconstruct the perception of the individuals with whom the technology interacts. As such, their deployment might affect a variety of societal values deemed important. The following section lists some risks typically associated with automated news curation services. It should be noted that some of these risks and values are interlinked. For instance, and as already indicated in the illustrative example above, the presence of a political bias within news recommender systems, which in itself might be considered a detriment, might also affect media pluralism. The following section first indicates the risks associated with automated news curation. Next, an overview is provided of the values that could be at stake if these risks were to manifest.

2.3.1 Knowledge: Mis- and Disinformation15

Given the popularity of news content sharing online, it can be difficult to ascertain the provenance of a specific article. Because shared articles become dissociated from their initial source, or from the source where one would usually find one’s news, it becomes difficult to discern who wrote the piece and whether the source is trustworthy. Hence, content intended to mis- or disinform the public might become more prevalent. The risks related to news content are highlighted at the European level in both the EP resolution16 and the CoE Recommendation17 on media pluralism, which state that the new digital environment has exacerbated the already existing problem of disinformation. Both texts generally encourage social media companies and online platforms to make extra efforts in tackling these risks, additionally calling for effective self-regulation. The measures proposed include self-regulatory obligations, instruments and tools for source verification. The tools should enable users to report and flag articles potentially containing false or unverifiable information, showing a preference for ex-post rectification and review by third-party fact-checking organisations. Both documents do stress the need for caution when implementing or enforcing any type of tool or mechanism. On the one hand, facts and information are now easier to verify than in the past, as the manipulation of data can be tracked and investigated. On the other hand, requiring press and news to merely present facts or ‘truths’, without editorial efforts of interpretation, contextualisation or critical reflection, should not be the goal either. Encouraging private actors to provide and implement tools for fact-checking and fighting disinformation should therefore first provide them with the right incentives to do so correctly and with full respect for fundamental rights and freedoms. Additionally, any type of enforcement

14 Jonathan Roberge and Louis Melançon, “Being the King Kong of Algorithmic Culture Is a Tough Job after All: Google’s Regimes of Justification and the Meanings of Glass”, Convergence: The International Journal of Research into New Media Technologies 23, no. 3 (June 2017): 306–24, https://doi.org/10.1177/1354856515592506. 15 Dominic DiFranzo and Kristine Gloria-Garcia, “Filter Bubbles and Fake News”, XRDS: Crossroads, The ACM Magazine for Students 23, no. 3 (5 April 2017): 32–35, https://doi.org/10.1145/3055153; Edson C. Tandoc, Zheng Wei Lim and Richard Ling, “Defining ‘Fake News’: A Typology of Scholarly Definitions”, Digital Journalism 6, no. 2 (7 February 2018): 137–53, https://doi.org/10.1080/21670811.2017.1360143. 16 European Parliament resolution of 3 May 2018 on media pluralism and media freedom in the European Union (2017/2209(INI)), http://www.europarl.europa.eu/sides/getDoc.do?type=TA&language=EN&reference=P8-TA-2018-0204. 17 Recommendation CM/Rec(2018)1 of the Committee of Ministers to member States on media pluralism and transparency of media ownership, https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=0900001680790e13.


should simultaneously provide effective redress, complemented by the necessary transparency to monitor these mechanisms.18

2.3.2 Political Bias19

Recently, certain news media providers, and social media in particular, have been accused of being politically biased.20 In essence, complainants accuse social media of (actively) filtering out news content promoting ideas that go against the political mindset of the owners, or the designers, of the social media platform. In this regard, Facebook has been accused of not adequately representing the diverse political landscape through the news filtering mechanisms the platform employs.21 Scandals like Cambridge Analytica have also shown that social media, and user data, can be used to steer and manipulate people, often in favour of populist parties.22

2.3.3 User Monitoring and Behaviour Manipulation23

Considering that news recommender systems want to ‘personalise’ news content to their users’ tastes, they need to monitor, to a certain extent, the behaviour of their users online. In particular, news recommender systems monitor the users’ news consumption or the interests and preferences of their users. Recommender systems therefore rely heavily upon the collection of personal data in order to map the individuals to whom a news content provider caters news. Though the personalisation of content might prima facie be perceived as a benefit, it nonetheless involves the monitoring of users. Online platforms might thus learn more information about their users than the

18 See also: https://www.alexanderdecroo.be/expertengroep-formuleert-aanbevelingen-aanpak-fake-news/, with a link to the report here: https://www.dropbox.com/s/99iza9kmbwjbels/20180718_rapport_onlinedesinformatieNL.pdf?dl=0. 19 https://www.nytimes.com/2016/05/10/technology/conservatives-accuse-facebook-of-political-bias.html; https://www.cnbc.com/2016/05/19/facebook-mark-zuckerberg-met-with-conservatives-over-the-trending-bias-spat.html; https://www.theguardian.com/technology/2017/may/22/social-media-election-facebook-filter-bubbles; https://www.demorgen.be/politiek/facebook-personeel-uit-kritiek-op-intolerante-linkse-bedrijfscultuur-b412758a/2fHsq5/ 20 See inter alia: Carole Cadwalladr and Emma Graham-Harrison, “How Cambridge Analytica Turned Facebook ‘Likes’ into a Lucrative Political Tool”, The Guardian, 17 March 2018, sec. Technology, https://www.theguardian.com/technology/2018/mar/17/facebook-cambridge-analytica-kogan-data-algorithm; Stephanie Kirchgaessner, “Cambridge Analytica Used Data from Facebook and Politico to Help Trump”, The Guardian, 26 October 2017, sec. Technology, https://www.theguardian.com/technology/2017/oct/26/cambridge-analytica-used-data-from-facebook-and-politico-to-help-trump; Nathan Robinson, “Media Bias Is OK – If It’s Honest”, The Guardian, 10 September 2019, sec. Opinion, https://www.theguardian.com/commentisfree/2019/sep/10/media-bias-is-ok-if-its-honest; Oscar Schwartz, “Are Google and Facebook Really Suppressing Conservative Politics?”, The Guardian, 4 December 2018, sec. Technology, https://www.theguardian.com/technology/2018/dec/04/google-facebook-anti-conservative-bias-claims. 21 See inter alia: Emily Bell, “Facebook and Twitter Bias: It All Depends How You Look at It”, The Guardian, 8 July 2018, sec. Media, https://www.theguardian.com/media/media-blog/2018/jul/08/facebook-twitter-bias-politics-news; Cadwalladr and Graham-Harrison, “How Cambridge Analytica Turned Facebook ‘Likes’ into a Lucrative Political Tool”; Elle Hunt, “Facebook to Change Trending Topics after Investigation into Bias Claims”, The Guardian, 24 May 2016, sec. Technology, https://www.theguardian.com/technology/2016/may/24/facebook-changes-trending-topics-anti-conservative-bias; Schwartz, “Are Google and Facebook Really Suppressing Conservative Politics?”; Matt Binder, “Facebook leaves no doubt: It’s the right wing’s social network now”, Mashable, 30 October 2019, available at: https://mashable.com/article/facebook-right-wing-social-network/?europe=true. 22 Lisa Maria Neudert and Nahema Marchal, “Polarisation and the Use of Technology in Political Campaigns and Communication” (European Parliamentary Research Service, March 2019), http://www.europarl.europa.eu/RegData/etudes/STUD/2019/634414/EPRS_STU(2019)634414_EN.pdf. 23 https://edps.europa.eu/sites/edp/files/publication/18-03-19_online_manipulation_en.pdf


information users were initially willing to share. Users also open themselves up to manipulation. For instance, if a bias were present in a recommender algorithm, users would likely be unaware of its presence. Nevertheless, continuous confrontation with one-sided news stories could psychologically influence a user’s thinking and position.

2.3.4 The ‘Filter Bubble’24 25

The filter bubble is a communication science theory that originates with Eli Pariser. Pariser explains that, more often than not, computer algorithms decide which information they will show you based on your preferences, which the algorithms can derive from previously collected (personal) data.26 Experts warn of the effect these algorithms may have on our daily lives.27 The more personal data a person has shared through (interactions with) e.g. social media platforms or search engines, the more data the algorithms will use to echo the person’s opinions and preferences. Such reinforcement creates a confirmation bias, making it increasingly difficult to access the plurality of opinions essential for the good functioning of a democracy. Consider Martin Moore’s caveat: “If this window is filled with highly partisan and, in some cases, false news, then many people will be assessing political candidates and information on the basis of distorted and misleading information”.28 Note that these bubbles are not limited to social media platforms, but equally apply to search engines. Current search engines will automatically complete your search entry with your preferences and filter search results to prioritise those that are ‘most relevant to you’.
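The self-reinforcing mechanism Pariser describes can be illustrated with a deliberately naive toy simulation (the numbers are invented and the loop says nothing about the real-world magnitude of the effect, which the studies cited below suggest is often overstated): a recommender that always serves the topic with the highest observed preference, combined with a user who clicks whatever is served, quickly converges on a single topic.

```python
# Toy simulation of the self-reinforcing loop behind the 'filter bubble' idea.
# Purely illustrative; all numbers are invented and no real system is modelled.
from collections import Counter

def run_naive_loop(seed_clicks, rounds=10):
    profile = Counter(seed_clicks)
    served = []
    for _ in range(rounds):
        # The recommender serves the single most-clicked topic so far...
        topic = profile.most_common(1)[0][0]
        served.append(topic)
        # ...and the (compliant) user clicks it, reinforcing the profile.
        profile[topic] += 1
    return served

print(run_naive_loop(["politics", "sports", "politics"]))
# -> ['politics', 'politics', ...]: exposure collapses onto a single topic.
```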

Filter bubbles may have great effects on both the media sector and the democratic and electoral process, but they also raise legal concerns. In particular, filter bubbles raise concerns relating to the fundamental rights of access to information and freedom of information, liability and transparency, data protection and behavioural profiling. Since the publication of the theory, empirical studies have shown that the filter bubble effect is greatly exaggerated.29 In reality, citizens consume news and content

24 Frederik J. Zuiderveen Borgesius et al., “Should We Worry about Filter Bubbles?”, Internet Policy Review, 2016, https://doi.org/10.14763/2016.1.401; Seth Flaxman, Sharad Goel and Justin M. Rao, “Filter Bubbles, Echo Chambers, and Online News Consumption”, Public Opinion Quarterly 80, no. S1 (2016): 298–320, https://doi.org/10.1093/poq/nfw006; Mario Haim, Andreas Graefe and Hans-Bernd Brosius, “Burst of the Filter Bubble?: Effects of Personalization on the Diversity of Google News”, Digital Journalism 6, no. 3 (16 March 2018): 330–43, https://doi.org/10.1080/21670811.2017.1338145; Koene et al., “Ethics of Personalized Information Filtering”. 25 Ibid. 26 Eli Pariser, The Filter Bubble: What The Internet Is Hiding From You (Penguin UK, 2011). 27 Pariser. 28 Martin Moore, “Inquiry into Fake News. Submission to UK Culture, Media and Sport Select Committee” (Centre for the Study of Media, Communication and Power, King’s College London), accessed 23 October 2019, https://www.kcl.ac.uk/policy-institute/assets/cmcp/cmcp-consultation-fake-news.pdf. 29 Haim, M., Graefe, A., & Brosius, H.-B. (2018). Burst of the Filter Bubble?: Effects of personalization on the diversity of Google News. Digital Journalism, 6(3), 330–343, https://www.tandfonline.com/doi/full/10.1080/21670811.2017.1338145; Zuiderveen Borgesius, F. J., Trilling, D., Möller, J., Bodó, B., De Vreese, C. H., Helberger, N., et al. (2016). Should we worry about filter bubbles? Internet Policy Review, 5(1). HIIG - Alexander von Humboldt Institute for Internet and Society,


from a diverse range of media, both offline and online. Similarly, recent studies show that the filter bubbles created through algorithms on social media networks have less effect on people’s critical information processes than often assumed. These studies show that incidental news discovery through social networks, as opposed to active news seeking, has less influence on people’s opinion shaping. They thus imply that, while the existence of online filter bubbles due to algorithms is realistic, the effect of reinforcing individual echo chambers, in which individuals obtain their political information and express themselves in a politically like-minded environment, is limited.30

However, academic discourse remains aware of the potential dangers of algorithms in creating filter bubbles and echo chambers. Academics thus continue to investigate this field, having in mind both the increased application of AI and automated decision-making processes and the easy uptake by the younger generation growing up in this environment. While the issue of filter bubbles and echo chambers may be greatly exaggerated for the largest group of working-age citizens (25-65), the numbers for teenagers and young adults show a more worrisome picture. In Belgium, for example, a quarter of all young adults obtained their news primarily through social media networks in 2017.31

2.4 Key Values Affected by Automated News Curation

The following section further elaborates on some of the key societal values affected by news curation.

2.4.1 Citizen Participation/Deliberative Discourse32

News recommender systems appear to have the capacity to disrupt citizen participation in both negative and positive ways. While the negative disruption is mostly known as the ‘filter bubble’ or ‘echo chamber’ effect, in which citizens essentially consume information that aligns with their existing opinions and beliefs, others argue that ‘filter bubbles’ in a broad sense may also have positive effects. For example, ZUIDERVEEN BORGESIUS et al. argue that automated news curation allows for more efficient information gathering on specific topics that carry the individual citizen’s interest. This would in turn increase the chances of engagement on those topics.33 Additionally, it is argued that when a topic falls outside an individual citizen’s interest, exposure to the content alone does not suffice and will have near zero impact on their critical reflection regarding the matter.34 Combined, these studies show that it may be more important to provide citizens with topics that align

https://doi.org/10.14763/2016.1.401; Pascal Verhoest, ‘Diversiteit, ongelijkheid en polarisatie’, DIAMOND SBO research blog series, available at: https://soc.kuleuven.be/fsw/diamond/diversiteit%2C%20ongelijkheid%20en%20polarisatie.pdf; Jonathan Hendrickx, ‘Is de filterbubbel fakenews?’, DIAMOND SBO research blog series, available at: https://soc.kuleuven.be/fsw/diamond/Jonathan%20Hendrickx%20V3.pdf. 30 Colleoni, E., Rozza, A., & Arvidsson, A. (2014). Echo Chamber or Public Sphere? Predicting Political Orientation and Measuring Political Homophily in Twitter Using Big Data: Political Homophily on Twitter. Journal of Communication, 64(2), 317–332, https://doi.org/10.1111/jcom.12084. 31 Bart Vanhaelewyn and Lieven Demarez, ‘IMEC Digimeter 2017. Measuring digital trends in Flanders’, https://www.imec-int.com/drupal/sites/default/files/inline-files/imec-digimeter-full-2018.pdf. 32 James Bohman and William Rehg, “Jürgen Habermas”, in The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta, Fall 2017 (Metaphysics Research Lab, Stanford University, 2017), https://plato.stanford.edu/archives/fall2017/entries/habermas/; Phil Parvin and Ben Saunders, “The Ethics of Political Participation: Engagement and Democracy in the 21st Century”, Res Publica 24, no. 1 (1 February 2018): 3–8, https://doi.org/10.1007/s11158-017-9389-7. 33 For more examples and arguments, see: Frederik J. Zuiderveen Borgesius et al., “Should We Worry about Filter Bubbles?”, Internet Policy Review 5, no. 1 (31 March 2016), https://doi.org/10.14763/2016.1.401. 34 Judith Möller et al., “Do Not Blame It on the Algorithm: An Empirical Assessment of Multiple Recommender Systems and Their Impact on Content Diversity”, Information, Communication & Society 21, no. 7 (3 July 2018): 959–77, https://doi.org/10.1080/1369118X.2018.1444076; see also Verhoest, P. (2018), ‘Resonance in the Echo Chamber: A cognitive analysis on the perception of content diversity’, forthcoming.


with their interests, but which provide viewpoint diversity within the content itself. In light of this, HELBERGER et al. suggest that news recommender systems may also provide a solution to some of the claimed negative effects. They suggest designing recommender systems in a way that effectively stimulates diverse exposure, rather than limiting it, an approach referred to as “diversity sensitive design”.35
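One way to read the ‘diversity sensitive design’ idea is as a re-ranking step that trades off predicted relevance against exposure diversity. The sketch below is a minimal, hypothetical illustration of that reading (a greedy re-ranker with an invented penalty for viewpoints already shown), not a description of HELBERGER et al.’s actual proposal or of any deployed system.

```python
# Hypothetical sketch of diversity-sensitive re-ranking: greedily pick items,
# penalising viewpoints that have already been covered in the selection.

candidates = [  # (title, predicted relevance for the user, viewpoint label)
    ("Tax cuts explained", 0.90, "conservative"),
    ("More on tax cuts", 0.85, "conservative"),
    ("Tax cuts criticised", 0.80, "progressive"),
    ("Climate accord analysis", 0.60, "centrist"),
]

def rerank(items, k=3, penalty=0.3):
    chosen, covered, pool = [], set(), list(items)
    while pool and len(chosen) < k:
        # Score = relevance, minus a penalty if the viewpoint is already shown.
        best = max(pool,
                   key=lambda it: it[1] - (penalty if it[2] in covered else 0.0))
        chosen.append(best)
        covered.add(best[2])
        pool.remove(best)
    return chosen

for title, relevance, viewpoint in rerank(candidates):
    print(f"{title} ({viewpoint}, relevance {relevance})")
# A purely relevance-based top 3 would open with two conservative items;
# the diversity penalty promotes the progressive piece, broadening exposure.
```

The penalty value is an arbitrary knob in this sketch; in practice, how strongly to weigh diversity against relevance is itself a design decision with normative implications.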

2.4.2 Media Pluralism36

In a broad sense, media pluralism is a value which touches upon many aspects of the media sector, as it is intrinsic to freedom of expression. It holds that citizens ought to be enabled to be well informed and capable of forming their own critical opinion, and is thus deemed to safeguard the fundamental well-functioning of a democratic society. In other words, media pluralism aims to enable an open and free media to help maintain democracy, by way of empowering and informing its individual citizens. Legal literature shows a great variety of terms used in relation thereto, such as internal and external pluralism, cultural and political pluralism, open and representative pluralism, structural and content pluralism, polarised and moderate pluralism, organised and spontaneous pluralism, reactive, interactive and proactive pluralism, descriptive and evaluative pluralism, etc.37 The multiplicity of dimensions emphasises the need for media to reflect the diversity that exists in society. A legal basis for media pluralism can be found in the EU Charter of Fundamental Rights, where it is defined as a value intrinsic to the freedom of expression: “the freedom and pluralism of the media shall be respected” (Art. 11(2)). The value stems from the idea of creating the so-called ‘public sphere’, which is deemed crucial for open, democratic debate. The Council of Europe broadly defines media pluralism as a combination of diversity of media supply, reflected, for example, in the existence of a plurality of independent and autonomous media (generally called structural pluralism), and of diversity of media types and contents (views and opinions) made available to the public.38 In a similar way, the European Commission acknowledged in 2007 that media pluralism “embraces many aspects, ranging from, for example, merger control rules to content requirements in broadcasting licensing systems, the establishment of editorial freedoms, the independence and status of public service broadcasters, the professional situation of journalists, the relationship between media and political actors, etc. It encompasses all measures that ensure citizens’ access to a variety of information sources and voices, allowing them to form opinions without the undue influence of one dominant opinion forming power.”39

35 Natali Helberger, Kari Karppinen, and Lucia D’Acunto, “Exposure Diversity as a Design Principle for Recommender Systems”, Information, Communication & Society 21, no. 2 (February 2018): 191–207, https://doi.org/10.1080/1369118X.2016.1271900.
36 Peggy Valcke, Robert G. Picard, and Miklós Sükösd, “A Global Perspective on Media Pluralism and Diversity: Introduction”, in Media Pluralism and Diversity: Concepts, Risks and Global Trends, ed. Peggy Valcke, Miklós Sükösd, and Robert G. Picard (London: Palgrave Macmillan UK, 2015), 1–19, https://doi.org/10.1057/9781137304308_1.
37 Peggy Valcke, Miklos Sükösd, and Robert Picard, eds., Media Pluralism and Diversity. Concepts, Risks and Global Trends, Palgrave Global Media Policy and Business (Springer, 2015); Petra Bárd, Judit Bayer, and Sergio Carrera, “A comparative analysis of media freedom and pluralism in the EU Member States. Study for the LIBE Committee.” (Brussels, Belgium: Directorate-General for Internal Policies, Policy Department for Citizens’ Rights and Constitutional Affairs, September 2016), http://www.statewatch.org/news/2016/oct/ep-study-media-freedom-in-EU.pdf; Andrea Czepek, Melanie Hellwig, and Eva Nowak, eds., Press Freedom and Pluralism in Europe: Concepts and Conditions (Bristol, UK: Intellect Books, 2009).
38 “Recommendation CM/Rec(2007)2 of the Committee of Ministers to member states on media pluralism and diversity of media content”, Pub. L. No. CM/Rec(2007)2 (2007), https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=09000016805d6be3; “Recommendation CM/Rec(2018)1 of the Committee of Ministers to member States on media pluralism and transparency of media ownership”, CM/Rec(2018)1 § (2018), https://search.coe.int/cm/Pages/result_details.aspx?ObjectId=0900001680790e13.
39 “Media pluralism in the Member States of the European Union. Commission Staff Working Document.” (2007), p. 5, http://ec.europa.eu/information_society/media_taskforce/doc/pluralism/media_pluralism_swp_en.pdf.


In essence, pluralism is about diversity in the publicly available media. However, in an increasingly digitised media environment, what is made available to the public does not always coincide with what is actually consumed by citizens. A key challenge is that, while the abundance of online media content is assumed to guarantee a diversity of available opinions and perspectives, individuals only have a limited attention span and limited time to make use of that abundance; this is sometimes called the ‘new scarcity’ of the online environment. Where media pluralism safeguards traditionally dealt primarily with the need to foster diversity in supply (e.g. actors, viewpoints and types of content) and distribution (e.g. net neutrality, standards and interoperability, must-carry, …),40 the online environment presses the question of how diverse the news consumption of an online user actually is.

2.4.3 Trust (in Media)41

Trust in media is essential to safeguard its functioning as a watchdog of government. When citizens do not believe the media any more than they do other information sources, this threatens their ability to inform themselves on public and political affairs and to reflect critically on them. Trust in media is traditionally safeguarded by a combination of journalistic deontology, transparency towards the public, and regulatory protections (e.g. freedom of the press) and limitations (competition and consumer regulations).42 With the digital transition to the online environment, many of these traditional safeguards are currently failing or are highly inadequate in creating trust with citizens. Deontological standards appear to be applied more flexibly or less stringently than before, and there is less transparency on the use of personal data by media companies, both for news recommendation and for advertising personalisation. More importantly, intermediaries and social media networks have communicative powers similar to those of traditional media, without the corresponding responsibilities of deontology, transparency or regulatory obligations. The abovementioned recent recommendations from the EP43 and the CoE44 therefore state that “The visibility, findability, accessibility and promotion of media content online are increasingly influenced by automated processes, whether used alone or in combination with human decisions. States should encourage social media, media, search and recommendation engines and other intermediaries which use algorithms, along with media actors, regulatory authorities, civil society, academia and other relevant stakeholders to engage in open, independent, transparent and participatory initiatives”.

2.4.4 Fundamental Rights

Given the need to collect data in order to provide automated news curation and news provision, the deployment of these algorithms might come into conflict with the fundamental rights to privacy and

40 “European Parliament resolution of 25 September 2008 on concentration and pluralism in the media in the European Union”, Pub. L. No. 2007/2253(INI) (2008); Peggy Valcke et al., “The European Media Pluralism Monitor: Bridging Law, Economics and Media Studies as a First Step towards Risk-Based Regulation in Media Markets”, Journal of Media Law 2, no. 1 (2010): 85–113, https://doi.org/10.1080/17577632.2010.11427355; Peggy Valcke, Digitale Diversiteit. Convergentie van media-, telecommunicatie- en mededingsrecht. (Brussels, Belgium: Larcier, 2004).
41 Trust in Media and Journalism (New York, NY: Springer Berlin Heidelberg, 2018).
42 For example: the application and enforcement of standards of fact-checking, factual accuracy, source reliability, re-use policies, etc. See amongst others: Trust in Media and Journalism (New York, NY: Springer Berlin Heidelberg, 2018); JRC Digital Economy Working Paper 2018-02, The digital transformation of news media and the rise of disinformation and fake news, Mertens et al., https://ec.europa.eu/jrc/sites/jrcsh/files/jrc111529.pdf; Special Eurobarometer 452 – November 2016, ‘Media pluralism and democracy’, European Commission, Directorate-General for Justice and Consumers, co-ordinated by the Directorate-General for Communication, https://ec.europa.eu/digital-single-market/en/news/media-pluralism-and-democracy-special-eurobarometer-452
43 “European Parliament resolution of 3 May 2018 on media pluralism and media freedom in the European Union (2017/2209(INI))”, Pub. L. No. 2017/2209(INI), P8_TA (2018), http://www.europarl.europa.eu/sides/getDoc.do?pubRef=-//EP//NONSGML+TA+P8-TA-2018-0204+0+DOC+PDF+V0//EN.
44 Recommendation CM/Rec(2018)1 of the Committee of Ministers to member States on media pluralism and transparency of media ownership.


data protection. Indeed, the data collection underlying these services should always comply with the right to data protection. Moreover, as ever more information can be inferred from an individual’s personal data, including political preferences, one should also be wary that the collection of data does not interfere with a person’s privacy. Additionally, the automated provision of news content potentially affects the fundamental rights to freedom of expression and freedom of information.


The Legal Framework

This section analyses the legal state of the art where algorithms are concerned. News-recommender algorithms have an important impact on a number of fundamental rights and freedoms in the Charter, notably respect for private life (Art.7); protection of personal data (Art.8); freedom of expression and information (Art.11); and non-discrimination (Art.21). Without discarding its importance, it is not the purpose of ATAP to analyse the impact on fundamental rights per se. Rather, the project is aimed at generating transparency of such algorithms, which can be deemed a crucial requirement for assessing said impact. Overall, it is believed that adequate transparency will lead to better accountability of news-recommenders. What exactly constitutes adequate transparency is what will be investigated in the ATAP project. This should contribute to restoring trust in a sector whose role in society has been severely challenged over the past few years (not least as a result of the ‘fake news’ crisis). The focus of the legal chapter in particular is to lay down the legal building blocks for the rest of the project. Research on so-called ‘algorithmic accountability’ has been burgeoning recently. This deliverable is aimed at mapping the most pertinent contributions to the debate so far, focusing on the legal aspects. It is structured along the different legal disciplines that are relevant for the overall purposes of this project. The most important framework – at least judging by the proportion of scholarship – appears to be data protection law. That being said, other legal frameworks may be particularly suitable for tackling some of the ATAP questions as well, notably consumer protection and media law.

3.1 ATAP and Data Protection Law

Data protection law may well be the most relevant legal framework to ensure algorithmic transparency and accountability in practice. The reason for data protection’s importance is that the operation of algorithms necessarily involves data processing, and oftentimes the processing of personal data. Advances in information and communication technology enable ever more expansive data-processing practices in the delivery of information society services. Indeed, the current software paradigm is one in which services constantly monitor users and adjust service delivery in a continuous feedback loop.45 This significantly boosts the ability to personalise services and content delivery to end-users (who increasingly count on services to help them sort through the information overload). Indeed, the way in which services filter, sort, arrange, organise and/or personalise information is becoming an important asset and competitive advantage. The General Data Protection Regulation 2016/679 (GDPR) constitutes the default framework for regulating personal data processing in the EU. It contains several provisions that are relevant with regard to algorithmic transparency and accountability. For the purposes here, three relevant layers can be identified, in order of specificity:

3.1.1 General principles

PRIMARY LAW AND SUPRA-NATIONAL DOCUMENTS – Transparency for automated data processing is understood as a central component of the fundamental right to data protection (Art.8 Charter of

45 Seda Gürses and Joris van Hoboken, ‘Privacy after the Agile Turn’ in Jules Polonetsky, Omer Tene and Evan Selinger (eds), Cambridge Handbook of Consumer Privacy (CUP 2017) <https://osf.io/preprints/socarxiv/9gy73/> accessed 12 November 2017

[Figure: the three layers of GDPR relevance, in order of specificity – general principles; transparency rights and obligations; transparency for automated decision-making]


Fundamental Rights of the European Union). Indeed, not only should data be processed fairly,46 Article 8(2) also proclaims that everyone ‘has the right of access to data which has been collected concerning him or her, and the right to have it rectified.’ While the latter does not go as far as requiring an explanation of the how and why of processing, the former requirement (i.e. fairness) implies it nonetheless. Transparency of automated processing, and of profiling in particular, is also a major concern for the Council of Europe, as illustrated in the (recently modernised) Convention 10847 and earlier recommendations.48

LAWFULNESS, FAIRNESS AND TRANSPARENCY – Zooming in on the GDPR in particular, Article 5 GDPR contains the principles relating to the processing of personal data, i.e. the overarching rules of the game that each processing operation will have to abide by. Art.5(1)a requires that ‘personal data shall be processed lawfully, fairly and in a transparent manner in relation to the data subject’. The three requirements – lawfulness, fairness and transparency – each dictate a minimum level of explanation regarding the use and practical operation of (news-)recommender algorithms. The lawfulness requirement refers to Article 6(1), which specifies six distinct conditions under which personal data may be processed. Only three of these conditions are relevant with regard to news-recommender algorithms as discussed in the ATAP project: (a) consent; (b) necessity for the performance of a contract; and (f) legitimate interests. In order for consent to be valid, strict transparency requirements are put in place (Art.4(11) and 7). Indeed, the controller needs to be able to establish that data subjects fully understand what they are consenting to, and such consent should also be granular (i.e. distinguish between the different types of processing taking place). As explained by the Article 29 Working Party (WP29):

‘For the consent to be informed, and to ensure transparency, data subjects/consumers should be given access to their “profiles”, as well as to the logic of the decision-making (algorithm) that led to the development of the profile. In other words: organisations should disclose their decisional criteria. This is a crucial safeguard and all the more important in the world of big data. More often than not, it is not the information collected in itself that is sensitive, but rather, the inferences that are drawn from it and the way in which those inferences are drawn, that could give cause for concern. Further, the source of the data that led to the creation of the profile should also be disclosed.’49

The second lawfulness ground (processing is necessary for the performance of a contract) may be relied on when personalised news-recommendation is the very reason why end-users contracted with service providers in the first place (e.g. a service delivering personalised news digests). This implies knowledge about the personalisation taking place (parties need to be aware of the scope of a contract for it to be valid). The last lawful ground requires a balancing act between the interests, fundamental rights and freedoms of data subjects and the legitimate interests of the controller and/or third parties. In order to rely on this last lawfulness ground, the controller will need to be proactively transparent, taking into account data subjects’ legitimate expectations as well.50 As explained by WP29:

46 Cf. Damian Clifford and Jef Ausloos, “Data Protection and the Role of Fairness”, Yearbook of European Law, accessed 13 August 2018, https://doi.org/10.1093/yel/yey004.
47 In particular Artt.8-9.
48 Council of Europe, “Recommendation on the Protection of Individuals with Regard to Automatic Processing of Personal Data in the Context of Profiling CM/Rec(2010)13”, 23 November 2010.
49 Article 29 Working Party, “Opinion 03/2013 on purpose limitation” (Brussels: Article 29 Working Party, 2 April 2013), 47, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf.
50 WP29 explains that ‘The controller must perform a careful and effective test in advance, based on the specific facts of the case rather than in an abstract manner, taking also into account the reasonable expectations of data subjects. As a matter of good practice, where appropriate, carrying out this test should be documented in a


‘In order to enable data subjects to exercise their rights, and to allow public scrutiny by stakeholders more broadly, the Working Party recommends that controllers explain to data subjects in a clear and user-friendly manner, the reasons for believing that their interests are not overridden by the interests or fundamental rights and freedoms of the data subjects, and also explain to them the safeguards they have taken to protect personal data, including, where appropriate, the right to opt out of the processing.’51

The Working Party continues to explain that the relevant information cannot be phrased in ‘legalistic terms buried in the small print of a contract’.52 In order to comply with the lawfulness, fairness and transparency requirements, controllers will need to make sure that data subjects are fully aware of the extent to which their personal data is processed. Indeed, Recital 60 GDPR also states that the ‘principles of fair and transparent processing require that the data subject is informed of the existence of the processing operation and its purposes. The controller should provide the data subject with any further information necessary to ensure fair and transparent processing taking into account the specific circumstances and context in which the personal data are processed.’ Even if the transparency principle is not defined in the GDPR, recital 39 does provide more clarity as to its rationale and meaning: ‘The principle of transparency requires that any information and communication relating to the processing of those personal data be easily accessible and easy to understand, and that clear and plain language be used.53 That principle concerns, in particular, information to the data subjects on the identity of the controller and the purposes of the processing and further information to ensure fair and transparent processing in respect of the natural persons concerned and their right to obtain confirmation and communication of personal data concerning them which are being processed. Natural persons should be made aware of risks, rules, safeguards and rights in relation to the processing of personal data and how to exercise their rights in relation to such processing.’

ACCURACY – Article 5(1)d requires controllers to ensure that personal data are ‘accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay’. The most evident way of ensuring accuracy is to verify the data with the data subject and/or to leave an opportunity to rectify the data. Having said that, it is still very unclear what exactly ‘accuracy’ means in a big data environment, where inferences and predictions come with probabilities attached. What is the ‘accuracy’ of an inference that Alice is 80% likely to enjoy baroque music, or 20% likely to default on repaying her student loan? Perhaps the obligation to (take every reasonable measure to) ensure accuracy necessitates close scrutiny and substantive/procedural safeguards when it comes to automated decision-making processes (cf. data protection impact assessments and accountability).

ACCOUNTABILITY – Finally, the last paragraph of Article 5 installs the so-called ‘accountability principle’. It should be read together with Article 24 (responsibility of the controller). The accountability principle postulates that ‘[t]he controller shall be responsible for, and be able to demonstrate compliance with’ the core data quality principles in Art.5(1). Being able to demonstrate compliance with these principles

sufficiently detailed and transparent way so that the complete and correct application of the test could be verified - when necessary - by relevant stakeholders including the data subjects and data protection authorities, and ultimately, by the courts.’ Article 29 Working Party, “Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC”, Opinion (Brussels: European Commission, 9 April 2014), 43, http://ec.europa.eu/justice/data-protection/article-29/documentation/opinion-recommendation/files/2014/wp217_en.pdf.
51 Ibid., 43–44.
52 Ibid.
53 Article 29 Working Party, “Guidelines on transparency under Regulation 2016/679” (Brussels, 11 April 2018), 6, http://ec.europa.eu/newsroom/just/document.cfm?doc_id=48850.


goes beyond detailing one’s processing operations in a privacy policy. Indeed, it may require controllers to explain in detail how a specific processing operation works for one or more individuals, and how it might affect these individuals. The extent of such an obligation will be a function of a number of factors. Indeed, Article 24(1) requires that ‘[t]aking into account the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for the rights and freedoms of natural persons, the controller shall implement appropriate technical and organisational measures to ensure and to be able to demonstrate that processing is performed in accordance with the GDPR. Those measures shall be reviewed and updated where necessary.’ In other words, the regulatory burden on controllers’ shoulders is dynamic (varies depending on context and time).
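The accuracy question raised above can be made tangible with a minimal, purely hypothetical sketch of how an inferred attribute might be stored by a controller. Every field name and value below is invented; the point is that ‘rectification’ of a probabilistic inference is ambiguous, since the probability, the decision threshold and the underlying data are three different candidates for correction.

```python
# Illustrative only: a hypothetical inferred profile as a controller might store it.
# Field names, values and the threshold are invented for the sake of the example.

alice_profile = {
    "inference": "likes_baroque_music",
    "probability": 0.80,          # model output, not an observed fact
    "derived_from": ["listening_history", "age_bracket", "newsletter_clicks"],
    "model_version": "2018-06",
}

def is_treated_as_true(inference, threshold=0.75):
    """Many systems silently turn a probability into a categorical 'fact'."""
    return inference["probability"] >= threshold

# What should Alice be able to rectify: the stored probability (0.80),
# the threshold that turns it into a 'fact', or the data in 'derived_from'?
print(is_treated_as_true(alice_profile))  # True
```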

3.1.2 General transparency rights and obligations

A useful way of categorising the key GDPR provisions is by making the following two distinctions: ex ante v ex post, and empowerment v protective measures. This results in a ‘data protection matrix’ with four quadrants (Table 1). Categorising the key GDPR provisions in this way is relevant here because it enables a better understanding of the scope, meaning and impact of the respective provisions. As such, it helps manage expectations as to the provisions discussed in the following pages.

                       | Ex Ante                       | Ex Post
Protective Measures    | E.g. Data Quality Principles  | E.g. DPA Enforcement
Empowerment Measures   | E.g. Consent                  | E.g. Right to Erasure

Table 1 - Data Protection Matrix

The division between ex ante and ex post measures is quite straightforward. Data protection law provides for measures at different stages throughout a data processing operation’s lifecycle. Some of these measures can be situated at a specific moment in time (e.g. data subject rights). Other measures rather take place over a more or less extended period of time (e.g. security obligations). Categorising these measures will depend on the moment at which they are triggered: before or after the relevant processing operation is initiated. The second divide in the way data protection law aims to achieve its goals is the one between protective and empowerment measures. This distinction is more profound than the one between (controller) obligations and (data subject) rights (which only describes specific kinds of implementation). The basic idea is simple: empowerment measures are aimed at providing individuals with the tools to control their data and protect themselves, while protective measures aim to offer protection without the individual having to be proactive. In other words, the empowerment approach presupposes an active role for the individual, whereas the protective approach allows the individual to be more passive. As will appear in the following pages, the four quadrants of the data protection matrix are not mutually exclusive.

3.1.3 Core transparency provisions

EX ANTE TRANSPARENCY – Transparency in the GDPR is further specified in Articles 12-15. Whether the personal data is obtained directly (Art.13) or indirectly (Art.14), controllers have an obligation to provide certain information to data subjects proactively, i.e. without the data subjects having to take any action themselves (Table 2). These provisions can first and foremost be qualified as protective measures. They protect data subjects by forcing controllers to give proper thought to, and be upfront about, their processing operations. As such, they also serve as a useful compliance-testing tool for data protection authorities and/or other interest groups. Having said that, Articles 13-14 also have an empowering facet to them. After all, the provisions make data subjects aware of processing taking


place and as such can be seen as a sine qua non for empowering data subjects to invoke one or more of their rights (e.g. object, erasure, portability). The most important components of ex ante transparency relate to the scope and purposes of processing, the risks involved, the retention period and how to exercise data subject rights.

Information Requirement Art. 13 Art. 14

Identity and contact details of the controller and, where applicable, its representative 1(a) 1(a)

Contact details of the data protection officer, where applicable 1(b) 1(b)

Purposes of the processing for which the personal data are intended and legal basis 1(c) 1(c)

Categories of personal data concerned - 1(d)

Where the processing is based on point (f) of Article 6(1), the legitimate interests pursued by the controller or by a third party 1(d) 2(b)

Recipients or categories of recipients of the personal data, if any 1(e) 1(e)

Details on potential data transfers to third countries 1(f) 1(f)

Retention period, or if that is not possible, the criteria used to determine that period 2(a) 2(a)

Existence of the data subject rights to access, rectification, erasure, restriction of processing, to object, and to data portability 2(b) 2(c)

Where the processing is based on consent, the existence of the right to withdraw consent at any time 2(c) 2(d)

Right to lodge a complaint with a supervisory authority 2(d) 2(e)

Whether the provision of personal data is a statutory or contractual requirement, or a requirement necessary to enter into a contract, as well as whether the data subject is obliged to provide the personal data and of the possible consequences of failure to provide such data 2(e) -

Source from which the personal data originate, and if applicable, whether it came from publicly accessible sources54 - 2(f)

Existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject. 2(f) 2(g)

Table 2 - Ex Ante Transparency

For the purposes of the ATAP project, it is worth emphasising that all of the transparency requirements need to be fulfilled in a clear and specific way. Data subjects should be informed in detail of what aspects of their experience are ‘personalised’ and how exactly.55 In light of the overarching fairness and transparency principles, data subjects should also be made aware of the risks, rules, safeguards and rights in relation to the processing and how to exercise their rights (recital 39).

EX POST TRANSPARENCY – Article 15 complements the information obligations in Artt. 13-14 by granting data subjects an explicit right to obtain additional information (Table 3). Within the data protection

54 Recital 61 does acknowledge that ‘[w]here the origin of the personal data cannot be provided to the data subject because various sources have been used, general information should be provided.’
55 Article 29 Working Party, “Guidelines on transparency under Regulation 2016/679”, 9.


matrix (cf. Table 1), Article 15 can be qualified as an ex post empowerment measure.56 Essentially, the right of access gives data subjects the ability to force more transparency, specifically with regard to their individual situation. Indeed, ex ante transparency generally relates to information that is relevant to all processing operations and data subjects. Article 15 is focused on making sure transparency is also effective at the individual level. Importantly, article 15 also includes the provision of details on ‘any personal data used for profiling, including the categories of data used to construct a profile.’57

Controllers should be transparent about the full set of personal data they use to create the profile, where they got each piece of information from, what the exact purposes are of the profiling, and who it has been (or might be) shared with. Moreover, they will also have to specify the retention period(s) and the availability of data subject rights, including the ability to file a complaint with the DPA.

Information Requirement Art. 15

Confirmation as to whether or not personal data concerning him or her are being processed, and, where that is the case, access to the personal data 1

Purposes of the processing 1(a)

Categories of personal data concerned 1(b)

Recipients or categories of recipient to whom the personal data have been or will be disclosed, in particular recipients in third countries or international organisations 1(c)

Retention period, or if that is not possible, the criteria used to determine that period 1(d)

Existence of the data subject rights to rectification, erasure, restriction of processing, and to object 1(e)

Right to lodge a complaint with a supervisory authority 1(f)

Where personal data are not collected from the data subject, any information on the source 1(g)

Existence of automated decision-making, including profiling, referred to in Article 22(1) and (4) and, at least in those cases, meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject 1(h)

In case of transfer to third country, information about the appropriate safeguards 2

Table 3 - Ex Post Transparency
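By way of a purely hypothetical illustration of the access rights summarised in Table 3, a controller operating a news recommender might structure (part of) its answer to an Article 15 request along the following lines. The GDPR prescribes the content, not the format; all field names and values below are invented.

```python
# Hypothetical sketch of (part of) an Article 15 access response for a news
# recommender; the GDPR prescribes the content listed in Table 3, not a format.

access_response = {
    "processing_confirmed": True,                                    # Art. 15(1)
    "purposes": ["news personalisation", "audience analytics"],      # Art. 15(1)(a)
    "categories_of_data": ["reading history", "device type", "coarse location"],  # 15(1)(b)
    "recipients": ["analytics processor (EU)"],                      # Art. 15(1)(c)
    "retention": "13 months or until account deletion",              # Art. 15(1)(d)
    "data_subject_rights": ["rectification", "erasure", "restriction", "objection"],  # 15(1)(e)
    "complaint": "You may lodge a complaint with your supervisory authority",  # 15(1)(f)
    "sources": ["provided by you", "observed on our website"],       # Art. 15(1)(g)
    "automated_decision_making": {                                   # Art. 15(1)(h)
        "exists": True,
        "logic": "articles are ranked by predicted interest based on reading history",
        "significance": "determines the order and selection of items on your front page",
    },
    "profile_data": {"topic_interests": {"politics": 0.7, "sports": 0.2}},  # cf. WP29 guidance
}
```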

TRANSPARENCY MODALITIES – The GDPR also lists a number of modalities so as to ensure that transparency is effective. The key provision here is Article 12, but some specific modalities can also be found within the respective provisions discussed above. Most importantly, data subjects cannot be charged a fee for transparency;58 there are strict timing requirements, as well as broader conditions for the way in which transparency is provided (Table 4). The Article 29 Working Party has further specified that controllers should actively consider the audience’s ‘likely level of understanding’ when accommodating transparency (e.g. appropriate level of detail, prioritisation of information, format, etc.).59 The controller will need to consider the context of data processing, the product/service experience, the device used,

56 Jef Ausloos and Pierre Dewitte, “Shattering One-Way Mirrors – Data Subject Access Rights in Practice”, International Data Privacy Law 8, no. 1, accessed 11 March 2018, https://doi.org/10.1093/idpl/ipy001.
57 Article 29 Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”, 17.
58 A controller may not require the data subject to be a paying customer as a condition for accommodating their rights. Article 29 Working Party, “Guidelines on transparency under Regulation 2016/679”, 13. Previous empirical work has demonstrated that certain controllers effectively only enable access requests filed by people who have an account with the service and/or have bought something with the service before.
59 See also Recital 60 and Article 29 Working Party, “Guidelines on transparency under Regulation 2016/679”, 11.


nature of interactions, etc.60 As a result, the information obligation may also differ over time.61 In order to render transparency more meaningful, non-written material (e.g. interactive AV material) may complement the information provision.62 As explained in Recital 60, information ‘may be provided in combination with standardised icons in order to give in an easily visible, intelligible and clearly legible manner, a meaningful overview of the intended processing. Where the icons are presented electronically, they should be machine-readable.’ Even though the increasing complexity of data processing ecosystems may make it hard to accommodate the core transparency requirements, this by no means exonerates controllers. To the contrary, Recital 58 highlights that transparency is even more important in complex situations involving many actors.63 When the controller processes a large quantity of personal data, Recital 63 does permit the controller to request that the data subject specify the information or processing activities to which the request relates. One way to enable effective transparency regarding large data-sets is to provide remote direct access.64

Modality Content Provision

Fee

Free of charge, but controllers may charge reasonable fee when: Rec.59; Art. 12(5)

Requests are manifestly unfounded or excessive Art. 12(5)a

Any further copies of personal data are requested. Art. 15(3)

Time limit

For ex ante transparency, at the moment of obtaining the personal data, when collected from the data subject directly, Art. 13(1)

when collected indirectly: (a) within a reasonable period after obtaining, but within one month, considering processing-context; (b) if used for communication with the data subject, at the latest at the time of the first communication; or (c) if a disclosure to another recipient is envisaged, at the latest when the personal data are first disclosed. Art.14(3)

For ex post transparency, without undue delay and, in any event, within one month of receipt of the request (whether the controller intends to take action or not). Period may be extended by two further months where necessary, taking into account the complexity and the number of the requests. Rec.59; Art.12(3)-(4)

Form for exercising

Controllers should provide means for requests to be made electronically, especially where personal data are processed by electronic means. Rec.59

Form for answering

Information shall be provided in writing, or by other means, including, where appropriate, by electronic means. Where possible, direct remote access to a secure system should be made available. When requested by the data subject, the information may be provided orally, provided that the identity of the data subject is proven by other means. Rec.63; Art.12(1)

60 This may require running (and documenting) trials before ‘going live’. Article 29 Working Party, 14.
61 Cf. Article 29 Working Party, 16–17.
62 Article 29 Working Party, 12. WP29 further lists the following examples: cartoons, infographics or flowcharts. Where transparency information is directed at children specifically, controllers should consider what types of measures may be particularly accessible to children (e.g. these might be comics/cartoons, pictograms, animations, etc., amongst other measures).
63 Article 29 Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”, 25.
64 Recital 63 and Article 29 Working Party, 17.

Intelligibility In a concise, transparent, intelligible and easily accessible form, using clear and plain language, in particular for information addressed specifically to a child. Rec.58; Art.12(1)

Verification of identity

Controllers may request additional information necessary to confirm the identity of the data subject, and should use all reasonable measures to do so, in particular in the context of online services and online identifiers. Rec.64; Art.12(6)

Limitations

Union or Member State law may restrict by way of a legislative measure the scope of the obligations and rights provided for in Articles 12 to 22 when such a restriction respects the essence of the fundamental rights and freedoms and is a necessary and proportionate measure in a democratic society to safeguard: see list in Art. 23(1)a-j. These measures must contain specific provisions at least, where relevant, as to: see list in Art. 23(2)a-h. Rec.73; Art.23(1)-(2)

When processing is carried out for journalistic purposes or the purpose of academic, artistic or literary expression, Member States shall provide for exemptions or derogations from Chapter III (rights of the data subject) if they are necessary to reconcile the right to the protection of personal data with the freedom of expression and information. Recital 153; Art.85(2)

Where personal data are processed for scientific or historical research purposes or statistical purposes, Union or Member State law may provide for derogations from the rights referred to in Articles 15, 16, 18 and 21, subject to the conditions and safeguards referred to in Art. 89(1), in so far as such rights are likely to render impossible or seriously impair the achievement of the specific purposes, and such derogations are necessary for the fulfilment of those purposes. Rec. 156; Art. 89(2)

Table 4 - Transparency Modalities
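As noted above, Recital 60 allows information to be provided in combination with standardised, machine-readable icons. The GDPR does not prescribe any particular format; the fragment below is only a sketch of what a machine-readable notice entry might look like, with all keys, values and URLs invented for illustration.

```python
# Hypothetical sketch of a machine-readable notice fragment in the spirit of
# Recital 60 (standardised, machine-readable icons); no such format is
# prescribed by the GDPR, and all keys and URLs below are placeholders.

notice_fragment = {
    "icon": "personalisation",          # would map to a standardised icon set
    "summary": "Your front page is personalised based on your reading history.",
    "purpose": "news personalisation",
    "lawful_basis": "consent",
    "data_used": ["reading history", "device type"],
    "controls": {
        "opt_out_url": "https://example.org/settings/personalisation",   # placeholder URL
        "access_request_url": "https://example.org/privacy/access",      # placeholder URL
    },
}
```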

HURDLES – The GDPR contains several provisions that might effectively raise hurdles to (full) transparency. Article 11(1) explains that controllers do not have to retain personal data for the sole purpose of being able to accommodate data subject rights at a later stage. Put differently, the requirement to accommodate data subject rights does not prevent controllers from anonymising their datasets. Be that as it may, data subjects still have the possibility to provide the controller with additional information so as to (re-)identify their data in anonymised data-sets (Art.11(2)).65 Furthermore, Article 13(4) exonerates controllers who obtained personal data directly from having to provide ex ante information when they can establish that the data subject already has that information.66 Controllers having obtained information indirectly are not subject to the ex ante transparency requirements when (a) the data subject already has the information; (b) providing such information proves impossible or would involve a disproportionate effort;67 (c) obtaining or disclosure is expressly laid down by Union

65 In practice, this may lead to a frustrating back-and-forth between data subject and controller, as further detailed in: Michael Veale, Reuben Binns, and Jef Ausloos, “When Data Protection by Design and Data Subject Rights Clash”, International Data Privacy Law 8, no. 1 (n.d.): 4–28, https://doi.org/10.1093/idpl/ipy002; Jef Ausloos, “Paul-Olivier Dehaye and the Raiders of the Lost Data”, Academic Blog, CITIP blog (blog), 10 April 2018, https://www.law.kuleuven.be/citip/blog/paul-olivier-dehaye-and-the-raiders-of-the-lost-data/.
66 This exemption is subject to a narrow interpretation, however. Article 29 Working Party, “Guidelines on transparency under Regulation 2016/679”, 27.
67 This exemption continues, however: ‘…in particular for processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes, subject to the conditions and safeguards referred to in Article 89(1) or in so far as the obligation referred to in paragraph 1 of this Article is likely to render impossible or seriously impair the achievement of the objectives of that processing. In such cases the controller shall take appropriate measures to protect the data subject's rights and freedoms and legitimate interests, including making the information publicly available’.


or Member State law to which the controller is subject and which provides appropriate measures to protect the data subject's legitimate interests; or (d) where the personal data must remain confidential subject to an obligation of professional secrecy regulated by Union or Member State law, including a statutory obligation of secrecy (Art.14(5)).68 With regard to ex post transparency, Article 15(4) specifies that providing copies of personal data to the data subject should not adversely affect the rights and freedoms of others (e.g. when personal data cannot be dissociated from personal data of another person). Finally, recital 63 also hints at the fact that controllers may oppose accommodating data subject rights when granting the rights would impact their trade secrets or intellectual property. Such concerns may be more or less relevant in the context of automated decision-making and profiling, which are often the product of proprietary algorithms and/or training-data-sets. Still, such concerns cannot result in flat-out denying data subject rights altogether (cf. Section on IP law below).

3.1.4 Satellite transparency provisions

Apart from the core transparency provisions discussed above, there are a number of other articles throughout the GDPR that may affect algorithmic transparency and accountability. The most relevant ones are discussed in this sub-section.

DATA PROTECTION BY DESIGN AND BY DEFAULT – Article 25 sets out the obligation of data protection by design and by default. In essence, this provision requires controllers to implement appropriate technical and organisational measures for ensuring compliance with the GDPR, ‘taking into account the state of the art, the cost of implementation and the nature, scope, context and purposes of processing as well as the risks of varying likelihood and severity for rights and freedoms of natural persons posed by the processing’. In line with this obligation, ‘transparency mechanisms should be built into processing systems from the ground up so that all sources of personal data received into an organisation can be tracked and traced back to their source at any point in the data processing life cycle’.69

RECORDS OF PROCESSING ACTIVITIES – Article 30 obliges every controller to maintain a ‘record of processing activities’, unless it is an organisation employing fewer than 250 persons and the processing is occasional, unlikely to result in a risk to the rights and freedoms of data subjects, and does not include sensitive data (cf. Artt.9-10). The information that should be included in such records corresponds to a large extent with the information that already needs to be provided under Artt. 13-15. Hence, Article 30 constitutes an added layer of accountability, ensuring that controllers internally put in place the infrastructure to accommodate proper and meaningful transparency.

DATA PROTECTION IMPACT ASSESSMENT – Article 35 formalises the requirement to carry out a data protection impact assessment (DPIA) whenever ‘a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk to the rights and freedoms of natural persons’. Guidance on what exactly can be considered ‘high-risk’ processing is further clarified in Art.35(3),70 the recitals (i.e. Recitals 90-91) and WP29

68 These exemptions are also subject to a narrow interpretation, and their invokability can evolve over time. See: Recital 62 and Article 29 Working Party, “Guidelines on transparency under Regulation 2016/679”, 29–30.
69 Ibid., 29.
70 Listing three situations in particular that require a DPIA, i.e. in case of: ‘(a) a systematic and extensive evaluation of personal aspects relating to natural persons which is based on automated processing, including profiling, and on which decisions are based that produce legal effects concerning the natural person or similarly significantly affect the natural person; (b) processing on a large scale of special categories of data referred to in Article 9(1), or of personal data relating to criminal convictions and offences referred to in Article 10; or (c) a systematic monitoring of a publicly accessible area on a large scale.’


guidance.71 Synthesising different components, the EDPB lists nine criteria for evaluating whether processing is ‘likely to result in a high risk’: evaluation or scoring, automated decision-making with legal or similarly significant effect, systematic monitoring, sensitive data or data of a highly personal nature, large-scale processing, matching or combining datasets, vulnerability of data subjects, innovative use or application of new technological/organisational solutions, or when the processing itself prevents data subjects from exercising a right or using a service or contract.72 Article 35(7) explains that a DPIA will comprise at least (a) a systematic description of the envisaged processing; (b) an assessment of necessity and proportionality; (c) an assessment of the risks to data subjects’ rights and freedoms; and (d) the measures envisaged to address these risks. The GDPR also actively calls for considering codes of conduct (Art.35(8)) and consulting data subjects themselves (Art.35(9)). In light of the aforementioned, DPIAs certainly contribute to transparency and accountability overall. Indeed, a DPIA forces transparency, promotes active engagement with data subjects, and may reveal the need for (better) algorithmic transparency.

CODES OF CONDUCT AND CERTIFICATION – Articles 40-43 provide for the possibility of putting in place codes of conduct and certification mechanisms. These regulatory tools help distribute and mitigate the costs of interpreting and applying vague GDPR provisions in concrete scenarios. They can help formalise and standardise industry-wide compliance with transparency requirements, and with algorithmic transparency and accountability measures more broadly.

INVESTIGATORY/ENFORCEMENT POWERS – Finally, it is of course important to emphasise that compliance with the GDPR is subject to independent oversight by data protection authorities (Artt.51 et seq). Importantly, these authorities have a number of corrective powers, including forcing (more/better) transparency, banning further processing and issuing fines (Art.58(2)). Regarding the latter, the GDPR enables fines of up to 4% of total worldwide annual turnover when a controller infringes basic data protection principles (Artt.5, 6, 7, 9) and/or data subject rights (Artt.12-22), which include the core transparency and accountability provisions (Art.83(5)).
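For illustration only, the nine screening criteria listed above could be operationalised as a rough checklist along the following lines. This is a sketch, not a compliance tool: whether a DPIA is required remains a contextual legal assessment, and the ‘two or more criteria’ rule of thumb below is drawn from the WP29 DPIA guidelines.

```python
# Illustrative sketch only: a naive screening helper based on the nine WP29/EDPB
# criteria summarised above. Criterion names are paraphrased, not official labels.

DPIA_CRITERIA = [
    "evaluation_or_scoring",
    "automated_decision_with_significant_effect",
    "systematic_monitoring",
    "sensitive_or_highly_personal_data",
    "large_scale_processing",
    "matching_or_combining_datasets",
    "vulnerable_data_subjects",
    "innovative_use_of_technology",
    "prevents_exercise_of_rights_or_service",
]

def likely_requires_dpia(applicable_criteria):
    """WP29 suggests that meeting two or more criteria usually indicates high risk."""
    hits = [c for c in applicable_criteria if c in DPIA_CRITERIA]
    return len(hits) >= 2, hits

# A news recommender profiling users at scale would typically tick at least these:
print(likely_requires_dpia(["evaluation_or_scoring", "large_scale_processing"]))
```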

3.1.5 Transparency for Automated Decision-Making

CONTEXT – The so-called right to explanation is not mentioned explicitly in any of the GDPR’s articles. Instead, it is only referred to in Recital 71, which clarifies Article 22’s right not to be subject to automated decision-making. This contentious provision puts in place a general prohibition on decisions based solely on automated processing, including profiling, which produce legal effects concerning the data subject or similarly significantly affect him or her. Such processing is only allowed when the decision is (a) necessary for initiating/performing a contract between controller and data subject; (b) authorised by a dedicated law which includes suitable safeguards; or (c) based on the data subject’s explicit consent (Art.22(2)).73 Article 22(3) requires controllers to implement suitable measures to safeguard data subjects’ interests, rights and freedoms (at least including the ability to obtain human intervention, to express one’s views and to contest the decision).74 Importantly here, Article 15(1)h grants data subjects the right to obtain ‘meaningful information about the logic involved, as well as the significance and the envisaged consequences’ for the data subject of processing covered by Art.22. Put differently, the so-called right to explanation can be derived from a joint reading of Art.15(1)h, Art.22 and recital 71. A number of important questions remain, however.

71 Article 29 Working Party, “Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is ‘likely to result in a high risk’ for the purposes of Regulation 2016/679 (last revised 4 October 2017)”, Guidelines (Brussels: Article 29 Working Party, 4 April 2017).
72 Ibid., 9–11.
73 Even stricter requirements apply when the underlying personal data can be qualified as ‘sensitive data’ (cf. Article 9(1)).
74 Recital 71 adds the ability to obtain an explanation to the three safeguards already mentioned in Art.22(3).


REQUIREMENTS – As already mentioned in the previous paragraph, the right to explanation, as traditionally understood to be captured by Artt.15(1)h, 22 and recital 71, is subject to a number of requirements. In other words, the ‘right’ as derived from these provisions can only be invoked by data subjects with regard to specific processing operations: i.e. decisions (a) based solely on automated processing, including profiling; and (b) producing legal effects concerning the data subject or similarly significantly affecting him or her. As explained above, however, there are a number of satellite provisions in the GDPR that may further be relied on to obtain (pseudo-)explanations of data processing and thus achieve algorithmic transparency and accountability in practice. Having said that, it is worth briefly exploring the two requirements specified in Article 22(1), as this provision in combination with Art.15(1)h probably comes closest to what is popularly understood as the ‘right to an explanation’ of automated decision-making.

AUTOMATED DECISION-MAKING – First of all, Art.22(1) only covers decisions that are solely based on automated processing. While technology experts may sometimes argue that machine learning algorithms only produce output, on the basis of which another system or person may make a decision, the GDPR (and WP29) seems to understand this concept broadly.75 Recital 71 explains that such ‘decisions’ may include measures, and Article 22(1) specifies that profiling76 is included as well. The wording of Art.22(1) also implies there is no human involvement in the decision-making process. Having said that, merely putting a human in the loop without any actual influence (e.g. merely rubber-stamping automated decision-making) is generally considered not to be sufficient to exclude said processing from Art.22(1).77

PRODUCING LEGAL EFFECT, OR SIMILARLY AFFECTS – An important constraint on the scope of Art.22(1) is that it only captures automated decision-making producing legal effects or similarly significantly affecting the data subject. A legal effect would comprise the cancellation of a contract or the denial of a specific legal benefit, for example.78 Yet, even when the decision does not directly impact a legal right, it can still considerably affect the data subject and fall within Art.22(1)’s scope. In order to be considered as such, WP29 explains, the respective decision must ‘have the potential to: significantly affect the circumstances, behaviour or choices of the individuals concerned; have a prolonged or permanent impact on the data subject; or at its most extreme, lead to the exclusion or discrimination of individuals.’79 While targeted advertisements will often not fall within the scope of Art.22(1), they may, depending on: ‘the intrusiveness of the profiling process, including the tracking of individuals across different websites, devices and services; the expectations and wishes of the individuals concerned; the way the advert is delivered; or using knowledge of the vulnerabilities of the data subjects targeted.’80 Indeed, the profiling may create so-called ‘sensitive data’ (Art.9(1)) when making

75 Lilian Edwards and Michael Veale, “Slave to the Algorithm: Why a Right to an Explanation Is Probably Not the Remedy You Are Looking for”, Duke Law & Technology Review 16 (2017–2018): 46.
76 Profiling is defined in Art.4(4) as ‘any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person […]’. Profiling is generally understood to consist of three stages: (a) collection, (b) automated analysis to identify correlations, and (c) applying the correlations. See: Council of Europe, “Recommendation on the Protection of Individuals with Regard to Automatic Processing of Personal Data in the Context of Profiling CM/Rec(2010)13”; Article 29 Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”, 7.
77 Article 29 Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”; Lilian Edwards and Gianclaudio Malgieri, “The Algorithmic Battle” (4 June 2018), https://northumbria.hosted.panopto.com/Panopto/Pages/Viewer.aspx?id=382301cd-e2ed-4a28-8724-a8f500c6699e.
78 Article 29 Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”, 21.
79 Article 29 Working Party, 21–22. WP29 gives the examples of decisions affecting someone’s financial circumstances, or access to health services, employment opportunities or education.
80 Article 29 Working Party, 22.


inferences and/or combining data from different sources.81 It is also important to note that automated decision-making that has little impact on (some) individuals may still have a significant effect on specific groups or on society at large. Some have argued that the mere fact that automated decision-making affects fundamental rights (e.g. privacy, data protection, freedom of expression and information, or non-discrimination) may be sufficient to trigger Art.22(1).82 Still others argue that the right to obtain meaningful information in Art.15(1)h is not limited to Art.22(1) processing operations only.83 After all, Art.15(1)h says it applies at least to Art.22(1)-type processing.

WHAT INFORMATION/EXPLANATION – What information should be included in an ‘explanation’? The GDPR remains unclear as to what information exactly is covered under Art.15(1)h combined with Art.22(1). Article 15(1)h requires at the very least (a) acknowledgment of the existence of automated decision-making, and meaningful information about (b) the logic involved and (c) the significance and envisaged consequences.84

Importantly, controllers need to provide meaningful information (Art.15(1)h). Moreover, the rights of a data subject to ‘express his or her point of view and to contest the decision’ (Art.22(3)) would remain impracticable (or even impossible) without adequate transparency as to how the decision was made in the first place. This holds even more true when considering other data subject rights such as the rights to rectification (Art.16), erasure (Art.17), or to object (Art.21).85 Put differently, a clear ‘explanation’ of the specific processing operations an individual is subject to can be considered a sine qua non for meaningfully exercising any of the other data subject rights. Even if some scholars have questioned whether only generic, system-level information should be provided, WP29 confirmed that controllers are also required to find ‘simple ways to tell the data subject about the rationale behind, or the criteria relied on in reaching the decision’ (i.e. an individualised explanation).86 Complexity is no excuse for not providing the required information. And the ‘simplification’ should not come at the cost of comprehensiveness (i.e. the explanation should still allow the data subject to fully understand the reasons behind

81 Article 29 Working Party, 15. 82 Gianclaudio Malgieri and Giovanni Comandé, “Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation”, International Data Privacy Law, accessed 26 November 2017, https://doi.org/10.1093/idpl/ipx019. 83 Edwards and Veale, “Slave to the Algorithm”, 53. 84 According to MALGIERI, meaningful information about the logic involved (b) relates to the general system architecture, whereas meaningful information on the significance and envisaged consequences of the decision (c) relates to the implementation in context (purposes, levels and tasks of eventual human intervention, (non-)commercial nature of the decision, statistical impact, possibility to reconsider a decision, environment in which it is employed). Edwards and Malgieri, “The Algorithmic Battle”. 85 See similarly: Ausloos and Dewitte, “Shattering One-Way Mirrors – Data Subject Access Rights in Practice”; Margot E. Kaminski, “The Right to Explanation, Explained”, U of Colorado Law Legal Studies Research Paper (Colorado, USA: University of Colorado, 15 June 2018), 21, https://papers.ssrn.com/abstract=3196985. 86 Article 29 Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”, 25.

WP29 lists the following types of information as examples of clear and comprehensive ways to fulfil the transparency requirements:

• the categories of data that have been or will be used in the profiling or decision-making process;
• why these categories are considered pertinent;
• how any profile used in the automated decision-making process is built, including any statistics used in the analysis;
• why this profile is relevant to the automated decision-making process; and
• how it is used for a decision concerning the data subject.
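Purely by way of illustration, the sketch below shows one way a controller might structure these WP29-listed items for a single recommendation decision; the class, field names and example values are hypothetical and carry no normative weight.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class DecisionExplanation:
        """Illustrative container for the WP29-listed items (field names are ours, not legal terms of art)."""
        data_categories: List[str]       # categories of data that have been or will be used
        why_pertinent: str               # why these categories are considered pertinent
        how_profile_is_built: str        # how the profile is built, including any statistics used
        why_profile_is_relevant: str     # why the profile is relevant to the decision-making process
        how_used_for_this_decision: str  # how it is used for the decision concerning this data subject

    example = DecisionExplanation(
        data_categories=["articles read in the last 30 days", "approximate location"],
        why_pertinent="reading history and region correlate with topical interest",
        how_profile_is_built="topic scores averaged over recently read articles",
        why_profile_is_relevant="articles are ranked by similarity to the reader's topic scores",
        how_used_for_this_decision="this article ranked highest for the reader's 'EU politics' score",
    )
    print(example.how_used_for_this_decision)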


Information on the significance and envisaged consequences of the processing should be meaningful and understandable as well. This can be done by providing real and tangible examples of possible effects and a clarification as to how a decision was reached.88 Visualisation and interactive techniques may be particularly useful in order to effectuate adequate explanations (cf. Chapter II on HCI).89

BEYOND ARTICLE 22 – When discussing the ‘right to an explanation’, it is important not to over-focus on the Article 15(1)h / Article 22 tandem. An obligation to provide meaningful information about what, why and how personal data is being processed follows from many other provisions, as demonstrated above. Apart from the explicit requirements in Artt.13-14, such information is also implied as a precondition for effectively exercising data subject rights (Artt.16-21), as well as a component of the overarching fairness and transparency principles (Art.5(1)a, Recital 60). WP29 also emphasised that even if the respective processing does not fall within the Art.22(1) definition, it is good practice to provide the information required by Art.15(1)h nonetheless.90

3.1.6 News Recommender Systems and the GDPR

The previous pages identified and clarified the GDPR-hooks for algorithmic transparency and accountability in theory. How do these relate to news recommender systems in particular? Put differently, how can data subjects and/or other stakeholders effectively obtain some sort of explanation as to how news-content is ranked and presented on the basis of the GDPR?

COVERED OR NOT – Whenever news-content is arranged or presented on the basis of non-personal data (e.g. traditional editorial selection), the recommender system falls outside the GDPR’s scope. From the moment, however, that content is selected on an individual basis, the operations enter the GDPR’s scope. This may range from simple characteristics such as location, age and sex, to more complex categorisations based on a variety of information-sources (e.g. browsing behaviour, device characteristics, etc.).

EXPLANATION OBLIGATIONS – The general data protection principles mentioned above (notably: lawfulness, fairness and transparency, accuracy and accountability) at the very least already require a good faith openness about what, how and why news is recommended based on personal data processing. Put differently, not being upfront about news personalisation, or even concealing it, would already violate these overarching principles. Secondly, Articles 12-15 provide detailed lists and modalities of what information (explanation) should be provided at the very least. Thirdly, several satellite provisions further give shape as to how controllers may or should explain their news recommender algorithms. DPIAs (Art.35) and data protection by design and by default (Art.25) obligations may require news recommenders to develop clear visualisations of, and controls over, how content is in fact arranged and presented to the individual (e.g. through personalisation dashboards). Such tools may also be mainstreamed through codes of conduct and/or certification mechanisms.

NEWS RECOMMENDER ALGORITHMS AND ARTICLE 22(1) – The most straightforward ‘hook’ for a right to explanation vis-à-vis news recommenders would be the aforementioned Art.15(1)h and Art.22(1) tandem. Yet, one may wonder to what extent news recommender systems are covered by Art.22(1) in the first place. I.e. are they based solely on automated decision-making, including profiling, and do they produce legal effects or similarly significantly affect the data subject? Firstly, it will be important to establish whether profiling is in fact taking place. Does the news recommender gather information about (groups of) individuals and evaluate ‘their characteristics or behaviour patterns in order to place them into a certain category or group, in particular to analyse and/or make predictions about, for

87 Article 29 Working Party, 25. 88 Article 29 Working Party, 26. 89 Article 29 Working Party, “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679”, Annex 1. 90 Article 29 Working Party, 25.


example, their: […] interests; or likely behaviour’?91 If so, to what extent do the recommendations produce legal effects or similarly significantly affect data subjects? Whereas news recommendations will generally not produce legal effects with regard to their users, it is conceivable that they still significantly affect them. After all, WP29 explained that even ad targeting may fall under Art.22(1) depending on its intrusiveness, user expectations/wishes, delivery, or data subject vulnerabilities. This could be the case when news-recommendations are based on unexpected data-streams, cannot be turned off, and/or are not marked as being personalised. Moreover, one could also argue that personalisation of news-content may affect one’s political views or equal access to information, and as a result affect media pluralism and democratic values more broadly.92 As such, news recommender systems have the potential to affect fundamental rights and freedoms (and thus produce a ‘legal effect’ as required under Art.22).93

CONCRETE SUGGESTIONS – Several oft-recurring suggestions have been made as to how a right to explanation could be effectuated in practice. To start with, a layered approach does not only seem useful with regard to the standard privacy policy94, but also in accommodating ex post, individualised transparency. For example, data subjects may already see high-level indications of how and why they are seeing specific news content (in a specific order) when simply browsing (e.g. by adding tags like ‘you see Article B, because you liked Article A’; cf. the sketch below); interested data subjects should have the opportunity to dig deeper and see more details for each specific post. Secondly, data subjects should have an easy way to obtain explanations at any time.95 In other words, the right to explanation should be built into the design of the news recommender service. Given the dynamic nature of profiling and news recommendations, the information provided as part of the right to an explanation should be (near) real-time. Thirdly, a very instructive way of explaining how recommender systems work is to actively let users play around with the relevant parameters. In other words, controllers could present different sliders so that data subjects can learn in an experiential way by devising their own hypotheticals (e.g. what news do I see when I change my sex, location, political interest, etc.).96 This somewhat relates to the concepts of ‘interactive modelling’97 and ‘counterfactual explanations’98. Fourthly, explanations should not be limited to textual or oral information only.99 Interactive multimedia may often provide the most meaningful explanation of how a recommender system or profiling operates. Fifthly, one of the most straightforward ways of facilitating the previous suggestions is through ‘privacy dashboards’.

91 Article 29 Working Party, 7. 92 Natali Helberger, “Challenging Diversity—Social Media Platforms and a New Conception of Media Diversity”, in Digital Dominance. The Power of Google, Amazon, Facebook, and Apple, edited by Martin Moore and Damian Tambini, First (Oxford University Press, 2018). Arguing for more attention to and a reconfiguration of (a) the relationship between all relevant actors in the media-creation and distribution ecosystem, and (b) the relationship between individuals and the entities through which they consume media. 93 Argument made in: Malgieri and Comandé, “Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation”, 10–11. 94 Article 29 Working Party, “Guidelines on transparency under Regulation 2016/679”, 19. 95 Cf. Article 29 Working Party, 16–17. 96 Cf. Edwards and Veale, “Slave to the Algorithm”, 62–63. 97 Danielle Keats Citron and Frank Pasquale, “The Scored Society: Due Process for Automated Predictions”, Washington Law Review 89 (2014): 28–29. The authors describe how consumers could get a better understanding (or ‘explanation’) of their credit-scores, by giving them the chance to ‘see what happens to their score with different hypothetical alterations of their credit histories.’ 98 Sandra Wachter, Brent Mittelstadt, and Chris Russell, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology 31, no. 2 (2018). Put simply, counterfactual explanations consist of listing the relevant decision-parameters that – if altered – would produce a different outcome. In the authors’ own words, ‘they provide insight into which external facts could be different in order to arrive at a desired outcome.’ 99 Article 29 Working Party, “Guidelines on transparency under Regulation 2016/679”, 25.

Explanations should be:

• Layered
• Real-time
• Experiential
• Multimedia
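As a minimal, hypothetical sketch of the tag-style, layered explanation suggested above (‘you see Article B, because you liked Article A’), the snippet below derives such a tag from a toy item-to-item similarity table; the data and function names are invented for illustration and do not describe any existing recommender.

    from typing import Dict, List, Tuple

    # Toy item-to-item similarity scores (hypothetical values).
    item_similarity: Dict[Tuple[str, str], float] = {
        ("Article B", "Article A"): 0.82,
        ("Article B", "Article C"): 0.35,
        ("Article D", "Article A"): 0.10,
    }

    def explanation_tag(recommended: str, liked_articles: List[str]) -> str:
        """Return a short, high-level reason; a 'details' layer could expose the underlying scores."""
        scored = [(item_similarity.get((recommended, liked), 0.0), liked) for liked in liked_articles]
        best_score, best_liked = max(scored)
        if best_score == 0.0:
            return f"You see {recommended} because it is popular right now."
        return f"You see {recommended} because you liked {best_liked}."

    print(explanation_tag("Article B", ["Article A", "Article C"]))
    # -> You see Article B because you liked Article A.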


Privacy dashboards may provide useful interfaces for data subjects to explore how they are profiled and how/why news content is arranged and selected in a certain way. Incorporating such a dashboard into the architecture of the service also enables it to provide personalised, relevant and real-time explanations.100 Finally, it is worth referring to two recent academic works that provide further categorisations of the information to be provided through a ‘right to an explanation’. Firstly, EDWARDS and VEALE distinguish model-centric from subject-centric explanations and give examples of what information could be included in either of them.101 Whereas the former provide ‘broad information about a Machine Learning model which is not decision or input-data specific,’ the latter ‘are built on and around the basis of an input record.’102 Subject-centric explanations may be less suited to analyse procedural integrity, for example, but are better suited to give meaningful explanations as to actual decisions made.

Model-Centric Explanation

• Setup information: the intentions behind the modelling process, the family of model (neural network, random forest, ensemble combination), the parameters used to further specify it before training;
• Training metadata: summary statistics and qualitative descriptions of the input data used to train the model, the provenance of such data, and the output data or classifications being predicted in this model;
• Performance metrics: information on the model’s predictive skill on unseen data, including breakdowns such as success on specific salient subcategories of data;
• Estimated global logics: these are simplified, averaged, human-understandable forms of how inputs are turned into outputs, which by definition are not complete, else you could use them instead of the complex model to achieve the same results. These might include variable importance scores, rule extraction results, or sensitivity analysis;
• Process information: how the model was tested, trained, or screened for undesirable properties.

Subject-Centric Explanation

• Sensitivity-based subject-centric explanations: what changes in my input data would have made my decision turn out otherwise? (Where do I have to move in Figure 1 to be classified differently?)
• Case-based subject-centric explanations: which data records used to train this model are most similar to mine? (Who are the ticks and crosses nearest to me?)
• Demographic-based subject-centric explanations: what are the characteristics of individuals who received similar treatment to me? (Who, more broadly, was similarly classified?)
• Performance-based subject-centric explanations: how confident are you of my outcome? Are individuals similar to me classified erroneously more or less often than average? (How many ticks and crosses nearer me were misclassified during training? Am I a difficult case?)

100 Cf. Article 29 Working Party, 20. 101 Edwards and Veale, “Slave to the Algorithm”, 55 et seq. 102 Edwards and Veale, 55 et seq. Importantly, subject-centric explanations can be provided ex ante or ex post. As explained by Edwards and Veale, ‘Computer scientists would refer to this type of explanation as 'local', as the explanation is restricted to the region surrounding a set of data.’


Table 5: Information lists for Model- and Subject-Centric Explanation (Edwards & Veale, 2018)
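To make the sensitivity-based (counterfactual) variant concrete, the toy sketch below varies one profile attribute at a time on an invented scoring rule and reports which single change would flip the outcome; the attributes, weights and threshold are hypothetical and only illustrate the type of answer such an explanation gives.

    # Toy sketch of a sensitivity-based subject-centric explanation:
    # vary one profile attribute at a time and report which change would
    # alter the outcome. The scoring rule and attributes are hypothetical.
    def recommend_politics_section(profile: dict) -> bool:
        score = 0.0
        score += 0.6 if profile["follows_politics"] else 0.0
        score += 0.3 if profile["region"] == "Brussels" else 0.0
        score += 0.2 if profile["age"] >= 25 else 0.0
        return score >= 0.5  # True -> politics section is shown first

    def counterfactuals(profile: dict, alternatives: dict) -> list:
        """List single-attribute changes that would flip the current outcome."""
        baseline = recommend_politics_section(profile)
        flips = []
        for attr, values in alternatives.items():
            for value in values:
                changed = {**profile, attr: value}
                if recommend_politics_section(changed) != baseline:
                    flips.append((attr, profile[attr], value))
        return flips

    user = {"follows_politics": True, "region": "Antwerp", "age": 40}
    print(recommend_politics_section(user))  # True: politics shown first
    print(counterfactuals(user, {"follows_politics": [False], "region": ["Brussels"]}))
    # e.g. [('follows_politics', True, False)]: turning off the politics interest would flip the outcome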

Similarly to the previous authors, MALGIERI and COMANDÉ also make a distinction between two key elements of automated decision-making requiring transparency: architecture and implementation (cf. Table 6). Whereas the former ‘represents algorithm functionality and the “logic involved” in the automated processing,’ the latter represents ‘the overall decision-making process and thus the context in which the architecture works, i.e. the significance of a decision-making and its envisaged consequences.’103 Properly accommodating the right to an explanation would require a two-step test, providing transparency on both elements. Building on this two-pronged test, the authors also developed a non-exhaustive questionnaire (cf. Table 7).

Architecture Auditing

• The creation of the algorithm: how it was designed, which data have fed the machine-learning algorithm, which categories have been used, whether there are some feedback or correction mechanisms.
• The functioning/use of the algorithm: how the algorithm works, what (personal) data it needs, if any, which scoring parameters it has or, e.g., the weighting of features.
• The expected outputs of the algorithm: what the algorithm can determine/decide, which kind of output is provided (predictive analytics, prescriptive analytics, inferences, etc.).

Implementation Auditing

• Purposes for which algorithmic decision-making is employed (credit scoring, work assessment, behavioural advertisement).
• Level and tasks of eventual human intervention (setting the score, the parameters, etc.).
• Commercial/non-commercial nature of that decision-making.
• Statistical impact on past customers.
• Possibility to reconsider a decision after the algorithm has released its output.
• Environment in which it is deployed.

Table 6: Two-step auditing test (Malgieri & Comandé, 2017)
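A possible, non-prescriptive way to operationalise this two-step test is to keep a structured record for each element. The sketch below does so for a hypothetical news recommender; the field names and example values are of our own invention and are not taken from the authors.

    from dataclasses import dataclass

    @dataclass
    class ArchitectureRecord:
        design: str            # how it was designed, training data, categories, feedback loops
        functioning: str       # how it works, personal data needed, scoring parameters/weights
        expected_outputs: str  # what it can decide and which kind of output it produces

    @dataclass
    class ImplementationRecord:
        purposes: str            # e.g. personalised news ranking
        human_intervention: str  # level and tasks of human oversight
        commercial_nature: str   # commercial / non-commercial nature of the decision-making
        statistical_impact: str  # observed impact on past users
        reconsideration: str     # whether and how a decision can be reconsidered
        environment: str         # environment in which it is deployed

    audit = (
        ArchitectureRecord(
            design="collaborative filtering trained on click logs; weekly retraining (hypothetical)",
            functioning="ranks articles by similarity between article topics and the user's reading history",
            expected_outputs="an ordered list of articles per user session",
        ),
        ImplementationRecord(
            purposes="news personalisation on the homepage",
            human_intervention="editors can pin or demote items; no per-user review",
            commercial_nature="non-commercial ranking, with separately labelled promoted items",
            statistical_impact="no measured impact on access to public-interest reporting (hypothetical)",
            reconsideration="users can reset their profile and disable personalisation",
            environment="consumer-facing news website and app",
        ),
    )
    print(audit[0].functioning)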

Architecture Auditing

Origin
• What is the background of those who designed the algorithm? For instance, was it a group composed of only ICT experts, or an interdisciplinary group composed of social scientists, lawyers and ICT experts?
• In which specific environment has it been developed? For instance, within university research, a private business, a public administration (within national healthcare research, national defence research, etc.)?
• Did the design of the system only have civil objectives? Were potential dual uses assessed? How?
• Was it designed envisioning secondary uses? If any, which ones?
• What was the initial purpose of the algorithm when it was designed? Is it a generic decision-making algorithm readapted to the finalities of the data controller, or was it conceived from the outset in order to perform the specific role that it performs?
• Was it internally developed or outsourced?
• Is it a simple or a complex algorithm? Is it partly/totally auto-generated with machine-learning features?
• Can the algorithm modify its own code? Are these modifications predictable and/or verifiable? Are they traceable? If the previous two questions were answered positively, with which technology?
• When it was designed, which data were used to ‘feed’ the algorithm during its training?

103 Malgieri and Comandé, “Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation”, 16 et seq.


• Does the algorithm, when operating, use external data and, if so, does it select its own datasets? If not, how are the datasets selected?
• Did the algorithm design/training phase use, as much as possible, neutral statistical data, expressly avoiding datasets containing non-permitted biases?

Use/operation
• Which personal data are used to perform the specific automated processing at issue?
• Are the personal data used collected from other sources? If not, how are the dataset(s) selected?
• What methods, if any, are used to enrich data?
• Which categories are considered relevant in the profiling? Are they criteria expressly regulated by law (e.g. gender, age, ethnicity, sexual orientation, health conditions, economic situation, etc.)? Are they new patterns and/or individual, permanent or transient, traits?
• Does it serve directly (explicitly) or indirectly (e.g. by communicating to data brokers, subsidiaries or parent organisations) secondary uses of the datasets?

Output
• What is the output of the algorithm? A final decision, a score, a profile?
• Are the outputs produced in an intelligible and easily accessible form, using clear and plain language?
• Does the algorithm produce data analytics? If yes, what kind of analytics: descriptive, diagnostic, predictive or prescriptive?
• Does it provide a feedback mechanism in order to check the quality of the output produced? Is a feedback mechanism provided to check the quality of the algorithm if auto-generated?
• Does the algorithm interoperate with other automated processing?


Implementation Auditing

• For what purposes is the algorithmic decision-making process programmed to be used (e.g. credit scoring, work assessment, behavioural advertisements, other uses)? Does it depart from the original designed scope?
• Is any human intervention required in the decision-making process? At which level? With what powers of autonomous assessment? What is the level of human intervention adopted in the class of decisions to which the decision possibly questioned belongs? What are the human criteria to ‘override’ the automated decision-making?
• Which tasks do humans perform in the overall decision-making process? E.g., do they periodically recombine scores with contractual conditions? Do they pre-set score parameters or consumer categories? Do they run periodical quality checks?
• What are the possible effects on data subjects? Do they encompass legally recognised rights or freedoms? Do they impact the individual decision-making context (e.g. higher/lower prices of goods, services, etc.? Better/worse contractual conditions? And for the decision at stake?)
• Within the implementation, does the data controller have the possibility to reconsider a decision after the algorithm has released its output? Under what objective circumstances? Are those circumstances made clear to data subjects ex ante, in the notice? Are they expressed in an intelligible and easily accessible form, using clear and plain language ex post?
• Considering past data subjects, can the data controller show that the outputs of that decision-making were not illegitimately discriminatory in statistical terms?
• In case of scoring, what was the score of the concerned data subjects in the decision at stake?
• How is GDPR compliance in the implementation assessed?

Table 7: Questionnaire for ensuring legible transparency (Malgieri & Comandé, 2017)

3.2 ATAP and Consumer Protection Law

Apart from data protection law, consumer protection law may also play an important role in ensuring algorithmic transparency and accountability in practice. News recommender systems (whether being managed by first party content providers, news-aggregators or social media) can affect consumers’ ability to freely exercise their choices. Various normative frameworks protecting European consumers against information/power asymmetries exist in EU law. This section is aimed at identifying relevant legal hooks that could be used to foster algorithmic transparency and accountability in the context of news-recommender systems, in the following instruments:

(a) eCommerce Directive (Directive 2000/31),104
(b) Unfair Commercial Practices Directive (Directive 2005/29),105
(c) Unfair Terms in Consumer Contracts Directive (Directive 93/13),106 and

104 Directive 2000/31 of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market (Directive on electronic commerce), [2000] O.J.E.U. L178/1. 105 Directive 2005/29 of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market and amending Council Directive 84/450, Directives 97/7, 98/27 and 2002/65 of the European Parliament and of the Council and Regulation 2006/2004 of the European Parliament and of the Council (Unfair Commercial Practices Directive), [2005] O.J.E.U. L149/22. 106 Council Directive 93/13 of 5 April 1993 on unfair terms in consumer contracts, [1993] O.J.E.U. L95/29.


(d) Consumer Rights Directive (Directive 2011/83).107

As underlined by the European Data Protection Supervisor in its preliminary Opinion on privacy and competitiveness in the age of big data,108 data protection is not the only EU legislative framework designed to protect and empower individuals in their relationship with online information services. Consumer protection law, which aims at mitigating power and information asymmetries between consumers and businesses, also grants individuals the right to seek remedies against unfair commercial practices.109 More and more, ‘free’ services indeed require payment in the form of personal information. While personal data are not entirely assimilated to tradeable commodities,110 this nonetheless paves the way for the application of traditional consumer protection law provisions to an online environment. More specifically, consumer protection law could serve as a legal basis to claim and enforce algorithmic transparency and accountability in the field of recommender systems and, in casu, of their effects on news distribution.

3.2.1 eCommerce Directive (Directive 2000/31)

a) Scope of application

The eCommerce Directive applies to ‘information society services’ (ISS), a concept defined by Article 2(a) as ‘services within the meaning of Article 1(2) of Directive 98/34’. In turn, this provision defines an information society service as ‘any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services’. Recital 18 eCommerce Directive clarifies that it also encompasses ‘services which are not remunerated by those who receive them, such as those offering online information or commercial communications, or those providing tools allowing for search, access and retrieval of data’. The latter stipulation covers most online platforms relying on a business model where remuneration comes from third-party advertisers.111 As a consequence, the three types of actors mentioned above, i.e. first party content providers, news aggregators and social media platforms, are likely to be governed by the eCommerce Directive, whose relevant legal provisions are analysed hereunder.

b) Specific rules on commercial communications

The eCommerce Directive contains specific identification and transparency provisions regarding ‘commercial communications’. These are defined by Article 2(f) as ‘any form of communication

107 Directive 2011/83 of the European Parliament and of the Council of 25 October 2011 on consumer rights, amending Council Directive 93/13 and Directive 1999/44 of the European Parliament and of the Council and repealing Council Directive 85/577 and Directive 97/7 of the European Parliament and of the Council, [2011] O.J.E.U. L304/64. 108 EDPS, “Preliminary Opinion of the European Data Protection Supervisor Privacy and competitiveness in the age of big data: The interplay between data protection, competition law and consumer protection in the Digital Economy”, March 2014, https://edps.europa.eu/sites/edp/files/publication/14-03-26_competitition_law_big_data_en.pdf. See notably point 3.3: ‘Consumer protection’. 109 Natali Helberger, Frederik J Zuiderveen Borgesius and Agustin Reyna, ‘The Perfect Match? A Closer Look at the Relationship between EU Consumer Law and Data Protection Law’ (2017) 54 Common Market Law Review 1427. 110 Axel Metzger, ‘Data as Counter-Performance: What Rights and Duties Do Parties Have?’ (2017) 8 JIPITEC <http://www.jipitec.eu/issues/jipitec-8-1-2017/4528>; Damian Clifford, ‘The Digital Content Directive, the Value of Data and the Privacy and Data Protection Framework’ (CITIP blog) <https://www.law.kuleuven.be/citip/blog/the-digital-content-directive-the-value-of-data-and-the-privacy-and-data-protection-framework/> accessed 10 August 2018. 111 See on that specific point: Ingrid Lambrecht, Valerie Verdoodt, and Jasper Bellon, “Platforms and commercial communications aimed at children: a playground under legislative reform?”, International Review of Law, Computers & Technology 32, no. 1 (2 January 2018): 62, https://doi.org/10.1080/13600869.2018.1443378.


designed to promote, directly or indirectly, the goods, services or image of a company, organisation or person pursuing a commercial, industrial or craft activity or exercising a regulated profession’.112 While news content as such is not normally deemed a commercial communication, it can be argued that the commercial nature of a user’s news feed and the promoted content therein do require a closer look at these provisions. Indeed, it is not uncommon for users to see promoted news content in their social media feeds.

Article 6 of the eCommerce Directive provides that commercial communications which are part of, or constitute, an information society service must be clearly identifiable as such, while also containing information as to the natural or legal person on whose behalf the commercial communication is made. The same goes for any promotional offers, competitions or games and their conditions for participation. In other words, when content providers, aggregators and social media platforms display communications of a commercial nature to their users, they have an obligation to make that commercial nature clear to the average user as well as provide information on the natural or legal person who paid for it.

At first sight, the identification and transparency requirements applicable to commercial communications seem to be of marginal importance in promoting the transparency of news recommender systems. At best, Article 6 eCommerce Directive allows users to distinguish commercial content from other types of information, which in turn provides an explanation of the reason why such a communication is being displayed (i.e. someone paid for it). It does not, however, require platforms to provide users with the reasons why they are targeted with a specific advertisement.113 In short, Article 6 eCommerce Directive clarifies the distinction between commercial and editorial content on a given platform, but is of limited relevance when it comes to providing explanations of the exact reasons why other types of content are displayed in a certain way.

The case of content providers paying aggregators and/or social media platforms to promote their own news articles raises interesting issues. In that case, the question arises whether the paid-for article itself should be considered a ‘commercial communication’ under Article 2(f) eCommerce Directive, therefore triggering the application of the identification and transparency requirements. In such a case, both the social media platform/aggregator and the content provider would have to comply with Article 6 eCommerce Directive. In other words, users should be presented with clear information on the nature of the communication (i.e. a promoted news article) and, per se, one of the reasons why the said article is being displayed (i.e. the publisher – which should also be clearly identified – has paid the platform to promote its content). While clear communication cannot reasonably be considered a sound legal basis for algorithmic transparency – mainly because it does not entitle users to gain insight into the other reasons why some specific news articles are displayed – it nonetheless provides transparency regarding the commercial nature of users’ news feeds, containing both organically generated and promoted content.

112 In the specific context of news curation, one must bear in mind that a ‘commercial communication’ in the sense of Article 2(f) eCommerce Directive might be subject to additional requirements when promoted in a video format. In that case, not only the lex generalis provisions of the eCommerce Directive, but also the lex specialis regime of the AVMS Directive – currently undergoing major changes – will apply. See: Damian Clifford and Valerie Verdoodt, “Integrative Advertising: The Marketing ‘dark Side’ or Merely the Emperor’s New Clothes?”, European Journal of Law and Technology 8, no. 1 (4 March 2017): 5, http://ejlt.org/article/view/547. See also the media law analysis elsewhere in this report. 113 This gap may be filled by more far-reaching transparency requirements under data protection law (cf. supra).


3.2.2 Unfair Commercial Practices Directive (Directive 2005/29)

a) Scope of application

The Unfair Commercial Practices Directive (UCPD) applies to unfair business-to-consumer commercial practices taking place before, during and after a commercial transaction in relation to a product. ‘Business-to-consumer commercial practices’ are defined by Article 2(d) as ‘any act, omission, course of conduct or representation, commercial communication including advertising and marketing, by a trader, directly connected with the promotion, sale or supply of a product to consumers’. In turn, Article 2(a) and (b) define the notions of ‘consumer’ and ‘trader’ as actors respectively acting outside their trade, business, craft or profession and those acting within that scope. Given the broad wording of these notions, all three types of actors mentioned above are likely to fall under the definition of a trader when offering tailor-made newsfeeds to consumers.

b) Unfair commercial practices

The UCPD aims at promoting fairness in business-to-consumer commercial practices so as to safeguard individuals’ autonomous decision-making. In that sense, it shares a key principle with data protection law, the goal of which is to ensure the fair collection and processing of personal information. As a result, assessing the fairness of a trader’s commercial practices in light of the UCPD might be an indirect way of appraising the fairness of data processing activities such as the training and use of personalised news curation algorithms.114 This paves the way for the enforcement of data protection issues through consumer protection remedies. While such a conception might appear innovative from a European perspective, it is how privacy and data protection issues have long been addressed in the US.115

The UCPD relies on a three-tiered approach when it comes to assessing the unfairness of a given commercial practice. First, Article 5(2) prohibits commercial practices that are contrary to the requirements of professional diligence and materially distort consumers’ economic behaviour. Second, Articles 6 to 9 further identify and define misleading and aggressive commercial practices. Third, Annex I contains a blacklist of commercial practices which shall, in all circumstances, be regarded as unfair. In order to determine whether a practice is prohibited, the Directive must be read backwards, starting with the blacklist of Annex I all the way up to the general rule.116 Each layer contains rules that might be relevant in the context of recommender systems applied to news curation.

c) Blacklisted commercial practices

Most social media platforms and news aggregators advertise themselves as ‘free’. While it is true that no monetary form of payment is required to enjoy the benefits of those services, they still rely on the

114 Inge Graef, Damian Clifford, and Peggy Valcke, “Fairness and Enforcement: Bridging Competition, Data Protection and Consumer Law”, n.d., 36; Damian Clifford and Jef Ausloos, “Data Protection and the Role of Fairness”, CiTiP Working Paper Series, 2017, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3013139; Damian Clifford, Inge Graef, and Peggy Valcke, “Pre-Formulated Declarations of Data Subject Consent - Citizen-Consumer Empowerment and the Alignment of Data, Consumer and Competition Law Protections”, German Law Journal Forthcoming (2018): 41; Dan Jerker B. Svantesson, “Enter the Quagmire – the Complicated Relationship between Data Protection Law and Consumer Protection Law”, Computer Law & Security Review, September 2017, https://doi.org/10.1016/j.clsr.2017.08.003; BEUC, “Data collection, targeting and profiling of consumers online - BEUC discussion paper”, 15 February 2010, http://www.beuc.eu/publications/2010-00101-01-e.pdf. 115 N. van Eijk, C. Jay Hoofnagle, and E. Kannekens, “Unfair Commercial Practices: A Complementary Approach to Privacy Protection”, European Data Protection Law Review 3, no. 3 (2017): 325–37, https://doi.org/10.21552/edpl/2017/3/7. 116 Ibid, 333.


collection and processing of their users’ data as a counterpart. This might threaten consumers’ ability to freely exercise their choices. Annex I, point 20 of the UCPD considers misleading in all circumstances the practice of ‘describing a product as ‘gratis’, ‘free’, ‘without charge’ or similar if the consumer has to pay anything other than the unavoidable cost of responding to the commercial practice and collecting or paying for delivery of the item’. The gathering of users’ personal data in exchange for the provision of the service while advertising it as ‘free’ could therefore be considered a blacklisted practice under Annex I, point 20 of the UCPD. At first sight, this seems to be in line with the position of the European Commission, which considers that ‘the marketing of such products as ‘free’ without telling consumers how their preferences, personal data and user-generated content are going to be used could, in some circumstances, be considered a misleading practice’.117 In other words, an incomplete or inaccurate privacy policy could lead to consumer protection remedies where the extent of the collection and processing operations is not clearly stipulated. In turn, consumer protection could serve as a potential basis to remedy the lack of transparency in the use of recommender systems for news curation.

The foregoing, however, must be assessed in light of the specific circumstances surrounding the provision of the service. For instance, despite having accepted that issues relating to the processing of personal data come within the scope of consumer protection, the Berlin Regional Court found that Facebook could position itself as a ‘free’ service since the applicability of Annex I, point 20 UCPD requires the payment of a tangible price.118 In short, the German court refused to recognise the economic value of personal data within the meaning of the UCPD. The judgment is in line with the proposed Digital Content Directive, which distinguishes the notions of ‘price’ as the ‘money that is due in exchange for digital content supplied’ on the one hand, and ‘counter-performance other than money in the form of personal data or any other data’ on the other.119 The proposed text therefore suggests that providing personal data in exchange for a service does not amount to paying a ‘price’ within the meaning of its Article 2(6), but rather to providing a ‘counterpart’.120 However, one could underline that Annex I, point 20 UCPD does not rely on the notion of ‘price’. Rather, it refers to the notions of ‘payment’ and ‘cost’, which appear broad enough to encompass any form of (non-)monetary payment, such as personal data. The interpretation thereof, however, is a matter for national courts and tribunals to agree on.

d) Misleading/aggressive commercial practices

Under data protection law,121 controllers have an obligation to provide data subjects with the relevant details surrounding the collection and processing of their personal data. More specifically, Articles 13 and 14 GDPR meticulously list the pieces of information which should be made available to them (cf. supra). In practice, compliance with those provisions generally requires the drafting and publishing of a privacy policy. Consumer protection law, on the other hand, provides an extra layer of flexibility when it comes to assessing compliance with information requirements.122

117 European Commission, “Commission staff working document - Guidance on the implementation/application of Directive 2005/29 on unfair commercial practices”, 25 May 2016, 89, https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52016SC0163&from=EN. 118 Graef, Clifford, and Valcke, “Fairness and Enforcement: Bridging Competition, Data Protection and Consumer Law”, 14. 119 Proposal for a Directive of the European Parliament and of the Council on certain aspects concerning contracts for the supply of digital content, 9 December 2015, 2015/0287(COD). 120 For more details, see: Metzger, “Data as Counter-Performance”. 121 See Articles 13 and 14 GDPR. Cf. supra. 122 Helberger, Zuiderveen Borgesius, and Reyna, “The Perfect Match?”, 1436.


e) The existence of a misleading/aggressive commercial practice

According to Article 6(1) UCPD on misleading actions, ‘a commercial practice shall be regarded as misleading if it contains false information and is therefore untruthful or in any way, including overall presentation, deceives or is likely to deceive the average consumer’. More specifically, the false and/or deceptive information must relate to one of the elements listed in Article 6(1)a-g, among which (b) the ‘main characteristics of the product’ and (c) ‘the extent of the trader’s commitments, the motives for the commercial practice and the nature of the sales process’. In cases where controllers fail to provide data subjects with exhaustive and adequate information on the scope and purposes of their processing activities – including, for instance, the use of personal data to tailor their news feeds – these provisions could serve as an additional legal basis to address non-compliance with Articles 13 and 14 GDPR. In other words, a breach of information requirements could also be seen as an unfair commercial practice, which would pave the way for the application of contractual law remedies under national law.

In the same vein, Article 7(1)-(2) UCPD on misleading omissions specifies that a commercial practice shall also be regarded as misleading if it omits, hides, or provides in an unclear, unintelligible, ambiguous or untimely manner material information that the average consumer needs to take an informed transactional decision. The above-mentioned reasoning could therefore be transposed to situations where controllers, while not providing false or deceptive information, nonetheless try to dissimulate details about the processing operations such as, for instance, the impact of the personal data gathering on the personalisation of content.

Article 8, finally, prohibits aggressive commercial practices, i.e. practices which significantly impair or are likely to significantly impair the average consumer’s freedom of choice or conduct with regard to a product, whether by coercion or undue influence. While it is highly unlikely that harassment and coercion (especially the use of physical force) would be applicable in the present context, there might still be undue influence123 in light of the asymmetric power relationships between businesses and consumers.124 In that sense, the European Consumer Organisation (BEUC) has already underlined that the repetitive aspect of behavioural advertising may put pressure on consumers, while the selection of advertisements based on presumed consumer interests may prevent the display of other advertisements, thereby reducing consumers’ choices.125 The question therefore arises whether the above reasoning could be extended to the use of algorithms that offer users a tailor-made experience with regard to any type of content, regardless of its commercial or editorial nature.126 In other words, can the extreme personalisation of services be regarded as a form of undue influence within the meaning of Article 8 UCPD (and in light of the UCPD’s objective to safeguard individuals’ autonomous decision-making process)?127 There is, however, no legal certainty as to the outcome of such a reasoning. As underlined by MALGIERI and COMANDÉ, a parallel might nonetheless be drawn between Article 8 UCPD and Article 22 GDPR, which grants data subjects the right not to be subject to a decision based solely on automated processing which produces legal effects or similarly significantly affects them (cf.

123 Undue influence is defined by Article 2(j) UCPD as the act of ‘exploiting a position of power in relation to the consumer so as to apply pressure, even without using or threatening to use physical force, in a way which significantly limits the consumer’s ability to make an informed decision’. 124 Clifford and Verdoodt, “Integrative Advertising”, 16. 125 BEUC, “Data collection, targeting and profiling of consumers online - BEUC discussion paper”, 6. 126 Helberger, Zuiderveen Borgesius, and Reyna, “The Perfect Match?”, 1454. 127 Geraint Howells, Hans-W. Micklitz, and Thomas Wilhelmsson, “Towards a Better Understanding of Unfair Commercial Practices”, International Journal of Law and Management 51, no. 2 (20 March 2009): 69–90, https://doi.org/10.1108/17542430910947103.


supra).128 Both provisions indeed aim at granting the weaker party a certain degree of protection against pervasive marketing manipulations, by either empowering data subjects with a specific prerogative or prohibiting practices impairing consumers’ freedom of choice. Interestingly, each regulatory framework also uses similar wording when it comes to describing the impact of the decision/practice on the affected party. While these approaches are not entirely substitutable, they rely on comparable rationales. In practice, they are also likely to result in similar outcomes. As such, one could reasonably argue that Article 8 UCPD could serve as an additional legal basis to challenge the lawfulness of exploiting consumers’ vulnerability through algorithmic decision-making processes.

f) The (likelihood of a) transactional decision

In order for a commercial practice to be considered as misleading or aggressive within the meaning of Articles 6 to 8 UCPD, it must cause or be likely to cause a transactional decision the average consumer would not have taken otherwise. In other words, the qualification of a practice as misleading or aggressive is only the first step of the unfairness assessment, which must be further completed by an examination of the consequence of the said practice on the consumer’s behaviour. Article 2(k) UCPD defines a ‘transactional decision’ as ‘any decision taken by a consumer concerning whether, how and on what terms to purchase, make payment in whole or in part for, retain or dispose of a product or to exercise a contractual right in relation to the product, whether the consumer decides to act or to refrain from acting’.

This notion has been interpreted by the European Commission as encompassing the decision to enter into a contractual relationship.129 Extrapolating from the above, it could be argued that consenting to the processing of one’s personal data amounts to a transactional decision.130 Such an interpretation would pave the way for assessing the fairness of the conditions under which users are required to agree to the collection and use of their personal information, notably the completeness and accuracy of the information provided about the personalisation of their newsfeed (i.e. the actual data that are used to tailor their experience, the existence and extent of any combination of multiple datasets, the retention period, etc.). If it can be demonstrated that, had the user been provided with all the necessary information, he/she would not have consented to the collection and processing of his/her personal data, then such a breach of information duties could reasonably be considered a misleading commercial practice under Article 6 UCPD. This would be another way of enforcing the ‘informed consent’ requirement stemming from Article 7 GDPR.131 The question then arises as to whether the ‘informed consent’ threshold is high enough to cover what is generally understood as a form of algorithmic transparency within the meaning of the present study.

In its recent guidelines on consent, the Article 29 Working Party requires the disclosure of numerous details in order for consent to be valid, among which information about the use of data for automated

128 Gianclaudio Malgieri and Giovanni Comandé, “Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation”, International Data Privacy Law 7, no. 4 (1 November 2017): 243–65, https://doi.org/10.1093/idpl/ipx019. 129 European Commission, “Commission staff working document - Guidance on the implementation/application of Directive 2005/29 on unfair commercial practices”, 33–37. See also: C.J.E.U., Case C-281/12 Trento Sviluppo srl, Centrale Adriatica Soc. Coop. Arl v Autorità Garante della Concorrenza e del Mercato, 19 December 2013, paragraphs 35 sqq. 130 This path is suggested by Helberger, Zuiderveen Borgesius, and Reyna, “The Perfect Match?”, 1454. 131 European Commission, “Commission staff working document - Guidance on the implementation/application of Directive 2005/29 on unfair commercial practices”, 22. See also van Eijk, Jay Hoofnagle, and Kannekens, “Unfair Commercial Practices”, 334. The authors underline that ‘Article 7 UCPD has a strong overlap with Articles 13 (‘Information to be provided where personal data are collected from the data subject’) and 14 (‘Information to be provided where personal data have not been obtained from the data subject’) GDPR dealing with informing data subjects’.


decision-making.132 This information requirement also follows from the general duty of transparency stemming from Articles 13(2)f and 14(2)g. While ultimately it all boils down to how national courts will interpret transparency requirements under the GDPR, this may certainly prove a meaningful benchmark for assessing the misleading nature of a given commercial practice.

g) General rule

Article 5(2) contains the general rule according to which a commercial practice shall be regarded as unfair if (a) it is contrary to the requirements of professional diligence and (b) it materially distorts or is likely to materially distort the economic behaviour, with regard to the product, of the average consumer whom it reaches or to whom it is addressed, or of the average member of the group when a commercial practice is directed to a particular group of consumers. Compliance with data protection law, notably the requirement of transparency and the provisions on the right to explanation, could be regarded as being part of the professional diligence that traders owe consumers. In other words, failing to abide by the rules laid down in the GDPR could be seen as an unfair commercial practice if such a breach results in consumers not being able to take informed transactional decisions.133 Failure to provide data subjects with all the relevant information surrounding the scope of the processing activities could therefore trigger the applicability of the UCPD, on top of the remedies already foreseen in the GDPR itself. As a result, the reasoning developed under ‘d) Misleading/aggressive commercial practices’ could be fully transposed to the general rule laid down in Article 5(2) UCPD and serve as an additional basis on which to enforce the transparency of recommender systems applied to news curation.134

3.2.3 Unfair Terms in Consumer Contracts Directive (Directive 93/13)

a) Scope of application

The Unfair Terms in Consumer Contracts Directive (UTCCD) applies to contracts concluded between a seller or supplier and a consumer. Article 2(b) defines the consumer as ‘any natural person who is acting for purposes which are outside his trade, business or profession’. Article 2(c), on the other hand, defines the seller or supplier as ‘any natural or legal person who is acting for purposes relating to his trade, business or profession, whether publicly owned or privately owned’. Most online first party content providers, news aggregators and social media platforms are likely to fall within the definition of seller/supplier when providing their services, while users of these platforms will most likely be considered as consumers.135 Since all three types of actors often condition the use of their services on

132 Article 29 Working Party, Guidelines on consent under Regulation 2016/679, 28 November 2017 (last revised on 10 April 2018) WP259rev.01 <http://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=623051> accessed 21 August 2018. 133 The requirement for a commercial practice to ‘materially distort the economic behaviour of consumers’ is identical to the requirement of leading a consumer to a ‘transactional decision’. See European Commission, “Commission staff working document - Guidance on the implementation/application of Directive 2005/29 on unfair commercial practices”, 35: ‘This is the same assessment that is to be made on the basis of Articles 6, 7 and 8. It follows that although the wording of Article 5(2) is different from the wording of the latter Articles, the requirement in relation to the material distortion of the consumer’s behaviour is the same’. Therefore, a similar conclusion can be drawn when it comes to consenting to the processing of one’s personal data (see supra). 134 On that ground, the Berlin Court of Appeals, later confirmed by the Federal Supreme Court, found that the ‘Find Friends’ feature of Facebook violated the requirement to obtain the user’s consent to the data collection after having clicked the button enabling that functionality. That behaviour was considered by the German courts as an infringement of the national legislation implementing the UCPD since it amounted to a violation of a ‘statutory provision that is also intended to regulate market behaviour in the interest of market participants’. See: Kammergericht Berlin, Judgment of 24 Jan. 2014, 5U 42/12; Bundesgerichtshof, Judgment of 14 Jan. 2016, I ZR 65/14. This is developed by: Helberger, Zuiderveen Borgesius, and Reyna, “The Perfect Match?”, 1455. 135 See also: scope of application of the UCPD.


the acceptance of a series of rules – which, in most cases, can be assimilated to contractual terms – the fairness of those provisions could be scrutinised in light of the UTCCD.

b) Unfair terms

The UTCCD aims at protecting consumers against unfair clauses included in pre-formulated contracts.136 In that sense, Article 3(1) specifies that ‘a contractual term which has not been individually negotiated shall be regarded as unfair if, contrary to the requirement of good faith, it causes a significant imbalance in the parties’ rights and obligations arising under the contract, to the detriment of the consumer’. While initially enacted to address power asymmetries in sales or service contracts, the UTCCD also applies to contracts concluded by electronic means and for the provision of digital content, regardless of the counter-performance.137 As most contracts between online service providers and consumers are unilaterally drafted by the former, this paves the way for Member States’ jurisdictions to consider the fairness of these pre-formulated terms.138 Besides, the notion of ‘significant imbalance’ is broad enough to cover any type of clause, including but not limited to privacy and data protection interests. In other words, the UTCCD provides an additional legal basis to assess the fairness of the conditions under which consumers agree to the processing of their personal data.139 The fairness conditions proposed by the UTCCD could prove particularly relevant when it comes to the collection and processing of personal information for the purpose of personalising users’ news content. A lack of transparency could, in that case, amount to unfair contractual terms within the meaning of Article 3(1) UTCCD.

Article 4(2) UTCCD excludes both the definition of the main subject matter of the contract and the adequacy of the price from the fairness test, as long as these terms are formulated in plain, intelligible language. This exclusion raises interesting issues when it comes to assessing the extent of the data collection in light of the service provided by the seller. In other words, the question arises whether it is possible to challenge the fairness of the data collection since it is often regarded as the ‘price’ paid in exchange for the provision of the service. At first sight, one would be tempted to exclude that element from the fairness assessment in cases where sellers are fully transparent about the personal data collected, therefore excluding any discussion as to the proportionality of the personal data gathering with respect to the service provided. The latter, however, is challenged by the wording of the proposed Digital Content Directive, which distinguishes between the notions of ‘price’ and ‘counter-performance other than money in the form of personal data or any other data’ (cf. supra). If data are not considered the ‘price’ paid in exchange for a service within the meaning of Article 4(2) UTCCD, then the fairness test of Article 3(1) UTCCD might still be relevant and provide an extra layer of protection on top of traditional data protection law principles such as data minimisation and purpose limitation.140

136 Helberger, Zuiderveen Borgesius, and Reyna, "The Perfect Match?", 1449. 137 Marco Loos and Joasia Luzak, "Wanted: A Bigger Stick. On Unfair Terms in Consumer Contracts with Online Service Providers", Journal of Consumer Policy 39, no. 1 (1 March 2016): 63–90, https://doi.org/10.1007/s10603-015-9303-7. 138 Mario Tenreiro, "The Community Directive on Unfair Terms and National Legal Systems – The Principle of Good Faith and Remedies for Unfair Terms", European Review of Private Law 3, no. 2 (1 June 1995): 273–84. 139 Helberger, Zuiderveen Borgesius, and Reyna, "The Perfect Match?", 1451. The authors give the example of a torch app that collects all kinds of information that are not related to the functionality of the app itself, such as the contact list, location data, IP addresses, etc. Consumer law's fairness test, they underline, 'could be used to interpret data protection law's data minimization and purpose limitation principles'. Conversely, they add, 'data protection law could also provide an additional benchmark to assess the fairness of contractual conditions. For instance, a contract could be considered unfair if it breaches data protection law's data minimization, but also security or privacy by default requirements'. 140 Loos and Luzak, "Wanted", 67.


In the same vein, Article 3(1) could be used to scrutinise the use of consent as a legitimate ground for processing.141 Recital 42 GDPR indeed specifies that 'in accordance with Council Directive 93/13/EEC, a declaration of consent pre-formulated by the controller should be provided in an intelligible and easily accessible form, using clear and plain language and it should not contain unfair terms'. In other words, pre-formulated declarations of consent – on top of abiding by transparency and accessibility requirements – should not cause a 'significant imbalance' between the parties' rights and obligations arising under the contract.142 This cross-reference suggests that consumer protection law could serve as an additional benchmark to assess the fairness of consent given without consideration for the actual terms, in the context of 'take-it-or-leave-it' clauses or in case of a disproportionate collection of personal data. In the specific context of recommender systems used for news curation, requesting data subjects to consent to the excessive collection of personal data in order to offer a personalised experience might therefore be considered unfair in light of Article 3(1) UTCCD.

Illustrating the above, many national consumer protection authorities have already relied on the UTCCD to assess the fairness of the terms and conditions imposed by online service providers on their users.143 More specifically, the German Consumer Organisation has filed numerous complaints against internet giants such as Facebook and Apple aimed at tackling allegedly unfair clauses in their standard contracts.144 Similarly, the Norwegian Consumer Council has already assessed the fairness of clauses contained in contracts concluded between Facebook, Instagram, LinkedIn, Twitter and Tinder and their users under the legislation transposing the UTCCD.145 In short, national consumer protection authorities could play a significant role in preventing abuses of power asymmetries between consumers and suppliers of digital services, which in turn could result in a more efficient enforcement of transparency requirements in the context of the personalisation of services.

3.2.5 Consumer Rights Directive (Directive 2011/83)

a) Scope of application

The Consumer Rights Directive (CRD) applies to any contract concluded between a trader and a consumer.146 As most first party content providers, news aggregators and social media platforms are

141 Michiel Rhoen, "Beyond consent: improving data protection through consumer protection law", Internet Policy Review 5, no. 1 (31 March 2016): 1–15. 142 Clifford, Graef, and Valcke, "Pre-Formulated Declarations of Data Subject Consent - Citizen-Consumer Empowerment and the Alignment of Data, Consumer and Competition Law Protections", 7–14. 143 Peter Rott, "Data protection law as consumer law–How consumer organisations can contribute to the enforcement of data protection law", Journal of European Consumer and Market Law 6, no. 3 (2017): 113–119. 144 See footnotes 136-139 in Helberger, Zuiderveen Borgesius, and Reyna, "The Perfect Match?", 1452. See also: Alex Hern, "Facebook Personal Data Use and Privacy Settings Ruled Illegal by German Court", The Guardian, 12 February 2018, sec. Technology, http://www.theguardian.com/technology/2018/feb/12/facebook-personal-data-privacy-settings-ruled-illegal-german-court. 145 See footnotes 140-143 in Helberger, Zuiderveen Borgesius, and Reyna, "The Perfect Match?", 1452–53. See also: "Norwegian Agency Dings Facebook, Google For 'Unethical' Privacy Tactics", The First Stop for Security News | Threatpost (blog), accessed 15 August 2018, https://threatpost.com/norwegian-agency-dings-facebook-google-for-unethical-privacy-tactics/133196/; "Norwegian Consumer Council files complaint against Tinder for breaching European law : Forbrukerrådet", accessed 15 August 2018, https://www.forbrukerradet.no/side/norwegian-consumer-council-files-complaint-against-tinder-for-breaching-european-law/. 146 On the discussion as to whether or not 'free' services are subject to the CRD, see: Helberger, Zuiderveen Borgesius, and Reyna, "The Perfect Match?", 1442–49. In essence, the CRD was not meant to apply in cases where consumers did not have to pay a fee in exchange for the service. However, the European Commission gave a broad interpretation to the CRD's scope so as to include contracts for online digital content for which no monetary payment was due. The Commission nonetheless excluded situations in which people access online services without express contractual agreement. As a result, many 'free' online services do not currently fall under the scope of the CRD. Since the proposed Digital Content Directive now specifically recognizes the


likely to fall within the scope of both the UCPD and the UTCCD, an identical conclusion might be drawn as to the applicability of the CRD to these three types of actors.

b) Transparency requirements

In the case of distance or off-premises contracts, Article 6(1) CRD requires traders to provide consumers with a series of information in a clear and comprehensible manner. Among the elements that must be communicated to consumers, Article 6(1)(r) mentions the 'functioning of digital content'.147 Recital 19 specifies that the notion of 'functionality' refers to 'the ways in which digital content can be used, for instance for the tracking of consumer behaviour'. The European Commission further adds that it should also encompass information as to whether personalisation of content happens.148 Such knowledge might prove extremely useful in the context of recommender systems used for news curation, as the three types of actors might have to disclose detailed information on how their algorithms operate.

3.3 ATAP and Media Law

3.3.1 Advertising principle of identifiability of commercial content as such

a) Blacklist number 11 UCPD

The UCPD, described in the previous chapter, also contains a provision in its blacklist that is inspired by the traditional advertising principle of identifiability. Number 11 on the blacklist prohibits the use of editorial content in the media 'to promote a product where a trader has paid for the promotion without making that clear in the content or by images or sounds clearly identifiable by the consumer (advertorial). […]' This blacklisted practice may also cover social media news feeds or those of news aggregator services, based on the following arguments. After all, the use of the general wording 'the media' does not make any apparent distinction between social media, news aggregators and more traditional media formats (radio, newspapers, tv, etc.). More problematic, however, is the requirement that editorial content be used to promote a product. It will be difficult, but perhaps not impossible, to argue that a user's news feed or another area containing recommendations constitutes editorial content as such. It is increasingly argued that intermediaries do exercise a certain extent of editorial control through service design and recommender systems.149 It could therefore be argued that these tools are part of the editorial content to be controlled.

provision of data as a separate form of 'counter-performance' from consumers, this interpretation does not appear sustainable anymore. As already underlined, this paves the way for applying the fairness test to 'free' services that rely on the collection and processing of personal information rather than on the payment of a monetary price. 147 Digital content is defined by Article 2(11) CRD as 'data which are produced and supplied in digital form'. 148 "DG Justice guidance document concerning Directive 2011/83 of the European Parliament and of the Council of 25 October 2011 on consumer rights, amending Council Directive 93/13 and Directive 1999/44 of the European Parliament and of the Council and repealing Council Directive 85/577 and Directive 97/7 of the European Parliament and of the Council", June 2014, 76, https://ec.europa.eu/info/sites/info/files/crd_guidance_en_0.pdf; Helberger, Zuiderveen Borgesius, and Reyna, "The Perfect Match?", 1439. 149 Cabrera Blázquez F.J., Cappello M., Fontaine G., Valais S., On-demand services and the material scope of the AVMSD, IRIS Plus, European Audiovisual Observatory, Strasbourg, 2016, https://rm.coe.int/1680783488 p.64; ERGA report on material jurisdiction in a converged environment (extract), ERGA, Brussels, 2015, p.2 http://erga-online.eu/wp-content/uploads/2016/10/report_juris_2015_sum.pdf; Ingrid Lambrecht, Valerie Verdoodt & Jasper Bellon, 'Platforms and commercial communications aimed at children: a playground under


As elaborated in the previous section, the wording 'paid for the promotion' is relatively broad and can include payment to receive priority in search result rankings.150 Furthermore, the argument can be made that the payment does not have to be monetary. Rather, the notion of payment could include transactions where, just as consumers 'pay with data', news providers 'pay with traffic', as quality news generates interactions, shares and other traffic, thus generating information on the intermediary's users. As a result, a digital intermediary between news businesses on the one side and readers on the other can obtain revenue from the data flow on both sides, sustaining a profitable business model. Depending on the interpretation of 'paid for the promotion', the notion would be capable of capturing today's reality of data value flowing from businesses to intermediaries, in casu from news providers to news intermediaries. This form of payment to obtain a higher ranking in recommender systems and to gain more visibility with the intermediary's readers would then trigger the obligation to provide transparency regarding its commercial intent; not that of the editorial content recommended, but that of the system recommending it.

b) Self-regulation ICC code

Given the fact that existing legislation is not entirely adapted to today's digital reality of commercial practices in news distribution, self-regulation may be better equipped to respond to a fast-changing environment. The International Chamber of Commerce (ICC)151 published a guidance document in 2015 on 'native advertising'.152 The guidance points to the rapidly evolving advertising format in which users experience ads 'organically', i.e. as part of the content. Organic advertising could include users receiving content pushed by recommender systems, in exchange for both monetary and other types of added value, as discussed in the previous paragraphs on the interpretation of 'paid for the promotion'. Relevant to the case of editorial news content is that the ICC does not regard the mere appearance of a brand or product (i.e. news content) as advertising as such. The ICC stresses that it is important to re-interpret and reiterate existing self-regulatory advertising principles in light of today's digital marketing practices to 'ensure transparency and to enhance consumer trust'. The guidelines thus elaborate on the relevant interpretations regarding the principles of recognisability153, identifiability154 and disclosures155, complementary to the main ICC Code on marketing practices156. Additionally, the ICC stresses that assessing these principles in practice requires respect for 'content that is entertainment and news-focused', while not intending to excessively stifle innovation. More concretely, the ICC guidance on native advertising states, in relation to identification, that 'Marketing communications should be clearly distinguishable as such, whatever their form and whatever the medium used.' When an advertisement appears in a medium containing news or editorial matter, it should be so presented that it is readily recognisable as an advertisement and the identity of the advertiser should be apparent (see also article 10). Marketing communications should

legislative reform?', March 2018, p.58-79, https://www.tandfonline.com/doi/full/10.1080/13600869.2018.1443378; 'ERGA Opinion on AVMSD Proposals', ERGA, Brussels, September 2016, p.7, http://erga-online.eu/wp-content/uploads/2016/10/Opinion_avmsd_0916.pdf. 150 Cf. Section on consumer protection law supra. 151 The ICC is the world business organization, a representative body that speaks with authority on behalf of enterprises from all sectors in every part of the world. 152 ICC Guidance on Native Advertising, 2015, https://cdn.iccwbo.org/content/uploads/sites/3/2015/05/ICC-Guidance-on-Native-Advertising.pdf. 153 Consumers should be able to recognise when something is an ad (ICC Code, art 9, B1 and D2). 154 The identity of an advertiser should be easily ascertainable (ICC Code, art 10; B12). 155 Disclosures should be prominent and understandable to consumers (ICC Code, art 3). 156 ICC Consolidated Code of Advertising and Marketing, 2011, https://cdn.iccwbo.org/content/uploads/sites/3/2011/08/ICC-Consolidated-Code-of-Advertising-and-Marketing-2011-English.pdf.


not misrepresent their true commercial purpose. Hence a marketing communication promoting the sale of a product should not be disguised by the marketer or sponsor as, for example, market research, consumer surveys, user-generated content, private blogs or independent reviews. Considering the broad notion of the word 'medium', the door could be left open for news aggregator websites or social media news feeds to qualify as a 'medium' for distributing editorial news content. Specifically for the case of transparency of recommender systems, the previous paragraph should be read together with the clarification added within the Code regarding disclosures, namely that 'Marketing communications should be so framed as not to abuse the trust of consumers or exploit their lack of experience or knowledge. Relevant factors likely to affect consumers' decisions should be communicated in such a way and at such a time that consumers can take them into account.'157 Such a level of transparency could, and perhaps should, pave the way for algorithmic transparency for consumers.

c) Sponsorship agreements in self-regulation

When the relationship behind the commercial practice can be qualified as a sponsorship, self-regulation provides for specific transparency requirements. The ICC Code defines sponsorship as 'any commercial agreement by which a sponsor, for the mutual benefit of the sponsor and sponsored party, contractually provides financing or other support in order to establish an association between the sponsor's image, brands or products and a sponsorship property, in return for rights to promote this association and/or for the granting of certain agreed direct or indirect benefits'.158 By using the word 'benefit', through financing or other support, the provision is sufficiently open to include the situation of a news content provider voluntarily choosing to add its content (or 'product') to the intermediary's system for distribution by its recommender algorithms. Sponsorship could go both ways, however. The service provider of a social media platform or a news aggregator may also be considered the sponsor through the deliberate design of its recommender systems, for example when it prefers news content from traditional titles and brands over unknown, new ones. In both cases, however, article B12 of the ICC Code provides a specific notion of media sponsorship, with a rule of its own. Article B12 provides that 'the content and scheduling of sponsored media properties should not be unduly influenced by the sponsor so as to compromise the responsibility, autonomy or editorial independence of the broadcaster, programme producer or media owner,[…].' But most importantly for the case of algorithmic transparency, it provides that 'sponsored media properties should be identified as such by presentation of the sponsor's name and/or logo at the beginning, during and/or at the end of the programme or publication content.' Additionally, it stresses that this also applies to online material. Reading provisions B1 to B13 in combination with the aforementioned clarifications on consumer transparency, there appears to be room for an interpretation that allows self-regulatory bodies to push and promote the need for more algorithmic transparency measures by the intermediary service and the sponsored party.

3.3.2 Recommender systems for audiovisual news: sponsorship and protection of minors

The Audiovisual Media Services Directive159 also includes rules on sponsorship and the promotion of audiovisual content, both in general and for minors specifically.

157 Article 9, ICC Guidance on Native Advertising, 2015; available via: https://iccwbo.org/content/uploads/sites/3/2015/05/ICC-Guidance-on-Native-Advertising.pdf 158 ICC Consolidated Code of Advertising and Marketing, 2011, articles B1-B13, p.22-25, https://cdn.iccwbo.org/content/uploads/sites/3/2011/08/ICC-Consolidated-Code-of-Advertising-and-Marketing-2011-English.pdf 159 Directive 2010/13/EU of the European Parliament and of the Council of 10 March 2010 on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the


a) General provisions

Article 1(k) defines sponsorship as 'any contribution made by public or private undertakings or natural persons not engaged in providing audiovisual media services or in the production of audiovisual works, to the financing of audiovisual media services or programmes with a view to promoting their name, trade mark, image, activities or products'. Important in this definition, however, is that the sponsor itself should not be involved in any type of production of audiovisual works. The possibility of a news content provider being the sponsor in this arrangement is therefore excluded. It does not exclude, however, a social media platform or news aggregator from being the sponsor, on the condition that it does not qualify as an audiovisual media service, which is currently agreed not to be the case.160 Applying the earlier reasoning on sponsorship, Article 10(1)(c) AVMSD provides that 'viewers shall be clearly informed of the existence of a sponsorship agreement. […]' The list of possible disclosures includes a distinctive sign displayed in an appropriate way. Considering the nature of the agreement, this could include transparency on the recommendation itself, i.e. a label such as 'recommended' or 'sponsored'. This, however, does not provide the algorithmic transparency desired. Recital (93) further provides that sponsorship should be prohibited where the sponsor influences the content of programmes in such a way as to affect the responsibility and the editorial independence of the media service provider. It is possible to argue, however, that a news content provider loses part of its traditional editorial control over the context, order and structure of its content to the algorithmic system running a social media news feed or news aggregator layout. Prohibiting such a form of sponsorship would be too far-reaching, but it does raise the question whether there should not be more transparency regarding the functioning of these algorithms, for consumers and content providers alike.

b) Provisions relating to minors

Where the general provisions may not provide any straightforward legal basis for algorithmic transparency, specific provisions aimed at protecting and empowering minors may. Recital (59), for example, stresses that 'the availability of harmful content in audiovisual media services is a concern for legislators, the media industry and parents. There will also be new challenges, especially in connection with new platforms and new products. Rules protecting the physical, mental and moral development of minors as well as human dignity in all audiovisual media services, including audiovisual commercial communications, are therefore necessary.' The AVMSD thus explicitly recognises the need for adequate protective measures, relevant in light of the technology used. Recital (60) follows up by stressing that measures such as filtering, labelling or age verification systems should also account for fundamental rights. It explicitly refers to the Recommendation on the protection of minors and human dignity and on the right of reply,161 in which systems for the automatic filtering and labelling of content are carefully recognised as possibilities. Such measures could

provision of audiovisual media services (Audiovisual Media Services Directive), https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32010L0013. 160 TF1 and others v. Dailymotion (TF1 et autres / Dailymotion), Tribunal de grande instance de Paris, 13 September 2012; 'ERGA Opinion on AVMSD Proposals', ERGA, Brussels, September 2016, p.7, http://erga-online.eu/wp-content/uploads/2016/10/Opinion_avmsd_0916.pdf; REPORT on the Proposal for a Directive of the European Parliament and of the Council Amending Directive 2010/13/EU on the Coordination of Certain Provisions Laid Down by Law, Regulation or Administrative action in Member States Concerning the Provision of Audiovisual Media Services in View of Changing Market Realities, 2017. 161 Recommendation of the European Parliament and of the Council of 20 December 2006 on the protection of minors and human dignity and on the right of reply in relation to the competitiveness of the European audiovisual and on-line information services industry, https://eur-lex.europa.eu/legal-content/EN/ALL/?uri=CELEX%3A32006H0952.


arguably not be balanced with fundamental rights without providing transparency on the concrete rules the filtering and recommender algorithms follow, to both (older) minors and parents alike. Additionally, article 12 obliges Member States to help ensure that on-demand audiovisual media services which might seriously impair the physical, mental or moral development of minors are only made available in such a way as to ensure that minors will not normally hear or see them. Interestingly, the wording of the article implies that an on-demand service as such may be capable of being harmful to minors. An on-demand audiovisual media service is defined by article 1(g) as a service 'for the viewing of programmes at the moment chosen by the user and at his individual request on the basis of a catalogue of programmes selected by the media service provider'. This definition could arguably include news aggregators of audiovisual content, thus opening the door for a measure that demands that such aggregators provide transparency about their filtering and recommender systems insofar as a minor is otherwise capable of fully accessing their content and receiving recommendations based on their use of the intermediary's services.

c) Political advertising: a Belgian case study

Another issue pushing the boundaries of existing media regulations lies within the field of political advertising. Political advertising has traditionally been subject to strict regulations to ensure fair and transparent electoral campaigning. The clearest examples of traditional regulation are the obligations for public service media to maintain fair, balanced and objective representations of political parties and their representatives.162 These obligations are drafted into their management contracts163, which means they fall under the scrutiny of the media authority of the respective language communities.164 Regulations for private media are mixed. For television and radio broadcasters, undue political affiliation and influence are prohibited by legislation, whereas for the written press this prohibition is provided solely by way of self-regulation, for those outlets that have undersigned the Journalistic Code of Belgium. The code provides that any bias by a media actor should be limited and, in any case, entirely voluntary and independent from external political influences.165 Whereas self-regulation does provide guidelines on the use of social media and user-generated content, it does not speak of the use of algorithms and micro-targeting. Such guidelines are also not applicable to independent online websites and media intermediaries. Any political party may thus publish digital 'news' articles and campaigns through an affiliated website, so long as it complies with regulations regarding electoral spending, privacy, data protection and some consumer rights.

162 16 DECEMBER 2005. — Decreet houdende de oprichting van het publiekrechtelijk vormgegeven extern verzelfstandigd agentschap Vlaamse Regulator voor de Media en houdende wijziging van sommige bepalingen van de decreten betreffende de radio-omroep en de televisie, gecoördineerd op 4 maart 2005 (1), https://www.vlaamseregulatormedia.be/sites/default/files/vrm-decreet.pdf; article 6, Décret coordonné sur les services de médias audiovisuels (version consolidée par le CSA), 8 July 2016, http://www.csa.be/system/documents_files/1440/original/D%C3%A9cret%20SMA%20coordonn%C3%A9%20au%208%20juillet%202016.doc.pdf?1474623093 163 Flemish PSM: https://www.vrt.be/nl/over-de-vrt/beheersovereenkomst/; French speaking community PSM: https://www.rtbf.be/entreprise/article_statut-et-financement?id=3433 164 18 MEI 2009. — Reglement van Orde van de Algemene kamer, de Kamer voor onpartijdigheid- en bescherming van minderjarigen, het college van voorzitters en de algemene vergadering van de Vlaamse regulator voor de media, https://www.vlaamseregulatormedia.be/sites/default/files/reglement_van_orde.pdf; Titre VII du Décret coordonné sur les services de médias audiovisuels (version consolidée par le CSA), 8 July 2016, http://www.csa.be/system/documents_files/1440/original/D%C3%A9cret%20SMA%20coordonn%C3%A9%20au%208%20juillet%202016.doc.pdf?1474623093 165 Code van de Raad voor de Journalistiek: http://www.rvdj.be/code-raad-voor-de-journalistiek


Several issues still arise in this field, however, as social media and intermediaries blur the boundaries between political advertising and news curation, providing an array of legal loopholes for politicians to work with. For example, the line between political campaigning and publishing biased news lies in a large grey area. When a website publishes 'news' articles under the direct control of a political party, those articles may not fall within the scope of media regulations prohibiting political affiliation with the publisher, nor do they sufficiently qualify as political advertising. Concretely, this means that the biased news content would neither have to reveal its political affiliation, nor make itself known to be political advertising.

3.4 ATAP and Intellectual Property Law

"But what about our intellectual property?" This is likely to be one of the first concerns a company will raise in response to a request for an explanation. As indicated earlier, innovative or specialised algorithms can constitute a valuable competitive advantage, of which enterprises are understandably protective. This section is aimed at analysing the potential intellectual property (IP) law based defences that might be invoked against individuals asking for news recommender algorithms to be explained. It can already be said at this stage that the likelihood of success for a company's IP defence seems limited, especially when it aims to withhold meaningful information from the individual altogether. While each IP regime comes with its own set of characteristics and flaws, a distinction can generally be drawn between:

– intellectual property rights in a traditional sense, i.e. copyright and patent law; and
– trade secrets, which are not formally acknowledged as a form of intellectual property but have received increased attention in recent years.

3.4.1 Traditional intellectual property schemes

a) Patents

At first glance, a patent could seem like an interesting pathway to algorithmic transparency, as its public nature provides transparency to data subjects while retaining a sense of security for the algorithm's proprietor. Upon closer inspection, however, this pathway is a rather rocky road, hindering widespread adoption of patent protection for news-recommender algorithms. First, patent law was not originally designed to deal with "intangible" algorithms or software creations. As PHILPOTT (COO ICT of the European Patent Office [EPO]) noted in his concluding remarks at a 2018 conference on AI patenting, the patent system has experienced great growing pains in shifting from a mechanism to protect hardware innovation to one that is increasingly focused on software and software-machine integrations.166 This arduous process is exemplified by the fact that under the European Patent Convention (EPC) a computer program "as such" is still not a patentable invention (Article 52(2)(c) and (3) EPC). According to the case law of the EPO Boards of Appeal, for an algorithm to qualify as a "computer-implemented invention" (which is patentable), it must have a technical effect in the sense of the EPC. This requirement of technical effect has been the origin of much confusion and debate, often varying from case to case. In the past, technical effect has for instance been accepted for software containing improvements to Windows 3.1, where the EPO held that:

‘[The] computer-executable instructions have the potential of achieving the […] further technical effect of enhancing the internal operation of the computer, which goes beyond the elementary interaction of any hardware and software data

166 European Patent Office, “Patenting Artificial Intelligence”, 2018, https://e-courses.epo.org/course/view.php?id=151#section-1.


processing […]. The computer program recorded on the medium is therefore not to be considered a computer program as such.’167

In recent decades, the scope of what is excluded from patentable subject matter has narrowed168, undoubtedly an acknowledgement of the increased value of analytics or AI "inventions". Currently, however, the EPO still holds quite a strict stance on what it calls an "inventive step" for algorithms. An inventive step typically means that the algorithm should have been non-obvious to a skilled person who was aware of similar algorithms (the closest prior art). Under a strict assessment of this test, it is doubtful whether any news publisher or aggregator could receive patent protection for its algorithm, or whether it would even want to. While the above elements are likely enough to deter most news recommenders (especially SMEs) from seeking patent protection, the system is also largely impractical. Rather than expending resources on tracking down and punishing infringers, many companies opt not to reveal their perceived "sensitive" algorithms at all. In other words, they are strongly incentivised to rely on trade secrets instead. Another point often raised in opposition to the patentability of algorithms is that if one company owns a key algorithm, it could stifle the overall growth of the sector by preventing improvement patents from incorporating its code. This issue was discussed at the above-mentioned 2018 EPO conference on AI patenting, where it was suggested that if:

‘“fundamental” algorithms were to be protected, then there should be compulsory cross-licensing as in Section 24(2) of the German Patent Act to guarantee access to the algorithms, which would then be used in improvement patents. It should be possible for the creator of an algorithm to have a type of “reachthrough” claim for the basic idea, even if it has not been “tested” in a specific application.’

With the EPO looking to modernize the patenting system, it may very well be that algorithmic transparency in the future will be guaranteed by means of publicly accessible semi-open patents, though for now this is not yet the case.

b) Copyright

If an approach based on patent law would not work for reasons of procedural complexity and costs, perhaps the algorithm may fare better under copyright law? Indeed, copyright protection is immediate, at least from the point when the code is fixed onto a medium, and not subject to any formalities. However, whereas the source or object code of a programme can be the subject of copyright169, this copyright protection only applies to the "expression" of the software in its code, not to its functionality (C-406/10, SAS/World Programming). Of course, it is precisely the functionality of a programme in which the data subject is interested. As such, copyright is unlikely to stand in the way of providing meaningful information to the data subject concerning the processing of his or her personal information.

167 T0424/03 Clipboard formats I/MICROSOFT, No. ECLI:EP:BA:2006:T042403.20060223 (European Patent Office Boards of Appeal, 23 February 2006); Susan J. Marsnik and Robert E. Thomas, "Drawing a Line in the Patent Subject-Matter Sands: Does Europe Provide a Solution to the Software and Business Method Patent Problem?", Boston College International and Comparative Law Review 34, no. 2 (22 March 2011): 293. 168 Marsnik and Thomas, "Drawing a Line in the Patent Subject-Matter Sands", 277. 169 Directive 2009/24/EC (replacing Directive 91/250 - 14.05.1991) on the legal protection of computer programs.


3.5 Trade secrets

Trade secrets offer algorithm proprietors one of the most flexible forms of protection, by way of their largely unmapped and unregulated character.170 Virtually any company asset can be the subject of a trade secret, whether it is an invention, a design, a piece of software, or even an idea. Over the last decade, both sides of the Atlantic have seen a rise in regulatory interest in trade secrets, undoubtedly stemming from their increased commercial use in the digital economy. Some scholars argue that, in practice, trade secrets in particular represent a significant obstacle to meaningful disclosure about algorithms.171 MALGIERI and COMANDÉ have countered that fundamental rights such as the right to data protection take precedence over trade secrecy.172 The latest EU development on the front of trade secrets has been the promulgation of the 2016 Trade Secrets Directive,173 whose transposition deadline has recently passed.

3.5.1 Trade Secrets Directive

Prior to the introduction of the Trade Secrets Directive, each jurisdiction had adopted heterogeneous eligibility standards for information to qualify as a trade secret. Fortunately, the Directive now provides for a general, wide-reaching base definition in line with Article 39(2) TRIPS.

The goal of the Trade Secrets Directive is to harmonize company redress for situations where undisclosed trade secrets were unlawfully acquired, used or disclosed by a third party. Meanwhile, it is clear from the recitals of the Directive that a request to comply with the GDPR, i.e. to provide "meaningful information on the inner logic" of the algorithm, should be considered a lawful action (Recitals 18, 34 and 35). In other words, large portions of the Trade Secrets Directive do not apply to such requests (for instance the provisions on limitation periods and the publication of judicial decisions). Yet the Directive remains of interest because it is the first European legislation on trade secrets to explicate the relationship between trade secrets and fundamental rights such as freedom of expression and data protection.

3.5.2 Interplay between the Trade Secrets Directive and the GDPR

A number of recitals in the Trade Secrets Directive refer either explicitly or implicitly to data protection. Most prominent among them is recital 35, which states that '(T)his Directive should not affect the rights and obligations laid down in Directive 95/46/EC, in particular the rights of the data subject to access his or her personal data being processed and to obtain the rectification, erasure or blocking of the data where it is incomplete or inaccurate and, where appropriate, the obligation to process sensitive data in accordance with Article 8(5) of Directive 95/46/EC'. So far so good, it would seem: data subjects' rights should not be hampered by the Trade Secrets Directive. However, both Directive 95/46/EC and the GDPR, which succeeded the prior Directive on 25 May 2018, contain their own recitals which seemingly rebut the point made in the Trade Secrets Directive. In respect of the GDPR, recital 63 sends the reader from pillar to post as: '[the access right of data subjects, including the right to know the logic involved in any automatic personal data processing] should not adversely

170 Bruce T. Atkins, "Trading secrets in the informative age: can trade secret law survive the Internet?", University of Illinois Law Review 1996, no. 4 (22 September 1996): 1194. 171 Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, "Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation", International Data Privacy Law 7, no. 2 (1 May 2017): 35, https://doi.org/10.1093/idpl/ipx005. 172 Malgieri and Comandé, "Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation", 1 November 2017, 242. 173 "Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the Protection of Undisclosed Know-How and Business Information (Trade Secrets) against Their Unlawful Acquisition, Use and Disclosure (Text with EEA Relevance)", Pub. L. No. 32016L0943, OJ L 157 (2016), http://data.europa.eu/eli/dir/2016/943/oj/eng.


affect […] trade secrets […] However, the result of those considerations should not be a refusal to provide all information to the data subject.' This last sentence contains the key to dispelling the appearance of a legal "loop". From the recitals of the GDPR and the Trade Secrets Directive it becomes clear that, instead, a balancing exercise should be carried out between the fundamental right to data protection and the economic interests companies hold in trade secrets. This balance moreover favours fundamental rights, allowing trade secret protection to be tailored 'in line with the principle of proportionality', so that it does not undermine or jeopardise fundamental rights and freedoms (recital 21 Trade Secrets Directive). Authors have noted the distinction made in the respective texts between a blanket "should not affect" and a qualified "should not adversely affect, however…". It is in this margin that a right to meaningful information could be situated, without contradicting either instrument. In sum, while trade secrets could be relied on to withhold crucial information on the technical workings of an algorithm, their protection should never lead to the denial of meaningful information to the data subject. Instead, a reasonable balance should be sought between safeguarding a data subject's fundamental rights and doing so in a way which does not disproportionately affect the company's interests.

3.5.3 Trade Secrets Directive and media pluralism

Several provisions of the Trade Secrets Directive (both in the recitals and in the main text) refer to the importance of sustaining media pluralism. Article 2(a) states unequivocally that 'the Directive shall not affect the exercise of the right to freedom of expression and information as set out in the Charter, including respect for the freedom and pluralism of the media.' The origin of this provision is explained by recital 19, which links it to investigative journalism and the protection of journalistic sources. During the drafting process of the Directive, there were concerns that the new protective measures for trade secrets would be used against journalists to discourage them from "whistleblowing" on company practices. Yet the resulting articles 2(a) and 5(a) of the Trade Secrets Directive are actually quite broad, which could provide another hook to dismantle a "trade secret defence" by companies trying to keep their news provision algorithms from the prying eyes of the public. After all, news-recommender algorithms have the potential to significantly (and surreptitiously) impact the plurality of sources from which consumers receive their information.

3.5.4 Conclusion

Even though current intellectual property law may not offer many ground-breaking alternatives to support a right to algorithmic transparency and accountability, it notably does not stand in the way of such a right either. Furthermore, the future is looking bright, as regulators are becoming more and more aware of the importance of both accessible algorithms and personal data in our society, as signified by the recent activity of the EPO and the European legislator respectively.

3.6 Ongoing legal initiatives

3.6.1 EU Parliamentary resolution on media pluralism

On May 3, 2018, the European Parliament adopted a resolution on Media pluralism and media freedom in the European Union.174 The resolution brought the importance of media freedom and pluralism back to the fore, this time including the online environment.

174 European Parliament, resolution on Media pluralism and media freedom in the European Union, available at: http://www.europarl.europa.eu/sides/getDoc.do?type=TA&language=EN&reference=P8-TA-2018-0204


First, the topic of fake news and disinformation is highlighted in the resolution.175 The resolution encourages social media companies and online platforms to make extra efforts in this domain, additionally calling for effective self-regulation. The proposed measures include self-regulatory obligations, instruments and tools for source verification. The tools should enable users to report and flag potential fake news, complementary to new 'forward-looking' legislation at the national level in these fields and the obligations of the new AVMSD.176 Additionally, the resolution stresses that any type of enforcement in the fight against fake news should be complemented by the necessary transparency to monitor these mechanisms.

Second, the resolution emphasises the need to keep enforcing competition law, as ownership online can become blurred without a clear indication to the consumer of which websites or articles belong to whom. The need to safeguard transparency, disclosure, and easy access for citizens to information on media sources, ownership, funding sources and management is therefore paramount. In turn, the resolution stresses the need for independent monitoring mechanisms to assess media freedom and media pluralism within EU Member States. It therefore calls on the European Commission to further fund the Media Pluralism Monitor, thus supporting an annual mechanism to collect information and statistics on media pluralism both online and offline.177

In sum, the resolution only sees a role at the European level insofar as it concerns collecting and centralising information, so that it may be monitored and analysed, or for domains which (indirectly) fall within EU competences, such as the AVMSD and competition law.

The European Parliament does stress, however, that matters related to media and media transparency do not directly fall within the competences of the EU. For this reason, the following sections will briefly look at other means through which the various institutions at the European level aim to enable and support transparency of online news media: a recommendation by the Committee of Ministers (2), self-regulation supervised by the European Commission (3) and the AVMSD affecting video-sharing platforms (4).

3.6.2 Recommendation of the Committee of Ministers to member States on media pluralism and transparency of media ownership

The recommendation of the Committee of Ministers to member States on media pluralism and transparency of media ownership (for this section referred to as ‘the Recommendation’)178 of March 7, 2018, was written after a series of debates of the committee of experts on Media Pluralism and Transparency of Media Ownership (MSI-MED), from 1 January 2016 until 31 December 2017. The committee analysed best practices in various member States with regard to “policies and other measures ensuring a pluralist media landscape, transparency of media ownership, diversity of media content, inclusiveness in public service media, gender equality in media coverage of election campaigns.” The goal of these debates was to draft standard-setting proposals on media pluralism and transparency of media ownership, based upon both Council of Europe standards and relevant

175 https://data.europa.eu/euodp/en/data/dataset/S2183_464_ENG 176 Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities; available via: https://eur-lex.europa.eu/eli/dir/2018/1808/oj. 177 For more information on CMPF’s Media Pluralism Monitor, visit: http://cmpf.eui.eu/media-pluralism-monitor/ 178 Recommendation CM/Rec(2018)1 of the Committee of Ministers to member States on media pluralism and transparency of media ownership, search.coe.int/cm/Pages/result_details.aspx?ObjectId=0900001680790e13


jurisprudence of the European Court of Human Rights. The expert committee additionally worked on a range of issues related to elections, including the use of the internet in electoral campaigns.179

The Recommendation on Media Pluralism and Transparency of Media Ownership that followed these debates stresses early on that safeguards and measures for media pluralism require a fresh approach for the digital age, based on monitoring and comparative data, such as use and exposure diversity.180 The Recommendation stresses that, before Member States undertake regulatory interventions, there should first be a good understanding of how internet intermediaries affect media pluralism, and that such regulatory interventions should account for these changes in practices, so as to "maintain or restore the integrity of the democratic process and to prevent bias, misleading information or suppression of information."181 The Recommendation additionally calls for new policy and strategic responses, specifically for safeguarding the journalistic profession, while enhancing citizens' access to diverse content across all media types and formats.

While most of the Recommendation refers to media ownership and control, it does contain some specific statements regarding diversity of content that are particularly relevant. In paragraph 2.3, it encourages States to "adopt regulatory and policy measures to promote the availability, findability and accessibility of the broadest possible diversity of media content as well as the representation of the whole diversity of society in the media, including by supporting initiatives by media to those ends." For this purpose, the Recommendation first mentions a range of traditional regulatory interventions, such as the must-carry concept, rules on due prominence of content of general interest and rules on accessibility for persons with disabilities. It then goes on to address more complex, new digital mechanisms for the online environment. It argues that since "media content is increasingly managed, edited, curated and/or created by internet intermediaries, States should recognise the variety of their roles in content production and dissemination and the varying degrees of their impact on media pluralism." Additionally, it stresses that any complementary self-regulatory interventions should be as transparent as possible, to safeguard the independent nature of such mechanisms. It requires such mechanisms to be "open to meaningful participation from all relevant stakeholders, be accountable to the public and work in accordance with ethical standards that take full account of the multimedia ecosystem."182

Even more specifically, in paragraph 2.5 of its guidelines, the Recommendation encourages States to invest in initiatives that aim to improve users' effective exposure to the broadest possible diversity of media content online. It argues that the visibility, findability, accessibility and promotion of media content online are increasingly influenced by automated processes, whether used alone or in combination with human decisions.
It therefore recommends that States should encourage social media, media, search and recommendation engines and other intermediaries which use algorithms, along with media actors, regulatory authorities, civil society, academia and other relevant stakeholders, to engage in open, independent, transparent and participatory initiatives that:
– improve the transparency of the processes of online distribution of media content, including automated processes;
– assess the impact of such processes on users' effective exposure to a broad diversity of media content;
– seek to improve these distribution processes in order to enhance users' effective exposure to the broadest possible diversity of media content;

179 For more information on these debates and considerations, please visit: https://www.coe.int/en/web/freedom-expression/committee-of-experts-on-media-pluralism-and-transparency-of-media-ownership-msi-med- 180 Ibid CM/Rec(2018), guidelines point 2.5. 181 Ibid CM/Rec(2018), preamble point 8. 182 Ibid CM/Rec(2018), guidelines point 2.4.

Page 55: Mapping Legal and HCI Scholarship Interdisciplinary …...Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York, NY, USA: Crown Publishing

– provide clear information to users on how to find, access and derive maximum benefit from the wide range of content that is available; and
– implement the principle of privacy by design in respect of any automated data processing techniques and ensure that such techniques are fully compliant with the relevant privacy and data protection laws and standards.183

Finally, in paragraphs 2.6 and 2.7, the Recommendation reiterates that States should make particular efforts, taking advantage of technological developments, to ensure that the broadest possible diversity of media content, including general interest content, is accessible to all groups in society. It then states that, in order to achieve diversity, high levels of transparency about editorial and commercial content are necessary, suggesting that media and other actors should adhere to "the highest standards of transparency regarding the source of their content and always indicate clearly when content is provided by political sources or involves commercial communications", including hybrid forms of content.

3.6.3 Audiovisual Media Services Directive reform

The technical evolution in the audiovisual media landscape has reached a point where the impact of alternative sources of audiovisual media, such as online video-sharing platforms like YouTube or DailyMotion, is explicitly recognized by the EU legislator. These alternative sources differ from traditional media sources in one key element: their (perhaps voluntary) lack of editorial responsibility over the media content on their platforms. Nevertheless, these new media players have an important influence on the sector due to their global reach and their technical capacity to decide who sees what, under which conditions. The first recital of the new AVMSD of 14 November 2018 justifies this additional tier of responsibilities, stating that 'convergence of media requires an updated legal framework in order to reflect developments in the market and to achieve a balance between access to online content services, consumer protection and competitiveness.'184

The Directive defines a video-sharing platform as a commercial service addressed to the public:
– where the principal purpose of the service (or an essential functionality of such service) is devoted to providing programmes and user-generated videos to the general public, in order to inform, entertain or educate;
– which is made available by electronic communications networks; and
– where the content is organised in a way determined by the provider of the service, in particular by displaying, tagging and sequencing.

In short, services such as YouTube will now be included in the scope of the AVMSD, albeit with different responsibilities. The AVMSD also explicitly includes audiovisual content shared on social media services, such as Facebook, when certain conditions have been met. Specifically for news websites, however, separate parts of newspapers' websites dedicated to audiovisual content may be considered a video-sharing platform in and of themselves. The exemption remains for the occasional use of videos on, for example, websites, blogs or news portals.

While the Directive does not contain any articles directly related to algorithmic transparency, the responsibilities for video-sharing platforms do contain an interesting open-ended obligation in article 28a(f): video-sharing platforms should, for the purposes of the Directive, provide effective media literacy measures and tools and raise users' awareness of these measures and tools. Preamble

183 Ibid CM/Rec(2018), guidelines point 2.5. 184 Directive (EU) 2018/1808 of the European Parliament and of the Council of 14 November 2018 amending Directive 2010/13/EU on the coordination of certain provisions laid down by law, regulation or administrative action in Member States concerning the provision of audiovisual media services (Audiovisual Media Services Directive) in view of changing market realities, available at: https://eur-lex.europa.eu/eli/dir/2018/1808/oj.


Preamble 29 does clarify that, in light of the nature of the providers' involvement with the content provided on video-sharing platform services, those appropriate measures should relate to the organisation of the content and not to the content as such. Additionally, the Directive states that Member States can impose obligations “to ensure the appropriate prominence of content of general interest under defined general interest objectives such as media pluralism, freedom of speech and cultural diversity.” Of course, the conditions of proportionality and necessity still apply. Finally, as a way to introduce the following policy instrument, the Directive encourages platforms, such as social media networks and video-sharing platforms, to be involved in any efforts of self- and co-regulation. Preamble 30 adds that such platforms are welcome to take stricter measures on a voluntary basis, in accordance with Union law and with respect for freedom of expression and information and media pluralism. However, the EU-level Code of Practice on Disinformation, discussed below, takes the opposite position and advises platforms to refrain from voluntarily applying stricter measures, so as to ensure full respect for these fundamental rights and values.

3.6.4 EU Code of Practice on Disinformation

The EU has supported the drafting of an EU-level code of practice on disinformation, published in September 2018.185 The code has been signed by Facebook, Google, Twitter, Mozilla, a variety of advertisers and the advertising industry in general, later joined by Microsoft. The preamble to the code notes in this regard that, given the breadth of the commitments and the range of stakeholders involved, Signatories will sign up only to commitments which correspond to the product and/or service they offer and their role in the value chain. The code of practice opens by stating that human rights, and the principle of freedom of opinion specifically, are central considerations throughout the code. For example, signatories should stay independent from government interference, but should also refrain from excessive content moderation policies and censorship of lawful content. In its article II.D, ‘Empowering consumers’, the code encourages signatories to “invest in technological means to prioritize relevant, authentic, and accurate and authoritative information where appropriate in search, feeds, or other automatically ranked distribution channels.” For this purpose, the code stresses that transparency is essential, and that users should be fully enabled to understand why they have been targeted by a given political or issue-based advertisement. Additionally, this transparency should facilitate the assessment of content, e.g. trustworthiness, media ownership and verified identities or certification. The code stresses that these assessments should be based on objective criteria and endorsed by news media associations, in line with journalistic principles and processes. It recognises the importance that users “should be empowered with tools enabling a customized and interactive online experience so as to facilitate content discovery and access to different news sources representing alternative viewpoints, and should be provided with easily-accessible tools to report disinformation”.186 Finally, the article closes by stating that the signatories recognise the ongoing legislative work to develop standards for transparency about the main parameters of ranking included in the draft Platform-to-Business Regulation, as well as the work carried out by the EU Artificial Intelligence Expert Group and the EU consumer acquis. The currently relevant commitments of the signatories are:

185 https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation 186 Article II, D EU Code of Practice on Disinformation, available at: https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation


1. Relevant Signatories will invest in products, technologies and programs to help people make informed decisions when they encounter online news that may be false, including by supporting efforts to develop and implement effective indicators of trustworthiness.
2. Relevant Signatories will also invest in features and tools that make it easier for people to find diverse perspectives about topics of public interest.

Signatories are in the process of providing their first summary reports on the efforts they have made to fulfil these commitments. The European Commission itself has nevertheless kept a close watch on the implementation of these commitments in the run-up to the 2019 elections, with targeted short-term monitoring reports now available.187

187 EU Code of Practice on Disinformation, available at: https://ec.europa.eu/digital-single-market/en/news/code-practice-disinformation


4 Human Computer Interaction Research Exploration for News Recommender Systems

This section will explore the relevant state of the art regarding interaction design for algorithms. In this context, Algorithmic Experience (AX) emerges in HCI as a tool to study how to design the relationship between users and algorithms. Based on a broader literature review, and in correspondence with Task 1.2, this chapter presents an overview of the HCI topics related to algorithms, the social implications of algorithms, algorithms and culture, recommender systems and algorithmic news. The literature study presented here will facilitate the evaluation and design of transparent algorithmic systems.

Algorithms are involved in our everyday life 188. That is why many researchers address them in different contexts. For example, these tools are involved in cultural consumption and distribution 189, suggesting cultural options 190 and becoming part of our culture 191. Furthermore, academics explore how algorithms influence our democratic system and governments 192, as well as their use in algorithmic news production and consumption 193, search engines 194 and social media 195.

188 Michele Willson, “Algorithms (and the) everyday”, Information Communication and Society 20, nr. 1 (2017): 137–50, https://doi.org/10.1080/1369118X.2016.1200645. 189 Ted Striphas, “Algorithmic culture”, European Journal of Cultural Studies 18, nr. 4–5 (2015): 395–412, https://doi.org/10.1177/1367549415577392. 190 Tarleton Gillespie, “#trendingistrending: When algorithms become culture”, Algorithmic Cultures: Essays on Meaning, Performance and New Technologies 189 (2016): 52–75, https://doi.org/10.4324/9781315658698; B. Hallinan en T. Striphas, “Recommended for you: The Netflix Prize and the production of algorithmic culture”, New Media & Society 18, nr. 1 (2016): 117–37, https://doi.org/10.1177/1461444814538646; Jeremy Wade Morris, “Curation by code: Infomediaries and the data mining of taste”, European Journal of Cultural Studies 18, nr. 4–5 (2015): 446–63, https://doi.org/10.1177/1367549415577387. 191 Paul. Dourish, “Algorithms and their others: Algorithmic culture in context”, Big Data & Society 3, nr. 2 (2016): 1–11, https://doi.org/10.1177/2053951716665128; Nick Seaver, “Algorithms as culture: Some tactics for the ethnography of algorithmic systems”, Big Data & Society 4, nr. 2 (2017): 205395171773810, https://doi.org/10.1177/2053951717738104. 192 John Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy and Technology 29, nr. 3 (2016): 245–68, https://doi.org/10.1007/s13347-015-0211-1; L. D. Introna, “Algorithms, Governance, and Governmentality: On Governing Academic Writing”, Science, Technology & Human Values 41, nr. 1 (2016), https://doi.org/10.1177/0162243915587360. 193 Philip M. Napoli, “Automated media: An institutional theory perspective on algorithmic media production and consumption”, Communication Theory 24, nr. 3 (2014): 340–60, https://doi.org/10.1111/comt.12039. 194 Tarleton Gillespie, “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem”, Information Communication and Society 20, nr. 1 (2017): 63–80, https://doi.org/10.1080/1369118X.2016.1199721. 195 Taina Bucher, “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook”, New Media & Society 14, nr. 7 (2012): 1164–80, https://doi.org/10.1177/1461444812440159.


Additionally, algorithms are criticized because of biases in their personalization strategies 196, their lack of transparency 197 or their fairness 198. Academic efforts in HCI explore algorithmic explanations 199, machine learning experience 200, personalized advertisement 201, and social media 202. Consequently, Algorithmic Experience (AX) emerges as a way to make explicit the interaction and experience with algorithms 203. This deliverable describes a group of research papers related to algorithms, the social implications of algorithms, algorithms and culture, recommender systems and algorithmic news. The findings show certain research opportunities related to algorithms and news recommendations.

4.1 Algorithm as a concept

Some academic contributions define the word “algorithm” in order to delimit the research domain. A relevant example for our context is Gillespie 204, who describes four different ways 205 to understand the concept of algorithm depending on the context:

• Algorithm as a trick, when it implies technicalities such as the procedure and its model.
• Algorithm as a synecdoche, referring to its sociotechnical implications and avoiding its technical nature.

196 Engin Bozdag, “Bias in algorithmic filtering and personalization”, Ethics and Information Technology 15, nr. 3 (2013): 209–27, https://doi.org/10.1007/s10676-013-9321-6. 197 Rashmi Sinha en Kirsten Swearingen, “The role of transparency in recommender systems”, CHI ’02 extended abstracts on Human factors in computing systems - CHI ’02, 2002, 830–31, https://doi.org/10.1145/506443.506619; Frank Pasquale, “Restoring Transparency to Automated Authority”, Journal on Telecommunications and High Technology Law 9, nr. 235 (2011): 235–56. 198 Tal Zarsky, “The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making”, Science Technology and Human Values 41, nr. 1 (2016): 118–32, https://doi.org/10.1177/0162243915605575; Reuben Binns e.a., “‘It’s Reducing a Human Being to a Percentage’; Perceptions of Justice in Algorithmic Decisions”, in Conference on Computer-Human Interaction (New York, New York, USA: ACM Press, 2018), 1–14, https://doi.org/10.1145/3173574.3173951. 199 René F. Kizilcec, “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface”, Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems - CHI ’16, 2016, 2390–95, https://doi.org/10.1145/2858036.2858402. 200 Qian Yang, “The Role of Design in Creating Machine-Learning-Enhanced User Experience”, The AAAI 2017 Spring Symposium on Designing the User Experience of Machine Learning Systems Technical Report SS-17-04, 2017, 406–11. 201 Motahhare Eslami e.a., “Communicating Algorithmic Process in Online Behavioral Advertising”, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18 (New York, New York, USA: ACM Press, 2018), 1–13, https://doi.org/10.1145/3173574.3174006. 202 Michael A. DeVito, Darren Gergle, en Jeremy Birnholtz, “Algorithms ruin everything: RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media”, Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems - CHI ’17, 2017, 3163–74, https://doi.org/10.1145/3025453.3025659; Emilee Rader en Rebecca Gray, “Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed”, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems - CHI ’15, 2015, 173–82, https://doi.org/10.1145/2702123.2702174. 203 Oscar Alvarado en Annika Waern, “Towards Algorithmic Experience”, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18 (Montreal, Canada: ACM Press, 2018), 1, https://doi.org/10.1145/3173574.3173860. 204 “Algorithm”, in Digital Keywords: A Vocabulary of Information Society and Culture, onder redactie van Ben Peters (Princeton University Press, 2016), 18–30. 205 Gillespie, 19–27.


• Algorithm as a talisman, when it refers to a service performed by algorithms. An example is invoking Facebook’s algorithm as the reason for a filter bubble, a way to assign neutrality to the tool and to avoid implications for its owners.

• Algorithm as committed to the procedure, usually invoked with the term “algorithmic” such as the “Algorithmic Turn”, “Algorithmic Journalism” or “Algorithmic Experience”.

Academic papers addressing “algorithm” as a concept:

Nick Seaver, “Knowing algorithms”, Media in Transition, nr. April 2013 (2013): 1–12.

Jannick Schou en Johan Farkas, “Algorithms, interfaces, and the circulation of information: Interrogating the epistemological challenges of Facebook”

Jenna Burrell, “How the machine ‘thinks’: Understanding opacity in machine learning algorithms”

Gillespie, “Algorithm”.

Dourish, “Algorithms and their others: Algorithmic culture in context”.

David Beer, “The social power of algorithms”

Striphas, “Algorithmic culture”

Felicitas Kraemer, Kees van Overveld, en Martin Peterson, “Is there an ethics of algorithms?”

Seaver, “Algorithms as culture: Some tactics for the ethnography of algorithmic systems”.

R Stuart Geiger, “Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture”

Tarleton Gillespie, “The relevance of algorithms”

Introna, “Algorithms, Governance, and Governmentality: On Governing Academic Writing”

Rob Kitchin, “Thinking critically about and researching algorithms”

4.2 Algorithmic Imaginary and Folk Theories

Researchers have addressed people’s beliefs and imaginaries about algorithms. Bucher describes the Algorithmic Imaginary “… as the way in which people imagine, perceive and experience algorithms and what these imaginations make possible” 206. From an HCI perspective, folk theories capture users’ conceptions of the algorithms they use. One example explains how the inclusion of a filtering algorithm in Twitter prompted negative opinions and understandings 207. Another example describes folk theories among Facebook users 208.

Academic papers addressing Algorithmic Imaginary and Folk Theories

206 Bucher, “The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms”, Information, Communication & Society 20, nr. April (2017): 93, https://doi.org/10.1080/1369118X.2016.1154086. 207 DeVito, Gergle, en Birnholtz, “Algorithms ruin everything: RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media”. 208 Motahhare Eslami e.a., “First I ‘like’ it, then I hide it: Folk Theories of Social Feeds”, Conference on Human Factors in Computing Systems, 2016, 2371–82, https://doi.org/10.1145/2858036.2858494.


Rader en Gray, “Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed”

Shagun Jhaver en Judd Antin, “Algorithmic Anxiety and Coping Strategies of Airbnb Hosts”

Bucher, “The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms”

Jhaver en Antin, “Algorithmic Anxiety and Coping Strategies of Airbnb Hosts”

Changhoon Oh e.a., “Us vs. Them: Understanding Artificial Intelligence Technophobia over the Google DeepMind Challenge Match”

Alvarado en Waern, “Towards Algorithmic Experience”

DeVito, Gergle, en Birnholtz, “Algorithms ruin everything: RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media”.

Eslami e.a., “First I ‘like’ it, then I hide it: Folk Theories of Social Feeds”

4.3 Algorithms and democracies

Researchers have described the effects of algorithms on democratic systems, enabling or hindering participation, information sharing, political discussion, and so on. For instance, Bozdag and van den Hoven 209 explain the democratic implications of filter bubbles and describe different applications intended to combat those effects. As another example, Danaher 210 explores how algorithms take over public decisions. The author proposes the concept of “Algocracy”: a society ruled by:

“…a system in which algorithms are used to collect, collate and organize the data upon which decisions are typically made and to assist in how that data is processed and communicated through the relevant governance system. In doing so, the algorithms structure and constrain the ways in which humans within those systems interact with one another, the relevant data and the broader community affected by those systems” 211.

Academic papers addressing algorithms and their relationship with democracies

Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation”

Bozdag, “Bias in algorithmic filtering and personalization”

Bozdag en van den Hoven, “Breaking the filter bubble: democracy and design”

Kate Crawford, “Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics”

Taina Bucher, “‘Machines don’t have instincts’: Articulating the computational in journalism”

209 “Breaking the filter bubble: democracy and design”, Ethics and Information Technology 17, nr. 4 (2015): 249–65, https://doi.org/10.1007/s10676-015-9380-y. 210 “The Threat of Algocracy: Reality, Resistance and Accommodation”. 211 Danaher, 247.


Natali Helberger, Kari Karppinen, en Lucia D’Acunto, “Exposure diversity as a design principle for recommender systems”

4.4 Algorithms, government and regulations

Algorithms have such implications for our lives that they have stimulated the creation of legal tools and procedures for their regulation. For example, Goodman and Flaxman 212 analyse EU regulations and the GDPR’s implications for the design of the systems that contain algorithms. Additionally, algorithms regulate people and societies, promoting different governmental dynamics in different social contexts. For example, Introna 213 discusses algorithms in academic writing and how these tools implement governance structures.

Academic papers addressing algorithms and their relationship with government and regulations

Solon Barocas, Sophie Hood, en Malte Ziewitz, “Governing algorithms: A provocation piece”

Pasquale, “Restoring Transparency to Automated Authority”

Goodman en Flaxman, “EU regulations on algorithmic decision-making and a ‘right to explanation’”

Danah Boyd, “The Networked Nature of Algorithmic Discrimination”

Nicholas Diakopoulos, “Accountability in algorithmic decision making”

Helberger, Karppinen, en D’Acunto, “Exposure diversity as a design principle for recommender systems”

Mike Ananny en Kate Crawford, “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability”

Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation”

Malte Ziewitz, “Governing Algorithms: Myth, Mess, and Methods”

Zarsky, “The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making”.

Introna, “Algorithms, Governance, and Governmentality: On Governing Academic Writing”

Beer, “The social power of algorithms”

Barocas, Hood, en Ziewitz, “Governing algorithms: A provocation piece”

Adrienne Massanari, “#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures”

Geiger, “Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture”

212 “EU regulations on algorithmic decision-making and a ‘right to explanation’”, 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), 2016, 1–9. 213 Introna, “Algorithms, Governance, and Governmentality: On Governing Academic Writing”.


Pasquale, “Restoring Transparency to Automated Authority”

Michael Veale, Max Van Kleek, en Reuben Binns, “Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making”

4.5 Algorithms and news

News creation and consumption are closely related to algorithms, and many authors try to understand the ethical implications and societal impacts of these processes. For example, Napoli 214 describes a distinction between algorithms in news production and news consumption. Another example is Dörr 215, who defines algorithmic journalism as:

“…the (semi) automated process of natural language generation by the selection of electronic data from private or public databases (input), the assignment of relevance of preselected or non-selected data characteristics, the processing and structuring of the relevant data sets to a semantic structure (throughput) and the publishing of the final text on an online or offline platform with a certain reach (output)”. 216

Similarly, Bucher addresses the same process as “…a tendency towards large-scale data collection, algorithmic data analysis and computational practices in the production and dissemination of news…” 217. A related research concept in this category is the “algorithmic turn”: a notion that describes the transformation of the media industry through the implementation of algorithms in its processes 218, both in media consumption and production 219. Napoli explains how algorithms are acting in two main areas: demand prediction and content creation 220. For the former, media companies try to understand and learn their audience’s preferences using big data analysis, which raises an ethical discussion on content-defining activities 221. Regarding content creation, Napoli notes: “This is not to say that the human element is being eliminated from content creation. Algorithms are human creations. Rather, the point here is that the human role in content creation is migrating from a direct to an indirect role” 222. The algorithmic turn is also mentioned by Frizzera 223, who discusses how media companies started to adopt algorithms in their news production processes to reclaim their previous position as information gatekeepers 224. This process marks the start of an “intensive use of digital technologies and large volumes of data (i.e., big data) to enhance decision-making about the production of content and the preferences of the audience” 225.

214 “Automated media: An institutional theory perspective on algorithmic media production and consumption”. 215 “Mapping the field of Algorithmic Journalism”, Digital Journalism 0811, nr. November (2015): 700–722, https://doi.org/10.1080/21670811.2015.1096748. 216 Dörr, 4. 217 “‘Machines don’t have instincts’: Articulating the computational in journalism”, 920. 218 Napoli, “Automated media: An institutional theory perspective on algorithmic media production and consumption”. 219 345–50. 220 348. 221 Napoli, 349. 222 350. 223 “I / O : Reinforcing Newsmaking Practices Through Algorithmic Media”, Stream: Inspiring Critical Thought 10, nr. 1 (2018): 39–51. 224 39. 225 Frizzera, 40.


Academic papers addressing algorithms and their relationship with news

Dörr, “Mapping the field of Algorithmic Journalism”

Bozdag, “Bias in algorithmic filtering and personalization”

Frizzera, “I / O : Reinforcing Newsmaking Practices Through Algorithmic Media”

Diakopoulos, “Accountability in algorithmic decision making”

Napoli, “Automated media: An institutional theory perspective on algorithmic media production and consumption”

Bucher, “‘Machines don’t have instincts’: Articulating the computational in journalism”

Mike Ananny, “Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness”

Bozdag en van den Hoven, “Breaking the filter bubble: democracy and design”

Ashraf Abdul e.a., “Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda”

4.6 Algorithms and culture

Algorithms also shape our cultural production, selection and distribution. For example, Striphas proposes algorithmic culture as “… the enfolding of human thought, conduct, organization and expression into the logic of big data and large-scale computation, a move that alters how the category of culture has long been practiced, experienced and understood” 226. The author also explains how almost all everyday cultural activities are currently managed through data-related activities and subject to computerized information processing 227. Similarly, Gillespie expresses how:

“Algorithms, particularly those involved in the movement of culture, are both mechanisms of distribution and valuation, part of the process by which knowledge institutions circulate and evaluate information, the process by which new media industries provide and sort culture. In particular, assertions of cultural value, always based on prediction, recipes, and measurements about what makes something culturally valuable, are incorporating algorithmic techniques for doing so”. 228

Likewise, Gillespie describes how trending algorithms work and define what is culturally valuable and popular. 229 The author states that trending algorithms can become cultural objects in themselves: culturally meaningful, deserving to be tracked, worthy representations of public tendencies 230, and having the power to make cultural products, artists or political expressions visible or invisible 231. Finally, Gillespie explains how these algorithms provide a space to understand ourselves as a public through their ability to track and calculate users’ activities 232.

226 “Algorithmic culture”, 396. 227 Striphas, 396. 228 Gillespie, “#trendingistrending: When algorithms become culture”, 2. 229 Ibid,, 3. 230 Gillespie, 12. 231 Gillespie, 14. 232 14–16.


Academic papers addressing algorithms and their relationship with culture

Dourish, “Algorithms and their others: Algorithmic culture in context”.
Striphas, “Algorithmic culture”.
Hallinan en Striphas, “Recommended for you: The Netflix Prize and the production of algorithmic culture”.
Gillespie, “#trendingistrending: When algorithms become culture”.
Geiger, “Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture”.
Morris, “Curation by code: Infomediaries and the data mining of taste”.
Seaver, “Algorithms as culture: Some tactics for the ethnography of algorithmic systems”.

Massanari, “#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures”.

4.7 Algorithms and power

Researchers have also studied the power dynamics that algorithms enact over people. For instance, Neyland and Möllers 233 analyze how IF and THEN rules define power relationships in specific contexts. Likewise, another paper proposes an extensive description of the social power of algorithms 234.

Academic papers addressing algorithms and their relationship with power

Neyland en Möllers, “Algorithmic IF … THEN rules and the conditions and consequences of power”.
Bucher, “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook”.
Beer, “The social power of algorithms”.

4.8 Algorithms and transparency

Researchers have dealt with algorithms as black boxes, and with their opacity and trustworthiness. For example, some studies show how transparency affects recommender systems 235 and examine the use of explanations and their implications for algorithmic transparency 236. Other studies explore the definition of transparency or accountability. For example, Ananny and Crawford describe transparency as “a system of observing and knowing that promises a form of control” 237. In their view, transparency should be performative by creating understanding about

233 “Algorithmic IF … THEN rules and the conditions and consequences of power”, Information Communication and Society 20, nr. 1 (2017): 45–62, https://doi.org/10.1080/1369118X.2016.1156141. 234 Beer, “The social power of algorithms”. 235 Sinha en Swearingen, “The role of transparency in recommender systems”. 236 Emilee Rader, Kelley Cotter, en Janghee Cho, “Explanations as Mechanisms for Supporting Algorithmic Transparency”, in Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems - CHI ’18 (New York, New York, USA: ACM Press, 2018), 1–13, https://doi.org/10.1145/3173574.3173677. 237 “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability”, 3.


algorithms in their users, but they also identify a list of possible limitations of the transparency ideal for algorithmic systems 238. To define accountability, they explain that “…we instead hold systems accountable by looking across them -seeing them as sociotechnical systems that do not contain complexity but enact complexity by connecting to and intertwining with assemblages of humans and non-humans” 239. Similarly, Diakopoulos proposes a discussion of algorithmic accountability on two sides of the spectrum: public services and private corporations 240. Diakopoulos states that algorithms are currently unregulated and “they are indeed exercising power over individuals or policies in a way that in some cases (for example, hidden government watch lists) lacks any accountability whatsoever” 241. The author explains how public institutions constantly use algorithms to make relevant decisions, and that the governed “should find unacceptable there is no transparency or even systematic benchmarking and evaluation of these forecasts, given the important policy decisions they feed” 242. In this context, “the most clear-cut way to do this is to design processes that adjudicate and facilitate the correction of false positives by end users” 243. He explains that users should be able to inspect, dispute and correct inaccurate results from algorithms or data management, which in the end will improve the quality of machine learning applications. Finally, the author proposes a model of transparency for algorithmic systems 244.
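Diakopoulos’s suggestion that end users should be able to inspect, dispute and correct inaccurate results can be made concrete with a minimal sketch. The sketch below is hypothetical and does not correspond to any system discussed in this report: the names `RecommendationRecord`, `DisputeLog` and `file_dispute` are invented purely to illustrate how a dispute-and-correction loop could be attached to a recommender pipeline.

```python
# Hypothetical sketch of a dispute-and-correction loop, in the spirit of
# Diakopoulos's call for users to inspect, dispute and correct algorithmic
# outputs. All names and fields are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class RecommendationRecord:
    item_id: str
    user_id: str
    reasons: List[str]          # human-readable factors shown to the user
    disputed: bool = False
    resolution: str = ""        # e.g. "corrected", "rejected", "pending"


@dataclass
class DisputeLog:
    records: List[RecommendationRecord] = field(default_factory=list)

    def file_dispute(self, record: RecommendationRecord, complaint: str) -> None:
        """Register a user complaint so it can be adjudicated later."""
        record.disputed = True
        record.resolution = f"pending review: {complaint}"
        self.records.append(record)

    def corrections(self) -> List[RecommendationRecord]:
        """Disputed records that should feed back into evaluation or retraining."""
        return [r for r in self.records if r.disputed]


# Example usage
log = DisputeLog()
rec = RecommendationRecord("article-42", "user-7", ["liked by 3 friends"])
log.file_dispute(rec, "I never follow this topic")
print(len(log.corrections()))  # 1
```

The point of the sketch is structural rather than technical: the record of why an item was recommended, and the user’s objection to it, are kept together so that a later adjudication step has something concrete to act on.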

Academic papers addressing algorithms and their transparency

Ziewitz, “Governing Algorithms: Myth, Mess, and Methods”.
Zarsky, “The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making”.
Aaron Springer, Victoria Hollis, en Steve Whittaker, “Dice in the Black Box: User Experiences with an Inscrutable Algorithm”.
Seaver, “Knowing algorithms”.
Schou en Farkas, “Algorithms, interfaces, and the circulation of information: Interrogating the epistemological challenges of Facebook”.
Christian Sandvig e.a., “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms”.
Kizilcec, “How Much Information?: Effects of Transparency on Trust in an Algorithmic Interface”.
Burrell, “How the machine ‘thinks’: Understanding opacity in machine learning algorithms”.
Adrian Bussone, Simone Stumpf, en Dympna O’Sullivan, “The role of explanations on trust and reliance in clinical decision support systems”.
Jhaver en Antin, “Algorithmic Anxiety and Coping Strategies of Airbnb Hosts”.
Berkeley J Dietvorst, Joseph P Simmons, en Cade Massey, “Algorithm aversion: People erroneously avoid algorithms after seeing them err.”
Sophie Bishop, “Anxiety, panic and self-optimization”.
Kevin Hamilton e.a., “A path to understanding the effects of algorithm awareness”.
Gillespie, “The relevance of algorithms”.

238 Ananny and Crawford, 5–10. 239 Ananny and Crawford, 2. 240 Diakopoulos, “Accountability in algorithmic decision making”. 241 48. 242 Diakopoulos, 58. 243 Diakopoulos, 58. 244 Diakopoulos, 59–61.


Geiger, “Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture”.
Eslami e.a., “Communicating Algorithmic Process in Online Behavioral Advertising”.
Dourish, “Algorithms and their others: Algorithmic culture in context”.
Diakopoulos, “Accountability in algorithmic decision making”.
Crawford, “Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics”.
Henriette Cramer en Jennifer Thom, “Not-So-Autonomous, Very Human Decisions in Machine Learning: Questions when Designing for ML. The human side of machine learning”.
Barry Brown en Eric Laurier, “The trouble with autopilots: Assisted and autonomous driving on the social road”.
Barocas, Hood, en Ziewitz, “Governing algorithms: A provocation piece”.
Ananny en Crawford, “Seeing without knowing: Limitations of the transparency ideal and its application to algorithmic accountability”.
Bussone, Stumpf, en O’Sullivan, “The role of explanations on trust and reliance in clinical decision support systems”.
Jonathan Vitale e.a., “Privacy by Design in Machine Learning Data Collection: A User Experience Experimentation”.
Fedor Bakalov e.a., “An approach to controlling user models and personalization effects in recommender systems”.
Nava Tintarev en Judith Masthoff, “A survey of explanations in recommender systems”.
Sinha en Swearingen, “The role of transparency in recommender systems”.

Pasquale, “Restoring Transparency to Automated Authority”

Napoli, “Automated media: An institutional theory perspective on algorithmic media production and consumption”.
Chen He, Denis Parra, en Katrien Verbert, “Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities”.
Bishop, “Anxiety, panic and self-optimization”.
Bucher, “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook”.
Goodman en Flaxman, “EU regulations on algorithmic decision-making and a ‘right to explanation’”.
Introna, “Algorithms, Governance, and Governmentality: On Governing Academic Writing”.
Kitchin, “Thinking critically about and researching algorithms”.
Brian Y. Lim, Anind K. Dey, en Daniel Avrahami, “Why and why not explanations improve the intelligibility of context-aware intelligent systems”.
Seaver, “Algorithms as culture: Some tactics for the ethnography of algorithmic systems”.
Abdul e.a., “Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda”.
Alvarado en Waern, “Towards Algorithmic Experience”.
Rader, Cotter, en Cho, “Explanations as Mechanisms for Supporting Algorithmic Transparency”.
Allison Woodruff e.a., “A Qualitative Exploration of Perceptions of Algorithmic Fairness”.
Veale, Van Kleek, en Binns, “Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making”.


Ari Schlesinger, Kenton P. O’Hara, en Alex S. Taylor, “Let’s Talk about Race: Identity, Chatbots, and AI”.
Sinha en Swearingen, “The role of transparency in recommender systems”.

Algorithms and discrimination

Researchers have also shown how certain algorithms produce discriminatory practices. For example, Sandvig et al. 245 propose a group of methods to audit algorithms and look for discrimination on internet platforms. Bishop 246 provides another example, describing how YouTube’s algorithm treats female video creators differently.

Academic papers addressing algorithms and their relationship with discrimination

Bishop, “Anxiety, panic and self-optimization”.
Sandvig e.a., “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms”.
Gillespie, “#trendingistrending: When algorithms become culture”.
Boyd, “The Networked Nature of Algorithmic Discrimination”.
Goodman en Flaxman, “EU regulations on algorithmic decision-making and a ‘right to explanation’”.
Massanari, “#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures”.
Zarsky, “The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making”.
Abdul e.a., “Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda”.
Mark Diaz e.a., “Addressing Age-Related Bias in Sentiment Analysis”.
Woodruff e.a., “A Qualitative Exploration of Perceptions of Algorithmic Fairness”.

4.9 Algorithms and bias

Scientific papers have explored latent bias in algorithms and false perceptions of algorithmic objectivity. For example, Bozdag 247 describes the stages at which bias can be inserted into personalization systems, affecting their final results. Another paper describes how sentiment analysis systems exhibit age-related biases 248. Similarly, Gillespie introduces the notion of “public relevance algorithms” to describe those algorithms that enact their labor by “…producing and certifying knowledge” 249. One of the main characteristics of this kind of algorithm is its promise of objectivity: “the way the technical character of the algorithm is positioned as an assurance of impartiality, and how the claim is maintained in the face of controversy” 250.

245 “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms”. 246 “Anxiety, panic and self-optimization”. 247 “Bias in algorithmic filtering and personalization”. 248 Diaz e.a., “Addressing Age-Related Bias in Sentiment Analysis”. 249 “The relevance of algorithms”. 250 Gillespie.


Academic papers addressing algorithms and their biases

Willson, “Algorithms (and the) everyday”.
Gillespie, “Algorithm”.
Bishop, “Anxiety, panic and self-optimization”.
Bozdag, “Bias in algorithmic filtering and personalization”.
Gillespie, “The relevance of algorithms”.
Frizzera, “I / O : Reinforcing Newsmaking Practices Through Algorithmic Media”.
Barocas, Hood, en Ziewitz, “Governing algorithms: A provocation piece”.
Aylin Caliskan, Joanna J. Bryson, en Arvind Narayanan, “Semantics derived automatically from language corpora contain human-like biases”.
Gillespie, “#trendingistrending: When algorithms become culture”.
Goodman en Flaxman, “EU regulations on algorithmic decision-making and a ‘right to explanation’”.
Hamilton e.a., “A path to understanding the effects of algorithm awareness”.
Introna, “Algorithms, Governance, and Governmentality: On Governing Academic Writing”.
Morris, “Curation by code: Infomediaries and the data mining of taste”.
Abdul e.a., “Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda”.
Diaz e.a., “Addressing Age-Related Bias in Sentiment Analysis”.
Rader, Cotter, en Cho, “Explanations as Mechanisms for Supporting Algorithmic Transparency”.
Woodruff e.a., “A Qualitative Exploration of Perceptions of Algorithmic Fairness”.
Veale, Van Kleek, en Binns, “Fairness and Accountability Design Needs for Algorithmic Support in High-Stakes Public Sector Decision-Making”.
Schlesinger, O’Hara, en Taylor, “Let’s Talk about Race: Identity, Chatbots, and AI”.

4.10 Algorithms and privacy

Other research has explored algorithmic user profiling and its relationship with privacy. For instance, Warshaw et al. 251 analyze people’s privacy perceptions of hyper-personal analytics systems. Similarly, Goodman and Flaxman 252 explain the GDPR’s implications for algorithmic profiling and privacy.

Academic papers addressing algorithms and their relationship with privacy

Warshaw e.a., “Can an Algorithm Know the ‘Real You’? Understanding People’ s Reactions to Hyper-personal Analytics Systems”.

Vitale e.a., “Privacy by Design in Machine Learning Data Collection: A User Experience Experimentation”.

251 “Can an Algorithm Know the ‘ Real You ’? Understanding People ’ s Reactions to Hyper-personal Analytics Systems”, Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 2015, 797–806, https://doi.org/10.1145/2702123.2702274. 252 “EU regulations on algorithmic decision-making and a ‘right to explanation’”.


Goodman en Flaxman, “EU regulations on algorithmic decision-making and a ‘right to explanation’”.
Bakalov e.a., “An approach to controlling user models and personalization effects in recommender systems”.
Bozdag, “Bias in algorithmic filtering and personalization”.
Eslami e.a., “Communicating Algorithmic Process in Online Behavioral Advertising”.
Abdul e.a., “Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda”.
Alvarado en Waern, “Towards Algorithmic Experience”.

4.11 Algorithms and visibility

Other academic work describes how algorithms limit the visibility of information on different platforms. For example, Bucher explains how algorithms define what should and should not be visible based on “…preexisting cultural assumptions, but also on anticipated or future-oriented assumptions about valuable and profitable interactions…” 253. The author also describes an existing discrepancy between what users believe they should be exposed to and what the algorithm decides to show 254. Gillespie coins the concept “algorithmically recognizable” to describe the practice of catering to search algorithms’ preferences and thereby increasing a particular piece of content’s chances of appearing in their results 255. Similarly, the author states in a different paper that “we make ourselves already algorithmically recognizable in all sorts of ways” 256, expressing that we are all already trying to appear in search engines, which pushes the platforms to distinguish between genuine characteristics and gaming signals. From the author’s perspective, the struggle between “honest” content and search engine optimization is a false dichotomy, since web content mostly falls in a “middle position”: a mixture of strategy implementation and valuable content creation to promote our own visibility 257.

Academic papers addressing algorithms and their relationship with visibility

Bucher, “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook”.
Massanari, “#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures”.
Bishop, “Anxiety, panic and self-optimization”.
Frizzera, “I / O : Reinforcing Newsmaking Practices Through Algorithmic Media”.
Gillespie, “The relevance of algorithms”.
Gillespie, “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem”.

253 Bucher, “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook”, 1169. 254 1169. 255 “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem”, 65. 256 Gillespie, “The relevance of algorithms”. 257 Gillespie, “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem”, 67–68.


Rader en Gray, “Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed”.

4.12 Algorithms and content diversity

Scientific efforts have addressed algorithmic personalization and its implications for content diversity. For example, Helberger, Karppinen and D’Acunto 258 study diversity as a design principle for recommender systems.

Academic papers addressing algorithms and their relationship with content diversity

Helberger, Karppinen, en D’Acunto, “Exposure diversity as a design principle for recommender systems”.
Bakalov e.a., “An approach to controlling user models and personalization effects in recommender systems”.
Bozdag, “Bias in algorithmic filtering and personalization”.
Bozdag en van den Hoven, “Breaking the filter bubble: democracy and design”.
He, Parra, en Verbert, “Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities”.
Alvarado en Waern, “Towards Algorithmic Experience”.

4.13 Algorithms and their creation

If algorithms have such importance in our societies, it is valuable to discuss how they are coded and defined. Surprisingly, only one related example was found in our broader literature review: Hallinan and Striphas 259 explored the “Netflix Prize” and how participants developed their algorithms to improve the movie recommendation service.

4.14 Algorithms in Intelligent Systems

Academic work related to recommender systems, decision support systems and context-aware systems provides valuable input about their interaction with users. For instance, Tintarev and Masthoff’s 260 survey describes explanations as a way to improve transparency in recommender systems. Similarly, Bussone et al. 261 studied the role of explanations in improving trust in a clinical decision support system. Likewise, Lim, Dey and Avrahami 262 discuss how to improve the understanding of context-aware systems with “why” explanations.

Academic papers addressing algorithms in Intelligent Systems

Hallinan en Striphas, “Recommended for you: The Netflix Prize and the production of algorithmic culture”.

Morris, “Curation by code: Infomediaries and the data mining of taste”.

258 Helberger, Karppinen, en D’Acunto, “Exposure diversity as a design principle for recommender systems”. 259 “Recommended for you: The Netflix Prize and the production of algorithmic culture”. 260 “A survey of explanations in recommender systems”. 261 “The role of explanations on trust and reliance in clinical decision support systems”. 262 “Why and why not explanations improve the intelligibility of context-aware intelligent systems”.


Tintarev en Masthoff, “A survey of explanations in recommender systems”.
Bakalov e.a., “An approach to controlling user models and personalization effects in recommender systems”.
Sinha en Swearingen, “The role of transparency in recommender systems”.
Todd Kulesza e.a., “Tell me more? The Effects of Mental Model Soundness on Personalizing an Intelligent Agent”.
Helberger, Karppinen, en D’Acunto, “Exposure diversity as a design principle for recommender systems”.
He, Parra, en Verbert, “Interactive recommender systems: A survey of the state of the art and future research challenges and opportunities”.
Bussone, Stumpf, en O’Sullivan, “The role of explanations on trust and reliance in clinical decision support systems”.
Lim, Dey, en Avrahami, “Why and why not explanations improve the intelligibility of context-aware intelligent systems”.
Abdul e.a., “Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda”.

4.15 Algorithms in Social Media

This application domain is commonly included in algorithmic research in order to study news consumption and sharing. For example, Bozdag and van den Hoven 263 present prototype examples that break the filter bubble effect in social media. Eslami et al. 264 and Rader and Gray 265 describe how people perceive and have different feelings about Facebook’s algorithm. Bucher 266 also used Facebook to explain the existing threat of user invisibility.

Academic papers addressing algorithms in Social Media

DeVito, Gergle, en Birnholtz, “Algorithms ruin everything: RIPTwitter, Folk Theories, and Resistance to Algorithmic Change in Social Media”.
Eslami e.a., “First I ‘like’ it, then I hide it: Folk Theories of Social Feeds”.
Gillespie, “The relevance of algorithms”.
Schou en Farkas, “Algorithms, interfaces, and the circulation of information: Interrogating the epistemological challenges of Facebook”.
Bozdag, “Bias in algorithmic filtering and personalization”.
Gillespie, “#trendingistrending: When algorithms become culture”.
Frizzera, “I / O : Reinforcing Newsmaking Practices Through Algorithmic Media”.
Diakopoulos, “Accountability in algorithmic decision making”.
Bozdag en van den Hoven, “Breaking the filter bubble: democracy and design”.

263 Bozdag en van den Hoven, “Breaking the filter bubble: democracy and design”. Ananny, “Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness”. 264 “‘I always assumed that I wasn’t really that close to [her]’”, in Proceedings of the 2015 CHI Conference on Human Factors in Computing Systems - CHI ’15 (Seoul, 2015), 153–62, https://doi.org/10.1145/2702123.2702556; “First I ‘like’ it, then I hide it: Folk Theories of Social Feeds”. 265 “Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed”. 266 “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook”; “The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms”.


Ananny, “Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness”.
Willson, “Algorithms (and the) everyday”.
Rader en Gray, “Understanding User Beliefs About Algorithmic Curation in the Facebook News Feed”.
Bucher, “The algorithmic imaginary: exploring the ordinary affects of Facebook algorithms”.
Eslami e.a., “‘I always assumed that I wasn’t really that close to [her]’”.
Crawford, “Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics”.
Warshaw e.a., “Can an Algorithm Know the ‘Real You’? Understanding People’s Reactions to Hyper-personal Analytics Systems”.
Bucher, “Want to be on the top? Algorithmic power and the threat of invisibility on Facebook”.
Sayooran Nagulendra en Julita Vassileva, “Understanding and Controlling the Filter Bubble through Interactive Visualization: A User Study”.
Massanari, “#Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures”.
Gillespie, “Algorithm”.
Helberger, Karppinen, en D’Acunto, “Exposure diversity as a design principle for recommender systems”.
Seaver, “Knowing algorithms”.
Alvarado en Waern, “Towards Algorithmic Experience”.
Rader, Cotter, en Cho, “Explanations as Mechanisms for Supporting Algorithmic Transparency”.

4.16 Algorithms in search engines

Other research focuses on the exploration of search engines and their algorithms. For instance, Gillespie 267 discusses how Google and its algorithmic recognition played an important role in Senator Santorum’s case. Furthermore, Bozdag and van den Hoven 268 exposed filter bubbles in search engines. In another paper, Bozdag 269 uses search engines to describe each step at which human bias could interfere with algorithmic results.

Academic papers addressing algorithms in Search Engines

Danaher, “The Threat of Algocracy: Reality, Resistance and Accommodation”.
Gillespie, “#trendingistrending: When algorithms become culture”.
Gillespie, “The relevance of algorithms”.
Crawford, “Can an Algorithm be Agonistic? Ten Scenes from Life in Calculated Publics”.
Bozdag en van den Hoven, “Breaking the filter bubble: democracy and design”.

267 “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem”. 268 “Breaking the filter bubble: democracy and design”. 269 “Bias in algorithmic filtering and personalization”.


Ananny, “Toward an Ethics of Algorithms: Convening, Observation, Probability, and Timeliness”.
Willson, “Algorithms (and the) everyday”.
Gillespie, “Algorithmically recognizable: Santorum’s Google problem, and Google’s Santorum problem”.
Bozdag, “Bias in algorithmic filtering and personalization”.
Gillespie, “#trendingistrending: When algorithms become culture”.
Frizzera, “I / O : Reinforcing Newsmaking Practices Through Algorithmic Media”.
Pasquale, “Restoring Transparency to Automated Authority”.
Napoli, “Automated media: An institutional theory perspective on algorithmic media production and consumption”.
Helberger, Karppinen, en D’Acunto, “Exposure diversity as a design principle for recommender systems”.
Woodruff e.a., “A Qualitative Exploration of Perceptions of Algorithmic Fairness”.

4.17 Conclusions

Our exploration showed many opportunities to study algorithms in different areas such as news, democracies and culture. For example, studies about algorithms and the contexts in which they are created appear scarce and invite further exploration. Given its importance, this research opportunity should be pursued in order to understand what forces, power structures and contexts influence the creation of algorithms in domains such as news consumption and distribution, culture and similar areas.

Furthermore, transparency was the category with the most papers. This issue is likely the most addressed in research because of its relevance, which invites its continued development within news consumption, recommendation and production contexts. Likewise, most social media related papers deal with Facebook and Twitter, platforms that intervene in news and culture distribution. Future work could study algorithms in other services, such as Instagram or LinkedIn, which also deal with news and cultural consumption. Search engines also provide a fruitful domain for studying the interaction with algorithms, due to their extended use. Users’ algorithmic awareness and transparency comparisons among different services on the market are further topics that could be addressed from a scientific perspective.

Achieving an adequate interaction with algorithms is currently a promising research field for HCI. Being one of the most common technologies used on our devices, algorithms should become allies for users, rather than threats to their privacy, opportunities and information. It is necessary to continue gathering the knowledge needed to address the challenges related to algorithmic interaction.


5 Challenges to Explanations and Transparency

Increased transparency and the ability to provide explanations concerning the deployment of automated recommendation systems would be beneficial. Among other things, it would enable a better understanding amongst data subjects of how their news is curated. Nevertheless, several challenges still exist regarding the actual feasibility of transparency measures in practice. The following sections map the perceived challenges likely to be faced. These challenges relate not only to the difficulty for news providers of providing an explanation, but also to the difficulties data subjects might face in understanding news recommender systems. Afterwards, it will be elaborated how the empirical research could help gain more insight into the perceived challenges concerning transparency.

5.1 Technical Complexity

Algorithms, and machine learning techniques in particular, are complex. As data-driven techniques are equally used in recommender systems, their complexity poses several challenges to achieving full transparency, including the ability to provide an explanation. Understanding the functioning of a news recommender system already requires the expertise of those who developed it. Yet, especially where deep learning techniques are deployed, even an expert audience might find it difficult to grasp the exact functioning of the recommender system. And even where the models used are understood by experts, the explanation must be catered to the forum that requires it: external authorities or society at large might not know what exactly lies behind these systems. In other words, this complexity must be translated to a level the intended forum can understand. Choices will have to be made concerning the level of detail and granularity needed to reach the intended audience. Data subjects may be satisfied with knowing which elements went into a news recommendation, rather than knowing all the details and the separate weight each element carried in a given decision. For example, it might suffice that data subjects know that the recommendation of news items is based on likes from friends or on the communities the data subject is part of.
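To make this more tangible, the sketch below illustrates one possible way of reducing a recommender’s internal feature contributions to a coarse, user-facing explanation that only names the main factors involved. It is a minimal illustration: the feature names, labels, weights and threshold are invented for the example and do not describe any actual news recommender system.

```python
# Illustrative sketch only: mapping hypothetical per-feature contributions for one
# recommended news item to a coarse, lay-audience explanation that omits exact weights.

contributions = {
    "liked_by_friends": 0.42,
    "community_membership": 0.31,
    "reading_history_similarity": 0.19,
    "recency_boost": 0.08,
}

# Human-readable labels aimed at the forum receiving the explanation (here: data subjects).
LABELS = {
    "liked_by_friends": "articles your friends liked",
    "community_membership": "groups or communities you are part of",
    "reading_history_similarity": "articles you read before",
    "recency_boost": "how recent the article is",
}

def user_facing_explanation(contributions, threshold=0.15):
    """Return only the main factors, without exposing the model's exact weights."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    main_factors = [LABELS[feature] for feature, weight in ranked if weight >= threshold]
    return "This article was recommended mainly because of: " + ", ".join(main_factors) + "."

print(user_facing_explanation(contributions))
```

Run on the assumed values above, this would simply tell the reader that the recommendation was mainly based on friends’ likes, community membership and reading history, which may already be the level of granularity a lay audience needs.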

5.2 Data Complexity

News recommender systems do not consist of software alone; they interact with data. An important question concerning the functioning of a news recommender system will therefore be which data are used to generate its results. The answer to this question might not be easy to ascertain. Certain key information points within the automated chain might not have been collected by the data controller or the news provider itself, but rather by a third party. In those cases, a thorough explanation might require that the data controller gains insight into how the data were collected and used by that third party in order to inform the recommender system.
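By way of illustration, the sketch below shows how provenance metadata could, in principle, be attached to the data points feeding a recommender, so that a controller can later identify which inputs originated from a third party and would require further enquiry. The field names and values are hypothetical assumptions made for the example, not an actual data model used by news providers.

```python
# Illustrative sketch only: hypothetical provenance records attached to profile data
# feeding a recommender, so a controller can report which inputs came from a third party.

from dataclasses import dataclass
from datetime import date

@dataclass
class DataPoint:
    name: str            # e.g. "page_views_sports" (invented field name)
    value: object
    source: str          # "first_party" or "third_party"
    provider: str        # who supplied the data
    collected_on: date

profile = [
    DataPoint("page_views_sports", 37, "first_party", "news_site", date(2019, 3, 2)),
    DataPoint("interest_segment", "finance", "third_party", "ad_network", date(2019, 2, 14)),
]

# Which inputs would require the controller to look beyond its own collection practices?
third_party_inputs = [dp for dp in profile if dp.source == "third_party"]
for dp in third_party_inputs:
    print(f"{dp.name}: supplied by {dp.provider} on {dp.collected_on}")
```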

5.3 Legal Complexity

As indicated in the legal state of the art, the legal landscape is fragmented. Though the GDPR provides for information rights, including information concerning the logic involved in automated decision-making, uncertainty amongst data subjects is likely to persist: “which instrument would provide me with the desired information?” Furthermore, data subjects are often unaware of the rights they have been granted under legislation. And even where data subjects are aware of their rights, this does not necessarily mean they will act upon them. Given the pervasive nature of recommender systems in the online environment, it would require considerable administrative effort from the individual to enforce his or her rights in every single case. As with privacy policies, it is unlikely that data subjects will actually enforce their information rights. Moreover, due to the ubiquity of recommender systems, they might experience a certain indifference or fatigue regarding the exercise of their rights: “why even try?” A different matter, for which the empirical research will prove important, is whether an explanation is actually provided by news content providers when one is requested and a legal ground to receive it is present. In other words, do news content providers respect these legal rights in practice, and if they do not, can the rights be enforced through other means? A further legal matter relates to intellectual property. News content providers might refuse to provide details regarding automated news curation because it offers them a unique way to attract and retain audiences. Customer satisfaction, achieved through personalised news provision, can result in more readers and, consequently, more revenue. News content providers might therefore fear that an explanation would give insight into the model they use to provide personalised content. In other words, they might invoke their intellectual property rights on software, and their freedom to conduct a business, in an effort to shield information concerning their models, their “secret sauce”, from third parties.

5.4 Different Purposes Require Different Explanations

Multiple risks are associated with the use of recommender systems. As explained in this deliverable’s first section, the risks not only relate to the processing of personal data; news curation also affects other values deemed important, such as democracy and media pluralism. Considering the multitude of values potentially affected, different expectations exist concerning the explanations that should be received. Although perhaps not all affected values are the subject of investigation within this project’s empirical research, it should nonetheless be noted that the explanation to be received will differ depending on the value that is sought to be protected. A data subject, for example, might want to know how the recommender system processes his or her personal data. A political institution, on the other hand, might want to gauge whether a political bias has crept into the news recommender system. Being able to provide a multitude of explanations, concerning a multitude of values, will become an important task for news content providers who wish to deploy recommender systems.
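As a simplified illustration of the aggregate-level view an oversight body might request, in contrast to the individual explanation a data subject would receive, the sketch below tallies how a hypothetical set of recommended items is distributed across assumed political-leaning labels. The items, labels and proportions are invented for the example and carry no empirical significance.

```python
# Illustrative sketch only: an aggregate check an oversight body (rather than an
# individual data subject) might ask for - the distribution of recommended items
# across assumed political-leaning labels. All data below are invented.

from collections import Counter

recommended_items = [
    {"title": "Budget debate coverage", "leaning": "centre"},
    {"title": "Tax reform op-ed", "leaning": "right"},
    {"title": "Climate march report", "leaning": "left"},
    {"title": "Coalition talks analysis", "leaning": "centre"},
]

distribution = Counter(item["leaning"] for item in recommended_items)
total = sum(distribution.values())
for leaning, count in distribution.most_common():
    print(f"{leaning}: {count / total:.0%} of recommendations")
```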

5.5 Design Complexity

Interpretability and explainability can also be achieved by design. Research in this area is, however, still lacking, given the technical complexity of recommender systems. Design, moreover, does not only relate to the software as such; the term can also refer to the way information is presented to the individual. The presentation of an explanation of, or insight into, the functioning of algorithms can be approached in multiple ways, e.g. textually or via a dashboard. These design choices need to take into account not only the preferences of the target audience, but also its level of knowledge concerning these processes.
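As a minimal illustration of this presentational choice, the sketch below renders the same hypothetical explanation content in two ways: as a short sentence for a textual presentation and as structured data that a transparency dashboard could consume. The field names and wording are assumptions made for the example only.

```python
# Illustrative sketch only: one (hypothetical) explanation rendered two ways -
# as plain text and as structured data a dashboard widget could display.

import json

explanation = {
    "item": "Article on the local elections",
    "main_factors": ["articles your friends liked", "your reading history"],
    "data_used": ["likes", "reading history"],
}

# Textual presentation, for users who prefer a short sentence.
text_view = (f"'{explanation['item']}' was recommended because of "
             f"{' and '.join(explanation['main_factors'])}.")

# Structured presentation, for a privacy or transparency dashboard.
dashboard_view = json.dumps(explanation, indent=2)

print(text_view)
print(dashboard_view)
```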

5.6 The ATAP Project: Objectives

Based upon the literature overview and the challenges identified, this section provides a summary of the main objectives that the ATAP project aims to achieve. A main distinction can be made between the questions the project seeks to answer on a legal level and those it seeks to address from a human-computer interaction perspective. In addition, though on a secondary level, the research performed under Work Packages 2 and 3 might also provide answers to other relevant questions within the automated news curation domain.

a) Legal Objectives

On a legal level, the ATAP project, through an empirical study into the practices of news recommender systems, aims to ascertain the following:

• Do news content providers respect the information requirements stemming from Art. 13(2)f and 14(2)g GDPR?

• Do news content providers provide (extra) information regarding the personalization of news content where an access request following Art. 15(1)h GDPR is submitted?

• Do news content providers provide insight into the logic and envisaged consequences of the news recommender system they use, or do they rely on a restrictive interpretation of Art. 22(1) GDPR to deny access to such information?

• Do news content providers take responsibility for their recommender systems?
• Do news content providers claim editorial control?
• What does it mean, from a legal perspective, to have a right to an explanation? Does such a right exist? And if so, does it exist in practice?

b) Human Computer Interaction Objectives

The Human Computer Interaction research aims to learn how explanations can be better catered to the users of news platforms. The HCI research will be informed by, and builds upon, the outcomes of WP2.

• When are explanations provided: up-front v push v pull?
• What design features are already available?
• Do news content providers involve users/readers in the design process of their algorithms or information?
• What is the design process involved?
• How do news content providers try to incorporate the GDPR in their design process?
• If a user-centred process is followed, how is the low awareness among users addressed?
• Which kinds of professionals are involved in GDPR-related design?
• How can the profile of users be ‘reset’ or deleted?
• Where service providers curate news, who defines the data collection?
• Is there a dashboard for privacy or data-protection related settings?

c) Related Objectives

Though not necessarily directly linked to the legal and HCI research, the project nonetheless aims, through empirical enquiries addressed to content providers, to ascertain more information concerning the following:

• When news content is curated, how are the profiles of the individuals to whom news is presented determined?

• How are recommendations provided or generated, e.g. does content provision depend upon likes, friends’ likes, online behavior, etc.?

• When and why is content considered relevant for the profile of the individual who is subject to news recommendation?


• How do news content providers treat missing/unknown data?
• How do news content providers treat unique or isolated cases?
• How do news content providers decide on the parameters that are relevant when building the news curation model? What are the priorities?
• What kind of model do news content providers use for the recommendation? What are the elements that feed into the decision to opt for a specific model? What are the priorities when choosing one model over another?
• Do news content providers develop and deploy mixed models?
• How do news content providers define the process of data collection?


Future Research

The research performed under Work Package 1 will form the basis of future research performed within the ATAP project. In particular, the Work Package 1 research has ensured an adequate background understanding of the legal framework governing recommender systems, the technicality of recommender systems, and the normative values affected by their deployment. From a legal perspective, understanding the legal framework allows for a better analysis of the empirical research, in which legal rights will be invoked in order to receive an explanation concerning the potential use of recommender systems on the news websites investigated.

Under Work Package 2, all preparations necessary to enable data gathering in Task 2.2 (i.e. the actual empirical research) will be executed, including drafting a list of questions to be investigated. Building on Task 1.3, the empirical study will identify the relevant actors to be investigated and develop online surveys for easy and centralised data gathering. Afterwards, the actual empirical research will be conducted, consisting of contacting online service providers and assessing their compliance strategies for accommodating the right to explanation. Finally, the results will be analysed in an effort to identify key issues concerning algorithmic transparency.

Work Package 3 relates to design research. First, using input from WP1 and WP2, as well as from a sensitising activity (a diary study), two co-design workshops will be organised with 20 end-users. Based on the outcome of this task, several interface prototypes will be created that offer different variations of algorithmic transparency. Building upon these prototypes, several between-subjects experiments will be set up to assess the impact of the various interface designs on users’ comprehension, acceptance and trust of the prototypes.


Annex: Overview Table of Relevant Data Protection Transparency Enablers

Legend: "X" is a clear yes; "x" is potentially/likely.

13 GDPR - Information obligation when PD is collected from DS directly
Type of actor: Content Provider (X), Aggregator (X), Social Media (X)
Operation: Ex Ante (Generic: X, Subject-specific: X)

14 GDPR - Information obligation when PD is collected from DS indirectly
Type of actor: Content Provider (X), Aggregator (X), Social Media (X)
Operation: Ex Ante (Generic: X, Subject-specific: X)

15 GDPR - Right of access
Type of actor: Content Provider (X), Aggregator (X), Social Media (X)
Operation: Ex Post (Generic: X, Subject-specific: X)

15(1)h GDPR - Access to meaningful information about the logic involved, as well as the significance and the envisaged consequences
Type of actor: Content Provider (X), Aggregator (X), Social Media (X)
Operation: Ex Post (Generic: X, Subject-specific: X)

22(3) GDPR - Obligation to implement suitable measures to safeguard the data subject's rights and freedoms and legitimate interests, including at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision
Type of actor: Content Provider (X), Aggregator (X), Social Media (X)
Operation: Ex Ante (Generic: x, Subject-specific: x), Ex Post (Generic: x, Subject-specific: x)

25 GDPR - Data protection by design and by default. The controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures […] in order to meet the requirements of this Regulation and protect the rights of data subjects
Type of actor: Content Provider (X), Aggregator (X), Social Media (X)
Operation: Ex Ante (Generic: x, Subject-specific: x), Ex Post (Generic: x, Subject-specific: x)

35 GDPR - Data protection impact assessment. The assessment shall contain at least […] the measures envisaged to address the risks, including safeguards, security measures and mechanisms to ensure the protection of personal data and to demonstrate compliance with this Regulation taking into account the rights and legitimate interests of data subjects and other persons concerned
Type of actor: Content Provider (X), Aggregator (X), Social Media (X)
Operation: Ex Ante (Generic: x, Subject-specific: x)

Table 8 - Overview Table of relevant provisions to enforce ATAP
