Cognitive technologies: mapping the Internet governance debate


by Goran S. Milovanović

    Introduction

Among the words that first come to mind when Internet governance (IG) is mentioned, complexity surely scores among the forerunners. But do we ever grasp the full complexity of such issues? Is it possible for an individual human mind ever to claim a full understanding of a process that encompasses thousands of actors and a plenitude of different positions, articulates an agenda of almost non-stop ongoing meetings, conferences, forums, and negotiations, and addresses the interests of billions of Internet users? With the development of the Internet, the Information Society, and the Internet governance processes, the amount of information that demands effective processing, in order for us to act rationally and in real time, increases tremendously. Paradoxically, the Information Age, marked by the discovery of the possibility of digital computers in the first half of the twentieth century, demonstrated the shortcomings in processing capacities very quickly as it progressed. The availability of home computers and the Internet has been contributing to this paradox since the early 1990s: as the number of networked social actors grew, the governance processes naturally faced an increased demand for information processing and management. But this is not simply a question of how much raw processing power or how much memory storage we have at our disposal. The complexity of the social processes that call for good governance, as well as the amount of communication that mediates the actions of the actors involved, increase up to a level where qualitatively different forms of management must come into play. One cannot understand them by simply looking at them, or listening to what everyone has to say: there are so many voices, and among billions of thoughts, ideas, concepts, and words, there are known limits to human cognition to be recognised.

The good news is that, as the Information Age progresses, new technologies, founded upon the scientific attempts to mimic the cognitive functions of the human mind, are becoming increasingly available. Many of the computational tools that were previously available only to well-funded research initiatives in cognitive science and artificial intelligence can nowadays run on average desktop computers and laptops. With the increasing trends of cloud computing and the parallel execution of thousands of lines of computationally demanding code, the application of cognitive technologies to the discovery of meaningful regularities in vast amounts of structured and unstructured data is now within reach.

This paper:

provides a simple explanation of what cognitive technologies are;

gives an overview of the main idea of cognitive science (why human minds and computers could be thought of as essentially similar kinds of systems);

discusses in brief how developments in engineering and fundamental research interact to result in cognitive technologies;

presents an example of applied cognitive science (text-mining) in the mapping of the Internet governance debate.


If the known advantages of computers over human minds (namely, the speed of processing that they exhibit in repetitive, well-structured, daunting tasks performed over huge sets of data) can combine with at least some of the advantages of our natural minds over computers, what new frontiers are touched upon? Can computers do more than beat the best of our chess players? Can they help us to better manage the complexity of the societal consequences that have resulted from our own discovery and introduction of digital technologies to human societies? How can cognitive technologies help us analyse and manage global governance processes such as IG? What are their limits, and how will they contribute to societal changes themselves? These are the questions that we address in this short paper, tackling the idea of cognitive technology and providing an illustrative example of its application in the mapping of the IG debate.

    Box 1: Cognitive technologies

The Internet links people; networked computers are merely mediators.

By linking people globally, the Internet has created a network of human minds: systems that are a priori more complex than digital computers themselves.

The networked society exchanges a vast amount of information that could not have been transmitted before the inception of the Internet: management and governance issues become critical.

New forms of governance are introduced: global IG.

New forms of information processing are introduced: cognitive technologies. They result from the application of cognitive science, which studies both natural and artificial minds.

Contemporary cognitive technologies present an attempt to mimic some of the cognitive functions of the human mind.

Increasing raw processing power (cloud computing, parallelisation, massive memory storage) nowadays enables the widespread application of cognitive technologies.

How do they help and what are their limits?

    The main idea: mind as a machine

For obvious reasons, many theoretical discussions and introductions to IG begin with an overview of the history of the Internet. For reasons less obvious, many discussions about the Internet and the Information Society tend to suppress the historical presentation of an idea that is arguably more important than the idea of the Internet itself. The idea is characteristic of the cognitive psychology and cognitive science of the second half of the twentieth century, and it states, in a nutshell, that human minds and digital computers possibly share many important, even essential, properties, and that this similarity in their design (which, as many believe, goes beyond pure analogy) opens a set of prospects towards the development of artificial intelligence, which, if achieved, might prove to be the most important technological development in the future history of humankind. From a practical point of view, and given the current state of technological development, the most important consequence is that at least some of the cognitive functions of the human mind can be mimicked by digital computers. The field of computational cognitive psychology, where behavioural data collected from human participants in experimental settings are modelled mathematically, increasingly contributes to our understanding that the human mind acts, in perception, judgement, decision-making, problem-solving, language comprehension, and other activities, as if it is governed by a set of natural principles that can be effectively simulated on digital computers. Again, even if the human mind is essentially different from a modern digital computer, these findings open a way towards the simulation of human cognitive functions and their enhancement, given that digital computers are able to perform many simple computational tasks with an efficiency that is orders of magnitude above that of natural minds.

An overview of the cornerstones in the historical development of cognitive science is given in Appendix I. The prelude to the history of cognitive science belongs to the pre-World War II epoch, when a generation of brilliant mathematicians and philosophers, certainly best represented by the ingenious British mathematician Alan Mathison Turing (1912-1954), paved the way towards the discovery of the limits of formalisation in logic and mathematics in general.


By formalisation we mean the expression of any idea in a strictly defined, unambiguous language, precisely enough that no two interpreters could possibly argue over its meaning. The concept of formalisation is important: any problem that is encoded by a set of transformations over sequences of symbols, in other words, by a set of sentences in a precise, exact, and unambiguous language, is said to be formalised. The question of whether there is meaning to human life, thus, can probably never be formalised. The question of whether there is a certain way for white to win a chess game, given its initial advantage of having the first move, can be formalised, since chess is a game that receives a straightforward formal description through its well-defined, exact rules. Turing was among those who discovered a way of expressing any problem that can be formalised at all in the form of a computer program for an abstract computational machine known as the Universal Turing Machine (UTM). By providing the definition of his abstract computer, he was able to show how any mathematical reasoning (and all mathematical reasoning takes place in strictly formalised languages) can be essentially understood as a form of computation. Unlike computation in the narrow sense, where the term usually refers to basic arithmetic operations with numbers only, this broad sense of computation encompasses all precisely defined operations over symbols and sets of symbols in some predefined alphabet. The alphabet is used to describe the problem, while the instructions to the Turing Machine control its behaviour, which presents essentially no more than the translation of sets of symbols from their initial form to some other form, with one of the possible forms of transformation being discovered and recognised as a solution to the given problem the moment the machine stops working. More importantly, from Turing's discovery it followed that formal reasoning in logic and mathematics can be performed mechanically, i.e., an automated device could be constructed that computes any computable function at all. The road towards the development of digital computers was thus open. But even more importantly, following Turing's analyses of mechanical reasoning, the question was posed of whether the human mind is simply a biological incarnation of universal computation: a complex universal digital computer, instantiated by biological evolution instead of being a product of design processes, and implemented in carbon-based organic matter instead of silicon. The idea that human intelligence shares the same essential properties as Turing's mechanised system of universal computation proved to be the major driving force in the development of post-World War II cognitive psychology. For the first time in history, mankind not only developed the means of advancing artificial forms of thinking, but instantiated the first theoretical idea that saw the human mind as a natural, mechanical system whose abstract structure is, at least in a sense, analogous to some well-studied mathematical description. A way for the naturalisation of psychology was finally opened, and cognitive science, as the study of natural and artificial minds, was born.
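Purely as an illustration of computation in this broad, symbol-manipulating sense (the sketch is not from the original paper; the machine, its states, and its transition table are invented for the example), a few lines of Python can simulate a tiny Turing-style machine whose "program" is nothing but a table of symbol rewrites; here it halts after inverting a binary string.

```python
# A toy sketch of symbol manipulation in the Turing sense: a transition table
# (the "program") rewrites tape symbols until the machine halts.
# This illustrative machine inverts a binary string (0 -> 1, 1 -> 0).

def run_machine(tape, program, state="scan", blank="_"):
    tape = list(tape) + [blank]
    head = 0
    while state != "halt":
        symbol = tape[head]
        new_symbol, move, state = program[(state, symbol)]
        tape[head] = new_symbol
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Transition table: (state, symbol) -> (symbol to write, head move, next state)
invert = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}

print(run_machine("100110", invert))  # prints 011001
```

The point of the toy is only that the "problem" lives entirely in the alphabet and the transition table: once a task is formalised in this way, the machine solves it by blind symbol rewriting.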

Roughly speaking, three important phases in the development of its mainstream can be recognised during the course of the twentieth century. The first important phase in the development of cognitive science was marked by a clear recognition that, at least in principle, the human mind could operate on principles that are exactly the same as those that govern universal computation. Newell and Simon's Physical Symbol System Hypothesis [1] provides probably the most important theoretical contribution to this first, pioneering phase. Attempts to design universal problem solvers and computers that successfully play chess were characteristic of the first phase. The ability to produce and understand natural language was recognised as a major characteristic of an artificially intelligent system. An essential critique of this first phase in the historical development of cognitive science was provided by the philosopher Hubert Dreyfus in his classic What Computers Can't Do in 1972. [2] The second phase, starting approximately in the 1970s and gaining momentum during the 1980s and 1990s, was characterised by an emphasis on the problems of learning, the restoration of the importance of some of the pre-World War II principles of behaviouristic psychology, the realisation that well-defined formal problems such as chess are not really representative of the problems that human minds are really good at solving, and the exploitation of a class of computational models of cognitive functions known as neural networks. The results of this second phase, marked mainly by the theoretical movement of connectionism, showed how sets of strictly defined, explicit rules almost certainly fail to describe adequately the highly flexible, adaptive nature of the human mind. [3a, 3b] The third phase is rooted in the 1990s, when many cognitive scientists began to understand that human minds essentially operate on variables of uncertain value, with incomplete information, and in uncertain environments.


Sometimes referred to as the probabilistic turn in cognitive science, [4] the important conclusion of this latest phase is that the language of probability theory, used instead of (or in conjunction with) the language of formal logic, provides the most natural way to describe the operation of the human cognitive system. The widespread application of decision theory, describing the human mind as a biological organ that essentially evolved in order to perform the function of choice under risk and uncertainty, is characteristic of the most recent developments in this third, contemporary phase in the history of cognitive science. [5]

    Box 2. The rise of cognitive science

In summary:

Fundamental insights in twentieth-century logic and mathematics enabled a first attempt at a naturalistic theory of human intelligence.

Alan Turing's seminal contribution to the theory of computation enabled a direct parallel between the design of artificially and naturally intelligent systems.

This theory, in its mainstream form, sees no essential differences between the structure of the human mind and the structure of digital computers, both viewed at the most abstract level of their design.

Different theoretical ideas and mathematical theories were used to formalise the functioning of the mind during the second half of the twentieth century. The ideas of physical symbol systems, neural networks, and probability and decision theory played the most prominent roles in the development of cognitive science.

The machine as a mind: applied cognition

As is widely acknowledged, humanity has still not achieved the goal of developing true artificial intelligence. What, then, is applied cognition? At the current stage of development, applied cognitive science encompasses the application of mostly partial solutions to partial cognitive problems. For example, we cannot build software that reads Jorge Luis Borges' collected short stories and then produces a critical analysis from the viewpoint of some specific school of literary critique. One might say that not many human beings can actually do that. But we cannot accomplish even simpler tasks, the general rule being that the more general a cognitive task gets, the harder it is to simulate. What we can do, for example, is feed software with a large collection of texts from different authors, let it search through them, recognise the most familiar words and patterns of word usage, and then successfully predict the authorship of a previously unseen text. We can teach computers to recognise some visual objects by learning with feedback from their descriptions in terms of simpler visual features, and we are getting good at making them recognise faces and photographs. We cannot ask a computer to act creatively in the way that humans do, but we can make it prove complicated mathematical theorems that would call for years of mathematical work by hand, and even produce aesthetically pleasing visual patterns and music by sampling, resampling, and adding random, but not completely irregular, noise to initial sound patterns.
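As a hedged sketch of the authorship task just described (this is not the authors' software; the corpus, the labels, and the choice of a bag-of-words model with a naive Bayes classifier are assumptions made only for illustration), the pattern could look roughly like this in Python with scikit-learn:

```python
# A minimal sketch of authorship attribution from word-usage patterns,
# using a bag-of-words representation and a naive Bayes classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical training data: texts with known authors (placeholders).
texts = ["...text by author A...", "...another text by A...",
         "...text by author B...", "...another text by B..."]
authors = ["A", "A", "B", "B"]

vectoriser = CountVectorizer()           # word-frequency features
X = vectoriser.fit_transform(texts)
classifier = MultinomialNB().fit(X, authors)

unknown = ["...a previously unseen text..."]
print(classifier.predict(vectoriser.transform(unknown)))  # predicted author
```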

In cognitive science, engineers learn from psychologists and vice versa: mathematical models, developed initially to solve purely practical problems, are imported into psychological theories of cognitive functions. The goals that cognitive engineers and psychologists pursue are only somewhat different. While the latter address mainly the functioning of natural minds, the former do not have to constrain a solution to some cognitive problem by imposing on it the limits of the human mind and the realistic neurophysiology of the brain. Essentially, the direction of the arrow usually goes from mathematicians and engineers towards psychologists: the ideas proposed in the field of artificial intelligence (AI) are tested only after having been dressed in a suit of empirical psychological theory. However, engineers and mathematicians in AI discover their ideas by observing and reflecting on the only known truly intelligent system, namely the real, natural, human mind.

Many computational methods were thus first discovered in the field of AI before they were tried out as explanations of the functioning of the human mind. To begin with, the idea of physical symbol systems, provided by Newell and Simon in the early formulation of cognitive science, presents a direct interpretation of the symbolic theory of computation initially proposed by Turing and the mathematicians of the first half of the twentieth century. Neural networks, which present a class of computational models that can learn to respond to complex external stimuli in a flexible and adaptive way, were clearly motivated by the empirical study of learning in humans and animals. However, they were first proposed as an idea in the field of artificial intelligence, and only later applied in human cognitive psychology. Bayesian networks, known also as causal (graphical) models, [6] represent structured probabilistic machinery that deals efficiently with learning, prediction, and inference tasks, and were again first proposed in AI before heavily influencing the most recent developments in psychology. Decision and game theory, to provide an exception, were initially developed and reflected on in pure mathematics and mathematical economics before being imported into the arena of empirical psychology, where they still represent both a focal subject of experimental research and a mathematical modelling toolkit.

The current situation in applying the known principles and methods of cognitive science can be described as eclectic. In applications to real-world problems, where the aim is not necessarily to describe truthfully the functioning of the human mind, the algorithms developed by cognitive scientists do not need to obey any theoretical purity. Many principles discovered in empirical psychology, for example reinforcement learning, are applied without necessarily applying them in exactly the same way as they are thought to operate in natural learning systems.

As already noted, it is uncertain whether applied cognition will ever produce an AI that fully resembles the natural mind. A powerful analogy has been proposed: people rarely acknowledge that humankind has never fully understood natural flight in birds or insects, in spite of the fact that we have and use the artificial flight of airplanes and helicopters. The equations that would correctly describe the natural, dynamic, biomechanical systems that fly are simply too complicated and, in general, cannot be solved analytically even if they can be written down. But we invented artificial flight by reflecting on the principles of the flight of birds, without ever having a complete scientific understanding of it. Maybe AI will follow the same path: we may have useful, practical, and powerful cognitive applications without ever understanding the functioning of the human mind in its totality.

The main goal of current cognitive technologies, the products of applied cognitive science, is to help natural human minds better understand very complex cognitive problems (those that would be hard to comprehend by our mental functions alone) and to increase the speed and amount of processing that some cognitive tasks require. For example, studying thousands of text documents in order to describe, at least roughly, the main themes discussed in them can be automated to a degree that helps human beings get the big picture without actually reading through all of them.

    Box 3. Applied cognition

Cognitive engineers and cognitive psychologists learn from each other. The former reflect on natural minds and build algorithms that solve certain classes of cognitive problems, which leads directly to applications, while the latter test the proposed models experimentally to determine whether they describe the workings of the human mind adequately.

Many principles of cognitive psychology are applied to real-world problems without necessarily mimicking the corresponding faculties of the human mind exactly. We discover something, then change it to suit our present purpose.

We provide partial solutions only, since global human cognitive functioning is still too difficult to describe. However, even the partial solutions that are nowadays available go far beyond what computers could have done only decades ago.

Contemporary cognitive technologies focus mainly on reducing the complexity of some cognitive tasks that would be hard to perform by relying on our natural cognitive functions only.

Example: applying text-mining to map the IG debate

The NETmundial Multistakeholder Statement of São Paulo,1 the final outcome document of NETmundial (22-23 April 2014), the Global Multistakeholder Meeting on the Future of IG, resulted from a political process of immense complexity.

    1 http://netmundial.br/netmundialmultistakeholderstatement/


Numerous forms of input, various expertise, several preformed bodies, and a mass of individuals and organisations representing different stakeholders all interfaced, both online and in situ, through a complex timeline of the NETmundial process, to result in this document. On 3 April, the NETmundial Secretariat prepared the first draft, having previously processed more than 180 content contributions. The final document resulted from the negotiations in São Paulo, based on the second draft, which itself incorporated numerous suggestions made in comments on the first draft. The multistakeholder process of document drafting introduced in its production is already seen by many as a future common ingredient of global governance processes in general. Given the complexity of the IG debate alone, one could have anticipated that more complex forms of negotiation, decision-shaping, and crowdsourced document production would naturally emerge. As the complexity of the processes under analysis increases, the complexity of the tools used to conduct the analyses must increase also. At the present point of its development, DiploFoundation's Text Analytics Framework (DTAF) operates on the Internet Governance Forum (IGF) Text Corpus, a collection of all available session, workshop, and panel transcripts from the IGF 2006-2014, encompassing more than 600 documents and utterances contributed on behalf of hundreds of speakers. By any standards in the field of text-mining (an area of applied cognitive science which focuses on statistical analyses of the patterns of words that occur in natural language), both the NETmundial collection of content contributions and the IGF Text Corpus present rather small datasets. Analyses of text corpora that encompass tens of thousands of documents are rather common. Imagine incorporating all websites, social media, and newspaper and journal articles on IG, in order to perform a full-scale monitoring of the discourse of the IG debate, and you are already there.

Obviously, the cognitive task of mapping the IG debate, represented even by only the two text corpora that we discuss here, is highly demanding. It is questionable whether a single policy analyst or social scientist would manage to comprehend the full complexity of the IG discourse in several years of dedicated work. Here we illustrate the application of text-mining, a typical cognitive technology used nowadays, to the discovery of useful, structured information in large collections of texts. We will focus our attention on the NETmundial corpus of content contributions and ask the following question: What are the most important themes, or topics, that have appeared in this set of more than 180 contributions, including the NETmundial Multistakeholder Statement of São Paulo? In order to answer this question, we first need to hypothesise a model of how the NETmundial discourse was produced. We rely on a fairly well-studied and frequently applied model in text-mining, known by the rather technical name of Latent Dirichlet Allocation (LDA; see the Methodology section in Appendix II [7, 8, 9]). In LDA, it is assumed that each word (or phrase) in some particular discourse is produced from a set of underlying topics with some initially unknown probability. Thus, each topic is defined as a probability distribution across the words and phrases that appear in the documents. It is also assumed that each document in the text corpus is produced from a mixture of topics, each of them weighted differently in proportion to their contribution to the generation of the words that comprise the document. Additional assumptions must be made about the initial distribution of topics across documents. All these assumptions are assembled in a graphical model that describes the relationships between the words, documents, and latent topics. One normally fits a number of LDA models that encompass different numbers of topics and relies on the statistical properties of the obtained solutions to recognise which one provides the best explanation for the structure of the text corpus under analysis. In the case of the NETmundial corpus of content contributions, an LDA model with seven topics was selected. Appendix II presents the fifteen most probable words generated by each of the seven underlying topics. By inspecting which words are most characteristic of each of the topics discovered in this collection of texts, we were able to provide meaningful interpretations2 of the topics. We find that the NETmundial content contributions were mainly focused on questions of (1) human rights, (2) multistakeholderism, (3) a global governance mechanism for ICANN, (4) information security, (5) IANA oversight, (6) capacity building, and (7) development (see Table A2.1 in Appendix II).
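To make the procedure concrete, a minimal sketch of such a topic-extraction pipeline is given below. The paper itself reports using the topicmodels R package with the VEM algorithm (see Appendix II); the scikit-learn code here, with placeholder documents, is only a rough, hypothetical equivalent, not the authors' actual implementation.

```python
# A sketch of topic extraction with Latent Dirichlet Allocation (LDA):
# term counts per document -> a fitted model -> the most probable words per topic.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Placeholder corpus; in the analysis described here this would be the
# 182 NETmundial content contributions.
documents = ["...content contribution 1...", "...content contribution 2..."]

vectoriser = CountVectorizer(stop_words="english")
dtm = vectoriser.fit_transform(documents)            # document-term matrix

lda = LatentDirichletAllocation(n_components=7, random_state=1).fit(dtm)

terms = vectoriser.get_feature_names_out()
for k, topic in enumerate(lda.components_, start=1):
    top = [terms[i] for i in topic.argsort()[::-1][:15]]  # 15 most probable words
    print(f"Topic {k}: {', '.join(top)}")
```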

In order to help a human policy analyst in their research on NETmundial, for example, we could determine the contribution of each of these seven topics to each document from the collection of content contributions, so that an analyst interested in just some aspects of this complex process could select only the most relevant documents.

2 I wish to thank Mr Vladimir Radunović of DiploFoundation for his help in the interpretation of the topics obtained from the LDA model of the NETmundial content contributions.


As an illustration, Figure A2.1 in Appendix II presents the distributions of topics found in the content contributions of two important stakeholders in the IG arena, civil society and government. It is easily read from the displays that the representatives of civil society organisations strongly emphasised human rights (Topic 1 in our model) in their contributions, while representatives of national governments focused more on IANA oversight (Topic 5) and development issues (Topic 7).
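A possible way to derive such per-stakeholder topic profiles, continuing the hypothetical scikit-learn sketch above (the stakeholder labels and the variables lda and dtm are assumed from that sketch, not taken from the paper), is:

```python
# Average the per-document topic distributions within each stakeholder group,
# yielding the share (%) of every topic in that group's contributions.
import numpy as np

stakeholders = np.array(["civil society", "government"])   # hypothetical label per document
doc_topics = lda.transform(dtm)                             # rows: documents, columns: topic weights

for group in np.unique(stakeholders):
    profile = doc_topics[stakeholders == group].mean(axis=0)
    profile = 100 * profile / profile.sum()                  # normalise to percentages
    print(group, np.round(profile, 1))
```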

Figure A2.2 in Appendix II presents the structure of similarities between the most important words in the human rights topic (Topic 1, Table A2.1 in Appendix II). We first selected only the content contributions made on behalf of civil society organisations. Then we used the probability distributions of words across topics and the distribution of topic weights across the documents to compute the similarities between all relevant words. Since similarity computed in this way is represented in a high-dimensional space, and is thus not suitable for direct visualisation, we decided to use the graph presented in Figure A2.2. Each node in Figure A2.2 represents a word, and each word receives exactly three arrows. These arrows originate at nodes that represent the words found to be among the three most similar to the target word. Each word is the origin of as many links as there are words in whose set of the three most similar words it is found. Thus we can use the graph representation to assess the similarities in the patterns of word usage across different collections of documents. The lower display in Figure A2.2 presents the similarity structure in the human rights topic extracted from governmental content contributions to NETmundial only. By comparing the two graphs, we can see that only slight differences appear, in spite of the fact that the importance of the human rights topic is different in the content contributions of these two stakeholders. Thus, they seem to understand the conceptual realm of human rights in a similar way, but tend to accentuate it differently in the statements of their respective positions.
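One plausible construction of such a graph, again continuing the hypothetical sketch above, is outlined below; the paper does not specify its similarity measure, so the use of cosine similarity over word-topic vectors is an assumption made here purely for illustration.

```python
# Build a directed graph in which every word receives edges from its three
# most similarly used words, based on cosine similarity of word-topic vectors.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity

word_topic = lda.components_.T                      # rows: words, columns: topic weights
similarity = cosine_similarity(word_topic)
np.fill_diagonal(similarity, -np.inf)               # a word is not its own neighbour

edges = []
for j in range(similarity.shape[0]):
    neighbours = np.argsort(similarity[:, j])[::-1][:3]   # three most similar words to word j
    for i in neighbours:
        if np.isfinite(similarity[i, j]):
            edges.append((terms[i], terms[j]))             # arrow i -> j
print(edges)
```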

The conclusions that stem from our cognitive analysis of the NETmundial content contributions could have been reached by a person who did not actually read any of these documents at all. The analysis does rely on some built-in human expert knowledge, but once set up, it can produce this and similar results in a fully automated manner. While it is not advisable to use this and similar methods instead of a real, careful study of the relevant documents, their power to improve on the work of skilled, thoroughly educated scholars and professionals should be emphasised.

    Concluding remarks

However far we are from the ideal of true artificial intelligence, and given that the definition of what true artificial intelligence might be is not very clear in itself, the cognitive technologies that have emerged after more than 60 years of studying the human mind as a natural system are nowadays powerful enough to provide meaningful applications and valuable insight. With the increasing trends of big data, the numerous scientists involved in the development of more powerful algorithms and even faster computers, cloud computing, and the means for massive data storage, even very hard cognitive problems will become addressable in the near future. The planet, our ecosystem, now almost completely covered by the Internet, will introduce an additional layer of cognitive computation, making information search, retrieval, data mining, and visualisation omnipresent in our media environments.

A prophecy to end this paper with: not only will this layer of cognitive computation bring about more efficient methods of information management and extend our personal cognitive capacities, it will itself introduce additional questions and complications to the existing IG debate. Networks intermixed with human minds and narrowly defined artificial intelligences will soon begin to present the major units for representing interests and ideas, and their future political significance should not be underestimated now, when their development is still in its infancy. They will grow fast, as fast as the field of cognitive science did.


    Bibliography

[1] Newell A and Simon HA (1976) Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM 19(3), 113-126. doi:10.1145/360018.360022

[2] Dreyfus H (1972) What Computers Can't Do. New York: MIT Press. ISBN 0060906138

[3a] Rumelhart DE, McClelland JL and the PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 1: Foundations. Cambridge, MA: MIT Press.

[3b] McClelland JL, Rumelhart DE and the PDP Research Group (1986) Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Volume 2: Psychological and Biological Models. Cambridge, MA: MIT Press.

[4] Oaksford M and Chater N (2009) Précis of Bayesian Rationality: The Probabilistic Approach to Human Reasoning. Behavioral and Brain Sciences 32(1), 69-84. doi:10.1017/S0140525X09000284

[5] Glimcher P (2003) Decisions, Uncertainty, and the Brain: The Science of Neuroeconomics. Cambridge, MA: MIT Press.

[6] Pearl J (2000) Causality: Models, Reasoning, and Inference. Cambridge: Cambridge University Press.

[7] Blei DM, Ng AY and Jordan MI (2003) Lafferty J (ed.) Latent Dirichlet Allocation. Journal of Machine Learning Research 3(4-5), 993-1022. doi:10.1162/jmlr.2003.3.4-5.993

[8] Griffiths TL, Steyvers M and Tenenbaum JB (2007) Topics in semantic representation. Psychological Review 114, 211-244. doi:10.1037/0033-295X.114.2.211

[9] Grün B and Hornik K (2011) topicmodels: An R Package for Fitting Topic Models. Journal of Statistical Software 40(13). Available at http://www.jstatsoft.org/v40/i13


    Appendix I

    Timeline of cognitive science

    Year Selected developments

1936 Turing publishes On Computable Numbers, with an Application to the Entscheidungsproblem. Emil Post achieves similar results independently of Turing. The idea that (almost) all formal reasoning in mathematics can be understood as a form of computation becomes clear.

1945 The von Neumann architecture, employed in virtually all computer systems in use nowadays, is presented.

1950 Turing publishes Computing Machinery and Intelligence, introducing what is nowadays known as the Turing Test for artificial intelligence.

1956 George Miller discusses the constraints on human short-term memory in computational terms.

Noam Chomsky introduces the Chomsky Hierarchy of formal grammars, enabling the computer modelling of linguistic problems.

Allen Newell and Herbert Simon publish a work on the Logic Theorist, mimicking the problem-solving skills of human beings; the first AI program.

1957 Frank Rosenblatt invents the Perceptron, an early neural network algorithm for supervised classification. The critique of the Perceptron published by Marvin Minsky and Seymour Papert in 1969 is frequently thought of as responsible for delaying the connectionist revolution in cognitive science.

1972 Stephen Grossberg starts publishing results on neural networks capable of modelling various important cognitive functions.

1979 James J. Gibson publishes The Ecological Approach to Visual Perception.

1982 David Marr's Vision: A Computational Investigation into the Human Representation and Processing of Visual Information makes a strong case for computational models of biological vision and introduces the commonly used levels of cognitive analysis (computational, algorithmic/representational, and physical).

1986 Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Vols 1 and 2, are published, edited by David Rumelhart, Jay McClelland, and the PDP Research Group. The onset of connectionism (the term was first used by David Hebb in the 1940s). Neural networks are considered as powerful models to capture the flexible, adaptive nature of human cognitive functions.


1990s Probabilistic turn: the understanding slowly develops, in many scientific centres and in the work of many cognitive scientists, that the language of probability theory provides the most suitable means of describing cognitive phenomena. Cognitive systems control the behaviour of organisms that have only incomplete information about the uncertain environments to which they need to adapt.

The Bayesian revolution: most probabilistic models of cognition are expressed in mathematical models relying on the application of Bayes' theorem and Bayesian analysis. Latent Dirichlet Allocation (used in the example in this paper) is a typical example of Bayesian analysis.

A methodological revolution is introduced by Pearl's study of causal (graphical) models (also known as Bayesian networks).

John Anderson's methodology of rational analysis.

1992 Francisco J. Varela, Evan T. Thompson, and Eleanor Rosch publish The Embodied Mind: Cognitive Science and Human Experience, formulating another theoretical alternative to classical symbolic cognitive science.

2000s Decision-theoretic models of cognition. Neuroeconomics: the human brain as a decision-making organ. The understanding of the importance of risk and value in describing cognitive phenomena begins to develop.

Geoffrey Hinton and others introduce deep learning: a powerful learning method for neural networks, partially based on ideas that were already under discussion in the 1980s and early 1990s.


    Appendix II

Topic model of the content contributions to NETmundial

Methodology. A terminological model of the IG discourse was first developed by DiploFoundation's IG experts. This terminological model encompasses almost 5000 IG-specific words and phrases. The text corpus of NETmundial content contributions in this analysis encompasses 182 documents. The corpus was preprocessed and automatically tagged for the presence of the IG-specific words and phrases. The resulting document-term matrix, describing the frequencies of use of IG-specific terms across the 182 available documents, was modelled by Latent Dirichlet Allocation (LDA), a statistical model that enables the recognition of semantic topics (i.e., thematic units) that account for the frequency distribution in the given document-term matrix. A single topic comprises all IG-specific terms; the topics differ by the probability they assign to each IG-specific term. The model selection procedure was as follows. We split the text corpus into two halves by randomly assigning documents to the training and the test set. We fit LDA models ranging from two to twenty topics to the training set and then compute the perplexity (an information-theoretic, statistical measure of badness-of-fit) of the fitted models on the test set. We select the best model as the one with the lowest perplexity. Since the text corpus is rather small, we repeated this procedure 400 times and looked at the distribution of the number of topics from the best-fitting LDA models across all iterations. This procedure pointed towards a model encompassing seven topics. We then fitted the LDA with seven topics to the whole NETmundial corpus of content contributions. Table A2.1 presents the most probable words per topic. The original VEM algorithm was used to estimate the LDA model.
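A rough sketch of this perplexity-based selection loop is shown below. It assumes the hypothetical scikit-learn pipeline from the earlier sketches (the analysis reported here actually used the topicmodels R package with VEM), with dtm standing for the tagged document-term matrix.

```python
# Sketch of perplexity-based selection of the number of topics: repeatedly split
# the corpus, fit LDA models with 2..20 topics on one half, and record which
# number of topics gives the lowest held-out perplexity on the other half.
from collections import Counter
from sklearn.model_selection import train_test_split
from sklearn.decomposition import LatentDirichletAllocation

best_k = Counter()
for repetition in range(400):                       # the paper reports 400 repetitions
    train, test = train_test_split(dtm, test_size=0.5, random_state=repetition)
    perplexities = {
        k: LatentDirichletAllocation(n_components=k, random_state=repetition)
              .fit(train).perplexity(test)
        for k in range(2, 21)
    }
    best_k[min(perplexities, key=perplexities.get)] += 1

print(best_k.most_common(3))   # most frequently winning numbers of topics
```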

Table A-2.1. Topics in the NETmundial text corpus. The columns represent the topics recovered by the application of LDA to the NETmundial content contributions. The words are listed by their probability of being generated by each topic.

Topic 1. Human rights: right, human rights, principle, cyberspace, state, information, internet, protection, access, communication, surveillance, law, respect, international, charter

Topic 2. Multistakeholderism: IG, stakeholder, internet, principle, process, discuss, issue, participation, ecosystem, need, role, multistakeholder, governance, NETmundial, address

Topic 3. Global governance mechanism for ICANN: internet, global, governance, ICANN, need, technical, role, system, issue, IG, local, principle, level, country, state

Topic 4. Information security: internet, security, service, data, cyber, network, country, need, control, information, nation, policy, effective, trade, user

Topic 5. IANA oversight: ICANN, IANA, organisation, function, operation, account, process, review, policy, DNS, board, GAC, multistakeholder, model, government

Topic 6. Capacity building: curriculum, technology, analysis, research, education, blog, online, association, similarity, term, product, content, integration, innovative, public

Topic 7. Development: internet, IG, global, development, principle, open, governance, participation, continue, stakeholder, access, model, organisation, innovative, economic


Figure A-2.1. The comparison of civil society and government content contributions to NETmundial. We assessed the probabilities with which each of the seven topics from the LDA model of the NETmundial content contributions determines the contents of the documents, averaged these across all documents per stakeholder, normalised them, and expressed the contribution of each topic in %.


Figure A-2.2. The conceptual structures of the topic of human rights (Topic 1 in the LDA model of NETmundial content contributions) for civil society and government contributions. The graphs represent the 3-neighbourhoods of the 15 most important words in the topic of human rights (Topic 1 in the LDA model). Each node represents a word and has exactly three arrows pointed at it: the nodes from which these arrows originate represent the words found to be among the three words most similarly used to the word that receives the links.

(Figure panels: Civil Society and Government.)


    About the author

Goran S. Milovanović is a cognitive scientist who studies behavioural decision theory, the perception of risk and probability, statistical learning theory, and psychological semantics. He has studied mathematics, philosophy, and psychology at the University of Belgrade, and graduated from the Department of Psychology. He began his PhD studies in the Doctoral Program in Cognition and Perception, Department of Psychology, New York University, USA, while defending a doctoral thesis entitled Rationality of Cognition: A Meta-Theoretical and Methodological Analysis of Formal Cognitive Theories at the Faculty of Philosophy, University of Belgrade, in 2013. Goran has a classic academic training in experimental psychology, but his current work focuses mainly on the development of mathematical models of cognition, and the theory and methodology of behavioural sciences.

He organised and managed the first research on Internet usage and attitudes towards information technologies in Serbia and the region of SE Europe, while managing the research programme of the Center for Research on Information Technologies (CePIT) of the Belgrade Open School (2002-2005), the foundation of which he initiated and supported. He edited and co-authored several books on Internet behaviour, attitudes towards the Internet, and the development of the Information Society. He managed several research projects on Internet governance in cooperation with DiploFoundation (2002-2014) and also works as an independent consultant in applied cognitive science and da