13
Through this collection of programmatic state- ments from key figures in the field, we chart the progress of AI and survey current and future direc- tions for AI research and the AI community. W e know—with a title like that, you’re expecting something awful. You’re on the lookout for fanciful prognos- tications about technology: Someday comput- ers will fit in a suitcase and have a whole megabyte of memory. And you’re wary of lurid Hollywood visions of “the day the robots come”: A spiderlike machine pins you to the wall and targets a point four inches behind your eyes with long and disturbing spikes. Your last memory before it uploads you is of it ask- ing, with some alien but unmistakable existen- tial agony, what does this all mean? We are not here to offer a collection of fic- tion. Artificial intelligence isn’t a nebulous goal that we hope—or fear—humanity one day achieves. AI is an established research practice, with demonstrated successes and a clear trajec- tory for its immediate future. David Leake, the editor-in-chief of AI Magazine, has invited us to take you on a tour of AI practice as it has been playing out across a range of subcommunities, around the whole world, among a diverse com- munity of scientists. For those of us already involved in AI, we hope this survey will open up a conversation that celebrates the exciting new work that AI researchers are currently engaged in. At the same time, we hope to help to articulate, to readers thinking of taking up AI research, the rewards that practitioners can expect to find in our mature discipline. And even if you’re a stranger to AI, we think you’ll enjoy this inside peek at the ways AI researchers talk about what they really do. We asked our contributors to comment on goals that drive current research in AI; on ways our current questions grow out of our past re- sults; on interactions that tie us together as a field; and on principles that can help to repre- sent our profession to society at large. We have excerpted here from the responses we received and edited them together to bring out some of the most distinctive themes. We have grouped the discussion around five broad topics. We begin in the first section (“Progress in AI”) by observing how far AI has come. After decades of research, techniques de- veloped within AI don’t just inform our under- standing of complex real-world behavior. AI has advanced the boundaries of computer sci- ence and shaped users’ everyday experiences with computer systems. We continue in the second section (“The AI Journey: Getting Here”) by surveying contributors’ experience doing AI. This section offers a more intimate glimpse inside the progress that AI has made as our contributors explain how their approaches 25th Anniversary Issue WINTER 2005 85 Copyright © 2005, American Association for Artificial Intelligence. All rights reserved. 0738-4602-2005 / $2.00 Artificial Intelligence: The Next Twenty-Five Years Edited by Matthew Stone and Haym Hirsh AI Magazine Volume 26 Number 4 (2006) (© AAAI) AI Magazine Volume 26 Number 4 (2005) (© AAAI)

Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

  • Upload
    others

  • View
    7

  • Download
    0

Embed Size (px)

Citation preview

Page 1: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

■ Through this collection of programmatic state-ments from key figures in the field, we chart theprogress of AI and survey current and future direc-tions for AI research and the AI community.

We know—with a title like that, you’reexpecting something awful. You’reon the lookout for fanciful prognos-

tications about technology: Someday comput-ers will fit in a suitcase and have a wholemegabyte of memory. And you’re wary of luridHollywood visions of “the day the robotscome”: A spiderlike machine pins you to thewall and targets a point four inches behindyour eyes with long and disturbing spikes. Yourlast memory before it uploads you is of it ask-ing, with some alien but unmistakable existen-tial agony, what does this all mean?

We are not here to offer a collection of fic-tion. Artificial intelligence isn’t a nebulous goalthat we hope—or fear—humanity one dayachieves. AI is an established research practice,with demonstrated successes and a clear trajec-tory for its immediate future. David Leake, theeditor-in-chief of AI Magazine, has invited us totake you on a tour of AI practice as it has beenplaying out across a range of subcommunities,around the whole world, among a diverse com-munity of scientists.

For those of us already involved in AI, wehope this survey will open up a conversation

that celebrates the exciting new work that AIresearchers are currently engaged in. At thesame time, we hope to help to articulate, toreaders thinking of taking up AI research, therewards that practitioners can expect to find inour mature discipline. And even if you’re astranger to AI, we think you’ll enjoy this insidepeek at the ways AI researchers talk about whatthey really do.

We asked our contributors to comment ongoals that drive current research in AI; on waysour current questions grow out of our past re-sults; on interactions that tie us together as afield; and on principles that can help to repre-sent our profession to society at large. We haveexcerpted here from the responses we receivedand edited them together to bring out some ofthe most distinctive themes.

We have grouped the discussion around fivebroad topics. We begin in the first section(“Progress in AI”) by observing how far AI hascome. After decades of research, techniques de-veloped within AI don’t just inform our under-standing of complex real-world behavior. AIhas advanced the boundaries of computer sci-ence and shaped users’ everyday experienceswith computer systems. We continue in thesecond section (“The AI Journey: GettingHere”) by surveying contributors’ experiencedoing AI. This section offers a more intimateglimpse inside the progress that AI has made asour contributors explain how their approaches

25th Anniversary Issue

WINTER 2005 85Copyright © 2005, American Association for Artificial Intelligence. All rights reserved. 0738-4602-2005 / $2.00

Artificial Intelligence:The Next

Twenty-Five YearsEdited by Matthew Stone

and Haym Hirsh

AI Magazine Volume 26 Number 4 (2006) (© AAAI)AI Magazine Volume 26 Number 4 (2005) (© AAAI)

Page 2: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

another’s interests and new research toexplore together. As this research bringsresults, it cements these intellectual con-nections in new kinds of computer sys-tems and in a deeper understanding ofhuman experience. We’re pleased andhonored to present just a slice of a snap-shot of this dynamic process. Our onlyprediction for the next 25 years is that itwill continue to bring us unexpected in-sights and connections to one another.

Every member of AAAI ideally deservesto contribute to this article. But, obvious-ly, space limits do not make that possible,and thus we solicited feedback from fartoo small a number of the many peoplewho are playing leadership roles in ourfield. We don’t imagine that what wehave is definitive; we have set up a web-site, at www.ai.rutgers.edu/aaai25, tocontinue the discussion. Your contribu-tion remains welcome.

Progress in AIAI is thriving. Many decades’ efforts ofmany talented individuals have resultedin the techniques of AI occupying a cen-tral place throughout the discipline ofcomputing. The capacity for intelligentbehavior is now a central part of people’sunderstanding of and experience withcomputer technology. AI continues tostrengthen and ramify its connections toneighboring disciplines.

The multifaceted nature of AI today isa sign of the range and diversity of its suc-cess. We begin with contributions observ-ing the successes we often neglect tomention when we consider AI’s progress.

Last night I saw a prescreening of themovie Stealth, the latest science fictionflick in which an artificially intelligentmachine (in this case an unmannedcombat air vehicle) stars in a major role.The movie has a couple of scenes thatpay homage to the 1968 film 2001: ASpace Odyssey, in which HAL, the ad-vanced AI computer, is a key character.What amazed me in the new movie wasa realization of just how far AI has come.When I saw 2001, the idea of the talkingcomputer that understood language wasso cool that I decided then and therethat I wanted to be an AI scientist some-day. In Stealth, the AI carries on normalconversation with humans, flies a high-powered airplane, and shows many hu-man-level capabilities without it reallyraising an eyebrow—the plot revolvesaround its actions (and emotions), not

to central questions in the field havebeen formed and transformed over theyears by their involvement in researchand in the AI community.

The third section (“The AI Journey: Fu-ture Challenges”) looks forward to char-acterize some of the new principles, tech-niques, and opportunities that define theongoing agenda for AI. By and large, ourcontributors expect to take up researchchallenges that integrate and transcendthe traditional subfields of AI as part ofnew, inclusive communities organizedaround ambitious new projects. They en-vision big teams solving big problems—designing new fundamental techniquesfor representation and learning that fitnew computational models of percep-tion, human behavior and social interac-tion, natural language, and intelligent ro-bots’ own open-ended activity. Thefourth section (“Shaping the Journey”)considers the community building thatcan make such investigations possible.We have a lot to do to facilitate the kindof research we think we need, but a lot ofexperience to draw on—from creating re-sources for collaborative research, to es-tablishing meetings and organizations, totraining undergraduates and graduatestudents and administering research in-stitutes. We close, in the fifth section(“Closing Thoughts”), with some re-minders of why we can expect takers forthis substantial undertaking: the excite-ment, the satisfaction, and the impor-tance of creating intelligent artifacts.

Across the statements contributorssent us, we see how thoroughly AI hasmatured into a collaborative process ofgenuine discovery. AI now seems quitedifferent from what would have beenpredicted twenty-five years ago: that factonly highlights the progress we havemade. AI continues to embrace new per-spectives and approaches, and we contin-ue to find new interplay among the com-plementary principles required forgeneral intelligence.

Our progress itself thus helps tie us to-gether as a community. Whenever newconfluences of ideas provide fertile newground for theory and experiment, theyreconnect AI researchers into an evolvingnetwork of mutual support, friendship,and fun. Even 50 years after the Dart-mouth conference, and 25 years after thefounding of AAAI, AI researchers contin-ue to find new points of overlap in one

25th Anniversary Issue

86 AI MAGAZINE

Jim Hendler

Rodney Brooks

Page 3: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

around how “cool” it is that a computercan do these things.

From the point of view of the AI vi-sion, we’ve already achieved many of thethings the field’s founders used for moti-vators: for example, a computer beat theworld’s chess champ, commercial sys-tems are exploiting continually improv-ing voice and speech capabilities, thereare robots running around the surface ofMars, and the word processor I’m usingto write this comment helps to correctmy grammar mistakes. We’ve grownfrom a field with one conference to onein which many subareas hold well-at-tended conferences on a regular basisand in which it is rare to see a universitythat does not include AI in its undergrad-uate curriculum. We in the field aresometimes too fast to recognize our ownfaults and too slow to realize just howamazingly far we’ve come in such a shorttime.

—Jim Hendler, University of Maryland

Artificial intelligence has enjoyed tre-mendous success over the last twenty fiveyears. Its tools and techniques are nowmainstream within computer science andat the core of so many of the systems weuse every day. Search algorithms, thebackbone of traditional AI, are usedthroughout operating systems, compilers,and networks. More modern machine-learning techniques are used to adaptthese same systems in real-time. Satisfia-bility of logic formulas has become a cen-tral notion in understanding computabil-ity questions, and once esoteric notionslike semantic ontologies are being used topower the search engines that have be-come organizers of the world’s knowl-edge, replacing libraries and encyclope-dias and automating business interfaces.And who would have guessed that AI-powered robots in people’s homes wouldnow be counted in the millions? So muchaccomplishment to bring pride to us all.

—Rodney Brooks, MIT

From the engineering perspective, artifi-cial intelligence is a grand success. In edu-cation, computer science majors expect totake a subject or two in artificial intelli-gence, and prospective employers expectit. In practice, big systems all seem to con-tain elements that have roots in the pasthalf century of research in artificial intel-ligence.

—Patrick Henry Winston, MIT

The AI community has cause for muchpride in the progress it has made over thepast half century. We have made signifi-cant headway in solving fundamentalproblems in representing knowledge, in

reasoning, in machine learning, andmore. On the practical side, AI methodsnow form a key component in a wide va-riety of real-world applications.

—Daphne Koller, Stanford University

Fifty years into AI’s U.S. history and 25years into AAAI’s history, we’ve come along way. There are a wide variety of de-ployed applications based on AI or incor-porating AI ideas, especially applicationsinvolving machine learning, data min-ing, vision, natural language processing,planning, and robotics. A large fractionof the most exciting opportunities for re-search lie on the interdisciplinary bound-aries of AI with computer science (sys-tems, graphics, theory, and so on),biology, linguistics, engineering, and sci-ence. Vastly increased computing powerhas made it possible to deal with realisti-cally large though specialized tasks.

—David Waltz, Columbia University

The AI success was once composed of alist of offshoot technologies, from timesharing to functional programming. Nowit is AI itself that is the contribution.

The AI Journey: Getting Here

The wide-ranging and eclectic sweep ofAI as a field is mirrored in the experiencesof individual researchers. A career in AI isa license to liberally explore a range ofproblems, a range of methods, and arange of insights—often across a range ofinstitutions. For many, AI’s successes rep-resent a very personal journey. We arepleased that many of our contributors of-fered an inside look at their journeysthrough AI. Thus, Alan Mackworth andRuzena Bajcsy—in recounting the chal-lenges of bridging discrete, symbolic rea-soning with the continuous mathematicsof signal processing and control theo-ry—highlight how new concepts of con-straint satisfaction and active perceptionclarified their research programs. Similar-ly, Bruce Buchanan describes his evolvingresearch into the design of systems thatsolve complex real-world problemsthrough insights he gained about the in-terplay of represented knowledge and cal-culated inference.

In many cases, newly discovered in-sights lead us to radically change thework we do and the way we talk about it.Consider the range of research domainsAaron Sloman has explored, or watch as

25th Anniversary Issue

WINTER 2005 87

Patrick Winston

Daphne Koller

David Waltz

Page 4: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

straints can be static or dynamic. Our de-velopment of the robot soccer challengehas forced all of us to develop architec-tures supporting both proactive and reac-tive behaviors.

—Alan Mackworth,University of British Columbia

I came from Czechoslovakia to the Stan-ford AI Laboratory in October 1967. Thislaboratory was one of the three AI labs inthe USA and was under the leadership ofJohn McCarthy. The basic philosophy ofProfessor McCarthy was that AI wasabout representation of knowledge andthat this representation was symbolic.The language we used for the representa-tion was Lisp. To his credit, McCarthyrecognized that perception and roboticinteraction with the environment wasequally important as reasoning strictlyon symbolic information. Hence wefaced the problem of how to systemati-cally convert the measurements or obser-vations into symbols. What is an edge,straight line, circle, cube, and so on? Thisis still an open problem.

The tradition that was set at that time(and it has prevailed) is the foundation ofa good engineering science: every goodtheory needs experimental verification.As we go on and understand more com-plex phenomena, the experiments reflectthis complexity.

I implemented this tradition in theGRASP laboratory during my thirty yearsat the University of Pennsylvania inPhiladelphia. Furthermore, coming from abackground of control engineering, werecognized the need in building intelligentsystems, the importance of controlling thedata acquisition, and introduced an newparadigm: active perception. We stated thatwe not just see but we also look, and wenot only touch but we also feel.

—Ruzena Bajcsy, University of California at Berkeley

Every empirical science needs both theo-reticians and experimenters. Turing sawthat operational tests of behavior wouldbe more informative than arguing in theabstract about the nature of intelligence,which established the experimental na-ture of AI.

The two major research themes forboth theoretical and experimental AIhave always been knowledge representa-tion (KR) and inference. Clearly an intel-ligent person or program needs a store ofknowledge and needs inferential capabil-ities to arrive at answers to the problemhe/she/it faces in the world. Other big is-sues, like learning and planning, can beseen as secondary to KR and inference.

Ed Feigenbaum and I were early play-

Michael Kearns and Usama Fayyad tracequite different evolving trajectoriesthrough machine learning. As we surveyemerging research in AI in the third sec-tion (“The AI Journey: Future Chal-lenges”), we’ll see that new problems,new discoveries, and new technologiescontinue to forge new intellectual con-nections and create new directions for re-search in AI. This ongoing interplay, inWolfgang Wahlster’s experience, both de-fines and sustains AI research.

Across the span of a career in AI, theformative mentorship that starts us offhas a special place, of course. Ruzena Ba-jcsy points to the eclectic good taste ofJohn McCarthy, while Aaron Sloman waswon over by Max Clowes’s passionate ad-vocacy of the fundamental insights in AIand Usama Fayyad was captivated by theromance and energy with which AI wastaught at the University of Michigan. Ourwork is also strongly shaped by our ac-quaintance with particularly thought-provoking research, as Bruce Buchanan’s(and the field’s) was by Turing’s empiri-cism of the 1950s, as Alan Mackworth’swas by the seminal computer vision re-search of the 1970s, and as MichaelKearns’s was by some of the early mathe-matics of computational learning theoryin the 1980s. We can only hope that ourpersonal and intellectual efforts to sus-tain AI as a community continue to act sopowerfully.

As a young scientist, I found AI’s con-stant ferment exciting, and I still do. Ihad previously worked in cybernetics,control theory, and pattern recognition,where we modeled intelligence, percep-tion, and action as signal processing.However, that view excluded much ofwhat we know intelligence to require,such as symbolic cognition. Modelingcognition as symbolic computation pro-vided a missing link. But we went too farin modeling intelligence as only symbol-ic. One of our toughest challenges now isto develop architectures that smoothlycombine the symbolic and the subsym-bolic. Or, if you like, to synthesize theachievements of logicist AI with those ofcybernetics, control theory, neural nets,artificial life, and pattern recognition.

Inspired initially by David Waltz, UgoMontanari, David Huffman, MaxClowes, and David Marr, I’ve advocatedconstraint satisfaction as the unifyingmodel. At both the symbolic and sub-symbolic levels we can specify the inter-nal, external, and coupled constraintsthat agents must satisfy. Those con-

25th Anniversary Issue

88 AI MAGAZINE

Alan Mackworth

Ruzena Bajcsy

Bruce Buchanan

Page 5: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

ers in two major controversies: (1) whatare the relative contributions of knowl-edge and inference, and (2) what repre-sentation methods are both simpleenough to work with and sophisticatedenough to capture the kinds of knowl-edge that experts use? The DENDRALand MYCIN programs provide experi-mental evidence on the side of moreknowledge, represented simply.

—Bruce Buchanan, University of Pittsburgh

AI is today routinely employed in somany areas of advanced informationtechnology that it is fair to say that AIstands also for avant-garde informatics,since it is always pushing informatics toits limits. For the steady growth of AI inGermany, it was imperative for AI re-searchers to stay integrated with main-stream informatics and to collaborate in-tensively with colleagues from allsubareas of computer science. The at-tempts of AI researchers in some othercountries to establish AI as another meta-science like cybernetics outside of infor-matics were unsuccessful.

—Wolfgang Wahlster, German ResearchCenter for Artificial Intelligence (DFKI)

I met AI through Max Clowes in 1969when I was still a lecturer in philosophyat Sussex University1 and soon becamedeeply involved, through a paper at IJ-CAI 1971 criticizing the 1969 logicistmanifesto by John McCarthy and PatHayes, followed by a fellowship in Edin-burgh 1972–1973. Since then I’ve workedon forms of representation, vision, archi-tectures, emotions, ontology for architec-tures, tools for AI research, and teaching,links with psychology, biology and phi-losophy, and most recently robotics, andI have helped to build up two major AIcenters for teaching and research (at Sus-sex and Birmingham).

I believe philosophy needs AI and AIneeds philosophy. Much of what phi-losophers write about consciousness andthe mind-body problem shows their ig-norance of AI, and many silly debates be-tween factions in AI (for example, aboutrepresentations, use of symbols, GOFAI)and some fashions (for example, recententhusiasm for “emotions”) result fromdoing poor philosophical analysis.

I always thought progress in AI wouldbe slow and difficult and that peoplewho predicted rapid results had simplyfailed to understand the problems, assketched in my 1978 book.2

—Aaron Sloman, University of Birmingham

Twenty years ago, I arrived at Harvard to

work with Les Valiant on the field thatwould shortly become known as compu-tational learning theory but that at thetime consisted exclusively of two algo-rithmically focused papers by Valiant,and an early draft of the rather mind-bending (to a first-year graduate student,at least) “four Germans” paper on the ex-otic and powerful Vapnik-Chervonenkisdimension.3 It was a great time to enterthe field, as virtually any reasonableproblem or model one might considerwas untouched territory.

Now that the field is highly devel-oped (with even many unreasonableproblems sporting hefty literatures), Ithink that the greatest sources of innova-tion within computational learning the-ory come from the interaction with theexperimental machine learning and AIcommunities. In a 2003 InternationalConference on Machine Learning(ICML) talk, I recalled how my first paperwas published in ICML 1987, then an in-vitation-only workshop. To our amuse-ment, the program committee stronglyadvised us not to use abstract symbolslike x1 for feature names, but warmer andfuzzier terminology like can_fly andhas_wings.

Perhaps we smirked a bit, but we un-derstood the sentiment and complied.Both sides have come a long way sincethen, to their mutual benefit. The rich-ness of the theory that has been either di-rectly or indirectly driven by the con-cerns and findings of empirical machinelearning and AI work is staggering to me,and it has been a great pleasure to be atheoretician working in a field in such aclose dialogue with practitioners. I amhard-pressed to think of other branchesof computer science that enjoy compara-ble marriages. May the next twenty yearsbring even more of the same; I cannotpredict the results but know they will beinteresting.

—Michael Kearns, University of Pennsylvania

I have fond recollections of the earlyyears of my “discovering” the field of AI.Coming across it in a graduate course inMichigan back in 1985, I was fascinatedand inspired by a field that had the boldvision of nothing short of modeling hu-man intelligence. The field in its earlydays consisted of a collection of worksthat spanned everything from computerscience theory to biology, to psychology,to neural sciences, to machine vision, tomore classical computer science tricksand techniques. The excitement was veryhigh and the expectations even higher.As I decided to start working in the sub-area of machine learning, I started to re-

25th Anniversary Issue

WINTER 2005 89

Wolfgang Wahlster

Aaron Sloman

Michael Kearns

Page 6: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

mendous practical impact are, in myopinion: (1) the generic visual object-recognition capabilities of a two-year-oldchild; (2) the manual dexterity of a six-year-old child; (3) the social interactionand language capabilities of a ten-year-old child. So much work for all of us to bechallenged by.

—Rod Brooks, MIT

From the scientific perspective, not somuch has been accomplished, and thegoal of understanding intelligence, froma computational point of view, remainselusive. Reasoning programs still exhibitlittle common sense. Language programsstill have trouble with idioms,metaphors, convoluted syntax, and un-grammatical expressions. Vision pro-grams still stumble when asked to de-scribe an office environment.

—Patrick Henry Winston, MIT

Our choice of problems is telling: Smalland technical, not large and important.A large, important problem is to workout the semantics of natural language—including all the required commonsenseknowledge—so machines can read andunderstand the web. Another is to devel-op robots that understand what they seeand hear.

Understanding is hard, so AI approx-imates it with increasingly sophisticatedmappings from stimuli to responses: fea-ture vectors to class labels, strings in onelanguage to strings in another, states tostates. I once had a robot that learned tomap positive and negative translationalvelocity to the words “forward” and“backward,” but never learned that for-ward and backward are antonyms. It un-derstood the words superficially. It hadthe kind of understanding we can mea-sure with ROC curves. Every child doesbetter.

—Paul Cohen, University of Southern California

Such broad capabilities need not origi-nate in a single fundamental principle oralgorithm that applies across the board.Instead, they may be the product of arange of different models, representa-tions, and experience, appropriatelycombined. Building a general system mayhinge on principled, flexible, and exten-sible ways of putting the pieces together.If that’s right, we’ll have to start with agrounded understanding of the mean-ings of representations, as SebastianThrun argues, but we’ll also need ways ofscaffolding sophisticated intelligent be-havior over underlying abilities to per-

alize how difficult the problems are andhow far we truly are from realizing theultimate dream of a thinking machine.What I also realized at the time was thatspecialization with some deep technicalapproaches and mathematical rigor werea necessity to make progress.

In reflecting back on those days of ro-mantic excitement, I am very pleased atwhat they drove in terms of engineeringachievements and new fields of study. Inmy own area of machine learning, whilethe vision of pursuing general algorithmsthat “learn from experience” morphed it-self into highly specialized algorithmsthat solve complex problems at a largescale, the result was the birth of severalnew subfields of specialization. Combin-ing learning algorithms with databasetechniques and algorithms from compu-tational statistics resulted in data-miningalgorithms that work on very large scales.The resulting field of data mining is nowa vibrant field with many commercial ap-plications and significant economic val-ue. This journey has also taken me per-sonally from the world of basic scientificresearch to the business side of realizingeconomic value from applying these al-gorithms to commercial problems and fi-nally to working at the “strategy” levelon the senior executive team of thelargest Internet company in the world,Yahoo!, where data drives many productsand strategies.

In looking back at it, I can only say inwonder: what a ride!

—Usama Fayyad, Yahoo!

The AI Journey: Future Challenges

As a field, AI researchers have alwayslooked for generality in the intelligentbehavior our artifacts exhibit, and gener-ality remains a central challenge. RodBrooks, Patrick Winston, and Paul Cohenoffer us a call to arms.

Artificial intelligence has not yet suc-ceeded in its most fundamental ambi-tions. Our systems are still fragile whenoutside their carefully circumscribed do-mains. The best poker-playing programcan’t even understand the notion of achess move, let alone the conceptual ideaof animate versus inanimate. A six-year-old child can discuss all three domainsbut may not be very good at any of themcompared to our specialized systems. Thechallenge for AI, still, is to capture thefundamental nature of generalized per-ception, intelligence, and action. Worthychallenges for AI that would have tre-

25th Anniversary Issue

90 AI MAGAZINE

Usama Fayyad

Paul Cohen

Page 7: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

ceive and act in the world, and we’ll needoperational ways of weaving together re-stricted solutions into systems that ex-hibit more robust behavior. In such archi-tectures, David Waltz, Manuela Veloso,and Patrick Winston find parallels to hu-man intelligence. Despite its flexibility,our own intelligence is an evolved capac-ity with clear limitations. We manage toact so successfully in part because webring the right sets of cognitive skills, in-cluding our notable strengths in domainsof vision, language, and action.

One of the big dreams of AI has been tobuild an artificially “intelligent” robot—arobot capable of interacting with peopleand performing many different tasks. Wehave seen remarkable progress on manyof the component technologies neces-sary to build AI robots. All these tremen-dous advances beg the obvious question:Why don’t we have a single example of atruly multipurpose robot that would,even marginally, deserve to be called ar-tificially intelligent?

I believe the key missing component isrepresentation. While we have succeededin building special-purpose representa-tions for specialized robot applications, weunderstand very little about what it takesto build a lifelong learning robot that canaccumulate diverse knowledge over longperiods of time and that can use suchknowledge effectively when decidingwhat to do. It is time to bring knowledgerepresentation and reasoning back into ro-botics. But not of the old kind, where ouronly language to represent knowledge wasbinary statements of (nearly) universaltruth, deprived of any meaningful ground-ing in the physical world.

We need more powerful means ofrepresenting knowledge. Robotics knowl-edge must be grounded in the physicalworld, hence knowledge acquisitionequals learning. Because data-drivenlearning is prone to error, reasoning withsuch knowledge must obey the uncer-tainties that exist in the learned knowl-edge bases. Our representation languagesmust be expressive enough to representthe complex connections between ob-jects in the world, places, actions, people,time, and causation, and the uncertaintyamong them. In short, we need to rein-vent the decades-old field knowledgerepresentation and reasoning if we are tosucceed in robotics.

—Sebastian Thrun, Stanford University

We are still far short of truly intelligentsystems in the sense that people are intel-ligent—able to display “common sense,”deal robustly with surprises, learn from

anything that can be expressed in natur-al language, understand natural scenesand situations, and so on. At the sametime AI has tended to splinter into spe-cialized areas that have their own confer-ences and journals and that no longerhave the goal of understanding or build-ing truly intelligent systems.

My own sense is that the AI researchprogram needs to be rethought in orderto have a realistic hope of building trulyintelligent systems, whether these are au-tonomously intelligent or “cognitiveprostheses” for human-centered systems.Early AI focused on the aspects of humanthought that were not shared with othercreatures—for example, reasoning, plan-ning, symbolic learning—and minimizedaspects of intelligence that are sharedwith other creatures—such as vision,learning, adaptation, memory, naviga-tion, manipulation of the physical world.A truly intelligent system will require anarchitecture that layers specifically hu-manlike abilities on top of abilitiesshared with other creatures. Some recentprograms at the Defense Advanced Re-search Projects Agency (DARPA) and theNational Science Foundation (NSF) aresetting ambitious goals that will requireintegrated generally intelligent systems,a very promising trend. The best newsabout the neglect of integrated intelli-gent systems is that researchers going in-to this area are likely to encounter a gooddeal of “low hanging fruit.”

—David Waltz, Columbia University

Creating autonomous intelligent robotswith perception, cognition, and actionand that are able to coexist with humanscan be viewed as the ultimate challeng-ing goal of artificial intelligence. Ap-proaches to achieve such a goal that de-pend on rigid task and world models thatdrive precise mathematical algorithms,even if probabilistic, are doomed to betoo restrictive, as heuristics are clearlyneeded to handle the uncertainty thatinevitably surrounds autonomy withinhuman environments. Instead we needto investigate rich approaches capable ofusing heuristics and flexible experience-built webs of knowledge to continuouslyquestion and revise models while actingin an environment. Significant progressdepends upon a seamless integration ofperception, cognition, and action to pro-vide AI creatures with purposeful percep-tion and action, combined with theability to handle surprise, to recognizeand adapt past similar experience, and tolearn from observation. Hence and inter-estingly, I find that the achievement ofthe ultimate goal of the field requires us,researchers, to accept that AI creatures

25th Anniversary Issue

WINTER 2005 91

Manuela Veloso

Sebastian Thrun

Page 8: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

are increasingly going electronic. Thispresents a wonderful opportunity for AI:electronic marketplaces provide new ex-citing research questions, and AI can sig-nificantly help generate more efficientmarket outcomes and processes.

One example is expressive competition,a generalization of combinatorial auc-tions. The idea is to let buyers and sellers(human or software) express demandand supply at a drastically finer granular-ity than in traditional markets—muchlike having the expressiveness of human-to-human negotiation, but in a struc-tured electronic setting where demandand supply are algorithmically matched.A combination of AI and operations re-search techniques have recently madeexpressive competition possible, and to-day almost all expressive competitionmarkets are cleared using sophisticatedtree search algorithms. Tens of billions ofdollars of trade have been cleared withthis technology, generating billions ofdollars of additional value into the worldby better matching of supply and de-mand.

Less mature, but promising, roles ofAI include the following: (1) Automati-cally designing the market mechanismfor the specific setting at hand. This cancircumvent seminal economic impossi-bility results. (2) Designing marketswhere finding an insincere (socially un-desirable) strategy that increases the par-ticipant’s own utility is provably hardcomputationally. (3) Supplementing themarket with software that elicits the par-ticipants’ preferences incrementally sothat they do not have to determine theirpreferences completely when that is un-necessary for reaching the right marketoutcome. (4) Taking into account the is-sues that arise when the participants in-cur costs in determining their prefer-ences and can selectively refine them.

There are numerous other roles for AI,undoubtedly including many that havenot even been envisioned. With this briefnote I would like to encourage brightyoung (and old) AI researchers to get in-volved in this intellectually exciting areathat is making the world a better place.

—Tuomas Sandholm, Carnegie Mellon University

So far we know of exactly one system inwhich trillions of pieces of informationcan be intelligently transmitted to bil-lions of learners: the system of publishingthe written word. No other system, artifi-cial or otherwise, can come within a fac-tor of a million for successful communi-cation of information. This is despite thefact that the written word is notoriouslyambiguous, ill-structured, and prone to

are evolving artifacts most probably al-ways with limitations, similarly to hu-mans. Equipped with an initial perceptu-al, cognitive, and execution architecture,robots will accumulate experience, refinetheir knowledge, and adapt the parame-ters of their algorithms as a function oftheir interactions with humans, other ro-bots, and their environments.

—Manuela Veloso, Carnegie Mellon University

Since the field of artificial intelligencewas born in the 1960s, most of its practi-tioners have believed—or at least acted asif they have believed—that language, vi-sion, and motor faculties are the I/Ochannels of human intelligence. Overthe years I have heard distinguished lead-ers in the field suggest that people inter-ested in language, vision, and motor is-sues should attend their ownconferences, lest the value of artificial in-telligence conferences be diminished byirrelevant distractions.

To me, ignoring the I/O is wronghead-ed, because I believe that most of our intel-ligence is in our I/O, not behind it, and ifwe are to understand intelligence, we mustunderstand the contributions of language,vision, and motor faculties. Further, wemust understand how these faculties,which must have evolved to support sur-vival in the physical world, enable abstractthought and the reuse of both concreteand abstract experience. We must also un-derstand how imagination arises from theconcert of communication among our pu-tative I/O faculties, and we must learnhow language’s symbols ground out in vi-sual and other perceptions.

—Patrick Henry Winston, MIT

Intelligence is key not only for our phys-ical robots but also for the intermediarieswe recruit for competition, communica-tion, and collaboration in virtual worlds.For example, Tuomas Sandholm seeselectronic marketplaces as an area whereAI can change the world. Peter Norvig,meanwhile, delights us with the poten-tial of AI to forge relationships among thebillions of readers and writers on the web.Though it’s easy to forget, much of ourown intelligence is specifically social. Wecount on our abilities to explain, coordi-nate, and adapt our actions to one anoth-er, and as Danny Bobrow and Joe Marks,Chuck Rich, and Candy Sidner argue, du-plicating these abilities offers an excitingbridge between AI and human–computercollaboration.

A significant portion of trade is alreadyconducted electronically, and markets

25th Anniversary Issue

92 AI MAGAZINE

Tuomas Sandholm

Peter Norvig

Daniel Bobrow

Page 9: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

logical inconsistency or even fallacy.In the early days of AI, most work was

on creating a new system of transmis-sion—a new representation language,and/or a new axiomatization of a domain.This was necessary because of the difficul-ty of extracting useful information fromavailable written words. One problem wasthat people tend not to write down theobvious; as Doug Lenat and Ed Feigen-baum put it, “each of us has a vast store-house of general knowledge, though werarely talk about any of it ... Some exam-ples are: ‘water flows downhill’ ....”4 Thisis undeniably true; if you look at a 1,000-page encyclopedia, there is no mention of“water flows downhill.” But if you look atan 8 billion page web corpus, you findthat about one in a million pagesmentions the phrase, including somequite good kindergarten lesson plans.

This suggests that one future for AI is“in the middle” between author andreader. It will remain expensive to createknowledge in any formal language (Pro-ject Halo suggests $10,000/page) but AIcan leverage the work of millions of au-thors of the written word by understand-ing, classifying, prioritizing, translating,summarizing, and presenting the writtenword in an intelligent just-in-time basisto billions of potential readers.

—Peter Norvig, Google

People build their own models of systemsthey don’t understand and may make un-warranted extrapolations of their capabil-ities—which can lead to disappointmentand lack of trust. An effective intelligentsystem should be transparent, able toexplain its own behavior in a way thatconnects to its users’ background andknowledge. An explanation is not a fulltrace of the process by which the systemcame to a conclusion. It must highlightimportant/surprising points of its processand indicate provenance and dependen-cies of resources used. Systems that evolvethrough statistical learning must explain(and exemplify) categories it uses and clar-ify for a user what properties make a dif-ference in a particular case. Such systemsmust not be single minded, hence shouldbe interruptible and able to explain cur-rent goals and status. They should be ableto take guidance in terms of the explana-tion they have given. Artificial intelli-gence systems must not only understandthe world, and the tasks they face, but un-derstand their users and— most impor-tant—make themselves understandable,correctable, and responsible.

—Daniel G. Bobrow, PARC

In his prescient 1960 article titled “Man-Computer Symbiosis,” J. C. R. Licklider

wrote: “[Compare] instructions ordinari-ly addressed to intelligent human beingswith instructions ordinarily used withcomputers. The latter specify preciselythe individual steps to take and the se-quence in which to take them. The for-mer present or imply something aboutincentive or motivation, and they supplya criterion by which the human executorof the instructions will know when hehas accomplished his task. In short: in-structions directed to computers specifycourses; instructions directed to humanbeings specify goals.”5

Licklider goes on to argue that in-structions directed to computers shouldbe more like instructions to human be-ings. Even today, this is a radical ideaoutside of AI circles. Most research onhuman-computer interaction (HCI) hasfocused on making interaction withcomputers more efficient by adding newinput and output mechanisms. AI re-searchers, however, are working to fun-damentally change the level of HCI fromcommand-oriented interaction to goal-oriented collaboration. Furthermore,since Licklider wrote the words above, re-searchers in AI and the neighboring fieldsof linguistics and cognitive science haveaccumulated a large body of empiricalknowledge and computational theory re-garding how human beings collaboratewith one another, which is helping us torealize Licklider’s vision of human-com-puter symbiosis.

—Joe Marks, Charles Rich, and CandaceSidner, Mitsubishi Electric Research Labs

As we aim for broader capabilities, oursystems must draw on a more diverserange of ideas. This brings new challengesfor research in the field. As Daphne Kollerexplains, our problems now require us totake ideas from across subfields of AI andmake them work together. Several of ourcontributors suggest that well-defined,long-range projects can inspire re-searchers with different backgrounds andspecializations to bridge their ideas andresults. Henry Kautz will be working onunderstanding human activities in instru-mented environments. Tom Mitchell willbe working on understanding the lan-guage of the web.

A solution to the AI problem—achieving atruly intelligent system—remains elusive.

The capabilities that appear the hard-est to achieve are those that require inter-action with an unconstrained environ-ment: machine perception, naturallanguage understanding, or common-sense reasoning. To build systems that ad-dress these tasks, we need to draw upon

25th Anniversary Issue

WINTER 2005 93

Joe Marks

Charles Rich

Candace Sidner

Page 10: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

cluding assistive technology for the dis-abled, aging in place, security and sur-veillance, and data collection for the so-cial sciences.

—Henry Kautz, University of Washington

I believe AI has an opportunity toachieve a true breakthrough over thecoming decade by at last solving theproblem of reading natural language textto extract its factual content. In fact, Ihereby offer to bet anyone a lobster din-ner that by 2015 we will have a computerprogram capable of automatically read-ing at least 80 percent of the factual con-tent across the entire English-speakingweb, and placing those facts in a struc-tured knowledge base.

Why do I believe this breakthroughwill occur in the coming decade? Becauseof the fortunate confluence of threetrends. First, there has been substantialprogress over the past several years innatural language processing for automat-ically extracting named entities (such asperson names, locations, dates, products,and so on) and facts relating these enti-ties (for example, WorksFor[Bill, Mi-crosoft]). Much of this progress has comefrom new natural language processingapproaches, many based on machinelearning algorithms, and progress hereshows no sign of slowing. Second, therehas been substantial progress in machinelearning over the past decade, most sig-nificantly on “bootstrap learning” algo-rithms that learn from a small volume oflabeled data, and huge volumes of unla-beled data, so long as there is a certainkind of redundancy in the facts ex-pressed in this data. The third importanttrend is that the data needed for learningto read factual statements is finally avail-able: for the first time in history everycomputer has access to a virtually limit-less and growing text corpus (such as theweb), and this corpus happens to containjust the kind of factual redundanciesneeded. These three trends, progress innatural language analysis, progress inmachine learning, and availability of asufficiently rich text corpus with tremen-dous redundancy, together make this theright time for AI researchers to go back toone of the key problems of AI—naturallanguage understanding—and solve it (atleast for the factual content of language).

—Tom Mitchell, Carnegie Mellon University

Shaping the JourneyTo pursue the kinds of goals and projectssketched in the previous section, we need

the expertise developed in many subfieldsof AI. Of course, we need expertise in per-ception and in natural language models.But we also need expressive representa-tions that encode information about dif-ferent types of objects, their properties,and the relationships between them. Weneed algorithms that can robustly and ef-fectively answer questions about theworld using this representation, given on-ly partial information. Finally, as thesesystems will need to know an essentiallyunbounded number of things about theworld, our framework must allow newknowledge to be acquired by learningfrom data. Note that this is not just “ma-chine learning” in its most traditionalsense, but a broad spectrum of capabili-ties that allow the system to learn contin-uously and adaptively.

Therefore, in addition to makingprogress in individual subfields of AI, wemust also keep in mind the broader goalof building frameworks that integraterepresentation, reasoning, and learninginto a unified whole.

—Daphne Koller, Stanford University

One of the earliest goals of research in ar-tificial intelligence was to create systemsthat can interpret and understand day today human experience.

Early work in AI, in areas such as sto-ry understanding and commonsense rea-soning, tried to tackle the problem headon but ultimately failed for three mainreasons. First, methods for representingand reasoning with uncertain informa-tion were not well understood; second,systems could not be grounded in realexperience without first solving AI-com-plete problems of vision or language un-derstanding; and third, there were nowell-defined, meaningful tasks againstwhich to measure progress.

After decades of work on the “bitsand pieces” of artificial intelligence, weare now at a time when we are well-poised to make serious progress on thegoal of building systems that understandhuman experience. Each of the previousbarriers is weakened:

First, we now have a variety of expres-sive and scalable methods for dealingwith information that is both relationaland statistical in nature. Second, the de-velopment and rapid deployment of low-cost ubiquitous sensing devices—includ-ing RFID tags and readers, global po-sitioning systems, wireless motes, and awide variety of wearable sensors—makeit possible to immediately create AI sys-tems that are robustly grounded in directexperience of the world. Third, there area growing number of vital practical appli-cations of behavior understanding, in-

25th Anniversary Issue

94 AI MAGAZINE

Henry Kautz

Tom Mitchell

Page 11: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

the right institutions as well as the rightideas. If we share programs and data inmore open collaborations, we can makeit easier for individual researchers tomake meaningful contributions to bignew problems. If we slant major confer-ences to emphasize integrative research,we can help these contributions findtheir audiences. These changes are underway: talk to Tom Mitchell or Paul Cohenabout shared resources; talk to John Lairdor Paul Cohen about integrative researchmeetings. Still, to tackle the really bigproblems, our institutions may have tobecome bigger and broader, too. Perhapswe will see more large research centers,like DFKI, specifically dedicated to inte-grative research in artificial intelligence.

I can think of nothing more excitingthan working on integrated human-levelintelligent systems, and after 25 years itis still captivating and challenging. Onechallenge is that it requires interdispli-narity in the small (across subfields of AI)and large scale (with other disciplinesoutside of AI). A second challenge is thatteams are needed to attack the large-scaleproblems that can challenge integratedAI systems (as evident in many of the re-cent DARPA programs). This type of re-search isn’t for the faint of heart or thosewho enjoy solitary work. A third chal-lenge is to communicate research re-sults—if the work is truly interdiscipli-nary, which field or subfield should it bepublished in? Moreover, how (andwhere) can I talk about what is learnedabout integration, which itself is not na-tive to any specific field? AI has done amarvelous job, leveraging specializationwith the inexhaustible growth of newsubfields, new applications, and newconferences; but we also need to fiercelysupport integration. What better way todo this than by making integrated cogni-tive systems a major emphasis of theAAAI conference?

—John Laird, University of Michigan

What to do? Form societies defined byand dedicated to solving large, importantproblems. Hold conferences where thecriteria for publication are theoreticaland empirical progress on these prob-lems. Discourage sophistication-on-steroids; encourage integrations of sim-ple (preferably extant) algorithms thatsolve big chunks of important problems.Work together; share knowledge bases,ontologies, algorithms, hardware, andtest suites; and don’t fight over stan-dards. Don’t wait for government agen-cies to lead; do it ourselves. If anyone

25th Anniversary Issue

WINTER 2005 95

wants to join a society dedicated to thecognitive, perceptual, and social develop-ment of robot babies, or to learning lan-guage sufficient to understanding chil-dren’s books at a 5-year-old’s level ofcompetence, please drop me a line.

—Paul Cohen, University of Southern California

Some have mentioned to me that this[natural language understanding for theweb] is a large goal. I agree and proposewe approach it by forming a shared webrepository where facts that are extractedfrom the web by different researchers’ ef-forts are accumulated and made accessi-ble to all. This open-source shared repos-itory should also accumulate and sharelearned rules that extract content fromdifferent linguistic forms. Working as aresearch community in this fashionseems the best way to achieve this ambi-tious goal. And I’d hate to have to leavethe field to open a lobster fishery.

—Tom Mitchell, Carnegie Mellon University

DFKI, the German Research Center forAI, employs today more than 200 full-time researchers, many of them holdinga Ph.D. degree in AI. With yearly rev-enues of more than US$23 million, it isprobably the world’s largest contract re-search center for AI. It has created 39 fast-growing spin-off companies in manyfields of AI. DFKI views itself as a softwaretechnology innovator for governmentand commercial clients. DFKI is a jointventure including Bertelsmann, Daimler-Chrysler, Deutsche Telekom, Microsoft,SAP, and the German federal govern-ment. Its mission is to perform “innova-tion pure” application-oriented basic AIresearch. Although we have always triedto contribute to the grand challenges ofAI, we have never experienced an AI win-ter at DFKI, since we have always beenquite cautious with promises to oursponsors and clients, trying to deliverdown-to-earth, practical AI solutions.Since its foundation in 1988, many ma-turing AI technologies have left DFKI’slabs and become ubiquitous to the pointwhere they are almost invisible in em-bedded software solutions.

—Wolfgang Wahlster, DFKI

Integrative research will be particular-ly challenging for research students. Todo it, they must master a wide range offormal techniques and understand notjust the mathematical details but alsotheir place in overall accounts of intelli-gent behavior. At the same time, tolaunch productive careers, they must

John Laird

Tom Dean

Page 12: Artificial Intelligence: The Next Twenty-Five Yearspeople.cs.pitt.edu/~litman/courses/cs2710/papers/1750.pdf · 2010. 8. 11. · world’s chess champ, commercial sys-tems are exploiting

make a name for themselves with importantnew ideas of their own. It may take longer thanwe are used to and require us to think different-ly about how we nurture new scientists.

So, if AI as a community is to tackle the big problems and continue its progress, we have a lot of work to do besides our own research. But we mustn’t think of this work as painful. We’ll be doing it with friends and colleagues, as Tom Dean and Bruce Buchanan remind us, and we can expect a unique kind of satisfaction in seeing AI’s collaborative process of discovery strengthened and energized by the new communities we foster.

Surely, with such intriguing problems to work on, and with allied fields on the march, this should be a time for universal optimism and expectation; yet many of today’s young, emerging practitioners seem to have abandoned the grand original goals of the field, working instead on applied, incremental, and fundamentally boring problems. Too bad. Pessimists will miss the thrill of discovery on the Watson-and-Crick level.

—Patrick Henry Winston, MIT

Another reason for slow progress is the fragmentation of AI: people learn about tiny fragments of a whole system and build solutions that could not form part of an integrated humanlike robot. One explanation is that we do not have full-length undergraduate degrees in AI and most researchers have to do a rapid switch from another discipline, so they learn just enough for their Ph.D. topic, and they and their students suffer thereafter from the resulting blinkered vision.

I’ve proposed some solutions to this problem in an introduction to a multidisciplinary tutorial at IJCAI’05, including the use of multiple partially ordered scenarios to drive research.

It requires a lot more people to step back and think about the hard problems of combining diverse AI techniques in fully functional humanlike robots, though some room for specialists remains.

—Aaron Sloman, University of Birmingham

The students of AI are sophisticated in both discrete and continuous mathematics, including a recognition of the role of uncertainty. This is necessary because of the increased complexity of the problems that we need to attack.

—Ruzena Bajcsy, University of California at Berkeley

As teachers, we must challenge students to work on problems in which integration is central and not an afterthought: problems that require large bodies of different types of knowledge, problems that involve interaction with dynamic environments, problems that change over time, and problems in which learning is central (and sometimes problems in which determining the appropriate metrics is part of the research). But given the dynamics in the field of AI, a Ph.D. student must forge an association to an identifiable subfield of AI—some community in which to publish and build a reputation—and as of today that is not “human-level intelligence,” “integrated cognitive systems,” or even my favorite, “cognitive architecture.” So even more than finding a home for publishing, we must grow a community of researchers, teachers, and students in which the integration is the core and not the periphery.

—John Laird, University of Michigan

In more than twenty years in this field, the most satisfying moments by far have come from working with people who have set aside their individual interests and biases to inspire students, nurture young scientists, and create community and esprit de corps. And, while I truly enjoyed collaborating with Kathy McKeown on AAAI-91 and Gigina Aiello on IJCAI-99, helping to create the AAAI Robot Competition and Exhibition series with Pete Bonasso, Jim Firby, Dave Kortenkamp, David Miller, Reid Simmons, Holly Yanco, and a host of others was actually a lot of fun. The exercise was certainly not without its aggravations, as getting a sizable group of researchers to agree on any issue is not easy. But most of the effort was spent thinking about how to create useful technology, advance the science and art of robotics, and make the entire experience both educational and inspirational to participants and spectators alike.

It was particularly gratifying to see the buzz of activity around this year’s event in Pittsburgh and learn about some of the new ideas involving social robots, assistive technologies, and, of course, cool hardware hacks. I don’t know what direction the field should take, and at this particular moment in my career, as I return to research after several years in senior administration at Brown University, I’m content to pursue my own personal research interests at the intersection of robotics, machine learning, and computational neuroscience. But I am thinking about how to get students interested in my research area, and in due course, I hope to work with the AI community to run workshops, develop tutorials, sponsor undergraduate research, and pursue all the other avenues open to us to nurture and sustain community, both scientific and social, in our rapidly evolving and increasingly central field.

—Tom Dean, Brown University

The sense of collegiality in the AI community has always made AI more fun. Most of the time, the statesmanlike conduct of senior people like Al Newell set an example for debate without rancor. The common goal of understanding the nature of intelligence makes everyone’s contribution interesting.

—Bruce Buchanan, University of Pittsburgh

Closing Thoughts

As we make new discoveries in AI, as new communities form, as new sets of ideas come together, as new problems emerge and we find new ways of working on them, we will continue to frame new accounts of what our work is. The best stories will resonate not only with what we are doing now but also with how we got here—they will transcend today’s fashions and motivate our present activities as a synthesis of our earlier goals and experiences. We close with some brief contributions that take up this discipline of self-examination and, in different ways, distill something important about the landscape of our field. We respond deeply to Bruce Buchanan’s characterization of the enduring significance of AI as a goal and the enduring bottom line of AI methodology, and to Henry Kautz’s acknowledgment of the human meaning of AI results. We endorse Usama Fayyad’s prediction that an exciting ride through intellectual space will continue to define the careers of AI researchers. And, with Alan Mackworth, we remind you to keep it real. The discussion will doubtless continue.

Because there is not enough intelligence in the world and humans often ignore relevant consequences of their decisions, AI can provide the means by which decision makers avoid global catastrophes. I believe we can realize this vision by formulating and testing ideas in the context of writing and experimenting with programs.

—Bruce Buchanan, University of Pittsburgh

I believe that understanding human experience will be a driving challenge for work in AI in the years to come and that the work that will result will profoundly impact our knowledge of how we live and interact with the world and with each other.

—Henry Kautz, University of Washington

In looking at the future, we have much to do, and I hope we make some serious progress on that original romantic dream of building machines that truly exhibit learning and thought in general. The dream is still worth pursuing, especially after all we have learned over the past decades. The ride will be a lot more exciting for the new researchers entering the AI field.

—Usama Fayyad, Yahoo!

Think of AI itself as an agent. We need a clear understanding of our own goals, but we must also be willing to seize opportunistically on new developments in technology and related sciences. This anniversary is a lovely opportunity to take stock, to remind ourselves to state our claims realistically, and to consider carefully the consequences of our work. Above all, have fun.

—Alan Mackworth, University of British Columbia


Haym Hirsh is a professor and chair of computer science at Rutgers University. His research explores applications of machine learning, data mining, and information retrieval in intelligent information access and human-computer interaction. He received his B.S. in 1983 in mathematics and computer science from UCLA, and his M.S. and Ph.D. in 1985 and 1989, respectively, in computer science from Stanford University. He has been on the faculty at Rutgers University since 1989 and has also held visiting faculty positions at Carnegie Mellon University, MIT, and NYU’s Stern School of Business.

Matthew Stone is associate professor in the department of computer science at Rutgers and has a joint appointment in the Center for Cognitive Science; for the 2005–2006 academic year, he is also Leverhulme Trust Visiting Fellow at the School of Informatics, University of Edinburgh. He received his Ph.D. in computer science from the University of Pennsylvania in 1998. His research interests span computational models of communicative action in conversational agents and include work on generating coordinated verbal and nonverbal behavior (presented most recently at SIGGRAPH 2004 and ESSLLI 2005). He served on the program committee of IJCAI 2003 and as tutorial chair of AAAI 2004. His e-mail address is [email protected].
