AAAI-98 Presidential Address

The Importance of Importance

David Waltz

Copyright © 1999, American Association for Artificial Intelligence. All rights reserved. 0738-4602-1999 / $2.00

Human intelligence is shaped by what is most important to us—the things that cause ecstasy, despair, pleasure, pain, and other intense emotions. The ability to separate the important from the unimportant underlies such faculties as attention, focusing, situation and outcome assessment, priority setting, judgment, taste, goal selection, credit assignment, the selection of relevant memories and precedents, and learning from experience.

AI has for the most part focused on logic and reasoning in artificial situations where only relevant variables and operators are specified and has paid insufficient attention to processes of reducing the richness and disorganization of the real world to a form where logical reasoning can be applied. This article discusses the role of importance judgment in intelligence; provides some examples of research that make use of importance judgments; and offers suggestions for new mechanisms, architectures, applications, and research directions for AI.

It's a great pleasure to address you all. I have viewed this as a unique opportunity to examine what I think about AI as a field and to say what I think to you without having to run it by reviewers. I'd like to talk about what I think is really important in the field and in particular what we're doing that's important and what's important that we aren't doing, with more emphasis on the latter. Roughly what I'll cover today is the following: first, the good news, briefly. AI really is doing well, and I'll try to give a brief overview as I see it. Then I'll digress to ask some basic questions about intelligence: What are brains really for? How did they evolve? Why do we have them? I'll argue that brains and intelligence are rooted in matters of life and death, important issues, things that people regard as juicy, things that people really care about. And then I'll move to the bad news: Assuming that its goal is to really understand and model intelligence, AI is neglecting many essential topics. I'll talk about specific issues in perception, deep understanding of language, creativity, and logic. I'll make some suggestions about what we as a field should do based on what I've said. If I don't offend everybody at least a little, I've failed.

First the good news: AI is thriving. Unlike earlier times, AI encompasses a broad range of areas and topics. If you look through the current AAAI conference proceedings, I think you'll see this. The narrow focus on logic alone is gone. When AI people make predictions, which of course we do and should do, they're much more muted; I don't think we're going to get in trouble for the kinds of predictions we're making today. There's a significant industrial effort and many applications. Membership in the organization is stable now. It had dropped for a long time—from 15,000 during the expert systems craze—but it's now been around 5,500 for several years and maybe has even gone up a little bit. If you count institutional memberships, it's more like 7,000. We have very high standards for paper acceptance. In this conference, the acceptance rate was 32 percent, and journals in AI are strong. Advances in computing hardware have been a huge help to AI. For instance, in the practical speech recognition systems of today, there aren't many new ideas compared to what we knew in the seventies. (There are a few: For example, hidden Markov models (HMMs) are important and new since the seventies.) But overall, much of the credit for making AI become real in this and other cases should go to computing power increases. The earlier ideas were adequate, but the computing power was inadequate.

There's a very impressive range of very interesting AI applications. Rather than running through a long list, I'll point you to the IAAI proceedings this year and other years. Many grand challenges, first stated many years ago, have been achieved. We have truly autonomous vehicles such as Dean Pomerleau's NO HANDS ACROSS AMERICA project, the autonomous vehicle that drove from Washington to San Diego with all but 30 miles hands off, and SOJOURNER, which wasn't very smart but certainly made a big splash and will be followed by more interesting systems. In automatic gene finding, AI techniques were used by Steve Salzberg to find the Lyme disease and syphilis genes recently. There have also been the automatic classification of galaxies and stars and winning game programs.





In addition, AI has a very impressive record of spin-off ideas: Lisp, SCHEME, PROLOG, maybe SMALLTALK, garbage collection, object-oriented programming. Lots of programming ideas came out of AI labs; so did the mouse, the graphic user interface, software agents, data mining, telerobots, metasearching, PCs, laser printing, time sharing, symbolic math aids, computer games, graphics, and others.

Many important future applications will require progress in AI: high-quality translation; high-precision web search, really finding just what you want as opposed to finding a list that hopefully contains what you want; content-based image matching, finding the image you want; intelligent robots, intelligent environments, smart offices and houses; and autonomous space systems.

Moreover, I think it's very clear that understanding cognition—understanding the way the brain works, what the mind does—is a key scientific grand challenge. I think that within the next 20 years, we will have available affordable computing power sufficient to match human abilities, at least in principle. We'll have a lot more knowledge of the brain and mind. So the future looks very rosy in that regard.

Here is a chart, with thanks to Hans Moravec (figure 1). You can see that computers are very quickly approaching the chimp-human-whale cluster on the upper right. And it won't be too long after that before these creatures will be overtaken if trends continue. And there's very good reason to believe that trends will continue.

Figure 1. Relative Compute Power and Memory. [Chart after Hans Moravec: computers (pocket calculator; IBM mainframe, 1953; Cray-1, 1977; IBM PC, 1983; 64K CM-2, 1987; IBM 3090 mainframe, 1990; 1K CM-5, 1992; Pentium PC, 1993; Intel Tera, 1996) and creatures (sponge, slug, bee, chimp, human, whale), plotted by compute power (bits/sec, powers of ten) against memory size (bits, powers of ten).]

What Are Brains For?

That was the good news. Let's step back and look at the Big Picture: What is the brain really doing; what is intelligence about? This is a personal view, but I feel pretty confident and strong about it. The central phenomena of intelligence are tightly related to life and death issues. We have brains because they really made a difference to human survivability as individuals and as a species. What exactly is AI trying to study, understand, and explain? The study of intelligence is an ill-posed problem: What is intelligence? A better question is, What is intelligence for? That one I think we have some chance of answering. And the answer to it is that any kind of intelligent phenomenon that we see, whether physical or behavioral, is there because it really serves the organism's survivability purposes. Brains are for matters of life and death. Brains are primarily organized to perform fast action selection in order to promote survival. Brains make decisions on issues such as fight or flight, on the finding of food and water, on the selecting of mates and reproducing, and, in only a slight exaggeration, I think the basic operation the brain is doing is asking over and over again, quickly, Can I eat this, can it eat me? Now, it doesn't seem to us that this is happening. Partially this is because, given the degree of intelligence and development that we have as a culture, we've created a world where food is plentiful and predators or dangers are kept at bay. But that doesn't tell us much about where the brain came from. We surely haven't changed very much physically in 5,000 years. Five thousand years ago the world was a much more dangerous and uncertain place. We've been able to relax about these life and death issues because we've created a supportive, protective culture. But I think that the brain arose during much less secure times.

How did the brain actually come to accomplish fast action selection? It accomplished it through the evolution of a lot of critical structures: innate drive, reward, and evaluation systems. We know what we like. We know when something we experience is good or bad. We don't have to be taught that; in fact, that's the mechanism of teaching. The brain consists largely of many parallel, short, interacting chains of neurons that connect attention and perception systems to action systems. We know the chains are short because we can act on things within roughly 100 milliseconds, and given the latencies of neurons, which are on the order of a millisecond, there can't be many more than 100 layers of neurons between inputs and outputs. That's very short! There's not much chance for backtracking or serial reasoning. Of course, the paths could be very broad and bushy: There could be many parallel paths and many neurons involved, but the depth can't be very great.
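A back-of-the-envelope sketch of that depth bound, using only the figures just quoted (about 100 milliseconds to act and about 1 millisecond per neural stage); the constants are rough assumptions, not measurements.

```python
# Rough bound on the serial depth of a perception-action chain.
# Assumed figures (from the discussion above): ~100 ms to act on a stimulus,
# ~1 ms latency per neural stage; both are order-of-magnitude estimates.

REACTION_TIME_MS = 100.0   # time from stimulus to action
STAGE_LATENCY_MS = 1.0     # per-neuron (synaptic + conduction) delay

max_serial_depth = int(REACTION_TIME_MS / STAGE_LATENCY_MS)
print(f"At most ~{max_serial_depth} serial layers between input and output")
# The chain can be arbitrarily wide (many parallel paths), but any single
# path can pass through only on the order of 100 neurons: too few for
# backtracking or lengthy serial reasoning.
```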



The other very important things we have are many metasystems that watch other systems, that in particular watch the actions of these short chains (and other parts of our brains) and look for regularities that they can predict. They can generate shortcuts—essentially predictive mechanisms—and also try to generate metaobservations and learning controls. This is very important for intelligence and probably one area in which people are clearly differentiated from other creatures. There's been a lot of discussion over what drives learning. One popular theory is that learning is failure driven. I think that's roughly true, but I'd like to broaden it a bit: I think learning is surprise driven. Whenever things are predictable or emotionally mild, not much changes in the brain. Why should it change? The world is more or less in our control, we know what's going on, and there's no particular reason to change anything. But when we're surprised, when expectations are violated, and when we encounter novel items—things we've never seen before, situations we know we've never been in before—then I believe we drop into a different mode in which we note and remember items much more vividly. The more surprising, the more alarming, the more novel, the more is remembered.
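To make the claim concrete, here is a small, hypothetical sketch of my own (not a model from the address): a predictor is updated in proportion to how badly its expectation was violated, and episodes are stored vividly only when that surprise is large. The class name, rates, and thresholds are invented for illustration.

```python
# Illustrative sketch: learning and memory modulated by surprise
# (expectation violation), as suggested in the text. Not a brain model.

class SurpriseLearner:
    def __init__(self, base_rate=0.05):
        self.prediction = 0.0      # running estimate of "what usually happens"
        self.base_rate = base_rate
        self.memory = []           # (observation, vividness) pairs

    def observe(self, outcome):
        surprise = abs(outcome - self.prediction)        # how wrong we were
        # Predictable events barely change anything; surprising ones change a lot.
        rate = self.base_rate * (1.0 + 10.0 * surprise)
        self.prediction += rate * (outcome - self.prediction)
        # Surprising episodes are stored with many raw details ("vividly").
        if surprise > 0.5:
            self.memory.append((outcome, surprise))
        return surprise

learner = SurpriseLearner()
for x in [0.1, 0.1, 0.12, 0.9, 0.1]:   # one shocking outcome in a dull stream
    print(round(learner.observe(x), 3))
print("vivid memories:", learner.memory)
```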

Why are surprises remembered? Well, we need to assign credit. When we encounter something novel, we don't know what to think of it. We do have to note such items in order to later judge their importance. (Curiosity is also tied up with this. We may seek out surprises if they aren't coming fast enough for us and our learning rate is stagnating.) We learn disproportionately from intense experiences. Think of your first romantic experience. You probably thought about it for a long time afterward. You probably still remember it pretty vividly, if you're anything like me. What's going on?

Consider great accomplishments, things that you worked very hard on and finally achieved, things that were extremely lucky, praise that you've gotten from people that you admire, experiences of terror or pain, deaths of loved ones, abuse, humiliation, failures; all these things cause us to experience intensely. In such cases, we note and remember many relatively raw features, both ones that were present at the time that the shocking or surprising event happened and ones from around that time (songs, weather, clothing we were wearing, particular locations, and so on). We tend to need a long time to get over intense events, and we may obsess over them over some period of time. I believe that such obsessing corresponds to substantial reorganizations of our minds/brains.

What's the purpose of obsessing? Why would it be valuable to remember many details, most of them irrelevant? Imagine that we experience two very similar situations, one case that turns out very positive and another similar situation that turns out extremely negative. How are we ever to learn to predict the difference in the future? If we've generalized the first experience to a high degree, selecting minimal patterns of features, there's no way in which we can later reliably learn the differences between the two, since certain features, now lost, may be essential clues to telling the difference. So I think one important reason we remember such things vividly and go over the features of the experiences is so we can later identify the differences between these situations and differentiate them. We don't necessarily create accurate causal models. We can be haunted by experiences even though we know rationally they may never happen again. Even those of us who are highly scientific may feel at least hints of superstition—"well, things turned out well when I did x, and even though I can't see a causal relation from x to the outcome, I might as well do x again."

What's the value of obsessing? I think the brain and mind "work through" intense experiences by repeated rehearsal, daydreams, and talking to oneself. In the process, we construct demons that can give us timely alerts: We can avoid bad things in the future, avert disasters, or we can seize opportunities that present themselves. By rehearsing situations, we can construct demons that give us very early warnings, noting the faintest signs of dangers and opportunities ahead. I think we assign values to people, objects, places, situations, events, and relations by constructing a number of very quick static evaluators. These provide us with an instant impression of whether something is good or bad. There are recent psychological studies by Bargh that strongly suggest that people have quick (often unconscious) positive and negative reactions to just about everything.

Language and mental imagery play a critical role in this. Our basic perceptual systems are fast and feed forward. To modify these systems and build new ones, I think we use internal language and mental imagery to "reexperience" events and "what-if" scenarios, so that we can build new chains for when we encounter similar situations again, as well as for hypothetical situations that we have imagined in this process. The results are new, very fast feed-forward perception-action chains as well as self-descriptive prediction systems.

If events are sufficiently negative, they can be debilitating. Battle fatigue and delayed stress syndrome are examples of experiences so intense that every subsequent experience is likely to contain some reminders of it. That is, certain events can generate so many features—all associated with strong negativity—that any day-to-day experience will share some features that cause the victim to recollect and relive the negative experience. And I think we all know sad stories of adult maladaptations that can be traced to abuse in childhood.

Animal Intelligence

Much of this is shared with animals. I had hoped to bring a videotape that had been shown by Pat Churchland, but I'll have to settle for describing it to you. We are not alone, I believe, in being smart and self-reflective in exactly the sense I've been talking about. Pat showed a video of a baboon being put in front of a mirror. This was a large mirror, a wall-sized mirror, very clear and sharp. The baboon looked at the mirror, saw another baboon, lunged at "the other baboon," which in turn of course lunged back at her. She jumped back in surprise, then made a threatening movement, mirrored by the other baboon, and she then just ran off in a panic. The video then showed a chimp being put in front of the same mirror. The chimp initially had the same reactions: She jumped toward the mirror, and the other chimp jumped toward the mirror. The chimp then just stood back, making some faces and hand motions and watching the "other chimp" doing the same things. And then there was a cut to a scene where the chimpanzee was turned back to the mirror, examining her own rump and then, up close, looking in her own mouth, with an expression that conveyed "ooh, what's this?" It was really startling. There was no question that the chimp was self-reflective, that she realized not only that she was seeing herself but also that she had found a new method for discovering things about herself, things she'd never been able to see or know before.

Animal Superstition

Konrad Lorenz was very fond of geese and imprinted himself on many of them. He'd be present when they hatched, and they would imprint on him and follow him everywhere. He was essentially their mother. He had one particular favorite goose who followed him everywhere, except that the goose would never follow him into his house. Geese are said to be afraid of going into dark places, so the goose would always stop at the door. But one day, the goose absent-mindedly followed Lorenz into the house and suddenly found herself in this completely dark place and panicked. When geese panic because of darkness, they'll fly toward light. So she flew to a window, stood by the window and cooled down, and after her eyes adjusted to the darkness, she resumed following him around the house. Thereafter the goose followed him into the house every day: The goose would follow him in, go and stand by the window for a while, and then return to following him. But gradually, over time, the goose started making only a sort of detour toward the window, no longer standing by the window, but just making a deviation in her path. One day the goose absent-mindedly forgot to make the deviation in her path after coming in the door, suddenly realized it, flew in a panic to the window, and cooled out for a while. So what was going on there? Lorenz argued the goose was truly superstitious: somehow, even though the goose now knew the house well and felt safe there, she needed to make that little deviation in her path, or else who knew what could happen? We humans aren't so different.

The Bad News

AI has for the most part neglected these sorts of issues. And I think this is a serious problem, because it raises the question of whether AI, if it continues on its current course, is really up to the challenge of fulfilling its stated long-term goals. I would argue that most AI research to date has been dictated largely by applications that people have chosen to work on or by available methods. And so it's rather like looking under the light for your keys, even though you know you lost them somewhere else, because it's easiest to see there. AI has ignored most of neuroscience—some neuroscience knowledge has come into AI via the neural nets community, which has made a serious attempt to follow neuroscience—but AI really hasn't taken neuroscience seriously, even though it is a very fast-growing and relevant body of knowledge. It's very clear that fMRI (functional magnetic resonance imaging) studies support society-of-mind and/or PDP-like models; mental activity is highly distributed. This hasn't had much of an impact on AI. And it ought to, because, even as a practical matter, we are in the age of distributed computers.

I'm going to talk about these three topics: (1) perception, situation assessment, and action; (2) meaning and deep language understanding (as opposed to statistical processing or web searching); and (3) generativity and creativity.

    Perception, Situation Assessment, and Action

Most of the AI work in perception, situation assessment, and action concerns symbolic reasoning about situations. AI models already assume that somehow we've discovered the objects and relations that are important in a situation. Very little work has been done on the problem of actually turning real, complex scenes into symbolic situation descriptions. And I would argue that this is where most of intelligence really lies.

Perceptual pattern recognition is in a very primitive state. Human Go champions face no threat from AI programs anytime soon. Why is that? Well, people are very good at and, I believe, very dependent on being able to make perceptual pattern judgments. In chess, this is also true. But chess is a sufficiently easier game that it has been attacked successfully by computers in a very different fashion.

Why hasn't AI looked at this sort of thing more carefully? I think there are two main reasons: One is that perception really is complex and difficult. It depends on extensive innate wiring. Broad principles just aren't going to work. There are some principles, but there are a lot of them, and few apply generally. Some examples: Animals clearly have extensive innate perceptual abilities. Animals have mating rituals and dances that are extremely elaborate (another topic of Lorenz if you're interested in details). But clearly these are not learned; they're innate in the creature. Ungulates—cattle, sheep, horses—can walk, they can find their mother's udders, they don't bump into things, and they do all this immediately from the time they are born. They don't learn this. It's wired in. Human experiments show that young children, too young to have been able to explore the world with their hands, understand in some fairly deep way the physics of the world, as discussed later. In contrast, most of AI work in vision—which, by the way, I think is a good and thriving field—has concentrated on low-level vision—edge finding, segmentation, object recognition, 3-D structure, and so on—and hasn't really treated perception, that is, the problem of how one selects what's really important from among all the possible segmented objects.

Evidence for Innate Perceptual Knowledge in Humans

To make clear the importance of innate wiring in human perception, here are some experiments from Renee Baillargeon and Elizabeth Spelke. The basic form of their experiments is this: An infant sits in a seat and looks at some sort of display, and if the child attends to the display, then the presumption is the child is interested in it, there is something novel about it, something surprising or interesting; but if the child starts getting bored, looking around and fidgeting, then the presumption is the child is not interested anymore because the displayed situation is boringly familiar. This sounds shaky, but researchers have been able to replicate very strong, clear effects. In the first experiment (figure 2), a 4-1/2-month-old infant is shown a scene consisting of three regions, one occluding the other two. In this case, if you show the child the figure where the occluded regions are similar and then pull on one of the regions, but only one piece comes out, the child is very surprised. The child expects that both regions are part of the same object and that they will move together. If you take a scene with dissimilar occluded regions, pull on one, and both pieces move together, the child is surprised. The child expects, because the regions are different, that they are not part of the same object. And this is judged by the child long before the child could have figured out such things by playing with objects. Figure 3 shows a similar display: If you show a block with occluded regions moving in a coordinated manner behind it until the child gets bored, you then remove the occluding object to show either a complete rod or two short rods joined invisibly. Now, to a 4-1/2-month-old, the left bottom display (the complete rod) is extremely boring: The child has already figured out that the rod continues behind the occluding object. But the situation on the right is very interesting: The child is very surprised that these aren't really joined. It turns out that before 4-1/2 months, the effect is exactly opposite, and nobody really quite understands why that is. But it can't have been judged from experience, because an infant of this age still lacks hand-eye coordination and can't have had any physical experience with such situations.

Figure 2. When Two Object Parts Are Seen behind an Occluder, 4-1/2-Month-Olds Assume There Is One Object If the Parts Are Identical (left) and Two Objects If the Parts Are Dissimilar (right) (Bremner, Slater, and Butterworth 1997). From A. Slater, "Visual Perception and Its Organization in Early Infancy," © 1997. Reprinted by permission of Psychology Press Ltd.

Figure 3. Habituation and Test Displays in Experiments on the Perception of Partly Occluded Objects. During habituation, the rod moved back and forth behind the occluder (Bremner, Slater, and Butterworth 1997). From A. Slater, "Visual Perception and Its Organization in Early Infancy," © 1997. Reprinted by permission of Psychology Press Ltd.

Figure 4 requires some explanation. The bottom line of the figure depicts a block—the shaded object on the left—and a thin board, which is the object on the right. The child sees the board rolling up to and then past vertical, coming to lean on the block, then coming forward and lying flat on the table. And that's dull and uninteresting to the child. The middle display, an impossible event, represents the following: The board rolls up, goes back to where the block should be but is not supported by the block (which has been removed by the researcher), allowing the board to go flat on the table. The board then rolls forward, and then the block appears; so it's as though the block somehow disappeared and reappeared during the process. This is very surprising to infants. So in some sense they really understand that the object ought to be there, that it ought to support this board, and that if it doesn't, something is really wrong. So there has to be a lot of innate wiring—really a lot of knowledge—that's built into us. To build a perception system, that information somehow has to be there. It's not likely to be learned easily—it was learned evolutionarily, but for AI programs, it would have to be programmed in.

Figure 4. Schematic Representation of the Habituation Event and the Possible and Impossible Test Events Used in the Principal Experiment (Baillargeon, Spelke, and Wasserman 1995) (courtesy of Renee Baillargeon, University of Illinois at Urbana-Champaign).

    Links between Perception and Action

The second reason that perception is difficult is that perception is goal directed and, therefore, varies as goals change: Perception is tightly linked with a rich set of goals and actions. Of course, not all aspects of perception are goal directed. Some parts of scenes are inherently salient and "stand out": things that are very big, or moving very fast, or very bright, or familiar (for example, people you know). But in general, perception is responsible for assessing scenes relative to the assessor's goals and experiences. You see in the scene opportunities and dangers, things you can eat, things that can eat you, metaphorically of course. There's a large body of work by J. J. Gibson from the 1950s and 1960s that talks about perception in exactly this form. Gibson described the perceived world as consisting of affordances, for example, the things that the scenes offer to us, the possibilities and risks. I don't think general-purpose representation is even possible, except in the very, very simplest domains, where one has, for instance, plane surfaces with polyhedra. How do you make sense of complex scenes and situations?
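A minimal sketch of that point, my own illustration rather than anything Gibson or the address describes: the same segmented regions receive very different importance scores under different goals. The region attributes, goal weights, and names are invented for the example.

```python
# Toy illustration: perception as goal-directed importance judgment.
# The same scene description is scored differently under different goals.

scene = [
    {"name": "orangutan", "moving": 1.0, "size": 0.4, "edible": 0.0, "escape_route": 0.0},
    {"name": "fruit_cluster", "moving": 0.0, "size": 0.1, "edible": 1.0, "escape_route": 0.0},
    {"name": "neighboring_tree", "moving": 0.0, "size": 0.9, "edible": 0.0, "escape_route": 1.0},
]

goal_weights = {
    "photographer": {"moving": 0.6, "size": 0.3, "edible": 0.1, "escape_route": 0.0},
    "zoo_collector": {"moving": 0.3, "size": 0.1, "edible": 0.0, "escape_route": 0.6},
}

def salience(region, weights):
    # Weighted sum of region attributes under the current goal.
    return sum(weights[attr] * region[attr] for attr in weights)

for goal, weights in goal_weights.items():
    ranked = sorted(scene, key=lambda r: salience(r, weights), reverse=True)
    print(goal, "->", [r["name"] for r in ranked])
```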

To illustrate this in part, consider figure 5. This is a picture from the National Geographic, and at the first level of analysis, it's an orangutan in a tree. This is a hard scene to parse, with many very small regions. I'd like you to imagine yourself in the following situation: Suppose you're really in the scene, and this is what you're seeing; you're there. If you're a photographer looking at that scene, what would you see? I would argue that, first of all, as a photographer for National Geographic, when you look at this you'd want to be able to show enough of the surroundings to give an idea of the habitat and terrain in which this orangutan lives, how much visibility there is, how much she wants to be hidden versus being able to see what's coming in terms of predators, what kind of foliage or vegetation this creature might eat, and whether she's in a nest or not. You'd want to show how high off the ground she is, have a pleasing composition to the picture, and so on.

Now suppose instead that you are a collector of animals for a zoo; then what would you see? Well, I think you'd see something very different. You first of all might look and say, "Hmm, can I sneak up on this creature? Is there any way for this creature to get away? Could the creature jump to this nearest tree? Could I get there in time? Is there a path? How would I put something, a net or something, out to catch it? Could I hit her with a dart from here?" You'd be looking at very different parts of the scene and in different ways. Third, imagine that you are a member of the World Wildlife Federation, knowing that there might be a collector, a poacher, nearby. Then what would you look at? Well, I think you'd look around the bases of the trees and ask yourself, "Is someone sneaking up on this creature? Is there some way I could intercept such a person in time?" You'd look at different parts of the scene, you'd look at them in different ways, and you'd look at them in terms of paths, trajectories, distances, escape routes, accessibility routes, and so on.

Figure 5. A Complex Scene (Knott, C., and Laman, T. 1998. Orangutans in the Wild. National Geographic 194(2): 30–56). Courtesy Timothy Laman, National Geographic Society Image Collection.

AI Successes in Perception and Language

Now, I don't want you to think that all the news is bad. There are a number of AI successes and promising projects that address the issues I've been discussing. I'll list just a few. Successful systems that have perception and action for the most part don't do anything terribly complex, but they do take real-world scenes and do something useful with them. I mentioned RALPH, the autonomous driving system used in the NO HANDS ACROSS AMERICA project. RALPH looks at the real world, and it finds travel lanes, other vehicles, road obstacles, hazards, exits (to avoid), intersections, and so on, and it does something with them, namely, it changes the direction of the car and may hit the brakes. Autonomous planetary explorers like SOJOURNER have been pretty dumb to date, but SOJOURNER's descendants will do things such as locating interesting targets, for example, rocks or their home base, and devising paths to get to these targets while avoiding obstacles such as big rocks or cliffs that they might not be able to climb over or might fall into. And there is a large body of work by Rod Brooks et al. on various reactive robots that interact with the real world.

Evolution of Language

Language and meaning are quintessential features of human intelligence. Language is not just for I/O. Early on in AI's history there were influential articles that argued that the central core topics of AI were logic, reasoning, and knowledge representation and that language and vision were peripheral operations for the central reasoning core. I disagree; language is central, not peripheral.

The origins of language are unclear, but it's fun to speculate. My guess is that language is the intersection of (1) the ability to reify objects perceptually, that is, the ability to create coherent mental chunks that can be named; (2) the richer association of possibilities that a larger brain offers—more possible connections; (3) metaoperator development that lets us learn via self-observation and rehearsal, as I mentioned earlier; and (4) preexisting signaling systems, shared with animals. I believe there may have been a synergy between these abilities that led to a critical advantage for early ancestors and eventually developed quite significantly, giving huge advantages to hunters and gatherers and ultimately leading to civilization. (There are many other proposals. One seriously advanced theory is that gossip is the root of language; that gossip is extremely important for social bonding and the stratification of society; and, consequently, that we are wired to love gossip, much as we're wired to love food and sex. It seems to be true that most of us love to hear and tell juicy stories about the people around us. Language thus developed initially to let us gossip more successfully! Others have suggested that language developed because women preferred poetic men. I think that these factors may have played some role, but not the central one.)

I think language evolved through interlocking synergies. The relatively undeveloped state of human newborns made it possible for us to learn a great deal more, including language, because we were born incompletely developed. There's a natural course of innate development that would occur anyway, but its direction and content are influenced by outside experience—by the language heard around us, by perception, by the particular circumstances we find ourselves in. In turn, of course, the helplessness of lengthened childhoods requires stable families and societies. And so language itself may also have played a role in increasing the fitness of families and societies that made possible the longer childhoods and support of children until they were able to go off on their own. And so these forces may have formed a synergy that was important in language development. Language makes culture possible: We don't even have culture without language. Memes—the things that we name and share as the standard units into which we divide the world—are language items. They might be things that we discover ourselves perceptually, but they might not be. Language makes them inescapable. And of course language itself is a major element of culture. Language itself can substitute for direct experience, so much of what I said about perception is also true of language. Language allows us to experience things that we can't see directly, things that are far away in space or time, as well as accumulated culture and more abstract things: morals, hints, metaconcepts, stories, hypotheticals, rules, and so on. Language is self-extending, so that once a certain level of proficiency is developed, people can reify objects and events even if they haven't seen them perceptually. (Perhaps we can't even avoid reifying experienced items.) And I think we need importance judgment for abstracting correctly—there are a lot of ways we can abstract. How do we do it properly? And we clearly need judgment for understanding things such as anaphora, humor, fables, or parables. A fable isn't just a story about some creatures. It's really about some bigger issue, and we somehow learn early to understand that.

Language is related to and depends on perception. The possible meaning structures for utterances are more constrained than in the case of perception, which is much more open-ended. But many of the arguments made for perception also hold for language. Language can have special meanings and affordances for the understander: You can hurt or inspire somebody very effectively with language. Probably for many of you, the most intense experiences you've had—positive or negative—have been language mediated rather than perceptually mediated. Some of what your parents said to you, your boss said to you, or your child said to you has likely had a huge impact.

Much language requires perceptual reasoning. I wrote an article a long time ago when I was already thinking about these sorts of things. The central example I used was, "My dachshund bit our postman on the ear." Now why is that an odd sentence? It's odd because postmen's ears aren't ordinarily anywhere near dachshunds' mouths. So one needs to invent some sort of story for how this could happen, how this could possibly make sense: maybe the postman fell down, or the postman picked up the dachshund to pet him. But this requires perceptual reasoning, possibly involving the perceptual system, even though the similar sentence, "My doberman bit our postman on the ear," is not problematic—a doberman is big enough to reach the ear. But the fact that the perceptual system can be pulled into action if necessary is, I think, significant.

How can we build systems that understand language? I think text-based learning methods, statistical methods, simply won't do it all, unless we somehow solve perception and can use perception to support language learning from experience, which is a big if. Humans learn language with the aid of perception. Symbol grounding is very important. Language experience has to be, to be really understood, linked into the structure of the corresponding situation, which, like language, also has meaning and context. As with perception, I believe that much of language is innate. Chomsky's main point is that syntax is largely innate, and I think this is moderately well established, although the scope of innate functionality is certainly still arguable. Less arguable are the facts that deaf children spontaneously sign and that twins are frequently observed to invent private languages of their own. And there are many other examples.
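Here is a toy sketch, entirely my own, of the kind of perceptual plausibility check the dachshund sentence above calls for; the reach figures, dictionary names, and helper function are invented for illustration.

```python
# Toy grounding check: is "X bit Y on the ear" geometrically plausible
# without inventing an extra story (the postman bends down, the dog is
# picked up, ...)? Figures are rough guesses in meters, for illustration.

MAX_REACH_M = {"dachshund": 0.5, "doberman": 1.8}   # rearing up on hind legs
EAR_HEIGHT_M = {"postman": 1.6}                     # standing adult

def bite_on_ear_is_plausible(dog, person):
    """True if the dog can plausibly reach the person's ear unaided."""
    return MAX_REACH_M[dog] >= EAR_HEIGHT_M[person]

for dog in ("dachshund", "doberman"):
    ok = bite_on_ear_is_plausible(dog, "postman")
    print(f"My {dog} bit our postman on the ear -> "
          f"{'fine as stated' if ok else 'needs an explanatory story'}")
```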

I'm going to show you a video of HOWARD, one of the best systems for actually understanding real, complicated, moving scenes. This is work by Jeff Siskind, who's at NEC Research Institute now. The work was originally done while he was at the University of Toronto. Figure 6 shows a sample from Jeff's current follow-up of this earlier research. The following is the audio track of the videotape:

People can describe what they see. They describe not only objects, like blocks, but also events, like picking things up. This video presents HOWARD, a computer program with the ability to describe events. Our approach is motivated by a simple observation: We pass movies of events through an edge detector. While people couldn't recognize the objects from stationary edges alone, they could recognize events depicted by edges in motion.

An event class can be described by the motion profile of its participant objects. A picking-up event has two subevents: First, the agent moves toward the patient while the patient is at rest above the source. Then the agent moves with the patient away from the source. Our event models attempt to capture these motion profiles.

We don't use detailed image understanding to perform event recognition. Only the rough position, orientation, shape, and size of the participant objects are needed. We represent this information by ellipses centered on the participants. Our training and classification procedures operate solely on the streams of ellipse parameters produced by our tracking procedure.

We process the movie one frame at a time to find regions of pixels that are either moving or brightly colored. First, we ignore pixels with either low saturation or value. Then we group nearby similarly colored pixels into regions, discarding those that are too small. Optical flow is then used to find moving objects in the non-brightly colored pixels. Next, we group nearby moving pixels into regions, again discarding those that are too small. We use both the colored and the moving regions to track the object. Finally, we fit an ellipse to each region. Subsequent processing uses only ellipse data.

Here you see the colored and moving regions found by applying our tracker to a sample movie. Once the ellipses are placed in each frame, we find the ellipse correspondences between frames. Our technique attempts to fill in missing ellipses and filter out spurious ellipses.

We processed 72 movies with our tracker. Of these, 36 were randomly selected as training movies, 6 movies for each of the 6 event classes. We constructed a single hidden Markov model for each event class. We then classified all 72 movies, both the original training movies and those not used for training, against all 6 event models. Our models correctly classified all 36 training movies and 35 out of the 36 test movies.

You are watching the results of our classifier now. Different events can have different numbers of participant objects. Our pick-up and put-down models have three participants, while our push, pull, drop, and throw models have two. Movies are classified against only those models with the same or fewer numbers of participants as the ellipses found by our tracker. Our classifier can correctly match a subset of the ellipses in this three-ellipse movie to the drop model, which has only two participants. It appears that poor tracking caused one drop movie to be misclassified as a throwing event.

Figure 6. Video Sequence to Be Classified as an Event by HOWARD (Siskind and Morris 1996) (courtesy of NEC Research Institute).

I should comment that Jeff can now show you a real-time demo on his desk in normal light, thanks to computer power increases. He's trying to significantly increase the set of possible events. On one hand, that doesn't seem very impressive: It's a long way from being able to do what you'd really like a system to do, namely, describe what's going on, given any possible scene. On the other hand, the real world, with its shadows and complexity, is hard to work with, as any of you who have tried it can attest.
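The numbers in the narration (one HMM per event class, six training movies per class, classification by best likelihood) suggest the general shape of the approach. The sketch below is my own illustration of that shape, not Siskind's code: it assumes the third-party hmmlearn package, and the "ellipse parameter" sequences are synthetic stand-ins.

```python
# Sketch of per-class HMM event classification over ellipse-parameter streams,
# in the spirit of the HOWARD description above (not the original system).
import numpy as np
from hmmlearn.hmm import GaussianHMM

def fake_sequence(event, length=40):
    """Invented stand-in for one movie's per-frame ellipse parameters
    (x, y, orientation, major axis) for a single participant."""
    t = np.linspace(0, 1, length)
    half = length // 2
    if event == "pick_up":            # flat, then rising
        y = np.concatenate([np.zeros(half), t[: length - half]])
    else:                             # "drop": falling, then flat
        y = np.concatenate([t[:half][::-1], np.zeros(length - half)])
    return np.column_stack([t, y, 0.1 * np.ones(length), 0.3 * np.ones(length)])

rng = np.random.default_rng(0)
classes = ["pick_up", "drop"]
train = {c: [fake_sequence(c) + 0.01 * rng.standard_normal((40, 4)) for _ in range(6)]
         for c in classes}

# One HMM per event class, trained only on that class's movies.
models = {}
for c in classes:
    X = np.vstack(train[c])
    lengths = [len(s) for s in train[c]]
    models[c] = GaussianHMM(n_components=3, covariance_type="diag",
                            n_iter=50, random_state=0).fit(X, lengths)

# Classify a new movie by the model giving the highest log-likelihood.
test = fake_sequence("drop") + 0.01 * rng.standard_normal((40, 4))
scores = {c: m.score(test) for c, m in models.items()}
print(max(scores, key=scores.get), scores)
```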

Overall, to really build intelligent systems, I suspect that large amounts of hand coding are going to be essential. Fortunately, a lot of work has been done in recent years. We have examples such as WORDNET and CYC. In the subsections to follow, I'll tell you about the PAPPI system and work by Inquizit, Inc., that I recently encountered. Inquizit has built a huge natural language processing system with many levels and functionalities.

PAPPI

PAPPI is the work of Sandiway Fong of the NEC Research Institute. PAPPI parses a broad range of difficult sentences from 12 languages. It's a "principles and parameters" parser, inspired by Noam Chomsky's ideas. Figure 7 shows parse trees for sentences in several languages: English, Korean, and German. PAPPI comprises about a quarter million lines of PROLOG code, and it's able to handle new languages by simply setting parameters correctly, an impressive feat. Sandiway's group has been able to add new languages, including complicated languages such as Hungarian or Turkish, based on one summer's work with a linguistics grad student for each language.

Figure 7. Parse Trees for Sentences in English, Korean, and German (courtesy of NEC Research Institute).
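PAPPI itself is a quarter million lines of PROLOG; the following toy sketch of mine only illustrates what "setting a parameter" can buy in this style of grammar: a single head-direction switch reorders the same underlying structure into English-like or Korean-like word order. The function names and example sentence are invented.

```python
# Toy "principles and parameters" illustration: one parameter (head direction)
# reorders the same underlying structure. Vastly simpler than PAPPI.

def verb_phrase(verb, obj, head_initial=True):
    """Head-initial (English-like): verb before object.
    Head-final (Korean-like): object before verb."""
    return [verb, obj] if head_initial else [obj, verb]

def sentence(subject, verb, obj, head_initial=True):
    return [subject] + verb_phrase(verb, obj, head_initial)

print(" ".join(sentence("Mary", "reads", "books", head_initial=True)))   # English-like
print(" ".join(sentence("Mary", "reads", "books", head_initial=False)))  # Korean-like order
```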

INQUIZIT

INQUIZIT, a product of Inquizit Technologies, is a very interesting system, written by Kathy Dahlgren and Ed Stabler over a 14-year period. INQUIZIT has an extensive word-sense lexicon with 350,000 word senses as well as a morphological analyzer that lets it actually represent over a million word senses. INQUIZIT has an ontology with inheritance and a "naive semantics" with first-order predicate calculus tests to determine when the various definitions can apply. Figure 8 shows INQUIZIT applied to an employee manual, answering the question, "Who trains employees?" The point here is to show that INQUIZIT knows a large number of meanings for "trains"—railroad cars, a parallel line of people, and so on—and it's able to pick the correct meaning—"instruct"—based on the values of predicates it applies to the surrounding sentence. Figure 9 shows a sentence being used to do text retrieval; the highlighted text is what's retrieved. The question here is, "Can they can an employee over drug addiction?" The example illustrates first that INQUIZIT can pick the right meaning for each occurrence of can, but, more importantly, that INQUIZIT can match text based on deep meanings—note that the word can is never used in the matched passage. INQUIZIT understands that in this case can means terminate. Figure 10 shows a third example, "What crimes should be reported?" In this case, the word crimes never appears either, but the matched text says, "Are you aware of any fraud, embezzlement, inventory shortage?" and so on, and INQUIZIT is able to use its ontology to judge that those words are specific examples of crimes and match them. Building PAPPI and INQUIZIT (and CYC) has required a huge amount of hand effort. The bad news is that I think we will need to tackle a number of similarly large tasks in order to build systems that are really intelligent. The good news is that it's possible that this can be done successfully for very large—but finite—domains.


Figure 8. INQUIZIT, as Applied to an Employee Manual, Answers the Question "Who Can Train Employees?" (courtesy of Kathy Dahlgren, InQuizit Technologies, Santa Monica, California).

Figure 9. INQUIZIT with the Question "Can They Can an Employee over Drug Addiction?" (courtesy of Kathy Dahlgren, InQuizit Technologies, Santa Monica, California).

Figure 10. INQUIZIT with the Question "What Crimes Should Be Reported?" (courtesy of Kathy Dahlgren, InQuizit Technologies, Santa Monica, California).
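As a cartoon of the ontology step in the crimes example, here is a small sketch of my own, not INQUIZIT's machinery: a query term is allowed to match any of its specializations in the text. The tiny hyponym table, helper names, and sample passage are invented for illustration.

```python
# Toy ontology matching in the spirit of the "What crimes should be reported?"
# example: a query term matches text mentioning any of its specializations.

HYPONYMS = {
    "crime": {"fraud", "embezzlement", "theft", "inventory shortage"},
    "employee": {"manager", "clerk", "trainee"},
}

def specializations(term):
    """A term matches itself plus everything below it in the ontology."""
    found = {term}
    for narrower in HYPONYMS.get(term, ()):
        found |= specializations(narrower)
    return found

def passage_matches(query_terms, passage):
    text = passage.lower()
    return all(any(s in text for s in specializations(t)) for t in query_terms)

manual = "Are you aware of any fraud, embezzlement, or inventory shortage? Report it."
print(passage_matches({"crime"}, manual))      # True: matches via hyponyms
print(passage_matches({"employee"}, manual))   # False: no employee term present
```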

Generativity and Creativity

Art, science, and technology depend critically on our ability to create ideas and artifacts that have never been seen before and to create novel relationships between people, objects, and ideas. Doing this clearly depends on developing meaning in perceptual pattern systems and on creativity, but creativity is not just the province of the arts and sciences. Infants and adults of all ages illustrate creativity: infants by their play activity, by their language, and by their understanding and use of metaphor and analogy. We aren't going to have systems we ever regard as smart if all they do is react to what they see. And in fact they can't really understand language unless they are able to make some sense of analogy and so on. And they won't be able to express things very concisely unless they can also use language and use it appropriately. I know this is important, but I'm not certain how to attack it as an AI problem. Some reasonable starting places can be found in the work of Ken Forbus and Dedre Gentner, Maggie Boden, Harold Cohen, John Koza, Doug Lenat, and Doug Hofstadter.

The Role of Logic

A possible objection to what I have said so far is, "Isn't everything built on logic?" A substantial community within AI has long held the view that the human brain somehow implements a perfect logic machine and that all intelligence is built on top of this. A corollary to this view is that in order to create a fully intelligent system, all we need to do is create a perfect logic machine and then build everything on top of it. I'd ask you the following question: If that's true, why are adults so miserable at reasoning and, for the most part, hopeless at formal reasoning? (None of you, of course. You're all great at it! But if you take the overall population, they're not so great.) We have cases of people with Williams syndrome—kids who speak extremely articulately, who have great understanding about themselves, social situations, and other people, but who are absolutely hopeless in reasoning about the physical world and have very low IQs. If intelligence were based on a common logic reasoning system, how could such a case exist, of people who are so good at language and social reasoning but so poor at most other things?

My view is that logic is really the fruit of learning, one of the last things we learn, not the system on which everything is based. Logic is the culmination of repeatedly detecting patterns and abstracting them from specific events and schemas and then from these ever more abstract schemas. However, I think that when we do this, we also retain polymorphic features so that we can apply the patterns and schemas if and only if they are appropriate. As evidence, we have work by Tversky, who showed that people are much better at logic problems when they're stated as social situations; but when exactly the same schematic problem is stated as As, Bs, xs, and ys, people are miserable at solving them. We understand envy, jealousy, competition, and such stuff well, but we have trouble reasoning about sets and relations. We also have such gems as the competing proverbs "Absence makes the heart grow fonder" and "Out of sight, out of mind." They're both sort of right, sometimes. We know when one or the other applies.





Logic lets us talk about unimportant things. It lets us take diverse objects—important and unimportant—and turn them into objects of equal importance. In logic, all objects, operators, and so on tend to be uniform, and this can be extremely important and useful, but it's very different, I think, from what intelligence is usually doing, namely, judging what's important under the circumstances.

Summary

Perception, action, language, and creativity are central to intelligence. They aren't just I/O functions; they aren't just peripheral operations that AI is going to need to connect the real systems to the outside world. There's very little—perhaps no—evidence for a central CPU. There's lots of evidence for the "society of mind." What humans think is important really ought to have a role in AI—how could it not? Understanding systems need goals and values to guide their actions. If they're to act appropriately, they need to make importance judgments and choices. And of course if intelligent systems are going to understand us, they'll need to be able to model us. They're going to need to be able to understand why we've done what we've done, why we say what we say. If they can't do that, they're going to be pretty useless as colleagues or assistants. So even if they didn't need to have their own emotions and goals, they'd have to be able to simulate them in order to deal with us.

So what should AI be doing? I certainly don't want you to go away thinking that we should stop doing applications. I am also very much in favor of continuing research on reasoning, logic, and so on—it's great work. Applications are absolutely critical for continued credibility and funding. The work on logic, reasoning, and knowledge representation has been the heart and soul of what lets us do the AI applications we have done. Don't stop! But as a field we need to do more. We need to have a greater emphasis on science. We need to be more problem driven, not solution driven. Even if we have great hammers, we have to avoid the tendency to see the world as made up completely of nails. I deeply believe we need broader scholarship. To do AI properly, we have to know a lot of areas. We have to know about neuroscience, we have to know about psychology, we have to know about linguistics. This is hard. There's a lot to know. I strongly recommend that you cultivate relationships with people who can be good informants so you don't necessarily have to study the whole literature. When you discover parts of the literature that you really need to know, go and learn them. We aren't going to accomplish our largest goals unless we are prepared to do that. We need this both for inspiration and to avoid reinventing the wheel—or reinventing it hexagonally. Future AI applications will really require us to model language, perception, and reasoning well. Other people can use statistics; other people can use simple stuff that we've already shown how to do. What makes us special? We need to have something new. We need to have something that's better.

Service

That's the main thrust of what I wanted to say technically. I do want to add a postscript. This is very important. In addition to research, I really believe that AI has to give much more attention to service. This was underlined in a recent trip that several of us made to NSF, the first of a series of trips I hope we will make to other agencies as well, part of an education process. The story we heard over and over again was that AI has not given sufficient attention to volunteering (nor has CS in general). AI is not well understood by other people in CS, not understood well by funders, by legislators, or by the public, and it's not going to be well understood unless people in AI are willing to go and be part of the decision-making and contracting functions of government in our country—and I'm sure this is true of other countries as well. This is a critical step toward influencing policies and funding.

Service is also critical for conferences, for editing of journals and conference proceedings, and for prompt reviewing. Review and return your papers more quickly. Let's not make excuses for why it takes two or three years for papers to appear. We can do better than that. Treat reviewing as an important activity, not a nuisance that you only get to after you've done all the more important stuff.



Source List

Creativity

Gentner, D.; Brem, S.; Ferguson, R.; Markman, A.; Levidow, B.; Wolff, P.; and Forbus, K. 1997. Analogical Reasoning and Conceptual Change: A Case Study of Johannes Kepler. The Journal of the Learning Sciences 6(1): 3–40.

RALPH

Jochem, T., and Pomerleau, D. 1996. Life in the Fast Lane. AI Magazine 17(2): 11–50.

Chess

IBM. 1996. DEEP BLUE Chess Program. IBM Research Products and Projects. Available at www.research.ibm.com/research/systems.html#chess.

General

Waltz, D. L. 1997. Artificial Intelligence: Realizing the Ultimate Promises of Computing. AI Magazine 18(3): 49–52. Also available at www.cs.washington.edu/homes/lazowska/cra/ai.html.

Dachshund-Postman Example

Waltz, D. L. 1981. Toward a Detailed Model of Processing for Language Describing the Physical World. In Proceedings of the Seventh International Joint Conference on Artificial Intelligence, 1–6. Menlo Park, Calif.: International Joint Conferences on Artificial Intelligence.

SOJOURNER

MarsMap—Demos. SOJOURNER. ca. 1999. Available at img.arc.nasa.gov/Pathfinder/marsmap2/www_demo.html.

Lyme Disease

Fraser, C. M.; Casjens, S.; Huang, W. M.; Sutton, G. G.; Clayton, R.; Lathigra, R.; White, O.; Ketchum, K. A.; Dodson, R.; Hickey, E. K.; Gwinn, M.; Dougherty, B.; Tomb, J. F.; Fleischmann, R. D.; Richardson, D.; Peterson, J.; Kerlavage, A. R.; Quackenbush, J.; Salzberg, S.; Hanson, M.; van Vugt, R.; Palmer, N.; Adams, M. D.; Gocayne, J.; Venter, J. C.; et al. 1997. Genomic Sequence of a Lyme Disease Spirochaete, Borrelia burgdorferi. Nature 390(6660): 553, 555. Available at http://bric.postech.ac.kr/review/genetics_molbiol/97/12_11.html.

Syphilis

UPI. 1998. Key to Syphilis Vaccine May Be Found. UPI Science News, 16 July. Available at www.medserv.dk/health/98/07/17/story03.htm.

Star and Galaxy Classification

DIMACS. 1998. Astrophysics and Algorithms: A Discrete Math and Computer Science Workshop on Massive Astronomical Data Sets, 6–8 May, Princeton, N.J. Available at dimacs.rutgers.edu/Workshops/Astro/abstracts.html.

Animal Behavior

Glass, J. 1998. The Animal within Us: Lessons about Life from Our Animal Ancestors. Corona del Mar, Calif.: Donington Press.

A good general reference that generally agrees with my views but adds significant new ideas on reasoning, our inner monologues, territoriality, and other important topics.

Lorenz, K. 1966. On Aggression. New York: Harcourt, Brace.

Perception

Goleman, D. 1995. Brain May Tag All Perceptions with a Value. New York Times, 8 August, C1 (Science Section).

Reports on values attached to perceptions; a report on the work of Jonathan Bargh.

Affordances

Gibson, J. J. 1979. The Ecological Approach to Visual Perception. Mahwah, N.J.: Lawrence Erlbaum. Available at www.sfc.keio.ac.jp/~masanao/affordance/home.html.

Theories of the Mind

Moyers, B. 1988. Interview with Patricia Churchland on the PBS Program "A World of Ideas."

A brief synopsis of this videotape can be found at www.dolphininstitute.org/isc/text/rs_brain.htm.

Animal Morality and Superstition

Lorenz, K. 1991. Objective Morality. Journal of Social and Biological Structures 14(4): 455–471. Also available at www.percep.demon.co.uk/morality.htm.

Robots

AMRL. 1994. Project 3: Motion Control for Space Robotics: Behavior-Based 3D Motion. Autonomous Mobile Robotics Lab, University of Maryland, College Park, Maryland.

This site contains reactive robot design information.

Cañamero, D. 1996. Research Interests, AI Lab, Massachusetts Institute of Technology. Available at alpha-bits.ai.mit.edu/people/lola/research.html.

General information, including the Zoo Project, headed by Rodney Brooks, is presented.

Moravec, H. 1988. Mind Children: The Future of Robot and Human Intelligence. Cambridge, Mass.: Harvard University Press.


Executive Council Actions

The last thing I'd like to do is to tell you about what the executive council is doing. When I became AAAI president, I tried to decide what was the most important thing to do. I decided that the most important thing is to make some real use of the large amount of money that we're fortunate enough to have as an organization, to help AI become a better field—not to merely save the money for a rainy day. I think we should be careful that it's not wasted, but I think we really can use the funds we have to do something great.

Here are some of the actions that the council has agreed to recently:

We're going to create a number of new prizes for AI members. I think this is important because nobody gets a Nobel Prize (or Turing Award) without having gotten other prizes first. In fact, you're unlikely to get a prize from any other computer society unless you've already gotten one from AI. We need to recognize the great work that's done in the field, and we need to create heroes and heroines and help people's careers.

We're going to create high school and undergraduate prizes to encourage people so they will want to go into AI as a field and so that they will know what AI's about.

We'd like to sponsor two international science fair teams to come to AAAI—find ones that are doing great work that happens to bear on our topic and perhaps encourage them.

We want to put some AI technology on our web site. Our web site ought to somehow reflect the great things that are going on in the field. It ought to be distinctive. It shouldn't be lagging behind the rest of the field.

We're going to try to commission a science writer, or perhaps several science writers, to write about AI. If the word isn't getting out the way it should be, let's make sure that we hire someone who's persuasive and articulate, tell them what we're doing, and let them make our story.

To encourage people to go into government service from within the field, we're going to create service awards for AI people who serve in government positions.

Bruce Buchanan has already done a lot of work on creating high school information web pages and brochures that should appear fairly shortly. We get a lot of requests from high schools, and we're going to try to answer them better than we have in the past.

Finally, we're going to increase the budget for the national conference scholarships to let more students, who could otherwise not afford to come, come to the conference. Next year we're going to try something interesting called "CHI-Care." This was Jim Hendler's suggestion. The CHI conference has offered this for a while now. Basically, they have children take part in the conference program: The kids write a newsletter about the conference, take pictures of the speakers, interview them, mingle with the conference participants, help out at booths, whatever. Apparently this is hugely successful—a lot of fun for the kids and also another way of getting them interested in the field. Next year we're going to leave the conference fee the same, but we're going to include tutorials, so everyone can go to them. We're going to try to continue to increase cooperation with collocated conferences and continue the series of educational visits to NSF, DOD, NLM, and NIH funders.

And finally, we're going to take a members' survey and act on the good ideas.

Thank you very much for your attention. I actually managed to finish before the end of my hour. Have a great conference!

References

Baillargeon, R.; Spelke, E.; and Wasserman, S. 1995. Object Permanence in Five-Month-Old Infants. Cognition 20: 191–208.

Bremner, G.; Slater, A.; and Butterworth, G. 1997. Infant Development: Recent Advances. Philadelphia: Psychology Press.

Knott, C., and Laman, T. 1998. Orangutans in the Wild. National Geographic 194(2): 30–56.

Siskind, J. M., and Morris, Q. 1996. A Maximum-Likelihood Approach to Visual Event Classification. In ECCV96, Proceedings of the Fourth European Conference on Computer Vision, 347–360. New York: Springer-Verlag.

David Waltz has been vice-president of the Computer Science Research Division of NEC Research Institute in Princeton, New Jersey, and adjunct professor of computer science at Brandeis University since 1993. He is currently president of the AAAI, a fellow of the AAAI, a fellow of the ACM, a senior member of the IEEE, and a former chair of ACM SIGART. Before moving to NEC Research, Waltz directed the data mining and text retrieval effort at Thinking Machines Corp. for 9 years, following 11 years on the faculty at the University of Illinois at Urbana-Champaign. Waltz received all his degrees from MIT. His research interests have included constraint propagation, computer vision, massively parallel systems for both relational and text databases, memory-based and case-based reasoning systems and their applications, protein structure prediction using hybrid neural net and memory-based methods, and connectionist models for natural language processing. His e-mail address is [email protected].
