Interactivity and multimedia interfaces


    Instructional Science 25: 79–96, 1997.
    © 1997 Kluwer Academic Publishers. Printed in the Netherlands.


    DAVID KIRSH
    University of California, San Diego, U.S.A.

    Abstract. Multimedia technology offers instructional designers an unprecedented opportunity to create richly interactive learning environments. With greater design freedom comes complexity. The standard answer to the problems of too much choice, disorientation, and complex navigation is thought to lie in the way we design the interactivity in a system. Unfortunately, the theory of interactivity is at an early stage of development. After critiquing the decision cycle model of interaction, the received theory in human computer interaction, I present arguments and observational data to show that humans have several ways of interacting with their environments which resist accommodation in the decision cycle model. These additional ways of interacting include: preparing the environment, maintaining the environment, and reshaping the cognitive congeniality of the environment. Understanding how these actions simplify the computational complexity of our mental processes is the first step in designing the right sort of resources and scaffolding necessary for tractable learner controlled learning environments.

    Key words: interactivity, multimedia, interface, learning environment, HCI, complementary actions

    Introduction

    Given the obvious potential of computers for promoting active learning, and the rich virtual environments they can now provide, it is natural to ask how we might better design human computer interfaces for active learners. It is presupposed that these computer created environments will be highly interactive, populated with all sorts of useful tools and resources for problem solving, experimentation, communication and sharing, but there remain many unresolved questions about the nature of this open-ended interactivity. Indeed there remain many unresolved questions about the nature of interactivity itself. My objective in this paper is to explore the concept of interactivity, particularly as it applies to the design of multimedia learning environments.

    The essay is organized into three main sections. In the first section I introduce two problems for designers of interactive learning environments created by the freedom of choice multimedia systems offer users. Freedom seems necessary for learning environments emphasizing learner control. But with freedom comes complexity. How can such systems be scripted so that all or most trajectories through the environment are coherent? How can resources and scaffolding be deployed to guide learners without restricting their freedom?

    In Section 2, I inquire into the basic notion of an interactive interface and critique the state of the art in theorizing about interactive interfaces, the decision cycle model. At present there is no settled view of what interactivity means. J.J. Gibson (1966, 1979, 1982) and proponents of the ecological approach to cognition (Lee, 1978; Lee, 1982; Shaw and Bransford, 1977), for instance, have shown how perception itself is interactive. In visual perception, the movements of the eyes, head, and body must all act in a coordinated fashion to control the sampling of the optic array, the information about all the objects and their layout in the viewable environment. Since bodily activity is an integral part of perception, the senses must be considered a system operating under principles of organization that include what Gibson called exploratory actions. Craning, twisting, walking, and glancing are exploratory actions of the visual system; hefting, stroking, squeezing, and palpating are exploratory actions of the haptic system. This important insight has taught interface designers to give immediate sensory feedback so that users can pick up information about the structure and functionality of their software environments and learn in an immediate way about the affordances (Gibson, 1977) that they offer. In multimedia environments it enjoins designers to be sensitive to the information that can be picked up by coordinating virtual movement with imagery change. But this form of interactivity is certainly not the whole story when we consider what is required for reasoning and higher problem solving.

    In Section 3, I discuss how we must overhaul the decision cycle model to accommodate forms of interactivity typically left out. Chief among these is the need to allow users to prepare, explore and maintain their environments. I briefly discuss some laboratory observations we have made of ways humans have of interacting with their environments to minimize cognitive load and simplify their cognitive tasks. My conclusion is that if dynamic interfaces are to support complex learning activities they must not only offer the type of perceptual affordances and action effectivities that Gibson described, they must also facilitate a range of actions for reshaping the environment in cognitively congenial ways.

    Two hard problems for open-ended interactivity

    Beyond scripts

    One canon of educational constructivism is that lessons should be tuned to the very issues which students raise in their personal exploration of phenomena. Learning environments are supposed to create a context where users can discover interesting phenomena, pose their own questions,1 sketch out their own plans for deeper inquiry, and yet change their minds in an adaptive and responsive manner depending on what they encounter. This means that while users probably engage in some degree of advance planning, they are equally involved in continuous, online improvisation.

    This requirement, even in a weak form, poses a major problem for designers of effective learning environments. In current models of multimedia, content must be scripted in advance. The links that join pages must be anticipated, if not directly, then at least by being indexed to allow selection through online search methods, or pushed by selection criteria chosen in advance by the user. The information that is found in an environment, therefore, though not necessarily predetermined exhaustively, must be largely anticipated. This creates an undesirable limitation on improvisation. Evidently the scripting model is inadequate. This poses a problem. Since interactive interfaces ought to foster this type of coordination between improvisation and planning, we need to discover better theories of what is involved in dynamic control of inquiry, line of thought, and action more generally. We need to discover more open-ended models of coherence and narrative structure.

    Good interfaces should help us decide what to do next

    One possible method of maintaining coherence without restrictive scripting is to place enough scaffolding in a learning environment to guide learners in useful directions without predetermining where they will go. This idea of indirectly pointing users in useful directions is an objective for all well designed multimedia interfaces, not only for learning environments.

    It is widely acknowledged (Norman, 1988; Hutchins, Hollan, and Norman, 1986) that good interfaces ought to satisfy the principle of visibility: (a) users should be able to see the actions that are open to them at every choice point, (b) they should receive immediate feedback about the actions they have just taken, since few things upset computer users more than not knowing what a computer is doing when it seems to be churning unexpectedly, and (c) they should get timely and insightful information about the consequences of their actions. Designers have worked hard at finding ways of providing immediate feedback, and ways of exploiting natural mappings between the controls that can be acted on and the effects they have. Their goal is to make the function of knobs and buttons transparent. And indeed, they have been clever in discovering ways of satisfying (a) and (b). They have been less successful in satisfying (c), primarily because in any interesting environment a single action may have multiple effects.2 But they have still made considerable progress in developing techniques to make certain consequences of actions visible.
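    The first two clauses of the visibility principle can be illustrated with a small sketch: a command loop that (a) always exposes the actions open to the user and (b) echoes immediate feedback for every action taken, valid or not. The action names and state here are invented for the illustration, not drawn from the paper.

```python
# A toy sketch of visibility clauses (a) and (b). The actions and the
# set-valued "state" are invented examples, not from the original text.

ACTIONS = {
    "open":  lambda state: state | {"document"},
    "close": lambda state: state - {"document"},
}

def visible_actions():
    """(a) The user can always see which actions are open to them."""
    return sorted(ACTIONS)

def act(state, name):
    """(b) Every action, valid or not, yields immediate feedback."""
    if name not in ACTIONS:
        return state, f"unknown action {name!r}; choose from {visible_actions()}"
    new_state = ACTIONS[name](state)
    return new_state, f"{name!r} done; open items: {sorted(new_state)}"

state, feedback = act(set(), "open")
print(feedback)  # → 'open' done; open items: ['document']
```

    Clause (c), reporting the full consequences of an action, is exactly what resists such a simple treatment when one action has many effects.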

    An even more challenging element of the visibility principle, however, is that effective interfaces ought to provide us with the information we need to decide what we ought to do next. The interactivity built into a multimedia environment should be sensitive to the goals of users, and help to direct them in fruitful directions.

    This may seem an almost impossible request. Yet one possible source of insight into this dilemma can be found in Gibson's work (Gibson, 1966, 1979). It is a tenet of the ecological approach to perception and action that agents actively engage in activities that provide the necessary information to decide what to do next. Perception is not a passive process in which the senses extract information about the world. It is an active process in which the perceiver moves about to unearth regularities, particularly goal relevant regularities. One of our modes of interaction, then, is for the sake of deciding what to do next. This feature of Gibson's theory is not always emphasized. Action not only facilitates pick-up of the ecological layout (Gibson's way of describing perception); it also facilitates picking up the aspects of the ecological layout that determine what to do next. One consequence for interface design is that in many contexts it may be possible to bias the affordances that are visible so that the agent registers only those opportunities to act which probably lie on the path to goal achievement. An example of this can be found in graphics programs discussed later.
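    The graphics-program idea can be sketched as a filter over a tool palette: only tools that can apply to the current selection are presented as afforded, the way such programs grey out inapplicable menu items. The tool names below are invented for illustration.

```python
# Hypothetical sketch of biasing visible affordances: only tools that
# can act on the current selection are offered to the user. Tool names
# and selection kinds are invented examples.

TOOLS = {
    "crop":    "image",
    "recolor": "image",
    "bold":    "text",
    "italic":  "text",
}

def afforded_tools(selection_kind):
    """Return only the tools that lie on a path to acting on the selection."""
    return sorted(tool for tool, needs in TOOLS.items() if needs == selection_kind)

print(afforded_tools("text"))   # → ['bold', 'italic']
```

    The point is not the filtering mechanism itself but what it presents to perception: the user sees only the opportunities to act that are likely to be goal relevant.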

    Biasing what is visible is an important step in designing helpful interactivity. But for problem solving tasks that require protracted reasoning and information processing this is a tall order. The forms of interactivity needed for such problems go well beyond perception-action paradigms. Designers sometimes speak of conversation-like interactivity. But this too is only one paradigm. To solve the problems of coherence and guidance we need to rethink the basic idea of interactivity. It is to that we now turn.

    What is interactivity?

    Webster defines the verb "to interact" as "to act mutually; to perform reciprocal acts." If we consider examples of interactivity in daily life, our clearest examples come from social contexts: a conversation, playing a game of tennis, dancing a waltz, dressing a child, performing as a member in a quartet, reacting to the audience in improv theater. All these highly interactive recreations teach us something about the nature of interaction. Each requires cooperation; the involved parties must coordinate their activity or else the process collapses into chaos; all parties exercise power over each other, influencing what the other will do; and usually there is some degree of (tacit) negotiation over who will do what, when and how. In these examples, interactivity is a complex, dynamic coupling between two or more intelligent parties.

    Conceived of in this way, computer interfaces are rarely interactive, because the programs that drive them are rarely intelligent enough to behave as tacit partners. Despite the fashionable talk of dialogue boxes and having a conversation with your computer, there is little cooperation to be found. As a user, I am obliged to adapt to the computer; it does very little in the way of adapting or accommodating to me. Current software agents embodying simple expert systems may change this situation in the future. But so far, intelligence, particularly social intelligence, is largely absent from interfaces.

    Interaction, however, is not confined to intelligent agents. There is a healthy sense of interaction in which inert bodies may interact. Take the solar system. The moon's gravitational field acts on the Earth and Sun, just as the Earth's and Sun's gravitational fields act on the moon (and all other solar bodies), mutually and reciprocally. There is no cooperation here, no negotiation, and no coordination of effort. Everything is automatic and axiomatic. But the causal coupling is so close that to understand the behavior of any one body, we must understand the influence of all the others in the system.

    Intermediate forms of interaction are also commonplace. For instance, when I bounce on a trampoline I am interacting with it in the sense that my behavior is closely coupled to its behavior and its behavior is closely coupled to mine. The same applies to the highway when I drive over it, to a book when I handle and read it, to my daughter's Lego blocks when I help her to build a miniature fort, and even to the appearances of a room when I walk around it. These environments of action, rich with reactive potential, are not themselves agents capable of forming goals and therefore capable of performing truly reciprocal actions. But they envelop me, I am causally embedded in them, and they both determine what I can do and what happens as a result of my actions. The reciprocity here is not between agents, but between an agent and its environment of action.

    Interactive interfaces

    The sense of interactivity which cognitive engineers have in mind when they say that an interface is interactive falls somewhere between the first (the social sense) and the intermediate sense (the agent and its environment). For instance, in early accounts, interaction was thought to be a sophisticated feedback loop characterizable as a decision cycle. The user starts with a goal, an idea of what he wants to have happen; he then formulates a small plan for achieving this goal, say by twisting knobs, pressing buttons, dragging and dropping icons. He next executes the plan by carrying out the actions he had in mind; and finally he compares what happens with what he thought he wanted to happen. This process is interactive because the environment reacts to the user's action and, if well designed, leads him into repeatedly looping through this decision sequence in a manner that tends to be successful. Agent acts, the environment reacts, the agent registers the result, and acts once again. As the temporal delay between action-reaction-action decreases, the coupling of the human computer system becomes closer and more intense. Interactivity is greater. As the environment is made more responsive to the cognitive needs of the user, it moves more toward the social sense of interaction.

    The decision cycle model

    In Don Norman's account (Norman, 1988), the agent-environment-agent loop was elaborated as a seven stage process:

    1. form a goal, the environmental state that is to be achieved.
    2. translate the goal into an intention to do some specific action that ought to achieve the goal.
    3. translate the intention into a more detailed set of commands, a plan for manipulating the interface.
    4. execute the plan.
    5. perceive the state of the interface.
    6. interpret the perception in light of expectations.
    7. evaluate or compare the results to the intentions and goal.

    Accordingly, interaction was seen to be analogous to...
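    The seven stages can be sketched as a control loop. The toy counter "interface" below and its command names are invented for illustration; Norman describes the stages only in prose.

```python
# Hypothetical sketch of Norman's seven-stage decision cycle as a
# control loop over an invented environment: a counter the user can
# nudge up or down toward a goal value.

class CounterInterface:
    """A toy environment whose whole state is one integer."""
    def __init__(self):
        self.value = 0

    def execute(self, command):              # stage 4: execute the plan
        self.value += 1 if command == "increment" else -1

    def perceive(self):                      # stage 5: perceive the interface state
        return self.value

def decision_cycle(goal, interface, max_cycles=20):
    # Stage 1: the goal is the environmental state to be achieved.
    for _ in range(max_cycles):
        state = interface.perceive()         # stage 5
        if state == goal:                    # stages 6-7: interpret and evaluate
            return True
        # Stage 2: translate the goal into an intention.
        intention = "raise" if state < goal else "lower"
        # Stage 3: translate the intention into an interface command.
        command = "increment" if intention == "raise" else "decrement"
        interface.execute(command)           # stage 4
    return interface.perceive() == goal

print(decision_cycle(3, CounterInterface()))  # → True
```

    Tightening this loop, shortening the delay between executing, perceiving, and evaluating, is what the decision cycle model takes interactivity to consist in.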
