ANALYZING INTERACTIVE GRAPHICS SYSTEMS

General Exam Paper by Michael Sannella

TECHNICAL REPORT No. 182
September 1989

Department of Statistics, GN-22
University of Washington
Seattle, Washington 98195 USA



Analyzing Interactive Graphics Systems

General Exam Paper by

Michael Sannella

Abstract: The use of high-quality graphics by interactive applications is discussed. A number of graphics systems designed to support interactive graphics applications are described, including three window systems. These systems are analyzed and compared with respect to issues such as redisplay and event handling.

1. Introduction

User interfaces for computer applications have changed radically in recent times. Not too long ago, most user interfaces were based on some sort of command language. This style of user interaction is simple to implement, but is not appropriate for non-experienced and infrequent users, who cannot be expected to memorize the details of a command language. A different style developed to handle these problems is the menu-based system, which allows the user to choose from a list of operations, handling complex operations by presenting a series of menus. This works well in situations with a small, fixed set of choices.

Another style developed more recently, known as "Direct Manipulation,"[Shneiderman 87][Hutchins et al 86] allows the user to specify operations by manipulating pictures on a graphics display. In theory, these systems should be easier for inexperienced users to learn, and allow more freedom of action than menu systems. However, the direct manipulation style has its limitations. For example, direct manipulation operations which are performed quickly and easily with a few objects become tedious with hundreds of objects. Also, not all operations conform to the metaphor of manipulating physical objects.

NeWS is a trademark of Sun Microsystems, Inc. NeXT, Application Kit, and Interface Builder are trademarks of NeXT, Inc. X Window System is a trademark of the Massachusetts Institute of Technology.

By no means has the development of new types of user interfaces stopped. As computers are applied to new tasks, by a larger variety of users, it will be necessary to invent new styles of interaction. Existing interactive systems are built around certain useful concepts which will be maintained and refined in future systems. This paper will extract some of these concepts from the design of current systems, specifically those interactive systems based on high-quality graphics displays, and consider how they may be refined.

One area that has been the scene of intense development is the area of window systems. Window systems have proved themselves indispensable in utilizing high-quality graphics displays. This paper will examine a number of window systems, in order to analyze the particular concepts that they are based on. One could complain that window systems are a rather specialized type of interactive system. However, this paper takes the position that the same issues that occur in the development of window systems also occur within any interactive graphics application.

This paper is organized as follows: The following section will introduce a number of example systems that will be referenced throughout the paper. The next few sections will discuss details of the input and output facilities of interactive graphics systems, comparing the implementation of certain interesting concepts within the example systems. Finally, the conclusion will speculate on approaches for future research in interactive graphics systems.

2. Example Interactive Graphics Systems

This section will describe a number of interactive graphics systems. One difference between these systems is the "scope" of the system. Some systems attempt to support the entire application, from the user interface through the processing of the application data. Other systems just provide a high-level interface to the I/O devices, and leave the details of the application to be handled by another program. Over time, as the design of user interfaces is better understood, one would expect that more and more of the structure of interactive applications will be supported by interactive graphics tools, and less by application-specific code.

2.1 PPS and PSBase: Presentation Based User Interfaces

The paper [Ciccarelli 84] develops a model for analyzing user interfaces based on the concept of "presentations." A presentation is defined as a visible graphic form which conveys information about another object. For example, the number 23 in computer memory could be represented by a presentation displaying the two-character string "23". By considering a presentation to be an object in its own right, explicitly connected to the object it is representing, this creates the possibility of having the presentation change as the represented object is changed, or of having multiple different presentations of the same object (such as another presentation of the number 23, which displays as the string "XXIII"), or of interactively selecting a presentation and changing it. This contrasts with systems in which there is only a one-way semantic connection between an image on a screen and the information it represents. Ciccarelli uses this concept to develop the Primitive Presentation System (PPS) model, which is used to describe the behavior of a wide variety of complex user interfaces with different interactive styles:

[Figure: the application data base and the presentation data base, linked by the presenter (with presenter control signals), the presentation editor (receiving user editing commands), and the recognizer (with recognizer control signals).]

Figure 1. Diagram of Primitive Presentation System (PPS) model.

The application data base contains the domain information that the user is examining or editing. The presenter maps some subset of this information into the presentation data base, which contains the set of presentations that are displayed on the graphics display screen. The presentation editor allows the user to graphically manipulate the forms on the display screen. The recognizer observes the editing operations, and updates the application data base "appropriately" (which may cause the presenter to update the presentation data base). Both the presenter and the recognizer may have a number of "primitive control signals" that control aspects of their operation. For example, a presenter that displays part of a document within a scrollable window will access primitive control signals to specify what part of the document is to be displayed.
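The data flow just described can be made concrete with a small sketch. The following Python fragment is illustrative only (the data structures and function names are invented, not from Ciccarelli's paper): a presenter maps a subset of the application data base into the presentation data base, and a recognizer turns an observed user edit into an application update, which re-drives the presenter.

```python
# Sketch of the PPS data flow: presenter maps application data into the
# presentation data base; recognizer maps user edits back into the
# application data base, which triggers re-presentation.

app_db = {"doc": "hello world"}   # application data base
pres_db = {}                      # presentation data base

def presenter(start, length):
    # The (start, length) arguments play the role of "primitive control
    # signals": they select which part of the document is shown.
    pres_db["window"] = app_db["doc"][start:start + length]

def recognizer(edit):
    # An editing operation observed on the display is translated into an
    # update of the application data base, then the view is refreshed.
    app_db["doc"] = edit(app_db["doc"])
    presenter(0, 5)

presenter(0, 5)
assert pres_db["window"] == "hello"
recognizer(lambda doc: doc.upper())   # the user "edits" the text
assert pres_db["window"] == "HELLO"
```

Note that `app_db` itself is a plain dictionary with no knowledge of the display, matching the model's claim that the application data base need only answer simple data base commands.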

Note that the application data base and the presentation data base are accessed through primitive data base commands. All of the interactive editing and display is handled by the other components of the model. As a result, there is no need for the application data base to "know" that it is a part of an interactive graphics system. All it has to do is implement simple data base query commands.

The model described above suggests a division of the parts of an interactive application. For example, one could implement a display-based text editor using a single data structure containing the text of a document as well as formatting information, and information on where the text is displayed on the screen. By conceptually dividing this data structure into an application data base (containing the text and formatting information actually stored in the document), and a presentation data base (containing the information on how the text is currently displayed on the screen), one can more easily understand the operation of this application.

The simple PPS model provides a reasonable model of simple direct manipulation graphics editors. However, it is not sophisticated enough to represent many complex user interface styles, because it is based on making immediate changes to the application data base (implemented by the presentation editor and recognizer), that are immediately displayed (by the presenter). This does not model interaction styles where changes do not take place immediately. Another problem is that the concept of "primitive control signals" has been left undefined. In a real system, there needs to be some sort of user interface to the presenter controls.

The strategy for solving both problems is to hook together multiple PPS's. For example, suppose that one wanted to control the scrolling of text in a document window by using a scroll bar. One could model this by using one PPS to handle the display and modification of the scroll bar, and another PPS to handle the display of the text. The application data base of the scroll bar PPS, which specifies the section of the text to be displayed, could be used to generate a primitive control signal for the text display presenter.
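The scroll-bar example can be sketched as two hooked-together PPS's (an illustrative Python sketch; the names and the two-line "window" are invented): the scroll bar PPS's application data base holds the scroll position, and that value serves as the control signal for the text-display presenter.

```python
# Sketch of chaining two PPS's: the scroll bar PPS's application data
# base supplies a primitive control signal for the text presenter.

document = ["line0", "line1", "line2", "line3", "line4"]

scrollbar_app_db = {"first_visible": 0}   # app data base of the scroll bar PPS

def text_presenter(height=2):
    # The control signal (which lines to show) is read from the scroll
    # bar PPS's application data base, not stored in the text PPS.
    first = scrollbar_app_db["first_visible"]
    return document[first:first + height]

def drag_scrollbar(new_position):
    # Editing the scroll bar updates its own application data base,
    # which indirectly re-drives the text display.
    scrollbar_app_db["first_visible"] = new_position
    return text_presenter()

assert text_presenter() == ["line0", "line1"]
assert drag_scrollbar(3) == ["line3", "line4"]
```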

The PPS model provides some useful concepts, but it also has some problems. First, the PPS model does not represent the concept of flow of control between multiple PPS's. When there are multiple presentation editors within a system, there is no way to explicitly represent the idea that particular commands/operations are interpreted by one presentation editor versus another. Within a single PPS, it is not clear when different parts of the model (such as the presenters and recognizers) are activated. Finally, the behavior of the presentation editor is not very clearly defined. It seems as if a lot of the trickiest parts of interactive editing have been bundled up in the presentation editor, and essentially ignored.

The Ciccarelli paper also describes a system called the Presentation System Base (or PSBase), which contains tools and mechanisms that can be used to construct user interfaces based on the PPS model. PSBase includes a data base mechanism based on a knowledge representation network, that allows the structure of complicated objects to be represented. Within a PPS-based application, the application data base and presentation data base are represented by the same data base mechanism. This makes it easy to establish the connections between presentations and the objects they represent, and to write code that operates on either data base when one is implementing a PPS. On the other hand, the requirement for a single large data base mechanism limits the use of PSBase as a base for implementing real interactive graphics systems.

Overall, the PPS model seems more useful as a tool for understanding complex interfaces, rather than being used as a structure for actually implementing them (via PSBase). Precisely because the PPS model is a high-level model of a user interface, it does not address a lot of the issues (like flow of control) that need to be addressed when building real systems.

2.2 GROW: A Graphical Object Workbench

Developing the interface can consume a major part of the effort of building an interactive application. The primary goal behind the GROW system[Barth 86] was to develop a system that would reduce this effort by allowing the re-use of interface components from existing applications when developing new interactive graphics interfaces.

One method that GROW uses to facilitate reuse of interface components is to specify a strict separation between the interface and the application. The interface is specified by a collection of "graphic objects" that contain information about how and where they appear on the display screen. The application has no direct access to these graphic objects. In order to create, move, or modify objects on the display, the application has to send "messages" through the GROW system. The links between application data structures and graphic objects are saved in an association table maintained by GROW. Multiple graphic objects can be associated with a single application "key," which supports graphics displays containing multiple objects displaying the state of the same application data structure.
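The association-table scheme might look like the following sketch (illustrative Python; the message names and key strings are invented, and GROW itself is a Lisp system): the application holds only a key, and every display update is routed through the table, which can fan out to several graphic objects showing the same data.

```python
# Sketch of GROW's association table: application "keys" map to one or
# more graphic objects; the application updates the display only by
# sending messages, never by touching graphic objects directly.

association = {}                      # key -> list of graphic objects

class GraphicObject:
    def __init__(self):
        self.visible_text = None
    def receive(self, message, payload):
        if message == "SetText":
            self.visible_text = payload

def associate(key, gobj):
    association.setdefault(key, []).append(gobj)

def send(key, message, payload):
    # The narrow conduit: every update crosses this one interface, so
    # the application could equally live in another language or machine.
    for gobj in association[key]:
        gobj.receive(message, payload)

label = GraphicObject()
mirror = GraphicObject()              # a second view of the same key
associate("temperature", label)
associate("temperature", mirror)
send("temperature", "SetText", "72F")
assert label.visible_text == mirror.visible_text == "72F"
```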

Note that the above scheme requires that the application be aware of the existence of the graphical interface and send appropriate messages to update the display. This is unfortunate, since this requires changing the application to add the interface (unlike the PPS model, where the application data base doesn't need to "know" that it has an interface). On the other hand, maintaining such a narrow conduit between interface and application allows greater freedom in implementing the application; it doesn't have to be written in the same language as GROW (Lisp), or even be executing on the same machine. This narrow communication channel also allows a programmatic interface to the application to be implemented using the same messages as the interactive interface, thereby supporting animation as "program-driven graphical editing." This idea has been used to implement a number of GROW-based applications.

Another GROW feature that encourages reuse of interface code is support for reusing the component objects when creating new composite graphic objects. For example, suppose one implemented a single monolithic graphic object that displayed as a string with a box around it. This implementation could be reused whenever one wanted to display other boxed strings, but it wouldn't help when one wanted to display a boxed picture. On the other hand, if the boxed-string object was implemented as a composition of a box object and a string object, these component objects could be reused in a variety of other graphic objects. GROW also supports using composite objects as components of other composite objects, to any number of levels. Messages such as Move or Destroy can be sent to composite objects, in which case they are sent to all of the component objects, or they can be sent to individual component objects (to affect these objects alone).

Given that graphic objects can be implemented as a set of component objects, it is necessary to specify the spatial layout of these objects. This could be specified through a single layout program associated with the composite object, but this code would not be reusable. GROW's solution is to support the specification of graphic layout in terms of one-way dependencies between graphic object attributes, the named "slots" used to store information about the object. For example, a box graphic object may have attributes storing the position of the lower left-hand corner of the box on the screen, as well as the width and height of the box. Considering the boxed-string composite object mentioned above, one could specify the layout between the box and string components by specifying that the box position is a fixed distance below and to the left of the lower left-hand corner of the string, and the height and width of the box are a fixed distance longer than the height and width of the string. If the string width was changed (through a message sent from the application), GROW would note the dependency between the string attributes and the box attributes, and re-format the box (erasing the old version, and redrawing the new). A single graphic object attribute can be dependent on multiple attributes, which may be either within the same object (the width attribute of a string depends on both the string text and the string font) or within other objects. One restriction on the dependency system is that the dependency graph must be acyclic, but the builders of GROW claim that they have not found this to be a significant restriction.
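The boxed-string layout dependencies can be sketched as follows (illustrative Python; the attribute dictionaries and the MARGIN constant are invented stand-ins for GROW's slots): the box's attributes are derived, one-way, from the string's attributes, and changing the string re-triggers the derivation.

```python
# Sketch of one-way layout dependencies: the box's position and size
# are recomputed from the string's attributes whenever they change.

MARGIN = 2    # fixed distance between string and surrounding box

string_attrs = {"x": 10, "y": 10, "width": 40, "height": 8}
box_attrs = {}

def recompute_box():
    # One-way dependency: box attributes derive from string attributes,
    # never the reverse (the dependency graph stays acyclic).
    box_attrs["x"] = string_attrs["x"] - MARGIN
    box_attrs["y"] = string_attrs["y"] - MARGIN
    box_attrs["width"] = string_attrs["width"] + 2 * MARGIN
    box_attrs["height"] = string_attrs["height"] + 2 * MARGIN

def set_string(attr, value):
    # Analogous to a message from the application changing a slot: the
    # dependency system reacts by re-formatting the dependent object.
    string_attrs[attr] = value
    recompute_box()

set_string("width", 60)       # e.g. the application changed the text
assert box_attrs["width"] == 64
assert box_attrs["x"] == 8
```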

Associated with each dependency is a "compute method" that computes the value of one graphic object attribute from the values of the attributes it is dependent on.

Underlying the whole GROW system is a general object-oriented programming language known as Strobe.[Smith 83] Graphic objects are defined using a taxonomic hierarchy of classes, supporting multiple inheritance of methods and slots. One interesting feature that Strobe supports is that object slots can have any number of associated "facets," which can be used to store additional information about a slot; GROW uses facets to store dependency information. All of these elements (objects, slots, facets, methods) can be specified in an object's class, inherited from other classes, and specialized in subclasses. For example, it would be possible to specify a new subclass of the boxed-string graphic object where a different compute method was used to position the box object relative to the string object (say, to display more white space around the string) without modifying the composite structure, or the dependency links. This makes it easier to reuse graphic objects by making a minimal set of changes in a subclass.

The features described above are predominantly concerned with the displaying of graphics. GROW's input facilities are somewhat less sophisticated. For most input, GROW uses the facilities of the Impulse-86 system,[Smith et al 86] which supports using a mouse to select objects on the display (handling highlighting and maintaining a stack of selections), selecting menu items (calling functions that can use the selection stack to retrieve arguments), and handling type-in. In order to handle other styles of user interaction, GROW allows the graphic object methods to "take over" the user interface, and call lower-level I/O functions. This expands the range of interactive styles that can be implemented, but this interaction code would not be very reusable.

The GROW system provides a set of tools for constructing a graphic interface and linking it to a separate application. The idea of explicitly representing graphical dependency information (as compared to encoding this information inside of function code) shows some promise. Some of these ideas have been extended and refined in the Coral system.[Szekely & Myers 88] Among the refinements in Coral are the use of "active value" variables for communicating between the application and the interface, so that the two sides do not need to know as much about the messages used to activate the other. Coral also supports declaratively specifying the structure of a composite graphic object along with the graphical dependencies between the different object slots. Coral also has a more sophisticated system for handling user input than GROW, using objects called "interactors,"[Myers 89] which provide a way of specifying and reusing complex interactive sequences.

Another possible extension of the ideas within GROW would be to extend the dependency system to provide more support within the application. One system that has taken this approach is ThingLab,[Borning & Duisberg 86] which uses bidirectional constraints to maintain graphical dependencies on the display, to maintain the connections between the application and the interface, and to handle communication within the application. Given access to the internals of the application, this approach may provide a method for implementing interactive applications. However, when one wants to add an interface to an existing application, more loosely coupled systems such as GROW may prove more practical.


2.3 The X Window System

The X window system[Scheifler & Gettys 86][Nye 88] supports a large set of graphics operations (lines, curves, text, images) for drawing within overlapping rectangular windows on a high-resolution raster display, as well as handling keyboard and mouse input from the user. Application programs can use the X window system as a base for building interactive systems, while avoiding the exact details of graphics algorithms and input device drivers. Stated in these broad terms, the X window system is similar to a number of window systems that have been developed in recent years.[Hopgood et al 86] However, the X window system was designed to fulfill a number of goals that make it significantly different from many earlier systems.

Goal: High-Performance. One always wants the maximum possible performance with any computer system. However, when developing a general window system for use in interactive applications, this issue becomes even more important. Graphics application developers always face the temptation to gain performance by directly accessing the graphics hardware, even though this sacrifices portability. In order to make a general window system attractive, it must have performance comparable to what could be obtained through special-purpose coding. The X window system addresses this issue throughout its design. For example, the rectangular windows supported by the window system are designed to be very efficient to create and destroy. This has an effect on the style of interfaces created using the X window system. Since windows are "cheap," the application can use a lot of them, for example using separate windows to display the individual items on a menu. This capability can be used to reduce the complexity of the application program.

Goal: Device-Independence. The X window system was designed so that it could be implemented on a wide range of raster display devices, from very simple monochrome displays to high-performance color workstations. However, to encourage portability of application programs, one of the goals of the X window system was to shield the application program from the details of the display device, so that the same application program could be run unchanged on different systems. Using high-level graphics primitives implemented by the X window system, the application can precisely specify the graphics to be displayed, without concern for the details of the display hardware.

(This paper describes version 11 of the X window system.)

Goal: Network Access to the Display. The X window system was designed to be used in an environment containing a collection of different computers connected by a local area network. Users working at a particular terminal may want to use computers elsewhere on the network, so one of the goals for the X window system was to allow remote computers to access the

X window system display over the network. To support this goal, the X window system was designed around a "Client-Server" model. The X window system software implements a graphics display "server," which runs on the computer hardware connected to the graphics display. An application program (or "client"), which can be running on the same computer as the server or on another computer on the network, communicates with an X window server through a simple duplex byte stream. This stream is used to implement a block protocol for sending messages between the client and the server. The X window system permits multiple clients to access the display at the same time, with each client possibly accessing multiple windows. Each of the clients can draw on its windows as if it were the only client using the display. The server communicates with all of the different clients, responding to graphics requests by drawing on the client's windows, and sending back user input events associated with each window to the appropriate client.
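The idea of a block protocol layered on a plain byte stream can be sketched as follows. This is an illustrative framing scheme only, not the actual X11 wire encoding (X requests have a different header layout); the opcodes and payloads here are invented.

```python
# Sketch of a block protocol over a duplex byte stream: each message is
# framed as a small fixed header (opcode + payload length) followed by
# the payload, so client and server can exchange requests and events.
import struct

def pack_request(opcode, payload):
    # 1-byte opcode, 2-byte big-endian payload length, then the payload.
    return struct.pack("!BH", opcode, len(payload)) + payload

def unpack_requests(stream):
    # The receiver walks the stream, peeling off one framed block at a time.
    msgs, i = [], 0
    while i < len(stream):
        opcode, length = struct.unpack_from("!BH", stream, i)
        i += 3
        msgs.append((opcode, stream[i:i + length]))
        i += length
    return msgs

DRAW_LINE, MAP_WINDOW = 1, 2          # invented opcodes
wire = pack_request(DRAW_LINE, b"\x00\x05\x00\x09") + pack_request(MAP_WINDOW, b"")
assert unpack_requests(wire) == [(DRAW_LINE, b"\x00\x05\x00\x09"),
                                 (MAP_WINDOW, b"")]
```

Because every message is self-delimiting, requests from one client never need to wait for replies, which is part of how the server can interleave traffic from multiple clients.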

Goal: Support Multiple Window Managers. Any window system needs to provide a "window manager" user interface allowing the user to interactively open, close, resize, and move windows. In addition, the visual consistency of windows on the display would be improved if the window system maintained a particular window style (with particular window titles, borders, scroll bars, menu formats). The question of the "best" window manager user interface has been a subject of controversy, and it doesn't appear that there will be any agreement soon, so one of the goals of the X window system was that it must support multiple window-management schemes, and not impose a particular style. This was accomplished by implementing a window manager as just another client, communicating with the X window system server using the same protocol as other clients. The X window system includes facilities for communicating application-specific information to the window manager that it needs to handle the display (such as window titles, icon images, etc). Multiple window manager clients have been implemented, presenting different management styles.

The X window system described above defines a basic protocol for communicating between a client and a window server. This is sufficient for constructing complex interactive applications, but it is not particularly easy to use when writing programs. To help with this problem, a number of "toolkits" have been constructed to allow building an application interface from a set of standard parts. The X Toolkit[Swick & Ackerman 88] (a separate package from the X window system) provides an object-oriented system for using a hierarchical collection of "widgets" (complex interactive objects such as menus, buttons, and scroll bars). Other toolkits have also been constructed to facilitate the creation of user interfaces for the X window system.

2.4 The NeWS Window System

The NeWS window system[Gosling 86][Sun 87] was designed to achieve many of the same goals as the X window system. NeWS efficiently supports drawing on a large number of overlapping, hierarchical windows. NeWS was designed to be ported to different hardware displays, providing a high-level device-independent graphics package. NeWS was based on the client-server model, so application programs can be running on the same machine as the NeWS server, or on other machines connected by a network. NeWS allows the use of different window management styles. However, the design developed to achieve these goals is radically different from the X window system.

Whereas the messages sent between an X window system server and a client are fixed-format commands and events, a NeWS client communicates with a NeWS server using an extended version of the PostScript language.[Adobe 85] PostScript is a stack-based graphics language that was originally developed to send graphical images and drawings to high-quality laser printers. To print an image on such a laser printer, a PostScript program is sent to the printer. When this program is interpreted by a computer within the printer, it calls a series of graphics operations to draw the image. PostScript is a general-purpose programming language that includes facilities for defining named variables and functions, executing loops, manipulating complex data structures, etc.

Whereas the original PostScript language was designed for creating static printed images, the NeWS window system has extended PostScript to allow writing programs that control the dynamic behavior of interactive graphics on the display screen. The NeWS server supports an arbitrary number of lightweight processes, each interpreting a PostScript program. The extended PostScript language includes facilities for manipulating and drawing on display windows, receiving events from input devices, dynamically creating new processes, and communicating between processes. Extended PostScript programs can be written to handle many interactive graphics operations.

A NeWS client communicates with the server by opening a byte stream connection (over a network, or on the same machine), which causes the server to create a new PostScript process to listen to the connection. Then the client sends a PostScript program to the server to be interpreted by the process, displaying whatever graphics the client wants to be displayed.

This scheme allows great flexibility in choosing how to divide an application between the client program and PostScript programs on the server. Small, simple applications could be implemented by simply writing the whole application as a PostScript program. Applications that want complete control over the

behavior of the display can send a minimal PostScript program that executes graphic operations sent from the application, and sends user input events back to the application. Many applications are somewhere in the middle, "down-loading" a PostScript program to handle the interactive part of the application, and having this program communicate with a client process to handle the non-interactive processing. PostScript programs running on the NeWS server can be used to implement highly-interactive user interfaces, since much interaction can be handled within the server without any server-client communication. Mouse-tracking and menu selection are often handled by PostScript programs.

Even if the application does not create autonomous processes within the NeWS server, PostScript can be used to reduce the network traffic between the client and the server. Common graphics operations can be defined as PostScript functions in the server, and then repeatedly called by the client. Repetitive graphics (such as grids) can be transmitted in a very compact form, by sending a PostScript program containing a loop.
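The bandwidth argument for the grid example can be made concrete with a small sketch (illustrative Python; the command syntax is PostScript-flavored pseudocode carried in strings, not code the NeWS server would literally accept): sending one drawing command per grid line grows linearly with the grid, while a looped program stays essentially constant in size.

```python
# Sketch of the bandwidth argument: a grid sent as individual drawing
# commands grows with the number of lines, while a program containing a
# loop stays roughly constant in size regardless of grid density.

def grid_as_commands(n, spacing=10):
    # One textual command per grid line, as a fixed-format protocol
    # (or a naive client) would transmit it.
    cmds = []
    for i in range(n):
        y = i * spacing
        cmds.append("moveto 0 %d lineto 100 %d stroke" % (y, y))
    return "\n".join(cmds)

def grid_as_program(n, spacing=10):
    # The same grid as a single looped program, NeWS-style: the loop
    # body is transmitted once and executed n times in the server.
    return ("0 %d %d { dup 0 exch moveto 100 exch lineto stroke } for"
            % (spacing, (n - 1) * spacing))

assert grid_as_commands(100).count("stroke") == 100
assert grid_as_program(100).count("stroke") == 1
assert len(grid_as_program(100)) < len(grid_as_commands(100))
```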

Admittedly, there are some arguments against implementing a NeWS application using complex autonomous PostScript processes. First, PostScript is not a very good language for writing complex programs; it was initially designed to be generated by printer-driver programs, rather than coded by hand. However, people have developed methods for structuring PostScript programs to improve their readability and maintainability.[Adobe 88] In addition, "compilers" have been developed to translate a more standard language (like C) to PostScript. A second problem is that, once one starts using multiple processes in an application (a client process, plus one or more server PostScript processes), there is the potential for introducing bugs such as deadlock and race conditions, which can appear in any distributed concurrent program. A final argument against downloading the interactive parts of an application is that most interactive applications require the interaction to be controlled by complex application data structures. Unless the entire application can be down-loaded, interactive user interfaces will still require a lot of communication between the client and the server.[Scheifler & Gettys 86] One can come up with examples where all of these objections are borne out, as well as examples where the ability to down-load PostScript programs is crucial. Only more experience with this window system will show whether these objections are valid.

One very important feature of PostScript used by the NeWS system is that PostScript is extensible. New functions can be defined using PostScript code, and called exactly like the built-in operations, so a PostScript process can extend the window system itself. The basic NeWS server, written in a conventional language (probably C), implements only the PostScript interpreter, a machine-dependent graphics package, and the basic event handling system. All of the functionality

associated with a "window manager," such as support for interactively moving and reshaping windows, window styles, or menus, is implemented by PostScript programs that are loaded when the NeWS server is first "launched." These programs are stored in normal text files, which the NeWS user can customize or replace. The window and menu system supplied with the NeWS system is defined using a simple object-oriented system (also defined in a PostScript file), which makes it easy to make small changes to the system-supplied definitions.

Recently, a window system server named X11/NeWS has been created that is capable of simultaneously managing windows for both X and NeWS clients.[Schaufler 88] This was implemented by extending the NeWS server event system and graphics model, and then writing PostScript programs (running in several NeWS processes) to interpret the X window system protocol, and emulate that window system. The combined system does not provide a perfect merger of the two systems (in particular, an X window manager may not be able to manage unmodified NeWS client windows); however, it is an impressive display of the potential of an extensible system.

2.5 The NeXT Window System

The NeXT window system[NeXT 89] takes a somewhat different view towards the user interface than the other systems described above. Whereas the other systems were explicitly designed to support a wide range of different user interfaces, the NeXT system was designed to support a single user interface style. This style is described in great detail in the NeXT reference manual, it is supported by the interface creation tools provided by the system, and it is used by the interactive system applications. The goal of providing a consistent interface style across all applications is a good one, though one can wonder how suitable this style will be as new types of applications are developed.

The existence of a system-wide "standard user interface style" raises a question with respect to discussing the capabilities of the system. What does it mean to say that the NeXT window system "supports" a particular feature? The system architecture does not prevent constructing programs using different interaction styles that are not compatible with the standard user interface. However, all of the development tools are aimed towards supporting the standard, and the system will probably be "tuned" to support the standard efficiently, perhaps at the expense of other operations. This paper will primarily be considering the interaction styles supported by the system-standard user interface, though it will also note capabilities of the underlying system architecture.

(This paper is based on the documentation included with NeXT system release 0.9. There are notes in this documentation warning that the system may change before it is released, so this paper may contain incorrect information about the released system.)

Externally, the NeXT window system looks similar to the other window systems described above, in that it supports multiple, overlapping rectangular windows, application programs can draw graphics within multiple windows, and the windows can be interactively moved and resized by the user. The window manager enforces the NeXT model of multiple applications; only one application is "active" at any one time, window titles are highlighted to visually emphasize the main windows in the active application, and only the menu for the active application is displayed. This is an attempt to deal with the clutter that can occur when windows from multiple applications are displayed on a single screen. Once one masters the particular conventions of the window manager, one can use the NeXT system much like any other window system.

Looking at the implementation of the window system, the graphics system bears some similarity to NeWS: there is a system process, called the window server, which interprets multiple threads of PostScript code. A version of PostScript extended to handle drawing on graphics displays, called Display PostScript, is further extended with additional operations for handling the NeXT window system. Applications can directly establish connections to the window server (running on the same computer, or on other computers on the network), and send PostScript code to the server to specify drawing operations. It appears that the "down-loading" of autonomous PostScript programs to handle interactive graphics is not used as much on the NeXT system as it is with NeWS, although it is possible. In the current NeXT environment, applications usually run on the same machine as the window server, which might make the additional complications of communicating with a "down-loaded" PostScript program outweigh any possible performance gains.


Most interactive application programs are not built from a simple communication link to the window server. The NeXT system includes a toolkit known as the Application Kit, that allows the application to present a user interface consistent with the NeXT system user interface, while shielding the application from many of the details of communicating with the window server. Much of the NeXT system and most applications are implemented in an object-oriented version of the C language, called Objective-C, offering a hierarchical taxonomy of classes with inheritance and specialization of methods. The Application Kit consists of a tree of object classes implementing commonly-used user interface components such as windows, menus, and various types of buttons and sliders. Typically, an application builds a user interface by creating subclasses of the Application Kit classes, and specializing methods within subclasses to implement application-specific behavior.


The NeXT system also includes an application, called the Interface Builder, which allows the user to interactively specify the position and size of buttons, menus, and other interactive components within a set of windows. After such an arrangement has been specified, the Interface Builder can produce Objective-C files that can be included in an application program to produce the specified initial layout. This allows the application programmer to concentrate on the behavior of these objects within the application.

It is only to be expected that, as interactive graphics systems mature, there will be attempts to create simpler environments optimized for implementing the "best" user interface, as well as it can be envisioned at the time. This allows the quick design and implementation of complex user interfaces, at the expense of restricting the range of possibilities. The NeXT system is relatively new; only time will tell whether this approach will be successful.

3. Output: The Graphics Imaging Process

The first area that we will examine to find concepts that are used in interactive graphics systems is the "output" process, whereby information in an application program eventually winds up as an image on a display screen. This is roughly equivalent to the "presenter" module in the PPS model, along with the routines associated with the presentations that display the contents of the presentation data base on the screen.

Both PSBase and GROW implement drawing by calling Lisp functions supporting line-drawing and raster-oriented graphics. These systems were developed to investigate the high-level issues of controlling images on a screen, rather than to support the most advanced graphics. However, there is no reason why these experimental systems could not be extended to implement more complex graphics operations.

The three window systems described above provide a much richer graphics environment, in order to support the development of complex graphics-oriented applications. The X window system defines a large collection of graphics operations (lines, curves, filled polygons, images, etc.) that are specified in terms of which bits in which pixels on the screen are affected by the operation, much like earlier systems based on the BitBlt (or RasterOp) operation. A "graphics context" structure maintained in the server stores all of the parameters that can affect the drawing operations, including the logical function used to combine a "source" drawing with the destination pixels, a clip region used to limit the set of affected pixels, foreground and background colors, and parameters controlling how lines are drawn and joined. Using methods that have been developed during the history of raster graphics, an application can use these parameters to access individual pixels on the display; an application can show any image that can be produced on the display hardware. For example, complex graphics applications can implement techniques such as anti-aliasing that are not directly supported by the X window system.
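The role of the logical function in this style of drawing can be illustrated with a small sketch. This is a modern Python model of the BitBlt/RasterOp idea, not actual X client code, and all names here are invented for illustration:

```python
# A sketch of BitBlt/RasterOp-style drawing: each operation combines
# "source" bits with existing "destination" bits through a logical
# function selected by a graphics-context-like parameter.
# Names are illustrative, not actual X protocol identifiers.

GX_COPY = lambda src, dst: src          # replace destination
GX_OR   = lambda src, dst: src | dst    # paint over destination
GX_XOR  = lambda src, dst: src ^ dst    # invertible: draw twice to erase
GX_AND  = lambda src, dst: src & dst    # mask destination

def bitblt(dest, src, x, y, function=GX_COPY):
    """Combine the src bitmap into dest at offset (x, y)."""
    for row, line in enumerate(src):
        for col, bit in enumerate(line):
            dest[y + row][x + col] = function(bit, dest[y + row][x + col]) & 1
    return dest

screen = [[0] * 8 for _ in range(4)]    # a tiny 1-bit "frame buffer"
glyph = [[1, 1], [1, 0]]
bitblt(screen, glyph, 2, 1, GX_OR)
```

The XOR function is the classic trick for cursors and rubber-band lines: drawing the same source twice restores the original destination bits.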

The NeWS and NeXT window systems are both based on the PostScript imaging model. This model does not describe graphics operations in terms of pixels, but rather by specifying mathematical shapes on an abstract plane. Points, lines, and curves are described using floating-point coordinates on a two-dimensional Cartesian plane, and it is the task of the graphics software to map these images into changes in pixel bits on a raster display. The application can scale, translate, and rotate the coordinate axes in order to set up a more convenient coordinate system for specifying a particular graphics image. PostScript also allows creating "paths," arbitrary regions outlined by drawing operations (which may contain holes or multiple closed objects), that can be used for clipping drawing operations. Similar to the X window system, PostScript uses "graphics state" structures to specify many graphic operation parameters.

The basic PostScript model does not explicitly represent the idea of pixels, so many of the pixel-oriented effects are not supported. In the NeXT graphics system, PostScript has been extended to include a "composite" operation that performs many of the pixel-copying and modifying effects that BitBlt was used for. However, rather than specifying a particular boolean function that is blindly applied to numeric pixel values, the combination of the "source" and "destination" bits is specified more abstractly, in terms of which image is to appear "in front of" or "behind" the other. There are a number of possible variations related to the use of "transparency" (another PostScript extension implemented on the NeXT). This allows BitBlt-like operations to be abstractly specified for color pixels, which has always been difficult (since arbitrary logical operations performed on color pixel values can produce weird colors).
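The "in front of"/"behind" style of combination can be sketched as source-over blending. This is the generic compositing rule rather than NeXT's actual operator set, and the function names are invented:

```python
# A sketch of abstract image combination: instead of a boolean function
# on pixel bits, the source is placed "over" the destination, with a
# transparency (alpha) value controlling how much of the destination
# shows through.  Channels are floats in [0, 1]; names are illustrative.

def composite_over(src, src_alpha, dst):
    """Blend one color channel: src over dst with the given coverage."""
    return src * src_alpha + dst * (1.0 - src_alpha)

def over_rgb(src_rgb, src_alpha, dst_rgb):
    return tuple(composite_over(s, src_alpha, d)
                 for s, d in zip(src_rgb, dst_rgb))
```

Unlike an arbitrary logical operation on color pixel values, this rule always produces a visually sensible mixture of the two colors.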

3.1 Drawing Surfaces

In PSBase and GROW, the surface on which the graphics operations draw is maintained by the Lisp graphics system. These systems did not concern themselves with a more complex surface than a single two-dimensional plane. The window systems provide a more complex model of drawing surfaces, supporting any number of hierarchically-organized overlapping two-dimensional drawing surfaces. In all of the window systems, each drawing surface may have any number of child surfaces, which can overlap each other according to a changeable front-to-back ordering. Child surfaces are positioned relative to their parents, so moving a parent surface on the screen moves its children, and children are clipped to the boundaries of their parents.

Rather than combining the positioning of a control within a window with the specific routine for drawing the details of the control, it is easier (and results in more reusable code) to specify the details of the drawing relative to a child surface, and position this surface separately.

The different window systems have somewhat different implementations of drawing surfaces. The X window system implements a hierarchy of rectangular "window" objects. Each window can have its own border and title, maintained by the window manager. Each window has its own fixed coordinate system, with the origin at the upper left-hand corner of the window, the X coordinate increasing to the right and the Y coordinate increasing downwards. Each X and Y coordinate unit corresponds to exactly one pixel in that direction, making it easy to specify pixel operations. To handle displays whose pixels do not have a 1-to-1 aspect ratio, the X client can query the server to determine the aspect ratio of the display, and scale drawing operations by that factor. If an application wants to use another coordinate system for drawing, it must handle the mapping itself.

The NeWS system implements a hierarchical collection of two-dimensional "canvases." Complex windows with borders and title bars are implemented using multiple canvases. Each canvas is specified by a PostScript path in its parent canvas, so it may have an arbitrary shape. Most applications only use rectangular canvases, which can be implemented as a fast special case, but in situations where an application may need non-rectangular objects it is useful to have this generality supported by the system. Each canvas has its own PostScript coordinate system, which can be modified using PostScript operations, providing great flexibility in choosing how to draw an object within a canvas.

The NeXT system supplies a hierarchical collection of "views," each of which has its own PostScript coordinate system. Windows are constructed using multiple views. Each view is specified as a rectangle in its parent view's coordinate space, but because of PostScript's facility for rotating and scaling coordinate spaces, it is possible to create views whose edges are not horizontal or vertical, and even views that are parallelograms rather than rectangles. An interesting feature allows creating a view whose Y coordinate is "flipped," increasing downwards, without affecting the Y coordinate direction of child views. A view whose Y coordinate increases downward can be useful when displaying multiple lines of English text, since increasing "line numbers" are displayed further down on a display. This flipping effect could be obtained by scaling the PostScript coordinate system of the view, but this would require unflipping the coordinate system whenever a child view is drawn, and child views would also need to know whether they had been flipped. If this were not done, text displayed in child views would appear upside-down.

3.2 Redisplay Mechanisms

Graphics systems that support a collection of overlapping drawing surfaces have proven to be very useful. However, to implement such a system, a number of issues have to be resolved. One such issue is the problem of redisplaying exposed surfaces. If one has a set of overlapping surfaces that can be moved relative to each other, it is possible to expose a hidden piece of a drawing surface by moving the covering surfaces. To maintain the illusion of independent overlapping surfaces, it is necessary to display "what would have been drawn" in the exposed region as if it had never been covered.

Note that the issue of redisplaying images is relevant to both window management systems and more general graphics formatting systems. If a graphics program displays a set of icons, and then wants to move an icon, it has to consider whether erasing one icon will damage the appearance of nearby icons. The Coral system[Szekely & Myers 88] includes routines for managing the redisplay of graphic icons as overlapping ones are moved.

An important question is how the responsibility for the redisplay is divided between the application and the window system. Ideally, one would like the window system to handle the entire redisplay, so that the application can draw on the drawing surfaces without regard for their overlapping layout. However, if the window system were to maintain the complete state of many overlapping color windows, this would require a huge amount of memory, which would severely limit the number of windows that could be created. At the moment, it is generally thought better to pass some of the responsibility for redisplay to the application than to limit the number of windows.

Most current window systems handle redisplay by sending a special input event to the application, requesting redisplay of a particular part of a window. Some applications may be able to redisplay some section of a window more quickly than the whole window, though it may be difficult to decide exactly what needs to be redisplayed. Simple applications can always ignore the specification of the damaged region, and redisplay the entire window contents.

In the X window system, the client applications are totally responsible for redisplaying the contents of their windows. Some servers may be able to save and redisplay the contents of specified windows (the client can query the server to determine whether this feature is supported), but a client cannot assume that it is supported on every server. In the case where a region of a non-saved window is exposed, one or more "exposed" events are sent to the client, each specifying a rectangle that has been exposed. When multiple rectangles are exposed by a single operation, the last event of the group is marked specially, so the application can gather all of the rectangles, and redisplay the window with a single operation. This same exposure mechanism is used to cause the application to display the initial contents of the window, when it is first displayed on the screen.
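The convention of marking the last event in a group can be sketched as follows. X's actual exposure events carry a count of further events still to come; the Python names here are invented:

```python
# A sketch of gathering X-style exposure events: each event carries a
# rectangle and a count of events remaining in the group; the client
# collects rectangles until the count reaches zero, then redraws once.
# Names are illustrative, not actual Xlib structures.

def handle_exposures(events, redisplay):
    """events: iterable of (x, y, width, height, count) tuples."""
    pending = []
    for x, y, w, h, count in events:
        pending.append((x, y, w, h))
        if count == 0:              # last event of the group
            redisplay(list(pending))
            pending.clear()

redrawn = []
handle_exposures([(0, 0, 10, 10, 2), (10, 0, 5, 10, 1), (0, 10, 10, 5, 0)],
                 redrawn.append)
```

A simple application could instead ignore the rectangles entirely and redraw the whole window when the count reaches zero.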

In the NeWS window system, each canvas can be optionally "retained," in which case the server is responsible for saving any covered graphics, and displaying them if the canvas is uncovered. To handle the redisplay of non-retained canvases, each canvas has an associated "damage path," a PostScript path that outlines all of the areas on the canvas that need to be redisplayed. Every time an additional region of a canvas is exposed, the region is added to the damage path, and a "Damaged" event is generated. At some point, possibly after receiving multiple Damaged events, the application can redisplay the canvas image, perhaps using the damage path, and reset the damage path to a null path. Sometimes it is possible to handle redisplay operations entirely within PostScript processes "down-loaded" from a remote application. For example, a simple PostScript program could maintain a canvas displaying some text by saving an array of ascii characters (using less memory than a retained canvas would require), and redisplaying the text display from that array.
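The accumulate-and-reset damage protocol can be sketched with rectangles standing in for general PostScript paths; the class and method names are invented for illustration:

```python
# A sketch of NeWS-style damage handling: each newly exposed region is
# added to a per-canvas "damage path"; after one or more Damaged events
# the application repairs the canvas and resets the path to empty.
# Rectangles stand in for arbitrary paths; names are illustrative.

class Canvas:
    def __init__(self):
        self.damage = []            # accumulated damaged rectangles

    def expose(self, rect):
        self.damage.append(rect)    # server side: extend the damage path

    def repair(self, redraw):
        """Application side: redisplay the damaged area, reset the path."""
        if self.damage:
            redraw(list(self.damage))
            self.damage = []
```

Because damage accumulates, a slow application that receives several Damaged events can still repair everything with a single redisplay pass.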

The NeXT window system offers the same two options for saving the contents of windows in the window server as the NeWS server, either retaining all of the covered regions of a window, or not retaining any covered pixels and sending events to the application to redisplay them when exposed. The window server also offers a third option, "buffered" windows, where all drawing operations occur on an off-screen copy of the window, which is then explicitly "flushed" to the on-screen window display. This allows a simple way to implement animation without displaying intermediate drawing steps. The same effect could be achieved on the NeWS server, by drawing on a non-displayed canvas and copying that image to an on-screen canvas, but the NeXT implementation is much simpler.
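The buffered-window option can be sketched as drawing into an off-screen copy that reaches the visible window only on an explicit flush (illustrative names, not the actual NeXT interface):

```python
# A sketch of a "buffered" window: drawing operations modify an
# off-screen backing copy; intermediate steps never appear on screen
# until the application explicitly flushes.  Names are illustrative.

class BufferedWindow:
    def __init__(self, width, height):
        self.backing = [[0] * width for _ in range(height)]  # off-screen
        self.screen  = [[0] * width for _ in range(height)]  # visible

    def draw_point(self, x, y, value=1):
        self.backing[y][x] = value   # invisible until flushed

    def flush(self):
        """Copy the off-screen buffer to the visible window at once."""
        for y, row in enumerate(self.backing):
            self.screen[y] = list(row)
```

An animation loop would redraw the whole backing buffer for each frame and flush once per frame, so the user never sees a half-drawn frame.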

Unlike the other two window systems, the NeXT window server supports maintaining the contents of overlapping windows, but not of the hierarchical views within a window. Similarly, the window server sends window-exposed events when moving or resizing windows causes parts of windows to be exposed, but is not directly involved with the movement or rearrangement of views. The Application Kit includes methods for redisplaying hierarchical views, using information about the region that needs to be redisplayed in order to reduce the amount of redrawing, much as is done at the window server level. Of course, it would be possible to create a subclass of the View object that uses a backing store to redisplay itself; it is curious that the system doesn't provide this directly. Perhaps retained views were not considered worth the extra memory.

3.3 Handling Device-Dependencies

All three of the window systems we have examined attempt to provide some degree of device-independence. When designing a graphics system, one has to deal with a variety of hardware systems in existence, and one would also like a new system to be able to handle new types of graphics hardware that will be developed in the future. These issues are not quite as important with the NeXT window system, initially designed for a single hardware base, but even the NeXT system designers realized that eventually people will want to connect various fancy color displays to their NeXT computers. Even if one knew that a particular graphics system was only going to be used with one type of graphics hardware, providing a higher-level device-independent graphics system would make it easier to create applications.

On the other hand, one has to trade off device-independence against the speed and efficiency of the final application. Sometimes, it may be necessary to specify machine-specific operations to use special graphics hardware to its full potential. Graphics hardware designers are gradually being influenced by customer demands for more standard hardware interfaces, but there are still some hardware designs that are based on special features. A good example of this is the writable color-map in some color displays. By using a color map, the display can be used to display a wide range of colors, without requiring all of the memory of a full-color frame buffer. However, in order to use such a device to its full capacity, one needs to load the color map with the colors used in a particular application, which means that the application has to be written to know something about the device-dependent characteristics of the color-map.


The X and NeWS window systems take different approaches to the problem of supporting device-independent graphics. The X window system takes the approach of "describing" device-dependencies versus the NeWS approach of "hiding" them. The designers of the X window system attempted to accommodate the use of a range of graphics systems by including protocols that an X client can use to query the window server about details of the graphics display hardware (such as whether the display is monochrome, gray scale, color-map color, or true color). Using this information, the client can use additional protocols to control this hardware to produce particular graphics effects. For example, if an X window server is implemented on a graphics display with a writable color-map, a client can request a contiguous range of pixel values allocated to that client, which the client can use when specifying traditional multi-plane raster operations. If the system color map is full, a client can even create and use its own color map, which can be installed while that client's windows are displayed (even though the colors of other windows may then be incorrect). Even in this case, the X window system


is still supplying a degree of device-independence, by providing an abstract layer on top of the particular mechanisms used to control color maps on different hardware. However, not all clients have to use these device-dependent features (which reduce the portability of application programs). The X window system also supports device-independent entries where a client can request a particular color by specifying a red-green-blue triple, and the X window system will use whatever available color is closest to the specification (either white or black, on a monochrome display). Therefore, the X window system combines general device-independent functions with a detailed set of entries for handling particular device types.
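The closest-color behavior can be sketched as a nearest-neighbor search over the colors a display can actually show. The distance metric and names are invented for illustration; real servers may weight the channels differently:

```python
# A sketch of device-independent color requests: the client specifies a
# red-green-blue triple and receives the nearest color the display can
# show -- white or black on a monochrome screen.  The squared-distance
# metric and names are illustrative.

def nearest_color(requested, available):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(available, key=lambda c: dist2(c, requested))

# The degenerate "palette" of a monochrome display.
MONOCHROME = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
```

The same routine applies unchanged to a color-mapped display: `available` would simply hold the colors currently loaded in the color map.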

The NeWS system takes a different approach, by attempting to hide all details of the graphics display hardware from the application. This approach is supported by the use of the PostScript graphics imaging model, which was designed to be device- and resolution-independent by breaking away from the model of raster operations on pixels. A number of arguments have been advanced for avoiding pixel-based operations.[Sun 87] First, as the technology advances, and high-level graphics operations (such as curve generation) are supported in hardware, applications that depend on low-level pixel operations will not be able to use these new systems effectively. Second, as pixel resolution increases, some pixel operations (such as moving images with BitBlt) may not provide acceptable performance, since many more bits will have to be moved. Finally, the traditional pixel-oriented logical operations do not extend cleanly to color pixel systems.

True to this philosophy, the NeWS system hides the existence of pixels from the application program. The application program can determine a few details about the display hardware, such as whether it is monochrome or color, by calling PostScript routines, but this information cannot be used for special pixel-dependent operations. With color displays, colors are specified using PostScript operations which take red-green-blue triples (or a few other similar representations) and return the nearest color available. The NeWS system supplies an operation to determine whether two colors are visually distinguishable, but no more detailed information about the colors is available.

This approach is more truly device-independent than the X window system's, and will allow application programs to work with a wide range of future technologies. However, it has some limitations, because it concentrates all of the details of graphics rendering within the NeWS server. Because the applications do not have access to the display pixels, it is not possible to write applications to implement graphics techniques such as anti-aliasing, or to efficiently draw types of curves that are not supported by the server. Such operations may be supported by new versions of the NeWS server, but the server may be slow to change. For many uses of advanced graphics technology, the required capabilities do not currently exist in the server, so the NeWS approach does not seem to support some applications. The degree of device-dependence within an application could be reduced by encouraging the use of device-independent operations wherever practical, but still allowing device-dependent operations where it is absolutely necessary.

The NeXT window system is based on the PostScript imaging model, so it encourages the use of resolution-independent operations. However, the model has been extended to include a number of concepts, such as partially-transparent paint and the "composite" operation (comparable to the BitBlt operation) that can be used to abstractly specify graphics effects (such as dissolves, and transparent overlays) that would traditionally have been accomplished through special-purpose pixel manipulations. In addition, the system includes documentation describing exactly how different PostScript operations affect pixel values (for example, defining which pixels are turned on when a line of a particular width is drawn), which advanced graphics applications can use to create certain effects (more or less efficiently). Such extensions provide a compromise between strict device-independence, and the needs of current advanced graphics applications.

4. Input: Handling User Input Devices

The graphics imaging process is only one issue relating to interactive graphics systems. Another important issue is the handling of the input devices with which the user communicates with the computer. Whereas the computer has complete control over the information being displayed on a graphics screen, it has no control over the user's actions, but must attempt to interpret them so as to provide a usable interface. This process is represented in the PPS model as the presentation editor and the recognizer. The fact that these components are not as well defined as the rest of the model is an indication of the complexity of the task of interpreting user actions.

All of the systems described above are based upon receiving user input through a keyboard and a mouse. These devices have proven to be very useful; however, there is no reason to believe that new and better input devices will not be developed. It is interesting to consider how well the window systems described above would be able to handle additional input devices. An X window server could be extended to handle new input devices through the X extension mechanism, which allows associating named "extensions" with an X window server. A client can query a server to find if a particular extension is supported, and interact with it using an extension-specific protocol. However, this does not provide a high level of support for alternate input devices. The NeWS system documentation does not explicitly define methods for accessing new server devices. However, if the drivers for new devices were included in a server, it would seem rather easy to access new input devices from down-loaded PostScript code. The NeXT system includes direct access to a number of input devices that are not specifically supported by the window system. One would hope that platforms for handling interactive graphics will eventually be extended to provide support for new types of input devices.

Keyboards and mouse devices are physically simple. All that the user can do with these devices is to press and release keys and mouse buttons, and move the mouse. The mouse is usually used to control the position of a cursor on the display screen, whose "hot spot" is located at a certain point at any given time. At the lowest level, monitoring the mouse cursor position and the state of all keys and buttons allows one to capture all user input. However, the window systems present this information in the more convenient form of a sequence of input events that are sent to application programs when the state of the keyboard or mouse changes. All of the example window systems support the generation of input events when any keyboard key or mouse button has been pressed or released, and whenever the mouse has been moved. An input event in all of these systems includes information about what input device has changed and a time stamp. Events also include the state of common "modifier keys" (such as shift and control), and the position of the cursor on the screen. The modifier key information is not strictly necessary, since the application can keep track of this information, but it proves to be very convenient when interpreting other events. The current cursor position information allows handling very fast sequences of mouse movements and button clicks, without introducing problems with coordinating multiple events.
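The event record described above can be sketched as a small structure. The field names are invented; none of the three systems uses exactly this layout:

```python
# A sketch of the input-event record common to all three window
# systems: which device changed, a time stamp, the modifier-key state,
# and the cursor position at the moment of the event.
from dataclasses import dataclass

@dataclass
class InputEvent:
    device: str                 # e.g. "keyboard", "mouse-button", "motion"
    action: str                 # e.g. "press", "release", "move"
    timestamp: int              # server time, for ordering fast sequences
    modifiers: frozenset = frozenset()   # shift/control state at the event
    position: tuple = (0, 0)    # cursor hot-spot when the event occurred

def is_shift_click(ev):
    """Interpret one event without tracking modifier state separately."""
    return (ev.device == "mouse-button" and ev.action == "press"
            and "shift" in ev.modifiers)
```

Because the modifier state and cursor position are captured when the event is generated, the application can interpret a burst of fast clicks correctly even if it processes them later.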

In addition to using events for reporting the state of the input devices, these window systems also use events as the main form of communication from the window server to the application program. Each window server recognizes a set of higher-level actions, such as the entrance and exit of the cursor from certain regions on the display screen, and generates special events corresponding to these activities. In many situations, the writing of applications can be simplified by having an application interpret only a subset of the possible input events. For example, an application that displays a simulated button on the screen only needs to care whether the cursor has entered the area of the button, but does not have to care about the exact position of the cursor within the button. These window systems provide different ways for an application to indicate what events they are interested in, so the window server knows whether to generate particular types of events.

The X window system provides a large fixed set of events. Each window has an associated "event mask," determining which types of events should be passed to that window's client. In addition to simple keyboard and mouse state changes, X also detects cursor movement into and out of the rectangular windows on the screen. It is possible to create windows that are not visible on the screen, so that cursor movement can be detected over arbitrary regions. X also detects a large number of events associated with window management, so that an application can be notified when its windows are moved or resized, or when portions of windows are exposed, and need to be redisplayed. Clients can generate any of the system events, as well as a "client event" containing a small amount of unformatted data. This facility can be used to provide simple inter-client communication.
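Event-mask selection can be sketched as simple bit testing. The bit names below are invented, not actual X protocol constants:

```python
# A sketch of X-style event selection: each window carries an event
# mask, and the server delivers an event only if the corresponding bit
# is set in the mask.  Names are illustrative.

KEY_PRESS      = 1 << 0
BUTTON_PRESS   = 1 << 1
POINTER_MOTION = 1 << 2
ENTER_WINDOW   = 1 << 3
EXPOSURE       = 1 << 4

def wants(event_mask, event_bit):
    """Would the server deliver this event type to this window?"""
    return bool(event_mask & event_bit)

# A button-like window: cares about clicks, entry, and exposure,
# but not about every pointer motion.
button_mask = BUTTON_PRESS | ENTER_WINDOW | EXPOSURE
```

By leaving the motion bit clear, the simulated button avoids the flood of motion events it would otherwise have to discard itself.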

The NeWS system server directly generates a small set of events when keyboard keys and mouse buttons are pressed and released, when the mouse is moved, when the cursor enters and exits a canvas (which may have an arbitrary shape, and may be transparent to define mouse-sensitive regions), and when regions of a canvas have been "damaged" and need to be redrawn. However, events are named using arbitrary PostScript atoms, so it is possible to create window manager processes that will create and distribute any set of events. This facility is used for communicating between multiple PostScript processes, associated with the same or different clients.

In order to receive events, a PostScript process "posts" any number of "interests," which are event records containing special patterns in their fields. When an event is generated by the server or another process, the NeWS server does a pattern matching operation between the event and the set of interests, and distributes the event to one or more of the processes waiting to receive it (the distribution method is described in more detail later). By posting interests, a process can ask to only receive events of particular types, associated with particular canvases, or sent from particular processes. Note that only PostScript processes can directly receive events. If a remote application needs to receive this information, it will have to be relayed back via a PostScript process. Such a down-loaded process can be designed to filter and transform the events into a form that can be most easily used by the application.
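The interest-matching step can be sketched as field-by-field pattern matching with wildcards. The wildcard convention and names are invented for illustration, not the actual NeWS record formats:

```python
# A sketch of NeWS-style interest matching: a process posts interest
# records whose fields hold either a concrete value or a wildcard; the
# server delivers each event to every process whose interest matches.
# Names and the wildcard convention are illustrative.

ANY = None   # wildcard: this field matches every event

def matches(interest, event):
    return all(want is ANY or want == event.get(key)
               for key, want in interest.items())

def distribute(event, interests):
    """Return the processes whose posted interests match the event."""
    return [proc for proc, interest in interests if matches(interest, event)]
```

A menu process might post an interest in button events on its own canvas, while a "logger" process posts all-wildcard interests and sees every event.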

The NeXT window server directly generates a small fixed set of mouse and keyboard input events, including events when the cursor enters or exits any of a set of "tracking rectangles" that have been registered with the window server. Some of these events are passed to the application, and some are consumed by the Application Kit methods, which produce a larger set of window-based events, similar to the set of X window events. One type of event is an application-defined event, which an application can generate to send any information to itself or another application, though the NeXT system provides other, more-direct, communication channels. Each NeXT window (but not each hierarchical view) has an associated event mask specifying what events should be sent to the application.

4.1 Distributing Events and Keyboard Focusing

Events provide a convenient mechanism for communicating the state of input devices, but we need to address the issue of how events are distributed to application programs. One of the benefits provided by a window system is the ability to have multiple application windows on a screen simultaneously, and quickly switch from using one application to another, while using a single keyboard and mouse. Even if there is only a single application, it is convenient to use the same input devices to operate the window manager, moving and resizing windows on the screen. Therefore, the mechanism for distributing input events has to be flexible enough to "connect" the input devices to different applications at different times. In addition, some applications contain multiple interactive elements (menus, text windows, buttons) that can be operated with the same input devices. It would ease the application programming task, and provide a more consistent user interface, for the event distribution system to support the selection of these different elements. Remember that one of the main faults of the PPS model was that it did not provide an adequate model for how input events were sent to one presentation editor versus another.

The operation of "connecting" an input device to a particular interactive element is called "setting the focus" of the input device. Most window system user interfaces are based on the idea that the keyboard is focused on one window at any particular time, while the mouse cursor ranges over the entire screen, controlling the window manager and changing the keyboard focus in addition to being used by the keyboard focus application. Two methods are typically used to change the keyboard focus, under the control of the mouse. In the "click to focus" approach, the keyboard focus is set to a particular window by moving the mouse cursor over that window and clicking a mouse button. The keyboard focus will remain with the new window until it is explicitly changed again. In the "focus under mouse" approach, the keyboard focus is set to whatever window is under the mouse cursor. Simply by moving the mouse, the keyboard focus can be changed. Both user interface styles have their proponents, and one can conceive of application interface designs that would work better using one approach versus the other. The X window system and the NeWS system provide support for both styles. The NeXT system user interface is designed to use the "click to focus" method, though it would probably be possible, using low-level routines, to implement the "focus under mouse" method.
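The two focus policies differ only in which mouse action moves the focus, which can be made concrete in a small sketch. The `FocusManager` class and the string policy names are invented for illustration; no window system exposes exactly this interface.

```python
class FocusManager:
    """Tracks which top-level window owns the keyboard under either policy."""
    def __init__(self, policy):
        assert policy in ("click-to-focus", "focus-under-mouse")
        self.policy = policy
        self.focus = None

    def mouse_moved(self, window_under_cursor):
        # Under "focus under mouse", mere motion retargets the keyboard.
        if self.policy == "focus-under-mouse":
            self.focus = window_under_cursor

    def mouse_clicked(self, window_under_cursor):
        # Under "click to focus", only an explicit click retargets it.
        if self.policy == "click-to-focus":
            self.focus = window_under_cursor

    def key_pressed(self, key):
        # Deliver the keystroke to the current focus window, if any.
        return (self.focus, key) if self.focus else None
```

With "click to focus", moving the cursor over a window without clicking leaves keystrokes undelivered; with "focus under mouse", motion alone is enough.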

It is interesting to compare how the different window systems distribute events, and how this is used to support setting the keyboard focus. The X window server always maintains one of the hierarchical windows displayed on the screen as the "focus window." When a keyboard event occurs, the event is sent to the application associated with the focus window, or with one of its hierarchical descendants, depending on the position of the mouse cursor when the event occurs. If the mouse cursor is within the focus window, the server examines the hierarchical descendant of the focus window directly under the mouse cursor. If that window's event mask indicates that it is interested in accepting keyboard events, then the event is sent to that window's application. Otherwise, the server examines the parent window containing that window, and repeats the process all the way up the ancestor chain to the focus window (although each window on the chain can specify that the server should not "propagate" particular event types to ancestor windows). If the server reaches the focus window, and it does not want the event either, the event is ignored. If the mouse cursor is outside of the focus window, the event is immediately sent to the focus window or ignored (depending on the focus window's event mask).
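The X delivery rule just described can be expressed as a short walk up the window tree. This is a simplified simulation of the behavior, not Xlib code: real X event masks cover many event types, and `Window`, `accepts_keys`, and `no_propagate` here stand in for the event mask and do-not-propagate mask of a real X window.

```python
class Window:
    def __init__(self, name, parent=None, accepts_keys=False, no_propagate=False):
        self.name = name
        self.parent = parent
        self.accepts_keys = accepts_keys   # event-mask bit for keyboard events
        self.no_propagate = no_propagate   # do-not-propagate keyboard events

def deliver_key(focus, window_under_cursor):
    """Return the window that receives a keyboard event, or None if ignored."""
    # Collect the ancestor chain from the window under the cursor to the root.
    w, chain = window_under_cursor, []
    while w is not None:
        chain.append(w)
        w = w.parent
    # Cursor outside the focus window: the focus window gets the event
    # directly, if its event mask allows it.
    if focus not in chain:
        return focus if focus.accepts_keys else None
    # Otherwise walk from the window under the cursor up to the focus window.
    for w in chain:
        if w.accepts_keys:
            return w
        if w.no_propagate or w is focus:
            return None
    return None
```

Note how a child that declines keyboard events silently passes them to the nearest interested ancestor, which is what lets an application handle keys uniformly for a whole subtree of decorative sub-windows.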

This mechanism can be used to support several different keyboard focusing styles. At the top ofan X window hierarchy is a "root window," covering the entire screen. If the focus window is setto be the root window, the effect is like using the "focus under mouse" method of changing thekeyboard focus. If the window manager sets the focus window to an application's window whena mouse click occurs, then this implements the "click to focus" method. Note that the X eventdistribution scheme supports both switching the keyboard focus between top-level applicationwindows, and between sub-windows within an application window.

The NeWS server supports a general mechanism for distributing events, based on PostScript processes posting any number of "interests" specifying which types of events are of interest to the process. Among the fields in an interest is a "canvas" field, containing either a particular canvas (indicating that the process is interested in events associated with that canvas) or containing null (to indicate an interest in events associated with any canvas). When a keyboard event occurs, the server first examines all of the process interests with a null canvas field, and then scans the interests associated with the canvas directly under the cursor, then its ancestor canvas, up to the root canvas. If an interest matches the event, then the event is sent to the process that posted the interest. Each matching interest can specify whether the event should be propagated further. In addition, each canvas can specify whether any events should be propagated to its ancestors. Note that copies of a NeWS event may be sent to multiple processes.
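The interest-matching order can be simulated in a few lines. This sketch is illustrative only: real NeWS interests are PostScript dictionaries with many more fields, and the `Interest` class, `distribute` function, and string canvas names here are invented. It also shows how a non-propagating null-canvas interest acts as an interceptor, which is the basis of the "click to focus" window manager described below.

```python
class Interest:
    def __init__(self, process, event_type, canvas=None, propagate=True):
        self.process = process        # process that posted this interest
        self.event_type = event_type
        self.canvas = canvas          # None matches events on any canvas
        self.propagate = propagate    # allow the event to keep matching

def distribute(event_type, canvas_under_cursor, interests, parents):
    """Return the processes that receive the event, in matching order."""
    recipients = []
    # 1. Interests with a null canvas field are examined first.
    for i in interests:
        if i.canvas is None and i.event_type == event_type:
            recipients.append(i.process)
            if not i.propagate:
                return recipients
    # 2. Then the canvas under the cursor and its ancestors, up to the root.
    c = canvas_under_cursor
    while c is not None:
        for i in interests:
            if i.canvas == c and i.event_type == event_type:
                recipients.append(i.process)
                if not i.propagate:
                    return recipients
        c = parents.get(c)
    return recipients
```

Because matching continues by default, one keyboard event can fan out to a window manager process and an application process in a single pass.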

This mechanism implements the "focus under mouse" user interface, if application processes post interests specifying their canvases. "Click to focus" can be implemented by a window manager process that intercepts all keyboard events using a null-canvas interest, and then redistributes the events directly to whatever canvas it considers the keyboard focus. One could design window managers to implement arbitrary focusing schemes.

The NeXT window system is designed to handle a single approach to setting the keyboard focus, supporting a particular user interface. At any time, there is a single active application, and a single "key window" belonging to that application, both of which the user can change by moving the mouse cursor into a window and clicking a mouse button. Mouse clicks can also be used to set the keyboard focus to a particular hierarchical view within a window. This is implemented by associating with each window a "responder chain" of views, which is usually set up to follow the path from a view through the chain of its ancestor views. When an application receives a keyboard event, it scans through the responder chain of its key window, in order, to find the first view that can handle the keyboard event. This chain is reset by the Application Kit methods that handle mouse clicks, to implement changing the keyboard focus. However, one could imagine implementing an application that changes the responder chain as the mouse cursor enters and exits particular views, implementing a "focus under mouse" user interface. To implement this style between windows from different applications would require re-implementing much of the functionality of the window manager, however.
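The responder-chain scan can be sketched as follows. The `View` class, `chain_from`, and `dispatch_key` names are invented for this illustration; the actual Application Kit builds and scans its chains through Objective-C message passing rather than an explicit list.

```python
class View:
    def __init__(self, name, handles_keys=False):
        self.name = name
        self.handles_keys = handles_keys

def chain_from(view, parent_of):
    """Build a responder chain from a view up through its ancestor views,
    as a mouse click selecting that view would."""
    chain = []
    while view is not None:
        chain.append(view)
        view = parent_of.get(view)
    return chain

def dispatch_key(responder_chain, key):
    """Scan the key window's responder chain, in order, for the first
    view that can handle a keyboard event; return (view name, key) or None."""
    for view in responder_chain:
        if view.handles_keys:
            return (view.name, key)
    return None
```

A click on a text field thus rebuilds the chain starting at that field, so subsequent keystrokes land there; a click on a view that cannot handle keys leaves them to its ancestors, or undelivered.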

5. Conclusions

This paper has described a number of systems that have been created for supporting interactive graphics systems, and has examined a number of concepts that have emerged from these implementations. Some of these concepts are of interest to human factors specialists (such as the methods for setting the keyboard focus to different windows), and others are of primary interest to the implementer (such as the methods for redisplaying exposed windows). This has not been an exhaustive analysis by any means, but it does serve to show the means that can be used to compare similar features of different systems. Such comparisons can be used to determine whether certain systems are suitable for particular applications, and how an application might be designed to use a particular system.

Based on this analysis, one could try to argue that one system is "better" than the others, but one would quickly run into the limits of this type of technical analysis. The success or failure of the different graphics systems will be influenced by future developments in graphics hardware and application software, in addition to the non-technical issues of business and marketing. At the moment, the X window system appears to be the most successful of the systems described, in terms of the number of users. However, it is carefully optimized to perform well with the current hardware technology. It will be interesting to see how successful the X window system appears in ten years.

Examining the progress of interactive graphics research indicates that the boundary between the window system and the "application" is changing, as researchers develop new ways of supporting user interaction, and the task of implementing interactive graphics applications should become easier. Toolkits of interface elements, such as the NeXT Application Kit, provide a way of quickly building complex applications while maintaining a consistent user interface style. Eventually, more of the capabilities of research systems such as GROW could be included in such toolkits, providing tools for automatically handling the layout of graphic objects on the screen, and maintaining connections between graphics and elements within the application. New developments in language design could also be applied to supporting the construction of interactive applications. Past experience shows that the type of applications that are built is influenced by the tools available for building them. One can safely assume that new types of interactive graphics applications will be created, using new interactive styles, and posing new problems for research and development.

6. References ("*" designates the assigned general exam papers)

[Adobe 85] Adobe Systems, Inc. PostScript Language Reference Manual. Addison-Wesley Publishing Company, Inc., Reading, Mass, 1985.

[Adobe 88] Adobe Systems, Inc. PostScript Language Program Design. Addison-Wesley Publishing Company, Inc., Reading, Mass, 1988.

*[Barth 86] Barth, P. An Object-Oriented Approach to Graphical Interfaces. ACM Transactions on Graphics, 5(2):142-172, Apr 1986.

[Borning & Duisberg 86] Borning, A., and Duisberg, R. Constraint-Based Tools for Building User Interfaces. ACM Transactions on Graphics, 5(4):345-374, Oct 1986.

*[Ciccarelli 84] Ciccarelli, E. Presentation Based User Interfaces. Ph.D. dissertation, MIT AI Lab Technical Report 794, 1984.

*[Gosling 86] Gosling, J. SunDew: A Distributed and Extensible Window System. In 1986 Winter USENIX Technical Conference Proceedings, pages 98-103.

[Hopgood et al 86] Hopgood, F.R.A., Duce, D.A., Fielding, E.V.C., Robinson, K., and Williams, A.S., Eds. Methodology of Window Management. Springer-Verlag, New York, 1986.

[Hutchins et al 86] Hutchins, E.L., Hollan, J.D., and Norman, D.A. Direct Manipulation Interfaces. In Norman, D.A., and Draper, S.W., Eds., User Centered System Design, Lawrence Erlbaum Associates, Hillsdale, NJ, 1986.

[Myers 89] Myers, B.A. Encapsulating Interactive Behaviors. In CHI '89 Conference Proceedings, pages 319-324, Austin, Texas, May 1989. ACM, New York.

*[NeXT 89] NeXT, Inc. The NeXT System Reference Manual. NeXT, Inc., May 1989.

[Nye 88] Nye, A. Xlib Programming Manual, Volume One. O'Reilly & Associates, Inc., Newton, Mass, 1988.

*[Scheifler & Gettys 86] Scheifler, R.W., and Gettys, J. The X Window System. ACM Transactions on Graphics, 5(2):79-109, Apr 1986.

[Schaufler 88] Schaufler, R. X11/NeWS Design Overview. In 1988 Summer USENIX Technical Conference Proceedings, pages 23-35, June 1988.

[Shneiderman 87] Shneiderman, B. Designing the User Interface. Addison-Wesley, Reading, Mass, 1987.

[Smith 83] Smith, R.G. Strobe: Support for Structured Object Knowledge Representation. In Proceedings of the 8th International Joint Conference on Artificial Intelligence, pages 855-858, Karlsruhe, Germany, Aug 1983. William Kaufman, Los Altos, CA.

[Smith et al 86] Smith, R.G., Dinitz, R., and Barth, P. Impulse-86: A Substrate for Object-Oriented Interface Design. In Proceedings of the ACM Conference on Object Oriented Programming Systems, Languages, and Applications, Sept. 1986. ACM, New York.

[Sun 87] Sun Microsystems. NeWS Manual. Sun Microsystems, Inc., Mountain View, California, March 1987.

[Swick & Ackerman 88] Swick, R.R., and Ackerman, M.S. The X Toolkit: More Bricks for Building User-Interfaces or Widgets for Hire. In 1988 Winter USENIX Technical Conference Proceedings, pages 221-228.

[Szekely & Myers 88] Szekely, P.A., and Myers, B.A. A User Interface Toolkit Based on Graphical Objects and Constraints. In Proceedings of the ACM Conference on Object Oriented Programming Systems, Languages, and Applications, pages 36-45, September 1988. ACM, New York.