A Transformation-based Approach to Building Multi-Platform User Interfaces
Using a Task Model and the User Interface Markup Language
by
Mir Farooq Ali
Dissertation submitted to the faculty of the
Virginia Polytechnic Institute and State University
in partial fulfillment of the requirements for the degree of

Doctor of Philosophy
in
Computer Science and Applications

Committee in charge:
Dr. Manuel Perez-Quinones, Chair
Dr. Scott Midkiff
Dr. Marc Abrams
Dr. Naren Ramakrishnan
Dr. Scott McCrickard

November 16, 2004
Blacksburg, Virginia, USA
Keywords: User Interface Markup Language, Multi-Platform User Interfaces, Task Model, Vocabulary, Transformation Algorithms
Copyright © 2004 Mir Farooq Ali
A Transformation-based Approach to Building Multi-Platform User Interfaces
Using a Task Model and the User Interface Markup Language
Mir Farooq Ali
(Abstract)
The widespread emergence of computing devices that go beyond the capabilities of traditional desktop computers has created a challenge for user interface (UI) developers, who lack a unified development process for building UIs for these devices.
This dissertation research focuses on creating a simplified development process for building UIs for multiple platforms. As part of this, the necessary building blocks (and their relationships) that can be used in a process to develop multi-platform UIs (MPUIs) are identified and specified. A task model, which is an abstract representation of the tasks that users perform with a system, is used as a high-level platform-independent specification for representing UIs for multiple platforms. The task model is supplemented with additional navigation attributes and containment operators for each target platform to facilitate the UI development process. This contribution is based on the insight that an uncontaminated task model, in conjunction with additional operators, allows different styles of UIs to be derived for different platforms.
This development process is evaluated by functional comparison with a few other multi-platform development processes, based on a set of criteria. In particular, a detailed comparison of this approach is performed with the approach used in the TERESA development environment. The process is also evaluated by demonstrating how the new features of this approach allow different styles of UIs to be built not only for a single platform, but also for different platforms. The two underlying notations used in this work are the Concurrent Task Tree (CTT) modeling notation for the task model and an intermediate language for UIs, the User Interface Markup Language (UIML). This research associates a new vocabulary with the UIML language to facilitate a multi-step transformation-based MPUI development process.
Contents

List of Figures
List of Tables

1 Introduction
  1.1 Problem Description
  1.2 Research Objectives
    1.2.1 Discussion of Objectives
  1.3 Sample Application
  1.4 Dissertation Outline

2 Background
  2.1 Terminology
  2.2 UIML
    2.2.1 Language Overview
    2.2.2 Generic UIML
  2.3 Concurrent Task Tree notation (CTT)

3 Related Work
  3.1 Model-Based Systems
    3.1.1 TERESA
    3.1.2 Model Driven Architecture (MDA)
    3.1.3 Dygimes
    3.1.4 Comparison of UIML with Model-based Systems
  3.2 Markup Languages
    3.2.1 XIML
    3.2.2 UsiXML
    3.2.3 AUIML
    3.2.4 XForms
    3.2.5 AAIML
    3.2.6 DISL
    3.2.7 Other Languages
    3.2.8 Comparison of Languages
  3.3 Task Representations
  3.4 Other Approaches and Related Techniques
    3.4.1 Transcoding
    3.4.2 Device Independence
    3.4.3 Accessible Interfaces

4 Development Process
  4.1 My Approach
    4.1.1 Building the Task Model
    4.1.2 Annotating the Task Model
    4.1.3 Specifying the Structural Mappings
    4.1.4 Generating Platform-specific UIML
    4.1.5 Customizing the UI for a Platform
  4.2 Discussion

5 Process Implementation Details
  5.1 Annotating the Task Model
    5.1.1 Navigation Attributes
    5.1.2 Grouping Operator
  5.2 Mapping of Tasks
  5.3 Generating Generic UIML
    5.3.1 Generating the Behavior
  5.4 Generating and Customizing Platform-specific UIML
  5.5 Illustrative Example

6 Functional Comparison
  6.1 Comparison to Other Approaches
    6.1.1 Description of Comparison Criterion
    6.1.2 TERESA
    6.1.3 XIML
    6.1.4 UsiXML
  6.2 Example Application
    6.2.1 Discussion
  6.3 Case Study
    6.3.1 Limitations

7 Conclusions and Future Work
  7.1 Contributions
  7.2 Future work

References

A Source Code

B Vita
Listings

2.1 Skeleton of a UIML document
2.2 Skeleton of a UIML interface
2.3 UIML code for sample UI in Figure 2.3
5.1 Abbreviated CTT XML notation for task model above
5.2 XML code indicating contains navigation operator as attribute
5.3 XML code indicating menustyle navigation operator as attribute
5.4 XML code indicating independent navigation operator as attribute
5.5 UIML code generated based on the menustyle navigation operator in task model
5.6 XML code indicating grouping of two tasks
6.1 VoiceXML listing for weather example
A.1 XSLT stylesheet for transforming annotated CTT task model to generic UIML
A.2 Desktop mappings between tasks and UIML generic parts
List of Figures

1.1 Calendar application for the Java platform.
1.2 Calendar application for the Palm OS platform.
1.3 Calendar application for the WML platform.
2.1 Sample families and platforms.
2.2 UIML's Meta-Interface Model (Phanouriou, 2000).
2.3 Sample UI built in UIML.
2.4 Sample task model for a weather application.
3.1 Task model-centric process for multiple platforms.
4.1 Usability Engineering process for multiple platforms.
4.2 Multi-step process for generating multi-platform UIs.
4.3 (a) All possible tasks 'contained' within a single container. (b) Each task enclosed in a separate container; this case requires extra navigation to perform the various tasks.
4.4 Schematic showing repeated iteration between the different phases of building a MPUI. The dashed lines indicate that the developer might have to take more than one pass to generate the final UI.
4.5 Schematic tying in main phases in my process to a usability engineering process for MPUIs.
4.6 Schematic indicating data-flow between different phases of my process. The artifact produced out of each phase is also indicated in the figure.
5.1 A simple task model with one root task and three subtasks.
5.2 A container with three subtasks.
5.3 A menu in HTML with three options, each leading to one of the subtasks.
5.4 A container with three subtasks.
5.5 A simple task model with one group of two tasks.
5.6 Screen-shots of a sample form in Java Swing (left) and HTML (right).
5.7 Original task model.
5.8 Task model with groupings indicated for HTML (desktop family).
5.9 Task model with groupings indicated for WML 1.1 (small-device family).
5.10 The final rendered UI for WML.
5.11 The final rendered UI for HTML.
6.1 Simple task model.
6.2 Modified task model for menustyle navigation style.
6.3 Modified task model to be able to generate independent navigation style.
6.4 Task model for a weather application.
6.5 The corresponding HTML rendering of the UI generated from the task model in Figure 6.4.
6.6 Task model with two containers and a 'menustyle' navigation style for the main task for HTML (desktop family).
6.7 The corresponding HTML UI for Figure 6.6 with two containers and one menu to allow navigation to the containers.
6.8 Partial task model in TERESA with extra tasks (enabling and disabling) added to generate menustyle navigation style. The extra transition tasks are indicated by the dashed circles.
6.9 Task model with groupings indicated for voice family. There is one menu at the root level of the UI and one at the second level for containers two and three.
6.10 Original partial subtree of task model showing some of the subtrees for displaying the current weather information for the WML platform.
6.11 Same task model in TERESA with extra nodes (enabling and disabling) added to generate menustyle navigation style. The extra nodes are indicated by the dashed circles.
6.12 Task model with containers indicated for WML (small-device family). Note that there are more containers and a different style of menu generated.
6.13 Screen-shots showing final rendered UIs for WML for the task model in Figure 6.12. The dashed arrows indicate the possible navigation between the various WML cards.
6.14 Original partial subtree of task model showing some of the subtrees for displaying the current weather information for the WML platform.
6.15 Same task model in TERESA with extra nodes (enabling and disabling) added to generate menustyle navigation style. The extra nodes are indicated by the dashed circles.
6.16 Original partial subtree of task model showing some of the subtrees for displaying future weather conditions for the WML platform.
6.17 Same task model in TERESA with extra nodes (enabling and disabling) added for each subtree to generate menustyle navigation style with a larger set of containers. The extra nodes are indicated by the dashed circles.
6.18 Initial task model for the BBC news web-site, available in HTML and WML.
6.19 Upper portion of the BBC news web-site indicating different structural sections of the page. Task model indicating a few annotations and groupings in the bottom part of the figure.
6.20 Lower section of the BBC web-site.
6.21 Partially annotated task model with a few screen shots of the BBC news site on a cell phone browser.
7.1 TIDE 2 showing the four different panels, each representing different phases of the UI development activity.
List of Tables

2.1 Example of a Generic Vocabulary
2.2 Different types of Interaction types
2.3 Different types of Application types
5.1 Table indicating categories of UIML parts
5.2 Table indicating mapping of tasks to different part categories
6.1 Feature and artifact comparison with TERESA.
Acknowledgments
Writing and completing a dissertation is akin to running a marathon. First, I wish
to acknowledge my principal advisor, Dr. Manuel Perez-Quinones, for his guidance and supervision in helping me cross the finish line. He has been a wonderful advisor and, more
importantly, a friend to me. I have been very fortunate to see him join the Computer Science
department at a time when I needed an advisor and he has helped me through various highs
and lows. He has been extremely patient with me during my less productive semesters, yet
been persistent enough in pushing me to complete the research necessary to graduate. In
spite of being extremely busy with other students and his other work, there have been few
times when I could not just knock on his door and talk to him about my research and other
matters.
I am also extremely grateful to Dr. Marc Abrams who was my principal advisor
in my initial days as a graduate student before he left Virginia Tech. He introduced me
to this research topic and provided initial mentoring to me as a new Ph.D. student here at
Virginia Tech and also during my internship with his company. I would also like to thank
the other members of my committee: Dr. Scott Midkiff, Dr. Naren Ramakrishnan and Dr. Scott
McCrickard for their advice and guidance in providing a better structure to the dissertation
document. In particular, I would like to thank them all for their comments in helping me
revise my Functional Comparison chapter.
A few other people I would like to thank in the Computer Science department in-
clude Dr. Verna Schuetz, Dr. Cal Ribbens, Dr. James Arthur, Dr. Rex Hartson, Dr. Ed Fox,
Carol Roop and Jessie Eaves. At various points in my graduate life, I have received encouragement, guidance and help from each of them either directly or indirectly. In particular,
I am indebted to Dr. Verna Schuetz who showed enough confidence in me to allow me to
teach undergraduate courses.
My parents and family have been a constant source of guidance and support for
me throughout my graduate life. Although I have been physically away from them for most
of my studies, their encouragement and support has been unwavering. One person without
whom I could not have finished this dissertation is my wife Batul who has been an unlimited
source of strength, love and patience for me during this long and arduous journey. She has
helped me maintain my focus on my dissertation and keep my spirits up during the last
few years when I would get distracted by innumerable little things. She deserves as much
credit as I do in this dissertation being completed. I also wish to thank my many friends,
too innumerable to mention individually, within the Computer Science department and in
the Blacksburg community whose presence and support has helped me in my stay here.
Last but not least, I am grateful and thankful to Allah (God) who has helped me in
finishing up this dissertation. Personally, it has not been an easy journey for me in finishing
up my Ph.D. I have gone through many ups and downs. It has been through His benevolence and blessings that I have been able to finally finish this and move on to the next phase
of my life.
Chapter 1
Introduction
Imagine a situation in which a person owns a desktop computer at work, a laptop
computer that he¹ carries between home and office, a Personal Digital Assistant (PDA) that
he carries around with him, and a cellular phone. Imagine that this person has a favorite
calendar and an email program that he uses on his desktop machine. Now suppose that he
wants to use the same applications on his other computing devices.
The face of computing has been rapidly changing over the past few years. People
are no longer tethered to their desktop machines. In addition to using their desktop computers, people are using a plethora of different devices and information appliances. These
devices have varying input/output characteristics, modalities and interaction mechanisms.
Many of these devices incorporate interaction techniques that, until a few years ago, were restricted to research environments: touch screen displays, styli for input, voice recognition, tiny screens, 3D graphics, virtual environments, etc. However, users want to use the
same kinds of applications and access the same data on these appliances that they can access
on their desktop computers.
The UIs for these devices and platforms go beyond the traditional interaction metaphors.
Accessible user interfaces are another breed of UIs that introduce challenges unlike those posed by traditional user interfaces. For example, people with vision impairments are
unable to use a graphical user interface (GUI) that is designed for people with normal sight.
The user interface might be changed for this group of users by having larger fonts for easier viewing or augmenting it with a screen reader. Accessibility itself could be considered a different platform because accessible user interfaces have to be designed in a different fashion.

¹The term 'he' is used in place of the more appropriate 'he/she' throughout this dissertation to avoid awkwardness of language.
These varying demands make it extremely difficult to apply traditional UI design methodologies to create UIs that work across multiple platforms; there is no unified design methodology for creating user interfaces for these devices. Currently, developers have to design interfaces for each device separately, taking into account its physical characteristics and interaction styles.
The primary objective of this dissertation research is to identify and develop the
building blocks of a unified design methodology for the creation of UIs for different de-
vices and platforms. The work involved in the research identifies different phases of the
methodology, develops mechanisms to implement the various phases and provides transi-
tions between the phases. The developer can also build different types of UIs using some
extra annotations that are provided in the process. All of this is done in the broader con-
text of providing a simplified and usable process to the interface developer who is involved
in building the interface. There are two main underlying representations used in this work:
the Concurrent Task Tree (CTT) notation for representing task models and the User Interface
Markup Language (UIML).
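To give a flavor of the second representation, the skeleton below sketches how a UIML document separates concerns. It is an abbreviated, illustrative outline rather than a complete document (the full skeleton is presented in Chapter 2):

```xml
<?xml version="1.0"?>
<uiml>
  <head/>                <!-- metadata about the document -->
  <interface>
    <structure/>         <!-- the parts (abstract widgets) and their hierarchy -->
    <style/>             <!-- presentation properties of each part -->
    <content/>           <!-- text and images, kept separate for reuse -->
    <behavior/>          <!-- rules pairing events with actions -->
  </interface>
  <peers>
    <presentation/>      <!-- vocabulary: maps part classes to toolkit widgets -->
    <logic/>             <!-- maps UI events to application code -->
  </peers>
</uiml>
```

Associating a different vocabulary in the peers section, while leaving the interface section untouched, is what allows one abstract description to target several platforms.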
1.1 Problem Description
Olsen (1999), in the broader context of problems existing and emerging due to
changes in the basics of interactive techniques, talks about problems in interaction emerging
due to the diversity of interactive platforms. He uses the term “chaos” to describe the overall problems. He writes that computers of the future could well be wall-sized, desk-sized,
hand-sized, palm-sized or even ear-sized. An immediate outcome of these varying devices
is the corresponding variety of interaction styles. And even while using these varieties of
devices, people would still want access to the same information. He mentions that while it
would be reasonable to expect for these varying devices to have different interaction styles,
it would not be acceptable to assume that they would all operate independently of each
other. He addresses the problem at an even deeper level, when he indicates that the typical
modeling process (i.e., studying a problem, forming an abstract model to represent the problem, and forming a solution around the abstract model) used to develop interactive systems
is insufficient to accommodate the variety of interaction devices and interaction styles we
are dealing with today.
To further complicate the issue, mobile devices introduce some additional problems. Johnson (1998) outlines four concerns regarding the Human Computer Interaction
(HCI) of mobile systems:
1. The demands of designing for mobile users, their tasks and contexts,
2. Accommodating the diversity and integration of devices, network services and applications,
3. The current inadequacy of HCI models to address the varied demands of mobile systems, and
4. The demands of evaluating mobile systems.
Brewster et al. (1998) talk about the problems associated with the small-screen size
of hand-held devices. In comparison to desktop computers, hand-held devices will always
suffer from a lack of screen real estate, and so new metaphors of interaction have to be devised for such devices. Some of the other problems that they mention while dealing with
small devices are with navigation and presenting dynamic information. Their approach is
to use non-speech sound as an alternative for providing feedback to the user. Two of the
problems I encountered in my own work (Ali and Abrams, 2001; Ali et al., 2004) in building
user interfaces for different platforms are the different layout features and screen sizes associated with each platform and device. A related problem that I faced at the implementation
phase was to learn the different UIML vocabularies associated with each platform.
To summarize, the problems faced in designing UIs for different devices that I attempt to solve in this dissertation include:
• Different layout facilities provided by the platform
• Varying screen size
• Different interaction techniques
• Different languages and toolkits on different devices
This makes it quite difficult to design and build UIs that allow the users to perform the same
tasks on different devices. People have to build completely separate UIs for each platform.
1.2 Research Objectives
The main goal of this research is to simplify the creation of multi-platform UIs
(MPUIs) using UIML. The research objectives in achieving this are listed below:
1. Identify the necessary building blocks that could be used in a process to develop
MPUIs. Develop the specifications for these building blocks and provide the relationships between them.
2. Demonstrate that UIML could be used to create a generic platform-independent description of a UI that works for multiple platforms. Show that the generic description
goes beyond a least common denominator or common widget approach.
3. Show that a task model, used in conjunction with the generic platform-independent
UIML description, is a feasible building block for the process of building multi-platform
UIs. Develop the specification for this logical abstraction or model.
4. Demonstrate that a significant part of the process of building MPUIs could be automated. Provide algorithms to automate the mechanical steps involved in creating the
MPUIs.
5. Demonstrate the feasibility of this process by demonstrating that it works for some
target platforms.
1.2.1 Discussion of Objectives
Each of the proposed Research Objectives is discussed in detail next.
Objective 1
As discussed in Section 1.1, the widespread proliferation of information appliances
and devices has introduced new problems for the creation of UIs for these platforms. Most
of the traditional UI design methodologies and processes are oriented towards GUIs for the
desktop platform. Even if they consider alternative platforms, there is a lack of a process or
methodology that could be used to build UIs for multiple platforms. In this research, I investigate how to simplify the overall process with which MPUIs can be built. As part of this,
I identify the necessary building blocks that are required to build MPUIs. The relationships
between these building blocks are also defined. The objective also includes an attempt to draw relationships between my process and an accepted Usability Engineering process (Hix and Hartson, 1993). A discussion of this objective is presented in Chapter 4.
Objective 2
As I demonstrate in Section 1.3 later in this chapter, it is possible to create UIs for
multiple platforms using UIML alone. However, there is a lot of duplication of effort in
developing the UI. This objective seeks to demonstrate that a set of generic elements can be
defined in the form of a vocabulary that can be used to describe a complete interface. The
objective is to provide a generic description that could be mapped to multiple platforms that
are similar in nature. Chapters 2 and 5 discuss how this objective is achieved.
Objective 3
The objective here is to show that a logical abstraction or model of the UI being built, independent of the various target platforms, is necessary to build multi-platform UIs. I use a task model (Paterno, 1999) as the logical model for a platform-independent
representation. As part of this objective, I also show that using certain annotations in conjunction with the task model provides a powerful mechanism to the UI developer in building MPUIs. Chapters 2, 4 and 5 present a detailed discussion about this objective.
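As a concrete illustration of such an annotation, a task in the XML form of a CTT model might carry a platform-specific navigation attribute. The navigation operators themselves (contains, menustyle, and independent) are the ones defined in Chapter 5; the element and attribute spellings below are illustrative assumptions, not the exact schema:

```xml
<!-- Illustrative fragment, not the exact CTT XML schema: a parent task
     annotated so that, for the WML platform, its subtasks are reached
     through a generated menu rather than a single container. -->
<Task Identifier="ManageAppointments" navigation="menustyle" platform="wml">
  <Task Identifier="ViewDay"/>
  <Task Identifier="ViewWeek"/>
  <Task Identifier="ViewMonth"/>
</Task>
```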
Objective 4
The goal of this objective is to show that certain steps of the overall process can be automated without developer intervention. As part of this, the steps where automation can occur in the
process are identified and necessary algorithms developed to achieve the automation. This
objective is discussed in Chapter 5.
Objective 5
The overall goal of this research is to simplify the construction of multi-platform UIs. Objectives 1-4 discuss various parts of my approach. In this objective, I show
that this process is feasible and does work for a set of target platforms. The target platforms
selected for this objective are based on the availability of renderers to support them and also
to represent a diverse set in terms of screen sizes. I also compare my work to a few other
approaches to demonstrate that this process is simple to use for building MPUIs. Chapter 6
discusses this objective.
1.3 Sample Application
I now present a motivating example of a calendar application on a few different platforms to highlight the problems in developing MPUIs. This example also illustrates a few
concepts that I use in this research. My work does not necessarily address all the issues
encountered in this example.
Figure 1.1: Calendar application for the Java platform.
Consider a simple appointment program that permits users to view daily, weekly
or monthly views of their appointments. I implemented this application in UIML for the
Java Swing, Palm OS and WML (Wireless Markup Language) platforms respectively using
their platform-specific vocabularies. Figure 1.1 illustrates the application for Java, while Figures 1.2 and 1.3 illustrate the same for Palm OS and WML, respectively. I wrote three separate files, one for each platform, and used platform-specific renderers to render each interface.
Figure 1.2: Calendar application for the Palm OS platform.
Figure 1.3: Calendar application for the WML platform.
I had to put considerable effort into duplicating the same functionality in each platform-
specific interface, because of the inherent differences in the layout and vocabularies of each
platform.
Java has its own layout managers for placing its UI elements on the screen. Palm
OS does not provide any layout mechanism. Absolute positioning has to be used on the
Palm OS. WML also provides no layout mechanism and, in addition, does not allow absolute positioning. The UIs for the Java and Palm OS platforms are quite similar in
appearance. However, because of the limited screen size on the Palm, some data has to be
omitted. This problem is further compounded for WML because the interface itself has to
be completely reorganized around menus due to the tiny screen size. The names of the elements (stylistic and presentation attributes) and the events (behavior) associated with them also differ across the platforms.
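To illustrate the vocabulary differences, the fragments below sketch how the same "Day" control might be described in two platform-specific UIML vocabularies. The class and property names are representative assumptions rather than the exact vocabulary entries:

```xml
<!-- Java Swing vocabulary (names illustrative): the control is a button -->
<structure><part id="dayView" class="JButton"/></structure>
<style><property part-name="dayView" name="text">Day</property></style>

<!-- WML vocabulary (names illustrative): the same control becomes a link,
     and navigation between cards must be arranged explicitly -->
<structure><part id="dayView" class="A"/></structure>
<style><property part-name="dayView" name="content">Day</property></style>
```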
The number of “containers” (UI components used to contain other UI elements)
differs across platforms in the above example. The Java and Palm OS versions use three
containers, whereas more are needed for WML (only four are indicated in the example).
Because of the increased number of containers on WML and other small-screen platforms,
there is more navigation between them, so the UI developer must also provide navigation
capabilities between the containers. This places an additional burden on the developer.
This example illustrates some of the problems faced by UI developers in building
MPUIs.
1.4 Dissertation Outline
The rest of the document is organized as follows:
Chapter 2: Presents the details of CTT and UIML, the two main specifications used in this
research.
Chapter 3: A survey of work in several related areas, including a few UI development lan-
guages, task notations and model-based systems.
Chapter 4: A high-level view of my development process for MPUIs.
Chapter 5: The implementation details of the development process.
Chapter 6: A functional comparison of the development process presented in this research
with a few other approaches for building multi-platform UIs.
Chapter 7: An overall summary of the presented research and avenues for further research.
Appendix A: A summary of the algorithm for converting a task model to generic UIML
and the XSLT implementation of the algorithm.
Chapter 2
Background
In Chapter 1, I presented the primary objectives of this research. In particular,
objectives 2 and 3 call for demonstrating the feasibility of using UIML and a task model
as necessary building blocks in the development process for building MPUIs. I selected
UIML as the representation language in two phases of my process because of its features
for representing UIs at different levels of abstraction and for different platforms by simply
associating alternative vocabularies. Also, UIML was the only XML-based UI development
language in existence at the time this research was started that had a clean separation of
concerns among the different aspects of a UI. I discuss these features of UIML later in this
chapter (Section 2.2).
The choice of a task model as another important building block in the process
was made after examining different models and representations used in traditional
UI development processes. Task analysis is an integral part of any interaction design
process. A task model is a formal outcome of the task analysis phase and is a representation
of the tasks that the end-user performs with the UI. The Concurrent Task Tree (CTT) notation is
a recent task model notation that comes with tool support and produces an XML representation
(Paterno, 1999). It also allows the specification of different types and categories of tasks,
which makes it easy to generate UIML from this notation. In addition, it allows tasks to be
represented at various levels of granularity and temporal relationships to be specified between
the different tasks. Any other task model that has these features (hierarchical arrangement
of tasks, specification of different types and categories of tasks, temporal relationships between
tasks and an XML-based representation) could be used instead of CTT. A more detailed
discussion of CTT follows later (Section 2.3).
I present the basics behind UIML and CTT in this chapter and also introduce some
terms that are used in this dissertation.
2.1 Terminology
Below is a list of the most commonly used terms in this document.
Application: An application is defined to be the back-end logic behind a UI that implements
the interaction supported by the user interface.
Developer: A developer is the person who builds the UI.
Device: A device is a physical object with which an end-user interacts using a UI, such
as a Personal Computer (PC), a hand-held computer (e.g. a Palm), a cell-phone, an
ordinary desktop telephone or a pager.
End user: The person who uses the UI built by the developer is the end-user.
Platform: A platform is the combination of a device, an Operating System (OS) and a
toolkit. An example of a platform is a PC running Windows XP on which applications
use the Java Swing toolkit. In the case of HTML, this definition has to be expanded
to include the version of HTML and the particular web browser being used. For ex-
ample, Internet Explorer 6.0 running on a PC with Windows XP would be a different
platform than Netscape Communicator 7.2 running on the same PC with the same OS
since they implement HTML differently.
Family: A family of platforms is a group of platforms that have similar features. For exam-
ple, the desktop family could include Firefox 1.0 with XHTML 1.0 on a PC running
Windows XP and Java Swing 5.0 on a PC.
Rendering: This is the process of converting a UIML document into a form that can be
meaningfully presented (e.g. through sight or sound) to an end-user, and with which
the user can interact. Rendering can be accomplished in two forms: by compiling
UIML into another language (e.g. HTML, WML or VoiceXML), or by interpreting
UIML. An interpreter is a program that reads UIML code and makes calls to an Appli-
cation Program Interface (API) that displays the UI and allows interaction.
Toolkit: A toolkit is the library or markup language used by the application program to
build its UI. A toolkit typically describes the behavior of widgets like menus, but-
tons and scrolling bars. In the context of this chapter, the term toolkit includes both
markup languages like WML, XHTML, and VoiceXML (with their sets of tags) and
more traditional APIs for imperative languages like Java Swing, Java AWT and Mi-
crosoft Foundation Classes for C++.
UI element: A UI element or widget is a primitive building block provided by the toolkit
for creating UIs.
Vocabulary: A vocabulary is the set of names, properties and associated behaviors for UI
elements. A generic vocabulary is the vocabulary shared by all platforms of a family.
The UIML 3.0 specification (http://www.uiml.org/specs/uiml3/) shows how
to formally define a vocabulary and use it within a UIML document.
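
As an illustration of the kind of mapping a vocabulary supplies, the following sketch shows a presentation section that binds a generic class name to a Java Swing widget. It is loosely modeled on the <d-class>/<d-property> syntax of the UIML specification; the identifiers and maps-to values here are illustrative, not copied from an actual vocabulary file.

```xml
<peers>
  <presentation base="Java_1.3_Harmonia_1.0">
    <!-- Illustrative mapping: the generic class G:Button is rendered
         as a Swing JButton -->
    <d-class id="G:Button" used-in-tag="part"
             maps-type="class" maps-to="javax.swing.JButton">
      <!-- The generic "text" property is routed through setText -->
      <d-property id="text" maps-type="setMethod" maps-to="setText"/>
    </d-class>
  </presentation>
</peers>
```

A renderer that loads such a vocabulary can translate every <part> of class G:Button into a JButton and apply its text property through the mapped method.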
Figure 2.1: Sample families and platforms. (The figure groups Internet Explorer 6.0 for
HTML, Netscape 7.2 for HTML and Java Swing 1.4 into the desktop family; WML into the
WAP family; and VoiceXML into the voice family.)
Figure 2.1 clarifies some of the terminology presented. Based on the above de-
finitions, using HTML with Internet Explorer 6.0 on a Personal Computer (PC) running
the Windows XP operating system is one platform within the Desktop family. This fam-
ily also includes the following platforms: the Java Swing 1.4 toolkit on a Windows XP PC,
and HTML with Netscape 7.2 on a Windows XP PC. Each version of Internet Explorer and
Netscape (on the same device and OS) is a separate platform within the same family. My
definition of the desktop family is based on the similar layouts and screen sizes provided
by the platforms that make it up. Netscape, Internet Explorer and Java Swing are all
designed for devices that have ample screen real estate and similar layout facilities. On the
other hand, the WAP or small-device family has significantly different layout capabilities
and its devices have smaller screens. Similarly, the mode of interaction with voice UIs
differs significantly from that with graphical UIs, so voice platforms merit a family of their own.
2.2 UIML
UIML (Abrams et al., 1999; Abrams and Helms, 2002, 2004; Phanouriou, 2000) is a
declarative XML-based language that can be used to define user interfaces. Initial work
on UIML started in late 1997 at the Center for Human-Computer Interaction at Virginia
Tech. Harmonia (http://www.harmonia.com) started developing a suite of tools for
UIML in 1997. One of the first goals of UIML was to design a declarative language that
could describe any User Interface (UI) that an imperative language could describe. The
second goal, which coincided with the emergence of XML and the widespread emergence
of information appliances, was to design one language to serve as a canonical representation
of UIs for any device, using any language, any UI metaphor, and any operating system.
Harmonia developed an initial language version, UIML1, in January 1998. This was
redesigned based on lessons learned from its usage, and the second version of the language
was released in the summer of 1999 in the form of a specification. Constantinos Phanouriou's
dissertation at Virginia Tech (Phanouriou, 2000) also provided a major contribution
to the development of UIML. The current specification of UIML is version 3.0, which is
currently undergoing a standardization process through the OASIS UIML technical committee
(http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml).
One of the original design goals of UIML is to “reduce the time to develop user
interfaces for multiple device families” (Abrams and Phanouriou, 1999). A related design
rationale behind UIML is to “allow a family of interfaces to be created in which the common
features are factored out” (Abrams et al., 1999).
This indicates that inherent in the design of UIML itself is the capability to create
multi-platform UIs. However, although UIML allows a multi-platform description of UIs,
there is limited commonality among the platform-specific descriptions when platform-specific
vocabularies are used. This means that the UI designer has to create a separate user interface
for each platform using its own vocabulary, which is defined as a set of user interface
elements with associated properties and behavior.
One of the best features of UIML is its clean separation of concerns, which allows
a generic vocabulary to be attached to UIML (Ali and Perez-Quinones, 2002; Ali et al., 2002). A
generic vocabulary is at a higher level of abstraction than a platform-specific vocabulary and
represents a family of platforms that share common layout features. UIML with a
generic vocabulary is referred to as generic UIML, and UIML with a platform-specific vocabulary
as platform-specific UIML. The generic UIML is converted into multiple instances of platform-
specific UIML for final rendering.
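
To make the distinction concrete, the sketch below contrasts a fragment of generic UIML with a platform-specific counterpart for Java Swing; the part identifiers are illustrative, and the class names follow the generic vocabulary of Table 2.1 and the Java vocabulary of Listing 2.3.

```xml
<!-- Generic UIML: one description for the whole desktop family -->
<structure>
  <part id="Main" class="G:TopContainer">
    <part id="Greeting" class="G:Label"/>
    <part id="Ok" class="G:Button"/>
  </part>
</structure>

<!-- Platform-specific UIML: the same UI bound to the Java Swing vocabulary -->
<structure>
  <part id="Main" class="JFrame">
    <part id="Greeting" class="Label"/>
    <part id="Ok" class="JButton"/>
  </part>
</structure>
```

The conversion from the first form to the second is essentially a renaming of classes, plus the platform-specific layout and style decisions.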
Although the use of a generic vocabulary raises the level of abstraction compared
to dealing with different languages, the problem of mapping this to multiple families that
are quite different still persists. For example, a generic vocabulary for a desktop family can-
not be mapped to a voice family, which differs quite substantially from the desktop family
of platforms. This necessitates an even higher level of abstraction than that of a generic
UIML representation. Another of UIML’s shortcoming lies in the fact that a platform or
device-specific vocabulary has to be associated with the language to make it work for any
platform and yield the final rendered interface. A renderer for that particular platform ren-
ders a platform-specific UIML file. This does not allow for an easy transition of that UI to a
different platform.
Hence, this dissertation tries to provide a methodology or framework in which the
device-independent representation of UIs in UIML could be exploited easily. The different
language features of UIML are now presented below.
One of the primary design goals of UIML is to provide a canonical format for de-
scribing interfaces that map to multiple devices. Constantinos Phanouriou lists some of the
criteria used in designing UIML to generate interfaces for multiple devices using a single
canonical format:
• UIML should be able to map the interface description to a particular device/platform
• UIML should be able to separately describe the content, structure, behavior and style
aspects of an interface
• UIML should be able to describe the behavior in a device-independent fashion
• UIML should be able to give the same power to the UI implementer as the native
toolkit
2.2.1 Language Overview
Since the language is XML-based, the different components of a user interface are
represented through a set of tags. The language itself does not contain any platform-specific
or metaphor-dependent tags. For example, there is no tag such as <window> that is directly
linked to the desktop metaphor of interaction. UIML uses about thirty generic tags instead.
Platform-specific renderers have to be built in order to render the interface defined in UIML
for that particular platform. Associated with each platform-specific renderer is a vocabulary
of the language widget-set or tags that are used to define the interface in the target platform.
A skeleton UIML document is shown in Listing 2.1. The first line of a UIML document
identifies it as an XML document. The second line gives the location of the Document
Type Definition (DTD) that provides the syntactic structure to which the document must
conform.
Listing 2.1: Skeleton of a UIML document

<?xml version="1.0"?>
<!DOCTYPE uiml PUBLIC "-//UIT//DTD UIML 2.0 Draft//EN"
    "UIML2_0g.dtd">
<uiml>
  <head> ... </head>
  <interface> ... </interface>
  <peers> ... </peers>
  <template> ... </template>
</uiml>
At the highest level, a UIML document comprises four components:
• <head>: This optional tag contains meta-data about the interface. The content of this
tag is ignored while rendering the interface.
• <interface>: This is the heart of the UIML document in terms of representing the
actual user interface. All the UIML elements that describe the UI are present within
this tag. Listing 2.2 illustrates the skeleton of a UIML <interface> and its contents.
The four main components are:
<structure>: The physical organization of the interface, including the relationships
between the various UI Elements within the interface, is represented using this tag.
Each <structure> comprises different <part>s. Each part represents an actual
platform-specific UI element and is associated with a single class of UI elements.
The term class in UIML denotes a particular category of UI elements. Different parts
may be nested to represent a hierarchical relationship. There might be more than one
structure in a UIML document, representing different organizations of the same UI.
<style>: The style contains a list of properties and values used to render the in-
terface. The properties are usually associated with individual parts within the UIML
document through the part-names. Properties can also be associated with particular
classes of parts. Typical properties associated with parts for GUIs could be the background
color, foreground color, font, etc. It is also possible to have multiple styles
within a single UIML document, possibly associated with multiple structures or
even the same structure. This facilitates the use of different styles for different contexts.
<behavior>: The behavior of an interface is specified by enumerating a set of condi-
tions and associated actions within rules. UIML permits two types of conditions. The
first condition is when an event occurs, while the second is true when an event occurs
and the value of some data associated with the event is equal to a certain value. There
are four kinds of actions that occur. The first action is to assign a value to a part’s
property. The second action is to call an external function or method. The third is to
fire an event and the fourth action is to restructure the interface.
<content>: This represents the actual content associated with the various parts of
the interface. A clean separation of the content from the structure is useful when dif-
ferent content is needed under different contexts. This feature of UIML is very helpful
when creating interfaces that might be used in multiple languages. Suppose an inter-
face is created for both French and English. In this case, separate content is essential.
Listing 2.2: Skeleton of a UIML interface

<interface>
  <structure> ... </structure>
  <style> ... </style>
  <behavior> ... </behavior>
  <content> ... </content>
</interface>
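
As a sketch of how this separation supports multiple languages, the fragment below defines English and French content sections whose constants a part's property can reference; the syntax follows UIML's <constant> and <reference> elements, but the identifiers are illustrative.

```xml
<content id="English">
  <constant id="greeting" value="Hello World!"/>
</content>
<content id="French">
  <constant id="greeting" value="Bonjour le monde!"/>
</content>
<style>
  <!-- The label's text is resolved from whichever content section
       is selected at render time -->
  <property part-name="HWL" name="text">
    <reference constant-name="greeting"/>
  </property>
</style>
```

Switching the interface between the two languages then amounts to selecting a different <content> section, with no change to the structure or behavior.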
• <peers>: In UIML, all the device and toolkit specific information is isolated within
this tag. This section of the UIML document is used to map the parts and associ-
ated behavior and properties to toolkit and device-specific widget names. Normally, a
UIML author does not write peer components, but simply includes existing ones. The
peers section comprises presentation and logic: the presentation element provides infor-
mation about a single UI toolkit by describing its widgets and events. There
are two child elements within a <peers> element:
1. The <presentation> element contains mappings of part and event classes,
property names, and event names to a UI toolkit. This mapping defines a vo-
cabulary to be used with a UIML document, such as a vocabulary of classes and
names for VoiceXML or WML.
2. The <logic> element provides the glue between UIML and other code. It de-
scribes the calling conventions for methods that are invoked by the UIML code.
• <template>: The goal of the UIML template element is to enable re-usability. Cer-
tain parts of an interface can be described as a template, and reused multiple times
either within the same UI or across other UIs. This reduces the amount of UIML code
needed to develop a UI and also ensures a consistent presentation. The template element
within a UIML document can be used in three ways: replace, append or cascade.
In the replace mode, a default implementation of the UI is replaced by the
template. In the append mode, the template's properties or behavior are appended to the
default code. Finally, the cascade mode allows certain properties from the template to be
customized. A detailed discussion of the language features
can be found in the UIML language specification (UIML2, 2000), while a discussion of
the language design issues can be found in Phanouriou (2000).
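
As an illustration of the template mechanism, the sketch below defines a reusable part and pulls it into a structure through the how attribute; the template identifier and file-local source reference are hypothetical.

```xml
<!-- A reusable template containing a standard OK button -->
<template id="OkButton">
  <part id="Ok" class="JButton"/>
</template>

<!-- Reusing the template: how="replace" substitutes the template's
     content for this placeholder part -->
<structure>
  <part id="Dialog" class="JFrame">
    <part source="#OkButton" how="replace"/>
  </part>
</structure>
```

Changing how to append or cascade would instead merge the template's content with, or selectively override, whatever the placeholder part already defines.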
Figure 2.2: UIML's Meta-Interface Model (Phanouriou, 2000). (The figure shows the
interface component (structure, style, content, behavior) connected through the peers
component (presentation and logic) to UI metaphors, vocabularies, devices/platforms,
applications and data sources.)
The main features of UIML are based on the Meta-Interface Model (MIM) (Phanou-
riou, 2000). MIM divides the UI into three major components: presentation, logic, and in-
terface. The logic component provides a canonical way for the UI to communicate with
an application while hiding information about the underlying protocols, data translation,
method names, or location of the server machine. The presentation component provides a
canonical way for the UI to render itself while hiding information about the widgets and
their properties and event handling. The interface component describes the dialog between
the end-user and the application using a set of abstract parts and events.
To better understand the features of the language, consider the sample UI dis-
played in Figure 2.3. A UIML renderer for Java produced this UI. The UIML code corre-
sponding to this interface is presented in Listing 2.3. The UI itself is quite simple. As
indicated in Figure 2.3, the UI displays the string “Hello World!” Clicking on the button
changes the string's content to “I am red now!” and its color to red.
Figure 2.3: Sample UI built in UIML.
Listing 2.3: UIML code for sample UI in Figure 2.3

<?xml version="1.0" encoding="ISO-8859-1"?>
<!DOCTYPE uiml PUBLIC "-//Harmonia//DTD UIML 2.0 Draft//EN"
    "UIML2_0g.dtd">
<uiml>
  <head>
    <meta name="Purpose" content="Hello World UIML example"/>
  </head>
  <interface>
    <structure>
      <part id="HWF" class="JFrame">
        <part id="HWL" class="Label"/>
        <part id="HWB" class="JButton"/>
      </part>
    </structure>
    <style>
      <property part-name="HWF" name="title">Hello World Window</property>
      <property part-name="HWF" name="layout">java.awt.FlowLayout</property>
      <property part-name="HWF" name="resizable">true</property>
      <property part-name="HWF" name="background">CCFFFF</property>
      <property part-name="HWF" name="foreground">black</property>
      <property part-name="HWF" name="size">200,100</property>
      <property part-name="HWF" name="location">100,100</property>
      <property part-name="HWL" name="font">ProportionalSpaced-Bold-16</property>
      <property part-name="HWL" name="text">Hello World!</property>
      <property part-name="HWB" name="text">Click me!</property>
    </style>
    <behavior>
      <rule>
        <condition>
          <event class="actionPerformed" part-name="HWB"/>
        </condition>
        <action>
          <property part-name="HWL" name="foreground">FF0000</property>
          <property part-name="HWL" name="text">I am red now!</property>
        </action>
      </rule>
    </behavior>
  </interface>
  <peers>
    <presentation base="Java_1.3_Harmonia_1.0"
        source="Java_1.3_Harmonia_1.0.uiml#vocab"/>
  </peers>
</uiml>
An important point to observe here is that the UIML code in Listing 2.3 is
specific to the Java AWT/Swing platform. Hence, Java Swing-specific UIML class names
like JFrame and JButton appear in the code. The UI comprises the label for the string and
the button, both of which are enclosed in a container. This relationship is indicated in the
structure section of the UIML code. The other presentation and layout characteristics of the
parts are indicated in UIML through various properties. All these properties are grouped
together in the style section. Note that each property for a part is identified through a name.
The behavior section of the UIML document specifies what actually happens when
a user interacts with the UI. In this example, two actions are triggered when the user clicks
the button: “Hello World” changes to “I am red now!”, and the text’s color changes to red.
As indicated in Listing 2.3, this is presented in UIML in the form of a rule that in turn is
composed of two parts: a condition and an action.
Currently, there are platform-specific renderers available for UIML for a number
of different platforms. These include Java, HTML, WML, VoiceXML, and the .NET platform
(Luyten and Coninx, 2004). Each of these renderers has a platform-specific vocabulary as-
sociated with it to describe its UI elements, behavior and layout. The UI developer uses the
platform-specific vocabulary to create a UIML document that is rendered for the target plat-
form. The example presented in Listing 2.3 is an example of UIML used with a Java Swing
vocabulary. The renderers are available from http://www.uiml.org/tools/ .
There is a great deal of difference between the vocabularies associated with each
platform. Consequently, a UI developer will have to learn each vocabulary in order to build
UIs that will work across multiple platforms. Using UIML as the underlying language for
cross-platform UIs reduces the amount of effort required in comparison with the effort that
would be required if the UIs had to be built independently using each platform’s native
language and toolkit.
Unfortunately, UIML alone cannot solve the problem of creating multi-platform
UIs. The differences between platforms are too significant to create one UIML file for one
particular platform and expect it to be rendered on a different platform with a simple change
in the vocabulary. In the past, when building UIs for platforms belonging to different
families, the developer had to redesign the entire UI due to the differences between the
platform vocabularies and layouts. Based on my experience creating a variety of UIs for
different platforms, I have found that more abstract representations of the UI are necessary
to solve this problem. The abstractions that I use in my approach include a task model for
all families, and a generic vocabulary for one particular family. These approaches are
discussed in detail in the following sections.
2.2.2 Generic UIML
Within my framework, the family model is a generic description of a UI (in UIML)
that functions on multiple platforms. As indicated in Figure 2.1, there can be more than
one family model. Each family model represents a group of platforms that have similar
characteristics.
In distinguishing family models, I use the physical layout of the UI elements as the
defining characteristic. For example, different HTML browsers and the Java Swing platform
can all be considered part of one family model based on their similar layout facilities. Some
platforms might require a family model of their own. The VoiceXML platform is one such
example, since it is used for voice-based UIs and there is no other analogous platform for
either auditory or graphical UIs.
An additional factor that comes up when defining a family is the navigation
capability provided by the platforms within the family. For example, WML 1.2 (WML2.0,
2001) uses the metaphor of a deck of cards: information is presented on each card and the
end-user navigates between the different cards.
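
A minimal WML deck illustrating this metaphor might look as follows; the card identifiers and text are illustrative.

```xml
<wml>
  <!-- First card of the deck -->
  <card id="daily" title="Daily view">
    <p>
      Appointments for today ...
      <!-- Navigation to another card in the same deck -->
      <a href="#weekly">Weekly view</a>
    </p>
  </card>
  <card id="weekly" title="Weekly view">
    <p>Appointments for this week ...</p>
  </card>
</wml>
```

Each card holds one screenful of content, so a UI that fits in a single desktop window typically spreads over several cards, which is precisely the navigation burden discussed in Section 1.3.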
Building a family model requires one to build a generic vocabulary of UI elements.
These elements are used in conjunction with UIML in order to describe the UI for any plat-
form in the family. The advantage of using UIML is apparent since it allows any vocabulary
to be attached to it. In this framework, I use a generic vocabulary that can be used in the fam-
ily model. Recall that a generic vocabulary is defined to be one vocabulary for all platforms
within a family. Creating a generic vocabulary can solve some of the problems outlined
above. The family models that can currently be built are for the desktop family (Java
Swing and HTML) and the phone family (WML); these are based on the available
renderers, and a specification for each family model already exists.
From Section 2.1, recall that the definition of family refers to multiple platforms that
share common capabilities. Different platforms within a family often differ on the toolkit
used to build the interface. Consider, for example, a Windows OS machine capable of dis-
playing HTML using some browser, and capable of running Java applications. HTML
and Java use different toolkits. This makes it impossible to write an application for one and
have it execute on the other, even though they both run on the same hardware device using
the same operating system. For these particular cases, UIML allows the definition of generic
vocabularies.
A generic vocabulary of UI elements, used in conjunction with UIML, can describe
any UI for any platform within its family. The vocabulary has two objectives: first, to be
powerful enough to accommodate a family of devices, and second, to be generic enough
to be used without requiring expertise in all the various platforms and toolkits within the
family.
As a first step in creating a generic vocabulary, a set of elements has to be selected
from the platform-specific element sets. Second, several generic names, representing UI el-
ements on different platforms, must be selected. Third, properties and events have to be
assigned to the generic elements. A set of generic UI elements (including their properties
and events) was identified and selected. Ali et al. (2002) provides a more detailed descrip-
tion of the generic vocabulary. Table 2.1 shows some of this vocabulary's part classes for the
desktop family (which includes HTML 4 and Java Swing).
Table 2.1: Example of a Generic Vocabulary

Generic Part | UIML Class Name
Generic top container | G:TopContainer
Generic Label | G:Label
Generic area | G:Area
Generic Button | G:Button
Generic Internal Frame | G:InternalFrame
Generic Icon | G:Icon
Generic Menu Item | G:Menu
Generic Radio Button | G:RadioButton
Generic Menubar | G:MenuBar
Generic File Chooser | G:FileChooser
The mechanism that is currently employed for creating UIs with UIML is one
where the UI developer uses the platform-specific vocabulary to create a UIML document
that is rendered for the target platform. These renderers can be downloaded from
http://www.harmonia.com.
The platform-specific vocabulary for Java uses AWT and Swing class names as
UIML part names. The platform-specific vocabulary for HTML, WML, and VoiceXML uses
HTML, WML, and VoiceXML tags as UIML part names, respectively. This enables the
UIML author to create a UI that is equivalent to what is possible in Java, HTML, WML,
or VoiceXML. However, the platform-specific vocabularies are not suitable for a UI author
who wants to create UIML documents that map to multiple target platforms. For
this, a generic vocabulary is needed. A few platform-specific vocabularies are available at
http://www.uiml.org/toolkits/.
2.3 Concurrent Task Tree notation (CTT)
The Concurrent Task Tree (CTT) notation (Lecerof and Paterno, 1998; Paterno, 1999,
2001, 2004; Paterno et al., 2001) provides a hierarchical tree representation for the tasks per-
formed by the user. Each node in the tree represents a task, and the relationships between
sibling nodes are expressed through temporal operators. Details about the different task
categories and temporal operators in the CTT notation are given below.
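
Since the CTT environment saves task models in an XML format, a fragment of such a model might look like the sketch below; the element and attribute names are illustrative of the idea rather than the exact schema produced by the CTT tools.

```xml
<TaskModel>
  <!-- Root abstraction task decomposed into two subtasks -->
  <Task Identifier="AccessWeatherInfo" Category="abstraction">
    <SubTasks>
      <Task Identifier="SelectCity" Category="interaction" Type="selection"/>
      <!-- Enabling operator: SelectCity must complete before ShowForecast -->
      <TemporalOperator Name="Enabling"/>
      <Task Identifier="ShowForecast" Category="application" Type="overview"/>
    </SubTasks>
  </Task>
</TaskModel>
```

The hierarchy of Task elements gives the levels of granularity, while the operators between siblings carry the temporal relationships that the transformation to UIML exploits.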
Task Categories
There are four categories of tasks defined in the CTT notation. These categories are
user, interaction, application and abstraction. User tasks represent cognitive and perceptual
tasks performed by the user. Interaction tasks are performed by the user interacting with
the system directly. Tasks like pressing a key or entering some text are interaction tasks.
Application tasks are tasks performed by the system. These might be performed either in
response to some interaction task done by the user or to elicit some interaction from the
user. Abstraction tasks are complex tasks that could be a combination of user, interaction
and application tasks and are not used to represent tasks at the lowest level in the task tree.
Each task in the task model has a unique identifier that distinguishes it from other tasks in
the tree. For the purpose of this discussion, I only consider the interaction, application and
abstraction task types. Figure 2.4 illustrates a small example task model. The nodes in the
hierarchical structure represent the tasks while the horizontal links between the tasks repre-
sent the temporal operators. This task model contains only the abstraction and interaction
task categories.
Interaction and application tasks can further be classified into different types. Some
types of interaction and application tasks along with their descriptions are given in Tables
2.2 and 2.3 respectively.
Figure 2.4: Sample task model for a weather application
Table 2.2: Different types of interaction tasks

Interaction task type | Description
Selection | The user has to select one or more items from a specified set
Edit | The user has to perform an editing operation
Control | The user triggers some action explicitly
Monitoring | The user performs some action to monitor some ongoing application
Responding to alerts | The user responds to a system-generated alert
Temporal Operators
There are twelve temporal operators in the CTT notation. Some of these operators
are binary in the sense that they operate between adjacent sibling tasks in the task tree.
These are choice ([]), concurrent (|||), concurrent with information exchange (|[]|),
disabling ([>), enabling (>>), enabling with information exchange ([]>>), suspend/resume (|>)
and order independent (|=|). The remaining operators operate on a single task. These are
iteration (*), finite iteration (T(n)), optional ([T]), and recursion. I use the relationships
between these operators to generate the behavior for the UIML. Each of these temporal
operators is defined below:
Choice: The user can select from a set of tasks.

Concurrent: The user can perform actions belonging to two tasks in any order without
Table 2.3: Different types of Application typesApplication task type DescriptionOverview The application presents a summary of a set of dataComparison The purpose of the presentation generated is to assist the
user in comparing the values of some quantities of the sametype
Processing feedback The application indicates its state, like a progress indicatorGenerating alerts The application generates an alert for the userLocate The application gives detailed information about a set of
data to allow the user to find the desired informationGrouping The data that is presented has some one-to-many
relationship among data attributes
any specific constraints.
Concurrent with information exchange: The user can execute two tasks concurrently,
but they have to synchronize in order to exchange information.
Deactivation or disabling: The first task is definitively deactivated once the first action
of the second task has been performed.
Enabling: One task enables a second one when it terminates.

Enabling with information exchange: This is the same as enabling, but with information
being passed between the two tasks.
Suspend/Resume: This operator allows the second task to interrupt the first; once the
second task completes, the first task resumes from the point where it was suspended.
Order Independent: This operator indicates that the two tasks can be performed in
either order, but once one task is started, it has to complete before the other one
can start.
Iteration: This indicates a task that restarts after completion and is performed repeat-
edly until it is interrupted or deactivated by another task.
Finite Iteration T(n): This is similar to the iteration operator, but in this case the task
iterates a fixed number of times.
Optional: This operator indicates a task that is optional to perform.
Recursion: This operator indicates a task that contains an instance of itself in its subtree.
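As an illustration, a task tree with these operators can be sketched in Python. This is a hypothetical encoding of my own (the Task class, its field names, and the weather fragment are invented for this sketch); CTT tools use their own internal formats.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# CTT temporal operators, written with their conventional symbols
# (a hypothetical encoding chosen for this sketch).
CHOICE, CONCURRENT, ENABLING, DISABLING = "[]", "|||", ">>", "[>"

@dataclass
class Task:
    """A node in a CTT task tree.  'operator' is the binary temporal
    operator linking this task to its right sibling; the unary
    operators (iteration, optional) are modeled as flags."""
    name: str
    category: str = "interaction"   # interaction | application | abstraction
    operator: Optional[str] = None  # binary operator to the next sibling
    iterative: bool = False         # iteration (T*)
    optional: bool = False          # optional ([T])
    children: List["Task"] = field(default_factory=list)

# A fragment in the spirit of the weather example: an abstraction task
# whose children are connected by enabling -- SelectCity >> ShowForecast.
root = Task("GetWeather", "abstraction", children=[
    Task("SelectCity", "interaction", operator=ENABLING),
    Task("ShowForecast", "application"),
])
```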
The next chapter presents some related work.
Chapter 3
Related Work
My work has also been influenced by work in a few related areas. These include
model-based systems, task analysis and its related notations, and markup languages for UIs.
Each of these topics is discussed next.
3.1 Model-Based Systems
It is useful to revisit some of the concepts behind model-based UI development
tools (daSilva, 2000; Schlungbaum, 1996; Szekely, 1996) since some of these concepts are
being used to generate multi-platform UIs (Myers et al., 2000). Model-based systems introduce
the concept of abstract models that are used to represent the UI. Eisenstein and Puerta
(2000) and Eisenstein et al. (2001) state that a UI model is an essential step in the design and
development of UIs for mobile computing, making the process less time-consuming
and less error-prone. The primary component in a model-based tool is the model that is used
to represent the UI in an abstract fashion. Different types of models are used in different
systems, including task models, dialogue models, user models, domain models, and appli-
cation models. All of these models represent the UI at a higher level of abstraction than
what is possible with a more concrete representation. The UI developer builds these mod-
els, and they are automatically or semi-automatically transformed into the final UI. My work
borrows two concepts from these model-based systems: 1) using abstract models to repre-
sent the UI, and 2) using a process of transformation between models at various levels of
abstraction. In this section, I survey a few model-based systems.
Mir Farooq Ali CHAPTER 3. RELATED WORK
The models in these systems are typically high-level specifications of the tasks that
users need to perform, data models that capture the structure and relationships of the in-
formation that applications manipulate, specifications of the presentation and dialogue,
and user models (Myers, 1995). The essential idea behind model-based tools is to achieve a
balance between the detailed control of the design and automation. Many model-based
tools were built in the late 80s and early 90s including UIDE (Sukaviriya and Foley, 1993;
Sukaviriya et al., 1993), Interactive UIDE (Frank and Foley, 1993), DON (Kim and Foley,
1990, 1993), HUMANOID (Luo et al., 1993; Szekely et al., 1993), MASTERMIND (Castells
et al., 1997; Szekely et al., 1995), ITS (Wiecha et al., 1990), Mecano (Puerta et al., 1994), Jade
(Zanden and Myers, 1990), Teallach (Barclay et al., 1999; Griffiths et al., 1999, 2001), TRI-
DENT (Bodart et al., 1995) and Mickey (Olsen, 1989).
Pedro Szekely (Szekely, 1996) indicates four clear components of a model-based
interface development environment: the modeling tools, the model, the automated design
tools and the implementation tools. Developers use the modeling tools to build the model.
The automated design tools are used to perform certain design activities that developers
either choose or are forced to delegate to the system. The implementation tool transforms the
model into an executable representation that is linked with application code, and delivered
to the end-users.
The model, which is the main component of the system, can further be decom-
posed into three levels of abstraction. At the highest level are the task and domain models
of the application. The task model represents the tasks that users need to perform with
the application, and the domain model represents the data and operations that the application
supports. The second level of the model is the abstract user interface specification,
which represents the structure and content of the interface in terms of three abstractions:
abstract interaction objects, information elements, and presentation units. The third level of the
model, called the concrete user interface specification, specifies the style for rendering the
presentation units, and the abstract interface objects and information elements they contain.
The concrete specification represents the interface in terms of toolkit primitives such as win-
dows, buttons, menus, etc. The models of different model-based user interface development
environments vary substantially. A few of the tools mentioned earlier are discussed briefly
next.
The User Interface Design Environment (UIDE) is a knowledge-based system that
assisted in user interface design and implementation. The knowledge base serves a number
of purposes, including representing the conceptual model of the interface, transforming
itself to represent different but functionally equivalent interfaces, checking the interface
design for consistency and completeness, and providing run-time help support. The
model in this system is a knowledge-based schema involving objects in an inheritance hi-
erarchy. The objects include attributes and associated actions that indicate what could be
done with the interface. The UIDE system also uses pre-conditions and post-conditions for
the actions.
The original UIDE system automatically generated the user interface based on the
created model. The developer did not have control over the generated interface. This
undesirable feature was rectified in a subsequent version of the tool, called
Interactive UIDE. In this tool, the designer could start with either an application model or the user
interface and an intelligent agent component within the tool maintains consistency between
the two based on a dialog with the designer. Interactive UIDE also includes support for both
novice and expert users through the intelligent component called Albert.
The Interactive Transaction System (ITS) was used to develop the EXPO ’92 Guest
Services System in Seville, Spain, a public information and service kiosk network.
ITS had a four-layer architecture. The action layer implemented
back-end application functions. The dialog layer defined the content of the user interface,
independent of its style. Content specified the objects included in each frame of the inter-
face, the flow of control among the frames and the actions associated with each object. The
style rule layer defined the presentation and behavior of a family of interaction techniques
while the style program layer implemented primitive toolkit objects that were composed
by the rule layer into complete interaction techniques. The ITS system did not provide any
graphical editor to generate the rules.
HUMANOID requires designers to build interfaces by constructing a declarative
model of how the interface looks and behaves. It provides a modeling language comprising
five semi-independent dimensions: application semantics, which represent the
objects and operations of an application; presentation, which defines the visual appearance of
the interface; behavior, which defines the input gestures associated with the presented objects;
dialogue sequencing, which defines the ordering constraints for executing commands and
supplying inputs to commands; and action side-effects, which define the actions executed
automatically.
MASTERMIND tries to combine the best features from two earlier systems, UIDE
and HUMANOID, while trying to avoid their shortcomings. It uses a declarative language
for its models. There are three models used in MASTERMIND: Application model that
defines the capabilities of the system, Task Model that describes the tasks that users can
perform with a system, and Presentation Model that defines the visual appearance of the
interface. One of the goals of MASTERMIND was to maintain a centralized knowledge base
that could be accessed and maintained by various tools used in the construction of the user
interface.
Mecano automatically generates the user interface using a domain model or on-
tology. The domain model is used to capture the objects in a particular domain and their
interrelationships. The domain model could then be used to generate dialogue specifica-
tions. Mecano also supports reuse of its domain models with minor variations. The main
contribution of Mecano is the use of a domain model that was a novel concept in model-
based systems.
Although model-based tools provide many advantages over other user interface
development tools, one of the main drawbacks of these systems is that the automatically
generated interfaces were not of very good quality. It is not feasible to produce good quality
interfaces for even moderately complex applications from just data and task models. One
more limitation of some of the earlier systems was the lack of user control over the process
of UI generation.
Some recent approaches to using models for building web-sites include WebML
(Bonifati et al., 2000; Ceri et al., 2000) and the AutoWeb system (Fraternali, 1999; Fraternali
and Paolini, 2000). The basic idea behind WebML is to develop a conceptual specification
for a web-site using four different models: structural, hypertext, presentation and person-
alization. These models are adapted from the E/R model and UML class diagrams. The
AutoWeb system uses a similar notation called Hypermedia Design Model-lite (HDM-lite)
that utilizes concepts from database modeling to develop conceptual models for web-site
development. Once these models are built, an automatic transformation is done to yield the
final web pages. One other recent approach that uses a model-based system for multi-device
UIs is presented in Vanderdonckt et al. (2001a) that is based on context of use (Vanderdonckt
et al., 2001b). A couple of other model-based systems that are used in developing multi-
device collaborative applications are Manifold (Marsic, 2001) and WebSplitter (Han et al.,
2000).
3.1.1 TERESA
TERESA (Transformation Environment for inteRactivE Systems representAtions)
is an integrated tool that supports model-based development of UIs for multiple platforms
(Paterno and Santoro, 2002; Mori et al., 2003, 2004; Berti et al., 2004; Bandelloni and Paterno,
2004; Marucci et al., 2004). Some of the platforms for which UIs are generated by TERESA
include HTML, WML and VoiceXML. The tool uses a task model that is semi-automatically
transformed to yield MPUIs. It too uses the same CTT task model that I use in this work. In
fact, the CTT notation has been developed by the same group that built TERESA.
In the TERESA approach, the developer first builds a task model using the CTT no-
tation. System task models, specific to a particular platform, are created from the original task
model based on filtering. This process of filtering could lead to different tasks being added
for certain platforms, and a possible restructuring of the original task tree. Enabled Task Sets
(ETSs) are then generated from the system task model, based on the presence of the enabling tem-
poral operator. The ETSs are sets of tasks that are enabled at the same time. The ETSs could
be combined using some heuristics to yield presentation task sets (PTS). These PTSs are then
used to generate the Abstract User Interface (AUI), which comprises interactors and vari-
ous operators. The AUI is then used to generate the Concrete User Interfaces (CUIs) and the
final UI, which is heavily platform-dependent. XML is used in various intermediate forms.
It is used to represent the task model, the AUI and the CUI. The TERESA tool allows vari-
ous entry points for the generation of UIs. A developer can start with the CTT or any of the
intermediate representations (AUI and CUI) to get the final UI.
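The ETS extraction step can be sketched in simplified form. The function below is a hypothetical illustration that only splits a flat list of sibling tasks at enabling operators; the actual TERESA algorithm walks the whole task tree and accounts for all temporal operators.

```python
def enabled_task_sets(tasks, operators):
    """Simplified sketch of Enabled Task Set extraction: sibling tasks
    are split into groups at every enabling ('>>') operator, since
    tasks after an enabling become active only when the earlier ones
    terminate.  'operators' holds the binary operator between each
    pair of adjacent tasks."""
    sets, current = [], [tasks[0]]
    for task, op in zip(tasks[1:], operators):
        if op == ">>":          # enabling: the next task starts a new ETS
            sets.append(current)
            current = [task]
        else:                   # e.g. '|||' or '[]': same ETS
            current.append(task)
    sets.append(current)
    return sets

# "EnterName ||| EnterPassword >> Submit" yields two ETSs: the two
# concurrent entry tasks together, then the Submit task alone.
ets = enabled_task_sets(["EnterName", "EnterPassword", "Submit"],
                        ["|||", ">>"])
```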
3.1.2 Model Driven Architecture (MDA)
A parallel can be drawn between the model-based approach for building UIs and the Model-
Driven Architecture (MDA) of the OMG group (http://www.omg.com/mda/) for build-
ing software (Seidewitz, 2003; Kleppe et al., 2003; Raistrick et al., 2004). The MDA approach
also involves building models using the Unified Modeling Language (UML) notation. At
the highest level, the MDA has a Computation Independent Model (CIM), which can be
construed to be a domain model in the model-based UI realm. Its purpose is to capture the
requirements that are specific to that particular domain. The Platform Independent Model
(PIM) at the next level is at a degree of platform-independence to enable being suitable for
a set of different platforms of similar types. It is equivalent to an Abstract UI (AUI) in the
model-based UI realm. The Platform Specific Model (PSM) combines the funcitonality of
the PIM with the requirements of the target platform. Finally, a Platform Model provides
the details of a particular platform and the services it provides. The MDA approach places
special emphasis between model transformations.
3.1.3 Dygimes
The Dygimes system (Dynamically Generating Interfaces for Mobile and Embed-
ded Systems) is another recent model-based system that is used to generate UIs for embed-
ded systems and mobile computing devices (Luyten et al., 2003; Clerckx et al., 2004; Luyten,
2004). This system generates the UI at run-time, in contrast to most systems, which support
development at design-time. The same CTT task model that I use in my work serves
as one of the initial models here too. It is used to derive the dialog model, which in turn
drives the development of the UI. As the first step in extracting the dialog model, the same
enabled task sets (ETS) that are used in TERESA are extracted from the CTT task model.
These ETSs are used to create a State Transition Network (STN), which indicates the tran-
sitions between the different task sets. These transitions are calculated automatically based
on the presence of the temporal operators in the task model. Ultimately, this STN helps
in guiding the navigation dialog between the real UI container elements that represent the
tasks.
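The ETS-to-STN step can be sketched similarly. This is a hypothetical simplification in which each ETS becomes a state and completing one set simply enables the next; the real Dygimes computation derives transitions from the enabling, disabling, and suspend/resume operators in the task model.

```python
def build_stn(task_sets):
    """Build a toy state transition network from Enabled Task Sets:
    each ETS becomes a state, and completing the tasks in one state
    fires a transition to the next state.  (Simplified sketch; real
    transitions are derived from the temporal operators.)"""
    states = [frozenset(ts) for ts in task_sets]
    transitions = {src: dst for src, dst in zip(states, states[1:])}
    return states, transitions

# Three ETSs in sequence: logging in enables browsing/searching,
# which in turn enables checkout (an invented example).
states, trans = build_stn([["Login"], ["Browse", "Search"], ["Checkout"]])
```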
3.1.4 Comparison of UIML with Model-based Systems
It is possible to draw a parallel between UIML and the traditional model-based systems
described above. The relationship between the different models and the correspond-
ing UIML elements is shown below:
Presentation model: <interface> element
Domain model: <logic> element
Platform model: <presentation> element
Abstract UI model: <interface> with generic vocabulary
Concrete UI model: <interface> with platform-specific vocabulary
UIML in its current version lacks a transform model and thus UIML implementers use their
own transforms.
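To make this correspondence concrete, here is a minimal UIML skeleton, parsed with Python's standard XML library. The element names (uiml, interface, structure, peers, presentation, logic) follow the UIML specification; the part id and the vocabulary name in the base attribute are illustrative values.

```python
import xml.etree.ElementTree as ET

# Minimal UIML skeleton: <interface> carries the (abstract or concrete)
# UI model, <logic> the hooks to the domain model, and <presentation>
# names the platform-specific vocabulary (an assumed vocabulary name).
uiml_doc = """\
<uiml>
  <interface>
    <structure>
      <part id="TopLevel" class="Container"/>
    </structure>
  </interface>
  <peers>
    <presentation base="Java_1.3_Harmonia_1.0"/>
    <logic/>
  </peers>
</uiml>"""

doc = ET.fromstring(uiml_doc)
vocab = doc.find("peers/presentation").get("base")
```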
3.2 Markup Languages
I next discuss markup languages, which are an integral part of my work. The pri-
mary language that I use is the User Interface Markup Language. The task modeling nota-
tion that I use also has an XML representation. Since the
advent of the World Wide Web in the early 90s and the emergence of eXtensible Markup
Language (XML) as a standard meta-language, a number of different markup languages
have emerged for creating UIs for different devices. The foremost among these are HTML
(HTML, 1998) for desktop machines, VoiceXML (Voicexml, 2004) for voice-enabled devices
and WML (WML2.0, 2001) for small hand-held devices. The Wireless Markup Language
(WML), a part of the Wireless Application Protocol (WAP), is an XML-based language de-
signed primarily for devices with small screen-sizes and limited bandwidth, including cel-
lular phones and pagers. WML uses the metaphor of a deck of cards to represent a UI with
a user having to navigate between different cards that are grouped together like a deck.
VoiceXML is a markup language for specifying interactive voice response applications. It is
designed for creating audio dialogs that feature synthesized speech, digitized audio, recog-
nition of spoken and Dual-Tone Multi Frequency key input, recording of spoken input, tele-
phony, and mixed-initiative conversations. XML itself is a meta-language that allows the
definition of other languages. There have been various other standards developed in con-
junction with XML including XSLT that allows XML to be converted to other formats based
on some rules.
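As a small illustration of the deck-of-cards metaphor, the following hypothetical WML deck contains two cards with a link navigating from one to the other; the ids and content are invented, and the snippet omits the WML DOCTYPE for brevity.

```python
import xml.etree.ElementTree as ET

# A two-card WML deck: the user navigates from the "cities" card to
# the "forecast" card via an in-deck link (href="#forecast").
wml_deck = """\
<wml>
  <card id="cities" title="Cities">
    <p><a href="#forecast">Forecast</a></p>
  </card>
  <card id="forecast" title="Forecast">
    <p>Sunny, 72F</p>
  </card>
</wml>"""

deck = ET.fromstring(wml_deck)
cards = [c.get("id") for c in deck.findall("card")]
```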
Another XML-based language is XUL (XML-based User Interface Language) that
is used to develop cross-platform UIs. XUL (http://www.mozilla.org/projects/xul/)
is primarily oriented towards GUIs and provides standard graphical elements like
toolbars, tabbed dialogs, trees, etc.
While all of these markup languages have almost entirely removed the UI develop-
ers’ need to know the specifics about the toolkit, hardware, and operating system, they have
not made a significant contribution toward multi-platform development. Model-based tools
were primarily designed for a single platform and do not consider multi-platform UIs. The
other shortcoming of these tools is that some of them incorporate layout information within
their transformation algorithms, thus making them inextensible. Most of the languages that
are mentioned, such as WML and VoiceXML, are geared toward a single platform.
I next review six XML-based languages that have been designed for the purpose of
device-independence or multi-platform authoring: XIML, UsiXML, AUIML,
XForms, AAIML and DISL. I do a detailed comparison of my work with two of these (XIML and
UsiXML) in Chapter 6.
3.2.1 XIML
The eXtensible Interface Markup Language (XIML), available from http://www.ximl.org/,
is an XML-based language that specifies different aspects of interaction data
for single and multiple-platform user interfaces (Puerta and Eisenstein, 2002, 2004). A UI
representation in XIML is primarily a collection of elements, each of which can be catego-
rized into different components. The language does not impose a limit on the number and
type of components that can be defined, although there are five predefined components.
The five basic predefined interface components are task, domain, user, dialog, and presentation.
XIML supports multi-platform UI development by allowing multiple presentation
components within a single XIML specification. Each presentation component is geared to-
wards a target platform and includes specification of the widgets, interactors and controls
for one particular platform, allowing UIs for multiple platforms to be created. To avoid hav-
ing to create multiple presentation components, XIML permits the creation of intermediate
presentation elements, which could be mapped to multiple target presentation elements
automatically through relations. An example of an intermediate presentation element is
a ‘map-location widget’ (Puerta and Eisenstein, 2004) that could be mapped to a graphical
map control for the web and a text-based display for a PDA. XIML is an integrated language
that provides different models and their transformations within the same specification.
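The intermediate-element idea can be sketched with a plain mapping table. This is a hypothetical Python stand-in for XIML's relation mechanism; the widget and platform names follow the map-location example above but are otherwise invented.

```python
# One intermediate presentation element related to a concrete widget
# per platform, so a single abstract specification can serve several
# targets.  (Invented names; XIML expresses these mappings as relations
# within its XML specification.)
INTERMEDIATE_TO_CONCRETE = {
    "map-location-widget": {
        "html": "interactive-map-control",   # graphical map for the web
        "pda":  "text-location-list",        # text-based display for a PDA
    },
}

def concrete_widget(intermediate, platform):
    """Resolve an intermediate element to the concrete widget for a platform."""
    return INTERMEDIATE_TO_CONCRETE[intermediate][platform]
```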
3.2.2 UsiXML
The USer Interface eXtensible Markup Language (UsiXML) (Calvary et al., 2003;
Limbourg et al., 2004) is another recent XML-based language intended for the purpose of
creating ‘context-sensitive’ user interfaces. This language too has its foundations in model-
based systems work and extensively utilizes models at various levels of abstractions. Some
of the models that are used in UsiXML are task, domain, presentation, dialog, user, platform
and environment. These are arranged at the appropriate levels based on the Cameleon
framework (Calvary et al., 2003). In their work, a context-sensitive user interface is one
that is aware of its context of use and reacts to changes in that context. A change of
platform can be considered one such change of context. Their
approach is broader in scope than this work, but incorporates some of the same models that
are common to TERESA and XIML.
3.2.3 AUIML
The Abstract User Interface Markup Language (AUIML) is another XML-based
language, which has been designed to “allow the intent of interaction with a user to be cap-
tured” (Merrick et al., 2004). The main idea behind the language is to record the interaction
semantics without worrying about the particulars of the devices to be supported. It is an
MVC-style XML language, intended to be independent of any client platform, implementation
language, and UI implementation technology. AUIML allows a single intent specification
to be run on many devices.
A UI is described in terms of three components: a data model, interaction and
presentation. The data model comprises various data types and data structures to gather
and output data, including trees, tables, choices, and panel aggregations. Data
state and validation are controlled by properties on the data type. Interaction with the UI is
abstracted through events. A special kind of event is an action, which signifies completion
of data entry, a request for help text, or invocation of special action-handling code.
Presentation in AUIML comprises properties that are tied to the data elements specified
in the data model. AUIML provides some tool support through a visual XML builder. It
is supported through IBM’s WebSphere server, and renderers for Java Swing and HTML are
available.
3.2.4 XForms
XForms, a World Wide Web Consortium (W3C) standard, represents the next gen-
eration of forms for the Web (Dubinko, 2003; World Wide Web Consortium, 2003). One of
the goals in designing the XForms standard is to support device-independence. It splits tra-
ditional HTML/XHTML forms into three parts: model, instance data, and user interface.
Similar to AUIML, XForms also intends to capture the intent behind an application.
The model, in an abstract representation, describes the purpose of the form. From
the point of view of device-independence or multi-platform UI authoring, this is the most
important part of XForms. This can be further broken down into two: a data model and
a processing model. The data model provides the structure of the data and the processing
model specifies what happens with the data when the form gets ‘submitted’. XForms al-
lows individual properties to be associated with items within the model to constrain them if
necessary; two example properties are “readonly” and “required”. The data model
uses XPath for referencing the data. The processing model defines the processing actions
associated with user actions. It defines a common set of behaviors associated with the form.
It uses the concept of events associated with the form model.
The UI, represented through form controls, is independent of any one particular
platform and allows different representations on different platforms. Some of the form
controls include input, secret, textarea, output, upload, range, trigger, submit,
select1, and select. These controls might be interpreted in different ways on different
platforms. For example, the range control might be represented through a slider on a GUI,
but might be specified through a constraining grammar on a voice platform that limits the
input data that is permitted.
The instance data provides a way for the user-entered data to be collected in an
XML format and possibly sent back to the server. This data is extracted either inline from the
XForms or an XML document on a server, possibly modified by the user and then submitted
in a serialized format back to the server for processing.
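A minimal XForms fragment, assembled here as a hypothetical sketch, shows the split: a model with inline instance data and a submission, and a UI control bound to the data via an XPath ref. The element names and namespace are from the XForms specification; the payment example is invented.

```python
import xml.etree.ElementTree as ET

# Model (with inline instance data and a submission) plus one UI
# control bound to the instance via XPath.  The action URL and the
# payment/amount names are illustrative.
xforms = """\
<xforms xmlns="http://www.w3.org/2002/xforms">
  <model>
    <instance>
      <payment xmlns=""><amount/></payment>
    </instance>
    <submission id="send" action="http://example.com/submit" method="post"/>
  </model>
  <input ref="/payment/amount">
    <label>Amount</label>
  </input>
</xforms>"""

ns = {"xf": "http://www.w3.org/2002/xforms"}
doc = ET.fromstring(xforms)
ref = doc.find("xf:input", ns).get("ref")
```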
Although XForms is intended primarily to supplant the current XHTML/HTML
forms on the web, its separation of concerns between the model, instance data and UI
provides a useful comparison platform with my and other approaches for building multi-
platform UIs.
3.2.5 AAIML
The V2 Technical Committee of the InterNational Committee for Information Tech-
nology Standards (INCITS, at http://www.incits.org/tc_home/v2.htm) is develop-
ing Universal Remote Console (URC) standards. As part of these standards, V2 is defin-
ing a technical framework that includes an abstract language, the Alternative Abstract Interface
Markup Language (AAIML), intended to capture the representation of a UI for a service or
device (Zimmermann et al., 2002). The language is intended to be sufficiently abstract so
that different devices could render a particular description of the UI based on their own
capabilities.
The abstract UI description language defines a set of abstract UI elements (called
“abstract interactors”) for input and output operations, each with a particular semantic or
function. An abstract interactor is mapped to a “concrete interactor” available on the URC
platform. For example, the abstract interactor “string-selection” may be rendered as an ar-
ray of radio buttons on a GUI, and as a menu on a voice platform. The final rendering is
dependent on the URC device. AAIML also provides an event model for facilitating in-
teractive user interfaces, driven by a two-way notification mechanism between URC and
target.
3.2.6 DISL
The Dialog and Interface Specification Language (DISL) is another XML-based lan-
guage that extends UIML with a dialog model (Bleul et al., 2004; Mueller et al., 2004; Schaefer
et al., 2004). This language uses the Dialog Specification Notation (DSN) to specify the dif-
ferent dialogs that can appear as part of the UI. It is based on the concept of a state-oriented
dialog specification. The dialog descriptions in the language are independent of the tar-
get device and modality. The state transitions indicating the dialog are implemented in the
behavior section of UIML.
3.2.7 Other Languages
Besides the languages discussed above, there have been a few more languages and
notations for representing UIs for more than one platform. A recent workshop on User In-
terface Development Languages (UIDL) presented most of the languages discussed earlier,
including UIML (Luyten et al., 2004). Richter (2002) discusses an XML-based remote access
platform for mobile devices allowing every user to interact with common kiosks through
their own customized mobile device. Another approach that uses various models including
task, domain, user, application and interaction is presented in Dittmar and Forbrig (1999);
Mueller et al. (2001); Forbrig et al. (2004).
Nichols et al. (2002, 2003, 2004) present a language, similar to AIAP de-
scribed above, that is used to describe the functions of appliances like “televisions, VCRs,
copiers, microwave ovens, and even manufacturing equipment”. The language has been
used to generate both graphical and speech interfaces on handheld computers, mobile phones
and desktop computers. The main features of the specification language include state vari-
ables, commands and explanations. The state variables have types that specify how they
can be manipulated. These are common programming language data types like boolean,
floating-point, integer, etc. One disadvantage of this language is that the platform informa-
tion has to be hard-wired into the specification for each device. It also does not support any
task-based information.
3.2.8 Comparison of Languages
A comparative analysis of four abstract specification languages for UIs, includ-
ing XIML, UIML, XForms, and AIAP can be found in (Trewin et al., 2003, 2004). The
comparison is on the basis of a ‘Universal Remote Console’ scenario, in which the ab-
stract specifications allow a user to access and control any compliant device or service in
the local environment, using any personal device. The paper concludes that AIAP and
XForms are better suited for this scenario compared to XIML and UIML. Souchon and Van-
derdonckt (2003) compares ten different XML-based UIDLs. Each language is evaluated
based on the types of models it supports, its methodology, tool-support, target languages
and platforms, and whether the final desired UI is for a single platform or multiple plat-
forms. The relationship between UIML and other UI standards and UIDLs is discussed
at http://www.oasis-open.org/committees/tc_home.php?wg_abbrev=uiml. I
compare a few of these languages to my approach of combining UIML with CTT later in
Chapter 6 on the basis of their capabilities to generate MPUIs.
3.3 Task Representations
I use a task model as the initial starting point for my development process. The task
model is a formal artifact that emerges out of the task analysis phase in interaction design. I
present a few different task representations in addition to briefly discussing task analysis.
Task Analysis (TA) has been an important component of the usability engineering
process in HCI. “Task analysis is the process of understanding the user’s task thoroughly
enough to help design a computer system that will effectively support users in doing the
task.” (Kieras, 1997) Broadly, the term “task” represents the work or activity that a user
is supposed to perform with a computer system. Peter Johnson (Johnson, 1992) states that
before the actual process of task analysis begins, four things need to be decided: the purpose
of the task analysis, identifying the domain and its characteristics, the tasks themselves,
and finally, the user groups. He also states that in carrying out the task analysis, it should
be clearly specified what the data is, where it is going to be obtained from, and how it is
going to be obtained. Kieras (Kieras, 1997) states that “representing the user’s tasks can be
summarized into four categories: what the user needs to know, what the user must do, what
the user sees and interacts with, or what the user might do wrong”. In general, Task Analysis
(TA) is the process of collecting, classifying and interpreting data on human performance in
work situations and is an integral part of ergonomics (Annett and Stanton, 2000). With the
evolution of user-centered development techniques in HCI, it has also become a part of the
usability engineering process. According to Hix and Hartson, it can be used to build a formal
analytic model to predict user performance or to help drive design (Hix and Hartson, 1993).
Another definition of it is that it is an “umbrella term that covers techniques for investigating
cognitive processes and physical actions, at a high level of abstraction and in minute detail”
(Preece et al., 2002). It is a part of the interaction design phase when used to help in the
design of the interface. It primarily aids in eliciting requirements for the purpose of the
interface design.
Some of the well-known task analysis methods and notations include Hierarchical Task
Analysis (HTA) (Annett and Duncan, 1967), TAKD (Johnson and Johnson, 1991), TAG (Payne
and Green, 1986), different variants of GOMS (Card et al., 1983; John and Gray, 1995; John
and Kieras, 1996a,b; Kieras, 1988) and task scenarios (Carroll, 1995, 2000). The details about
these various techniques are presented later in this section. Task scenarios and HTA are used
to help in the design phase while the GOMS models are used primarily for predictive
purposes. In most of the above techniques, the process of task analysis involves decomposing
the actual tasks that the user performs hierarchically into sub-tasks, goals and procedures
or plans for achieving the goals. For example, the process of HTA involves breaking a task
into subtasks and then each subtask into subtasks until a desired level of granularity for the
subtasks is achieved. Translated to the process of interaction design, developers perform
HTA by observing what the users need and then creating a hierarchical annotated tree-like
structure that represents the task hierarchy.
Some model-based user interface development environments (MB-IDEs) like TRI-
DENT (Bodart et al., 1995) and MASTERMIND (Szekely et al., 1995) use a Task Model as the
high-level specification. The task model describes the tasks that the user performs with the
system and is the formal output of the Task Analysis phase. Typically, these task models are
hierarchical in nature like the HTA structures and usually contain sequencing information
between the tasks. The developer then uses this task model and transforms it automatically
or semi-automatically to generate the user interface. This user interface is used for usabil-
ity evaluation and in turn helps refine the original task model. Figure 3.1 illustrates this
task-model-centric view of the interaction design process.
[Figure omitted: task analysis of the domain produces a task model; tool-supported transformation/generation yields the user interface design, which feeds usability evaluation back into the task model.]
Figure 3.1: Task model-centric process for multiple platforms.
The details of a few task notations mentioned above are now presented.
Hierarchical Task Analysis
One of the earliest forms of Task Analysis is the Hierarchical Task Analysis (HTA).
In HTA, a task is decomposed into necessary goals, subtasks, sub-goals and procedures for
obtaining those goals. One of the strengths of HTA is that it requires the analyst to study
actual task performers to form a detailed model of the task (Annett, 2004). HTA descrip-
tions involve goals, tasks, operations, and plans. A goal is a desired state of affairs. A task
is a combination of a goal and a context. Operations are activities for attaining a goal. Plans
specify which operations should be applied under what conditions. Plans appear as annota-
tions to the tree-structure diagram that explain which portions of the tree will be executed
under what conditions. Each operation in turn might be decomposed into subtasks, leading
to a hierarchical structure. (Kieras, 1997)
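The goal/operation/plan structure that HTA produces can be sketched as a small tree. The "borrow a book" decomposition below is a hypothetical textbook-style example, not one taken from this dissertation:

```python
from dataclasses import dataclass, field

@dataclass
class HTATask:
    """A node in a Hierarchical Task Analysis tree."""
    goal: str                       # desired state of affairs
    plan: str = ""                  # annotation: when/how the subtasks run
    subtasks: list["HTATask"] = field(default_factory=list)

def outline(task: HTATask, depth: int = 0) -> list[str]:
    """Flatten the hierarchy into an indented outline, plans included."""
    line = "  " * depth + task.goal + (f"  [plan: {task.plan}]" if task.plan else "")
    lines = [line]
    for sub in task.subtasks:
        lines.extend(outline(sub, depth + 1))
    return lines

# Hypothetical decomposition of a "borrow a library book" task.
borrow = HTATask(
    "0. Borrow a book",
    plan="do 1, then 2; repeat 2 until found; then 3",
    subtasks=[
        HTATask("1. Go to the library"),
        HTATask("2. Find the required book",
                plan="do 2.1; if absent, do 2.2",
                subtasks=[HTATask("2.1 Search the catalogue"),
                          HTATask("2.2 Ask a librarian")]),
        HTATask("3. Take the book to the checkout desk"),
    ],
)
print("\n".join(outline(borrow)))
```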
GOMS
The GOMS method of analysis is one of the most widely known theoretical meth-
ods in HCI (Card et al., 1983; John and Kieras, 1996a,b; Kieras, 1988). A GOMS analysis is
a description or model of the knowledge that a user must have in order to carry out tasks
on a device or system. It is based on a cognitive model (human processor model) described
by a set of memories and processes and a set of principles underlying their behavior. It is a
predictive modeling technique that defines a model of user performance for interactive com-
puter systems. GOMS defines its human processor as having three interacting subsystems:
the perceptual, motor, and cognitive systems, each with its own memories and processors.
It provides rules for analyzing the tasks and subtasks of the computer users. Estimates of
the frequency and duration of unit tasks are obtained to derive the total time for a task.
This information is used in the early stages of system design. GOMS models are closely
related to HTA in the sense that GOMS models describe a task in terms of a hierarchy of
goals and subgoals, methods, which are sequences of operators (actions) that when executed
will accomplish the goals, and selection rules that choose which method should be applied
to accomplish a particular goal in a specific situation (Kieras, 1997).
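At the keystroke level, such an analysis reduces to summing operator time estimates. The sketch below uses the standard Keystroke-Level Model values from Card, Moran and Newell; the operator sequence for the example task is made up for illustration:

```python
# Standard Keystroke-Level Model operator estimates (seconds), from
# Card, Moran & Newell. The task breakdown below is hypothetical.
KLM = {
    "K": 0.2,   # keystroke (average skilled typist)
    "P": 1.1,   # point at a target with the mouse
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(ops: str) -> float:
    """Total predicted execution time for a sequence of KLM operators."""
    return round(sum(KLM[op] for op in ops), 2)

# Hypothetical method for a small menu-driven unit task:
# think, home to mouse, point, click, think, point, click.
print(predict_time("MHPKMPK"))   # seconds
```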
GOMS analysis can be performed at different stages in the system development. It
can be performed after implementation, after the design specification or during the design
process. For a working system, it can determine whether what the users actually do is what
the developers intended them to do. It can be used after the design is specified to analyze the
intended user goals and predict performance to accomplish tasks. This may lead to change
of the design specifications. The third way GOMS analysis can help is during the design
process itself. This helps weed out complex, inconsistent methods early in the design phase.
One of the primary criticisms of the GOMS method is that it models error-free task
performance without considering the possibility of concurrent tasks. This does not accu-
rately represent typical usage of interactive systems in which users routinely make errors
and their time to accomplish tasks is affected by the interruptions due to error-recovery.
Another criticism is that it requires the developer to create a formal model of the system
assuming a certain type of task-hierarchy that might not relate exactly to how users actu-
ally perform those tasks. Additionally, GOMS analysis can be extremely verbose for fairly
complex systems.
Different variants of GOMS have been developed to overcome some of the prob-
lems mentioned above, described in John and Kieras (1996a,b); Kieras (2004). These include
a more rigorous and sophisticated version that uses a structured, language-like description
of the model called NGOMSL (Kieras, 1988) and a parallel version called CPM-GOMS that
uses cognitive, perceptual and motor operators (John and Gray, 1995). These models predict
execution time, learning time, errors and they identify those parts of an interface that lead
to these predictions, thereby providing a focus for redesign effort.
Other Task Notations
A few other well-known task notations/techniques include Task Analysis for Knowl-
edge Description (TAKD), Task Knowledge Structures (TKS), Task-Action Grammar (TAG)
and User Action Notation (UAN). TAKD uses a task descriptive hierarchy
(TDH) to taxonomically represent a set of tasks performed by the user. Task Knowledge
Structure (TKS) is “a particular task model that is a conceptual representation of the knowl-
edge a person has stored in his or her memory about a particular task” (Johnson and John-
son, 1991). It implies that people develop knowledge structures as they learn and perform
tasks. TAG is a formal, production rule-based description technique for representing mental
models of users in task performance (Payne and Green, 1986). It is concerned with an
evaluation of the learnability of systems. The User Action Notation (UAN) is a formal task- and
user-oriented notation for representing UI designs (Hartson et al., 1990; Hartson and Gray,
1992).
3.4 Other Approaches and Related Techniques
In this section, a few other approaches for building UIs for different devices are
presented. Florins and Vanderdonckt (2004) discusses how an approach of graceful degradation
of features in a UI can achieve some of the goals of multi-platform UIs. Building “plastic
interfaces” is one such method, in which the UIs are designed to “withstand variations of
context of use while preserving usability” (Thevenin and Coutaz, 1999; Calvary et al., 2000;
Thevenin et al., 2001, 2004). Grundy and Zou (2004) presents the AUIT (Adaptable User
Interface Technology) architecture, which supplements Java Server Pages with additional
device-independent tags that allow transformation at runtime to either HTML or WML.
Seffah and Javahery (2004) terms MPUIs “Multiple User Interfaces” and provides a good
introduction to the general problem of MPUI development. ROAM (Chu et al., 2004) is a
runtime system that allows migration of UIs between heterogeneous devices.
3.4.1 Transcoding
Transcoding (Asakawa and Takagi, 2000; Takagi and Asakawa, 2000; Hori et al.,
2000; Huang and Sundaresan, 2000) is a technique used in the World Wide Web to adaptively
convert web-content for the increasingly diverse devices that are being used to access web
pages. This technique converts an HTML web page into the desired format. Transcoding
assumes multiple forms: in its simplest form, semantic meaning is inferred from the struc-
ture of the web page and the page is transformed using this semantic information. A more
sophisticated version of transcoding associates annotations with the structural elements of
the web page and the transformation occurs based on these annotations. Another version
infers semantics based on a group of web pages. Although these approaches work within
a limited context of use, they are not very extensible since it is not always possible to infer
semantic information from the structural elements (i.e. syntactic structure) of a web page.
Transcoding is typically implemented at the web proxy which is an intermediary, residing
between the web client and the web server.
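The simplest, structure-based form of transcoding can be sketched in a few lines: semantics are inferred purely from the markup structure (here, headings and paragraphs are assumed to carry the content), which also illustrates why the approach breaks down when structure and meaning diverge. The page below is invented:

```python
from html.parser import HTMLParser

class SimplifyingTranscoder(HTMLParser):
    """A toy structure-based transcoder: it assumes headings and
    paragraphs carry the page's content and drops everything else,
    roughly as a proxy might reduce a page for a small screen."""
    KEEP = {"h1", "h2", "h3", "p", "a"}

    def __init__(self):
        super().__init__()
        self._stack, self.out = [], []

    def handle_starttag(self, tag, attrs):
        self._stack.append(tag)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1] == tag:
            self._stack.pop()

    def handle_data(self, data):
        # Keep text only when its innermost enclosing tag is "content".
        if self._stack and self._stack[-1] in self.KEEP and data.strip():
            self.out.append(data.strip())

page = ("<html><body><div class='nav'>Home | About</div>"
        "<h1>Weather</h1><p>Sunny, 22 C.</p>"
        "<script>track();</script></body></html>")
t = SimplifyingTranscoder()
t.feed(page)
print(t.out)   # navigation bar and script are discarded
```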
3.4.2 Device Independence
There has been an increasing push in the Web community to provide access to Web
pages for mobile devices. Many standards are being developed for this purpose. One of
the standards is Composite Capabilities/Preference Profile (CC/PP) (Klyne et al., 2004) that
permits various devices with different characteristics to convey their profiles to the web
server. This enables the server to customize the content and style for the device, based on
the profile, and deliver it. There is also work being done in the areas of device-independence
and mobile devices at the World Wide Web Consortium to provide greater access to different
devices.
“Device Independence” is an umbrella term used by the World Wide Web Consor-
tium to encompass the techniques for providing web access to the plethora of different kinds
of devices (Butler et al., 2002; World Wide Web Consortium, 2004). It also refers to tech-
niques for authoring device-independent content that could be accessed by various devices.
Butler et al. (2002) indicates three different ways in which content could be adapted for var-
ious devices. These include intermediate adaptation, client-side adaptation and server-side
adaptation.
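Server-side adaptation of the kind a CC/PP profile enables can be reduced to a dispatch on profile attributes. The attribute names below are invented for illustration and are not CC/PP vocabulary:

```python
# A toy server-side adaptation step: pick a delivery format from a
# simplified device profile of the kind CC/PP conveys to the server.
# The profile keys ("supports_markup", "screen_width_px") are made up.
def choose_delivery(profile: dict) -> str:
    if not profile.get("supports_markup"):      # e.g. a voice browser
        return "voicexml"
    width = profile.get("screen_width_px", 0)
    return "xhtml" if width >= 640 else "wml"

desktop = {"supports_markup": True, "screen_width_px": 1280}
phone   = {"supports_markup": True, "screen_width_px": 120}
print(choose_delivery(desktop), choose_delivery(phone))
```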
3.4.3 Accessible Interfaces
Computer Accessibility was usually associated in the past with providing access
to interactive computer-based systems for people with disabilities (Stephanidis, 2000). In-
creasing emphasis is being placed these days on producing accessible interfaces. One reason
for this is government statutes like Section 508 of the Rehabilitation Act in the United States
(http://www.section508.gov), which seek to provide the same access to information
for individuals with disabilities as that provided to people without disabilities. The World
Wide Web Consortium has a special Web Accessibility Initiative that is geared towards mak-
ing the Web accessible. They have a set of guidelines and checkpoints for web developers to
make sure that their web pages are accessible. Pioneering work in the area of accessibility
has also been done at the Trace Research and Development Center at the University of
Wisconsin-Madison.
Work in the area of accessible interfaces in the past could be broadly sub-divided
into two categories: Product-level adaptation and Environment-level adaptation. Product-
level adaptation involves designing the product or UI from scratch for different target groups
and is an expensive strategy. Environment-level adaptation involves retroactively adapting
the product or UI for disabled people. One example of this approach was in converting a
GUI to an Auditory User Interface for blind users (Mynatt, 1997). This approach does not
work well because Auditory UIs are quite different from visual UIs. Raman (1997) describes
several shortcomings of this approach noting that the conversion of visual to auditory UIs
loses some of the most important characteristics of the visual UI.
Chapter 4
Development Process
This chapter presents a multi-step transformation-based development process for
building MPUIs. I outline each of the steps of this process and present a high-level view of
the transformations. The details behind the process are presented in Chapter 5.
The concept of building multi-platform UIs is relatively new. To envision the devel-
opment process, I consider an existing, traditional approach from the Usability Engineering
(UE) literature. One such approach identifies three different phases in the UI development
process: interaction design, interaction software design and interaction software implemen-
tation (Hix and Hartson, 1993).
Interaction design is the phase of the usability engineering cycle in which the “look
and feel” and behavior of a UI is designed in response to what a user hears, sees or does.
According to Preece et al. (2002, pg. 1-33), there are four basic activities in the process of
interaction design:
1. needs and requirements gathering (Note: Task analysis and modeling occurs here),
2. developing alternative designs for the requirements,
3. building interactive versions of the design (prototyping), and
4. evaluating what is built throughout the process.
These are also basically the same steps for interaction design as described by Dix et al. (2004,
pg. 191-224). This process is also iterative in nature with the design being refined through
the process of prototyping and evaluation.
In current UE practices, this phase of interaction design is highly platform-specific.
Once the interaction design is complete, the interaction software design is created. This
involves making decisions about UI toolkit(s), widgets, positioning of widgets, colors, etc.
The subsequent step involves implementing the interaction software that has been designed.
[Figure omitted: a platform-independent interaction design branches into platform-specific interaction designs (PS1, PS2, PS3), each leading to a PS interaction software design and a PS interaction software implementation.]
Figure 4.1: Usability Engineering process for multiple platforms.
The previous paragraphs describe the traditional view of interaction design in Us-
ability Engineering. This view is highly platform-specific and works well when designing
for a single platform. However, when working with multiple platforms, the process of in-
teraction design has to be split into two distinct phases: platform-independent interaction
design and platform-dependent interaction design.
Platform-independent interaction design implies a high-level design of the overall
system, where the overall tasks performed by the end-user are enumerated. At this point,
the developer is not concerned about platform-specific issues at all. Those are addressed
in the platform-dependent interaction design phase, where a decision is made re-
garding how each platform supports its tasks and what mode of interaction is desired for
each platform. As an example, for a selection task, a design decision has to be made whether
a pull-down list or a radio-button list is used for a GUI platform. These UI elements might
not be available on other platforms, so this decision is irrelevant for those platforms. This
platform-dependent interaction design phase, one for each target platform, leads to differ-
ent, platform-specific interaction software designs that in turn lead to platform-specific UIs.
Figure 4.1 illustrates this process.
4.1 My Approach
[Figure omitted: an Abstract column holds the task model, which is independent of the widgets or layout of any platform; a middle column holds family models in generic UIML (desktop family, small-device family, voice family), each describing the hierarchical arrangement of the interface using generic UI elements; a Concrete column holds platform-specific UIML (Java, HTML, WML, VoiceXML), which is rendered using a platform-specific renderer.]
Figure 4.2: Multi-step process for generating multi-platform UIs.
I have developed a framework that is very closely related to the traditional UE
process. The main building blocks of my framework are the task model, the family model
and the platform-specific UI. The three building blocks are interconnected via a process of
transformation. More specifically, the task model, built using the CTT notation, is trans-
formed into the family model (represented by Generic UIML), and the family model is
transformed into the platform-specific UI (represented by platform-specific UIML) (Ali et al.,
2002; Ali and Perez-Quinones, 2002). This framework is illustrated in Figure 4.2. It should
be noted here that the transformations are not automatic and need developer guidance. As
described in Chapter 3, the model-based user interface development community has exten-
sively used the Task Model as an abstraction for building UIs for a single platform, including
systems like ADEPT (Johnson et al., 1995) and MASTERMIND (Szekely et al., 1995).
The CTT notation in its current form does an excellent job of representing
the tasks and their temporal operators. It also allows the developer to specify which plat-
form(s) need to be supported for any particular task. I supplement the CTT notation with
additional navigation operators for each node in the task model. Navigation operators be-
come relevant when generating behavior for UIML. A detailed discussion of their relevance
follows in the next section, while the internal details are presented in Chapter 5.
This chapter outlines the various steps in my process for creating MPUIs starting
from a task model. The main steps in the process are outlined below. The process is de-
scribed from the point of view of a developer who is building the MPUI. The main steps in
building a MPUI in my process are:
1. Building the task model
2. Annotating the task model
3. Specifying the structural mappings to guide the generation of generic UIML
4. Guiding the generation of platform-specific UIML
5. Customizing the UI for a platform
The reasoning behind each of the above steps is described in the following subsec-
tions.
4.1.1 Building the Task Model
As a first step, the developer builds the task model using the CTT notation (Mori
et al., 2003; Paterno, 1999, 2004) that I described earlier in Chapter 2. This involves speci-
fying the hierarchical task representation, the individual tasks within the task tree and the
temporal relationships between the tasks. Building the task model is part of task analysis
(Annett, 2004), which itself is a step in the process of usability engineering (Hix and Hartson,
1993).
As seen in Section 3.3, the output of the task analysis phase could be at varying
levels of formalism. To make the generation of the later artifacts in my process automated
to a certain extent, I decided on the CTT notation, which is a relatively formal notation.
Besides being a well-accepted notation, it comes with tool support through the CTTE and
TERESA environments (Paterno and Santoro, 2002, 2003; Mori et al., 2004; Berti et al., 2004).
The TERESA tool also produces the task model in XML format, which allows relatively easy
transformation to UIML.
For the sample application introduced in Section 1.3 of Chapter 1, the de-
veloper builds one task model to represent the application. The tasks within the model
represent what the user intends to do with the application on all platforms.
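Because TERESA emits the task model as XML, a transformation tool can walk it with any XML parser. The sketch below assumes a simplified element shape — the real TERESA schema differs — and the task names loosely echo the appointment example:

```python
import xml.etree.ElementTree as ET

# Illustrative only: the element and attribute names below are a guess
# at the shape of a CTT-in-XML file, not the exact TERESA schema.
CTT_XML = """
<Task Id="root" Category="abstraction">
  <Task Id="enter_date" Category="interaction"/>
  <Task Id="enter_time" Category="interaction"/>
  <Task Id="save_appt"  Category="application"/>
</Task>
"""

def list_tasks(elem, depth=0):
    """Yield (depth, id, category) for each task, depth-first."""
    yield depth, elem.get("Id"), elem.get("Category")
    for child in elem:
        yield from list_tasks(child, depth + 1)

root = ET.fromstring(CTT_XML)
for depth, tid, cat in list_tasks(root):
    print("  " * depth + f"{tid} ({cat})")
```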
4.1.2 Annotating the Task Model
After the developer has built the task model, he needs to annotate it in this step.
The annotation is performed with navigation and grouping operators. As the name implies,
the navigation operators help in characterizing the navigation within a particular UI for a
particular platform, while the grouping operators help in specifying the “container-ship” of
UI elements.
The concept of “container” arises when dealing with any UI. All UIs have the no-
tion of a container. A container is some UI element that the user interface designer uses as
a grouping artifact. All things “contained” within that region are “related.” The same idea
applies even to voice UIs, where the designer has a series of “likely responses” acceptable
at a particular point in the dialogue. Containers in desktop applications are usually called
windows or frames. Containers in web browsers are often referred to as web-pages. Con-
tainers in VoiceXML are forms or menus. Containers in WML-like devices are often called
cards (as in a stack of cards).
As described above, the task model should be free of any particular user interface
styles as much as possible. However, as the developer starts thinking about a particular
platform, one of the first design decisions that he must face is which subtasks of a task go
in a container of their own. This decision is based on the complexity of the task and also the target
platform, as putting too many interaction items in a container would most likely overwhelm
the user.
[Figure omitted: (a) Tasks 1, 2 and 3 in a single container; (b) each of the three tasks in its own container.]
Figure 4.3: (a) All UI elements for the tasks ‘contained’ within a single container. (b) Each task is enclosed in a separate container. This case requires extra navigation to perform the various tasks.
But the moment a task is divided into multiple containers, new behavior is intro-
duced in the task. Now the user must navigate from one container to the next. For example,
Figure 4.3 (a) illustrates an example in which all the UI elements to perform three tasks
are first enclosed within a single container. As soon as they are separated into separate
containers (Figure 4.3 (b)), extra UI elements for navigation have to be introduced to allow
performing all the tasks.
I have added a few annotations to the CTT task notation to facilitate this concept
of container-ship and navigation between the various containers. The details of the anno-
tations are presented in Chapter 5. In this step of the process, the developer adds these to
the task model developed in the first step. Referring back to the sample appointment ap-
plication presented in Chapter 1 again, the developer annotates the task model with these
operators for each of the platforms. So there are three annotated task models as a result of
this step, one for each platform family for that example.
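The effect of this step can be sketched as a per-platform layer of attributes over one shared task model. The attribute names ("own_container", "next") are placeholders for the actual operators defined in Chapter 5:

```python
# Sketch of per-platform annotations layered over one shared task model.
# The annotation names are illustrative, not the dissertation's operators.
tasks = ["enter_date", "enter_time", "save_appt"]

annotations = {
    "desktop": {t: {"own_container": False} for t in tasks},  # one window
    "wml":     {t: {"own_container": True}  for t in tasks},  # one card each
}

# Splitting tasks into separate containers forces explicit navigation:
for prev, nxt in zip(tasks, tasks[1:]):
    annotations["wml"][prev]["next"] = nxt

print(annotations["wml"]["enter_date"])
```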
4.1.3 Specifying the Structural Mappings
As explained in Chapter 2, a UIML structure is primarily composed of parts.
Each UIML part needs to have a class name for the UIML to be valid and renderable. In
this step of the process, the developer has to provide mappings that map each task in the
task model to one or more generic UIML parts. Default mappings are provided for each
task based on its category and type. This implies that generic class names are assigned to
the generated parts based on the task category and type. In this step, the developer might
change the default mappings to better customize the generated generic UIML.
A detailed description of generic UIML is presented in (Ali et al., 2004, 2002). The
mappings have to be varied for different platform families since the structure of the UI
might change depending on the platform family. The mechanics of this and the previous
steps are explained in detail in Chapter 5. This annotated task model is used to generate
generic UIML based on the mappings. Both the structure and behavior of the UIML are
generated from the task model. UIML behavior is based on the navigation and grouping
of tasks within the task model and the temporal operators between the tasks themselves.
After the developer specifies the annotations and the mappings, he then generates
generic UIML, which implies generating the UIML structure and behavior.
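A sketch of the mapping step, assuming invented task types and generic class names (the dissertation's actual defaults are given in Ali et al., 2004, 2002):

```python
# Toy table-driven step from (task, category, type) to generic UIML parts.
# The generic class names on the right are illustrative placeholders.
DEFAULT_MAP = {
    ("interaction", "single_choice"): "G:ChoiceList",
    ("interaction", "text_input"):    "G:TextField",
    ("application", "feedback"):      "G:Label",
}

def to_generic_uiml(tasks):
    """Emit a UIML <structure> with one generic part per task."""
    parts = [f'  <part id="{tid}" class="{DEFAULT_MAP[(cat, typ)]}"/>'
             for tid, cat, typ in tasks]
    return "<structure>\n" + "\n".join(parts) + "\n</structure>"

tasks = [("enter_date", "interaction", "text_input"),
         ("am_or_pm",   "interaction", "single_choice")]
print(to_generic_uiml(tasks))
```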
4.1.4 Generating Platform-specific UIML
The generic UIML generated in the previous step is at a level of abstraction that
does not allow it to be rendered, since it represents a family of platforms. For this UIML to be
rendered, it needs to be transformed into more concrete platform-specific UIML. Depending
on the particular platform family, it is possible to generate multiple instances of platform-
specific UIML from one generic UIML description. For example, it is possible to generate
platform-specific UIML for the Java platform and XHTML, both of which belong to the
desktop family. This code generation is a table-driven process that maps generic UIML
parts to one or more platform-specific UIML parts. A detailed discussion of this process of
generating platform-specific UIML is presented in Chapter 5.
So far, the generic UIML that has been generated comprises only the structure
and behavior sections. There is no style section generated yet. As discussed in Chapter
2, the style section within UIML is composed of properties. The developer is required
to intervene at this point to associate properties with the generic UIML parts. The properties
at this point have to be generic for the platform family.
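The table-driven translation can be sketched as a one-to-many lookup per generic part. The platform class names below are examples, not the dissertation's actual tables:

```python
# Toy generic-to-platform translation tables; the class names are examples.
PLATFORM_TABLES = {
    "java":  {"G:ChoiceList": ["javax.swing.JComboBox"],
              "G:TextField":  ["javax.swing.JTextField"]},
    # A labelled field may need two parts in XHTML: a label plus the input.
    "xhtml": {"G:ChoiceList": ["select"],
              "G:TextField":  ["label", "input"]},
}

def specialize(generic_parts, platform):
    """Map each generic part to one or more platform-specific part classes."""
    table = PLATFORM_TABLES[platform]
    return [cls for g in generic_parts for cls in table[g]]

print(specialize(["G:TextField", "G:ChoiceList"], "xhtml"))
```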
4.1.5 Customizing the UI for a Platform
Once the platform-specific UIML is generated, the developer can customize and
tailor it for the particular target platform by adding even more platform-specific properties.
For example, the developer might desire a particular color scheme for the XHTML-Internet
Explorer 6.0-Windows XP platform. So he adds these properties to the generated platform-
specific UIML. Once the properties are added, the platform-specific UIML can then be ren-
dered using one of the available renderers. Referring back to the appointment example of
Chapter 1 again, the developer now adds platform-specific style and presentation properties
to the platform-specific UIML for each of the three platforms. These properties are specific
to that particular platform.
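This customization step amounts to appending a style section of properties to the generated platform-specific UIML. The snippet below follows UIML's property element form; the part names and values are from the hypothetical appointment example, not the dissertation's code:

```python
# Sketch of the final customization step: attaching platform-specific
# style properties to generated parts, in UIML's
# <property part-name="..." name="...">value</property> form.
def style_section(props):
    rows = [f'  <property part-name="{part}" name="{name}">{val}</property>'
            for part, name, val in props]
    return "<style>\n" + "\n".join(rows) + "\n</style>"

# Hypothetical properties for an XHTML/Internet Explorer 6.0 target.
ie6_props = [("enter_date", "background", "#ffffcc"),
             ("save_appt",  "text",       "Save")]
print(style_section(ie6_props))
```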
At this point, the platform-specific UI is ready to be rendered on that platform.
4.2 Discussion
A high-level view of my process for building MPUIs has been presented so far in
this chapter. This whole process is a series of steps from the abstract towards the concrete.
The task model is a highly abstract notation in which there is no notion of the actual UI.
As the developer annotates the task model and specifies mappings, this helps transform the
abstract entity towards a more concrete realization.
[Figure omitted: Building the task model → Annotating the task model → Specifying the mappings → Customizing the PS-UIML, producing in turn a task model (CTT), an annotated CTT task model, G-UIML, and PS-UIML; dashed lines run back between the phases.]
Figure 4.4: Schematic showing repeated iteration between the different phases of building a MPUI. The dashed lines indicate that the developer might have to take more than one pass to generate the final UI.
The process of interaction design for a single platform is highly iterative, with the
developer having to revise the initial design due to prototyping and evaluation. I envision
my process to be iterative in a similar fashion. The generation of the family model (generic
UIML) from the task model could possibly lead to the task model being revised and refined.
Similarly, the generation of the platform-specific UIML and its rendering could lead to the
generic UIML and/or the task model being updated. This iterative multi-step process is
illustrated in Figure 4.4. The figure also indicates the artifact produced as a result of each
step. For example, the first step produces a CTT task model as its output.
[Figure omitted: the usability engineering realm (platform-independent interaction design, PS interaction design, PS interaction software design, PS interaction software implementation) is linked by thick vertical arrows to the CTT and UIML phases (building the task model, annotating the task model, specifying the mappings, customizing the PS-UIML).]
Figure 4.5: Schematic tying in main phases in my process to a usability engineering process for MPUIs.
Based on this schematic illustrated in Figure 4.4, it is possible to create a link back
to the usability engineering process for multi-platform UIs illustrated in Figure 4.1. Each
‘box’ in Figure 4.4 could be linked to one or more phases of the MPUI usability engineering
process. This is illustrated in Figure 4.5. The thick vertical arrows indicate the links between
the usability engineering realm and the different phases in my process.
The first link is between platform-independent interaction design and building the
task model. It is clear that, at this point, the developer does not and should not consider
any platform-specific characteristics of the UI. The developer is only concerned about the
tasks supported on all platforms and builds the task model to reflect this. Incorporating any
platform-specific information in the task model at this point just contaminates it. The second
step in my process of annotating the task model is platform-specific and is related to both
platform-specific interaction design and platform-specific interaction software design. The
navigation operators are related to interaction design and the grouping/container operators
are related to interaction software design.
Specifying the mappings is related to both the platform-specific interaction soft-
ware design and implementation. The design aspect comes into the picture since the developer
could use different kinds of mappings for any particular task. The final step of customiz-
ing the UI for a particular platform is solely related to the UE phase of platform-specific
interaction software implementation.
The primary advantage of showing links between these two is that an experienced
developer who is familiar with traditional UE development processes could easily understand
the different phases in my process and relate the two. This also indicates that my approach,
although for multiple platforms, is not much different from traditional approaches.
Another view of my process is of the data flow that occurs between the different
steps. Figure 4.6 illustrates this view. The oval boxes in the middle indicate the integral
steps in my process (annotating, generating the generic UIML, customizing the UI for a
specific platform and finally rendering it) and the elliptical boxes on the top indicate input
to a couple of these steps.
To summarize, in this chapter I presented a high-level view of a multi-step process
for building MPUIs, indicating the purpose behind each step and the artifact produced by
each phase. In the next chapter, I present the details of each step of this process.
[Figure 4.6 shows the data-flow pipeline: Task Model (CTT) → Specify Annotations (containers + navigation) → Generate Generic UIML → Customize UI → Render UI, with intermediate artifacts CTT + maps, generic UIML (G-UIML) and platform-specific UIML (PS-UIML), and with structural mappings and style properties as inputs to individual steps.]

Figure 4.6: Schematic indicating data-flow between different phases of my process. The artifact produced out of each phase is also indicated in the figure.
Chapter 5
Process Implementation Details
In this chapter, I detail each of the steps in my process that was briefly outlined in
Chapter 4. I also provide an example UI built for two platforms starting from a task model
and following the steps in my process.
5.1 Annotating the Task Model
The navigation attributes and the grouping operator are used to aid in the develop-
ment of MPUIs in my framework. This section describes the new operators and where they
are added to the task model. It also describes the implications for the generic UIML that
is produced. This corresponds to step 4.1.2 of the process discussed in the previous chap-
ter. Some relevant properties or attributes of each task in a task model in the CTT notation
include its unique id, category (abstraction, interaction or application), type (dependent on
the category), description and temporal operator (with its right sibling). A sample CTT task
model with one root node and three child nodes, along with the XML notation for this task
model indicating just the tasks with their ids, is represented in Figure 5.1 and Listing 5.1.
The XML notation is produced by the TERESA tool Mori et al. (2004). This task model is
used to demonstrate the various navigation and grouping annotations and what effect they
have in the generated generic UIML.
Figure 5.1: A simple task model with one root task and three subtasks.
Listing 5.1: Abbreviated CTT XML notation for task model above
<?xml version="1.0"?>
<TaskModel>
  <Task id="Root">
    <SubTask>
      <Task id="Task1"/>
      <Task id="Task2"/>
      <Task id="Task3"/>
    </SubTask>
  </Task>
</TaskModel>
5.1.1 Navigation Attributes
Each task can have one of three navigation attributes: contains, menustyle, and in-
dependent. These attributes are applicable only to task nodes in the task model tree that
are not at the leaf level. I now show the task model in Figure 5.1 in three different navigation
styles to illustrate the navigation attributes.
Contains: This navigation attribute indicates that the parent task is mapped to some
container that contains whatever parts its subtasks are mapped to. An example of this is
shown in Figure 5.2, along with the relevant XML. No extra navigation needs to be generated
for this particular operator.
Figure 5.2: A container with three subtasks.
Listing 5.2: XML code indicating contains navigation operator as attribute.
<TaskModel>
  <Task id="Root" navigation="contains">
    ...
  </Task>
</TaskModel>
Menustyle: This navigation attribute for a task indicates that it has to be organized in the
form of a menu with selection of each menu-item leading to the particular subtask. An
example of this for the simple task model shown earlier, along with the relevant XML, is
given below in Figure 5.3.
The figure indicates that a menu is created with three menu-items, each of which
leads to a separate container for the subtasks. The arrows indicate the navigation paths from
the menu to the subtask containers and back. There might also be more navigation paths be-
tween the subtask containers based on the temporal operators between the subtasks. These
are shown in Figure 5.4. Note that there is no navigation attribute for the subtasks that are
at the leaf-level.
Each UIML part is required to have a unique identifier and a class. The class
indicates the nature of the particular part. I have omitted the class for some of the parts
above since they are dependent on some of the other attributes of the task including its
category, type and mappings. The mappings are described in detail later in this chapter.
Figure 5.3: A menu in HTML with three options, each leading to one of the subtasks.
Listing 5.3: XML code indicating menustyle navigation operator as attribute.
<TaskModel>
  <Task id="Root" navigation="menustyle">
    ...
  </Task>
</TaskModel>
Independent: This navigation strategy indicates that each subtask of a particular task can
be in an independent container. Figure 5.4, along with the relevant XML notation, shows
this structure. The lines in the figure indicate the navigation paths between the three con-
tainers. The name for this particular navigation style indicates independence from the parent
container. The navigation paths vary between the various subtasks since the temporal oper-
ators (shown in Figure 5.1) between them are different. The temporal operator between the
first two tasks is of type “choice”, while the temporal operator between the second and third
subtasks is of type “enabling”. The enabling temporal operator indicates that there can only
be a path in one way. The use of these temporal operators in such a fashion is not unique to
my approach. As an example, the enabling operator is used to generate enabled task sets in
the TERESA tool (Mori et al., 2003, 2004).
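The derivation of navigation paths from the temporal operators can be sketched in a few lines of code. This is an illustrative model only, not the actual implementation; the operator names and the function are assumptions for the example:

```python
# Illustrative sketch: deriving navigation paths between sibling subtask
# containers from CTT temporal operators. A "choice" operator yields a
# two-way path between the containers; an "enabling" operator yields a
# one-way (forward) path only.

def navigation_paths(subtasks, operators):
    """subtasks: ordered task ids; operators[i] relates subtasks[i] and subtasks[i+1]."""
    paths = []
    for i, op in enumerate(operators):
        src, dst = subtasks[i], subtasks[i + 1]
        if op == "enabling":          # T1 >> T2: path in one direction only
            paths.append((src, dst))
        elif op == "choice":          # T1 [] T2: either container may be reached
            paths.append((src, dst))
            paths.append((dst, src))
    return paths

# The task model of Figure 5.1: Task1 [] Task2 >> Task3
print(navigation_paths(["Task1", "Task2", "Task3"], ["choice", "enabling"]))
# → [('Task1', 'Task2'), ('Task2', 'Task1'), ('Task2', 'Task3')]
```

The one-way path from Task2 to Task3 reflects the enabling operator; the bidirectional pair between Task1 and Task2 reflects the choice operator, matching the arrows in Figure 5.4.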
The use of these navigation operators illustrates that the task model need not
Figure 5.4: Three independent containers for the subtasks, with navigation paths between them.
Listing 5.4: XML code indicating independent navigation operator as attribute.
<TaskModel>
  <Task id="Root" navigation="indep">
    ...
  </Task>
</TaskModel>
change for different navigation styles to be derived in the generated UI. The same origi-
nal task model is being used in all the different cases, but different navigation operators are
inserted depending on the particular target platform.
An example of the UIML generated for one of these operators (menustyle) is shown
in Listing 5.5.
Listing 5.5: UIML code generated based on the menustyle navigation operator in the task model.

<part id="Root" class="...">
  <part id="Rootmenu" class="G:Menu">
    <part id="Task1menuitem" class="G:MenuItem"/>
    <part id="Task2menuitem" class="G:MenuItem"/>
    <part id="Task3menuitem" class="G:MenuItem"/>
  </part>
</part>
<part id="Task1" class="...">
  <part id="T1Root" class="G:Link"/>
</part>
<part id="Task2" class="...">
  <part id="T2Root" class="G:Link"/>
</part>
<part id="Task3" class="...">
  <part id="T3Root" class="G:Link"/>
</part>
5.1.2 Grouping Operator
The grouping operator is used in instances where the developer might need to
group together some subtasks for containment in the final UI. It shares two attributes with
each task in the task model: a unique identifier and a navigation attribute. However, the
navigation attribute can only assume one of two values: menustyle and contains. An
“independent” navigation attribute would not be useful, since it would mean that no
grouping is performed. A sample grouping and relevant XML code for the task
model is illustrated in Figure 5.5 and Listing 5.6 respectively. Tasks 1 and 2 are grouped
together in this figure.
It can be argued here that having an abstraction task as the parent of the two sub-
tasks with the relevant navigation attribute can fulfill the same purpose. However, the
grouping operator is an artifact that is used only for the purpose of customizing the gen-
eration of UIML downstream. It is not an inherent feature of the task model. Using an
abstraction task changes the inherent nature of the task model, which is intended to capture
the tasks that the user performs with the system and, in an ideal situation, should not be
used to record any other kind of information.
It should be noted here that depending on the particular target family for which
the generic UIML is being generated, the navigation attributes and/or grouping operators
might vary. As an example, the desired navigation style for a mobile platform family might
incorporate menustyle and independent attributes, since the typical user interfaces for small
Figure 5.5: A simple task model with one group of two tasks.
Listing 5.6: XML code indicating grouping of two tasks.
<?xml version="1.0"?>
<TaskModel>
  <Task id="Root" navigation="indep">
    <SubTask>
      <Group id="group1" navigation="...">
        <Task id="Task1"/>
        <Task id="Task2"/>
      </Group>
      <Task id="Task3"/>
    </SubTask>
  </Task>
</TaskModel>
mobile devices are structurally hierarchical, while there might be more contains attributes
used for a desktop family, which might not necessarily be very hierarchical.
5.2 Mapping of Tasks
In the previous section, I have shown the mechanism of the navigation and group-
ing operators. In this section, I discuss the details of step 4.1.3 from the section describing
my approach. As mentioned earlier in Section 2.3, the CTT task model notation supports
four different task categories. In my approach, I use just the abstraction, interaction, and
application tasks. Each task category can be further classified into different types. The
generic family model at the next step comprises generic UIML, which constitutes a
vocabulary of generic parts (widgets in normal UI terminology) and associated events. Each
task in the task model is mapped to one or more parts in the generated UIML. The different
parts of the generic vocabulary associated with the family model can be further classified
into different categories depending on their functionality.
For example, the World Wide Web Consortium uses the concept of a module to
further classify the tags of XHTML (xhtmlmod, 2004). The modularization of XHTML refers
to the decomposition of XHTML tags into a collection of abstract modules, each of which
comprises a group of tags that provide similar functionality. Some of the basic XHTML
modules are Structure, Text, Hypertext, List, Basic Form, and Basic Tables. As the name
suggests, the Structure module contains the structural elements like html, body, head
and title. Similarly, the Text module comprises all the heading, block, inline and flow
tags in XHTML.
Griffiths et al. (2001), in their model-based system named Teallach, define the basic
categories of their Abstract Presentation Model, whose elements are mapped to concrete
widgets. Their basic categories are Free Container, Container, Inputter, Display, Editor, Chooser,
and ActionItem. Both of these approaches indicate that a large set of UI widgets (or tags in the
case of XHTML) can be grouped into some categories and modules based on the similarity
of the widgets.
Following this same strategy and choosing from this basic set of items, I define a
set of abstract categories for the generic vocabulary to facilitate the mapping from the task
model to these categories. Some of the basic categories I use are defined in Table 5.1. Based
on this, I provide a mapping from each task in the task model to a part within a particular
category, based on its task category and task type. Some of these mappings are given in
Table 5.2.

It can be seen in this table that for each task category-type pair, there might be
a number of generic parts to select from. The developer has to make the choice if he
is not content with the default mapping. This is the second annotation to the task model,
besides the navigation operator. As part of this annotation, the developer has to specify the
part category and a part within that category. As an example, if the task category is
Application and the task type is Display, it could potentially be mapped to a part belonging
Table 5.1: Categories of generic UIML parts

Category      Generic UIML parts
Structure     G:TopContainer, G:Area, G:Group, G:IconText
Text          G:Text, G:Label
Hypertext     G:LinkList
List          G:List, G:Checkbox, G:ListItem, G:Menu, G:MenuBar, G:MenuItem, G:PulldownList, G:RadioButton
Inputter      G:Button
Action Item   G:Button
Selector      G:Menu, G:MenuBar, G:MenuItem, G:PulldownList, G:CheckBox, G:RadioButton
Editor        G:Text
Basic Table   G:Table, G:TableCaption, G:TableRow, G:TableCell
to any of Structure, Text or Table categories. So the first choice the developer has to specify
is one of these three categories. The second decision is to specify any one part within that
particular category.
Table 5.2: Mapping of tasks to different part categories

Task category   Task type                Generic UIML part category
Abstraction     -                        Structure
Application     Display                  Structure, Text, Table
Application     Comparison               Text, Table
Application     Locate                   ActionItem
Application     Grouping                 Table
Application     Processing-feedback      Structure
Application     Overview                 Table, Text
Interaction     Selection                Selector, List
Interaction     Edit                     Text, Inputter
Interaction     Control                  ActionItem
Interaction     Monitoring               Structure
Interaction     Responding-to-alerts     ActionItem
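The lookup in Table 5.2, together with the default-versus-developer-chosen selection described above, could be encoded as a simple table. The following sketch is illustrative only; the dictionary covers just a few rows of the table, and the function name is hypothetical:

```python
# Hypothetical encoding of a few Table 5.2 rows: each (task category,
# task type) pair lists its candidate generic part categories. The first
# entry serves as the default when the developer does not choose one.

PART_CATEGORIES = {
    ("Application", "Display"): ["Structure", "Text", "Table"],
    ("Application", "Comparison"): ["Text", "Table"],
    ("Interaction", "Selection"): ["Selector", "List"],
    ("Interaction", "Edit"): ["Text", "Inputter"],
    # ... remaining rows of Table 5.2
}

def part_category(task_category, task_type, choice=None):
    candidates = PART_CATEGORIES[(task_category, task_type)]
    if choice is not None and choice in candidates:
        return choice             # developer-specified mapping
    return candidates[0]          # default mapping

print(part_category("Application", "Display"))             # → Structure
print(part_category("Application", "Display", "Table"))    # → Table
```

A second, analogous choice selects a concrete part within the returned category, as described in the text.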
5.3 Generating Generic UIML
This section describes the details of generating generic UIML, mentioned in subsection 4.1.3
of the previous chapter. After the developer has annotated the task model and specified
the mappings for the various tasks, my process generates
generic UIML. Part of this process of generating generic UIML includes generating the be-
havior of the UI.
5.3.1 Generating the Behavior
The behavior section of UIML describes the events associated with the UI when
the user interacts with it. It is expressed as a set of rules, each of which comprises a
condition and an action. I again use the tasks, the temporal operators and
the navigation operators to generate the behavior. These behavior rules are created in
two ways:
Using temporal operators: Some of the rules are generated based on the temporal
operators between the tasks. For example, if there is an enabling temporal operator between
two tasks, it indicates that the first task in some way enables the second task. This indicates
that there might be a rule in which the condition elaborates some event in the first task
and the action indicates what happens as a result of this event occurring. The generated
behavior rule for the above example between two tasks T1 and T2 might look something
like this:
<rule>
  <condition>
    <event class="G:actionPerformed" part-name="T1"/>
  </condition>
  <action>
    <property name="G:visible" part-name="T2">true</property>
  </action>
</rule>
In UIML, a generic vocabulary comprises parts along with associated properties and
events. So the generated behavior and events also depend on the type of mapping pro-
vided for each task. In the above example, G:actionPerformed is an event for parts
belonging to T1's class. In some cases, the part class corresponding to the task T1 might not
have a corresponding event for navigation. For example, T1 might correspond to a container
class. In this case, either a suitable part (mapped to one of T1's child tasks) might be used
for the purpose of navigation or one might be created.
Using navigation operators: In the earlier step of generating the structure, ad-
ditional UIML parts are created in some cases depending on the navigation style and target
family. Additional behavior rules need to be generated for these parts too. The specific
navigation cases for which the rules are generated are:
i. Menustyle: As discussed earlier, when this navigation operator is used, a menu is cre-
ated with several menu-items, each of whose selection leads to a separate container.
Obviously, there is a need to specify this navigation from the menu-item to the actual
container for the subtask. This is specified as a rule in the behavior section. Addition-
ally, there is a need to specify the navigation back from the subtask to the parent. This
is also specified by a rule in the behavior section.
ii. Independent: When this operator is used, the parent container is eliminated and inde-
pendence is granted to the child subtasks, each in their own container, as explained
in an earlier section. The tasks being independent does not directly imply that
there is no relationship between them. Depending on the temporal operators between
them, navigation parts are created in these containers and rules are also created to
indicate this. This creation of extra parts is indicated in Figure 5.4.
No extra rules are explicitly generated for the contains navigation operator.
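As a rough sketch of this rule-generation step (illustrative only; the helper names are hypothetical, and only the enabling case is handled), the behavior section could be assembled as follows:

```python
# Illustrative sketch of generating UIML behavior rules from temporal
# operators: for each "enabling" pair, emit a <rule> that makes the second
# task's part visible when the first part fires G:actionPerformed. The
# element and property names follow the rule example shown earlier.

def enabling_rule(src, dst):
    return (
        "<rule>"
        f"<condition><event class='G:actionPerformed' part-name='{src}'/></condition>"
        f"<action><property name='G:visible' part-name='{dst}'>true</property></action>"
        "</rule>"
    )

def behavior(tasks, operators):
    """tasks: ordered task ids; operators[i] relates tasks[i] and tasks[i+1]."""
    rules = [enabling_rule(tasks[i], tasks[i + 1])
             for i, op in enumerate(operators) if op == "enabling"]
    return "<behavior>" + "".join(rules) + "</behavior>"

# Task1 [] Task2 >> Task3: only the enabling pair produces a rule.
print(behavior(["Task1", "Task2", "Task3"], ["choice", "enabling"]))
```

A full implementation would add analogous rule generators for the menustyle and independent navigation parts described above.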
I have discussed the generation of the structure and behavior sections of the UIML
document so far. The developer does not specify the layout, style and other presentation-
related information in this first step of the generation process. Once the generic UIML is
generated, the developer can add more generic style properties and customize the behav-
ior for a particular platform family.
One of the main problems of some of the earlier model-based systems was that
a large part of the UI generation process from the abstract models was fully automated,
removing user control of the process. This dilemma is also known as the “mapping problem”,
as described by Puerta and Eisenstein (1999). We have seen that the transformation
and mapping in my process from the task model to the generic family model is developer-
guided and only semi-automatic. This gives the developer more control over the generation
process and alleviates some of the problems faced in earlier model-based systems.
5.4 Generating and Customizing Platform-specific UIML
It can be seen in Figure 4.2 that there needs to be a transition between the different
representations in order to arrive at the final platform-specific UI. There are two different
types of transformations needed. The first type of transformation is the mapping from the
task model to the family model, which I have discussed in the previous section. We have
also seen that this transformation is developer-guided and not fully automated. The second
type of transformation occurs between the generic family model and the platform-specific
UI, and needs developer intervention to a smaller extent. It is a conversion from generic
UIML to platform-specific UIML, both of which can be represented as trees since they are
XML-based. This process can be largely automated. However, there are certain aspects of
the transformation that need to be guided by
the user. For example, there are certain UI elements in my generic vocabulary that could
be mapped to more than one element on the target platform. The developer has to select
what the mapping will be for the target platform. Currently, the developer’s selection of
the mapping is a special property of the UI element. The platform-specific UIML is then
rendered using an existing UIML renderer. There are several types of transformations that
are performed:
• Map a generic class name to one or more parts on the target platform. For example, in
XHTML a G:TopContainer is mapped to the following sequence of parts:
<html> ...
<head> ...
<title> ...
<base> ...
<style> ...
<link> ...
<meta> ...
<body> ...
In contrast, in Java G:TopContainer is mapped to only one part: JFrame.
• Map the properties of the generic part to the correct platform-specific part.
• Map generic events to the proper platform-specific events.
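The first of these transformations, one generic class to one or more target parts, can be modeled as a simple lookup. The table below mirrors the XHTML and Java examples just given but is otherwise illustrative, not the actual transform engine:

```python
# Sketch of the class-mapping transformation: a generic part class maps to
# one or more platform-specific parts. The target lists mirror the examples
# in the text; the table is illustrative and deliberately incomplete.

TARGET_PARTS = {
    "XHTML": {"G:TopContainer": ["html", "head", "title", "base",
                                 "style", "link", "meta", "body"]},
    "Java":  {"G:TopContainer": ["JFrame"]},
}

def map_class(generic_class, platform):
    # Unmapped classes pass through unchanged in this sketch.
    return TARGET_PARTS[platform].get(generic_class, [generic_class])

print(map_class("G:TopContainer", "XHTML"))  # eight XHTML parts
print(map_class("G:TopContainer", "Java"))   # → ['JFrame']
```

Property and event mappings would be handled by analogous per-platform tables.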
Figure 5.6: Screen-shots of a sample form in Java Swing (left) and HTML (right)
In order to allow a UI designer to fine-tune the UI to a particular platform, the
generic vocabulary contains platform-specific properties. These are used when one platform
has a property that has no equivalent on another platform. In the generic vocabulary, these
property names are prefixed by, for example, j: or h:, for mapping to Java or HTML only.
The transform engine automatically identifies which target part to associate the property
with, in the event that a generic part (e.g., G:TopContainer) maps to several parts (e.g., 7
parts for HTML). This is also done for events that are specific to one platform. The multiple
style sections allow each interface to be as complete as the native platform allows. The
generic UIML file will then
contain three style elements. One is for cross-platform style, one for HTML, and one for
Java UIs:
<uiml>
  ...
  <style id="allPlatforms">
    <property id="g:title">My User Interface</property>
  </style>
  <style id="onlyHTML" source="allPlatforms">
    <property id="h:link-color">red</property>
  </style>
  <style id="onlyJava" source="allPlatforms">
    <property id="j:resizable">false</property>
  </style>
</uiml>
In the example above, both a web browser and a Java frame display the title, “My
User Interface”. However, only web browsers can have the color of their links set, so the
property h:link-color is only used for HTML UIs. Similarly, only Java UIs can make
themselves non-resizable, so the j:resizable property applies only to Java UIs. When the
UI is rendered, the renderer chooses exactly one style element. For example, an HTML UI
would use onlyHTML. This style element specifies in its source attribute the name of the
shared allPlatforms style, so that the allPlatforms style is shared by both the HTML and Java
style elements. Figure 5.6 illustrates two interfaces, generated from generic UIML, which are
presented for Java Swing and HTML thanks to a transformation process. The generic UIML
is presented below.
<?xml version="1.0"?>
<!DOCTYPE uiml PUBLIC "-//Harmonia//DTD UIML 2.0 Draft//EN" "UIML2_0g.dtd">
<uiml>
  <head>
    <meta name="Purpose" content="Data Collection Form"/>
    <meta name="Author" content="Farooq Ali"/>
  </head>
  <interface name="DataCollectionForm">
    <structure>
      <part name="RequestWindow" class="G:TopContainer">
        <part name="EBlock1" class="G:Area">
          <part name="TitleLabel" class="G:Label"/>
          <part name="FirstName" class="G:Label"/>
          <part name="FirstNameField" class="G:Text"/>
          <part name="LastName" class="G:Label"/>
          <part name="LastNameField" class="G:Text"/>
          <part name="StreetAddress" class="G:Label"/>
          <part name="StreetAddressField" class="G:Text"/>
          <part name="City" class="G:Label"/>
          <part name="CityField" class="G:Text"/>
          <part name="State" class="G:Label"/>
          <part name="StateChoice" class="G:List"/>
          <part name="Zip" class="G:Label"/>
          <part name="ZipField" class="G:Text"/>
          <part name="OKBtn" class="G:Button"/>
          <part name="CancelBtn" class="G:Button"/>
          <part name="ResetBtn" class="G:Button"/>
        </part>
      </part>
    </structure>
  </interface>
  <peers>
    <presentation base="Generic_1.2_Harmonia_1.0"/>
  </peers>
</uiml>
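The renderer's choice of exactly one style element, with shared properties pulled in through the source attribute, can be sketched as follows (an illustrative model, not the actual renderer):

```python
# Sketch of the renderer's style selection: exactly one platform style is
# chosen, and its source attribute pulls in the shared properties, which
# the platform-specific properties then extend. Styles are modeled as
# dictionaries for illustration.

STYLES = {
    "allPlatforms": {"source": None, "props": {"g:title": "My User Interface"}},
    "onlyHTML":     {"source": "allPlatforms", "props": {"h:link-color": "red"}},
    "onlyJava":     {"source": "allPlatforms", "props": {"j:resizable": "false"}},
}

def resolve_style(style_id):
    style = STYLES[style_id]
    props = {}
    if style["source"]:                  # inherit shared properties first
        props.update(resolve_style(style["source"]))
    props.update(style["props"])         # platform properties are added on top
    return props

print(resolve_style("onlyHTML"))
# → {'g:title': 'My User Interface', 'h:link-color': 'red'}
```

An HTML renderer would call this with "onlyHTML", a Java renderer with "onlyJava"; both inherit the shared title from allPlatforms.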
5.5 Illustrative Example
In this section, I use a small example to illustrate my complete process to generate
final rendered UIs for a couple of platforms (HTML and WML) from a single task model.
Figure 5.7 shows a task model that represents the various steps taken by a user
to complete a typical registration form of the kind required at most e-commerce web-sites.
The steps include entering the user's email address, selecting a password, entering a
shipping address, etc. Each of these tasks is represented in the task model through the
abstraction, interaction or application task categories. As part of building this task model,
the developer also has to specify the ‘type’ of each task, as indicated in Table 5.2.
Figure 5.7: Original task model.
Figure 5.8: Task model with groupings indicated for HTML (desktop family).
Figure 5.9: Task model with groupings indicated for WML 1.1 (small-device family).
In the second step, the developer has to specify the navigation operators for the
tasks, the grouping of tasks and the preferred mappings for each task. This has to be done
for each desired target family of platforms. Figures 5.8 and 5.9 indicate different groupings
for the desktop and small-device families of devices respectively. The groupings are shown
in these figures only to help clarify the concept; the visual representation is not a part of the
CTT notation or development environment. The rationale behind having a larger number
of groupings for the small-device family is the smaller screen real estate of those devices
and the consequent need to reduce clutter by representing fewer UI elements per ‘container’.
As discussed in the previous chapter, the idea behind the groupings is to represent
containership of UI elements in the final rendered UI.
Figure 5.10: The final rendered UI for WML.
Generic UIML, including the structure and behavior sections, is generated as a result of
this step. The developer now has to intervene further to customize the mappings of the
generic UIML parts to platform-specific parts. Based on these mappings, the platform-
specific UIML is generated. This platform-specific UIML is largely devoid of any presen-
tation or stylistic properties. The developer has to add the stylistic properties to customize
the UI for that particular platform. This UIML can then be rendered using the appropriate
renderers
to yield the final UI. Figures 5.10 and 5.11 show the final rendered UIs for WML and HTML
respectively. (The WML screen-shot in Figure 5.10 is courtesy of Openwave Systems Inc.;
Openwave and the Openwave logo are trademarks of Openwave Systems Inc. All rights
reserved.)

The use of the navigation and grouping operators makes it easy to separate the
different UI elements into different containers for the two platforms.

Figure 5.11: The final rendered UI for HTML.

It should also be noted here that the root task in the task model for the desktop family
uses an ‘independent’ navigation operator, while it uses a ‘contains’ operator for the
small-device family. This is due to the nature of the target platforms: separate HTML pages
are needed, while for WML all the different cards can be contained within the same WML
container.
This particular example uses only the ‘contains’ and ‘independent’ navigation op-
erators for its tasks. The ‘menustyle’ operator is not appropriate for this example because
the application requires sequential entry of the required information.
To summarize this chapter, I have presented the details behind my process. I have
shown the use of the navigation and grouping operators, the mapping of tasks and the
generation/customization of generic and platform-specific UIML. I evaluate this process in
the next chapter.
Chapter 6
Functional Comparison
In this chapter, I provide a functional comparison of my work with three other
approaches for building MPUIs (TERESA, XIML and UsiXML), based on a few criteria. In
particular, I provide a detailed comparison with the approach used in the TERESA develop-
ment environment. The primary reason for this comparison is that the initial artifact in both
development processes is a CTT task model. TERESA is also one of the de facto standards
with regard to MPUI development (Paterno and Santoro, 2003; Paterno and Tscheligi, 2003;
Mori et al., 2004). I work out a couple of examples using TERESA and compare the details
of their approach with mine.
Secondly, I demonstrate that my approach works for a few different platforms and
also show that even for a single platform, different UI styles are possible with a change in the
annotations used. I again compare my development process of this MPUI with TERESA’s
approach. I also examine a popular news web-site (BBC news at http://news.bbc.co.
uk ) and analyze how my development process could be used to build such a web-site for
two platforms.
6.1 Comparison to Other Approaches
My approach of using a task model in conjunction with an intermediate representation
language and semi-automatic transformations is comparable in parts to other
approaches. This approach also provides a strong tie-in to existing usability engineering
approaches. This provides a valuable guide to a researcher as a framework for compar-
ing various approaches for building MPUIs. It also helps a practitioner by providing clear
guidelines in building MPUIs.
6.1.1 Description of Comparison Criteria
Before I compare my approach with the others, I outline a few qualitative crite-
ria I use to perform the comparison. These attributes are presented below in the form of
capabilities that each approach provides the developer for building the MPUIs.
1. Specifying navigation: Navigation in a UI refers to the capability provided by the
UI to the end-user for transitioning between different containers associated with user
tasks. Some development processes provide the developer explicit capabilities for
specifying navigation. Other approaches generate the navigation automatically from
one of the other abstract representations.
The first method provides more flexibility to the developer, since it makes it possible
to generate more than one navigation style. The second method is more restrictive
since the developer does not have explicit control over the style of navigation that is
generated. An additional factor to consider is whether the original representations
(task model or other models) have to be modified in order to accommodate different
navigation styles. If so, this is a restrictive feature of the development process.
In comparing and evaluating the different approaches, I will examine them with re-
gard to what kind of capability they provide the developer for specifying navigation.
2. Specifying grouping/containment: One of the primary differences between UIs on
different platforms is the different number of containers and the tasks that they allow
the end-user to perform. As an example, section 5.5 presented a form-filling UI that
had two containers for the HTML platform and four containers for the WML plat-
form. Any MPUI development process provides facilities for specifying containment.
In some cases, the containment is implicitly derived from some abstract representa-
tions. Other processes allow the developer to modify this automatically derived con-
tainership. A third category permits the developer full control over the containment
specification. The first approach is the least flexible among the three, since it gives
little power to the developer. If the containment has to be changed, it means that the
developer has to go back to the abstract representations and modify them. On the
other hand, if the developer can specify containership without being encumbered by
the abstract representations, it greatly increases the power of the approach.
To summarize, the best approach requires: a) little modification of the task model,
b) automatic generation of behavior, c) ease of exploration of different semantic
meanings in navigation, and d) the ability to specify everything by hand if needed.
3. Number/type of different representations for different platforms: In every develop-
ment process, the developer has to work with a number of different specifications in
order to build MPUIs for different platforms. The greater the number and types of
different specifications that the developer has to specify, the more work he has to do in
order to build the MPUIs. The optimal scenario has the developer working with only
a small set of representations to build MPUIs. On the other hand, if the developer
has to work with a large set of representations for generating MPUIs, it is analogous
to having to build the UI separately for each different platform. Obviously, the latter
approach defeats the purpose behind having a unified process for building MPUIs.
4. Relationship to usability engineering processes: There are well established usabil-
ity engineering (UE) processes for building UIs for a single target platform (Hix and
Hartson, 1993; Preece et al., 2002; Rosson and Carroll, 2002) that most UI developers
follow. If a MPUI development process follows a similar process, it is easier for UI
developers who are familiar with traditional UE processes to follow that process. It
is easier for them to relate the various steps of a MPUI development process to the
steps of a traditional UE process. On the other hand, it is difficult for UI developers
to work with any UE development process that is radically different from traditional
approaches.
A strong relationship would mean that the developer does not over-commit to platform-
specific design decisions in the early phases and still has enough capability to
create different MPUIs. A weak process would force the developer to make platform-
specific decisions early on in the development cycle.
Next, I compare and contrast a few approaches/tools based on the above compar-
ison criteria.
6.1.2 TERESA
The most obvious comparison is with TERESA (Mori et al., 2003, 2004), which also
uses a transformation-based approach and the same CTT notation for its task model. How-
ever, those are the only common features. The intermediate artifacts in TERESA are quite
different compared to my approach. In the TERESA approach, the developer first builds a
task model. This task model is filtered based on platform information to generate a platform-
specific system task model. The system task model is used to create Enabled Task Sets (ETSs). An
ETS contains tasks that are enabled at the same time. These tasks might span across sub-
tasks. The ETSs are derived based on the temporal relationship between the tasks. These
ETSs are then combined into Presentation Task Sets (PTSs) (Mori et al., 2004; Paterno and Santoro,
2003). The TERESA tool uses some heuristics to calculate these based on the presence of the
enabling, disabling or enabling with information exchange temporal operator in the task
model.
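The ETS derivation can be sketched as follows. This is a simplified illustration of the heuristic described above, not TERESA's actual implementation; the function name and operator symbols are assumptions.

```python
# Hedged sketch: deriving Enabled Task Sets from a flat list of sibling
# tasks joined by temporal operators. An 'enabling' operator (>>) starts
# a new set, since the right-hand task only becomes enabled after the
# left-hand one finishes; 'choice' ([]) and 'interleaving' (|||) keep
# tasks enabled together.

def enabled_task_sets(tasks, operators):
    """tasks: [t0, t1, ...]; operators[i] relates tasks[i] and tasks[i+1]."""
    assert len(operators) == len(tasks) - 1
    sets, current = [], [tasks[0]]
    for op, task in zip(operators, tasks[1:]):
        if op == ">>":           # enabling: successor starts a new ETS
            sets.append(current)
            current = [task]
        else:                    # choice/interleaving: enabled together
            current.append(task)
    sets.append(current)
    return sets

# Three tasks chained by enabling each land in their own ETS:
ets = enabled_task_sets(
    ["Select location", "Enter note", "Confirm annotation"], [">>", ">>"])
```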
The PTSs are then used to generate the Abstract User Interface (AUI). The AUI
is comprised of interactors, or abstract interaction objects, representing high-level interac-
tion objects associated with the tasks. The AUI can be envisioned as comprising the
static ‘structure’ of the UI, made up of the various presentation elements, and the dy-
namic ‘connections’ representing the dynamic behavior of the UI. TERESA also provides a
set of composition operators: grouping, ordering, hierarchy and relation, to help organize
the interactors. The AUIs generate the Concrete User Interfaces (CUIs) and the final phase
involves code generation for some specific platforms from these CUIs. The output of this
final phase in the TERESA environment is the final user interface (FUI).
As discussed in Chapter 5, the equivalent steps in my process are: build the task
model, annotate it with navigation attributes for each platform, customize mappings for
task to generic UIML parts, generate platform-specific UIML, customize stylistic properties
and use it to render the final UI, either by translation or by interpretation.
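As a rough illustration of the annotation step, the sketch below shows how a task node might carry per-platform navigation attributes. The attribute names, dictionary layout, and the ‘contains’ default are assumptions for illustration, not the actual task-model or UIML syntax.

```python
# Illustrative sketch: a task node annotated with per-platform
# navigation attributes, so the tree itself never changes between
# platforms. All names here are assumed, not the dissertation's syntax.

task_model = {
    "name": "Check weather",
    "navigation": {"html": "contains", "wml": "menustyle"},
    "subtasks": [
        {"name": "Current weather"},
        {"name": "Immediate forecast"},
        {"name": "Later forecast"},
    ],
}

def navigation_for(task, platform):
    # Assumed default: fall back to 'contains' when no attribute is given.
    return task.get("navigation", {}).get(platform, "contains")
```

A transformation that walks this tree can pick the attribute for the target platform and leave the hierarchy untouched.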
I now discuss how TERESA measures against the comparison criteria mentioned
in subsection 6.1.1.
1. Specifying navigation: TERESA does not provide an explicit method for specifying
the navigation between various containers. Some of the navigation paths between var-
ious containers are deduced from the presence of the enabling temporal operator. For
other navigation paths, extra ‘transition’ tasks have to be inserted in the task model
to accomplish this. In the provided examples that come with the TERESA tool, there
are interaction tasks with names like ‘Back to main page’ and ‘Back to start’ that indi-
cate that these tasks are inserted into the task model for the sole purpose of navigation
between various presentation units. Depending on the target platform, a different
number of tasks have to be inserted into the task model in different positions.
Figure 6.1: Simple task model.
Consider the simple task model shown in Figure 6.1 (the same task model from Figure
5.1). There is one main root task and three subtasks. Here I show how my three
navigation operators (contains, menustyle and independent) can be expressed in
this task model using TERESA.
In order to generate the contains navigation style (Figure 5.2 in which all the UI ele-
ments for the subtasks are contained within a single container), the task model shown
in Figure 6.1 will not have to be modified. In this case, TERESA supports this naviga-
tion style well.
For the menustyle navigation style (Figure 5.3), the task model shown in Figure 6.1
does not suffice. The navigation links shown in Figure 5.3 (from the menu container
to the containers for each of the children and back from each of the child containers to
the menu container) need extra interaction tasks to be introduced in the task model.
One possible task model that could generate this kind of navigation in the final UI is
represented in Figure 6.2.
Figure 6.2: Modified task model for menustyle navigation style.
It is clear from Figure 6.2 that for each of the subtasks, an enabling task preceding it
(to indicate the menuitem leading to this task) has to be introduced and a disabling
task following it (to allow the navigation back from the child container to the menu
container). The structure of the original task model has to be altered in order to allow
this navigation style.
Figure 6.3: Modified task model to be able to generate independent navigation style.
The independent navigation style (Figure 5.4 in which each task is in a separate con-
tainer) is very difficult to generate from the simple task model in Figure 6.1 above.
The task model shown in Figure 6.3 generates this kind of navigation behavior.
It is apparent that the task model has to be modified extensively with the introduction
of various tasks for the sole purpose of producing this navigation style.
The above examples illustrate the difficulty in the TERESA approach when the de-
veloper wants to associate multiple navigation styles with the same task model. The
same task model cannot be used without substantial modifications to generate differ-
ent navigation styles. This is a severe limitation of the TERESA approach compared
to my process. In contrast, all the developer has to do in my approach is to introduce
the desired navigation attribute in each task. There is no need to alter the original task
model to generate different navigation styles.
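To make the contrast concrete, here is a hedged sketch of how the three navigation operators could be interpreted when deriving containers and navigation links. The container and link representation is invented for illustration and is not the dissertation's actual transformation.

```python
# Illustrative sketch: mapping a parent task's navigation operator and
# its N subtasks onto containers plus navigation links, without ever
# touching the task tree itself. Names and structures are assumptions.

def containers_and_links(subtasks, operator):
    if operator == "contains":
        # One container holding every subtask; no links needed.
        return [subtasks], []
    if operator == "menustyle":
        # A menu container plus one container per subtask, with links
        # from the menu to each child and back again.
        containers = [["menu"]] + [[t] for t in subtasks]
        links = ([("menu", t) for t in subtasks] +
                 [(t, "menu") for t in subtasks])
        return containers, links
    if operator == "independent":
        # One container per subtask, linked in sequence.
        containers = [[t] for t in subtasks]
        links = list(zip(subtasks, subtasks[1:]))
        return containers, links
    raise ValueError(operator)
```

Switching the operator changes only the derived containers and links; the subtask list itself is never restructured, which is the point of the contrast with TERESA.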
2. Specifying grouping/containment: Containment of tasks in TERESA is directly asso-
ciated with the PTSs. Each PTS is mapped to a UI container element in the final UI.
As explained above, these PTSs are an amalgamation of ETSs. Once the PTSs have
been calculated, it is not possible in TERESA to split the tasks within a PTS into sepa-
rate containers. A limited amount of grouping is also done within the AUI using the
grouping composition operator. This is an obvious limitation of the TERESA system.
In contrast, the developer is not restricted by the task model structure in specifying
containment in my approach. As illustrated in section 5.5, the developer has the ca-
pability to group tasks within a subtask using the grouping operator in my process.
Consider a task that is comprised of a large number of sub-tasks, all of them related
by the choice temporal operator. TERESA keeps all these tasks in a single container.
On a small-device platform, the end-user might prefer having these tasks split across
two or more containers. The developer cannot perform this grouping based on
the existing task model in TERESA. He would have to split the tasks into further
subtasks to accomplish this. This sort of containment specification is not a problem in my
approach.
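A minimal sketch of the kind of grouping described above, assuming a grouping operator that simply caps the number of subtasks per container for a small-screen platform; the function name and fixed-size policy are illustrative assumptions, not the actual operator semantics.

```python
# Hedged sketch: splitting a long run of choice-related subtasks into
# fixed-size containers, something PTS-based containment cannot do
# without restructuring the task model itself.

def group(subtasks, per_container):
    return [subtasks[i:i + per_container]
            for i in range(0, len(subtasks), per_container)]

group(["t1", "t2", "t3", "t4", "t5"], 2)
# → [["t1", "t2"], ["t3", "t4"], ["t5"]]
```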
3. Number/type of different representations for different platforms: The UI developer
has to initially create one task model that is then filtered on the basis of the target plat-
form to generate multiple system task models. Depending on what kind of tasks are
supported on different platforms, the developer has to work on each of these system
task models separately and further refine them. So, although the developer starts off
with a single task model, he has to work with individual task models to customize
them for the particular target platforms. As seen in the previous item, the system task
model for each platform might differ dramatically and the developer has to work with
each. Additionally, the developer has to work with the different presentation sets and
AUIs for each platform. In contrast, the developer works with just one task model in
my approach and just needs to annotate it for different platforms. This reduces the
amount of work that he has to do.
4. Relationship to usability engineering processes: TERESA does require some sort of
early commitment since the developer has to build the initial task model keeping in
mind all the target platforms. He has to incorporate the necessary transition tasks either
in the initial task model or the system task model. Doing this at the system task level
is acceptable since that model is specific to a certain platform. However, having to do it at the
initial step is a weak point in the system. In contrast, my approach does not require
the developer to worry about the various platforms in the initial step. The platform- or
family-dependent customization comes in the later phases.
6.1.3 XIML
The eXtensible Interface Markup Language (XIML), available from http://www.
ximl.org/ , is an XML-based language that specifies different aspects of interaction data
for single and multiple-platform user interfaces (Puerta and Eisenstein, 2002, 2004). A UI
representation in XIML is primarily a collection of elements, each of which can be catego-
rized into different components. The language does not impose a limit on the number and
type of components that can be defined, although there are five predefined components.
The five basic predefined interface components are task, domain, user, dialog, and presentation.
XIML supports multi-platform UI development by allowing multiple presentation
components within a single XIML specification. Each presentation component is geared to-
wards a target platform and includes specification of the widgets, interactors and controls
for one particular platform, allowing UIs for multiple platforms to be created. To avoid hav-
ing to create multiple presentation components, XIML permits the creation of intermediate
presentation elements, which could be mapped to multiple target presentation elements
automatically through relations. An example of an intermediate presentation element is
a ‘map-location widget’ (Puerta and Eisenstein, 2004) that could be mapped to a graphical
map control for the web and a text-based display for a PDA. XIML is an integrated language
that provides different models and their transformations within the same specification.
1. Specifying navigation: The closest comparison to the navigation operators in my
process in the XIML language is the dialog component. However, the dialog model is
derived from the task and domain components and is very closely tied to them (Puerta
and Eisenstein, 2004). Some of the navigation also depends on how the presentation
component is created for a target platform. As an example, consider a portion of a task
component in XIML (Puerta and Eisenstein, 2004):
<TASK_MODEL ID='tm1'>
  <TASK_ELEMENT ID='t1' name='Make annotation'>
    <TASK_ELEMENT ID='t1.1' name='Select location'/>
    <TASK_ELEMENT ID='t1.2' name='Enter note'/>
    <TASK_ELEMENT ID='t1.3' name='Confirm Annotation'/>
  </TASK_ELEMENT>
</TASK_MODEL>
The corresponding initial dialog component based on this task component looks like
this:
<DIALOG_MODEL ID='im1'>
  <DIALOG_ELEMENT ID='i1.1' name='Make annotation'>
    <DIALOG_ELEMENT ID='i1.2' name='Select location'>
      <DIALOG_ELEMENT ID='i1.2.1' name='Select map point'/>
      <DIALOG_ELEMENT ID='i1.2.2' name='Specify latitude'/>
      <DIALOG_ELEMENT ID='i1.2.3' name='Specify longitude'/>
    </DIALOG_ELEMENT>
    <DIALOG_ELEMENT ID='i1.2' name='Enter note'/>
    <DIALOG_ELEMENT ID='i1.3' name='Confirm Annotation'/>
  </DIALOG_ELEMENT>
</DIALOG_MODEL>
It is clear that the dialog component is tightly coupled with the task component.
This dialog component is linked with interactors to permit interaction, part of
which is navigation. The tight coupling of the dialog component with the task compo-
nent is a limitation when trying to generate different navigation styles as discussed in
subsection 6.1.1. There is no such limitation in my work.
2. Specifying grouping/containment: This is dependent on how the presentation model
is specified. Although Puerta and Eisenstein (2004) discusses the concept of a software
middleware ‘mediator’ that assists in development of a presentation model based on
the target platform, the details of how this is done are not presented. Depending on
the target platform, the presentation model can comprise a varying number of con-
tainers. This is guided by the mediator. However, the available literature (Puerta and
Eisenstein, 2004) fails to mention the details of how the containment is done and how
much involvement is required by the developer. This prevents a fair comparison with
my approach in which the developer has to specify some containers explicitly, but he
is provided great flexibility in doing so.
3. Number/type of different representations for different platforms: The developer
starts off with building the task, domain and user components in this approach too.
The later components (dialog and presentation) are generated partially from this first
set of components and need to be semi-automatically customized for the target plat-
form. The middleware helps in this generation and customizing. Again, the available
literature fails to mention the details of how much involvement is required by the de-
veloper. This prevents a fair comparison with my approach, in which the developer
does not have to create many different representations.
4. Relationship to usability engineering processes: Compared to my process, XIML too
delays platform-specific decisions until later phases. However, the structure of the
task component, to a certain extent, influences design decisions later on. Decisions
made at an earlier phase do affect the artifacts that are generated at later phases. This is
a weakness of the XIML process.
6.1.4 UsiXML
The User interface eXtensible Markup Language (UsiXML) (Calvary et al., 2003;
Furtado et al., 2004; Limbourg et al., 2004; Limbourg and Vanderdonckt, 2004b,a,c; Lim-
bourg, 2004) is another recent XML-based language intended for the purpose of creating
‘context-sensitive’ user interfaces. This language too has its foundations in model-based
systems work and extensively utilizes models at various levels of abstractions. A unique
feature of this XML-based language is that it represents the UI in the form of a graph. Most
other languages consider a tree-representation of a UI instead. UsiXML also allows ‘multi-
path’ UI development. This implies that it supports forward engineering (abstract to con-
crete), reverse engineering (concrete to abstract) and context of use adaptation (at same level
of abstraction) within its many models. This is primarily done by graph transformations.
Some of the models that are used in UsiXML are task, domain, presentation, di-
alog, user, platform, environment, mapping, and transformation. These are arranged at
the appropriate levels based on the Cameleon framework (Calvary et al., 2003). In their
work, a context-sensitive user interface is a user interface that exhibits some capability to
be aware of the context and reacts to changes of this context. Change of platform can be
considered to be one change of context. Their approach is broader in scope than mine, but
incorporates some of the same models that are common to TERESA and XIML. The avail-
able literature for UsiXML (Limbourg et al., 2004; Limbourg and Vanderdonckt, 2004b,a,c;
Limbourg, 2004) has not considered any non-desktop platforms like small-device (WML) or
voice (VoiceXML) for its examples.
1. Specifying navigation: Navigation specification in UsiXML is deferred to the concrete
UI development stage. When the abstract UI is reified to produce more concrete con-
tainers, navigation is specified by a set of transitions between the various containers.
This is also governed by how the containers are created. The containment in UsiXML
is dependent on the structure of the task model and that in turn restricts the naviga-
tion style that is generated. My method of specifying navigation styles is more flexible
compared to this approach.
2. Specifying grouping/containment: In UsiXML, most of the containment of UI ele-
ments corresponding to the user tasks is done based on the level in the task tree.
For example, tasks at the highest level are automatically assigned to container win-
dows. The tasks at one level lower than these tasks are assigned to container boxes
and inserted in the higher windows. This indicates a level of automation based on the
structure of the task model and is a limitation of the approach. Again, in contrast, my
approach is more flexible.
3. Number/type of different representations for different platforms: UsiXML provides
the UI developer with multiple starting points for developing UIs. One path is the
forward engineering approach that is comparable to mine: starting from an abstract
representation and moving to a more concrete representation. Even for a single plat-
form, the developer has to work with a number of different models: task, mapping,
domain and possibly transformation. The developer also has to carefully guide the
generation of the abstract and concrete UIs. All this indicates that the developer has to
work with a lot of different models for the UI generation. This increases the work that
he has to do. Currently, UsiXML comes with a number of different tools for generating
these models, but most of them are individual editors that are not completely integrated.
In contrast, my approach requires the developer to work with only the task model
and UIML representations.
4. Relationship to usability engineering processes: UsiXML provides clear guidelines
on how the development process should proceed. Comparing the forward engineer-
ing approach in UsiXML to mine, it provides clear steps on how the task and domain
models are transformed into the AUI and subsequently the CUI. UsiXML also defers
decisions about navigation until the concrete UI generation phase. So on a compara-
tive basis, it is on par with my system regarding early commitment of design decisions.
6.2 Example Application
In this section I present a weather application on different platforms as an example
to demonstrate the multi-platform generation capability of my approach. I also build the
same UI for the same three platforms using TERESA and compare both approaches. Con-
sider a weather reporting application that displays the current weather and forecast for a
certain region (Blacksburg, in this instance). The forecast could further be expanded into
‘immediate forecast’ and ‘forecast for later’. Figure 6.4 illustrates the task model for this. If
the developer uses a ‘contains’ navigation operator for the root task in the task model - to
display the complete UI in a single container - one possible UI that is derived from the task
model for HTML is illustrated in Figure 6.5. In this example, generic UIML for the desktop
family is generated from the task model. The developer has to specify the properties for
HTML and the mappings of the generic UIML parts to platform-specific HTML parts. Some
of the properties that the developer uses in this example include positioning and formatting
properties to properly arrange the UI, and properties to label the information and links.
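The mapping step described above can be pictured as a simple lookup from generic parts to platform-specific parts plus stylistic properties. The part and element names below are illustrative guesses, not the actual UIML generic vocabulary used in the dissertation.

```python
# Illustrative sketch: binding generic UIML part classes to HTML
# elements, with per-part stylistic properties. All names here are
# assumptions for illustration only.

generic_to_html = {
    "G:TopContainer": "body",
    "G:Area":         "div",
    "G:Text":         "p",
    "G:Link":         "a",
}

html_properties = {
    "G:Text": {"content": "Current Blacksburg Conditions"},
    "G:Link": {"label": "Forecast"},
}

def concrete_part(generic_class):
    # Assumed fallback: use a generic block element when no mapping exists.
    return generic_to_html.get(generic_class, "div")
```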
Figure 6.6 illustrates a different type of UI that is derived from the same task model,
for the same target family (desktop) and platform (HTML), but with a different navigation
operator (‘menustyle’) for the root task. Figure 6.6 indicates two containers for the same
task model as in Figure 6.5. Note that there are three separate HTML pages now generated
- two for containing the information that is to be displayed and one for the ‘menu’ to allow
navigation to these containers. Note also that there are links back from each of the containers
to the main menu page. These are generated based on the use of the menustyle navigation
operator used in the task model. The dashed arrows in the figure indicate these navigation
paths.
The task model (Figure 6.4) is not sufficient in TERESA to generate the simple
menu-like navigation of Figure 6.6. Extra transition tasks have to be introduced in order
to generate the necessary navigation from the parent menu container and back from the
child containers to the parent. A partial version of this task model, with some tasks deleted
for clarity, is shown in Figure 6.8. The dashed circles indicate the extra nodes that have been
added to the original task model. This indicates that for the same target platform, if a differ-
ent navigation style is desired, the task model has to be altered to introduce new transition
tasks.
Figure 6.9 indicates three separate containers, one for the current forecast, and one
each for immediate forecast and later forecast, respectively. These are desired for the voice
platform. It also indicates the menustyle navigation style of a few top level tasks. Listing 6.1
shows one possible rendering with VoiceXML for this task model. The dialog between the
system and the end-user is indicated.
Listing 6.1: VoiceXML listing for weather example

** Dialing in progress...
Connection established
OUTPUT --> Please say one of the following:
           Check todays weather
           Check forecast
USER   --> Check todays weather
OUTPUT --> Current Blacksburg Conditions
           Temperature is 86 Degrees,
           Humidity is 80%,
           Dewpoint is 45 degrees Fahrenheit,
           Wind is Calm,
           Pressure is 40.5 inch,
           Conditions are Clear,
           Visibility is Poor,
           There are no Clouds,
           Sunrise is at 7:00 AM,
           Moonrise is at 8:45 PM, and
           Moonset is at 6:00 AM.
           Please say one of the following:
           Check todays weather
           Check forecast
USER   --> Check forecast
OUTPUT --> Please say one of the following:
           Check immediate forecast
           Check later forecast
           Check weather
USER   --> Check immediate forecast
OUTPUT --> The immediate forecast is: Increasing clouds.
           Low ranging from the lower 40s to upper 50s.
** Trying to hang up...
Again, it should be emphasized that the same task model, albeit with different
navigation and grouping operators, is used as the starting point for the HTML examples
in Figures 6.5 and 6.6 and VoiceXML here in my approach. The basic hierarchy of the task
model has not been altered in any shape or form for the purpose of deriving the UIML. In
the dialog listed above, the menu at the highest level asks the user to select between the
current weather or forecast. If the user selects the current weather, then the current weather
is presented and the user is taken back to the previous menu and prompted with the same
choices again. If the user selects the second option, then he is prompted to select either
current forecast or forecast for later. In both the instances, after the forecast is presented,
he is taken back to the previous menu. This navigation is generated based on the operators
specified by the developer.
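The menu-and-return behavior in the dialog above can be pictured as rendering a ‘menustyle’ container to a VoiceXML menu. The element names follow VoiceXML 2.0 (`<menu>`, `<prompt>`, `<choice>`), but the rendering function itself is an illustrative sketch, not the dissertation's actual UIML-to-VoiceXML transform.

```python
# Hedged sketch: emitting a VoiceXML-style menu for a 'menustyle'
# container. The function name and container representation are assumed.

def voice_menu(menu_id, prompt, choices):
    """choices: list of (spoken label, target dialog id) pairs."""
    lines = [f'<menu id="{menu_id}">', f'  <prompt>{prompt}</prompt>']
    for label, target in choices:
        # Each choice returns control here after the target completes,
        # matching the back-to-menu navigation in the dialog above.
        lines.append(f'  <choice next="#{target}">{label}</choice>')
    lines.append('</menu>')
    return "\n".join(lines)

print(voice_menu("root", "Please say one of the following:",
                 [("Check todays weather", "current"),
                  ("Check forecast", "forecast")]))
```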
Again, TERESA requires the presence of additional transition tasks to make this
sort of navigation possible. One possible task model for TERESA, with a few tasks deleted
for clarity, with these extra tasks highlighted is shown in Figure 6.11. The original task
model is shown in Figure 6.10.
Finally, Figure 6.12 illustrates many more containers for the same task model for
the small-device family (WML). Screen-shots of the corresponding WML UI are shown in
Figure 6.13. Some of the possible containers are not shown in the figure for lack of space.
The various navigation paths between the different WML cards are indicated in the figure
with the dashed arrows. It is apparent that quite a bit of navigation is necessary
for a small-screen device like this.
Figure 6.4: Task model for a weather application.
Figure 6.5: The corresponding HTML rendering of the UI generated from the task model in Figure 6.4.
Figure 6.6: Task model with two containers and a ‘menustyle’ navigation style for the main task for HTML (desktop family).
Figure 6.7: The corresponding HTML UI for Figure 6.6 with two containers and one menu to allow navigation to the containers.
Figure 6.8: Partial task model in TERESA with extra tasks (enabling and disabling) added to generate menustyle navigation style. The extra transition tasks are indicated by the dashed circles.
Figure 6.9: Task model with groupings indicated for voice family. There is one menu at the root level of the UI and one at the second level for containers two and three.
Figure 6.10: Original partial subtree of task model showing some of the subtrees for displaying the current weather information for the WML platform.
Figure 6.11: Same task model in TERESA with extra nodes (enabling and disabling) added to generate menustyle navigation style. The extra nodes are indicated by the dashed circles.
Figure 6.12: Task model with containers indicated for WML (small-device family). Note that there are more containers and a different style of menu generated.
Figure 6.13: Screen-shots showing final rendered UIs for WML for the task model in Figure 6.12. The dashed arrows indicate the possible navigation between the various WML cards.
Figure 6.14: Original partial subtree of task model showing some of the subtrees for displaying the current weather information for the WML platform.
Figure 6.15: Same task model in TERESA with extra nodes (enabling and disabling) added to generate menustyle navigation style. The extra nodes are indicated by the dashed circles.
Figure 6.16: Original partial subtree of task model showing some of the subtrees for displaying future weather conditions for the WML platform.
Figure 6.17: Same task model in TERESA with extra nodes (enabling and disabling) added for each subtree to generate menustyle navigation style with a larger set of containers. The extra nodes are indicated by the dashed circles.
Figures 6.14, 6.15 and 6.16, 6.17 represent the original and altered task models in
TERESA for the two separate sub-tasks (‘check current weather’ and ‘check forecast’) of the
original task, respectively. Only a few of the tasks are shown in the figures for clarity.
It is apparent that if the developer desires a deeply hierarchical navigation structure in the
UI for a particular platform, he has to add transition tasks at each and every level to ac-
complish this. This works for the menustyle navigation style. As seen in subsection 6.1.2, the
structure of the task model has to be altered even more drastically for independent navigation
style. This alters the structure of the task model through the introduction of extra
transition tasks, which do not contribute to the actual tasks that the end-user is
performing and exist solely to aid navigation. If the
developer does not account for all possible target platforms a priori, he has to do
substantial work to modify the task models to accommodate a new platform.
In contrast, my approach does not need the introduction of any extra tasks in the
task model for different navigation styles. The same task model can be used for different
navigation styles, regardless of the target platform.
Table 6.1 summarizes this discussion of my work with TERESA’s in terms of fea-
tures and artifacts produced during each process. The purpose behind this comparison is to
familiarize a reader who knows the TERESA approach to my work.
Table 6.1: Feature and artifact comparison with TERESA.

Feature                    My work                            TERESA
Abstract representations   CTT and generic UIML               CTT, ETS, PTS
Concrete representations   Platform-specific UIML             CUI and FUI
Navigation                 Specified through task attributes  Specified through transition tasks
Grouping/Containment       Explicit operator                  Derived from ETS and PTS
6.2.1 Discussion
Depending on the target platform, the containers are used to drive decisions be-
hind the style of navigation in my process. For example, the weather application presented
earlier uses fewer containers for the HTML platform on a personal com-
puter, since screen real-estate is not a restricting factor. Voice has to be processed in a
serial fashion and, consequently, voice UIs take the form of a dialog between the user and
the system. In VoiceXML, the ‘containers’ are menus or forms and have to be suitably used.
Small-screen platforms like WML on cell phones have limited screen real-estate and there is
a limit on how much information could be presented in each container. As a result, UIs for
these platforms need to be broken up into a large number of smaller containers. Regardless
of the target platform, the use of appropriate containers and navigation operators facilitates
the UI development process.
The weather example presented above illustrates the power of the navigation operators
in creating different styles of UIs for different platforms. Combined with the grouping
operator and the flexibility in mapping tasks to various UIML parts, they afford the UI
developer great power in creating MPUIs for different platforms.
By allowing the developer to flexibly create mappings between tasks and UIML parts,
my approach permits the easy introduction of new platforms into the framework. As an
example, consider an accessible UI built for a vision-impaired end-user who needs text
in large fonts. Large fonts might require a limited amount of text in each container.
All the developer needs to do is use the grouping operator to restrict the number of UI
elements within each container and specify the appropriate navigation operators. My
implementation takes care of generating the necessary navigation paths between the
different containers.
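As a sketch of how such an annotation might be written down, the fragment below uses a hypothetical XML serialization of the annotated task model; the group element, the maxElements attribute and the task names are all invented for illustration (the notation used in this dissertation is graphical):

```xml
<!-- hypothetical serialization: cap each container at three large-font elements -->
<task name="Read article" nav="contains">
  <group nav="menustyle" maxElements="3">
    <task name="Get headline"/>
    <task name="Get summary"/>
    <task name="Get body"/>
  </group>
</task>
```

From such a specification, the implementation would generate the containers and the navigation paths between them automatically.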
6.3 Case Study
In this section, I examine a popular news web-site (http://news.bbc.co.uk),
available on two platforms: a desktop machine through a traditional browser and a
mobile phone, and analyze how my development process could be used to build this
MPUI. The task model indicating some of the high-level tasks for this application is
shown in Figure 6.18. Note that these tasks should properly be named along the lines
of ‘Get England news’, but I omitted the word ‘news’ to conserve space.
The HTML web-page along with the partially annotated task model, indicating the
navigation and containment of a few tasks, for this web-site is shown in Figure 6.19. There
are three main discernible areas on the web-page as indicated by the dashed boxes in the
figure. On the left hand side of the page is a navigation menu that allows access to other
sections of the web-site to get news about that particular topic. The center and right hand
side of the page contain highlights and links to the top stories of the day. The middle section
of the page contains links to top stories from other sections of the web-site. For example,
the headline under ‘Africa’ links to a news story that can also be accessed by clicking on the
‘World’ link on the left side of the page and then ‘Africa’.
It is obvious from the web-page that nearly all tasks (getting news stories) are ac-
cessible to the end-user via direct links to the news stories, or by links to other sections of the
web-site that in turn provide links to the stories. This indicates that the top-level task can
have a contains navigation attribute, as shown in the task model at the bottom of the figure.
The top stories section, in the top center and right, comprises one or more sentences
about each news story and a link to the detailed story. This can be construed as a menu,
since the actual stories are accessible only by clicking on the links. So a menustyle
navigation attribute on the subtask ‘Get top stories’ yields that particular section of
the page.
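In a hypothetical textual rendering of these annotations (the attribute values echo the contains and menustyle operators discussed above; the XML serialization itself is invented for illustration), the top of the BBC task model might read:

```xml
<task name="Get news" nav="contains">
  <task name="Get top stories" nav="menustyle"/>
  <task name="Get World" nav="menustyle">
    <task name="Get Africa"/>
    <task name="Get Americas"/>
    <!-- further regions elided -->
  </task>
  <!-- further sections (UK, Business, ...) elided -->
</task>
```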
The left-hand navigation menu is also easily obtained by creating a group out of the
remaining top-level tasks in the task model and applying a menustyle navigation attribute
on them. The link between the rendered UI and the task model is shown in the figure.
The creation of the middle section of the page, where the links to the headlines of the
other topics are shown, is trickier. The page displays headlines from around the world,
categorized by region. Getting the news from each such region is a task in the task
model (Figure 6.18) with multiple subtasks. For example, the ‘Get Africa’ abstract task
shown in Figure 6.18 might have more than two subtasks (news stories). Links to only one
or two of these are shown on the main page. An initial strategy is to create a container with
all the sub-tasks of the ‘Get World’ task with navigation attribute menustyle. Recalling the
next step in building the UI, these tasks have to be mapped to generic UIML parts. To avoid
displaying all the news stories, only the relevant news stories that have to be displayed
should be mapped to generic UIML parts. The others should be ignored for this first page.
So far, I have shown how to generate the left-hand menu, the top stories section and part of
the middle section.
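The selective mapping described here, where only the stories chosen for the front page become generic UIML parts, might be sketched as follows. The part ids and the G: class names are placeholders rather than the actual generic vocabulary:

```xml
<uiml>
  <interface>
    <structure>
      <part id="WorldSection" class="G:Container">
        <!-- only the stories selected for the front page are mapped -->
        <part id="AfricaHeadline" class="G:Link"/>
        <part id="AmericasHeadline" class="G:Link"/>
      </part>
    </structure>
  </interface>
</uiml>
```

Subtasks that are left unmapped simply produce no parts, so they are absent from this page without any change to the task model itself.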
The remainder of the web-page is shown in Figure 6.20. The top section here is
similar to the middle section previously described. There is one headline from each of the
tasks that are siblings of the ‘Get World’ task. A strategy similar to what was used earlier
for the left-hand menu can be used here. The navigation menu at the bottom of the page is
also similar to the left-hand menu. The links lead to the same pages; only the presentation
of the menus is different. An easy way of generating these navigation menus is to apply
the menustyle navigation attribute a fixed number of times (three in this case). This
iterative operator does not exist in my current implementation, but could easily be
incorporated.
A WML version of this page and its rendering, along with a partially annotated
task model, is shown in Figure 6.21. Navigation on that platform is largely hierarchical
and is easily obtained by applying the menustyle navigation attribute to each of the
higher-level tasks in the tree.
6.3.1 Limitations
As described above, my approach succeeds in creating most of the structural elements
present on the initial web-site for the HTML platform. However, it falls short in certain
respects.
1. Every page on the BBC news web-site has the ubiquitous navigation menu on the left-
hand side. On some pages, the menu is context-sensitive. For example, if the user
clicks on the ‘World’ link on the main page, he is presented with a page that retains
the original links in that left-hand menu but also contains links to other categories
within World news, all within the original menu. Even if an unaltered menu had to be
present on every page of the site, the developer would have to do some manual work to
implement this. One solution to this problem is to implement the navigation menus,
left-hand side and bottom, in separate frames.
2. A fundamental assumption of my work is that all tasks are implemented on all target
platforms. This might not necessarily be true in some cases. For example, the BBC
web-site has audio and video news reports that are not suitable for browsing using a
cell phone. One solution is to have an attribute for each task, similar to TERESA, that
specifies the target platform. As a first step, the task model would have to be filtered
based on this attribute to yield family or system task models (to use the terminology
of TERESA) that are specific to a particular family. These task models could then be
processed to get the final UIs.
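Such a platform attribute might look as follows; the attribute name and platform identifiers are hypothetical:

```xml
<!-- hypothetical filtering attribute on each task -->
<task name="Get video report" platform="desktop"/>
<task name="Get text report" platform="desktop cellphone"/>
```

A first filtering pass would keep only the tasks whose platform list includes the target, yielding the TERESA-style family or system task models mentioned above.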
Figure 6.18: Initial task model for the BBC news web-site, available in HTML and WML.
Figure 6.19: Upper portion of the BBC news web-site indicating the different structural sections of the page, with a task model showing a few annotations and groupings in the bottom part of the figure.
Figure 6.20: Lower section of the BBC web-site.
Figure 6.21: Partially annotated task model with a few screen shots of the BBC news site on a cell phone browser.
Chapter 7
Conclusions and Future Work
I discuss the main contributions of my dissertation and some directions for future
work in this chapter.
7.1 Contributions
This dissertation has presented a process for developing MPUIs that starts with an
initial task model, which is incrementally annotated and used to generate UIML and
the final UI. This process is driven by the insertion of navigation annotations and
mapping information that customize the UI for a specific target platform. The
navigation operators also keep the navigation features of the UI separate from the
‘task’ content of the task model. The same task model, albeit with different
annotations, is reused for different target platforms and allows the same tasks to be
performed by users on all of them. This approach also maintains the separation of the
task model from the other artifacts used to guide the UI-generation and customization
process.
The current framework utilizes concepts from the model-based UI development lit-
erature and usability engineering realm and applies them to this new area of multi-platform
UI development. This framework tries to eliminate some of the pitfalls of other model-based
approaches by having multiple steps and allowing for developer-intervention throughout
the UI generation process. This approach allows the developer to build a single specification
for a family of devices. UIML and its associated tools transform this single representation
to multiple platform-specific representations that can then be rendered to each device.
In summary, the main contributions of my dissertation are:
1. The specification and identification of building blocks that could be used in a process
to build MPUIs (Research objective 1).
2. The specification of a generic UIML vocabulary incorporating generic language
structures to support transformation-based MPUI development (Research objective
2).
3. The specification, definition and implementation of navigation and grouping opera-
tors as extensions to the CTT notation to support MPUI development (Research objec-
tive 3).
4. The definition of a multi-platform usability engineering development process to sup-
port MPUI development with the building-blocks identified above (Research objective
4).
I have also demonstrated the feasibility of this approach (research objective 5) by
building MPUIs for a few different platforms in Chapter 6.
7.2 Future work
There are several areas in which I want to extend the work done in my dissertation.
Developing usability metrics for MPUIs: My framework for building MPUIs creates a link
between the different phases in building MPUIs and a usability engineering approach.
Doing this ensures a certain level of usability in the generated UIs. However,
because the area of MPUI development is relatively new, little work has been done
on investigating usability across MPUIs. For example, could there be metrics that
help evaluate the usability of UIs for a certain set of platforms, developed using a
unified approach? In extending my work, I would like to develop a set of metrics that
help in the development process. These metrics could also serve as a mechanism for
comparing different approaches to building MPUIs.
Building accessible interfaces: I have briefly mentioned accessible UIs in Chapter 1 and
also in Chapter 6. However, I have not done much work on building accessible UIs using
my approach. I would like to build a vocabulary for accessible UIs and investigate
the extensibility of my approach in building them from the same task model.
Integrated Development Environment tool support: Although I have implemented the trans-
formation algorithms for converting the task model to generic UIML and generic
UIML to platform-specific UIML, the programs have not been completely integrated
into any development environment. Some of the algorithms have been incorporated
into TIDE (Transformation-based Integrated Development Environment), which rep-
resents a visual link between the different phases for the Java Swing platform (Ali
et al., 2004, 2002). A screen-shot of the IDE is shown in Figure 7.1. The different
panels in the figure indicate the different phases of development in my process. The
first panel shows the task model, the second the generic UIML, the third the
platform-specific UIML, and the final panel shows the rendered UI. When the user
selects an item in any panel, a visual link is drawn on the screen indicating the
correspondence between the various panels.
Figure 7.1: TIDE 2 showing the four different panels, each representing different phases ofthe UI development activity.
The tool in its current form supports only the Java Swing platform. I would like to
extend it to support at least a couple of different platform families. The environment
would then offer a more life-cycle-centered development process for my work.
Information and UI personalization: Finally, one of the main contributions of my work in
the area of MPUIs is to build one model and use it as a starting point to create UIs for
various heterogeneous platforms. I would like to investigate if these general principles
can also be applied in the area of information personalization where information is
tailored differently for different users based on their preferences and constraints.
Right now, my approach implements the complete task model for all possible
platforms. However, depending on the particular application, it may be necessary to
implement only some of the tasks for certain platforms. This is something that I would
like to investigate further.
One of the other models that could be incorporated into my work as part of this UI
personalization is a data model. It is primarily the lack of such a model that makes it
currently difficult to build the calendar application shown in Figures 1.1, 1.2 and 1.3
in Chapter 1. The application differs dramatically between the Java platform and the
WML platform due to the difference in the structure of the data that is presented. In
my current approach, the task model would have to be artificially manipulated to yield
the different UIs for the different platforms. This manipulation would be minimized if
a data model were incorporated into my approach.
References
ABRAMS, M., HELMS, J., 2002. User Interface Markup Language (UIML) Specification: Lan-
guage Version 3.0 (Draft).
URL http://www.uiml.org/specs/docs/uiml30-revised-02-12-02.pdf
ABRAMS, M., HELMS, J., May 2004. Retrospective on UI Description Languages, based on 7
years Experience with the User Interface Markup Language. In: Luyten, K., Abrams, M.,
Vanderdonckt, J., Limbourg, Q. (Eds.), Developing User Interfaces with XML: Advances
in User Interface Description Languages. Gallipoli (Lecce), Italy, pp. 1–8.
ABRAMS, M., PHANOURIOU, C., 1999. UIML: An XML Language for Building Device-
Independent User Interfaces. In: XML’99. Philadelphia, USA.
ABRAMS, M., PHANOURIOU, C., BATONGBACAL, A., SHUSTER, J., 1999. UIML: An
Appliance-Independent XML User Interface Language. In: Eight International World
Wide Web Conference: WWW’8. Toronto, Canada.
ALI, M. F., ABRAMS, M., 2001. Simplifying Construction of Multi-Platform User Interfaces
Using UIML. In: UIML’2001. Paris, France.
ALI, M. F., PEREZ-QUINONES, M., 2002. Multi-Platform User Interface Construction with
Transformations using UIML. In: Human Factors in Computing Systems: Extended Ab-
stracts: CHI’2002. ACM Press, Minneapolis, USA.
ALI, M. F., PEREZ-QUINONES, M., ABRAMS, M., 2004. Building Multi-Platform User In-
terfaces with UIML. In: Seffah, A., Javahery, H. (Eds.), Multiple User Interfaces: Cross-
Platform Applications and Context-Aware Interfaces. John Wiley & Sons, Ltd, West Sus-
sex, England, Ch. 6, pp. 95–118.
ALI, M. F., PEREZ-QUINONES, M., ABRAMS, M., SHELL, E., 2002. Building Multi-
Platform User Interfaces with UIML. In: Computer Aided Design of User Interfaces III:
CADUI’2002. Valenciennes, France.
ANNETT, J., 2004. Hierarchical Task Analysis. In: Diaper, D., Stanton, N. (Eds.), The Hand-
book of Task Analysis for Human-Computer Interaction. Lawrence Erlbaum Associates,
publishers, Mahwah, New Jersey, Ch. 3, pp. 67–82.
ANNETT, J., DUNCAN, K. D., 1967. Task Analysis and Training Design. Occupational Psy-
chology 41, 211–221.
ANNETT, J., STANTON, N. A. (Eds.), 2000. Task Analysis. Taylor and Francis.
ASAKAWA, C., TAKAGI, H., 2000. Annotation-Based Transcoding For Nonvisual Web Ac-
cess. In: Fourth International ACM SIGCAPH Conference on Assistive Technologies: AS-
SETS 2000. Arlington, Virginia, USA, pp. 172–179.
BANDELLONI, R., PATERNO, F., January 2004. Flexible Interface Migration. In: Ninth Inter-
national Conference on Intelligent User Interfaces: IUI’2004. Madeira, Funchal, Portugal,
pp. 148–155.
BARCLAY, P., GRIFFITHS, T., MCKIRDY, J., PATON, N., COOPER, R., KENNEDY, J., October
1999. The Teallach Tool: Using Models for Flexible User Interface Design. In: Computer-
Aided Design of User Interfaces II. Louvain-la-Neuve, Belgium.
BERTI, S., CORREANI, F., PATERNO, F., SANTORO, C., May 2004. The TERESA XML
Language for the Description of Interactive Systems at Multiple Abstraction Levels. In:
Luyten, K., Abrams, M., Vanderdonckt, J., Limbourg, Q. (Eds.), Developing User Inter-
faces with XML: Advances in User Interface Description Languages. Gallipoli (Lecce),
Italy, pp. 103–110.
BLEUL, S., MUELLER, W., SCHAEFER, R., May 2004. Multimodal dialog description for
mobile devices. In: Luyten, K., Abrams, M., Vanderdonckt, J., Limbourg, Q. (Eds.), De-
veloping User Interfaces with XML: Advances in User Interface Description Languages.
Gallipoli (Lecce), Italy.
BODART, F., HENNEBERT, A.-M., LEHEUREUX, J.-M., PROVOT, I., SACRE, B., VANDER-
DONCKT, J., 1995. Towards a Systematic Building of Software Architecture: the TRIDENT
Methodological Guide. In: Eurographics Workshop on Design, Specification, Verification
of Interactive Systems: DSV-IS 1995. pp. 237–253.
BONIFATI, A., CERI, S., FRATERNALI, P., MAURINO, A., 2000. Building Multi-device,
Content-Centric Applications Using WebML and the W3I3 Tool Suite. In: Liddle, S. W.,
Mayr, H. C., Thalheim, B. (Eds.), ER 2000 Workshops on Conceptual Modeling Ap-
proaches for E-Business and The World Wide Web and Conceptual Modeling. Vol. 1921 of
LNCS. Springer-Verlag, Salt Lake City, Utah, USA.
BREWSTER, S., LEPLATRE, G., CREASE, M., 1998. Using Non-Speech Sounds in Mobile
Computing Devices. In: First Workshop on Human Computer Interaction of Mobile De-
vices. Glasgow, UK.
BUTLER, M., GIANNETTI, F., GIMSON, R., WILEY, T., September-October 2002. Device In-
dependence and the Web. IEEE Internet Computing, 81–86.
CALVARY, G., COUTAZ, J., THEVENIN, D., 2000. Embedding Plasticity in the Development
Process of Interactive Systems. In: Sixth ERCIM Workshop “User Interfaces for All”. Flo-
rence, Italy.
CALVARY, G., COUTAZ, J., THEVENIN, D., LIMBOURG, Q., BOUILLON, L., VANDERDON-
CKT, J., June 2003. A Unifying Reference Framework for Multi-target User Interfaces. In-
teracting with Computers 15 (3), 289–308.
CARD, S. K., MORAN, T. P., NEWELL, A., 1983. The Psychology of Human-Computer In-
teraction. Lawrence Erlbaum Associates, Inc. publishers, Hillsdale, New Jersey.
CARROLL, J. M. (Ed.), 1995. Scenario-Based Design: Envisioning Work and Technology in
System Development. John Wiley and Sons, Inc.
CARROLL, J. M., 2000. Making Use: Scenario-Based Design of Human-Computer Interac-
tions. The MIT Press, Cambridge, Massachusetts, USA.
CARROLL, J. M. (Ed.), 2002. Human-Computer Interaction in the New Millennium. ACM
Press, New York, New York, USA.
CARROLL, J. M., KELLOGG, W. A., ROSSON, M. B., 1990. The Task-Artifact Cycle. In: Car-
roll, J. M. (Ed.), Designing Interaction: Psychology at the Human-Computer Interface.
Cambridge University Press, pp. 74–102.
CASTELLS, P., SZEKELY, P., SALCHER, E., 1997. Declarative Models of Presentation. In: Sec-
ond International Conference on Intelligent User Interfaces: IUI’97. Orlando, Florida, pp.
137–144.
CERI, S., FRATERNALI, P., BONGIO, A., 2000. Web Modeling Language (WebML): A Mod-
elling Language for Designing Web sites. Computer Networks 33 ((1-6)).
CHU, H., SONG, H., WONG, C., KURAKAKE, S., KATAGIRI, M., 2004. Roam, a seamless
application framework. J. Syst. Softw. 69 (3), 209–226.
CLERCKX, T., LUYTEN, K., CONINX, K., January 2004. Generating Context-sensitive Multi-
ple Device Interfaces from Design. In: 5th International Conference on Computer-Aided
Design of User Interfaces: CADUI’2004. Funchal, Madeira Island, Portugal.
DASILVA, P. P., 2000. User Interface Declarative Models and Development Environments:
A Survey. In: Proceedings of 7th International Workshop on Design, Specification and
Verification of Interactive Systems: DSV-IS 2000. Limerick, Ireland.
DITTMAR, A., FORBRIG, P., 1999. Methodological and tool support for a task-oriented
development of interactive systems. In: Computer Aided Design of User Interfaces II:
CADUI’1999). Kluwer Academic publishers, Louvain-la-Neuve, Belgium, pp. 257–270.
DIX, A., FINLAY, J., ABOWD, G., BEALE, R., 2004. Human-Computer Interaction. Pearson
Prentice Hall.
DUBINKO, M., 2003. XForms Essentials. O’Reilly.
EISENSTEIN, J., PUERTA, A., 2000. Adaptation in Automated User-Interface Design. In: Fifth
International Conference on Intelligent User Interfaces: IUI’2000. New Orleans, LA, USA,
pp. 74–81.
EISENSTEIN, J., VANDERDONCKT, J., PUERTA, A., 2001. Applying Model-Based Techniques
to the Development of UIs for Mobile Computers. In: Sixth International Conference on
Intelligent User Interfaces: IUI’2001. Santa Fe, New Mexico, USA, pp. 69–76.
FLORINS, M., VANDERDONCKT, J., January 2004. Graceful Degradation of User Interfaces
as a Design Method for Multiplatform Systems. In: Ninth International Conference on
Intelligent User Interfaces: IUI’2004. Madeira, Funchal, Portugal, pp. 140–147.
FORBRIG, P., DITTMAR, A., MULLER, A., 2004. Adaptive Task Modelling: From Formal
Models to XML Representations. In: Seffah, A., Javahery, H. (Eds.), Multiple User In-
terfaces: Cross-Platform Applications and Context-Aware Interfaces. John Wiley & Sons,
Ltd, West Sussex, England, Ch. 9, pp. 171–192.
FRANK, M., FOLEY, J., 1993. Model-Based User Interface Design by Example and by In-
terview. In: Sixth Annual ACM Symposium on User Interface Software and Technology:
UIST’93. Atlanta, USA, pp. 129–137.
FRATERNALI, P., 1999. Tools and Approaches for Developing Data-Intensive Web Applica-
tions: A Survey. ACM Computing Surveys 31 (3), 227–263.
FRATERNALI, P., PAOLINI, P., 2000. Model-Driven Development of Web Applications: The
Autoweb System. ACM Transactions on Information Systems 28 (4), 323–382.
FURTADO, E., FURTADO, V., SOUSA, K. S., VANDERDONCKT, J., LIMBOURG, Q., November
2004. KnowiXML: A Knowledge-Based System Generating Multiple Abstract User Inter-
faces in UsiXML. In: 3rd Int. Workshop on Task Models and Diagrams for user interface
design: TAMODIA’2004. Prague, Czech Republic, pp. 121–128.
GO, K., CARROLL, J. M., 2004. Scenario-based Task Analysis. In: Diaper, D., Stanton, N.
(Eds.), The Handbook of Task Analysis for Human-Computer Interaction. Lawrence Erl-
baum Associates, publishers, Mahwah, New Jersey, Ch. 5, pp. 117–134.
GRIFFITHS, T., BARCLAY, P. J., MCKIRDY, J., PATON, N. W., GRAY, P. D., KENNEDY, J. B.,
COOPER, R., GOBLE, C. A., WEST, A., SMYTH, M., 1999. Teallach: A Model-Based User
Interface Development Environment for Object Databases. In: User Interfaces to Data
Intensive Systems. pp. 86–96.
GRIFFITHS, T., BARCLAY, P. J., PATON, N. W., MCKIRDY, J., KENNEDY, J., GRAY, P. D.,
COOPER, R., GOBLE, C. A., SILVA, P. P. D., 2001. Teallach: a Model-Based User Interface
Development Environment for Object Databases. Interacting with Computers 14 (1), 31–
68.
GRUNDY, J., ZOU, W., 2004. AUIT: Adaptable User Interface Technology, with Extended
Java Server Pages. In: Seffah, A., Javahery, H. (Eds.), Multiple User Interfaces: Cross-
Platform Applications and Context-Aware Interfaces. John Wiley & Sons, Ltd, West Sus-
sex, England, Ch. 8, pp. 149–168.
HAN, R., PERRET, V., NAGSHINEH, M., 2000. WebSplitter: A Unified XML Framework for
Multi-Device Collaborative Web Browsing. In: ACM Conference on Computer Supported
Cooperative Work: CSCW 2000. ACM Press, Philadelphia, USA.
HARTSON, R., GRAY, P. D., 1992. Temporal Aspects of Tasks in the User Action Notation.
Human-Computer Interaction 7, 1–45.
HARTSON, R., SIOCHI, A. C., HIX, D., 1990. The UAN: A User-Oriented Representation for
Direct Manipulation Interface Designs. ACM Transactions on Information Systems 8 (3),
181–203.
HIX, D., HARTSON, R., 1993. Developing User Interfaces: Ensuring Usability Through Prod-
uct and Process. John Wiley and Sons, Inc.
HORI, M., KONDOH, G., ONO, K., HIROSE, S., SINGHAL, S., 2000. Annotation-Based Web
Content Transcoding. In: Ninth World Wide Web Conference. Amsterdam, Netherlands.
HTML, 1998. HTML 4.01 Specification.
URL http://www.w3.org/TR/1998/REC-html40-19980424/
HUANG, A., SUNDARESAN, N., 2000. Aurora: A Conceptual Model for Web-Content Adap-
tation to Support the Universal Usability of Web-based Services. In: Conference on Uni-
versal Usability: CUU 2000. Arlington, VA, USA, pp. 124–131.
JOHN, B. E., GRAY, W. D., 1995. CPM-GOMS: An Analysis Method for Tasks with Parallel
Activities. In: Human Factors in Computing (CHI’95). pp. 393–394.
JOHN, B. E., KIERAS, D. E., 1996a. The GOMS Family of User Interface Analysis Techniques:
Comparison and Contrast. ACM Transactions on Computer-Human Interaction 3 (4), 320–
351.
JOHN, B. E., KIERAS, D. E., 1996b. Using GOMS for User Interface Design and Evaluation:
Which Technique? ACM Transactions on Computer-Human Interaction 3 (4), 287–319.
JOHNSON, H., JOHNSON, P., 1991. Task Knowledge Structures: Psychological basis and
integration into system design. Acta Psychologica 78, 3–26.
JOHNSON, P., 1992. Human-Computer Interaction: Psychology, Task Analysis and Software
Engineering. McGraw-Hill Book Company, London.
JOHNSON, P., 1998. Usability and mobility: Interactions on the move. In: First Workshop on
Human Computer Interaction With Mobile Devices. Glasgow.
JOHNSON, P., JOHNSON, H., WILSON, S., 1995. Rapid prototyping of user interfaces driven
by task models. In: Carroll, J. M. (Ed.), Scenario-Based Design: Envisioning Work and
Technology in System Development. John Wiley and Sons, Inc., pp. 209–246.
KIERAS, D., 2004. GOMS Models for Task Analysis. In: Diaper, D., Stanton, N. (Eds.), The
Handbook of Task Analysis for Human-Computer Interaction. Lawrence Erlbaum Asso-
ciates, publishers, Mahwah, New Jersey, Ch. 4, pp. 83–116.
KIERAS, D. E., 1988. Towards a Practical GOMS Model Methodology for User Interface De-
sign. In: Helander, M. (Ed.), Handbook of Human-Computer Interaction. North-Holland,
Amsterdam, pp. 135–157.
KIERAS, D. E., 1997. Task analysis and the design of functionality. In: Tucker, A. (Ed.), The
Handbook of Computer Science and Engineering. CRC Inc., Boca Raton, pp. 1401–1423.
KIM, W. C., FOLEY, J., 1990. Don: User Interface Presentation Design Assistant. In:
Third Annual ACM SIGGRAPH Symposium on User Interface Software and Technology:
UIST’90. Snowbird, USA, pp. 10–20.
KIM, W. C., FOLEY, J., 1993. Providing High-level Control and Expert Assistance in the User
Interface Presentation Design. In: Human Factors in Computing Systems: INTERCHI’93.
Amsterdam, Netherlands.
KLEPPE, A., WARMER, J., BAST, W., 2003. MDA Explained, The Model-Driven Architecture:
Practice and Promise. Addison Wesley.
KLYNE, G., REYNOLDS, F., WOODROW, C., OHTO, H., HJELM, J., BUTLER, M., TRAN, L.,
January 2004. Composite Capability/Preference Profile (CC/PP): Structure and Vocabularies 1.0.
URL http://www.w3.org/TR/2004/REC-CCPP-struct-vocab-20040115/
LECEROF, A., PATERNO, F., October 1998. Automatic Support for Usability Evaluation. IEEE
Transactions on Software Engineering 24 (10), 863–888.
LIMBOURG, Q., 2004. Multi-Path Development of User Interfaces. Ph.D. thesis, Universite
catholique de Louvain, Louvain-la-Neuve, Belgium.
LIMBOURG, Q., VANDERDONCKT, J., November 2004a. Addressing the Mapping Problem in
User Interface Design with UsiXML. In: 3rd Int. Workshop on Task Models and Diagrams
for user interface design: TAMODIA’2004. Prague, Czech Republic, pp. 155–163.
LIMBOURG, Q., VANDERDONCKT, J., January 2004b. Transformational Development of User
Interfaces with Graph Transformations. In: 5th International Conference on Computer-
Aided Design of User Interfaces: CADUI’2004. Kluwer Academic Publishers, Dordrecht,
Funchal, Madeira Island, Portugal.
LIMBOURG, Q., VANDERDONCKT, J., July 2004c. UsIXML: A User Interface Description Lan-
guage Supporting Multiple Levels of Independence. In: Workshop on Device Indepen-
dent Web Engineering: DIWE’04. Munich, Germany.
LIMBOURG, Q., VANDERDONCKT, J., MICHOTTE, B., BOUILLON, L., FLORINS, M., TRE-
VISAN, D., May 2004. USIXML: A User Interface Description Language for Context-
Sensitive User Interfaces. In: Luyten, K., Abrams, M., Vanderdonckt, J., Limbourg, Q.
(Eds.), Developing User Interfaces with XML: Advances in User Interface Description
Languages. Gallipoli (Lecce), Italy, pp. 55–62.
LUO, P., SZEKELY, P., NECHES, R., 1993. Management of Interface Design in Humanoid.
In: Human Factors in Computing Systems: INTERCHI’93. Amsterdam, Netherlands, pp.
107–114.
LUYTEN, K., 2004. Dynamic User Interface Generation for Mobile and Embedded Systems
with Model-Based User Interface Development. Ph.D. thesis, Limburgs Universitair Cen-
trum, transnational University Limburg: School of Information Technology, Diepenbeek,
Belgium.
LUYTEN, K., ABRAMS, M., VANDERDONCKT, J., LIMBOURG, Q. (Eds.), May 2004. Develop-
ing User Interfaces with XML: Advances in User Interface Description Languages. Gal-
lipoli (Lecce), Italy.
LUYTEN, K., CONINX, K., January 2004. Uiml.net: an Open Uiml Renderer for the .Net
Framework. In: Computer Aided Design of User Interfaces: CADUI’2004. Funchal,
Madeira Island, Portugal.
LUYTEN, K., CREEMERS, B., CONINX, K., 2003. Multi-device layout management for mo-
bile computing devices. Tech. Rep. TR-LUC-EDM-0301, Limburgs Universitair Centrum,
Diepenbeek, Belgium.
MARSIC, I., 2001. An Architecture for Heterogeneous Groupware Applications. In: 23rd
IEEE/ACM International Conference on Software Engineering: ICSE 2001. Toronto,
Canada, pp. 475–484.
MARUCCI, L., PATERNO, F., SANTORO, C., 2004. Supporting Interactions with Multiple
Platforms through User and Task Models. In: Seffah, A., Javahery, H. (Eds.), Multiple
User Interfaces: Cross-Platform Applications and Context-Aware Interfaces. John Wiley
& Sons, Ltd, West Sussex, England, Ch. 11, pp. 217–238.
MERRICK, R. A., WOOD, B., KREBS, W., May 2004. Abstract User Interface Markup Lan-
guage. In: Luyten, K., Abrams, M., Vanderdonckt, J., Limbourg, Q. (Eds.), Developing
User Interfaces with XML: Advances in User Interface Description Languages. Gallipoli
(Lecce), Italy, pp. 39–45.
MORI, G., PATERNO, F., SANTORO, C., January 2003. Tool Support for Designing Nomadic
Applications. In: Eight International Conference on Intelligent User Interfaces: IUI’2003.
Miami, Florida, USA.
MORI, G., PATERNO, F., SANTORO, C., August 2004. Design and Development of Multi-
device User Interfaces through Multiple Logical Descriptions. IEEE Transactions on Soft-
ware Engineering 30 (8), 507–520.
MUELLER, A., FORBRIG, P., CAP, C., 2001. Using XML for Model-based User Interface De-
sign. In: CHI’2001 Workshop: Transforming the UI for Anyone, Anywhere. Seattle, Wash-
ington, USA.
MUELLER, W., SCHAEFER, R., BLEUL, S., 2004. Interactive Multimodal User Interfaces for
Mobile Devices. In: Hawaii International Conference on System Sciences. Hawaii, USA.
MYERS, B., 1995. User Interface Software Tools. ACM Transactions on Computer-Human
Interaction 2 (1), 64–103.
MYERS, B., HUDSON, S., PAUSCH, R., 2000. Past, Present, and Future of User Interface
Software Tools. ACM Transactions on Computer-Human Interaction 7 (1), 3–28.
MYNATT, E. D., 1997. Transforming Graphical Interfaces Into Auditory Interfaces for Blind
Users. Human-Computer Interaction 12, 7–45.
NICHOLS, J., MYERS, B. A., HIGGINS, M., HUGHES, J., HARRIS, T. K., ROSENFELD, R.,
LITWACK, K., April 2003. Personal universal controllers: controlling complex appliances
with GUIs and speech. In: Proc. CHI’2003. Fort Lauderdale, Florida, USA, pp. 624–625.
NICHOLS, J., MYERS, B. A., HIGGINS, M., HUGHES, J., HARRIS, T. K., ROSENFELD, R.,
PIGNOL, M., October 2002. Generating remote control interfaces for complex appliances.
In: 15th annual ACM symposium on User interface software and technology: UIST’2002.
Paris, France, pp. 161–170.
NICHOLS, J., MYERS, B. A., LITWACK, K., HIGGINS, M., HUGHES, J., HARRIS, T. K., May
2004. Describing Appliance User Interfaces Abstractly with XML. In: Luyten, K., Abrams,
M., Vanderdonckt, J., Limbourg, Q. (Eds.), Developing User Interfaces with XML: Ad-
vances in User Interface Description Languages. Gallipoli (Lecce), Italy, pp. 9–16.
OLSEN, D., 1989. A Programming Language Basis for User Interface Management. In: Hu-
man Factors in Computing: CHI’89. Austin, USA, pp. 171–176.
OLSEN, D., 1999. Interacting in Chaos. Interactions 6 (5), 42–54.
PATERNO, F., 1999. Model-Based Design and Evaluation of Interactive Applications.
Springer.
PATERNO, F., 2001. Deriving Multiple Interfaces from Task Models of Nomadic Applica-
tions. In: CHI’2001 Workshop: Transforming the UI for Anyone, Anywhere. Seattle, Wash-
ington, USA.
PATERNO, F., 2004. ConcurTaskTrees: An Engineered Notation for Task Models. In: Diaper,
D., Stanton, N. (Eds.), The Handbook of Task Analysis for Human-Computer Interaction.
Lawrence Erlbaum Associates, publishers, Mahwah, New Jersey, Ch. 24, pp. 483–502.
PATERNO, F., MORI, G., GALIBERTI, R., 2001. CTTE: An Environment for Analysis and De-
velopment of Task Models of Cooperative Applications. In: Human Factors in Computing
Systems: CHI’2001, Extended Abstracts. ACM Press, Seattle, WA, USA, pp. 21–22.
PATERNO, F., SANTORO, C., May 2002. One Model, Many Interfaces. In: Computer-Aided
Design of User Interfaces III. Valenciennes, France, pp. 143–154.
PATERNO, F., SANTORO, C., June 2003. A Unified Method for Designing Interactive Systems
Adaptable to Mobile and Stationary Platforms. Interacting with Computers 15 (3), 349–
366.
PATERNO, F., TSCHELIGI, M., 2003. Design of usable multi-platform interactive systems. In:
Proc. of CHI’2003: Extended Abstracts. Fort Lauderdale, Florida, USA, pp. 872–873.
PAYNE, S. J., GREEN, T. R. G., 1986. Task-Action Grammars: A Model of the Mental Repre-
sentation of Task Languages. Human-Computer Interaction 2, 93–133.
PHANOURIOU, C., 2000. UIML: An Appliance-independent XML User Interface Language.
Ph.D. thesis, Virginia Polytechnic Institute and State University, Blacksburg, USA.
PREECE, J., ROGERS, Y., SHARP, H., 2002. Interaction Design: beyond Human-Computer
Interaction. John Wiley and Sons, Inc.
PUERTA, A., EISENSTEIN, J., 1999. Towards a General Computational Framework for
Model-Based Interface Development Systems. In: Fourth International Conference on In-
telligent User Interfaces: IUI’99. CA, USA, pp. 171–178.
PUERTA, A., EISENSTEIN, J., January 2002. XIML: A Common Representation for Interaction
Data. In: Seventh International Conference on Intelligent User Interfaces: IUI’2002. San
Francisco, California, USA, pp. 214–215.
PUERTA, A., EISENSTEIN, J., 2004. XIML: A Multiple User Interface Representation Frame-
work for Industry. In: Seffah, A., Javahery, H. (Eds.), Multiple User Interfaces: Cross-
Platform Applications and Context-Aware Interfaces. John Wiley & Sons, Ltd, West Sus-
sex, England, Ch. 7, pp. 119–148.
PUERTA, A., ERIKSSON, H., GENNARI, J. H., MUSEN, M. A., 1994. Model-Based Auto-
mated Generation of User Interfaces. In: Twelfth National Conference on Artificial Intel-
ligence: AAAI’94. Seattle, USA, pp. 471–477.
RAISTRICK, C., FRANCIS, P., WRIGHT, J., CARTER, C., WILKIE, I., 2004. Model Driven
Architecture with Executable UML. Cambridge University Press.
RAMAN, T. V., 1997. Auditory User Interfaces: Towards the Speaking Computer. Kluwer
Academic publishers.
RICHTER, K., May 2002. Generic Interface Descriptions Using XML. In: Computer-Aided
Design of User Interfaces III. Valenciennes, France, pp. 275–282.
ROSSON, M. B., CARROLL, J. M., 2002. Usability Engineering: Scenario-based Development
of Human-Computer Interaction. Morgan Kaufmann Publishers.
SCHAEFER, R., BLEUL, S., MUELLER, W., 2004. A novel Dialog Model for the Design of Mul-
timodal User Interfaces. In: Design Specification and Verification of Interactive Systems:
DSVIS’2004.
SCHLUNGBAUM, E., 1996. Model-Based User Interface Software Tools: Current State of
Declarative Models. Tech. Rep. GIT-GVU-96-30, Georgia Tech.
SEFFAH, A., JAVAHERY, H., 2004. Multiple User Interfaces: Cross-Platform Applications
and Context-Aware Interfaces. In: Seffah, A., Javahery, H. (Eds.), Multiple User Interfaces:
Cross-Platform Applications and Context-Aware Interfaces. John Wiley & Sons, Ltd, West
Sussex, England, Ch. 2, pp. 11–26.
SEIDEWITZ, E., September/October 2003. What Models Mean. IEEE Software 20 (5), 26–32.
SOUCHON, N., VANDERDONCKT, J., June 2003. A Review of XML-Compliant User Interface
Description Languages. In: Conf. on Design, Specification, and Verification of Interactive
Systems: DSV-IS2003. Funchal, Madeira Island, Portugal.
URL http://virtual.inesc.pt/dsvis03/papers/02.html
STEPHANIDIS, C. (Ed.), 2000. User Interfaces for All: Concepts, Methods, and Tools.
Lawrence Erlbaum Associates, Inc., Mahwah, NJ.
SUKAVIRIYA, P. N., FOLEY, J., 1993. Supporting Adaptive Interfaces in a Knowledge-Based
User Interface Environment. In: First International Conference on Intelligent User Inter-
faces: IUI’93. Orlando, USA.
SUKAVIRIYA, P. N., KOVACEVIC, S., FOLEY, J., MYERS, B., OLSEN, D., SCHNEIDER-
HUFSCHMIDT, M., 1993. Model-based user interfaces: What are they and why should
we care? In: Sixth Annual ACM Symposium on User Interface Software and Technology:
UIST’93. Atlanta, USA, pp. 133–135.
SZEKELY, P., 1996. Retrospective and Challenges for Model-Based User Interface Develop-
ment. In: Bodart, F., Vanderdonckt, J. (Eds.), 3rd Int. Eurographics Workshop on Design,
Specification and Verification of Interactive Systems: DSVIS’96. Springer-Verlag, Berlin,
Germany.
SZEKELY, P., LUO, P., NECHES, R., 1993. Beyond Interface Builders: Model-Based Interface
Tools. In: Human Factors in Computing Systems: INTERCHI’93. Amsterdam, Nether-
lands, pp. 383–390.
SZEKELY, P., SUKAVIRIYA, P. N., CASTELLS, P., MUTHUKUMARASAMY, J., SALCHER, E.,
1995. Declarative Interface Models for User Interface Construction Tools: The MASTER-
MIND Approach. In: 6th IFIP Working Conference on Engineering for HCI. Wyoming,
USA.
TAKAGI, H., ASAKAWA, C., 2000. Transcoding Proxy for Nonvisual Web Access. In: Fourth
International ACM SIGCAPH Conference on Assistive Technologies: ASSETS 2000. Ar-
lington, Virginia, USA, pp. 164–171.
THEVENIN, D., CALVARY, G., COUTAZ, J., 2001. A Development Process for Plastic User
Interfaces. In: CHI’2001 Workshop: Transforming the UI for Anyone, Anywhere. Seattle,
Washington, USA.
THEVENIN, D., COUTAZ, J., 1999. Plasticity of User Interfaces: Framework and Research
Agenda. In: INTERACT’99. pp. 1–8.
THEVENIN, D., COUTAZ, J., CALVARY, G., 2004. A Reference Framework for the Develop-
ment of Plastic User Interfaces. In: Seffah, A., Javahery, H. (Eds.), Multiple User Interfaces:
Cross-Platform Applications and Context-Aware Interfaces. John Wiley & Sons, Ltd, West
Sussex, England, Ch. 3, pp. 29–42.
TREWIN, S., ZIMMERMANN, G., VANDERHEIDEN, G., November 2003. Abstract User Inter-
face Representations: How Well do they Support Universal Access? In: Conference on
Universal Usability: CUU 2003. Vancouver, British Columbia, Canada, pp. 77–84.
TREWIN, S., ZIMMERMANN, G., VANDERHEIDEN, G., May 2004. Abstract Representations
as a Basis for Usable User Interfaces. Interacting with Computers 16 (3), 477–506.
UIML2, 2000. User Interface Markup Language (UIML) Draft Specification.
URL http://www.uiml.org/specs/docs/uiml20-17Jan00.pdf
VANDERDONCKT, J., LIMBOURG, Q., OGER, F., MACQ, B., 2001a. Synchronised Model-
Based Design of Multiple User Interfaces. In: Workshop on Multiple User Interfaces over
the Internet. Lille, France.
VANDERDONCKT, J., ROY, P. V., GROLAUX, D., 2001b. Towards a Model-Based Approach
for Context-Sensitive User Interfaces. In: CHI’2001 Workshop: Transforming the UI for
Anyone, Anywhere. Seattle, Washington, USA.
VoiceXML, March 2004. Voice Extensible Markup Language (VoiceXML) Version 2.0.
URL http://www.w3.org/TR/2004/REC-voicexml20-20040316/
WIECHA, C., BENNETT, W., BOIES, S., GOULD, J., GREENE, S., 1990. ITS: A Tool for Rapidly
Developing Interactive Applications. ACM Transactions on Information Systems 8 (3),
204–236.
WML2.0, 2001. Wireless Markup Language Specification (WML), Version 2.0.
URL http://www.oasis-open.org/cover/WAP-238-WML-20010626-p.pdf
WORLD WIDE WEB CONSORTIUM, October 2003. XForms 1.0: W3C Recommendation.
URL http://www.w3.org/TR/2003/REC-xforms-20031014/
WORLD WIDE WEB CONSORTIUM, February 2004. Authoring Techniques for Device Inde-
pendence.
URL http://www.w3.org/TR/di-atdi/
xhtmlmod, February 2004. Modularization of XHTML: Second Edition.
URL http://www.w3.org/TR/xhtml-modularization/
ZANDEN, B. V., MYERS, B., 1990. Automatic, Look-and-Feel Independent Dialog Creation
for Graphical User Interfaces. In: SIGCHI Conference on Human Factors in Computing
Systems: CHI’90. Seattle, USA, pp. 27–34.
ZIMMERMANN, G., VANDERHEIDEN, G., GILMAN, A., May 2002. Universal Remote Con-
sole Prototyping of an Emerging XML Based Alternate User Interface Access Standard.
In: Eleventh International World Wide Web Conference. Honolulu, Hawaii, USA.
Appendix A
Source Code
The XSLT source code for the conversion program that takes as its input an annotated
CTT task model and produces generic UIML is shown below.
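For orientation, the following is a hypothetical sketch of the kind of annotated CTT task model the stylesheet consumes. The element and attribute names (TaskModel, Task, SubTask, Group, @Category, @Identifier, @navigation, Type, PreferredMapping, TemporalOperator) are those the templates in the listing match on; the specific identifiers, categories, and mapping values are invented for illustration and are not taken from the dissertation's examples.

```xml
<!-- Hypothetical input sketch: element and attribute names follow the
     stylesheet's match patterns, but all concrete values are illustrative. -->
<TaskModel>
  <Task Identifier="Handle Mail" Category="abstraction" navigation="menustyle">
    <Type>container</Type>
    <PreferredMapping>
      <UIMLCategory>1</UIMLCategory>
      <UIMLPart>1</UIMLPart>
    </PreferredMapping>
    <SubTask>
      <Task Identifier="Read Message" Category="interaction" navigation="contains">
        <Type>selection</Type>
        <!-- temporal operator relating this task to its following sibling -->
        <TemporalOperator name="enabling"/>
      </Task>
      <Task Identifier="Compose Message" Category="interaction" navigation="contains">
        <Type>textinput</Type>
      </Task>
    </SubTask>
  </Task>
</TaskModel>
```

Running the stylesheet over such a file would produce a generic UIML document whose part hierarchy mirrors the task hierarchy, with menu and link parts added according to the navigation attributes.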
Listing A.1: XSLT stylesheet for transforming an annotated CTT task model to generic UIML.

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<!--
Programmer: Mir Farooq Ali
OS: Windows XP Tablet Edition
System: Toshiba Tablet PC
XSLT Processor: Saxon 7.8
Purpose: This stylesheet takes as input one XML file that represents
a task model based on the CTT notation. It does a lookup into a
file that contains mappings for a desktop platform and substitutes
each task in the task model with a UIML 'part'. The substitution
is performed based on a lookup into the mapping file. Some extra
nodes are also created in the generic UIML file as a result of the
navigation/grouping operators present in the task model.
-->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output indent="yes"/>

  <!-- Global variable to keep a record of the mapping file -->
  <xsl:variable name="lookUpdoc">
    <xsl:copy-of select="document('desktopmappings.xml')"/>
  </xsl:variable>

  <!-- Root template -->
  <xsl:template match="TaskModel">
    <uiml>
      <interface>
        <!-- structure here -->
        <structure>
          <xsl:apply-templates/>
        </structure>
        <!-- behavior here -->
        <behavior>
          <!-- first pass to generate behavior based on temporal operators -->
          <xsl:apply-templates mode="behavior1"/>
          <!-- second pass to generate behavior based on navigation attributes -->
          <xsl:apply-templates mode="behavior2"/>
        </behavior>
      </interface>
    </uiml>
  </xsl:template>

  <!-- Task template that matches each task in the task model -->
  <xsl:template match="Task">
    <!-- Various variables to keep track of the current task's properties
         and attributes -->
    <!-- Category of the task. Categories are abstraction, application
         and interaction -->
    <xsl:variable name="cat" select="@Category"/>
    <!-- Identifier of the task -->
    <xsl:variable name="id" select="@Identifier"/>
    <!-- The developer-specified mapping for the UIML category for this task -->
    <xsl:variable name="pmuic" select="./PreferredMapping/UIMLCategory"/>
    <!-- The developer-specified mapping for the UIML part for this task.
         Each UIML part belongs to a particular category that is specified earlier -->
    <xsl:variable name="pmuip" select="./PreferredMapping/UIMLPart"/>
    <!-- The type of the task. This is again a developer-specified property
         to better classify a task within a particular category -->
    <xsl:variable name="tasktype" select="translate(./Type, ' ', '')"/>
    <!-- Temporary variable to keep track of the current node being examined -->
    <xsl:variable name="nodeLookup" select="name($lookUpdoc/TaskMapping/
      TaskCategory[@name=$cat]/
      TaskTypeMapping[@type=$tasktype]/
      UIMLCategoryMapping[@preferencenumber=$pmuic]/
      UIMLParts/part[@preferencenumber=$pmuip]/*)"/>
    <xsl:variable name="lookupChildren" select="count($lookUpdoc/TaskMapping/
      TaskCategory[@name=$cat]/
      TaskTypeMapping[@type=$tasktype]/
      UIMLCategoryMapping[@preferencenumber=$pmuic]/
      UIMLParts/part[@preferencenumber=$pmuip]/*)"/>
    <!-- Getting rid of the spaces in the Identifier field -->
    <xsl:variable name="newid" select="lower-case(translate(@Identifier, ' ', ''))"/>
    <xsl:message>newid is <xsl:value-of select="$newid"/></xsl:message>
    <xsl:variable name="parentid"
      select="lower-case(translate(../../@Identifier, ' ', ''))"/>
    <!-- Altering the structure of the target generic UIML based on the
         navigation attributes -->
    <xsl:choose>
      <!-- Menu-style navigation -->
      <xsl:when test="@navigation = 'menustyle'">
        <part id="{$newid}">
          <xsl:choose>
            <xsl:when test="name(..) = 'TaskModel'">
              <xsl:attribute name="class">G:TopContainer</xsl:attribute>
            </xsl:when>
            <xsl:otherwise>
              <xsl:attribute name="class">G:Area</xsl:attribute>
            </xsl:otherwise>
          </xsl:choose>
          <part id="{concat($newid, 'menu')}" class="G:Menu">
            <xsl:for-each select="child::SubTask/Task">
              <part
                id="{concat(lower-case(translate(@Identifier, ' ', '')), 'menuitem')}"
                class="G:MenuItem">
                <behavior>
                  <rule>
                    <condition>
                      <event class="G:actionPerformed"
                        part-name="{concat(lower-case(translate(@Identifier, ' ', '')), 'menuitem')}"/>
                    </condition>
                    <action>
                      <property name="visible"
                        part-name="{lower-case(translate(@Identifier, ' ', ''))}">true</property>
                    </action>
                  </rule>
                </behavior>
              </part>
            </xsl:for-each>
            <xsl:for-each select="child::SubTask/Group">
              <part id="{concat(lower-case(translate(@id, ' ', '')), 'menuitem')}"
                class="G:MenuItem">
                <behavior>
                  <rule>
                    <condition>
                      <event class="G:actionPerformed"
                        part-name="{concat(lower-case(translate(@id, ' ', '')), 'menuitem')}"/>
                    </condition>
                    <action>
                      <property name="visible"
                        part-name="{lower-case(translate(@id, ' ', ''))}">true</property>
                    </action>
                  </rule>
                </behavior>
              </part>
            </xsl:for-each>
          </part>
          <xsl:if test="../../@navigation = 'menustyle'">
            <xsl:variable name="newpartid"
              select="concat(concat($newid, $parentid), 'navig')"/>
            <part id="{$newpartid}" class="G:Link">
              <behavior>
                <rule>
                  <condition name="G:actionPerformed" part-name="{$newpartid}"/>
                  <action>
                    <property name="visible"
                      part-name="{lower-case(translate($parentid, ' ', ''))}">true</property>
                  </action>
                </rule>
              </behavior>
            </part>
          </xsl:if>
          <xsl:if test="../../@navigation = 'independent'">
            <xsl:variable name="newid"
              select="lower-case(translate(@Identifier, ' ', ''))"/>
            <xsl:for-each select="preceding-sibling::Task">
              <xsl:variable name="prectask"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $prectask), 'navig')}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Task">
              <xsl:variable name="folltask"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $folltask), 'navig')}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="preceding-sibling::Group">
              <part id="{concat(concat($newid, @id), 'navig')}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Group">
              <part id="{concat(concat($newid, @id), 'navig')}" class="G:Link"/>
            </xsl:for-each>
          </xsl:if>
        </part>
        <!-- Apply the templates again from within this part -->
        <xsl:apply-templates/>
      </xsl:when>
      <!-- 'Contains' navigation style -->
      <xsl:when test="@navigation = 'contains'">
        <part id="{$newid}" class="{$nodeLookup}">
          <xsl:apply-templates select="$lookUpdoc/TaskMapping/
            TaskCategory[@name=$cat]/
            TaskTypeMapping[@type=$tasktype]/
            UIMLCategoryMapping[@preferencenumber=$pmuic]/
            UIMLParts/part[@preferencenumber=$pmuip]/*/*" mode="p">
            <xsl:with-param name="id" select="$newid"/>
          </xsl:apply-templates>
          <!-- Apply the templates again from within this part -->
          <xsl:apply-templates/>
          <xsl:if test="../../@navigation = 'menustyle'">
            <xsl:variable name="newpartid"
              select="concat(concat($newid, $parentid), 'navig')"/>
            <part id="{$newpartid}" class="G:Link">
              <behavior>
                <rule>
                  <condition>
                    <event class="G:actionPerformed" part-name="{$newpartid}"/>
                  </condition>
                  <action>
                    <property name="visible"
                      part-name="{lower-case(translate($parentid, ' ', ''))}">true</property>
                  </action>
                </rule>
              </behavior>
            </part>
          </xsl:if>
          <xsl:if test="../../@navigation = 'independent'">
            <xsl:for-each select="preceding-sibling::Task">
              <xsl:variable name="prectaskid"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $prectaskid), 'navig')}"
                class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Task">
              <xsl:variable name="folltaskid"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $folltaskid), 'navig')}"
                class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="preceding-sibling::Group">
              <part id="{concat($newid, @id)}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Group">
              <part id="{concat($newid, @id)}" class="G:Link"/>
            </xsl:for-each>
          </xsl:if>
        </part>
      </xsl:when>
      <!-- Independent navigation style -->
      <xsl:when test="@navigation = 'independent'">
        <xsl:apply-templates/>
      </xsl:when>
      <!-- Leaf node with no explicit navigation style -->
      <xsl:otherwise>
        <part id="{$newid}" class="{$nodeLookup}">
          <xsl:apply-templates select="$lookUpdoc/TaskMapping/
            TaskCategory[@name=$cat]/
            TaskTypeMapping[@type=$tasktype]/
            UIMLCategoryMapping[@preferencenumber=$pmuic]/
            UIMLParts/part[@preferencenumber=$pmuip]/*/*" mode="p">
            <xsl:with-param name="id" select="$newid"/>
          </xsl:apply-templates>
          <xsl:if test="../../@navigation = 'menustyle' or ../@navigation = 'menustyle'">
            <xsl:variable name="newpartid"
              select="concat(concat($newid, $parentid), 'navig')"/>
            <part id="{$newpartid}" class="G:Link">
              <behavior>
                <rule>
                  <condition>
                    <event class="G:actionPerformed" part-name="{$newpartid}"/>
                  </condition>
                  <action>
                    <property name="visible"
                      part-name="{lower-case(translate($parentid, ' ', ''))}">true</property>
                  </action>
                </rule>
              </behavior>
            </part>
          </xsl:if>
          <xsl:if test="../../@navigation = 'independent'">
            <xsl:for-each select="preceding-sibling::Task">
              <xsl:variable name="prectaskid"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $prectaskid), 'navig')}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Task">
              <xsl:variable name="folltaskid"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $folltaskid), 'navig')}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="preceding-sibling::Group">
              <part id="{concat($newid, @id)}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Group">
              <part id="{concat($newid, @id)}" class="G:Link"/>
            </xsl:for-each>
          </xsl:if>
        </part>
      </xsl:otherwise>
    </xsl:choose>
  </xsl:template>
  <!-- Creating a new part for a grouping of tasks -->
  <xsl:template match="Group">
    <xsl:variable name="newid" select="lower-case(translate(@id, ' ', ''))"/>
    <xsl:variable name="parentid"
      select="lower-case(translate(../../@Identifier, ' ', ''))"/>
    <!-- Creating a new part based on a lookup into the mapping file. The
         top-level part is labelled the same as the task in the task model -->
    <xsl:choose>
      <xsl:when test="@navigation = 'menustyle'">
        <xsl:variable name="cont" select="'G:Area'"/>
        <part id="{$newid}" class="{$cont}">
          <part id="{concat($newid, 'menu')}" class="G:Menu">
            <xsl:for-each select="child::Task">
              <part id="{concat(lower-case(translate(@Identifier, ' ', '')), 'menuitem')}"
                class="G:MenuItem"/>
            </xsl:for-each>
          </part>
          <xsl:if test="../../@navigation = 'menustyle'">
            <part id="{concat(concat(lower-case(translate(@id, ' ', '')), $parentid), 'navig')}"
              class="G:Link"/>
          </xsl:if>
          <xsl:if test="../../@navigation = 'independent'">
            <xsl:for-each select="preceding-sibling::Task">
              <xsl:variable name="prectaskid"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $prectaskid), 'navig')}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Task">
              <xsl:variable name="folltaskid"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $folltaskid), 'navig')}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="preceding-sibling::Group">
              <part id="{concat($newid, @id)}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Group">
              <part id="{concat($newid, @id)}" class="G:Link"/>
            </xsl:for-each>
          </xsl:if>
        </part>
        <xsl:apply-templates/>
      </xsl:when>
      <xsl:when test="@navigation = 'contains'">
        <part id="{$newid}" class="G:Area">
          <!-- Apply the templates again from within this part -->
          <xsl:apply-templates/>
          <xsl:if test="../../@navigation = 'menustyle'">
            <part id="{concat(concat($newid, $parentid), 'navig')}" class="G:Link"/>
          </xsl:if>
          <xsl:if test="../../@navigation = 'independent'">
            <xsl:for-each select="preceding-sibling::Task">
              <xsl:variable name="prectaskid"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $prectaskid), 'navig')}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Task">
              <xsl:variable name="folltaskid"
                select="lower-case(translate(@Identifier, ' ', ''))"/>
              <part id="{concat(concat($newid, $folltaskid), 'navig')}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="preceding-sibling::Group">
              <part id="{concat($newid, @id)}" class="G:Link"/>
            </xsl:for-each>
            <xsl:for-each select="following-sibling::Group">
              <part id="{concat($newid, @id)}" class="G:Link"/>
            </xsl:for-each>
          </xsl:if>
        </part>
      </xsl:when>
    </xsl:choose>
  </xsl:template>
  <!-- Access the other child nodes, if any, from the mapping table and create
       appropriate parts for them -->
  <xsl:template mode="p" match="*">
    <xsl:param name="id">IDENTIFIER</xsl:param>
    <!-- Getting rid of the spaces in the Identifier field -->
    <xsl:variable name="idd" select="lower-case(translate($id, ' ', ''))"/>
    <!-- Creating a new id for the child parts by appending a number to the
         original id -->
    <xsl:variable name="newid" select="concat($idd, position())"/>
    <part id="{$newid}" class="{name()}">
      <xsl:apply-templates mode="p" select="*">
        <xsl:with-param name="id" select="$newid"/>
      </xsl:apply-templates>
    </part>
  </xsl:template>

  <!-- Ignore a series of task properties that are not particularly relevant
       at this point -->
  <xsl:template match="Description|NavigationStyle|PreferredMapping|Name|Type"/>

  <!-- Behavior templates -->
  <xsl:variable name="templookup">
    <xsl:copy-of select="document('temporal.xml')"/>
  </xsl:variable>

  <xsl:template match="Task" mode="behavior1">
    <xsl:variable name="to" select="TemporalOperator/@name"/>
    <xsl:variable name="catleft" select="@Category"/>
    <xsl:variable name="catright1" select="following-sibling::Task[1]/@Category"/>
    <xsl:variable name="catright2" select="following-sibling::Group/Task[1]/@Category"/>
    <xsl:variable name="tempopyn1"
      select="$templookup/TemporalOperators/TemporalOperator[@Name=$to]/
        Mapping[@Left=$catleft and @Right=$catright1]"/>
    <xsl:if test="$tempopyn1 = 'Y'">
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{lower-case(translate(@Identifier, ' ', ''))}"/>
        </condition>
        <action>
          <property part-name="{lower-case(translate(following-sibling::Task[1]/
            @Identifier, ' ', ''))}" name="visible">true</property>
        </action>
      </rule>
    </xsl:if>
    <xsl:variable name="tempopyn2"
      select="$templookup/TemporalOperators/TemporalOperator[@Name=$to]/
        Mapping[@Left=$catleft and @Right=$catright2]"/>
    <xsl:if test="$tempopyn2 = 'Y'">
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{lower-case(translate(@Identifier, ' ', ''))}"/>
        </condition>
        <action>
          <property part-name="{lower-case(translate(following-sibling::Group/
            Task[1]/@Identifier, ' ', ''))}" name="visible">true</property>
        </action>
      </rule>
    </xsl:if>
    <xsl:apply-templates mode="behavior1"/>
  </xsl:template>

  <xsl:template match="Group" mode="behavior1">
    <xsl:variable name="to" select="./Task[position() = last()]/TemporalOperator/@name"/>
    <xsl:variable name="catleft" select="./Task[position() = last()]/@Category"/>
    <xsl:variable name="catright1" select="following-sibling::Task[1]/@Category"/>
    <xsl:variable name="catright2" select="following-sibling::Group/Task[1]/@Category"/>
    <xsl:variable name="tempopyn1"
      select="$templookup/TemporalOperators/TemporalOperator[@Name=$to]/
        Mapping[@Left=$catleft and @Right=$catright1]"/>
    <xsl:if test="$tempopyn1 = 'Y'">
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{lower-case(translate(./Task[position() = last()]/
              @Identifier, ' ', ''))}"/>
        </condition>
        <action>
          <property
            part-name="{lower-case(translate(following-sibling::Task[1]/
              @Identifier, ' ', ''))}" name="visible">true</property>
        </action>
      </rule>
    </xsl:if>
    <xsl:variable name="tempopyn2"
      select="$templookup/TemporalOperators/TemporalOperator[@Name=$to]/
        Mapping[@Left=$catleft and @Right=$catright2]"/>
    <xsl:if test="$tempopyn2 = 'Y'">
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{lower-case(translate(./Task[position() = last()]/
              @Identifier, ' ', ''))}"/>
        </condition>
        <action>
          <property
            part-name="{lower-case(translate(following-sibling::Group/
              Task[1]/@Identifier, ' ', ''))}"
            name="visible">true</property>
        </action>
      </rule>
    </xsl:if>
    <xsl:apply-templates mode="behavior1"/>
  </xsl:template>

  <!-- Ignore a series of task properties that are not particularly relevant
       at this point -->
  <xsl:template match="Description|NavigationStyle|PreferredMapping|Name|Type"
    mode="behavior1"/>
< !−− T e m p l a t e s f o r g e n e r a t i n g r u l e s b a s e d on n a v i g a t i o n r u l e s −−><x s l : template match=”Task” mode=” behavior2 ”>
< !−− G e n e r a t i n g r u l e s b a s e d on Tasks menus ty l e n a v i g a t i o n −−><x s l : i f t e s t =” @navigation = ’ menustyle ’ ”>
<x s l : for−each s e l e c t =”SubTask / Task”>
<r u l e>
<condi t ion>
<event c l a s s =”G: actionPerformed ”
part−name=”{ concat ( lower−case ( t r a n s l a t e ( @Identif ier , ’ ’ , ’ ’ ) ) , ’ menuitem ’ ) } ” />
</condi t ion>
<a c t i o n>
<property name=” v i s i b l e ”
part−name=”{lower−case ( t r a n s l a t e ( @Identif ier , ’ ’ , ’ ’ ) ) } ”>t rue</property>
</ a c t i o n>
</r u l e>
</ x s l : for−each>
<x s l : for−each s e l e c t =”SubTask / Group”>
<r u l e>
<condi t ion>
<event c l a s s =”G: actionPerformed ”
part−name=”{ concat ( lower−case ( t r a n s l a t e ( @id , ’ ’ , ’ ’ ) ) , ’ menuitem ’ ) } ” />
</condi t ion>
<a c t i o n>
<property name=” v i s i b l e ”
part−name=”{lower−case ( t r a n s l a t e ( @id , ’ ’ , ’ ’ ) ) } ”>t rue</property>
        </action>
      </rule>
    </xsl:for-each>
  </xsl:if>
  <!-- Generating rules based on the Task's parent's menustyle navigation -->
  <!--xsl:if test="../../@navigation = 'menustyle'">
    <xsl:variable name="newid" select="lower-case(translate(@Identifier, ' ', ''))" />
    <xsl:variable name="parentid" select="translate(../../@Identifier, ' ', '')" />
    <rule>
      <condition name="G:actionPerformed"
        part-name="{concat(concat($newid, $parentid), 'navig')}" />
      <action>
        <property name="visible"
          part-name="{translate($parentid, ' ', '')}">true</property>
      </action>
    </rule>
  </xsl:if-->
  <!-- Generating rules based on the Task's parent's independent navigation -->
  <xsl:if test="../../@navigation = 'independent'">
    <xsl:variable name="newid" select="lower-case(translate(@Identifier, ' ', ''))" />
    <xsl:for-each select="preceding-sibling::Task">
      <xsl:variable name="prectaskid"
        select="lower-case(translate(@Identifier, ' ', ''))" />
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{concat(concat($newid, $prectaskid), 'navig')}" />
        </condition>
        <action>
          <property name="visible" part-name="{$newid}">true</property>
        </action>
      </rule>
    </xsl:for-each>
    <xsl:for-each select="following-sibling::Task">
      <xsl:variable name="folltaskid"
        select="lower-case(translate(@Identifier, ' ', ''))" />
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{concat(concat($newid, $folltaskid), 'navig')}" />
        </condition>
        <action>
          <property name="visible" part-name="{$newid}">true</property>
        </action>
      </rule>
    </xsl:for-each>
    <xsl:for-each select="preceding-sibling::Group">
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{concat(concat($newid, @id), 'navig')}" />
        </condition>
        <action>
          <property name="visible" part-name="{$newid}">true</property>
        </action>
      </rule>
    </xsl:for-each>
    <xsl:for-each select="following-sibling::Group">
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{concat(concat($newid, @id), 'navig')}" />
        </condition>
        <action>
          <property name="visible" part-name="{$newid}">true</property>
        </action>
      </rule>
    </xsl:for-each>
  </xsl:if>
  <xsl:apply-templates mode="behavior2" />
</xsl:template>
<xsl:template match="Group" mode="behavior2">
  <!-- Generating rules based on the Group's menustyle navigation -->
  <xsl:if test="@navigation = 'menustyle'">
    <xsl:for-each select="Task">
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{concat(lower-case(translate(@Identifier, ' ', '')), 'menuitem')}" />
        </condition>
        <action>
          <property name="visible"
            part-name="{lower-case(translate(@Identifier, ' ', ''))}">true</property>
        </action>
      </rule>
    </xsl:for-each>
    <xsl:for-each select="Group">
      <rule>
        <condition>
          <event class="G:actionPerformed"
            part-name="{concat(lower-case(translate(@id, ' ', '')), 'menuitem')}" />
        </condition>
        <action>
          <property name="visible"
            part-name="{lower-case(translate(@id, ' ', ''))}">true</property>
        </action>
      </rule>
    </xsl:for-each>
  </xsl:if>
  <!-- Generating rules based on the Group's parent's independent navigation -->
  <xsl:if test="../../@navigation = 'independent'">
    <xsl:variable name="newid" select="lower-case(translate(@id, ' ', ''))" />
    <xsl:for-each select="preceding-sibling::Task">
      <xsl:variable name="prectaskid"
        select="lower-case(translate(@Identifier, ' ', ''))" />
      <rule>
        <condition name="G:actionPerformed"
          part-name="{concat(concat($newid, $prectaskid), 'navig')}" />
        <action>
          <property name="visible" part-name="{$newid}">true</property>
        </action>
      </rule>
    </xsl:for-each>
    <xsl:for-each select="following-sibling::Task">
      <xsl:variable name="folltaskid"
        select="lower-case(translate(@Identifier, ' ', ''))" />
      <rule>
        <condition name="G:actionPerformed"
          part-name="{concat(concat($newid, $folltaskid), 'navig')}" />
        <action>
          <property name="visible" part-name="{$newid}">true</property>
        </action>
      </rule>
    </xsl:for-each>
    <xsl:for-each select="preceding-sibling::Group">
      <rule>
        <condition name="G:actionPerformed"
          part-name="{concat(concat($newid, @id), 'navig')}" />
        <action>
          <property name="visible" part-name="{$newid}">true</property>
        </action>
      </rule>
    </xsl:for-each>
    <xsl:for-each select="following-sibling::Group">
      <rule>
        <condition name="G:actionPerformed"
          part-name="{concat(concat($newid, @id), 'navig')}" />
        <action>
          <property name="visible" part-name="{$newid}">true</property>
        </action>
      </rule>
    </xsl:for-each>
  </xsl:if>
  <xsl:apply-templates mode="behavior2" />
</xsl:template>
<!-- Ignore a series of task properties that are not particularly relevant at this point -->
<xsl:template
  match="Description|PreferredMapping|Name|Type"
  mode="behavior2"></xsl:template>
</xsl:stylesheet>
Listing A.2: Desktop mappings between tasks and UIML generic parts.
<?xml version="1.0" encoding="UTF-8"?>
<TaskMapping xmlns:G="http://www.cs.vt.edu/">
  <TaskCategory name="application">
    <TaskTypeMapping type="Display">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>Structure</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:TopContainer></G:TopContainer>
          </part>
          <part preferencenumber="2">
            <G:Area />
          </part>
          <part preferencenumber="3">
            <G:Group />
          </part>
          <part preferencenumber="4">
            <G:Icon />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
      <UIMLCategoryMapping preferencenumber="2">
        <UIMLCategory>Text</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Text />
          </part>
          <part preferencenumber="2">
            <G:Label />
          </part>
          <part preferencenumber="3">
            <G:Area>
              <G:Text />
              <G:Text />
            </G:Area>
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
      <UIMLCategoryMapping preferencenumber="3">
        <UIMLCategory>Table</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Table />
          </part>
          <part preferencenumber="2">
            <G:TableCaption />
          </part>
          <part preferencenumber="3">
            <G:TableRow />
          </part>
          <part preferencenumber="4">
            <G:TableCell />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Locate">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>ActionItem</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Button />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Visualise">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>ActionItem</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Button />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Grouping">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>Table</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Table />
          </part>
          <part preferencenumber="2">
            <G:TableCaption />
          </part>
          <part preferencenumber="3">
            <G:TableRow />
          </part>
          <part preferencenumber="4">
            <G:TableCell />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="ProcessingFeedback">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>Structure</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:TopContainer>
              <G:Label />
              <G:TextField />
            </G:TopContainer>
          </part>
          <part preferencenumber="2">
            <G:Area />
          </part>
          <part preferencenumber="3">
            <G:Group />
          </part>
          <part preferencenumber="4">
            <G:Icon />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Overview">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>Text</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Text />
          </part>
          <part preferencenumber="2">
            <G:Label />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
      <UIMLCategoryMapping preferencenumber="2">
        <UIMLCategory>Table</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Table />
          </part>
          <part preferencenumber="2">
            <G:TableCaption />
          </part>
          <part preferencenumber="3">
            <G:TableRow />
          </part>
          <part preferencenumber="4">
            <G:TableCell />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
  </TaskCategory>
  <TaskCategory name="interaction">
    <TaskTypeMapping type="null">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>null</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Area></G:Area>
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Selection">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>Selector</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Menu />
          </part>
          <part preferencenumber="2">
            <G:MenuBar />
          </part>
          <part preferencenumber="3">
            <G:MenuItem />
          </part>
          <part preferencenumber="4">
            <G:PulldownList />
          </part>
          <part preferencenumber="5">
            <G:CheckBox />
          </part>
          <part preferencenumber="6">
            <G:RadioButton />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
      <UIMLCategoryMapping preferencenumber="2">
        <UIMLCategory>List</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:List />
          </part>
          <part preferencenumber="2">
            <G:Checkbox />
          </part>
          <part preferencenumber="3">
            <G:ListItem />
          </part>
          <part preferencenumber="4">
            <G:Menu />
          </part>
          <part preferencenumber="5">
            <G:MenuBar />
          </part>
          <part preferencenumber="6">
            <G:MenuItem />
          </part>
          <part preferencenumber="7">
            <G:PulldownList />
          </part>
          <part preferencenumber="8">
            <G:RadioButton />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Selection/Single Choice">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>Selector</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Menu />
          </part>
          <part preferencenumber="2">
            <G:MenuBar />
          </part>
          <part preferencenumber="3">
            <G:MenuItem />
          </part>
          <part preferencenumber="4">
            <G:PulldownList />
          </part>
          <part preferencenumber="5">
            <G:CheckBox />
          </part>
          <part preferencenumber="6">
            <G:RadioButton />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
      <UIMLCategoryMapping preferencenumber="2">
        <UIMLCategory>List</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:List />
          </part>
          <part preferencenumber="2">
            <G:Checkbox />
          </part>
          <part preferencenumber="3">
            <G:ListItem />
          </part>
          <part preferencenumber="4">
            <G:Menu />
          </part>
          <part preferencenumber="5">
            <G:MenuBar />
          </part>
          <part preferencenumber="6">
            <G:MenuItem />
          </part>
          <part preferencenumber="7">
            <G:PulldownList />
          </part>
          <part preferencenumber="8">
            <G:RadioButton />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Edit">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>Text</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Text />
          </part>
          <part preferencenumber="2">
            <G:Label />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
      <UIMLCategoryMapping preferencenumber="2">
        <UIMLCategory>Inputter</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Button />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Control">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>ActionItem</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Button />
          </part>
          <part preferencenumber="2">
            <G:Link />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
      <UIMLCategoryMapping preferencenumber="2">
        <UIMLCategory>Table</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Table />
          </part>
          <part preferencenumber="2">
            <G:TableCaption />
          </part>
          <part preferencenumber="3">
            <G:TableRow />
          </part>
          <part preferencenumber="4">
            <G:TableCell />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Monitoring">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>Structure</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:TopContainer>
              <G:Label />
              <G:TextField />
            </G:TopContainer>
          </part>
          <part preferencenumber="2">
            <G:Area />
          </part>
          <part preferencenumber="3">
            <G:Group />
          </part>
          <part preferencenumber="4">
            <G:Icon />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="RespondingToAlerts">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>ActionItem</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Button />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Query">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>ActionItem</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Button />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
  </TaskCategory>
  <TaskCategory name="abstraction">
    <TaskTypeMapping type="null">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory />
        <UIMLParts>
          <part preferencenumber="1">
            <G:TopContainer></G:TopContainer>
          </part>
          <part preferencenumber="2">
            <G:Area />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
    <TaskTypeMapping type="Grouping">
      <UIMLCategoryMapping preferencenumber="1">
        <UIMLCategory>Table</UIMLCategory>
        <UIMLParts>
          <part preferencenumber="1">
            <G:Table />
          </part>
          <part preferencenumber="2">
            <G:TableCaption />
          </part>
          <part preferencenumber="3">
            <G:TableRow />
          </part>
          <part preferencenumber="4">
            <G:TableCell />
          </part>
        </UIMLParts>
      </UIMLCategoryMapping>
    </TaskTypeMapping>
  </TaskCategory>
</TaskMapping>
Appendix B
Vita
PERSONAL INFORMATION
Full Name: Mir Farooq Ali
Email: miali@cs.vt.edu
Web: http://purl.org/net/farooq
EDUCATION
Doctor of Philosophy, Computer Science, May 2005
Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg,
VA
Dissertation title: A Transformation-based Approach to Building Multi-
Platform User Interfaces Using a Task Model and the User Interface Markup
Language
Advisor: Dr. Manuel Perez-Quinones
Master of Science, Computer Science, Dec 1996
Virginia Polytechnic Institute and State University (Virginia Tech), Blacksburg,
VA
Bachelor of Engineering, Computer Science and Engineering, June 1993
Osmania University, Hyderabad, India
Mir Farooq Ali APPENDIX B. VITA
TEACHING EXPERIENCE
Courses taught as primary instructor
Introduction to Programming in C/C++ (CS 1044), Computer Science, Virginia Tech (Spring 2004,
Summer 2004)
• A lecture course for non-majors that taught the fundamentals of structured programming and problem solving using the C/C++ programming language.
• Prepared class materials, programming assignments, and in-class quizzes, and conducted lecture sessions.
• Graded homework and exams.
Introduction to Programming in Java (CS 1054), Computer Science, Virginia Tech (Fall 2003)
• A lecture- and laboratory-based course for non-majors that provided an introduction to object-oriented programming using the Java language.
• Duties included preparing materials for lectures, labs, and programming assignments.
• Graded homework and exams.
Introductory UNIX (CS 1026 & CS 2204), Computer Science, Virginia Tech (Summer 1999, Spring 2003)
• A lecture- and laboratory-based course that introduced students to the usage and administration of the UNIX OS. Each lecture was supplemented by an accompanying lab.
• Duties involved preparing materials for lectures, labs, and programming assignments.
Operating Systems (CS 3204), Computer Science, Virginia Tech (Fall 2002, Spring 2003, Fall 2003, Spring 2005)
• A lecture-based conceptual course for juniors that introduced both theoretical and
practical issues underlying operating system design and implementation. Lectures
and homework assignments focused primarily on theoretical and conceptual aspects
of operating systems. Programming projects focused on the application of concepts
and implementation details.
• Duties involved preparing lecture notes and homework/programming assignment
specs.
156
Mir Farooq Ali APPENDIX B. VITA
Internet Programming (CS 4244), Computer Science, Virginia Tech (Fall 2003)
• A senior-level course that focused on key technologies underlying the World Wide Web. Course topics included web architecture, programming systems, security, cryptography, document representations, and legal and social issues of the Web.
• Duties involved preparing lecture notes and homework/programming assignments.
• Students also participated in a group project and presented their work at the end of the semester.
Graphical User Interface Programming (CS 4984), Computer Science, Virginia Tech (Spring 2005)
• Course covers basic topics in Graphical User Interface (GUI) programming, including GUI architectures, development tools, and GUI design.
• Duties involve preparing lecture notes and homework/programming assignments.
This was a new course in the department and I was responsible for developing the
course content and structure.
• Students participate in a group project and will present their work at the end of the semester.
Graduate Teaching Assistant, Virginia Tech (Aug 1995 - May 1997, Aug 1998 - May 1998, Jan 1999 - May 2000, May 2001 - Jun 2001, Aug 2001 - Dec 2001)
• Tutored, graded, and administered the following Computer Science courses:
– Undergraduate classes: Data Structures, Java, Parallel and Distributed Computation, Operating Systems, Data Analysis and Algorithms
– Graduate classes: Information Storage and Retrieval, Theory of Algorithms and
User Interface Software
• Graded student homework, exams and computer programs
• Taught classes in the absence of the instructor
• Assisted students with programming assignments
• Maintained class web pages and class list
RESEARCH INTERESTS
Multi-Platform User Interfaces, Device Independence, User Interface Software, Multi-Platform
Usability, Accessibility
RESEARCH EXPERIENCE
Research Associate, Virginia Tech Transportation Institute, Blacksburg, VA (Feb 2004 - Jun
2005)
• Worked on the Instrumented City project
• Helped configure traffic camera controllers
• Developed software for license plate matching and travel time calculations
Graduate Research Assistant, Virginia Tech, Computer Science, Blacksburg, VA (Jan 2002 -
May 2002)
• Researched multi-platform UI development and associated usability issues
• Developed a task model specification in XML
• Developed and implemented transformation algorithms using Java/XML
Graduate Research Assistant, Virginia Tech, Computer Science, Blacksburg, VA (Jul 1998 - Dec
1998)
• Partially implemented delta encoding in Squid proxy server (modified and wrote
system level C code under UNIX operating system)
• Conducted experiments to test effectiveness of implementation
Research/Teaching Assistant, King Fahd University of Petroleum and Minerals (KFUPM), Dept.
of Information and Computer Science, Dhahran, Saudi Arabia (Aug 1993 - Jul 1995)
• Teaching Assistant for Computer Science classes including Operating Systems and
Compilers
• Conducted lab sessions for programming classes (FORTRAN)
• Developed a framework for creating hierarchical interconnection networks for parallel computers
INDUSTRY EXPERIENCE
Software Engineer, Harmonia, Inc., Blacksburg, VA (Sep 2000 - May 2001)
• Designed and developed a generic translator in Java for the User Interface Markup Language (UIML)
• Developed a specification document for the Generic vocabulary used with the translator
• Integrated translator within an Authoring Tool for UIML
PUBLICATIONS
Refereed Journal Papers
• Ali, M. F. and Perez-Quinones, M., Facilitating development of Multi-Platform User Interfaces using navigation attributes, in preparation.
Refereed Conference Papers
• Goncalves, M., Luo, M., Shen, R., Ali, M. F., and Fox, E., An XML Log Standard and Tool for Digital Library Logging Analysis, ECDL’2002, Rome, Italy.
• Ali, M. F., Perez-Quinones, M., Abrams, M. and Shell, E., Building Multi-Platform User
Interfaces using UIML, CADUI’2002, Valenciennes, France.
• Ali, M. F. and Abrams, M., Simplifying construction of Multi-Platform User Interfaces using
UIML, UIML’2001, Paris, France, March 2001.
• Ali, M. F., and Guizani, M., A new design methodology for optical hypercube interconnection network, ICAPP’95, IEEE First International Conference on Algorithms and Architectures for Parallel Processing, 1995, Brisbane, Australia.
• Ali, M. F., and Perez-Quinones, M., Using Task Models to Generate Multi-Platform User
Interfaces While Ensuring Usability, CHI’2002, Minneapolis, USA.
Book Chapter
• Ali, M. F., Perez-Quinones, M., and Abrams, M., “Building Multi-Platform User Interfaces with UIML”, in Seffah, A. and Javahery, H., eds., Multi-Device User Interfaces: Engineering and Application Frameworks, John Wiley & Sons, Nov 2003.
• Ali, M. F., “HTTP: Present and Future”, Chapter 23, in Abrams, Marc, ed., World Wide Web: Beyond the Basics, Prentice Hall, 1998.
Workshop Papers
• Ali, M. F., Abrams, M., and Perez-Quinones, M., Multi-Platform User Interface Construction with Transformations using UIML, position paper for the workshop “Transforming the UI for anyone anywhere” at CHI’2001, April 2001.
Presentations
• Ali, M. F., Simplifying Construction of Multi-platform User Interfaces with UIML, presented at the Doctoral Symposium, SIGCSE’2002, Covington, Kentucky, USA.
Technical Reports
• Ali, M. F., Perez-Quinones, M., Abrams, M., and Shell, E., Building Multi-Platform User Interfaces using UIML, cs.HC/0111024, Computing Research Repository (CoRR).
• Ali, M. F., Perez-Quinones, M., and Abrams, M., A Multi-Step Process for Generating Multi-Platform User Interfaces using UIML, cs.HC/0111025, Computing Research Repository (CoRR).
TEXTBOOK ASSISTANCE / SUPPLEMENTALS
• Prepared instructor materials for “UNIX in a Nutshell”, 3rd Edition, Arnold Robbins, O’Reilly & Associates, 1999; work done in Summer 2003.
HONORS/AFFILIATIONS
• Winner, “Excellence in Graduate Teaching” award, Computer Science, Virginia Tech
(2004 - 2005 academic year)
• Participant in SIGCSE Doctoral Symposium, 2002
• Attendee in CRA Academic Careers Workshop, Arlington, Virginia, 2002
• Student member, Association for Computing Machinery (ACM)
• Member, ACM Special Interest Group on Computer Science Education (SIGCSE)
• Student member, IEEE Computer Society
• Member, Dialogue Research Group at Virginia Tech
• Member, Upsilon Pi Epsilon (Honor Society for Computer Science)
SERVICE
• Reviewer, WWW’1999, CHI’2002, SIGCSE’2005
• Reviewer, The Computer Journal
• Member, Computer Science Graduate Council (Fall 1997 - Spring 1998, Fall 2000 - Spring 2001)
• Served as council treasurer and as representative of CS graduate students to the university-level Graduate Assembly
• Student Volunteer, UIST’2001, SIGCSE’2002, CHI’2002