
Künstl Intell (2012) 26:197–200
DOI 10.1007/s13218-012-0184-5

DISSERTATIONEN UND HABILITATIONEN

Incorporating Usability Evaluation in Software Development Environments

Shah Rukh Humayoun

Published online: 28 March 2012
© Springer-Verlag 2012

Abstract We propose a way to define and automate user and usability evaluation from within the integrated development environment (IDE). Specifically, for the automatic analysis of usability issues and functionality problems, we provide a framework for conducting evaluation experiments using TaMoGolog-based formal task models. This approach enables the software development team to automatically collect and analyze user and system activities and behavior, recognizing usability issues and functionality problems in an efficient and effective way. The developed tools, UEMan and TaMUlator, realize the proposed approach and framework at the IDE level.

Keywords User centered design (UCD) · Usability evaluation · Integrated development environment

1 Introduction

Usability evaluation aims at involving users, especially product end-users, and experts (e.g., UI experts, system analysts) in the evaluation process of a specific product, in order to find usability flaws and errors and to refine the product according to the feedback [1, 4]. Automating evaluation approaches and techniques, and applying them throughout the development process, provides several benefits [9]: it reduces development costs and time, improves error tracing, yields better feedback, and increases the coverage of evaluated features.

S.R. Humayoun (✉)
Dipartimento di Informatica e Sistemistica “A. Ruberti”,
Sapienza Università di Roma, Via Ariosto 25, 00185 Roma, Italy
e-mail: [email protected]

Our research approach [2, 3, 6] involves integrating user centered design (UCD) [1, 4] activities, especially those related to user and usability evaluation, into the software development activities, and defining evaluation experiments and running them from within the Integrated Development Environment (IDE). This two-layer approach equips the software development team with the mechanisms to perform, monitor, and control a continuous evaluation process tightly coupled with the development process, thus receiving ongoing user feedback while development continues. This paper highlights our contributions towards this approach.

2 Integrating UCD into Agile Development

We propose an integrated life-cycle of four UCD activities alongside the agile development activities, for involving the UCD philosophy in the agile development approach. Figure 1 shows the UCD activities (solid ovals); the agile activities performed for each user story or development task are represented by dashed ovals.

UCD Involvement The software team involves end-users and UCD experts when appropriate, mostly through the use of elicitation methods during work on requirements, design, and early prototypes.

Design-Artifacts Evaluation The early designs and prototypes are then quickly evaluated through short, time-efficient evaluation methods, normally by UCD experts (e.g., system analysts, usability evaluators) and sometimes by a small group of end-users. The results of this phase become input for the third phase.


Fig. 1 Life cycle for involving UCD activities in agile development activities

Design Improvement The software team corrects and improves the design according to the feedback and implements the target modules.

Detailed Evaluation The developed modules are evaluated in detail by end-users and/or UCD experts, normally through rigorous evaluation methods, preferably with automated tool support. The results, feedback, and suggestions serve as input for planning improvements in the design and for the implementation of the developing product in the upcoming iterations.

3 Automating User and Usability Evaluation in Development Environments

Managing and automating user and usability evaluation methods and techniques within the IDE, as described in what follows, provides the ongoing benefit of including the user experience as part of the development process for producing high-quality products with an adequate level of usability.

The Experiment Object To help end-users continuously evaluate the product, the UCD philosophy provides different kinds of tests, referred to in the following as experiments because they are executed in controlled environments. Automating the evaluation experiments within the IDE means that in the development area of a software project we can add a new kind of object, named experiment, that can be created and executed to provide evaluation data.
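The experiment-as-object idea can be sketched roughly as follows. This is an illustrative sketch only; the class and field names are assumptions and do not reflect UEMan's actual API.

```python
from dataclasses import dataclass, field

# Illustrative sketch: an evaluation experiment managed as a first-class
# project object that can be created, executed, and queried for data.
@dataclass
class Experiment:
    name: str
    goals: list                                   # goals the evaluating users should achieve
    records: list = field(default_factory=list)   # raw evaluation data collected during runs

    def record(self, event):
        """Collect one piece of evaluation data during an experiment run."""
        self.records.append(event)

    def results(self):
        """Expose the collected data for later analysis."""
        return list(self.records)

exp = Experiment("checkout-flow", goals=["complete purchase"])
exp.record({"action": "click", "target": "pay-button"})
```

Treating the experiment as an ordinary project object is what lets it live alongside code and build artifacts inside the IDE.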

Derived Development Tasks Each kind of evaluation experiment has its own criteria for judging the usability level of the product. Support for the analysis of the experiments' results enables comparing the results against the targeted usability metrics. If the results show a failure to achieve the targeted metrics, then new development tasks can be defined accordingly.
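The derivation step above can be sketched as a simple comparison of measured values against targets; the metric names and threshold logic here are hypothetical.

```python
# Illustrative sketch: derive one development task per usability metric
# that failed to reach its target value.
def derive_tasks(measured, targets):
    """`measured` and `targets` map metric names to values; a metric
    fails when its measured value falls below the target."""
    return [
        f"Improve '{metric}': measured {measured.get(metric, 0)}, target {goal}"
        for metric, goal in targets.items()
        if measured.get(metric, 0) < goal
    ]

tasks = derive_tasks({"task-completion-rate": 0.7},
                     {"task-completion-rate": 0.9})
```

Each generated task can then be filed in the project's regular task list, keeping evaluation outcomes and development work in one place.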


Fig. 2 TaMU process life-cycle

Code Traceability Automating the process of keeping backward and forward traceability between the different evolving parts at the IDE level gives a better understanding of the design refinements made to improve the product. Furthermore, it helps in learning about the impact of the evaluation.
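One minimal way to realize such bidirectional traceability is an index of forward and backward links between artifacts (experiments, derived tasks, code locations); the structure below is an assumption, not the tools' actual mechanism.

```python
# Illustrative sketch of bidirectional traceability between evolving parts:
# every forward link ("X led to Y") also records the backward link.
class TraceabilityIndex:
    def __init__(self):
        self._forward = {}   # artifact -> set of artifacts it led to
        self._backward = {}  # artifact -> set of artifacts it came from

    def link(self, source, target):
        self._forward.setdefault(source, set()).add(target)
        self._backward.setdefault(target, set()).add(source)

    def leads_to(self, artifact):
        return self._forward.get(artifact, set())

    def derived_from(self, artifact):
        return self._backward.get(artifact, set())

idx = TraceabilityIndex()
idx.link("experiment-1", "task-42")        # evaluation result produced a task
idx.link("task-42", "module: checkout")    # task was implemented in a module
```

Walking the backward links from a module answers "which evaluation caused this change?", which is the impact question raised above.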

4 Task Model-based Usability Evaluation (TaMU) Framework

We present the TaMU framework for managing and automating an end-to-end task model-based evaluation life-cycle (Fig. 2). It has three main methodological distinctions: (1) defining formal task models through the TaMoGolog [8] task modeling language; (2) analyzing user and system behavior based on the task structure, the precondition axioms of tasks, the postcondition effects on variables, and any optional domain knowledge representation; and (3) integrating the whole evaluation process into the development environment.

Tag The software team tags the application at the code level with the set of relevant tasks and with those variables (fluents) that can be part of precondition axioms or postcondition effects. Through this, the software team can define tasks at different abstraction levels, which is useful for evaluating the software at those levels.
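Code-level tagging of this kind can be pictured as a lightweight annotation that registers a function as an evaluable task. The decorator, registry, and level names below are hypothetical stand-ins, not TaMUlator's real tagging API (TaMUlator itself works inside the Eclipse/Java tool chain).

```python
# Illustrative sketch: tag functions as tasks at a chosen abstraction level,
# collecting them in a registry the evaluation machinery could consult.
TASK_REGISTRY = {}

def task(name, level="leaf"):
    def wrap(fn):
        TASK_REGISTRY[name] = {"function": fn.__name__, "level": level}
        return fn
    return wrap

@task("open-file", level="leaf")
def open_file(path):
    return f"opened {path}"
```

Because each tag carries an abstraction level, the same code base can be evaluated against coarse-grained scenarios or fine-grained interaction steps.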

Task Model Creation The evaluator creates task models in the TaMoGolog language for each planned usability scenario, where each task model is a representation of the corresponding scenario at a certain abstraction level, in order to highlight usability issues and functionality problems related to that scenario.
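The ingredients of such a task model (an ordered task structure, precondition axioms over fluents, and postcondition effects) can be sketched as plain data. This sketch makes no attempt to reproduce TaMoGolog's actual formal syntax; the scenario and fluent names are invented for illustration.

```python
# Illustrative sketch: a scenario as an ordered list of tasks, each with a
# precondition over the state (the fluents) and a postcondition effect.
scenario_model = {
    "name": "save-document",
    "tasks": [
        {"name": "edit-text",
         "precondition": lambda s: s["document_open"],
         "effect": {"dirty": True}},
        {"name": "save",
         "precondition": lambda s: s["dirty"],
         "effect": {"dirty": False}},
    ],
}

def run_model(model, state):
    """Apply tasks in order, checking each precondition against the state."""
    for t in model["tasks"]:
        if not t["precondition"](state):
            raise ValueError(f"precondition failed for task {t['name']}")
        state.update(t["effect"])
    return state
```

Executing the model over a recorded state is what later lets the analyzer detect, for example, a task performed while its precondition did not hold.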

Evaluation-Experiment Creation The evaluator creates evaluation experiments and links the related task models constructed in the previous phase. In each experiment, the evaluating users are expected to achieve a list of goals by performing tasks while using the developing or developed product.

Run & Record While the evaluating users perform tasks on the targeted application, the user actions and the evaluated application's data are recorded.

Analysis The logged data of the users and the evaluated application is analyzed according to a set of analysis criteria. The analysis results, i.e., errors and flaws (e.g., unfulfilled preconditions, skipped tasks), usability issues (e.g., wrong path selection), functionality problems (e.g., a system function producing wrong output), and user and system behavior (e.g., user inputs, user trends), are produced by applying appropriate statistical techniques based on these criteria.
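At its simplest, part of this analysis amounts to comparing the recorded action log against the expected task sequence; the criteria below (skipped and unexpected tasks) are just two of the error classes mentioned above, and the function is an illustrative sketch, not the built-in analyzer.

```python
# Illustrative sketch: flag tasks the model expected but the user skipped,
# and actions the user performed that the model did not anticipate.
def analyze(expected_tasks, logged_tasks):
    expected, logged = set(expected_tasks), set(logged_tasks)
    return {
        "skipped":    [t for t in expected_tasks if t not in logged],
        "unexpected": [t for t in logged_tasks if t not in expected],
    }

report = analyze(["open", "edit", "save"],   # tasks in the task model
                 ["open", "save", "print"])  # tasks actually recorded
```

Richer criteria (precondition violations, wrong path selection, trends across users) would operate on the same recorded data with statistical techniques layered on top.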

5 Automated Tools Support

The developed tools, UEMan [5] and TaMUlator [7], realize the concepts and the framework described in the previous sections at the IDE level.


UEMan It is a Java-based Eclipse plug-in that supports the automation and management of UCD activities as part of the Eclipse IDE. Its main capabilities include creating experiment objects, deriving development tasks, tracing different types of tasks to and from the code, and a facility for creating automatic aspects to increase the body of evaluation data.

TaMUlator It realizes the TaMU life-cycle at the IDE level. It allows tagging tasks and variables of interest in the program code and provides an easy way to define usability scenarios for evaluation. This is achieved by compiling TaMoGolog-based task models that can be aggregated into evaluation experiments, which can then be evaluated automatically at any time by the built-in analyzer using the recorded data of these experiments, or analyzed manually after exporting the recorded data to CSV format.

6 Conclusion

The main difference between our approach and previous ones is that it integrates the UCD philosophy not only at the process life-cycle level but also at the concrete level, i.e., managing and automating user and usability evaluation from within the development environment alongside the other development activities. This two-layer (i.e., conceptual and concrete) approach equips the software team to perform, monitor, and control a continuous evaluation process tightly coupled with the development process.

Secondly, the presented TaMU framework provides a way to perform product usability and functionality testing together through the evaluation experiments, thus making it possible to highlight usability issues side by side with functionality problems. Although tagging tasks and writing formal task models in TaMoGolog incurs an overhead of training, time, and cost, we argue that, given the combined usability-evaluation and functionality-testing approach, it saves time and cost in the long run. This is especially useful for software teams that work in short iterations, such as agile development iterations, where there is normally not enough time or resources to perform usability evaluation and functionality testing separately. Conducting a case study in academia [6, 7], we found that software teams engaged easily in performing product evaluation on a per-iteration basis while using the TaMUlator tool to evaluate their product more effectively. However, we intend to perform empirical studies in the future, both in industry and academia, to check the effectiveness of our approach and to validate our claim.

Acknowledgements Many thanks go to Prof. Tiziana Catarci and Prof. Giuseppe De Giacomo for supervising this PhD work, to Dr. Yael Dubinsky for her collaboration throughout, and to the students from Technion [6] who participated in the course ‘annual project in software engineering’ from 2007 to 2010.

References

1. Dix A, Finlay JE, Abowd GD, Beale R (2003) Human computer interaction, 3rd edn. Prentice Hall, New York

2. Dubinsky Y, Catarci T, Humayoun SR, Kimani S (2007) Integrating user evaluation into software development environments. In: 2nd DELOS conference on digital libraries, Pisa, Italy, 5–7 Dec 2007

3. Dubinsky Y, Humayoun SR, Catarci T (2008) Eclipse plug-in to manage user centered design. In: I-USED workshop, Pisa, Italy, Sep 2008

4. Gould JD, Lewis C (1985) Designing for usability: key principles and what designers think. Commun ACM 28:300–311

5. Humayoun SR, Dubinsky Y, Catarci T (2009) UEMan: a tool to manage user evaluation in development environments. In: ICSE’09. IEEE Press, Vancouver, pp 551–554

6. Humayoun SR, Dubinsky Y, Catarci T (2011) A three-fold integration framework to incorporate user-centered design into agile software development. In: HCII 2011. Lecture notes in computer science, vol 6776

7. Humayoun SR, Dubinsky Y, Nazarov E, Israel A, Catarci T (2011) TaMUlator: a tool to manage task model-based usability evaluation in development environments. In: IADIS conf IHCI 2011, Rome, Italy, 20–26 July 2011

8. Humayoun SR, Catarci T, Dubinsky Y (2011) A dynamic framework for multi-view task modeling. In: 9th ACM SIGCHI Italian chapter CHItaly 2011, pp 185–190

9. Ivory M, Hearst M (2001) The state of the art in automating usability evaluation of user interfaces. ACM Comput Surv 33(4):470–516

Shah Rukh Humayoun holds a Ph.D. degree in Computer Engineering from Sapienza University of Rome. His research interests include user-centered design (UCD), automated usability evaluation, task modeling, and software development processes. He is particularly interested in applying and merging HCI methodologies and techniques in software engineering in order to bridge the gap between these two fields. Contact him at [email protected].