Bryan Kern (SUNY Oswego), Anna Medeiros (UFPB), Rafael de Castro (UFPB), Maria Clara (UFPB), José Ivan (UFPB), Tatiana Tavares (UFPB), Damian Schofield (SUNY Oswego)

Using different types of evaluation methods for interfaces helps gain a broader understanding of what a program is doing from the user's perspective, and also helps find design flaws that exist within the program. When creating and conducting usability tests, there are five important user-centered attributes: learnability (the program must be easy to learn and use), efficiency (the user must understand what the software is doing in order to use it), memorability (the user should remember how to use the software after leaving and coming back to it), errors (a low error rate makes a happy user), and satisfaction (the user must enjoy using the software). All of these attributes can be measured on either a qualitative or a quantitative scale (Bhatnagar & Dubey, 2012).
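As a purely illustrative sketch (not part of the original study), the quantitative side of these five attributes can be captured by averaging participant ratings per attribute; the 1–5 Likert scale and the ratings below are assumptions invented for the example:

```python
# Hypothetical sketch: averaging 1-5 Likert ratings for the five
# usability attributes across participants. The ratings below are
# invented for illustration, not data from this study.

ATTRIBUTES = ["learnability", "efficiency", "memorability", "errors", "satisfaction"]

def mean_scores(responses):
    """Average each attribute's rating over all participants."""
    totals = {attr: 0 for attr in ATTRIBUTES}
    for response in responses:
        for attr in ATTRIBUTES:
            totals[attr] += response[attr]
    return {attr: totals[attr] / len(responses) for attr in ATTRIBUTES}

# Two made-up participants rating a program on a 1 (poor) to 5 (good) scale.
responses = [
    {"learnability": 4, "efficiency": 3, "memorability": 5, "errors": 4, "satisfaction": 4},
    {"learnability": 2, "efficiency": 3, "memorability": 3, "errors": 2, "satisfaction": 4},
]
scores = mean_scores(responses)
```

A per-attribute mean like this gives the quantitative scale the paragraph mentions, while free-form comments from the same session supply the qualitative side.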

Other researchers have proposed a system that looks at communicability rather than usability. Instead of focusing on users and on how they rank the qualities of a program, communicability evaluation focuses on the conversation, or internal dialogue, that occurs within users (de Souza et al., 1999; de Souza & Laffron, 2007). Using these different types of evaluation, both the software and the methods themselves can be analyzed, which further validates a study. Incorporating both types of methods (usability and communicability) yields a deeper understanding of evaluation, and combining and comparing the two also has the potential to increase the reliability and the validity of the study. For the communicability tags, see figure 1.
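To make the tagging idea concrete, the occurrences of each communicability tag can be tallied across test sessions; a minimal sketch, in which the session data are invented for illustration:

```python
# Hypothetical sketch: counting how often each communicability tag
# (e.g. "What now?") was observed across user test sessions.
# The sessions below are invented, not data from this study.
from collections import Counter

def tally_tags(sessions):
    """Return a count of each communicability tag over all sessions."""
    counts = Counter()
    for observed_tags in sessions:
        counts.update(observed_tags)
    return counts

sessions = [
    ["What happened?", "What now?"],
    ["Why doesn't it?", "What now?"],
    ["What now?"],
]
tag_counts = tally_tags(sessions)
```

Frequent tags then point the evaluator at the screens where the designer-to-user "conversation" is breaking down.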

Figure 3 – Iteration of part of the website after the heuristic evaluation: addition of an arrow to the drag-and-drop screen of the Arthron Server UI.

User interfaces evaluation experiences:
A brief comparison between usability and communicability tests

Figure 1 – A visual representation of communicability tags.

Figure 2 – User testing of the Kinect UI.

Programs

Two distinct applications being created at UFPB's LAVID were evaluated for this study.

Kinect Application
- 3D manipulation of objects for health professionals.
- Intended for use during surgery.
- Less time spent scrubbing in and out; more efficient surgeries.

GTAVCS Arthron Server
- Video collaboration between health professionals.
- Many different sites can watch the same surgery at any one time.
- Makes teaching medical procedures more efficient.
- This is the web client application; a desktop application has already been created and tested for internal use.
- This application is intended to broaden the use of video collaboration between health professionals.

Discussion / Conclusions

The comparisons made in this study are preliminary. There are not many studies that utilize both usability methods and communicability methods during user testing (see Rusu et al., 2012), and no direct comparison of the two methods was found in the literature. Future research will be conducted regarding the use of both usability methods and communicability methods.

Limitations of this study included language barriers between the HCI expert, the developers, and the users; some extra time was required to smooth over communication issues during the testing phases. Another limitation was that all participants were college students. Involving actual health professionals will be important for the next steps of each of these projects.

To summarize, usability and communicability evaluation methods contribute different ways of looking at user interface problems. Using both methods to test a single application is useful, and it increases both the reliability and the validity of a given study. Ongoing research on these projects will continue between UFPB and SUNY Oswego this coming fall.

Methods

Three different testing methods were utilized for both applications. The first was used by itself; the second and third were used together.

Heuristic Evaluation (Nielsen, 1993)
- An HCI expert evaluates the system.
- Fast, easy, and cheap to use.
- More problems can be found, initially, using this method.

Naturalistic Evaluation (Bhatnagar & Dubey, 2012)
- Users try to figure out how to use the program on their own.
- Can find problems that occur on a first look at the program.
- Tests whether the program is intuitive.

Coaching Method (Bhatnagar & Dubey, 2012)
- Used to control the duration of the experiment.
- Users were able to ask the experimenter questions if confusion occurred.
- Users were urged to try to figure out how to use the program on their own before asking questions.
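The coaching method's bookkeeping, a session time limit plus a log of the questions users ask, can be sketched as below; the class name and the ten-minute limit are illustrative assumptions, not details from the study:

```python
# Hypothetical sketch of coaching-method bookkeeping: enforce a session
# time limit and record the questions a user asks the experimenter.
# The class name and the 600-second limit are illustrative assumptions.

class CoachingSession:
    def __init__(self, limit_seconds=600):
        self.limit_seconds = limit_seconds   # used to control experiment time
        self.elapsed = 0
        self.questions = []                  # questions asked when confused

    def tick(self, seconds):
        """Advance the session clock by the given number of seconds."""
        self.elapsed += seconds

    def ask(self, question):
        """Record a user question (users first try on their own)."""
        self.questions.append(question)

    def time_is_up(self):
        return self.elapsed >= self.limit_seconds

session = CoachingSession(limit_seconds=600)
session.tick(300)
session.ask("How do I start the video stream?")
```

Logging the questions alongside the clock gives the experimenter both the time control the method calls for and a record of where users got stuck.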

Evaluation Experiences

Kinect Application

Initial Testing
- Heuristic evaluation with a Master's HCI student.
- Needed to change the rotation, zoom, and stop functions.
- Made them buttons instead of hand movements.
- These three functions are the core of the program.
- Tried to make the functions as intuitive as possible.

User Testing
- Four participants (two Brazilian students and two American students; three males, one female).
- All enjoyed using the program (see figure 2 for the user testing setup).
- Issue with the initial screen: user one wanted to use the mouse and keyboard.
- All four users stated that the program was not intuitive at first, but after a minute or so of use they understood what they were asked to do.
- Steep learning curve.
- Communicability tags discovered: “what happened?”, “why doesn't it?”, and “what now?”

GTAVCS Arthron Server

Initial Testing
- Heuristic evaluation with a Master's HCI student.
- The navigation aspect of the website needed improvement.
- The website was not telling a story, which made it hard for users to conceptually understand what they were supposed to do once on the website.
- There were also issues with the collaboration aspect of the website: the drag-and-drop interaction was not intuitive, and users would become confused once on the page.
- One suggestion was to add an arrow ' –> ' indicating what to do (see figure 3).
- There were also issues with identical screens being displayed even after users clicked on certain tabs.
- Another suggestion was to add descriptions of what each given webpage was intended to do.

User Testing
- Four users (two Brazilians, two Americans; two males and two females).
- Issues remained after user testing.
- One issue found during testing: the interface did not explicitly tell the user how to play, pause, and stop video.
- Descriptions still needed to be added to each subpage of the site.
- Communicability tag found: “what now?”
