



Towards Measuring E-Learning Usability through the User Interface

Abdelrahman Osman Elfaki 1, Yucong Duan* 2, Ruzi Bachok 1, Wencai Du 2, Md Gapar Md Johar 1, Sim Fong 1
1 Management and Science University, Malaysia

2 College of Information Science and Technology, Hainan University, China
{ruzi,abdelrahman_elfaki,gapar,lfsim}@msu.edu.my {[email protected], [email protected]}

Abstract
Education and learning is one of the world's largest markets for internet applications, and learning and training at work is probably its fastest growing sector. Until the usability of these e-learning applications is measured, users face unknown risks of application failure. This paper presents five metrics for measuring the usability of e-learning systems through the user interface: time of user feedback, average use of help methods, average use of undo, average time spent on a page, and average use of the e-learning system's search engine. We focus on measuring the use of the e-learning system rather than its content. These metrics cover the three parts of usability (ease of use, ease of learning, and task match) and satisfy the definitions of usability used in both human computer interaction and software engineering.
Key Words: E-Learning, Software Usability, User Interface, Software Engineering

1. Introduction
Good usability of complex learning objects (e.g. simulations, animations, and interactions) is vital for the acceptance of e-learning material [1]. According to ISO 9241 (Part 11) of the International Organization for Standardization, usability is the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use [2]. The three major usability identifiers are ease of learning, ease of use, and task match [3]. Ease of learning refers to the effort required to understand and operate an unfamiliar system; it depends on the knowledge the user already possesses and the ease with which that knowledge [8] can be mapped onto the unfamiliar system. Ease of use refers to the effort required to operate a system once it has been understood and mastered by the user. Task match refers to the extent to which the information and functions that a system provides match the needs of the user. In short, a system may be easy to learn and easy to use, but the question remains: does it do the job? That is, does the system provide the necessary functions, as well as the information the user needs [4]?

2. Usability in Human Computer Interaction and Software Engineering

2.1 Usability in Human Computer Interaction
In the Human Computer Interaction community, usability has been defined as "The user needs in terms of usability. Goals are expressed as a set of requirements for the behaviour of the software when it is executed" [5]. In this paper, we develop our metrics based on the student's behaviour: the metrics follow and record the student's actions while he or she works with the e-learning system, and these observations are used to measure usage. From this usage we draw recommendations for improving e-learning development. Thus, our proposed metrics satisfy the definition of usability in human computer interaction.

2.2 Usability in Software Engineering
In the software engineering community, scientists agree that metrics can be used to capture the meaning of usability. McCall's quality model is based on three uses of a software product, i.e. product revision, product operation, and product transition. For each of these uses, the model defines factors that describe the external view of the system as end users perceive it. In this paper, we introduce software metrics that satisfy this definition of usability in software engineering. Our proposed metrics work in the background, so the student does not notice them; there is no direct interaction between the student and the metrics.
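As an illustration of what "working in the background" could look like in practice, the following is a minimal sketch (not taken from the paper) of an unobtrusive interaction logger; the InteractionLogger class, the event names, and the example identifiers are assumptions introduced purely for this example.

```python
import time
from dataclasses import dataclass, field

@dataclass
class InteractionEvent:
    """A single logged student action; field names are illustrative assumptions."""
    student_id: str
    action: str          # e.g. "open_page", "use_help", "undo", "search", "answer_prompt"
    page: str
    timestamp: float

@dataclass
class InteractionLogger:
    """Records student actions in the background, without adding any visible UI."""
    events: list = field(default_factory=list)

    def log(self, student_id: str, action: str, page: str) -> None:
        # Called from existing UI handlers; the student never interacts with this directly.
        self.events.append(InteractionEvent(student_id, action, page, time.time()))

# Example usage: existing handlers simply add one call to the logger.
logger = InteractionLogger()
logger.log("s001", "open_page", "lesson_3")
logger.log("s001", "use_help", "lesson_3")
```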

3. Related Works
Four summative usability tests were conducted to collect the common metrics (task completion, error counts, task times, and satisfaction scores). To measure satisfaction, the authors created a questionnaire.



The questionnaire contained five-point semantic distance scales and included questions on task experience, ease of task, time on task, and overall task satisfaction. In [6], Adeoye and Wentling studied the relationship between national culture and the usability of an e-learning system using Hofstede's cultural dimensions and Nielsen's usability attributes. The study revealed that users from high uncertainty avoidance cultures found the system more frustrating to use, and that individuals from cultures with low power distance (i.e., people less accepting of uneven power distribution) rated the system's usability higher than individuals from high power distance cultures. The work in [6] presented the results obtained from a first phase of observation and analysis of people's interactions with e-learning applications, with the aim of providing a methodology for evaluating such applications. It uses four dimensions (Presentation, Hypermediality, Application Proactivity, and User Activity) and analyzes how each dimension is specialized in the context of e-learning platforms. It also defines three categories of abstract tasks (content insertion and content access, scaffolding, and learning window).

4. The Proposed Metrics
The design of the user interface components of an e-learning system is a vital process. Adding measurable factors to any product helps ensure its quality; following this principle, adding usability metrics to the user interface of a software product helps ensure the quality of that interface. Our suggestion is that measuring usability through the user interface should be considered while the user interface is being designed, and that the usability measurement metrics should be one of the main components of the user interface. The proposed usability metrics are suitable for any user interface because they are hidden from the end users. The suggested metrics follow the model that "describes the external view of the system, as it is perceived by end users. Each factor is decomposed into criteria that describe the internal view of the product as perceived by software developer": they describe the external view by measuring the interaction between the end user and the software product, and the internal view by measuring the degree of complexity of the system. Since the design of user interface components is at the heart of HCI, we suggest that the measurement of usability through the user interface should be considered during user interface design. The integration of usability metrics into the user interface can be treated as an important design decision, alongside selecting the interface colours, defining font types, determining picture positions, and so on. A software process has been defined as a number of activities leading to a high-quality product. In this paper, we suggest a dedicated software process that aims to define usability metrics; this process should be visible at every phase of the software development life cycle.

4.1. How the proposed metrics follow the seven steps of designing software metrics
This paper suggests five metrics for measuring usability through the user interface. The proposed metrics integrate concepts from both Human Computer Interaction (HCI) and Software Engineering and are based on the seven steps required for a well-formed metric [7]. In the following we describe how our proposed metrics follow these steps; a minimal sketch of one metric defined along these lines is given after the list.

Step 1 - Objective Statement: Software metrics can perform one of four functions. Metrics can help us understand more about our software products, processes, and services; they can be used to evaluate our products, processes, and services against established standards and goals; they can provide the information we need to control the resources and processes used to produce our software; and they can be used to predict attributes of software entities in the future.

Step 2 - Clear Definitions: The second step in designing a metric is to agree on a standard definition for the entities and attributes being measured.

Step 3 - Define the Model: This step derives a model for the metric. In simple terms, the model defines how the metric is calculated.

Step 4 - Establish Counting Criteria: This step breaks the model down into its lowest-level metric primitives and defines the counting criteria used to measure each primitive.

Step 5 - Decide What's "Good": Once you have decided what to measure and how to measure it, you have to decide what to do with the results, i.e., what range of values counts as "good".

Step 6 - Metrics Reporting: This step decides how to report the metric, including the report format, the data extraction and reporting cycle, reporting mechanisms, distribution, and availability.

Step 7 - Additional Qualifiers: The final step is determining the additional metric qualifiers. A good metric is a generic metric, meaning it remains valid for an entire hierarchy of additional extraction qualifiers.
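To make Steps 3 to 6 concrete, here is a minimal sketch (not from the original paper) of how the first proposed metric, time of user feedback, could be modelled, counted, judged against a threshold, and reported. The log format, the GOOD_FEEDBACK_TIME threshold, and all function names are assumptions introduced for illustration.

```python
from statistics import mean

# Assumed log format: each record is (student_id, prompt_shown_ts, response_ts) in seconds.
FeedbackRecord = tuple[str, float, float]

GOOD_FEEDBACK_TIME = 10.0  # Step 5: an assumed "good" threshold, in seconds.

def feedback_times(records: list[FeedbackRecord]) -> list[float]:
    """Step 4 (counting criteria): one primitive per answered prompt,
    measured as response time minus the time the prompt was shown."""
    return [response - shown for _, shown, response in records]

def average_feedback_time(records: list[FeedbackRecord]) -> float:
    """Step 3 (model): the metric is the mean of the individual feedback times."""
    times = feedback_times(records)
    return mean(times) if times else 0.0

def report(records: list[FeedbackRecord]) -> str:
    """Step 6 (reporting): a one-line summary per reporting cycle."""
    avg = average_feedback_time(records)
    verdict = "good" if avg <= GOOD_FEEDBACK_TIME else "needs review"
    return f"average feedback time = {avg:.1f}s ({verdict})"

# Example usage with fabricated illustrative data.
log = [("s001", 100.0, 104.0), ("s002", 200.0, 215.0)]
print(report(log))
```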



4.2. The usability measurement process
The first metric, time of user feedback, is the time users spend answering interactive messages, or more generally the time between an action and the corresponding reaction. It measures the degree of interactivity of the system: a suitable feedback time means the messages are normal and expected by users, and vice versa. High interactivity reflects two usability factors, ease of use and ease of learning. The second metric, average use of help methods, reflects the degree of user awareness. A high average use of help means users can cover the basic features and ask for more, which emphasizes the efficiency of the help features; this covers the ease-of-use factor, and a high average use of help also indicates that the help methods themselves are easy to use. The third metric, average use of undo, reflects two usability factors, ease of use and ease of learning. In practice it is too difficult to differentiate between these factors, so one metric measures them together. The fourth metric, average time spent on a page, measures the third usability factor, task match. It is calculated from the moment the user opens a web page until the user closes it, and the measured time is compared with a predefined time that represents a suitable duration for finishing the work on that page. This metric should be read more than once; readings that are close together are kept and irregular readings are eliminated. The fifth metric is the average use of the search engine. The frequency with which the search tool of an e-learning system is used is an important indication of the system's usability level: systems whose search engines are used least are considered easy to use and well indexed and organized for normal users.
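The following is a minimal, illustrative sketch (not part of the original paper) of how the five metrics could be computed from a background interaction log such as the one sketched in Section 2; the event names, the outlier rule for page times, and all identifiers are assumptions introduced for this example.

```python
from statistics import mean, median

def average(values):
    """Mean of a list, 0.0 when there are no observations."""
    return mean(values) if values else 0.0

def usability_metrics(events, page_times):
    """Compute the five proposed metrics from logged observations.

    events: list of (action, value) pairs, where action is one of
            "feedback_time", "help", "undo", "search"; value is the measured
            time in seconds for feedback events, or 1 for simple counts.
    page_times: list of page-view durations in seconds.
    """
    feedback_times = [v for a, v in events if a == "feedback_time"]
    helps = [v for a, v in events if a == "help"]
    undos = [v for a, v in events if a == "undo"]
    searches = [v for a, v in events if a == "search"]
    sessions = max(len(page_times), 1)  # assumed normalisation unit

    # Drop irregular page-time readings: keep values within 50% of the median
    # (an assumed rule standing in for "keep the close numbers").
    med = median(page_times) if page_times else 0.0
    regular = [t for t in page_times if med and abs(t - med) <= 0.5 * med]

    return {
        "time_of_user_feedback": average(feedback_times),
        "avg_use_of_help": len(helps) / sessions,
        "avg_use_of_undo": len(undos) / sessions,
        "avg_time_per_page": average(regular),
        "avg_use_of_search": len(searches) / sessions,
    }

# Example usage with fabricated data; 900 s is treated as an irregular reading.
events = [("feedback_time", 4.0), ("help", 1), ("undo", 1), ("undo", 1), ("search", 1)]
page_times = [120.0, 130.0, 900.0]
print(usability_metrics(events, page_times))
```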

5. Conclusion

The proposed metrics can help us understand and evaluate the degree of end-user acceptance, which is considered one of the success criteria of a software product in an organization. The information provided by the proposed metrics can be used to control the implementation process. Understanding

user behaviour, based on his or her present usage of the system, provides an important lead for predicting future software usage. In a real environment, the three factors of usability are interdependent. Hence, it is very difficult to design software metrics that measure each factor in isolation, but fortunately the factors serve one purpose: usability. Taken as a whole, the proposed metrics can measure the usability of interactive software products. They satisfy the goals and concepts of both human computer interaction and software engineering, and their main contribution is that they are independent of the end user.

ACKNOWLEDGMENT
This paper was supported in part by China National Science Foundation grant 61162010 and by Hainan University Research Program grant KYQD1242. * indicates the corresponding author.

References
[1] Tomas Berns, Usability and User Centred Design: A Necessity for Efficient E-Learning, International Journal of the Computer, the Internet and Management, 1(12), 2004.
[2] Xavier Ferré and Natalia Juristo, Usability Basics for Software Developers, IEEE Software, 2001.
[3] Carina Gonzalez, Student Usability in Educational Software and Games: Improving Experiences, IGI Global, USA, 2012.
[4] Kent L. Norman, Cyberpsychology: An Introduction to Human-Computer Interaction, Cambridge University Press, 1st edition, August 18, 2008.
[5] Alan Dix, Janet Finlay, Gregory D. Abowd, and Russell Beale, Human Computer Interaction, Prentice Hall, 3rd edition, 30 Sep 2003.
[6] M. F. Costabile, M. De Marsico, R. Lanzilotti, V. L. Plantamura, and T. Roselli, On the Usability Evaluation of E-Learning Applications, Proceedings of the 38th Hawaii International Conference on System Sciences, 2005.
[7] Linda L. Westfall, Seven Steps to Designing a Software Metric, Software Measurement Services, http://www.itmpi.org/assets/base/images/itmpi/SevenSteps_LindaWestfall.pdf, 2001.
[8] Yucong Duan, Christophe Cruz, Abdelrahman Osman Elfaki, Yang Bai, and Wencai Du, Modeling Value Evaluation of Semantics Aided Secondary Language Acquisition as Model Driven Knowledge Management, Computer and Information Science, Studies in Computational Intelligence, vol. 493, pp. 267-278, Springer International Publishing, 2013.
