Usability and Evaluation of E-Learning Applications


Abstract— E-learning is a technology-enhanced, web-based form of learning used to transfer knowledge and impart learning across geographical locations. The aim of e-learning applications is to deliver knowledge, share information and help learners in their learning activities in an effective and efficient way by involving advanced electronic technologies. Usability engineering provides structured methods for achieving usability in user interface design during product development, and these methods are equally relevant to e-learning applications. In this paper, we present the results obtained from a first phase of observation and analysis of how people interact with e-learning applications. The aim is to provide a methodology for evaluating such applications.

Keywords— Usability evaluation, e-learning, HCI (Human-Computer Interaction)

Introduction

Electronic learning (or e-learning) is a kind of technology-supported education/learning (TSL) in which the medium of instruction is computer technology, particularly digital technologies. E-learning has been defined by Nichols as "pedagogy empowered by digital technology" (Nichols, 2008, p. 2). In the case of e-learning design, the main task for the user is to learn, which is rather implicit and abstract in nature. As Notess (2001) argues, "evaluating e-learning may move usability practitioners outside their comfort zone". Squires (1999) highlights the need for integration of usability and learning and points out the lack of collaboration between workers in human-computer interaction (HCI) and educational computing. In fact, the usability of e-learning designs is directly related to their pedagogical value. An e-learning application may be usable but not in the pedagogical sense, and vice versa (Albion, 1999; Quinn, 1996; Squires & Preece, 1999). Usability is the primary parameter for evaluating e-learning technologies and systems. Major attributes of usability are efficiency, effectiveness and satisfaction. Usability stands for quality and for putting users and their real needs at the centre. Therefore, investigation of usability and its integration into the learning process is worthwhile (Zaharias, 2004). This paper focuses on usability evaluation techniques for e-learning applications.

The purpose of educational software is to support learning. A major challenge for designers and Human-Computer Interaction (HCI) researchers is to develop software tools that engage novice learners and support their learning even at a distance. Clearly, educational software should take into account the different ways students learn and ensure that students' interactions are as natural and intuitive as possible.

A consolidated evaluation methodology for e-learning applications does not yet exist, or at least it is not well documented and widely accepted. In Ref. [1], Dringus proposes to use heuristics without further adaptation to the e-learning context. Similarly, in Ref. [3], Parlangeli et al. evaluate e-learning applications by using usability evaluation methods (Nielsen's heuristics [2], the User Evaluation of Interactive Computer Systems questionnaire [4]) that were developed to address the needs and challenges of users of interactive systems in general, that is, methods not specific to e-learning. Squires and Preece propose an approach adapted to e-learning, but there is a clear need for further elaboration and empirical validation [5]. In conclusion, the design of e-learning applications deserves special attention, and designers need appropriate guidelines as well as effective evaluation methodologies to implement usable interfaces [6].

Usability Evaluation

Usability engineering is the discipline that provides structured methods for achieving usability in user interface design during product development. Usability evaluation is part of this process. While theoretically any software product could be evaluated for usability, the evaluation is unlikely to produce good results unless a usability engineering process has been followed. Usability engineering has three basic phases: requirements analysis, design/testing/development, and installation. Usability goals are established during requirements analysis. Iterative testing is done during the design/testing/development phases and the results are compared to the usability goals. User feedback should also be obtained after installation as a check on the usability and functionality of the product.
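
To make the comparison between usability goals and iterative test results concrete, the following is a minimal Python sketch; the goal names, thresholds and result fields are hypothetical illustrations rather than values prescribed by any usability engineering standard.

# Hypothetical usability goals fixed during requirements analysis.
USABILITY_GOALS = {
    "task_success_rate": 0.90,    # at least 90% of tasks completed
    "mean_time_on_task_s": 120,   # at most two minutes per task
    "mean_satisfaction": 4.0,     # at least 4 on a 1-5 rating scale
}

def unmet_goals(test_results: dict) -> list:
    """Return the names of goals that the current design iteration fails to meet."""
    failures = []
    if test_results["task_success_rate"] < USABILITY_GOALS["task_success_rate"]:
        failures.append("task_success_rate")
    if test_results["mean_time_on_task_s"] > USABILITY_GOALS["mean_time_on_task_s"]:
        failures.append("mean_time_on_task_s")
    if test_results["mean_satisfaction"] < USABILITY_GOALS["mean_satisfaction"]:
        failures.append("mean_satisfaction")
    return failures

# Results measured in one iteration of the design/testing/development phase.
print(unmet_goals({"task_success_rate": 0.82,
                   "mean_time_on_task_s": 95,
                   "mean_satisfaction": 4.2}))   # -> ['task_success_rate']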

The methods differ depending on the source used for the evaluation. This source can be users, usability experts, or models. Figure 1 shows a timeline of usability evaluations over the last 30 years. Users were the first source of usability feedback, but models have also been used for over 20 years. Expert feedback was developed in heuristic reviews and cognitive walkthroughs and has been used since the early 1990s. All three methods rely on usability engineers or usability professionals to design, conduct, analyze, and report on the evaluations.

Fig. 1. 30 years of highlights in the development of desktop computing user evaluations, 1971-2001.

User-Centered Evaluations

User-centered evaluations are accomplished by identifying representative users and representative tasks, and by developing a procedure for capturing the problems that users have in trying to apply a particular software product to accomplishing these tasks. During the design/testing/development cycle of software development, two types of user evaluations are carried out. Formative evaluations are used to obtain information that feeds into design. Summative evaluations are usability evaluations that document the effectiveness, efficiency, and user satisfaction of a product at the end of the development cycle. These two types of evaluation differ in the purpose of the evaluation, the methods used, the formality of the evaluation, the robustness of the software being evaluated, the measures collected, and the number of participants used. In both types of evaluation, representative users are recruited to participate, some method of collecting information is used, and some way of disseminating the results of the evaluation to members of the software development team is needed.
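
As a minimal sketch of how the summative measures might be computed, assuming hypothetical session records that store a completion flag, a completion time in seconds and a satisfaction rating on a 1-5 scale (field names are assumptions for illustration):

# Hypothetical records from summative test sessions, one entry per user/task attempt.
sessions = [
    {"completed": True,  "time_s": 95,  "satisfaction": 4},
    {"completed": True,  "time_s": 140, "satisfaction": 5},
    {"completed": False, "time_s": 210, "satisfaction": 2},
]

completed = [s for s in sessions if s["completed"]]
effectiveness = len(completed) / len(sessions)                            # task completion rate
efficiency = sum(s["time_s"] for s in completed) / len(completed)         # mean time on completed tasks
satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)   # mean rating

print(f"effectiveness={effectiveness:.0%}, efficiency={efficiency:.0f} s, "
      f"satisfaction={satisfaction:.1f}/5")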

Expert-based Evaluations

Expert evaluations of usability are similar to design reviews of software projects and code walkthroughs. Inspection methods include heuristic evaluation, guideline reviews, pluralistic walkthroughs, consistency inspections, standards inspections, cognitive walkthroughs, formal usability inspections, and feature inspections.

Model-based Evaluations

A model of the human information processor has been developed based on data derived from psychology research on the human systems of perception, cognition, and memory. The model incorporates the capacities of short-term and long-term memory, along with the capabilities of human visual and auditory processing. This allows human-computer interaction researchers to evaluate user interface designs based on predictions of performance from the model.
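
One well-known model of this kind is the keystroke-level model (KLM) of Card, Moran and Newell, which predicts expert execution time by summing standard operator times. The sketch below uses the commonly cited operator values; the specific interaction sequence is a hypothetical illustration, not an example taken from this paper's sources.

# Commonly cited KLM operator times in seconds (Card, Moran & Newell);
# treated here as assumed defaults.
KLM_TIMES = {
    "K": 0.28,  # press a key (average non-secretarial typist)
    "P": 1.10,  # point at a target with the mouse
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # move hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_estimate(operators: str) -> float:
    """Predict execution time for a sequence of KLM operators, e.g. 'MPBHMKKKK'."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical micro-task: think, point at an answer field, click, move hands to
# the keyboard, think again, then type a four-character answer.
print(f"{klm_estimate('MPBHMKKKK'):.2f} s")   # -> 5.42 s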

Usability In E-Learning Applications

Usability plays an imperative role in the success of e-learning applications. If an e-learning system is not usable, the learner is forced to spend much more time trying to understand the software's functionality rather than understanding the learning content (Wong et al., 2003). Moreover, if the system interface is rigid, slow and unpleasant, people feel frustrated and are likely to walk away and forget about using it. Usability of pedagogical systems is a key feature in the pedagogy domain. According to Granic and Glavinic (2002), the lack of an appropriate, usable and user-centered interface design in different computerized educational systems decreases the interface's effectiveness and efficiency. This underlines the importance of the main goal of our research study, which is to evaluate the usability of the interface of a widely used educational system.

Increased maturity in learning approaches has increased the importance of, and the challenges for, usability design in the domain of learning. In an e-learning environment, traditional task- and work-related usability seems to have limited value, while at the same time the need to approach the learner experience in a more holistic way becomes stronger (Zaharias, 2004). This challenge requires a focus on the affective aspects of learning (O'Regan, 2003; Picard et al., 2001). To evaluate the usability of a system and to determine usability problems, it is important to select appropriate usability evaluation methods (Fitzpatrick, 1999; Ssemugabi, 2006) by considering efficiency, time, cost-effectiveness, ease of application, and the expertise of the evaluators (Gray & Salzman, 1998; Parlangeli et al., 1999).

One of the goals of any learning system is to avoid distraction, so that all the content stays fresh in learners' minds as they accommodate new and unfamiliar concepts. In the specific case of e-learning, the challenge is to create an interactive system that does not confuse learners. It is often noticed that an e-learning application is a mere electronic transposition of traditional material, presented through rigid interaction schemes and awkward interfaces. When learners criticize web-based training or express a preference for classroom-based instruction, it is often not the training, but rather the confusing menus, unclear buttons, or illogical links that scare them off (Ardito et al., 2005).

In the view of Melis et al. (2003), designing an e-learning system that is more usable basically involves two aspects: technical usability and pedagogical usability. Technical usability involves methods for ensuring trouble-free interaction with the system, while pedagogical usability aims at supporting the learning process. Both aspects of usability are intertwined and tap the user's cognitive resources. The main goal should be to minimize the cognitive load resulting from interaction with the system in order to provide a resourceful learning environment (Melis et al., 2003).

Systematic Usability Evaluations

Usability inspection refers to a set of methods through which evaluators examine usability-related aspects of an application and provide judgments based on their human factors expertise. With respect to other usability evaluation methods, such as user-based evaluation, usability inspection methods are attractive because they are cost-effective and do not require sophisticated laboratory equipment to record users' interactions, expensive field experiments, or heavy-to-process results of widespread interviews. Usability inspection methods "save users" [12], even though users remain the most valuable and authoritative source of usability problem reports. However, inspection methods are strongly dependent upon the inspectors' skills and experience, and it may therefore happen that different inspectors produce different outcomes. The SUE methodology aims at defining a general framework of usability evaluation [11]. The main idea behind SUE is that a reliable evaluation can be achieved by systematically combining inspection with user-based evaluation. Several studies have outlined how these two methods are complementary [2] and can be effectively coupled to obtain a reliable evaluation process. In line with those studies, SUE suggests coupling inspection activities with user testing and precisely indicates how to combine them to make the evaluation more reliable and still cost-effective. The inspection has a central role: each evaluation process should start with expert evaluators inspecting the application. Then, user testing might be performed in more critical cases, for which the evaluator might feel the need for a more objective evaluation that can be obtained through user involvement. In this way, user testing is better focused and the user resources are better optimized, thus making the overall evaluation less expensive but still effective [11].

Most of the existing approaches to usability evaluation especially address presentation aspects of the graphical interfaces that are common to all interactive systems, for example layout design, choice of icons and interaction style, mechanisms of error handling, etc. [13, 2]. SUE proposes, instead, that an application must be analyzed from different points of view along specific dimensions. Interaction and presentation features refer to the most general point of view, common to all interactive applications. More specific dimensions address the appropriateness of the design with respect to the peculiar nature and purposes of the application. As previously mentioned, the SUE methodology requires first identifying a number of analysis dimensions. For each dimension, general usability principles are decomposed into finer-grained criteria [14]. By considering user studies and the experience of usability experts, a number of specific usability attributes and guidelines are identified and associated with these criteria. Then, a set of ATs addressing these guidelines is identified. ATs precisely describe which objects of the application to look for and which actions the evaluators must perform in order to analyze such objects and detect possible violations of the identified heuristics. ATs are formulated by means of a template providing a consistent format that includes the following items:

– AT classification code and title: univocally identify the AT and its purpose
– Focus of action: lists the application objects to be evaluated
– Intent: clarifies the specific goal of the AT
– Activity description: describes in detail the activities to be performed during the AT application
– Output: describes the output of the fragment of the inspection the AT refers to.
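
As an illustration only, the template can be represented as a simple record type; the example Abstract Task below is hypothetical and simply mirrors the items listed above.

from dataclasses import dataclass
from typing import List

@dataclass
class AbstractTask:
    """One evaluation pattern (Abstract Task) expressed in the SUE template format."""
    code: str                    # AT classification code
    title: str                   # AT title
    focus_of_action: List[str]   # application objects to be evaluated
    intent: str                  # specific goal of the AT
    activity_description: str    # activities to perform when applying the AT
    output: str                  # output of the inspection fragment the AT refers to

# Hypothetical AT for inspecting the course overview of an e-learning application.
course_overview_at = AbstractTask(
    code="EL1",
    title="Course structure visibility",
    focus_of_action=["course index", "lesson list", "progress indicator"],
    intent="Check that learners can always see where they are within the course.",
    activity_description="Navigate to each lesson and verify that the current "
                         "position and the overall course structure remain visible.",
    output="List of pages on which the course structure or progress is not shown.",
)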

During the inspection, evaluators analyze the application by using the defined ATs. In this way, they have a guide for identifying the elements to focus on, analyzing them and producing a report in which the discovered problems are described.

According to SUE, the activities in the evaluation process, independently of the analysis dimension being considered, are organized into a preparatory phase and an execution phase. The preparatory phase is performed only once for each analysis dimension; its purpose is to create a conceptual framework that will be used to carry out evaluations. It consists of the identification of the usability attributes to be considered for the given dimension and the definition of a library of ATs. The preparatory phase is critical, since it requires the accurate selection or definition of the tools to be used during each execution phase, when the actual evaluation is performed. The execution phase is performed every time a specific application must be evaluated. It consists of inspection, performed by expert evaluators, and user testing, involving real users. Inspection is always performed, while user testing may occur only in critical cases. At the end of each evaluation session, evaluators must provide designers and developers with organized evaluation feedback. An evaluation report must describe the detected problems. The evaluation results must clearly suggest design revisions, and the new design can subsequently be validated iteratively through further evaluation sessions. While the user testing proposed by SUE is traditional, and is conducted according to what is suggested in the literature, the SUE inspection is new with respect to classical inspection methods; its main novelty is the use of ATs for driving the inspectors' activities.
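
As a sketch of how the execution-phase output might be organized, assuming a hypothetical four-point severity scale and a hypothetical decision rule (the methodology itself only states that user testing might be performed in the more critical cases):

from dataclasses import dataclass
from typing import List

@dataclass
class Finding:
    at_code: str       # Abstract Task that produced the finding
    description: str
    severity: int      # assumed scale: 1 (cosmetic) to 4 (catastrophic)

def needs_user_testing(findings: List[Finding], critical_threshold: int = 3) -> bool:
    """Flag the application for user testing when inspection reveals critical problems.

    The threshold-based rule is an assumption used for illustration only.
    """
    return any(f.severity >= critical_threshold for f in findings)

report = [
    Finding("EL1", "Progress indicator missing on quiz pages", severity=3),
    Finding("EL2", "Inconsistent icon for 'next lesson'", severity=2),
]
print(needs_user_testing(report))   # -> True: at least one critical finding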

Heuristic evaluation

Heuristic evaluation (HE) is a usability inspection technique developed by the champion of usability, Jakob Nielsen. By the definition of Dix, Finlay, Abowd and Beale, HE is classified as 'evaluation through expert analysis', which is distinguished from the category 'evaluation through user participation'; that is, it is inherent to HE that it is not conducted by actual users. During the process, a few expert evaluators conduct pre-defined, representative tasks on the application and, guided by a given set of usability principles known as heuristics (e.g. the criteria in Section 4), independently determine whether the interaction conforms to these principles. HE is a classic and widely used UEM. Researchers have applied it in various contexts and variants, a number of them in evaluating educational websites. Nielsen describes it as 'discount usability engineering' in that it is fast and inexpensive, because it is conducted by evaluators who are acknowledged domain experts. In a well-balanced set of evaluators, some are experts in usability and others are experts in the content of the system being studied, i.e. subject-matter experts. Evaluators who are experts both in the domain area and in HCI are termed 'double experts'. The heuristics or 'rules of thumb' guide their critique of the design under evaluation. The result of HE is a list of usability problems in the system, according to the heuristics used or other issues the evaluators identify. Factors involved in selecting and inviting a balanced set of experts are the number to use and their respective backgrounds. It is seldom possible for a single evaluator to identify all the usability problems. However, different evaluators or experts find different problems, which may not be mutually exclusive.

Thus, when more experts are involved in an evaluation, more problems are discovered. Nielsen's cost-benefit analysis demonstrated optimal value with three to five evaluators, identifying 65-75% of the usability problems. Despite this, the debate continues. Eleven experts were used in a study to assess the usability of a university web portal. Law and Hvannberg reject the 'magic five' and used eleven participants to define 80% of the detectable usability problems. In line with Nielsen, Karoulis and Pombortsis determined that two to three evaluators who are 'double experts' will point out the same number of usability problems as three to five 'single experts'.

According to Nielsen, the evaluation process comprises: identification of heuristics, selection of evaluators, briefing of evaluators, the actual heuristic evaluation and, finally, aggregation of the problems. Sometimes severity rating, i.e. assigning relative severities to individual problems, can be performed to determine each problem's level of seriousness, estimated on a 3- or 5-point Likert scale. The experts can do the ratings either during their HEs or later, after all the problems have been aggregated. The latter approach is advantageous since evaluators have the opportunity to consider and rate problems they did not identify themselves. This is the approach adopted in this study.

HE is the most widely used UEM for computer system interfaces, since it is inexpensive, relatively easy and fast to perform, and can result in major improvements. It can be used early in design, but also on operational systems, as in this study. Despite these advantages, questions arise regarding its effectiveness in identifying user problems and the nature of the problems it identifies. This study provides some answers to this debate. There are advantages to using HE in tandem with user-based methods. Several recent studies advocate combining it with user testing (UT) and user surveys. In an interview with Preece and her colleagues, Jakob Nielsen terms this combination a 'sandwich model' when used in a layered style. Tan, Liu and Bishu combined HE of e-commerce sites with UT (observation and interviews) and found the two methods complementary in addressing different kinds of problems. HE identified more problems than UT. Requirements for their nine expert evaluators were postgraduate courses in HCI and human factors of web design, and participation in at least one HE. In Thyvalikakath, Monaco, Thambuganipalle and Schleyer's comparative study of HE and UT on four computer-based dental patient record systems (CPRs), HE predicted on average 50% of the problems found empirically by UT. The UT involved think-aloud by 20 novice users, coded in detail by the researchers. The three expert evaluators were dentists: two were postgraduate Informatics students who had completed HCI courses; the third was an Informatics faculty member and an expert in HE. They were familiar with CPRs in general, but had not used them routinely. This would qualify them as single experts. Hvannberg, Law and Lárusdóttir combined the HE of an educational web portal with a set of user-based methods (observation, recording, questionnaires). As experts, they used 19 final-year BSc Computer Science students and one BSc (CS) graduate. The researchers acknowledge that these evaluators had a sound knowledge of evaluation but little practice. HE identified more problems than UT, and 38% of the experts' problems were confirmed by the user study.
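
The trade-off between the number of evaluators and the share of problems they uncover is often modelled with the problem-discovery formula reported by Nielsen and Landauer, Found(i) = N(1 - (1 - L)^i), where L is the probability that a single evaluator finds a given problem. The sketch below assumes L = 0.31, a commonly quoted average; actual values vary considerably from study to study.

def proportion_found(evaluators: int, discovery_rate: float = 0.31) -> float:
    """Expected share of usability problems found by independent evaluators.

    Implements 1 - (1 - L)^i from Nielsen and Landauer's problem-discovery model;
    the default L = 0.31 is an assumed typical value, not a measured one.
    """
    return 1 - (1 - discovery_rate) ** evaluators

for n in (1, 3, 5, 11):
    print(f"{n:2d} evaluators -> about {proportion_found(n):.0%} of the problems")
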
Also in an e-learning context, Ardito and his colleagues posit that a reliable evaluation can be achieved by a systematic combination of inspection and user-based methods. In their study of Computer Science students learning HCI via the Internet, the user-based methods employed were think-aloud and interviews.

Conclusion

In this paper, we have presented a preliminary set of usability criteria that capture the features of e-learning systems, drawing on SUE and HE. We have also proposed to adapt to the e-learning domain both the SUE inspection technique, which uses evaluation patterns (Abstract Tasks) to drive the inspectors' activities during the evaluation, and the heuristic inspection technique, which is distinguished from the category 'evaluation through user participation'. It is worth mentioning that, as human factors experts, we can only evaluate "syntactic" aspects of e-learning applications. In order to go deeply into aspects concerning the pedagogical approach and content semantics, experts in education science and domain experts have to be involved. The evaluation from a pedagogical point of view concerns, for instance, the coherence and the congruence of the learning path design.

Moreover, we are planning further user studies to validate our approaches to the usability evaluation of e-learning applications. These studies should help to refine the preliminary set of guidelines and possibly to define others in order to cover all the peculiar aspects of e-learning applications.


