


Government Information Quarterly xxx (2014) xxx–xxx

GOVINF-01040; No. of pages: 12; 4C:

Contents lists available at ScienceDirect

Government Information Quarterly

journal homepage: www.elsevier.com/locate/govinf

Usability and credibility of e-government websites

Zhao Huang a,⁎, Morad Benyoucef b

a School of Computer Science, Shaanxi Normal University, Xi'an, China
b Telfer School of Management, University of Ottawa, 55 Laurier Avenue East, Ottawa, ON K1N 6N5, Canada

⁎ Corresponding author at: School of Computer Science, Shaanxi Normal University, 199 South Chang'an Road, Xi'an 710062, China.

E-mail addresses: [email protected] (Z. Huang), [email protected] (M. Benyoucef).

http://dx.doi.org/10.1016/j.giq.2014.07.002
0740-624X/© 2014 Elsevier Inc. All rights reserved.

Please cite this article as: Huang, Z., & Benyoucef, M., Usability and credibility of e-government websites, Government Information Quarterly (2014), http://dx.doi.org/10.1016/j.giq.2014.07.002

Abstract

Article info

Available online xxxx

Keywords: Usability; Credibility; User performance; E-government

Although e-government has seen steady growth, it can still benefit from better user engagement, and usability and credibility are believed to be among the factors that influence such engagement. This paper presents an empirical study that evaluates the usability and credibility of current e-government websites and looks at user performance while using these websites. The study is based on a heuristic evaluation which aims to capture users' perception of usability and credibility. Our results show a close correlation between usability and credibility, as e-government websites with high usability were perceived as having higher credibility, and vice versa. A number of usability and credibility weaknesses were identified on our sample of e-government websites.

© 2014 Elsevier Inc. All rights reserved.

1. Introduction

The power of the internet and web technologies has been clearly demonstrated in business, as exemplified by the enormous success of electronic commerce (e-commerce). Local, regional, and national governments around the globe have been tasked with leveraging such power to develop electronic government (e-government). E-government involves the use of information and communication technologies, particularly web-based applications, to provide faster, easier and more efficient access to and delivery of information/services to the public (Lee, 2010). Most importantly, e-government is said to reform the back-office (the "back-office" comprises systems that run or support e-government processes, while the "front-office" comprises the various interfaces, e.g., websites or mobile applications, of e-government systems with the public), making it work more efficiently in terms of information exchange and knowledge sharing between various units, departments and organizations (Homburg & Bekkers, 2002). Today, thousands of e-government systems are accessible via the internet, offering a variety of online government information and services (Shareef, Kumar, Kumar, & Dwivedi, 2011). However, generating greater user engagement that translates into information access, service utilization, and participation in government decision making still remains a challenge. Usability and credibility are among the reasons for that challenge because they affect citizens' usage and acceptance of e-government, and influence their day-to-day interaction with e-government websites (Clemmensen & Katre, 2012). As indicated by Wathen and Burkell (2002), failure to develop usable and credible websites may change users' attitudes, reduce their satisfaction, and raise their concerns about the use of information and services offered on those websites.

Accordingly, usability is a critical element in the success of e-government (Youngblood & Mackiewicz, 2012). As suggested by Scott (2005), developers of e-government solutions must regularly monitor and enhance the usability of their websites to attract and satisfy users. As such, many studies have focused on defining e-government websites' usability constructs (e.g., Barnes & Vidgen, 2004; Gouscos et al., 2007). Some studies measured the multifaceted dimensions of e-government website usability (e.g., Garcia, Maciel, & Pinto, 2005), while others assessed its influence on users' attitude and behaviours (e.g., Teo, Srivastava, & Jiang, 2008). However, even with the insights gathered from those studies, current e-government websites are still plagued by a number of usability problems, including hard-to-understand content, inconsistent formats, poor navigation capabilities, disorientation, difficulty in using help functions, and lack of reliability. Not surprisingly, such usability problems may negatively affect e-government credibility. For instance, Huang, Brooks, and Chen (2009) found that usability issues such as broken links, overloaded information presentation, and inconsistent colours play an important role in the extent to which an e-government website is perceived as credible. In this sense, it is important to consider credibility issues when studying e-government usability in order to be able to design e-government systems that support the user in achieving the desired service outcome, and hence generate greater user participation.

This exploratory study aims to evaluate the usability and credibility of current e-government websites and assess the possible link between them. Our aim is for our findings to be leveraged by designers to improve the quality of e-government websites, hopefully leading to a more usable and credible e-government. To conduct the study, we selected three London-area municipal websites, and recruited 36 people to participate in their evaluation. Among the various evaluation methods available to us, we opted for a heuristic evaluation. We captured users' perception of e-government websites' usability and credibility, and measured the level of interaction of users with those websites. We then investigated the effect of users' perception of usability and credibility on their performance.

This paper is structured as follows. In Section 2 we provide some background on e-government in the UK, discuss the usability and credibility of e-government websites, and introduce and justify our evaluation method. In Section 3 we detail our empirical approach. Data analysis is discussed in Section 4. Finally, conclusions, limitations, and future work are covered in Section 5.

2. Background

2.1. E-government in the UK

E-government can be defined as the use of the internet, especially web technology, as a tool to deliver government information and services to users (Muir & Oppenheim, 2002). In the UK, e-government is not only a matter of choice, but also a necessary strategy. Indeed, governments at all levels (central, regional and local) implement information and communication technologies to transform the structure, operation and culture of traditional government (Beynon-Davies & Williams, 2003). The central government employs a number of activities to promote e-government development. For example, the national strategic project "Modernising Government" was aimed at making 100% of government services available online by 2005 (Cabinet Office, 2010), and the "Implementing Electronic Government" program required e-government applications from the national to the local level (Beaumont, Longley, & Maguire, 2005). Local authorities were tasked with developing their own e-government systems. As indicated by Kuk (2003), local authorities are the key drivers for the development and implementation of electronic service delivery. There are nearly 500 local authorities in the UK, all of which were required to implement online systems by 2005 in response to the central government policy (Irani, Love, Elliman, Jones, & Themistocleous, 2005). Among them was the capital London, which is a major urban area consisting of 32 local authorities, 12 designated as inner London boroughs and 20 designated as outer London boroughs (Wikipedia, 2014). All have implemented their own e-government systems. Even though there is a rapid and noticeable growth of e-government in the UK, some studies point to a variety of issues. For example, Mosse and Whitley (2008) examined UK e-government websites and suggested that more attention should be paid to design elements like website consistency, simplicity, navigation and accessibility. Kuzma (2010) investigated accessibility design issues for UK e-government websites and found that the openness of websites to citizens is not widespread. Among the author's recommendations we cite: dividing large blocks of information into more manageable ones, clearly identifying the target of each link, and using the clearest and simplest language that is appropriate for a website's content. Issues were particularly raised for e-government at the local (i.e., municipal) level. Kuk (2003) found that the quality of local e-government websites in the UK was significantly poor in terms of information content, and relatively limited in terms of the range of online services. Furthermore, Irani et al. (2005) evaluated local e-government in the UK from an organizational perspective and pointed to big challenges in the areas of e-government infrastructures, web design and service management.

Table 1
Nielsen's (1994) and extended usability guidelines.

U1. Visibility of system status: to keep users informed about their progress.
U2. Match between system and the real world: to use the user's language, follow real-world conventions, and make information appear in a natural and logical order.
U3. User control and freedom: to make undo and redo functions available during interaction.
U4. Consistency and standards: to keep the same design features and follow platform conventions throughout the website.
U5. Error prevention: to support users in overcoming errors and prevent the same problem from occurring again.
U6. Recognition rather than recall: to make information easily remembered.
U7. Flexibility, efficiency of use: to consider usage by both novice and experienced users.
U8. Aesthetic design: to make a minimalist design.
U9. Help user recover errors: to precisely indicate the problem and constructively suggest a solution.
U10. Help and documentation: to provide help to support the user's task completion.
U11. Interoperability: to make all service parts, design elements, and website functions work as a whole to support user task completion.
U12. Support users' skills: to support and develop users' current skills and knowledge.
U13. Respectful interaction: to present a pleasant design and treat users with respect.

2.2. Usability of e-government websites

Usability is a well-known and well-defined concept in human–computer interaction (HCI) research, referring to the extent to which the user and the computer can "communicate" clearly through the interface (Chou & Hsiao, 2007). Fernandez, Insfran, and Abrahão (2011) defined usability as the capability of the software product to be understood, learned, operated, and attractive to the users. In addition to academic definitions, the International Organization for Standardization (ISO) interprets usability as the effectiveness, efficiency and satisfaction with which specified users achieve specific goals in a specified context of use (ISO, 1998). With regard to the World Wide Web (WWW), website usability refers to a qualitative appraisal of the relative user-friendliness of a website, as well as its ease of use (Lee & Kozar, 2012). Nielsen (1994) used multiple metrics to explain usability and developed a set of standard usability guidelines to explain this concept (see Table 1, Usability guidelines 1–10). These guidelines have since been widely used for evaluating website usability (e.g., Garcia et al., 2005). Note that these guidelines were developed around 20 years ago and were used for general website usability evaluation purposes. In order to address the particular needs of today's e-government websites, it is important to further develop the existing usability guidelines.

Evidence from previous studies suggests that e-government websites can benefit from a high level of usability in at least two ways. First, an e-government website, serving as a window for users, provides a first impression of a government and its online services. So regardless of the type of government website, democratic values underlying governmental operations require that e-government should aim for user-friendliness (Baker, 2009). If websites fail to perform readily from a usability standpoint, and instead hinder users' access to websites and their use of online services, then the evolution of e-government will be stymied (Youngblood & Mackiewicz, 2012). Second, usability improves users' performance as well as their satisfaction with e-government. Verdegem and Verleye (2009) investigated users' utilization of e-government based on a large sample (5590 respondents) of participants. Their results show that users' adoption and use of e-government services closely relate to usability in terms of the degree of access to the services, the findability of the website, the loading speed of the pages, the usefulness of information, and the flexibility offered through the website. A higher level of usability allows for a better use of e-government services. Conversely, failure to ensure usability hinders user participation. For instance, Anthopoulos, Siozos, and Tsoukalas (2007) found that if users fail to access and properly execute services due to usability shortcomings, their dissatisfaction increases. Such dissatisfaction may prevent users from returning to an e-government website, and even from recommending it to others. Evidently, prior research suggests that website usability impacts users' impressions of, as well as their performance and satisfaction with, e-government.

2.3. Usability and credibility

In this subsection, we discuss the relationship between usability and credibility. Basically, credibility refers to "judgments made by a perceiver concerning the believability of a communicator" (O'Keefe, 2002, p. 181). Although credibility has many definitions, it can be argued that two fundamental factors characterize it: trustworthiness and expertise (Hilligoss & Rieh, 2008). The former is about reliability, while the latter is about the user's perception of the source of knowledge and skills (Fogg, 2003). However, some studies use multiple criteria to describe credibility, allowing for a more comprehensive understanding. For instance, Rieh (2002) interprets credibility as trustfulness, reliability, accuracy, authority and quality, while Fogg (2002) proposes a set of ten credibility guidelines that try to capture the complexity of the concept (see Table 2, Credibility guidelines 1–10).

Within the context of e-government, ensuring that websites are credible is both an ethical and legal imperative. For example, Bélanger and Carter (2008) argue that users are reluctant to use e-government for lack of trust in the security of online transactions and concerns about the use of personal information submitted online, which, in the absence of legislation, clearly raises an ethical issue. But in many cases, the credibility of e-government is tied to legal requirements (if any). Legislation such as the UK's Privacy and Electronic Communications Regulations 2003 (Cabinet Office, 2010), the UK's Information Assurance Framework, and the Trust Services e-Government Strategy Framework Policy and Guidelines (Cabinet Office, 2010), to mention just a few, goes a long way towards fostering e-government credibility. For instance, the UK's Trust Services e-Government Strategy Framework Policy and Guidelines fosters e-government credibility by building the confidence and trust of individuals and businesses. It does so by ensuring that government organizations are responsible for delivering different levels of trust, such as providing effective user identification and authentication, among other things.

The relationship between usability and credibility has been highlighted in a number of studies, such as that of Brown, Rahman, and Hacker (2006) who argue that a combination of usability and credibility in website design can increase user responsiveness. In addition, Teo et al. (2008) claim that user acceptance of e-government services is affected by perceived usefulness and perceived ease of use. A study by Youngblood and Mackiewicz (2012), who analyzed the usability of municipal government websites, revealed substantial problems with those websites' usability and argued that such problems could erode the credibility of municipalities trying to engage users. Likewise, designing websites for credibility carries the potential for a greater overall usability. This is supported by Nielsen (2000) who highlights credibility as part of website usability design, and suggests establishing credibility on every page of the website, especially with respect to visual appearance.

Table 2
Fogg's (2002) and extended credibility guidelines.

C1. Design look: a clean, professional layout that fits the purpose and makes a good impression.
C2. Information accuracy: third-party references, links to source materials, proof that information is from a trusted source.
C3. Real world feel: providing information like a physical address and detailed company background.
C4. Expertise: providing credentials and any awards won in the field.
C5. Trustworthiness: photographs of department directors and the management team help give users clues to who is behind the website.
C6. Contact information: providing clear and easy to find contact information helps to portray the image that the organization cares about the needs of its users.
C7. Ease of use: users can easily complete their tasks using the website.
C8. Content update: providing proof of when content was last updated or reviewed shows evidence that the website is being used and is current.
C9. Promotional content: using restraint with any promotional content.
C10. Avoid errors: preventing problems of all types, such as typographical errors and broken links.
C11. Transparency: the website should keep users informed of governmental operations and make government budgeting and spending information available.
C12. Service agility: the website should provide flexible services to fit different user paths.
C13. Privacy and security: the website should protect users' information and secure its services.

Several studies show that usability and credibility share some important website design attributes. For instance, Yang (2007) investigated the credibility of news-related blogs in Taiwan, and some attributes used for perceived credibility, namely the degree of provision of fair, unbiased and objective information, were closely related to usability. Indeed, according to Garcia et al. (2005), such attributes can be applied to measure usability in aspects of information quality. Similarly, Robins and Holmes (2008) argue that website aesthetics design is the first credibility cue. Users judge a website's credibility quickly because, before other cognitive processes take place, preconscious judgements based on visual design elements are already made. Note that aesthetics design is also an attribute of usability (Nielsen, 2000). High aesthetics treatment of a user interface directly affects users' perception of usability (Sonderegger & Sauer, 2010).

Accordingly, evidence suggests that usability is closely interrelated with credibility; hence an assessment that explores how the usability of e-government websites impacts their credibility (and vice versa) can provide useful insights into the development of e-government.
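The kind of assessment just described comes down to correlating two sets of perception scores. As a minimal sketch (the ratings below are hypothetical placeholders, not data from this study), a Pearson correlation between participants' mean usability and credibility ratings could be computed as follows:

```python
# Minimal sketch: correlating perceived usability with perceived credibility.
# The ratings below are hypothetical placeholders, not the study's data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Mean 5-point ratings per participant (hypothetical), one pair per person.
usability = [4.1, 3.2, 4.5, 2.8, 3.9, 4.0]
credibility = [4.3, 3.0, 4.6, 2.5, 3.7, 4.1]

r = pearson_r(usability, credibility)
print(f"Pearson r = {r:.2f}")  # a value near +1 indicates a close positive link
```

A positive r close to 1 on such paired scores would be consistent with the pattern reported in this paper, namely that sites perceived as more usable were also perceived as more credible.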

2.4. Heuristic evaluation

In formative evaluation, usability is mostly concerned with evaluating the software interface using approaches known as usability inspections or expert reviews (Fernandez et al., 2011). Major approaches to usability inspection include heuristic evaluation, cognitive walkthrough (Khajouei, Hasman, & Jaspers, 2011), and user testing (Sonderegger & Sauer, 2010). Among those approaches, heuristic evaluation is quicker, easier and more effective, and has been broadly used in various studies (e.g., Youngblood & Mackiewicz, 2012). Heuristic evaluation normally involves users discovering problems based on a set of predefined design principles, guidelines or heuristics. Although it is sometimes called expert inspection, heuristic evaluation can be effectively employed by both experts and novices (Garcia et al., 2005). It can be conducted by a single inspector, although its effectiveness is usually improved by increasing the number of evaluators (Fogg, 2003). Furthermore, heuristic evaluation can trap a high proportion of problems. For example, Tan, Liu, and Bishu (2009) used both heuristic evaluation and user testing to identify web design problems, and out of a total of 183 problems, heuristic evaluation identified 150, while user testing only detected 69.

Most importantly, heuristic evaluation can be used for in-depth inspections, as in Garcia et al. (2005) who used it to assess Brazilian e-government websites. To ensure that the heuristics covered entire website features and focused on detailed design elements, the authors extended the heuristics to meet the specific needs of e-government websites. Thus, detailed sub-items were developed based on each extended heuristic. As a result, serious problems were raised from all heuristics. In particular, heuristics such as security and privacy; efficiency of use; information precision; visibility of system; interoperability; and transparency enabled a more thorough and in-depth inspection. For instance, specific design issues, such as the lack of digital certification and the absence of a virtual keyboard for password input for security and privacy, were clearly identified.

The above-mentioned body of work points to the applicability andusefulness of heuristic evaluation in our study to evaluate the usabilityand credibility of e-government websites. Our evaluation is describedin the next section.
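As a rough illustration of how such an evaluation aggregates its findings (the evaluator names and violations below are invented, keyed to the U1–U13 guideline codes of Table 1), each evaluator logs the violations they observed, and the set of distinct problems typically grows as evaluators are added:

```python
# Sketch of aggregating heuristic-evaluation findings. Each evaluator logs the
# guideline violations they observed (hypothetical example data, keyed by the
# U1-U13 guideline codes of Table 1).
findings = {
    "evaluator_1": {("U1", "no progress indicator on form submit"),
                    ("U4", "inconsistent colours across sections")},
    "evaluator_2": {("U4", "inconsistent colours across sections"),
                    ("U9", "error page gives no recovery advice")},
    "evaluator_3": {("U1", "no progress indicator on form submit"),
                    ("U10", "help pages hard to locate")},
}

# The union of distinct problems grows as evaluators are added, which is why
# effectiveness usually improves with more evaluators.
seen = set()
for name, problems in findings.items():
    seen |= problems
    print(f"after {name}: {len(seen)} distinct problems")

# Group the distinct problems by the guideline they violate.
by_guideline = {}
for code, desc in seen:
    by_guideline.setdefault(code, []).append(desc)
print(sorted(by_guideline))  # guidelines implicated at least once
```

The overlap between evaluators (here, two duplicated findings) is what the set union removes; the per-guideline grouping mirrors how studies such as Garcia et al. (2005) report which heuristics surfaced problems.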

3. Methods

3.1. E-government website selection

Three local e-government websites were selected: (1) London Authority One (LA1), which focuses on service delivery by offering a wide range of services such as social care applications, council tax payments and school registrations; (2) London Authority Two (LA2), which is more informational, focusing on users' information searching needs such as seeking local job postings and checking Heathrow airport flights; and (3) London Authority Three (LA3), which is aimed at motivating users' engagement, for instance by enabling them to participate in discussions of (and vote on) building plans. These three websites were selected for a number of reasons. First, local e-government websites are the closest to the users (compared to regional or national level websites), and they are frequently used by the general public. According to Reddick (2009), local e-government focuses on the direct needs of users in accessing information and services. Second, as local government has to lead the way in the creation of high quality content and services (Kuk, 2003), it is important to understand the effects of local e-government on users (Garcia et al., 2005). Third, evidence from previous studies (e.g., Irani et al., 2005) suggests that bigger challenges exist at the local level of e-government as well as its website design (Yang & Paul, 2005). Such challenges (e.g., errors in the website) may result in low user engagement with e-government.

3.2. Task sheet

As part of the assessment, and in order for the participants to have an initial interaction with the selected e-government websites and develop a general perception, they were required to perform a set of practical tasks on the three websites. Such tasks are representative of what users would usually be expected to carry out on an e-government website. In general, there are three categories of e-government services: (1) information distribution; (2) services offered; and (3) user participation (Garcia et al., 2005). Information distribution relates to the provision of all kinds of information via the e-government website. Services offered refer to delivering one-way services to users, such as document downloads and information searching. User participation involves users interacting with two-way services on the website, for example, electronic birth registrations, tax payments and school applications. Based on these categories, the tasks shown in Table 3 were designed to represent different types of interactions that users normally undertake when they visit e-government websites.

Table 3
Tasks assigned to users prior to filling the questionnaire.

1. Find the name of the Chief Executive Officer of London Authority 1
2. Find the title of any job related to social work in London Authority 1, including refere…
3. Search for the telephone number of the Planning Department in London Authority 1
4. Find the Revenue Budget 2008–09 of London Authority 1
5. Use the search engine on this website to find the place to apply for "Free School Meals"
6. Use the "A to Z service" to find information about how to join the local library, fill in the ad…
7. Find the latest news about the reopening date of the London Authority 1 Leisure centre
8. Download the Primary School Guide 2009–10 to the computer (Drive C:\Form down…)
9. Sign in to the system first, fill in a "compliments, comments and complaints online" form, and submit it to London Authority 1
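The category scheme above can be made operational when building such a task sheet. In this hypothetical sketch (the category labels assigned to a few Table 3 tasks are our own illustrative reading, not the paper's coding), each task is tagged with one of the three service categories and coverage of all three is checked:

```python
# Sketch: checking that an e-government task sheet covers all three service
# categories from Garcia et al. (2005). The category labels assigned to the
# Table 3 task numbers below are our own illustrative reading, not the
# paper's coding.
CATEGORIES = {"information distribution", "services offered", "user participation"}

tasks = {
    1: "information distribution",  # find the Chief Executive's name
    3: "information distribution",  # find a department phone number
    5: "services offered",          # use the search engine to locate a form
    8: "services offered",          # download a school guide
    9: "user participation",        # fill in and submit an online form
}

covered = set(tasks.values())
missing = CATEGORIES - covered
print("all categories covered" if not missing else f"missing: {missing}")
```

A check like this makes explicit the design claim in the text: the task sheet was built so that every category of interaction users normally undertake is exercised at least once.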

3.3. Usability and credibility questionnaire

A questionnaire was developed to assess the participants' perception of the usability and credibility of the target e-government websites. The questionnaire design involved three steps. First, the existing guidelines were extended to meet the specific requirements of e-government. Second, a set of associated criteria was developed for each guideline in order to focus on the detailed aspects of usability and credibility. Third, the specific questions were developed based on these criteria.

3.3.1. Extension of guidelines

As mentioned, Nielsen's and Fogg's guidelines were used as a benchmark in this study since their usefulness has already been validated (e.g., Sidi & Junaini, 2006). However, there is no record of these guidelines having previously been used for e-government website assessment. Moreover, these guidelines were developed around 20 and 12 years ago respectively, for general website usability and credibility evaluation purposes. In order to address the specificity of e-government websites, we extended Nielsen's and Fogg's existing guidelines.

In terms of usability, evidence from previous studies indicates that e-government is used by a wide range of people (Garcia et al., 2005); therefore, e-government websites should support users with different skills in accessing their services in a reasonably simple way. Interoperability, especially in terms of information and service exchange (Garcia et al., 2005), is also important, for instance in making sure information is kept current between government and the e-government system. Finally, during their interactions with e-government services, users should be treated with respect at all times (Reddick, 2009). As a consequence, we extended Nielsen's usability heuristics by adding three more guidelines: ‘Interoperability’, ‘Support and develop users' skills’ and ‘Pleasurable and respectful interaction with users’ (see Table 1, Usability guidelines 11–13).

As for credibility, studies show that since e-government is used for public administration, transparency should be fostered through the provision of in-depth information such as public expenditures (Welch


Table 4Usability evaluation criteria.

Usability guideline/interpretation Evaluation criteria Relevant studies

U1 Visibility of system status: to keep users informedabout their progress

To display clear subject headers, page titles; label option; display differentcontent in distinct zones; track users' path and highlight current position

Brinck, Gergle, and Wood(2002), Barnes and Vidgen(2004)

U2 Match between system and the real world: to usethe user' language, follow real-world conventions

To apply the same colour scheme for e-government and physical government;to keep users informed about service processing progress at all times

Aladwani (2013)

U3 User control and freedom: to make undo,redo functions available during interaction

To allow users to change options earlier; make the undo functionalways available; arrange subject options in a logical order

Nielsen (2000), Brinck et al.(2002), Baker (2009)

U4 Consistency, standards: to follow conventionsthrough website

To apply consistent layout, colours; and follow web conventions Barnes and Vidgen (2004)

U5 Error prevention: to support users to overcomeerrors

To avoid that users skip over the order of the task; show a highlightedmessage around errors; indicate the requirements of data entry

Garcia et al. (2005)

U6 Recognition rather than recall: to makeinformation easily remembered

To locate key subjects in a central position; offer clear and brief prompts for options'explanation; appropriately use breathing space in text areas

Nielsen (2000)

U7 Flexibility and efficiency of use: to considerusage for both novice and experienced users

To ensure accessible links; provide detailed information in multiple options; arrangesearch results by level of relevance; provide shortcuts for highly frequent usage tasks

Brinck et al. (2002)

U8 Aesthetic design: to make minimalist design To apply clear, simple images within corresponding text; present content in anuncluttered manner

Sonderegger and Sauer (2010)

U9 Errors recovery: to precisely show the problem,and suggest solution

To present error messages for further actions; mark compulsory data entry fields;highlight errors in particular field

Clemmensen and Katre (2012)

U10 Help, documentation: To provide help tosupport user task completion

To provide a quick online help access on every page; cover a wider range of guidanceand advice

Brinck et al. (2002)

U11 Interoperability: to make all services, design,functions work as a whole to support task completion

To define standard communication protocols; increase collaboration ability tounderstand the abbreviations, code, format

Garcia et al. (2005),Baker (2009)

U12 Support users' skills: to support and developusers' skills and knowledge

Tomake subject options stand out; present the most important content at the beginningof the paragraph; make forward and backward functions available

Barnes and Vidgen (2004),Garcia et al. (2005)

U13 Respectful interaction: to present a pleasantdesign and treat users with respect

To make text short; write information in clear and simple language; provideaccessibility options

Barnes and Vidgen (2004),(Reddick, 2009)


& Hinnant, 2003). In addition, since a variety of information and services are made available on e-government websites, they must be delivered using flexible mechanisms that support users in developing their own ways of achieving their desired outcomes (Gant & Gant, 2002). Furthermore, because all information and services are delivered and transacted via the internet, security and privacy must be addressed seriously (Bélanger & Carter, 2008). Evidently, security and privacy are closely related to user trust (Garcia et al., 2005). In light of all this, we added three new credibility guidelines to Fogg's original list of ten, namely

Table 5. Credibility evaluation criteria.

Credibility guideline/interpretation | Evaluation criteria | Relevant studies
C1 Design look: to make a clean, professional layout | To employ colours to group related information; use consistent colours in information presentation; visually label every page | Fogg (2003)
C2 Information accuracy: to provide cues that information is from a trusted source | To present information at the right level of detail; provide third-party references; display information with completeness and conciseness | Robins and Holmes (2008), Huang et al. (2009)
C3 Real-world feel: to provide information like a physical address and detailed company background | To show multiple contact methods; describe the roles of users and staff; present staff names and photos; make references to other governmental bodies | Sidi and Junaini (2006)
C4 Expertise: to provide credentials and any awards won in the field | To specify service policies; offer precise and detailed information with source references and dates; display messages with completeness and conciseness | Robins and Holmes (2008)
C5 Trustworthiness: to provide users with clues about who is behind the website | To make references to other government agencies; display any awards earned by government; provide detailed staff information | Wathen and Burkell (2002)
C6 Contact: to provide clear and easy contact information | To offer a quick contact option; provide multiple types of contact; organise contact information by department | Yang and Paul (2005)
C7 Ease of use: to make it possible for users to easily complete their tasks using the website | To provide multiple functions to support task completion; provide messages to indicate users' location; highlight the current step in the task process | Youngblood and Mackiewicz (2012)
C8 Content update: to show when content was last updated or reviewed | To indicate the website update date; present information and services update dates | Sidi and Junaini (2006)
C9 Promotional content: to use restraint with any promotional content | To limit the amount of promotional content; present promotional content in non-important areas | Robins and Holmes (2008)
C10 Avoid errors: to prevent problems of all types | To provide proper instructions; use clear, simple language without typographical errors | Fogg et al. (2001), Sidi and Junaini (2006)
C11 Transparency: to keep users informed of governmental operations | To offer rich government information; offer terms and disclaimer details; confirm service completion; indicate task status | Welch and Hinnant (2003), Lee (2010)
C12 Service agility: to provide flexible services to fit different user paths | To ensure all functions of e-government work as a whole; organise content in a hierarchical way; allow users to quit online services at any time | Verdegem and Verleye (2009)
C13 Privacy and security: to protect users' information and secure its services | To provide login mechanisms; show messages when transferring data; present warning messages when accessing confidential services | Garcia et al. (2005), Reddick (2009)


‘Transparency’, ‘Service agility’ and ‘Privacy and security’ (see Table 2, Credibility guidelines 11–13).

3.3.2. Development of evaluation criteria

Although the guidelines were extended, they are still too general for developing the usability and credibility questions in the questionnaire, which might not allow us to accomplish an in-depth assessment. A lack of detail in the analysis would lead to failure in identifying specific problems. It is therefore important to devise a


set of associated evaluation criteria for each guideline. Such criteria (see Tables 4 and 5) were devised based on (1) relevant usability and credibility studies and (2) the interpretation of a wide set of e-government research.

3.3.3. Usability and credibility questionnaire

Using a questionnaire ensures that the same questions are delivered to each participant and that responses can be obtained quickly. The participants are asked to respond using a five-point Likert scale, which clearly indicates their level of agreement with the statements that constitute the questionnaire. Another advantage of a five-point Likert scale is that an odd number of response options allows participants to indicate a positive, neutral or negative opinion. Furthermore, information obtained from a five-point Likert scale can be easily collected and analyzed (see a sample question from the questionnaire in Fig. 1).

3.4. Participants

This empirical study was conducted in London, UK. A total of 36 participants were approached on the street and took part in the study on a voluntary basis. The fact that this is an exploratory investigation explains the rather limited number of participants. Most participants were students and professors at Brunel University, and a few were recruited from public places, such as local libraries and leisure centres around Brunel University. A clear explanation of the purpose of the study was provided to the participants before the process started.

3.5. Evaluation procedure

The 36 participants were assigned evenly to the 3 target e-government websites (i.e., each website was evaluated by 12 participants). It took a participant approximately 90 min to complete the entire evaluation. Each evaluation followed the same three-phase process (Chen & Macredie, 2005): (1) free review; (2) task-based interaction; and (3) questionnaire. The free review allows the participants to look through the target e-government website. They can either freely look at the overall website or focus on specific website design elements.

Fig. 1. Sample of questions from the questionnaire.

This provides the participants with an initial interaction with the e-government websites, through which their general perception may be formed. Subsequently, the participants were required to complete a set of tasks on the target e-government website. To that end, a task sheet describing the selected 9 tasks was handed to the participants (see Table 3). The participants were asked to perform these tasks one by one without a time limit. While the participants performed the assigned tasks, elements of their performance were observed and recorded (including the amount of online help required, the average time spent completing all tasks, the average number of steps to finish tasks, and the ratio of tasks completed successfully). Having accomplished all the tasks, the participants were finally asked to fill out the usability and credibility questionnaire. All research instruments for this study were thoroughly checked and pre-tested in a pilot study before the evaluation began.

3.6. Data analysis

To analyze users' perception of usability and credibility, and their performance with the target e-government websites, the data collected from the questionnaire and observation was coded using IBM SPSS for Windows (version 20). The significance level (P) was set at less than 0.05. The independent variables were the three e-government websites, while the dependent variables were the participants' perception and their performance. The dependent variables were analyzed against the independent variables using one-way ANOVA, one-sample t-tests and paired-sample t-tests. The output of these tests was examined to discover differences among the target e-government websites. For example, to obtain the overall perception of usability among the three e-government websites, each participant's overall perception of usability (covering all usability guidelines) within each London Authority website was first calculated as a mean. This was followed by calculating the mean of the 12 participants' perception of usability (overall perception) for each London Authority website. Finally, this data was analyzed using one-way ANOVA with the three London Authority websites as the independent variable and the overall perception of usability as the dependent variable, to indicate whether there is a difference among the three websites.
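The pipeline described above (per-participant means, then a one-way ANOVA across the three websites) can be sketched in plain Python. This is a minimal sketch, not the authors' SPSS procedure; the per-participant scores below are made-up illustrations, chosen only so that the degrees of freedom (2, 33) match the study's design of 12 participants per website.

```python
import statistics

def one_way_anova_f(groups):
    """One-way ANOVA F statistic for k independent groups.

    Returns (F, df_between, df_within); F would then be compared
    against the F distribution to obtain a p-value (done in SPSS
    in the study).
    """
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = statistics.mean(x for g in groups for x in g)
    # between-group sum of squares: group means vs. the grand mean
    ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2
                     for g in groups)
    # within-group sum of squares: observations vs. their own group mean
    ss_within = sum((x - statistics.mean(g)) ** 2
                    for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    f = (ss_between / df_between) / (ss_within / df_within)
    return f, df_between, df_within

# Hypothetical per-participant overall-usability means (12 per website)
la1 = [3.2, 3.5, 3.6, 3.3, 3.4, 3.8, 3.1, 3.5, 3.4, 3.6, 3.3, 3.6]
la2 = [3.0, 3.3, 3.6, 3.1, 3.2, 3.7, 3.4, 3.3, 3.0, 3.5, 3.4, 3.4]
la3 = [3.9, 3.7, 4.1, 3.8, 3.6, 4.0, 3.7, 3.9, 3.8, 4.2, 3.6, 3.9]

f, dfb, dfw = one_way_anova_f([la1, la2, la3])
print(f"F({dfb}, {dfw}) = {f:.3f}")  # df match the paper's F(2, 33)
```

With 3 groups of 12 observations, the degrees of freedom are (2, 33), matching the F(2, 33) statistics reported in the results sections.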


Table 6. Distribution of participants' characteristics.

Characteristic | London Authority 1, N = 12 (%) | London Authority 2, N = 12 (%) | London Authority 3, N = 12 (%) | Total, N = 36 (%)
Gender: Male | 6 (50.0) | 6 (50.0) | 7 (58.3) | 19 (52.7)
Gender: Female | 6 (50.0) | 6 (50.0) | 5 (41.7) | 17 (47.3)
Age: 18–25 | 0 (0.0) | 1 (8.3) | 4 (33.3) | 5 (13.9)
Age: 25–30 | 6 (50.0) | 4 (33.3) | 2 (16.7) | 12 (33.3)
Age: 30–35 | 4 (33.3) | 4 (33.3) | 4 (33.3) | 12 (33.3)
Age: 35–45 | 2 (16.7) | 1 (8.3) | 1 (8.3) | 4 (11.2)
Age: 45+ | 0 (0.0) | 2 (16.7) | 1 (8.3) | 3 (8.3)
Internet use (hours): 0–5 | 1 (8.3) | 2 (16.7) | 1 (8.3) | 4 (11.2)
Internet use (hours): 6–10 | 2 (16.7) | 2 (16.7) | 0 (0.0) | 4 (11.2)
Internet use (hours): 11–15 | 1 (8.3) | 1 (8.3) | 1 (8.3) | 3 (8.3)
Internet use (hours): 16–20 | 3 (25.0) | 1 (8.3) | 2 (16.7) | 6 (16.6)
Internet use (hours): 20+ | 5 (41.7) | 6 (50.0) | 8 (66.7) | 19 (52.7)


4. Results and discussion

This section presents and discusses the results of the study. It covers the descriptive analysis of the participants' demographic information, the usability and credibility assessment of the three e-government websites, the mutual impact of usability and credibility, and user performance with the e-government websites.

4.1. Descriptive information

Table 6 shows the demographic information (gender, age and internet use) of the participants. Note that the distribution of participants' characteristics is broadly balanced across the three e-government websites.

Furthermore, the reliability of the measures was assessed by calculating Cronbach's alpha. An alpha value between 0.70 and 0.98 indicates high reliability (Sun & Hsu, 2012). In this study, the alpha values of all usability and credibility guidelines were higher than the 0.70 threshold, indicating good reliability.
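Cronbach's alpha, the reliability measure used above, can be computed directly from the item scores. A minimal sketch under the assumption of complete responses; the three Likert items and six respondents below are illustrative, not the study's data:

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a multi-item scale.

    item_scores: one list per questionnaire item, each list aligned
    by respondent (same respondent order in every list).
    """
    k = len(item_scores)
    # sample variance of each item's scores across respondents
    sum_item_var = sum(statistics.variance(item) for item in item_scores)
    # variance of each respondent's total score summed over all items
    totals = [sum(scores) for scores in zip(*item_scores)]
    return (k / (k - 1)) * (1 - sum_item_var / statistics.variance(totals))

# Three illustrative 5-point Likert items answered by six respondents
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
]
print(f"alpha = {cronbach_alpha(items):.3f}")
```

Items that vary together across respondents, as in this example, push alpha toward 1; in the study, a guideline's items were judged reliable when alpha exceeded 0.70.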

4.2. Usability assessment

The overall perception indicates the participants' evaluation of usability for each target e-government website. The results of one-way ANOVA show a significant difference in the participants' overall perception of usability (F(2, 33) = 8.784, P = 0.001) among the three websites. Since a low mean indicates a poor overall assessment, London Authority 2 has the worst overall evaluation, with a mean overall usability of 3.323 and a standard deviation of 0.367. London Authority 1 placed second, with a mean overall usability of 3.445 and a standard deviation of 0.304. London Authority 3 has the best score, with a mean overall usability of 3.843 and a standard deviation of 0.275.

Having looked at the overall perception of usability, we also describe, at a detailed level, users' perception of usability strengths for the target e-government websites. To identify the usability strengths, a one-sample t-test is used to determine whether each usability feature is significantly different from the overall usability. If a significant difference is found (P < 0.05), the usability features with a mean score greater than the overall usability mean score are selected as usability strengths. Similarly, the usability features with a mean score less than the overall usability mean score are selected as usability weaknesses.
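The selection rule described above can be sketched as a one-sample t-test followed by a sign check. This is an illustrative sketch, not the study's SPSS output: the default critical value 2.201 (two-tailed, alpha = 0.05, df = 11 for 12 participants) and the feature scores are assumptions.

```python
import math
import statistics

def one_sample_t(scores, benchmark):
    """t statistic for H0: the population mean of `scores` equals `benchmark`."""
    n = len(scores)
    return (statistics.mean(scores) - benchmark) / (statistics.stdev(scores) / math.sqrt(n))

def classify_feature(scores, overall_mean, t_crit=2.201):
    """Label a feature relative to the overall usability mean.

    t_crit defaults to the two-tailed 0.05 critical value for df = 11
    (12 participants per website), mirroring the P < 0.05 rule above.
    """
    t = one_sample_t(scores, overall_mean)
    if abs(t) < t_crit:
        return "no significant difference"
    return "strength" if t > 0 else "weakness"

# Illustrative 5-point ratings of one feature by 12 participants, tested
# against London Authority 1's overall usability mean of 3.445
feature_scores = [2, 2, 3, 2, 2, 3, 2, 2, 2, 3, 2, 2]
print(classify_feature(feature_scores, 3.445))  # well below the mean -> "weakness"
```

A feature whose mean sits significantly above the benchmark would instead return "strength", reproducing the paper's two-sided classification.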

Accordingly, a set of usability strengths has been identified on the target e-government websites. The most common strengths include "ease of moving forward and backward within different fields of the website" and "presenting a title with every page to clearly indicate the subject of the content". Note that these common usability strengths typically provide users with navigation cues, not only facilitating website orientation, but also reinforcing their ability to ascertain their navigational control.


In addition, some usability strengths have been identified on specific e-government websites. For example, on London Authority 1, the feature of "offering an A–Z", which supports users in quickly finding relevant information, was identified as a strength. On London Authority 2, the feature by which "each image corresponds to its context" was identified as a strength. This feature facilitates text presentation and increases the website's visual communication. On London Authority 3, the most significant usability feature was the "provision of multiple service approaches for task completion". This particular feature provides users with freedom of control and flexible navigation, so they can perform tasks in their preferred way. Such findings may imply that usability features need to meet the characteristics of e-government and be in line with its users' online behaviour. As indicated by Garcia et al. (2005), each e-government website presents a configuration related to three constitutive characteristics: online service offer, information migration percentage and user participation capacity. London Authority 1, being a major London Borough Council, is responsible for offering a wide range of services dealing with social care, libraries, cemeteries, education, etc. (Anon, 2013). As such, its website may focus on providing support for users' quick service access. London Authority 2 is home to Brunel University as well as London Heathrow Airport, so its website is usually more informational and focuses on the direct needs of users in terms of information searching. As a result, the priority of the London Authority 2 website may be to support users' increasing information seeking, scanning and readability. London Authority 3 usually aims to encourage users' service participation, such as consulting government statistical data, giving suggestions in web forums and participating in public voting. Therefore, "provision of multiple service approaches for task completion" was identified as an important feature of the London Authority 3 e-government website.

Table 7 shows the weak usability features identified on the target e-government websites. Among those, the most common usability issues are that "users are confused by links that have many different colours" and "links already visited are not clearly marked" (highlighted in bold in the table). Note that these weaknesses visually hinder users' resource recognition, making it difficult to locate information during the information-searching process. In addition, different usability weaknesses have been found on each e-government website. As lower scores indicate more serious weaknesses: on London Authority 1, "online help function not being clearly indicated" was found to be a serious weakness, which may challenge users' ability to solve problems. On London Authority 2, "options on the home page not being clearly presented" was seen as a weakness. Such a weakness affects subject content presentation, which may cause users difficulties during information seeking. Finally, "presenting subject categories without a logical order" was identified as a serious usability weakness on London Authority 3. Such a weakness may influence users' understanding of the subject arrangement as well as increase memory load.


Table 7. Usability weaknesses.

U | Q | Usability problem | Mean (SD) | Significance

London Authority 1
8 | 33 | Users are confused by links that have many different colours | 2.32 (1.084) | t = −3.303, p = 0.007
10 | 40 | Online help function is not clearly indicated on the website | 2.33 (1.155) | t = −3.350, p = 0.006
10 | 41 | It is difficult to switch between online help and current work | 2.75 (0.866) | t = −2.800, p = 0.017

London Authority 2
1 | 2 | Some options on the home page are not clearly presented | 2.17 (1.030) | t = −3.879, p = 0.003
8 | 33 | Users are confused by links that have many different colours | 2.25 (0.866) | t = −4.280, p = 0.001
5 | 22 | The website sometimes does not indicate a task's progress | 2.33 (0.888) | t = −3.851, p = 0.003
8 | 32 | Links already visited are not clearly marked | 2.50 (1.243) | t = −2.285, p = 0.043
5 | 19 | The website allows users to skip over the order of the process | 2.67 (0.778) | t = −2.907, p = 0.014

London Authority 3
8 | 33 | Users are confused by links that have many different colours | 2.58 (0.669) | t = −6.511, p = 0.000
4 | 16 | Subject categories are presented without a logical order | 2.83 (1.030) | t = −3.386, p = 0.006
8 | 32 | Links already visited are not clearly marked | 2.92 (1.084) | t = −2.952, p = 0.013
3 | 13 | Information is unbalanced between breadth and depth | 3.00 (0.853) | t = −3.412, p = 0.006

Note: SD = standard deviation.


4.3. Credibility assessment

The results also show a significant difference in the participants' overall perception of credibility (F(2, 33) = 4.885, P = 0.014) among the three websites. Specifically, London Authority 2 has the worst overall evaluation, with a mean overall credibility of 3.436 and a standard deviation of 0.322. London Authority 1 is second, with a mean overall credibility of 3.699 and a standard deviation of 0.432. London Authority 3 has the best score, with a mean overall credibility of 3.885 and a standard deviation of 0.291.

Meanwhile, a set of credibility strengths has been identified on the three e-government websites. Among these, the most common are: "the URL properly presenting the domain name of the local council", "the content of the website matching the information users expect to obtain from a local council", and "the website not presenting too much irrelevant promotional content". These findings are in line with other

Table 8. Credibility weaknesses.

C | Q | Credibility problem | Mean (SD) | Significance

London Authority 1
1 | 2 | Information is not presented with consistent colours | 2.58 (0.996) | t = −3.883, p = 0.003
1 | 6 | Subject/topic categories are not presented in order | 2.58 (0.872) | t = −3.356, p = 0.018
4 | 14 | Instruments and messages displayed by the website are not concise | 2.65 (0.756) | t = −3.122, p = 0.034
5 | 16 | The website does not display any awards it has earned | 2.72 (0.784) | t = −3.658, p = 0.046
6 | 20 | The website does not show detailed contact information | 2.75 (0.682) | t = −3.252, p = 0.025
7 | 24 | Information is arranged without a balance between breadth and depth | 2.69 (0.458) | t = −2.543, p = 0.012
7 | 22 | Navigating the website is not easy | 2.83 (0.743) | t = −3.358, p = 0.018
7 | 23 | It is not clear what page I am on and how far is left of the whole process | 2.86 (0.642) | t = −3.682, p = 0.038
8 | 25 | The website has no recent update | 2.72 (0.632) | t = −3.345, p = 0.023

London Authority 2
5 | 16 | The website does not display any awards it has earned | 2.17 (0.866) | t = −4.280, p = 0.001
7 | 22 | Navigating the website is not easy | 2.17 (0.937) | t = −2.242, p = 0.047
7 | 24 | Information is arranged without a balance between breadth and depth | 2.17 (1.030) | t = −3.879, p = 0.003
6 | 20 | The website does not show detailed contact information | 2.33 (0.888) | t = −3.851, p = 0.003
5 | 15 | It is not easy to find an "about us" page | 2.50 (1.243) | t = −2.285, p = 0.043
1 | 2 | Information is not presented with consistent colours | 2.67 (0.778) | t = −2.907, p = 0.014
7 | 23 | It is not clear what page I am on and how far is left of the whole process | 2.67 (0.985) | t = −2.720, p = 0.020
8 | 25 | The website has no recent update | 2.92 (0.515) | t = −3.521, p = 0.005

London Authority 3
8 | 25 | The website has no recent update | 2.58 (0.669) | t = −6.511, p = 0.000
13 | 38 | Protected or confidential areas can be accessed without a password | 2.58 (0.872) | t = −3.323, p = 0.004
13 | 39 | A secure message does not appear when accessing some confidential information | 2.75 (0.866) | t = −4.560, p = 0.001
5 | 17 | The website does not display information about who is in charge of it | 2.83 (1.030) | t = −3.386, p = 0.006
5 | 16 | The website does not display any awards it has earned | 2.92 (1.084) | t = −2.952, p = 0.013
11 | 34 | User status is not indicated during an action | 3.00 (0.853) | t = −3.412, p = 0.006

Note: SD = standard deviation.


studies which found that when users evaluate the credibility of web-based information, most of them look at references, institutional affiliations, and URL domains (Liu, 2004). Similarly, Fogg (2003) suggested that users' credibility perceptions increase if the website matches their expectations about what they should do at each step to accomplish their goals on the website. Accordingly, highlighting these common credibility strengths can certainly support efficient e-government development and facilitate users' appraisal of credibility.

Strong features have also been found on each e-government website. For instance, "accessing some personal services with a login mechanism" was found on London Authority 1, which ensures user authentication and increases service security. On London Authority 2, "limited promotional content presentation" was identified as a strong feature. This feature helps distinguish information from advertisement content, and keeps users' concentration on the subject at hand during the process of information seeking. On London Authority 3, "provision of a


clear contact" was found to be strong. This feature is beneficial for building a real-world presence on the website and fostering user trust.

Table 8 presents a number of credibility weaknesses identified on the target websites. Among them, the most common are "the absence of user task progress indication", "detailed contact information not being provided", "information being arranged without a balance between breadth and depth", and "the website not being updated regularly" (highlighted in bold in the table). Moreover, serious credibility weaknesses have been found on each website. For instance, a serious credibility weakness identified on London Authority 1 is that "information is presented without consistent colours", which may affect the visual continuity of the website. The biggest credibility weakness found on London Authority 2 is that "the website does not display any awards it has earned". Such a problem may affect e-government reputation, which in turn may lower user trust. On London Authority 3, the most severe weakness is that "some confidential services are accessible without a password requirement". This increases the risk of personal information loss, which may result in users failing to engage with any services on the e-government website.

The above-mentioned strengths suggest that usability and credibility were indeed considered in the development of the e-government websites reviewed by the participants. These strengths provide users with easy access to, and belief in, e-government, hence affecting e-government quality of use. As indicated by Bevan (1995), quality of use can be measured by a software product's attributes of usability and reliability, which can affect user satisfaction and performance.

The identified weaknesses also suggest that although usability and credibility have, to a certain degree, been considered in the development of e-government websites, they have not received enough attention at the detail level. In other words, there is much more room for e-government websites to improve their usability and credibility. In particular, the most serious usability problems lie within the areas of "Aesthetics and minimalist design", "Recognition rather than recall", and "Consistency and standards", and the most serious credibility problems are in the areas of "Design look", "Ease of use", "Content update" and "Privacy and security". These areas can be looked at seriously by

Table 9. Comparison of usability and credibility guidelines on the 3 websites (shown here for usability guideline U2).

Each cell reports the credibility guideline's mean (SD), U2's mean (SD) on the same website, and the paired-sample t-test.

C1 | LA1: 3.60 (.42) vs 3.19 (.65), t = 2.35, p = 0.039 | LA2: 3.44 (.71) vs 2.85 (.56), t = 3.062, p = 0.011 | LA3: 3.94 (.66) vs 3.33 (.52), t = 2.275, p = 0.044
C2 | LA1: 3.75 (.38) vs 3.19 (.65), t = 3.00, p = 0.012 | LA2: 3.63 (.36) vs 2.85 (.56), t = 5.172, p = 0.000 | LA3: 4.04 (.32) vs 3.33 (.52), t = 4.287, p = 0.001
C3 | LA1: 3.72 (.65) vs 3.19 (.65), t = 1.823, p = 0.096 | LA2: 3.67 (.59) vs 2.85 (.56), t = 3.437, p = 0.006 | LA3: 4.08 (.57) vs 3.33 (.52), t = 3.146, p = 0.009
C4 | LA1: 3.67 (.84) vs 3.19 (.65), t = 1.494, p = 0.163 | LA2: 3.42 (.75) vs 2.85 (.56), t = 2.254, p = 0.046 | LA3: 4.08 (.67) vs 3.33 (.52), t = 2.505, p = 0.029
C5 | LA1: 3.83 (.93) vs 3.19 (.65), t = 1.905, p = 0.083 | LA2: 3.28 (.45) vs 2.85 (.56), t = 3.460, p = 0.005 | LA3: 3.06 (.53) vs 3.33 (.52), t = −1.275, p = 0.229
C6 | LA1: 3.72 (.94) vs 3.19 (.65), t = 1.562, p = 0.147 | LA2: 3.28 (.49) vs 2.85 (.56), t = 2.511, p = 0.029 | LA3: 3.92 (.29) vs 3.33 (.52), t = 3.766, p = 0.003
C7 | LA1: 3.33 (.86) vs 3.19 (.65), t = 0.581, p = 0.573 | LA2: 2.98 (.55) vs 2.85 (.56), t = 0.692, p = 0.504 | LA3: 3.88 (.47) vs 3.33 (.52), t = 2.519, p = 0.029
C8 | LA1: 3.17 (1.03) vs 3.19 (.65), t = −0.084, p = 0.934 | LA2: 1.33 (.62) vs 2.85 (.56), t = −6.017, p = 0.000 | LA3: 1.58 (.52) vs 3.33 (.52), t = −8.775, p = 0.000
C9 | LA1: 3.67 (.62) vs 3.19 (.65), t = 1.715, p = 0.114 | LA2: 3.79 (.54) vs 2.85 (.56), t = 5.461, p = 0.000 | LA3: 4.46 (.45) vs 3.33 (.52), t = 5.249, p = 0.000
C10 | LA1: 3.78 (.64) vs 3.19 (.65), t = 2.305, p = 0.042 | LA2: 3.81 (.58) vs 2.85 (.56), t = 4.761, p = 0.001 | LA3: 4.00 (.64) vs 3.33 (.52), t = 2.960, p = 0.013
C11 | LA1: 3.92 (.63) vs 3.19 (.65), t = 2.998, p = 0.012 | LA2: 3.60 (.44) vs 2.85 (.56), t = 3.257, p = 0.008 | LA3: 3.90 (.58) vs 3.33 (.52), t = 2.377, p = 0.037
C12 | LA1: 3.78 (.63) vs 3.19 (.65), t = 2.312, p = 0.041 | LA2: 3.50 (.70) vs 2.85 (.56), t = 2.882, p = 0.015 | LA3: 4.25 (.73) vs 3.33 (.52), t = 3.532, p = 0.005
C13 | LA1: 3.92 (.63) vs 3.19 (.65), t = 3.241, p = 0.008 | LA2: 3.17 (.58) vs 2.85 (.56), t = 1.268, p = 0.231 | LA3: 3.13 (.43) vs 3.33 (.52), t = −1.387, p = 0.193

Note: each cell consists of mean and standard deviation.

Please cite this article as: Huang, Z., & Benyoucef, M., Usability and cred(2014), http://dx.doi.org/10.1016/j.giq.2014.07.002

designers of e-government systems to zoom in on specific elements ofthe websites, which might improve their usability and credibility.

4.4. Impact of usability and credibility on each other

The results of the overall perception of usability and credibility showthat the e-governmentwebsitewith higher usabilitywas seen as havinghigher credibility (e.g., LA3), and vice versa (e.g., LA2), it can be arguedthat users' perception of usability and credibility influences each other,which seems to confirm findings from previous studies (e.g., Garciaet al., 2005).

Furthermore, we analyze the data with regards to how usability andcredibility impact each other on the three target e-governmentwebsites. The results show that for the three websites, the usabilityguidelines that have the most impact on the overall credibility include“Match between system and the real world (U2)” (e.g., participants'perception of U2 shows significant differenceswith 6 (46.2%) credibilityguidelines in LA1, 11 (84.6%) credibility guidelines in LA2, and 11(84.6%) credibility guidelines in LA3 respectively) and “Aesthetic andminimalist design (U8)” (e.g., participants' perception of U8 shows sig-nificant differences with 8 (61.5%) credibility guidelines in LA1, 7(53.8%) credibility guidelines in LA2, and 12 (92.3%) credibility guide-lines in LA3 respectively). See highlighted in bold in Table 9.

In retrospect, users' trust of e-government is significantly influencedby the e-government website which speaks the user's language withwords, phrases and concepts familiar to the user, and follows real-world conventions, making information appear in a natural and logicalorder (Aladwani, 2013). Such trust is a long-term proposition that isbuilt slowly as people use the website; a single violation of that trustcan destroy the website's credibility (Nielsen, 2000). Moreover, theaesthetic appearance of a website plays a fundamental role in a user'sperception of that website's credibility. With a high level of aestheticsprojecting a professional look-and-feel that is appropriate for theorganization it represents, a website may not only deliver a goodimpression to users, but also increase their perceived credibility of thewebsite (Robins & Holmes, 2008).

U8

LA1 LA2 LA3

3.60(.42) 3.17(.49) 3.44(.71) 3.17(.43) 3.94(.67) 3.45(.53)t = 4.450, p = 0.001 t = 1.478, p = 0.167 t = 3.742, p = 0.0033.75(.38) 3.17(.49) 3.63(.36) 3.17(.43) 4.04(.32) 3.45(.53)t = 4.116, p = 0.002 t = 4.034, p = 0.002 t = 3.684, p = 0.0043.72(.65) 3.17(.49) 3.67(.59) 3.17(.43) 4.08(.57) 3.45(.53)t = 2.663, p = 0.022 t = 3.834, p = 0.003 t = 3.871, p = 0.0033.67(.84) 3.17(.49) 3.42(.75) 3.17(.43) 4.08(.67) 3.45(.53)t = 1.605, p = 0.137 t = 1.416, p = 0.185 t = 3.443, p = 0.0053.83(.93) 3.17(.49) 3.28(.44) 3.17(.43) 3.06(.53) 3.45(.53)t = 2.281, p = 0.043 t = 0.670, p = 0.517 t = -2.620, p = 0.0243.72(.94) 3.17(.49) 3.28(.49) 3.17(.43) 3.92(.29) 3.45(.53)t = 1.799, p = 0.099 t = 0.720, p = 0.486 t = 3.062, p = 0.0113.33(.86) 3.17(.49) 2.98(.55) 3.17(.43) 3.88(.47) 3.45(.53)t = 0.704, p = 0.496 t = −1.170, p = 0.267 t = 2.849, p = 0.0163.17(1.03) 3.17(.49) 1.33(.62) 3.17(.43) 1.58(.52) 3.45(.53)t = 0.000, p = 1.000 t = −8.928, p = 0.000 t = −8.068, p = 0.0003.67(.62) 3.17(.49) 3.79(.54) 3.17(.43) 4.46(.45) 3.45(.53)t = 2.076, p = 0.062 t = 3.191, p = 0.009 t = 7.433, p = 0.0003.78(.64) 3.17(.49) 3.81(.58) 3.17(.43) 4.00(.64) 3.45(.53)t = 2.405, p = 0.035 t = 4.013, p = 0.002 t = 5.705, p = 0.0003.92(.63) 3.17(.49) 3.60(.45) 3.17(.33) 3.90(.58) 3.45(.53)t = 3.437, p = 0.006 t = 2.256, p = 0.045 t = 2.660, p = 0.0223.78(.63) 3.17(.49) 3.50(.70) 3.17(.43) 4.25(.73) 3.45(.53)t = 3.987, p = 0.002 t = 1.339, p = 0.208 t = 3.725, p = 0.0033.92(.63) 3.17(.49) 3.17(.58) 3.17(.43) 3.13(.43) 3.45(.53)t = 4.151, p = 0.002 t = 0.000, p = 1.000 t = −1.729,p = 0.112
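The guideline-level comparisons reported in Table 9 rest on t-tests between participants' ratings of a credibility guideline and a usability guideline. The following is a minimal sketch of how one such cell could be computed; the ratings are invented for illustration (they are not the study's data), and the paper does not state which t-test variant was used, so an independent-samples test is assumed here.

```python
import numpy as np
from scipy import stats

# Hypothetical 1-to-5 ratings from 12 participants for one website.
# These values are illustrative only, not the study's data.
u2 = np.array([3, 3, 4, 3, 3, 4, 3, 3, 3, 4, 3, 3])  # e.g., "Match between system and the real world"
c1 = np.array([4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 3, 4])  # e.g., "Design look"

# Independent-samples t-test on the two sets of ratings, as in each Table 9 cell.
t_stat, p_value = stats.ttest_ind(c1, u2)
print(f"C1 = {c1.mean():.2f} ({c1.std(ddof=1):.2f}), "
      f"U2 = {u2.mean():.2f} ({u2.std(ddof=1):.2f}), "
      f"t = {t_stat:.3f}, p = {p_value:.3f}")
```

A difference would then be flagged as significant when p < 0.05, which is the criterion implied by the highlighted entries in the published table.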


Table 10. Comparison of credibility and usability guidelines on the 3 websites.
Each cell reports C10's mean (SD), the usability guideline's mean (SD), and the corresponding t-test.

C10 (Avoid errors):
U1:  LA1 3.78(.64) vs 3.60(.69), t = 0.669, p = 0.517; LA2 3.81(.58) vs 3.27(.60), t = 2.691, p = 0.021; LA3 4.00(.64) vs 3.98(.32), t = 0.096, p = 0.925
U2:  LA1 3.78(.64) vs 3.19(.65), t = 2.305, p = 0.042; LA2 3.81(.58) vs 2.85(.56), t = 4.761, p = 0.001; LA3 4.00(.64) vs 3.33(.52), t = 2.960, p = 0.013
U3:  LA1 3.78(.64) vs 3.83(.33), t = −0.237, p = 0.817; LA2 3.81(.58) vs 3.35(.43), t = 2.199, p = 0.042; LA3 4.00(.64) vs 3.63(.74), t = 1.551, p = 0.149
U4:  LA1 3.78(.64) vs 3.50(.62), t = 1.469, p = 0.170; LA2 3.81(.58) vs 3.40(.51), t = 2.017, p = 0.069; LA3 4.00(.64) vs 3.87(.39), t = 0.892, p = 0.392
U5:  LA1 3.78(.64) vs 3.40(.83), t = 1.539, p = 0.152; LA2 3.81(.58) vs 3.00(.57), t = 4.867, p = 0.000; LA3 4.00(.64) vs 3.88(.61), t = 0.570, p = 0.580
U6:  LA1 3.78(.64) vs 3.33(.71), t = 1.591, p = 0.140; LA2 3.81(.58) vs 2.94(.37), t = 3.867, p = 0.003; LA3 4.00(.64) vs 3.83(.50), t = 1.149, p = 0.275
U7:  LA1 3.78(.64) vs 3.17(.31), t = 2.653, p = 0.022; LA2 3.81(.58) vs 3.63(.49), t = 1.047, p = 0.318; LA3 4.00(.64) vs 4.08(.47), t = −0.420, p = 0.683
U8:  LA1 3.78(.64) vs 3.17(.49), t = 2.405, p = 0.035; LA2 3.81(.58) vs 3.17(.43), t = 4.013, p = 0.002; LA3 4.00(.64) vs 3.45(.53), t = 5.705, p = 0.000
U9:  LA1 3.78(.64) vs 3.58(.71), t = 1.124, p = 0.285; LA2 3.81(.58) vs 3.56(.82), t = 1.259, p = 0.234; LA3 4.00(.64) vs 3.73(.62), t = 1.634, p = 0.131
U10: LA1 3.78(.64) vs 2.83(.69), t = 5.785, p = 0.000; LA2 3.81(.58) vs 3.28(.85), t = 2.258, p = 0.045; LA3 4.00(.64) vs 3.86(.66), t = 0.788, p = 0.447
U11: LA1 3.78(.64) vs 3.56(.46), t = 1.265, p = 0.232; LA2 3.81(.58) vs 3.50(.64), t = 1.608, p = 0.136; LA3 4.00(.64) vs 4.19(.59), t = −1.134, p = 0.281
U12: LA1 3.78(.64) vs 4.00(.64), t = −1.146, p = 0.276; LA2 3.81(.58) vs 3.72(.51), t = 0.609, p = 0.555; LA3 4.00(.64) vs 4.44(.26), t = −2.530, p = 0.028
U13: LA1 3.78(.64) vs 3.69(.52), t = 0.306, p = 0.766; LA2 3.81(.58) vs 3.69(.64), t = 0.509, p = 0.621; LA3 4.00(.64) vs 4.03(.64), t = −0.178, p = 0.862

Note: each cell reports the mean and standard deviation.


The credibility feature with the most impact on the overall usability was identified as "Avoid errors (C10)": participants' perception of C10 shows significant differences with 7 (53.8%) usability guidelines in LA1, 7 (53.8%) in LA2, and 4 (30.8%) in LA3 (see Table 10). Error prevention helps users through two main functions: preventing a problem from occurring in the first place, and keeping the user in control. Conversely, the absence of error prevention may hamper the effective use of the website and influence users' attitudes. This is supported by Bargas-Avila, Oberholzer, Schmutz, de Vito, and Opwis (2007), who indicate that users who experience a variety of errors express dissatisfaction. Therefore, the prevention of errors of any type, such as typographical errors, broken links and design flaws, should be consistently sought throughout the usability-driven design process.

The above discussion suggests that it is important to understand the concepts of usability and credibility for e-government websites. When developing e-government, one should be aware of the factors affecting usability and credibility as well as the need to analyze their mutual influence. By doing so, more interactive e-government websites that motivate user participation can be developed.

Table 11. User performance.

Performance measure                      LA1              LA2              LA3
Online help required, Mean (SD)          0.250 (0.452)    0.583 (0.669)    0.000 (0.000)
    Significance: F(2, 33) = 4.733, p = 0.016
Time spent for the task, Mean (SD)       26.627 (8.905)   21.721 (8.579)   16.209 (8.102)
    Significance: F(2, 33) = 4.474, p = 0.019
Steps to complete task, Mean (SD)        60.417 (13.104)  81.833 (20.687)  50.167 (16.297)
    Significance: F(2, 33) = 10.862, p = 0.000
Successful task completion, Mean (SD)    1.139 (0.117)    1.148 (0.086)    1.065 (0.088)
    Significance: F(2, 33) = 2.590, p = 0.090

Note: LA1, LA2 and LA3 denote London Authority 1, 2 and 3; SD = standard deviation.

4.5. User performance

This section analyzes the data with regard to participants' performance on e-government websites. The participants' performance is assessed using a set of performance measures: the amount of online help required, the average time spent completing all tasks, the average number of steps to finish tasks, and the ratio of successfully completed tasks.

Table 11 shows the participants' performance on the three target e-government websites. The results of one-way ANOVA clearly indicate that there is a significant difference in the participants' performance among the three websites. More specifically, the participants who used London Authority 2 required more online support and took more

steps to complete the tasks than those who used London Authority 1 and 3. In addition, participants' performance in terms of the ratio of successful task completion indicates that the participants who used London Authority 2 finished fewer tasks than those who used London Authority 1 and 3. This is echoed in the results of the participants' overall perception of usability and credibility, which reveal that London Authority 2 has the worst overall usability and credibility scores. This may imply that users' overall perception of usability and credibility influences their performance. As suggested by Fogg et al. (2003), the overall assessment is particularly affected by problems with high severity, which, in turn, have a bigger impact on user perception. The most severe problems found on London Authority 2 are that "some options on the home page are not clearly presented" (Mean: 2.17; SD: 1.030), "the unbalanced information arrangement between breadth and depth" (Mean: 2.17; SD: 1.030), "the absence of display of awards won by organizations" (Mean: 2.17; SD: 0.866) and "the lack of navigation tools" (Mean: 2.17; SD: 0.937). These problems may have seriously affected the overall user perception of London Authority 2.
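The performance differences in Table 11 were tested with one-way ANOVA. A minimal sketch of such a test follows, using invented task-completion times for 12 participants per website (these numbers are illustrative, not the study's measurements):

```python
from scipy import stats

# Hypothetical task-completion times (minutes), 12 participants per website.
la1 = [26, 31, 22, 35, 28, 19, 30, 25, 27, 33, 21, 24]
la2 = [21, 18, 25, 30, 17, 22, 28, 16, 20, 24, 19, 23]
la3 = [15, 18, 12, 22, 16, 14, 20, 11, 17, 19, 13, 16]

# Three groups of 12 give between-groups df = 2 and within-groups df = 33,
# which matches the F(2, 33) statistics reported in Table 11.
f_stat, p_value = stats.f_oneway(la1, la2, la3)
print(f"F(2, 33) = {f_stat:.3f}, p = {p_value:.3f}")
```

A significant F only indicates that the three group means differ somewhere; locating which pair of websites differs would require a post-hoc test such as Tukey's HSD.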

It is interesting to look at the results of users' performance in terms of the average time spent completing all tasks. As shown in Table 11, the participants who used London Authority 1 took longer to complete their tasks than those who used London Authority 2 and 3. However, this is not reflected in the findings of the users' overall perception of usability and credibility, which do not show London Authority 1 to be the website with the worst overall usability and credibility. A possible explanation is that the participants' performance may not only be influenced by the overall usability and credibility, but also by particular features of usability and credibility. The problems with the highest severity found on London Authority 1 are that "the links within the website use many different colours" (Mean: 2.32; SD: 1.084) and "subject information is presented without consistent colours" (Mean: 2.58; SD: 0.996), both of which are closely related to website look (see Tables 1 and 2, usability guideline 8: Aesthetic and minimalist design, and credibility guideline 1: Design look). Website look is the set of visual design elements of e-government websites, and it has a considerable impact on both usability and credibility. Lavie and Tractinsky (2004) showed aesthetics to be strongly correlated with perceived usability, which is a key determinant of users' satisfaction and pleasure. This is also supported by Tractinsky (1997), who found that system aesthetics can be seen as "apparent usability", which is perceived more quickly than other attributes of usability. In terms of credibility, Fogg et al. (2003) argue that the most prominent issue found in credibility evaluation is design look, which causes users the most concern about credibility. More importantly, users' judgments of credibility are initially based on website look. As suggested by Robins and Holmes (2008), the first impression of credibility comes from the website's design look, which results in a faster judgment of credibility compared with other credibility cognitive processes. This may suggest that users' perception of an e-government website's look influences their performance, as in the case of London Authority 1.
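The aesthetics-usability link discussed above is typically quantified as a correlation between perceived-aesthetics and perceived-usability scores. Below is a hedged sketch of that analysis with invented per-participant questionnaire scores (not data from this study, nor from Lavie & Tractinsky, 2004):

```python
from scipy import stats

# Hypothetical mean questionnaire scores per participant (1-to-5 scale).
# Both lists are invented for illustration.
aesthetics = [2.5, 3.0, 3.5, 4.0, 3.0, 4.5, 2.0, 3.5, 4.0, 3.0, 4.5, 2.5]
usability  = [2.8, 3.2, 3.4, 4.1, 2.9, 4.4, 2.3, 3.6, 3.8, 3.1, 4.6, 2.7]

# Pearson correlation between perceived aesthetics and perceived usability.
r, p_value = stats.pearsonr(aesthetics, usability)
print(f"r = {r:.3f}, p = {p_value:.3f}")
```

A strong positive r would be consistent with the "apparent usability" effect, although a correlation alone cannot establish the direction of influence between aesthetics and usability.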

5. Conclusion

This study explored how the usability and credibility of e-government websites impact each other. In general, our findings are consistent with our expectations that there is a close correlation between e-government websites' usability and credibility. The results show that the e-government websites with a high level of usability were seen as having higher credibility, and vice versa. Moreover, specific influences between usability and credibility were detected. For instance, the common usability issues with the most impact on the overall credibility include "Match between system and the real world (U2)" and "Aesthetic and minimalist design (U8)", while the common credibility issue with the most impact on the overall usability is "Avoid errors (C10)". Such findings confirm the importance of understanding the essence of usability


and credibility concepts, and suggest that these two factors should always be considered together in e-government website design.

The study identified a number of usability and credibility issues on e-government websites. For instance, the most serious usability issues include "links having many different colours", "difficulty in using online help functions", and "presenting subject categories without a logical order". As for credibility, the most severe problems include "difficulty finding contact information", "the absence of a security message", and "the lack of content updates". These results suggest that usability and credibility have not been addressed at the detailed level on current e-government websites. If usability and credibility are not addressed in sufficient detail to inform e-government website design, e-government might still face the challenge of user acceptance.

Meanwhile, it seems that there is much room for current e-government websites to improve their usability and credibility. The problems identified in this study can be used by designers to focus on specific elements of the website in order to further improve its usability and credibility.

This study has some limitations. First, we only recruited 36 participants to assess the three e-government websites, which explains the exploratory nature of our work. To perform a more comprehensive assessment, further research may be conducted with more participants. Another limitation is that the participants may have different skills in evaluating e-government websites, which may impact the results of their evaluation. Further research may provide participants with short training on the specific knowledge of the domain, which may improve the evaluation outcomes. Additionally, our results indicate an interrelation between e-government website aesthetics and user performance. Further research might investigate users' aesthetic preferences, which may provide useful insight into designing user-centred e-government websites. Finally, a number of studies reveal that individual differences such as age, gender and prior experience affect users' preference of and behaviour on a website. Thus, there might be a need to investigate the effects of individual differences on users' attitude and perception towards e-government websites. Findings will be valuable for developing flexible e-government websites that can be accepted by and useful to a variety of individuals.

Acknowledgment

The authors would like to sincerely thank Dr. Laurence Brooks and Dr. Sherry Chen, who provided valuable advice and support in this research project.

References

Aladwani, A. M. (2013). A cross-cultural comparison of Kuwaiti and British citizens' views of e-government interface quality. Government Information Quarterly, 30(1), 74–86.

Anon (2013). Harrow London Borough Council. Wikipedia, the free encyclopedia. Available at: http://en.wikipedia.org/w/index.php?title=Harrow_London_Borough_Council&oldid=563695825 [Accessed July 26, 2013].

Anthopoulos, L. G., Siozos, P., & Tsoukalas, I. A. (2007). Applying participatory design and collaboration in digital public services for discovering and re-designing e-government services. Government Information Quarterly, 24(2), 353–376.

Baker, D. L. (2009). Advancing e-government performance in the United States through enhanced usability benchmarks. Government Information Quarterly, 26(1), 82–88.

Bargas-Avila, J. A., Oberholzer, G., Schmutz, P., de Vito, M., & Opwis, K. (2007). Usable error message presentation in the World Wide Web: Do not show errors right away. Interacting with Computers, 19(3), 330–341.

Barnes, S. J., & Vidgen, R. (2004). Interactive e-government services: Modelling user perceptions with eQual. Electronic Government, an International Journal, 1(2), 213–228.

Beaumont, P., Longley, P. A., & Maguire, D. J. (2005). Geographic information portals—a UK perspective. Computers, Environment and Urban Systems, 29(1), 49–69.

Bélanger, F., & Carter, L. (2008). Trust and risk in e-government adoption. The Journal of Strategic Information Systems, 17(2), 165–176.

Bevan, N. (1995). Measuring usability as quality of use. Software Quality Journal, 4, 115–130.

Beynon-Davies, P., & Williams, M. D. (2003). Evaluating electronic local government in the UK. Journal of Information Technology, 18(2), 137–149.

Brinck, T., Gergle, D., & Wood, S. D. (2002). Usability for the web: Designing web sites that work. Morgan Kaufmann.


Brown, W., Rahman, M., & Hacker, T. (2006). Home page usability and credibility: A comparison of the fastest growing companies to the Fortune 30 and the implications to IT governance. Information Management & Computer Security, 14(3), 252–269.

Cabinet Office (2010). Policy documents. UK gov talk. Available at: http://webarchive.nationalarchives.gov.uk/20100807034701/http://cabinetoffice.gov.uk/govtalk/policydocuments.aspx [Accessed June 21, 2013].

Chen, S. Y., & Macredie, R. D. (2005). The assessment of usability of electronic shopping: A heuristic evaluation. International Journal of Information Management, 25(6), 516–532.

Chou, J.-R., & Hsiao, S.-W. (2007). A usability study on human–computer interface for middle-aged learners. Computers in Human Behavior, 23(4), 2040–2063.

Clemmensen, T., & Katre, D. (2012). Adapting e-gov usability evaluation to cultural contexts. Usability in government systems (pp. 331–344). Boston: Morgan Kaufmann.

Fernandez, A., Insfran, E., & Abrahão, S. (2011). Usability evaluation methods for the web: A systematic mapping study. Information and Software Technology, 53(8), 789–817.

Fogg, B. J. (2002). Stanford guidelines for web credibility, a research summary from the Stanford persuasive technology lab. Stanford University. Available at: www.webcredibility.org/guidelines.

Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do. Morgan Kaufmann.

Fogg, B. J., Marshall, J., Laraki, O., Osipovich, A., Varma, C., Fang, N., et al. (2001). What makes web sites credible? A report on a large quantitative study. Proceedings of the SIGCHI conference on human factors in computing systems (pp. 61–68). New York, NY, USA: ACM. Available at: http://doi.acm.org/10.1145/365024.365037 [Accessed June 21, 2013].

Fogg, B. J., Soohoo, C., Danielson, D. R., Marable, L., Stanford, J., & Tauber, E. R. (2003). How do users evaluate the credibility of Web sites? A study with over 2500 participants. Proceedings of the 2003 conference on designing for user experiences (pp. 1–15). New York, NY, USA: ACM.

Gant, J. P., & Gant, D. B. (2002). Web portal functionality and state government e-service. Proceedings of the 35th Hawaii international conference on system sciences. Hawaii: IEEE.

Garcia, A. C. B., Maciel, C., & Pinto, F. B. (2005). A quality inspection method to evaluate e-government sites. Lecture Notes in Computer Science, 3591, 198–209.

Hilligoss, B., & Rieh, S. Y. (2008). Developing a unifying framework of credibility assessment: Construct, heuristics, and interaction in context. Information Processing and Management, 44(4), 1467–1484.

Homburg, V., & Bekkers, V. (2002). The back-office of e-government (managing information domains as political economies). Proceedings of the 35th annual Hawaii international conference on system sciences (pp. 5). Washington, DC, USA: IEEE. Available at: http://dl.acm.org/citation.cfm?id=820743.821056 [Accessed July 14, 2013].

Huang, Z., Brooks, L., & Chen, S. (2009). The assessment of credibility of e-government: Users' perspective. Lecture Notes in Computer Science, 5618, 26–35.

Irani, Z., Love, P. E. D., Elliman, T., Jones, S., & Themistocleous, M. (2005). Evaluating e-government: Learning from the experiences of two UK local authorities. Information Systems Journal, 15(1), 61–82.

ISO (1998). ISO 9241-11:1998, Ergonomic requirements for office work with visual display terminals (VDTs) - Part 11: Guidance on usability. Available at: http://www.it.uu.se/edu/course/homepage/acsd/vt09/ISO9241part11.pdf [Accessed July 23, 2011].

Khajouei, R., Hasman, A., & Jaspers, M. W. M. (2011). Determination of the effectiveness of two methods for usability evaluation using a CPOE medication ordering system. International Journal of Medical Informatics, 80(5), 341–350.

Kuk, G. (2003). The digital divide and the quality of electronic service delivery in local government in the United Kingdom. Government Information Quarterly, 20(4), 353–363.

Kuzma, J. M. (2010). Accessibility design issues with UK e-government sites. Government Information Quarterly, 27(2), 141–146.

Lavie, T., & Tractinsky, N. (2004). Assessing dimensions of perceived visual aesthetics of web sites. International Journal of Human-Computer Studies, 60(3), 269–298.

Lee, J. (2010). 10 year retrospect on stage models of e-government: A qualitative meta-synthesis. Government Information Quarterly, 27(3), 220–230.

Lee, Y., & Kozar, K. A. (2012). Understanding of website usability: Specifying and measuring constructs and their relationships. Decision Support Systems, 52(2), 450–463.

Liu, Z. (2004). Perceptions of credibility of scholarly information on the web. Information Processing & Management, 40(6), 1027–1038.

Mosse, B., & Whitley, E. A. (2008). Critically classifying: UK e-government website benchmarking and the recasting of the citizen as customer. Information Systems Journal, 19(2), 149–173.

Muir, A., & Oppenheim, C. (2002). National information policy developments worldwide I: Electronic government. Journal of Information Science, 28(3), 173–186.

Nielsen, J. (1994). Heuristic evaluation: Usability inspection methods. New York: SAGE Publications.

Nielsen, J. (2000). Designing web usability: The practice of simplicity. Thousand Oaks, CA, USA: New Riders Publishing.

O'Keefe, D. J. (2002). Persuasion theory and research (2nd ed.). SAGE Publications. Available at: http://www.sagepub.com/books/Book11299?prodId=Book11299 [Accessed June 20, 2013].

Reddick, C. G. (2009). The adoption of centralized customer service systems: A survey of local governments. Government Information Quarterly, 26(1), 219–226.


Rieh, S. Y. (2002). Judgment of information quality and cognitive authority in the web. Journal of the American Society for Information Science and Technology, 53(2), 145–161.

Robins, D., & Holmes, J. (2008). Aesthetics and credibility in web site design. Information Processing and Management, 44(1).

Scott, J. K. (2005). Assessing the quality of municipal government web sites. State and Local Government Review, 37(2), 151–165.

Shareef, M. A., Kumar, V., Kumar, U., & Dwivedi, Y. K. (2011). e-Government Adoption Model (GAM): Differing service maturity levels. Government Information Quarterly, 28(1), 17–35.

Sidi, J., & Junaini, S. N. (2006). Credibility review of the Malaysian states. Public Sector ICT Management Review, 1(1), 41–45.

Sonderegger, A., & Sauer, J. (2010). The influence of design aesthetics in usability testing: Effects on user performance and perceived usability. Applied Ergonomics, 41(3), 403–410.

Sun, J., & Hsu, Y. (2012). An experimental study of learner perceptions of the interactivity of web-based instruction. Interacting with Computers, 24(1), 35–48.

Tan, W., Liu, D., & Bishu, R. (2009). Web evaluation: Heuristic evaluation vs. user testing. International Journal of Industrial Ergonomics, 39(4), 621–627.

Teo, T. S. H., Srivastava, S. C., & Jiang, L. (2008). Trust and electronic government success: An empirical study. Journal of Management Information Systems, 25(3), 99–132.

Tractinsky, N. (1997). Aesthetics and apparent usability: Empirically assessing cultural and methodological issues. Proceedings of the ACM SIGCHI conference on human factors in computing systems (pp. 115–122). New York, USA: ACM. Available at: http://doi.acm.org/10.1145/258549.258626 [Accessed June 24, 2013].

Verdegem, P., & Verleye, G. (2009). User-centered e-government in practice: A comprehensive model for measuring user satisfaction. Government Information Quarterly, 26(3), 487–497.

Wathen, C. N., & Burkell, J. (2002). Believe it or not: Factors influencing credibility on the web. Journal of the American Society for Information Science and Technology, 53(2), 134–144.

Welch, E. W., & Hinnant, C. C. (2003). Internet use, transparency, and interactivity effects on trust in government. Proceedings of the 36th Annual Hawaii International Conference on System Sciences.

Wikipedia (2014). London boroughs. Available at: http://en.wikipedia.org/wiki/London_boroughs [Accessed July 26, 2014].

Yang, K. C. C. (2007). Factors influencing internet users' perceived credibility of news-related blogs in Taiwan. Telematics and Informatics, 24(2), 69–85.

Yang, J., & Paul, S. (2005). E-government application at local level: Issues and challenges: An empirical study. Electronic Government, an International Journal, 2(1), 56–76.

Youngblood, N. E., & Mackiewicz, J. (2012). A usability analysis of municipal government website home pages in Alabama. Government Information Quarterly, 29(4), 582–588.

Dr. Zhao Huang is an assistant professor at Shaanxi Normal University, Xi'an, China. He obtained a PhD in the field of information systems and computing at Brunel University in the UK and was a postdoctoral research fellow at the Telfer School of Management at the University of Ottawa, Canada. His main research interests include information systems, human–computer interaction, usability, e-government, e-commerce and Web 2.0.

Dr. Morad Benyoucef is an associate professor at the Telfer School of Management at the University of Ottawa, Canada. He specializes in e-business and Management Information Systems. His research interests include online marketplaces, online trust, Web 2.0, and e-Health applications. He has published articles in several international journals including Group Decision and Negotiation, Supply Chain Forum, Knowledge-based Systems, and Electronic Commerce Research. He holds a Master's from Rochester Institute of Technology, USA, and a PhD from Université de Montréal, Canada.
