Doll and Torkzadeh (1988) - The Measurement of End-User Computing Satisfaction




The Measurement of End-User Computing Satisfaction

By: William J. Doll
Professor of MIS and Strategic Management
The University of Toledo

Gholamreza Torkzadeh
Assistant Professor of Information Systems and Management Science
The University of Toledo
2801 West Bancroft Street
Toledo, Ohio 43606

Abstract

This article contrasts traditional versus end-user computing environments and reports on the development of an instrument which merges ease of use and information product items to measure the satisfaction of users who directly interact with the computer for a specific application. Using a survey of 618 end users, the researchers conducted a factor analysis and modified the instrument. The results suggest a 12-item instrument that measures five components of end-user satisfaction: content, accuracy, format, ease of use, and timeliness. Evidence of the instrument's discriminant validity is presented. Reliability and validity are assessed by nature and type of application. Finally, standards for evaluating end-user applications are presented, and the instrument's usefulness for achieving more precision in research questions is explored.

Keywords: End-user computing, user satisfaction, end-user computing satisfaction, management

ACM Categories: K.6.4, K.6.0

Introduction

End-user computing (EUC) is one of the most significant phenomena to occur in the information systems industry in the last ten years (Benson, 1983; Lefkovits, 1979). Although still in its early stages, signs of rapid growth are evident. In the companies they studied, Rockart and Flannery (1983) found annual EUC growth rates of 50 percent to 90 percent. Benjamin (1982) has predicted that by 1990 EUC will absorb as much as 75 percent of the corporate computer budget. Because of these trends, Rockart and Flannery call for better management to improve the success of end-user computing. Without improved management, they see the adverse effects of the Nolan-Gibson (1974) "control" stage constraining development of this new phenomenon.

To improve the management of EUC, Cheney, et al. (1986) call for more empirical research on the factors which influence the success of end-user computing. Henderson and Treacy (1986) describe a sequence of perspectives (implementation, marketing, operations, and economic) for managing end-user computing and identify objectives for each phase. In the implementation phase, they maintain that objectives should focus on increased usage and user satisfaction. As the organization gains experience with end-user computing, they recommend increased emphasis on market penetration and objectives that are more difficult to evaluate, such as integration, efficiency, and competitive advantage.

Ideally one would like to evaluate EUC based on its degree of use in decision making and the resultant productivity and/or competitive advantages. Crandall (1969) describes these resultant benefits as utility in decision making. However, this "decision analysis" approach is generally not feasible (Gallagher, 1974; Nolan and Seward, 1974). End-user computing satisfaction (EUCS) is a potentially measurable surrogate for utility in decision making. An end-user application's utility in decision making is enhanced when the outputs meet the user's information requirements (described by Bailey and Pearson (1983) as "information product") and the application is easy to use. Ease of use or "user friendliness" is especially important in facilitating voluntary managerial use of inquiry or decision support systems.


In a voluntary situation, system usage can also be a surrogate measure of system success. Ives, et al. (1983) argue that usage of an information or decision support system is often not voluntary (e.g., when usage is mandated by management). In this involuntary situation, perceptual measures of satisfaction may be more appropriate. Also, both theory (Fishbein and Ajzen, 1975) and a recent path analysis (Baroudi, et al., 1986) suggest that satisfaction leads to usage rather than usage stimulating satisfaction. Thus, user satisfaction may be the critical factor.

The growth of end-user computing is presenting new challenges for information system managers. Measures of user information satisfaction developed for a traditional data processing environment may no longer be appropriate for an end-user environment where users directly interact with application software. Indeed, user information satisfaction instruments have not been designed or validated for measuring end-user satisfaction. They focus on general satisfaction rather than on a specific application, and they omit aspects important to end-user computing such as ease of use. Hence, this study distinguishes between user information satisfaction and an end user's satisfaction with a specific application.

This article reports on the development of an instrument designed to measure the satisfaction of users who directly interact with a specific application. The focus is on measuring EUCS among data processing (DP) amateurs and non-DP trained users rather than DP professionals. The explicit goals of this research were to develop an instrument that:

1. Focuses on satisfaction with the information product provided by a specific application;

2. Includes items to evaluate the ease of use of a specific application;

3. Provides Likert-type scales as an alternative to semantic differential scaling;

4. Is short, easy to use, and appropriate for both academic research and practice;

5. Can be used with confidence across a variety of applications (i.e., adequate reliability and validity); and

6. Enables researchers to explore the relationships between end-user computing satisfaction and plausible independent variables (i.e., user computing skills, user involvement, EDP support policies and priorities, etc.).

An additional goal was to identify underlying factors or components of end-user computing satisfaction.

The End-User Computing Satisfaction Construct

In a traditional data processing environment (see Figure 1), users interact with the computer indirectly, through an analyst/programmer or through operations. Routine reports might be requested from operations. For ad hoc or nonroutine requests, an analyst/programmer assists the user. In this environment, a user might be unaware of what specific programs are run to produce reports.

In an end-user computing environment (see Figure 2), decision makers interact directly with the application software to enter information or prepare output reports. Decision support and database applications characterize this emerging end-user phenomenon. The environment typically includes a database, a model base, and an interactive software system that enables the user to directly interact with the computer system (Sprague, 1980). Although vast improvements have been made in end-user software (Canning, 1981; Martin, 1982), efforts to improve the man-machine interface continue (Sondheimer and Relles, 1982; Yavelberg, 1982).

Figures 1 and 2 do not depict all the differences between traditional and end-user computing environments. Other differences such as software, hardware, support requirements, and control procedures are not illustrated. Rather, the intent of these figures is to illustrate that, in an end-user computing environment, analysts/programmers and operations staff are less directly involved in user support; users assume more responsibility for their own applications. Systems personnel might assist in the selection of appropriate software tools, but the end users are largely on their own to design, implement, modify, and run their own applications. Training programs, experienced colleagues, and manuals provide some assistance. However, the goal of information system staff and service policies typically focuses on enabling end users to function more independently, to solve many problems on their own.


Figure 1. The Traditional DP Environment [diagram: the user requests routine reports from operations and makes ad hoc requests through an analyst/programmer, who works with the computer systems, programs (COBOL, etc.), and the database]

The definition of end-user computing

Davis and Olson (1985) describe this changing role of the user. To define end-user computing, they distinguish between primary and secondary user roles. The primary user makes decisions based on the system's output. The secondary user is responsible for interacting with the application software to enter information or prepare output reports, but does not use the output directly in his or her job. In end-user computing, the two roles are combined: the person who utilizes the system output also develops it.

In contrast, the CODASYL end-user facilities committee (Lefkovits, 1979) provides a broader definition of end-user computing to include: "indirect" end users who use computers through other people; "intermediate" end users who specify business information requirements for reports they ultimately receive; and "direct" end users who actually use terminals. However, for the most part, writers in this area such as Martin (1982), McLean (1979), and Rockart and Flannery (1983) limit their definition of end users to individuals who interact directly with the computer.

This research uses the more limited definition. End-user computing satisfaction is conceptualized as the affective attitude towards a specific computer application by someone who interacts with the application directly. End-user satisfaction can be evaluated in terms of both the primary and secondary user roles. User information satisfaction, especially the information product, focuses on the primary role and is independent of the source of the information (i.e., the application). Secondary user satisfaction varies by application; it depends on an application's ease of use. Despite the growing hands-on use of inquiry and decision support applications by managerial, professional, and operating personnel, research on user information satisfaction instruments has emphasized the primary user role, measuring overall user information satisfaction.


Figure 2. The End-User Computing Environment [diagram: the end user works directly through interactive software with the computer systems, supported by a model base]

The Ives, Olson and Baroudi instrument

Focusing on "indirect" or "intermediate" users, Bailey and Pearson (1983) interviewed 32 middle managers and developed a semantic differential instrument measuring overall computer user satisfaction. Later, Ives, et al. (1983) surveyed production managers (e.g., "indirect" or "intermediate" users), conducted a factor analysis of the Bailey and Pearson instrument, and reported on a shorter version of this instrument. After two factors identified as "information product" were combined and a vendor support factor eliminated, the Ives, et al., study suggested three factors: EDP staff and services; information product; and knowledge or involvement. However, the ratio of sample size to number of scales (7:1) must be regarded with some caution.

Other validation studies have expressed some concerns. Using a sample of "indirect" and "intermediate" users, Treacy (1985) assessed the reliability and validity of the Ives, et al., instrument. He concludes that this instrument is an important contribution, but has difficulties in three areas: the variables found through exploratory factor analysis were labeled in imprecise and ambiguous terms; many of the questions used were poor operationalizations of their theoretical variables; and the instrument failed to achieve discriminant validity. Also, Galletta and Lederer (1986) found test-retest reliability problems with the Ives, et al., instrument and, because of the heterogeneity of the items (information product, EDP staff and services, user involvement), expressed the need for caution in interpreting results.

These concerns are not widely shared. The Ives, et al., instrument is frequently used (Barki and Huff, 1985; Mahmood and Becker, 1985-86; Raymond, 1985; Galletta, 1986) and is, to date, probably the best available measure of user information satisfaction (Galletta and Lederer, 1986). However, this instrument has not been used in end-user computing research.

The Ives, et al., instrument was designed for the more traditional data processing environment. It measures general user satisfaction with EDP staff and services, information product, and user involvement/knowledge rather than satisfaction with a specific application. Indeed, it has not been validated for use in assessing specific end-user applications. It also ignores important ease of use aspects of the man-machine interface.

Ease of use has become increasingly important in software design (Branscomb and Thomas, 1984). There is increasing evidence that the effective functioning of an application depends on its ease of use or usability (Goodwin, 1987). If end users find an application easy to use, they may become more advanced users and, therefore, better able to take advantage of the range of capabilities the software has to offer. Also, ease of use may improve productivity or enable decision makers to examine more alternatives.

Both the EDP staff and services items and the user involvement/knowledge items seem inappropriate for an end-user environment. The end-user environment requires new EDP staff and service policies. End users have less direct interaction with analysts/programmers or operations. Rather than emphasizing direct support for user information requests, EDP staff and service policies emphasize more indirect and behind-the-scenes technical efforts to improve hardware, languages, data management, privacy, security, and restart/recovery (Rockart and Flannery, 1983). Most end users would not be able to evaluate these activities. Thus, several EDP staff and service items in the Ives, et al., instrument seem less appropriate in an end-user environment. These items include:

Relationship with EDP staff;
Processing of requests for system changes;
Attitude of EDP staff;
Communication with EDP staff;
Time required for system development; and
Personal control of EDP services.

By their nature, these items assume a more traditional computing environment and, like the user knowledge/involvement and information product items, are not application specific.

In addition, EDP staff/services and user knowledge/involvement items seemed more appropriately viewed as independent rather than dependent variables in an end-user computing environment. End-user knowledge and involvement in development is generally considered to be positively correlated with satisfaction. Also, Rockart and Flannery (1983) suggest that end-user skill levels and EDP support policies can affect the success of end-user computing. For these reasons, EDP staff/services and user knowledge/involvement items were excluded when the researchers generated items to measure end-user computing satisfaction.

Research Methods

To ensure that a comprehensive list of items was included, the works of previous researchers (Bailey and Pearson, 1983; Debons, et al., 1978; Neumann and Segev, 1980; Nolan and Seward, 1974; Swanson, 1974; Gallagher, 1974) were reviewed. Based on this review, the researchers generated 31 items to measure end-user perceptions. To measure "ease of use" of an application, a construct which seemed to be missing from the previous works reviewed, seven additional items were also included. Two global measures of perceived overall satisfaction and success were added to serve as a criterion.

Thus, a 40-item instrument (see the Appendix) was developed using a five-point Likert-type scale, where 1 = almost never; 2 = some of the time; 3 = about half of the time; 4 = most of the time; and 5 = almost always. The instructions requested the users to write in the name of their specific application and, for each question, to circle the response which best described their satisfaction with this application.

Next, a structured interview questionnaire was developed in which users were asked open-ended questions such as: How satisfied were they with the application? What aspects of the application, if any, were they most satisfied with and why? What aspects of the application, if any, were they most dissatisfied with and why?

Pilot Study

To make the results more generalizable, the researchers attempted to gather data from a variety of firms. Five firms (a manufacturing firm, two hospitals, a city government office, and a university) were selected. A sample of 96 end users, with approximately an equal number of responses from each organization, was obtained. Data were gathered by research assistants through personal interviews with end users.


The personal interviews enabled the assistants to verify that the respondent directly interacted with the application software. The research assistants first conducted open-ended structured interviews and recorded the end user's comments; then, the Likert-type questionnaire was administered.

To assess whether the instrument was capturing the phenomenon desired by the researchers and to verify that important aspects of satisfaction were not omitted, qualitative comments from the structured interviews were compared with the responses to the 40 questions. The end users' overall level of satisfaction and the specific aspects that satisfied or dissatisfied end users supported the instrument. This also enabled the researchers to verify that the respondents knew what the items were asking.

To ensure that the items measured the end-user computing construct, the construct validity of each item was examined. Kerlinger (1978) cites two methods of construct validation: (1) correlations between total scores and item scores, and (2) factor analysis. The first approach assumes that the total score is valid; thus, the extent to which the item correlates with the total score is indicative of construct validity for the item. In this study each item score was subtracted from the total score in order to avoid a spurious part-whole correlation (Cohen and Cohen, 1975); the result is a corrected item total (sum for 37 items), which was then correlated with the item score. In this pilot test, factor analysis was not used to assess construct validity because the ratio of sample size to number of items (2:1) was considered too low.

A measure of criterion-related validity (Kerlinger, 1978) was also examined to identify items which were not closely related to the end-user computing construct. The two global items measuring perceived overall satisfaction and success of the application were assumed to be valid measures, and the sum of the two items was used as a criterion scale. The items comprising this criterion scale were: "Is the system successful?" and "Are you satisfied with the system?" The extent to which each item was correlated with this two-item criterion scale provided a measure of criterion-related validity.
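The item screening just described lends itself to a short computation. The sketch below is an assumed illustration, not the authors' program; the data frame layout and column names (q1 through q40, with q20 and q38 as the two global criterion items, as marked in the Appendix) are hypothetical.

```python
# Sketch of the pilot-study item screening: corrected item-total correlations
# (each item removed from the total to avoid a spurious part-whole correlation)
# and correlations with the two-item criterion scale.
import pandas as pd

def corrected_item_total(items: pd.DataFrame) -> pd.Series:
    """Correlate each item with the sum of the remaining items."""
    total = items.sum(axis=1)
    return pd.Series({col: items[col].corr(total - items[col]) for col in items.columns})

def criterion_validity(items: pd.DataFrame, criterion: pd.Series) -> pd.Series:
    """Correlate each item with the criterion scale."""
    return items.apply(lambda col: col.corr(criterion))

# Hypothetical usage: `responses` holds the 40 Likert items as columns q1..q40.
# criterion = responses["q20"] + responses["q38"]       # "successful" + "satisfied"
# candidates = responses.drop(columns=["q20", "q38"])   # the 38 screened items
# print(corrected_item_total(candidates))
# print(criterion_validity(candidates, criterion))
```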

Items were eliminated if their correlation with the corrected item total was below .5 or if their correlation with the two-item criterion scale was below .4. These cutoffs were arbitrary; there are no accepted standards. The correlations with the corrected item total (r ≥ .5) and the two-item criterion (r ≥ .4) were significant at p < .001 and comparable to those used by other researchers (Ives, et al., 1983). Thus, the cutoffs were considered high enough to ensure that the items retained were adequate measures of the end-user computing satisfaction construct. These two criteria enabled the researchers to reduce the 38 items to 23. Five additional items were deleted because they represented the same aspects with only slightly different wordings (e.g., "Does the system provide up-to-date information?" and "Do you find the information up-to-date?"). In each case, the wording with the lowest corrected item-total correlation was deleted. In the pilot study, the remaining 18 items had a reliability (Cronbach's alpha) of .94 and a correlation of .81 with the two-item criterion scale.
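Cronbach's alpha, the reliability coefficient quoted for the retained items, follows directly from the item variances and the variance of the total score. A minimal sketch:

```python
# Cronbach's alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total score).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances.sum() / total_variance)
```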

Survey methods

To further explore this 18-item instrument, the questionnaire was administered to 44 firms. The sample was select rather than random; however, the large number of firms used supports the generalizability of the findings. In each of these firms, the MIS director was asked to identify the major applications and the major users who directly interact with each application. In many cases, the MIS director consulted with the heads of user departments to identify major end users. This method may have failed to identify a few major end users, especially microcomputer users. However, working through the MIS director was considered a practical necessity.

In this survey, a separate criterion question ("Overall, how would you rate your satisfaction with this application?") was used. The criterion question used a five-point scale: 1 = nonexistent; 2 = poor; 3 = fair; 4 = good; 5 = excellent.

Data were gathered by research assistants who first conducted personal interviews with the end users (using the same structured interview process used in the pilot study) and then administered the questionnaire. Again, the personal interviews enabled the research assistants to verify that the respondents directly interacted with application software. The researchers compared the more qualitative interview comments with the questionnaire data to identify inconsistencies (i.e., respondents who did not complete the questionnaire carefully). Only about eight respondents were discarded because interview comments did not correspond with the questionnaire data.

A sample of 618 usable end users' responses was obtained. This sample represented 250 different applications with an average of 2.5 responses per application. Bartlett's test of sphericity had a chi-square value of 8033.46 and a significance level of .00000. This suggests that the intercorrelation matrix contains enough common variance to make factor analysis worth pursuing.
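Bartlett's test of sphericity checks whether the item intercorrelation matrix departs enough from an identity matrix to justify factoring. A sketch of the standard chi-square approximation (assumed code, not the authors' computation):

```python
# Bartlett's test of sphericity for an items matrix (respondents x items).
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(data: np.ndarray):
    """Return the chi-square statistic and p-value."""
    data = np.asarray(data, dtype=float)
    n, p = data.shape
    corr = np.corrcoef(data, rowvar=False)
    statistic = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(corr))
    dof = p * (p - 1) / 2.0
    return statistic, chi2.sf(statistic, dof)
```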

The ratio of sample size to number of items (34:1) was well above the minimum 10:1 ratio suggested for factor analysis by Kerlinger (1978). However, in this case, a large sample was considered essential. The items being factor analyzed were selected because they were closely related to each other (i.e., all items were thought to be measures of the same EUCS construct). Thus, the items could be expected to have considerable common variance and relatively large error variance compared to their unique variance.

To assess reliability and validity by nature and type of application, users were asked whether their application was: end-user developed; microcomputer or mainframe; and monitor, exception reporting, inquiry, or analysis (Alloway and Quillard, 1983).

Sample characteristics

The sample contains responses from a variety of industries and management levels (see Table 1). The respondents indicated that 41.9 percent of the applications were "primarily developed by an end user," but only 91 respondents (14.7 percent) had personally developed the application themselves. Twenty-five percent were microcomputer applications whereas 75 percent were mini or mainframe applications. The applications were 37.6 percent decision support, 19.3 percent database, 19.8 percent exception reporting, 19.9 percent monitor, and 3.4 percent other (e.g., word processing).

Data Analysis

The researchers conducted an exploratory factor analysis and modified the instrument, examined discriminant validity of the modified instrument, and assessed reliability and criterion-related validity by nature and type of application (Kerlinger, 1978; Schoenfeldt, 1984). Factor analysis was used to identify the underlying factors or components of end-user satisfaction that comprise the domain of the end-user satisfaction construct. Items which were not factorially pure were deleted to form a modified instrument that would facilitate the testing of more specific hypotheses (Weiss, 1970). The researchers attempted to avoid the use of imprecise and ambiguous terms to label the factors (Bagozzi, 1981), and examined discriminant validity (Campbell and Fiske, 1959).

Table 1. Respondents by Industry and Position

Respondents by Industry
Manufacturing: 42.6%
Finance, banking & insurance: 4.5%
Education: 3.7%
Wholesale & retail: 6.5%
Transportation, communication & utilities: 9.5%
Government agencies: 9.1%
Health services/hospitals: 16.7%
Other: 7.4%
Total: 100.0%

Respondents by Position
Top management: 4.2%
Middle management: 31.2%
First level supervisor: 20.4%
Professional employees without supervisory responsibilities: 28.7%
Other operating personnel: 15.5%
Total: 100.0%

Factor analysis

Using the sample of 618 responses, the data were examined using principal components analysis as the extraction technique and varimax as the method of rotation. Without specifying the number of factors, three factors with eigenvalues greater than one emerged. These factors were interpreted as content/format, accuracy/timeliness, and ease of use/efficiency.

These labels were considered imprecise because factors appeared to contain two different types of items (e.g., content and format items; accuracy and timeliness items). To achieve more precise and interpretable factors, the analysis was conducted specifying two, four, five, and six factors.

The researchers felt that specifying five factors resulted in the most interpretable structure. These factors were interpreted as content, accuracy, format, ease of use, and timeliness and explained 78.0 percent of the variance. The loadings of the 18 items on each factor (for factor loadings greater than .30) are depicted in Table 2, and a description of each item (C1 through T2) is provided in Table 3.
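The extraction and rotation reported here can be sketched as principal components of the item correlation matrix followed by a varimax rotation. The code below is an illustrative reconstruction rather than the authors' procedure; `item_scores` is a hypothetical respondents-by-items array.

```python
# Principal components extraction with varimax rotation (illustrative sketch).
import numpy as np

def principal_components(data: np.ndarray, n_factors: int) -> np.ndarray:
    """Unrotated principal-component loadings (items x n_factors)."""
    corr = np.corrcoef(np.asarray(data, dtype=float), rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(corr)              # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_factors]        # keep the largest components
    return eigvecs[:, order] * np.sqrt(eigvals[order])

def varimax(loadings: np.ndarray, tol: float = 1e-6, max_iter: int = 100) -> np.ndarray:
    """Standard varimax rotation of a loading matrix."""
    L = loadings.copy()
    p, k = L.shape
    R = np.eye(k)
    d_old = 0.0
    for _ in range(max_iter):
        LR = L @ R
        # Gradient of the varimax criterion, solved via SVD for the rotation matrix.
        gradient = L.T @ (LR ** 3 - (LR @ np.diag(np.sum(LR ** 2, axis=0))) / p)
        u, s, vt = np.linalg.svd(gradient)
        R = u @ vt
        d_new = np.sum(s)
        if d_new < d_old * (1 + tol):
            break
        d_old = d_new
    return L @ R

# rotated = varimax(principal_components(item_scores, n_factors=5))
```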

The items are grouped in Table 2 by their highest (primary) factor loading. A number of items had factor loadings above .3 or .4 on additional (nonprimary) factors. Items with many multiple loadings may be excellent measures of overall end-user satisfaction, but including them in the scale blurs the distinction between factors. To improve the distinction between factors, items which had factor loadings greater than .3 on three or more factors were deleted from the scale; these are C5, A3, A4, F3, F4, and E3.
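The deletion rule just described amounts to counting, for each row of the rotated loading matrix, how many loadings exceed .30. A small sketch (an assumed helper, not the authors' code), continuing from the rotated loadings above:

```python
# Flag items whose rotated loadings exceed a threshold on too many factors.
import numpy as np

def cross_loading_items(loadings: np.ndarray, item_codes, threshold: float = 0.30,
                        max_factors: int = 2):
    """Return the codes of items loading above `threshold` on more than `max_factors` factors."""
    counts = (np.abs(np.asarray(loadings)) > threshold).sum(axis=1)
    return [code for code, c in zip(item_codes, counts) if c > max_factors]
```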

These deletions resulted in a 12-item scale for measuring end-user computing satisfaction and improved the match between the factor labels and the questions. In the modified 12-item instrument, only one item (C4) had a primary factor loading below .7. Furthermore, none of the items had a secondary loading above .4. Each of these 12 items had a corrected item-total correlation above .63 (a measure of internal consistency) and a correlation with the criterion measure above .51 (see Table 3). Figure 3 illustrates this modified model for measuring end-user computing satisfaction.

This 12-item instrument had a reliability of .92 and a criterion-related validity of .76. The criterion was the separate measure of overall end-user satisfaction with the application.

Table 2. Rotated Factor Matrix of 18-Item Instrument

Primary factor loadings (items grouped by their highest-loading factor; loadings below .30 suppressed in the original table):

Content: C1 = .748, C2 = .739, C3 = .719, C4 = .664, C5 = .512
Accuracy: A1 = .860, A2 = .837, A3 = .731, A4 = .562
Format: F1 = .788, F2 = .713, F3 = .646, F4 = .588
Ease of Use: E1 = .824, E2 = .804, E3 = .557
Timeliness: T1 = .777, T2 = .773

Several items (notably C5, A3, A4, F3, F4, and E3) also loaded above .30 on one or more other factors; these secondary loadings are not reproduced here.


Table 3. Reliability and Criterion-Related Validity of Measures of End-User Satisfaction

Item Code | Item Description | Corrected Item-Total Correlation | Correlation With Criterion
C1 | Does the system provide the precise information you need? | .77 | .62
C2 | Does the information content meet your needs? | .76 | .62
C3 | Does the system provide reports that seem to be just about exactly what you need? | .72 | .60
C4 | Does the system provide sufficient information? | .70 | .55
C5 | Do you find the output relevant? | .76 | .59
A1 | Is the system accurate? | .69 | .54
A2 | Are you satisfied with the accuracy of the system? | .68 | .51
A3 | Do you feel the output is reliable? | .73 | .54
A4 | Do you find the system dependable? | .70 | .65
F1 | Do you think the output is presented in a useful format? | .66 | .54
F2 | Is the information clear? | .72 | .55
F3 | Are you happy with the layout of the output? | .73 | .58
F4 | Is the output easy to understand? | .75 | .57
E1 | Is the system user friendly? | .63 | .52
E2 | Is the system easy to use? | .67 | .57
E3 | Is the system efficient? | .75 | .68
T1 | Do you get the information you need in time? | .69 | .56
T2 | Does the system provide up-to-date information? | .67 | .55

The reliability (alpha) of each factor was: content = .89; accuracy = .91; format = .78; ease of use = .85; and timeliness = .82. The correlation of each factor with the criterion was: content = .69; accuracy = .55; format = .60; ease of use = .58; and timeliness = .60.

Convergent and discriminant validity analysis

Table 4 presents the measure correlation matrix, means, and variances. The multitrait-multimethod (MTMM) approach to convergent validity tests that the correlations between measures of the same theoretical construct are different from zero and large enough to warrant further investigation. The smallest within-variable (factor) correlations are: content = .59; accuracy = .82; format = .64; ease of use = .75; and timeliness = .70. For a sample of 618, these are significantly different from zero (p = .000) and large enough to encourage further investigation.

Using the MTMM approach, discriminant validity is tested for each item by counting the number of times it correlates more highly with an item of another variable (factor) than with items of its own theoretical variable. Campbell and Fiske (1959) suggest determining whether this count is higher than one-half the potential comparisons. However, in this case, common method variances are present, so it is unclear how large a count would be acceptable.

An examination of the matrix in Table 4 reveals zero violations (out of 112 comparisons) of the condition for discriminant validity. For example, the lowest correlation between C1 and other content items is .67 with C4. This correlation is higher than C1's correlation with the other eight noncontent items. Each of the 12 items is more highly correlated with the other item(s) in its group than with any of the items measuring other variables.
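The counting procedure behind the zero-violations result can be sketched as follows. The comparison rule used here (each cross-factor correlation checked against the item's lowest within-factor correlation) is an interpretation consistent with the 112 comparisons reported above, not a published algorithm, and the variable names are hypothetical.

```python
# Count discriminant-validity violations in an item correlation matrix.
import numpy as np

def discriminant_violations(corr: np.ndarray, factor_of: list):
    """corr: item x item correlation matrix; factor_of[i]: factor label of item i."""
    n = corr.shape[0]
    comparisons = violations = 0
    for i in range(n):
        within = [corr[i, j] for j in range(n) if j != i and factor_of[j] == factor_of[i]]
        for k in range(n):
            if factor_of[k] == factor_of[i]:
                continue
            comparisons += 1
            if corr[i, k] > min(within):   # correlates more highly across factors: a violation
                violations += 1
    return violations, comparisons

# factors = ["C"] * 4 + ["A"] * 2 + ["F"] * 2 + ["E"] * 2 + ["T"] * 2
# discriminant_violations(corr_matrix, factors)   # expected (0, 112) for Table 4
```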

Reliability and criterion validity analysis by nature and type of application

Table 5 describes the reliability and criterion-related validity of the 12-item scale by nature and type of application.

End-User Computing Satisfaction

CONTENT
C1: Does the system provide the precise information you need?
C2: Does the information content meet your needs?
C3: Does the system provide reports that seem to be just about exactly what you need?
C4: Does the system provide sufficient information?

ACCURACY
A1: Is the system accurate?
A2: Are you satisfied with the accuracy of the system?

FORMAT
F1: Do you think the output is presented in a useful format?
F2: Is the information clear?

EASE OF USE
E1: Is the system user friendly?
E2: Is the system easy to use?

TIMELINESS
T1: Do you get the information you need in time?
T2: Does the system provide up-to-date information?

Figure 3. A Model for Measuring End-User Computing Satisfaction


Table 4. Correlation Matrix of Measures (n = 618)

      C1   C2   C3   C4   A1   A2   F1   F2   E1   E2   T1
C2   .72
C3   .68  .68
C4   .67  .66  .59
A1   .49  .49  .41  .55
A2   .48  .45  .41  .48  .82
F1   .52  .56  .56  .56  .42  .48
F2   .56  .55  .54  .55  .53  .57  .64
E1   .51  .51  .46  .41  .37  .39  .37  .44
E2   .52  .51  .47  .41  .39  .39  .43  .56  .75
T1   .53  .53  .47  .50  .53  .51  .43  .46  .46  .44
T2   .52  .51  .45  .55  .57  .54  .44  .48  .44  .37  .70

Item:      C1     C2     C3     C4     A1     A2     F1     F2     E1     E2     T1     T2
Mean:     3.891  3.972  3.862  4.037  4.297  4.207  4.099  4.286  3.964  4.080  4.096  4.247
Variance:  .920   .822  1.056   .799   .729   .754   .668   .660  1.238  1.028   .950   .853

Table 5. Scale Reliability and Criterion Validity by Nature and Type of Application

                                           Cronbach's Alpha      Correlation Between Criterion
                                           for 12-Item Scale     and 12-Item Scale
For All Applications                             .92                     .76*

Microcomputer Application?
  Yes (n = 147)                                  .91                     .64*
  No (n = 429)                                   .93                     .78*

Type of Application?
  Other (word processing) (n = 19)               .94                     .85*
  Monitor Applications (n = 112)                 .93                     .84*
  Exception Applications (n = 117)               .90                     .65*
  Inquiry Applications (n = 111)                 .92                     .68*
  Analysis Applications (n = 223)                .94                     .79*

End-User Developed Application?
  Yes (n = 236)                                  .91                     .72*
  No (n = 321)                                   .93                     .77*

* Significant at p < .000.


The instrument appears to have more than acceptable reliability and criterion-related validity for microcomputer and mainframe applications; for monitor, exception, inquiry, or analysis applications; and for end-user developed applications as well as those developed by more traditional methodology.

The reliability was consistently above .90 and showed little variation by nature and type of application. With a minimum standard of .80 suggested for basic research and .90 suggested for use in applied settings where important decisions will be made with respect to specific test scores (Nunnally, 1978), the instrument's reliability is adequate for both academic research and practice.

The correlations between the criterion question and the 12-item scale were consistently high (greater than .5) but, interestingly, showed more variation by nature and type of application. Mini or mainframe applications had a correlation of .78 with the criterion compared to .64 for microcomputer applications. Analysis (.79) and monitor (.84) applications had higher correlations with the criterion than exception (.65) or inquiry (.68) applications.

In summary, it is the opinion of the researchers that the instrument presented in this article represents substantial progress towards establishment of a standard instrument for measuring end-user satisfaction. The data support the construct and discriminant validity of the instrument. Furthermore, the instrument appears to have adequate reliability and criterion-related validity across a variety of applications. However, continuing efforts should be made to validate the instrument. The test-retest reliability of the instrument should be evaluated, and another large multi-organizational sample should be gathered to confirm factor structure and discriminant validity.

Exploring the Instrument's Practical and Theoretical Application

In this section, suggestions are made for practical application of the instrument. Then tentative standards for more precisely evaluating end-user applications are presented. Next, the usefulness of the instrument for developing and testing more precise research questions is illustrated by exploring some hypotheses concerning the relationship between end-user involvement in design and end-user satisfaction. Finally, suggestions for further research are discussed.

Practical application

This 12-item instrument may be utilized to evaluate end-user applications. In addition to an overall assessment, it can be used to compare end-user satisfaction with specific components (i.e., content, format, accuracy, ease of use, or timeliness) across applications. Although there may be reasons to add additional questions to evaluate unique features of certain end-user applications, this basic set of 12 items is general in nature, and experience indicates that it can be used for all types of applications. This provides a common framework for comparative analysis.

The sample data used in this study represent the major applications from 44 firms. This cross-organizational aspect of the sample makes it appropriate for the development of tentative standards. Percentile scores for the 12-item end-user computing satisfaction instrument are presented in Table 6. Other relevant sample statistics are: minimum = 16; maximum = 60; mean = 49.09; median = 51; and standard deviation = 8.302. These statistics may be useful in more precisely evaluating end-user satisfaction with a specific application.

Table 6. Percentile Scores, 12-Item Instrument

Percentile:  10   20   30   40   50   60   70   80   90
Value:       37   43   46   48   51   53   54   57   59
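As a hypothetical illustration of how a practitioner might apply these standards, the sketch below sums one application's twelve item scores and reads off the highest percentile cutoff the total meets; the cutoff values are those of Table 6.

```python
# Locate an application's 12-item total score against the tentative standards.
PERCENTILE_CUTOFFS = {10: 37, 20: 43, 30: 46, 40: 48, 50: 51, 60: 53, 70: 54, 80: 57, 90: 59}

def eucs_percentile(item_scores):
    """item_scores: the twelve 1-to-5 responses (C1-C4, A1, A2, F1, F2, E1, E2, T1, T2)."""
    total = sum(item_scores)
    reached = [p for p, cutoff in sorted(PERCENTILE_CUTOFFS.items()) if total >= cutoff]
    return total, (max(reached) if reached else 0)

# Example: eucs_percentile([4, 4, 5, 4, 5, 4, 4, 4, 3, 4, 4, 5]) returns (50, 40),
# i.e., a total of 50, at or above the 40th-percentile cutoff but below the median of 51.
```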

Theoretical application

In the development of this instrument, items which were not factorially pure were deleted. The five resultant components are relatively independent of each other. With such component measures, researchers may be able to achieve more precision in their research questions. Some components may be thought more closely associated with specific independent variables than others. The instrument provides a framework for formulating and testing such hypotheses.

User information satisfaction has been used extensively in studies of user involvement (Ives and Olson, 1984); however, these studies used general measures and did not explore research questions concerning the components of satisfaction. For example, satisfaction with accuracy and timeliness are affected by how the application is operated (i.e., the promptness and care in data entry). In contrast, design rather than operational issues may be the dominant factors affecting satisfaction with content, format, and ease of use. Thus, one might expect end-user involvement in design to be more closely associated with content, format, and ease of use than accuracy or timeliness. This suggests two sets of hypotheses: the first is general in nature and the second is more precise.

H1: User participation in design is positively correlated with end-user computing satisfaction and each of its components.

H2: User participation in design is more closely correlated with content, format, and ease of use than accuracy or timeliness.

These hypotheses are used to illustrate the usefulness of the end-user satisfaction instrument for examining such research questions.

To explore these hypotheses, the researchers developed an eight-item Likert-type scale for measuring user involvement in the end-user context. End users were asked about the amount of time they spent in specific design activities (e.g., initiating the project, determining information needs, developing output format, etc.). This instrument had a reliability (Cronbach's alpha) of .96.

The results depicted in Table 7 support the first set of hypotheses. End-user satisfaction and each of its components are significantly correlated with the end user's involvement in the design of the application.

To examine results for the second set of hypotheses, absolute differences between correlation coefficients were calculated (see Table 8). The results for ease of use do not support the second hypothesis. End-user involvement in design was less positively correlated with ease of use than accuracy or timeliness.

Table 7. Correlation Between End-User Involvement in Design and End-User Computing Satisfaction Constructs

                  End-User Involvement in Design
Overall EUCS               .32*
Content                    .30*
Accuracy                   .21*
Format                     .29*
Ease of use                .20*
Timeliness                 .25*

* Significant at p = .000.

The results for content and format, in contrast, partially support the second hypothesis. End-user involvement in design was more positively correlated with content and format than accuracy or timeliness. Using a test of the difference between correlation coefficients (Cohen and Cohen, 1975), two of these differences (content-accuracy and format-accuracy) were found to be significant at p < .05.

Table 8. Matrix of Difference in Correlations

               Accuracy   Timeliness
Content          .09*        .05
Format           .08*        .04
Ease of Use      .01         .05

* Significant at p = .05.
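Cohen and Cohen's test for the difference between dependent correlations is not reproduced here; as an assumed illustrative alternative that follows the same logic, a bootstrap over respondents can compare two of the Table 7 correlations directly. The variable names are hypothetical.

```python
# Bootstrap the difference between r(involvement, content) and r(involvement, accuracy).
import numpy as np

def bootstrap_corr_difference(involvement, content, accuracy, n_boot=5000, seed=0):
    """Return bootstrap draws of the difference between the two dependent correlations."""
    rng = np.random.default_rng(seed)
    involvement = np.asarray(involvement, dtype=float)
    content = np.asarray(content, dtype=float)
    accuracy = np.asarray(accuracy, dtype=float)
    n = involvement.size
    diffs = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)      # resample respondents with replacement
        diffs[b] = (np.corrcoef(involvement[idx], content[idx])[0, 1]
                    - np.corrcoef(involvement[idx], accuracy[idx])[0, 1])
    return diffs

# A 95 percent interval excluding zero would point in the same direction as the
# significance tests reported in Table 8:
# lo, hi = np.percentile(bootstrap_corr_difference(inv, cont, acc), [2.5, 97.5])
```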

The intent is not to test hypotheses per se or explain the results obtained, but rather to illustrate the usefulness of the end-user satisfaction instrument for developing and testing more precise research questions. The results suggest that some of the end-user satisfaction components derived by factor analysis may be more closely related to independent variables than others. In this illustration, end-user involvement in design was used as the independent variable. Future research efforts might focus on other independent variables such as end-user skill levels, EDP support policies, type of application, or the quality of user documentation.

Conclusions

This article presents significant progress towards the development of a standard measure of end-user satisfaction with a specific application. Designed for an end-user computing environment rather than traditional data processing, the instrument merges ease of use and information product items. Whether or not this instrument is chosen, the authors encourage the MIS research community to move towards a standard instrument for measuring end-user satisfaction which includes both information product and ease of use items.

The instrument appears to have adequate reliability and validity across a variety of applications. It is short, easy to use, and appropriate for both practical and research purposes. Standards are provided for use by practitioners. Its component factors are distinct, enabling researchers to develop and test more precise research questions.

The lack of adequate mechanisms to evaluate the effectiveness of end-user computing is evident. End-user satisfaction is only one of several relevant measures of end-user computing success. Additional work is needed to develop measures of the breadth of end-user computing in an organization (i.e., penetration) and the degree of sophistication (i.e., skill) of individual end users. Research on end-user computing's impact on efficiency, productivity, and competitive advantage would benefit from the availability of such measures.

References

Alloway, R.M. and Quillard, J.A. "User Managers' Systems Needs," MIS Quarterly (7:2), June 1983, pp. 27-41.

Bagozzi, R.P. "An Examination of the Validity of Two Models of Attitude," Multivariate Behavioral Research (16), 1981, pp. 323-359.

Bailey, J.E. and Pearson, S.W. "Development of a Tool for Measuring and Analyzing Computer User Satisfaction," Management Science (29:5), May 1983, pp. 530-545.

Barki, H. and Huff, S.L. "Change, Attitude to Change, and Decision Support System Success," Information and Management (9), 1985, pp. 261-268.

Baroudi, J.J., Olson, M.H., and Ives, B. "An Empirical Study of the Impact of User Involvement on System Usage and Information Satisfaction," Communications of the ACM (29:3), March 1986, pp. 232-238.

Benjamin, R.I. "Information Technology in the 1990's: A Long Range Planning Scenario," MIS Quarterly (6:2), June 1982, pp. 11-31.

Benson, D.H. "A Field Study of End-User Computing: Findings and Issues," MIS Quarterly (7:4), December 1983, pp. 35-45.

Branscomb, L.M. and Thomas, J.C. "Ease of Use: A System Design Challenge," IBM Systems Journal (23), 1984, pp. 224-235.

Campbell, D.T. and Fiske, D.W. "Convergent and Discriminant Validation by the Multitrait-Multimethod Matrix," Psychological Bulletin (56:1), 1959, pp. 81-105.

Canning, R.G. "Programming by End Users," EDP Analyzer (19:5), May 1981.

Cheney, P.H., Mann, R.I., and Amoroso, D.L. "Organizational Factors Affecting the Success of End-User Computing," The Journal of Management Information Systems (3:1), Summer 1986, pp. 65-80.

Cohen, J. and Cohen, P. Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences, Lawrence Erlbaum Assoc., Hillsdale, NJ, 1975.

Crandall, R.H. "Information Economics and Its Implications for the Further Development of Accounting Theory," The Accounting Review (44), 1969, pp. 457-466.

Davis, G.B. and Olson, M.H. Management Information Systems: Conceptual Foundations, Structure, and Development, McGraw-Hill Book Co., New York, 1985, pp. 532-533.

Debons, A., Ramage, W., and Orien, J. "Effectiveness Model of Productivity," in Research on Productivity Measurement Systems for Administrative Services: Computing and Information Services (2), L.F. Hanes and C.H. Kriebel (eds.), July 1978, NSF Grant APR-20546.

Fishbein, M. and Ajzen, I. Belief, Attitude, Intention and Behavior: An Introduction to Theory and Research, Addison-Wesley, Reading, MA, 1975.

Gallagher, C.A. "Perceptions of the Value of a Management Information System," Academy of Management Journal (17:1), 1974, pp. 46-55.

Galletta, D.F. "A Longitudinal View of an Office System Failure," SIGOA Bulletin (7:1), 1986, pp. 7-11.

Galletta, D.F. and Lederer, A.L. "Some Cautions on the Measurement of User Information Satisfaction," Graduate School of Business, The University of Pittsburgh, Working Paper WP-643, November 1986.

Goodwin, N.C. "Functionality and Usability," Communications of the ACM (30:3), March 1987, pp. 229-233.

Henderson, J.C. and Treacy, M.E. "Managing End-User Computing for Competitive Advantage," Sloan Management Review, Winter 1986, pp. 3-14.

Ives, B. and Olson, M. "User Involvement and MIS Success: A Review of Research," Management Science (30:5), 1984, pp. 586-603.

Ives, B., Olson, M., and Baroudi, J. "The Measurement of User Information Satisfaction," Communications of the ACM (26:10), October 1983, pp. 785-793.

Kerlinger, F.N. Foundations of Behavioral Research, McGraw-Hill, New York, 1978.

Lefkovits, H.C. "A Status Report on the Activities of the CODASYL End-User Facilities Committee (EUFC)," Information and Management (2), 1979, pp. 137-163.

Mahmood, M.A. and Becker, J.D. "Effect of Organizational Maturity on End-User Satisfaction with Information Systems," Journal of Management Information Systems (2:3), Winter 1985-86, pp. 37-64.

Martin, J. Application Development Without Programmers, Prentice Hall, Inc., Englewood Cliffs, NJ, 1982, pp. 102-106.

McLean, E.R. "End-Users as Application Developers," MIS Quarterly (3:4), December 1979, pp. 37-46.

Neumann, S. and Segev, E. "Evaluate Your Information System," Journal of Systems Management (31:3), March 1980, pp. 34-41.

Nolan, R. and Gibson, C.F. "Managing the Four Stages of EDP Growth," Harvard Business Review (52:1), January/February 1974, pp. 76-88.

Nolan, R. and Seward, H. "Measuring User Satisfaction to Evaluate Information Systems," in Managing the Data Resource Function, R.L. Nolan (ed.), West Publishing Co., Los Angeles, 1974.

Nunnally, J.C. Psychometric Theory, McGraw-Hill, New York, 1978, p. 245.

Raymond, L. "Organizational Characteristics and MIS Success in the Context of Small Business," MIS Quarterly (9:1), 1985, pp. 37-52.

Rockart, J.F. and Flannery, L.S. "The Management of End User Computing," Communications of the ACM (26:10), October 1983, pp. 776-784.

Schoenfeldt, L.F. "Psychometric Properties of Organizational Research Instruments," in Methods and Analysis in Organizational Research, T.S. Bateman and G.R. Ferris (eds.), Reston Publishing Co., Reston, VA, 1984, pp. 68-80.

Sondheimer, N. and Relles, N. "Human Factors and User Assistance in Interactive Computing Systems: An Introduction," IEEE Transactions on Systems, Man, and Cybernetics (SMC-12:2), March-April 1982, pp. 102-107.

Sprague, R.H. "A Framework for the Development of Decision Support Systems," MIS Quarterly (4:4), 1980, pp. 1-26.

Swanson, E.B. "Management Information Systems: Appreciation and Involvement," Management Science (21:2), October 1974, pp. 178-188.

Treacy, M.E. "An Empirical Examination of a Causal Model of User Information Satisfaction," Center for Information Systems Research, Sloan School of Management, Massachusetts Institute of Technology, April 1985.

Weiss, D.J. "Factor Analysis in Counseling Research," Journal of Counseling Psychology (17), 1970, pp. 477-485.

Yavelberg, I.S. "Human Performance Engineering Considerations For Very Large Computer-Based Systems: The End User," The Bell System Technical Journal (61:5), May-June 1982, pp. 765-797.

About the Authors

William J. Doll is a professor of MIS and strategic management at The University of Toledo and serves as a management consultant for area companies. The author of many articles in academic and professional journals including the Academy of Management Journal, Communications of the ACM, MIS Quarterly, Information & Management, and the Journal of Systems Management, Dr. Doll has a doctoral degree in business administration from Kent State University and has worked as a senior management systems analyst on the corporate staff of Burroughs Corporation.

G. Torkzadeh is an assistant professor of information systems in the Operations Management Department at The University of Toledo. He holds a Ph.D. in operations research from The University of Lancaster, England, and is a member of the O.R. Society of Great Britain, TIMS, DSI, ACM, and SIM. He has been involved in research programs pertaining to the application of O.R. (in the public sector), distribution resource allocation/re-allocation, and mathematical modelling, and has published in the Journal of the Operational Research Society, Communications of the ACM, and Information & Management. One of his current research interests is the management of the information systems function.


Appendix
Measures of End-User Computing Satisfaction

Forty Items Used in Pilot Study

1. Is the system flexible?
2. Does the system provide out-of-date information?
3. Is it easy to correct the errors?
4. Do you enjoy using the system?
5. Do you think the output is presented in a useful format?
6. Is the system difficult to operate?
7. Are you satisfied with the accuracy of the system?
8. Is the information clear?
9. Are you happy with the layout of the output?
10. Is the system accurate?
11. Does the system provide sufficient information?
12. Does the system provide up-to-date information?
13. Do you trust the information provided by the system?
14. Do you get the information you need in time?
15. Do you find the output relevant?
16. Do you feel the output is reliable?
17. Does the system provide too much information?
18. Do you find the information up-to-date?
19. Does the system provide reports that seem to be just about exactly what you need?
20. Is the system successful?*
21. Is the system easy to use?
22. Is the system user friendly?
23. Are the reports complete?
24. Does the system provide the precise information you need?
25. Is the system efficient?
26. Is the output easy to understand?
27. Is the system troublesome?
28. Is the system convenient?
29. Is the system difficult to interact with?
30. Does the system provide comprehensive information?
31. Do you think the system is reliable?
32. Would you like more concise output?
33. Does the information content meet your needs?
34. Does the information you receive require correction?
35. Do you find the system dependable?
36. Would you like the system to be modified or redesigned?
37. Do you think the reports you receive are somewhat out-of-date?
38. Are you satisfied with the system?*
39. Would you like the format modified?
40. Do you get information fast enough?

* Criterion question.
