
Usage of Business Intelligence Testing the Technology Acceptance Model on a BI System

Pär Arvidsson & Dennis Pettersson

Uppsala University

Department of Business Studies

Master Thesis

2012-05-25

Supervisor: Anna-Karin Stockenstrand

A B S T R A C T

Business Intelligence (BI) has become an essential part of the modern enterprise, and what used to be thought of as a luxury is now a matter of survival. Recent economic developments have forced companies to rethink their IT investment strategies. BI investments now target the majority of people in the organisation instead of a select few. Thus, it is important to understand why users of a BI system choose to accept and use the system. Previous research has established the Technology Acceptance Model (TAM) as one of the most powerful and parsimonious models explaining user acceptance and usage behaviour of information technologies. This quantitative study replicates the original TAM study with the purpose of increasing the understanding of BI usage, and investigates the behaviour of the users of the BI system QlikView in the case company GE Healthcare. The results showed a lower explanatory power for the model when compared to previous research. This indicates that how useful a user perceives a BI system to be does not affect the amount of usage to the same extent as predicted by TAM. Possible causes for this are discussed, with an emphasis on the influence of what tasks a user is confronted with and the measurement of system use.

Keywords: Technology Acceptance Model, TAM, Business Intelligence, BI, system use, perception of usefulness, perception of ease of use

A C K N O W L E D G E M E N T S

Writing a thesis is never a solitary enterprise. We would like to thank our thesis supervisor, Anna-Karin Stockenstrand, for excellent input and encouragement during the entire journey. We are very grateful to James Sallis for great input and advice concerning statistical methods. A huge thank you goes out to everyone involved at GE Healthcare; without them this entire endeavour would not have happened. We also thank our fellow students in our seminar group, and everyone else not mentioned who knows that they mattered in one way or another in making this thesis a reality.

Uppsala, 25th May 2012

Pär Arvidsson Dennis Pettersson

Table of Contents

INTRODUCTION

BACKGROUND
  BUSINESS INTELLIGENCE
  BI ARCHITECTURE
  BI FOR THE MASSES

THEORETICAL FRAMEWORK
  THE CONCEPT OF ACCEPTANCE
  THE TECHNOLOGY ACCEPTANCE MODEL
  ACADEMIC DISCOURSE ON TAM

METHOD
  RESEARCH MODEL
  RESEARCH SETTING
  RESEARCH STRATEGY AND DESIGN
  MEASURES
  DATA PROCESSING

RESULT

DISCUSSION

CONCLUSION

BIBLIOGRAPHY

APPENDIX I


Introduction

The nature of the business environment of contemporary organizations is rapidly changing (Lönnqvist & Pirttimäki, 2006; Gangadharan & Swami, 2004; Marjanovic, 2007). Information technology (IT) has, since its inception, transformed business and business operations (Popovic et al., 2010), and information itself has become an increasingly important resource in today's knowledge-based economy (Briciu et al., 2009). As observed by Lönnqvist & Pirttimäki (2006, p. 32), "the need for timely and effective business information is recognized as essential for organizations not only to succeed, but even to survive". It is therefore important for organizations to be able to effectively acquire, process and analyse information in order to support decision making (Olszak & Ziemba, 2007). An organization doing this effectively should be able to gain a competitive edge, as the analysed information gives insight into the current competitive situation, enabling learning from the past and forecasting the future (Marjanovic, 2007). This process of collecting, processing and analysing information is often referred to as Business Intelligence (BI) (Okkonen et al., 2002).

BI as a concept seems to have as many definitions as there are users. Business professionals and academics have their own definition of the term depending on their own agenda (Gibson et al., 2004; Popovic et al., 2010). This paper uses the definition described by Lönnqvist & Pirttimäki (2006) as it captures not only the technological aspects but also incorporates a process-oriented view of the matter:

“An organized and systematic process by which organizations acquire, analyze and disseminate information from both internal and external information sources significant for their business activities and for decision making.” (Lönnqvist & Pirttimäki, 2006, p. 32)

BI is not a new phenomenon. The first systems for decision support were designed in the early 1970s (Watson & Wixom, 2007). However, it was in the 1990s, when business computing became more or less ubiquitous, that BI became an important topic for companies (Ortiz, 2010). Today, BI is a top priority among Chief Information Officers (CIOs), and thus it is an area that receives much attention and resources in many companies (Watson & Wixom, 2007; Yeoh & Koronios, 2010). However, due to the recent economic downturn, mostly spurred on by the sovereign debt crisis in Europe, IT departments are being scrutinized and forced to become more efficient (Imhoff & White, 2011). This development has led companies on a quest for alternative ways of increasing the value of their BI initiatives, in other words: to do more with less. Within this endeavour, companies are increasingly striving to achieve pervasive BI, i.e. BI provided to the vast majority of the organization and not only to a select few (Ortiz, 2010; Imhoff & White, 2011). Gone are the days when only upper management or certain departments could gain the benefits of BI - instead, at least theoretically, everyone in the organization can now use BI as a decision making tool (Negash & Gray, 2008).

A key factor for this to become a reality is to make the users of BI more or less independent of the IT department. This means that BI must take on a self-service character: users must be able to use the tool by themselves, adapted to their informational needs, and be able to customize the tool accordingly (Watson & Wixom, 2007). This is easier said than done. According to Eckerson (2008), "the penetration of active BI users in organizations is only 24%" (p. 4). Thus, user acceptance of the system is a critical success factor. Without active users, the BI system is not used and thus cannot be called pervasive. If it is not pervasive, the "do more with less" dimension is void, and the value of BI is not increased. In sum, the more use one can get out of a BI system, the more potential value can be gained from it. This is not unique to BI systems, but applies universally to investments in IT. The major underlying factor behind the "productivity paradox", referring to lacklustre returns from IT investments in spite of technological advances, is low system use (Venkatesh & Davis, 2000).

One of the most important, most used and most tested theories of user acceptance of information systems is the Technology Acceptance Model (TAM) (King & He, 2006; Yousafzai et al., 2007). It has been both praised and criticised for its parsimonious nature and simplicity (Chuttur, 2009). In short, the model proposes that system use (which corresponds to user acceptance) is predicted by user motivation, which in turn is influenced mainly by two constructs: perceived usefulness and perceived ease of use (Chuttur, 2009). Perceived usefulness is defined as "the degree to which a person believes that using a particular system would enhance his or her job performance" (Davis, 1989, p. 320). Perceived ease of use is defined as "the degree to which a person believes that using a particular system would be free of effort" (Davis, 1989, p. 320). The two individual beliefs mediate the influence of other external variables on user acceptance (Zhang et al., 2008).

Meta-analyses of previous TAM research have identified several gaps (Yousafzai et al., 2007; Legris et al., 2003; Schepers & Wetzels, 2007; King & He, 2006; Wu et al., 2011). First, almost all previous studies rely on self-reported usage of the system, which is in itself a subjective measure of usage. Using an objective measure of usage would rule out reporting biases on account of selective recall (Davis et al., 1992) and inaccurate estimation (Collopy, 1996), and would remove methodological problems such as common-method bias, hypothesis guessing and indistinguishable causation that might result from using self-reported measures (Straub et al., 1995; Szajna, 1996). Therefore, it is of great benefit to investigate whether TAM also holds true for actual system use, in contrast to self-reported system use.

Second, most previous studies have focused on general information systems (e.g. office automation software) and not so much on business process applications (Legris et al., 2003). An extensive literature search and the meta-analyses indicate that TAM has not previously been applied to BI systems to any great extent. It would therefore be valuable to investigate whether TAM holds true for a BI system, and the reasons for any similarities or differences in results compared to other studies.

Third, previous studies have not been consistent in what variables and constructs they study (Wu et al., 2011; Yousafzai et al., 2007; Legris et al., 2003). There exist many versions of TAM, so this development is not entirely surprising. A few studies have included the degree to which a person feels that using a system is voluntary (Venkatesh, 2000; Agarwal & Prasad, 1997). The studied population has also varied: most studies have, in fact, favoured studying students, often for reasons of practicality. This is not necessarily a critical problem, as the parsimonious nature of TAM and its predictive power allow it to be applied in a variety of settings.


Thus, the purpose of this study is to increase the understanding of what drives usage of Business Intelligence systems by testing the Technology Acceptance Model on a Business Intelligence system within one organisation, using both computer-recorded and self-reported usage data. The research questions are the following:

• To what extent can BI usage be explained by TAM?
• How important is a user's perception of the BI system to his or her level of usage?
• Are there any unique qualities in a BI system that could help explain the usage of the system?


Background

Business Intelligence

Different types of decision support systems have been around since the early 1970s, but it was not until the 1990s, when business computing became more or less ubiquitous, that the use and application of these systems really took off (Watson & Wixom, 2007). Since the 1990s, the term Business Intelligence (BI) has been somewhat of a buzzword within IT and business, and much has been written on the topic. According to Negash (2004, p. 178), BI systems are data systems that "combine data gathering, data storage and knowledge management with analytical tools to present complex internal and competitive information to planners and decision makers". Furthermore, Negash (2004) states that the purpose of BI is to deliver information at the right time, at the right location and in the right form to assist decision makers.

Lönnqvist & Pirttimäki (2006, p. 32) describe the term BI as “a managerial philosophy as well as a tool used to help organisations manage and refine business information with the objective of making more informed business decisions.” It is clear, according to Lönnqvist & Pirttimäki (2006), that BI is a term for not only gathering data, but also making sense of this data and spreading it to decision makers in the organisation in order for them to make better and more informed decisions. Moss & Atre (2003) emphasize that BI is neither a product nor a system, but rather a complete architecture built up of several applications supported by a range of databases with the purpose of providing the business community with easy access to business data.

These are but a sample of the various definitions of BI that have been mentioned in previous literature. The definition adopted by Lönnqvist & Pirttimäki (2006) captures the essence of BI and the fact that it is more than just a technical solution. Popovic et al. (2010) note that the problem with most BI definitions is that they do not incorporate any human aspects in the BI process, but focus excessively on technical aspects such as system architecture and software. They emphasize that BI is nothing without people to interpret the meaning of the information supplied and to make decisions based on this information. This is the main reason why the definition by Lönnqvist & Pirttimäki (2006) is utilized in this paper.

As discussed above, there are many definitions of the term BI, all with subtle differences. What remains the same in all of these definitions, however, are the technical components that the term BI comprises.

BI Architecture

A BI system needs to incorporate two primary activities: "getting data in and getting data out" (Watson & Wixom, 2007, p. 96). In between these two activities there are a number of processes that interrelate and work in a sequential fashion. These processes encompass data that is collected, managed and reported, as well as analytics, where data is used to "analyze, forecast, predict, optimize and so on" (Davenport & Harris, 2007, p. 155). A technical architecture is required to accomplish this, in which different technical components each perform one function and communicate with the other components. Various components might be shared with other technical architectures (e.g. ERP systems); thus a BI architecture is a subset of the overall IT structure within an organization (Kudyba & Hoptroff, 2001). A general BI architecture is presented in figure 1.


Figure 1: General BI Architecture, adapted from Davenport & Harris (2007)

Data Management. Davenport & Harris (2007) state that the collection and management of data is a concern of most IT systems. There are few transactions and exchanges within a company that do not get recorded in one way or another. Often, BI systems rely on the data collected by other systems (e.g. ERP and CRM systems). There is, however, such a thing as collecting too much, according to Davenport & Harris (2007), and it is advisable to be wary of the danger of data proliferation. The data should be relevant and of value to whatever informational needs the BI system is supposed to fulfil.

Transformation Tools and Processes. For data to become usable and accessible, it must go through the process of ETL (extract, transform and load), according to Davenport & Harris (2007). The easy part is extracting the data and loading it into data repositories. The tricky part is cleaning up and transforming the data into a decision-ready state that conforms to universal business rules set up by the organization. Also, the data must be standardized so that business concepts have the same meaning for everyone in the company.

Repositories. The data is stored in either a data warehouse, containing data from many different sources, or a data mart, a separate specialized portion of a data warehouse (Negash & Gray, 2008). The metadata repository contains “data about the data”, which entails the definitions of the data and how it should be used. As indicated by figure 1, metadata is present throughout the whole architecture.

Analytical Tools and Applications. At this stage the data is collected in repositories, ready to be used for analysis (Kudyba & Hoptroff, 2001). There are multiple technologies for that purpose, according to the authors. OLAP, or online analytical processing, involves "aggregating large volumes of data in a cube which can be accessed by information users in a user friendly manner" (Kudyba & Hoptroff, 2001, p. 6). Data mining goes a step further by incorporating statistical techniques in an attempt to discover and identify relationships between variables.

Presentation Tools and Applications. Davenport & Harris (2007) explain that in order to effectively make use of the potential benefit that can be gained from a BI system, there must be some kind of tool in place that lets the user interact with the system in an easy manner. It should allow the user to create ad-hoc reports, to visualize complex data and relationships, to collaborate and share data with others and to be notified of exceptions and other things of importance.

Operational Processes. The overarching operational processes detail how the tools within the BI architecture are supposed to be used, i.e. "how the organization creates, manages, and maintains data and applications" (Davenport & Harris, 2007, p. 173). Standards and policies must be in place in order to ensure and enforce reliability, scalability and security. This relates to the earlier discussion of BI as a managerial philosophy: there are greater concerns than using various technical applications in isolation, and business processes are seldom detached from each other but rather overlapping and interrelated.

BI for the masses

In today's economic environment companies must use BI in order to make smarter and faster decisions (Imhoff & White, 2011). This is what gives companies their competitive edge and allows them to quickly adapt to the market and the rapidly changing business environment. Making use of a BI system is becoming less of a luxury and more of a matter of survival for a successful enterprise (Chaudhuri et al., 2011). Previously, organisations have been dependent on their IT departments to create and provide the relevant applications that the organizations need (Imhoff & White, 2011; Eckerson, 2008). Having the IT department as an intermediary has allowed companies to control the created applications and to make sure that metrics and information used throughout the entire organization are standardized and of the same structure. However, this has also put an increasing load on the IT departments as the need for information in organisations keeps growing. Today, according to Eckerson (2008), many IT departments find themselves inundated with requests from the rest of the organization; thus there is a strong need to transform BI into more of a self-service technology.

Providing BI for the broad mass of people in the organization has come to be known by many names, such as "Self-Service BI", "Pervasive BI", and "BI for the masses", according to Eckerson (2008). He states that all of these terms encompass the idea of spreading the use of BI throughout the organisation, making BI use more self-sufficient, and delegating the task of application creation and maintenance from the IT department to empowered end-users. According to the advocates of this approach, this will bring certain benefits, the most important being that information reaches decision makers at a greater velocity (Imhoff & White, 2011). Measuring the benefits of BI systems, or any IT system for that matter, is not a trivial, self-evident task (Lönnqvist & Pirttimäki, 2006). The "productivity paradox", as explained by Venkatesh & Davis (2000), has long plagued IT investments - if an IT project has not failed miserably to begin with, it often does not give the returns that the investor was expecting. The major culprit is often that the system is not used to the degree that was expected or for its intended purpose. However, if a BI system is used as intended, the expected consequence of improved decision making should follow. And if the BI system is applied on a wide scale, then the benefits should increase proportionally to the usage.

The diffusion of BI in the organisation will, without doubt, also benefit the IT department by relieving it of much of its workload and letting it focus on more value-adding activities (Imhoff & White, 2011; Eckerson, 2008). However, it also means increasing demands on the end-users. A self-service BI system that is supposed to cater to a wide audience must be of a more generic nature than one designed for a specific role, function or area. Imhoff & White (2011) point out that the user must learn such a system in enough detail to get the benefit and use out of it according to his or her informational need. Eckerson (2008) states that the greater the user responsibility, the more critical it becomes to design a system that is intuitive to use and to make sure that the users actually use the system. After all, a pervasive BI system must be used on a large scale or it cannot rightfully be called pervasive. Therefore, understanding the intricacies of user acceptance regarding BI systems is crucial.


Theoretical Framework

The Concept of Acceptance

The success of any information technology or system is highly, if not entirely, determined by user acceptance. All the potential benefits of a system, be it in the form of increased productivity, performance or anything else that affects an organization positively, do not matter if the users ultimately end up rejecting the system (Davis, 1993). Thus, understanding the concept of acceptance and the drivers behind it is of interest not only to academics but also to practitioners designing and implementing new systems, as these often require large investments with less than certain outcomes (Venkatesh & Davis, 2000). In this paper, the concept of acceptance refers to a user's willingness to use a system according to its purpose, as defined by Schwarz & Chin (2007).

There are many ways to study user acceptance, but the most widely used model that specifically targets IT and IT systems is the Technology Acceptance Model (TAM) constructed by Davis (1986). In the model, Davis (1989) connects the concept of acceptance with system use, arguing that they are essentially the same within the context of IT: if a system is used, it must have been accepted by the user, and if a system is accepted, it is more likely to be used than not. This makes sense conceptually, especially in the light of research indicating that system usage is "the primary variable through which IT affects white collar performance" (Straub et al., 1995, p. 1328).

The Technology Acceptance Model

The Technology Acceptance Model (TAM), as presented by Davis (1989), has been recognized as a powerful and parsimonious model for explaining user acceptance of information technology. The purpose of the model is to explain acceptance of information technology across a wide range of settings and applications while at the same time being theoretically well founded (Davis, 1989). The model was developed as an adaptation of the Theory of Reasoned Action (TRA) (Ajzen & Fishbein, 1980), and several other strands of theory, such as self-efficacy theory, the cost-benefit paradigm and research on the adoption of innovations, lend support to the theoretical underpinnings of the model (Davis, 1989).

The model describes the relationship between two belief constructs, perceived usefulness (PU) and perceived ease of use (PEU), and user attitudes, intentions and usage of a system (Straub et al., 1995). The two belief constructs, PU and PEU, are theorized to be the fundamental determinants of user acceptance, as they capture and mediate the effects of other external variables on system use. PU is defined as "the degree to which a person believes that using a particular system would enhance his or her job performance" (Davis, 1989, p. 320). PEU is defined as "the degree to which a person believes that using a particular system would be free of effort" (Davis, 1989, p. 320). Actual system use is determined by a person's behavioural intention, which mediates PU and PEU. The original model also included a person's attitude towards using the system, but this variable was later discarded because it did not fully mediate the impact of the belief constructs (PU and PEU) on behavioural intention, which is explained by "people intending to use a technology because it was useful even though they did not have a positive affect (attitude) toward using" (Venkatesh, 2000, p. 343).
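Expressed schematically, the relationships of the final model can be written as follows. The regression-style notation below is an editorial paraphrase for clarity, not Davis's own formulation; BI denotes behavioural intention, and the β and γ are path coefficients:

```latex
\begin{aligned}
\text{PU}  &= \gamma\,\text{External Variables} + \beta_{1}\,\text{PEU} + \varepsilon_{1} \\
\text{BI}  &= \beta_{2}\,\text{PU} + \beta_{3}\,\text{PEU} + \varepsilon_{2} \\
\text{Use} &= \beta_{4}\,\text{BI} + \varepsilon_{3}
\end{aligned}
```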


Davis (1989) found a difference in the linkages between PU, PEU and behavioural intention: the PU-intention link was considerably stronger, and PEU appeared to be not only parallel to PU but also an antecedent of it. This makes sense conceptually. If a system is not useful it does not matter how easy it is to use, but if it is useful then a user can learn to live with the hardship of a more difficult-to-use system (Davis, 1989). Thus, the more useful a user perceives a system to be, the more he or she will use it. Ease of use also affects usage positively, but to a lesser extent than PU. If a system is easy to use, it can make the system appear more useful in the eyes of the beholder. The revised, final model can be seen in figure 2.

Figure 2: Technology Acceptance Model, adapted from Davis & Venkatesh (1996)

Academic Discourse on TAM

TAM has proven to be a robust predictor of usage behaviour in several studies conducted after its inception. Adams et al. (1992), for example, tested the model in two specific settings. In the first study they surveyed employees from 10 different organisations and at different levels of those organisations. In the second study, they tested TAM in a student setting, distributing a survey to MBA students at a university. They found that the model was valid in both settings, that PU and PEU were important predictors of usage, and that the causal relationships indicated by TAM were significant in both settings. In a comparison between alternative models on the topic of usage, Mathieson (1991) found that TAM was the model that could best predict future use of a spreadsheet program among the tested models. However, research on TAM has not been conclusive. Ma & Liu (2004) note that several studies have not been able to find a significant relationship between PEU and system use. According to Legris et al. (2003), the results of TAM studies have been neither totally consistent nor clear. Other researchers have criticized methodological and theoretical assumptions that, according to them, are flawed (Burton-Jones & Straub, 2006; Straub et al., 1995; Doll & Torkzadeh, 1998; Sun & Zhang, 2006).

The Concept of Usage. Some researchers have taken issue with defining user acceptance and usage as essentially the same thing (Burton-Jones & Straub, 2006; Doll & Torkzadeh, 1998), going so far as to say that the system usage construct has been operationalized in MIS research without an adequate definition to begin with, and that the construct itself should face more theoretical scrutiny in order to ensure that the conclusions drawn from previous research are actually correct. In most previous TAM studies, usage has been measured in somewhat simplified terms, according to Burton-Jones & Straub (2006). They claim the construct to be more complex than what previous studies have assumed, and that there is a need to conceptually define the term and to develop ways of measuring usage that include more dimensions. Burton-Jones & Straub (2006) present a framework for arriving at a usage measure that takes the user, system and task into account, and that would produce a more complex, and perhaps also more just, measure of system use. This approach has in turn received criticism: since TAM is supposed to be parsimonious, the construct PU should capture all aspects of task and usefulness, and there should therefore be no need to separate task from the construct itself (Dishaw & Strong, 1999). Doll & Torkzadeh (1998) also discuss the intricacies of the usage construct. They are of the same opinion as Burton-Jones & Straub (2006), namely that there is a need to further study usage and to measure it in a more complex way, taking more aspects than logged hours on a system into account. They also argue that IT systems nowadays have different purposes than they had before, and that as a consequence there is a need to reconsider the concept of usage. According to Yousafzai et al. (2007), a revised measure of usage would be especially valuable for technologies where the frequency of usage is not important for establishing user acceptance.

Self-reported vs. Actual Use. One specific issue that has been raised with TAM research concerns the measurement of system usage: how use is measured and where researchers obtain that data. According to Yousafzai et al. (2007), the majority of previous studies have used self-reported (subjective) usage metrics, whereas only a few have utilized actual (objective) system usage metrics, i.e. usage recorded by the system itself. Straub et al. (1995) identify that different studies use different measures of system usage, and they therefore question the findings of these studies. In their study, Straub et al. (1995) set out to address both conceptual and methodological issues related to the measurement of system usage. Contrary to what one might expect, the authors find that the constructs of self-reported use and actual use do not appear to be strongly linked. They conclude that the belief constructs in TAM (PU and PEU) have only weak links to objective measures of system use. The reason why self-reported and actual system use differ is not entirely clear. The authors discuss several possible explanations, such as different biases and cognitive processes that make people bad estimators of their own usage. Other possible causes are measurement issues, for example that the processes for measuring actual usage might be faulty. They suggest there might be a need to reconceptualise the usage construct in TAM, and that future research should look into viable alternative measures and investigate what effects their results have on the accuracy of TAM and previous TAM research (Straub et al., 1995). Legris et al. (2003) also point this out and identify it as an area where researchers should focus more of their attention. According to them, people are by nature bad at estimating, which leads to self-reported measures being completely different from actual computer-recorded measures of system use. Since most studies have employed self-reported measures, they thereby question the relationships found in prior research.

Explanatory Power. Although many studies have found the model to be reliable and a powerful predictor of system use, there are those who are not convinced of the power of the model and who argue that there is a complexity inherent to system use that the model is not able to explain. Sun & Zhang (2006) analysed many previous TAM studies, looking at the explanatory power found in each study as well as the significance of the relationships between the constructs in TAM. They found that explanatory power overall was inconsistent and relatively low. They observed several interesting phenomena. First, a significant difference was found when comparing the explanatory power of field studies and experiments. Most previous research has been carried out in controlled experiments, often using students as subjects. These experiments have yielded higher degrees of explanatory power than field studies set in a professional environment. Therefore, Sun & Zhang (2006) call for the inclusion of other factors reflecting the complexities of the real-world setting.


Second, inconsistencies were found regarding the construct PEU. Some studies did not find any significant relationship between PEU and system use, whereas some did. This indicates there might be other factors that the original model does not take into account, which cause these differences in outcome. Sun & Zhang (2006) list ten variables that they propose could be factors moderating the relationships in the model. These factors are split into three categories: Organizational, Technology and Individual. The technology category covers aspects of the technology itself, such as the purpose of the technology, the complexity of the system and what productivity focus the system has. The individual category contains factors pertaining to the characteristics of the user, such as gender, age, experience, cultural background and intellectual capacity. The organizational category encompasses whether the system use is voluntary or not and the characteristics of task/profession.

Dishaw & Strong (1999) point out the need to take task characteristics in particular into consideration when evaluating system use. They argue that IT is a tool by which users accomplish organisational tasks, and that the failure to sufficiently take this into account in previous studies has led to mixed results in IT evaluations. More explicit inclusion of task characteristics would, according to Dishaw & Strong (1999), lead to a better model of IT utilization, even though the construct PU in TAM should be able to capture task characteristics implicitly. PU should theoretically capture the external variable task, as "the system needs to be useful for something in order to be considered useful at all" (Dishaw & Strong, 1999, p. 11). According to Dishaw & Strong (1999), this is not always the case, which is evident from the inconsistent results of prior studies, and they therefore propose a combination of the Task-Technology Fit (TTF) model and TAM, two models that seemingly overlap in their theoretical reasoning. TTF posits that technology will only be used if it has the ability to support the tasks that the organisation requires from it. Its ability to do so is summarized in the construct task-technology fit, which in the model has a direct effect on system use (see figure 3). This task-specific model has the advantage that it explicitly addresses the impact of the task and the characteristics of the technology used. However, it does not address the perception and attitude aspects that are part of TAM. Dishaw & Strong (1999) therefore argue for combining the two models, which would result in a more explicit task and technology focus while still retaining the psychometric qualities of TAM. Critics have pointed out that some users still use a technology they do not necessarily approve of just because it increases their job performance. Extending TAM with the task-technology fit construct would help explain this kind of behaviour, as some of the system use would actually stem from something other than a user's attitude towards the system. Their study showed a higher explanatory power for the hybrid model compared to TAM and TTF in isolation.

Even though TAM has its flaws according to some critics, it is still very much an accepted and established model, widely used within IS research concerning technology acceptance and usage. It is a good point of departure for investigating user behaviour of an IT system, which is the main reason why it was utilized in this study. Extensions to TAM (e.g. TAM2, UTAUT) have not been academically scrutinized nearly as much as the original, which hurts their credibility. The purpose of this study was to increase the understanding of system use in a BI setting, and not to further refine TAM. The extensions to TAM are extensions, and not necessarily refined versions that completely replace the original model.


Method

Research Model

The original TAM was to be tested on a BI system using two different dependent variables, self-reported and actual use. In line with previous studies, this was accomplished by performing multiple linear regression on the relationships presented in the model, i.e. Usage = PU + PEU. The constructs of attitude and behavioural intention were not included in the research model. Attitude towards a system, aside from being discarded by Davis (1989) when revising his model, is shaped in the early stages of a system's implementation, according to Straub et al. (1995), thus becoming of little importance when studying a mature system (as is the case in this paper, see below). The main purpose of behavioural intention is to predict future use (Ajzen & Fishbein, 1980), and according to Moore & Benbasat (1991) the variable can be dropped from the model if no other variables intervene once an attitude towards the system has been formed. There are no indications, theoretical or practical, that this would be the case. Consistent with prior research (Davis, 1989; Straub et al., 1995), these two constructs were not included in the research model.
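For readers who want to replicate the analysis, a minimal sketch of the two regressions is given below. The thesis used IBM SPSS Statistics; this Python version only illustrates the same model, and the file and column names (SRU, AU, PU, PEU) are hypothetical:

```python
# Sketch of the research model: regress each usage measure on PU and PEU.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_with_usage.csv")  # hypothetical: one row per respondent

for dv in ["SRU", "AU"]:  # self-reported use and actual use as dependent variables
    model = smf.ols(f"{dv} ~ PU + PEU", data=df).fit()
    print(model.summary())  # coefficients, p-values and adjusted R-squared
```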

Research Setting

GE Healthcare was chosen as the case company since the organization had implemented a self-service BI system called QlikView in 2009, which ensured a substantial user base. In addition, the authors had contacts within the organization, which facilitated the research and the procurement of data, as well as ready access to the system itself. The company is a unit of General Electric, with more than 46,000 employees, working within the field of medical technologies and services (General Electric, 2011).

QlikView is a Business Intelligence tool developed by the Swedish company QlikTech. The system provides a user-friendly interface and allows for collaborative work, as the data need not be stored locally but can be shared via cloud computing. The data, which can come from multiple sources such as ERP and CRM systems, can be analysed and visualized in various ways. QlikView allows for application creation within the system, and the application creator can easily combine datasets from various sources (QlikTech, 2012). The user accesses QlikView either through a portal site on the web or via a desktop client that can be downloaded to the user's computer. All users access the same application and the same data via the web, which allows them to collaborate in real time via the application, for example by sharing graphs, testing scenarios and exploring the data collaboratively (QlikTech, 2011). Two generic application examples demonstrating the user interface can be seen in figure 3.

Within QlikView, different applications can be created and tailored to the needs of the users. One example of an application used within GE Healthcare is a sales application. The user logs on to QlikView via a web portal and then accesses information on year-to-date sales per region, department or product. This allows the user to easily access information, drill down into details and discover patterns or trends in the information. Other applications within GE Healthcare include Inventory Analysis and Supply Chain Management. In total there are around 40 individual applications available.


Figure 3: Screenshots of QlikView (QlikTech, 2012)

QlikView is classified as a self-service BI system with a focus on the end-user and user friendliness. It was chosen as the case system in this study because it represents an early step towards BI for the masses and has been used for quite some time in the case company. There is already a substantial user base in place, thus enabling the possibility of studying acceptance and usage of a BI tool used by a broad mass of people.

Research Strategy and Design

A survey strategy was adopted in this paper for several reasons. First, the aim of the paper is to investigate user perception of a particular system and how that perception translates into system use. The purpose is therefore of an explanatory nature, for which, according to Saunders et al. (2009), a survey strategy is often the most suitable, in particular if a model is to be tested wherein its variables are to be identified and correlated with each other. Second, a large amount of data was required in order to test the model and draw relevant conclusions, and thus data collection through interviews or observation was not deemed feasible. Third, most prior TAM research utilized surveys, which further contributed to the choice of using a survey in this study.

A team within GE Healthcare, specialized in BI, was contacted. The team assisted by providing access to a tool used for surveying the user base of different IT applications. This gave access to complete user lists of the system, so that a) every user of QlikView could receive a survey, and b) complete actual usage statistics for each user could be obtained. The questions in the survey were not of a sensitive character and the survey was sent using the organization's internal survey system, which should lessen the risk of users feeling intimidated and afraid to answer truthfully. Using the internal survey system most likely increased the response rate and decreased response bias, as the users are accustomed to receiving surveys concerning different IT applications.

The survey questions were entirely adopted from previous TAM research. This increases the likelihood of the questions being correctly phrased, as they have been proven credible previously, with high reliability and validity (Davis, 1989; Venkatesh & Davis, 2000). The first draft of the survey was sent to an expert in quantitative research and modified according to the feedback. The revised survey was sent to experts within GE Healthcare, who provided input on the questions, on how they had understood them, and on points of improvement. The survey was again modified according to the received feedback. The survey was then tested on a pilot group of 12 subjects in order to ensure enough variance in responses (low variance would probably be an indication of badly formulated questions) and to receive further feedback on the survey design. The survey was constructed using GE Healthcare's internal survey tool. This tool allows the user to access the gathered data in an easy fashion and it also facilitates distribution of the survey within the organisation. The pilot study was also a chance to test the functionality of the tool and to see what kind of data the final survey would yield. The respondents in the pilot study were given the opportunity to comment on the survey design.

After reviewing the questions and altering them in accordance with the feedback received in the pilot study, the final survey was constructed. The questions can be found in appendix 1. The survey was distributed to 1309 QlikView users. 109 valid responses were received, giving a response rate of approximately 8%. Saunders et al. (2009) mention a likely response rate of 11% for Internet-based surveys as a rule of thumb. In light of that, the survey's response rate is deemed sufficient for its purpose. The respondents showed a dispersed representation across functional areas, with Sales (42%), Marketing (10%) and Other (14%) being the largest. Even though Sales is the largest group, the remaining functional areas represented were diverse, which should prevent one function from dominating the outcome. The experience of the users ranged from 0 to 60 months, with a mean of 16 months.

Limitations. This study is limited in scope: only one case company and one BI system were studied, at one point in time. The degree of generalizability may therefore be limited, but the phenomena that surfaced are still valid indicators. Further research investigating TAM within the context of BI would help overcome this limitation. There is also a risk of bias when conducting a study on a voluntary-response basis: mostly people with overtly positive or negative views of the study object might reply to the survey, possibly making the sample biased in either direction.

Measures

The measures used in the survey were likewise adopted from previous TAM research. The first item asks for the respondent's identification number, which was required in order to establish the connection between the respondent's perceptions and actual system usage. The identification number was only used to enable the gathering of the actual usage numbers stored in QlikView, and the answers of the respondents were otherwise used anonymously. All data was kept on an encrypted PC and the files containing any responses were password protected, ensuring the anonymity of the respondents.

The second item asked which functional area the respondent thought his or her role best corresponded to. Several alternatives were available, including an "Other" option if the role could not be matched. These alternatives were taken from the actual usage statistics, which had predefined functional areas.

The third item, experience, was measured by asking the respondents how many months of experience they had with using QlikView. This is in line with how Davis (1993) measured experience. This measure was included in order to verify a sufficient experience level in the studied population.

Self-reported system use was the fourth item, measured by asking for the average number of hours per week spent using the system. This self-reported measure of usage is the most common measure in previous TAM research and typical of MIS studies (Davis, 1993). It also corresponded well with the actual system use metric, which measured the user's usage of QlikView in total hours for the year 2011. The metric of actual use was then transformed into hours per week by dividing the total hours by the total weeks of use, thus making sure that users with less than a year's QlikView experience did not produce a misrepresentative number.
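A sketch of this normalisation step is shown below. The thesis performed it in Excel; the Python version and all column names are hypothetical, and deriving weeks of use from the experience item is an assumption about how "total weeks of use" was obtained:

```python
# Normalise logged 2011 usage to hours per week.
import pandas as pd

usage = pd.read_csv("qlikview_usage_2011.csv")  # hypothetical usage export

# Assumed derivation: convert months of experience to weeks, capped at the
# 52 weeks covered by the 2011 usage log.
weeks_of_use = (usage["experience_months"] * 52 / 12).clip(upper=52)
usage["AU_hours_per_week"] = usage["total_hours_2011"] / weeks_of_use
```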

The last 8 items were questions regarding the user's perception of QlikView, 4 questions each for perception of usefulness and perception of ease of use. These questions were adopted from Davis (1986) and were left unaltered in order to retain the validity and reliability of items that have been tested and replicated multiple times (Legris et al., 2003). All 8 questions used a 7-point Likert scale, ranging from "Strongly Agree" to "Strongly Disagree".

Data Processing

The answers to the perception questions, where Likert scales were used, were coded as follows: "Strongly Agree" was assigned the value of 7, and the remaining options were coded in descending order, down to "Strongly Disagree" with a value of 1. This was done in Excel, along with an overall analysis of the data to find potential outliers and anomalies. No outliers were found, as most of the questions used Likert scales, which limits the possibility of irregular values. Two respondents could, however, not be found in the actual usage data and had to be removed from the sample. This should not have any greater impact on the results.

The experience and self-reported usage items were regular scales and did not need any coding. The function item is a nominal variable, but did not need any prior coding either, as only frequencies would be used. The data concerning actual system use was imported into the Excel document and integrated into the dataset by matching it with the respondent's SSO number (the identification number collected in the first survey item).
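The coding and matching steps might look as follows. The thesis did this in Excel; the Python version is illustrative only, the intermediate scale labels are assumed, and all file and column names are hypothetical:

```python
# Code the Likert labels to 1-7 and join in the actual usage data by SSO number.
import pandas as pd

likert = {"Strongly Agree": 7, "Agree": 6, "Somewhat Agree": 5, "Neutral": 4,
          "Somewhat Disagree": 3, "Disagree": 2, "Strongly Disagree": 1}

survey = pd.read_csv("survey_responses.csv")  # hypothetical
items = [f"PU{i}" for i in range(1, 5)] + [f"PEU{i}" for i in range(1, 5)]
survey[items] = survey[items].replace(likert)  # map labels to numeric scores

usage = pd.read_csv("qlikview_usage.csv")  # hypothetical
# Inner join drops respondents missing from the usage data (two, in this study).
data = survey.merge(usage, on="sso", how="inner")
```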

The data was then imported into IBM SPSS Statistics, a statistical software tool. Factor analysis was performed on the perception items, both to assess the validity and reliability of the measures and to ensure that the items could be combined into two factors, PU and PEU respectively. The hypothesized relationships identified by TAM were analysed through the use of linear regression.


Result

The variables PU and PEU were constructed by summing the survey items in each category and dividing the total by the number of items (a computational sketch of this scoring follows table 1). Means and standard deviations of the constructs (PU and PEU) are shown in table 1. The mean scores for both constructs exceeded 5 on a 7-point scale. The standard deviation for PEU was slightly larger than that for PU.

Table 1: Descriptives for the constructs

Construct   N     Mean     Std. Deviation
PU          109   5,8463   1,29766
PEU         109   5,1697   1,43044
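As referenced above, a minimal sketch of the construct scoring and the descriptives in table 1, continuing the hypothetical dataset from the earlier sketches:

```python
# Build PU and PEU as item means and reproduce table 1's descriptives.
import pandas as pd

data = pd.read_csv("merged_dataset.csv")  # hypothetical merged survey + usage data

data["PU"] = data[["PU1", "PU2", "PU3", "PU4"]].mean(axis=1)
data["PEU"] = data[["PEU1", "PEU2", "PEU3", "PEU4"]].mean(axis=1)
print(data[["PU", "PEU"]].agg(["count", "mean", "std"]))  # N, mean, std. deviation
```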

Construct Validity and Reliability

Cronbach's alpha was calculated to measure the reliability of each construct; the results are shown in table 2, and a computational sketch follows the table. No item removal would increase the alpha for either construct. The constructs showed high reliability, with Cronbach's alpha coefficients of 0,964 and 0,928 respectively. Both coefficients were well above the commonly accepted threshold of 0,7, which suggests an excellent internal consistency.

Table 2: Construct Reliability

Construct                     No. Items   Cronbach α
Perceived Usefulness (PU)     4           0,964
Perceived Ease of Use (PEU)   4           0,928
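A minimal implementation of Cronbach's alpha using the standard formula; the dataset and item names continue the hypothetical sketches above:

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of scale total).
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

data = pd.read_csv("merged_dataset.csv")  # hypothetical
print(cronbach_alpha(data[["PU1", "PU2", "PU3", "PU4"]]))      # 0,964 in table 2
print(cronbach_alpha(data[["PEU1", "PEU2", "PEU3", "PEU4"]]))  # 0,928 in table 2
```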

Construct validity was supported by factor analysis, with factor loadings exceeding 0,8 and no cross-loadings above 0,5. The factorability of the items was examined before the analysis. All items correlated with the other items at 0,5 or above. The Kaiser-Meyer-Olkin measure of sampling adequacy was 0,903, well above the recommended value of 0,6, and Bartlett's test of sphericity was significant (χ2(28) = 1003,906; p < 0,001). Given these overall indicators, factor analysis was conducted with all 8 items.

A principal-components factor analysis using varimax rotation was conducted, with two factors explaining 87% of the variance. The method of extraction was based on eigenvalues greater than 0,8, in order to produce two factors corresponding to the theorized constructs of TAM. The extraction communalities of all items were above 0,7. The final factor loadings can be seen in table 3.
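The extraction and rotation can be sketched as below, with a self-contained varimax implementation. SPSS was used in the thesis; this numpy version is illustrative only, and the sign and ordering of the resulting components may differ from SPSS output:

```python
# Principal-components extraction (eigenvalues > 0,8) with varimax rotation.
import numpy as np
import pandas as pd

def varimax(loadings, n_iter=100, tol=1e-6):
    """Standard iterative varimax rotation of a loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    prev = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(loadings.T @ (
            rotated**3 - rotated @ np.diag((rotated**2).sum(axis=0)) / p))
        rotation = u @ vt
        if prev and s.sum() / prev < 1 + tol:
            break
        prev = s.sum()
    return loadings @ rotation

items = pd.read_csv("merged_dataset.csv")[  # hypothetical
    ["PU1", "PU2", "PU3", "PU4", "PEU1", "PEU2", "PEU3", "PEU4"]]
corr = np.corrcoef(items.values, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
keep = eigvals > 0.8                                  # extraction criterion
loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])  # unrotated PC loadings
print(varimax(loadings))                              # rotated loadings, cf. table 3
```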


Table 3: Factor Analysis

Item   Component 1   Component 2   Extraction Communality
PU1    0,884         0,371         0,920
PU2    0,880         0,386         0,922
PU3    0,899         0,325         0,915
PU4    0,799         0,486         0,875
PEU1   0,426         0,800         0,822
PEU2   0,269         0,842         0,780
PEU3   0,361         0,845         0,845
PEU4   0,466         0,812         0,877

Correlation Analysis

The correlations between the belief constructs (PU and PEU) and the system use variables (self-reported and actual) were calculated using the Pearson correlation coefficient. PU and PEU are significantly positively correlated, as shown in table 4. The correlation between self-reported use (SRU) and actual use (AU) was also investigated. SRU and AU are significantly positively correlated, with a correlation coefficient of 0,634, as shown in table 5.

Table 4: Correlation Matrix for PU and PEU

Construct                     PU        PEU
Perceived Usefulness (PU)     1         0,752**
Perceived Ease of Use (PEU)   0,752**   1

** p<0,01

Table 5: Correlation Matrix for SRU and AU

Construct                 SRU       AU
Self-Reported Use (SRU)   1         0,634**
Actual Use (AU)           0,634**   1

** p<0,01

Multiple Regression

Multiple regression analysis was conducted in order to identify the strength of the joint relationship between the dependent variable (system use) and the independent variables (PU and PEU). A regression was made for each type of dependent variable, namely self-reported use (SRU) and actual use (AU), on PU and PEU. The results of the regressions are shown in tables 6 and 7.


Table 6: Multiple regression results for SRU as dependent variable

            B        Std. Error of B   Beta     t(109)   p-level   VIF
Intercept   -0,275   1,048                      -0,262   0,793
PU          0,551    0,264             0,295    2,084    0,040     2,298
PEU         -0,036   0,240             -0,021   -0,151   0,880     2,298

Table 6 shows that PU is a significant positive predictor of SRU (p<0,05), whereas PEU is not significantly related to SRU. The adjusted R squared value for the regression was 0,060, implying that 6% of the total variation in SRU is explained by the model. The variance inflation factors (VIF) for the regression were well below 5, thus showing no strong signs of multicollinearity. Visual analysis of residual plots for the regression revealed no obvious signs of heteroskedasticity.
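The collinearity check can be reproduced as sketched below, using statsmodels' variance inflation factor on the hypothetical dataset from the earlier sketches:

```python
# Variance inflation factors for PU and PEU (cf. the VIF column in tables 6-7).
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

data = pd.read_csv("merged_dataset.csv")  # hypothetical
X = sm.add_constant(data[["PU", "PEU"]])
for i, name in enumerate(X.columns):
    if name != "const":
        print(name, variance_inflation_factor(X.values, i))  # about 2,3 here
```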

Table 7: Multiple regression results for AU as dependent variable

            B        Std. Error of B   Beta     t(109)   p-level   VIF
Intercept   -0,598   1,280                      -0,467   0,641
PU          0,773    0,323             0,343    2,394    0,018     2,298
PEU         -0,413   0,293             -0,202   -1,410   0,162     2,298

Table 7 shows that PU is a significant positive predictor of AU (p<0,02), whereas PEU is not significantly related to AU. The adjusted R squared value for the regression was 0,036, implying that 3,6% of the total variation in AU is explained by the model. The variance inflation factors (VIF) for the regression were well below 5, thus showing no strong signs of multicollinearity. Visual analysis of residual plots for the regression revealed no obvious signs of heteroskedasticity.


Discussion

The results showed that, when testing TAM in a BI setting, the model did not show the same explanatory power for either self-reported or actual usage as previous studies in other settings. Previous research replicating TAM has found the model to explain on average 40% of the variance in system usage (Legris et al., 2003). The results from this study showed the model to explain 3,6% and 6% of the variance for actual use and self-reported use respectively. PU was found to be correlated with both metrics, whereas PEU was not found to be significantly correlated with either. These results are intriguing, as the study followed the same type of method and used the same kind of metrics as prior research. The fact that no correlation was found between PEU and the usage metrics is not too surprising, as several studies have also been unable to find this relationship (Ma & Liu, 2004). The more interesting result is the relatively low goodness of fit. The model, in this setting, is still able to account for a degree of the variance in system use, but at a much lower level than expected. The question is what could be causing this. As mentioned, the study was performed in a fashion that replicates the method and the metrics used in previous research. The most discernible difference in this study is the fact that it was conducted in a BI context, studying a BI system. Thus, it is logical to begin the investigation of why the results are so different by discussing further what makes the BI setting unique, specifically how BI is defined and what purpose it is trying to fulfil.

Returning to Lönnqvist & Pirttimäki's (2006) definition of BI, it is all about having an organized and systematic process - a managerial method combined with technical solutions - by which the organization collects, analyses and distributes business-related information from many different sources, with the end goal of supporting decision making and making business activities more efficient. In other words, the end result of using a BI system is to make decisions and activities more efficient. From that perspective it makes sense that the end user is not really interested in using the system more than the task at hand requires. It should matter less how much the user actually uses the system, and more whether the actual usage brings about the usefulness the user is expecting or needing. Yousafzai et al. (2007) and Doll & Torkzadeh (1998) bring up this point in their critique of TAM: the assumption that more usage is always desirable and a sign of greater acceptance does not always hold. After a system has been implemented, high system use could be an indication of inefficiency, and less use may instead be desirable, depending on the purpose of the system. Usage and acceptance of a BI system are thus not simply two sides of the same coin. Using a BI system once per year or once per day will not matter if the usage fulfils the end goal: making the user more efficient. What matters is that the user accepts the system in the first place. Conceptually, it is hard to disagree with the notion that perceived usefulness correlates directly with acceptance; in most cases where a user is confronted with a system that is not perceived to be useful, he or she would most certainly not accept it. Thus, the main culprit behind TAM's reduced explanatory power in the case of the BI system studied here could be the assumption that acceptance equals usage. The BI system must be useful in order to be accepted, but it is not self-evident that more usage is always desirable.

A BI system has the general purpose of supporting decision making while at the same time having very specific usage requirements depending on who is using it. It needs to be easy to use but also complex enough to be useful to the entire range of different users and to perform the various tasks required of it (Lönnqvist & Pirttimäki, 2006). Thus, measuring system use as a unidimensional variable, entirely ignoring how the technology is actually used, might be misleading. MIS research has in the past been based on the notion that system use is self-evident, and it has therefore lacked a proper definition (Burton-Jones & Straub, 2006). Measuring it in one dimension seemed adequate, and more usage was always considered desirable, making system use a measure of IT success (Doll & Torkzadeh, 1998). Such a construct ignores the cost of usage and the fact that once an IT system is in place, much effort is put into making usage more efficient, i.e. less usage becomes desirable. Burton-Jones & Straub (2006) argue that the system use construct has no widely accepted definition and that it has been operationalized in many different kinds of measures. They suggest that if system use is to be included in a model, it should be properly defined beforehand rather than included as an afterthought. The definition should also be based on the research context, as a unified definition will be difficult to ascertain since system use is determined by the characteristics of the system. Hence, a possible reason why TAM is not performing well in a BI context could be that system use is not measured in a way that takes the context into account. As discussed above, a BI system is meant to be used in diverse ways, and how the system is used appears important, something the current system use construct does not capture. The results indicate that what drives usage of a BI system is not merely its usefulness, but other factors more directly tied to the purpose of the system itself, such as what tasks or decisions the user is actually faced with.

The question then turns to what factors determine the usage of a BI system. Again, looking at the definition of BI, the system is supposed to support the decision making of the user. The decisions a user is confronted with depend on what needs to be done, i.e. the tasks that his or her role entails. Thus, role and task could be important variables to take into consideration in a BI setting. Theoretically, both variables should be implicitly included in the PU construct, as a tool needs to be useful for something in order to be useful at all (Dishaw & Strong, 1999). PU and PEU are mediating constructs that should be able to mediate all other external variables, and in most previous TAM research they have been able to do so (Yousafzai et al., 2007). In the BI setting in this study they seemingly failed. It seems that differences in system use, stemming from different roles and tasks, had a greater effect on the model's relationships than expected. In the studied organization, some functions were not dependent on the support of a BI system, whereas others depended on it daily. This can explain how two people with the same degree of perceived usefulness display totally different usage patterns - in other words, how very different degrees of usage translate into basically the same degree of perceived usefulness - which would weaken the relationship between perceived usefulness and system use. If, in a BI setting, how the user actually uses the system is predetermined by the role and tasks he or she must perform, then it is reasonable to believe that there are variables not accounted for in TAM that directly affect the usage of the BI system.

According to Sun & Zhang (2006), prior TAM research has overlooked the importance of moderating factors, which could explain why many studies have reported inconsistent results with varying explanatory power. They list ten of the most important variables and sort them into three categories, where task/role and the purpose of the technology are considered especially important in a professional setting. If role and task are especially important in a BI setting, which the discussion above suggests, then the moderating effect of role and task would be high in this study. That the purpose of the technology is identified by Sun & Zhang (2006) as a moderating variable could explain why applying TAM to different settings yields different results depending on the technology studied. This reasoning resonates well with the discussion above regarding the purpose of BI and its possible implications for system use. Altogether, not taking moderating variables such as task/role and the purpose of BI into account could be one reason why the model showed relatively low explanatory power in this study.

Dishaw & Strong (1999) suggest extending TAM with the constructs found in TTF. In their study they show that there is a need to explicitly define task-technology fit as its own specific construct. Combining TAM with TTF leads, as their study indicates, to a higher degree of explanatory power. In this extended model, task-technology fit is an independent construct that both directly affects system use and indirectly affects TAM's perception constructs. So, it could be that some part of the task/role variables is indeed mediated by PU, but including these variables in the model as having a direct effect on system use could increase the explained variance reported in this study. A good task-technology fit seems especially important in a BI setting, as the system must balance ease of use with being universal enough to support the tasks required by the user. Universality implies some kind of standardization, but a BI system must also accommodate users with very particular requirements and as such be complex enough to fulfil these specialized needs. For that reason, task-technology fit would seem to be an important determinant of BI system use.
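As a rough illustration of what such an extension could look like as a regression specification, the sketch below adds a hypothetical task-technology fit score (ttf) as a direct predictor of use alongside PU and PEU. The file and variable names are assumptions, not measures collected in this study.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: au (actual use), pu, peu, and a ttf score capturing
# how well the system's functionality fits the user's tasks.
df = pd.read_csv("survey_responses.csv")

# TTF entered as a direct determinant of use in addition to PU and PEU;
# its indirect path through the perception constructs is not modelled here.
extended = smf.ols("au ~ pu + peu + ttf", data=df).fit()
baseline = smf.ols("au ~ pu + peu", data=df).fit()

# Comparing adjusted R-squared indicates how much explained variance the
# task construct adds over the original TAM specification.
print(baseline.rsquared_adj, extended.rsquared_adj)
```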

Burton-Jones & Straub (2006) specifically bring up task in their discussion of how to arrive at a properly defined system use measure that takes the user, system and task into account. System use is often defined in MIS research as hours logged or similar (as was done in this study in order to replicate TAM), which according to Burton-Jones & Straub (2006) might not always be suited to the context. For BI systems, not taking the task into account might have implications for the relationship between the belief constructs and system use. More usage is not necessarily a sign of system acceptance, nor does it translate directly into more positive user perceptions. Less use can instead indicate efficiency - that users get what they want from the system in a short amount of time. Doll & Torkzadeh (1998) point out that more use was previously considered desirable, without always taking the purpose of the system into account. Up to a certain point this might be true, but once the system is used enough to be considered accepted, even more use might not be desirable. To take this aspect into consideration, Burton-Jones & Straub (2006) urge that usage be defined as a multidimensional construct. Redefining system use of a BI system to include more dimensions than frequency of use would bring a better understanding of the usage itself and shed more light on the drivers behind it.
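To make the idea concrete, a multidimensional use record could look something like the sketch below. The dimensions and names are hypothetical, loosely echoing Burton-Jones & Straub's (2006) user, system and task elements, and are not a validated operationalization.

```python
from dataclasses import dataclass

@dataclass
class SystemUse:
    """Hypothetical multidimensional use measure (illustration only)."""
    hours_per_week: float    # extent of use: the single dimension measured in this study
    features_used: int       # system dimension: distinct reports/features touched
    tasks_supported: int     # task dimension: distinct work tasks the system supports
    decisions_informed: int  # outcome proxy: decisions the system actually informed

def decisions_per_hour(use: SystemUse) -> float:
    # Under this view, fewer hours for the same decision support signals
    # efficient use rather than low acceptance.
    return use.decisions_informed / max(use.hours_per_week, 1e-9)
```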

Continuing on the same line of thought, Doll & Torkzadeh (1998) propose that usage be measured in a more complex way than has been the case in previous IS usage studies. Their argument is that users use IT systems in different ways today than they did in the past. Considering that how people use a technology is defined by the interaction between people, technology and the organizational setting, a metric that captures these complex interactions might be required. This reasoning could help answer why this study did not show the anticipated results. Defining system use from the characteristics of the BI system as well as the nature of the tasks could lead to a more appropriate measure of system use. Exactly how this is to be done remains problematic: in order to draw conclusions from prior TAM studies, there must be comparability in method and in how the dependent variable is defined and measured. A framework in the model that governs how the measure is defined can therefore be seen as important for ascertaining the reliability of the model.

As indicated by the discussion above, system use was not defined according to the BI setting in this study, since the study followed the methodology of prior TAM studies to allow for comparison. A proper definition of system use must start somewhere, though, and the amount or frequency of use will most likely remain part of a more complex system use construct. It is therefore of interest to consider whether there is any real difference between self-reported and objective measures, or whether they are interchangeable. Comparing the results between self-reported and actual usage, no significant difference between the outcomes of the two models was observed. It is interesting to note that the model with actual system use as the dependent variable showed a lower adjusted R squared than the model using self-reported use. Still, the results are very similar, and there is no indication that the two metrics behave differently. The correlation coefficient between actual and self-reported use was 0,634. Thus, the results in this study do not agree with Straub et al. (1995), who argue that the two types of usage metrics, one subjective and one objective, are not related at all. They also found no correlation between the two belief constructs and the objective measure of usage, whereas in this study such a correlation was found for one of them, PU. They do, however, state that their result is not without limitations, and that further research on other contexts and technologies could bring other conclusions. In this case, the BI context indicates that either measure is acceptable to use, as both produce similar results. Why no difference is found is hard to make out, though; apart from possible methodological issues and errors, it seems that in this case the respondents were very good at estimating their own usage.
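The comparison between the two metrics reduces to a simple correlation, which could be computed as in the sketch below (file and column names again hypothetical).

```python
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("survey_responses.csv")  # hypothetical sru and au columns

# Pearson correlation between self-reported and actual use; this study
# observed a coefficient of 0.634 between the two metrics.
r, p = pearsonr(df["sru"], df["au"])
print(f"r = {r:.3f}, p = {p:.3g}")
```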

Conclusion

In this study, the original TAM was tested on a BI system. The results showed that, in this setting, TAM had a much lower explanatory power compared to prior TAM research performed in other settings. This indicates that how useful a user perceives the BI system to be does not affect the amount of usage to the same extent as predicted by TAM. PU was found to have a weak correlation with system use, whereas no correlation was found between the latter and PEU. The lack of a significant correlation between system use and PEU contradicts the model, but in the light of prior research this result is not entirely surprising. Furthermore, the results did not differ much between using self-reported and actual system use as the dependent variable. Other studies have indicated that there could be a significant difference between objective and subjective measures, something that could have implications for conclusions drawn from past TAM research. This study indicates that this is not the case for a BI system.

The potential reasons for the weak correlation between perceived usefulness and system use of the BI system were discussed. The results indicate that what drives usage of a BI system is not merely its usefulness, but other factors more directly tied to the purpose of the system itself, such as what tasks or decisions the user is actually faced with. The reasoning is grounded in the definition of BI, and in how BI system use is not predefined (as in, for example, office software such as an e-mail program); instead, the system is supposed to be general enough to accommodate whatever informational needs its users have. Thus, how the system is used could be a vital clue to understanding the correlation between perception of usefulness and system use, and to identifying what other factors directly affect the usage.

Task and role were identified as possible variables that could directly affect system use in a BI system and thus explain the weak link between PU and system use. The reasoning is anchored in the notion that since the end goal is not to use a BI system as much as possible, the perception of usefulness would not be expected to determine the amount of system use to the same extent as in prior TAM research done in other settings. The decisions a user is faced with are more likely to determine the actual usage to a much greater extent. How system use is measured could be more important than previous research has assumed: the concept itself lacks a theoretical grounding, and prior TAM studies are inconsistent in what they actually measure in terms of system use. Taking the research context into account, and basing system use on what the system comprises (e.g. Burton-Jones & Straub's (2006) system, user and task dimensions), would give a fairer view of system use and what actually drives it.

In sum, the results of this study suggest that the perception of usefulness does not determine system use to the same degree as in prior TAM research. The most likely reason is the variety of tasks that the system needs to support; as long as the system has been accepted, it may be the tasks the users are confronted with that determine system use to a higher degree. By looking at what defines a BI system, it seems that thinking more about how the system is used, and how such use should be measured, would help identify the drivers behind the usage and consequently shed more light on BI user behaviour. This would in turn help system designers when implementing BI systems. Most importantly, the reasoning in this study indicates that an ill-defined measure of system use does not reflect the real meaning behind the use; as such, it is unwise for practitioners to evaluate a system based on this construct alone.

Future Research

It would be of interest to replicate this study in other organisations using different BI systems. A larger sample would enable an analysis of differences between roles and functions within the case company, and more decisively establish whether task is as important a variable as argued in this paper. Further research should also focus more on user behaviour in order to bring more understanding of what drives system use beyond the user's perception of usefulness.

Bibliography

Adams, D. A., Nelson, R. R. & Todd, P. A. (1992). Perceived Usefulness, Ease of Use, and Usage of Information Technology: A Replication. MIS Quarterly 16(2), 227-247.

Agarwal, R. & Prasad, J. (1997). The Role of Innovation Characteristics and Perceived Voluntariness in the Acceptance of Information Technologies. Decision Sciences 28(3), 557-582.

Ajzen, I. & Fishbein, M. (1980). Understanding Attitudes and Predicting Social Behavior. Englewood Cliffs, NJ: Prentice-Hall.

Briciu, S., Vrincianu, M. & Mihai, F. (2009). Towards a New Approach of the Economic Intelligence Process: Basic Concepts, Analysis Methods and Informational Tools. Theoretical and Applied Economics 4(4), 21-34.

Burton-Jones, A. & Straub, D. (2006). Reconceptualizing System Usage: An Approach and Empirical Test. Information Systems Research 17(3), 228-246.

Chaudhuri, S., Dayal, U. & Narasayya, V. (2011). An Overview of Business Intelligence Technology. Communications of the ACM 54(8), 88-98.

Chuttur, M. Y. (2009). Overview of the Technology Acceptance Model: Origins, Developments and Future Directions. Sprouts: Working Papers on Information Systems 9(37).

Collopy, F. (1996). Bias in retrospective self-reports of time use: an empirical study of computer users. Management Science 42(5), 758-767.

Davenport, T. H. & Harris, J. G. (2007). Competing on Analytics: The New Science of Winning. Boston, MA: Harvard Business School Press.

Davis, F. (1986). A Technology Acceptance Model for Empirically Testing New End-User Information Systems: Theory and Results. Doctoral dissertation, MIT Sloan School of Management, Cambridge, MA.

Davis, F. (1989). Perceived Usefulness, Perceived Ease of Use, and User Acceptance of Information Technology. MIS Quarterly 13(3), 319-340.

Davis, F. (1993). User acceptance of information technology: system characteristics, user perceptions and behavioral impacts. Int. J. Man-Machine Studies 38, 475-487.

Davis, F., Bagozzi, R. & Warshaw, P. (1992). Extrinsic and intrinsic motivation to use computers in the workplace. Journal of Applied Social Psychology 22(14), 1111-1132.

Davis, F. & Venkatesh, V. (1996). A critical assessment of potential measurement biases in the technology acceptance model: three experiments. Int. J. Human-Computer Studies 45, 19-45.

Dishaw, M. T. & Strong, D. M. (1999). Extending the Technology Acceptance Model with Task-Technology Fit Constructs. Information & Management 36, 9-21.

Doll, W. & Torkzadeh, G. (1998). Developing a multidimensional measure of system-use in an organizational context. Information & Management 33(4), 171-185.

Eckerson, W. W. (2008). Pervasive Business Intelligence: Techniques and Technologies to Deploy BI on an Enterprise Scale. TDWI Best Practices Report, Third Quarter 2008.

Gangadharan, G. & Swami, S. (2004). Business Intelligence Systems: Design and Implementation Strategies. 26th International Conference on Information Technology Interfaces, Cavtat, Croatia, 7-10 June 2004.

General Electric (2011). Growth Starts Here, Annual Report 2010. New York, 2011.

Gibson, M., Arnott, D. & Jagielska, I. (2004). Evaluating the Intangible Benefits of Business Intelligence: Review & Research Agenda. Decision Support in an Uncertain and Complex World: The IFIP TC8/WG8.3 International Conference 2004.

Imhoff, C. & White, C. (2011). Self-Service Business Intelligence: Empowering Users To Generate Insights. TDWI Best Practices Report, Third Quarter 2011.

King, W. R. & He, J. (2006). A meta-analysis of the technology acceptance model. Information & Management 43(6), 740-755.

Kudyba, S. & Hoptroff, R. (2001). Data Mining and Business Intelligence: A Guide to Productivity. Hershey, PA: Idea Group Publishing.

Legris, P., Ingham, J. & Collerette, P. (2003). Why do people use information technology? A critical review of the technology acceptance model. Information & Management 40, 191-204.

Lönnqvist, A. & Pirttimäki, V. (2006). The Measurement of Business Intelligence. Information Systems Management 23(1), 32-40.

Ma, Q. & Liu, L. (2004). The Technology Acceptance Model: A Meta-Analysis of Empirical Findings. Journal of Organizational and End-User Computing 16(1), 59-72.

Marjanovic, O. (2007). The Next Stage of Operational Business Intelligence: Creating New Challenges for Business Process Management. 40th Annual Hawaii International Conference on System Sciences, Waikoloa, Hawaii, 3-6 January 2007.

Mathieson, K. (1991). Predicting User Intentions: Comparing the Technology Acceptance Model with the Theory of Planned Behaviour. Information Systems Research 2(3), 173-191.

Moore, G. C. & Benbasat, I. (1991). The Development of an Instrument to Measure the Perceived Characteristics of Adopting an Information Technology Innovation. Information Systems Research 2(3), 192-222.

Moss, L. & Atre, S. (2003). Business Intelligence Roadmap: The Complete Project Lifecycle for Decision-Support Applications. Boston, MA: Addison-Wesley.

Negash, S. (2004). Business Intelligence. Communications of the Association for Information Systems 13, 177-195.

Negash, S. & Gray, P. (2008). Business Intelligence. Handbook on Decision Support Systems 2(7), 175-193.

Okkonen, J., Pirttimäki, V., Hannula, M. & Lönnqvist, A. (2002). Triangle of Business Intelligence, Performance Measurement and Knowledge Management. 2nd Annual Conference on Innovative Research in Management, Stockholm, Sweden, 9-11 May 2002.

Olszak, C. & Ziemba, E. (2007). Approach to Building and Implementing Business Intelligence Systems. Interdisciplinary Journal of Information, Knowledge and Management 2, 135-147.

Ortiz, S. (2010). Taking Business Intelligence to the Masses. Computer 43(7), 12-15.

Popovic, A., Turk, T. & Jaklic, J. (2010). Conceptual Model of Business Value of Business Intelligence Systems. Management 15(1), 5-30.

QlikTech International AB (2012). Available at: <http://www.qlikview.com>. Accessed 13 March 2012.

QlikTech International AB (2011). The QlikView Product Family. Available at: <http://www.qlikview.com/se/explore/products/overview>. Accessed 27 March 2012.

Saunders, M., Lewis, P. & Thornhill, A. (2009). Research Methods for Business Students, 5th ed. Essex: Prentice-Hall.

Schepers, J. & Wetzels, M. (2007). A meta-analysis of the technology acceptance model: Investigating subjective norm and moderation effects. Information & Management 44, 90-103.

Schwarz, A. & Chin, W. (2007). Looking Forward: Toward an Understanding of the Nature and Definition of IT Acceptance. Journal of the Association for Information Systems 8(4), 230-243.

Straub, D., Limayem, M. & Karahanna, E. (1995). Measuring system usage: implications for IS theory testing. Management Science 41(8), 1328-1342.

Sun, H. & Zhang, P. (2006). The role of moderating factors in user technology acceptance. Int. J. Human-Computer Studies 64, 53-78.

Szajna, B. (1996). Empirical evaluation of the revised technology acceptance model. Management Science 42(1), 85-92.

Venkatesh, V. (2000). Determinants of Perceived Ease of Use: Integrating Control, Intrinsic Motivation, and Emotion into the Technology Acceptance Model. Information Systems Research 11(4), 342-365.

Venkatesh, V. & Davis, F. D. (2000). A Theoretical Extension of the Technology Acceptance Model: Four Longitudinal Field Studies. Management Science 46(2), 186-204.

Watson, H. J. & Wixom, B. H. (2007). The Current State of Business Intelligence. Computer 40(9), 96-99.

Wu, K., Zhao, Y., Zhu, Q., Tan, X. & Zheng, H. (2011). A meta-analysis of the impact of trust on technology acceptance model: Investigation of moderating influence of subject and context type. International Journal of Information Management 31, 572-581.

Yeoh, W. & Koronios, A. (2010). Critical Success Factors for Business Intelligence Systems. Journal of Computer Information Systems 50(3), 23-32.

Yousafzai, S. Y., Foxall, G. R. & Pallister, J. G. (2007). Technology Acceptance: A Meta-Analysis of the TAM. Journal of Modelling in Management 2(3), 251-280.

Zhang, S., Zhao, J. & Tan, W. (2008). Extending TAM for Online Learning Systems: An Intrinsic Motivation Perspective. Tsinghua Science and Technology 13(3), 312-317.

Appendix I: Survey Questions

1. What is your SSO number?

2. In which function do you work? [Business Management, Engineering/Technology, Environmental Health and Safety, Finance, Information Technology, Logistics, Manufacturing, Marketing, Product Management, Quality, Sales, Services, Sourcing, Research and Development, Other]

3. How much experience do you have using QlikView? [answer in months]

4. How many hours per week do you spend using QlikView? [answer in hours]

Perception Questions: answers on a 7-point Likert scale, ranging from Strongly Agree to Strongly Disagree.

5. Using QlikView improves my performance on the job

6. Using QlikView in my job increases my productivity

7. Using QlikView enhances my effectiveness on the job

8. I find QlikView useful in my job

9. My interaction with QlikView is clear and understandable

10. Interacting with QlikView does not require a lot of mental effort

11. I find it easy to get QlikView to do what I want it to do

12. I find QlikView to be easy to use
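For illustration, responses to the perception items could be aggregated into PU and PEU scale scores as sketched below, assuming each item is coded 1 to 7 and stored under hypothetical column names (q5 to q12). Averaging the items is one common scoring choice, not necessarily the one used in this study.

```python
import pandas as pd

# Hypothetical file with one column per survey item, coded 1 (Strongly
# Disagree) to 7 (Strongly Agree).
df = pd.read_csv("survey_responses.csv")

# One common choice: score PU as the mean of the four usefulness items
# (Q5-Q8) and PEU as the mean of the four ease-of-use items (Q9-Q12).
df["pu"] = df[["q5", "q6", "q7", "q8"]].mean(axis=1)
df["peu"] = df[["q9", "q10", "q11", "q12"]].mean(axis=1)
print(df[["pu", "peu"]].describe())
```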