
Journal of Management and Business Research, ISSN 2162-8955, Vol. 2, No. 2, April 2012

The Journal of Management and Business Research

Editor-in-Chief
Dr. Abdul Sraiheen
Kutztown University, USA
(610) 683-4593
[email protected]

Associate Editor
Dr. Okan Akcay
Kutztown University, USA
(610) 683-4590
[email protected]

Associate Editor
Dr. Roger Hibbs
Kutztown University, USA
(610) 683-4580
[email protected]

The Journal of Management and Business Research (JMBR) is a refereed, quarterly journal that serves the Management and Business Research fields by bridging theoretical and applied business systems research in a way that benefits both academics and management professionals. The journal's intention is to help local and global business communities exploit business research efficiently, toward effective management and the creation of business value. The journal welcomes all types of theoretical and applied research studies in Management and Business that add value for enterprise owners, customers, developers, and evaluators; studies in efficient business management, applied management science, enterprise resource planning, business process reengineering, and business decision support are particularly sought. Manuscripts in related areas of Management, Technology, and Business are also considered if they bear implications for the creation of business value through efficient management and decision support. The journal's audience comprises members of the local and global business communities, researchers, students, and industry practitioners with an interest in information science. The journal invites original papers and technical reports that have neither been published nor are under consideration for publication elsewhere. The journal is published by the Berks Group of Management and Business Research and appears quarterly. Printing is provided by the American Institute of Technology and Business Research.

Indexed in Ulrich's, EBSCO, Sciseek, Index Copernicus, Scirus, Pascal, PKP, Google Scholar, and others pending.


FROM THE EDITOR-IN-CHIEF

The Journal of Management and Business Research is a refereed journal that aims to publish high-quality articles in two major areas: Management and Business.

Although most online business technologies attracted only large businesses and banks in the past, owing to the high costs involved, the rapid development of the Internet has made it feasible for public agencies, individual consumers, and small businesses to participate. Today, every organization in the global marketplace is affected by global computing and the new trends in Management and Business activities that have come with it.

Owners, however, face the real challenge of creating business value in all their management and business activities and of redefining the requirements and directions for survival and success in this global computing world. Owners do not hide the fact that intelligent computing, information technology, and business intelligence have become not only necessary for success but a fundamental requisite for survival. Today, Management and Business research claims a significant share of every company's budget.

Management and Business activities rely on technology that embraces nearly the whole of business, education, and science and touches, at some point or other, on almost every social issue of our time.

Let us have this forum where we all learn how to generate great business value through scientific Management and Business research and efficient technology management.

Abdul Sraiheen, Ph.D.

Editor-in-Chief


Table of Contents

4   Eco-Friendly IT: Greener Approach to IT
    Rushabh Shah

25  Possibilistic Group Support System for Pricing and Inventory Problems
    Emna Boumediene, Lotfi Boumediene, Bel G Raggad

37  Saudi Arabia's Economic Diversification: A Case Study in Entrepreneurship
    Kimanthi Ali Thompson, Dalal Thair Al-Aujan, Roaa Al-Nazha, Sara Al Lwaimy, and Sumayah Al-Shehab

41  How to Effectively Manage IT Project Risks
    Bradley Sean Susser, Pace University, NY

68  Efficiency and Productivity Analysis of Tunisian Banks During a Recent Deregulation Period
    Raéf Bahrini, Institute of High Commercial Studies of Sousse, Tunisia


Eco-Friendly IT: Greener Approach to IT
Rushabh Shah, Pace University, New York

Abstract

Information Technology is widely considered a key tool that can help address the frightening energy and environmental challenges facing the world today. Environmental issues are receiving unprecedented attention from businesses and governments around the world. Eco-friendly Information Technology, also known as Green Computing, is geared towards using Information Technology to create a more environmentally friendly and cost-effective use of power and production in technology. Eco-friendly Information Technology starts with manufacturers producing environmentally friendly products and encouraging various departments to consider friendlier options such as virtualization, power management, and proper recycling habits.

Feeling pressure from customers and other stakeholders, organizations have begun to make serious improvements in their environmental performance, recognizing that if they fail to deliver on this, it frequently translates into a negative impact on profit. Many governments are introducing aggressive environmental policy, encompassing everything from greenhouse gas reduction and natural resource protection to clean power initiatives and incentives for energy efficiency.

The main purpose of this research paper is to examine the issues surrounding the harsh environmental impact of data centers' high energy and resource consumption, and to discuss various eco-friendly solutions that address those issues. The advantages of implementing the identified eco-friendly solutions are also discussed. Moreover, related case studies are presented to show how influential Information Technology companies resolved issues of high energy consumption in their data centers, companies that were able to utilize eco-friendly technology in resolving the issues facing modern industry today. This research paper aims to establish and highlight the important link between the environment and Information Technology, and it further emphasizes that Information Technology can be a vital instrument in saving the environment through the various eco-friendly solutions available.

Keywords: Green IT, virtualization, eco-friendly, energy-efficient, environment.

1. Introduction

In recent years, we have seen a great increase in the number of companies joining the green movement bandwagon. As more and more organizations are becoming aware of their responsibilities to the environment, numerous efforts towards saving the environment are being implemented. Some companies see the move as a necessity as regulators consider limits on greenhouse gas emissions and consumers demand environmentally friendly products.


The compounding effect of high gas emissions, toxic waste materials, and high energy consumption has taken a toll on the environment. Increasingly, organizations are becoming aware of their responsibility to the environment, and numerous efforts towards saving it are being implemented through the use of eco-friendly IT. As the name implies, eco-friendly IT refers to environmentally sustainable computing or Information Technology. Its main goals are to reduce the use of hazardous materials, maximize energy efficiency during a product's lifetime, and promote the recyclability or biodegradability of obsolete products and factory waste.

In the Information Technology industry, energy consumption is considered a critical issue today. As data centers grow, their carbon footprints increase. One might think that a single computer does not consume much energy; on a larger scale, however, as in data centers, where thousands of computers run many processors and numerous memory cards, energy consumption becomes problematic for company owners as well as for the environment. The IT industry is not the only one experiencing issues related to high energy consumption; other highly developed industries face the same dilemma of coping with the effects of modernization.

Modernization has a harsh impact on the environment. As the world becomes more modernized, various products are developed, manufactured, and used to keep abreast of the constant changes the modern world brings. Every aspect of manufacturing a product produces unwanted toxic elements and pollutants that can have an adverse effect on the environment and public health. Issues relating to production waste disposal, the discarding of packaging materials, and the recycling of obsolete products must be addressed by every organization in order to minimize pollution.

Various companies in the Information Technology industry have adopted eco-friendly IT solutions in support of creating a sustainable environment. Sustainability is an issue that affects organizations of all sizes, and with awareness of "green" issues at an all-time high, it is important that every company make every effort to be as environmentally conscious as possible.

Many businesses have discovered that eco-friendly IT initiatives offer cost-saving benefits while reforming the organization, meeting stakeholder demands, and complying with laws and regulations. In this study, IBM and the Info-Tech Research Group find that businesses that complete eco-friendly IT initiatives realize significant cost savings alongside superior environmental performance.

2. Eco-friendly Adoption Needs


Energy consumption is a critical issue for Information Technology organizations today, whether the goal is to reduce cost, save the environment, or keep data centers running efficiently and cost-effectively. Data centers consume so much electricity that United States data centers alone consume 4.5 kWh annually, which is 1.5% of the country's total energy consumption. Industry analysts estimate that over the next five years, most enterprise data centers will spend as much on energy (power and cooling) as they do on hardware infrastructure, and this figure is likely to double in the next few years as demand for data centers increases with the central computing needs of businesses and lifestyles. Servers are the principal drivers of this energy consumption and cost.

Rising energy costs have already had an impact on all businesses, and businesses are increasingly judged on their environmental credentials by legislators, customers, and shareholders. This will not only affect the obvious, traditionally power-hungry, "smoke-belching" manufacturing and heavy engineering industries and the power generators; the Information Technology industry is more vulnerable than most, for it has sometimes been a reckless and profligate consumer of energy, and developments and improvements in technology have largely been achieved without regard to energy consumption.

The total amount spent on electricity to operate data center servers and related infrastructure equipment in the United States was $2.7 billion in 2005, compared with $1.3 billion in 2000. Worldwide, the total bill was $7.2 billion in 2005, compared with $3.2 billion in 2000. Put differently, U.S. data center power consumption in 2005 was equivalent to about five 1,000-megawatt power plants, or five typical nuclear or coal plants, according to analysts. In 2005, data center servers consumed 0.6 percent of all U.S. electricity; when infrastructure equipment such as network and cooling gear is included, that figure rises to 1.2 percent, about the same share consumed by televisions.
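As a rough cross-check of the power-plant equivalence quoted above, the short Python sketch below converts five 1,000-megawatt plants into annual kilowatt-hours, under the simplifying assumption (not stated in the text) that the plants run continuously at full capacity for a year.

```python
# Rough cross-check of the "five 1,000-megawatt power plants" equivalence for
# 2005 U.S. data center consumption. Assumption: continuous full-capacity
# operation for a whole year.

plant_count = 5
plant_capacity_kw = 1_000 * 1_000   # 1,000 MW expressed in kW
hours_per_year = 24 * 365           # 8,760 hours

annual_kwh = plant_count * plant_capacity_kw * hours_per_year
print(f"Implied annual consumption: {annual_kwh / 1e9:.1f} billion kWh")
# -> roughly 43.8 billion kWh per year
```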

Today's data center design decisions all pivot around maximizing efficiency, while giving companies a path for future growth, says Steve Sams, VP of global site and facilities services for IBM. "We see our customers make very different design decisions than they used to," Sams says. "And the end result is that they are saving 30 percent in operational costs over the lifetime of the data center."

In many companies, there has been a shift away from dedicated data centers as part of an attempt to provide all IT requirements using smaller boxes within the office environment. Many have found this solution too expensive, experiencing a higher net spend on staff as well as higher support costs. The energy consumption of distributed IT environments is difficult to audit, but some have also noted a progressive increase in power consumption with the move from centralized to decentralized, then to distributed architecture, and finally to mobility-based computing. Even where distributed computing remains dominant, the problems of escalating energy prices and environmental concerns are present, albeit at a lower order of magnitude than in the data center environment, and even though the problems are rather more diffuse and more difficult to solve.


The increase in server demand can be attributed to the huge market demand for Web content, video on demand, music downloads, and Internet telephony. Factors that contribute to excessive energy consumption in data centers are as follows:

Underutilized server hardware
- Studies in 2006 found that servers account for 80% of the total IT load and 40% of total data center consumption, with site infrastructure accounting for the other 50% of total data center consumption.
- Servers typically house only a single application, and their processors sit idle 85-95% of the time; while sitting idle, these servers use nearly as much power as they do under load (a back-of-the-envelope sketch of this waste follows the list).
- The inefficiency of running a single application per x86 server is not only wasteful but expensive, given electricity costs and continually increasing computing demand.

Inefficient and aging data centers
- Many organizations have older (legacy) applications running on older hardware. These applications, and the hardware they run on, are expensive to manage and maintain because power consumption and hardware maintenance costs for older hardware are generally higher.
- Companies are running out of power and/or capacity to support the increased energy demands on inefficient and aging data centers, caused by utilities unable to provide adequate power and by high-power, densely packed equipment.

Inability of IT staff to respond rapidly to changing business needs and computing requirements
- IT workload varies by day or month and grows over time as the company grows or application demand increases. Because the physical IT infrastructure is static, hardware and servers are over-provisioned for peak load, mainly because applications are very difficult to reconfigure onto different hardware once installed. The inability to provision the physical infrastructure dynamically to accommodate these fluctuations leads to wasteful practices in data centers, which results in high energy consumption.
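To make the idle-server waste concrete, here is a minimal back-of-the-envelope sketch in Python. The 400 W loaded draw and the assumption that an idle server draws about 90% of its loaded power are illustrative figures, not data from the text; the 90% idle-time fraction is the midpoint of the 85-95% range cited above.

```python
# Illustrative estimate of energy burned by a single-application server that
# sits idle most of the time. Assumed figures: 400 W draw under load and an
# idle draw of ~90% of the loaded draw; 90% idle time is the midpoint of the
# 85-95% range cited in the list above.

loaded_draw_w = 400
idle_draw_w = 0.90 * loaded_draw_w   # "nearly as much power as under load"
idle_fraction = 0.90                 # share of the year spent idle
hours_per_year = 24 * 365

idle_kwh = idle_draw_w * idle_fraction * hours_per_year / 1_000
total_kwh = loaded_draw_w * hours_per_year / 1_000   # upper bound if fully loaded all year
print(f"~{idle_kwh:,.0f} kWh of ~{total_kwh:,.0f} kWh per year is consumed while idle")
```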

Businesses in various industries are looking for ways to relieve themselves of the burden of increasing energy demands and costs, and to free themselves from the constraints of inflexible and underutilized hardware. Many of these dilemmas are now being resolved through eco-friendly IT solutions geared towards creating a sustainable environment. Virtualization tops the list of these solutions and is a fundamental element of the green data center.


3. Eco-friendly Adoption Methods and Solutions

Modernization has brought an increased need for high-performance servers to meet the growing demand for new applications. Consequently, energy usage in data centers rises in order to keep up, which is becoming a major dilemma for many IT managers and corporate executives. The amount of power and cooling these servers need increases as well, which contributes to high electricity costs. Industry observers have noted that companies with data centers attribute 40% of their operating cost to power- and cooling-related expenses alone. Furthermore, data centers account for 23% of carbon emissions from global information and communications technology and claim about 1.5 percent of total electricity usage in the U.S. Much of this consumption comes from cooling the space used to house data servers.

The high operating cost of data centers has driven big companies like Microsoft, Google, and Yahoo to establish data centers in locations where hydro-electric power and wind energy are abundant. This move has compelling advantages for such companies, addressing the high operating cost of maintaining their data centers while supporting the movement towards a greener environment. However, building massive data centers in well-situated locations requires huge investments that not all companies can afford; Google and Microsoft alone have spent an estimated $1.15 billion to create their data centers. These companies feel that such efforts to drive down data center operating costs are needed to keep abreast of the continuous evolution of the Internet while reducing their corporate carbon footprint.

Companies want to reduce power usage these days, both to save cash on energy bills and to reduce their environmental impact. Saving energy is about more than saving trees: not only does the environment clearly benefit from power-saving measures, but companies benefit as well, because solutions that improve energy efficiency are often cost-effective. Various companies have acquired eco-friendly IT solutions to address the pressing environmental problems caused by the inefficient use of high-energy-consuming servers, aging servers, and the tremendous demand for cooling data centers. Solutions that address power consumption include server virtualization, cutting data center energy consumption, and changing data center design and architecture. The following sections detail the eco-friendly IT solutions to the issues discussed in this research paper:

Virtualization and Server Consolidation


Virtualization solutions have successfully reduced corporate carbon footprints and positively impacted the environment all over the world. Virtualization is the creation of a virtual version of a hardware platform, operating system, storage device, or network resource. It provides tremendous energy benefits and a lifeline to data centers that are running low on capacity and high on power and cooling costs. Through virtualization, businesses can create virtualized, dynamic IT environments that are cost- and energy-efficient and that support the eco-friendly practices various companies aim to implement in their daily operations.

The ever-changing demands on IT infrastructure are challenging the way we implement data storage. Mounting pressure from capacity needs, skill shortages, and the drive to reduce IT-related costs is forcing businesses to optimize available storage assets. This can be done by consolidating geographically dispersed and underutilized servers and storage. Datafence, for example, provides expertise in designing and implementing server and storage consolidation and virtualization solutions: it begins with a needs assessment based on current requirements, then designs and deploys the most effective way to consolidate and centrally manage a client's data.

Source: http://www-03.ibm.com/press/attachments/GreenIT-final-Mar.4.pdf

Figure 1: Virtualization Projects

The following are advantages of virtualization:

Ability to contain and consolidate the number of servers in a data center


- Allows businesses to run multiple application and operating system workloads on the same server. A typical setup runs a 10-server workload on a single physical server; some companies consolidate 30 to 40 server workloads onto one server.
- A dramatic reduction in server count results in lower IT energy consumption. Reducing the number of physical servers through virtualization cuts power and cooling costs and provides more computing power in less space. Virtualization can decrease energy consumption by 80 percent.

Ability to respond rapidly to changing business needs and computing requirements
- Various virtualization providers offer technology that lets administrators move running virtual machines from one server to another with no disruption to the application or end users, monitor the utilization of a pool of servers, and dynamically rebalance virtual machines across an entire resource pool of physical servers on an ongoing basis. Other technologies reduce power consumption by turning servers off when capacity is not needed and powering them back on when it is required.

Virtualization technology helps the environment
- Every server that is virtualized saves 7,000 kWh of electricity and 4 tons of carbon dioxide emissions per year (a worked example follows this list). With more than a million workloads running on virtualization technology, the cumulative power savings are about 8 billion kWh.

Further benefits include:
- Increases existing server and storage utilization and efficiency.
- Helps devise a centrally managed server storage plan.
- Centralizes and efficiently manages backup and recovery operations.
- Helps devise a simple disaster recovery and business continuity plan.

Virtualization is used to consolidate the workloads of several under-utilized servers onto fewer machines, perhaps a single machine (server consolidation), bringing savings on hardware, environmental costs, and the management and administration of the server infrastructure.

Legacy applications might simply not run on newer hardware and/or operating systems. Even if they do, they may under-utilize the server, so, as above, it makes sense to consolidate several applications. Virtualization helps here, as such applications are usually not written to co-exist within a single execution environment.

Virtualization can provide the illusion of hardware, or of a hardware configuration, that you do not have (such as SCSI devices or multiple processors). It can also be used to simulate networks of independent computers.

Virtualization allows for powerful debugging and performance monitoring. Such tools can be placed in the virtual machine monitor, for example, and operating systems can be debugged without losing productivity or setting up more complicated debugging scenarios.

Virtualization makes software easier to migrate, thus aiding application and system mobility.
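As a worked example of the per-server figures quoted in the list above (7,000 kWh and 4 tons of CO2 avoided per virtualized server per year), the sketch below scales them up for a hypothetical fleet; the fleet size and the $0.10/kWh tariff are illustrative assumptions.

```python
# Worked example scaling the article's per-server virtualization savings.
# Assumptions: a 200-server fleet and an electricity price of $0.10/kWh.

servers_virtualized = 200
kwh_saved_per_server = 7_000    # figure quoted in the text
co2_tons_per_server = 4         # figure quoted in the text
price_per_kwh = 0.10            # assumed average tariff, USD

annual_kwh = servers_virtualized * kwh_saved_per_server
annual_cost = annual_kwh * price_per_kwh
annual_co2 = servers_virtualized * co2_tons_per_server
print(f"{annual_kwh:,} kWh, ${annual_cost:,.0f}, and {annual_co2} t CO2 avoided per year")
```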


Some 25% of organizations expect server spending to grow by 5 to 10 percent, and 6 percent expect it to grow by 10 percent or more. To reduce operating and capital costs, companies should consider server virtualization: the masking of server resources, including the number and identity of individual physical servers, processors, and operating systems, from server users. A software application is used to divide one physical server into multiple isolated virtual environments; these virtual environments are sometimes called virtual private servers, but they are also known as guests, instances, containers, or emulations. There are various approaches to server virtualization, such as the virtual machine model, the paravirtual machine model, and virtualization at the operating system (OS) layer. The reasons for server virtualization are that (1) it reduces the overall energy consumption of the server footprint, since it allows the same workload to run on fewer physical servers; (2) it alleviates out-of-space, power, and cooling constraints; and (3) it reduces the overall server footprint and cuts energy-related carbon dioxide emissions, while electronic waste is also reduced because less server equipment is required.

Even where server virtualization is used, there is still room to improve energy savings. Three process improvements that can help organizations cut server energy costs are maximizing virtual machines, cooling and design, and energy-efficient servers.

I. Maximize virtual machines: Virtualization alone is not enough; in addition to increasing the overall server virtualization footprint, the aim is additional energy savings through virtualizing more efficiently. Server virtualization ratios, such as three virtual servers per host, are not keeping pace with modern hardware and virtualization platform capabilities. Virtualizing more efficiently can help avoid new server purchases, not to mention the additional power, cooling, and space expenses of new equipment. According to Doug Washburn of Forrester (Jan. 11, 2011), a key ratio administrators use to determine the acceptable number of VMs per physical host is server CPU utilization: there is a direct relationship between CPU utilization, VMs per physical host, and energy savings. A standalone, non-virtualized server might run at an average of 10 to 15 percent utilization, whereas virtualized servers could theoretically approach 100 percent. If the number of VMs per physical host is increased, the total number of physical servers decreases and energy consumption is reduced. As server teams become more comfortable with higher virtualization utilization ratios, they can safely add more VMs per physical server without diminishing service levels.
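The CPU-utilization reasoning above can be turned into a simple sizing calculation. The sketch below estimates how many physical hosts a set of workloads needs at a given consolidation ratio and how much energy retires with the decommissioned servers; the per-VM utilization, headroom target, and host power draw are illustrative assumptions.

```python
import math

# Sizing sketch for the utilization-driven consolidation described above.
# Assumptions: each VM averages 10% of a host CPU, hosts are run at a 60%
# utilization target, and a physical host draws about 450 W.

total_vms = 120
avg_vm_cpu_utilization = 0.10
target_host_utilization = 0.60
host_power_w = 450
hours_per_year = 24 * 365

vms_per_host = math.floor(target_host_utilization / avg_vm_cpu_utilization)
hosts_needed = math.ceil(total_vms / vms_per_host)
hosts_before = total_vms   # one application per physical server, pre-virtualization

energy_saved_kwh = (hosts_before - hosts_needed) * host_power_w * hours_per_year / 1_000
print(f"{vms_per_host} VMs per host -> {hosts_needed} hosts instead of {hosts_before}")
print(f"Roughly {energy_saved_kwh:,.0f} kWh/year no longer drawn by retired servers")
```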

II. Cooling and design: Packing all this technology into such a small space generates a large amount of heat, and it is the power used by cooling and air conditioning systems that often makes up the majority of the utility bill in the datacenter.

Some manufacturers have been experimenting with different ways of cooling densely packed server, storage and network components, including server cabinet door designs that feature a variety of liquid cooled tubes to distribute cold air across racks, and direct spray technology that douses CPUs themselves with chemically treated water.


Gartner estimates that improved row- and rack-based cooling techniques can reduce energy consumption by 15 per cent, for example, while redesigning datacenter floor plans and racks to bring colder air in and disperse heat (often called hot aisle, cold aisle design) more effectively can also take the weight off over-worked air conditioning systems.

Energy Efficient Servers and Architecture Management:

Datacenter management software: One of the biggest problems facing datacenter managers under pressure to reduce electricity consumption and utility bills is how to get accurate usage information.

Some manufacturers, such as IBM, have added power metering and monitoring utilities to their servers and racks, and linked management software to the power distribution units that monitor individual and multiple racks of servers, network switches and storage appliances to find out exactly how much power the equipment on each unit is using. Elsewhere, IntelliData Systems provides cabinet and rack-mounted power strips with built-in metering, environmental monitoring and remote shutdown capabilities for any attached equipment, as well as inline devices for individual mainframe computers.

A number of software vendors offer reporting tools that can detail trends and patterns in power usage, total power input, carbon emissions and costs, some for billing and charge-back purposes. Also available is modeling software that predicts how equipment can be re-arranged for optimum temperature control, making it easier for organizations to identify ways to reduce datacenter energy consumption.
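As a minimal sketch of the kind of charge-back reporting these metering and software tools enable, the Python snippet below rolls per-rack power readings up into monthly energy, cost, and carbon figures. The rack readings, tariff, and grid emission factor are illustrative assumptions, not data from any particular vendor.

```python
# Hypothetical charge-back report built from per-rack average power readings.
# All inputs are illustrative assumptions.

rack_avg_kw = {"rack-A1": 6.2, "rack-A2": 5.8, "rack-B1": 7.4}
hours_per_month = 24 * 30
price_per_kwh = 0.10      # assumed tariff, USD
kg_co2_per_kwh = 0.5      # assumed grid emission factor

for rack, avg_kw in rack_avg_kw.items():
    kwh = avg_kw * hours_per_month
    print(f"{rack}: {kwh:,.0f} kWh, ${kwh * price_per_kwh:,.0f}, "
          f"{kwh * kg_co2_per_kwh / 1_000:.1f} t CO2 this month")
```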

Scottish and Southern Energy (SSE), for example, has been using a datacenter performance management suite since 2009. The software has helped the utility company to map existing rack, server and network hardware and the relationships between them, and to migrate two datacenters from one provider to another when the existing facilities began to run out of capacity. SSE also uses it to predict and prevent failures, using modeling tools to identify potential problems with the electricity supply. Steve Wallage, managing director of Broad Group Consulting, a company specializing in advice on datacenters, managed services, outsourcing and virtualization, says more organizations are taking a closer interest not just in datacenter hardware, but also in the applications and services that run on top of it, to identify where potential efficiency improvements could be made. "There is a lot more effort now to understand datacenters and what goes on inside, not just the power units and chillers, but also the data and applications," he says. "The banks have detailed analysis of every application in use, for example, and use information on different classes of datacenter infrastructure and location to decide whether they could move them into the cloud, and we will see a lot more corporate effort in that direction."

Datacenter pods: In some cases, both enterprises and service providers may not have to spend millions on building or leasing customized datacenter facilities.


Scaled-down, "containerized" datacenters that fit into the back of a truck can meet permanent or temporary demand for infrastructure resources, so long as there is somewhere close to the network point of presence to park them.

Running out of Processing Power:

This issue stems from the huge amount of storage involved; business reports say that, at the rate technology has been progressing, storage capacity will soon be exhausted. The traditional remedy was to install additional hard disks and servers; today, servers are increasingly provisioned virtually in the cloud, where they consume less power than additional physical servers and hard drives.

Desktop Virtualization and Thin Clients

Moving desktops to a virtual server rather than keeping them on individual physical machines helps considerably: they consume less power, and the storage problem can be solved to a great extent. Thin clients generally have little local CPU and RAM and connect directly to the cloud server. The shared-resources model inherent in desktop virtualization offers advantages over the traditional model, in which every computer operates as a completely self-contained unit with its own operating system, peripherals, and application programs. Overall hardware expenses may diminish as users share resources allocated to them on an as-needed basis. Virtualization also potentially improves the integrity of user data, because all data can be maintained and backed up in the data center. Some of the advantages of desktop virtualization are as follows:

- Simpler provisioning of new desktops.
- Reduced downtime in the event of server or client hardware failures.
- Lower cost of deploying new applications.
- Desktop image-management capabilities.
- Increased data security.
- Longer refresh cycles for client desktop infrastructure.
- Secure remote access to an enterprise desktop environment.

Server Room Upgrades and New Server Room Builds

Most mid-size businesses face a preponderance of server-related issues, and there are many reasons to upgrade to a new server:

- Decrease cost and increase the effectiveness of the server, since servers are generally not prepared for full-capacity conditions.
- Increase the server's computing capacity. Server rooms often need to be expanded because they are either too small or not compatible with the virtual servers they are connected to.
- The reliability of old servers is questionable, as they need to be upgraded after a definite period of time.
- The mounting and maintenance of old servers are also questionable, as it is often very expensive to maintain these servers and to handle the effective increase in storage.
- The infrastructure also needs to be sufficient to keep up with server expansion and with the other aspects of the new technologies that keep emerging.

One advantage of these server room upgrades is that the company stays in the competition to be among the most innovative firms. The market keeps changing with every change in technology, so it is important for a company to come up with new and better ideas that keep it competitive. Room upgrades and new servers have become a necessity: a company needs to take a further step towards eco-friendly IT and adopt virtual servers, which tend to consume less power, facilitate the smooth running of the company, and avoid harming the environment. This has in turn enabled companies to develop successful projects.

Information Technology Energy Measurement

A recent Info-Tech study found that 28% of mid-sized enterprises are piloting or implementing IT energy measurement, and another 25% plan to implement it in the next 12 months. Adoption is driven by rising electricity costs, a need for data and guidance in planning future energy-efficiency initiatives, and greater awareness of the carbon emissions associated with energy consumption.

Source: http://www-03.ibm.com/press/attachments/GreenIT-final-Mar.4.pdf

Figure 2: Server Room upgrades


This note demonstrates how to move through a gradual but effective energy measurement implementation, including:

- Adoption drivers for energy measurement solutions.
- Simple, cost-effective metering solutions to estimate Information Technology's total cost of energy.
- Using the total energy estimate to educate stakeholders about the cost and impact of energy.
- Building a solid business case for a formal measurement solution.
- Success factors for moving through each stage of this energy measurement implementation approach.
- A disguised case study of a real company, ABC Foods.

We need to understand how organizations can quantify the total cost of energy for IT, drive interest and attention toward this operational cost, and ultimately build a business case for formal tools that allow full reporting, better infrastructure planning, and new quantifiable energy-efficiency opportunities. IT energy measurement also involves preventing unnecessary energy wastage within a company, such as unneeded use of computers and printers. A simple first-pass estimate of this total cost is sketched below.
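In the spirit of the gradual measurement approach this note describes, a first-pass estimate of IT's total cost of energy can be built from nothing more than a device inventory, assumed wattages, and duty cycles, to be replaced later with metered data. All counts, wattages, hours, and the tariff in the sketch below are illustrative assumptions.

```python
# First-pass estimate of IT's total energy cost from an assumed inventory.
# device: (count, average watts while on, hours on per year) -- all illustrative.
inventory = {
    "servers":  (150, 400, 24 * 365),
    "desktops": (800, 120, 10 * 250),   # assumed 10 h/day, 250 working days
    "monitors": (800,  30, 10 * 250),
    "printers": ( 40, 600,  4 * 250),
}
price_per_kwh = 0.10   # assumed tariff, USD

total_kwh = sum(count * watts * hours / 1_000
                for count, watts, hours in inventory.values())
print(f"Estimated IT energy use: {total_kwh:,.0f} kWh/year (~${total_kwh * price_per_kwh:,.0f})")
```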

Printer Consolidation

Many companies across the United States have top-of-the-line printers and print over 300,000 pages in a fiscal year. A survey found that more than 60% of the paper used goes into the trash, and more than three-fourths of the paper that is wasted cannot be recycled, so one can imagine how much paper has been wasted over the last few years as technology has spread. Along with the printers themselves, the maintenance of printers, toners, cartridges, and so on proves very expensive for a company. One of the most important measures a firm can take is therefore to cut down on printer use and avoid wasting so much paper, which conserves energy and spares the environment. A number of companies have adopted the practice of printing only where necessary, with the remaining documents stored on servers or shared virtually.

Remote Conferencing and Telecommunication Strategies


Fuel prices have soared, and at the same time emitting so much waste into the air pollutes the environment and creates an imbalance in nature. The greenhouse effect comes into play with the emission of such harmful gases, and human beings, plants, animals, and every living creature are affected. We therefore need to conserve fuel and save our planet. To that end, this paper considers the ways and means of remote conferencing and telecommunication strategies.

Remote conferencing and collaboration involves two major elements: video conferencing between different offices or client sites, and online collaboration environments. These capabilities help convey a message more efficiently while protecting the environment. Telecommunication strategies have likewise proven worthwhile in protecting the environment from the hazards described above. Virtual private networks have enabled users to work from home, so offices are less crowded and people need not travel by car or other vehicles and consume fuel that would pollute the environment. Many people prefer working from home because they can multi-task: they can take care of their family, stay in touch with friends, and do their work side by side, without needing formal office wear. Remote work has also created employment for those who can only work from home and has enabled the physically challenged to work from their own space and lead a life of dignity.

Not surprisingly, businesses adopting travel reduction initiatives seek to decrease the travel and fuel consumption costs associated with driving or flying between office locations and to client sites. Some of these initiatives not only reduce the costs of fuel, flights, hotels, and related expenses, but also result in higher employee satisfaction.

Another major factor pushing companies to implement these initiatives, particularly telecommuting strategies, is employee satisfaction. This rang true for one CIO of a North American public company, who notes: "Our employees, faced with high gas prices, are coming back to us and saying, 'I really like working here, but I'm driving 30 miles one way; I may have to look at something else.' People don't want to move, especially for the salaries that we can pay. Telework is going to open up some avenues for us to get employees that are, frankly, out of our reach right now." Organizations are also gaining access to remote talent that they otherwise would not be able to tap. In two-thirds of all travel reduction projects, organizations report their employees are very satisfied with the increased flexibility they are now offered.

Information Technology Equipment Recycling:

The IT industry has taken its share of plaudits for embracing the green agenda over the past few years. This is certainly well-deserved considering the substantial investment in virtual and related technologies that have helped reduce overall energy consumption.


However, many of the measures have rightly been described as "low-hanging fruit" in that they were fairly easy to accomplish and produced relatively quick, quantifiable returns on investment. That may not be the case in the next phase of the green data center movement, however, in which the industry will increasingly be asked to do what's right for the environment even if it does not produce significant benefit, and may in fact be detrimental, to the bottom line.

Out of all the initiatives in this study, the success of IT equipment recycling relies not on a business case with cost savings, but on a combination of environmental responsibility and regulatory pressure. The single most important factor in adopting recycling initiatives is decreasing the waste sent to landfills. A close secondary consideration is ensuring equipment is responsibly discarded at end of life. Additionally, there appears to be greatly increased customer demand for responsible recycling practices. Space, too, is an issue: many IT departments are simply running out of closets and crannies in which to store old equipment.

A key example is recycling. Enterprises have traditionally left disposal of old equipment to suppliers or distributors, essentially washing their hands of it once depreciation had eroded its value. That approach isn't likely to hold up much longer considering the impact that refuse enterprise hardware is having on both the environment and municipal budgets that have to accommodate the e-waste.

Obsolete computers and other electronics are a valuable source of secondary raw materials if treated properly; if not, they are a source of toxins and carcinogens. Rapid technology change, low initial cost, and planned obsolescence have resulted in a fast-growing surplus of computers and other electronic components around the globe. Technical solutions are available, but in most cases a legal framework, a collection system, logistics, and other services need to be implemented before applying a technical solution. The U.S. Environmental Protection Agency estimates that 30 to 40 million surplus PCs, classified as "hazardous household waste," will be ready for end-of-life management in the next few years. The U.S. National Safety Council estimates that 75% of all personal computers ever sold are now surplus electronics. Computer components contain many toxic substances, such as dioxins, polychlorinated biphenyls (PCBs), cadmium, chromium, radioactive isotopes, and mercury. A typical computer monitor may contain more than 6% lead by weight, much of it in the leaded glass of the cathode ray tube (CRT). A typical 15-inch computer monitor may contain 1.5 pounds (about 0.7 kg) of lead, and other monitors have been estimated to contain up to 8 pounds (about 3.6 kg). Circuit boards contain considerable quantities of lead-tin solders that are likely to leach into groundwater or create air pollution through incineration. The processing (e.g., incineration and acid treatments) required to reclaim these precious substances may release, generate, or synthesize toxic byproducts.


The export of waste to countries with lower environmental standards is a major concern in computer and electronics recycling. The Basel Convention lists hazardous waste from computer CRT screens as an item that may not be exported transcontinentally without the prior consent of both the exporting and the receiving country. Companies may find it cost-effective in the short term to sell outdated computers to less developed countries with lax regulations, and it is commonly believed that a majority of surplus laptops are routed to developing nations as "dumping grounds for e-waste." The high value of working and reusable laptops, computers, and components (e.g., RAM) can help pay the cost of transportation for many otherwise worthless "commodities."

Several recycling methods are available, including the following:

Consumer recycling involves taking products directly back to the manufacturer or to a refurbishing firm.

Corporate recycling: businesses seeking a cost-effective way to recycle large amounts of computer equipment responsibly face a more complicated process. They have the option of selling the equipment or of contacting the Original Equipment Manufacturers (OEMs) to arrange recycling. Some companies pick up unwanted equipment from businesses, wipe the data clean from the systems, and provide an estimate of the product's remaining value. For unwanted items that still have value, these firms buy the excess IT hardware and sell refurbished products to those seeking more affordable options than buying new.

Sale involves auctioning products online, which can fetch a good price for equipment that would otherwise be scrapped.

Donation involves replacing whatever parts are required within the computer and then giving the entire machine to a person in need of it.

Take-back involves researching computer companies before a purchase; consumers can find out whether manufacturers offer recycling services. Most major computer manufacturers offer some form of recycling: at the user's request, they may accept mailed-in old computers or arrange for pickup.

Exchange involves offering a free replacement service when purchasing a new PC. Dell and Apple Inc. take back old products when one buys a new one, and both refurbish and resell their own computers with a one-year warranty.

Many companies purchase and recycle all brands of working and broken laptops and notebook computers, from individuals and corporations. Building a market for recycling of desktop computers has proven more difficult than exchange programs for laptops, smartphones, and other smaller electronics.


Scrapping and recycling for value have become common: the rising price of precious metals, coupled with the high rate of unemployment during the Great Recession, has led to a larger number of amateur "for-profit" electronics recyclers. Computer parts, for example, are stripped of their most valuable components and sold for scrap. Metals such as copper, aluminum, lead, gold, and palladium are recovered from computers, televisions, and more.

PC Power Management: Many look to managing end-user device power consumption as an easy, effective way to reduce energy costs. These power management initiatives include the following:

- Using software that centrally manages the energy settings of PCs and monitors.
- Enforcing standardized power settings on all PCs before distributing them to end users.
- Procuring energy-efficient equipment, such as Energy Star certified devices.

Older computers can use up to 300 watts during peak load, but less than eight watts during sleep modes. By maximizing the number of PCs and monitors controlled for hibernate, sleep or shut-down times, companies reduce the amount of energy consumed during lengthy idle times, particularly overnight. Procuring Energy Star-compliant devices or more energy-efficient equipment can also reduce power consumption during equipment use. This includes replacing old desktops with laptops, or refreshing CRT monitors with LCD flat-screens. Altogether, these power management strategies result in significant energy and maintenance cost savings; such benefits are realized by 65% of companies that complete such initiatives.
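A worked example of the sleep-mode arithmetic above: the Python sketch below estimates the energy saved by forcing a fleet of PCs from active draw into sleep during idle overnight hours. The fleet size, the 120 W active draw, and the 14 idle hours per day are illustrative assumptions; the 8 W sleep figure comes from the text.

```python
# Estimated savings from enforcing sleep mode during idle hours.
# Assumptions: 1,000 PCs, 120 W draw when left on but idle, 14 idle hours/day,
# and $0.10/kWh; the 8 W sleep draw is the figure quoted in the text.

pc_count = 1_000
active_draw_w = 120
sleep_draw_w = 8
idle_hours_per_day = 14
days_per_year = 365
price_per_kwh = 0.10

kwh_saved = pc_count * (active_draw_w - sleep_draw_w) * idle_hours_per_day * days_per_year / 1_000
print(f"~{kwh_saved:,.0f} kWh and ~${kwh_saved * price_per_kwh:,.0f} saved per year")
```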

4. Key Success Factors in Eco-friendly IT Projects

The likelihood that companies will successfully implement Eco-friendly initiatives depends on the following factors:

1. Stakeholder Support: Any project in a firm has stakeholders, and it is critical to have their support for the success of that particular project, especially where the eco-friendly use of technology is concerned. Major stakeholders include C-level executives, IT directors, IT staff, employees, and in some cases, property or facilities management. Although gaining buy-in from all levels is important, the likelihood of success is higher when implementations have the support of C-level executives, specifically the CEO. The most successful projects are strongly supported by the CEO in more than three-quarters of implementations. As an IT manager at a finance company said, "One of the reasons we've been able to move forward with this is because of sponsorship and support from the CEO and his executive team. Without that, we wouldn't have the funding to do it. It wouldn't be pushed."


2. Implementation Barriers:

Source: http://www-03.ibm.com/press/attachments/GreenIT-final-Mar.4.pdf

Figure 3 : Facing extreme implementation barriers

Companies adopting eco-friendly information technology initiatives may face barriers that inhibit the successful approval and implementation of these projects. A lack of choice due to missed refresh cycles, inadequate funding, misalignment with physical facilities, and a lack of resources, such as IT staff, can all be barriers. However, fewer than one-third of respondents cite these as major barriers to implementation, and only 7% say they face extreme barriers. The most common barrier for this latter group is a lack of flexibility due to missed refresh cycles.

3. Economic Trade-offs:

In a recent survey, respondents were asked to anticipate the impact of the downturn on their revenues, IT budgets, prioritization of projects, and funding for eco-friendly information technology projects over the next 12 months. Approximately 61% of respondents did not believe they would be affected in these areas, including more than 50% who did not think that funding for eco-friendly information technology projects would drastically decrease. This is a positive signal for eco-friendly information technology and its cost-cutting benefits: 38% of the companies felt that cost savings would contribute to the success of their projects.

5. Company Case Analysis

Hewlett Packard

HP's Performance Optimized Datacenter (POD) is one example, with others available from


Sun Microsystems (now Oracle), IBM and APC. The POD, which comes in 20ft and 40ft versions, provides up to 20 standard 19in 50U racks and 600kW of power (34kW per rack), and uses chilled water to keep the servers cool, alongside blower fans and heat exchangers, backed up by dual active power distribution paths for redundancy purposes. Whichever, if any, of these technologies or datacenter design methodologies individual organizations choose to deploy will depend very much on what they have in place already and the extent of the upgrade budget available to them. But the potential of innovative datacenter design to deliver reduced capital and operational costs means few IT departments can afford to ignore them.

Dell

Dan Traynor, IT infrastructure director, Southern Company, United States. The challenge described in this Dell case study was Southern Company's rapid business growth, which created server sprawl, threatening to outstrip the available space in Southern Company's data centers and driving up costs by consuming more energy each year.

Virtualizing and consolidating on Dell PowerEdge servers enabled the Southern Company IT team to save data center space, reduce costs, and increase energy efficiency. Among the benefits achieved: the virtual infrastructure helped cut new server deployment time by a week; it enabled IT to accommodate future growth while slowing the pace of energy consumption; Dell PowerEdge servers enabled up to 26:1 server consolidation to save data center space; Southern Company avoids over 2 million kilowatt-hours of energy use with virtualized Dell servers; and consolidating on Dell servers enabled IT to avoid an estimated U.S. $1.3 million in capital expenditures.

Dell's approach is called the Efficient Data Center, and it can help free up some 50 percent of the IT budget while also lowering the carbon footprint. Built on virtualization, automation, and consolidation, this strategy yields open, robust, and cost-effective solutions that help optimize the current data center, virtualize in a time frame that makes sense for the business, and leverage cloud technologies where appropriate. In addition, the Efficient Data Center improves business continuity. Downtime costs money, drains resources, and can harm a company's reputation. With an infrastructure that is virtual-ready, a company can recover from server failure rapidly and without having to rebuild from scratch. Within minutes, the functions performed by the failed server, whether virtual or physical, can be retargeted to an available spare server so that the applications are back up.

Samsung

Source: Kim Seungh-ho, October 4, 2010, "Samsung Electronics unveils 'Smart & Green plus' Strategy."


Strategy" Data Center Electricity Consumption Doubles: A big increased in the number of server accounts for 90% of the extra power consumption, based on a study conducted by Stanford’s Jonathan Koomey. The energy consumed by data center servers, cooling equipment, and related infrastructure more than doubled in the United States and worldwide between 2000 and 2005, according to a new study.

An increased in the number of servers accounts for 90% of the additional power consumption, according to a study by author, Jonathan Koomey, a consulting professor at the Stanford University and a staff scientist at Lawrence Berkeley National Laboratory. The study was conducted by Advanced Micro Devices, which is touting its energy-efficient processors. Only 5% to 8% of the increase in data center electricity consumption is attributed to power use per unit. What is driving the server proliferation is the insatiable appetite for Web content, video on demand, music downloads, and Internet telephony. The total amount of electricity used to operate data center servers and related infrastructure equipment in the United States was $2.7 billion in 2005 in comparison to $1.3 billion in 2000. Worldwide the total bill was $7.2 billion in 2005, compared with $3.2 billion in 2000. Looking at it in a different way U.S. data center power consumption in 2005 was equivalent to about five 1,000- megawatt power plants or five typical nuclear or coal power plants says Koomey. In the United States in 2005 Data center servers consumed 0.6 percent of all electricity. When counting with the infrastructure equipment such as network and cooling gear that figure goes up to 1.2 percent, about the same percentage consumed for televisions. To overcome this big consumption of electricity by data center servers companies such as Samsung have lunched strategies to a more “smart and Green” approach. Samsung Electronics revealed the "smart & green plus" strategy at the 2010 Samsung mobile solution forum held in Taiwan on Sept 7. "The strategy reflects Samsung's strong will to lead the world's mobile semiconductor industry with high-function, low electric power and environment-friendly semiconductors," said at the forum Kwon Oh-hyun the president of the semiconductor business of Samsung Electronics. "At the same time, we will effectively cope with changes in the new mobile market environment by strengthening the win-win partnership between semiconductor manufacturers and set makers," Kwon Oh-hyun. "Samsung also plans to expand the "green memory campaign" to three fields - server, PC and mobile. Through updating their green memory campaign website, the company expects to introduce four top green memory products - DDR3, SSD, LPDDR2 and GDDR5," said Kwon. At the forum, Samsung introduced new mobile semiconductor products in keeping with the smart & green plus strategy, including 1GHz dual core application processor designed on low-power process technology, a high-performance 16gigabyte moviNANDTM chip with an eMMC4.41 interface, and an engineering sample of the world's first application processor utilizing 32 nanometer (nm) low-power process technology.

6. Conclusion


The soaring demand for powerful servers and the cooling equipment needed to support data centers has had a vast effect on the energy consumption requirements of modern industry. This has become a problem for the majority of highly industrialized companies: coping with the demand for higher-level technology while keeping operating costs low has been quite a challenge for many businesses. Moreover, as industries become more aware of the ill effects of globalization on our planet, everyone is doing their part and taking steps toward contributing to a sustainable environment. Businesses around the world have discovered that going green isn't just good for the planet; it is good for their bottom lines. The paper highlights how mid-size companies are realizing significant cost savings when they adopt eco-friendly information technology initiatives. Issues relating to the high energy consumption of data centers are mostly attributable to how companies manage their system requirements. Most companies purchase a new server whenever there is a need for a new system. The accumulation of servers running in a single facility contributes substantially to the high cost of the electricity needed to run the machines and has an adverse effect on the environment. Servers emit a great amount of heat, which can damage the machines, and the cooling equipment needed to control that heat also consumes a great deal of energy. Consolidating these systems onto one server alone does not solve the problem. Virtualization is the most popular eco-friendly solution to address the high energy consumption of data centers, and it is usually the first step that the IT department takes to consolidate its servers and significantly bring down the cost of maintaining data centers and the high energy cost associated with them. One significant finding of this research paper is that virtualization alone is not the entire solution to these pressing issues: it needs processes, procedures, and management in order for organizations to benefit from the advantages that virtualization can bring.

References

Dell 1, Practical solutions for environmental issues (2010). Retrieved from http://content.dell.com/us/en/corp/dell-earth.aspx

Dell 2, Dell working on the various solutions for Green IT. Retrieved from www.dell.com/environment

Forrester, Ways to cut data center energy costs. Retrieved from http://features.techworld.com/data-centre/3245222/forrester-three-ways-to-cut-data-centre-energy-costs/?pn=1

IBM, Green IT: Why mid-size companies are investing now. Retrieved from http://www-03.ibm.com/press/attachments/GreenIT-final-Mar.4.pdf


McGee, M.K. (February 17, 2007). "Data Center Electricity Bills Double," InformationWeek. Retrieved from http://www.informationweek.com/news/197006830

Seungh-ho, K. (October 4, 2010). "Samsung Electronics unveils 'Smart & Green plus' Strategy."

Techworld 1, Various Solutions for Green IT; Retrieved from http://features.techworld.com/latest/?cid=27##

Techworld 2, Data center design. Retrieved from http://features.techworld.com/data-centre/3229944/trends-shaping-data-centre-design/

Techworld 3, Data center management. Retrieved from http://features.techworld.com/data-centre/3208465/the-new-shape-of-data-centres/

Techtarget, What is Server Virtualization? Retrieved from http://searchservervirtualization.techtarget.com/definition/server-virtualization

Traynor, D., IT infrastructure director, Southern Company. Case Study of Dell (Virtualizing and consolidating); Retrieved from http://content.dell.com/us/en/enterprise/d/corporate~case-studies~en/Documents~2009-southern-company-10007421.pdf.aspx


Possibilistic Group Support System For Pricing And Inventory Problems

Emna Boumediene, ISCAE, University of Manouba, Tunisia

Lotfi Boumediene, ISG, University of Tunis, Tunisia

Bel G Raggad, Pace U, New York

Abstract

The paper proposes a Possibilistic Group Support System (PGSS) for the retailer pricing and inventory problem when possibilistic fluctuations of product parameters are controlled by a set of possibilistic optimality conditions. Experts in various functional areas convey their subjective judgment to the PGSS in the form of analytical models (for product parameter estimation), fuzzy concepts (facts), and possibilistic propositions (for validation and choice procedures). Basic probability assignments are used to elicit experts' opinions. They are then transformed into compatibility functions for fuzzy concepts using the falling shadow technique. Evidence is processed in the form of fuzzy concepts and is then rewritten back into basic probability assignments using the principle of least ignorance on randomness.

The PGSS allows the user (inventory control) to examine a trade-off between the belief value of a greater profit and a lower amount of randomness associated with it. Managerial pricing and inventory strategy is controlled using three fuzzy concepts expressing whether management is acting softly, moderately, or aggressively. Management can soften their strategy and reinvoke the PGSS until a final system recommendation becomes satisfactory.

Keywords: Possibilistic theory, Expert system, Group support system, Fuzzy set theory

1. Introduction

The determination of the subjective probabilities needed to process subjective judgment relies considerably on the perception of the human expert. The acquisition process for subjective probability distributions is usually characterized by inconsistency, which can grow when multiple experts are involved in the estimation process.

Subjective judgment is often used in decision making under uncertainty. It is not uncommon for experts to produce different probability distributions for the same subject. When this happens, combining their views requires a difficult and costly inference process. Nevertheless, despite the arbitrariness, inconsistency, and cost of processing subjective managerial judgment, experts' estimation of the uncertainty associated with the decision domain remains a consequential and valuable conceptual resource for current decision-making processes. Experts can, however, only provide incomplete and rough estimates of domain uncertainty, and estimates of domain parameters are usually presented in linguistic form.


The retailer pricing and inventory problem treated in this article depends on subjective judgment from various players, namely the purchase manager, the sales manager, the inventory manager, suppliers, and marketing and finance management. Obviously, the decision maker alone cannot possess expertise in all functional areas affecting the retailer pricing and inventory policy. An efficient approach for managing the randomness inherent in such a decision-making process should take into account the diversity of the group of experts and devise a sound inference process.

The article proposes a Possibilistic Group Support System (PGSS) for the retailer pricing and inventory problem when possibilistic fluctuations of product parameters are controlled by a set of possibilistic optimality conditions. Experts in various functional areas convey their subjective judgement to the PGSS in the form of analytical models (for product parameter estimation), fuzzy concepts (facts), and possibilistic propositions (for validation and choice procedures).

Two known techniques are usually employed in possibilistic reasoning: basic probability assignments and compatibility functions for fuzzy concepts. Even though they are very effective in eliciting experts' opinions, basic probability assignments are very complex and costly when combining evidence. In contrast, while compatibility functions are very easy to process, they are characterized by their arbitrariness in representing experts' opinions as fuzzy concepts. In order to avoid the disadvantages of both techniques, the PGSS diversifies their usage by employing basic probability assignments in the elicitation process and compatibility functions in the combining of evidence.

The PGSS also allows the user (inventory control) to examine a trade-off between the belief value of a greater profit and a lower amount of randomness associated with it. Managerial pricing and inventory strategy is controlled using three fuzzy concepts expressing whether management is acting softly, moderately, or aggressively. Management can soften their strategy and reinvoke the PGSS until a final system recommendation becomes satisfactory.

2. The retailer pricing and inventory problem

While demand is constant in the EOQ problem, Lee (1993) explicitly allowed for the interdependency between demand and price [7]. Lee attempted to determine the optimal selling price and order quantity of a retailer when demand is a nonlinear function of price and the purchase cost carries a quantity discount. In general, the task is to maximize a retailer profit π(p, q), where p and q are the price and order quantity, respectively. The profit is computed as follows:

π(p, q) = r - c - h - b

where r is the revenue, c the ordering cost, h the inventory holding cost, and b the purchase cost.

The optimal solution is obtained by maximizing π(p, q). This is an unconstrained signomial problem with one degree of difficulty. Lee (1993) transformed the problem into a posynomial one as in Duffin, Peterson and Zener [2]. In an intermediate stage, Lee proposed the computation of four important decision parameters, δ1, δ2, δ3 and δ4, which respectively represent the proportions (or weights) of profit, ordering cost, inventory holding cost, and purchase cost to the total revenue. These proportions play a very important role in predicting the deviation of the various output variables from their optimal values.
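For concreteness, here is a small sketch of how these proportions could be computed from the components of the profit function; the revenue and cost figures are illustrative placeholders, not values from the paper.

```python
# Illustrative figures only: revenue and cost components of pi(p, q) = r - c - h - b.
r, c, h, b = 10000.0, 200.0, 800.0, 5900.0
profit = r - c - h - b

# Proportions (weights) of profit, ordering, holding, and purchase cost to total revenue.
delta1, delta2, delta3, delta4 = profit / r, c / r, h / r, b / r
print(delta1, delta2, delta3, delta4)                  # 0.31 0.02 0.08 0.59
print(abs(delta1 + delta2 + delta3 + delta4 - 1.0) < 1e-12)   # they sum to 1 by construction
```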


Real data concerning the proportions of ordering cost and inventory cost to the total revenue may be easily obtained [4]. The two remaining decision parameters, the proportions of profit and purchase cost to total revenue, may be approximated as in [4]:

δ1 + δ3 = cons1
δ3 - δ2 = cons2
δ3 + δ4 = cons3

where cons1, cons2, and cons3 are constants defined in [4].

Even though the optimal pricing and inventory policy changes when input parameters change, Lee proposed a set of optimality conditions to control fluctuations in the input variables. This set of optimality conditions is useful for identifying those output variables (price, quantity, revenue, profit) that are not realistic. The analytical model is not easy to solve when demand is a nonlinear function of price with constant elasticity. The problem can become even more difficult if multiple products are involved in the study, or if the nonlinear function becomes more complex.

The article considers the pricing and inventory problem for the same class of products, assuming possibilistic fluctuations of the input parameters. If the optimality conditions are fuzzified, then the randomness on the profit function becomes worthwhile to study.

3. PGSS design

The PGSS is a computer-based information system designed to support a group of experts in the process of their perception and cognizance of fuzzy concepts regarding a specific decision domain, towards the definition of a common possibilistic outcome. This article proposes and demonstrates a prototype of the PGSS for the pricing and inventory policy problem.

The PGSS, as depicted in Figure 1, consists of a user-system dialog subsystem (USDS, or just user subsystem) and an expert system dialog subsystem (ESDS, or just expert subsystem). The user subsystem is an interactive computer program that assists the user in submitting a realistic input vector to the PGSS. The expert subsystem is an interactive computer program that assists various experts, from the different functional areas affecting pricing and inventory decisions, in organizing, processing, and making their subjective judgment available to inventory control users.


Figure 1: PGSS Design

4. Randomness management and possibilistic inference

Randomness is traditionally represented using additive probability distributions, in which the probability figures are distributed over all singletons of the universe of discourse. On the other hand, when experts do not hold sufficient knowledge about the domain, their probability estimates can only be assigned to some subsets of the universe. Ignorance of a given concept can put the expert in a position where he or she can neither support nor reject the concept; the sum of the probabilities assigned to the concept and to its negation is therefore less than 1. This nonadditive property of the probability measure is incorporated in Shafer's theory of basic probability assignment [1; 5; 6; 9].

Plausibility and belief measures are associated with a function called the basic probability assignment m, defined as follows:

m: 2^U ---> [0, 1]

m(Ø) = 0 and Σ_{A⊆U} m(A) = 1.

The value m(A) represents the degree of belief that a specific element of U belongs to the set A, but not to any particular subset of A.

The belief measure Bel is defined in terms of the basic assignment m as follows:

Bel: 2^U ---> [0, 1]

Bel(A) = Σ_{B⊆A} m(B).

[Figure 1 (PGSS Design) components: a group of experts (functional areas) and users (inventory control); an exact reasoning model base; user judgement validation; a validation fuzzy concept base; a basic probability assignment base; a falling shadow base; a choice fuzzy concept base; the possibilistic inference process; and the resulting enhanced pricing and inventory policy.]


The plausibility measure Pl and the randomness Ran are defined in terms of the basic probability assignment as follows:

Pl, Ran: 2^U ---> [0, 1]

Pl(A) = Σ_{B∩A≠Ø} m(B)

Ran(A) = Pl(A) - Bel(A)

Figure 2: Basic probability assignment

The shaded areas in Figure 2 are called the focal elements of the basic probability assignment. In a prudent manner, the belief function takes a minimal amount of probability, since only the focal elements a and b, which are contained in A, are added to Bel(A). In a more optimistic manner, the plausibility value Pl(A) takes into account the probability amounts associated with all subsets intersecting A. The amount of randomness on the basic probability assignment is measured as the difference between the plausibility and the belief functions.
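To make these definitions concrete, the following minimal Python sketch computes Bel, Pl, and Ran for a toy basic probability assignment; the universe and mass values are invented for demonstration only.

```python
# Toy basic probability assignment over U = {1, 2, 3}; focal elements are frozensets.
# The mass values are illustrative only.
bpa = {frozenset({1}): 0.3, frozenset({1, 2}): 0.4, frozenset({2, 3}): 0.3}

def bel(A, bpa):
    """Belief: total mass of focal elements fully contained in A."""
    return sum(m for B, m in bpa.items() if B <= A)

def pl(A, bpa):
    """Plausibility: total mass of focal elements intersecting A."""
    return sum(m for B, m in bpa.items() if B & A)

def ran(A, bpa):
    """Randomness on A, i.e. Pl(A) - Bel(A)."""
    return pl(A, bpa) - bel(A, bpa)

A = frozenset({1, 2})
print(bel(A, bpa), pl(A, bpa), ran(A, bpa))   # 0.7 1.0 0.3
```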

Two problems are encountered in processing possibilistic evidence. While processing basic probability assignments is complicated and costly, basic probability assignments better represent experts' perception and cognizance of fuzzy concepts. In contrast, while compatibility functions are not easy to elicit from human experts, they are quite easy to process. The PGSS therefore employs basic probability assignments as the method for representing experts' estimates and uses compatibility functions in the possibilistic inference process. In this manner, the unwanted features of both methods are avoided.

A compatibility function can express the perception and cognizance of a fuzzy concept by an individual expert. Different experts may show different perceptions and cognizance of the same fuzzy concept and may therefore produce different compatibility functions for it. When this occurs, the system must combine the individual fuzzy concepts to produce a common compatibility function for the fuzzy concept. This article does not propose a direct and mathematically sound technique to combine the experts' compatibility functions of the fuzzy concepts. We instead use a set-valued statistical method called the falling shadow of random subsets to transform expert judgment into compatibility functions [8].

[Figure 2 depicts a universe U containing focal elements a, b, c, d, and e and a subset A, with Pl(A) = a + b + c + d and Bel(A) = a + b.]


The falling shadow method represents an expert's compatibility function for a fuzzy concept as the area coverage that corresponds to each random subset in the basic probability assignment [8]. That is, the perception and cognizance of the fuzzy concept by an individual expert is expressed using a basic probability assignment. After consulting various experts, a collection of basic probability assignments is obtained, and the falling shadows of the various basic probability assignments are then constructed. Because a probability distribution defined on the collection of basic probability assignments is not usually available, the arithmetic average of the falling shadows is used as an approximation to the mean [8]. In fact, for a large group of experts, the arithmetic average of the falling shadows approaches the mean [4; 5]. The overall compatibility function of the fuzzy concept is then obtained directly from the arithmetic average of the falling shadows. For more detailed information on the falling shadow technique, one may refer to [8].
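A minimal sketch of this averaging step is given below; each expert's focal intervals and masses are invented for illustration, and the grid resolution is arbitrary.

```python
import numpy as np

# Each expert's opinion as a basic probability assignment over profit intervals:
# a list of (low, high, mass) triples. The numbers are illustrative only.
experts = [
    [(3105.0, 3120.0, 0.6), (3110.0, 3118.0, 0.4)],
    [(3103.0, 3121.0, 0.5), (3108.0, 3115.0, 0.5)],
]

grid = np.linspace(3100.0, 3125.0, 501)

def falling_shadow(bpa, grid):
    """Falling shadow (contour) of one bpa: at each grid point, the total mass
    of the focal intervals that cover that point."""
    return sum(m * ((grid >= lo) & (grid <= hi)) for lo, hi, m in bpa)

# Overall compatibility function: arithmetic average of the experts' falling shadows.
mu = np.mean([falling_shadow(b, grid) for b in experts], axis=0)
```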

5. Assessment and validation of the input vector

The user first submits a tentative input vector, which is then validated using validation propositions expressed as fuzzy concepts and stored in the validation fuzzy concept base. If it is not realistic, the input vector is transmitted back to the user for refinement and resubmission.

The user expresses his or her judgment using an input vector (a, b0, k, the price elasticity, the quantity discount coefficient, δ1, δ2, δ3, δ4) defined as follows:

Input vector:
a: inventory carrying rate per unit
b0: no-discount unit cost
k: scaling constant
price elasticity
quantity discount coefficient
δ1: profit proportion to total revenue
δ2: ordering cost proportion to total revenue
δ3: inventory holding cost proportion to total revenue
δ4: purchase cost proportion to total revenue

Let v be the variable name of one of the vector components estimated by management. The validation fuzzy concept base includes three fuzzy concepts defined by their fuzzy subsets UNDER (underestimated), REAL (realistic), and OVER (overestimated) and their respective compatibility functions µUNDER, µREAL and µOVER. The compatibility values µUNDER(v), µREAL(v) and µOVER(v) of the managerial estimate of the variable v with the three fuzzy concepts UNDER, REAL, and OVER are then computed and examined. The fuzzy concept that corresponds to Max{µUNDER(v), µREAL(v), µOVER(v)} is the most compatible with managerial judgment. In this manner, the validation subsystem produces the validation status (underestimated, realistic, or overestimated) of managerial judgment concerning each component of the input vector.
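As an illustration of this max-compatibility rule, the sketch below classifies an estimate against three assumed triangular compatibility functions; the shapes and breakpoints are hypothetical and would in practice come from the validation fuzzy concept base.

```python
def triangular(x, a, b, c):
    """Triangular membership function peaking at b with support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def validate(v):
    """Return the validation status of estimate v as the most compatible concept."""
    mu = {
        "UNDER": triangular(v, -0.2, 0.0, 0.3),   # underestimated (assumed shape)
        "REAL":  triangular(v, 0.1, 0.4, 0.7),    # realistic (assumed shape)
        "OVER":  triangular(v, 0.5, 1.0, 1.2),    # overestimated (assumed shape)
    }
    return max(mu, key=mu.get), mu

status, memberships = validate(0.45)
print(status, memberships)   # 'REAL' wins for this illustrative estimate
```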

A sample of the possibilistic propositions stored in the validation concept base is provided in Figure 5, and a sample of the possibilistic choice propositions is provided in Figure 6.

6. Possibilistic system recommendation

If managerial judgment concerning all components of the input vector is compatible with the concept represented by the fuzzy subset REAL, then control is transferred immediately to the possibilistic inference process, following which a pricing and inventory policy is recommended. The user invokes the system for the purpose of determining the pricing and inventory policy that satisfies a predefined goal. The user's goal is expressed in terms of the profit concept 'π≥τ', τ > 0 (τ may be understood as a tolerated minimum profit), in one of the following forms:

Goal:


SOFT (low τ)

MODERATE (moderate τ)

AGGRESSIVE (high τ)

The system provides an extended output vector for the pricing and inventory policy as follows:

Pricing and inventory policy vector:
p: price
q: order quantity
r: revenue
π: profit
Bel(π≥τ)
Ran(π≥τ)

Compatibility functions are obtained from basic probability assignments using their falling shadows. To avoid the complex (and sometimes impossible) processing of basic probability assignments, the inference process combines the compatibility functions of the fuzzy concepts instead. The inference process returns the set of compatibility functions for the fuzzy concepts {'π≥τ', τ>0}.

Multiple basic probability assignments may have the same falling shadow of random subsets. Liang and Song [8] showed that the principle of least ignorance on randomness produces a unique basic probability assignment. In order to compute the randomness on the final system recommendation, the principle of least ignorance on randomness is used to induce a basic probability assignment from the fuzzy concepts {'π≥τ', τ>0}.

Figure 3: Compatibility function of the profit concept 'π≥τ' for various values of the goal parameter τ

The compatibility function of the fuzzy concept 'π≥τ' has no left tail, as shown in Figure 3 for various values of τ. The support set is the interval [π⁻, π⁺]. The interval on which µ = 1 is [π⁻, π⁰], where π⁰ = Min{x: µ(x) < 1}. We therefore use Shafer's consonant belief structure. A consonant belief is characterized by its nested focal intervals I1 ⊆ I2 ⊆ … ⊆ In. Because the plausibility of the union of two intervals Ii and Ij equals the maximum of the subsets' plausibilities, the plausibility measure is a possibility measure. Also, the belief measure is a necessity measure, since Bel(Ii ∩ Ij) equals the minimum of {Bel(Ii), Bel(Ij)}.

As in [9], the fuzzy subset of the payoff concept may be associated with the consonant belief structure I1 ⊆ I2 ⊆ … ⊆ In, and hence:

µ(u) = Σ_{i: u∈Ii} m(Ii) = Pl({u})

The consonant belief and plausibility functions can be reconstructed from the compatibility function of the profit fuzzy concept, treated as a contour function [6], since:

Pl(Ii) = Max_{x∈Ii} Pl({x}) and Bel(Ii) = Min_{x∉Ii} (1 - Pl({x})).


The interval [π⁰, π⁺] is divided into N intervals of equal length, and horizontal rectangles of width δ = (π⁺ - π⁰)/N are constructed as in Figure 4. That is, the focal elements {Ii, 1 ≤ i ≤ N} are such that m(Ii) = 1/N for any i, 1 ≤ i ≤ N. Also, for any subset Ax = [π⁻, x], the belief value Bel(Ax) is the total mass of the focal intervals contained in [π⁻, x], i.e., the fraction of the N focal intervals whose upper endpoint does not exceed x.

[Figure 4 shows the nested focal intervals I1, …, I5 over the profit axis π⁻, π⁰, π⁰+δ, …, π⁺.]

Figure 4: Induced basic probability assignment
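A minimal sketch of this discretization is shown below, assuming a compatibility function with no left tail; the profit bounds, the shape of µ, and N = 20 are placeholders rather than the paper's exact data, so the printed belief value is only indicative.

```python
import numpy as np

def induced_bpa(mu, lo, hi, n=20):
    """Slice a compatibility function mu (no left tail, support [lo, hi]) into n
    horizontal layers; each layer yields one focal interval with mass 1/n."""
    levels = (np.arange(n) + 0.5) / n            # representative level of each layer
    grid = np.linspace(lo, hi, 2001)
    focal = []
    for lev in levels:
        cut = grid[mu(grid) >= lev]              # alpha-cut of mu at this level
        focal.append((cut.min(), cut.max(), 1.0 / n))
    return focal

def belief_up_to(x, focal, lo):
    """Bel([lo, x]): total mass of focal intervals fully contained in [lo, x]."""
    return sum(m for a, b, m in focal if a >= lo and b <= x)

# Assumed bounds and a simple linear right tail for the profit concept.
pi_minus, pi_zero, pi_plus = 3102.84, 3111.0, 3121.24
mu = lambda x: np.clip((pi_plus - x) / (pi_plus - pi_zero), 0.0, 1.0)

focal = induced_bpa(mu, pi_minus, pi_plus)
print(belief_up_to(3117.0, focal, pi_minus))
```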

The following concepts taken from the choice fuzzy concept base are defined as follows:

'SECURE(π, τ)': "Make sure that π ≥ τ."
'τ=REAL': "Make sure that τ is realistic."
'ALLOW(π, τ)': "The value of τ can yield the profit π."

The following two propositions are also reproduced from the choice fuzzy concept base, applied to the above fuzzy concepts, and combined together to yield the compatibility function of the profit fuzzy concept:

If τ is realistic and τ allows π, then π is realistic.
If π is realistic and π ≥ τ, then the concept 'π≥τ' is secured.

The fuzzy profit concept depends on the goal concept expressed by the fuzzy subsets SOFT (low τ), MODERATE (moderate τ), and AGGRESSIVE (high τ). The max-min compositional operator is used to process the fuzzy profit concept:

'SECURE(π, τ)' = {'τ=REAL' ∘ 'ALLOW(π, τ)'} ∘ 'π≥τ'

µ'τ=REAL' ∘ 'ALLOW(π, τ)'(y) = Max_x Min{ µ'τ=REAL'(x), µ'ALLOW(π, τ)'(x, y) }

µ'SECURE(π, τ)'(z) = Max_y Min{ µ'τ=REAL' ∘ 'ALLOW(π, τ)'(y), µ'π≥τ'(y, z) }

The PGSS computes {'τ=REAL' ∘ 'ALLOW(π, τ)'} ∘ 'π≥τ' as explained in the following steps. Let πi = π⁻ + i(π⁺ - π⁻)/N.

1. Set up the compatibility values of 'τ=REAL' and 'ALLOW(π, τ)': the row vector µ'τ=REAL' = [µ(π⁻) … µ(πi) … µ(π⁺)] and the matrix µ'ALLOW(π, τ)', whose rows and columns are both indexed by [π⁻ … πi … π⁺].

2. Take the row vector µ'τ=REAL' and match it pairwise with the first column of the matrix. Select the minimum of each pair of this match, and then select the maximum of all elements in the resulting vector.

3. Do the same for the rest of the columns of the matrix. This results in the desired vector µ'τ=REAL' ∘ 'ALLOW(π, τ)'([π⁻ … πi … π⁺]) = (µ1 … µi … µN).

4. Apply steps 1 to 3 using the row vector (µ1 … µi … µN) on the matrix µ'π≥τ', indexed in the same manner.

The resulting vector is in fact µ'SECURE(π, τ)'.
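The two compositions can be carried out as ordinary max-min products of a membership vector with a membership matrix. The sketch below shows the mechanics on tiny, invented matrices; the actual matrices in the PGSS are built from the propositions in the choice fuzzy concept base.

```python
import numpy as np

def max_min_compose(vec, rel):
    """Max-min composition of a fuzzy set (1-D membership vector) with a fuzzy
    relation (2-D membership matrix): out[j] = max_i min(vec[i], rel[i, j])."""
    return np.max(np.minimum(vec[:, None], rel), axis=0)

# Illustrative membership values only (not the paper's matrices).
mu_tau_real = np.array([1.0, 0.8, 0.3])
mu_allow = np.array([[1.0, 0.6, 0.0],
                     [1.0, 1.0, 0.4],
                     [1.0, 1.0, 1.0]])
mu_pi_ge_tau = np.array([[1.0, 0.5, 0.0],
                         [1.0, 1.0, 0.5],
                         [1.0, 1.0, 1.0]])

step1 = max_min_compose(mu_tau_real, mu_allow)       # 'tau=REAL' composed with 'ALLOW(pi, tau)'
mu_secure = max_min_compose(step1, mu_pi_ge_tau)     # composed again with 'pi >= tau'
print(step1, mu_secure)
```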

If the predicted profit level is realistic and the amount of randomness on the profit concept 'π≥τ' is low, then the pricing and inventory policy recommended by the system is adopted. If, however, either the profit is not realistic or the amount of randomness is high, then the user needs to communicate with some of the experts to discuss a possible trade-off between the level of profit and the amount of randomness on the profit concept. At this stage, the managerial goal concepts may be revisited to see whether it is possible to adjust the value of τ so that a softer strategy (SOFT (low τ), MODERATE (moderate τ), or AGGRESSIVE (high τ)) can be considered. The system is reinvoked in the same manner until a final enhanced pricing and inventory policy is accepted.

Validation Propositions:

if δ2 is overestimated, then the price will be overestimated.
if δ2 is overestimated, then the lot size will be underestimated.
if δ3 is overestimated, then the price will be overestimated.
if δ3 is overestimated, then the lot size will be underestimated.
if δ2 is underestimated, then the price will be underestimated.
if δ2 is underestimated, then the lot size will be overestimated.
if δ3 is underestimated, then the price will be underestimated.
if δ3 is underestimated, then the lot size will be overestimated.

Figure 5: Correction rules


Choice Propositions:

if i and r increase, then the optimal price increases.
if i and r increase, then the optimal lot size decreases.
if k increases, then the optimal price increases.
if k increases, then the optimal lot size decreases.
if i and r decrease, then the optimal price decreases.
if i and r decrease, then the optimal lot size increases.
when the quantity discount coefficient is 0, if the ordering cost increases, then the optimal price goes up.
if τ is low and π ≥ τ, then π = π⁻ is realistic.
if τ is moderate and π ≥ τ, then π = (π⁻ + π⁺)/2 is realistic.
if τ is high and π ≥ τ, then π = π⁺ is realistic.

Figure 6: Sample of choice propositions

Consider as an example [4] the demand D = 10^5 p^(-3). The quantity discount function is c = 5q^(0.01). The optimal figures [4] are p = 7.3, q = 13702, and π = 3113.12.
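The sketch below illustrates how such a profit surface could be explored numerically by a brute-force grid search. The component cost formulas (ordering, holding, and purchase costs) and every numeric parameter are assumptions made for demonstration; they are not Lee's exact model, so the output will not reproduce the optimum reported above.

```python
import numpy as np

def profit(p, q, k=1e5, elasticity=3.0, setup=50.0, carry_rate=0.2, b0=5.0, alpha=0.01):
    """Illustrative profit pi(p, q) = r - c - h - b with assumed cost formulas."""
    demand = k * p ** (-elasticity)        # demand as a nonlinear function of price
    unit_cost = b0 * q ** alpha            # assumed quantity-dependent unit cost
    r = p * demand                         # revenue
    c = setup * demand / q                 # ordering cost (EOQ-style assumption)
    h = carry_rate * unit_cost * q / 2.0   # inventory holding cost (assumption)
    b = unit_cost * demand                 # purchase cost
    return r - c - h - b

prices = np.linspace(5.0, 10.0, 200)
quantities = np.linspace(1000.0, 30000.0, 200)
P, Q = np.meshgrid(prices, quantities)
Pi = profit(P, Q)
i, j = np.unravel_index(np.argmax(Pi), Pi.shape)
print(f"best p = {P[i, j]:.2f}, q = {Q[i, j]:.0f}, profit = {Pi[i, j]:.2f}")
```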

Lee's proposed procedure only works for the problem of maximizing a signomial profit function with one

posynomial term. The procedure cannot be of use if multiple products are treated, since the objective function will

have more than one posynomial term. That is, approximation techniques become necessary when the degree of

difficulty increases. Those techniques are usually costly, lengthy, and often not so robust.

An alternative is to apply possibilistic theory, where evidence is processed in a logically sound manner. The possibilistic propositions developed above are induced from the optimality conditions and solution bounds obtained through exact reasoning in [4].

In this example, the profit bounds are computed in [4] as π⁻ = 3102.84 and π⁺ = 3121.24. Suppose that the weights δ1, δ2, δ3, and δ4 are estimated to be .5, .02, .08, and .4, respectively, and that these weight estimates are examined by the validation procedure and found unrealistic. In this situation, the estimates are corrected and resubmitted. The process terminates when the input vector becomes valid given the validation propositions stored in the validation fuzzy concept base.

Let us vary the values of τ to examine the possible trade-off between a higher profit and a lower belief value. We considered the three values τ = 3102 (low), τ = 3117 (moderate), and τ = 3121 (high). The compatibility values of possible profit values x with the fuzzy profit concept 'π≥τ', and the belief values associated with the intervals [π⁻, x], are provided in Table 1 (x ∈ [π⁻, π⁺]).

The higher the value of τ, the more difficult the realization of a profit π greater than τ becomes. The belief values for the same subset of profit values decrease when τ goes up. As illustrated in Table 1, the softer the pricing and inventory policy, the higher the belief value for any subset [π⁻, x]. For example, for a fixed belief value of .5, the maximum profit values associated with this belief when τ is low, moderate, and high are 3111, 3116, and 3118.5, respectively. That is, the aggressive policy yields a higher profit for a given fixed belief value.


Let us now consider a higher belief value, say 0.7: the maximum profit values associated with this belief when τ is low, moderate, and high are 3115, 3118, and 3119.5, respectively. With the higher belief value, the maximum profit values all increase, and the aggressive policy again yields a better profit.

Furthermore, let us fix the maximum profit x at 3117; Table 1 shows that the belief values for the subset [π⁻, x] under the soft, moderate, and aggressive policies are .80, .60, and .20, respectively. If we increase the profit x to 3119, the belief values for the subset [π⁻, x] under the soft, moderate, and aggressive policies become .90, .80, and .60, respectively. In both cases, the softer the policy, the higher the belief value for a given profit. It is therefore important that the pricing and inventory manager thinks of a trade-off between a higher belief value (for π ≥ τ) and a lower profit, according to the relationship structure explained above.

Table 1: Trade-off between higher beliefs and lower profits

PROFIT π    LOW τ                   MODERATE τ              HIGH τ
            Membership    Bel       Membership    Bel       Membership    Bel
3102        1.000000      0.05      1.000000      0.00      1.00          0.00
3103        0.947368      0.10      1.000000      0.00      1.00          0.00
3104        0.894736      0.15      1.000000      0.00      1.00          0.00
3105        0.842105      0.20      1.000000      0.00      1.00          0.00
3106        0.789473      0.25      1.000000      0.00      1.00          0.00
3107        0.736842      0.30      1.000000      0.00      1.00          0.00
3108        0.684210      0.35      1.000000      0.00      1.00          0.00
3109        0.631578      0.40      1.000000      0.00      1.00          0.00
3110        0.578947      0.45      1.000000      0.00      1.00          0.00
3111        0.526315      0.50      1.000000      0.00      1.00          0.00
3112        0.473684      0.55      1.000000      0.10      1.00          0.00
3113        0.421052      0.60      0.888888      0.20      1.00          0.00
3114        0.368421      0.65      0.777777      0.30      1.00          0.00
3115        0.315789      0.70      0.666666      0.40      1.00          0.00
3116        0.263157      0.75      0.555555      0.50      1.00          0.00
3117        0.210526      0.80      0.444444      0.60      1.00          0.20
3118        0.157894      0.85      0.333333      0.70      0.75          0.40
3119        0.105263      0.90      0.222222      0.80      0.50          0.60
3120        0.052631      0.95      0.111111      0.90      0.25          0.80
3121        0.000000      1.00      0.000000      1.00      0.00          1.00

7. Conclusion

The article considered the retailer pricing and inventory problem when possibilistic fluctuations of product parameters are controlled by a set of possibilistic optimality conditions. Experts in various functional areas (for example, the purchase manager, the sales manager, the inventory manager, suppliers, and marketing and finance managers) convey their subjective judgment to the Possibilistic Group Support System (PGSS) in the form of analytical models (for product parameter estimation), fuzzy concepts (facts), and possibilistic propositions (for validation and choice procedures).


In order to avoid the complexity of combining basic probability assignments, and in order to reduce the arbitrariness of compatibility functions as a method for representing experts' opinions as fuzzy concepts, the PGSS reverses the roles of basic probability assignments and compatibility functions: experts' opinions are represented using basic probability assignments, which are then transformed into falling shadows and then into compatibility functions of fuzzy concepts. Possibilistic evidence is therefore processed as compatibility functions (not as basic probability assignments). At the end of the process, the possibilistic recommendation (the fuzzy profit concept) is rewritten as a basic probability assignment using the principle of least ignorance on randomness.

The PGSS also allows the user (inventory control) to examine a trade-off between the belief value of a greater profit and a lower amount of randomness associated with it. Managerial pricing and inventory strategy is controlled using three fuzzy concepts expressing whether management is acting softly ('π≥τ', low τ), moderately ('π≥τ', moderate τ), or aggressively ('π≥τ', high τ). Management can soften their strategy and reinvoke the PGSS until a final system recommendation becomes satisfactory.

References

1. Dubois, D., Fuzzy Set Connectives as Combinations of Belief Structures, Information Sciences, 66, 245-275, 1992.
2. Duffin, R.J., Peterson, E.L. and C. Zener, Geometric Programming: Theory and Applications, Wiley, New York, 1967.
3. Goodman, I.R., Fuzzy Sets as Equivalent Classes of Random Sets and Possibility Theory, Pergamon Press, 1982.
4. Goodman, I.R. and H.T. Nguyen, Uncertainty Models for Knowledge-based Systems, North-Holland, New York, 1985.
5. Gonzalez, A. and Vila, M.A., Dominance Relations on Fuzzy Numbers, 64, 1-16, 1992.
6. Klir, G.J., Where Do We Stand on Measures of Uncertainty, Ambiguity, Fuzziness, and the Like?, Fuzzy Sets and Systems, 24, 141-160, 1987.
7. Lee, W.J., Determining Order Quantity and Selling Price by Geometric Programming: Optimal Solution, Bounds, and Sensitivity, Decision Sciences, 24, 1, 76-88, 1993.
8. Liang, P. and F. Song, Computer-Aided Risk Evaluation System for Capital Investment, 22, 4, 391-400, 1994.
9. Shafer, G.A., A Mathematical Theory of Evidence, Princeton University Press, N.J., 1979.
10. Zadeh, L.A., Fuzzy Sets as a Basis for a Theory of Possibility, Fuzzy Sets and Systems, 1, 3-28, 1978.
11. Zadeh, L.A., A Theory of Approximate Reasoning, Machine Intelligence, 9, 149-194, 1979.
12. Zahedi, F., Intelligent Systems for Business, Wadsworth Inc., 1993.


Saudi Arabia's Economic Diversification: A Case Study in Entrepreneurship

Kimanthi Ali Thompson, Prince Mohammad Bin Fahd University

Dalal Thair Al-Aujan, Prince Mohammad Bin Fahd University

Roaa AL-Nazha, Prince Mohammad Bin Fahd University

Sara Al Lwaimy, Prince Mohammad Bin Fahd University

Sumayah Al-Shehab, Prince Mohammad Bin Fahd University

Abstract

The Saudi Arabian economy is primarily dependent on a natural resource that is expected to be depleted within the next 20 years. To date, 75% of all Saudi Arabia's revenues are generated from oil and gas exports, so for the Saudi economy to reach its goal of sustainability, diversification will play a critical role. Forty years ago, Saudi Arabia's leaders developed what has become a series of five-year economic development plans aimed at achieving diversification by creating new business within major industry sectors that include communications, economic, health, housing, human resources management, municipal, and transportation. The KSA Ninth Development Plan (2010-2014) is a spending initiative worth SR1,444bn (US$385.2bn), and if successfully implemented the plan will realize an annual GDP growth rate of 5.2% over its five-year life span.

Private sector growth will be the main driver of the economic diversification of Saudi Arabia's economy. To date, the majority of new business start-ups in Saudi Arabia have come in the form of franchises derived mainly from existing U.S. business models. Although franchising provides a quick, one-stop solution for establishing a business, its practice does not provide an adequate foundation for the economic sustainability of a country. In order for Saudi Arabia to achieve economic diversification, true entrepreneurship must begin within the Kingdom, where new businesses are created based on innovation, technology, and the use of Saudi Arabia's valuable resources.

Keywords: Diversification, Economic, Entrepreneurship, Franchising, GDP Growth, Health Services, Human Resources, Innovation, KSA Ninth Development Plan, Middle East, Oil, Petroleum, Saudi Arabia, Sustainability, Technology, U.S.

1. Introduction

Oil was discovered in Saudi Arabia during the 1930s, and since then Saudi Arabia has grown into the world's largest producer and exporter of petroleum, with the second largest proven reserves (OPEC, 2010). As a result of abundant oil, Saudi Arabia's economy has been on a continuous path of transformation and development that makes the country one of the fastest growing economies in the world (Economy of Saudi Arabia, 2011). However, this transformation, mostly predicated on a limited resource, has not progressed as naturally as in most developing nations. The Kingdom's quick rise and huge wealth have been built largely without a sustainable foundation, and unless Saudi Arabia's leadership can find ways to decrease its dependency on oil-based products and services, the country will eventually lose 75% of its petroleum-based export revenues. Therefore, developing sustainable strategies to diversify the Saudi economy is a crucial and immediate mission of the government (Affairs, 2011).


In a recent interview, Abdel Salam Al-Suhaimi, a public affairs official for the Saudi Electricity Company, stated that his country's oil supply could be depleted by 2030 (Al-Suhaimi, 2011). Furthermore, oil prices continuously fluctuate in response to global economic and political changes. For example, in 2009 global economic growth declined, demand for energy decreased as a result, and this led to a sharp reduction in oil prices. Oil-exporting countries like Saudi Arabia were considerably impacted by this reduction, and as a result of the drop in oil revenues the economy slowed (Ninth Development Plan, 2011). Ongoing economic development plans originally established to protect the Saudi economy during turbulent times have historically had little effect, because to date the sale of oil still accounts for 80% of Saudi Arabia's national income (Al-Suhaimi, 2011).

2. KSA Ninth Development Plan (2010 – 2014)

The latest development plan, the KSA Ninth Development Plan (2010-2014), is a spending initiative worth SR1,444bn (US$385.2bn) that aims at realizing average annual GDP growth of 5.2% (Ninth Development Plan, 2011). The growth in GDP is expected to increase GDP per capita income from SR46,200 (about US$12,300) in 2009 to around SR53,200 (about US$14,200) in 2014 (Ninth Development Plan, 2011). The primary contributor to this growth will be the non-oil private sector, which the government expects to grow 6.6% per year on average during the five years, taking its share of GDP to 61% from 48% ("Saudi Arabia GDP growth", 2010). The government has allocated SR137.6bn (US$36.7bn) to be spent on Human Resources Development and SR9bn (US$2.4bn) to be spent on Educational Development. These spending plans include building community colleges and more career training institutes, as well as additional public schools and technological facilities (Global Education, 2010). These types of spending initiatives will help ensure the availability of a highly skilled and motivated Saudi workforce in the future; however, more focused entrepreneurship is still needed.

Other plan sectors might provide the best catalyst for future entrepreneurial development within the Kingdom of Saudi Arabia. These sectors include social and health services, economic resources, transportation and communication, and municipal and housing-related services. One example of a new business concept within Saudi Arabia stems from its cultural heritage of large family units living in the same household, combined with improvements within the healthcare industry. As the generation that initially benefited from the discovery of oil begins to age, new business opportunities will arise within the Kingdom in the form of health care services. Innovative entrepreneurs can capitalize on this opportunity by creating businesses within the health services sector that will care for this generation and provide medical attention for its specific needs. These types of healthcare businesses include home health care services, retirement communities and assisted living services, and the medical supply and equipment companies that will be needed to service them.

3. Entrepreneurship


Using positive examples of Western capitalism, the Saudi government has taken steps to decentralize the economy, and as a result the entrepreneurial spirit has begun to take shape within the Kingdom. Evidence of this transformation can be seen in the number of recognizable brands already located within Saudi Arabia. Over the past five years, franchising has grown tremendously and many brand names are already well entrenched in the market. Industry sources state that fast food franchises already account for more than 60% of the total Saudi franchise market ("Saudi Arabia Franchise Statistics", 2010). American firms have the lion's share, with more than 70% of all franchised operations in Saudi Arabia, spanning fast food, clothing outlets, hotels, car leasing, laundry services, and printing ("Saudi Arabia Franchise Statistics", 2010).

Although private business ownership is not new to the Kingdom, until now it has primarily been focused on franchising. This has resulted in a largely undiversified economy that is not contributing to the 9th Development Plan's objective, which is sustainability through economic diversification. As such, true innovation in the form of entrepreneurship must be the key driver of economic diversification. One example of innovation is using Saudi Arabia's naturally hot climate to power a desalination plant. King Abdulaziz City for Science and Technology (KACST) is currently building what will be the world's largest solar-powered desalination plant in the city of Al-Khafji, Saudi Arabia. Once complete, the plant will use a new kind of concentrated solar photovoltaic (PV) technology and new water-filtration technology, which KACST developed jointly with IBM. The plant will produce 30,000 cubic meters of desalinated water per day, meeting the needs of 100,000 people (Patel, 2012). This example shows how true entrepreneurship can eventually decrease the need to import while producing a product or service, through the use of new technology, that can one day be exported to other countries.

4. Conclusion

For over half a century, American manufacturing dominated the globe through new technology creation and leading innovation. Today, the decline in U.S.-based manufacturing, the primary benefactor of America's innovation dominance, has led to a decline in the U.S. economy and the displacement of disposable income. It is this displacement that has left a void within the international marketplace where products and services are bought and sold and technology and innovation are shared. The Middle East, with its vast oil reserves, is mostly sheltered from the economic disparity afflicting many nations today. These oil reserves have created the catalyst for a new industrial revolution taking place within many Middle Eastern countries, like Saudi Arabia.

The majority of research concerning the U.S., Saudi Arabia and the Middle Eastern is based

on Oil & Gas. The next wave of innovation in the form of entrepreneurship must come from

countries how the economic ability and marketplace to sustain future development have.

Doing Business 2011 data for Saudi Arabia shows that, out of 183 economies, Saudi Arabia ranks 1st in registering property, 6th in paying taxes, 13th in starting a business, 14th in dealing with construction permits, 16th in protecting investors and 18th in trading across borders (World Bank Group, n.d.). These statistics show an economy that is becoming less dependent on oil revenues and more focused on non-oil diversification.

References


1. Affairs, B.O. (2011, May 6). Saudi Arabia. Retrieved October 15, 2011, from U.S. Department of State: http://www.state.gov/r/pa/ei/bgn/3584.htm

2. Al-Suhaimi, A.S. (2011, December 24). (Al-Aujan, D.T., Interviewer)

3. Economy of Saudi Arabia. (2011, October 7). Retrieved October 9, 2011, from Wikipedia: http://en.wikipedia.org/wiki/Economy_of_Saudi_Arabia

4. Education in Saudi Arabia. (2011, December). Retrieved December 2011, from Wikipedia: http://en.wikipedia.org/wiki/Education_in_Saudi_Arabia#cite_note-20

5. Ninth Development Plan. (2011). Retrieved December 10, 2011, from Ministry of Economy and Planning: http://www.mep.gov.sa/index.jsp%3bjsessionid%3d809DB039138CE6C654F00EB6CE95FAEB.beta?event=ArticleView&Article.ObjectID=79

6. OPEC Share of World Crude Oil Reserves. (2010). Retrieved December 2011, from OPEC: http://www.opec.org/opec_web/en/data_graphs/330.htm

7. Patel, P. (2012). Solar-Powered Desalination. Technology Review/MIT. Retrieved from http://www.technologyreview.com/energy/25010/

8. Saudi Arabia Franchise Statistics. (2010, October 27). Retrieved from http://www.franchiseek.com/saudi_arabia/franchise_saudi_arabia_statistics.htm

9. World Bank Group. (n.d.). Retrieved from http://www.doingbusiness.org/data/exploreeconomies/saudi-arabia/


How to Effectively Manage IT Project Risks

Bradley Sean Susser, Pace University, NY

Abstract

Although project management in its more contemporary form has continued to evolve over the last 50 years, it remains plagued with Information Technology (IT) risks and pitfalls. The evolution of project management has indeed been helpful to governments and companies over the years in organizing work around projects that span multiple industries, by providing better standards, policies, procedures, tools and techniques that have allowed many to acquire knowledge in the areas of project scope management, project time management, project cost management, project quality management, human resource management, project communications management, project risk management and project procurement management.

Despite all these improvements, however, IT projects continue to be renowned for their high rates of failure. This is clearly evident in empirically backed research such as the Standish Group's 2009 CHAOS study, which demonstrated a decrease in project success rates: only 32% of all projects succeeded, meaning they were delivered on time, on budget, and with the required features and functions. In contrast, 44% of projects were challenged (late, over budget, and/or delivered with less than the required features and functions) and 24% failed (cancelled prior to completion, or delivered and never used) [The Standish Group (Oct. 2009)]. It must be noted that in the CHAOS Manifesto 2011, the Standish Group showed a marked increase in project success rates from 2008 to 2010 [The Standish Group (Oct. 2011)], but in 2011 PM Solutions Research also released a report called Strategies for Project Recovery, in which they followed 163 companies split between small, medium, and large organizations [PM Solutions (2011)]. On average, respondents managed $200 million in projects each year, of which approximately 37 percent were at risk. The average company in the study therefore faced $74 million of at-risk projects each year. The last two reports affirm that organizational project risk profiles are still quite high and remain a key challenge in today's environment.

Therefore, in this paper we provide a brief history of the evolution of project management, the most common reasons projects fail, a detailed case study of a well-known project failure, and solutions for how to effectively manage and mitigate risks in IT projects. We also incorporate an opinion from a highly recognized project management consulting firm on an evolving risk management approach, and then conclude by offering an added opinion on how to attain desirable outcomes.


1. Introduction

Innovation and speed in IT have created considerable advancements in the last several decades; however, as an increasing number of governments and organizations around the globe move away from centralized system models toward more distributed mediums, the complexity of businesses has been growing, and the need to optimize the implementation of project management approaches has become of even greater significance. Although IT projects have been the primary focus of project management, its roots go back to the late nineteenth century, focusing at that time primarily on government initiatives. We discuss project management in its historical context because several of the core principles, methodologies, tools and techniques that comprise project management are not only essential in improving the success of a project but, if applied properly, also help mitigate risks while maximizing the potential for project success. It was in this period of the late nineteenth century that the initial groundwork in the area of project management was said to be first formulated. In the United States, for example, the first extensive government project was the transcontinental railroad, which began construction in the 1860s during the industrial age, whereby industry luminaries were confronted with the intimidating task of organizing the manual labor of thousands of workers and the processing and assembly of exceptional quantities of raw material [Microsoft Corporation (No Date)].

Near the turn of the twentieth century, Frederick Winslow Taylor (1856–1915), an American industrial engineer, was one of the first to begin detailed studies of work, devising a system he coined Scientific Management to determine the optimum means for carrying out a task in the smallest amount of time by shifting knowledge of production from the workers to the managers. He applied scientific reasoning to work, showing that labor can be analyzed and improved by breaking up industrial production into very small and highly regulated steps, which required workers to obey the instructions of managers concerning the proper way to perform very specific actions. Taylor's theory was primarily applied in steel mills, to tasks such as shoveling, lifting and moving parts. Prior to this time the only way to augment productivity was to require individuals to work laboriously by putting in long hours. Taylor changed all that by introducing the concept of working more efficiently rather than working harder and longer; his theories thus determined what was considered the very best way to perform these specific, isolated tasks. In 1887 Henry Gantt (1861–1919), a mechanical engineer, partnered with Frederick W. Taylor to leverage the theory of scientific management at Midvale Steel and Bethlehem Steel, where they worked together until 1893 [Roebuck, K. (May 2011)].

Gantt studied in great detail the order of operations in work. His studies of management focused on navy ship construction during World War I, and his Gantt charts, first conceptualized in 1917, remain one of the most frequently used project scheduling and progress assessment tools to date. This was in fact the first quantitative technique of project management in the area of schedule risk analysis. Although it has been refined over the years, in simple terms the chart can be described as a horizontal bar chart that illustrates project tasks against a calendar: each bar represents a project task, tasks are listed vertically in the left-hand column, and the horizontal axis represents a calendar timeline. Tasks can overlap one another by being carried out at the same time, and bars can be shaded to indicate progress and percentage completion, depicting which tasks are ahead of or behind schedule and providing further guidance for mitigating the potential for Scope Creep. Microsoft Office Project has improved upon Gantt's original work over the years, but it was Gantt who provided the initial foundation that is now incorporated into Microsoft's widely used project software.
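To make the layout concrete, the following is a minimal sketch in Python using matplotlib; the task names, dates and completion fractions are illustrative assumptions, not data from this paper.

    import matplotlib.pyplot as plt
    import matplotlib.dates as mdates
    from datetime import date

    # Hypothetical tasks: (name, start, finish, fraction complete)
    tasks = [
        ("Requirements", date(2012, 1, 2),  date(2012, 1, 20), 1.00),
        ("Design",       date(2012, 1, 16), date(2012, 2, 10), 0.60),
        ("Build",        date(2012, 2, 6),  date(2012, 3, 30), 0.25),
        ("Test",         date(2012, 3, 19), date(2012, 4, 27), 0.00),
    ]

    fig, ax = plt.subplots()
    for row, (name, start, finish, done) in enumerate(tasks):
        width = mdates.date2num(finish) - mdates.date2num(start)
        # Light bar = planned duration; darker overlay = shaded portion showing completion.
        ax.barh(row, width, left=mdates.date2num(start), color="lightsteelblue")
        ax.barh(row, width * done, left=mdates.date2num(start), color="steelblue")

    ax.set_yticks(range(len(tasks)))
    ax.set_yticklabels([t[0] for t in tasks])   # tasks listed vertically on the left
    ax.invert_yaxis()                           # first task at the top
    ax.xaxis_date()                             # horizontal axis is the calendar timeline
    ax.set_xlabel("Calendar date")
    plt.tight_layout()
    plt.show()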


Both Taylor and Gantt's initial works were clearly revolutionary at the time, as they helped establish a prerequisite for good project management, namely a well-defined development process, making project management an unequivocal business application. In the years leading up to World War II, marketing approaches, industrial psychology, and human relations began to take hold as integral parts of project management. During World War II, complicated government and military projects and a diminishing wartime labor supply created the need for new organizational structures. This led to the further evolution of project management when the U.S. Navy in the 1950s first developed the Program Evaluation and Review Technique, whose acronym is PERT, while working on the Polaris missile project during the Cold War era [Johnson, S. B. (March 2002)]. Sometimes referred to as a network diagram, the PERT chart lists the specific activities that make up a project and the activities that must be completed before a specific activity can start. In more detail, the chart consists of a number of nodes that represent project tasks; each node, which can be depicted as either a circle or a rectangle, is numbered and shows the task, its duration, the starting date and the completion date. The direction of the arrows on the lines that connect the nodes indicates the order of tasks and shows which activities must be completed before another activity may begin. One of the primary functions of PERT charts was to address issues related to costs.
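A minimal sketch of the forward pass implied by such a network, in Python; the task names, durations and dependencies are assumed for illustration only.

    # Hypothetical task network: name -> (duration in days, list of predecessors)
    tasks = {
        "A": (3, []),          # e.g. requirements
        "B": (5, ["A"]),       # design
        "C": (2, ["A"]),       # procurement
        "D": (4, ["B", "C"]),  # build, can start only after B and C finish
    }

    earliest_finish = {}

    def finish(name):
        # Earliest finish = latest earliest-finish among predecessors, plus own duration.
        if name not in earliest_finish:
            duration, preds = tasks[name]
            start = max((finish(p) for p in preds), default=0)
            earliest_finish[name] = start + duration
        return earliest_finish[name]

    for name in tasks:
        finish(name)

    print(earliest_finish)                              # {'A': 3, 'B': 8, 'C': 5, 'D': 12}
    print("Earliest completion:", max(earliest_finish.values()), "days")

The same precedence information drawn as nodes and arrows in a PERT chart is, in code, simply the predecessor lists; the longest chain through them gives the earliest possible project completion.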

In the early 1960s, organizations around the world began to seek out new management strategies and applied the approaches and techniques described above to help businesses better cope with the rapid expansion and changing business environment that spanned all industries worldwide. It was in this period that project management came to be viewed as an essential approach that all organizations and governments needed to make use of, and it began to form the contemporary foundation, now embedded in society, on which businesses continue to exist and flourish. Many of these techniques can and should be applied to minimize organizational project risks while increasing profits, in order to gain a competitive edge in today's overall marketplace.

2. Related Literature

Significant analysis of project success and failure rates has been collected over time and well documented, so to begin with, here is a brief summary of some of the studies deemed appropriate in this area. A 2008 white paper written by Kathy Ellis of IAG Consulting (www.iag.biz), titled "Business Analysis Benchmark: The Impact of Business Requirements on the Success of Technology Projects," included surveys of over 100 companies with an average project size of $3 million, and it was certainly a wake-up call [Ellis, K. (2008)]. The survey measured the current ability of organizations to perform business requirements and evaluated the underlying causes of poor-quality requirements. Firms with inadequate business analysis capability were found to have three times as many project failures as successes, and 68 percent of the companies were less likely to succeed based on the way they approach business analysis. Additional findings showed that 50 percent of this group's projects were "runaways": taking over 180 percent of target time to deliver, consuming in excess of 160 percent of the estimated budget, delivering under 70 percent of the target required functionality, and paying a premium of as much as 60 percent on time and budget when poor requirement practices were used. Over 41 percent of the IT development budget for software, staff and external professional services was said to be consumed by poor requirements at the average company using average analysts versus the optimal organization.


IAG found that organizations using best requirements practices will estimate a project at $3 million and, better than half the time, will spend $3 million on that project, including all failures, Scope Creep, and mistakes across the entire portfolio of projects. This group spends, on average, $3.63 million per project, while firms using poor requirement practices pay on average $5.87 million per project due to excessively high time and budget overruns. In 2009, as described in the abstract above, the CHAOS report issued by the Standish Group International found that 68 percent of projects either failed (24 percent) or were challenged (44 percent) [The Standish Group (Oct. 2009)]. Standish also stated that 38 percent of projects between $750,000 and $3 million have a chance at success, but when the cost of a project exceeded $10 million there was only a 2 percent chance of success. In an article titled "Making Change Work," a survey of 1,500 change management executives issued by IBM in October of 2008 discovered that 44 percent of all projects failed to meet time, quality and budget goals, while 15 percent were either halted or did not meet all of their objectives [Jorgensen H., Owen L., Neus A. (Oct. 2008)].

Finally, in a July 2008 review of federally funded technology projects, the U.S. Government Accountability Office ascertained that 49 percent of federal IT projects were inadequately planned, inadequately performing, or both [Powner, D. (July 2008)]. The question then arises: if project failures still persist, how can we increase the chances for success? In essence, that is what the rest of this paper aims to accomplish. We provide the reasons why projects fail so that you can determine, through this assessment, what not to do. We also closely evaluate a major project case study, describing many of the variables that adversely impacted that particular project, and finally we offer a clear-cut outline, through various methodologies, approaches and techniques, of how to properly mitigate and manage risk, including what is already recognized throughout project management as the six major processes involved in risk management and the evolving Committee of Sponsoring Organizations of the Treadway Commission's (COSO) Enterprise Risk Management (ERM) framework, so that the chances of a project being successful are increased substantially.

3. The Most Common Reasons Projects Fail

From the figures provided by some of the research described above, we ascertained some of the pitfalls projects face, but going further, we have compiled a more detailed summary to discern why projects fail. One of the first reasons is that project sponsors are often not devoted to the project's objective, are not actively involved in the project strategy, and have an insufficient comprehension of the overall project [Progress, Project (2008)]. It is also unfortunate that a multitude of projects do not meet the strategic vision of the company; if business requirements are not precisely defined, a project may fail to add value to the bottom or top line or to improve business processes. Remember that IT projects must align with overall business objectives. Another issue is that projects commence for the wrong reasons, as some begin solely to implement new technology without any concern for whether the technology accommodates organizational business requirements. The opposite of this is a project that does not support existing technology, which develops extensive Scope Creep and results in additional capital expenditures.


The work breakdown structure, which is also part of the project scope management knowledge area, may be used inefficiently, for example by not administering enough dedicated staff allocated to the project, and team members may have limited experience and lack the required qualifications. Insufficient experience can also cause project teams to take shortcuts, skipping steps to catch up to schedule and make up costs.

Communication and collaboration among all stakeholders is also sometimes insufficient, and this can certainly cause a project to fail. Another potential risk is that executives and supervisors believe they will be able to succeed in leading a project but are rarely available, so the focus is not on project delivery but on the project manager's own contentment and time management. Another project pitfall is an incomplete project scope definition, which does not spell out a project's benefits and the deliverables that will produce them. One may assume that before a project is initiated a plan, and the processes it comprises, would be put in place; unfortunately, this is not always the case. A project plan that is non-existent, out of date, incomplete or inadequately constructed, and on which not enough time and effort is spent, can have a significantly adverse impact on any project.

This can also mean that value analysis is not put to use to compare the baseline costs agreed at baseline transfer against the actual costs spent at any given time, so costs do not form an integral part of the project during execution. Moving on, insufficient funding and incorrect budgeting are still major reasons for projects not delivering their goals and objectives within the required quality framework, because projects always need to deliver yesterday within a specific budget. In addition, premature commitments to a fixed budget and schedule are usually inconsistent with project realities. Given the historical context of how project management came to fruition, it is intriguing that many firms who are aware of how project management came into being still have no established project leadership methodologies or best practices aligned with the company's specific needs to assist project performance. Surprisingly, companies do not want to invest in best-of-breed methodologies that will benefit the bottom line over a specified period, with projects delivered within budget. Remember that methodologies are the foundation of project management, so you would have to wonder why any organization would be incapable of understanding the various methodologies that are rooted in project management. Moreover, companies do not recognize the value of using a methodology to record their own best-practice project results for future reference and to build a knowledge base within the company. Nor do all projects go through a methodical sign-off process using a proper post-project approach to determine lessons learned and to construct a reference model for future use. Even more astonishing, many projects lack good end-to-end testing procedures even when there is a sign-off process, as project managers sometimes fail to engage all the necessary test resources for the final testing ahead of time. Finally, a completion certificate signed off between sponsors and other third parties can demonstrate project success, but even that is quite rare.

The following data shows some of the most dominant risk factors identified by Wiegers (1998), who categorizes these factors by sector:


Project Sector   Risk Factor                         % of Projects at Risk
MIS              Creeping User Requirements          80
                 Excessive Schedule Pressure         65
                 Low Quality                         60
                 Cost Overruns                       55
                 Inadequate Configuration Control    50
Commercial       Inadequate User Documentation       70
                 Low User Satisfaction               55
                 Excessive Time to Market            50
                 Harmful Competitive Actions         45
                 Litigation Expense                  30

Table 1: Most common risk factors for various project types (Wiegers 1998)

Governance can be a separate category all its own, comprising and correlating with many of the reasons projects fail. The UK Government's Office of Government Commerce, together with the National Audit Office, lists eight common causes of project failure, but we will focus on the six that deal primarily with governance-related issues [AON Risk Solutions, (2011)]. The first is a lack of a clear link between the project and the organization's key strategic priorities, including agreed measures of success; the second is a lack of clear ownership and leadership of the project by the organization's governing body; the third is a lack of skills and a proven approach to project management and risk management; the fourth is evaluation of proposals driven by initial price rather than long-term value for money, especially securing delivery of business benefits; the fifth is a lack of understanding of, and contact with, project contractors and service vendors at senior levels in the organization; and the sixth is a lack of project team integration between clients, the supplier team and the supply chain. We cannot emphasize enough that governance is an essential factor in mitigating risks, so we will further discuss why it remains crucial to incorporate the proper governance framework in the sections that follow.

4. Detailed Case Study on Project Management Failure

Case studies are important in depicting real-world events, so we would be remiss not to provide at least one notable case study that offers great insight into a large-scale project failure. We are referring to the construction of one of the most advanced reservation systems in U.S. history, recognized by many as the CONFIRM project. The project was formulated back in 1988 by a consortium consisting of Hilton Hotels, Marriott (NYSE: MAR), Budget Rent-A-Car (NASDAQ: CAR) and American Airlines Information Services (AMRIS), a subsidiary of American Airlines (AAMRQ.PK), almost all of which remain publicly traded except Hilton, which was listed on the New York Stock Exchange until it was acquired by the Blackstone Group for $20 billion in July of 2007 [Cauley, L. (July 2007)]. AMRIS was subcontracted as the managing partner, and Intrico was a newly established organization whose sole responsibility was to run the new system. These organizations teamed up to develop and market what was expected to be the most state-of-the-art reservation system for travel, car rental, and lodging services. Five years later, after numerous lawsuits and millions of dollars in cost overruns, the CONFIRM project was finally cancelled amid grievous accusations from many of the leading executives involved in the project [Oz, E., 1994]. Although the objectives were articulated as achievable in the original requirements document provided by AMRIS, they proved to be quite ambiguous, causing many of the initial requirements to change.


In other words, there was general agreement among the organizations regarding the need for a new system; however, there was no clarity on what the new system's goals and objectives should be in order to satisfy the specific information requirements of the consortium, and the continuous change in requirements resulted in an exorbitant amount of wasteful capital expenditure. This is, of course, a prime example of Scope Creep, which is when the requirements and expectations of a project increase, often without regard to the impact on budget and schedule. All stakeholders require specific responsibilities to be made clear, and, particularly given the disparate backgrounds of the players involved in the CONFIRM project, it was believed that a lack of communication and disorganization further fueled confusion about requirements and design decisions among all project members.

Claims were also made that Intrico heads met only once a month when they should have met much more frequently. Projects that are enormous in scope also tend to have elevated risks and levels of complexity that can discourage even the most competent of teams. For example, the then president of AMRIS is reported to have indicated that "the task of tying together CONFIRM's Transaction Processing Facility-based central reservation system with its decision support system proved to be overwhelming. . . We found they were not integrable" [Halper, M., August 3, 1992]. Since the complexity of the project was so evident, effective coordination was of even greater significance in ensuring the successful completion of the project.

Furthermore, the complexity issue should have been analyzed more meticulously in the early phases of the project life cycle, because costs are significantly higher when major changes to a project are made in the later phases as opposed to the initial phases. In addition, the failure of the database to recover in the event of a crash was, in the words of the VP of Operations, due to the fact that "in the development of the DB2-based decision support system, the company mistakenly implemented a version of Texas Instruments' Information Engineering Facility (IEF) computer-aided software engineering tool in which IEF generates its own database structure."

The VP is also reported to have suggested that for CONFIRM's size, they "should have implemented a version of IEF in which the structure is dictated because the system was so big that what IEF generated would have been impossible to maintain" [Halper, M., Aug. 10, 1992]. The VP of Operations' quotes above are a prime example not only of a lack of coordination, due to mistakenly implementing the IEF tool, but also of the importance of performing efficient analysis in the early stages of a project's lifecycle, which would have enabled the consortium to select a version of IEF in which the structure is dictated and so avoid unwieldy spending. Deficiencies in the structure and organizational objectives of the team's efforts were additionally exacerbated by a lack of clear leadership and active interaction among the parties in the CONFIRM project, causing complications in later phases of the system development lifecycle. This is evident in the allegations AMRIS made in its lawsuit that the other three companies it worked with made poor staffing assignments that crippled the project [Halper, M., October 12, 1992]. By not incorporating an appropriate structure and a phased project lifecycle approach, the CONFIRM project did not enable the project team to recognize what the deliverables for each stage were or to know whether they had been satisfied. Clearly there was a lack of project feasibility phases as well as project acquisition phases.


In fact, the CEO of AMR is quoted as stating in a letter to the other three companies, "The individuals to whom we gave responsibility for managing CONFIRM have proven to be inept. Additionally, they have apparently deliberately concealed a number of important technical and performance problems" [Zellner, W., 1994]. This letter implied that project management failure created an environment where activities were not properly monitored and problems were concealed. But even if the allegations by AMR's CEO were true, the IBM review commission spoke "to the need of more critical review and immediate corrective action by AMRIS management. Not doing so would almost assuredly result in failure" [Zellner, W., 1994]. After all, AMRIS was made 'Managing Partner of Development' for CONFIRM and took on responsibility for all aspects of the design and development of the system. In fact, AMRIS executives initially stated to the consortium that the system would not be expensive to run and would be completed in time to outpace the competition in the hotel and car rental industries; this statement proved to be false.

As with all failures, the problems can be viewed on a number of levels. In its simplest form, the CONFIRM project failed because those making key decisions underestimated the complexity involved. Other contributing factors in the CONFIRM project's debacle include a lack of planning resulting in subsequent changes in strategy; making firm commitments in the face of massive risks and uncertainty; lack of management oversight; poor stakeholder management; communication breakdowns; failure to perform risk management; and the list goes on. The initial cost of the project was estimated at $55.7 million in April 1988 with a completion date of June 1992. It was revised to $72.6 million in September 1989. This trend of escalating project cost continued until the project was canceled in July 1992, after three and a half years and $125 million in costs [Oz, E., Oct. 1994]. Perhaps CONFIRM's failure was a prelude to American Airlines' more recent woes: the airline filed for Chapter 11 bankruptcy in late 2011, and its shares currently trade on the Pink Sheets under the symbol "AAMRQ" at around 49 cents a share [Milford, Phil, Schlangenstein, Mary and McLaughlin, David (Nov. 2011)]. The "Q" at the end of a symbol denotes that a publicly traded company is in the process of bankruptcy.

5. Solutions to Effectively Manage and Mitigate Risks in IT Projects

Mapping the primary activities of each project management process group for each knowledge area is an integral part of project management and can be found in PMI's PMBOK guide, a standard that describes best practices for what should be done to manage a project effectively. Our focus, however, is to take a closer look at one project management knowledge area that should be applied over the whole project life cycle: Risk Management. The discipline of Risk Management has evolved considerably over the years and includes a number of standards and methodologies used to identify risks, measure them, monitor them and ultimately mitigate the overall project risk profile. In this section we will meticulously examine the process of how to best select and implement countermeasures to address an organization's risk requirements.

Risk Management is often overlooked, but it can have a significant effect on the choice of projects, on deciding the scope of projects, and on cultivating pragmatic schedules and cost estimates. It also helps project stakeholders comprehend the nature of the project, involves teams in defining strengths, weaknesses, opportunities and threats via SWOT analysis, and further helps to integrate the other project knowledge areas. All of this can improve a project's chances of success.


Before proceeding to describe in detail how to effectively manage IT project risks, it must be noted that, unfortunately, this knowledge area has been shown to receive less attention than all of the other areas and, furthermore, to be among the least mature. This can be seen in a survey performed by William Ibbs, professor and group leader of the Construction Management program at the University of California at Berkeley, and Young-Hoon Kwak, Ph.D., currently an assistant professor in the Project Management Program at The George Washington University (GWU) [Ibbs, William and Young Kwak, Hoon (March 2000)]. Over a period of two years, the two surveyed four different industries and application areas to collect information on project management practices. A total of 38 large international companies, including private and public sector organizations, participated in the study. The four industries were: engineering and construction (EC); information management and movement (IMM), also known as telecommunications; information systems (IS), also known as software development; and hi-tech manufacturing (HTM). The Project Management Maturity Assessment covered the project management knowledge areas of scope, time, cost, quality, human resources, communications, risk and procurement, weighted on a relative scale of 1 (lowest) to 5 (highest). They discovered that, under their Project Management Maturity Assessment methodology, all companies averaged 3.26 on that scale, which suggests that all areas could use improvement, but the anomaly was in the area of risk: Risk Management's maturity level was the lowest among all eight knowledge areas, and it was the only knowledge area where the overall project management maturity rating was less than 3.

Inefficiencies in risk management can also be seen particularly in the wake of the stock market's 2008 credit crisis, which was caused primarily by a lack of governance and risk management initiatives. When the US Senate Banking Committee asked US Federal Reserve Chairman Ben S. Bernanke what lessons were learned from the economic crisis, he replied, "The importance of being very aggressive and not being willing to allow banks, you know, too much leeway, particular when they're inadequate in areas such as Risk Management" [Wyatt, E. (Feb. 2011)]. The irony of the downturn is that financial institutions bundled up mortgages and sold many institutions on the idea that the housing market had gone up steadily throughout the years and that the risk of any downturn was minimal, to say the least. These mortgages were then insured by many organizations to reduce financial institutions' losses. In particular, many hedge funds and insurance companies were to provide a hedge to these financial institutions by insuring many of the mortgages in case they went into default. Unfortunately, many of the insurers did not have the capital to cover the losses on these defaulted mortgages. This in turn led to insurers having to sell off assets across the entire market spectrum, causing a precipitous drop in all global markets. If the proper countermeasures had been in place due to proper risk management, this might never have occurred. For one, mortgages should have had more stringent criteria in place so as not to be sold to those who evidently could not afford these homes. The no-money-down policies and lack of resources should have been a clear indication that many people could not afford such real estate. Regulators could have prevented this from happening if proper risk management had been in place, so Bernanke's response to the U.S. Senate Banking Committee is ironic to say the least, as regulators also had a fiduciary responsibility to the public at large. Governmental agencies should also have required those that insured these bundled mortgages to have enough capital to cover any losses; however, this was never implemented. The Project Management Maturity Assessment survey above and this brief stock market synopsis are proof of the vital importance risk management plays throughout all industries, organizations and governments.


Before moving ahead with how to best mitigate risk, or how to use knowledge of at-risk projects to our advantage, we must define what Risk Management is and then ask the three fundamental questions that should be addressed for proper risk analysis. Risk Management is the identification, assessment, and prioritization of risks followed by the coordinated and economical application of resources to minimize, monitor, and control the probability and/or impact of unfortunate events, or to maximize the realization of opportunities [Hubbard, Douglas (April 2009)]. The word risk itself comes from the ancient Italian word "risicare," which means to dare [Zweig, J. (April, 2012)]. As described in the definition of Risk Management above, risks can be negative, in which case the response is similar to insurance, where one party undertakes to indemnify or guarantee another against loss from a specified contingency or danger.

In the context of a project, insurance is an activity taken to minimize the impact of a possible threat to the project. In contrast, positive Risk Management can mean investing in opportunities, taking advantage of the risk as opposed to protecting against it. The phrase "the greater the risk, the greater the chance for reward" is clearly indicative of why companies take on risk, depending on their risk appetite, which is the level of risk an organization views as acceptable. This depends on the type of organization and how it conducts business. For example, financial institutions are typically risk averse and conservative, meaning they want a low residual risk and are willing to use whatever capital is necessary to achieve it, while in contrast a retail company with a new clothing line may have a much greater tolerance for risk, as its primary objective is to obtain a competitive edge and, with limited resources, it wishes to spend less on risk controls. When first beginning the Risk Management process, it is a good idea to identify the organization's boundaries of risk assessment, but we must also ask the following questions: How long will this project eventually take? (schedule risk) How much will it finally cost? (cost risk) And will its product perform according to specifications? (performance risk) [GALWAY, L (Feb. 2004)]. After assessing the organization's risk appetite, it is time to implement a Risk Management plan. Risk Management planning is the process of deciding how to approach and administer risk activities for the project. Planning is crucial in establishing the significance of Risk Management, allocating proper resources and time to it, and laying the foundation for analyzing risk. The goal of the Risk Management plan is to determine the strategy for managing project-related risks such that there is acceptable minimal impact on cost and schedule, as well as on operational performance.

The next element is risk identification, an initial and cyclical effort to identify, measure and document risks as they are found. This process is analogous to the detailed risk analysis approach that is a standard in the ISO 13335 series, in which the initial assessment is the identification of assets. A foundational set of risks should be constructed and entered into what is known as a project Risk Register or Risk Log, a document that helps you track issues and address problems as they arise [Staff, CIO (Sept. 2011)]. The Register documents the various risks with their classification, mitigation and handling strategies, impact on cost and schedule, and action items. As stated above, this is a cyclical process: baseline risks should be identified through the normal course of the project planning process, and identification of further risks should be performed throughout the entire project lifecycle. Risk identification tools and techniques include brainstorming, a relaxed, informal approach to problem-solving with lateral thinking in which there should be no criticism of ideas [Schwalbe, K (2011)]. Ideas should only be evaluated at the end of the brainstorming session, which is then the time to explore solutions further using conventional approaches.


Next is the Delphi technique, based on the Hegelian principle of achieving oneness of mind through a three-step process of thesis, antithesis and synthesis. In thesis and antithesis, all participants present their opinions or views on a given subject, establishing views and opposing views. In synthesis, opposites are brought together to form a new thesis. All participants then accept ownership of the new thesis and support it, changing their own views to align with it. Through a continual process of evolution, oneness of mind will supposedly occur. Interviewing is a fact-finding technique for collecting information in face-to-face, phone, email or instant messaging discussions. Finally, there is SWOT analysis, an acronym for strengths, weaknesses, opportunities and threats. Risk Management demands that identified weaknesses and threats be avoided, eliminated or, at the very least, minimized. Weaknesses should be closely scrutinized in order to determine whether or not it is possible to convert them into assets. Similarly, threats should be closely examined for the opportunity of building strength in the areas where they stood, once they have been eliminated. Strengths and opportunities should be closely studied as well in order to maximize their effectiveness.

Project management would be well advised to take advantage of this simple, cost-effective management tool and to make it a fundamental step in the planning process. Additional identification methods include checklists, assumption analysis and diagramming techniques such as flow charts and cause-and-effect diagrams. All of the techniques used during the risk identification process increase collaboration to locate risks before they become problems, set program priorities to arrive at a joint understanding of what is important, and identify new risks and changes. Risk statements should be written for each identified risk in a clear, concise manner, containing only one risk condition and one or more consequences of that condition.

The project manager then ensures that all project stakeholders are responsible for identifying and capturing new risks, which should be added to the Risk Register before the initial project risk kick-off meeting. A Risk Register should record active risks along with the date identified, date updated, target date and closure date. It should also include a unique risk identification number, so that you know whether that risk develops during the project and what its status is at any given time, as well as a description of the risk, the type and severity of the risk, its impact, the possible response action and the current status of the risk [Staff, CIO (Sept. 2011)].
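As a minimal sketch of how such a register entry might be represented in code (in Python; the field names and example values are assumptions for illustration, not a prescribed schema):

    from dataclasses import dataclass, field
    from datetime import date
    from typing import Optional

    @dataclass
    class RiskEntry:
        # Fields mirror the register attributes described above.
        risk_id: int                      # unique risk identification number
        description: str
        risk_type: str                    # e.g. schedule, cost, performance
        severity: str                     # High / Medium / Low
        impact: str
        response_action: str
        status: str = "Open"
        date_identified: date = field(default_factory=date.today)
        date_updated: Optional[date] = None
        target_date: Optional[date] = None
        closure_date: Optional[date] = None

    # The register itself is simply the collection of active entries, keyed by id.
    register = {}

    def add_risk(entry: RiskEntry):
        register[entry.risk_id] = entry

    add_risk(RiskEntry(
        risk_id=1,
        description="Principal hardware supplier may not deliver on time",
        risk_type="schedule",
        severity="High",
        impact="3-week slip on the critical path",
        response_action="Qualify a second vendor",
    ))
    print(register[1].status)   # "Open"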

A Risk Register framework usually consists of three ratings for impact (High, Medium and Low), according to Northrop Grumman Corporation (NOC) [Northrop Grumman Corporation, (Nov. 2007)]. Northrop Grumman, founded in Virginia in 1939, provides technologically advanced, innovative products, services, and integrated solutions in aerospace, electronics, information and services. Northrop's impact categories are assessed by determining the cost of an impact, the scope, the schedule and the quality, all of which are incorporated in its Risk Register; it is a good example of how companies should make use of the register.

NOC describes cost as an impact typically calculated as a dollar amount that has a direct impact on the project. However, cost is sometimes estimated and reported simply as added resources, equipment, and so on. This is true whenever these additional resources will not result in a direct financial impact to the project, because the resources are loaned or volunteered, the equipment is currently idle and there is no cost of use, or there are other types of donations that will not impact the project budget. Regardless of whether there is a direct cost, the additional resources should be documented in the risk statement as part of the mitigation cost. Whenever there is the potential that the final product will not be completed as originally intended, there is a scope impact.


Scope impact could be measured, for representation purposes, as a reduction in the number of tower sites, elimination of trunking for a site, or not providing a back-up power source. It is very important to estimate the schedule impact of a risk event, as this is often the basis for elevating the other impact categories. Schedule delays frequently result in cost increases and may result in a reduction of scope or quality. Schedule delays may or may not impact the critical path of the project and push out the final end date. As an example, a road wash-out at a tower site might delay completion of that site by three weeks, but if another site is scheduled to complete after the delayed site, the three-week delay will not impact the final end date. Finally, quality is frequently overlooked as an impact category, and too often a reduction in quality is the preferred choice for mitigating a risk. Shortcuts and low-cost replacements are ways of reducing cost impacts, but if they are not documented appropriately and approved by the project sponsor, mitigation strategies that rely upon a reduction in quality can result in significant disappointment among the stakeholders [Northrop Grumman Corporation, (Nov. 2007)].

The next step in the process is to perform risk analysis, which means examining identified risks to decide on their probability of occurrence, impact, and timeframe. The analysis step can be performed using either a quantitative or a qualitative approach. Some of this was described in the Northrop Risk Register model, but we will elaborate on these approaches here. While most organizations appear to use a qualitative approach, especially for assessing risks, it is important to recognize the difference between the two, as quantitative analysis more often than not follows qualitative analysis. Before we proceed further, however, it must be noted that P.L. Bannerman, in his studies, discovered that none of the seventeen IT projects he investigated used quantitative risk analysis [Bannerman, P.L., (Dec. 2008)].

Qualitative analysis is a methodology that uses a probability/impact risk level matrix to prioritize the identified project risks using a pre-defined rating scale. Risks are scored based on their probability or likelihood of occurring and on the impact on project objectives should they occur. Probability/likelihood is commonly ranked on a high-medium-low rating or on a zero-to-one scale (for example, 0.3 equates to a 30% probability of the risk event occurring). The impact scale is likewise defined by the organization, for example as a high-medium-low scale, with a high rating having the largest impact on project objectives such as budget, schedule, or quality. Likelihood is used to provide an order of magnitude, which is then updated in the Risk Register, as in the NOC case. Below is what I believe to be an excellent descriptive example of qualitative analysis charts, developed by Hulett & Associates, LLC, Project Management Consultants [LLC, Hulett & Associates (2005)].


Figure 1: The Separation of Risks into High, Medium to Low Rating. Hulett & Associates,

LLC

Figure 2: The Likelihood and Impact of a Risk Event measured between 0.0 (no likelihood) and 1.0 (certainty). Hulett & Associates, LLC


Figure 3: Impact of a Risk Should it Occur on Performance Objectives. Questions and

Associated Ratings Are Constructed. Hulett & Associates, LLC

Figure 4: Impact on Schedule Objective. Hulett & Associates, LLC


Figure 5: Probability-Impact Matrix Ranking Risks into Classes with Red, Yellow and Green Designations of High, Moderate and Low Risks. Hulett & Associates, LLC

Any risk can be classified as high, moderate or low depending on its position in the P-I matrix. Remember, however, that in order to make the most effective use of qualitative analysis, it may be best to create charts for both positive and negative risks [Schwalbe, K (2011)].
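As a minimal sketch of how such a probability-impact classification might be scripted (the zone thresholds below are assumed for illustration and are not values prescribed by Hulett & Associates or PMI):

    def pi_rating(probability: float, impact: float) -> str:
        """Classify a risk from its probability and impact, both on a 0.0-1.0 scale."""
        score = probability * impact
        if score >= 0.18:       # assumed cut-off for the red (high) zone
            return "High"
        if score >= 0.06:       # assumed cut-off for the yellow (moderate) zone
            return "Moderate"
        return "Low"

    risks = {
        "Supplier delay":  (0.20, 0.80),
        "Key staff leave": (0.50, 0.40),
        "Minor rework":    (0.30, 0.10),
    }
    for name, (p, i) in risks.items():
        print(f"{name}: probability={p}, impact={i}, rating={pi_rating(p, i)}")

The same idea generalizes to positive risks by scoring opportunities on a separate chart, as suggested above.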

Quantitative analysis is additional analysis of the highest-priority risks, in which a numerical or quantitative rating is assigned in order to establish a probabilistic analysis of the project. This analysis measures the potential consequences for the project, evaluates the probability of accomplishing distinct project goals, supports judgments where there is ambiguity, and constructs reasonable and attainable cost, schedule or scope targets. In order to carry out a quantitative risk analysis you need high-quality data, a well-constructed project model and a prioritized list of project risks, typically produced by a qualitative risk analysis. Remember, this should only be done if it is worth spending the time and effort analyzing the risk; otherwise, it is better to move from qualitative risk analysis to risk response planning, which is the next step in risk management. Usually this type of analysis is performed on the highest risks of the project in order to investigate them further, so the updated risk list in the Risk Register is the input for quantitative risk analysis.


The numerical quantitative risk data is typically collected by analyzing past project data or by expert judgment. Sometimes numerical data are also used for simulation, and one of the simulation techniques is Monte Carlo analysis. For instance, using Monte Carlo analysis you can simulate executing the project, say, one hundred times and ask: what is the probability of completing the project by a specific date? Similar analysis can be done for risk as well. Software packages such as Oracle's "Crystal Ball" offer a suite for predictive modeling, forecasting, Monte Carlo simulation and optimization to improve the strategic decision-making process. Numerical data also help in using the Decision Tree concept to objectively analyze project risk and impact. Let me again emphasize, however, that this type of analysis should only be done when it is worth doing, which is usually the case when you are working with a complex multi-year project. The output of this process is the quantified list of prioritized risks, and sometimes the amount of contingency reserve in terms of time and cost is also calculated as part of this process [Schwalbe, K (2011)].
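As a minimal sketch of the Monte Carlo idea in Python (the three-point task estimates and the deadline are assumed for illustration; this is not the approach of Crystal Ball or any particular vendor tool):

    import random

    # Assumed three-point (optimistic, most likely, pessimistic) duration estimates in days.
    tasks = [(10, 15, 25), (5, 8, 14), (20, 30, 45)]   # executed sequentially
    deadline = 60                                      # days
    trials = 100_000

    on_time = 0
    for _ in range(trials):
        # Sample each task duration from a triangular distribution and sum them.
        total = sum(random.triangular(low, high, mode) for low, mode, high in tasks)
        if total <= deadline:
            on_time += 1

    print(f"Estimated probability of finishing within {deadline} days: {on_time / trials:.1%}")

Richer models only change how the total is computed (parallel tasks, correlated risks, cost instead of time); the probability estimate is still the fraction of simulated trials that meet the target.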

empirical data shows that such techniques as

Monte Carlo analysis and Decision trees are quite effective but to avoid bogging you down

with details of all of these approaches we will just provide a comprehensive description of

just one, that being the Decision Tree. For example, your project requires you to place a

substantial equipment order but you believe there is a 20% risk that your principal hardware

supplier may be unable to provide all the equipment you need for a large order in a timely

manner [Mochal, T (July. 2008)]. This could be risk A. Two way your options you

correspond with a second vendor to see if they can execute the equipment order immediately

but as luck may have it this vendor who normally has the equipment in stock may have the

possibility of a strike which can cause a plant disruption so you now access this to be a 25%

possibility which is risk B. You need to then do is calculate the total risk for both of these

scenarios. The total risk is calculated by multiplying the individual risks. Since there is a 20%

chance of risk A, and a 25% chance of risk B, the probability that both risks will occur is 5%

(.20 x .25). You can use decision trees to work out the financial implications, so let's closely examine Figure 6.

Figure 6: Decision tree for risks A and B (generated by Tom Mochal)


This decision tree shows risks A and B. Risk A has two outcomes; outcome 1 is 20% likely to

occur, and outcome 2 is 80% likely to occur. The monetary value of risk A is $10,000. If

outcome 1 occurs, a second risk B is introduced, and there are three likely outcomes, 1.1, 1.2,

and 1.3. The monetary value of risk B is $30,000. Using the decision tree, you see that the

financial risks of the various outcomes are as follows:

Outcome 1.1 has a financial risk of $9,500 ($10,000 x .20) + ($30,000 x .25).

Outcome 1.2 has a financial risk of $23,000 ($10,000 x .20) + ($30,000 x .70).

Outcome 1.3 has a financial risk of $3,500 ($10,000 x .20) + ($30,000 x .05).

Outcome 2 has a financial risk of $8,000 ($10,000 x .80).
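These figures can be reproduced with a short calculation; the following is a minimal Python sketch of the probability-weighted (expected monetary value) arithmetic used above, with the probabilities and dollar amounts taken from the example and variable names that are purely illustrative:

# Expected financial risk of each decision-tree branch, reproducing the figures above.
RISK_A_VALUE = 10_000   # monetary value of risk A
RISK_B_VALUE = 30_000   # monetary value of risk B (only reached via outcome 1)

# (probability of the risk A outcome, probability of the risk B outcome or None)
branches = {
    "1.1": (0.20, 0.25),
    "1.2": (0.20, 0.70),
    "1.3": (0.20, 0.05),
    "2":   (0.80, None),   # outcome 2: risk B never comes into play
}

for name, (p_a, p_b) in branches.items():
    exposure = RISK_A_VALUE * p_a + (RISK_B_VALUE * p_b if p_b is not None else 0.0)
    print(f"Outcome {name}: ${exposure:,.0f}")
# Prints $9,500, $23,000, $3,500 and $8,000 respectively.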

So the outcome with the smallest financial risk impact is outcome 1.3. As you can see, a decision tree can help you mitigate risk by enabling you to determine the probability and impact of each risk combination, so that you can make a more informed decision [Mochal, T (July. 2008)].
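Returning briefly to the Monte Carlo analysis mentioned earlier, the idea of "executing the project" many times can be sketched in a few lines; the task names, three-point duration estimates and the 90-day deadline below are hypothetical assumptions for illustration, not taken from the example above:

import numpy as np

rng = np.random.default_rng(42)
N = 10_000   # number of simulated project executions

# Hypothetical optimistic / most likely / pessimistic duration estimates, in days.
tasks = {"design": (10, 15, 25), "build": (30, 40, 70), "test": (10, 20, 35)}

# Assume the tasks run sequentially, so total duration is the sum of sampled task durations.
totals = sum(rng.triangular(lo, mode, hi, N) for lo, mode, hi in tasks.values())

deadline = 90   # days
print(f"Probability of finishing within {deadline} days: {np.mean(totals <= deadline):.1%}")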

Quantitative and Qualitative analysis as used in Project Management can also be illustrated with the following example, which helps to further explain the difference between the two approaches. Risk analysis overlaps many industries and organizations alike, so we have incorporated an example of both methodologies, using a hybrid framework in the area of IT security. Take for example a small banking institution that holds 1,000 records. Assuming

these records were compromised you could then come up with the cost involved with the

compromise. Costs could involve getting in contact with the customers, creating new debit

card numbers for the files, and constructing and reissuing new debit cards. After meticulous examination you arrive at a cost of $40 per record. Since 1,000 records were exploited, you can multiply the number of records by the $40 determined for each compromised record, giving you a monetary cost of $40,000. If the number of records grew to 500,000, you would then assess the cost of a breach to be $20 million. This is a prime example of quantitative analysis in terms that can

easily be understood. Pretty simplistic, except this is only one dimensional.

As the number of records increased, so did the complexity, which is why you must now incorporate a qualitative approach. Extending the above example, an auditor now walks through the door and says that you have 90 days to fix the vulnerability of the system, which has no encryption mechanism between the database and the web server or on the database server itself; the auditor therefore points out that the bank is not in compliance with specific financial standards. Next we take a look at additional vulnerabilities through a code review, in which we discover that our assets are prone to an SQL injection attack (a malicious statement appended to input in order to exploit the system and the data within it).


Hence, there have to be controls in place to filter out such an attack. At this point we have the cost associated with the vulnerabilities in the system, and now the likelihood of the vulnerability being discovered and exploited must be assessed. Using quantitative analysis, the worst-case scenario would be the compromise of 500,000 records, coming to a cost of $20 million as cited above. This quantitative figure is again a one-dimensional evaluation, so we must have a way to assign a risk level to the vulnerability that takes other factors into consideration, such as a high-medium-low rating scale. The information that we've gathered thus far

is as follows: the number of records could range from 1,000 to 500,000; records are valued at $40 each; the data is not encrypted in transit or at rest; multiple business units can access and modify the data; the systems are maintained by the operations group; and lastly, we have an audit requirement to document encryption and apply mitigating controls. Let's incorporate one additional piece into our assessment, which is reputation. Reputation encompasses impact on earnings, consumer confidence, and publicity. We can readily assign a Qualitative risk level of high, as an SQL injection attack is not often detected by system logs and intrusion detection services.
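To make the high-medium-low idea concrete, a qualitative rating can be implemented as a simple lookup on a probability/impact matrix; the labels, matrix entries and final lookup below are illustrative assumptions on our part, not a scale prescribed by the sources cited here:

# A minimal 3x3 qualitative risk matrix: (likelihood, impact) -> rating.
MATRIX = {
    ("low", "low"): "low",       ("low", "medium"): "low",        ("low", "high"): "medium",
    ("medium", "low"): "low",    ("medium", "medium"): "medium",  ("medium", "high"): "high",
    ("high", "low"): "medium",   ("high", "medium"): "high",      ("high", "high"): "high",
}

def qualitative_rating(likelihood: str, impact: str) -> str:
    return MATRIX[(likelihood, impact)]

# The SQL injection scenario above: hard to detect (high likelihood of going unnoticed)
# and up to 500,000 records exposed (high impact) -> rated "high".
print(qualitative_rating("high", "high"))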

Reputation is at risk from going public with a loss of 500,000 consumer records, and once this vulnerability becomes known there will likely be an increase in this type of attack on banking systems. We now have the Qualitative rating and the Quantitative cost, both of which carry a high risk factor. Here is where management plays an important role and why we incorporate the single loss expectancy (SLE) formula. In this example, we take the value of the asset ($40 per record in this case) and the exposure level (500,000 records) and multiply the asset value by the exposure level to come up with an SLE of $20 million. We can then calculate the annual loss expectancy (ALE), the expected loss per year, by taking the SLE and multiplying it by the annual rate of occurrence (ARO), which estimates how many times per year the event is expected to occur. In this scenario, however, let's say the database is very new, so we cannot draw on historical data to estimate the ARO.
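The SLE and ALE arithmetic described above can be written out directly; in the sketch below the asset value and exposure level come from the example, while the ARO is a placeholder assumption since, as noted, there is no historical data for the new database:

def single_loss_expectancy(asset_value: float, exposure_level: float) -> float:
    # SLE: expected loss from a single occurrence of the event.
    return asset_value * exposure_level

def annual_loss_expectancy(sle: float, aro: float) -> float:
    # ALE: expected loss per year = SLE x annual rate of occurrence (ARO).
    return sle * aro

sle = single_loss_expectancy(asset_value=40, exposure_level=500_000)   # $20,000,000
ale = annual_loss_expectancy(sle, aro=0.1)   # assumed ARO of once every ten years -> $2,000,000 per year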

Going back to a Qualitative approach, we can perform an appropriate cost-benefit analysis. We would then devise a way to mitigate this risk by customizing intrusion detection signatures for traffic that poses a threat to the database and by installing host intrusion detection software on both the web server and the database server. Thanks to these initiatives, we now feel comfortable reducing the risk rating from high to medium. Furthermore, we could reduce the threat level to low via additional code testing, provided that HIPS (host intrusion prevention system) and IDS (intrusion detection system) tools are properly configured. The above example, although specific to IT security risk assessment, is a good overview of Quantitative and Qualitative analysis. After all, Project Management encompasses a multitude of industries, so the area of IT security can also be included. IT security is also a relevant illustration in that it documents the results of its risk analysis process in a Risk Register, providing organizations with the information needed to make appropriate decisions on how best to manage identified risks.


The next step in the process of Risk Management is planning risk responses. This is the identification of the actions, or deliberate inaction, chosen with the aim of efficiently controlling a given risk. Specific action or inaction procedures should be chosen after the probable impact on the project has been assessed. In simple terms, this is responding to threats or opportunities. Standards vary in how they define response strategies; some organizations incorporate fewer strategies and some more. To keep things simple we will provide five basic response strategies for treating negative risks and four for treating positive risks. The idea here is just to provide a brief overview of how to handle threats and opportunities.

The five strategies for treating negative risk are accepting the risk, avoiding the risk, reducing the likelihood of the risk occurring, mitigating the impact and transferring the risk [Australian Agency for International Development, (Nov. 2005)]. Accepting a risk is deciding to accept the repercussions and likelihood of a specific risk. Sometimes this is done because the organization assesses the rating as too low to have any effect on the project, or because it lacks the resources to take care of the threat. If the latter is the case, then monitoring should always be included. Avoiding the risk altogether is the second category, which means not implementing any controls to counteract the risk, either because the rating is again very low or because you do not perform the particular activity at all, making the risk nonexistent.

The Australian agency does state “that inappropriate risk avoidance could result in significant

cost penalties, diminished efficiency and impair the achievement of outcomes.” Reducing the

likelihood of a risk occurring means initiating countermeasures and controls that could include, for example, regular audits and checks, preventative maintenance, and education and training. Impact mitigation is usually used when the likelihood of a threat is low but the impact, if it materializes, is high. Mitigation reduces the consequences of risk through efforts to alleviate and deal with the impacts, such as contingency planning. Finally

there is the transfer of risk which is allocating risk responsibilities from one party to another.

This is usually done by subcontracting to a third party but if this option is chosen the

Australian agency recommends collaboration and communication must occur on a regular

basis. The risk of choosing this strategy is the potential for increased capital expenditures or issues of accountability, which is exactly what occurred in the CONFIRM project headed by American Airlines subsidiary AMRIS. Now we get to the four basic response strategies for positive risks, those being exploiting, sharing, enhancing and accepting [Sharma, R. (Sept. 2009)]. Exploiting a positive risk is doing everything possible to increase the probability that the risk will occur. An example of exploiting would be that some members of your team have devised a new technique to construct a product that would reduce the duration of the project by 20 percent; you can exploit this by ensuring that all team members use the new technique. The next positive risk strategy would be

sharing which is collaborating or communicating with another individual, organization or

department to exploit a positive risk. For example, after conducting a SWOT Analysis you

decide to pursue a business deal which requires you to make use of Agile development

practices which is a systems development strategy wherein the system developers are given

the flexibility to select from a variety of tools and techniques to best accomplish a given task.

In your company there is no knowledge of Agile development, so you partner with another organization that specializes in Agile development. In this scenario both parties

benefit.


A third positive risk category is enhancing, which involves identifying the root cause of a positive risk so that you can influence that root cause and increase the likelihood of the positive risk. For example, in order to win a business deal your workforce needs to have substantial Java skills, so to help your company close the deal you can enhance the positive risk by training your workforce in Java or hiring Java software specialists. Finally, the last category for positive risk is acceptance, which means that you choose not to take any action towards a risk; sometimes opportunities simply fall into your lap and you choose to accept them.

The final stage in Risk Management is the monitoring and control stage, where the risk information and metrics defined during planning are collected, tracked and analyzed for patterns. Risk assessment, risk audits, variance and trend analysis, technical performance measurements, reserve analysis and status meetings or periodic reviews are all tools and techniques for performing monitoring and control. Outputs of this particular process are updates to the Risk Register, organizational process assets updates such as lessons-learned information, change requests and updates to the Project Management plan and other documents [Schwalbe, K (2011)].
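As a rough illustration of what a Risk Register entry tracked during monitoring and control might carry, here is a minimal sketch; the fields and the probability-times-impact exposure score are a common convention and an assumption on our part, not a structure prescribed by the sources cited above:

from dataclasses import dataclass

@dataclass
class RiskRegisterEntry:
    risk_id: str
    description: str
    probability: float        # 0.0 - 1.0
    impact: float             # monetary impact if the risk occurs
    response_strategy: str    # e.g. "accept", "avoid", "reduce", "mitigate impact", "transfer"
    status: str = "open"      # updated during monitoring and control

    def exposure(self) -> float:
        # Common convention: expected exposure = probability x impact.
        return self.probability * self.impact

risk_a = RiskRegisterEntry("A", "Principal supplier cannot deliver on time", 0.20, 10_000, "transfer")
print(risk_a.exposure())   # 2000.0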

Before we proceed to the next section, we must stress that having the proper governance framework in place is a significant element in mitigating risk, which is probably why widely applied approaches such as the IT Governance Institute's Control Objectives for Information and related Technology (COBIT) and the ISO 9000 standards are being increasingly utilized to manage IT risk as well as to offer guidance on building IT risk into governance processes. A proper hierarchical governance structure is essential in making sure that IT projects align with the overall business objectives, as we have discussed several times, and each group within the hierarchy adds checks and balances, or controls, to help ensure that projects succeed. This could also be considered a top-down approach that some refer to as Enterprise Risk Management (ERM), which we will discuss in further detail in the next section of this paper [Warrier, S.R. & Chandrashekhar, P, (2006)].

Effective governance means that an organization is better able to assess what the risks are and has a plan in place to treat the difficult risks that remain [AON Risk Solutions, (2011)]. An efficient governance hierarchy, from top to bottom, comprises the board (ensuring accountability, monitoring and supervising, auditing, making strategic decisions and setting policies, including succession planning) followed by the senior executive team (making management decisions, formulating and executing strategy and managing assets). The executive team is usually there to provide business governance. Of equal or even greater significance are the key stakeholders whose primary function is to provide project governance. This group includes the project steering committee, sponsor and chief risk officer (ensuring accountability, making project decisions, monitoring and supervising projects and setting project policies) and will obviously be formed during the initiation of a project. It must further be noted that the steering committee is integral and has the primary responsibility of ensuring that project outcomes can be integrated into the business processes.


These project policies, and especially project risk management, are then passed down to the project management team, where a project manager is assigned to manage the projects. An additional department such as a Project Management Office (PMO) can also be created to maintain standards for project management within a company and to provide guidance, documentation and metrics related to the practices involved in managing and implementing projects within the organization [TechTarget (Jan. 2008)]. Smaller organizations may not be able to build an organizational hierarchy like the one illustrated above because of insufficient resources, but a small firm can still benefit greatly from adding a PMO. Having said that, it must be emphasized that, in the wake of the financial calamity of 2008, it is of even greater significance to create an organizational structure that includes the proper governance to set policies, procedures and standards, in order to mitigate the risk of any possible legal ramifications and the risks associated with projects, and so minimize financial losses or costs while maximizing profits.

In spite of a widespread and comprehensive body of research on IT risk, there is extensive evidence that the research findings and recommendations are not being applied in practice [Pfleeger, (Sept. 2000)]. Governance initiatives and organizational commitment can certainly provide the proper leadership and influence to get all stakeholders to recognize the importance of applying these research findings and recommendations, and so increase the chances of project success. This prescribed infrastructure is an essential complement to risk management.

6. Evolving Enterprise Risk Management (ERM) Assessment

Infosys Limited, a Bangalore, Karnataka, India-based organization with around 145,000 full-time employees, is a renowned leader in the area of project management consulting [Limited, Infosys (Jan. 2012)]. The company has been around since 1981 and has annual revenues of $6.82 billion, gross profit of $2.54 billion and $3.72 billion in cash on hand against zero debt. Therefore any recommendations and services it offers should be taken seriously, and this especially holds true for IT project Risk Management. After all, the company provides an enormous range of products and services that encompass a multitude of project management offerings across all market segments on a global scale. This is why Infosys has been focusing on and incorporating what many believe to be an evolutionary discipline, the Enterprise Risk Management (ERM) framework, to optimally manage its own risks as well as its clients'. However, Infosys does emphasize that the model's components must be customized to suit the needs of whatever organization integrates this process [Warrier, S.R. & Chandrashekhar, P, (2006)]. In fact, Microsoft Corporation also acknowledged the value of ERM when it bought ERM vendor Prodiance in the summer of 2011.


It was around 1994 that the Committee of Sponsoring Organizations of the Treadway Commission (COSO) issued Internal Control – Integrated Framework to help businesses and other entities assess and enhance their internal control systems [Committee of Sponsoring Organizations of the Treadway Commission, (Sept. 2004)]. In the wake of excessive financial losses and business scandals, COSO teamed up with PricewaterhouseCoopers in 2001 to enhance its initial approach by augmenting corporate governance and risk management, with high-level goals aligned with and supporting an organization's mission, making efficient use of and safeguarding resources, improving the reliability of reporting and complying with applicable laws and regulations, to formulate what is now known as the Enterprise Risk Management framework. COSO defines ERM verbatim as "a process, effected by an entity's board of directors, management and other personnel, applied in strategy setting and across the enterprise, designed to identify potential events that may affect the entity, and manage risk to be within its risk appetite, to provide reasonable assurance regarding the achievement of entity objectives." Notice how the word Governance continues to come up time and time again! ERM is widely seen as a trend for the future because it takes a more holistic approach to Risk Management. In fact, many people in the field of project management state that there is a clear correlation between ERM processes and their advantages, which is primarily influenced by a multitude of factors including the competency of management, the appetite for risk and the risk culture; this further demonstrates the true value of ERM.

As discussed above, governance is a critical element in assessing risk. The need for corporate governance, internal control and risk management has therefore become of vital concern to organizations, and many have suggested unifying all three under a single management method known as integrated governance, risk and compliance [Dittmar, L. (no date)]. This led to what is now recognized as ERM, because it highlights all three aspects within its application process. In fact, a series of high-profile business scandals and failures caused by a lack of Risk Management provided additional support for the renewed interest in and popularity of ERM, and for a shift to a more coordinated, holistic Risk Management approach that acknowledges the interdependencies of risks [Jablonowski, M. (Sept. 2009)].

There are eight interrelated components that make up ERM. These components are very similar to many of the Risk Management planning categories we described above, with some notable differences. They include:

• Internal Environment – The internal environment is the risk appetite of the organization based on the individuals that make up the firm, including their risk management philosophy, integrity and ethical values, and the environment in which they do business.

• Objective Setting – This ensures that objectives are set and that they align with both the mission of the organization and its appetite for risk.

• Event Identification – Internal and external events that could affect the organization's objectives are identified, distinguishing between risks and opportunities; opportunities are channeled back into objective setting.

• Risk Assessment – Risks are rated on the likelihood and impact of the occurrence of an event, on a cyclical basis.


• Risk Response – Management decides whether to avoid, accept, reduce or share the risk. Once this is decided, management constructs a specific set of actions to align risks with the organization's overall risk appetite.

• Control Activities – Policies and procedures are constructed and incorporated to ensure the appropriate risk responses are carried out.

• Information and Communication – The appropriate information is identified, collected and communicated in a form and timeframe that allow stakeholders to perform their responsibilities. This communication needs to span the entire organization effectively.

• Monitoring – The entire organization is monitored through ongoing management activities, separate evaluations, or both, and modifications are made as necessary [Committee of Sponsoring Organizations of the Treadway Commission, (Sept. 2004)].

Infosys emphasizes that, as with any project, even with ERM all stakeholders must be involved in order to get them to buy in to the overall risk management plan [Warrier, S.R. & Chandrashekhar, P, (2006)]. Again, although ERM has a specified framework, the integration of the approach may vary somewhat, as each organization is unique, with its own objectives and culture, so an ERM discipline must be tweaked in order to align with the organization's overall goals. Infosys also suggests that, prior to implementing any ERM approach, pilot models should be constructed so the organization can make certain that the model is effective, efficient and suits the needs of all stakeholders, and thereby obtain robust results from ERM. Some elements and standards are consistent across all industries and organizations; however, components such as the different modes of communication, technology enablers (dashboards, data, calculations such as Monte Carlo, and reporting), governance models, resourcing plans, risk appetite and so on must again be meticulously analyzed before one begins to institutionalize the ERM approach deemed to be the most appropriate fit.

Infosys recognizes that if ERM is utilized appropriately it can provide organizations with significant benefits, which is why the company has been integrating the framework into many of its products and service offerings as well as making use of the approach internally. An example of Infosys making use of ERM is its ERM web application, whereby a leading UK-based assessment agency for admission tests used the application to automate the process of moderation in order to increase the efficiency of the moderation process, reduce lead time, minimize paperwork, reduce costs and ensure process standardization [Limited, Infosys Case Study, (2012)]. The result was that, through the implementation of the ERM application, the assessment agency was able to reduce manual intervention, accelerate the moderation process and secure cost savings, including reduced paperwork, a shorter turn-around time for the moderation process, more consistent moderation, multi-platform accessibility and, finally, an enhanced customer impression of the client.

7. Conclusion


The complexity of projects in organizations around the globe has given rise to an increasing number of risks, which must therefore be addressed continuously in order to mitigate any adverse organizational impacts while improving the factors for success. This includes constant risk analysis, stakeholder participation and the incorporation of the necessary policies, procedures and standards to align IT with business objectives and effectively maximize an entity's return on investment (ROI) and return on objectives (ROO). Keeping up with the growing number of new laws and regulations is also an integral part of the overall process of risk management, helping to reduce any financial loss attributable to a lack of compliance. Furthermore, advancements in technology, although they have provided many benefits, have been moving at such a rapid pace over the last decade that it is often difficult to properly align IT with organizational goals. This is why Risk Management must be able to keep up with these rapid advancements and the wider threat environment by regularly improving effective risk practices to augment project outcomes.

In summary, the purpose of this paper was to provide a detailed understanding, through documentation and research such as that from the Standish Group, in order to increase awareness of the inherent risks associated with project failures, and to explain that because each organization has different objectives, the Risk Management approaches described above must be customized and tailored to meet the objectives unique to each entity. This applies not only to IT Project Management but to other disciplines such as IT security. For example, in IT security risk assessment four approaches have been established through formal standards such as ISO 13335 in order to provide a range of alternative ways to assess risk, since each organization's needs differ: the baseline approach (applying the most basic level of security controls against the most common threats, recommended for small companies), the informal approach (recommended for small and mid-size companies, applying a less structured process that relies on the expertise and knowledge of the individuals performing the analysis), the detailed risk analysis approach (a formal, structured and more complex approach that includes numerous stages of risk assessment, usually suited to large organizations) and the combined approach (making use of the baseline, informal and detailed risk analysis approaches).

It is our hope that the information embedded in this paper will help organizations and sovereign entities recognize the importance of the empirical data so that they can take the appropriate measures to actively apply these methodologies while assessing and managing projects throughout their lifecycles, and optimally utilize in practice the various approaches that have been discussed in detail, in order to achieve project success.

References

Bannerman, P.L. (Dec. 2008). Risk and Risk Management in Software Projects: a

reassessment. The Journal of Systems and Software Vol. 81, Issue 12 2118–2133.

Cauley, Leslie (Oct. 2007). Blackstone, Hilton Deal is Marriage of Titans. Retrieved from

USA TODAY: http://www.usatoday.com/money/industries/travel/2007-07-03-blackstone-

hilton_N.htm

Commission, Committee of Sponsoring Organizations of the Treadway (Sept. 2004).

Enterprise Risk Management - Integrated Framework. Retrieved from COSO.org:

http://www.coso.org/Publications/ERM/COSO_ERM_ExecutiveSummary.pdf


Corporation, Microsoft (no date). A Quick History of Project Management. Retrieved from

Microsoft Corporation: http://office.microsoft.com/en-us/project-help/a-quick-history-of-

project-management-HA010351563.aspx

Corporation, Northrop Grumman (Nov. 2007). IM Risk Management Plan. Retrieved from

Interoperability Montana:

http://interop.mt.gov/content/docs/IM_Risk_Management_Plan_v4_0.pdf

Development, Australian Agency for International (Nov. 2005). 6.3 Guidelines "Managing Risk". Retrieved 13 April 2012 from Commonwealth of Australia:

http://www.ausaid.gov.au/ausguide/pdf/ausguideline6.3.pdf

Dittmar, Lee (no date). What are the Primary Challenges and Trends in Governance, Risk and

Compliance?. Retrieved from Deloitte Consulting LLP:

http://compliance.mashnetworks.com/player.aspx?channelGUID=74fe7a5d-7fce-427e-863d-

c0b597c427fb&clipGUID=5615f32e-e0fc-4cd6-9edc-f259766a6abd

Ellis, Kathy (2008). Business Analysis Benchmark. Retrieved from IAG Consulting:

http://www.iag.biz

GALWAY, LIONEL (Feb. 2005). Quantitative Risk Analysis for Project Management.

Retrieved 13 April 2012 from Rand Corporation:

http://www.rand.org/pubs/working_papers/2004/RAND_WR112.pdf

Group, The Standish (Oct. 2009). CHAOS Manifesto. Retrieved from The Standish Group:

http://standishgroup.com


Group, The Standish (March. 2011). CHAOS Manifesto. Retrieved from The Standish

Group: http://standishgroup.com.

Halper, Mark (Aug. 1992) Outsourcer Confirms Demise of Reservation Coalition Plan.

Computerworld Vol. 26.

Halper, Mark (Aug. 2009). IS cover-up charged in system kill. Computerworld Vol. 26.

Halper, Mark (Oct. 1992). "Too Many Pilots." Computerworld.

Hubbard, Douglas (April. 2009). The Failure of Risk Management: Why It's Broken and

How to Fix It. John Wiley & Sons. 1E, p. 46.

Ibbs, William and Kwak, Young Hoon (March 2000). "Assessing Project Maturity." Project Management Journal, 31.

Jablonowski, Mark (Sept. 2009). The Bigger Picture: Recognizing Risk Management's Social

Responsibility. Retrieved from Deloitte Consulting LLP:

http://findarticles.com/p/articles/mi_qa5332/is_7_56/ai_n35637633/


Johnson, Stephen B. (March. 2002). Bernard Schriever and The Scientific Vision. Retrieved

from Air Force Historical Foundation:

http://www.thefreelibrary.com/Bernard+Schriever+and+the+scientific+vision.-a083791580

Jorgensen, Hans Henrik, Owen, Lawrence and Neus, Andreas (Oct. 2008). Making Change

Work. Retrieved from IBM: http://www-935.ibm.com/services/us/gbs/bus/pdf/gbe03100-

usen-03-making-change-work.pdf

Limited, Infosys (2012). Infosys Limited Case Study. Retrieved from Infosys Limited: http://www.infosys.com/industries/education/case-studies/Pages/erm.aspx

Limited, Infosys (2012). Form 6k UNITED STATES SECURITIES AND EXCHANGE

COMMISSION Filing. Retrieved from Infosys Limited:

http://sec.gov/Archives/edgar/data/1067491/000106749112000007/index.htm

LLC, Hulett & Associates (2005). Qualitative Risk Assessment. Retrieved from

Interoperability Montana: http://www.projectrisk.com/qual_assess.html

Milford, Phil, Schlangenstein, Mary and McLaughlin, David (Nov. 2011). American Airlines

Parent AMR Files for Bankruptcy as Horton Is Named CEO. Retrieved from Bloomberg

News: http://www.bloomberg.com/news/2011-11-29/amr-files-for-bankruptcy-protection-in-

new-york-as-talks-with-pilots-end.html

Mochal, Tom (July 2005). See Effect of Dependent Risk by Using a Decision Tree. Retrieved

from CBS Interactive: http://www.techrepublic.com/blog/tech-manager/see-effect-of-

dependent-risk-by-using-a-decision-tree/569

Oz , Effy. (Oct. 1994). When Professional Standards are Lax: The CONFIRM Failure and its

Lessons: Communications of the ACM 37, 10, 29-36.

Pfleeger, S.L. (Sept. 2000). Risky Business: What we have yet to learn about risk

management, Journal of Systems and Software Vol. 53 Issue 3: 265–273

Powner, David A. (2008). OMB and Agencies Need to Improve Planning, Management, and

Oversight of Projects Totaling Billions of Dollars. Retrieved from U.S. Government

Accountability Office: http://www.gao.gov/assets/130/120968.pdf

Progress, Project (2008). The Bigger Picture: Recognizing Risk Management's Social

Responsibility. Retrieved from Project Progress providers of PRINCE2:

http://www.projectprogress.com/index.htm

Roebuck, Kevin (May. 2011). Project Portfolio Management - Optimizing for Payoff.

Retrieved from Tebbo: (163-166)

Schwalbe, Kathy (2011). Information technology Project Management. Retrieved from

Cengage Learning. 6E, (421-452)

Sharma, Rupen (Sept. 2009). How to Respond to Positive Risks. Retrieved from

brighthub.com: http://www.brighthub.com/office/project-management/articles/48400.aspx


Solutions, AON Risk (2011). Governance of Project Risk. Retrieved from AON:

http://www.aon.com/hongkong/about-aon/attachments/project-governance-risk-guide.pdf

Solutions, PM (2011). Strategies for Project Recovery – A PM Solutions Research Report. Retrieved from Project Management Solutions Inc.:

http://www.pmsolutions.com/collateral/research/Strategies%20for%20Project%20Recovery

%202011.pdf

Staff, CIO (Sept. 2011). How to Create a Risk Register. Retrieved from IDG

Communications: http://www.cio.com.au/article/401244/how_create_risk_register/

TechTarget (Jan. 2008). Project Management Office (PMO) Definition. Retrieved from

TechTarget: http://searchcio.techtarget.com/definition/Project-Management-Office

Warrier, S.R. & Chandrashekhar, P, (2006) “Enterprise Risk Management” from the

boardroom floor Infosys White Paper http://www.infosys.com/industries/insurance/white-

papers/Documents/enterprise-risk-management-paper.pdf

Wiegers, K. E. (Oct. 1998). Know Your Enemy: Software Risk Management Vol. 6(10), 38-

42

Wyatt, Edward, (Feb. 2011) “Fed Chief Says US Bolstered Its Ability to Handle Failure of a

Big Bank,” Retrieved from The New York Times:

http://www.nytimes.com/2011/02/18/business/economy/18regulate.html

Zellner, Wendy, (Jan. 1994) "Portrait of a Project As a Total Disaster," Business Week

Zweig, Jason (2012). Why Investors Can't Escape 'Risk'. Retrieved from Wall Street Journal:

http://blogs.wsj.com/totalreturn/2012/04/06/why-investors-cant-escape-risk/


Efficiency and Productivity Analysis of Tunisian Banks

During a Recent Deregulation Period

Raéf Bahrini, Institute of High Commercial Studies of Sousse, Tunisia

Abstract

The main objective of this paper is to assess and analyze the efficiency and productivity dynamics of Tunisian banks following new environmental changes such as deregulation, financial innovation and progress in Information and Communication Technologies (ICT).

In order to provide an in-depth analysis, we apply the non-parametric frontier efficiency method called DEA (Data Envelopment Analysis) and the Malmquist Index, which make it possible to measure and decompose technical efficiency and productivity changes. These methods have been widely used in empirical studies assessing the efficiency and productivity dynamics of banks (Wheelock and Wilson, 1999; Alam, 2001; Berger and Mester, 2003; Isik and Hasan, 2003; Amel et al., 2004; Staikouras et al., 2008; Huang and Tan Fu, 2009).

We find that Tunisian commercial banks recorded an increase in their overall technical efficiency during the period 1999 to 2008. The inefficiency that remains is mainly attributable to pure technical inefficiency rather than to scale inefficiency. Thus, Tunisian commercial banks should focus on improving their managerial methods in order to better control their production techniques and to offer the maximum services with the minimum resources available.

Our study also shows that Tunisian banks increased their Total Factor Productivity over the period 1999-2008. Through the Malmquist Index approach we show that the improvement in Tunisian banking productivity is mainly due to technological progress. In fact, Tunisian banks were responsive to new technological changes, incorporating advanced technologies into their production process.

Keywords: Overall technical efficiency, Scale efficiency, Pure technical inefficiency, Total

Factor Productivity, Production technologies, Environmental changes, Data Envelopment

Analysis, Malmquist Index.


1. Introduction

Deregulation of banking sectors, structural changes in financial systems and progress in Information and Communication Technologies are the principal changes that have influenced the banking environment since the early 1980s.

In evaluating the impact of these environmental changes on the intermediary role of banks, many studies found that banks experienced a large decrease in their market shares, which resulted in a decline in profitability (Boyd & Gertler, 1994; Hackethal, 2001; Allen & Santomero, 2001; Samolyk, 2004; Mester, 2007, etc.). These evolutions were explained by the increase in competitive pressures brought about by financial markets and non-banking intermediaries. These studies also highlighted a keen tendency of economic agents to use new financial market instruments, such as stocks, bonds, options, swaps and futures, in their transactions.

Given the importance of the banking sector in any economy and the dramatic impact of the

environmental changes, many studies have focused on the impact of these changes on bank

efficiency and productivity levels (Alam, 2001; Berger & Mester, 2003; Isik & Hasan, 2003;

Amel et al., 2004; Staikouras et al., 2008; Huang and Tan Fu, 2009; etc.). These studies are

based on the hypothesis that increased competition caused by changes in the banking

environment will push banks to improve their allocation of resources and to incorporate

technological progress in order to become more efficient and productive.

Many empirical methods are used by these studies to construct an efficient production

frontier, which is a linear combination of efficient banks, and to calculate for each bank in each time period its efficiency and productivity level relative to this frontier.

Furthermore, these methods, known as frontier efficiency approaches, allow us, on the one hand, to assess overall technical efficiency change and to determine the contribution of its components (pure technical efficiency and scale efficiency) to this change. On the other hand, these methods are also useful for measuring total factor productivity change and decomposing it into technological change and efficiency change.

In Tunisia, financial liberalization was an essential part of the structural adjustment program of 1987. This process encouraged the relaxation of banking regulations, increased the dynamism of the financial market and promoted the modernization of financial institutions in order to create a new, competitive and innovative financial system.

In an attempt to assess Tunisian banks' responses to regulatory and technological changes, we measure and analyze changes in bank efficiency and productivity during the recent deregulation period (1999-2008). Our purpose is then to answer the following question: how have the efficiency and productivity of Tunisian banks evolved following the changes in their environment, and how can we explain the changes recorded?

The remainder of the paper is organized as follows. Section 2 includes a brief review of the literature related to bank production, bank efficiency and productivity. Section 3 presents the empirical methodology followed to assess and analyze the evolution of Tunisian banks' levels of technical efficiency and total factor productivity. We present the empirical results in Sections 4 and 5, and we conclude in Section 6.


2. Literature Review

2.1 Bank outputs and inputs:

Taking into account the changes in the banking environment, a new strand of research has emerged that focuses on measuring bank efficiency and productivity changes and on determining the effects of these changes on banks' productive performance.

To attain this goal, these studies have developed the industrial approach to banking. Under this approach, the bank is treated as a simple firm seeking to maximize its profits by finding the ideal combination of its inputs and outputs. However, most studies have faced a conceptual problem summarized by the following question: what are the outputs and the inputs of a bank? Several authors attribute the conceptual problem of measuring and identifying a bank's outputs and inputs to the interdependence of its products and services (Berger & Humphrey, 1992).

Given the lack of consensus in the literature regarding the precise definition of bank outputs and inputs, previous empirical research has mainly been based on two approaches: the intermediation approach and the production approach.

The Production approach considers the bank as a firm utilizing capital and human resources

to produce different types of deposits and loans accounts. Outputs are measured by the

number of accounts or by the number of transactions for each type of product (Parsons et al.,

1990; Colwell & Davis, 1992; Schaffnit et al., 1997). Following the production approach,

bank efficiency and productivity are measured by comparing the quantities of services

produced with the quantities of resources used (Mlima & Hjalmarsson, 2002).

The Intermediation Approach considers the bank as a financial intermediary expected to perform two major roles: mobilizing financial resources and distributing these funds efficiently to boost economic development. It measures bank inputs and outputs by their monetary value and not by their quantity, as in the production approach. Under this approach, labor and capital are inputs, while deposits can be considered both as an input and as an output (Colwell & Davis, 1994).

Moreover, there are several other approaches in the literature to measure bank outputs and

inputs like: the Value-Added Approach, the Asset Approach, the User-cost approach and the

Risk Management Approach (Mlima & Hjalmarsson, 2002).

2.2 Bank technical efficiency

Referring to Farrel (1957), Aly et al. (1990), Berger and Humphrey (1992), Berg et al.

(1992), Miller and Noulas (1996), Siems and Barr (1998), etc., two components form the

overall technical efficiency, namely: pure technical efficiency and scale efficiency.

Pure technical efficiency measures the ability of a firm to maximize its outputs given an

amount of input available or to use less input to produce the same amount of output. It

reflects the organizational performance of the bank in the sense that better organization can

permit a better management of the technical aspects of production.


Scale efficiency measures the contribution of a change in size to the reduction of banking

costs. In fact, a bank can benefit from economies of scale when it has not yet reached the optimal size that minimizes average costs.

2.3 Bank Productivity

Sharpe (2002) defines productivity as the relationship between the outputs produced (goods and services) and the inputs used in the production process (human and non-human resources). This relationship is often expressed as a ratio, in which outputs and inputs are measured in quantities and are therefore not affected by changes in prices.

Improving productivity means producing more output with the same amount of input or using less input to produce the same amount of output. It is therefore crucial for any Decision Making Unit to measure and analyze its productivity level.

According to the literature, bank productivity can be measured either by partial productivity

or by total factor productivity measures.

Early productivity studies were based on partial productivity measures introduced by Solow (1957). Under this method, productivity is measured by the ratio of aggregate output to the observed quantity of a single input, generally labor.

According to Berger and Mester (2003), the Bureau of Labor Statistics (BLS) has developed a measure of labor productivity for commercial banks which is an index having as its numerator bank outputs, measured by the number of transactions relating to demand and time deposits, loans and transactions made through ATMs, and as its denominator the number of employee hours worked.

Many studies show that the total factor productivity measure is better than partial productivity measures. Indeed, the former uses a ratio that relates many outputs to many inputs, while the latter implicitly assumes that the output produced is the result of a single input, without taking into account the contribution of the other inputs involved in the production process (Colwell & Davis, 1992; Lipsey & Carlaw, 2000, etc.).

The total factor productivity measure is determined by the difference between the growth rates of outputs and of inputs combined. It measures the contribution of all factors of production (other than capital and labor) to output growth. It reflects technical efficiency and measures the rate of change of production technology (Lipsey & Carlaw, 2000).
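One common way to write this growth-rate definition explicitly (a standard growth-accounting formulation, not spelled out in the paper) is:

\Delta \ln \mathrm{TFP}_t \;=\; \Delta \ln Y_t \;-\; \sum_{j} s_j \,\Delta \ln X_{j,t},

where Y_t denotes aggregate output, X_{j,t} the inputs and s_j their weights (for example, cost shares); this notation is ours, chosen purely for illustration.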

More explicitly, bank total factor productivity depends at least on three major factors:

1) The characteristics of the technology used;

2) The choice of the scale of production and the possibility of introducing advanced technology;

3) The efficiency of the bank in organizing its production process.

A bank can achieve productivity growth simply by applying advanced technology. Similarly, productivity can be determined by the scale of production; in this case, a bank can be productive because of its larger size even if it makes fewer productivity efforts than other, smaller banks. Finally, bank productivity depends on the efficiency of its financial transformation process.


In fact, if we compare two banks of the same size, utilizing identical production techniques and operating in the same market, one bank can be more productive and better performing than the other simply because it is technically and economically more efficient.

3. Methodology

3.1 Data Envelopment Analysis (DEA)

Data Envelopment Analysis is a non-parametric approach which determines the technological frontiers, or possible production frontiers, of each firm in each year using linear programming techniques based on input and output data, without imposing a functional form on the underlying production process. The analysis consists of assessing efficiency scores which measure the distance separating the units located inside the frontier (the inefficient units) from the efficient frontier, which is made up of the efficient banks in the sample, that is, the banks that are able to produce the maximum quantity of outputs given a limited quantity of inputs.

The DEA method was pioneered by Farrel (1957) and reformulated by Charnes et al. (1978) as follows. Consider N banks, each producing m different outputs using n different inputs. The technical efficiency of bank s is measured as

h_s = \frac{\sum_{i=1}^{m} u_i y_{is}}{\sum_{j=1}^{n} v_j x_{js}}

where h_s is the bank efficiency score, y_{is} is the amount of the ith output produced by the sth bank, x_{js} is the amount of the jth input used by the sth bank, u_i is the output weight and v_j is the input weight. We can now write the corresponding programming problem as follows:

\max_{u,v} \; h_s = \frac{\sum_{i=1}^{m} u_i y_{is}}{\sum_{j=1}^{n} v_j x_{js}}
\quad \text{subject to} \quad
\frac{\sum_{i=1}^{m} u_i y_{ir}}{\sum_{j=1}^{n} v_j x_{jr}} \le 1 \;\; (r = 1,\dots,N)
\quad \text{and} \quad u_i \ge 0, \; v_j \ge 0.

This program maximizes the efficiency score of bank s under the two following constraints: the first constraint requires that every efficiency score be less than or equal to one; the second requires that the output and input weights be non-negative. To solve it, we must determine the values of u_i and v_j that maximize the efficiency score of each bank.

The above program was converted into a linear program by Charnes et al. (1978) as follows:

\max_{u,v} \; \sum_{i=1}^{m} u_i y_{is}
\quad \text{subject to} \quad
\sum_{j=1}^{n} v_j x_{js} = 1, \quad
\sum_{i=1}^{m} u_i y_{ir} - \sum_{j=1}^{n} v_j x_{jr} \le 0 \;\; (r = 1,\dots,N),
\quad u_i \ge 0, \; v_j \ge 0.

Using the dual of this linear program, the problem becomes:

\min_{\theta_s,\lambda} \; \theta_s
\quad \text{subject to} \quad
\sum_{r=1}^{N} \lambda_r y_{ir} \ge y_{is} \;\; (i = 1,\dots,m), \quad
\theta_s x_{js} - \sum_{r=1}^{N} \lambda_r x_{jr} \ge 0 \;\; (j = 1,\dots,n),
\quad \lambda_r \ge 0.


The variable \theta_s is the measure of overall technical efficiency and must lie between zero and one. The above dual program estimates the efficient frontier under the hypothesis of Constant Returns to Scale (CRS). If we consider the example of a bank producing one output using one input, the solution to the CRS problem is determined by the frontier OC in the following figure:

Figure 1:

Source: Miller and Noulas (1996)

Each bank that lies on the frontier is efficient. For this reason, bank s, which is found below the frontier at point S, is inefficient. In this case, the overall technical efficiency (\theta_s) is determined by the ratio of the input used at the efficient point F to the input used at point S, which is less than 1. In fact, (1 - \theta_s) measures the proportion by which the inputs must be reduced in order for bank s to produce the same output as the efficient bank at point F.

In addition, the overall technical efficiency estimated by the DEA method can be decomposed into "pure" technical efficiency and scale efficiency. To do so, we solve the dual program above with an additional constraint that allows the efficient frontier to be estimated under the hypothesis of Variable Returns to Scale (VRS). The added constraint is as follows:

\sum_{r=1}^{N} \lambda_r = 1.

If we refer to the previous chart, the efficient frontier estimated under VRS is represented by the curve ABDV, and the pure technical efficiency of bank s at point S is given by the ratio of the input used at the corresponding point on the VRS frontier to the input used at S. The overall technical efficiency is the combination of the pure technical efficiency and the scale efficiency, which means that

\text{Overall technical efficiency} = \text{Pure technical efficiency} \times \text{Scale efficiency}.

Thus, scale efficiency is given by the formula

\text{Scale efficiency} = \frac{\text{Overall technical efficiency}}{\text{Pure technical efficiency}},

and it is represented by the ratio of the input level on the CRS frontier to the input level on the VRS frontier for the same output.
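As a rough illustration of how the dual (envelopment) program above can be solved in practice, here is a minimal Python sketch using scipy's linear programming routine; the function name and the tiny synthetic data set are ours, and the paper itself uses the DEAP 2.1 software rather than code like this:

import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, s, vrs=False):
    """Input-oriented DEA efficiency of unit s.
    X: (n_inputs, N) input matrix, Y: (m_outputs, N) output matrix.
    vrs=False gives the CRS (overall) score; vrs=True adds sum(lambda)=1 for the VRS (pure) score."""
    n, N = X.shape
    m = Y.shape[0]
    c = np.zeros(1 + N)
    c[0] = 1.0                                   # minimise theta
    # Outputs: sum_r lambda_r y_ir >= y_is  ->  -Y lambda <= -y_s
    A_out = np.hstack([np.zeros((m, 1)), -Y])
    b_out = -Y[:, s]
    # Inputs: theta x_js - sum_r lambda_r x_jr >= 0  ->  -theta x_s + X lambda <= 0
    A_in = np.hstack([-X[:, [s]], X])
    b_in = np.zeros(n)
    A_eq = np.hstack([[0.0], np.ones(N)]).reshape(1, -1) if vrs else None
    b_eq = [1.0] if vrs else None
    res = linprog(c, A_ub=np.vstack([A_out, A_in]), b_ub=np.concatenate([b_out, b_in]),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * (1 + N), method="highs")
    return res.x[0]

# Tiny synthetic example: 2 inputs, 1 output, 4 banks.
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 2.0, 4.0, 6.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for s in range(X.shape[1]):
    ote = dea_efficiency(X, Y, s)                # overall technical efficiency (CRS)
    pte = dea_efficiency(X, Y, s, vrs=True)      # pure technical efficiency (VRS)
    print(f"bank {s}: OTE={ote:.3f}, PTE={pte:.3f}, SE={ote / pte:.3f}")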

3.2 Malmquist Productivity Index Method

This method allows us to measure productivity change and to decompose it into two components: technological change and efficiency change. We adopt the methodology of Alam (2001), who considers the scenario of production based on one output and one input.

Figure 2:

Source: Alam (2001)

[(O, Tt) and (O, Tt+1)] represent the production technology frontiers at times t and t+1 under the hypothesis of Constant Returns to Scale (CRS). Tt+1 lies above Tt, which means that the technology evolved, or technical progress occurred, between t and t+1. Therefore, to determine the technical change, we must define the technological frontiers at time t and at time t+1.

Let us consider the case of a firm n at a given time t, represented in the figure above by the point (Xnt, Ynt). This firm is inefficient, given that it is located inside the efficient frontier [O, Tt], and its efficiency level is measured by the ratio [oa/ob < 1]. At t+1, the same firm is represented by the point (Xn,t+1, Yn,t+1), which is also inside the frontier [O, Tt+1]. It is again inefficient and its efficiency is measured by the ratio [oe/of < 1].

Since we have defined the technological frontiers and determined the different efficiency scores using the DEA technique, it only remains to determine the total change in productivity and the contribution of each of its two components to this change, both of which are measured by the Malmquist Productivity Index.

Grifell-Tatjé and Lovell (1995) defined the Malmquist Productivity Index for producer i between times t and t+1 as follows:

$$M_i^{t+1} = \left[\frac{D_i^{t}(x^{t+1}, y^{t+1})}{D_i^{t}(x^{t}, y^{t})} \times \frac{D_i^{t+1}(x^{t+1}, y^{t+1})}{D_i^{t+1}(x^{t}, y^{t})}\right]^{1/2},$$

where $D_i^{t}(x, y)$ denotes the distance function of producer i, i.e. the DEA efficiency score of the observation (x, y) measured against the frontier of period t, and $D_i^{t+1}(x, y)$ the score measured against the frontier of period t+1. A value of the index greater than one indicates productivity growth between the two periods.

If we return to the case of the firm "n" in the figure above, the decomposition can be illustrated as follows:

$$M_n^{t+1} = \frac{D_n^{t+1}(x^{t+1}, y^{t+1})}{D_n^{t}(x^{t}, y^{t})} \times \left[\frac{D_n^{t}(x^{t+1}, y^{t+1})}{D_n^{t+1}(x^{t+1}, y^{t+1})} \times \frac{D_n^{t}(x^{t}, y^{t})}{D_n^{t+1}(x^{t}, y^{t})}\right]^{1/2} = E_{t+1} \times A_{t+1},$$

where E_{t+1} and A_{t+1} represent, respectively, the technical efficiency change and the technological change of bank "n" between t and t+1. In terms of the figure, the first factor is the ratio of the two own-period efficiency scores, E_{t+1} = (oe/of)/(oa/ob), while the second factor measures the shift of the frontier as the geometric mean of the two cross-period comparisons.
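In practice, the index can be computed from four DEA scores per bank: each period's data evaluated against each period's CRS frontier. The sketch below is illustrative, not the authors' DEAP run; the helper distance and the function malmquist are assumed names, and the decomposition follows TFPCH = EFFCH × TECHCH as used in the tables that follow.

```python
# Hedged sketch of the Malmquist decomposition from cross-period CRS DEA scores.
import numpy as np
from scipy.optimize import linprog

def distance(X_ref, Y_ref, x, y):
    """Efficiency of the point (x, y) against the CRS frontier built from (X_ref, Y_ref)."""
    n, m = X_ref.shape
    s = Y_ref.shape[1]
    c = np.r_[1.0, np.zeros(n)]
    A_ub = np.vstack([np.c_[-x.reshape(m, 1), X_ref.T],
                      np.c_[np.zeros((s, 1)), -Y_ref.T]])
    b_ub = np.r_[np.zeros(m), -y]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

def malmquist(Xt, Yt, Xt1, Yt1, o):
    """Malmquist TFP index of bank o between t and t+1, with its E and A components."""
    d_t_t   = distance(Xt,  Yt,  Xt[o],  Yt[o])    # period-t data, period-t frontier
    d_t1_t1 = distance(Xt1, Yt1, Xt1[o], Yt1[o])   # period-(t+1) data, period-(t+1) frontier
    d_t_t1  = distance(Xt,  Yt,  Xt1[o], Yt1[o])   # period-(t+1) data, period-t frontier
    d_t1_t  = distance(Xt1, Yt1, Xt[o],  Yt[o])    # period-t data, period-(t+1) frontier
    E = d_t1_t1 / d_t_t                                  # efficiency change
    A = np.sqrt((d_t_t1 / d_t1_t1) * (d_t_t / d_t1_t))   # technological change
    return E * A, E, A                                   # (TFP change, EFFCH, TECHCH)
```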

4. Assessing And Decomposing Technical Efficiency In The Case Of Tunisian Banks: A

DEA Approach

4.1 Variables and related data

As we work within the intermediation approach, we consider two outputs: Loans (all forms of loans to customers) and Other Earning Assets (portfolio securities or stocks and loans to other banks and financial institutions). For its production, the bank needs three inputs: Fixed Assets, Interest-Bearing Liabilities (savings deposits, other deposits, interbank deposits and special financial resources), and Labor (number of full-time equivalent employees).
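As a purely hypothetical illustration of this variable set (the figures below are invented, not the TPBA data), the bank-year observations could be arranged as an output matrix Y and an input matrix X in the form expected by the DEA sketches above:

```python
# Illustrative data layout only; column names and values are hypothetical.
import numpy as np

outputs = ["loans", "other_earning_assets"]
inputs = ["fixed_assets", "interest_bearing_liabilities", "labor"]

# rows = banks observed in a given year, columns in the order listed above
Y = np.array([[1200.0,  340.0],
              [ 980.0,  210.0],
              [1500.0,  560.0]])
X = np.array([[ 45.0, 1800.0,  900.0],
              [ 38.0, 1500.0,  750.0],
              [ 60.0, 2300.0, 1400.0]])
```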

Our aim is to measure and analyze the change in Tunisian banks' efficiency over the period 1999-2008, which was characterized by dramatic changes such as deregulation, financial innovation, and technological progress in information and communication technologies.

We use Data Envelopment Analysis (DEA) to measure overall technical efficiency change as well as its two components: pure technical efficiency change and scale efficiency change. The data consist of the accounting values of the banks' inputs and outputs, drawn from the annual reports published by the Tunisian Professional Banking Association (TPBA). Our sample consists of 10 Tunisian commercial banks that were active and viable throughout the study period (1999-2008).

4.2 Results

The following table presents our results:


Table 1: Tunisian banks' overall technical efficiency and its components, pure technical efficiency and scale efficiency, 1999-2008

Year    Overall technical efficiency    Pure technical efficiency    Scale efficiency
1999    0.906    0.963    0.942
2000    0.927    0.964    0.962
2001    0.916    0.956    0.959
2002    0.925    0.962    0.963
2003    0.927    0.957    0.962
2004    0.937    0.965    0.972
2005    0.949    0.968    0.971
2006    0.918    0.944    0.973
2007    0.935    0.962    0.981
2008    0.939    0.951    0.967
Mean    0.928    0.959    0.965

Source: Output file generated by DEAP VERSION 2.1 software

From the above results we can derive the average technical inefficiency levels:

Average overall technical inefficiency = 1 − 0.928 = 7.2%.

Average pure technical inefficiency = 1 − 0.959 = 4.1%.

Average scale inefficiency = 1 − 0.965 = 3.5%.
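A one-line check of these figures from the Table 1 means (a verification sketch, not part of the DEAP output):

```python
# Inefficiency = 1 - efficiency, using the Table 1 means.
means = {"overall technical": 0.928, "pure technical": 0.959, "scale": 0.965}
for name, eff in means.items():
    print(f"average {name} inefficiency: {1 - eff:.1%}")  # 7.2%, 4.1%, 3.5%
```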

The estimated average pure technical inefficiency of 4.1% means that inefficient banks could reduce their input use by 4.1% relative to efficient banks, i.e. the banks with the best practices. In addition, the average scale inefficiency of 3.5% means that Tunisian banks could reduce their production costs by an average of 3.5% by increasing their size. We therefore find that the overall technical inefficiency of Tunisian banks is explained more by pure technical inefficiency than by scale inefficiency, whereas most empirical studies find that measured technical inefficiency is driven by scale inefficiency rather than by pure technical inefficiency.

Thus, Tunisian banks responded to the environmental changes by increasing their size through mergers and acquisitions and/or by expanding the scale of their production. In doing so, they realized economies of scale and improved their scale efficiency. Pure technical inefficiency, which stems from a poor allocation of resources and from weak management of the production process, therefore represents the main source of the overall technical inefficiency recorded for Tunisian banks.

We present the following figure:

Figure 3: Overall technical efficiency change and its components: pure technical efficiency change and scale efficiency change


Source: Output file generated by DEAP VERSION 2.1 software

The figure above indicates that overall technical efficiency increased rapidly between 1999 and 2005, from 90.6% to 94.9%, and then decreased between 2005 and 2008 to reach 93.9% by the end of the period. Pure technical efficiency followed a similar pattern: it increased between 1999 and 2005 from 96.3% to 96.8% and then fell to 95.1% by the end of 2008. According to the figure, scale efficiency increased over the whole period, from 94.2% to 96.7%.

We conclude that the changes in overall technical inefficiency are mainly driven by changes in pure technical inefficiency, suggesting that during the period 1999-2008 Tunisian banks were unable to improve their managerial performance. They need to make additional efforts to better control the technical aspects of their production and to improve the quality of their organization.

We also find that over the study period Tunisian banks reduced their scale inefficiency, which fell to 3.3% by 2008. This means that most banks are close to reaching the optimal size that maximizes their scale efficiency. It is clear that they preferred to increase their overall technical efficiency by increasing their size rather than by raising their pure technical efficiency levels.

Our results are consistent with those of Chaffai and Dietsch (1998), who analyzed the technical efficiency of Tunisian banks over the period 1986-1997 and found that these banks improved their scale efficiency from 82% to 97% at the expense of their technical efficiency, which decreased from 82% to 68%.

5. Assessing And Decomposing Productivity Change In The Case Of Tunisian Banks:

Application Of The Malmquist Index

5.1 Data

Computing the Malmquist Productivity Index requires estimating the efficiency frontiers using Data Envelopment Analysis (DEA). We therefore use the same data and variables as in the previous estimation.


5.2 Results and Interpretations

The estimates of Tunisian commercial banks' productivity obtained with the Malmquist Index within the DEA approach are presented in the table below:

Table 2: Total factor productivity change and its components, technical change and efficiency change, 1999-2008

Period       EFFCH    TECHCH    PECH     SECH     TFPCH
1999-2000    0.993    1.086     1.000    0.993    1.079
2000-2001    0.999    1.005     1.000    0.998    1.004
2001-2002    0.987    1.001     0.999    0.988    0.988
2002-2003    1.004    1.014     1.001    1.003    1.018
2003-2004    0.985    1.056     0.999    0.986    1.041
2004-2005    1.002    1.042     1.001    1.001    1.044
2005-2006    1.001    0.990     0.996    1.005    0.990
2006-2007    0.980    1.076     0.999    0.981    1.054
2007-2008    1.010    1.000     0.992    1.018    1.010
Mean         0.996    1.030     0.999    0.997    1.025

Note: EFFCH = efficiency change, TECHCH = technical change, PECH = pure efficiency change, SECH = scale efficiency change, TFPCH = total factor productivity change.

Source: Output file generated by DEAP VERSION 2.1 software

The annual average of total factor productivity increased by 2.5% (1.025 − 1 = 0.025) between 1999 and 2008. This increase is explained by technological progress of 3% (1.030 − 1 = 0.030), partly offset by a slight decline in technical efficiency of 0.4% (1 − 0.996 = 0.004).
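As a quick consistency check on the Table 2 means (a verification sketch, not DEAP output), the product of the mean efficiency change and the mean technical change should approximately reproduce the mean TFP change:

```python
# TFPCH should be (approximately) EFFCH x TECHCH, using the reported means.
effch, techch, tfpch = 0.996, 1.030, 1.025
print(round(effch * techch, 3))                     # 1.026, close to the reported 1.025
print(f"TFP growth:          {tfpch - 1:+.1%}")     # +2.5%
print(f"technical progress:  {techch - 1:+.1%}")    # +3.0%
print(f"efficiency change:   {effch - 1:+.1%}")     # -0.4%
```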

Let us consider the figure below:

Figure 4: Total Factor productivity change in the case of Tunisian banks

between 1999 and 2008

Source: Output file generated by DEAP VERSION 2.1 software


The figure above shows that the TFPCH index is greater than 1 in most years of the period 1999-2008, with the exception of two sub-periods: 2001-2002 and 2005-2006. These results suggest that Tunisian banks responded to the financial and technological changes in their environment by improving their total factor productivity in most years of the study period.

Let us consider another figure:

Figure 5: Evolution of the TFPCH, TECHCH and EFFCH indices over the period 1999-2008

Source: Output file generated by DEAP VERSION 2.1 software

The figure above shows that the evolution of the TFPCH index over the study period is explained by changes in the TECHCH index rather than in the EFFCH index. Thus, the increase in total factor productivity achieved by Tunisian banks during 1999-2008 is mainly due to the technological advances they introduced rather than to improvements in their technical efficiency.

These results are consistent with those of Wheelock and Wilson (1999), Rebelo and Mendes (2000), Alam (2001), and Sufian (2009), who showed that banks reacted to the dramatic changes in their environment by introducing information and communication technologies in order to modernize their services, become more competitive, and increase their productivity.

Furthermore, the Malmquist Index approach shows that Tunisian banks could become more productive by raising their pure technical efficiency. This means that these banks still have room to improve their organizational quality in order to better manage the technical aspects of their production.


6. Conclusion

Throughout this study, we have sought to measure and analyze Tunisian commercial banks' technical efficiency and productivity changes during a period characterized by several reforms and technological changes that dramatically affected their environment.

First, we estimated the evolution of technical efficiency using the non-parametric frontier method known as Data Envelopment Analysis (DEA). The results show that Tunisian commercial banks recorded an increase in their technical efficiency over the period 1999-2008.

By decomposing overall efficiency change into pure technical efficiency change and scale efficiency change, we concluded that the overall technical inefficiency recorded by Tunisian commercial banks was mainly due to pure technical inefficiency rather than to scale inefficiency.

In light of these results, Tunisian commercial banks should focus on improving their managerial methods in order to better control their production techniques and to offer the maximum of services with the minimum of resources. They should concentrate less on increasing their size, since the expected gains from scale changes had diminished by the end of the period.

Second, we followed the methodology of Alam (2001), based on the Malmquist Productivity Index, to assess Tunisian commercial banks' productivity change and to decompose it into technical efficiency change and technological change.

Our results show that Tunisian commercial banks increased their productivity over the period 1999-2008. Through the Malmquist Index approach we showed that this improvement in banking productivity is mainly due to technological progress. These results are consistent with studies of European and American banks, which find that the increase in total factor productivity is mainly driven by technological change.

Given that a bank's productivity depends on the technology used in its production process and on its technical efficiency, we conclude that Tunisian commercial banks have enhanced their productivity mainly by incorporating new banking technologies rather than by increasing their levels of technical efficiency.


References

Alam, I.M.S. (2001). A non-parametric approach for assessing productivity dynamics of large

banks. Journal of Money, Credit, and Banking, 33, 121–139.

Allen, F., & Santomero, A.M. (2001). What do Financial Intermediaries do? Journal of

Banking and Finance, 25(2), 271-294.

Aly, H.Y., Grabowski, R., Pasurka, C., & Rangan, N. (1990). Technical, scale and allocative efficiencies in US banking: An empirical investigation. The Review of Economics and Statistics, 72(2), 211-218.

Amel, D., Barnes, C., Panetta, F., & Salleo, C. (2004). Consolidation and efficiency in the

financial sector: A review of the international evidence. Journal of Banking and

Finance, 28, 2493-2519.

Bauer, P.W., Berger, A.N., & Humphrey, D.B. (1993). Efficiency and productivity growth in US banking. In H.O. Fried, C.A.K. Lovell, & S.S. Schmidt (Eds.), The Measurement of Productive Efficiency: Techniques and Applications (pp. 386-413). Oxford: Oxford University Press.

Berg, S.A., Forsund, F.R., & Jansen, E.S. (1992). Malmquist indices of productivity growth

during the deregulation of Norwegian banking, 1980–89. Scandinavian Journal of

Economics. 94 (Supplement), 211–228.

Berger, A.N., & Humphrey, D.B. (1992). Measurement and efficiency issues in commercial banking. In Z. Griliches (Ed.), Output Measurement in the Service Sectors (pp. 245-279). Chicago: University of Chicago Press.

Berger, A.N., & Mester L.J. (2003). Explaining the dramatic changes in performance of US

banks: technological change, deregulation, and dynamic changes in competition.

Journal of Financial Intermediation, 12, 57–95.

Boyd, J.H., & Gertler, M. (1994). Are Banks Dead? Or Are the Reports Greatly Exaggerated?

Federal Reserve Bank of Minneapolis Quarterly Review, 18(3), 2-23.

Chaffai, M.E., & Dietsch, M. (1998). Comment Accroître les Performances des Banques

Commerciales Tunisiennes : Une Question d’Organisation ou de Taille ? Finances &

Développement Au Maghreb, 24, 79-87.

Charnes, A., Cooper, W., & Rhodes, E. (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2(6), 429-444.

Colwell, R.J., & Davis, E.P. (1992). Output, Productivity and Externalities – the case of

Banking. Bank of England Working Papers, N°3, London.

Farrell, M.J. (1957). The measurement of productive efficiency. Journal of the Royal

Statistical Society, 120, 253-281.

Grifell-Tatjé, E., & Lovell, C.A.K. (1997). The sources of productivity change in Spanish banking. European Journal of Operational Research, 98, 364-380.

Hackethal, A. (2001). How unique are US banks? The role of banks in five major financial systems. Journal of Economics and Statistics, 221(5-6), 592-619.

Huang C., & Tan-Fu, T. (2009). Uncertainty and total factor productivity in the Taiwanese

banking industry. Applied Financial Economics, 19 (9), 753-766.

Isik, I., & Hassan, M.K. (2003). Efficiency, ownership and market Structure, corporate

control and governance in the Turkish banking industry. Journal of Business Finance

and Accounting, 30 (9-10), 1363-1421.

Johnson, G., & Scholes, K. (1997). Exploring Corporate Strategy: Text and Cases (4th ed.).

New York: Prentice-Hall.

Lipsey, R.G., & Carlaw, K. (2000). What Does Total Factor Productivity Measure?

International Productivity Monitor, 1, 31-40.

Mester, L.J. (2007). Some Thoughts on the Evolution of the Banking System and the Process

of Financial Intermediation. Economic Review, First and Second Quarters, 67-75.


Miller, S.M., & Noulas, A.G. (1996). The Technical Efficiency of Large Bank Production.

Journal of Banking and Finance, 20, 495-509.

Milma, J.P., & Hjalmarsson, L. (2002). Measurement of Inputs and Outputs in The Banking

Industry. Tanzanet Journal, 3(1), 12-22.

Parsons, D.J., Gotlieb, C.C., & Denny, M. (1990). Productivity and computers in Canadian

banking. Journal of Productivity Analysis, 4, (1-2), 95-113.

Rebelo, J., & Mendes, V. (2000). Malmquist indices of productivity change in Portuguese banking: The deregulation period. International Advances in Economic Research, 6(3), 531-543.

Samolyk, K. (2004). The Future of Banking in America: The evolving role of commercial

banks in U.S. credit markets. FDIC Banking Review, 16(2), 29-65.

Schaffnit, C., Rosen, D., & Paradi, J.C. (1997). Best practice analysis of bank branches: An application of DEA in a large Canadian bank. European Journal of Operational Research, 98(2), 269-289.

Sharpe, A. (2002). Productivity Concepts, Trends and Prospects: An Overview. The Review

of Economic Performance and Social Progress, 2, 29-56.

Siems, T.F., & Barr, R.S. (1998). Benchmarking the productive efficiency of US banks.

Financial Industry Studies, 11-24.

Solow, R.M. (1957). Technical change and the aggregate production function. Review of Economics and Statistics, 39, 312-320.

Staikouras C., Mamatzakis E., & Koutsomanoli-Filippaki A. (2008). Cost efficiency of the

banking industry in the South Eastern European region. Journal of International

Financial Markets, Institutions and Money, 18, 483-497.

Sufian, F. (2009). The impact of off-balance sheet items on banks’ total factor productivity:

empirical evidence from the Chinese banking sector. American Journal of Finance

and Accounting, 1(3), 213-238.

Wheelock, D.C., & Wilson, P.W. (1999). Technical progress, inefficiency and productivity

change in US banking, 1984–1993. Journal of Money, Credit, and Banking, 31, 213–

234.