
Invensys Protocol Magazine – Issue 10, July/August 2012 – "Virtualisation – not all smoke and mirrors"



JULY/AUGUST 2012

EVENTS | TECH TIPS | TRAINING | SUPPORT

System Platform and Virtualisation

The Cloud. Here to stay or fading drizzle?

Virtual Reality in the Process Industry

Cyber security and the Power industry

The Economics of the Cloud

Virtualisation – not all smoke and mirrors

Getting to grips with what’s unreal


011 510 0340 | [email protected] | www.advansys.co.za

Advansys provides specialised Industrial Control and Automation Engineering and Consulting services to the manufacturing and utilities sectors throughout Southern Africa.

Our services include:
▪ Process analysis and performance optimisation
▪ Control system design, specification and project management
▪ Instrumentation specification and installation
▪ PLC solutions
▪ SCADA / HMI solutions
▪ S88 Batch solutions
▪ S95 MES and “vertical” integration solutions
▪ Reporting solutions
▪ Manufacturing Intelligence solutions
▪ Software Applications solutions

Award winning innovation in HMI / SCADA Implementation

Upgrade to Advansys

Advansys invites you to contact us for Wonderware related project initiatives and software licensing as follows:
1. HMI / SCADA Standards Design and Development
2. ArchestrA System Platform Implementations
3. Production / KPI Reporting Solutions
4. Wonderware Licensing
5. Wonderware aligned Customer First Support package with your annual renewal

Innovation Award 2009 / Best EMI Application 2009 / Top SI Award 2010 / Best HMI Application 2010 / Top SI Award 2011


Facebook: Wonderware Southern Africa | Twitter: WonderwareSA | YouTube: WonderwareSA

www.protocolmag.co.za

Protocol Magazine
Owner and Publisher: Invensys Operations Management Southern Africa

Marketing Manager: Jaco Markwat [email protected]

Editor: Denis du Buisson, GMT [email protected]

Advertising Sales: Heather Simpkins, The Marketing [email protected]

Distribution: Nikita Wagner [email protected]

Contributors
Many thanks to the following for their contributions to this issue of the magazine:

Contents
2 Editor’s notes

3 Virtualisation and the ArchestrA System Platform

4 Invensys first to support virtualisation technologies for High Availability and Disaster Recovery

5 About System Platform - your base for operational excellence

7 Getting started with System Platform and virtualisation

10 Virtualising the ArchestrA System Platform

12 Availability

14 Disaster Recovery

15 High Availability with Disaster Recovery

16 The economics of the cloud

30 The cloud ... here to stay or fading drizzle?

33 An African cloud

34 The role of virtual reality in the process industry

42 Cyber security in the power industry

43 Power industry locks down

48 Control and Automation security: Fort Knox or swing-door

50 Eskom conforms to legal emission limits with help from Wonderware

55 Thin client computing for virtualisation and SCADA

59 Virtualisation needs high availability functionality

60 Virtualisation dictionary

63 Events - MESA SA “Adapt or Die” 2012 CONFERENCE

65 Use Protocol Magazine to generate business opportunities

68 2012 Training Schedule (Johannesburg)

69 Support – Customer FIRST

70 On the lighter side

72 Protocol crossword #55

• Rolf Harms, Director of Corporate Strategy and Michael Yamartino, Manager Corporate Strategy, both from Microsoft Corporation, for the highly informative article on the economics of the cloud

• John Coetzee of Aristotle Consulting for the article titled: Control and Automation security: Fort Knox or Swing-door?

• Gerhard Greeff, Divisional Manager at Bytes Systems Integration for the article titled: The Cloud ... here to stay or fading drizzle?

• Maurizio Rovaglio and Tobias Scheele of Invensys for the article dealing with the role of virtual reality in the process industry.

• Ernest Rakaczky and Thomas Szudajski of Invensys for the article on cyber security in the power industry.



Editor’s Notes

Virtualisation – not all smoke and mirrors

While this magazine is real, what it talks about doesn’t necessarily exist in the same reality that we’re used to. No one has ever seen most of the servers in today’s industrial and business systems, for example, because they don’t really exist – at least not as individual box-like entities. Yet they do all the work that their “real” counterparts do. Welcome to the world of virtualisation, which is rapidly becoming the de-facto computational environment of today and tomorrow.

This becomes obvious when looking at the market landscape for virtualisation as shown in the diagram, where the installed base of physical units is dwarfed by that of logical units.

When I first came across virtualisation and cloud computing, I asked myself a simple question: Why? Why must I reprogram my intellectually-challenged technology brain cell to wrap itself around what sounds like yet another (and somewhat far-fetched) vendor-driven fad designed to separate people from their wallets?

I discovered that what’s driven virtualisation is getting the most from the unused power in existing computer technology so that end-users could access a greater set of solutions at a lower cost. This came as a huge blow to my home-cooked cynicism about a vendor-driven fad but gave my technology brain cell a new purpose and a reason to live.

With computing power doubling every 18 months according to Moore’s law, the PCs on our desks and the servers in our server rooms are doing nothing most of the time – so why not put them to work? Powerful desktop PCs can be replaced with thin clients, while a single server can now host many “virtual” servers, each operating in its own environment and making the most of the power of multi-core CPUs. What’s more, these servers need not all be in one box but “somewhere out there” in the cloud of Internet and intranet-linked computing resources and services. The operational and cost benefits to be derived from this arrangement are numerous, but that led me to another question: while this may sound like the ultimate “outsourcing” solution, what about security and proprietary processes (e.g. real-time manufacturing)?

Quite simply, there are some applications which are suited to the cloud environment while others (e.g. real-time industrial processes) are not. Further, as outlined in the article “The economics of the cloud”, there is no reason why cloud security should be any more lax than any other. To quote: “In fact, they [public clouds] are likely to become more secure than on premises [private clouds] due to the intense scrutiny providers must place on security and the deep level of expertise they are developing.”

As organisations continue to implement virtualisation and converge their data centre environments, client architectures also continue to evolve in order to take advantage of the predictability, continuity and quality of service delivered by their converged infrastructure. Selected client environments move workloads from PCs and other devices to data centre servers, creating well-managed virtual clients, with applications and client operating environments hosted on servers and storage in the data centre. For users, this means they can access their desktop from any location, without being tied to a single client device. Since the resources are centralised, users moving between work locations can still access the same client environment with their applications and data. For IT administrators, this means a more centralised, efficient client environment that is easier to maintain and able to respond more quickly to the changing needs of users and businesses.

So, it’s not all smoke and mirrors after all.

Until next time,

Denis du Buisson
[email protected]

Acknowledgement: Some of this material is sourced from Wikipedia.


Virtualisation and the ArchestrA System Platform

Launched in 2003, Invensys Wonderware’s System Platform is the unique unifying force behind industrial applications and their implementation. This ArchestrA-based, industrial “operating system” preserves past investments, helps save buckets of money on current engineering efforts and is ready to do the same in the future – because that’s what it was designed to do.


Invensys first to support virtualisation technologies for High Availability and Disaster Recovery

Company expands certification for VMware and Microsoft Hyper-V virtualisation platforms

In February 2012, Invensys Operations Management announced that it had expanded its certification for virtualisation technology, making it the first industrial automation provider to be certified for high availability, disaster recovery and fault tolerance in supervisory control applications leveraging both the VMware and Microsoft Hyper-V virtualisation platforms. The company’s ArchestrA® System Platform 2012 and Wonderware® InTouch® 2012 software are now certified for the latest VMware solutions, including VMware vSphere version 5.0 and ESXi version 5.0 for mission-critical applications.

“Historically, high-availability and disaster-recovery solutions in supervisory control systems were expensive to implement, not only because of hardware and software costs, but also because of additional administrative burdens,” said Deon van Aardt, Divisional Director, Invensys Operations Management Southern Africa. “Along with many other benefits, when ArchestrA System Platform 2012 and InTouch 2012 software are deployed, they support high-availability and disaster recovery implementations using Windows Server Hyper-V virtualisation from Microsoft, as well as the latest remote desktop services that are part of Windows Server 2008 R2. Now, after a rigorous validation period, our ArchestrA System Platform 2012 and Wonderware InTouch 2012 software are also certified for disaster recovery and high availability using VMware virtualisation. All this is possible on commercial operating systems using off-the-shelf hardware, further reducing cost and easing implementation of mission-critical applications.”

Virtualisation software, like that offered by Microsoft and VMware, transforms or “virtualises” a computer’s hardware, such as the CPU, hard drive and network controller, to create a virtual computer that can run its own operating system and applications just like a “standard” computer. By sharing hardware resources with each other, multiple operating systems can run simultaneously on a single physical computer. And because it has the CPU, memory and network devices of the “host,” a virtual machine is completely compatible with all standard operating systems, applications and drivers. With virtualisation, users can safely run several operating systems and applications at the same time on a single computer, with each having access to the resources it needs when it needs them. And it’s all possible with commercial off-the-shelf hardware and operating systems.

“End-users are rapidly deploying virtualisation solutions to reduce the number of physical servers needed for their plants in order to lower their hardware costs, IT costs and energy bills,” said Craig Resnick, vice president, ARC Advisory Group. “Virtualisation technology also helps end-users with system deployment of high-availability, disaster-recovery and fault-tolerance solutions as it is used to quickly get plants back up and running when computers fail, regardless of location. Invensys Operations Management’s certification for both VMware and Microsoft Hyper-V ensures that its customers are covered and protected regardless of their choice of platform.”

“While the underlying technology is sophisticated, virtualisation can deliver benefits that are much simpler to understand and achieve,” added van Aardt. “By eliminating dependencies between the physical hardware and the software, customers have more choices to improve the management of their applications, servers and equipment. One of the many benefits is the ability to move virtual machines between host computers, which enables a variety of different fail-safe scenarios to be implemented, each providing options for different levels of redundancy that make systems more resilient, less prone to equipment or site failure and simpler to upgrade.”

At the OpsManage’11 conference last year, Invensys Operations Management set up and demonstrated a scenario where an entire primary system in Florida failed over to a backup disaster recovery system in California.

“Customers were impressed with the ease and speed with which the backup system took over control, using only commercial hardware,” said van Aardt. “That’s the flexibility they are looking for to modernise and optimise their businesses. As the first industrial automation provider to be certified for high availability, disaster recovery and fault tolerance on the two major virtualisation platforms, we look forward to continuing to offer the products our customers need to achieve the highest levels of system availability, reliability and operational efficiency.”


About System Platform - your base for operational excellence

In a nutshell ...

The Wonderware System Platform provides a versatile application server, a powerful historian server, an easy-to-use information server and unparalleled connectivity. This de-facto information-capturing and application-unifying standard is designed with flexibility and power for a wide range of applications and industries, from geo-SCADA to real-time environments.

Key benefits
• Integrate people, information and processes
• Empower and motivate people to collaborate
• Drive, enforce and adapt processes
• Drive, enforce and adapt standards
• Achieve consistent high quality
• Enable innovation and adaptability to change
• Reduce risk
• Shorten the time to value/time to market
• Retain highly valued people
• Increase profits

Key capabilities
• Common plant model reduces complexity
• Remote software deployment and maintenance
• Extensible and easily maintained using template-based and object-oriented structures
• Powerful role-based security model
• “Optimised for SCADA” network and communication features
• Historical data collection and advanced trending
• Web-based reporting capabilities
• Support of Microsoft Remote Desktop Services, Smart Card authentication and Hyper-V virtualisation allows highly economic, secure and available systems

The Wonderware® System Platform is a strategic industrial software application platform that’s built on ArchestrA® technology for supervisory control, geo-SCADA as well as production and performance management solutions. Designed to suit the needs of industrial automation and information personnel, the Wonderware System Platform is the backbone and manager for all functional capabilities required for industrial software solutions.

With the System Platform, Wonderware provides an industrialised application server, a powerful historian server, an easy-to-use information server as well as unparalleled connectivity, all specifically built for demanding real-time industrial environments.

System Platform 2012 features a virtual computer infrastructure, improved process control and tighter integration of automation applications.

With System Platform, implementation topology and architecture can be modified and redeployed without any need for reengineering, giving you the advantage of freedom and speed.

Virtual computer infrastructure

ArchestrA System Platform 2012 supports new high-availability and disaster recovery implementations using Windows Server Hyper-V virtualisation from Microsoft. In addition, ArchestrA System Platform 2012 software supports all the latest remote desktop services that are part of Windows Server 2008 R2.

Many Windows Server Hyper-V customers in manufacturing and processing need to integrate legacy automation, monitoring and reporting systems across different locations. By supporting the full spectrum of Windows Server Hyper-V capabilities, Invensys Operations Management is enabling the flexibility and technology their customers need to achieve real-time business optimisation.

Today’s manufacturers and processing companies typically operate a range of facilities, all with different automation, monitoring and reporting systems. Managing these legacy systems can be


costly and time-consuming. ArchestrA System Platform solves this problem by providing a common, highly efficient infrastructure to easily develop, manage and maintain industrial applications with exceptional scalability and openness. These solutions allow users, anywhere on the network, to design, build and deploy industrial workflow, HMI as well as automation and information applications, while leveraging a powerful combination of re-usable application templates, along with easy and transparent management of physical and virtualised computers. Not only does this reduce engineering costs, shorten project timetables and enforce automation and operational procedures, it really empowers the workforce and strengthens the drive toward real-time business optimisation.

The Wonderware System Platform provides cost-effective communication to virtually any plant information source, including historians, relational databases, quality and maintenance systems, enterprise business systems and manufacturing execution systems.

What follows is a brief look at what you need to consider for the implementation of Virtualisation, High Availability and Disaster Recovery on your existing ArchestrA System Platform. These are extracts from the comprehensive 700-page document titled “Wonderware ArchestrA System Platform in a Virtualised Environment - Implementation Guide”.

The full guide on implementing the ArchestrA System Platform in a Virtualised Environment using Microsoft Hyper-V technology, failover clustering and other strategies to create High Availability, Disaster Recovery and High Availability with Disaster Recovery capabilities can be viewed on-line, and you can also download it and print it in part or in whole.


Getting started with System Platform and virtualisation

Overview of implementing ArchestrA System Platform in a virtualised environment

This is done using Microsoft Hyper-V technology, failover clustering and other strategies to create High Availability (HA), Disaster Recovery (DR) as well as High Availability integrated with Disaster Recovery capabilities.

Virtualisation technologies are becoming a high priority for IT administrators and managers, software and systems engineers, plant managers, software developers and system integrators. Mission-critical operations in both small and large-scale organisations demand availability – defined as the ability of the user community to access the system – along with dependable recovery from natural or man-made disasters. Virtualisation technologies provide a platform for High Availability and Disaster Recovery solutions.

Let’s assume that you and your organisation have done the necessary research and analysis and have made the decision to implement ArchestrA System Platform in a virtualised environment that will replace the need for physical computers. Such an environment can take advantage of advanced virtualisation features including High Availability and Disaster Recovery. In that context, we’ll define the terms as follows (a short consolidation sketch follows these definitions):

• Virtualisation is a concept in which access to a single underlying piece of hardware (like a server) is coordinated so that multiple guest operating systems can share that single piece of hardware, with no guest operating system being aware that it is actually sharing anything. So, instead of using, say, four servers all running at between 10% and 35% utilisation, one could rather run the applications as four virtual machines on two physical servers – and these two servers may now run at, say, 70% utilisation. In short, virtualisation allows for two or more virtual computing environments on a single piece of hardware that may be running different operating systems, and it decouples users, operating systems and applications from physical hardware. Virtualisation creates a virtual (as opposed to “real”) version of ArchestrA System Platform or one of its components, including servers, nodes, databases, storage devices and network resources.

• High Availability (HA) means having the physical servers on which the community depends available for use the great majority of the time (e.g. 99% or greater). So, if a physical server fails, the system will ensure a rapid switchover to a standby backup unit to ensure continuity of service. A failover condition may result in some loss of data.

• Disaster Recovery (DR) is intended for use when localised, on-line servers are unable to function or in the event of their destruction. Because of this, systems able to cope with Disaster Recovery are normally quite geographically remote from those they will be asked to back up. For example, at the OpsManage’11 conference, Invensys Operations Management set up and demonstrated a scenario where an entire primary system in Florida failed over to a backup disaster recovery system in California. Because these systems are not on-line like the local servers, a failover condition may result in the loss of more data than would be the case in a local switch-over condition.
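As flagged above, here is a minimal Python sketch of the consolidation arithmetic used in the virtualisation definition. The 10%-35% per-server utilisation range and the 70% target come from the example in the text; the function and variable names are illustrative only and are not part of any Invensys or Microsoft tooling.

```python
import math

def hosts_needed(workload_percents, target_percent=70):
    """Estimate how many identical physical hosts are needed to run the given
    workloads as virtual machines without exceeding a target utilisation."""
    total = sum(workload_percents)          # aggregate demand, in percent of one server
    return max(1, math.ceil(total / target_percent))

# Four physical servers, each running at the upper end of the 10%-35% range quoted above.
workloads = [35, 35, 35, 35]

hosts = hosts_needed(workloads)
print(f"Hosts required: {hosts}")                                  # -> 2
print(f"Average utilisation per host: {sum(workloads) / hosts}%")  # -> 70.0%
```

The same calculation scales to any mix of workloads: sum the measured utilisations and divide by the utilisation you are prepared to run the hosts at.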

While these definitions are general and allow for a variety of HA and DR designs, what follows focuses on virtualisation, an indispensable element in creating the redundancy necessary for HA and DR solutions.

Types of Virtualisation

There are eight types of virtualisation:

• Hardware: A software execution environment separated from underlying hardware resources. Includes hardware-assisted virtualisation, full and partial virtualisation and paravirtualisation.
• Memory: An application operates as though it has sole access to memory resources, which have been virtualised and aggregated into one memory pool. Includes virtual memory and memory virtualisation.
• Storage: Complete abstraction of logical storage from physical storage.
• Software: Multiple virtualised environments hosted within a single operating system instance. Related is a virtual machine (VM), which is a software implementation of a computer, possibly hardware-assisted, which behaves like a real computer.
• Mobile: Uses virtualisation technology in mobile phones and other types of wireless devices.
• Data: Presentation of data as an abstract layer, independent of underlying databases, structures and storage. Related is database virtualisation, which is the decoupling of the database layer within the application stack.
• Desktop: Remote display, hosting or management of a graphical computer environment – a desktop.
• Network: Implementation of a virtualised network address space within or across network subnets.

Virtualisation using a Hypervisor

Microsoft Hyper-V technology implements a type of hardware virtualisation using a hypervisor, permitting a number of guest operating systems (virtual machines) to run concurrently on a host computer. The hypervisor, also known as a Virtual Machine Monitor (VMM), is so named because it exists above the usual supervisory portion of the operating system.

There are two classifications of hypervisor:

• Type 1: Also known as a bare-metal hypervisor, runs directly on the host hardware to control it and to monitor the guest operating systems. Guest operating systems run as a second level above the hypervisor.
• Type 2: Also known as a hosted hypervisor, runs within a conventional operating system environment as a second software level. Guest operating systems run as a third level above the hypervisor.

Hyper-V architecture

Hyper-V implements Type 1 hypervisor virtualisation, in which the hypervisor is primarily responsible for managing the physical CPU and memory resources among the virtual machines. This basic architecture is illustrated in figure 1.

Figure 1: Hyper-V architecture

VM and Hyper-V Limits in Windows Server 2008 R2

Tables 1 and 2 show maximum values for VMs and for a server running Hyper-V in Windows Server 2008 R2 Standard and Enterprise editions, respectively. By understanding the limits of the hardware, software and virtual machines, you can better plan your ArchestrA System Platform virtualised environment.

Table 1: Hyper-V Server maxima – Windows 2008 R2 Standard Edition

Component | Maximum | Notes
Virtual processors | 4 |
Memory | 64 GB |
Virtual IDE disks | 4 | The boot disk must be attached to one of the IDE devices. The boot disk can be either a virtual hard disk or a physical disk attached directly to a virtual machine.
Virtual SCSI controllers | 4 | Use of virtual SCSI devices requires integration services to be installed in the guest operating system.
Virtual SCSI discs | 256 | Each SCSI controller supports up to 64 SCSI discs.
Virtual hard disk capacity | 2040 GB | Each virtual hard disk is stored as a .vhd file on physical media.
Size of physical discs attached to a VM | Varies | Maximum size is determined by the guest operating system.
Checkpoints (snapshots) | 50 | The actual number depends on the available storage and may be lower. Each snapshot is stored as an .avhd file that consumes physical storage.
Virtual network adapters | 12 | 8 of these can be the “network adapter” type, which provides better performance and requires a virtual machine driver that is included in the integration services packages. The remaining 4 can be the “legacy network adapter” type, which emulates a specific physical network adapter and supports the Pre-execution Boot Environment (PXE) to perform a network-based installation of an operating system.
Virtual floppy drives | 1 |
Serial (COM) ports | 2 |

Table 2: Hyper-V Server maxima – Windows 2008 R2 Enterprise Edition

Component | Maximum | Notes
Logical processors | 64 |
Virtual processors per logical processor | 8 |
Running virtual machines per server | 384 |
Virtual processors per server | 512 |
Memory | 1 TB |
Storage | Varies – no limit imposed by Hyper-V | Limited by the support capability of the management operating system.
Physical network adapters | No limit imposed by Hyper-V |
Virtual networks (switches) | Varies – no limit imposed by Hyper-V | Limited by available computing resources.
Virtual network switch ports per server | Varies – no limit imposed by Hyper-V | Limited by available computing resources.
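At planning time it can help to check a proposed virtual machine against the per-VM maxima above. The short Python sketch below does this for a few of the values from Table 1; the dictionary entries are the published limits quoted in the table, while the function name and the example VM specification are illustrative assumptions.

```python
# Selected per-VM maxima from Table 1 (Windows Server 2008 R2 Hyper-V).
VM_MAXIMA = {
    "virtual_processors": 4,
    "memory_gb": 64,
    "virtual_ide_disks": 4,
    "virtual_scsi_controllers": 4,
    "virtual_network_adapters": 12,
    "snapshots": 50,
}

def check_vm_spec(spec: dict) -> list:
    """Return a warning for every value in `spec` that exceeds the per-VM maxima."""
    warnings = []
    for key, limit in VM_MAXIMA.items():
        requested = spec.get(key, 0)
        if requested > limit:
            warnings.append(f"{key}: requested {requested}, Hyper-V maximum is {limit}")
    return warnings

# Example: a planned Application Server VM (hypothetical figures).
planned_vm = {"virtual_processors": 4, "memory_gb": 8, "virtual_ide_disks": 1,
              "virtual_scsi_controllers": 1, "virtual_network_adapters": 2, "snapshots": 10}

for line in check_vm_spec(planned_vm) or ["Specification is within the published limits."]:
    print(line)
```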


Avantis | Eurotherm | Foxboro | IMServ | InFusion | SimSci-Esscor | Skelta | Triconex | Wonderware

Imagine a better future

Traffic that flows. Intelligent buildings that save energy. A public infrastructure that delivers whole new levels of service at lower cost. Systems, assets, people and the environment living in harmony. You have imagined ArchestrA, the Wonderware technology that lets you manage your infrastructure as you like in an integrated way. Open, scalable, affordable.

Building intelligence • Resources optimization • Cost reduction • Security • Better services • Integrated management of the City • Public utilities management

Turn imagination into reality with Wonderware. Visit wonderware.com/Infrastructure for more info.

Facility Management • Environment • Power • Smart Cities • Transportation • Waste • Water & Wastewater

© Copyright 2012. All rights reserved. Invensys, the Invensys logo, Avantis, Eurotherm, Foxboro, IMServ, InFusion, Skelta, SimSci-Esscor, Triconex and Wonderware are trademarks of Invensys plc, its subsidiaries or affiliates. All other brands and product names may be trademarks of their respective owners.


Virtualising the ArchestrA System Platform

Abstraction versus Isolation

With the release of InTouch 10.0, supporting the VMware ESX platform, Wonderware became one of the first companies to support virtual machine operation of industrial software. VMware ESX is referred to as a “bare metal” virtualisation system: the virtualisation is run in an abstraction layer, rather than in a standard operating system.

Microsoft takes a different approach to virtualisation. Microsoft Hyper-V is a hypervisor-based virtualisation system. The hypervisor is essentially an isolation layer between the hardware and the partitions which contain guest systems. This requires at least one parent partition, which runs Windows Server 2008.

Figures 2 and 3 show non-virtualised and virtualised ArchestrA System Platform topologies respectively.

Figure 2: A common, non-virtualised ArchestrA System Platform topology.
Figure 3: The same environment as figure 2 but virtualised using Hyper-V.

Note: An abstraction layer is a layer with drivers that make it possible for the virtual machine (VM) to communicate with hardware (VMware). In this scenario, the drivers need to be present for proper communication with the hardware. With an isolation layer, the VM uses the operating system, its functionality and its installed drivers; this scenario does not require special drivers. As a comparison, the abstraction layer in VMware is 32 MB and in Hyper-V it is 256 kB.

Sizing recommendations for virtualisation

The following provides sizing guidelines and recommended minima for ArchestrA System Platform installations. For a virtualisation-only implementation, you can use these minima and guidelines to size the virtualisation server or servers that will host your System Platform configuration.

Cores and Memory

• Spare resources: The host server should always have spare resources of 25% above what the guest machines require. For example, if a configuration with five nodes requires 20 GB of RAM and 10 CPUs, the host system should have 25 GB of RAM and 13 CPUs. If this is not feasible, choose the alternative closest to the 25% figure, but round up so that the host server has 32 GB of RAM and 16 cores.
• Hyper-Threading: Hyper-Threading Technology can be used to extend the number of cores, but it does impact performance. An 8-core CPU will perform better than a 4-core CPU that is Hyper-Threading.

Storage

It is always important to plan for proper storage. A best practice is to dedicate a local drive, or a virtual drive on a Logical Unit Number (LUN), to each of the VMs being hosted. We recommend SATA or higher interfaces.

Recommended storage topology

To gain maximum performance, the host OS should also have a dedicated storage drive. A basic storage topology would include:

• Host storage
• VM storage for each VM
• A general disc, large enough to hold snapshots, backups and other content. It should not be used by the host or by a VM.

Recommended storage speed

Boot times and VM performance are affected by both storage bandwidth and storage speed. Faster is always better: drives rated at 7200 rpm perform better than those rated at 5400 rpm, and solid-state drives (SSDs) perform better than 7200-rpm drives. Keep in mind that multiple VMs attempting to boot from one hard drive will be slow and performance will degrade significantly. Attempting to save on storage could well become more costly in the end.

Networks

Networking is as important as any other component for the overall performance of the system.

Recommended networking for virtualisation: If virtualisation is your only requirement, your network topology could include the following elements:

• Plant network
• Storage network
• Hyper-V network

A best practice is to establish, on every node, an internal-only static virtual network. In the event that the host and the guest VMs become disconnected from the outside world, you will still be able to communicate through an RDP session, independent of external network connectivity.

Table 3 shows recommended minima for System Platform configurations.

Table 3: Minimum System Platform configurations for small (black) as well as medium and large (red) systems

Item | Cores | RAM (GB) | Storage (GB)
Galaxy Repository node | 2-4 | 2-4 | 100/250
Historian | 2-4 | 2-4 | 250/500
Application Server | 2-2/4 | 2-4 | 100/100
RDS Servers | 2-4/8 | 2-4/8 | 100/100
Information Servers | 2-4 | 2-4 | 100/100
Historian Clients | 2-2 | 2-4 | 100/100
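As a rough planning aid, the sketch below adds up the small-system minima from Table 3 for a five-node configuration and then applies the 25% spare-resource rule from the sizing guidelines above. The per-node figures are the lower values quoted in the table; the node list, dictionary and function names are illustrative assumptions, not part of the implementation guide.

```python
import math

# Lower-end (small system) minima from Table 3: (cores, RAM in GB).
NODE_MINIMA = {
    "Galaxy Repository": (2, 2),
    "Historian": (2, 2),
    "Application Server": (2, 2),
    "RDS Server": (2, 2),
    "Information Server": (2, 2),
}

def host_sizing(nodes, headroom=0.25):
    """Sum the guest requirements and add the recommended 25% spare resources."""
    cores = sum(NODE_MINIMA[n][0] for n in nodes)
    ram_gb = sum(NODE_MINIMA[n][1] for n in nodes)
    return math.ceil(cores * (1 + headroom)), math.ceil(ram_gb * (1 + headroom))

# A five-node configuration hosted on one virtualisation server.
nodes = ["Galaxy Repository", "Historian", "Application Server",
         "RDS Server", "Information Server"]
cores, ram = host_sizing(nodes)
print(f"Host should have at least {cores} cores and {ram} GB of RAM")  # -> 13 cores, 13 GB
```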



Availability

Levels of Availability

When planning a virtualisation implementation – for High Availability, Disaster Recovery, Fault Tolerance and Redundancy – it is helpful to consider levels or degrees of redundancy and availability, as shown in table 4.

Level | Description | Comments
Level 0: Redundancy | No redundancy built into the architecture for safeguarding critical architectural components. | Expected failover: none
Level 1: Cold stand-by redundancy | Redundancy at the Application Object level. Safeguards single points of failure at the DAServer level or AOS redundancy. | Expected failover: 10 to 60 seconds
Level 2: High Availability (HA) | With provision to synchronise in real time; uses virtualisation techniques; can be 1-n levels of hot standby; can be geographically diverse (DR); uses standard OS and non-proprietary hardware. | Expected failover: uncontrolled, 30 seconds to 2 minutes; Disaster Recovery: 2-7 minutes
Level 3: Hot redundancy | Redundancy at the application level, typically provided by Invensys controllers; for example, hot backup of Invensys software such as the Alarm System. | Expected failover: next cycle or single-digit seconds
Level 4: Lock-step fault tolerance (FT) | Provides lock-step failover. | Expected failover: next cycle or without loss of data. For ArchestrA System Platform, this would be a Marathon-type solution, which can also be a virtualised system.

Table 4: Degrees of redundancy and availability that should be considered

High Availability

About HA

High Availability refers to the availability of resources in a computer system following the failure or shutdown of one or more components of that system.

At one end of the spectrum, traditional HA has been achieved through custom-designed and redundant hardware. This solution produces high availability, but has proven to be very expensive. At the other end of the spectrum are software solutions designed to function with off-the-shelf hardware. This type of solution typically results in significant cost reduction and has proven to survive single points of failure in the system.

High Availability scenarios

The basic HA architecture implementation described here consists of an on-line system including a Hyper-V Server and a number of virtual PCs, linked by a LAN to an offline duplicate system. The LAN accommodates a number of networks, including a plant floor network linked to plant operations, an I/O network linked to field devices and a replication network linked to storage.

The basic architecture shown in figure 4 permits a number of common scenarios:

IT maintains a virtual server

• A system engineer fails over all virtual nodes hosting ArchestrA System Platform software to the backup virtualisation server over the LAN.
• For a distributed system, the system engineer fails over all virtual nodes to the backup virtualisation server over a WAN.
• IT performs the required maintenance, requiring a restart of the primary virtualisation server.


Figure 4: High-Availability architecture

Virtualisation server hardware fails (Note: this scenario is a hardware failure, not software. A program that crashes or hangs is a failure of software within a given OS.)

• The primary virtualisation server hardware fails, with a backup virtualisation server on the same LAN.
• For a distributed system, the virtualisation server hardware fails, with a backup virtualisation server over a WAN.

A network fails on a virtual server

• Any of the primary virtualisation server network components fail with a backup virtualisation server on the same LAN, triggering a failover of virtual nodes to the backup virtualisation server.
• Any of the primary virtualisation server network components fail with a backup virtualisation server connected via WAN, triggering a failover of virtual nodes to the backup virtualisation server over the WAN.

For these scenarios, the following expectations apply:

• For the maintenance scenario, all virtual images are up and running from the last state of execution prior to failover.
• For the hardware and network failure scenarios, the virtual images restart following failover.
• For LAN operations, you should see operational disruptions of approximately 2-15 seconds (LAN operations assume recommended speeds and bandwidth).
• For WAN operations, you should see operational disruptions of approximately 2 minutes (WAN operations assume recommended speeds and bandwidth).

Note: The disruption spans described here are general and approximate.


Disaster Recovery

About DR

Disaster Recovery planning typically involves policies, processes and planning at the enterprise level, which is well outside the scope of this article. DR, at its most basic, is all about data protection. The most common strategies for data protection include the following:

• Backups made to tape and sent off-site at regular intervals, typically daily.
• Backups made to disk on-site, automatically copied to an off-site disc, or made directly to an off-site disk.
• Replication of data to an off-site location, making use of storage area network (SAN) technology. This strategy eliminates the need to restore the data; only the systems need to be restored or synchronised.
• High availability systems which replicate both data and system off-site. This strategy enables continuous access to systems and data.

The ArchestrA System Platform virtualised environment implements the fourth strategy – building DR on an HA implementation.

Disaster Recovery scenarios

The basic DR architecture implementation described here builds on the HA architecture by moving storage to each Hyper-V server and moving the offline system to an off-site location. The DR scenarios duplicate those described in “High Availability scenarios” above, with the variation that all failovers and backups occur over a WAN, as shown in figure 5.

Figure 5: Disaster Recovery architecture


High Availability with Disaster Recovery

About HADR

The goal of a High Availability and Disaster Recovery (HADR) solution is to provide a means to shift data processing and retrieval to a standby system in the event of a primary system failure.

Typically, HA and DR are considered as individual architectures. HA and DR combined treat these concepts as a continuum. If your system is geographically distributed, for example, HA combined with DR can make it both highly available and able to recover from a disaster quickly.

HADR scenarios

The basic HADR architecture implementation described in this guide builds on both the HA and DR architectures, adding an offline system plus storage at “Site A”. This creates a complete basic HA implementation at “Site A” plus a DR implementation at “Site B” when combined with distributed storage. The scenarios and basic performance metrics described in “High Availability scenarios” above also apply to HADR.

Figure 6: Combined DR and HA architecture


The economics of the cloud

Cloud computing is neither an invention nor a discovery. It is the natural next step in an information delivery evolution that started with mainframes and which is now poised to provide an unprecedented level of decision support to all levels of the enterprise because, whatever their size, they can at last all afford it. Information and the access to computing resources have never before had such a breakthrough.

Rolf Harms, Director of Corporate Strategy, and Michael Yamartino, Manager, Corporate Strategy, both from Microsoft Corporation, explain the economics of the cloud.


Computing is undergoing a seismic shift from client/server to the cloud, a shift similar in importance and impact to the transition from mainframe to client/server. Speculation abounds on how this new era will evolve in the coming years, and IT leaders have a critical need for a clear vision of where the industry is heading. We believe the best way to form this vision is to understand the underlying economics driving the long-term trend. In this paper, we will assess the economics of the cloud by using in-depth modelling. We then use this framework to better understand the long-term IT landscape.

1. Introduction

When cars emerged in the early 20th century, they were initially called “horseless carriages”. Understandably, people were sceptical at first, and they viewed the invention through the lens of the paradigm that had been dominant for centuries: the horse and carriage. The first cars also looked very similar to the horse and carriage (just without the horse), as engineers initially failed to understand the new possibilities of the new paradigm, such as building for higher speeds or greater safety. Incredibly, engineers kept designing the whip holder into the early models before realising that it wasn’t necessary anymore.

Figure 1: Horseless carriage syndrome

Initially there was a broad failure to fully comprehend the new paradigm. Banks claimed that “the horse is here to stay but the automobile is only a novelty, a fad”. Even the early pioneers of the car didn’t fully grasp the potential impact their work could have on the world. When Daimler, arguably the inventor of the automobile, attempted to estimate the long-term auto market opportunity, he concluded there could never be more than 1 million cars, because of their high cost and the shortage of capable chauffeurs¹. By the 1920s the number of cars had already reached 8 million, and today there are over 600 million cars – proving Daimler wrong hundreds of times over. What the early pioneers failed to realise was that profound reductions in both the cost and complexity of operating cars, and a dramatic increase in their importance in daily life, would overwhelm prior constraints and bring cars to the masses.

Today, IT is going through a similar change: the shift from client/server to the cloud. The cloud promises not just cheaper IT, but also faster, easier, more flexible and more effective IT. Just as in the early days of the car industry, it’s currently difficult to see where this new paradigm will take us. The goal of this whitepaper is to help build a framework that allows IT leaders to plan for the cloud transition². We take a long-term view in our analysis, as this is a prerequisite when evaluating decisions and investments that could last for decades. As a result, we focus on the economics of the cloud rather than on specific technologies or other driving factors like organisational change, as economics often provide a clearer understanding of transformations of this nature.

In Section 2, we outline the underlying economics of the cloud, focusing on what makes it truly different from client/server. In Section 3, we will assess the implications of these economics for the future of IT. We will discuss the positive impact the cloud will have, but will also discuss the obstacles that still exist today. Finally, in Section 4, we will discuss what’s important to consider as IT leaders embark on the journey to the cloud.

1 Source: Horseless Carriage Thinking, William Horton Consulting.
2 Cloud in this context refers to cloud computing architecture, encompassing both public and private clouds.

2. Economics of the cloud

Figure 2: Cloud opportunity. Source: Microsoft

Economics are a powerful force in shaping industry transformations. Today’s discussions on the cloud focus a great deal on technical complexities and adoption hurdles. While we acknowledge that such concerns exist and are important, historically, underlying economics have a much stronger impact on the direction and speed of disruptions, as technological challenges are resolved or overcome through the rapid innovation we’ve grown accustomed to (Figure 2). During the mainframe era, client/server was initially viewed as a “toy” technology, not viable as a mainframe replacement. Yet, over time, the client/server technology found its way into the enterprise (Figure 3). Similarly, when virtualisation technology was first proposed, application compatibility concerns and potential vendor lock-in were cited as barriers to adoption. Yet underlying economics of 20 to 30


percent savings³ compelled CIOs to overcome these concerns, and adoption quickly accelerated.

The emergence of cloud services is again fundamentally shifting the economics of IT. Cloud technology standardises and pools IT resources and automates many of the maintenance tasks done manually today. Cloud architectures facilitate elastic consumption, self-service and pay-as-you-go pricing.

Cloud also allows core IT infrastructure to be brought into large data centres that take advantage of significant economies of scale in three areas:

• Supply-side savings. Large-scale data centres (DCs) lower costs per server.
• Demand-side aggregation. Aggregating demand for computing smoothes overall variability, allowing server utilisation rates to increase.
• Multi-tenancy efficiency. When changing to a multi-tenant application model, increasing the number of tenants (i.e. customers or users) lowers the application management and server cost per tenant.

2.1 Supply-side economies of scale

Cloud computing combines the best economic properties of mainframe and client/server computing. The mainframe era was characterised by significant economies of scale due to the high up-front costs of mainframes and the need to hire sophisticated personnel to manage the systems. As required computing power – measured in MIPS (million instructions per second) – increased, cost declined rapidly at first (Figure 4), but only large central IT organisations had the resources and the aggregate demand to justify the investment. Due to the high cost, resource utilisation was prioritised over end-user agility. Users’ requests were put in a queue and processed only when the needed resources were available.

Figure 4: Economies of scale (illustrative)

With the advent of minicomputers and later client/server technology, the minimum unit of purchase was greatly reduced, and the resources became easier to operate and maintain. This modularisation significantly lowered the entry barriers to providing IT services, radically improving end-user agility. However, there was a significant utilisation trade-off, resulting in the current state of affairs: data centres sprawling with servers purchased for whatever need existed at the time, but running at just 5%-10% utilisation⁴.

Figure 3: Beginning the transition to client/server technology. Source: “How convention shapes our market” longitudinal survey, Shana Greenstein, 1997

Cloud computing is not a return to the mainframe era, as is sometimes suggested, but in fact offers users economies of scale and efficiency that exceed those of a mainframe, coupled with modularity and agility beyond what client/server technology offered, thus eliminating the trade-off. The economies of scale emanate from the following areas (a short worked example of the power-cost advantage follows this list):

• Cost of power. Electricity cost is rapidly rising to become the largest element of total cost of ownership (TCO)⁵, currently representing 15%-20%. Power Usage Effectiveness (PUE)⁶ tends to be significantly lower in large facilities than in smaller ones. While the operators of small data centres must pay the prevailing local rate for electricity, large providers can pay less than one-fourth of the national average rate by locating their data centres in locations with inexpensive electricity supply and through bulk purchase agreements⁷. In addition, research has shown that operators of multiple data centres are able to take advantage of geographical variability in electricity rates, which can further reduce energy cost.

• Infrastructure labour costs. While cloud computing significantly lowers labour costs at any scale by automating many repetitive management tasks, larger facilities are able to lower them further than smaller ones. While a single system administrator can service approximately 140 servers in a traditional enterprise⁸, in a cloud data centre the same administrator can service thousands of servers. This allows IT employees to focus on higher value-add activities like building new capabilities and working through the long queue of user requests every IT department contends with.

• Security and reliability. While often cited as a potential hurdle to public cloud adoption, the increased need for security and reliability leads to economies of scale due to the largely fixed level of investment required to achieve operational security and reliability. Large commercial cloud providers are often better able to bring deep expertise to bear on this problem than a typical corporate IT department, thus actually making cloud systems more secure and reliable.

• Buying power. Operators of large data centres can get discounts on hardware purchases of up to 30 percent over smaller buyers. This is enabled by standardising on a limited number of hardware and software architectures. Recall that for the majority of the mainframe era, more than 10 different architectures coexisted. Even client/server included nearly a dozen UNIX variants, the Windows Server OS, x86 and a handful of RISC architectures. Large-scale buying power was difficult in this heterogeneous environment. With cloud, infrastructure homogeneity enables scale economies.
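To make the power argument concrete, the sketch below combines the PUE definition from footnote 6 with the electricity rates quoted in footnote 7 (an average of 10.15 cents per kWh versus as little as 2.2 cents in favourable locations) and compares the annual energy bill per server for a small and a large facility. The 300 W server draw and the PUE values of 2.0 and 1.2 are illustrative assumptions, not figures from the paper.

```python
def annual_energy_cost(it_watts, pue, cents_per_kwh, hours=24 * 365):
    """Annual electricity cost (USD) for a server drawing `it_watts` of IT power,
    in a facility with the given PUE (total facility power / critical IT power)."""
    total_kwh = it_watts / 1000 * pue * hours
    return total_kwh * cents_per_kwh / 100

# Assumed: a 300 W server; a small DC with PUE 2.0 paying the average US
# commercial rate, versus a large DC with PUE 1.2 paying 2.2 c/kWh.
small_dc = annual_energy_cost(300, pue=2.0, cents_per_kwh=10.15)
large_dc = annual_energy_cost(300, pue=1.2, cents_per_kwh=2.2)

print(f"Small data centre: ${small_dc:.0f} per server per year")   # ~ $533
print(f"Large data centre: ${large_dc:.0f} per server per year")   # ~ $69
```

Under these assumptions the combined effect of a better PUE and a cheaper tariff is roughly an eightfold difference in the energy bill per server.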

Going forward, there will likely be many additional economies of scale that we cannot yet foresee. The industry is at the early stages of building data centres at a scale we’ve never seen before (Figure 5). The massive aggregate scale of these mega DCs will bring considerable and ongoing R&D to bear on running them more efficiently, and make them more efficient for their customers. Providers of large-scale DCs, for which running them is a primary business goal, are likely to benefit more from this than smaller DCs which are run inside enterprises.

Figure 5: Relatively recent large data centre projects. Source: Press releases

3 Source: “Dataquest Insight: Many Midsize Businesses Looking Toward 100% Server Virtualisation”. Gartner, May 8, 2009.
4 Source: The Economics of Virtualisation: Moving Toward an Application-Based Cost Model, IDC, November 2009.
5 Not including app labour. Studies suggest that for low-efficiency data centres, three-year spending on power and cooling, including infrastructure, already outstrips three-year server hardware spending.
6 Power Usage Effectiveness equals total power delivered into a data centre divided by critical power – the power needed to actually run the servers. Thus, it measures the efficiency of the data centre in turning electricity into computation. The best theoretical value is 1.0, with higher numbers being worse.
7 Source: U.S. Energy Information Administration (July 2010) and Microsoft. While the average U.S. commercial rate is 10.15 cents per kilowatt hour, some locations offer power for as little as 2.2 cents per kilowatt hour.
8 Source: James Hamilton, Microsoft Research, 2006.
9 In this paper, we talk generally about “resource” utilisation. We acknowledge there are important differences among resources. For example, because storage has fewer usage spikes compared with CPU and I/O resources, the impact of some of what we discuss here will affect storage to a smaller degree.
10 Multiple applications can run on a single server, of course, but this is not common practice. It is very challenging to move a running application from one server to another without also moving the operating system, so running multiple applications on one operating system instance can create bottlenecks that are difficult to remedy while maintaining service, thereby limiting agility. Virtualisation allows the application plus operating system to be moved at will.

2.2 Demand-side economies of scale

The overall cost of IT is determined not just by the cost of capacity, but also by the degree to which the capacity is efficiently utilised. We need to assess the impact that demand aggregation will have on the costs of actually utilised resources (CPU, network and storage)⁹.

In the non-virtualised data centre, each application/workload typically runs on its own physical server¹⁰. This means the number of servers scales linearly with the number of server workloads. In this model, utilisation of servers has traditionally been extremely low, around 5 to 10 percent¹¹.

Virtualisation enables multiple applications to run on a single physical server within their optimised operating system instance, so the primary benefit of virtualisation is that fewer servers are needed to carry the same number of workloads. But how will this affect economies of scale? If all workloads had constant utilisation, this would entail a simple unit compression without impacting economies of scale. In reality, however, workloads are highly variable over time, often demanding large amounts of resources one minute and virtually none the next. This opens up opportunities for utilisation improvement via demand-side aggregation and diversification (a short simulation sketch of this pooling effect follows below).

We analysed the different sources of

utilisation variability and then looked at the

ability of the cloud to diversify it away and

thus reduce costs.

We distinguish five sources of variability and

assess how they might be reduced:

1. Randomness. End-user access patterns

contain a certain degree of randomness.

For example, people check their email

at different times (Figure 6). To meet

service level agreements, capacity

buffers have to be built in to account for

a certain probability that many people

will undertake particular tasks at the

same time. If servers are pooled, this

variability can be reduced.

2. Time-of-day patterns. There are daily

recurring cycles in people's behaviour:

consumer services tend to peak in the

evening, while workplace services tend

to peak during the workday. Capacity

has to be built to account for these

daily peaks but will go unused during

other parts of the day causing

low utilisation. This variability

can be countered by running the

same workload for multiple time

zones on the same servers (Figure

7) or by running workloads with

complementary time-of-day patterns

(for example, consumer services and

enterprise services) on the same

servers.

3. Industry-specific variability. Some

variability is driven by industry

dynamics. Retail firms see a spike

during the holiday shopping season

while U.S. tax firms will see a peak

before April 15 (Fig. 8). There are

multiple kinds of industry variability —

some recurring and predictable (such

as the tax season or the Olympic

Games), and others unpredictable

(such as major news stories).

The common result is that capacity

has to be built for the expected peak

(plus a margin of error). Most of this

capacity will sit idle the rest of the

time. Strong diversification benefits

exist for industry variability.

4. Multi-resource variability.

Compute, storage, and input/

output (I/O) resources are generally

bought in bundles: a server contains

a certain amount of computing

power (CPU), storage, and I/O (e.g.,

networking or disk access). Some

workloads like search use a lot of

CPU but relatively little storage or

I/O, while other workloads like email

tend to use a lot of storage but little

CPU (Figure 9).

While it's possible to adjust capacity

by buying servers optimised for CPU or

storage, this addresses the issue only to

a limited degree because it will reduce

flexibility and may not be economical

from a capacity perspective. This

variability will lead to resources going

unutilised unless workload diversification

is employed by running workloads with

complementary resource profiles.

5. Uncertain growth patterns. The

difficulty of predicting future need for

computing resources and the long

lead-time for bringing capacity online is

another source of low utilisation (Figure

10). For start-ups, this is sometimes

referred to as the “TechCrunch effect”.

Enterprises and small businesses

all need to secure approval for IT

investments well in advance of actually

knowing their demand for infrastructure.

Even large private companies face

this challenge, with firms planning

their purchases six to twelve months in advance (Figure 10).

11 Source: The Economics of Virtualisation: Moving Toward an Application-Based Cost Model, IDC, November 2009.

Figure 7: Time-of-day patterns for search. Source: Bing search volume over 24-hour period

Figure 8: Industry-specific variability. Source: Alexa Internet

Figure 9: Multi-resource variability (illustrative). Source: Microsoft

Figure 10: Uncertain growth patterns. Source: Microsoft


By diversifying

among workloads across multiple

customers, cloud providers can

reduce this variability, as higher-than-

anticipated demand for some workloads

is cancelled out by lower-than-

anticipated demand for others.

A key economic advantage of the cloud is

its ability to address variability in resource

utilisation brought on by these factors. By

pooling resources, variability is diversified

away, evening out utilisation patterns. The

larger the pool of resources, the smoother

the aggregate demand profile, the higher

the overall utilisation rate and the cheaper

and more efficiently the IT organisation can

meet its end-user demands.

We modelled the theoretical impact of

random variability of demand on server

utilisation rates as we increased the number

of servers12. Figure 11 indicates that a

theoretical pool of 1,000 servers could

be run at approximately 90% utilisation

without violating its SLA. This only holds

true in the hypothetical situation where

random variability is the only source of

variability and workloads can be migrated

between physical servers instantly without

interruption. Note that higher levels

of uptime (as defined in a service level

agreement or SLA) become much easier to

deliver as scale increases.
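For readers who want to experiment with this idea, the short sketch below pools progressively more bursty workloads onto a fixed number of servers and reports the average utilisation that can be sustained before the uptime target is breached. It is a minimal illustration in the spirit of footnote 12, not Microsoft's actual model: the exponential demand distribution, the 10 percent average load per workload, the 99.9 percent target and the trial count are all assumptions chosen for illustration, so the exact percentages will differ from Figure 11.

```python
import numpy as np

def achievable_utilisation(n_servers, sla=0.999, trials=2_000,
                           mean_load=0.1, seed=0):
    """Monte Carlo sketch of demand pooling: keep adding bursty workloads
    (exponential demand averaging 10% of one server) and find how many the
    pool can host before total demand exceeds capacity more often than the
    SLA allows; return the average utilisation reached at that point."""
    rng = np.random.default_rng(seed)
    max_workloads = int(1.2 * n_servers / mean_load)
    # cumulative demand after adding the 1st, 2nd, ... workload, per trial
    demand = rng.exponential(mean_load, size=(trials, max_workloads)).cumsum(axis=1)
    within_capacity = (demand <= n_servers).mean(axis=0)  # fraction of trials OK
    hosted = int((within_capacity >= sla).sum())          # workloads meeting the SLA
    return hosted * mean_load / n_servers

for n in (10, 100, 1_000):
    print(f"{n:>5} servers: ~{achievable_utilisation(n):.0%} average utilisation")
```

The qualitative result matches the argument above: the larger the pool, the closer it can run to full utilisation without breaching the service level agreement.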

Clouds will be able to reduce time-of-day

variability to the extent that they are

diversified amongst geographies and

workload types. Within an

average organisation, peak IT

usage can be twice as high as

the daily average. Even in large,

multi-geography organisations,

the majority of employees and

users will live in similar time

zones, bringing their daily cycles

close to synchrony. Also, most

organisations do not tend to have

workload patterns that offset

one another: for example, the

email, network and transaction

processing activity that takes place during

business hours is not replaced by an equally

active stream of work in the middle of the

night. Pooling organisations and workloads

of different types allows these peaks and

troughs to be offset.

Industry variability results in highly

correlated peaks and troughs throughout

each firm (that is, most of the systems

in a retail firm will be at peak capacity

around the holiday season (e.g., web

servers, transaction processing, payment

processing, databases))13. Figure 12 shows

industry variability for a number of different

industries, with peaks ranging from 1.5x to

10x average usage.

Microsoft services such as Windows Live

Hotmail and Bing take advantage of

multi-resource diversification by layering

different subservices to optimise workloads

with different resource profiles (such as

CPU bound or storage bound). It is difficult

to quantify these benefits, so we have not

included multi-resource diversification in our

model.

Some uncertain growth pattern variability

can be reduced by hardware standardisation

and just-in-time procurement, although likely

not completely. Based on our modelling, the

impact of growth uncertainty for enterprises

with up to 1,000 servers is 30 to 40 percent

over-provisioning of servers relative to a

public cloud service. For smaller companies

(for example, Internet start-ups), the impact

is far greater.

So far we have made the implicit assumption

that the degree of variability will stay the

same as we move to the cloud. In reality, it

is likely that the variability will significantly

increase, which will further increase

economies of scale. There are two reasons

why this may happen:

• Higher expectation of performance.

Today, users have become accustomed

to resource constraints and have learned

to live with them. For example, users

will schedule complex calculations to

run overnight, avoid multiple model

iterations, or decide to forgo time-

consuming and costly supply chain

optimisations. The business model of

cloud allows a user to pay the same for

1 machine running for 1,000 hours as he

would for 1,000 machines running for 1

hour. Today, the user would likely wait

1,000 hours or abandon the project. In

the cloud, there is virtually no additional

cost to choosing 1,000 machines and

accelerating such processes. This will have

a dramatic impact on variability. Pixar

Animation Studios, for example, runs its

computer-animation rendering process on

Windows Azure because every frame of

their movies takes eight hours to render

today on a single processor, meaning it

would take 272 years to render an entire

movie. As they said, “We are not that

patient.” With Azure, they can get the job

done as fast as they need. The result is

huge spikes in Pixar's usage of Azure as

they render on-demand (the arithmetic sketch after this list walks through these numbers).

• Batch processes will become real time.

Many processes — for example, accurate

stock availability for online retailers — that

were previously batch driven, will move to real-time.

12 To calculate economies of scale arising from diversifying random variability, we created a Monte Carlo model to simulate data centres of various sizes serving many random workloads.

For each simulated DC, workloads (which are made to resemble hypothetical web usage patterns) were successively added until the expected availability of server resources dropped below

a given uptime of 99.9 percent or 99.99 percent. The maximum number of workloads determines the maximum utilisation rate at which the DC's servers can operate without compromising

performance.

13 Ideally, we would use the server utilisation history of a large number of customers to gain more insight into such patterns. However, this data is difficult to get and often of poor quality. We

therefore used web traffic as a proxy for the industry variability.

Figure 11: Diversifying random variability. Source: Microsoft

Figure 12: Industry variability. Source: Microsoft, Alexa Internet, Inc.


Thus, multi-stage processes

that were once sequential will now occur

simultaneously, such as a manufacturing

firm that can tally its inventory, check its

order backlog and order new supplies

at once. This will amplify utilisation

variability.
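As a quick back-of-the-envelope check on the figures quoted in the first bullet (the per-hour price below is an arbitrary placeholder, not a real Azure rate), pay-per-use makes the serial and the massively parallel rendering plan cost the same while collapsing the elapsed time:

```python
HOURS_PER_FRAME = 8                               # figure quoted for Pixar above
hours_total = 272 * 365 * 24                      # "272 years" on a single processor
frames = hours_total // HOURS_PER_FRAME           # roughly 298,000 frames of work
price = 0.5                                       # assumed $/machine-hour (placeholder)

serial_cost = 1 * hours_total * price             # 1 machine for ~2.4 million hours
parallel_cost = 1_000 * (hours_total / 1_000) * price   # 1,000 machines in parallel
elapsed_days = hours_total / 1_000 / 24

print(f"{frames:,} frames; serial cost ${serial_cost:,.0f} vs parallel cost "
      f"${parallel_cost:,.0f}; elapsed time on 1,000 machines ~{elapsed_days:.0f} days")
```

Whether the job runs for 272 years on one machine or roughly 100 days on 1,000 machines, the bill is the same; only elasticity makes the second option practical.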

We note that even the largest public

clouds will not be able to diversify away

all variability; market level variability will

likely remain. To further smooth demand,

sophisticated pricing can be employed. For

example, similar to the electricity market

(Figure 13), customers can be given the

incentive to shift their demand from high

utilisation periods to low utilisation periods.

In addition, a lower price spurs additional

usage from customers due to price elasticity

of demand. Demand management will

further increase the economic benefits of

the cloud.
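To see how such pricing could smooth the aggregate profile, the sketch below takes a stylised 24-hour demand curve and shifts a modest share of the above-average ("peak") demand into cheaper off-peak hours. The demand curve, the 15 percent flexible share and the capacity figures are illustrative assumptions rather than measured data:

```python
# Stylised hourly demand as a percentage of peak capacity (assumed values).
hourly_demand = [40, 35, 30, 30, 35, 45, 60, 80, 95, 100, 100, 95,
                 90, 90, 85, 80, 75, 70, 65, 60, 55, 50, 45, 42]

def utilisation(profile, capacity=100):
    """Average utilisation of a fixed capacity over the day."""
    return sum(profile) / (len(profile) * capacity)

def shift_flexible_load(profile, flexible_share=0.15):
    """Move a share of demand in above-average hours into below-average hours,
    mimicking customers responding to cheaper off-peak prices."""
    avg = sum(profile) / len(profile)
    moved = sum((h - avg) * flexible_share for h in profile if h > avg)
    cheap_hours = sum(1 for h in profile if h <= avg)
    return [h - (h - avg) * flexible_share if h > avg else h + moved / cheap_hours
            for h in profile]

before, after = hourly_demand, shift_flexible_load(hourly_demand)
print(f"peak demand {max(before)} -> {max(after):.0f}; utilisation of capacity "
      f"sized to the peak {utilisation(before):.0%} -> "
      f"{utilisation(after, capacity=max(after)):.0%}")
```

Even a small shift flattens the peak, so the same demand can be served with less installed capacity at a higher average utilisation.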

2.3 Multi-tenancy economies of scale

The previously described supply-side and

demand-side economies of scale can be

achieved independent of the application

architecture, whether it be traditional

scale-up or scale-out, single tenant or

multitenant. There is another important

source of economies of scale that can be

harnessed only if the application is written

as a multi-tenant application. That is, rather

than running an application instance

for each customer – as is done for

on-premises applications and most

hosted applications such as dedicated

instances of Microsoft Office 365 – in

a multi-tenant application, multiple

customers use a single instance of the

application simultaneously, as in the

case of shared Office 365. This has two

important economic benefits:

• Fixed application labour

amortised over a large number of

customers. In a single-tenant instance,

each customer has to pay for its own

application management (that is, the

labour associated with update and

upgrade management and incident

resolution). We've examined data from

customers, as well as Office 365-D and

Office 365-S to assess the impact. In

dedicated instances, the same activities,

such as applying software patches, are

performed multiple times – once for

each instance. In a multi-tenant instance

such as Office 365-S, that cost is shared

across a large set of customers, driving

application labour costs per customer

towards zero (see the sketch after this list). This can result in a

meaningful reduction in overall cost,

especially for complex applications.

• Fixed component of server

utilisation amortised over large

number of customers. For each

application instance, there is a

certain amount of server overhead.

Figure 14 shows an example from

Microsoft's IT department in which

intraday variability appears muted

(only a 16 percent increase between

peak and trough) compared to actual

variability in user access. This is caused

by application and runtime overhead,

which is constant throughout the day. By

moving to a multitenant model with a

single instance, this resource overhead

can be amortised across all customers.

We have examined Office 365-D, Office

365-S and Microsoft Live@edu data to

estimate this overhead, but so far it has

proven technically challenging to isolate

this effect from other variability in the

data (for example, user counts and server

utilisation) and architectural differences in

the applications. Therefore, we currently

assume no benefit from this effect in our

model.
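The amortisation effect described in the first bullet is easy to see numerically. In the sketch below, a hypothetical 40-hour patch-and-upgrade cycle (an assumed figure, used purely for illustration) is either repeated once per dedicated instance or performed once on a shared multi-tenant instance:

```python
def app_labour_per_customer(customers, hours_per_cycle=40, multi_tenant=False):
    """Hypothetical patch/upgrade labour per customer for one maintenance cycle.
    Single-tenant: every customer's dedicated instance is patched separately.
    Multi-tenant: one shared instance is patched for all customers at once."""
    total_hours = hours_per_cycle * (1 if multi_tenant else customers)
    return total_hours / customers

for n in (10, 1_000, 100_000):
    single = app_labour_per_customer(n)
    shared = app_labour_per_customer(n, multi_tenant=True)
    print(f"{n:>7} customers: {single:>5.1f} h each (single-tenant) "
          f"vs {shared:.4f} h each (multi-tenant)")
```

Per-customer labour stays flat in the single-tenant case but falls towards zero as the shared instance is amortised over more customers, which is exactly the effect described above.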

Applications can be entirely multitenant by

being completely written to take advantage

of these benefits, or can achieve partial

multi-tenancy by leveraging shared services

provided by the cloud platform. The greater

the use of such shared services, the greater

the application will benefit from these multi-

tenancy economies of scale.

2.4 Overall impact

The combination of supply-side economies

of scale in server capacity (amortising

costs across more servers), demand-side

aggregation of workloads (reducing

variability), and the multi-tenant application

model (amortising costs across multiple

customers) leads to powerful economies of

scale. To estimate the magnitude, we built a

cost scaling model which estimates the long

term behaviour of costs.

Figure 15 shows the output for a workload

that utilises 10 percent of a traditional server.

The model indicates that a 100,000-server

data centre has an 80% lower total cost of

ownership (TCO) compared to a 1,000-server

data centre.
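To give a feel for how these effects compound, the toy model below lets the per-server infrastructure cost fall gently with scale (supply side), lets pooled utilisation and servers-per-administrator rise with scale (demand side and amortised labour), and then prices a workload that needs 10 percent of a server. Every parameter is an assumption chosen for illustration; this is a sketch of the kind of cost-scaling model described above, not the model behind Figure 15.

```python
import math

def tco_per_workload(n_servers,
                     base_cost=3_000,      # assumed annual infrastructure cost per server at 1,000-server scale
                     scale_exp=0.12,       # assumed supply-side scale effect (power law)
                     util_small=0.4, util_large=0.9,      # assumed pooled utilisation range
                     admin_small=140, admin_large=2_000,  # servers per administrator
                     admin_salary=100_000, # assumed annual cost per administrator
                     workload_share=0.10): # each workload needs 10% of one server
    """Toy cost-scaling sketch: infrastructure cost per server falls with scale,
    while utilisation and servers-per-administrator rise; returns the annual
    cost of hosting one 10%-of-a-server workload."""
    frac = min(max((math.log10(n_servers) - 3) / 2, 0), 1)  # 0 at 1k servers, 1 at 100k
    infra = base_cost * (n_servers / 1_000) ** -scale_exp
    pooled_utilisation = util_small + (util_large - util_small) * frac
    labour = admin_salary / (admin_small + (admin_large - admin_small) * frac)
    return (infra + labour) * workload_share / pooled_utilisation

small, large = tco_per_workload(1_000), tco_per_workload(100_000)
print(f"cost per workload: ${small:,.0f} at 1,000 servers, ${large:,.0f} at "
      f"100,000 servers ({1 - large / small:.0%} lower)")
```

With these assumed parameters the gap comes out at roughly the 80 percent reduction reported for Figure 15; the point of the sketch is the shape of the curve, not the exact numbers.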

Figure 13: Variable electricity pricing. Source: Ameren Illinois Utilities

Figure 14: Utilisation overhead. Source: Microsoft

Figure 15: Economies of scale in the cloud. Source: Microsoft

Figure 16: IT spending breakdown. Source: Microsoft


This raises the question: what impact will the

Cloud Economics we described have on the

IT budget? From customer data, we know

the approximate breakdown between the

infrastructure costs, costs of supporting and

maintaining existing applications and new

application development costs (Figure 16).

The cloud impacts all three of these areas.

The supply-side and demand-side savings

impact mostly the infrastructure portion,

which comprises over half of spending.

Existing app maintenance costs include

update and patching labour, end-user

support, and license fees paid to vendors.

They account for roughly a third of spending

and are addressed by the multi-tenancy

efficiency factor.

New application development accounts for

just over a tenth of spending14, even though

it is seen as the way for IT to innovate.

Therefore IT leaders generally want to

increase spending here. The economic

benefits of cloud computing described here

will enable this by freeing up room in the

budget to do so. We will touch more on this

aspect in the next paragraph as well as in

Section 3.

2.5 Harnessing cloud economics

Capturing the benefits described above

is not a straightforward task with today's

technology. Just as engineers had to

fundamentally rethink design in the early

days of the car, so too will developers have

to rethink design of applications. Multi-

tenancy and demand-side aggregation

is often difficult for developers or even

sophisticated IT departments to implement

on their own. If not done correctly, it could

end up either significantly raising the costs

of developing applications (thus at least

partially nullifying the increased budget

room for new app development); or

capturing only a small subset of the savings

previously described. The best approach in

harnessing the cloud economics is different

for packaged applications vs. new/custom

applications.

Packaged applications: While virtualising

packaged applications and moving them

to cloud virtual machines (e.g., virtualised

Exchange) can generate some savings, this

solution is far from ideal and fails to capture

the full benefits outlined in this section. The

cause is twofold. First, applications designed

to be run on a single server will not easily

scale up and down without significant

additional programming to add load-

balancing, automatic failover, redundancy

and active resource management. This

limits the extent to which they are able to

aggregate demand and increase server

utilisation. Second, traditional packaged

applications are not written for multi-tenancy

and simply hosting them in the cloud does

not change this. For packaged applications,

the best way to harness the benefits of cloud

is to use SaaS offerings like Office 365, which

have been structured for scale-out and

multi-tenancy to capture the full benefits.

New/custom applications: Infrastructure-

as-a-Service (IaaS) can help capture some

of the economic benefits for existing

applications. Doing so is, however, a bit of a

“horseless carriage” in that the underlying

platform and tools were not designed

specifically for the cloud. The full advantage

of cloud computing can only be properly

unlocked through a significant investment

in intelligent resource management. The

resource manager must understand both the

status of the resources (networking, storage,

and compute) as well as the activity of the

applications being run. Therefore, when

writing new apps, Platform as a Service most

effectively captures the economic benefits.

PaaS offers shared services, advanced

management and automation features

that allow developers to focus directly on

application logic rather than on engineering

their application to scale.

To illustrate the impact, a start-up named

Animoto used Infrastructure-as-a-Service

(IaaS) to enable scaling – adding over 3,500

servers to their capacity in just 3 days as

they served over three-quarters of a million

new users. Examining their application later,

however, the Animoto team discovered that

a large percentage of the resources they

were paying for were often sitting idle –

often over 50%, even in a supposedly elastic

cloud. They restructured their application

and eventually lowered operating costs by

20%. While Animoto is a cloud success story,

it was only after an investment in intelligent

resource management that they were able to

harness the full benefits of the cloud. PaaS

would have delivered many of these benefits

“out-of-the-box” without any additional

tweaking required.

3. Implications

In this Section, we will discuss the

implications of the previously described

economics of cloud. We will discuss the

ability of private clouds to address some of

the barriers to cloud adoption and assess

the cost gap between public and private

clouds.

3.1 Possibilities and obstacles

Figure 17: Capturing cloud benefits. Source: Microsoft

14 New application development costs include only the cost of designing and writing the application and exclude the cost of hosting it on new infrastructure. Adding these costs results in the 80% / 20% split seen elsewhere.

The economics we described in section 2


will have a profound impact on IT. Many IT

leaders today are faced with the problem

that 80% of the budget is spent on “keeping

the lights on” and maintaining existing

services and infrastructure. This leaves

few resources available for innovation or

addressing the never-ending queue of

new business and user requests. Cloud

computing will free up significant resources

that can be redirected to innovation.

Demand for general purpose technologies

like IT has historically proven to be very price

elastic (Figure 18). Thus, many IT projects

that previously were cost-prohibitive will now

become viable thanks to cloud economics.

However, lower TCO is only one of the key

drivers that will lead to a renewed level of

innovation within IT:

• Elasticity is a game-changer because,

as described before, renting 1 machine

for 1,000 hours will be nearly equivalent

to renting 1,000 machines for 1 hour in

the cloud. This will enable users and

organisations to rapidly accomplish

complex tasks that were previously

prohibited by cost or time constraints.

Being able to both scale up and scale

down resource intensity nearly instantly

enables a new class of experimentation

and entrepreneurship.

• Elimination of capital expenditure

will significantly lower the risk

premium of projects, allowing for more

experimentation. This both lowers

the costs of starting an operation and

lowers the cost of failure or exit – if an

application no longer needs certain

resources, they can be decommissioned

with no further expense or write-off.

• Self-service. Provisioning servers

through a simple web portal rather than

through a complex IT procurement

and approval chain can lower friction

in the consumption model, enabling

rapid provisioning and integration of

new services. Such a system also allows

projects to be completed in less time

with less risk and lower administrative

overhead than previously.

• Reduction of complexity. Complexity

has been a long standing inhibitor

of IT innovation. From an end-user

perspective SaaS is setting a new bar for

user friendly software. From a developer

perspective Platform as a Service (PaaS)

greatly simplifies the process of writing

new applications, in the same way as

cars greatly reduced the complexity

of transportation by eliminating, for

example, the need to care for a horse.

These factors will significantly increase

the value add delivered by IT. Elasticity

enables applications like yield management,

complex event processing, logistics

optimisation, and Monte Carlo simulation,

as these workloads exhibit nearly infinite

demand for IT resources. The result will be

a massively improved experience, including

scenarios like real-time business intelligence

analytics and HPC for the masses.

However, many surveys show that significant

concerns currently exist around cloud

computing. As Figure 19 shows, security,

privacy, maturity, and compliance are

the top concerns. Many CIOs also worry

about legacy compatibility: it is often

not straightforward to move existing

applications to the cloud.

• Security and Privacy – CIOs must

be able to report to their CEO and

other executives how the company's

data is being kept private and secure.

Financially and strategically important

data and processes often are protected

by complex security requirements.

Legacy systems have typically been

highly customised to achieve these goals,

and moving to a cloud architecture can

be challenging. Furthermore, experience

with the built-in, standardised security

capabilities of cloud is still limited and

many CIOs still feel more confident with

legacy systems in this regard.

• Maturity and Performance – The cloud

requires CIOs to trust others to provide

reliable and highly available services.

Unlike on-premises outages, cloud

outages are often highly visible and may

increase concerns.

• Compliance and Data Sovereignty –

Enterprises are subject to audits and

oversight, both internal and external (e.g.

IRS, SEC). Companies in many countries

have data sovereignty requirements that

severely restrict where they can host

data services. CIOs ask: which clouds

can comply with these systems and

what needs to be done to make them

compliant?

While many of these concerns can be

addressed by the cloud today, concerns

remain and are prompting IT leaders to

explore private clouds as a way of achieving

the benefits of cloud while solving these

problems. Next, we will explore this in more

detail and also assess the potential tradeoffs.

3.3 Private clouds

Microsoft distinguishes between public

and private clouds based on whether the IT

resources are shared between many distinct

organisations (public cloud) or dedicated

to a single organisation (private cloud).

This taxonomy is illustrated in Figure 20.

Compared to traditional virtualised data

centres, both private and public clouds

benefit from automated management (to

save on repetitive labour) and homogenous

hardware (for lower cost and increased

flexibility). Due to the broadly-shared nature

of public clouds, a key difference between

private and public clouds is the scale and

scope at which they can pool demand.

• Traditional virtualised data centres

generally allow pooling of resources

within existing organisational

boundaries — that is, the corporate IT

group virtualises its workloads, while

departments may or may not do the same.

Figure 18: Price elasticity of storage. Source: Coughlin Associates

Figure 19: Public cloud concerns. Source: Gartner CIO survey

This can diversify away some of

the random, time-of-day (especially if

the company has offices globally), and

workload-specific variability, but the size

of the pool and the difficulty in moving

loads from one virtual machine to another

(exacerbated by the lack of homogeneity

in hardware configurations) limits the

ability to capture the full benefits. This is

one of the reasons why even virtualised

data centres still suffer from low utilisation.

There is no application model change so

the complexity of building applications is

not reduced.

• Private clouds move beyond

virtualisation. Resources are now pooled

across the company rather than by

organisational unit15 and workloads are

moved seamlessly between physical

servers to ensure optimal efficiency and

availability. This further reduces the impact

of random, time-of-day and workload

variability. In addition, new, cloud-

optimised application models (Platform

as a Service such as Azure) enable more

efficient application development and

lower ongoing operations costs.

• Public clouds have all the same

architectural elements as private clouds,

but bring massively higher scale to

bear on all sources of variability. Public

clouds are also the only way to diversify

away industry-specific variability, the

full geographic element of time-of-day

variability and bring multi-tenancy

benefits into effect.

Private clouds can address some of the

previously-mentioned adoption concerns. By

having dedicated hardware, they are easier

to bring within the corporate firewall, which

may ease concerns around security and

privacy. Bringing a private cloud on-premise

can make it easier to address some of the

regulatory, compliance and sovereignty

concerns that can arise with services that

cross jurisdictional boundaries. In cases

where these concerns weigh heavily in an IT

leader's decision, an investment in a private

cloud may be the best option.

Private clouds do not really differ from public

cloud regarding other concerns, such as

maturity and performance. Public and

private cloud technologies are developing

in tandem and will mature together. A

variety of performance levels will be

available in both public and private form,

so there is little reason to expect

that one will have an advantage over

another16. While private clouds can

alleviate some of the concerns, in

the next paragraph we will discuss

whether they will offer the same kind

of savings described earlier.

3.4 Cost trade-off

While it should be clear from the prior

discussion that conceptually the public

cloud has the greatest ability to capture

diversification benefits, we need to get a

better sense of the magnitude. Figure 21

shows that while the public cloud addresses

all sources of variability the private cloud can

address only a subset.

For example, industry variability cannot be

addressed by a private cloud, while growth

variability can be addressed only to a limited

degree if an organisation pools all its internal

resources in a private cloud. We modelled

all of these factors, and the output is shown

in Figure 22.

The lower curve shows the cost for a public

cloud (same as the curve shown in Figure

15). The upper curve shows the cost of a

private cloud. The public cloud curve is

lower at every scale due to the greater

impact of demand aggregation and the

multi-tenancy effect. Global scale public

clouds are likely to become extremely

large, at least 100,000 servers in size, or

possibly much larger, whereas the size of an

organisation's private cloud will depend on

its demand and budget for IT.

Figure 22 also shows that for organisations

with a very small installed base of servers

(<100), private clouds are prohibitively

expensive compared to public cloud. The

only way for these small organisations or

departments to share in the benefits of at

scale cloud computing is by moving to a

public cloud.

Figure 20: Comparing virtualisation, private cloud and public cloud. Source: Microsoft

Figure 21: Diversification benefits. Source: Microsoft

15 Aggregation across organisational units is enabled by two key technologies: live migration, which moves virtual machines while remaining operational, thereby enabling more dynamic optimisation; and self-service provisioning and billing.

16 Private clouds do allow for a greater degree of customisation than public clouds, which could enhance performance for a certain computational task. Customisation requires R&D effort and expense, however, so it is difficult to make a direct price/performance comparison.

Figure 22: Cost benefit of public cloud. Source: Microsoft


For large agencies with an installed base

of approximately 1,000 servers, private

clouds are feasible but come with a

significant cost premium of about 10

times the cost of a public cloud for the

same unit of service, due to the combined

effect of scale, demand diversification and

multi-tenancy.

In addition to the increase in TCO, private

clouds also require upfront investment

to deploy – an investment that must

accommodate peak demand requirements.

This involves separate budgeting and

commitment, increasing risk. Public clouds,

on the other hand, can generally be

provisioned entirely on a pay-as-you-go

basis.

3.5 Finding balance today: weighing the benefits of private cloud against the costs

We've mapped a view of how public and

private clouds measure up in Figure 23.

The vertical axis measures the public cloud

cost advantage. From the prior analysis we

know public cloud has inherent economic

advantages that will partially depend on

customer size, so the bubbles' vertical

position is dependent on the size of the

server installed base. The horizontal axis

represents the organisation‘s preference for

private cloud. The size of the circles reflects

the total server installed base of companies

of each type. The bottom-right quadrant

thus represents the most attractive areas for

private clouds (relatively low cost premium,

high preference).

We acknowledge that Figure 23 provides

a simplified view. IT is not monolithic

within any of these industry segments. Each

organisation's IT operation is segmented

into workload types, such as email or

ERP. Each of these has a different level of

sensitivity and scale, and CIO surveys reveal

that preference for public cloud solutions

currently varies greatly across workloads

(Figure 24).

An additional factor is that many application

portfolios have been developed over the

past 15-30 years and are tightly woven

together. This particularly holds true for

ERP and related custom applications at

larger companies who have more sizable

application portfolios. Applications that are

more "isolated", such as CRM, collaboration,

or new custom applications may be more

easily deployed in the cloud. Some of those

applications may need to be integrated back

to current on-premises applications.

Before we draw final conclusions, we need to

make sure we avoid the “horseless carriage

syndrome” and consider the likely shift

along the two axes (economics and private

preference).

3.6 The long view: cloud transition over time

As we pointed out in the introduction of this

paper, it is dangerous to make decisions

during the early stages of a disruption

without a clear vision of the end state. IT

leaders need to design their architecture

with a long term vision in mind. We therefore

need to consider how the long term forces

will impact the position of the bubbles on

Figure 23. We expect two important shifts to

take place. First, the economic benefit of

public cloud will grow over time. As more

and more work is done on public clouds, the

economies of scale we described in Section

2 will kick in and the cost premium on private

clouds will increase over time. Customers

will increasingly be able to tap into the

supply-side, demand-side and multi-tenancy

savings as discussed previously. As shown in

Figure 25, this leads to an upward shift along

the vertical axis.

At the same time, some of the barriers to

cloud adoption will begin to fall. Many

technology case studies show that, over

time, concerns over issues like compatibility,

security, reliability and privacy will be

addressed. This will likely also happen for

the cloud, which would represent a shift to

the left on Figure 25. Below we will explore

some of the factors that cause this latter

shift.

Cloud security will evolve

Public clouds are in a relatively early stage

of development, so naturally critical areas

like reliability and security will continue to

improve. Data already suggests that public

cloud e-mail is more reliable than most

on-premises implementations.

Figure 23: Cost and benefit of private clouds. Source: Microsoft

Figure 24: Cloud-ready workloads (2010). Source: Microsoft survey question: In the next 24 months, please indicate if a cloud offering would augment the on-premise offering or completely replace it.


In PaaS, the automatic patching and updating of cloud

systems greatly improves the security of all

data and applications, as the majority of

exploited vulnerabilities take advantage of

systems that are out-of-date. Many security

experts argue that there are no fundamental

reasons why public clouds would be less

secure; in fact, they are likely to become

more secure than on premises due to the

intense scrutiny providers must place on

security and the deep level of expertise they

are developing.

Clouds will become more compliant

Compliance requirements can come

from within an organisation, industry,

or government (e.g., European Data

Protection Directive) and may currently be

challenging to achieve with cloud without

a robust development platform designed

for enterprise needs. As cloud technologies

improve and as compliance requirements

adapt to accommodate cloud architectures,

the cloud will continue to become more

compliant and therefore feasible for more

organisations and workloads. This was the

case, for example, with e-signatures, which

were not accepted for many contracts and

documents in the early days of the Internet.

As authentication and encryption technology

improved and as compliance requirements

changed, e-signatures became more

acceptable. Today, most contracts (including

those for opening bank accounts and taking

out loans) can be signed with an e-signature.

The large group of customers who are

rapidly increasing reliance on public

clouds—small and medium businesses

(SMBs) and consumers of Software as

a Service (SaaS)—will be a formidable

force of change in this area. This

growing constituency will continue to ask

governments to accommodate the shift

to cloud by modernising legislation. This

regulatory evolution will make the public

cloud a more viable alternative for large

enterprises and thus move segments

along the horizontal axis toward public

cloud preference.

Decentralised IT (also known as ‘rogue

IT’) will continue to lead the charge

Many prior technology transitions were

led not by CIOs but by departments,

business decision makers, developers, and

end users – often in spite of the objections

of CIOs. For example, both PCs and servers

were initially adopted by end users and

departments before they were officially

embraced by corporate IT policies. More

recently, we saw this with the adoption of

mobile phones, where consumer adoption

is driving IT to support these devices.

We're seeing a similar pattern in the cloud:

developers and departments have started

using cloud services, often without the

knowledge of the IT group (hence the

name “rogue clouds”). Many business users

will not wait for their IT group to provide them with a private cloud; for these users, productivity and convenience often trump policy.

17 http://open.blogs.nytimes.com/2007/11/01/self-service-prorated-super-computing-fun/

Figure 25: Expected preference shift for public and private cloud. Source: Microsoft

Figure 26: Increasing adoption of software as a service (SaaS). Source: Gartner

It is not just impatience that drives “rogue

clouds”; ever-increasing budgetary

constraints can lead users and even

departments to adopt cheaper public cloud

solutions that would not be affordable from

traditional channels. For example, when

Derek Gottfrid wanted to process all 4TB of

the New York Times archives and host them

online, he went to the cloud without the

knowledge of the Times' IT department17.

Similarly, the unprecedented pricing

transparency that the public cloud offers will

put further pressure from the CEO and CFO

on CIOs to move to the public cloud.

CIOs should acknowledge that these

behaviours are commonplace early in

a disruption and either rapidly develop

and implement a private cloud with the

same capabilities or adopt policies which

incorporate some of this behaviour, where

appropriate, in IT standards.

Perceptions are rapidly changing

Strength in SaaS adoption in large

enterprises serves as proof of changing

perceptions (Figure 26) and indicates that

even large, demanding enterprises are


moving to the left on the horizontal axis (i.e.,

reduced private preference). Just a few years

ago, very few large companies were willing

to shift their e-mail, with all the confidential

data that it contains, to a cloud model. Yet

this is exactly what is happening today.

As positive-use cases continue to spur more

interest in cloud technology, this virtuous

cycle will accelerate, driving greater interest

in and acceptance of the cloud.

In summary, while there are real hurdles

to cloud adoption today, these will likely

diminish over time. While new, unforeseen

hurdles to public cloud adoption may

appear, the public cloud economic

advantage will grow stronger with time

as cloud providers unlock the benefits of

economics we discussed in Section 2. While

the desire for a private cloud is mostly driven

by security and compliance concerns around

existing workloads, the cost effectiveness

and agility of the public cloud will enable

new workloads.

Revisiting our “horseless carriage”

analogy, we see that cars became a huge

success not simply because they were

faster and better (and eventually more

affordable) than horse-drawn carriages.

The entire transportation ecosystem had to

change. Highway systems, driver training

programmes, accurate maps and signage,

targeted safety regulation and a worldwide

network of fuelling infrastructure all had

to be developed to enable this transition.

Each successive development improved the

value proposition of the car. In the end, even

people's living habits changed around the

automobile, resulting in the explosion of

the suburbs in the middle part of the 20th

century. This created “net new” demand

for cars by giving rise to the commuting

professional class. This behavioural change

represented a massive positive feedback

loop that inexorably made the automobile

an essential, irreplaceable component of

modern life.

Similarly, we believe the cloud will be enabled

and driven not just by economics and

qualitative developments in technology and

perception, but by a series of shifts from IT

professionals, regulators, telecom operators,

ISVs, systems integrators and cloud platform

providers. As the cloud is embraced more

thoroughly, its value will increase.

4. The journey to the cloud

Because we are in the early days of the cloud

paradigm shift, there is much confusion

about the direction of this ongoing

transformation. In this paper, we looked

beyond the current technology and focused

on the underlying economics of cloud to

define the destination – where all of this

disruption and innovation is leading our

industry. Based on our analysis, we see a

long-term shift to cloud driven by three

important economies of scale:

• Larger data centres can deploy

computational resources at significantly

lower cost than smaller ones;

• Demand pooling improves the utilisation

of these resources, especially in public

clouds; and

• Multi-tenancy lowers application

maintenance labour costs for large

public clouds.

Finally, the cloud offers

unparalleled levels of elasticity and agility

that will enable exciting new solutions and

applications.

For businesses of all sizes, the cloud

represents tremendous opportunity. It

represents an opportunity to break out of

the longstanding tradition of IT professionals

spending 80 percent of their time and

budget “keeping the lights on”, with few

resources left to focus on innovation. Cloud

services will enable IT groups to focus

more on innovation while leaving non-

differentiating activities to reliable and

cost-effective providers. Cloud services

will enable IT leaders to offer new solutions

that were previously seen as either cost

prohibitive or too difficult to implement. This

is especially true of cloud platforms (Platform

as a Service), which significantly reduce

the time and complexity of building new

applications that take advantage of all the

benefits of the cloud.

This future won‘t materialise overnight.

IT leaders need to develop a new 5- to

10-year vision of the future, recognising

that they and their organisations will play a

fundamentally new role in their company.

They need to plot a path that connects

where they are today to that future. An

important first step in this is to segment

their portfolio of existing applications

(Figure 27). For some applications, the

economic and agility benefits may be very

strong so they should be migrated quickly.

However, barriers do exist today and while

we outlined in section 3 that many of them

will be overcome over time, the cloud may

not be ready for some applications today.

For tightly-integrated applications with

fairly stable usage patterns, it may not make

sense to move them at all, similar to how

some mainframe applications were never

migrated to client/server.

Figure 27: Segmenting the IT portfolio. Source: Microsoft

While new custom

applications don't have the legacy problem,

designing them in a scalable, robust fashion

is not always an easy task. Cloud-optimised

platforms (Platform as a Service) can

dramatically simplify this task.

This transition is a delicate balancing act.

If the IT organisation moves too quickly in

areas where the cloud is not ready, it can

compromise business continuity, security

and compliance. If it moves too slowly, it can

put the company at a significant competitive

disadvantage versus competitors who do

take full advantage of cloud capabilities,

giving up a cost, agility, or value advantage.

Moving too slowly also increases the risk

that different groups or individuals within the

company will each adopt their own cloud

solution in a fragmented and uncontrolled

fashion (“rogue IT”), wresting control over IT

from the CIO. IT leaders who stay ahead of

the cloud trend will be able to control and

shape this transition; those who lag behind

will increasingly lose control.

To lead the transition, IT leaders need to

think about the long term architecture of

their IT. Some see a new role emerging,

that of a Cloud Services Architect, who

determines which applications and services

move to the cloud and exactly when such

a move takes place based on a business

case and a detailed understanding of the

cloud capabilities available. This should start

by taking inventory of the organisation’s

resources and policies. This includes an

application and data classification exercise

to determine which policy or performance

requirements (such as confidential or top

secret data retention requirements) apply

to which applications and data. Based on

this, IT leaders can determine what parts

of their IT operation are suitable for public

cloud and what might justify an investment

in private cloud. Beginning in this manner

takes advantage of the opportunity of cloud

while striking a balance between economics

and security, performance and risk.

To accomplish this, IT leaders need a partner

who is firmly committed to the long-term

vision of the cloud and its opportunities,

one who is not hanging on to legacy IT

architectures. At the same time, this partner

needs to be firmly rooted in the realities

of today's IT so it understands current

challenges and how to best navigate the

journey to the cloud. IT leaders need a

partner who is neither incentivised to push

for change faster than is responsible nor to

keep IT the same. Customers need a partner

who has done the hard work of figuring out

how best to marry legacy IT with the cloud,

rather than placing that burden on the

customer by ignoring the complexities of

this transformation.

At Microsoft, we are “all in” on the cloud. We

provide both commercial SaaS (Office 365)

and a cloud computing platform (Windows

Azure Platform). Office 365 features the

applications customers are familiar with like

Exchange email and SharePoint collaboration,

delivered through Microsoft's cloud. Windows

Azure is our cloud computing platform,

which enables customers to build their own

applications and IT operations in a secure,

scalable way in the cloud. Writing scalable

and robust cloud applications is no easy

feat, so we built Windows Azure to harness

Microsoft's expertise in building our cloud-

optimised applications like Office 365, Bing,

and Windows Live Hotmail. Rather than just

moving virtual machines to the cloud, we build

a Platform as a Service that reduces complexity

for developers and IT administrators.

Microsoft also brings to the cloud the richest

partner community in the world. We have

over 600,000 partners in more than 200

countries servicing millions of businesses. We

are already collaborating with thousands of

our partners on the cloud transition. Together

we are building the most secure, reliable,

scalable, available cloud in the world.

Over the last three decades, Microsoft

has developed strong relationships with

IT organisations, their partners and their

advisors. This offers us an unparalleled

understanding of the challenges faced by

today's IT organisations. Microsoft is both

committed to the cloud vision and has the

experience to help IT leaders on the journey.

Microsoft has a long history of bringing to

life powerful visions of the future. Bill Gates

founded Microsoft on the vision of putting

a PC in every home and on every desktop

in an era when only the largest corporations

could afford computers. In the journey

that followed, Microsoft and our partners

helped bring PCs to over one billion homes

and desktops. Millions of developers and

businesses make their living on PCs and we

are fortunate to play a role in that.

Now, we have a vision of bringing the power

of cloud computing to every home, every

office and every mobile device. The powerful

economics of the cloud drive all of us towards

this vision. Join Microsoft and our partners on

the journey to bring this vision to life.


Gerhard Greeff – Divisional Manager at Bytes Process Management and Control

The cloud ... here to stay or fading drizzle?

Will the cloud work for manufacturing? Will

plants accept cloud-based applications? Is

the cloud just a vendor-driven fad?

Introduction

Recent technology developments have been

focused more on collaboration, optimisation,

consolidation, doing more with less, treating

the cloud and IT services as utilities, etc.

Within manufacturers’ organisations, there

has been a profound move by business,

the true owners of applications, to take

back ownership of the business while the

IT department is tasked with providing

an infrastructure that can deliver or make

available these applications securely,

anywhere and on any device.

This trend is quite evident in employees

bringing in their own devices to the office

and demanding to have their enterprise

applications available on them. Employees

and departments are becoming increasingly

self-sufficient in meeting their own IT needs.

Products and applications have become

easier to use and technology offerings

are addressing an ever-widening range

of business requirements in areas such

as video-conferencing, digital imaging,

employee collaboration, sales force support

and systems back-up, to mention but a few.

Considering the global economic

environment, the business capabilities made

possible by technology and the above-mentioned

changes in how technology is consumed,

organisations today face an increasing

challenge: maintaining existing systems

whilst driving innovation to continuously

improve business efficiency and profitability.

Cloud infrastructure concerns

One of the more recently hyped technologies

is cloud services, but is this

technology a real contributor or simply

another vendor-driven fad? Certainly, there

may be success stories in some industries.

But are those adopters really winning, or are they just

guinea pigs? I believe only time will tell. The

manufacturing industry is holding out, but at

the 2011 MESA Conference, the theme was

around cloud technology. This indicates that

even within manufacturing, there is at least

some interest.

“What value can this technology bring

to manufacturing?” is the predominant

question manufacturing technologists ask

themselves. The cloud, by nature, is virtual

and not tied to any specific hardware or site.

For manufacturing plants specifically, this

sounds uncertain and non-secure. Certainly

at Supply Chain and Enterprise level, the

cloud could add value as these systems

do not need to be on-line 24/7/365. But to

what degree can manufacturing plants trust

technology that absolutely, positively has

to deliver all the time? What will happen if

the technology fails to deliver? In the case

of manufacturing plants, at best it will mean

that production stops, at worst someone

may die.

In addition, there are security considerations

at plant level to take into account. How

secure is the cloud infrastructure and does

it suit manufacturing needs? I believe that

adequate security can be applied at cloud

level, but this is normally IT-level security

with levels of encryption that may slow

down the data transfer rates, something true

manufacturing facilities can ill afford. A few

seconds or milliseconds at SCM or ERP level

are acceptable, but make a huge difference

at a real-time level on the plant floor.

Security may thus be a major inhibitor that

may constrain acceptance at plant level.

In South Africa, as with many other countries

on the continent, bandwidth is restricted,

unreliable and prone to failure. The cloud

in its purest form therefore needs to

be considered very carefully in terms of


reliability and bandwidth needs. If, for

instance, the Manufacturing Operations

Management (MOM) system is on the cloud,

what would happen at plant level when

the communication fails? So bandwidth

reliability (or lack thereof) is another

constraint on acceptance.

With regards to the application technology

available within manufacturing operations,

I have some concerns as well. I am not sure

how ready MOM technologies are for the

cloud, specifically with regards to host server

(firmware) or VMWare upgrades, patches

and technology upgrades, re-establishing

OPC connectivity, not to mention virus and

other security updates. For some of these,

a hardware re-boot is often required. How

would one go about re-booting a server

running somewhere in the cloud in a virtual

environment sharing hardware with other

applications owned by other companies?

This would not be easy to accomplish and

at best would take hours instead of minutes.

At plant level, where time equates to

production, getting systems up-and-running

fast is the key, and delays in achieving this

mean loss of production or at least the

missing of a critical plant event or loss of

production data.

Manufacturing solution providers are also

more frequently releasing applications

that run on personal devices such as

smart phones and tablets. Some of these

applications are specifically structured to live

in the cloud. Most of these are only required

to deliver Key Performance Indicators or

critical event information to executives and

not to actually deliver execution ability.

Some manufacturers have expressed

concerns about the mechanisms of data

delivery to these devices and the security of

the data.

In light of the above concerns, it should be

clear why the cloud is not as yet generally

accepted at plant level. Does this mean

that manufacturing facilities should discard

the cloud as “pie in the sky”? My belief

is that they can only do this at their own

peril. There is much to learn from cloud

infrastructure that can be applied to the

benefit of manufacturing plants.

On-site architecture

The concepts of “private cloud”, “on-site

cloud" or "virtualisation" come to mind.

Virtualising the plant manufacturing

applications from a redundancy and

reliability perspective can bring about great

peace of mind for plant IT people. Having

the infrastructure on-site will alleviate the

fears of outside intrusion and remove the

constraint of unreliable bandwidth.

With a good virtualisation strategy and

infrastructure, manufacturing plants can

save capital and add manufacturing value in the long term.

About Bytes Process Management and Control (PMC)

PMC delivers an integrated solution

strategy and implementation

practice designed to help

organisations leverage their installed

IT environment and enable them to

increase the ROI from their overall

solution investment.

PMC is a division of BYTES Systems

Integration and operates primarily in

the Manufacturing IT sector. PMC is

highly regarded for its achievements

in implementing integrated MES,

MIS, EMI, SCADA and other plant

automation solutions.

PMC boasts an impressive 15-year

track record of successful consulting

and solution implementations. PMC

specialises in assisting its customers

to achieve operational excellence

by combining their experience in

industrial processes and information

technology to deliver value-adding

solutions. These solutions include all

levels of industrial IT such as:

• Manufacturing application needs

analysis and strategy formulation,

• Enterprise Manufacturing

Intelligence (EMI) solutions,

• Manufacturing Execution Systems

(MES),

• Process monitoring and control

systems (SCADA/PLC),

• Equipment anomaly detection and

failure prediction/online condition

monitoring (OCM), as well as

• Integrating these multi-level

industrial information systems into

the ERP.

Figure 1: Manufacturing 2.0 architecture

With on-site infrastructure,

plants will have more freedom to control

their own applications and will be more

secure in the knowledge that they own their

hardware and software. The same applies to


manufacturing enterprises that are reluctant

to hand the control of their applications and

infrastructure to some vendor that lives in

the cloud somewhere.

When one looks at Manufacturing 2.0

architecture for larger enterprises, the cloud

and virtualisation fit right into the basic

concepts. On-site cloud infrastructure can

deliver on most of the MOM requirements

(such as MOM, LIMS and WMS) at plant

level in a virtualised environment. It can

deliver the role-based user interaction,

Operations Process Management,

Enterprise Manufacturing Intelligence as

well as the basic application services and

development administration management.

It will also provide the infrastructure that

enables the Manufacturing Service Bus and

manufacturing Master Data Management.

Global cloud infrastructure can deliver the

same concepts at Enterprise level for SCM,

CRM and ERP applications.

Small manufacturing concerns

When we look at smaller manufacturing

concerns, cloud applications and Infrastructure or Software as a Service (IaaS and SaaS) may be valid business models for accessing advanced functionality that would be too expensive to obtain by other means. It

does depend on the specific needs of the

manufacturing facility of course, as even for

small operations real-time MOM connectivity

may still be required. But as a strategy it

may be something to consider for smaller

operations.

Conclusion

Cloud infrastructure may not be well

accepted by the more conservative

manufacturing concerns at this time. The

technology has not yet proven itself to the degree required by the manufacturing industry, but that may just be a question of time. The cloud also fits in with other concepts being implemented by manufacturing enterprises, such as Manufacturing 2.0, so its wider adoption may simply be a matter of time.

For more information, contact:

Gerhard Greeff,

Divisional Manager: Bytes PMC

Cell : (+27) 82-654-0290

Mailto: [email protected]


An African cloud

Thanks to access to the latest international

fibre technology and South African know-

how, Africa could rapidly become one of

the most connected continents in the world

while making cloud computing a reality at

the local and international levels.

“There are many challenges in building

data centres around the African continent,”

says Lex van Wyk, MD Teraco Data

Environments, “…but with the rapid growth

in IT requirements, Africa needs a solution to

keep up with the demand. With the recent

growth in fibre capacity along the east

and west coasts, Teraco has identified the

opportunity to offer the ideal data centre

environment to service providers wanting to

provide IT solutions into the local and other

African markets further afield.”

Along with current benefits like free peering,

cost effective interconnects, access to all

major carriers, resilient power, remote

support and high levels of security, Teraco

is the ideal space for the cloud to be

established in Africa. “Our facilities in South

Africa already boast connectivity to all the

undersea cables offering a combined 28

landing points along the East and West

coasts of the continent, as well as all major

carriers operating in SA and several active

cloud providers. This in effect means

the Africa Cloud eXchange is already in

operation,” says van Wyk.

NAPAfrica was launched in March 2012,

providing an open, free and public peering

facility with the aim of making peering

simple and available to everyone. Van Wyk

says that the Africa Cloud eXchange concept

is no different. “We want to provide a highly

secure data centre environment with easy

access to global connectivity providers,”

says van Wyk.

Richard Vester, Director of Cloud Services

at EOH says, “Teraco provides a world-class

data centre facility which meets our unique

SLA requirements and more importantly

allows for our customers to connect from

any network onto their private clouds. The

ability to deliver services on demand across

Teraco’s data centre ensures we meet the

operational requirements of our customers.”

Teraco is the first premier grade data centre

environment in Africa to provide access to a

vendor-neutral colocation space for sharing

and selling cloud services. “The Africa Cloud

eXchange allows South African and African

cloud providers to host their platforms

and offer services from a vendor-neutral,

well connected and highly secure data

centre environment thereby opening up

South Africa to the rest of the continent,”

concludes van Wyk.

For more information,

visit www.teraco.co.za


The role of virtual reality in the process industry

Virtual reality has been with us for decades with probably its most impressive manifestation

in flight simulators. If you’ve ever been in one, the experience is so real that after a while,

it’s hard to believe that you’re in an artificial environment. In fact, commercial pilots must

train regularly on simulators to retain their licences.

This technology is now being applied to the industrial environment to familiarise personnel

with aspects of operations, maintenance and safety that would otherwise be costly,

impractical or even dangerous.

Maurizio Rovaglio, Vice President, Solutions Services, Invensys and Tobias Scheele, Vice

President/ General Manager Advanced Applications, Invensys, explain the applications and

benefits of virtual reality in industry.


Introduction

Until recently, the use of virtual reality (VR)

had been limited by systems constraints.

Real-time rendering of equipment views

places extreme demands on processor

time and the invariable need for expensive

hardware. As a result, VR solutions were

largely ineffective, being unrealistically slow

or oversimplified.

However, as VR technology continues

to develop, ongoing advances in

hardware processing power and software

development will allow VR to be used as the

interface with computer-based multimedia

activities that include training, process

design, maintenance and safety.

This paper discusses the range of

multimedia VR aids that can be used

economically and effectively to support

computer-based multimedia activities.

Overview

For the purposes of this article, VR is defined

as: “A three-dimensional (3D) environment

generated by a computer, with a run-time

that closely resembles reality to the person

interacting with the environment.”

This environment is further defined as:

• Immersive — where extra computer

peripherals (such as goggles and

gloves) are used to produce the effect

of being inside the computer-generated

environment;

• Non-immersive — the environment is

displayed in a conventional manner on

a display screen and interacts through

standard computer inputs (such as a

mouse or a joystick).

The key feature that VR brings to computer-

aided process engineering is the real-time

rendering capability. This has been used

to great effect in other areas (such as the

gaming, aircraft and medical industries) and

is now poised for use within the process

industry.

The key to an effective virtual environment

system is the close integration of the

enabling hardware with software support

tools. This process, known as systems

integration, demands that operation and

implementation — hardware and software —

be dealt with together.

However, before considering the type of

technology used to implement a VR system

for process purposes, it is useful to consider

the various forms of the virtual environment.

It is often assumed that a VR-based system can

only be provided by a head-mounted

display. However, a head-mounted display

may be the wrong device to use for some

applications as it only creates a single-user

experience. Therefore, a broader definition

of VR must be assumed, one that retains

the key attributes of a VR system, for

example, the greater sense of presence and

interaction the user receives when immersed

in a virtual environment.

The technology used to deliver the virtual

environment is very important, whereas

the technology used to create the virtual

environment is critical.

Unfortunately, there is a tendency to think

in terms of head-mounted displays when

considering VR systems, and this narrow

perspective leads to confusion when talking

about other types of VR. The solution is

to use “VR” as an all-embracing term to

cover all forms of VR systems, including

stereoscopic auditorium, 3D localisation

in conventional projection or a mix of

both. Some individuals use the term virtual

environments instead of VR, adding further

confusion.

However, it is better to think of a virtual

environment as a computer representation

of a synthetic world. This means a virtual

environment can be defined irrespective

of the delivery technology. But it is not

simply sufficient to produce a lone virtual

environment. Its component parts require

controlling in such a way that the user

believes they are actually immersed in a

real environment. This requires a process/

machinery simulation tool that interacts with

the virtual environment in a tangible action/

reaction mode.

Different forms of delivering VR systems are

defined by their peripheral technologies.

For example, the term “desktop VR” does

not relate to the virtual environment, but the

delivery technology used. Today, a desktop

VR system is generally based on a PC

platform, which employs the latest graphic

systems to provide optimum performance at

a reasonable cost.

Figure 1: Basic 3D visualisation

Figure 2: A detailed, photo-realistic environment


The virtual plant

The 3D content section of the VR

environment requires a CAD file as the

basic source of the material. This can either be in standard 2D or in advanced formats

(such as those created by COMOSFEED™,

SMARTPLANT® and AUTOCAD®). These

programs generate a 3D CAD® file used

to speed up the conversion process

required for photo-realistic, real-time

graphics. Initially, a basic 3D geometry is

created to reflect plant specifications, and

then software such as 3dStudioMax® or

Maya® is used to process graphics details.

This software adds the details and small

adjustments needed to turn a flat CAD into

a photo-realistic product. Various other

tools and applications optimise textures and

illumination to further improve the effect.

Unlike conventional, non real-time

rendering, a real-time program allows

users to move and interact freely within the

environment. Special graphic technology

permits the environment to be rendered at

60 frames a second, compared to one frame

per second in the traditional approach.

Specific optimisation techniques are required to achieve 60 frames a second, including the following (a brief illustrative sketch follows the list):

• Level of detail (LOD) geometries, used

where the detail is not needed

• UV maps compression for the illumination

data that is baked on textures

• Texture tiling to prevent pixel wastage

• BSP/portals generation for large-scale

environments
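At 60 frames a second the renderer has a budget of roughly 16.7 ms per frame, which is why techniques such as level-of-detail (LOD) switching matter. The following is a minimal sketch, in Python, of distance-based LOD selection; the mesh names, distance thresholds and the select_lod function are illustrative assumptions and are not drawn from any particular VR toolset.

import math

# Hypothetical LOD table: (maximum camera distance in metres, mesh variant).
LOD_TABLE = [
    (15.0, "pump_high_detail"),          # full geometry when the user is close
    (60.0, "pump_medium_detail"),        # simplified mesh at mid range
    (float("inf"), "pump_low_detail"),   # impostor/billboard far away
]

def select_lod(camera_position, object_position):
    """Return the mesh variant to render for the current camera distance."""
    distance = math.dist(camera_position, object_position)
    for max_distance, mesh in LOD_TABLE:
        if distance <= max_distance:
            return mesh
    return LOD_TABLE[-1][1]   # unreachable safeguard

# Example: a camera 40 m away from a pump gets the medium-detail mesh.
print(select_lod((0.0, 0.0, 0.0), (40.0, 0.0, 0.0)))

The same principle applies to the other items in the list: texture tiling and UV-map compression reduce per-frame memory traffic, while BSP/portal structures cull whole sections of a large plant model before they ever reach the renderer.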

Once the graphics have been created,

the next step is to detach the geometries

that represent the interactive actors. This

is important because it separates dynamic

geometries (those that can move and be

interacted with) from static geometries

(those that cannot).

The final step is to create a collision

geometry that resembles the graphics

geometry. This allows users to collide with

the virtual environment rather than simply

passing through it.
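As a rough illustration of what a simplified collision geometry provides, the sketch below (a hypothetical Python fragment, not taken from any engine) tests a proposed user position against coarse axis-aligned bounding boxes; real systems use richer shapes and continuous collision detection.

# Minimal axis-aligned bounding-box (AABB) collision test.
# Box extents are hypothetical stand-ins for simplified collision geometry.
def point_in_box(point, box_min, box_max):
    """True if the point lies inside the box on all three axes."""
    return all(lo <= p <= hi for p, lo, hi in zip(point, box_min, box_max))

def blocked(new_position, collision_boxes):
    """Reject a proposed move if it would place the user inside any box."""
    return any(point_in_box(new_position, lo, hi) for lo, hi in collision_boxes)

# Example: a tank shell modelled as one coarse box.
tank = ((2.0, 0.0, 2.0), (6.0, 4.0, 6.0))
print(blocked((3.0, 1.0, 3.0), [tank]))   # True  - the move is stopped
print(blocked((8.0, 1.0, 3.0), [tank]))   # False - the move is allowed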

VR platform and architecture

The VR interactive system is a server-centric

distributed application that centralises

scene updates. Therefore, it enables

scene rendering to be carried out on many

concurrent stations. The server synchronises

directly with SimSci-Esscor’s SIM4ME®

simulation engine, so the properties of each

plant element in the VR scene are constantly

updated in time with the process simulation.

Other stations have various roles within the

simulation and are able to communicate

with each other through a network using the

standard TCP/IP protocol.

The server application handles

communication among the various modules

and is responsible for the updated version of

all scene parameters. It retains a copy of the

scene graph — a hierarchical representation

of the 3D scene — that is synchronous with

the one present in each satellite application.

The server application constantly updates

scene graph data, notifying changes via the

network protocol to satellite applications.

These satellite applications are in command

of rendering the visualised data and

providing additional functionality to

users. Meanwhile, the main client station

reproduces the plant environment and

allows users to perform actions on plant

elements (for example opening a valve),

playing the role of Field Operator. All

actions performed by the virtual Field

Operator are tracked and synchronised with

the other platform elements, including the

process simulator. Outputs can be displayed

on various systems, from standard desktop

monitors and head-mounted displays to

immersive projection systems. Both mono

and stereoscopic vision can be used.

The VR system requires a monitoring station

that centralises all information on a running

simulation. This includes the number and type of connected stations, the 3D model used and the specific training exercises being carried out within the simulation. The

monitoring station can be integrated with

the Instructor Station on traditional OTS

systems, giving a single point for managing

a full training session. Events and training

exercises are triggered by the Instructor

Station and transmitted to both the SIM4ME

engine and the VR platform.

IPS’ DYNSIM® and FSIM Plus® interface

directly with the VR system main simulation

modules. They give a fully synchronised

integration between the 3D world and the

process/control simulation. So any action

that the Field Operator carries out in the

3D environment is immediately reflected

in DYNSIM. Conversely, any value that is

updated by DYNSIM is also updated in the

VR platform.

Figure 3: A schematic system architecture
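To illustrate the kind of scene-graph update a server-centric design might push to its satellite renderers over TCP/IP, here is a minimal Python sketch; the message fields, the element name V-101 and the use of newline-delimited JSON are assumptions for illustration only and do not describe the actual SIM4ME or EYESIM interfaces.

import json
import socket

def encode_update(element_id, property_name, value, sim_time):
    """Serialise one scene-graph change as a newline-delimited JSON record.
    Field names are illustrative, not a documented protocol."""
    message = {"element": element_id, "property": property_name,
               "value": value, "time": sim_time}
    return (json.dumps(message) + "\n").encode("utf-8")

def broadcast(satellite_sockets, payload):
    """Push the same update to every connected satellite renderer."""
    for sock in satellite_sockets:
        sock.sendall(payload)

# Demonstration with a local socket pair standing in for a TCP link.
server_end, satellite_end = socket.socketpair()
broadcast([server_end], encode_update("V-101", "opening", 0.75, 128.4))
print(satellite_end.recv(1024).decode("utf-8"))

The design point the sketch captures is that the server owns the authoritative scene graph and every satellite simply renders the stream of changes it receives, which is what keeps many concurrent stations in step with the process simulation.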


Impact of VR on training

The main advantage that VR brings to both

theoretical and conceptual training is that

it allows trainees to become much more

familiar with the layout and operation of

the subject matter. For example, training

on a specific piece of equipment not only

involves 3D models that can be viewed from

any angle, but also allows that equipment

to be set in motion. For integrated systems,

such as complex processes, VR allows

trainees to walk around the 3D model and

improve their spatial awareness of the

plant.

In addition, when integrated with the

detailed DYNSIM simulation environment,

VR techniques can be used to enhance the

representation of process unit behaviour.

There are three main ways for this

integration to be represented:

• A navigational front-end representation

of continuous rather than discrete state

for multi-degree of freedom objects. This

supplies visual feedback only, with no

equipment interaction

• As above but with equipment interaction

• Complete environment emulation (a

synthetic world) by a link between process

simulation models and physical-spatial

models for all training objectives

Note that giving users a fully interactive

view can, in some cases, detract from the

objectives being taught, as it can be a much

more complex system to understand.

The main elements of a training session are:

1. Setting objectives;

2. Outlining contents;

3. Choice of methodology;

4. Assessment.

The VR platform should guide the user

through the development of all the following

elements:

1. Setting ‘training objectives’ highlights

the different options available:

• Technical training focussed on transferring

technical knowledge

• Operational training focussed on skills

and procedures

• Safety training focussed on possible

hazards

• Emergency response; how to react to a

critical situation

• Interpersonal skills training (crew training):

communication, collaborative decision-

making and teamwork

These are all supported by the VR platform.

Some training modules may be solely

devoted to process knowledge, for example

a session that provides greater process

understanding for operators. However, most

training sessions should deal with all the

elements listed above.

2. The majority of the training sessions

will be structured around the learning

of specific tasks. The content is normally

structured in the form of a detailed

account of tasks.

3. The VR platform facilitates good

teaching practice by allowing trainers

to individually match up trainees with

the training mode most suitable for

them. Therefore, some trainers might

introduce tasks slowly in a progressive

learning curve, while others might prefer

that trainees meet the full task in all its

complexity (step-by-step guidance/task

guided mode).

4. VR training allows skills transfer

rather than simply knowledge transfer.

Importantly, it also allows these skills

to be tested. So assessing trainee

performance becomes a simpler task.

Note that the platform also includes

alternative modes to score results from

training sessions.

VR models in process design

Fig. 4 shows an example of iterative and

concurrent process design based on

the use of VR models. The client/EPC is

responsible for the overall design process

while the design teams within construction,

mechanical, control system, etc., are

responsible for the design of the plant

subsystems, (such as process equipment,

building structure and installations). All

design teams are also responsible for

providing correct and updated input data to

the “VR database”. The VR provider, working

for the client, manages all the VR data and

makes updated and corrected VR models

accessible for everyone involved in the

design process.

The VR models from 1 to n in Fig. 4 provide

the design teams with structured and

easy-to-understand design information. This

is done in a way that is not possible

using a traditional design approach based

on 2D CAD drawings. By navigating in the

models, stakeholders can analyse the design

from both a general and a more detailed

perspective. Moreover, with VR models, it is

easier to explain and discuss different design

solutions with a larger group of stakeholders,

particularly where they have different ways of

interpreting 2D design drawings. This ability

to collect views from different perspectives

Figure 4: An iterative design process with specified VR models in a concurrent and multi-

disciplinary design situation.


gives a better and more productive overall

design approach. It also makes it much

simpler to discover and correct collisions

and design errors earlier in the design

phase.

Fig. 5 shows how a VR model identified a

bad design solution, in this case process

water outlets hindering access. Finding a

solution to this type of error after the event

is often highly costly. Such an error also

affects production by generating delays in

the re-scheduling and re-planning of some

activities.

During a trial on the use of VR models,

users commented that a major benefit of

the technology is that it gives a far greater

appreciation of other skilled areas involved in the overall project. It also saves time. Said one design manager, “I was sceptical at first. Then I realised that by studying one VR model I could save a lot of time and be more focused on important issues rather than searching through piles of drawings.”

Figure 5: A screenshot extracted from a VR model showing a design solution that would have blocked the access

Figure 6: A screenshot showing the use of avatars for investigating the maintainability of the process machinery in the plant.

Increasing time-pressure on projects,

partnerships and/or MAC roles is likely to

be the stimulus that enhances collaboration

between stakeholders. This could lead to

a concurrent design approach in which

VR models are used to coordinate and

communicate the design to the client. In

addition, as well as making it easier for

them to make crucial decisions, VR models

can also involve the client in everyday

design work. Being able to quickly sort the

relevant information and present it in an

easy and comprehensible way enables

the client to collect opinions from a

wider audience, such as the operations

and maintenance staff, and to improve

the organisation’s decision-making

procedures.

VR in maintenance tasks

To understand the training aids required

for maintenance training, the first

logical step is to look at the

task that the trainee is expected to

perform after completing the course.

In process operations, for example,

the organisation of that task is heavily

dependent on the industry sector, the

range of equipment to be maintained

and specific company culture.

Irrespective of the subject matter, however,

in general a maintenance task can be broken

down into the following subtasks:

• Replication - being able to reproduce the

reported fault.

• Identification - being able to accurately

diagnose the source of the fault.

• Rectification - correcting the fault by

taking action appropriate to the policies

of the maintenance establishment.

• Confirmation - checking to see that the identified fault has been cleared.

Each of the four stages described above requires a mixture of generic and specific physical and mental skills.

When using VR facilities, the usual

approach is to train users to have a deep

understanding of both the maintenance task

itself and the science behind the involved

equipment. This means that the structure

of a typical training course includes training

objectives that can be taken from a broader

number of training categories:

1. Initial Theoretical Training

2. Instructor Led Training

3. Systems Appreciation

4. Fault Diagnostics Training

5. Recognition Training

6. Equipment Familiarisation

7. Scenarios Simulation

8. Visual Appreciation

9. Hand/Eye Coordination

10. Spatial Appreciation

For example, in the analysis of an overall

plant working environment, a specially

designed avatar of large size could

mimic the behaviour of operational and

maintenance staff. This is primarily a system

analysis (3, 8, and 10) where working spaces,

escape routes, risky areas and transportation

routes within the plant are investigated from

a logistics viewpoint. The result of such

analysis allows maintenance procedures

to be optimised and highlights if there is a

need to ask a design team for improvements

or modifications.

A second example (see fig. 6) also refers to

spatial appreciation (10), but this time to

improve equipment familiarisation (6) and

hand/eye coordination (9). The operation


Figure 7: A virtual fire

of a highly automated industrial process

depends largely on the maintainability

of its process equipment. Because of the

huge economic impact that a failure could

have, preventing such events has a very

high priority. Therefore, to make sure that

maintenance can be properly conducted and

performed on time, maintenance personnel

can participate in training using avatars or

in “first person” through VR models of the

process machinery and layout. Therefore,

maintenance issues involving diagnostics,

timing and procedures can be highlighted

and consequently optimised.

VR in safety

Using VR, field operators feel completely

immersed and perceive the virtual

environment as if it was the real plant. By

simply putting on their goggles, they are

able to see stereoscopically the spatial

depth of the surroundings, walk-through the

virtual plant and “feel” it. 3D spatial sound

contributes to this natural feel, as does the

ability to perform tasks using different hand-

held devices. Once immersed in a virtual

environment where everything resembles

reality, all normal and abnormal situations

can be experimented on and tested by

operators. Any action either in the field or

in the control room is simulated rigorously

in terms of process behaviour with a clear

action/reaction perception. In practice, VR

allows operators to test every abnormal

situation that can be thought of, alongside

little-understood atypical plant behaviours.

Both expected and predictable malfunctions

can be tested in their entirety, up to and

including the disaster that might result. After

all, learning from a virtual disaster can help

avoid the real thing.

The strength of this approach is two-fold:

safety can be tested and experimented

upon as a training tool; and risk-assessors

are better able to identify hazardous

scenarios. Together, they improve the ability

of operators to make the right decisions

at the right times. In other words, VR

makes training, risk assessment and safety

management more effective and realistic

than ever before.

Conclusions

VR provides a 3D computer-generated

representation of a real or imaginary world

in which the user experiences real-time

interactions and the feeling of actually being

present.

VR technology is well developed and

cost-effective, even for smaller organisations

or companies who might be considering

its use. The flexibility of VR-based training

systems means that they are simple to

configure, use and will form an increasingly

important element of new training systems.

The ability to simulate complex processes

by virtual actions means that trainees

experience an environment that changes

over time. At the same time, using computer

models of real equipment is risk free and

allows endless experimentation without the

need to take real equipment off-line and

risk production. This allows users to learn

within computer-generated environments

and gives them the opportunity to make

mistakes and suffer the consequences

without putting themselves at risk.

Overall, VR improves design procedures

and is a far superior training tool to more

traditional approaches. As a result, it saves

both staff time and money.

EYESIM for VR

To meet your virtual reality operator training

needs, Invensys offers EYESIM. EYESIM is

a comprehensive solution linking Control

Room Operators to Field Operators and

Maintenance Operators by means of a

High-Fidelity Process Simulation and Virtual

Walkthrough Plant Environment. EYESIM

provides complete Plant Crew Training

to improve skills that are safety-critical by

enabling operators to perform tasks in

a simulated environment, allowing them

to react quickly and correctly, facilitating

reactions in high stress conditions and

instilling standards for team training and

communications.

The EYESIM solution comprises a modelling engine powered by SimSci-Esscor’s DYNSIM and services through the SIM4ME bridge, coupled with a high-performance Virtual Reality Engine and a high-quality 3D Modelling/Scanning toolset.


Cyber security in the power industry

While Eskom is seen as the principal source of electrical power

in South Africa, many large companies have their own power

generation capability, to the extent that they can contribute to the

national grid.

Many of these facilities are subject to cyber threats that could

severely disrupt production at the local and national level – and

what applies to power generation, applies equally well to many

other industries.

Ernest Rakaczky, Director of Process Control Network Security,

Invensys and Thomas Szudajski, Director of Global Power

Marketing, Invensys, explain the security challenges facing the

power industry.


Power industry locks down

1. Introduction

Like it or not, the power industry is

susceptible to a variety of cyber threats,

which can wreak havoc on control systems.

Management, engineering and IT must

commit to a comprehensive approach that

encompasses threat prevention, detection

and elimination.

Consider a couple of plausible threat

scenarios:

Cyber-attack scenario 1

Using “war diallers,” simple personal

computer programs that dial consecutive

phone numbers looking for modems, a

hacker finds modems connected to the

programmable circuit breakers of the electric

power control system, cracks passwords that

control access to the circuit breakers, and

changes the control settings to cause local

power outages and damage equipment. He

lowers the settings from, for example, 500

A to 200 A on some circuit breakers, taking

those lines out of service and diverting

power to neighbouring lines. At the same

time, he raises the settings on neighbouring

lines to 900 A, preventing the circuit

breakers from tripping and overloading

the lines. This causes significant damage to

transformers and other critical equipment,

resulting in lengthy repair outages.

Cyber-attack scenario 2

A power plant serving a large metropolitan

district has successfully isolated the control

system from the business network of the

plant, installed state-of-the-art firewalls,

and implemented intrusion detection

and prevention technology. An engineer

innocently downloads information on a

continuing education seminar at a local

college, inadvertently introducing a virus

into the control network. Just before the

morning peak, the operator screens go

blank and the system is shut down.

Although the above scenarios are

hypothetical, they represent the kinds of real

threats facing cyber security experts around

the world. Cyber security has become as

much a part of doing business in the 21st

century as traditional building security was in

the last. While power engineers have always

taken measures to maximise the security and

safety of their operations, heightened global

terrorism and increased hacker activity have

added a new level of urgency and concern.

Many plants are convinced their networks

are isolated and consequently secure,

but without ongoing audits and intrusion

detection, that security could be just a

mirage. Moreover, the growing demand for

open information sharing between business

and production networks increases the need

to secure transactions and data. For power

generating companies, where consequences

of an attack could have widespread impact,

the need for cyber security is even more

pressing.

A recent U.S. General Accounting Office

report, titled Critical Infrastructure

Protection, Challenges and Efforts to Secure

Control Systems, offered the following

examples of actions that might be taken

against a control system:

• Disruption of operation by delaying or

blocking information flow through control

networks, thereby denying network

availability to control system operators

• Making unauthorised changes

to programmed instructions in

programmable logic controllers (PLCs),

remote terminal units (RTUs) or distributed

control system (DCS) controllers, changing

alarm thresholds or issuing unauthorised

commands to control equipment. This

could potentially result in damage to

equipment, premature shutdown, or

disabling of control equipment.

• Sending false information to control

system operators either to disguise

unauthorised changes or to initiate

inappropriate actions by system operators

• Modifying control system software,

producing unpredictable results

• Interfering with operation of safety

systems

Historically, control system vendors have

dealt with such threats by focusing on

meeting customer specifications within

guidelines and metrics set by industry

standards groups such as the Institute

of Electrical and Electronics Engineers

(IEEE) and Instrument Society of America

(ISA). Indeed, much of this compliance

was designed into proprietary equipment

and applications, which were beyond the

skills of all but the most determined cyber

attacker. Increasingly, however, process

control networks are better equipped for

gathering information about generation

and distribution and sharing it with business

networks using standard communications

protocols such as Ethernet or IP. These open

protocols are being used to communicate

between dispatch, marketing, corporate

headquarters and plant control rooms as

well. While such sharing enables more

strategic management of enterprise assets,

it does increase security requirements.

2. Open exposure

The open and interoperable nature of

today’s industrial automation systems - many

of which use the same computing and

networking technologies as general purpose

IT systems - requires engineers to pay close

attention to network and cyber security

issues. Not doing so can potentially lead to

injury or loss of life; environmental damage;

corporate liability; loss of corporate license

to operate; loss of production, damage to

equipment; and reduced quality of service.

Such threats can come from many sources,

external and internal, ranging from terrorists

and disgruntled employees to environmental

groups and common criminals. Making

matters worse, the technical knowledge,

skills and tools required for penetrating

IT and plant systems are becoming more

widely available. Figure 1 shows that as the

incidence of threats increases, the level of


sophistication necessary to implement an

attack is decreasing, making it all the easier

for intruders.

Many companies are bracing for the worst.

Major power producers, for example, have

begun paying greater attention to security,

as manifested by active participation in

industry standards groups, including the

Department of Energy, the Federal Energy

Regulatory Commission, and the North

American Electric Reliability Council.

Power producers have also been putting

more pressure on automation suppliers and

their partners to accelerate development of

technologies that will support compliance

with emerging standards. The power industry

is looking to non-traditional suppliers to

improve control room design and access,

operator training, and procedures affecting

control system security that lie outside the

domain of control system vendors.

3. Not just an engineering problem

While power engineers will play a critical

role in hardening power operations against

intruders, collaboration and support of

both corporate management and the

IT department is essential. A company-

wide vulnerability audit of a large U.S.

utility revealed some areas of technical

vulnerability in the control system, but most

of the findings had to do with organisational

issues:

• Lack of plant-wide awareness of cyber

security issues in general

• Inconsistent administration of systems

(managed by different business units)

• Lack of a cyber security incident response

plan

• Poor physical access control at some critical assets

• Lack of a management protocol for

accessing cyber resources

• Lack of a change management process

• Undocumented perimeter access points

• Lack of a disaster recovery plan

• Inability to measure known vulnerabilities

Corporate management must first

acknowledge the need for secure

operations. Then, because few companies

will have the resources to harden all

processes against all possible threats,

management must guide development of

a security policy that will set organisational

security priorities and goals. Finally,

companies must foster collaboration among

all layers of management, IT and project and

plant engineering. Project engineers need

to understand the security risks and possible

mitigation strategies. IT, which brings much

of the security expertise, must understand

the need for real-time availability to keep

units online.

With priorities in place, engineering and

IT can work together to create a plan that

should, at a minimum, address the following

issues:

• An approach to convergence of IT and

plant networks

• A process for managing secure and

insecure protocols on the same network

• Methods for monitoring, alerting and

diagnosing plant network control systems

and their integration with the corporate

network

• A method for retaining forensic

information to support investigation/legal

litigation

• A means of securing connectivity to

wireless devices

Management must also recognise that

investment in prevention will have a

far greater payback than investment in

detection and removal. Although investment

in the latter areas may be necessary to ward

off immediate threats, focusing on activities

that prevent attacks in the first place will

reduce the need for future detection and

removal expenditures.

Embracing open standards in the age

of cyber-terrorism is another pressing

management issue. While sticking with

proprietary technologies may seem much

less vulnerable to intrusion, doing so will

limit reliability, availability and efficiency

improvements that could be available from

integration of digital technologies and

advanced applications. In fact, proprietary

technology could become even more

expensive as vendors seek to recover the

cost of additional hardening that may still

be needed, since these systems are secure

owing only to their obscurity, not to some

inherent capability. Management support at

the highest levels will help ensure that any

technical hardening is implemented most

strategically and cost-effectively.

Figure 1: Attack sophistication versus intruder knowledge.


4. A prevention-based cyber security architecture

One of the most effective ways to implement

a prevention-based, standards-driven cyber

security architecture is to segment the

network into several zones, each of which

would have a different set of connectivity

requirements and traffic patterns. Firewalls

placed at strategic locations provide the

segmentation. Intrusion detection and

prevention systems are also deployed at

key locations and alerts are reported to

a monitoring centre. Figure 2 illustrates

a multi-zone cyber security architecture,

consisting of five segments:

• The Internet Zone, which is the

unprotected public Internet

• The Data Centre Zone, which may be a

single zone or multiple zones that exist at

the corporate data centre, dispatch and

corporate engineering

• The Plant Network Zone, which carries

the general business network traffic

(messaging, ERP, file and print sharing,

and Internet browsing). This zone may

span multiple locations across a wide area

network. Traffic from this zone may not

directly access the Control Network Zone

• The Control Network Zone, which has

the highest level of security, carries the

process control device and application

communications. Traffic on this network

segment must be limited to only the

process control network traffic as it is

very sensitive to the volume of traffic and

protocols used

• The Field I/O Zone, where

communications are typically direct hard

wired between the I/O devices and their

controllers. Security is accomplished by

physical security means.

An extra level of control - commonly

implemented as DMZs on the firewall - is

often added for supplemental security.

These supplemental zones are typically used

for data acquisition, service and support, a

public zone and an extranet sub-zone.
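To make the default-deny character of such zoning concrete, the following is a minimal, hypothetical policy sketch in Python; the zone names echo the architecture described above, but the permitted flows are illustrative assumptions rather than a vendor rule set.

# Hypothetical default-deny policy between zones.
# Only explicitly listed (source, destination) pairs are permitted.
ALLOWED_FLOWS = {
    ("plant_network", "data_acquisition_dmz"),    # plant reads collected values
    ("data_acquisition_dmz", "control_network"),  # DMZ servers poll the controls
    ("data_centre", "plant_network"),             # business reporting traffic
}

def traffic_permitted(source_zone, destination_zone):
    """Default deny: traffic passes only if the zone pair is whitelisted."""
    return (source_zone, destination_zone) in ALLOWED_FLOWS

# The plant network may not reach the control network directly ...
print(traffic_permitted("plant_network", "control_network"))       # False
# ... but may reach data gathered for it in the data acquisition sub-zone.
print(traffic_permitted("plant_network", "data_acquisition_dmz"))  # True

In a real deployment these rules live in the firewalls at the zone boundaries; the point of the sketch is simply that anything not explicitly permitted between zones is dropped and, ideally, alarmed.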

The Data Acquisition and Interface Sub-Zone

is the demarcation point and interface for all

communications into or out of the process

control network. This sub-zone contains

servers or workstations that gather data

from the controls network devices and

make it available to the plant network.

The Service and Support Sub-Zone is

typically used by outsourcing agencies,

equipment vendors or other external

support providers that may be servicing

the controls network. This connection

point should be treated no differently

than any other connection to the outside

world and should therefore utilise strong

authentication, encryption or secure VPN

access. Modems should incorporate

encryption and dial-back capability.

Devices introduced to the network should

use updated anti-virus software. This last

item is particularly important for service

providers, who will often bring a PC into

the plant for analysis. An example is turbine

monitoring. What’s more, power companies

should audit outsourcing providers for

adequate security measures.

The Public Sub-Zone is where public facing

services exist. Web servers, SMTP messaging

gateways and FTP sites are examples of

services found in this sub-zone.

The Extranet Sub-Zone is commonly used to

connect to the company’s trading partners.

Partners connect to these by various

methods including dialup, private lines,

frame-relay and VPN. VPN connections

are becoming more common due to

the proliferation of the Internet and the

economy of leveraging shared services.

Firewall rules are used to further control

where the partners are allowed access as

well as address translation.

5. Securing the business network

The two most critical components of data

centre security are a perimeter firewall and

an internal firewall. The perimeter firewall

controls the types of traffic to and from the

public Internet while the internal firewall

controls the types of internal site-to-site

traffic and site-to-data centre traffic. The

internal firewall is essential for controlling

or containing the spread of network-borne

viruses. It also restricts types of traffic

allowed between sites and protects

protects the data centre from internal

intruders.

6. Securing the plant control networks

At the plant control network level are the

firewall, intrusion detection and prevention

technology, modems, and wireless access

points - all of which are integrated with a

communications infrastructure involving

equipment such as routers, bridges and

switches.

Firewalls restrict the types of traffic allowed

into and out of the control network zone,

and can be configured with rules that

permit only traffic designated as essential,

triggering alarms for noncompliant traffic.

Alarms should be monitored 24/7, either by

an internal or third party group. In addition,

each unit’s network should be isolated,

in particular from remote sites. This is

extremely important for recovery.

The firewall should use a logging server to

capture all firewall events either locally or

in a central location. One can, for example,

configure the firewall to allow remote telnet

access to the control network, but while the

firewall can monitor access to connections,

it cannot provide information about what

someone might be attempting to do with

those connections. A hacker could be

accessing the control system through telnet

and the firewall would have no way of knowing

whether the activity is from friend or foe. That

is the job of Intrusion Detection Systems (IDS)

and Intrusion Prevention Systems (IPS), which

can detect usage patterns.

Figure 2: Multi-zone cyber security architecture.

An IDS monitors packets on a network wire and determines if the observed activity is

potentially harmful. A typical example is a

system that watches for a large number of

TCP connection requests to many different

ports on a target machine, thus discovering

if someone is attempting a TCP port

scan. An IDS may run either on the target

machine, which watches its own traffic, or

on an independent machine such as an IDS

appliance (also referred to as Host IDS).
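A minimal sketch of the port-scan heuristic just described: count the distinct destination ports that a single source touches inside a time window and flag the source once a threshold is crossed. The window length, threshold and function name are arbitrary illustrative choices, not the behaviour of any specific IDS product.

from collections import defaultdict

WINDOW_SECONDS = 10    # illustrative sliding window
PORT_THRESHOLD = 20    # distinct ports before the activity is flagged

# source address -> list of (timestamp, destination port) observations
observations = defaultdict(list)

def record_connection_attempt(source, dest_port, timestamp):
    """Record a TCP connection attempt; return True if it now looks like a scan."""
    observations[source].append((timestamp, dest_port))
    # keep only the attempts that fall inside the window
    observations[source] = [(t, p) for t, p in observations[source]
                            if timestamp - t <= WINDOW_SECONDS]
    distinct_ports = {p for _, p in observations[source]}
    return len(distinct_ports) >= PORT_THRESHOLD

# Example: one host probing ports 1 to 25 within a few seconds trips the alert.
alert = False
for port in range(1, 26):
    alert = record_connection_attempt("10.0.0.5", port, timestamp=port * 0.1)
print(alert)   # True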

An IPS complements the IDS by blocking

the traffic that exhibits dangerous behaviour

patterns. It prevents attacks from harming

the network or control system by sitting in

between the connection and the network

and the devices being protected. Like an

IDS, an IPS can run in host mode directly on

the control system station, and the closer

to the control system it is, the better the

protection.

Modems connect devices asynchronously

for out-of-band access to devices. Because

modems can connect outside directly

through public carriers, they are unaffected

by security measures and represent a

significant point of vulnerability. At the

very least, any modem with links to the

main control network should be a dial-back

modem, which will not transmit data until

it receives dial-back authentication from

the receiving system. For sensitive data,

encryption is also recommended.

Wireless access points are radio-based

stations that connect to the hard-wired

network. Wireless communications can

be supported if implemented securely.

Solutions provided must be capable of

both preventing unauthorised access and

ensuring that data transmitted is encrypted

to prevent “eavesdropping.” For maximum

flexibility, devices must be capable of data

encryption with dynamic or rotating keys;

filtering or blocking Media Access Control

(MAC) addresses that uniquely identify each

network node; disabling broadcasting of

Service Set Identifiers (SSID), passwords

that authorise wireless LAN connections;

and compliance with 802.11 and 802.1x

standards. Consumer grade equipment is

not recommended and VPN connection

with software clients is preferable to WEP

or proprietary data encryption. This allows

support for multi-vendor wireless hardware

with a common solution.
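As a simple illustration of the MAC-filtering requirement mentioned above, the sketch below checks a client address against an allow-list; the addresses are fictitious, and in practice MAC filtering is only one layer alongside 802.1x authentication and encrypted transport rather than a defence on its own.

# Fictitious allow-list of wireless client MAC addresses.
ALLOWED_MACS = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}

def association_permitted(client_mac):
    """Allow association only for known adapters (case-insensitive match)."""
    return client_mac.lower() in ALLOWED_MACS

print(association_permitted("00:1A:2B:3C:4D:5E"))   # True
print(association_permitted("66:77:88:99:aa:bb"))   # False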

VPN concentrators are devices that

encrypt the data transferred between the

concentrator and another concentrator or

client based on a mutually agreed upon key.

This technology is most widely used today

to allow remote users to securely access

corporate data across the public Internet.

The same technology can be used to add

additional security when accessing data across

wireless and existing corporate WANs. In

lieu of a separate VPN concentrator, it is

possible to utilise VPN services that are

integrated with the firewall.

While the firewalls, IDS, IPS and encryption

add the greatest hardening, they must work

in tandem with the existing communications

infrastructure. The job of the routers, hubs,

bridges, switches, media converters and

access units is to keep network information

packets flowing at the desired speed

without collision. The more network traffic is

routed, segmented and managed, the more

easily any intrusion can be contained and

eliminated.

Although some of these systems do have

certain levels of security functionality built

in, it is not wise to rely on that to protect

mission-critical data. Routers, for example,

can be configured to mimic basic firewall

functionality by screening traffic based on

an approved access list, but they lack a

hardened operating system and other robust

capabilities of a true firewall.

7. Planned-in prevention

Developing a prevention approach to

plant control systems will require a new

approach to network security between the

plant network layer and business/external

systems. It is an ongoing process that

begins with awareness and assessment,

continues through the creation of policy

and procedures and the development of

the security solution, and includes ongoing

security performance management.

Some of the key activities for the awareness

and assessment phase include defining

security objectives, identifying system

vulnerabilities, establishing the security

plan and identifying the key players on

the security team. In this phase, one is

determining which networks are involved

and how isolated they are from each other.

Is there a DCS for common systems, coal

handling, service water, for example? How

vulnerable are remote facilities, such as

landfill and water treatment?

At the policy and procedures phase, one

would review safety and security aspects of

established industry standards such as ISO

17799, ISA-SP99, META, and CERT along

with regulatory drivers, such as those offered

by FERC, NERC, and DOE. Local regulatory

requirements related to site security and

safety must also be considered.

The security solution phase is where

one would focus on technologies and

processes for system access control,

perimeter security and isolation, identity and

encryption, intrusion detection and system

management. In the security program

performance and management phase, one

would address continual monitoring and

alerting, yearly audits, periodic testing and

evaluation, and continual updating of system

requirements.

Following the procedures defined above

will not guarantee immunity from cyber

attack, but will ensure that the risk has been

managed as strategically and cost-effectively

as possible.


John Coetzee – Business Development Director, Aristotle Consulting (Pty) Ltd.

Control and Automation security: Fort Knox or swing-door

Introduction

Fort Knox is a (gold) bullion repository

located in Kentucky. It was built in 1936 on a

military base and has walls of granite, steel

and concrete to protect its exterior. The

building can be completely isolated and

supply its own water and power for a limited

period. The vault is sealed with a 20-ton

steel door. The combination to enter the

vault is split across different personnel, so no

one person knows the entire combination.

The building, under the control of the USA

Mint, is currently guarded by over 20 000

soldiers, making it one of the most secure

locations on the entire planet [1].

In contrast, a swing-door is defined as “A

door that is opened by either pushing or

pulling from either side (i.e. opens both

ways) and is not normally capable of being

locked” [2].

Hopefully, your security is neither Fort Knox

(no-one will ever be able to get anything

done) nor a swing-door (nothing will ever be

controlled).

Background

Events over the last decade have resulted

in far greater legal and governance

requirements being placed on corporates,

including in the IT arena. The South African

response has largely been addressed by

the King III code of practice4. King III has an

entire section dedicated to IT governance

and management and how it relates to

the Corporate Board, section 5 – The

governance of information technology. This

section goes on to prescribe that the board

must take responsibility for IT governance

as well as delegation of responsibility to

IT management for the implementation of

an IT governance framework. IT needs to

form part of the risk management of the

organisation. In Section 5.6.1 it specifically

refers to information security and the need

for management systems. In order to comply

with these best practices requirements,

corporates often refer to international

standards such as CobiT.

The CobiT [5] standard references the

word “security” over 250 times and has

entire sections dedicated to the effective

management of security within IT. CobiT

goes on to define Information Technology as the hardware and software that facilitate the input, output, storage, processing, and transmission of data, with no distinction made based on the application of these elements. This implies that the standard is equally applicable to Control and Automation (C&A) equipment and software.

Figure 1: Fort Knox Bullion Depository (Wikipedia [3])

About Aristotle Consulting

Aristotle Consulting offers consulting and training services to the Systems of the Manufacturing Industry. Aristotle Consulting can assist in developing and implementing your security plan. Aristotle Consulting specialises in providing consulting in best practice for your entire Manufacturing Systems portfolio.

Security approach

The goals of IT security are to:

• Protect C&A assets

• Control access to critical / confidential

information

• Ensure information exchanges between

systems are trustworthy

• Resist attack from external disaster or

sabotage

• Ensure failure recovery

• Maintain information and processing

integrity.

So, how do you approach a security model

for C&A? Below is a recommendation based

on the CobiT standard.

• Define a C&A security and risk plan,

implemented into policies and

procedures.

• Define a security, risk and compliance

responsibility model.

• When acquiring new technology,

implement the security and audit

measures as part of the installation and

commissioning.

• Manage changes to your environment

with a clear change management

procedure, including impact assessments,

authorisation mechanism, change tracking

and change completion.

• Manage the physical environment by

defining and implementing access control

policies.

• Monitor and evaluate the internal control

mechanisms periodically to identify

deficiencies and continuously improve

these mechanisms.

• Record, manage and address security incidents (see the sketch below).
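By way of illustration only, the sketch below shows the kind of minimal incident register the last point implies. The field names and workflow states are assumptions made for this example; they are not prescribed by CobiT or King III.

# Illustrative only: a minimal security-incident register. Field names and
# states are assumptions for this sketch, not prescribed by CobiT or King III.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class SecurityIncident:
    description: str
    affected_system: str
    severity: str = "medium"            # e.g. "low", "medium", "high"
    raised_at: datetime = field(default_factory=datetime.now)
    status: str = "open"                # open -> investigating -> closed
    actions: list = field(default_factory=list)

    def record_action(self, note: str) -> None:
        """Keep a timestamped trail of every step taken, for later audit."""
        self.actions.append((datetime.now(), note))

# Example entry for a hypothetical C&A incident
incident = SecurityIncident("Unexpected remote login on HMI node", "Line 2 operator station", "high")
incident.record_action("Account disabled; change request raised for firewall rule review")
incident.status = "investigating"

Even a simple register of this kind gives the periodic monitoring and audit steps listed above something concrete to work from.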

Conclusion

The use of “traditional IT” equipment is now pervasive in C&A systems and architecture, and it should be managed and governed according to requirements based locally on King III. To achieve this, look to international best practice on how to implement it; CobiT, for example, presents a useful model. Aim for a security model that is neither Fort Knox nor a swing-door, but one that provides a suitable level of control, risk management and ease of use for your organisation.

References

1. HARGROVE, J., 2003, Fort Knox Bullion

Depository, Dayton: Teaching & Learning

Company.

2. WIKTIONARY, 2011 [Online] Available at www.wiktionary.org/wiki/swing_door [Accessed on 9 May 2012].

3. WIKIPEDIA, 2012 [Online] Available at

http://en.wikipedia.org/wiki/Fort_Knox_

Bullion_Depository [Accessed on 9 May

2012].

4. KING III, 2009. King III Code of

Governance for South Africa 2009, Institute

of Directors. [Online] Available at: http://

african.ipapercms.dk/IOD/KINGIII/

kingiiicode/ [Accessed on 10 April 2012].

5. COBIT, 2007, 4.1 ed., IT Governance

Institute. Rolling Meadows.

For more information contact:

John Coetzee

Business Development Director

Aristotle Consulting (Pty) Ltd

Landline: +27 79 517 5261

Mailto: [email protected]

Web site: www.aristotleconsulting.co.za

Find us on LinkedIn: http://www.linkedin.com/profile/view?id=60025171&trk=tab_pro


Eskom conforms to legal emission

limits with help from Wonderware

The measurement of stack emissions at coal-fired power stations is of high importance to Eskom as

exceeding the emission limits may result in the forced shutdown of generating units. These emission

levels are imposed by legislation and must therefore be monitored and alarmed continuously.


About Eskom Holdings Limited

Eskom generates approximately

95% of the electricity used in South

Africa and approximately 45% of

the electricity used in Africa. The

company generates, transmits and

distributes electricity to industrial,

mining, commercial, agricultural

and residential customers and

redistributors. The majority of sales

are in South Africa. Other countries

of southern Africa account for a small

percentage of sales.

Additional power stations and major

power lines are being built to meet

rising electricity demand in South

Africa.

Eskom buys electricity from and

sells electricity to the countries of

the Southern African Development

Community (SADC). The future

involvement in African markets

outside South Africa (that is the

SADC countries connected to the

South African grid and the rest of

Africa) is currently limited to those

projects that have a direct impact on

ensuring a secure supply of electricity

for South Africa.

The Current Eskom Power Generation Fleet:

(One Unit = One boiler + one generator)

To address the problem, system integrator Bytes Systems Integration used Wonderware solutions to implement a comprehensive emission monitoring system which is flexible enough to handle geographically-dispersed data sources while complying with various business rules. The result is a system which is helping to ensure the supply of electricity while minimising the impact on the environment.

Background

The combustion of coal produces almost as much carbon emissions as the combustion of petroleum (figure 1). What can we do about the more than two gigatons of carbon released into the atmosphere in the form of carbon dioxide every year? The answer is “not much”, unless you treat the problem at its source – which is exactly what Eskom has been doing for several decades. 95% of Eskom’s generating capacity comes from coal, and ash emissions from Eskom’s coal-fired power stations have been reduced by more than 90% since the early 1980s thanks to the installation of efficient pollution abatement technology and the decommissioning of older plant.

“Without treatment, we would be spewing concentrations of 30 000 to 60 000 mg of ash per normal cubic metre into the atmosphere,” says Dr. Kristy Ross, senior consultant at Eskom. “But with the use of abatement technology such as electrostatic precipitators or fabric filter plants, more than 99% of ash is removed from the flue gas stream, providing a particulate emission concentration of usually less than 200 mg/normal cubic metre.”

Getting legal

Every power station has an emissions licence

with which it needs to comply. This ensures

that the environment and health of people in

the vicinity of power stations are not affected

negatively. The emissions licence specifies

two limits – a ‘normal’ operating limit, which

emissions must be below for 96% of the

time and a ‘cap’ limit, which emissions must

never exceed. For example, Lethabo Power

Station’s licence limits are:

• Normal limit: 75 mg/Nm3

• Cap limit: 300 mg/Nm3

• Grace period: 90 hours per stack (three units) – the time given to rectify malfunctions such as poor-quality coal or equipment breakdowns. The normal limit may be exceeded during this period, but emissions must remain below the cap limit (the sketch below illustrates this logic).
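As a rough illustration of that logic, the sketch below classifies a single stack’s hourly average against the limits quoted above. The function and variable names are made up for the example; this is not part of the actual Eskom or Bytes system.

# Illustrative only: the Lethabo licence logic described above, with made-up
# names. Not part of the actual Eskom/Bytes implementation.
NORMAL_LIMIT_MG_NM3 = 75    # may only be exceeded during the grace period
CAP_LIMIT_MG_NM3 = 300      # may never be exceeded
GRACE_HOURS_PER_STACK = 90  # time allowed above the normal limit

def licence_status(hourly_average, hours_above_normal):
    """Classify one stack's hourly average (mg/Nm3) against the licence."""
    if hourly_average > CAP_LIMIT_MG_NM3:
        return "CAP LIMIT EXCEEDED - immediate action (e.g. load loss) required"
    if hourly_average > NORMAL_LIMIT_MG_NM3:
        if hours_above_normal >= GRACE_HOURS_PER_STACK:
            return "GRACE PERIOD EXHAUSTED - licence non-compliance"
        return "ABOVE NORMAL LIMIT - consuming the 90-hour grace period"
    return "COMPLIANT"

print(licence_status(82, 12))   # ABOVE NORMAL LIMIT - consuming the grace period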

Staying within these legal requirements, however, isn’t plain sailing. Because of the capacity shortage, shutdowns for maintenance or repair are reduced to a minimum, which means that equipment isn’t necessarily operating at maximum efficiency. Varying coal qualities and high load factors also contribute to the difficulty of complying with the legal emission limits.

Figure 1: The main contributors to global fossil carbon emissions

“This is definitely going to form part of our Peak Hour Risk Analysis”
Vusi Shabangu, Head Office Generation Control Centre Shift Manager

“This project proves once again that the Wonderware System Platform can be used to add tremendous value to any organisation. The flexibility of System Platform enabled us to connect to multiple source systems and applications to deliver critical decision-making information to the highest levels of the company.”
Gerhard Greeff, Bytes Process Management and Control

“Under exceptional circumstances, where taking a unit off load would result in load-shedding, we ask the authorities for short-term exemption from the emission licence rules, usually from the normal limit,” says Dr. Ross.

The problem

Control room operators at power stations

must keep a constant lookout for potential

emission problems that might exceed the

set legal limits. In the case of such an event,

it might be necessary to exercise what’s

known as “load loss” but this will have a

ripple effect in that another power station

will be required to take up the slack by

ramping up its production.

The solution

“Given the scope of the problem and

Eskom’s nation-wide footprint of 13

operational coal-fired power stations, it

was decided that emission status should be

centrally monitored and controlled in real-

time from the Integrated Generation Control

Centre at Megawatt Park,” adds Dr. Ross.

This would allow the information to be

available remotely through a user-friendly

interface so that environmental specialists

could take the necessary action to control

some complex processes. Top executives

also needed access to this information via a

web-interface.

In short, the project goals were to:

• Prevent financial and production losses

caused by forced outages.

• Prevent environmental degradation and

fines from authorities due to exceeded

emission limits.

• Deliver real-time KPI dashboards and

reports.

• Give early warning alarms before emission

limits are exceeded, enabling preventative

measures to be implemented.

Solution selection

System Platform (ArchestrA)-certified system

integrator Bytes Systems Integration was

chosen for the project because of the

company’s long-standing and successful

relationship with Eskom, notably in the

Enterprise Manufacturing Intelligence (EMI)

field of which this project forms part.

Due to its ease of integration with other

initiatives and customisation capabilities

as well as its scalability, Bytes would use

the existing Wonderware infrastructure

consisting of System Platform (ArchestrA),

Historian, Historian Client (ActiveFactory),

InTouch (SCADA/HMI), Information Server

and Alarm Provider.

Implementation

Figure 3 shows the interaction necessary between all the players. Aggregated hourly averages from each power station are sent to Megawatt Park, where the necessary operational decisions can be made.

Machiel Engelbrecht of Bytes explains: “For example, during start-up after a shutdown, a unit’s emissions will be higher than during normal operation. So it’s important to know when a unit is about to come on line, how long it was off and how long the higher emission level is likely to last. This helps when applying for the necessary exemption from the authorities. In addition, the supplied information places Megawatt Park in a good position to initiate preventative measures and to ensure optimal load distribution in the event of a shutdown due to excessive emissions.”

The geographically-dispersed historians

are used as the base for real-time

information and this is then compared to

targets, plans and projections from other

transactional systems such as information

supplied by National Control (Simmerpan

- Germiston).

Trending information is required to monitor

the emission levels over certain time periods

and this is done with ActiveFactory. Aspects

of the Wonderware Historian Client are also

used to calculate the hourly time-weighted

averages.
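The time-weighted average itself is straightforward arithmetic: each sample is weighted by how long it remained the current value within the hour. The sketch below shows that calculation on made-up readings; it is not the Historian Client’s own implementation.

# Sketch of an hourly time-weighted average for one tag, given (timestamp,
# value) samples. Readings are invented; not the Historian Client's code.
from datetime import datetime

def time_weighted_average(samples, period_start, period_end):
    """Weight each sample by how long it was the 'current' value in the period."""
    samples = sorted(samples)
    total_seconds, weighted_sum = 0.0, 0.0
    for i, (t, value) in enumerate(samples):
        start = max(t, period_start)
        end = samples[i + 1][0] if i + 1 < len(samples) else period_end
        end = min(end, period_end)
        seconds = (end - start).total_seconds()
        if seconds > 0:
            weighted_sum += value * seconds
            total_seconds += seconds
    return weighted_sum / total_seconds if total_seconds else None

hour_start = datetime(2012, 5, 9, 10)
hour_end = datetime(2012, 5, 9, 11)
readings = [(datetime(2012, 5, 9, 10, 0), 70.0),   # mg/Nm3 for 40 minutes
            (datetime(2012, 5, 9, 10, 40), 130.0)]  # mg/Nm3 for 20 minutes
print(round(time_weighted_average(readings, hour_start, hour_end), 1))  # 90.0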

Wonderware’s Information Server is used to distribute data and information where it can be monitored from any client workstation. A single dashboard for each thermal power station was developed, showing hourly emission values together with their specific limits and for how long the normal limit has been exceeded. Any applicable exemptions to the limits are also shown and the system adapts to them automatically. Early warnings are raised when emission values get within 20% of the acceptable limits, so that the right decisions can be made to prevent penalties and unit shutdowns. A simplified ‘robot’ (traffic-light) indicator on the dashboard gives a quick overview of each station’s status, from which the user can drill down to more detailed information.

The first three power stations’ emission monitoring was delivered within a month. This was followed by training of head office’s control centre operators, environmental consultants, managers and station personnel.

Bytes enlisted the help of environmental specialists from Eskom to provide scenarios and business rules. They also interacted with various system owners within the organisation for access to data. All development and testing was done on a live system. “The end-user helped by entering manual data, such as exemption information, in parallel to their existing process,” says Engelbrecht. “This speeded up delivery as all values could be verified in real time. Excel was used as an input form for the operators at the stations as it is a product with which they are familiar. We used MSSQL to extract the time-weighted hourly averages from the Wonderware Historian.”

The main beneficiaries of the system’s information are the senior consultants from Environmental Management who generate reports for the authorities. Other beneficiaries include top executives and head office Generation Control Centre personnel, especially those involved with the early warning system, which involves risk and strategic analysis.

The system is integrated with the legal documentation of the authorities and the client, as well as with a number of other transactional and web-based systems within the business infrastructure.

“I cannot believe how quickly this system was implemented”
Dr Kristy Ross, senior consultant, Eskom

“Due to the integration, scalability and versatility of the various Wonderware solutions used, it was possible to deliver a sophisticated system quickly for a process which was previously accomplished manually due to its complexity.”
Machiel Engelbrecht, Bytes Systems Integration

Figure 2: Cause and effect – High load demand resulted in emissions exceeding the cap limit, which initiated “load loss” which, in turn, brought emissions back within the legal limits.

Figure 3: System flow diagram (Eskom WAN)

Figure 4: Desktop “widgets” alert supervisors of critical conditions and help them drill down to the cause through easy-to-understand dashboards.

Benefits

• Weekly reports are now supplemented with hourly monitoring – no more “after the fact” initiatives

• Early warnings of possible forced load losses – allows for pro-active decision-making


• Real-time KPI dashboards and reports – present a window on reality rather than history

• 24-hour monitoring and alarming – in keeping with the business Eskom is in

• Enables preventative measures to be implemented – early detection of trends is crucial to minimising downtime

• Ensures compliance with the emissions licence – eliminating environmental degradation and fines as far as possible

• Real-time monitoring of plant performance – provides symptoms of potential problems before they affect service delivery

Figure 5: System topology for each coal-fired power station (only Lethabo power station shown)

Conclusion

Eskom’s vast resources literally run South

Africa. Many people do not fully understand

the consequences of not having them. Quite

simply, Eskom is responsible for the way we

run our lives and it is encouraging to see the

steps the company is taking to ensure the

continuity of that way of life.

So, the next time you experience a blackout,

it may not be due to insufficient generating

capacity but to ... ash. Thankfully, that

particular source of annoyance is being

minimised rapidly through Eskom’s proactive

initiatives.


Thin client computing for virtualisation and SCADA

Thin clients – affordable terminals

In the 1970s and 80s, the PC revolution

gained momentum because PCs freed

people from the “big brother” syndrome

of mainframes by putting intelligence and

computing power to use on a “one-per-

desk” basis. And this is still the case today.

But there are instances where going back to that earlier concept, using today’s technology, is by far the more practical and cost-effective alternative.

Thin-client terminals provide a low-

cost alternative for system expansion while

increasing reliability. They are an ideal and

cost-effective solution for client/server

architectures featuring InTouch for Terminal

Services software.

These compact, lightweight, robust

industrial thin-client terminals are available

as an ACP ThinManager-ready client or a

thin client for Microsoft operating systems.

Rugged and reliable, thin client terminals

have no moving parts. There is no need to

worry about client system reconfiguration or

data loss.

Features and benefits

• Thin client computers are available as

ACP ThinManager-ready or with the

Microsoft Windows CE operating system

• Compact, reliable and robust hardware

with no moving parts

• Consistent operator interface from

supervisory to operator panels

• A standardised, maintainable approach to

supervisory and HMI systems

• Reduced hardware, maintenance and

lifecycle costs

• Reduced software administration and

management costs

• Lower support costs

Decrease your total cost of ownership

Thin client computers can decrease the total

cost of ownership for plant floor systems in

all of the following ways.

Reducing hardware costs

Thin client computers are affordable yet

flexible, durable and reliable. They are

usually factory-tested, certified and ready to

install right out of the box.

Standardising on one software platform

InTouch software can be used as part

of a thin-client/server architecture,

in standalone thick clients or as the

visualisation tool within the Wonderware

System Platform architecture. 

Decreasing administration and maintenance costs

System programmes and applications

are hosted on a centralised server, so

software enhancements, migration,

upgrades and deployment are simplified.

No programmes or system configurations

are required at the client level whenever

application changes occur.

Increasing security and reliability

Since all applications and critical data

are stored on a centralised server, it is

much easier to secure information at the

client level. Thin client computers are ideal

for locked-down local applications

and are less vulnerable to unauthorised

system data modifications and virus

infections. These thin-client terminals also

provide exceptional reliability because

there are no moving parts such as hard

drives and fans, making them ideal

for environments that are too harsh

for conventional PCs.

Applications

Thin client computers are ideal for

visualising, monitoring and controlling

machine or process operations. For

applications already using InTouch

for Terminal Services software or ACP

thin-client management and configuration

software, alternative thin client terminals of

choice can be a drop-in replacement.

Replacing legacy graphical operator

panels with Wonderware’s Thin Client

Computers can eliminate communication

gaps between traditional operator panels

and supervisory-level HMIs. Thin clients used as operator panels reduce hardware costs while increasing reliability and uniting disparate sources of information. Thin client computers are also excellent for SCADA/HMI applications requiring remote, secure and locked-down operations, protecting against potential system tampering.

Virtualisation makes the most of server power and technology, which negates the need for intelligence at the PC level, making thin clients the obvious, cost-effective replacement for traditional desktop PCs.

Thin clients have an obvious price and

reliability advantage over their (fat) PC

counterparts only if they can be properly

supported, provide the same functionality

and meet operational requirements. Thin

clients have found a natural niche in the

virtualisation and SCADA / HMI environment

because of their cost-effectiveness,

robustness, lack of moving parts such as

disc drives and ability to work in harsh

environments.

Their support is also much simpler than

going around and upgrading possibly

dozens of PCs because their “intelligence”

comes from one source.

Thin client support - ACP ThinManager Platform 5

The ThinManager Platform takes control of

resources in the modern factory

The number of software companies that

offer solutions for the modern workplace

is staggering. Browse any trade magazine

and the ads are plentiful with promises

to make you and your business more

efficient and profitable. Granted, many offer

viable solutions ranging from application

development to PLC programming, but in

practice, many fall short of their promised

ideals and still fail to fully address the real

needs of the modern plant floor.

Automation Control Products (ACP) was

created by a group of Integrators working

in process control environments every

day. The years spent repairing, replacing

and upgrading PCs on the factory floor

led to the idea that there should be a

better way to manage all of the computer resources – hardware and software – in these harsh, unforgiving environments.

This simple idea drove ACP to create

ThinManager.

Thin Clients make more sense than PCs

There are several issues with maintaining PCs in plant-floor environments, but two constant irritations are hard drives and fans. Hard drives are susceptible to magnetic interference and to the vibration of heavy machinery and assembly lines, while computer fans are always pulling dust and debris into the box, which eventually causes the PC to overheat.

It was these facts among others that led

ACP to begin looking at thin clients as

an alternative to the expensive PC-based

systems that were literally failing every day.

Thin clients have no hard drives and no fans.

A better software solution

While thin client hardware was an excellent

(and cheap) replacement for PCs, setting

up and administrating a terminal services

environment proved difficult and time

consuming. The existing management tools were also severely lacking: the available tools did not offer enough functionality to manage a thin client system properly – and so the creation of ThinManager began.

The goal was a management tool that would

offer control over the thin client terminals

but also the terminal server systems

themselves. By fully managing both ends

of the thin client system, administrators

would have total control of the thin client

environment and therefore total control of

the plant floor. ACP also knew that industrial

customers would need extensibility, support

for touch-screens as well as sound and

video cards. ThinManager was designed to

provide such extensibility and has always

provided support for more of these than any

other solution available.

Reducing management costs and consolidating applications

Since saving time and money was the main

reason for creating ThinManager, these same

two goals have driven the functionality of

the software. An easy-to-use interface allowing simple, wizard-based configuration of terminals and servers was one of the first things ThinManager offered. ThinManager-Ready thin clients allow you to connect easily to a terminal server to gain access to Windows applications, without necessarily exposing the full desktop and Start menu to the operator. Applications only need to be installed once on the terminal server but can be deployed to multiple users. This is still one of the greatest benefits of terminal services, and the time it saves reduces system setup time and costs right out of the box.

ThinManager also offers some very powerful

features at the server side of the thin client

system. Instant Failover allows for terminal

servers configured with ThinManager to

switch back and forth between servers in the

event of a server failure or if a server needs

to be taken down for routine maintenance.

The Instant Failover feature is so robust that

the thin client users are not even aware when

a server goes offline. This feature is part of

ThinManager’s “zero” downtime goal.
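Instant Failover is ACP’s own mechanism; purely to illustrate the general pattern it relies on, the sketch below shows a client that tries a primary terminal server and falls back to a secondary one, using hypothetical host names.

# Generic illustration of the failover pattern only - not ACP's implementation.
import socket

def connect_with_failover(servers, port=3389, timeout=2.0):
    """Return an open socket to the first reachable server in the list."""
    for host in servers:
        try:
            return socket.create_connection((host, port), timeout=timeout)
        except OSError:
            continue  # unreachable or down - try the next server
    raise RuntimeError("no terminal server reachable")

# Example usage with hypothetical host names (3389 is the standard RDP port):
# session = connect_with_failover(["ts-primary.plant.local", "ts-backup.plant.local"])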

High availability is of course a necessity for

any plant solution and usually, if a PC fails,

your process naturally becomes unavailable.

However, because of the nature of terminal services architectures, should a thin client

fail, your process software continues to run,

uninterrupted. Replacement of a thin client

can be done by an unskilled operator and

within a few minutes the operator station is

fully functional once more.

More core functionality

After ThinManager was installed into several

large customer sites and proved itself as a

viable alternative to traditional PCs, ACP’s

customers began requesting even more

features from ThinManager. One of the

first was the ability to see more than one

session at each terminal. ACP addressed

the problem and the result is something we

call MultiSession. Now ThinManager-Ready


thin clients can see multiple sessions by

accessing a simple drop-down menu.

ThinManager’s ability to see anything,

anywhere is what makes it such a powerful

tool for the factory floor. Built on top of the

MultiSession feature are other powerful

features like SessionTiling where you can tile

up to 25 sessions on a single screen. There

is also SmartSession which can balance the

load on a collection of terminal servers,

without the need to install a clustered

system.
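SmartSession’s weighting is ACP’s own; as a rough sketch of the underlying idea, the example below simply routes the next session to the terminal server reporting the lowest load, using made-up figures.

# Rough sketch of load-balanced session placement - not SmartSession itself.
def least_loaded(server_loads):
    """server_loads: dict mapping server name to a load figure (CPU %, session count, etc.)."""
    return min(server_loads, key=server_loads.get)

current_loads = {"TS01": 72.0, "TS02": 35.5, "TS03": 58.0}  # made-up figures
print(least_loaded(current_loads))  # TS02 receives the next session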

Perhaps the most used feature to emerge

from MultiSession is MultiMonitor. Just

as the name states, MultiMonitor allows

multiple monitors to be attached to a single

terminal and display multiple sessions.

The ability to arrange sessions on multiple

monitors is virtually unlimited. Currently

ThinManager supports thin clients that allow

up to five monitors to run through a single

terminal.

Increased visualisation and enhanced security

ACP customers were finding MultiSession so useful that ACP decided to add even more functionality; IP cameras were the next natural progression.

ThinManager allows a user to not only see

an image coming from an IP camera, but

lets them overlay that image on top of their

HMI. Now a worker could be at one end of

a baking oven and view the other end from

the terminal they are standing at without

having to walk back across the floor.

The IP camera feature can be combined

with ThinManager’s security module called

TermSecure to allow administrators to view

users when they log into a terminal. With

TermSecure, administrators can also deploy

programmes to specific terminals or users

and manage access through keyboard

logins, USB dongles or RFID cards that allow

users to “swipe in” instead of logging in.

This is an additional layer of security on top of Windows usernames and passwords.

The inherent security of thin client hardware

also allows factories to lock-down their

production environments even further. Thin

clients have no CD/DVD drives for loading

malicious content and ThinManager-Ready

thin clients have their USB drives disabled

by default. This means that employees are

not loading viruses, games or music onto

company hardware—and since there is no

hard drive present in the thin clients, there

is nothing to gain by theft of the unit nor is

there any data lost if a unit is taken.

New functionality will deliver virtualisation and mobile applications

Another technology ACP developers

see gaining traction in the IT world is

virtualisation. They believe this is another

resource that ThinManager can make easier

to manage for system administrators. Now,

just like they did with terminal services, ACP

is moving management of virtual resources

in the ThinManager tree giving ThinManager

administrators access to many of the same

features that are available in VMware’s

vSphere Client. ACP’s development team is

currently working on adding built-in support

for virtualisation. This will allow ThinManager

users to manage virtualisation and terminal

services at a much lower cost than previously

available.

If that were not enough, ACP has also

developed an application for Apple’s iOS

devices. Most of ThinManager’s capabilities

are now available on a mobile platform

for Apple’s iPhone and iPad. You can look

for this technology to expand into making

mobile devices become actual thin clients

themselves in the near future.

The final word

You would be hard-pressed to find as robust

a solution for managing your plant floor

environment as ThinManager. No other

management tool¬ takes control of all the

resources used in today’s modern factories

as completely as ACP’s ThinManager. With

more than 12 years in business, ThinManager

is a proven technology. Thousands of

companies in 30 countries, including one

in ten Fortune 500 companies, use ACP

ThinManager and ThinManager-Ready Thin

Clients for their daily operations.


Virtualisation needs high availability functionality

Product offerings

ftServer systems ensure uptime

assurance and operational

simplicity for:

• Wonderware System Platform

• Wonderware Application Server

• Wonderware InTouch® HMI for

terminal services

• Wonderware Historian

• Wonderware InBatch™ Software

• Wonderware Device Integration

I/O Servers

• Wonderware MES software

• Kepware KEPServer for

Wonderware

• ACP ThinManager® Platform

Whether you’re using thin clients or not,

server failures in critical production areas

and especially in a virtual computing

environment don’t only lead to costly

downtimes but can also have immediate and

serious consequences. Data loss, regulatory

non-compliance and health risks are just a

few of the hazards that can arise from even a

brief outage.

That’s why manufacturers running

critical Wonderware solutions rely on

fault-tolerant Stratus servers for continuous

24/7/365 uptime assurance. For nearly

a decade, Stratus ftServer systems have

delivered unsurpassed levels of reliability —

for both the server and operating system.

Today, ftServer uptime averages six nines

(i.e. 99.9999%), a number that translates to

less than 32 seconds of downtime per year.
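The conversion from an availability percentage to downtime is simple arithmetic, as the short calculation below shows.

# Converting an availability figure into downtime per year.
availability = 0.999999                    # "six nines"
seconds_per_year = 365.25 * 24 * 3600      # about 31.6 million seconds
downtime_seconds = (1 - availability) * seconds_per_year
print(f"{downtime_seconds:.1f} seconds of downtime per year")  # roughly 31.6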

Engineered to prevent failure

The ftServer architecture eliminates

single points of failure. Every system

comes equipped with replicated hardware

components, built-in Automated Uptime

technology and proactive availability

monitoring and management features.

These capabilities automatically diagnose

and address hardware and software errors

that would quickly halt processing

in other x86 servers. Even in the event of

a hard component failure, the duplicate

component simply continues normal

processing. As a result, your Wonderware

solutions run uninterrupted and your

critical data is fully protected from loss or

corruption — even data not yet written to

disk.

Why Invensys Wonderware customers choose Stratus

• Delivers industry-leading uptime

>99.999%

• Easy to set up and operate: load-and-go

simplicity; no need for sophisticated IT skills

• Simple to maintain: special features

include remote monitoring, diagnostics

and alerts; hot-swappable components;

no need for failover testing or scripts

• Provides comprehensive uptime

protection for Wonderware solutions

in Microsoft® Windows Server® and

VMware® vSphere™ environments

• Mission-critical services offer

comprehensive, 24/7 support and long-

term coverage plans

• Server consolidation: Wonderware System

Platform runs on a single, virtualised

ftServer system

Worry-free solutions that are simple to maintain

Real-time solutions place high demands

on hardware. When business needs

dictate continuous trouble-free

performance, the fault-tolerant ftServer

system is the right choice. Equipped

with Intel® Xeon® quad-core processors,

QuickPath Interconnect technology, up to

96 GB memory and 8TB of physical storage,

fifth-generation ftServer systems have

never been so right for critical heavy-duty

workloads. A choice of operating systems

adds to the versatility of ftServer solutions

and offers virtual environments the simplicity

of automatic “load-and-go” availability.

In any setting, Stratus’ advanced

technology makes these servers quick to

deploy, simple to manage and cost-effective

to own. Built-in self-diagnostics and “call-

home” capabilities automatically report

issues and, when necessary, the system even

orders its own hot-swappable replacement

parts. No special IT skills are required to

replace hardware components, saving time,

effort and expense.


Acronyms

COBIT Control OBjectives for Information and related Technology

DR Disaster Recovery

HA High Availability

IDE Integrated Development Environment (ArchestrA)

IaaS Infrastructure as a Service

LAN Local Area Network

PaaS Platform as a Service

SaaS Software as a Service

SCSI Small Computer System Interface

SSD Solid State Drive

VM Virtual Machine or Virtual Memory

VMM Virtual Machine Monitor

WAN Wide Area Network

Cloud computing – Cloud computing is the delivery of computing as a

service rather than a product, whereby shared resources, software and

information are provided to computers and other devices as a utility (like

the electricity grid) over a network (typically the Internet).

COBIT – First released in 1996, COBIT is a framework created by ISACA

(Information Systems Audit and Control Association) for information

technology (IT) management and IT Governance. It is a supporting

toolset that allows managers to bridge the gap between control

requirements, technical issues and business risks.

Virtualisation dictionary

Cores - Two or more independent actual processors that are part of a single computing environment.

Disaster Recovery (DR) - The organisational, hardware and software

preparations for system recovery or continuation of critical infrastructure

after a natural or human-induced disaster.

High Availability (HA) - A primarily automated implementation which

ensures that a pre-defined level of operational performance will be met

during a specified, limited time frame.

Hypervisor – In computing, a hypervisor, also called virtual machine

manager (VMM), is one of many hardware virtualisation techniques

allowing multiple operating systems, termed guests, to run concurrently

on a host computer. It is so named because it is conceptually one level

higher than a supervisory programme. The hypervisor presents to the

guest operating systems a virtual operating platform and manages the

execution of the guest operating systems. Multiple instances of a variety

of operating systems may share the virtualised hardware resources.

Hypervisors are very commonly installed on server hardware, with the

function of running guest operating systems, that themselves act as

servers.

Infrastructure as a Service (IaaS) - In this most basic cloud service model, cloud providers offer computers – as physical or, more often, as virtual machines – raw (block) storage, firewalls, load balancers and

networks. IaaS providers supply these resources on demand from their

large pools installed in data centres. Local area networks including

IP addresses are part of the offer. For the wide area connectivity, the

Internet can be used or - in carrier clouds - dedicated virtual private

networks can be configured.

To deploy their applications, cloud users then install operating system

images on the machines as well as their application software. In

this model, it is the cloud user who is responsible for patching and

maintaining the operating systems and application software. Cloud

providers typically bill IaaS services on a utility computing basis, that is,

cost will reflect the amount of resources allocated and consumed.

Local Area Network (LAN) - A local area network (LAN) is a computer

network that interconnects computers in a limited area such as a

home, school, computer laboratory, or office buildings. The defining

characteristics of LANs, in contrast to wide area networks (WANs),

include their usually higher data-transfer rates, smaller geographic area

and lack of a need for leased telecommunication lines.

Paravirtualisation – A case where a hardware environment is not

simulated; however, the guest programmes are executed in their own

isolated domains, as if they were running on separate systems. Guest

programmes need to be specifically modified to run in this environment.

Platform as a Service (PaaS) - In the PaaS model, cloud providers

deliver a computing platform and/or solution stack typically including


operating system, programming language execution environment,

database and web server. Application developers can develop and

run their software solutions on a cloud platform without the cost and

complexity of buying and managing the underlying hardware and

software layers. With some PaaS offers, the underlying computation and

storage resources scale automatically to match application demand such

that the cloud user does not have to allocate resources manually.

Private cloud - Private cloud is cloud infrastructure operated solely for a

single organisation, whether managed internally or by a third-party and

hosted internally or externally.

They have attracted criticism because users “still have to buy, build, and

manage them” and thus do not benefit from less hands-on management,

essentially “[lacking] the economic model that makes cloud computing

such an intriguing concept”.

Public cloud - Public cloud applications, storage, and other resources

are made available to the general public by a service provider. These

services are free or offered on a pay-per-use model. Generally, public

cloud service providers like Microsoft and Google own and operate the

infrastructure and offer access only via Internet (direct connectivity is not

offered).

Software as a Service (SaaS) - In this model, cloud providers install and

operate application software in the cloud and cloud users access the

software from cloud clients. The cloud users do not manage the cloud

infrastructure and platform on which the application is running. This

eliminates the need to install and run the application on the cloud user’s

own computers simplifying maintenance and support. What makes a

cloud application different from other applications is its elasticity. This

can be achieved by cloning tasks onto multiple virtual machines at run-

time to meet the changing work demand. Load balancers distribute the

work over the set of virtual machines. This process is transparent to the

cloud user who sees only a single access point. To accommodate a large

number of cloud users, cloud applications can be multitenant, that is, any

machine serves more than one cloud user organisation. It is common to

refer to special types of cloud based application software with a similar

naming convention: desktop as a service, business process as a service,

Test Environment as a Service, communication as a service.

The pricing model for SaaS applications is typically a monthly or yearly flat fee per user.

Virtualisation - In computing, this is the creation of a virtual (rather than

actual) version of something, such as a hardware platform, operating

system, a storage device or network resources. It’s a concept in which

access to a single underlying piece of hardware (like a server) is

coordinated so that multiple guest operating systems can share that

single piece of hardware with no guest operating system being aware

that it is actually sharing anything. In short, virtualisation allows for two

or more virtual computing environments on a single piece of hardware

that may be running different operating systems and decouples users,

operating systems and applications from physical hardware.

Virtualisation types:

Hardware - Hardware virtualisation or platform virtualisation refers

to the creation of a virtual machine that acts like a real computer with

an operating system. Software executed on these virtual machines is

separated from the underlying hardware resources. For example, a

computer that is running Microsoft Windows may host a virtual machine

that looks like a computer with Ubuntu Linux operating system with the

result that Ubuntu-based software can be run on the virtual machine.

In hardware virtualisation, the host machine is the actual machine on

which the virtualisation takes place and the guest machine is the virtual

machine. The words host and guest are used to distinguish the software

that runs on the actual machine from the software that runs on the virtual

machine. The software or firmware that creates a virtual machine on the

host hardware is called a hypervisor or Virtual Machine Monitor.

Different types of hardware virtualisation include:

• Full virtualisation - Almost complete simulation of the actual

hardware to allow software, which typically consists of a guest

operating system, to run unmodified

• Partial virtualisation - Some but not all of the target environment

is simulated. Some guest programmes, therefore, may need

modifications to run in this virtual environment.

• Paravirtualisation - A hardware environment is not simulated;

however, the guest programmes are executed in their own

isolated domains, as if they were running on separate systems.

Guest programmes need to be specifically modified to run in this

environment

1. Desktop - Desktop virtualisation is the concept of separating the

logical desktop from the physical machine. One form of desktop

virtualisation, virtual desktop infrastructure (VDI), can be thought of as a more advanced form of hardware virtualisation: instead of directly


interacting with a host computer via a keyboard, mouse and monitor

connected to it, the user interacts with the host computer over a network

connection (such as a LAN, Wireless LAN or even the Internet) using

another desktop computer or a mobile device. In addition, the host

computer in this scenario becomes a server computer capable of hosting

multiple virtual machines at the same time for multiple users.

Another form of desktop virtualisation, session virtualisation, allows

multiple users to connect and log into a shared but powerful computer

over the network and use it simultaneously. Each is given a desktop

and a personal folder in which they store their files. With Multi-seat

configuration, session virtualisation can be accomplished using a single

PC with multiple monitors, keyboards and mice connected.

Thin clients, which are seen in desktop virtualisation, are simple and/

or cheap computers that are primarily designed to connect to the

network; they may lack significant hard disk storage space, RAM or even

processing power but in this environment, this matters little.

Using desktop virtualisation allows companies to stay more flexible in an ever-changing market. Virtual desktops allow development to be implemented more quickly and expertly, and proper testing can be done without disturbing the end user. Moving the desktop environment to the cloud also allows for fewer single points of failure where a third party is allowed to control the company’s security and infrastructure.

2. Software – Software virtualisation includes the following:

• Operating system-level virtualisation - The hosting of multiple

virtualised environments within a single OS instance

• Application virtualisation and workspace virtualisation - The hosting

of individual applications in an environment separated from the

underlying OS. Application virtualisation is closely associated with

the concept of portable applications.

• Service virtualisation – This involves emulating the behaviour of

dependent (e.g. third-party, evolving, or not implemented) system

components that are needed to exercise an application under test

(AUT) for development or testing purposes. Rather than virtualising

entire components, it virtualises only specific slices of dependent

behaviour critical to the execution of development and testing tasks.

3. Memory - Memory virtualisation means aggregating RAM resources

from networked systems into a single memory pool. This leads to the

concept of Virtual Memory, which gives an application programme the

impression that it has contiguous working memory, isolating it from the

underlying physical memory implementation.

4. Storage - Storage virtualisation is the process of completely abstracting logical storage from physical storage.

5. Data - Data virtualisation is the presentation of data as an abstract

layer, independent of underlying database systems, structures and

storage. Database virtualisation is the decoupling of the database

layer, which lies between the storage and application layers within the

application stack.

6. Network - Network virtualisation is the creation of a virtualised network addressing space within or across network subnets.

Virtual Machine (VM) – A virtual machine is a software implementation

of a machine (i.e. a computer) that executes programmes like a physical

machine. Virtual machines are separated into two major categories,

based on their use and degree of correspondence to any actual machine.

A system virtual machine provides a complete system platform which

supports the execution of a complete operating system (OS). In contrast,

a process virtual machine is designed to run a single programme, which

means that it supports a single process. An essential characteristic of

a virtual machine is that the software running inside is limited to the

resources and abstractions provided by the virtual machine—it cannot

break out of its virtual world.

Virtual Memory – The concept whereby an application programme has

the impression that it has contiguous working memory, isolating it from

the underlying physical memory implementation.

Wide Area Network (WAN) - A WAN is a telecommunication network

that covers a broad area (i.e. any network that links across metropolitan,

regional or national boundaries). Business and government entities

use WANs to relay data among employees, clients, buyers and

suppliers from various geographical locations. In essence this mode of

telecommunication allows a business to effectively carry out its daily

function regardless of location.

Acknowledgement: Most of the definitions in this dictionary are sourced

from Wikipedia


Events - MESA SA “Adapt or Die” 2012 Conference

The MESA SA Executive

Committee announces that

the 2012 MESA Conference is

scheduled for 13th and 14th of

November 2012 at the Indaba

Hotel, Fourways. The format of

the Conference has changed

considerably as a result of user requests.

On the first morning of the conference, MESA SA will run the first

Executive Education Programme session in South Africa, designed by

the MESA Global Education Programme (GEP) members. This session

is a half-day session specifically aimed at the busy executive who

cannot afford to be out of the office for a full day or more. The session

provides an overview of MES/MOM, where it fits in the organisation,

the difference between MES/MOM and ERP systems, where and how

to deploy MES/MOM and the benefits and pitfalls of MES/MOM

deployments.

After lunch on the 13th, the conference proceedings will kick off with

an international speaker followed by user case-studies and concluding

with a networking session for the delegates. As per request from our

vendors and sponsors, MESA SA also plans to have an exhibition hall

available this year where teas and lunches will be served. This will give

the exhibitors more time to interact with users and provide them with more space than in previous years.

On the 14th, the conference will kick off with a motivational speaker

followed by more user case-studies. The conference will conclude at

afternoon tea so that travellers can catch their flights home.

In addition to the conference, MESA SA is also holding a post-

conference training programme on the 15th and 16th of November to

present the two-day Certificate of Awareness (CoA) GEP training. This

training is aimed at operational and project managers responsible for

MES/MOM systems and sales and marketing people from vendors. 32

MES/MOM professionals in South Africa did this training last year and

the training received great reviews. Registration for this training can be

done directly on the MESA International website (www.mesa.org).

Please keep this date open and watch the press and social media for

further details.

The planned Agenda is as follows:

13th November

08:30 – 12:00 MESA Global Executive Education Programme

presented by Jan Snoeij from Logica and MESA

EMEA

12:00 – 13:00 Lunch in Exhibition Hall

13:00 – 15:30 International speaker and user case studies

15:30 – 17:00 Networking session

14th November

08:30 – 12:00 Motivational speaker and user case studies

12:00 – 13:00 Lunch in Exhibition Hall

13:00 – 15:30 User case studies

15:30 – 16:00 Afternoon tea

16:00 Delegates depart

15th November

08:30 – 17:00 Post Conference Training – MESA Global

Education Programme Certificate of Awareness

16th November

08:30 – 15:00 Post Conference Training – MESA Global

Education Programme Certificate of Awareness

MESA SA “Adapt or Die” 2012 call for papers

MESA Southern Africa is sending out a Call for Papers for the

conference scheduled for 13th and 14th November this year. The

theme of the conference is “Adapt or Die”.

Papers and presentations explaining how your company adapted its manufacturing operations to any of the following will be suitable:

• Changing market conditions,

• Changes in the skills make-up of your company,

• Changes in the systems landscape,

• Changes in the market, or

• Changes within your suppliers or raw materials.

Presentations are also welcome that discuss any new or different

way your company is looking at any aspect within the Operations

Management environment (Maintenance, Production, Quality,

Inventory) and has implemented a project to support, for instance:

• People,

• Equipment,

• Materials,

• Products,

• Capacity,

• Production Schedule, or

• Operations Performance.

The MESA SA executive committee will be looking for real-life case

studies that indicate how your company changed or adapted your

way of working in order to adapt to the changing environment or to

increase the cost-effectiveness of your operations.

The best speaker at the conference (as decided by the MESA SA

Executive committee) will get an iPad as a prize.

Please look at the following dates if you have a good story to tell and

send in your submission/contribution.

• Abstract due (120 – 250 words) – 17 August 2012

• Evaluation complete – 31 August 2012

• Presentation Due – 19 October 2012

Please send your abstracts to [email protected] and

[email protected].

MESA SA “Adapt or Die” 2012 call for sponsors and exhibitors

Various exhibition and sponsorship opportunities will be available for

interested vendors and integrators.

We are looking for the following sponsors:

• Platinum sponsor – This sponsor will be allowed to erect banners in

the actual conference/speaker venue as well as get a free Exhibitor

space in the Exhibitor Hall. The Platinum Sponsor will also be

recognised as such on the Programme and printed and electronic

media going out before and on the conference day. This sponsor will

also get special mention during the conference proceedings and will

have the opportunity to add brochures and literature to the delegate

bags.

• International Speaker Sponsorship - This sponsor will be

recognised as the GOLD SPONSOR and will get a free Exhibitor

space in the Exhibitor Hall. The Gold Sponsor will also be recognised

as such on the Programme and printed and electronic media going

out before and on the conference day. This sponsor will also get

special mention during the conference proceedings.

• Speaker gifts – This sponsor will be recognised on the Programme

and printed media

• Networking session – This sponsor will be recognised on the

Programme and printed media

• Best Speaker prize (iPad) – This sponsor will be recognised on the

Programme and printed media

• Name tags and delegate bags – This sponsor will be recognised on

the Programme and printed media

• Exhibitors - The exhibitors will be recognised on the programme

and printed media

If you are interested in any of the above sponsorships and related costs,

please contact [email protected] and [email protected].


Use Protocol Magazine to generate business opportunities

Protocol magazine continues to be well

received on a bi-monthly basis by 6500

industry professionals like you, at every

level of the country’s leading mining

and manufacturing companies. You can

leverage this highly-qualified readership

to be heard.

How do you promote yourself right now?

Some of the things you might be doing

could include inserting opinion pieces,

adverts, editorials and other material

into South Africa’s leading manufacturing

and mining magazines. A good choice

since these are excellent and professional

publications that land on decision-makers’

desks every month.

What Protocol offers is all the advantages

of a professional magazine with a large

circulation but the cherry on the cake is

that all the readers of Protocol have one

thing in common – Wonderware solutions

in the areas of SCADA, MES, EMI, BPM and

enterprise integration – in fact, anything to

do with industrial and corporate production

IT. Everything in Protocol is aimed at helping

end users get more from their Wonderware

investment and trigger them to look at new

possibilities. Nobody wants to reinvent a costly development or investigation wheel, and what you have to offer will go a long way towards preventing that.

Let’s think for a minute about your perfect

promotion vehicle and what it should do for

you:

• It must convey your message in a

professional manner to a large, targeted

and qualified audience

• It must generate incremental business (if

you’re a solution supplier) or recognition

(if you’re an end-user)

• It must generate market awareness of

your capabilities

• It must do all that at a reasonable cost

Protocol magazine meets all these criteria.

If you’re an end-user, your stakeholders are

most interested to know how well you’re

looking after their interests by lowering costs

and improving efficiency. Your colleagues

in the industry are keen to see how you’ve

implemented Wonderware solutions so that

they can evaluate if these will have the same

benefits in their environments.

If you’re a system integrator, end-users

want to know what you’ve done so that they

can consider you as a solution supplier for

their next project.

If you’re a hardware or software vendor,

end-users and system integrators want to

know about how well your offerings work

in the Wonderware environment and how

they can help them do a better and more

cost-effective job.

What medium will work best for you?

Success stories:

They won’t cost you a cent and you don’t have to write them. Simply send an e-mail to your account manager stating that you have the makings of a good story and why you think it is so. You will then be sent a Guideline and a Permission to Publish form to complete and return.

The Guideline is in the form of prompts to which you supply the answers to the best of your ability. This, together with the graphical information required, will be used to write the article which will be sent back to you for editing, approval, etc.

The Permission to Publish form must be signed by the end-user of the installation and system integrator / solution vendor (if applicable) before work on the article is started. This ensures that all the work that goes into compiling the story will not be wasted.

You are free to use the completed success story in any marketing sense you wish and you have hundreds of examples on our web site and in past issues of A2ware and FutureLinx.

Opinion Pieces:

Once again, there’s no cost involved and you don’t have to worry about probably not having majored in English. Decide on a central theme and the idea(s) you want to put across, then jot down all the reinforcing arguments you can think of (as well as references, if applicable). Also include any supporting graphics you feel will better illustrate the point.

Send your draft article to your account manager and we’ll make any necessary edits before returning it to you for approval.

Comments to the editor, Q&As, product and/or service information:

Send your submissions to Denis or your account manager and they (as well as the answers) will be published in the next issue (if interesting and relevant).

Material formats

Text – In Microsoft Word format

Graphics – In PowerPoint, Bitmap or JPEG format (the last two in the highest possible resolution you have)

Advertising:

For all your advertising requirements – including the drafting of effective adverts from scratch – contact Heather Simpkins at The Marketing Suite.

So what are we really saying?

As an end-user or supplier of Invensys Wonderware and associated solutions, you form part of the world’s largest ecosystem of professionals in the fields of industrial automation and the delivery of actionable intelligence from the shop floor to the top floor.

That makes you pretty special. That makes what you have to say significant and important.

In other words, what you have to say matters and we have made it as easy as possible for you to say it!

You will be talking to people with the same reality as you and who have the same problems and concerns.

So, what we’re really saying is, use Protocol magazine to say what you believe needs to be said.

Did you know that if you don’t talk to anyone, they’re not likely to talk to you or send orders?

Services and Support

2012 Training Schedule (Johannesburg)

InTouch Part 1 Fundamentals (includes New Graphics)

• 2 – 6 July

• 30 July – 3 August

• 3 – 7 September

• 8 – 12 October

• 29 October – 2 November

• 26 – 30 November

InTouch Part 2 Advanced (includes New Graphics)

• 4 – 8 June

• 9 – 13 July

• 10 – 14 September

• 5 – 9 November

System Platform – Application Server (includes new graphics)

• 7 – 11 May

• 25 – 29 June

• 23 – 27 July

• 27 – 31 August

• 1 – 5 October

• 19 – 23 November

Historian (includes ActiveFactory and Wonderware Information Server)

• 14 – 18 May

• 18 – 22 June

• 20 – 24 August

• 17 – 21 September

• 15 – 19 October

• 12 – 16 November

• 3 – 7 December

NOTE:

The dates shown apply to training at our offices in Bedfordview, Johannesburg. Regional training is presented on demand. A minimum of six delegates is required to arrange a course.

Regional training venues:

Durban: Khaya Lembali, Morningside.

Cape Town: Durbanville Conference Centre.

Port Elizabeth: Pickering Park Conference Centre, Newton Park.

Did you know that your bottom line is directly proportional to the effectiveness of your workforce?

As the owner of some of the world’s most popular, advanced and versatile industrial automation, information and MES software solutions, you’ll want to get the most from your investment and that includes getting the best training in the business. We routinely train about 600 professionals like you every year not only on how to use our solutions but how to turn our product features into real business benefits.

So, let us suggest a training curriculum best suited to your needs. For all your training requirements, contact Emmi du Preez at [email protected] or call her on 011 607 8286.

Support – Customer FIRST

In a nutshell ...

Comprehensive Services

Customer FIRST is not just technical support; it’s a comprehensive programme to help you manage your systems and protect your investments.

Real Value

Customer FIRST members enjoy the many benefits of a closer collaborative relationship with Invensys:

• Responsive services

• Depth of expertise

• Proactive planning

• Continuous performance monitoring

• Emergency contingency provisioning

• Deep discounts on hardware, software and services

These important elements make the Customer FIRST membership an essential part of your business success.

Maximise asset performance

Downtime costs businesses millions of Rand – Customer FIRST support gives you options to maximise productivity by keeping your operations running smoothly.

Outages, both planned and unplanned, are costly; businesses increasingly need to employ effective pre-emptive strategies to reduce risks and efficient, effective resourcing strategies to ensure that non-productive time is kept to a minimum.

Customer FIRST is not just technical support; it’s a comprehensive programme to help you manage your systems and maximise the performance of your assets.

Downtime hurts – Customer FIRST can help

Even the most reliable equipment requires downtime, perhaps for routine maintenance, preventative maintenance, upgrades or replacement. You need to ensure that downtime is kept to a minimum and that there is minimal production loss as a result.

• Customer FIRST provides you with access to great hardware maintenance, software maintenance and comprehensive lifecycle management services to help you optimise your planned downtime and minimise unplanned downtime events.

Recovery time is critical, and any delays in acquiring either replacement parts or the expertise required to quickly resolve problems can have a significant financial impact on your business.

• Customer FIRST provides you with timely access to critical spare parts, with the ability to manage spares more easily and ensure the reliability of your systems.

What’s more, extended downtime presents other risks to your business, such as failing to meet contractual obligations to your customers and the loss of business that may ensue.

• Customer FIRST also gives you access to Invensys technical resources to help you ensure that your system is back to capacity in as short a time as possible. Our world-class global service organisation is available locally, so the help you need is never far away.

Asset performance is not just about maximising availability, though; you need to ensure that your assets are working to their maximum potential. You also need to minimise the risk to your business of missed schedules, poor quality or regulatory violations, with the business consequences that may follow.

Customer FIRST gives you proactive remote health monitoring services to spot warning signs before problems occur and advanced consulting services to tune your systems to maximum performance.

Customer FIRST – our mission: your success

Customer FIRST membership gives you access to award-winning technical support, hardware and software maintenance services, lifecycle management, remote services, training and consulting services, and much more. The programme provides you with comprehensive services and flexible options to choose exactly the right kind of programme to suit your business needs and help you to maximise asset performance.

Contact information

Support telephone numbers: 0861-WONDER (0861-966337) or 0800-INVENSYS (Toll Free)

E-mail: [email protected] or [email protected]

On the lighter side

The world’s greatest philosophers could learn a thing or two from this lot:

• If life gives you lemons, stick them down

your shirt and make your boobs look

bigger.

• Just because nobody complains doesn’t

mean all parachutes are perfect.

• When the pin is pulled, Mr. Grenade is not

our friend.

• Great minds discuss ideas. Average

minds discuss events. Small minds discuss

people.

• If things get better with age, I’m

approaching magnificent!

• Suppose you were an idiot. And suppose

you were a member of parliament. But I

repeat myself.

• What happens if a big asteroid hits

Earth? Judging from realistic simulations

involving a sledge hammer and a common

laboratory frog, we can assume it will be

pretty bad.

• We don’t see things as they are; we see

things as we are.

• My wife submits and I obey, she always lets

me have her way.

• Diarrhoea is hereditary... it runs in your

jeans.

• I named my dog ‘Herpes’ because he

won’t heel.

• Save the trees, wipe your butt with an owl.

• I went to my doctor and asked for

something for persistent wind. He gave

me a kite.

• The great thing about democracy is

that it gives every voter a chance to do

something stupid.

• Snowmen fall from Heaven unassembled.

• If all else fails, immortality can always be

assured by spectacular error.

• That all men are equal is a proposition

which, at ordinary times, no sane

individual has ever given his assent.

• A marriage is always made up of two

people who are prepared to swear that

only the other one snores.

• Marriage is an institution consisting of a

master, a mistress and two slaves, making

in total, two.

• In some cultures, what I do would be

considered normal.

• One of my pet peeves is women who

don’t put the toilet seat back up when

they’re finished.

• I don’t have a sense of decency. That way,

all my other senses are enhanced.

• Anatidaephobia: The fear that

somewhere, somehow, a duck is watching

you.

• Do not argue with an idiot. He will drag

you down to his level and beat you with

experience.

• The last thing I want to do is hurt you. But

it’s still on the list.

• If I agreed with you, we’d both be wrong.

• We never really grow up; we only learn

how to act in public.

• Knowledge is knowing a tomato is a fruit;

Wisdom is not putting it in a fruit salad.

• The early bird might get the worm, but

the second mouse gets the cheese.

• Evening news is where they begin with

‘Good evening,’ and then proceed to tell

you why it isn’t.

• To steal ideas from one person is

plagiarism. To steal from many is research.

• A bus station is where a bus stops. A train

station is where a train stops. My desk is a

work station.

• How is it one careless match can start a

forest fire, but it takes a whole box to start

a campfire?

• Dolphins are so smart that within a few

weeks of captivity, they can train people

to stand on the very edge of the pool and

throw them fish.

• I thought I wanted a career; turns out I just

wanted paycheques.

• A bank is a place that will lend you money

if you can prove that you don’t need it.

• Whenever I fill out an application, in the

part that says “In an emergency, notify:” I

put “ A DOCTOR.”

• I didn’t say it was your fault, I said I was

blaming you.

• Why does someone believe you when you

say there are four billion stars, but check

when you say the paint is wet?

• Behind every successful man is his

woman. Behind the fall of a successful

man is usually another woman.

• A clear conscience is usually the sign of a

bad memory.

• The voices in my head may not be real,

but they have some good ideas!

• I discovered I scream the same way

whether I’m about to be devoured by a

great white shark or if a piece of seaweed

touches my foot.

• Some cause happiness wherever they go.

Others, whenever they go.

• There’s a fine line between cuddling and

holding someone down so they can’t get

away.

• I used to be indecisive. Now I’m not sure.

• I always take life with a grain of salt... plus

a slice of lemon... and a shot of tequila.

• You’re never too old to learn something

stupid.

• To be sure of hitting the target, shoot first

and call whatever you hit the target.

• Nostalgia isn’t what it used to be.

• A bus is a vehicle that runs twice as fast

when you are after it as when you are in it.

• Change is inevitable, except from a

vending machine.

• Friday, I was in a bookstore and I started

talking to a French looking girl. She was

a bilingual illiterate - she couldn’t read in

two different languages.

• If toast always lands butter-side down,

and cats always land on their feet, what

happens if you strap toast on the back of a

cat and drop it?

• The other day, I was walking my dog

around my building... on the ledge. Some

people are afraid of heights. Not me, I’m

afraid of widths.

• When I have a kid, I want to buy one of

those strollers for twins. Then put the kid

in and run around, looking frantic. When

he gets older, I’d tell him he used to have

a brother, but he didn’t obey.

• Sometimes I think the surest sign that

intelligent life exists elsewhere in the

universe is that none of it has tried to

contact us.

• If women are so bloody perfect at

multi-tasking, how come they can’t have a

headache and sex at the same time?

Protocol Crossword #55

When you’ve completed the crossword, the letters in the coloured

boxes spell out the name of the Invensys foundation for all applications

that now supports virtualisation.

Note: This magazine contains the answers to a number of the clues.

E-mail your answer to: [email protected]. The sender of the

first correct answer received will get a hamper of Invensys Wonderware

goodies.

Clues across:

1. The creation of something other than the real thing (14)

13. Object-Oriented Programming (3)

14. Small willow used for basket work (5)

15. Is it a car? Is it a skirt? (4)

17. Part of a train (7)

20. Not Miss or Mrs (2)

21. Cunning animals (5)

22. Cash register (4)

23. Difficult and tiring (7)

25. Eggs (3)

26. User Requirement Specification (3)

28. A dog will let out one if you hurt him (4)

29. Skeletons are made of them (5)

31. Aluminium symbol (2)

32. Ocean (3)

33. Volcanic rock (2)

35. Real Time (2)

36. Many can be saved with proper safety measures (5)

37. The world-wide web (8)

41. Royal Navy (2)

42. Places where you might pitch a tent (9)

44. It’s all around us (3)

46. Electrical Engineering (2)

47. Prefix attached to everyday words to add a

computer, electronic or online connotation (especially

to 32 down) (5)

49. 7 down greatly improves the quality of this (8)

53. Doctors’ prescriptions are notoriously not this (7)

54. Melodious sound (5)

56. Industrious island (6)

58. Friend (U.K. and Australia) (4)

59. Messy with “un” in front (4)

62. Not old (3)

64. Range of numbers expressing the relative acidity or

alkalinity of a solution (2)

65. Future Value (of an investment) (2)

66. Not me (3)

67. They’re good for you (6)

70. Elsa was born this way (4)

72. Old horse (3)

75. Commercial stomach settler (3)

76. One who is the ultimate beneficiary of Invensys

solutions (4)

77. Measures to ensure that a pre-defined level of

operational performance will be met during a specified,

limited time frame (4,12)

Clues down:

1. SimSci-Esscor has elegant solutions in this area (7,7)

2. Makes liquid cloudy by stirring up sediment (5)

3. The GFIP introduced this highly unpopular way of

getting extra revenue (4)

4. Universal Product Code (3)

5. Truck (5)

6. Exists (2)

7. The technique of representing the real world by a

computer programme (10)

8. Fable guy (5)

9. Transmit Ready (data coms.) (2)

10. Operational Management Office or washing powder

(3)

11. Tricky Dicky (U.S. president) (5)

12. The infrastructure for system recovery after a natural

or human-induced catastrophe (8,8).

16. Not ever (5)

18. It owns and flies passenger aircraft (7)

19. Car body (2)

21. Full Service Banking (3)

24. University officials (5)

27. Flat-topped hill or mountain (Mexico) (4)

30. Small number (3)

32. The tighter these measures, the less likely systems are to be hacked (8)

34. Raw material before metal is extracted (3)

37. Intellectual Property (2)

38. Movie alien (2)

39. National Senior Certificate (3)

40. You might start running one in a bar, for example (3)

45. Id est (in other words) (2)

48. Young in appearance and manner (8)

49. Old Germiston registration (2)

50. American Bar Association (3)

51. More recent (5)

52. Depart (2)

54. Paper sat-nav? (3)

55. Compass point (2)

57. Used in Iraq with shock (3)

60. Most volcanically active body in the solar system (2)

61. What’s brown and sounds like a bell? (4)

63. Girl’s name (4)

65. Imperial unit of measurement (4)

68. Slippery fish (3)

69. Cry uncontrollably (3)

71. Regional System Integrator (3)

73. Expression of surprise (2)

74. Gallium symbol (2)

Answer to Protocol crossword #54:

Question: What is Invensys Operations

Management’s name for its all-inclusive

enterprise control system?

Answer: InFusion

Achieving competitive and efficient process plant operation is an increasingly tough challenge in today’s fast moving business environment.

Measurement Under Control

Selecting the most reliable and longest life measurement instrumentation is more important than ever. Invensys Foxboro offers time proven innovative measurement solutions that make this possible, leading the way with longer life pH, redox and conductivity measurement sensors and instrumentation.

View our full range of measurement tools and instrumentation at:

www.invensys.co.za or call 0800 INVENSYS for more information

Get InTouch with your artistic side that craves performance

An HMI and way more.

Wonderware InTouch®

Creating powerful visualisation and supervisory apps is a snap with the world’s number one HMI software.

iom.invensys.com/InTouch

For more information, call us on 0800 INVENSYS

Avantis Eurotherm Foxboro InFusion SimSci-Esscor Skelta Triconex Wonderware

© Copyright 2012. All rights reserved. Invensys, the Invensys logo, Avantis, Eurotherm, Foxboro, IMServ, InFusion, Skelta, SimSci-Esscor, Triconex and Wonderware are trademarks of Invensys plc, its subsidiaries or affiliates. All other brands and product names may be trademarks of their respective owners.