
[IEEE 2010 IEEE-IAS/PCA 52nd Cement Industry Technical Conference - Colorado Springs, CO, USA (2010.03.28-2010.04.1)] 2010 IEEE-IAS/PCA 52nd Cement Industry Technical Conference -



EQUIPMENT AVAILABILITY IN CONTROL SYSTEMS

By:

Marvin B Schmitt, Instrument Supervisor, Holcim - Portland Plant

Danielle M Lorimer, Corporate Account Manager, Rockwell Automation

ABSTRACT

Control system coordination in today’s manufacturing facilities is crucial to achieving performance objectives. A cement facility’s daily consumption can exceed 15,000 tonnes of limestone rock and 1,100 tonnes of coal, with an electrical demand of 1 gigawatt-hour. In facilities of this scale, equipment availability becomes a key indicator of overall facility performance. Control systems need to help production and maintenance teams achieve the availability goals. This cascades to higher control system availability and functionality requirements. Designers are utilizing innovative software methods to integrate systems to allow for flexible and reliable service while ensuring proper equipment alarming and interlock protection. Software objects can encapsulate the desired functionality, which provides for better system standardization and overall coordination.

We all want to build successful control systems. The following discussion will highlight several key aspects of PCS installations that design teams need to understand and consider throughout the design of a highly available PCS. The topics are based on our personal experiences while supporting large facility integration projects.

Index Terms— Process Control System (PCS), Net Present Value (NPV), Internal Rate of Return (IRR), Programmable Logic Controller (PLC), Inputs and Outputs (IOs), System Architecture (SA), Topology, Factory Automation, Maintenance, Production Management, Uninterruptible Power Supply (UPS)

INTRODUCTION

As many project managers find themselves in new economic territory, good cost control and manufacturing efficiency are more important than ever.

Over the last 50 years, manufacturing sites have reduced in number but have increased in overall production capacity. Technological process advances have created these giant manufacturing facilities that operate with smaller operations and maintenance staffing. The economies of scale and cheap energy costs have allowed these facilities to transport raw materials to the manufacturing site, and when the manufacturing process is completed, distribution will transfer the finished product long distances for end consumer utilization.

As manufacturing costs rise, distribution costs rise, and product demand drops, industry will have to

continue to improve processes and manufacturing methods. In recent years, and due to the rapid deterioration of economic conditions, we now find ourselves in a crisis where all forms of cost controls are necessary. The large manufacturing facilities have achieved a better manufacturing cost per unit and likely better overall quality; however, we will be forced to provide better process efficiencies to overcome higher energy costs.

All too often, we don’t look for the opportunities until we are forced to analyze and evaluate the

different options. In order to achieve good quality finished products, there must be manufacturing

978-1-4244-6409-8/10/$26.00 ©2010 IEEE

consistency. This applies to all aspects of the manufacturing world including production, maintenance, distribution, and accounting. Control systems have been working to control processes but now more than ever, we need the information from our production and maintenance processes to analyze and determine where improvements can be made.

Since many manufacturing departments are now subject to downsizing in order to meet the current

business demands, following a strategy to size the manufacturing facility based on what the market can support is increasingly important.

How do we wisely design PCSs that allow for ever-changing manufacturing needs? We first must

realize that the modern PCS does much more than simply control the process. A manufacturing facility requires production, maintenance, and management teams to work together to meet the overall goals. Successful PCS installations need to coordinate the needs of all departments to serve the business as a whole. To begin the process you must determine the system vision that you are working to build. Once the vision starts to take shape, then you can explore the many requirements that are necessary to realize such a vision.

Control systems in different industries face wide extremes of automation requirements. A lot of engineers talk about common sense, but you often find that it isn’t all that common. Each industry has gone through its own technological advancements because of dedicated people who have spent many years learning that industry. These are often small improvements that provide good results and keep the processes running. What makes sense in one industry seems ever so foreign to the next. The following discussion of control system fundamentals is intended to highlight some of the PCS possibilities that should be considered for either greenfield projects or the upgrade of an existing control system.

CONTROL SYSTEM FUNDAMENTALS

Fundamental #1 – System Life Cycle

Systems need to have a solid lifecycle path and sustainability plan through all system components and technologies. Process information is used to understand and introduce process improvements and the PCS is responsible to deliver this information. For continuous processes, PCSs must be flexible to allow for deploying changes without stopping the process (e.g. RunTime Edit) and such changes must be allowed throughout the entire PCS.

When considering capital investments, it is important to take into account how long the asset will last and what additional maintenance investments will be required to keep the asset performing. A PCS can be broken into sub-categories such as visualization and controller; these two components have different lifecycles, but in today’s integrated world the lifecycles are closely coupled. We need to consider the complete lifecycle, including system maintenance, parts availability, and future software and hardware upgrades.

First, consider the visualization component and the hardware and software platforms it utilizes. There are three (3) basic parts: the computer, the operating system, and the visualization software. These three pieces are closely integrated, and their lifecycles should be considered together. The server hardware lifecycle is close to eight (8) years, and four (4) years for a desktop machine. Several visualization systems utilize some form of client-server topology where the servers operate on a server-class computer and the client operates on a desktop-class computer. After four (4) years, a PCS will require some hardware replacements to ensure optimal reliability.

Consider the impacts between the client and servers and the various hardware and operating system


interfaces that the system utilizes. Have you ever tried to purchase a computer with the Windows NT™ 4.0 operating system installed in the last few years, or a computer on which you can reliably install Windows NT™ 4.0? The compatibility of the hardware, operating system, and visualization software needs to be carefully considered.

Secondly, consider the controller layer of the PCS. For the purposes of this discussion, let’s define the controller system as the PLC and all of the related modules used to deliver process information to the PLC, excluding the process instrumentation. Controllers are capable of operating for fifteen (15) to twenty-two (22) years before failures provide enough justification to replace the controller system. Controllers are some variation of a PLC and have proven themselves robust in an industrial environment. If a system’s data flow requirements and electrical circuits are properly designed, minimal changes are required to maintain the system. Replacement parts need to be available for at least a fifteen (15) year period.

Process changes coupled with technology changes require us to provide better process information using a variety of interfaces. The controller layer is not only comprised of digital inputs and outputs, but of networks and all interfaced sub-system devices. As industry evolves, more information is utilized to analyze the process and implement process improvements. Although a controller system may be able to operate reliably for fifteen (15) years, the process is changing faster and the controller system needs to be flexible to allow for process changes. The more recent trend is to see more networks and less electrical cabling to satisfy data requirements.

If we explore how a controller system is built, we will find many micro-controllers distributed throughout the architecture. For example, each module in a PLC system utilizes a micro-controller. Micro-controllers provide a narrowly defined service to the architecture, based on the firmware installed in that micro-controller. The firmware is developed and maintained by the controller’s manufacturer. Controller modules dealing with flexible networks experience more firmware changes. This firmware flexibility can extend the lifecycle of the controller system, since you can update firmware rather than replace controller hardware components.

System maintenance plays an important role in the system’s lifecycle. In order to realize the IRR and NPV of a PCS investment, adequate maintenance will be required to see the asset through its complete lifecycle. There are many different maintenance approaches; however, we can all agree that maintenance is expensive and often reduced in tougher economic times. Maintenance requires knowledgeable resources to perform the work. These resources need the right tools to perform their jobs.
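The NPV reasoning above can be sketched numerically. All figures below (capital cost, annual benefit, maintenance cost, discount rate) are hypothetical illustrations, not values from any real project; the point is simply that the maintenance stream must be carried through the full lifecycle before the investment can be judged.

```python
# Illustrative NPV calculation for a PCS investment (all figures hypothetical).
# Annual maintenance is subtracted from each year's production benefit, and
# the resulting cash flows are discounted at the company's cost of capital.

def npv(rate, cash_flows):
    """Net present value of cash_flows, where cash_flows[0] occurs at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

capital_cost = -500_000       # year-0 investment in the PCS
annual_benefit = 120_000      # yearly production/efficiency gain
annual_maintenance = 25_000   # support contracts, spares, training
lifecycle_years = 15          # a typical controller-layer lifecycle

flows = [capital_cost] + [annual_benefit - annual_maintenance] * lifecycle_years
print(round(npv(0.10, flows)))  # NPV at a 10% discount rate
```

Dropping the maintenance line from the model overstates the NPV; re-running with `annual_maintenance = 0` shows how much of the apparent return actually pays for keeping the asset alive.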

For plants that do not have sufficient internal resources, there are manufacturer support agreements that can provide access to manufacturer information, software updates, and technical support. These maintenance agreements are typically an annual contract between the facility and the manufacturer. There is a wealth of information available, but the local facility needs to be knowledgeable enough to administer and perform much of the work. Often, a strategy where the manufacturer provides routine inspections and guidance in coordination with the maintenance staff can show positive results. The idea is to identify and repair problems before they are realized as production losses.

Fundamental #2 – Network Infrastructure

All systems require infrastructure and this infrastructure needs to be flexible to support both the current and future system demands. In order to provide quality information and control, quality instrumentation; correct cabling; correct installation and networking infrastructure; coordinated process system design; and process analysis are all necessary pieces of the equation.

Engineers live for analogies and infrastructure is analogous to a foundation where a solid foundation

will allow for growth and change, but replacing a foundation in midstream is rather painful. A broken


foundation on a new installation is an embarrassment. This is an area where we must start out with a solid plan and it truly is a critical item in the design that is unfortunately often overlooked and plagued with assumptions. The system functionality should not be assumed based on fancy sales propaganda.

In today’s control world, the networks provide the data super-highway throughout a manufacturing

facility. It cannot be stressed enough that these networks must be designed and installed correctly to achieve the desired performance. Safety needs to be a core value of all manufacturing facilities and network stability plays a major role. If you don’t have accurate data that should have triggered an interlock, corrective action, or communication, then you are dealing with a very serious problem. It is disheartening to see network failures due to the fundamental network rules being overlooked.

Network design is serious and critical. Data flow requirements need to be thoroughly understood and the technology correctly applied to satisfy the requirement. Adequate spare capacity is necessary since the need for information will likely increase rather than decrease over time.

While not all data in the control world is time critical, it is necessary to consider the network load that is

created from all data flows to design the networks appropriately. In the IT world, delays in network data equate to delayed information to the user group; in the control world, the same quantity of delay equates to facility outages. As the network technologies grow, there is an increasing amount of common equipment utilized between both the control and IT worlds. In order to design systems correctly, the right mix of IT and control resources need to join forces to complete the design and configuration of these networks.

Networks consist of hardware modules, fiber optic cabling, copper cabling, and wireless equipment. A lot of thought needs to go into the physical installation and you need to consider cable selection, cable routing, cable terminations, cable distances, communication data rate requirements, and bandwidth loading. Network segregation needs to be considered for maintenance flexibility and network troubleshooting.
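A simple way to reason about the bandwidth loading mentioned above is a load budget: each class of node contributes traffic proportional to its packet size, node count, and update rate. The node counts, packet sizes, and update periods below are hypothetical placeholders; a real design would take them from the manufacturer's network sizing guidance.

```python
# Rough network load budget (hypothetical node counts and packet sizes).
# Each node class contributes bits/s = 8 * packet_bytes * nodes / update_period;
# the total is compared against link capacity to check spare bandwidth.

LINK_CAPACITY_BPS = 100_000_000  # 100 Mbit/s link

node_classes = [
    # (description, node count, packet bytes, update period in seconds)
    ("drives",      40,   64, 0.050),
    ("remote IO",   25,  256, 0.020),
    ("HMI polling",  8, 1500, 0.500),
]

def load_bits_per_second(classes):
    return sum(8 * size * count / period for _, count, size, period in classes)

load = load_bits_per_second(node_classes)
utilization = load / LINK_CAPACITY_BPS
print(f"load = {load / 1e6:.2f} Mbit/s, utilization = {utilization:.1%}")
```

Even a crude budget like this makes spare capacity explicit, so that the inevitable growth in data demand can be checked against a number rather than an assumption.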

The network topology is the method used to connect all of the network nodes. There are several strategies, such as ring, star, and redundant topologies. Every manufacturer has a “Design Considerations” guide that discusses the various network topologies and how they can be combined. Consider how the entire network will respond to component failures, and keep the network simple. Why? Network problems are hard to troubleshoot and can be intermittent. Network problems result in uncontrolled plant stops and large process equipment problems. Your network must work, and you need to be able to find problems quickly.
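The failure-response question can be examined on paper before anything is installed. The sketch below models a hypothetical topology as an adjacency map and checks which nodes lose their path to the core switch when a single node fails; the node names and layout are illustrative only.

```python
# Sketch: checking how a network topology responds to a single node failure.
# The topology is a simple adjacency map (hypothetical layout); we remove a
# node and count which others lose their path to the core switch.

from collections import deque

topology = {  # core connects areas A and B; area C hangs off B
    "core": {"A", "B"},
    "A": {"core"},
    "B": {"core", "C"},
    "C": {"B"},
}

def reachable(graph, start, failed):
    """Nodes reachable from start when `failed` is out of service."""
    if start == failed:
        return set()
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr != failed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

for failed in ["A", "B"]:
    lost = set(topology) - {failed} - reachable(topology, "core", failed)
    print(f"if {failed} fails, isolated nodes: {sorted(lost)}")
```

Here the daisy-chained node C is isolated whenever B fails, which is exactly the kind of single point of failure a design review should surface.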

A network media inspection and performance testing protocol needs to be developed to ensure that the

design has been followed and properly commissioned. These protocols can make or break a control system. If you have the best design on paper but never inspect the real world installation, how reasonable is it to expect optimal network performance? Let’s stop with the finger pointing and communicate the requirements. Let’s work together with the impacted parties to realize a successful project.

During the initial installation and commissioning, performance baselines need to be established and referenced throughout the life of the system. Using routine network assessments, problems can be identified and scheduled for planned repair before serious impacts result in lost production. The intention is to create stable and highly available networks, but we are dealing with equipment that will eventually fail, given enough years of service. Maintenance personnel need to be able to quickly identify the failure points and to implement corrective actions with minimal effect to production.
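A minimal form of the baseline comparison described above can be sketched as follows. The segment names, latency figures, and the "flag if latency doubles" threshold are all hypothetical; real assessments would pull counters and timings from the network management tooling.

```python
# Sketch: flagging network degradation against a commissioning baseline.
# Thresholds and sample figures are hypothetical.

baseline = {"ring1": 0.8, "ring2": 1.1}   # latency at commissioning, ms
current  = {"ring1": 0.9, "ring2": 2.6}   # latest routine assessment, ms
DEGRADED_FACTOR = 2.0                     # flag a segment if latency doubles

def degraded_segments(baseline, current, factor=DEGRADED_FACTOR):
    """Segments whose current latency exceeds factor * baseline latency."""
    return [seg for seg, ms in current.items() if ms > factor * baseline[seg]]

print(degraded_segments(baseline, current))
```

The value of the exercise is not the arithmetic but the discipline: without a recorded baseline there is nothing to compare the routine assessment against, and slow degradation goes unnoticed until it becomes an outage.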

Although networks allow for simplified connectivity from a cabling perspective, oftentimes each network node’s connectivity configuration is very specific. The control topology must consider the connectivity requirements and the impacts during the replacement of a failed network node, such as a frequency drive. For example, if the controller must be placed in program mode to replace a frequency drive, then that entire controller must be taken offline to complete a simple maintenance replacement. This may seem


trivial, but in today’s integrated world, the systems and network configurations are very specific and can be restrictive, allowing only part-for-part replacement right down to the most granular details. These details can have a very serious impact on the required spare parts inventory that you will hopefully never use. With a 20% cost of capital, your spare parts need to be carefully considered; spares that can be utilized in many locations in a facility help reduce the money frozen in the spare inventory coffers.
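The 20% cost-of-capital figure translates directly into an annual carrying cost on the spares shelf. The part names and prices below are hypothetical; the sketch only shows why a spare usable in many locations beats several single-purpose spares.

```python
# Illustrative spares carrying cost at a 20% cost of capital (all part names
# and prices are hypothetical). One shared spare that fits several locations
# replaces multiple dedicated spares, shrinking the frozen inventory value.

COST_OF_CAPITAL = 0.20

dedicated_spares = {"drive_A": 12_000, "drive_B": 12_500, "drive_C": 11_800}
shared_spare = {"universal_drive": 13_000}  # covers all three locations

def annual_carrying_cost(inventory, rate=COST_OF_CAPITAL):
    """Yearly cost of capital tied up in the spares inventory."""
    return rate * sum(inventory.values())

print(annual_carrying_cost(dedicated_spares))  # part-for-part spares
print(annual_carrying_cost(shared_spare))      # one shared spare
```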

Fundamental #3 – Security

Security breaches can result in loss of data, and attacks can have safety and environmental impacts. Industrial network security is critical for cement facilities, since connecting the control network to the enterprise LAN brings all of the security risks of the Internet to the control system. Key network security considerations include the control of data flow between control and enterprise networks via a firewall; the configuration of user accounts and access based on role; the provision of secure connectivity for remote access to automation devices; and the detection and handling of malicious traffic, among others. As a first step, you must determine your security priorities as they relate to availability, confidentiality, and integrity. For cement manufacturers, availability is usually the top priority, followed by integrity of data, and then confidentiality. Security objectives can be met by combining multiple techniques, ranging from remote access via trusted connections and external vendors’ access rights from within and outside the plant, to protection of stored data (historian) and hardware infrastructure (switches, routers, etc.).
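The role-based access idea can be sketched as a simple permission table. The role names and permissions below are hypothetical; commercial visualization packages provide their own equivalents of this mechanism.

```python
# Sketch of role-based access checks for PCS actions. Role names and
# permission sets are hypothetical examples, not any product's defaults.

ROLE_PERMISSIONS = {
    "viewer":   {"read"},
    "operator": {"read", "setpoint"},
    "engineer": {"read", "setpoint", "tune", "edit_config"},
}

def allowed(role, action):
    """True if the role's permission set includes the requested action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert allowed("operator", "setpoint")          # operators may change setpoints
assert not allowed("operator", "edit_config")   # only engineers may edit
assert not allowed("unknown_role", "read")      # unknown roles get nothing
```

Defaulting unknown roles to the empty permission set reflects the availability-first priority above: an unrecognized account can watch nothing and change nothing, rather than inheriting broad access by accident.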

It is important to recognize that threats to your network can come from external sources (malware, unauthorized access) as well as internal ones (unknowingly spreading a virus, user errors, disgruntled employees).

Simple techniques such as change management, approval processes, and logs can also prevent costly and unsafe incidents. For example, a plant recently experienced a major component failure in the kiln area. The investigation showed that a well-intentioned programmer had removed a process interlock while troubleshooting the system, and subsequently forgot to re-activate it, resulting in major equipment damage. After restoring system function, the management team provided additional technical training and reviewed and modified user access levels.

Example: change management log (columns: Date, HMI Station, Screen, PLC, Task, Routine, Rung, Description, Programmer). A sample entry: 13/08/2009, screen all42_422_OV1, “422-RFB, RFD, RFF, RFH, (RFJ).S1 dropped out twice with speed detector failure,” programmer Rene.

Visualization applications can utilize security schemes to provide different roles to either allow or deny

the ability to change configuration, tuning parameters, set points, and program selections. Security schemes can also establish a protocol for administering system changes, protecting the equipment that the PCS controls while providing better read-only visibility of system parameters. This can greatly affect availability of the manufacturing plant. It can also increase the number of minds working to understand and improve your processes.

Security policies and procedures are a key element of achieving a secure manufacturing environment.

Establishing and following a precise set of security policies throughout the network will help to minimize


incidents and maintain the integrity of the industrial environment.

Fundamental #4 – Control the Process

Designs need to consider safety, environmental, process, production, maintenance, and many other requirements.

Through both good and bad experiences, we learn more about design considerations. No one has a crystal ball and we are only able to make decisions based on what we know at the time. As designers, our role is to learn how to improve and strive for excellence. Designing a large system is a challenge, but if we acquire the right information, we can make good design decisions.

You must go and see similar installations and understand the challenges and obstacles that have been

experienced. With this, you need to understand the similarities and differences. Often, through a design review, a simple model is used to validate the design. You must remember that design reviews need to consider the scalability of the design and where additional stress points will appear as the design is scaled up. A thorough understanding of these points is needed to review designs and to understand full-scale system performance before the system is completely built and deployed.

The controller network topology, as opposed to the process topology, can be defined as the controller IO layout relative to the process equipment. Some people refer to this as the control architecture. The basic idea is to identify the logical segments of both the process equipment and the controller equipment, and to position the control over the equipment in a manner that provides the best solution. This needs to be a design decision made for design reasons. It seems fundamental to have a single controller controlling one complete process; however, installations have been built using 8 main controllers to control 50 processes with 10,000+ IO. Communications from IOs to controllers and from controller to controller need to be correctly designed. This design needs to be coordinated with the network infrastructure previously discussed.

The design of the data flows can impact the fault tolerance of the overall PCS. How do you go about bringing in all the process information to the controller for data processing? A single controller is not able to control the complete facility. A single network is likely not acceptable. What design criteria should we use to ensure that we have adequate spare capacity for future growth? How often is this spare capacity utilized during initial development because the system complexity wasn’t well understood? Design considerations need to go into the controller topology with respect to the process impacts.

First, let’s start by defining the basic topology layers. One strategy is to categorize the layers as follows:

1. Business data collection
2. Visualization systems
3. Main controller system with related IOs and networks
4. Sub-system controllers

The data flow within and between each group needs to be carefully defined. A common mistake is using different definitions for the data passed from one layer to the other. As an example, in a project, an OEM will provide an equipment skid and its related controls. We would describe this as a sub-system. The sub-system needs to receive commands and provide status to the main controller layer. So, we have an interface between the sub-system and the main controller. Now, let’s look at the human element. We have two (2) different groups that need to agree on the definition of the data that is exchanged and how it should be utilized. These groups come from different backgrounds with different approaches, and they must get aligned for a successful interface integration. Experience shows that these sub-system interfaces are a cause of many plant stops because of poor data definitions.

The IO topology design needs to meet the facility’s operations and maintenance activity requirements. The design considerations need to reach beyond the theoretical assumption that the equipment will have 100% availability and will provide accurate data 100% of the time.


Controllers in contemporary systems are much more complicated than their predecessors. In the past, PLCs only performed three (3) basic operations: update IOs, solve the program, and enable diagnostics/communications. There wasn’t much in terms of an interrupt schedule or communication priority in those older PLCs. Today, controllers from various manufacturers address these issues differently; however, controllers still concentrate information from IOs, calculate responses, and serve data to clients on a much larger scale. They also provide the flexibility to connect to a broadening spectrum of equipment; having said that, there is nevertheless a limit to how much a controller can perform. A clear understanding of the controller resources is needed to design the IO system and to optimize the data flow. This becomes much more critical for large systems as we push the limits of the controllers. Design reviews need to scale up the design to realistic controller loads to validate the concepts. This makes certifying systems quite challenging due to the complexity of controller resource utilization. Stressing the controllers during design reviews can be done with simulation if the simulation is part of the design.

Controller utilization is difficult to describe but an analogy is the opportunity cost of buying vs. not

buying a car or truck. The cost of the car is the money you would pay, but it also represents the financial opportunity to do something else with those funds had you not purchased the vehicle. A controller can only do so much: the more load you create in one category, the less is available for another.
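The opportunity-cost analogy suggests tracking controller capacity as an explicit budget. The category names, percentages, and headroom target below are hypothetical; real figures would come from the manufacturer's controller sizing data.

```python
# Sketch of a controller resource budget (percent figures hypothetical).
# As in the opportunity-cost analogy, load committed to one category is
# no longer available to the others.

budget = {
    "io_scanning": 30,        # percent of controller capacity
    "program_execution": 40,
    "hmi_messaging": 15,
}
HEADROOM_TARGET = 10          # percent kept free for future growth

committed = sum(budget.values())
spare = 100 - committed
print(f"committed {committed}%, spare {spare}%")
assert spare >= HEADROOM_TARGET, "budget leaves too little headroom"
```

Treating the controller like a budget makes trade-offs visible early: adding a faster HMI update rate means consciously taking percentage points away from the spare column, not discovering the shortfall at commissioning.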

We need to consider the impact of IO configuration. Take for example a network scanner module that concentrates information from all its network nodes. This scanner module is configured in the processing controller. The data is exchanged between the controller and the network scanner through a second network. If the topology and configuration design did not optimize the data flow all the way through from the first network to the second network, then controller and network resources are being wasted. On small systems, this is seldom a problem. For large systems, this leads to serious problems too often recognized late in the design where the operational impact of re-work can be devastating. Re-mapping an entire network of IO can be rather painful.

Often, there isn’t enough focus on the data quality of the IO data. What does the controller do if the data isn’t valid and how does it detect if it is valid? For some controllers, the user program must be set up to detect IO failures and to provide the desired response. Without this type of user software, the data can be frozen in its last state. In this case, the controller is basing decisions on inaccurate information. Is that tank really 50% full with a desired 75% level? Will the control valve respond? If a response is needed upon loss of communications to IOs, you then have to program it, and this program will consume controller resources.
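The stale-data problem above can be sketched as a per-signal watchdog. The class, timeout value, and fallback behavior here are hypothetical illustrations; real controllers expose module and connection status bits that serve the same purpose, and using them consumes controller resources as noted.

```python
# Sketch: detecting stale IO data with a per-signal watchdog. If a value has
# not been refreshed within its timeout, control logic can fall back to a
# safe response instead of acting on the frozen last state.

import time

class WatchedInput:
    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.value = None
        self.updated_at = None

    def update(self, value, now=None):
        """Record a fresh reading and its arrival time."""
        self.value = value
        self.updated_at = time.monotonic() if now is None else now

    def read(self, now=None):
        """Return (value, valid); valid is False once the data goes stale."""
        now = time.monotonic() if now is None else now
        if self.updated_at is None or now - self.updated_at > self.timeout_s:
            return self.value, False
        return self.value, True

level = WatchedInput(timeout_s=2.0)
level.update(50.0, now=0.0)
print(level.read(now=1.0))  # fresh: (50.0, True)
print(level.read(now=5.0))  # stale: (50.0, False) -> drive to safe state
```

Without a check like this, the "tank at 50%" reading stays believable forever after the IO link fails, which is exactly the frozen-data hazard described above.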

A PCS topology design should consider the following: 1. Business systems to controller communications

a. Define the impact of data losses b. Understand the impact to the controller resources

2. Visualization to controller communications a. Define the impact of visualization losses that can be caused by maintenance activity, power

losses, & system performance b. Understand redundancy and related impact c. Controller resources required to complete the data flow d. Server resources required to complete the data flow e. Carefully design data structures with focus on communication impacts. Large controller

memory can lead to inefficient data packets. 3. Controller to IO communications

a. Define IO update rates carefully and understand impacts to network performance b. Define redundancy carefully and understand where there is and is not redundancy in the

installation

Page 8: [IEEE 2010 IEEE-IAS/PCA 52nd Cement Industry Technical Conference - Colorado Springs, CO, USA (2010.03.28-2010.04.1)] 2010 IEEE-IAS/PCA 52nd Cement Industry Technical Conference -

   c. Understand the impact of power outages, and the recovery between the IO network and the controllers, to help define where UPS systems may be best utilized
   d. Module selection and failure response

4. Controller to sub-system communications
   a. Data definitions, data definitions, data definitions
   b. Define whether it really needs to be a sub-system, or evaluate possible integration into the main controller layer
   c. From an owner's perspective, sub-systems are developed by others. Define whether this design meets the owner's expectations. Often, sub-systems utilize a different controller than the main controller layer, and connectivity to legacy equipment isn't the manufacturer's main focus. Furthermore, the supplier's focus surely isn't toward interfacing to a competitor's equipment.

5. Manufacturing process systems versus controller control ranges
   a. Define the process and where logical control breaks exist
   b. Define the range of control for each controller
   c. Consider controller-to-controller communications
   d. Consider maintenance activities for each controller

6. Just because you can doesn't make it a good idea. Consider the following points:
   a. Scalability
   b. Network impacts
   c. Server resource utilization
   d. Controller resource utilization
   e. Controller communication priorities
   f. Fault tolerance
   g. How to test and simulate to validate

Among other things, it is important to understand the anatomy of your controller (for example: the controller CPU executes application code and performs message processing; the backplane CPU communicates with the IOs and sends and receives data from the backplane, with IO information sent and received asynchronously to program execution), as well as its specific memory usage characteristics (for example, additional memory is used at run time to buffer incoming messages, to store multiple edit rungs during on-line edits, and to display trends with buffered data).

Another critical element is how your controller operating system assigns task priorities. You cannot implement a system that performs well without understanding the relative priority of different tasks such as power up and down, timers, redundancy/switchover, watchdog, diagnostics, scanning, messaging, IO monitoring, and communications.

You must also understand the pre-defined communication limits for your system as they relate to the maximum number of connections, cached messaging connections, and unconnected transmit and receive buffers. It is also critical to understand the factors that can affect HMI performance, such as the number of tags on scan and the screen navigation schema.

Fundamental #5 – Process Reporting

PCS systems utilize data to make decisions. Garbage in equals garbage out: it is a critical requirement that good data reach the decision makers in your process. Whether the decision maker is a controller or an operator, the data must be available and accurate to support good decisions. The PCS must be capable of quickly deploying change when information analysis shows that a change is necessary.

Facilities are in business to make money. Accounting for production, related costs, and efficiencies feeds into business accounting models. This data is used for many purposes geared toward keeping the business healthy. As such, PCS designers should understand the business data requirements. There is often an interface between the PCS and the business reporting software. Regardless of the data values or data users, the designer needs to be aware of the data flow. This data can be provided without a significant impact on the networks or controllers, but there is an impact, and it should be properly designed. The tough part is getting the right resources talking early enough to identify the needs and to properly define the requirements and solutions.

Many large manufacturing facilities operate on a 24/7-365 basis by utilizing shift labor. There are usually between four (4) and six (6) operation teams that rotate their schedules to provide the labor required by the manufacturing process. After working eight (8) or twelve (12) hour shifts, human fatigue can affect the performance of the overall process. We are all human and, without exception, suffer from the effects of fatigue. PCSs can provide valuable information to assist with post-incident evaluations. Process failures are not limited to the control system; they reach the business environment as a whole. Shift handover meetings and operation logs often don't describe the complete situation, how it can be analyzed, or what type of corrective actions should be implemented. Information needs to be readily available to support the required post-incident analysis and to help streamline the improvement efforts.

Data flows to our visualization and business systems are key. We speak in hindsight about the could-haves and would-haves, but the unfortunate fact remains that we often didn't. PCSs need to provide data in a format that empowers our operators, technicians, and managers to make the correct decisions in real time. It may be idealistic to believe that a PCS could make decisions throughout the entire business, but it is capable of providing accurate information to help reach a good decision.

Manufacturing environments can be hectic, and to think that the process or equipment will always behave within a narrow set of guidelines is not realistic. This is a good thing, because we all have jobs to address the problems. Problems are part of being in business.

A production operator needs real-time data to control the plant. It is obvious that the operator needs the current status of the plant accessible to them. The PCS needs to provide some structure to the chaos that appears when process and equipment problems are encountered.

Process data trends are a lifeline that PCSs must provide. The tools in a process historian must provide the ability to present a time-based chart showing many process values at the same time. This is crucial for determining cause-and-effect relationships. Trends can show what is currently happening as well as what happened minutes, days, or months ago. These systems need to be easy to access and quick to configure, so that a trend can be used to explore cause-and-effect theories. A trend can either validate or disprove these theories and steer the direction of the facility.

Real-time alarming with first-out annunciation can help provide the operator with the root cause of what the PCS is seeing. If the alarming is clear and accurate, it will focus the operations and maintenance efforts to quickly solve the problem.
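The essence of first-out annunciation is latching whichever alarm in an interlock group tripped first. A minimal sketch (the alarm names are invented, and a real PCS would implement this in controller logic):

```python
class FirstOutGroup:
    """Latches the first alarm to trip in an interlock group until reset."""

    def __init__(self, alarm_names):
        self.states = {name: False for name in alarm_names}
        self.first_out = None

    def set_alarm(self, name, active):
        self.states[name] = active
        if active and self.first_out is None:
            self.first_out = name   # latch the root cause

    def reset(self):
        """Operator acknowledge, allowed once all alarms have cleared."""
        if not any(self.states.values()):
            self.first_out = None

group = FirstOutGroup(["high_vibration", "bearing_temp", "motor_overload"])
group.set_alarm("bearing_temp", True)     # tripped first: the root cause
group.set_alarm("motor_overload", True)   # consequence of the trip
print(group.first_out)                    # → bearing_temp
```

Without the latch, the operator sees only a flood of consequential alarms and has to guess which one started the cascade.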

A simple database that captures alarms and provides an interface to view historical alarms is helpful for analyzing the information flow from the PCS to the operator. Has anyone ever arrived at a facility in the morning to learn of a problem during the night? "The plant tripped, and it was caused by this condition!" Let the chaos begin! We are out of the chutes and everyone is racing to solve the reported problem. But wait: what really happened? If you have historical trending, historical alarms, and good quality operation reports, you now have analysis tools that can be used to determine a closer version of reality. We are all human, and a classic saying is to "believe half of what you see and none of what you hear," because we are all missing something. Our perception is our reality, so let's get our perception close to reality.

Data analysis of historical alarms can also be used to drive maintenance activities for frequently occurring alarms. This information can be used to improve equipment behavior, identify equipment problems, or improve alarm settings. Quality alarms will help drive operations and maintenance activities. Reporting and analysis tools can be helpful to identify and show the need for improvements. By establishing a performance benchmark, you have information to help identify the root cause of problems and to change your course of action for improvement.
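Ranking historical alarms by frequency is a small exercise once the records are exported. A sketch with invented alarm records:

```python
from collections import Counter

# Hypothetical export of alarm records from the alarm database
alarm_log = [
    ("2010-03-01 02:14", "baghouse_fan_vibration"),
    ("2010-03-01 02:40", "baghouse_fan_vibration"),
    ("2010-03-02 11:03", "kiln_feed_low"),
    ("2010-03-03 04:55", "baghouse_fan_vibration"),
    ("2010-03-04 09:12", "cooler_grate_temp_high"),
]

# Rank alarms by frequency to target maintenance and alarm rationalization
counts = Counter(tag for _, tag in alarm_log)
for tag, n in counts.most_common(3):
    print(f"{tag}: {n}")
```

The top of such a list is a natural starting point for maintenance work orders or for revisiting alarm settings that chatter.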

Fundamental #6 – System Maintainability

You are looking to install a new or replacement PCS. After the initial design is completed and all the commissioning and startup work is complete, you will not have to touch the system for the next 10 years, right? This idea is extremely optimistic. As we strive to improve efficiency, throughput, and cost reduction, the processes will change. While we may not be able to predict how they will change, we should understand that they will. You will add process measurement points, learn more about the process, install additional equipment, and add more load to the PCS.

Maintenance tactics include run-to-failure, preventative maintenance, and condition-based monitoring. Since electronic equipment has a random failure mode, PMs are typically not effective for process control systems, with the exception of environmental controls (a dust-free, climate-controlled environment for the servers) and power protection (UPS and others). Advancements in distributed intelligence now take equipment reliability to a new level by leveraging the existing PCS architecture for condition-based monitoring and intelligent advisories for pumps, compressors, chillers, fans, blowers, and other equipment. Equipment monitoring IOs include vibration, temperature, low frequency, spike energy, and other data that is acquired automatically and displayed in the control room. Such systems store frequency bands for common machine types, translate monitoring data into actionable descriptions of common failure modes, and provide baselining and trending capabilities. To sum up, condition monitoring systems provide advance warning of impending equipment failures, allowing maintenance to proactively schedule remediation work ahead of a costly failure.
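The band-to-failure-mode translation can be sketched as follows; the bands, limits, and readings are invented for illustration, whereas a real monitoring system stores them per machine class:

```python
# Hypothetical vibration bands (Hz) and velocity limits (mm/s) for a fan
BANDS = {
    "imbalance (1x run speed)":    ((28, 32),   4.0),
    "misalignment (2x run speed)": ((58, 62),   3.0),
    "bearing defect":              ((150, 400), 2.0),
}

def advisories(spectrum):
    """Translate band energies into actionable failure-mode warnings.

    `spectrum` maps frequency (Hz) to measured velocity (mm/s).
    """
    warnings = []
    for mode, ((lo, hi), limit) in BANDS.items():
        energy = max((v for f, v in spectrum.items() if lo <= f <= hi), default=0.0)
        if energy > limit:
            warnings.append(f"{mode}: {energy:.1f} mm/s exceeds {limit} mm/s")
    return warnings

reading = {30: 5.2, 60: 1.1, 180: 0.4}   # strong 1x component
print(advisories(reading))               # flags the imbalance band only
```

The value is in the translation step: the operator sees "imbalance" rather than a raw spectrum, which is exactly the actionable description the text describes.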

PCS maintenance teams need to be focused on providing 100% system availability. This is a tall order, and the system design can either allow or prevent the needed flexibility. If the design considers all the system availability requirements, then you have a chance. Just because the plant is down doesn't mean it is acceptable for the PCS to be down, because both production and maintenance utilize the system.

The PCS downtime during major plant outages is very precious to the maintenance team. The PCS design needs to minimize the situations in which the PCS must be down to complete maintenance activities.

Today's controllers are capable of controlling a large number of IOs, which are brought into the controller through some form of network. The design should consider the impacts to the facility of network maintenance. Can we add another IO module without shutting the entire network or controller off? Can we troubleshoot and identify network component problems without shutting the system off? Can run-time edits be introduced at the visualization, controller, and network levels without adverse effect on the others?

There is a lot of discussion about redundancy in system components and software. Any form of redundancy is no better than its weakest component or common component. This is true whether you are looking at power, instrumentation, cabling, IO modules, controllers, or visualization servers. While redundancy schemes are provided to address the more common failures, redundancy builds complexity. Sometimes this complexity causes more plant stops than the failures it is intended to prevent. While redundancy can provide a backup solution to recover from PCS equipment failures, you really need to evaluate the complete cost and benefit of the redundancy schemes.

There is no lab in which to test a large facility's real-world response. Shutting the plant down to introduce process changes is not an option for many processes. The PCS needs to provide the ability to introduce run-time edits without any negative impact on visibility or control. Process control algorithm changes and smaller process improvements can be introduced while the process is running. Knowledgeable resources can carefully switch segments of control software to change the process behaviors. This ability helps introduce process improvements and provides the ability to perform trials.

Maintenance of any system requires information. Plan to organize and maintain an information library. The repository of electronic manuals, system drawings, and performance benchmarks may seem overwhelming, but we need to empower our maintenance teams. Reference materials for the complete system need to be available at 2 AM, when our minds aren't quite at 100% and we need to know where to find a problem. In any form of business, we need to expect some employee turnover. By investing in and organizing the library, you help insure that the system can withstand the exposure that comes when resources change roles.

Fundamental #7 – System Coordination & Integration

System requirement definitions are necessary in order to evaluate the various PCSs available and their feasibility for your facility's applications. Years ago, it was nice to have a system that would simply control the dynamics of a manufacturing process. Today's systems are increasingly complicated and highly integrated. The improved integration can provide better system coordination and information; however, it can also create additional challenges for process improvements where downtime is required to deploy system changes. Technology is changing at a fast pace, but it is very important to refresh our memory on the purpose of a control system: PCSs need to provide process control, proper control coordination, and the ability to improve the process through both control and business information data flows.

In order to develop sound project plans, a clear understanding of the process, the information data flows, and the commissioning availability are all necessary inputs to determine project requirements and prepare project execution strategies. The accuracy of this information can make or break a project after approvals are granted. Simply stated, design errors and poor project planning result in facility availability problems. Facility availability problems result in negative cash flow while you are trying to fix the problem and are not making production. Good project planning and execution can overcome many of these risks, but only if the risks are understood and properly addressed. If they are not, the result can more than double the project cost through a facility outage that is relatively short when compared to the overall project timeline. This type of impact will devastate any project proposal's financial evaluation, and it has to be avoided. This is not intended to scare you away from necessary upgrades; it is intended to increase awareness that PCSs play a very critical role in facility operations and that their design and deployment must be properly planned and executed.

All of this integration doesn't just happen in the blink of an eye. It has to be built, tested, deployed, commissioned, tuned, and maintained. A solid project integration plan needs to be developed that considers the resources required to perform each step as well as the risks involved. The idea is to provide a positive impact at each step in the process. No two projects are identical, but there are some basic pieces that need to be considered:

1. Requirement definition
2. Design and development
3. Simulation and testing
4. Deployment timelines
5. Commissioning
6. Startup
7. Post-startup system tuning

Let's focus a bit more on integration management for a moment. You will surely agree that a solid controls governance program is critical to the success of your project. Part of the program needs to address the broad activities that pertain to integration management. For the purpose of this paper, we define integration management as a focused approach to a time-phased process that enhances communication between all project team members and reduces the inefficiencies that inhibit project goals and deliverables. It is a repeatable, measurable, and auditable process that delivers the right tools to track and validate the quality of the data needed to manage the project. It provides a single point of communication that aligns internal and external project teams and keeps them focused while maintaining accountability for deliverables. Some of the challenges addressed by integration management include the parallel management of multiple external vendors, conformity to standards, the points of integration (system level, interlocking, diagnostics, tracking), documentation control, and compliance audits.

A good integration management program should at least address the following areas:
• Include major controls milestones
• Track all released controls specifications for the project
• Define and track all versions of firmware and software to be used in the project
• Schedule 30/60/90% engineering design reviews
• Generate a "points of integration" map
• Organize design compliance audits for wiring diagrams, HMI, and panels
• Perform network architecture and installation validation
• Monitor installation and operational qualification

Fundamental #8 – Software Methodology

Modular programming techniques enable the delivery of standardized programming structures, conventions, configurations, and strategies. The goals of modular programming are to allow faster and easier application software development; faster and easier testing of the application software; more reliable application software; and improved interoperability with other equipment and systems. The adoption of a modular software design approach can reduce project variability as well as engineering and startup time, while dramatically reducing the number of startup issues.

A key programming technique is to create and re-use instruction blocks (similar to object-oriented programming) that encapsulate commonly used functions or device controls. The benefits of using such blocks are as follows:

Code re-usability

You can use blocks to promote consistency between projects by re-using commonly used control algorithms. If you have an algorithm that will be used multiple times in the same project or in multiple projects, you can incorporate the algorithm inside a block to make it modular and easier to re-use. Although the inputs, outputs, and parameters differ from one instance to another, the logic is identical for each instance.

An easy-to-understand interface

You can place a complicated algorithm inside a block, thereby providing an easy-to-understand interface (by making only essential parameters visible or required) and reducing the time necessary for documentation development.

Simplified code maintenance

During the modularizing phase, we separate the data and the functions that work on that data into units called classes. Classes are intended to protect the data and hide the implementation details of the functions. Protection and hiding are accomplished by not letting users see the data or the implementation of the functions that work on it. We then provide public interfaces that allow others to access and manipulate the data in the manner we see best. A truly well-engineered modularized program has the basic data structures at the bottom of a hierarchy and builds on these, module layer by module layer, until the pinnacle is reached: the main starting point of the program (bottom up instead of top down).
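To make the encapsulation pattern concrete, here is a minimal sketch in Python; a PLC would express the same block in an IEC 61131-3 language, and the device names and interlock logic below are invented:

```python
class MotorBlock:
    """Re-usable motor control block: data and logic encapsulated together.

    Only the public interface (start/stop/running) is exposed; interlock
    evaluation and the internal run state are hidden implementation details.
    """

    def __init__(self, name, interlocks=()):
        self.name = name
        self._interlocks = list(interlocks)  # callables returning True when OK
        self._running = False

    def _permissive(self):
        return all(ok() for ok in self._interlocks)

    def start(self):
        if self._permissive():
            self._running = True
        return self._running

    def stop(self):
        self._running = False

    @property
    def running(self):
        return self._running

# One block definition, many instances with different parameters
feed_belt = MotorBlock("feed_belt")
crusher = MotorBlock("crusher", interlocks=[lambda: feed_belt.running])

crusher.start()          # blocked: the feed belt interlock is not satisfied
feed_belt.start()
crusher.start()          # permissive now satisfied
print(crusher.running)   # → True
```

Each instance shares identical logic while its inputs, outputs, and parameters differ, which is exactly the re-usability benefit described above.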


To quote Grady Booch in his well-known Object-Oriented Analysis and Design with Applications, the task of the software development team is to engineer the illusion of simplicity.

CONCLUSION

PCS systems are increasingly challenging to design, integrate, and commission. The design establishes the foundation of the entire system. Designers need to consider and understand all the data flow requirements to build the system. The project team needs to be knowledgeable and capable of addressing each fundamental area. The integration of PCSs requires the time of knowledgeable resources, and we can never underestimate the value of human capital. In order to do the job right, you have to assemble a great team.

We all want to build PCSs that are able to meet the expectations of the owners. The systems are becoming increasingly complex and provide more data integration to other roles of the business, including production, maintenance, process, administration, and accounting. In order to build successful systems, we must define the requirements and consider the PCS fundamentals:

1. System Lifecycle
2. Network Infrastructure
3. System Security
4. Control the Process
5. Process Reporting
6. System Maintainability
7. System Coordination and Integration
8. Software Methodology