
in handling false positives, since many downloaded files or browser plugins are innocuous. Making this determination requires human antivirus expertise and systems that are outside of the scope of this article.

Without a team of security experts checking sites and continually adding heuristics and signatures for identifying both suspicious and malicious sites, any system is quickly outdated and loses its usefulness.

The end result

Though this sort of process is time-consuming, the result can be shared between many end users, which justifies the resources expended to determine a website’s maliciousness. In the case of this example used in a secure web filtering system, it means empowering businesses to block malicious websites even when antivirus programs don’t yet have detection for the exploits or malware being delivered. Since malware authors have a tendency to check their creations against antivirus scanners before releasing them, the occurrence of malware that is not detected by antivirus systems is unfortunately all too common.

Luckily, websites hosting malware rotate less frequently than the malware itself, so blocking access to a website that is detected as malicious will protect users even against brand-new malware variants that are rotated into position as older variants start to be detected by antivirus software.

A wide variety of companies, including Trend Micro, Websense and eSoft, have recognised these fundamental truths and are pushing cloud-based secure web filtering as a required layer of protection for modern networks. Best of all, the cloud-based nature of this protection keeps networks protected without risking sensitive data or local network resources.



Figure 5: Bridging the gap between threat release and antivirus signature release.

Securing process control networks

Dominic Storey, technical director, EMEA, Sourcefire UK

It is well known that organisations are becoming increasingly thirsty for data. Having details on the performance of individuals, the company and market factors has long been a necessity in the high tech and finance sectors. Now, manufacturing organisations running large process control networks are finding themselves in the same position.


For example, power companies trade information about their generating capacity on a power exchange commodities market. Hospitals network their critical care monitoring systems to record patient vital statistics for outcome studies. As the author and political advisor Bernard Baruch once said: “If you get all the facts, your judgment can be right; if you don’t get all the facts, it can’t be right”. Good decisions can’t be made without good data.

Most companies run several types of network other than their regular data network. For instance, there is the telephone network that connects each desk. This has traditionally comprised a proprietary private branch exchange (PBX), but in recent years many organisations have upgraded their PBX to a fully digital network. Now, organisations are taking the next step and upgrading their digital phone networks to TCP/IP networks and using voice over IP (VoIP). The benefits include the integration of voice and data into a new range of applications and the use of a common cable infrastructure for both phone and data.

Another common network is the process control network (PCN). This network is used to connect industrial machinery together so that the industrial processes can be automated. For instance, a nuclear power station may have thousands of sensors monitoring everything from core temperature, coolant temperature, steam turbine pressure, turbine rotation speed, fuel rod neutron flux … and so on. All these sensors are linked back to controllers across the dedicated process control network.

Like PBXs, most PCNs have used proprietary means of connection and proprietary protocols to communicate. Also like PBXs, this is changing, with standards-based means of connection and communication replacing the old interconnects. The benefits are similar: a common infrastructure and protocol means common access to data and a lower cost of PCN implementation.

However, making process control data available to corporate users is not simply a matter of adding a router or two. Differences between the architecture and the goals of each network can make security management – and the safety of the process control network – non-trivial. To understand why this is so, let’s look at some characteristics of each network in further detail.

Process Control Networks & SCADAAs mentioned, PCNs, are specialised communications networks that interface industrial equipment to their control-lers and safety systems. In the past, these devices were connected together using proprietary network technolo-gies and protocols. In the last few years, companies have been ripping out the old RS422 serial-bus based PCNs and replacing them with copper or fibre based TCP/IP networks. Collectively, the new networks are labelled SCADA, which stands for Supervisory Control And Data Acquisition. This moniker perfectly describes their function.

PCN components

Although there are an infinite number of ways to build a manufacturing plant, there is a relatively simple architecture for networking them all together. Let’s look at the world of RTUs, PLCs, HMIs, and historians.

First, you need to acquire data about your process, be it widget counting on a production line, measuring the temperature of a blast furnace or monitoring the heartbeat of a patient during surgery. The job of data collection and reporting is handled by a remote telemetry unit (RTU). As well as reporting on a process, an RTU also has the ability to change a process. Or, to put it another way, you can read from it to acquire data about a process and you can write back to it to make changes to the process.


What does the reading and writing? This is the job of a programmable logic controller (PLC). Most processes need to be kept running at optimum conditions. For example, a blast furnace may need more or less fuel to regulate its temperature, or a patient may need oxygen levels adjusted during surgery. The PLC is the autopilot that performs this regulation.

It operates a closed cycle, known as a proportional–integral–derivative (PID) loop, which means that if the PLC detects any change in a monitored quantity it will effect just enough opposite change to bring the monitored quantity back to where it should be.
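To make the PID idea concrete, here is a minimal sketch in Python of the kind of loop a PLC runs. The gains, setpoint and the crude furnace model are hypothetical values chosen purely for illustration; they are not taken from any real controller or plant.

```python
# Minimal sketch of a PID loop as a PLC might run it (illustrative only;
# gains, setpoint and the toy process model are hypothetical values).

class PIDController:
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        """Return a corrective output that pushes the process back to the setpoint."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Example: nudge a furnace temperature back towards 1500 degrees.
pid = PIDController(kp=0.8, ki=0.1, kd=0.05, setpoint=1500.0)
temperature = 1480.0
for _ in range(5):
    fuel_adjustment = pid.update(temperature, dt=1.0)
    temperature += 0.01 * fuel_adjustment   # crude stand-in for the real process
    print(f"fuel adjustment {fuel_adjustment:+.2f}, temperature {temperature:.1f}")
```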

PLCs do a bit more than PID loop control (otherwise they would be called PID loop controllers). For example, they may take in other inputs such as process control overrides, safety interlocks and so on. These would allow the process to be suspended or diverted, allowing humans into the plant for maintenance, for example. PLCs read from and write to the RTUs using simple protocols, such as Modbus.

So how do people get involved? The answer is via a human-machine interface, or HMI. This normally boils down to a PC somewhere, or perhaps a Unix terminal. The HMI is the machine that presents the status of the process to the operator. HMI displays may not be traditional monitors at all. They could be banks of lamps on an electricity control room display board.

A component related to the HMI is the historian – this is the term given to the system that formats and records process data. This historical record is often a regulatory requirement, but may also be used for trend analysis and further system tuning.

Most process control systems have another process loop running in parallel, with data gathered and analysed by entirely different systems. This is the critical control system (CSS). The idea is that if the PLC PID loop fails for some reason and the process goes out of control, the CSS is the backup system that can shut it down safely.

Some differences between PCN and corporate networks

One of the biggest illustrative differences between a PCN and a corporate network is in relation to the term ‘a matter of life and death’. In the corporate world, if you are told to do something quickly, it’s sometimes deemed a matter of life and death by over-eager managers, who really should get out more. You are given to understand that the person, department or organisation might be mildly to moderately inconvenienced if you don’t complete your assignment in a timely manner.

In the PCN world, it is quite literally a matter of life and death. If a valve is not shut in time, the plant may explode and people may die. If a new oxygen line is not made available to a patient during heart surgery, then they will likely die. Failure is final in a PCN.

PCNs tend to be designed and built by engineers, not by IT professionals. Engineers tend to have specific goals, such as to keep the process running efficiently and safely, share data with others to ensure this happens, and trust other engineers to do their jobs.

This is where things get interesting from a security perspective. The core protocols of SCADA systems, designed and built as they are by engineers with open mentalities, have no security whatsoever. There is no read- or write-level security in Modbus – any device can issue a ‘read coils’ or ‘write coils’ function call to measure and change a process. This means that a rogue PLC could do enormous damage to a process. While the PCN was a separate network this was not an issue, but that is no longer the case.
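The absence of authentication is easy to see at the wire level. The rough sketch below hand-builds Modbus/TCP request frames (MBAP header plus PDU) using nothing more than Python’s standard socket and struct modules; the target address, port and unit ID are hypothetical placeholders. The point is simply that there is no login step standing between a network host and the device.

```python
# Sketch of how little stands between a network host and a Modbus/TCP device:
# the protocol has no authentication, so any machine that can reach port 502
# can issue read or write requests. Target address and unit ID are hypothetical.

import socket
import struct

def modbus_frame(transaction_id, unit_id, pdu):
    """Prepend the MBAP header (transaction, protocol=0, length, unit) to a PDU."""
    return struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id) + pdu

def read_coils(start, count):
    """Function code 0x01: read 'count' coils starting at 'start'."""
    return struct.pack(">BHH", 0x01, start, count)

def write_single_coil(address, on):
    """Function code 0x05: force one coil on (0xFF00) or off (0x0000)."""
    return struct.pack(">BHH", 0x05, address, 0xFF00 if on else 0x0000)

if __name__ == "__main__":
    # Anyone on the network segment can do this -- there is no login step.
    with socket.create_connection(("192.0.2.10", 502), timeout=5) as s:
        s.sendall(modbus_frame(1, unit_id=1, pdu=read_coils(0, 8)))
        print("read coils response:", s.recv(256).hex())
```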

When networks collide

As PCNs and corporate networks have become one and the same through the power of networking, organisations are now faced with a new problem: how to protect the vulnerable PCN from hackers, crackers and all the other nefarious goings-on that happen in the corporate world. Most importantly, the corporate network is connected to the internet. How will this be secured? It turns out that the standard arsenal of IT security tools often creates more problems than it solves, due to some unique characteristics of the SCADA environment.

Process control equipment is unlike standard IT kit. It is not renewed every three years. It does not have to double in power every 18 months. It is often very modest in terms of processor power, and is often running on old equipment. It is not uncommon to find Windows NT Server, Windows 3.11, DOS boxes, VAXes, DEC PDP-11s and other equipment that the IT department would class as antiquated running a current manufacturing process. The philosophy ‘if it ain’t broke, don’t fix it’ is central to the PCN.

Now the networks are joined, and the first thing the IT department wants to do is to scan the PCN to determine the networked assets. Usually they are stopped by irate engineers, which is just as well, because many of these older operating systems and applications do not react well to scans. Many of these devices would destabilise or crash, which could be disastrous.


So, instead, perhaps the IT department embarks on a long and painstaking physical audit. What they find is typically not encouraging. Many of the machines have not been patched for years. This may be due to laziness on the engineering side, a result of lack of downtime, or a limitation of the PCN software. When the IT department contacts the SCADA system vendors, it often finds that they are not forthcoming with security patches. Often, those vendors have limited appreciation of security as a discipline overall.

Most IPS systems are useless too. Most IPS vendors do not have specific SCADA rules and, if they do, the rules cannot be inspected or modified to take into account local PCN environmental conditions. If an organisation has rolled its own process control using its own client-server software, then all bets are off regarding most IPS systems: they don’t lend themselves to adding end-user rule sets. Running inline is also a no-no, as false positives could lead to too many ‘life or death’ incidents.

Where is the threat, anyway?

Threats to corporate networks are largely understood, and PCN threats overlap substantially. Some key threats for PCNs are:

• … manufacturing/industrial process.
• … (gas, power, water), external attackers in the form of terrorism.
• … industries, external attacks from animal rights terrorists.
• … cause downtime.

Let’s look at some real-life examples:

Russia, 2000: Hackers cracked the systems of Gazprom, the Russian state gas company, gaining access to the gas-flow switchboard.

Queensland, Australia, 2001: Insider hacks into sewage treatment plant. A former employee of the software development team repeatedly hacked into the SCADA system that controlled the plant, releasing over 250,000 gallons of raw sewage into nearby rivers and parks.

Oak Harbor, Ohio, 2003: The Slammer worm penetrated the Davis-Besse nuclear power plant, disabling a safety monitoring system for five hours. Slammer gained access to the plant via an unsecured contractor network.

St. Louis, Missouri, 2005: Equipment malfunction at water storage dam. The gauges at the Sauk Water storage dam read differently than the gauges at the dam’s remote monitoring station, causing a catastrophic failure, which released one billion gallons of water.

Harrisburg, Pennsylvania, 2006: Intruder planted malicious software in a water treatment system. A foreign hacker penetrated the security of a water filtering plant through the internet. The intruder planted malicious software that was capable of affecting the plant’s water treatment operations.

Willows, CA, 2007: Hacker sabotaged a water canal SCADA system. An intruder installed unauthorised software and damaged the computer used to divert water from the Sacramento River.

Lodz, Poland, 2008: Teenage boy hacked into the track control system of the city tram system, derailing four vehicles. He had adapted a television remote control so it could change track switches.

How to secure process control networks

So what is an IT security architect to do? How can the PCN be secured if it can’t even be monitored? Is there any approach that will work in the PCN? Will it be compatible with corporate security approaches? Fortunately the answer is yes. Here’s how it can be done:


Do not use scanners. Instead, implement a passive asset tracking (PAT) system. PAT systems determine assets by monitoring the nuances of their communications. By fingerprinting these communications and by running protocol analysis on all communications, a map of host operating systems and versions, host services and versions, and client services and versions can be determined. Once these are known, vulnerability profiles can be built for each asset on the network.
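As a toy illustration of the passive approach, the sketch below builds a crude asset map just by listening. It assumes the third-party scapy library and a placeholder interface name, and it is nowhere near a real PAT product (no OS fingerprinting, no protocol decoding), but it shows the principle: no probe packets are ever sent towards the fragile PCN equipment.

```python
# Toy passive asset mapping: observe traffic, never transmit. (Assumes the
# third-party scapy library; the interface name 'eth1' is a placeholder.)

from collections import defaultdict
from scapy.all import sniff, IP, TCP

assets = defaultdict(set)   # host IP -> set of TCP ports it was seen serving

def note_packet(pkt):
    if pkt.haslayer(IP) and pkt.haslayer(TCP):
        flags = int(pkt[TCP].flags)
        if (flags & 0x12) == 0x12:          # SYN+ACK: source is a listening service
            assets[pkt[IP].src].add(pkt[TCP].sport)

# store=False keeps memory flat; packets are only observed, never generated.
sniff(iface="eth1", filter="tcp", prn=note_packet, store=False, timeout=60)

for host, ports in sorted(assets.items()):
    print(f"{host} offers TCP services on {sorted(ports)}")
```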

Use IDS where possible, correlated with PAT network intelligence. Use the IDS, backed up with PAT data, to dynamically change access control to the SCADA network – much more reliable than a simple inline IPS.

Implement a compliance monitoring system. This system, which should also be able to enforce compliance, should be able to detect changes in the configuration of hosts. It must be able to use the PAT as a feed.
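A very small sketch of the change-detection side of such a system is shown below. The baseline file name, the host list and the idea of hashing exported configuration files are assumptions of this illustration rather than features of any particular product; a real system would take its asset list from the PAT feed.

```python
# Sketch of configuration-drift detection behind a compliance monitor:
# keep a known-good hash per host configuration and flag any change.
# File paths and host names below are hypothetical.

import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("config_baseline.json")

def config_hash(path):
    """Hash a host's exported configuration file."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_drift(host_configs):
    baseline = json.loads(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else {}
    for host, path in host_configs.items():
        current = config_hash(path)
        if host not in baseline:
            print(f"{host}: no baseline recorded, storing current configuration")
        elif baseline[host] != current:
            print(f"{host}: CONFIGURATION CHANGED - escalate for review")
        baseline[host] = current
    BASELINE_FILE.write_text(json.dumps(baseline, indent=2))

if __name__ == "__main__":
    # Example call, assuming the exported configuration file exists on disk.
    check_drift({"plc-furnace-01": "exports/plc-furnace-01.cfg"})
```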

Use a rules-based escalation system. This system should be able to correlate IDS, PAT and network behavioural analysis (NBA) data sources. In this way, you can more reliably determine in real time the extent of an intrusion and the compromises made. You can also make use of multiple data sources to enhance your reliability. For example, combining IDS and PAT means you can downgrade intrusion events that don’t apply, such as Microsoft exploits against a DEC PDP-11.
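The sketch below shows that correlation rule in miniature. The inventory entries and the alert are invented examples; a real deployment would take both from live PAT and IDS feeds rather than hard-coded dictionaries.

```python
# Toy correlation rule: downgrade an IDS event when the PAT inventory shows
# the target cannot be affected (e.g. a Windows exploit aimed at a host that
# is not running Windows). Inventory and alert below are made-up examples.

PAT_INVENTORY = {
    "10.1.2.7": {"os": "OpenVMS", "services": {102, 502}},
    "10.1.2.9": {"os": "Windows NT 4.0", "services": {135, 139, 445}},
}

def prioritise(alert):
    asset = PAT_INVENTORY.get(alert["target"])
    if asset is None:
        return "investigate"                 # unknown asset: never ignore it
    required_os = alert.get("requires_os")
    if required_os and required_os.lower() not in asset["os"].lower():
        return "downgrade"                   # exploit cannot apply to this host
    return "escalate"

alert = {"signature": "MS-RPC exploit attempt", "target": "10.1.2.7",
         "requires_os": "Windows"}
print(prioritise(alert))   # -> downgrade: the target is not a Windows host
```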

Use identity monitoring systems. These should be able to decode user names and correlate these with intrusion events. In this way, internal staff hacks can be reliably identified.

Use a security monitoring system with role-based access control (RBAC) and distributed monitoring facilities. This will enable you to give the engineers controlled access to the security data. Most engineers are distrustful of IT, believing that they can ‘do it better than IT’. Providing them with access is a great way to befriend them, and having them on your side is vital if security is to be effective.

Summary

Understanding process control networks, the people and the politics is vital if you are to be successful in building a security solution that truly secures a combined corporate and SCADA network. There are good solutions that can help. It pays to be selective though – ensure that whatever system you choose can provide effective security now and, at a minimum, over the next three years.

Measuring the vulnerability of an object-oriented design

A. Agrawal and R. A. Khan, Babasaheb Bhimrao Ambedkar University, Lucknow, India

Traditional approaches to security focus primarily on antivirus, firewalls, intrusion detection, and so on.1 In spite of these protections, the attacks continue, and data breaches and other losses are escalating. This proves that network security alone cannot protect applications from attacks. What else is missing?

Software security is a critical issue for today’s computing departments, and yet in many cases it has received little or no attention. It has been observed that vulnerabilities left in the software during the development process are responsible for successful attacks.2 Security experts and practitioners have strong opinions when it comes to minimising vulnerabilities in order to reduce the exploitation of software. In addition, it has been realised that reducing vulnerabilities early in the software development life cycle can save considerable effort in later phases.1,3,4 Also, vulnerabilities introduced at this phase manifest themselves throughout the ongoing development life cycle.5 Moreover, recommended changes and modifications, as a result, may easily be adapted at the design phase. But the absence of any efficient tool or mechanism to handle vulnerabilities at this phase has made the process time-consuming, resource-consuming and error prone.1,6

Unfortunately, both qualitative and quantitative methodologies to assess vulnerabilities at an early stage of the software development life cycle are still missing.7,8 Researchers and practitioners have repeatedly advocated the possibility of integrating vulnerability assessment into the design phase.9 Successful efforts at incorporating quality early in the design phase motivated the authors to integrate security well
