
Computer Fraud & Security Bulletin, May 1990

COST EFFECTIVE COMPUTER SECURITY

DO WE KNOW WHAT WE ARE TALKING ABOUT?

Stan Dormer

It would seem self-evident that, after 20 years of attention to computer security by the computer industry, everyone would understand what is meant by 'Cost-Effective Computer Security'. Sadly, the daily incidents reported as occurring in computer centres - attacks by hackers on systems, simple computer frauds perpetrated by staff, staggering losses due to badly conceived and poorly implemented systems - tell us remorselessly that we barely understand what 'Computer Security' is about, let alone how to make it 'Cost Effective'. To quote Edward Singh, the successful hacker who appeared in a recent BBC Antenna broadcast on the problems of computer viruses:

"The problem is that security officers don't have the faintest idea about computer security . . . they don't know anything about security!"

Whilst Singh was clearly boasting about his ability to enter commercial networks without invitation, my personal experience in the field of computer audit and security makes me feel that his quote is too close to the truth to be funny. So I want to revisit some of the basic principles of security. But, most importantly, I want to look at this subject from a behavioural point of view. I do this because time and time again it is the human element that makes or breaks our security. In exploring some of the dynamics of computer security I'll need to define some ground rules along the way. We'll call them 'Stan's Secret Security Rules' because only people reading CFS will be in the know. We don't want everyone to get too smart, otherwise we could all be out of a job.

Computer Security - What is Computer Security all about?

I believe that computer security is about protecting our organization’s investment in hardware, software, human resources and information.

Against what? Against:

• Attack - Overtly we simply want to defend our investment. Covertly we want to be seen and known to be strong by the whole world! If a juvenile hacker penetrates our all-electronic validated security system by using an unexpected route, it is a little late, and very embarrassing, for our PR officer to have to cry "Unfair! That wasn't in the company's risk scenario."

• Loss - Overtly we want, in the organization's best interest, to minimize or eliminate loss. Covertly we want insiders to see what we, the shrewd super-performers, are doing and how we don't bungle, screw up or waste computer resources through carelessness!

• Pain - Overtly we want to show the outside world that we are in confident control of our own automation! Covertly we live in fear of taking people to court on Christmas Eve through our use of dumb automatic debt recovery systems, of sending out invoices for 0p (with a 5% discount for prompt payment), of delivering 19 red dresses, size 14, against the customer's order for a Nikon camera. If all we can say in these situations is that it was a 'computer fault' then we are telling the world that we are running an uncontrolled business, an insecure business, and we look stupid! Very stupid!

So security is about us, the rest of the management and the organization feeling safe from attack, loss or pain. But remember the human dimensions here; they are important and they give us two clues:

• Clue One. Computer security is fronted by a veneer of overt "properness", the sort of things that we can easily set rules for, e.g. "It is wrong for you to disclose your password." But behind this veneer there are some unspoken needs and wants that we want others to appreciate or even act on. E.g. "Er . . . I wish you hadn't changed those parameters until I'd had a chance to realize what would happen!" "I know that this security device looks roughly made and you get cut when inserting your ID badge, but umm . . . you cannot fail to marvel at the savings I made by buying them by the gross!" The root problem is that of building security systems based on the overt expectations of properness and fair play, and not at the same time satisfying the covert wants and needs of others.

• Clue Two. For a security system to 'work' it must be accepted as a reasonable solution by all the people who come into 'working' contact with it, not just acceptable to security officers, auditors, managers and others who have an immediate vested interest in a baseline of security. We can always design a system that self-destructs, taking the hardware, software, data and all the buildings with it when certain security criteria are not met. This would be very secure but probably unpopular with the staff council and any customers foolish enough to be on our premises at the time.

So let's define the first two of Stan's Secret Security Rules.

RULE 1. Our computer security systems must always satisfy hidden needs and wants as well as obvious criteria.

RULE 2. Our computer security systems must always be accepted as workable by our workforce.

Rule 1 will lead us to a more coherent security strategy because we are meeting needs at several levels. In addition we will be armed with persuasive reasons for adoption, by being able to disclose hidden benefits. Rule 2 will lead us to solutions that are endorsed by our staff and will thus be more robust. If any of our computer security solutions don't meet both these criteria as a matter of course then we will have nothing but the most shallow and perishable form of protection. In summary then, effective computer security will stem from finding the correct control framework. This framework must meet many requirements, some of them inevitably related to human behaviour. We have met the first two of these; now let's dig deeper.

Computer Security - What is Effectiveness About?

In crudest terms risk analysis tells us to lock ourselves away in a room with like-minded colleagues and to forecast the likely perils and threats that could afflict our system. To forecast the likelihood of these traumas. To price the effect of these occurrences. To find countermeasures to combat these nuisances. And to ensure that we spend less on the risk control programme than the potential losses which we might incur without security in place. Good game! Good game! Have you ever tried to play 'Spot-the-Ball'? If you have, you will know that the wretched ball, in the solutions issued the following week, is always in the most unlikely place. It is never in the place that you would suspect judging from the eye lines and positions of the players. In the same way, who would have forecast that:

• An operator could remove a demountable disk pack without the operating system noticing, and remount a new but wrong pack still without the operating system noticing. These actions were followed by the system happily overwriting 50 000 unprocessed customer orders in a shade over 45 seconds.

• A guard concerned about the lax fire precautions in a computer centre would take it upon himself to demonstrate to senior management how poor the fire control procedures were, by actually setting a small fire going in a corridor. And accidentally burning the building down.


• A newly installed fire alarm system that was prone to repeated accidental triggering was fixed permanently by the caretaker of the computer centre, by the simple expedient of taping over the reset button. The fire that occurred on the Tuesday, within a machine room, burnt for nearly 20 minutes before anyone noticed anything amiss. It was the Wednesday before anyone realized that the alarms hadn't sounded.

• The burglars would choose to steal from the computer centre £50 000 worth of newly delivered PCs during the one weekend that the security system was accidentally disabled by the decorators.

Risk analysis will highlight the obvious: the baseline of security, the things and areas that may be at risk and which ought to be dealt with - but it won't often reveal the weird and wonderful routes by which your security will actually be compromised. So your control programme must be good enough to remain robust in the face of unknown threats, via unknown routes, by unknown persons. This leads us to Rule 3.

RULE 3. Effective security depends on an interdependency between controls, so that if one measure fails a fallback control still remains in operation.

Let’s look at this in more detail. Risks themselves are often interdependent. The failure of one control mechanism may invalidate another or produce a consequential effect in an apparently unconnected system. In practice this might result in a ‘Domino effect’ where a small failure triggers a very unwelcome train of events. First some examples.

• During the stormy weather this January in the UK, the loss of power to a computer centre due to a failure in the national grid caused the company to cut over to a standby generator. Due to less than rigorous attention to routine maintenance of the standby generation set, this in turn failed shortly after coming under load. Unfortunately, because of a prolonged systems failure the previous week, the workload waiting to be processed was four times the normal peak. The company had no other contingency arrangements. They were on the brink of catastrophe! By good fortune the company was bailed out by a well-disposed power plant supplier who loaned them a generation set for a fortnight. Having survived this near disaster the company has now relaxed and sees no urgent need to reconsider existing control, contingency and security arrangements. They sincerely believe that this was a freak chain of events unlikely to be repeated - and therefore not worthwhile controlling!

• The accidental depression of the wrong key on a workstation during the start-up phase of a system caused an unfamiliar message to appear on the operator's screen. The message said: "Do you want to re-initialize the system Y/N?". The operator, knowing that the wrong key had been hit, interpreted this message as meaning "Do you want to try again?" and responded "Yes". The system then proceeded to clear down all major files, logs and control tables, readying itself for a system rebuild! It did this without issuing any further warning about what was to take place. This system was the control core for physical security within a large installation and was responsible for validating staff ID cards, door-entry PINs and door release mechanisms. Chaos ensued. Free movement in controlled corridors was barred to all staff, and twenty people found themselves trapped within a machine room with the doors locked. The movement of tapes between machine rooms and library storage ground to a halt. To release themselves from the machine room the operations staff broke a fire alarm break-glass unit. This caused the machine room doors to go onto emergency override and open. This in turn caused autodialling of the local fire brigade, who had three appliances on site within five minutes. In all, over three hours' processing was lost on this major site and the consequential effects lasted for two further days. When the dust had settled, the backups for the security system files on the workstation were retrieved and the system restored. Unfortunately these backups were more than a month old, and when the system was brought back up the configuration and staff accesses had reverted to the state they were in six weeks earlier. Further chaos ensued.

Both of these examples show the ‘Domino effect’ triggered by a singular event that was not contained by a carefully designed set of interdependent controls.

In the first example at least three control mechanisms failed.

• Controls over standby arrangements, including routine preventive maintenance; controls over system design quality and failsoft arrangements, to ensure that an unacceptable backlog of work could not build up; and controls that ensured that the company would learn from previous mistakes - feedback control.

• In the second example again there were several control failures: controls over system design to invalidate the effects of miskeying; control procedures to assist the operator by issuing adequate warning of major system events about to take place (reinitialization); control procedures within the data centre to ensure that doors could be opened within a security framework without triggering further events unnecessarily; and control procedures to ensure that backup was timely and appropriate, in order to be able to restart with the minimum of inconvenience.
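As an aside on the second of those failures - the absence of any adequate warning before a destructive event - here is a minimal sketch, in Python, of the kind of confirmation an operator-facing start-up routine could demand before clearing files and control tables. The wording and function name are invented for illustration and are not taken from the system described above; the point is simply that a single mis-keyed 'Y' should never be enough to trigger a rebuild.

    def confirm_reinitialize(ask=input, tell=print):
        """Demand explicit, unambiguous confirmation before a destructive rebuild."""
        tell("WARNING: re-initialization will ERASE all major files, logs and "
             "control tables, and the system will then need a full rebuild.")
        tell("Type the word REINITIALIZE to proceed, or anything else to abort.")
        return ask("> ").strip() == "REINITIALIZE"

    if __name__ == "__main__":
        if confirm_reinitialize():
            print("Proceeding with system rebuild ...")   # destructive work would start here
        else:
            print("Re-initialization aborted; nothing was touched.")

A prompt of this shape satisfies the 'adequate warning' requirement: the consequence is spelled out before the action, and a slip of the finger cannot be mistaken for consent.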

Both organizations suffered LOSS and PAIN and were insecure. They were lucky that nobody decided to exploit their moments of weakness and launch an ATTACK! Neither organization had effective computer security even though both had spent some time and money on window dressing.

In the final analysis it doesn't matter how little or how much you spend on computer security if, when it comes to the crunch, your countermeasures don't work. Your company will still slide inexorably and ungraciously down the pan, although some will at least do it at less cost than others! So let's now define a further rule, Rule 4.

RULE 4. All your computer security countermeasures must be tested and proven to work, in isolation, in combination, and by real people working in real environments.

Computer Security - What is Cost-Effectiveness About?

Rules 1 to 4 were about improving effectiveness through criteria of acceptability, containment and proof, so now let's look at the costs. People always have a perception of what they are willing to spend to achieve a goal. Thus we are willing to spend more for high-quality sound reproduction if listening to music is an important aspect of our lives. We are willing to spend a certain amount on house insurance to avoid the trauma of our houses being destroyed by fire, even though we believe it will never happen to us, because we understand what it would mean to our lives if we personally suffered a major uninsured loss. In the same way, company management, according to inclination and value judgements, will be willing to spend different amounts on computer security according to their perceptions of its importance.

Risk analysis or any other purely quantitative approach does not take into account entrepreneurial, behavioural, or perceived values that are part of company culture. It is often for this reason that internal auditors find themselves in the situation where they know that a control procedure is important, but they cannot somehow convince the management team that the procedure is equally important to them. This is because of hidden perceptual or cultural factors that operate during the decision making process.

Different subjective value judgements will always be associated with computer security countermeasures that:

• Eliminate a historically recognized attack, loss or pain

• Reduce the likelihood of a historically recognized attack, loss or pain

• Solve a problem that occurs on an almost daily basis and causes repetitive loss or pain

• Reduce the likelihood of a future possible attack, loss or pain

• Eliminate the likelihood of a future possible attack, loss or pain

Proposals for countermeasures aimed at solving problems of a historical nature will be hard to sell to a management that has become accustomed to living with failure and muddling through.

Proposals for countermeasures aimed at solving problems that have never occurred before will be hard to sell, because management won't believe that the risks really could occur or, alternatively, will believe that the staff could cope if they did occur.

Proposals for countermeasures aimed at solving daily irritants often succeed.

As people who will be in a position to sell recommendations for improved computer security to management, you should recognize the value attached to any of your proposals that promise instant relief. These are analgesic countermeasures. This leads us to Rule 5.

RULE 5. Irrespective of any other factor, computer security measures that offer relief today from loss or pain will be perceived as more cost-effective.

Logically, Mr Spock the Vulcan from the television programme Star Trek would tell us that if the predicted annual cost of a countermeasure is less than the predicted potential annual loss due to a risk materializing, then we are in a profitable situation. Unfortunately the possession of a pair of pointed ears doesn't guarantee that we can estimate the likely annual loss with any realism except in the most simple circumstances. This is due to the conspiratorial and interdependent nature of real-world events. Nor does the possession of even a pair of very pointed ears guarantee that our management cultures will be convinced by the validity of our figures. That is to say, deeply convinced enough to spend real money on our recommendations.

So what can we do? This.

Firstly, recognize that fuzzy figures are quickly spotted and that, beyond a certain degree of fuzziness, they will be unacceptable no matter how carefully researched and presented. Secondly, recognize that it will often be necessary to create a safety zone, or comfort factor, between the actual annual cost of security countermeasures and the predicted annual potential loss. Although it is obvious that the bigger this zone the more acceptable any countermeasure recommendation will be, you must still make proper allowances for company cultures. Thus if the computer security measure proposed is a matter of national security then the zone might be negative! Whereas the acceptable level of cost, for a countermeasure to be seen as cost-effective by a sceptical senior manager within a profit-conscious, interest-rate-hit, low-margin High Street retailer, might be 5% or less of the assumed saving. This isn't necessarily logical but it will be more palatable in cultural terms, and it may result in the adoption of some countermeasures as opposed to none. This enables us to now define Rule 6.

RULE 6. Cost-effectiveness of computer security is as much to do with value perception as it is to do with cold logic, and allowance must be made for this in any recommendation.
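By way of illustration only, here is a minimal sketch of the Vulcan arithmetic and the comfort factor just described. Every figure in it (incident likelihood, loss per incident, countermeasure cost and the 5% threshold) is invented for the example; none of them comes from the article.

    def expected_annual_loss(incidents_per_year, loss_per_incident):
        """Crude annual loss expectancy: likelihood multiplied by impact."""
        return incidents_per_year * loss_per_incident

    def looks_cost_effective(annual_cost, assumed_saving, comfort_ratio):
        """A proposal 'sells' only if its cost is a small enough fraction of the
        saving it claims - comfort_ratio is the culturally acceptable fraction."""
        return annual_cost <= comfort_ratio * assumed_saving

    # Invented figures: one serious incident every two years, costing GBP 200 000.
    saving = expected_annual_loss(incidents_per_year=0.5, loss_per_incident=200_000)
    cost = 12_000   # predicted annual cost of the proposed countermeasure

    print(looks_cost_effective(cost, saving, comfort_ratio=1.0))    # pure Spock logic: True
    print(looks_cost_effective(cost, saving, comfort_ratio=0.05))   # sceptical retailer: False

The same countermeasure passes the purely logical test and fails the cultural one, and Rule 6 says it is the second verdict that decides whether any money is actually spent.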

Cost-Effectiveness - Maintaining Value for Money

We are all aware that certain computer security countermeasures, controls and procedures lose effectiveness with the passage of time. For example, there is a negative correlation between the secrecy of a password needed for access to a system and its age. There is a negative correlation between the number and types of users on a system and our ultimate ability to control it. This means that the cost-effectiveness of such security measures also decays with time, which in turn means that we must make appropriate allowances for this in our risk control programme. Firstly, the rate of decay of effectiveness will be more rapid if our countermeasures don't meet the requirements of Rules 1 and 2; our staff here will be the undermining factor. Secondly, the rate of decay of effectiveness will be more rapid if:

• We lose interest in the countermeasure and it becomes a 'backburner' procedure.

• We have no mechanism for periodically evaluating and maintaining its current worth.

This leads us to Rule 7.

RULE 7. The introduction of a new computer security countermeasure must be accompanied by a plan for the periodic review and clear display of its ongoing worth.
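As a toy illustration of why the periodic review demanded by Rule 7 earns its keep, the sketch below models a control whose worth erodes a little each month unless a review restores it. The monthly decay rate and the effect of a review are invented purely for illustration; the article gives no such figures.

    # Toy model only: invented decay rate, invented review effect.
    def worth_over_time(initial_worth, monthly_decay, months, review_every=None):
        """Track a control's residual worth month by month; a scheduled review
        is assumed to restore it to full worth."""
        worth, history = initial_worth, []
        for month in range(1, months + 1):
            worth *= (1.0 - monthly_decay)            # neglect erodes the control
            if review_every and month % review_every == 0:
                worth = initial_worth                 # review restores its value
            history.append(round(worth, 1))
        return history

    print(worth_over_time(100.0, monthly_decay=0.10, months=12))                  # left alone
    print(worth_over_time(100.0, monthly_decay=0.10, months=12, review_every=3))  # reviewed quarterly

Left alone, the unreviewed control is worth less than a third of its starting value within a year; the reviewed one never drops below about four-fifths of it. The numbers are arbitrary, but the shape of the curve is the argument for Rule 7.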

Whilst the routine inspection of aspects of computer security within an organization assists management to recognize where procedures are absent or failing, this is no substitute for a planned programme designed to maintain the value of a countermeasure. What elements should be in the security maintenance plan?

• THE OVERALL PLAN

This applies to all key measures:

- All key computer security countermeasures must be seen to satisfy Rules 1 to 4 at the outset.

- All key computer security countermeasures must figure in a Corporate Computer Security Programme. This must be approved at the highest level within the organization, and must be easily understood by, and published to, all the workforce who have any contact with computer systems.

- No person acting individually should have the authority to discontinue any current computer security countermeasure.

- Penalties for infringement of countermeasures should be established and agreed at the outset. This is necessary to avoid argument over what is reasonable behaviour and what is not.

• THE INDIVIDUAL PLAN

Each critical countermeasure should have a plan defined for it that:

- Sets out how the ongoing success or failure of the control will be recorded. Many organizations install security features but make no provision for monitoring them. Hence they cannot say at any point in time whether they have worked or not! Therefore nothing is known about whether they are matching the assumptions made about cost-effectiveness. (A minimal sketch of such a record follows this list.)

- Spells out how any instances of failure of a key control will be 'post-mortemed', by whom, and how the control will be overhauled in the light of the findings. Again, many organizations experience occasional failures or control exceptions but fail to capitalize on these experiences.

- Defines accountabilities and responsibilities for its daily operation, including the recording of ongoing expenditure in support of the control where this is relevant.

- Defines the routine management-initiated periodic review of the control's ongoing effectiveness.
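Picking up the first point above, here is a minimal sketch of the kind of record it asks for: each exercise of a named control is logged, and the observed failure rate can then be compared with whatever rate was assumed when the countermeasure was costed. The class and field names are invented for illustration.

    from collections import defaultdict

    class ControlRegister:
        """Record each exercise of a named control and report how it is faring."""

        def __init__(self):
            self._outcomes = defaultdict(list)   # control name -> list of True/False outcomes

        def record(self, control, worked):
            self._outcomes[control].append(bool(worked))

        def failure_rate(self, control):
            results = self._outcomes[control]
            if not results:
                return None                      # nothing recorded, so we simply don't know
            return results.count(False) / len(results)

    register = ControlRegister()
    register.record("door-entry PIN check", worked=True)
    register.record("door-entry PIN check", worked=False)
    print(register.failure_rate("door-entry PIN check"))   # 0.5, to set against the assumptions

Even something this small answers the question posed above: at any point in time, has the control actually worked, and is it living up to the assumptions made about its cost-effectiveness?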

Computer Security - Don't Devalue Your Currency!

Many organizations make two fatal mistakes when installing new security measures. Firstly, they implement a countermeasure in an unfit state - for example, an inadequately tested procedure or some piece of sophisticated but unreliable electronic equipment. The immediate impact is a loss of confidence and the immediate start of a decay of cost-effectiveness, because the solution is seen to be worse than the problem. They have violated Rule 4! But remember that the real impact will be at the behavioural level, and thus unseen, because staff will 'cooperate' superficially.

Secondly, many organizations fail to devote any energy to the public relations aspect of the installation of a security countermeasure or a change in working procedures. In particular, more junior management and staff don't know what the changes are, why they are being made, or how they will affect them. They were not party to the scheme. These are not their own procedures. These things are being introduced by management to control something that they are doing 'wrong' or 'badly'. These things will impact the way they work, and management obviously don't care about this. Rule 2 has been violated and whoops! You now have a behavioural problem, and the search by staff for a thousand and one tiny ways of not cooperating with the procedure has begun.

So let’s now define Rule 8.

RULE 8. A pound spent on PR is worth a thousand pounds of electronics.

Organizations that manage change effectively have learnt that it is essential to execute change in a climate that is conducive to that change. In the same way the introduction of any new computer security measure needs the right climate. We need to modify people’s behaviour so that they understand and endorse the change. We need to reveal what we are concerned about to our staff. We need to talk about the solution that will be implemented and how they fit into this solution. We need our staff to accept that the problem belongs to them as well as to the organization. We need to convince staff that their application of the countermeasure is a personal and valued contribution to the whole organization. In short we need a PR plan. This plan should:

• Define how the countermeasure will be acceptance tested and fitted to the organization.

• Define how the workforce will be introduced to the key countermeasure, educated in its use, convinced of its value, and encouraged to report difficulties with, or improvements to, its operation.

• Define how a reinforcement programme of activities will be put in place so that from time to time our workforce are reintroduced to the security values that we hold important.

Computer Security - Cost-Effectiveness Starts with Action!

Of all the rules the last is the most important, because security can only take effect after installation.

RULE 9. Act now; cost-effective security is a matter for now - not tomorrow!

Somewhere along the line, in your efforts to promote better computer security, you will make mistakes. Don't be fainthearted! Our knowledge of people's behaviour and its impact on security measures grows through experimentation, through modification, but mostly from capitalizing on failure as well as building on success!

Just be careful to carry your people with you - and they will work for you, not against you.

Finally remember this:

"An organization would secure nothing if it waited until it could do it so cost-effectively that no one would find fault with what it had done."

This presentation was made by Stan Dormer at the IIA's COMPACS 90 conference.

THREATS TO THE APPLE MAC

VULNERABILITIES OF AN ILLUSION IN SOFTWARE

Howard Oakley

In the last year or two, Macintosh Systems has emerged from the closet of a cult architecture
