System Administration University of Wollongong 2004 - Technical Aspects of Security and Myths
CSCI322/MCS9322 Spring 2004: Systems Administration
Technical Aspects of Security and Myths. What to do!
Daniel F. Saffioti, School of IT and CS, The University of Wollongong
Objectives.
In this lecture we are going to look at:
– Some of the factors administrators should consider when implementing secure systems.
– The dangers of trust and other infrastructure.
– What a good administrator does to ensure secure computing.
More to it than writing policy…
Many people think that security is about writing policy and putting it on the shelf.
Security depends on procedure. A good administrator will always be on the lookout for changes in the industry, technology or organisation.
The following section outlines things you should consider when working on security.
Logging
Logging does not increase the security of a system per se.
It does give you (the SA) a tool to help you increase the security of systems and detect when a compromise has occurred.
The best reason for logging is as follows:
– If you log attempts to break in (definition?) then when you see such activity you can strengthen defenses in the area under attack.
Logging should be done to at least two destinations: the local machine and a relatively secure log host.
If a person gets root on a machine they can delete the log files, but not on the log host (hopefully).
Alternatively, they may attack the log host first in which case the local machine logs may be your only record of events if the attack fails.
On Unix logging is typically performed by syslog.
Syslog allows facilities to be defined. Each facility represents a different kind of log.
The logs can be stored on a local host or forwarded to a remote host.
There are other variants of syslog. One is syslog-ng, which allows for pattern matching and other features.
Syslog defines facilities which identify programs that can log.
Facilities include, auth, kern, mail, cron, lpr and several others.
Syslog and most of the variants consult the file /etc/syslog.conf to configure the daemon.
For each facility we may specify where to put the log data via a path.
We can also discriminate on each facility using severity levels. Examples include info, err, crit, emerg, alert.
Below is an example file.
*.err;kern.*;auth.notice;authpriv,remoteauth,install.none;mail.crit /dev/console
ftp.* /var/log/ftp.log
netinfo.err /var/log/netinfo.log
install.* /var/log/install.log
In the above example we have many facilities, each with different severity levels going to different places. Notice how we can use * as a wildcard.
There are defined facilities in the kernel - a user can not define their own. Messages for facilities which are not defined typically end up under the user facility.
It should be noted that on most Unix machines logs are kept in /var/log or /var/adm.
To write to syslog we either use a library call or send data directly through /dev/log. This varies between systems; some have a named socket in the file system.
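Whichever interface is used, each message carries a priority value that combines the facility and severity. A minimal sketch of that encoding (per the BSD syslog protocol; the facility and severity numbers below follow the standard assignments):

```python
# Sketch of how a syslog priority (PRI) value is formed:
# PRI = facility * 8 + severity, per the BSD syslog protocol.
# The numeric assignments below are the conventional ones.

FACILITIES = {"kern": 0, "user": 1, "mail": 2, "daemon": 3,
              "auth": 4, "lpr": 6, "cron": 9}
SEVERITIES = {"emerg": 0, "alert": 1, "crit": 2, "err": 3,
              "warning": 4, "notice": 5, "info": 6, "debug": 7}

def encode_pri(facility: str, severity: str) -> int:
    """Combine a facility and severity into a syslog PRI value."""
    return FACILITIES[facility] * 8 + SEVERITIES[severity]

def format_message(facility: str, severity: str, tag: str, text: str) -> str:
    """Build a minimal syslog-style payload such as '<19>ftpd: login failed'."""
    return f"<{encode_pri(facility, severity)}>{tag}: {text}"
```

For example, a mail-facility error is priority 2 * 8 + 3 = 19, so the datagram begins with `<19>`.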
Whatever logging level you choose, you should automate the monitoring of logs. A system administrator typically cannot read all of the logs by hand.
Swatch and logsurfer are two freely available utilities for this task.
They can inform you (via pager or mail) when they detect a problem.
They can also perform maintenance e.g. rotation, truncation. Sometimes people forget about logs, which can grow until they fill the filesystem and damage things.
You should certainly log all attempts to connect to your machines via the network.
In more secure environments you may want to monitor all machine activity (if you have the space and can afford the performance loss).
Solaris for example has process auditing. Every time a process is executed it can log details relating to its invocation. This can be very expensive.
Logging too much data, however, is counter productive since it becomes harder to identify the important log entries.
There are things which you should always log. Below is a list;
– su attempts (switch user).
– Network connections (tcpd does this).
– Failed logins.
– Rejected file system mount operations.
– Transactions on services e.g. mail, FTP etc.
Typically we would use the cron job scheduler to check for anomalies in the logs.
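Such a cron-driven check can be very simple: scan the night's log lines for a few suspicious patterns and report the hits. The patterns below are illustrative only, not an exhaustive set:

```python
import re

# A minimal log-scanning sketch of the kind a nightly cron job might run:
# it flags lines matching a few suspicious patterns (failed logins,
# failed su attempts, refused connections). Patterns are illustrative.

SUSPICIOUS = [
    re.compile(r"FAILED LOGIN", re.IGNORECASE),
    re.compile(r"su: .*authentication failure", re.IGNORECASE),
    re.compile(r"refused connect", re.IGNORECASE),
]

def find_anomalies(lines):
    """Return every log line that matches a suspicious pattern."""
    return [ln for ln in lines if any(p.search(ln) for p in SUSPICIOUS)]
```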
Never completely trust your logs.
The UNIX syslog daemon will accept log requests from anyone and, in the network case, from anywhere.
This means people can, potentially, insert bogus log messages into your logs.
Similarly, a hacker can flood a host with log messages to cover illicit activity.
You should be able to detect this, however, by appropriate log filters.
A good (bad?) technique is to flood a log host for a few hours to fill the filesystem where the log files are kept. Then launch an attack.
If you succeed, modify the local log files. The administrator will know that something occurred, but it will be harder to track you down.
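One of the filters alluded to above is a crude flood detector: group entries by the minute in which they arrived and flag any minute whose count is abnormally high. A sketch, with timestamps reduced to (hour, minute) tuples for simplicity:

```python
from collections import Counter

# Crude flood detector: count log entries per minute and flag any
# minute whose message count exceeds a threshold. Timestamps are
# simplified to (hour, minute) tuples for this sketch.

def flooded_minutes(timestamps, threshold=100):
    """Return the minutes whose entry count exceeds the threshold."""
    counts = Counter(timestamps)
    return sorted(m for m, n in counts.items() if n > threshold)
```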
Human Factors.
You can never predict users. Users have the ability to affect security through their behavior. We have seen this with passwords. There are many other examples.
Examples include:
– People storing sensitive information on insecure machines for convenience.
– Users letting their boyfriend or girlfriend use their account.
– Users installing their own modems.
When setting up site security, too many people focus exclusively on the external threat.
They believe firewalls etc are vital, but internal security is not considered.
It is best to assume a hostile user base even when that is not the case.
Cost Factors.
The cost of protecting data has to be justified.
Never forget that your competitors can always try to bribe your staff if they really want the information.
On the other hand, a web site may hold no valuable data, but corporate reputation can be damaged if it is compromised, and that has a real monetary value.
Enforcing Security.
As I mentioned previously, there are themes/ideas we need to consider when drafting policy.
There are also factors which often don’t spring to mind that need to be considered.
When it comes to formulating procedure, though, what do we do? We typically resort to technology to help us.
The following section outlines a number of technologies/ideas which can help us implement good security procedures.
Physical Security.
As a general rule, no system is secure against an intruder with physical access to the machine. This is often forgotten.
An intruder can:
– boot from alternate media.
– remove (or copy) disks or tapes.
– install additional hardware.
Machine rooms need to be secure and the data they house needs to be protected.
This can be very difficult especially in environments where you have contractors.
Where possible you should design your machine room with this in mind.
You should also where possible exploit some of the technologies in the actual machine. For example you may set up firmware passwords to stop people from altering the boot path.
Servers need to be in secure environments.
Auditors may insist on good physical security, as they should.
In extreme cases, it can be better to destroy resources when physical security is compromised (Department of Defense).
Backups.
Backup strategies are an important adjunct to system security since, if a compromise is detected, trusted data will have to be reloaded from backups.
Once a machine is root compromised - that’s it. You can not trust the machine and its data any more.
However, attackers sometimes go after an organisation's backups as well.
Security is about planning and handling these strange events.
Consider the example of an organisation that gets root compromised. The organisation does not detect it for some time, and during that time the villain destroys the backups.
There should be procedures to make sure things work - this is disaster recovery policy.
Integrity Checks.
Ideally, all of the system components not required to be modified would be read only and not modifiable even by root.
This is difficult to achieve.
Integrity checking of important files is one of the best ways to detect intrusion.
The open source utilities tripwire and ace compute file checksums and compare them with previously computed values, thus detecting when files change.
Ideally, checksums should be stored where an intruder can not alter them (write once devices).
Of course, the intruder may replace your version of tripwire with a hacked version – or replace a shared library with a version containing Trojan code – but this type of intrusion is sophisticated and rare.
Integrity checking is also a useful SA tool for detecting policy violations. Examples of this include:
– Installing software without updating software logs.
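The core of a tripwire-style check fits in a few lines: record a baseline of checksums, then report any file whose current checksum differs. A minimal sketch (real tools also track permissions, owners and inodes; this covers content only, and uses SHA-256 rather than whatever a particular tool ships with):

```python
import hashlib

# Tripwire-style integrity check in miniature: build a baseline of
# SHA-256 checksums, then report files whose checksum has changed.

def checksum(path):
    """SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Map each path to its current checksum."""
    return {p: checksum(p) for p in paths}

def changed_files(baseline):
    """Return the paths whose current checksum differs from the baseline."""
    return [p for p, digest in baseline.items() if checksum(p) != digest]
```

As the slides note, the baseline itself should live somewhere an intruder cannot alter it, such as write-once media.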
Encryption
Encryption is a basic tool of system security and should be used a lot more than it is.
It is not used as much as it should be because of its high cost (in CPU) and difficulties of administration (PKI).
Particularly sensitive information should be encrypted, especially for network transmission.
Encryption comes in various flavors (block or stream, symmetric or asymmetric) and lots of algorithms with various key lengths.
– Block/stream ciphers generally relate to how the data is grouped for encryption.
– Symmetric/asymmetric refers to how keys are used. Symmetric ciphers use one shared key for both encryption and decryption, whereas asymmetric ciphers use two (a key pair).
– Public and private key cryptosystems describe how the key pair is used. Typically messages are encrypted with the public part of the key and decrypted with the private part. The public key is known to other people; only one person knows the private key.
– Sometimes we have a mixture of symmetric/asymmetric systems used.
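The one-key property of symmetric ciphers can be seen in a toy example: XOR with a repeating key, where applying the same key a second time recovers the plaintext. This is NOT a secure cipher, only an illustration of shared-key reversibility:

```python
import itertools

# Toy illustration of the symmetric, one-key property: the same key
# both encrypts and decrypts. XOR with a repeating key is NOT secure;
# it only demonstrates reversibility under a shared key.

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte of data with the repeating key; self-inverse."""
    return bytes(b ^ k for b, k in zip(data, itertools.cycle(key)))
```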
Examples of encryption ciphers include:
– DES (56-bit key).
– Blowfish (variable key length, up to 448 bits).
– RSA (commonly 1024-bit or longer keys).
As I mentioned previously, people often do not adopt encryption technologies due to the management problems associated with them. How do you effectively issue keys for encryption ciphers?
This is managed by a PKI (Public Key Infrastructure), which acts as a trusted party.
Many people like PGP (Pretty Good Privacy) for the fact it has a simple infrastructure.
Message Digests.
A message digest is basically a checksum allowing detection of cases where data (a message) has been modified.
It is usually implemented as a one-way hash function which is very difficult to reverse. Examples include SHA-1 (160-bit digest).
It should be noted that a digital signature is a message digest encrypted with the private key of a public key encryption algorithm. The corresponding public key is typically certified by an authority.
The checksum must first be decrypted with the sender's public key. Then it can be compared with the checksum of the message computed at the receiver.
Any person tampering with the message can compute a new checksum, but they have no way to encrypt it since they do not have the original sender’s private key.
Thus the signature ensures the message received is the same as the one sent.
It also ensures that the sender is identifiable.
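The tamper-detection property of the digest itself is easy to demonstrate: any change to the message changes the digest. A sketch using SHA-1, whose 160-bit digest prints as 40 hex digits:

```python
import hashlib

# A message digest in use: any change to the message changes the
# digest, which is how tampering is detected. SHA-1 yields a 160-bit
# digest (40 hex digits). A signature would then encrypt this digest
# with the sender's private key.

def sha1_digest(message: bytes) -> str:
    """Return the SHA-1 digest of a message as a hex string."""
    return hashlib.sha1(message).hexdigest()
```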
Trusted Hosts
The rlogin/rsh protocols (and others) allow for remote users to sign on (or run a process) without specifying a password.
This even applies to the root account.
This is a very handy feature but presents a severe potential security risk.
The file /etc/hosts.equiv allows an SA to indicate which other machines are “trusted” at the user level.
Users from machines listed in this file can sign on as the equivalent user without a password.
Users may even be “mapped” to other users to handle user name mismatches.
It gets worse…
The file $HOME/.rhosts allows a user to indicate that they, or some other user on a remote machine, may sign onto their account from that machine without a password.
The Web of Trust.
Because one machine trusts another, which in turn may trust another, the trust relationships form a web.
An intruder with a compromised account can traverse such a web to remote machines.
The System Administrator on those remote machines may not even be aware of the trust relationship used. Be on the lookout for this.
Here are some examples of the file:
# The following line allows any user
# called dfs to log in to this host
+ dfs

# Allows any user from the named host
# yoshi to access the system as a local
# user
yoshi +
As you would expect, there are “-” directives to restrict access.
Trust may be convenient but if not controlled can be problematic - consider the following scenario.
– Mary says “I will trust machine X, it’s run by Jim and he can be trusted and we have a lot of users in common.”
– Jim says “I will trust machine Y because he has a tape and can do my backups for me.”
– Now Mary’s machine trusts machine Y without her being aware of it.
The problem is even worse since one of Mary’s users can set up an rhosts file to trust some machine where his girlfriend works.
Now if his girlfriend’s account is compromised, they can get onto Mary’s machine and launch an attack.
The Web of Trust has been very successfully used by intruders. Robert Tappan Morris exploited this trust weakness with his hack.
The 1988 Internet worm used the Web of Trust as one way to move between machines.
Once on a machine, it could read /.rhosts and /etc/hosts.equiv.
It assumed that machines that the current machine trusted would also trust the current machine.
This turns out to be a reasonable assumption in practice.
Management of trust relationships (elimination of trust relationships) is a very important part of overall system security for a system administrator.
This applies to all trust relationships such as sign on, file access, authentication and name services.
Trust and IP/DNS Spoofing.
Spoofing refers to the general practice of forging either sources of information or the results of queries.
Because most principles of trust across networks involve knowing the source of the packets, forging packets is an effective way of bypassing system security.
This form of attack can be used where any IP-address-based trust arrangement is used; such as rlogin or NFS.
The problem can be summarized thus:
– “How do we know that who is on the other end of the wire is who they say they are?”
Kerberos
Kerberos was designed to solve some of these problems.
Users get “tickets” which can be passed around between machines so that servers can authenticate requests.
Tickets are just like passwords, except that they can be safely passed around the network and are (at least partially) immune to sniffing.
Tickets may be passed around because they are always encrypted with keys known only to the authenticated parties, and they incorporate the current time.
This prevents playback attacks.
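The timestamp check that defeats playback can be sketched simply: a ticket is rejected if its timestamp falls outside a small clock-skew window around the server's clock, so a captured ticket replayed later is useless. The 300-second window below mirrors Kerberos's usual five-minute default:

```python
# Sketch of the timestamp freshness check behind replay prevention:
# reject any ticket whose timestamp is outside a clock-skew window
# around the server's clock. 300 s mirrors the common Kerberos default.

MAX_SKEW = 300  # seconds

def is_fresh(ticket_time: float, server_time: float, max_skew: int = MAX_SKEW) -> bool:
    """True if the ticket's timestamp is within the allowed skew window."""
    return abs(server_time - ticket_time) <= max_skew
```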
The Human Aspect of Security.
Probably the majority of security compromises occur because of “human factors”.
These include:
– Bad choices of password.
– System Administrator errors e.g. simple typing errors.
– Bugs in code.
– Bad system configurations.
– Bad system default settings.
These sorts of problems are much more common than attacks against encryption keys.
Examples of carelessness include:
– System Administrator runs a script to dump the password and shadow files for some reason, then forgets to chmod them to 600.
– Files left laying around containing encrypted passwords.
– User finds and runs crack.
Other examples include:
– System Administrator sets up a new machine and edits the inetd.conf file to remove insecure services, but happens to overlook rexec.
– Rexecd has a buffer overrun bug which can be exploited to get root access.
– Vendor should take responsibility for this problem as well.
Machines should ship in secure configurations – and almost never do!
What to do as an Administrator.
Ensure physical security.
– This is obviously more important as the level of trust granted a machine increases e.g. NIS+ and DNS servers need to be secure.
Stay on latest OS release.
Stay on latest patch releases.
– A large number of break-ins occur exploiting bugs that have been fixed by new OS releases or patches. Experience indicates that staying on current OS and patch levels is the single most important step in ensuring system security.
After OS install, audit machine security.
– Remove unnecessary services.
– Check permissions on important files/directories.
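The permission check in that audit can be partly automated. A minimal sketch that flags files readable or writable by “other” users (a real audit would also check ownership, setuid bits and directory modes):

```python
import os
import stat

# Minimal permission audit: flag files that are readable or writable
# by "other" users. Real audits also check ownership, setuid bits
# and directory modes; this sketch covers the world-access bits only.

def world_accessible(path):
    """True if the file is readable or writable by 'other' users."""
    mode = os.stat(path).st_mode
    return bool(mode & (stat.S_IROTH | stat.S_IWOTH))
```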
Arrange backups of all important data.– This is the only way to recover from a break-in.
Run integrity checks regularly.
– Consider augmenting this with security scans (COPS, nmap, saint, nessus, Tiger, SATAN). Such tools typically look for vulnerabilities.
Set up logging properly and monitor all important logs.
Know and control your trust relationships.
What else to do?
Administrators need to keep up to date with the world. Here are some common sites/email addresses which can keep you informed about security.
– CERT: www.cert.org
– AUSCERT: www.auscert.org.au
– bugtraq: [email protected]
– safer: safer.siamrelay.com
– rootshell: www.rootshell.com