SSCP_Notes_2.0


    SSCP Study Notes

    1. Access Controls
    2. Administration
    3. Audit and Monitoring
    4. Risk, Response, and Recovery
    5. Cryptography
    6. Data Communications
    7. Malicious Code

    Modified version of original study guide by Vijayanand Banahatti (SSCP)

    Table of Contents

    1.0 ACCESS CONTROLS
    2.0 ADMINISTRATION
    3.0 AUDIT AND MONITORING
    4.0 RISK, RESPONSE, AND RECOVERY
    5.0 CRYPTOGRAPHY
    6.0 DATA COMMUNICATIONS
    7.0 MALICIOUS CODE
    REFERENCES

    1.0 ACCESS CONTROLS

    Access control objects: Any object that needs controlled access can be considered an access control object.
    Access control subjects: Any users, programs, and processes that request permission to objects are access control subjects. It is these access control subjects that must be identified, authenticated and authorized.
    Access control systems: The interface between access control objects and access control subjects.

    1.1 Identification, Authentication, Authorization, Accounting

    1.1.1 Identification and Authentication Techniques Identification works with authentication, and is defined as a process through which the identity of an object is ascertained. Identification takes place by using some form of authentication.

    Authentication Type   Examples
    Something you know    Passwords, personal identification numbers (PINs), pass phrases, mother's maiden name, favorite sports team, etc.
    Something you have    Proximity cards, identification tokens, keys, identification badges, passports, certificates, transponders, smart cards, etc.
    Something you are     Fingerprints, signatures, eye characteristics, facial characteristics, voiceprints, DNA.

    These three authentication types can be combined to provide greater security. Such combinations are called factors of authentication (two-factor or three-factor).

    1.1.2 Authorization Techniques Once an access control subject has been identified and authenticated, it is authorized to have a specific level or type of access to the access control object.

    1.1.3 Accounting Techniques The access control system provides accountability by tracking every security-related transaction. Accountability within a system means that anyone using the system is tracked and held accountable or responsible for their actions. Example: authentication audit trail or log, privilege elevation audit trail or log.

    1.1.4 Password Administration Password administration is an important part of any access control system. The selection, management, and auditing of passwords must occur through either automated or administrative methods.
    o Password Selection: Policy generally deals with minimum password length, required character usage, password expiry, password reuse, etc.
    o Password Management: Anything that happens to the password during its entire life cycle, from a user needing their password reset to automatic password expiry.
    o Password Audit and Control: Determines the overall functionality of the access control system and helps to reduce unauthorized access and attacks. Good audit logging practices should be followed.
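
    As an illustration of automated password-selection checks, here is a minimal Python sketch. The specific rules (minimum length 8, mixed character classes, no reuse of the last 5 passwords) are assumed for the example, not taken from any particular standard:

        import re

        def check_password(candidate, previous):
            """Return a list of policy violations (empty list = acceptable)."""
            violations = []
            if len(candidate) < 8:
                violations.append("shorter than minimum length (8)")
            if not re.search(r"[A-Z]", candidate):
                violations.append("missing uppercase character")
            if not re.search(r"[a-z]", candidate):
                violations.append("missing lowercase character")
            if not re.search(r"\d", candidate):
                violations.append("missing digit")
            if not re.search(r"[^A-Za-z0-9]", candidate):
                violations.append("missing symbol")
            if candidate in previous[-5:]:   # assumed reuse window of 5
                violations.append("reuses a recent password")
            return violations

        print(check_password("Tr0ub4dor!", previous=[]))  # [] - passes
        print(check_password("password", previous=[]))    # several violations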

    1.1.5 Assurance In order to provide assurance, the following four questions must be answered (i.e. CIA + accountability):

    - Are transactions between the access control subject and access control object confidential?
    - Is the integrity of the access control object ensured and guaranteed?
    - Is the access control object available to be accessed when needed?
    - Is the access control system accountable for what it authenticates (e.g. logging/auditing)?

    If all four can be answered affirmatively then assurance has been properly provided.

    1.2 Access Control Administration

    After implementation of the access control system, administering the process is the major task. This involves the following factors.

    Account Administration: Administration of all user, system, and service accounts used within the access control system. This includes creation (authorization, rights, permissions), maintenance (account lockout-reset, audit, password policy), and destruction (rename or delete) of accounts.

    Access Rights and Permissions: The owner of the data should decide the rights and permissions for a specific account. The principle of least privilege is used here to grant all the rights and permissions necessary for an account to perform the required duties, but no more than required.
    Monitoring: Changes to accounts and escalations of privilege should be logged and constantly monitored for security.
    Removable Media Security: Any removable media on the system can be a vulnerability. All removable media should be restricted or controlled in some manner to provide the best possible system security.
    Management of Data Caches: Access control is not only for users; any type of information on the system needs to be considered, e.g. temporary data caches (pagefile, Dr. Watson logs, .tmp files, etc.)

    1.3 Access Control Models, Methodologies and Implementation

    1.3.1 Types of Access Control Policies Access control policies are put into place to mitigate and control security risks. These policies are the guidelines that should be followed by both automated access control systems and actual physical security. The following policy types work together to support a global access control policy.

    Type of Policy  Description
    Preventive      Policies for preventing vulnerabilities from being exploited. E.g. patching policy, background checks, data classification, separation of duties, etc.
    Detective       Policies implemented to detect when an attack is occurring. E.g. IDS, log monitoring, etc.
    Corrective      Policies that address immediate corrective action after a vulnerability has been exploited. These policies include disaster recovery plans, emergency restore procedures, password lockout thresholds, etc.

    NB: Don't confuse these with Control types (see 3.1).

    1.3.2 Access Control Policy Implementation Methods

    Administrative: In this implementation a policy is controlled administratively, through workplace policies or orders passed down to subordinates. These policies have no automated enforcement built in and rely on people doing as they are told, but they offer an easy way to implement a first line of defense. E.g. a written policy on passwords (length, expiration, lockout), etc.

    Logical/Technical: In this implementation, automated methods of enforcing access control policies are used. This type of implementation reduces human error during the operation stage. E.g. actual password restrictions enforced in software (length, expiration, lockout), use of SSL, SSH, etc.

    Physical: This type of implementation includes everything from controlling access to a secure building to protecting network cabling from electro-magnetic interference (EMI). Example: Security guards, Biometric devices, ID badges, Perimeter defenses (walls/fences), Physical locks.

    Note 1: The policies and implementations may be combined, e.g. Preventive/Administrative (a written password policy); Detective/Logical-Technical (an IDS); Corrective/Administrative (a disaster recovery plan). Some controls, such as CCTV, may be seen as Preventive/Physical (when recording only) and Detective/Physical (when being actively monitored).

    Note 2: Don't confuse the SSCP usage of "policy" with Windows Policies (e.g. minimum password length, etc.)

    1.3.3 Access Control Models

    Discretionary Access Control (DAC): The data owner decides the access (the owner can change permissions).
    Mandatory Access Control (MAC): The system decides the access depending on the classification (sensitivity label). Stronger than DAC. (Only the central administrator can change permissions, but the data owner still decides on the data classification.)
    Role-Based Access Control (RBAC), aka Non-Discretionary: The role of the user/task (subject) determines the access to the data object. Uses a centrally administered set of controls to determine how subjects and objects interact.
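
    To make the contrast concrete, here is a minimal Python sketch of an RBAC check. The roles, permissions and users are invented for the example; in a real system the role-to-permission mapping is centrally administered, as described above:

        # Hypothetical centrally administered role/permission tables.
        ROLE_PERMISSIONS = {
            "payroll_clerk": {"read:payroll"},
            "payroll_manager": {"read:payroll", "write:payroll"},
        }
        USER_ROLES = {"alice": "payroll_clerk", "bob": "payroll_manager"}

        def rbac_allowed(user, permission):
            # Access is decided by the subject's role, not by the object's
            # owner (DAC) or by a sensitivity label (MAC).
            role = USER_ROLES.get(user, "")
            return permission in ROLE_PERMISSIONS.get(role, set())

        print(rbac_allowed("alice", "write:payroll"))  # False
        print(rbac_allowed("bob", "write:payroll"))    # True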

    Formal Models:

    1. Biba The first formal model to address integrity. The Biba model bases its access control on levels of integrity. It consists of three primary rules:
    1. A subject at a given integrity level X can only read objects at the same or higher integrity levels - the simple integrity axiom.
    2. A subject at integrity level X can only write objects at the same or lower integrity levels - the * (star) integrity axiom.
    3. A subject at integrity level X can only invoke a subject at the same or lower integrity levels.

    2. Clark/Wilson This model is similar to Biba in that it addresses integrity. It protects the integrity of information by focusing on preventing authorized users from making unauthorized modifications of data, committing fraud, and making errors within commercial applications.

    It uses segregation (separation) of duties. The principle of segregation of duties states that no single person should perform a task from beginning to end; the task should be divided among two or more people to prevent fraud by one person acting alone. This ensures the integrity of the access control object by securing the process used to create or modify the object.

    3. Bell/LaPadula This formal model specifies that all access control objects have a minimum security level assigned to them, so that access control subjects with a security level lower than the security level of an object are unable to access the object. The Bell-LaPadula formal model addresses only confidentiality. It is what the MAC model is based on, and it also formed the basis of the original "Orange Book". Note: Bell-LaPadula does not address integrity or availability. Remember: No read up / No write down.
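
    The Bell-LaPadula rules (and their mirror image in Biba) reduce to simple level comparisons. Here is a minimal Python sketch, assuming numeric levels where a higher number means higher classification (Bell-LaPadula) or higher integrity (Biba):

        def blp_can_read(subject, obj):
            return subject >= obj    # no read up (simple security property)

        def blp_can_write(subject, obj):
            return subject <= obj    # no write down (* property)

        def biba_can_read(subject, obj):
            return subject <= obj    # read same or higher integrity only

        def biba_can_write(subject, obj):
            return subject >= obj    # write same or lower integrity only

        # A SECRET (2) subject may read UNCLASSIFIED (1) but not TOP SECRET (3):
        print(blp_can_read(2, 1), blp_can_read(2, 3))   # True False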

    ORANGE BOOK: The Department of Defense Trusted Computer System Evaluation Criteria (TCSEC) book, or the Orange Book. The Orange Book requires that the system be evaluated as a standalone system.

    Division                     Level  Definition
    A: VERIFIED PROTECTION       A1     Verified Protection/Design
    B: MANDATORY PROTECTION      B1     Labeled Security Protection
                                 B2     Structured Protection
                                 B3     Security Domains
    C: DISCRETIONARY PROTECTION  C1     Discretionary Security
                                 C2     Controlled Access Protection
    D: MINIMAL PROTECTION        None   Minimal Protection - evaluated and failed

    RED BOOK: Consists of two parts: the Trusted Network Interpretation of the TCSEC, and the Trusted Network Interpretation Environments Guideline (guidance for applying the Trusted Network Interpretation). The guidelines within this book are as strict as the Orange Book itself, but it is designed to work with networked environments.

    Note: The Orange book does NOT address integrity.

    1.3.4 Access Control Methodologies

    Centralized access control: All access control queries are directed to a central point of authentication. This type of system allows for a single point of administration for the entire access control system. It decreases the administrative effort but raises costs, and implementation is more difficult. Examples: Kerberos, Remote Authentication Dial-In User Service (RADIUS), Terminal Access Controller Access Control System (TACACS), and TACACS+ (which allows encryption of data).

    Decentralized access control: The access control system is not centralized to a single computer system or group of systems. This offers the advantage of providing access control functionality where connectivity to a centralized access control system is difficult, but it is harder to maintain than a centralized access control system. Examples: a Windows workgroup where every member of the workgroup handles access control, or a database system that handles its own authentication.

    1.4 Remote Authentication

    To provide reliable authentication for remote users, small organizations can use the default authentication method of the software being used for remote access. Large organizations use the following authentication methods: Remote Authentication Dial-In User Service (RADIUS) and Terminal Access Controller Access Control System (TACACS/TACACS+).

    1.4.1 RADIUS Using RADIUS, a remote access server accepts the authentication credentials from the access control subject and passes them along to the RADIUS server for authentication. The RADIUS server then responds to the remote access server with either authorization or denial. A major advantage of RADIUS is that communication between the RADIUS server and the remote access server is encrypted, which helps increase the overall security of access control.

    1.4.2 TACACS Older, does not use encryption, and is less often used. It allows for a centralized access control approach that keeps all access control changes isolated to a single place. When the TACACS server receives the identification data, it either returns authorization information or denies access to the user. This information is passed back to the remote access server in clear text, and the remote access server responds appropriately.

    1.4.3 TACACS+ Same as TACACS, except that the authentication information goes across the network in an encrypted format.

    1.5 Single Sign On (SSO)

    With SSO, the user authenticates once, and the fact that they have been authenticated is passed onto each system that they attempt to access. Some SSO products are:

    - Kerberos (see below)
    - SESAME
    - NetSP
    - KryptoKnight
    - X.509 (think NSD)
    - Snareworks

    Advantages of SSO:
    - Hire/fire and enable/disable access to systems quickly and efficiently.
    - Reduced admin effort for forgotten passwords.
    - Improved end-user experience.

    Disadvantages of SSO:
    - Cost.
    - Difficult to implement.
    - If a user's SSO password is compromised, the attacker has access to all of that user's systems.

    1.5.1 Kerberos (see p. 47-50 of SSCP book for more)

    Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications through the use of symmetric-key authentication and tickets (authentication tokens). Kerberos systems use private keys, and a Kerberos server must have copies of all keys on it, which requires a great deal of physical security. It allows for cross-platform authentication.

    Kerberos has a Key Distribution Center (KDC), which holds all keys and provides central authentication services. It uses time-stamping of its tickets to help ensure they are not compromised (i.e. non-repudiation), and an overall structure of control called a realm. Because of the time-stamping it is important that the clocks of participating systems are synchronized. Kerberos is susceptible to replay attacks if a ticket is compromised within its allotted time frame.

    The Authentication Service (AS) is the part of the KDC that authenticates clients. The Ticket Granting Service (TGS) makes the tickets and issues them to clients.

    User logon process:
    1. The user identifies themselves and presents their credentials to the KDC (password, smart card, etc.)
    2. The AS authenticates the credentials.
    3. The TGS issues a Ticket Granting Ticket (TGT) that is associated with the client token.

    The TGT expires when the user ends their session (disconnects/logs off) and is cached locally for the duration of the session.

    Resource access process (continuing from the above):
    4. The TGT is presented to the KDC along with details of the remote resource the client requires access to.
    5. The KDC returns a session ticket to the client.
    6. The session ticket is presented to the remote resource and access is granted.

    Note: Kerberos does NOT address availability.
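
    The ticket flow above can be mimicked in a few lines of Python. This is a toy simulation of the message sequence, not the real Kerberos wire protocol: tickets are modelled as dicts sealed with an HMAC under a key known only to the KDC (standing in for symmetric encryption), and the lifetime and names are invented:

        import hmac, hashlib, time

        KDC_KEY = b"kdc-long-term-secret"   # hypothetical key material

        def seal(ticket):
            """Seal a ticket with an HMAC so tampering is detectable."""
            data = repr(sorted(ticket.items())).encode()
            return ticket, hmac.new(KDC_KEY, data, hashlib.sha256).hexdigest()

        def verify(ticket, mac):
            return hmac.compare_digest(mac, seal(ticket)[1])

        # Steps 1-3: the AS authenticates the user; the TGS issues a
        # time-stamped TGT.
        tgt, tgt_mac = seal({"user": "alice", "issued": int(time.time())})

        # Steps 4-6: the TGT is presented back to the KDC, which returns a
        # session ticket for the requested resource if the TGT is intact and
        # within its (assumed 8-hour) lifetime.
        if verify(tgt, tgt_mac) and time.time() - tgt["issued"] < 8 * 3600:
            session, session_mac = seal({"user": "alice",
                                         "resource": "fileserver"})
            print("access granted:", verify(session, session_mac))

    Note that the timestamp check is what limits the replay window described above.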

    2.0 ADMINISTRATION

    2.1 Security Administration Principles

    Authorization Once an access control subject has been identified and authenticated, it is authorized to have a specific level or type of access to the access control object.

    Identification and Authentication

    Identification works with authentication, and is defined as a process through which the identity of an object is ascertained. Identification takes place by using some form of authentication.

    Accountability Accountability within a system means that anyone using the system is tracked and held accountable or responsible for their actions. Example: Authentication audit trail or log, privilege elevation audit trail or log.

    Non-repudiation Non-repudiation is an attribute of communications that seeks to prevent future false denial of involvement by either party. Non-repudiation is consequently an essential element of trust in e-business.

    Least privileges The principle of least privilege states that a user should be given enough access to the system to enable him/her to perform the duties required by their job. Elevated levels of access should not be granted until they are required to perform job functions. Owners of the information in a system are responsible for the information and are the appropriate authority for authorizing access level upgrades for the system users under their control.

    Data Classification The primary purpose of data classification is to indicate the level of confidentiality, integrity and availability that is required for each type of information. It helps to ensure that the data is protected in the most cost-effective manner. The data owner always decides the level of classification.

    2.2 CIA Triad

    Confidentiality: Confidentiality means that information on the system/network is safe from disclosure to unauthorized individuals.
    Integrity: Integrity of information within an organization means the information is whole and complete and has not been altered in any way, except by those authorized to manipulate it. The integrity of a computer or information system affects the integrity of the critical information contained therein; losing the integrity of a system or the information in that system means that the information can no longer be trusted.
    Availability: Availability is having the information available right when it is needed. When availability is considered with respect to the critical information within an organization, it is easy to see why it is so crucial that it is always there when it is needed.

    2.3 Data classification

    To decide what level of protection is needed and how much, it is recommended to conduct an analysis which will determine the value of the information and system resources. The value can be determined by analyzing the impact of loss of data, unauthorized disclosure of information, the cost of replacement, or the amount of embarrassment/loss of reputation. Each higher level is defined to have more serious consequences if it is not protected. Remember, the data owner always decides the level of classification. Common classification levels (from highest to lowest):

    Commercial    Military
    Confidential  Top Secret
    Private       Secret
    Sensitive     Confidential
    Public        Sensitive But Unclassified
                  Unclassified

    2.4 System Life Cycle Phases and Security Concerns

    Applies to new developments as well as systems improvements and maintenance. Security should be included at each phase of the cycle; it should not be addressed at the end of development because of the added cost, time and effort. Separation of duties should be practiced in each phase (e.g. a programmer not having production access). Changes must be authorized, tested and recorded, and must not affect the security of the system or its capability to enforce the security policy.

    The seven phases are:

    1. Project Initiation (Requirements Analysis)
    Initial study and conception of project.
    InfoSec involvement: Perform a risk assessment to:
    - Define sensitivity of information and level of protection needed
    - Define criticality of system and security risks
    - Ensure regulatory/legal/privacy issues are addressed and compliance to security standards

    2. Project Definition, Design & Analysis (Software Plans and Requirements)
    Functional/system design requirements; ensure requirements can be met by the application.
    InfoSec involvement:
    - Determine acceptable level of risk (level of loss, percentage of loss, permissible variance)
    - Identify security requirements and controls:
      - Determine exposure points in the process, i.e. threats and vulnerabilities
      - Define controls to mitigate exposure
    - Due diligence, legal liabilities and reasonable care

    3. System Design Specification
    Detailed planning of functional components; design of test plans and program controls.
    InfoSec involvement:
    - Incorporate security mechanisms and verify program controls
    - Evaluate encryption options

    4. Software Development
    Writing/implementing the code/software.
    InfoSec involvement:
    - Develop information security-related code
    - Implement unit testing
    - Develop documentation

    5. Implementation, Evaluation and Testing
    Installing system software and testing software against requirements; documenting the internal design of the software.
    InfoSec involvement:
    - Conduct acceptance testing
    - Test security software
    - Certification and accreditation (where applicable)

    6. Maintenance
    Product changes and bug fixes.
    InfoSec involvement:
    - Penetration testing and vulnerability assessment
    - Re-certification

    7. Revision or Disposal
    Major modification to the product; evaluation of new requirements and deciding to replace rather than re-code.
    InfoSec involvement:
    - Evaluation of major security flaws
    - Security testing of major modifications

    2.5 Due Diligence / Due Care

    The concepts of due diligence and due care require that an organization engage in good business practices relative to the organization's industry.

    Due Diligence is the continual effort of making sure that the correct policies, procedures and standards are in place and being followed. Due diligence may be mandated by various legal requirements in the organization's industry or by compliance with governmental regulatory standards.

    An example of Due Care is training employees in security awareness as opposed to simply creating a policy with no implementation plan or follow-up. Another example is requiring employees to sign statements that they have read and understood appropriate acceptable use policies.

    In lay terms, due diligence is the responsibility a company has to investigate and identify issues, and due care is doing something about the findings.

    2.6 Certification / Accreditation / Acceptance / Assurance

    Certification deals with testing and assessing the security mechanisms in a system. Accreditation pertains to management formally recognizing the system and its security level, or the system being approved by a designated approving authority. Acceptance designates that a system has met all the security and performance requirements that were set for the project. Assurance is a term used to define the level of confidence in a system.

    Once a system is built the certification process begins to test the system for all security and functional requirements. If the system meets all requirements it gains accreditation. Accredited systems are then accepted into the operational environment. This acceptance is because the owners and users of the system now have a reasonable level of assurance that the system will perform as intended, both from a security and functional perspective.

    2.7 Data/information Storage

    Primary: Main memory, directly accessible by the CPU; volatile, and loses its contents on power failure.
    Secondary: Mass storage devices (hard drive, floppy drive, tapes, CDs). Retains data even when the computer is off.
    Real (physical) memory: Refers to main memory, or random-access memory (RAM).
    Virtual memory: "Imaginary" memory area supported by the OS and implemented in conjunction with hardware.
    RAM (Random Access Memory): Can read and write data.
    ROM (Read Only Memory): Can only read data; holds the instructions for starting up the computer.
    PROM (Programmable Read Only Memory): A memory chip on which a program can be stored once; it cannot be wiped and reused for storing other data.
    EPROM (Erasable Programmable Read Only Memory): Can be erased by exposure to ultraviolet light.
    EEPROM (Electrically Erasable Programmable Read Only Memory): Can be erased by exposure to an electrical charge.

    2.8 System Security Architecture

    Deals specifically with those mechanisms within a system that ensure information is not tampered with while it is being processed or used. Different levels of information are labeled and classified based upon their sensitivity. There are three common modes used to control access to systems containing classified information: Note: All are MAC models.

    2.8.1 System High Mode Proper clearance required for ALL information on the system. All users that have access must have a security clearance that authorizes their access. Although all users have access, they may not have a need to know for all the information because there are various levels of information classification. The levels of information classification are clearly labeled to make it clear what the access requirements are. All users can access SOME data, based on their need to know.

    2.8.2 Compartment Mode Proper clearance required for THE HIGHEST LEVEL of information on the system. All users that have access to the system must have a security clearance that authorizes their access. Each user is authorized to access the information only when a need to know requirement can be justified. A strict documentation process tracks the access given to each user and the individual who granted the access. All users can access SOME data, based on their need to know and formal access approval.

    2.8.3 Multilevel Secure Mode (MLS) Proper clearance is required for SOME (not all) of the information on the system: each user must have a security clearance that authorizes access to the specific information they use, but not every user is cleared for everything on the system. Uses data classification and Mandatory Access Control (MAC) to secure the system. Processes and data are controlled; processes from lower security levels are not allowed to access processes at higher levels. All users can access SOME data, based on their need to know, formal access approval and clearance level.

    In addition, there are several system security architecture concepts that may be applied:

    2.8.4 Hardware Segmentation Within a system, memory allocations are broken up into segments that are completely separate from one another. The kernel within the operating system controls how the memory is allocated to each process and gives just enough memory for the process to load the application and the process data. Each process has its own allocated memory and each segment is protected from one another.

    2.8.5 Trusted Computing Base (TCB) Defined as the total combination of protection mechanisms within a computer system. It includes hardware, software and firmware. The term originated in the Orange Book.
    Security perimeter: The boundary that separates the TCB from the rest of the system; resources outside it are untrusted. Communication between trusted components and untrusted components needs to be controlled to ensure that confidential information does not flow in an unintended way.
    Reference monitor: An abstract machine (access control concept) which mediates all access that subjects have to objects, to ensure that the subjects have the necessary access rights and to protect the objects from unauthorized access and destructive modification. It compares the subject's access level to the data classification to permit or deny access.
    Security kernel: Made up of the mechanisms (hardware, software, firmware) that fall within the TCB and that implement and enforce the reference monitor concept. It is at the core of the TCB and is the most common approach to building trusted systems. It must be tamperproof and isolated, invoked for every access attempt, and small enough to be tested and verified.

    2.8.6 Data Protection Mechanisms
    Layered design: Layered design is intended to protect operations that are performed within the kernel. Each layer deals with a specific activity: the outer layers perform normal (least trusted) tasks and the inner layers more complex and protected (most trusted) tasks. Segmenting processes like this means that untrusted user processes running in the outer layers are not able to corrupt the core system.
    Data abstraction: Data abstraction is the process of defining what an object is, what values it is allowed to have, and the operations that are allowed against the object. The definition of an object is broken down to its most essential form, leaving only those details required for the system to operate.
    Data hiding: Data hiding is the process of hiding information available at one process level in the layered model from processes in other layers. Data hiding is a protection mechanism meant to keep the core system processes safe from tampering or corruption.

    2.9 Change Control / Configuration Management (p.135-139)

    Changes occur both in the application development process and in the normal network and application upgrade process, and the requester does not necessarily understand the impact of these changes. Changes are unavoidable at every stage of system development. Change control does not apply as strictly to the development process as it does to production systems.

    Change control helps ensure that the security policies and infrastructure that have been built to protect the organization are not broken by changes that occur on the system from day to day. Configuration management is the process of identifying, analyzing and controlling the software and hardware of a system. The process starts with a configuration change request submitted to the configuration control board (CCB); the board reviews the effect of the change and approves or rejects it. Tools used for change control / configuration management include checksums (e.g. MD5 hashes), digital signatures, IDS, file integrity monitors, enterprise security managers, and software configuration management. These tools can all be used to verify the integrity of both production and development files/software and help ensure that the organization does not suffer an outage because of bad changes (i.e. changes planned correctly but implemented with flaws because of, for example, corrupt files). They can also help create "golden images" of production data/configurations.

    It is important to enforce the change control / configuration management process. Some tools for detecting violations are: NetIQ, PentaSafe, PoliVec and Tripwire. These tools offer solutions for monitoring the configuration of systems and alerting on out-of-course changes.
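
    As an illustration of checksum-based change detection, the following minimal Python sketch hashes a file at baseline time and re-hashes it later, flagging any difference. The file path is a placeholder; MD5 is used because it is the algorithm named above, though SHA-256 is the stronger modern choice:

        import hashlib

        def file_digest(path):
            """Hash a file in chunks so large files do not exhaust memory."""
            h = hashlib.md5()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(8192), b""):
                    h.update(chunk)
            return h.hexdigest()

        baseline = file_digest("/etc/passwd")  # recorded at "golden image" time
        # ... later, during a periodic integrity check ...
        if file_digest("/etc/passwd") != baseline:
            print("file changed since baseline - investigate")

    This is essentially what file integrity monitors such as Tripwire do at scale, across a database of baselined files.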

    2.10 Policy, Standard, Guidelines, Baselines

    Security Policy: A general statement written by senior management to dictate what type of role security plays within the organization; it also provides scope and direction for all further security activity.
    Standards: Specify how hardware/software products are to be used. They provide a means to ensure that specific technologies, applications, parameters and procedures are carried out in a uniform way. These rules are usually compulsory within a company and need to be enforced.
    Baselines: Provide the minimum level of security necessary throughout the organization.
    Guidelines: Recommended actions and operational guides for when a specific standard does not apply.
    Procedures: Step-by-step actions for achieving a certain task.

    2.11 Roles and Responsibilities

    Senior Manager: Ultimately responsible for the security of the organization and the protection of its assets.
    Security Professional: Functionally responsible for security; carries out the senior manager's directives.
    Data Owner: Usually a member of management, and ultimately responsible for the protection and use of the data. Decides upon the classification of the data. Delegates the responsibility for day-to-day maintenance of the data to the data custodian.
    Data Custodian: Given the responsibility for the maintenance and protection of the data.
    User: Any individual who routinely uses the data for work-related tasks. Must have the necessary level of access to the data to perform their duties.

    2.12 Structure and Practices

    Separation of duties: Makes sure that one individual cannot complete a risky task by herself. More than one person would need to work together to cause some type of destruction or fraud, which drastically reduces the probability of exploitation.
    Non-disclosure agreements: Protect the company if/when an employee leaves for one reason or another.
    Job rotation: No one person should stay in one position for a long period of time, because doing so can end up giving that one individual too much control.

    2.13 Security Awareness

    The weakest link in any security program in any organization is the users. Part of a quality security program is teaching users what security means to the organization and how each user impacts the process.

    Make security part of the hiring process. Gain support from upper management. Provide tailored security and policy training:

    - Security-related job training for operators
    - Awareness training for specific departments or personnel groups with security-sensitive positions
    - Technical security training for IT support personnel and system administrators
    - Advanced training for security practitioners and information system auditors
    - Security training for senior managers, functional managers and business unit managers/other group heads

    Perform clean-desk spot checks. Lead by example.

    2.14 Security Management Planning

    A security plan is meant to provide a roadmap for the organization concerning security. It is also meant to be specific to each organization. Because they are unique the process may differ for each organization but generally the steps are:

    - Define the mission and determine priorities (p.151)
    - Determine the risks and threats to priority areas (p.151)
    - Create a security plan to address threats (p.152):
      - Develop security policies (p.152)
      - Perform security assessment (p.153)
      - Identify security solutions (p.153)
      - Identify costs, benefits, and feasibility of solutions and finalize the plan (p.153)
      - Present the plan to higher management in order to gain management buy-in (p.153)

    2.15 Common Development of a Security Policy

    The phases of the common development process of a security policy are:

    Initial & Evaluation: Writing a proposal to management that states the objectives of the policy.
    Development: Drafting and writing the actual policy, incorporating the agreed objectives.
    Approval: Presenting the policy to the approval body.
    Publication: Publishing and distributing the policy within the organization.
    Implementation: Carrying out and enforcing the objectives of the policy.
    Maintenance: Regularly reviewing the policy to ensure currency (may be on a scheduled basis).

    3.0 AUDIT AND MONITORING

    Auditing is the process of verifying that a specific system, control, process, mechanism, or function meets a defined list of criteria. It gives security managers the ability to determine compliance with a specific policy or standard, and is often used to provide senior management with reports on the effectiveness of security controls. Monitoring is the process of collecting information to identify security events and report it in a prescribed format.

    3.1 Control Types

    Directive Usually set by management or administrators to ensure that the requisite actions or activities for maintaining policy or system integrity take place.

    Preventive To inhibit persons or processes from being able to initiate actions or activities that could potentially violate the policy for which the control was devised.

    Detective To identify actions or activities from any source that violate the policy for which the control was devised. Detective controls often act as a trigger for a corrective control.

    Corrective To act upon a situation where a policy has been violated. Often called countermeasures, they act in an automated fashion to inhibit the particular action/activity that violated a policy from becoming more serious than it already is. Used to restore controls.

    Recovery To act upon a situation where a policy has been violated. Recovery controls attempt to restore the system or processes relating to the violation in policy to their original state.

    3.2 Security Audits

    The auditing process provides a well-defined set of procedures and protocols to measure compliance or deviation from applicable standards, regulations etc.

    Auditing goals should be coupled with governance; this ensures that auditing goals align with the business goals. Governance considers organizational relationships and processes that directly affect the entire enterprise.

    Once the goal of an audit has been clearly identified, the controls required to meet the objective can be planned - this is often called the control objective.

    3.2.1 Internal and External Security Audit

    Internal auditors are employees of the organization in which the audit is taking place. They examine the existing internal control structure for compliance to policies and help management accomplish objectives through a disciplined approach to governance, control and risk mitigation. External auditors are often hired as external contractors to address specific regulatory requirements. Organizations should always check the credentials of the third party before starting the audit and a non-disclosure agreement should be signed (as a minimum).

    3.2.2 Auditing Process

    The Department of Defense (DoD) provides detailed steps that are particular to an IT audit:

    1. Plan the audit
    - Understand the business context of the security audit
    - Obtain required approvals from senior management and legal representatives
    - Obtain historical information on previous audits, if possible
    - Research the applicable regulatory statutes
    - Assess the risk conditions inherent to the environment

    2. Determine the existing controls in place and the risk profile
    - Evaluate the current security posture using a risk-based approach
    - Evaluate the effectiveness of existing security controls
    - Perform detection/control risk assessment
    - Determine the total resulting risk profile

    3. Conduct compliance testing
    - Determine the effectiveness of policies and procedures
    - Determine the effectiveness of segregation of duties

    4. Conduct substantive testing
    - Verify that the security controls behave as expected
    - Test controls in practice

    5. Determine the materiality of weaknesses found
    - If the security exploits found were to be executed, what would be the tangible ($) impact and the intangible (reputation) impact to the business?
    - Determine whether the security exploits increase the organizational risk profile

    6. Present findings
    - Prepare the audit report and the audit opinion
    - Create recommendations

    3.2.3 Audit Data Sources (p. 192)

    Audit sources are the locations from which audit data can be gathered for evaluation and analysis. The auditor should always consider the objectivity of the information source. Audit data can be gathered from a number of sources such as:

    - Organization charts
    - Network topology diagrams
    - Business process and development documentation
    - Hardware and software inventories
    - Informal interviews with employees
    - Previous audit reports

    3.2.4 Audit Trails

    Audit trails are a group of logs or relevant information that make up the set of evidence related to a particular activity. For every action taken on an information system there should be a relevant log entry containing information about the name of the system, the userid of the user, what action was taken and the result of the action.

    One of the most difficult aspects of establishing an audit trail is ensuring audit trail integrity. Integrity of the audit trail is crucial to event reconstruction of a security incident. It is important to protect the audit trail from unauthorized access and log tampering. The use of a Central Logging Facility (CLF) to maintain disparate system logs is recommended. Backups of audit logs should also be considered.
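
    One common way to make tampering evident is to chain log entries together cryptographically. The following minimal Python sketch (with illustrative field names) links each entry to the digest of the previous one, so that altering any entry invalidates every digest that follows it:

        import hashlib

        def chain(entries):
            """Return (entry, digest) pairs forming a tamper-evident chain."""
            prev = "0" * 64
            out = []
            for entry in entries:
                digest = hashlib.sha256((prev + entry).encode()).hexdigest()
                out.append((entry, digest))
                prev = digest
            return out

        log = chain(["host=web01 user=alice action=login result=ok",
                     "host=web01 user=alice action=sudo result=denied"])
        for entry, digest in log:
            print(digest[:12], entry)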

    Audit log reviews should cover a level of detail that allows general inferences to be made about host activity, while remaining granular enough to support further investigation of a particular event.

    Audit trails provide a method of tracking or logging that allows security-related activity to be traced. Useful audit trails include:

    - Password changes
    - Privilege use
    - Privilege escalation
    - Account creations and deletions
    - Resource access
    - Authentication failures

    System events provide triggers that are captured in the audit trail and used to demonstrate a pattern of activity. The following are examples of events tracked:

    - Startup and shutdown
    - Log in and log off
    - Object create, delete, and modify
    - Admin/operator actions
    - Resource access denials
    - Resource access approvals

    Sampling and Data Extraction is done when no original data is available. In this case, the administrator has to use collection techniques such as interviews or questionnaires to extract the data from a group of respondents. Data sampling allows specific information to be extracted, and is most often used for the detection of anomalous activity. Retention periods indicate how long media must be kept to comply with regulatory constraints. The key question is "how long is long enough?", and the answer largely depends on regulatory/compliance issues.

    3.2.5 Security Auditing Methods

    The auditing methods should be well documented, and proven to be reconstructable if required. The frequency of review depends on the type and importance of audit. The security audit report should highlight all the findings and recommendations whenever required. The following are two types of methods that are commonly used for security audits.

    Penetration testing (p.201): Classified as a proactive security audit; it tests security controls by simulating actions that can be taken by real attackers.

    When preparing for a penetration test, a list of attacks that will take place has to be generated or mapped. This list of attacks can be likened to an audit checklist. A responsible penetration test requires careful coordination and planning to minimize the likelihood of negative impact to an organization.

    A penetration test is the authorized, scheduled and systematic process of using known vulnerabilities and exploiting the same in an attempt to perform an intrusion into host, network, physical or application resources.

    The penetration test can be conducted on internal (a building access or intranet host security system) or external (the company's connection to the Internet) resources. It normally consists of automated and manual testing of organization resources. The process includes:

    - Host identification, i.e. identification of open ports and services running.
    - Fingerprinting the OS and applications running, i.e. identification of the OS version and the applications running (TCP/IP fingerprinting techniques, banner grabbing, etc.)
    - Creating a vulnerability matrix for the host according to the OS and applications, using common sources such as SecurityFocus, CERT, etc. to collate known vulnerabilities.
    - Vulnerability analysis using automated tools (like ISS, Nessus, etc.) and manual techniques.
    - Reporting the weaknesses and preparing the road map ahead.

    Checklist Audit (p.198): Standard audit questions are prepared as a template and used across a wide variety of organizations (e.g. SPRINT).

    If an auditor relies on the checklist too much and does not perform his or her own verification of related details based on observations unique to the environment, a major security flaw could go unnoticed. The same is true of software tools that automate the audit process and/or check for security vulnerabilities (see CAATs below).

    Other types of security audit methods are war-dialing (to see if there are any open modems), dumpster diving (to test the effectiveness of the secure disposal of confidential information), social engineering (to test employees' security behaviour) and war-driving (looking for unsecured wireless access points).

    3.2.6 Computer Assisted Audit Tool (CAAT)

    A CAAT is any software or hardware used to perform audit processes. CAATs can help find errors, detect fraud, identify areas where processes can be improved and analyze data to detect deviations from the norm.

    The advantage of using CAATs is the automation of manual tasks for data analysis. The danger of using them is reliance on tools to replace human observation and intuition. Auditors should use CAATs to exhaustively test data in different ways, test data integrity, identify trends, anomalies, and exceptions and to promote creative approaches to audits while leveraging these tools.

    Some examples of (mainframe-based) CAATs are EZTrieve, CA-PanAudit, FocAudit and SAS. PCs can also be used with spreadsheet/database programs for auditing, or a Generalized Audit Software (GAS) tool can be used to perform these audit functions - e.g. Interactive Data Extraction and Analysis (IDEA).

    3.2.7 Central Logging Facility (CLF)

    A CLF helps ensure that audit and system logs are sent to a secure, trusted location that is separate and non-accessible from the devices that are being monitored.

    A CLF can collect and integrate disparate data from multiple systems and help determine a pattern of attack through data correlation. It can also reveal discrepancies between remote logs and logs kept on a protected server - in this way it may detect log tampering.
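
    As an illustration, hosts can ship their logs to a CLF using the standard syslog protocol. A minimal Python sketch using the stdlib handler follows; "loghost.example.com" is a placeholder for the protected, separately administered log server:

        import logging
        import logging.handlers

        # Forward audit events over syslog (UDP port 514) to the CLF.
        logger = logging.getLogger("audit")
        logger.setLevel(logging.INFO)
        logger.addHandler(logging.handlers.SysLogHandler(
            address=("loghost.example.com", 514)))

        logger.info("user=alice action=privilege_escalation result=denied")

    Because the copy on the CLF is out of reach of an attacker who compromises the monitored host, discrepancies between local and central logs are themselves a useful indicator of tampering.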

    3.3 Reporting and Monitoring Mechanism

    The monitoring can be real-time, ad-hoc or passive, depending on the need and importance. To keep the system security up to date, security administrators must constantly monitor the system and be aware of attacks as they happen. Monitoring can be done automatically or manually, but either way a good policy and practice of constant monitoring should be in place.

    Warning Banners warn the users of systems that they must adhere to the acceptable usage policy and of their legal liability. This supports the legal requirements for prosecuting malicious users. In addition, the banners warn all users that anything they do on the systems is subject to monitoring.

    Keystroke Monitoring is a process whereby computer system administrators view or record both the keystrokes entered by a computer user and the computer's response during a user-to-computer session.
    Traffic analysis allows data captured over the wire to be reported in human-readable format for action.
    Trend analysis draws on inferences made over time on historical data (mostly traffic). It can show how an organization's compliance to policy (or whatever is being audited) increases or decreases over time.
    Event Monitoring provides alerts, or notification, whenever a violation in policy is detected. IDSs typically come to mind, but firewall logs, server/app logs, and many other sources can be monitored for event triggers.
    Closed Circuit Television (CCTV) monitors the physical activity of persons.
    Hardware monitoring is carried out for fault detection, and software monitoring for detecting the illegal installation of software.
    Alarms and signals work with IDS. An alarm allows an administrator to be made aware of the occurrence of a specific event, which can give the administrator a chance to head off an attack or to fix something before the situation gets worse. These notifications can include paging, calling a telephone number and delivering a message, or notifying centralized monitoring personnel.
    Violation Reports are used extensively in monitoring an access control system. This type of report shows any attempts at unauthorized access; it could simply be a list of failed logon attempts. Also see Clipping Levels (3.6).
    Honeypots are deliberately kept by organizations for studying attackers' behavior and for drawing attention away from other potential targets.
    Misuse detectors analyze system activity, looking for events or sets of events that match a predefined pattern of events that describe a known attack. This is sometimes called "signature-based detection". The most common form of misuse detection used in commercial products specifies each pattern of events corresponding to an attack as a separate signature.

    Intrusion Detection Systems (IDS) provide an alert when an anomaly occurs that does not match a predefined baseline, or when network activity matches a particular pattern that can be recognized as an attack. There are two major types of intrusion detection:

    - Network-based IDS (NIDS), which sniffs all network traffic and reports on the results.
    - Host-based IDS (HIDS), which operates on one particular system and reports only on items affecting that system.

    Intrusion detection systems use two approaches:

    Signature-based identification (aka knowledge-based):
    - Detects known attacks
    - Pattern matching
    - Similar to a virus scan

    Anomaly identification (aka statistical-anomaly based or behaviour-based):
    - Looks for attacks indicated through abnormal behavior
    - The assumption is that all intrusive events are anomalies
    - A profile of what is considered normal activity must be built first
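
    The two approaches reduce to very different checks, as this minimal Python sketch illustrates. The signature string, traffic records and baseline figures are all invented for the example:

        # Knowledge-based: match traffic against a known attack pattern.
        SIGNATURE = "GET /etc/passwd"

        def signature_alert(request):
            return SIGNATURE in request

        # Behaviour-based: compare activity to a previously built profile.
        BASELINE_RATE = 20.0   # "normal" requests per minute for this host

        def anomaly_alert(observed_rate, tolerance=3.0):
            return observed_rate > BASELINE_RATE * tolerance

        print(signature_alert("GET /etc/passwd HTTP/1.1"))  # True: known attack
        print(anomaly_alert(250.0))  # True: far above the normal profile

    The signature check misses novel attacks but rarely false-alarms; the anomaly check can catch new attacks but will flag any unusual (not necessarily malicious) behavior.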

    3.4 Types of Attack

    Dictionary Attack: A dictionary attack uses a flat text file containing dictionary words (sometimes in multiple languages) and many other common words, which are systematically tried against the user's password.

    Brute Force Attack: In this type of attack, every conceivable combination of letters, numbers, and symbols is systematically tried against the password until it is broken. It may take an incredibly long time due to the number of permutations and combinations that must be tried.
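
    The time required follows directly from the size of the keyspace, charset_size ** length. A back-of-envelope Python sketch (with an assumed attacker throughput of one billion guesses per second) shows how quickly the search space outgrows any realistic attack:

        charset = 26 + 26 + 10 + 32     # lower + upper + digits + symbols = 94
        guesses_per_second = 1e9        # hypothetical attacker throughput

        for length in (6, 8, 10, 12):
            keyspace = charset ** length
            years = keyspace / guesses_per_second / (3600 * 24 * 365)
            print(f"length {length}: {keyspace:.2e} combinations, "
                  f"~{years:.2e} years to exhaust")

    This is also why password policies (1.1.4) insist on length and mixed character classes.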

    Denial of Service (DoS): A situation where a circumstance, either intentional or accidental, prevents the system from functioning as intended or prevents legitimate users from using the service. In certain cases, the system may be functioning exactly as designed, but it was never intended to handle the load, scope, or parameters being imposed upon it. A denial-of-service attack is characterized by an explicit attempt by attackers to prevent legitimate users of a service from using that service. Examples include:
    - Attempts to "flood" a network, thereby preventing legitimate network traffic.
    - Attempts to disrupt connections between two machines, thereby preventing access to a service.
    - Attempts to prevent a particular individual from accessing a service.
    - Attempts to disrupt service to a specific system or person.

    Distributed Denial of Service (DDoS): Similar to a DoS attack, but the attacker uses other systems to launch the denial of service attack. A trojan horse could be placed on "slave" systems that allows the attacker to launch the attack from those systems.

    Spoofing: Spoofing is a form of attack where the intruder pretends to be another system and attempts to provide/obtain data and communications that were intended for the original system. This can be done in several different ways including IP spoofing, session hijacking, and Address Resolution Protocol (ARP) spoofing.

    Man In The Middle Attacks: Performed by effectively inserting an intruder's system in the middle of the communications path between two other systems on the network. By doing this, an attacker is able to see both sides of the conversation between the systems and pull data directly from the communications stream. In addition, the intruder can insert data into the communications stream, which could allow them to perform extended attacks or obtain more unauthorized data from the host system.

    Spamming attacks: Spamming, or the sending of unsolicited e-mail messages, is typically considered more of an annoyance than an attack, but it can be both. It slows down the system, making it unable to process legitimate messages. In addition, mail servers have a finite amount of storage capacity, which can be overfilled by sending a huge number of messages to the server, effectively leading to a DoS attack on the mail server.

    Sniffing: The process of listening/capturing the traffic going across the network either using a dedicated device or a system configured with special software and a network card set in promiscuous mode. A sniffer basically sits on the network and listens for all traffic going across the network. The software associated with the sniffer is then able to filter the captured traffic allowing the intruder to find passwords and other data sent across the network in clear text. Sniffers have a valid function within information technology by allowing network analysts to troubleshoot network problems, but they can also be very powerful weapons in the hands of intruders.

    3.5 TEMPEST

    TEMPEST is the U.S. government codename for a set of standards for limiting electric or electromagnetic radiation emanations from electronic equipment such as microchips, monitors, or printers. It helps ensure that devices are not susceptible to attacks like Van Eck Phreaking.

    3.6 Clipping Level

    Using clipping levels refers to setting allowable thresholds on reported activity. Clipping levels set a baseline for normal user errors, and only violations exceeding that threshold will be recorded for analysis of why the violations occurred.

    For example, a clipping level of three can be set for reporting failed log-on attempts at a workstation. Thus, three or fewer log-on attempts by an individual at a workstation will not be reported as a violation (eliminating the need to review normal log-on entry errors).
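
    A minimal Python sketch of this three-attempt clipping level, with made-up counts, follows:

        from collections import Counter

        CLIPPING_LEVEL = 3
        failed_logons = Counter({"alice": 2, "bob": 7, "carol": 3})

        for user, count in failed_logons.items():
            if count > CLIPPING_LEVEL:
                print(f"violation: {user} had {count} failed logons")
        # Only bob is reported; alice and carol fall within normal user error.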

    4.0 RISK, RESPONSE, AND RECOVERY

    Risk Management: The identification, measurement and control of risk.
    Risk Assessment: The process of determining the relationship of threats to vulnerabilities and the controls in place, and the resulting impact (an objective process).
    Risk Analysis: Using a risk analysis process to determine the overall risk (a subjective process). The negative impact can be loss of integrity, availability or confidentiality. The RA should recommend controls to mitigate the risk (i.e. counter-measures).

    4.1 Risk Management (Risks, Threats, Vulnerabilities and exposures)

    Risk management is the cyclical process of identification, measurement and control of loss associated with adverse events. It includes risk analysis, the selection/evaluation of safeguards, cost-benefit analysis, safeguard/countermeasure implementation, etc. It is made up of multiple steps (p. 231 SSCP):

    Identification: Each risk that is potentially harmful is identified.
    Assessment: The consequences of a potential threat are determined, and the likelihood and frequency of a risk occurring are analyzed.
    Planning: The data collected is put into a meaningful format and used to create strategies to diminish or remove the impact of a risk.
    Monitoring: Risks are tracked and strategies evaluated on a cyclical basis; even when a risk has been dealt with it cannot be forgotten.
    Control: Steps are taken to correct plans that are not working and to improve the management of risk.

    Vulnerability: A weakness in an information system that could be exploited by a threat agent (e.g. a software bug).
    Threat: Any potential danger that can harm an information system, accidental or intentional (e.g. a hacker).
    Risk: The likelihood of a threat agent taking advantage of a vulnerability. Risk = Threat x Vulnerability.
    Exposure: An instance of being exposed to losses from a threat agent.
    Assets: The business resources associated with the system (tangible and intangible), including hardware, software, personnel, documentation, information, communications, etc. The partial or complete loss of assets might affect the confidentiality, integrity or availability of the system information.
    Controls: Put in place to reduce, mitigate, or transfer risk. These can be physical, administrative or technical (see 1.3.2). They can also be deterrent, preventive, corrective or detective (see 3.1).
    Safeguards: Controls that provide some amount of protection to assets.
    Countermeasures: Controls that are put in place as a result of a risk analysis to reduce vulnerability.
    Risk Mitigation: The process of selecting and implementing controls to reduce the risk to acceptable levels.

    Note: Risks can be reduced, accepted, managed, mitigated, transferred or deemed to require additional analysis.

    4.2 Risk Analysis

To identify and analyze risks we need to perform a risk analysis. There are two general methodologies used for risk analysis.

4.2.1 Quantitative Risk Analysis: The results show the quantity in terms of value (money). It gives real numbers for the cost of countermeasures and the amount of damage that can occur. The process is mathematical and known as risk modeling, based on probability models.
AV (Asset Value): The value of the asset in dollars.
EF (Exposure Factor) [max 100%]: The percentage of asset loss caused by a presumed successful attack.
SLE (Single Loss Expectancy) [the cost of a single incident] = AV x EF
ARO (Annualized Rate of Occurrence): The estimated frequency with which a threat will occur within a year = likelihood of an event taking place x the number of times it could occur in a single year.
ALE (Annualized Loss Expectancy) = SLE x ARO
ROI (Return on Investment) = ALE / annualized cost of countermeasures ($). [Generally, if ROI is greater than 1.0 the countermeasure should be put in place.]
Cost/Benefit Analysis: Compares the cost of a control to its benefits. [ALE (before) - ALE (after) - annual cost = value of safeguard] (p. 267 of SSCP for examples)
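A worked example of these formulas in Python; all figures are invented for illustration:

    # Hypothetical figures for a single asset/threat pair.
    asset_value = 100_000.0    # AV in dollars
    exposure_factor = 0.25     # EF: 25% of the asset lost per successful attack
    aro = 0.5                  # ARO: the threat is expected once every two years

    sle = asset_value * exposure_factor   # SLE = AV x EF = $25,000
    ale = sle * aro                       # ALE = SLE x ARO = $12,500 per year

    countermeasure_cost = 5_000.0         # annualized cost of the control
    roi = ale / countermeasure_cost       # 2.5 > 1.0, so the control is justified
    print(sle, ale, roi)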

4.2.2 Qualitative Risk Analysis: Walks through different scenarios of risk possibilities and ranks the seriousness of the threats and the sensitivity of the assets. This provides higher-level, subjective results than quantitative risk analysis.

    Note: Quantitative takes longer and is more complex.


4.3 Risk Analysis/Assessment Tools and Techniques

    DELPHI Delphi techniques involve a group of experts independently rating and ranking business risk for a business process or organization and blending the results into a consensus. Each expert in the Delphi group measures and prioritizes the risk for each element or criteria.

COBRA 'Consultative, Objective and Bi-functional Risk Analysis'. It is a questionnaire-based PC system using expert system principles and an extensive knowledge base. It evaluates the relative importance of all threats and vulnerabilities.

    OCTAVE Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) is a risk-based strategic assessment and planning technique for security.

    NIST Risk Assessment Methodology (SP800-30)

Step 1. System Characterization
Step 2. Threat Identification
Step 3. Vulnerability Identification
Step 4. Control Analysis
Step 5. Likelihood Determination
Step 6. Impact Analysis
Step 7. Risk Determination
Step 8. Control Recommendations
Step 9. Results Documentation

    4.4 Recovery from Disaster

    Disaster/business recovery planning is a critical component of the business continuity planning process.

4.4.1 Business continuity planning (BCP) (p.268 SSCP) BCP is the process of proactively developing, documenting, and integrating the processes and procedures that will allow an organization to respond to a disaster such that critical business functions continue with minimal or insignificant changes until its normal facilities are restored. BCP encompasses the full restoration process of all business operations.

Because BCP focuses on restoring the normal business functions of the entire business, it is important that critical business functions are identified. It is the responsibility of each department to define those requirements that are essential to continuing their operations. Therefore, it is important to assess and document the requirements for each department within the business continuity plan. This is typically performed through a business impact analysis (BIA). Note: A BIA is usually performed immediately prior to doing the BCP.

4.4.2 Disaster recovery planning (DRP) (p.271 SSCP) Disaster recovery plans should document the precautions taken so that the effects of a disaster will be minimized and the actions that must be taken to enable the organization to either maintain or quickly resume business-critical systems. A DRP covers three areas: 1) Emergency response 2) Backup 3) Recovery

    BCP addresses restoring key business functions whilst a DRP focuses on restoring information systems.

4.4.3 Backups (p.277 SSCP)
Full Backup: Backs up all data in a single backup job. Changes the archive bit.
Incremental: Backs up all data changed since the last backup (i.e. new and modified). Changes the archive bit.
Differential: Backs up all data changed since the last full backup. Does not change the archive bit.
Copy Backup: Makes a full backup but does not change the archive bit.

A typical tape rotation schedule is Grandfather (monthly full backup) - Father (weekly) - Son (daily). It is also important to remember that a backup is only as good as its ability to be restored - test restores of data should be performed regularly.
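A minimal Python sketch of the archive-bit behaviour described above (the file names and changed flags are invented):

    # Each file carries an archive bit that is set whenever the file changes.
    files = {"a.doc": True, "b.xls": False, "c.txt": True}  # True = changed

    def full_backup(files):
        backed_up = list(files)              # backs up everything...
        for name in files:
            files[name] = False              # ...and clears the archive bit
        return backed_up

    def incremental_backup(files):
        backed_up = [n for n, bit in files.items() if bit]  # changed files only
        for name in backed_up:
            files[name] = False              # clears the archive bit
        return backed_up

    def differential_backup(files):
        # Everything changed since the last full backup; bits left untouched,
        # so each differential grows until the next full backup.
        return [n for n, bit in files.items() if bit]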

4.4.4 Business Impact Analysis (BIA): A process used to help business units understand the impact of a disruptive event. The impact may be financial (quantitative - loss of stock value) or operational (qualitative - unable to respond to customers). A BIA identifies the company's critical systems needed for survival and estimates the outage time that can be tolerated. The first step of a BIA is to identify all business units within the organization (you then interview each to determine its criticality).

4.4.5 Testing Disaster Recovery Plans: Various methods exist to test the DRP:
Checklist test: Copies of the plan are distributed to management/participants for review.
Structured Walkthrough test: Business unit management meets in a room to review the plan.
Simulation test: All support personnel meet in a practice execution session.
Parallel test: A complete live test without taking the operational system down (e.g. on staging).
Full-Interruption test: Normal production is shut down, with real disaster recovery processes.


4.4.6 Restoration and Recovery
Hot Site: The most prepared facility (and most expensive); it has the necessary hardware, software, phone lines, network connections etc to allow a business to resume business functions almost immediately.
Warm Site: Not as well equipped as a hot site, but has part of the necessary hardware, software, network etc needed to restore business functions quickly. Most commonly used.
Cold Site: Cheaper; ready for equipment to be brought in during an emergency, but no hardware resides at the site, though it does have AC, electrical wiring etc. May not work when a disaster strikes.
Reciprocal Site: An arrangement with another company, so that one will accommodate the other in the event of an emergency - not ideal for large companies. The cheapest option. The main concern is compatibility of equipment.

    When deciding on appropriate locations for alternate sites, it is important that they be in different geographical locations that cannot be victim to the same disaster. This should be balanced with the need for the alternate site not to be so far away that it will significantly add to the downtime.

    When moving business functions to an alternate site the most critical should be moved first. When moving business functions back to primary site, the least critical should be moved first.

4.5 Response to Incident (p. 282 SSCP) Incident: A violation of an explicit or implied security policy. E.g. attempts to gain unauthorized access to a system/data, unauthorized use of a system for processing/storage of data, changes to system hardware/software without authorization.

Incident response: Activities that are performed when a security-related incident occurs that has the potential for an adverse effect on the system or organization. The objectives of incident response and the subsequent investigation are as follows:

- Control and manage the incident (i.e. ensure all applicable logs/evidence are preserved).
- Timely investigation and assessment of the severity of the incident (i.e. draw up a list of suspects, understand how the intruder gained access, document the damage caused etc).
- Timely recovery or bypass of the incident to resume normal operating conditions (i.e. restore the breached system to its original state, whilst ensuring it is secure).
- Timely notification of the incident to senior administrators/management (i.e. communicate the results of the investigation, especially if there are legal impacts).
- Prevention of similar incidents (i.e. apply security measures to ensure the breach cannot occur again).

Generally speaking, the following steps should be conducted when investigating an incident:
- Contact senior management and the Incident Response Team.
- Do NOT power down or reboot the system or open any files (i.e. in no way alter the system state).
- Unplug the system from the network.
- Document any processes that are running and any open files/error messages etc.
- Save the contents of memory/page files and any system or application logs.
- If possible make a byte-by-byte image of the physical disk (ideally on write-once media - e.g. CD).

    Because any evidence collected may be used in possible criminal proceedings, thorough documentation must be kept. In particular a chain of custody must be established for any evidence acquired.

    A chain of custody proves where a piece of evidence was at a given time and who was responsible for it. This helps ensure the integrity of the evidence.

Best evidence: Original or primary evidence rather than a copy or duplicate.
Secondary: A copy of evidence or an oral description of its contents.
Direct: Proves/disproves a specific act through oral testimony based on information gathered through the witness's five senses.
Real: Tangible objects/physical evidence.
Conclusive: Incontrovertible - overrides all other evidence.
Opinions: Two different types: Expert - may offer an opinion based on personal expertise or facts; Nonexpert - may testify only as to facts.
Circumstantial: Inference of information from other, intermediate, relevant facts.
Documentary: Printed business records, manuals, printouts.
Demonstrative: Used to aid a jury (charts, illustrations etc).
Corroborative: Supporting evidence used to help prove an idea or point. It cannot stand on its own, but is used as a supplementary tool to help prove a primary piece of evidence.
Hearsay: Also known as second-hand evidence - evidence not based on the personal, first-hand knowledge of the witness but obtained from another source. Not usually admissible in court (the hearsay rule), though there are exceptions. Computer-based evidence is considered hearsay, but is admissible if relevant.


    5.0 CRYPTOGRAPHY

Cryptography: The science of secret writing that enables you to store and transmit data in a form that is available only to the intended individuals.
Cryptosystem: A hardware or software implementation of cryptography that transforms a message to ciphertext and back to plaintext.
Cryptanalysis: Recovering plaintext from ciphertext without the key, or breaking the encryption.
Cryptology: The study of both cryptography and cryptanalysis.
Ciphertext: Data in an encrypted or unreadable format.
Encipher: Converting data into an unreadable format.
Decipher: Converting data back into a readable format.
Cryptovariable (key): A secret sequence of bits used for encryption and decryption.
Steganography: The art of hiding the existence of a message in a different medium (e.g. in a JPG, MP3 etc).
Key Escrow: The unit keys are split into two sections and given to two different escrow agencies to maintain.

The Cryptographic Systems - How the plaintext is processed:

Stream ciphers (p.348 SSCP): Stream ciphers are symmetric algorithms that operate on plaintext bit-by-bit. Stream cipher algorithms create a keystream that is combined with the plaintext to create the ciphertext. As with other ciphers, the processing of plaintext uses an XOR operation. RC4 is an example of a stream cipher.

    Block ciphers (p.346 SSCP): Encrypts data in discrete chunks of a fixed size. Block ciphers are symmetric - they use the same secret key for encryption and decryption. Commonly, the block size will be 64 bits, but the ciphers may support blocks of any size, depending on the implementation. 128-bit block ciphers are becoming common.

The Cryptographic Systems - Algorithms used or number of keys used:

Symmetric Encryption Algorithms: Also known as private key, because only one key is used and it must be kept secret for security. Both parties use the same key for encryption and decryption. Much faster than asymmetric systems, and hard to break if using a large key size. Key distribution requires a secure mechanism for key delivery. Limited security services, as it only provides confidentiality. The "out-of-band method" means that the key is transmitted through a different channel than the message.
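For illustration, a symmetric cipher via the third-party Python cryptography package (assumes the package is installed; Fernet is an AES-based construction, used here only to show that one shared key does both operations):

    from cryptography.fernet import Fernet  # AES-based symmetric cipher

    key = Fernet.generate_key()       # the single shared secret key
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"confidential data")
    plaintext = cipher.decrypt(ciphertext)   # the same key decrypts
    assert plaintext == b"confidential data"

The hard part in practice is delivering that key to the other party securely, which is exactly the key-distribution problem noted above.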

    Asymmetric Encryption Algorithms: Also known as public key. Two different asymmetric keys are mathematically related - public and private key. Better key distribution than symmetric systems. Better scalability than symmetric systems. Can provide confidentiality, authentication and non-repudiation.

    Key clustering = When a plaintext message generates identical ciphertext messages using the same transformation algorithm, but with different keys.

Secure message format: The entire message is encrypted with the receiver's public key - only the receiver can decrypt the message using his/her own private key, thus ensuring confidentiality. [This is the normal method]
Open message format: The entire message is encrypted with the sender's private key - anyone can decrypt the message using the sender's public key, but they can be sure that the message originated from the sender.
Secure and signed format: Signed with the sender's private key and the entire message encrypted with the receiver's public key. Only the receiver can decrypt the message using his/her own private key, thus ensuring confidentiality. By signing the message with the sender's private key, the receiver can verify its authenticity using the sender's public key. [Most secure]

    5.1 Symmetric Encryption Algorithms (p.333 SSCP)

Data Encryption Standard (DES) [Sometimes referred to as Data Encryption Algorithm - DEA]: Based on IBM's 128-bit algorithm Lucifer. A block encryption algorithm: 64 bits in -> 64 bits out. 56 bits make up the true (effective) key and 8 bits are used for parity. Each 64-bit block is divided in half, and the halves are put through 16 rounds of transposition and substitution.
3DES: Uses 48 rounds in its computation. Heavy performance hit - it can take up to three times longer than DES to perform encryption and decryption. 168-bit key size (i.e. 3 x 56).
Advanced Encryption Standard (AES): NIST replacement standard for DES, based on Rijndael. AES is a block cipher with a variable block and key length. It employs a round transformation comprised of three layers of distinct and invertible transformations: the non-linear layer, the linear mixing layer, and the key addition layer. AES has 3 key length options: 128, 192 and 256 bits.

  • 22

International Data Encryption Algorithm (IDEA): A 128-bit key is used. A block cipher that operates on 64-bit blocks of data. The 64-bit data block is divided into 16 smaller blocks and each has eight rounds of mathematical functions performed on it. Used in PGP.
Skipjack: Used for electronic encryption devices (hardware). This makes it unique, since the other algorithms might be implemented in either hardware or software. Skipjack operates in a manner similar to DES, but uses an 80-bit key and 32 rounds, rather than 56-bit keys and 16 rounds (DES).
Blowfish: A block cipher that works on 64-bit blocks of data. The key length can be up to 448 bits and the data blocks go through 16 rounds of cryptographic functions.
RC5: A block cipher that has a variety of parameters it can use for block size, key size and the number of rounds used. Block sizes: 32/64/128 bits; key size up to 2048 bits. (RC4, by contrast, is a stream cipher - see above.)

    5.2 Asymmetric Encryption Algorithms (p. 331 SSCP)

Diffie-Hellman Algorithm: This was the first published use of public key cryptography (1976). Because of the inherent slowness of asymmetric cryptography, the Diffie-Hellman algorithm was not intended for use as a general encryption scheme; rather, its purpose was to transmit a private key for the Data Encryption Standard (DES) (or some similar symmetric algorithm) across an insecure medium - i.e. key distribution.
RSA: Rivest, Shamir and Adleman proposed another public key encryption system. Provides authentication (digital signature), encryption and key exchange. Used in many web browsers with SSL and in SSH. Security is based on the difficulty of factoring large numbers.
Digital Signature Algorithm (DSA), aka Digital Signature Standard (DSS - see below): A public key algorithm that utilizes public and private key pairs. Only the private key is capable of creating a signature. This permits verification of the sender's identity as well as assurance of the integrity of the message data that has been signed. The hash function used in the creation and verification process is defined in the Secure Hash Standard (SHS). The private key and the digest (hash value) are used as inputs to the DSA, which generates the signature. For message and sender verification, the recipient uses the hash function to create a message digest, and the sender's public key is then used to verify the signature. Allowed key sizes range from 512 to 1,024 bits. DSA is slower than RSA for signature verification.
Elliptic Curve Cryptosystem (ECC): Provides digital signatures, secure key distribution and encryption. Requires a smaller percentage of the resources than other systems; its security rests on the elliptic curve discrete logarithm problem, which gives equivalent security with much shorter keys.
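A toy sketch of the Diffie-Hellman exchange in Python, with deliberately tiny, insecure numbers (real deployments use primes of 2048 bits or more):

    # Publicly agreed parameters (toy-sized for readability).
    p = 23   # prime modulus
    g = 5    # generator

    a = 6    # Alice's private value, never transmitted
    b = 15   # Bob's private value, never transmitted

    A = pow(g, a, p)   # Alice sends g^a mod p over the insecure medium
    B = pow(g, b, p)   # Bob sends g^b mod p over the insecure medium

    # Each side combines the other's public value with its own secret.
    shared_alice = pow(B, a, p)
    shared_bob = pow(A, b, p)
    assert shared_alice == shared_bob   # both derive the same shared key

The derived value can then be used as the key for a symmetric algorithm such as DES, which is the key-distribution purpose described above.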

    5.3 Symmetric vs Asymmetric Systems

Attribute: Symmetric system vs Asymmetric system
Keys: Symmetric - one key for encryption and decryption. Asymmetric - two keys, one for encryption, another for decryption.
Key exchange: Symmetric - out of band. Asymmetric - the symmetric key is encrypted and sent with the message; thus, the key is distributed in-band.
Speed: Symmetric - faster algorithm. Asymmetric - more complex and slower (resource intensive).
Key length: Symmetric - fixed key length. Asymmetric - variable key length.
Practical use: Symmetric - for encryption of large files. Asymmetric - for key exchange (secret key) and distribution of keys.
Security: Symmetric - confidentiality and integrity. Asymmetric - confidentiality, integrity, authentication and non-repudiation.

    5.4 Message integrity

One-way hash: A function that takes a variable-length string (a message) and compresses and transforms it into a fixed-length value referred to as a hash value. The hash value of a one-way hash is called a message digest. It cannot be performed in reverse. It only provides integrity of a message, not confidentiality or authentication. It is used in hashing to create a fingerprint for a message.
Digital signatures: An encrypted hash value of a message. First compute the hash of the document, then encrypt the message digest with the sender's private key. The result is the digital signature.
Digital signature standard (DSS): A standard for digital signatures, functions and acceptable use. It is a standard and does NOT concern itself with encryption.
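A short illustration of a message digest as an integrity fingerprint, using SHA-256 from Python's standard hashlib module (a later member of the SHA family than those listed below):

    import hashlib

    message = b"Transfer $100 to account 42"
    digest = hashlib.sha256(message).hexdigest()   # fixed-length fingerprint

    # Any change to the message yields a completely different digest,
    # which is what makes the hash useful as an integrity check.
    tampered = hashlib.sha256(b"Transfer $900 to account 42").hexdigest()
    assert digest != tampered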

    5.4.1 Hash Algorithms (p.337 SSCP)

MD4: Produces 128-bit hash values. Used for high-speed computation in software implementations and is optimized for microprocessors.
MD5: Produces 128-bit hash values. More complex than MD4. Processes text in 512-bit blocks.
MD2: Produces 128-bit hash values. Slower than MD4 and MD5.
SHA: Produces 160-bit hash values. This is then input into the DSA, which computes the signature for a message. The message digest is signed instead of the whole message.
SHA-1: Updated version of SHA.


HAVAL: A variable-length one-way hash function and a faster modification of MD5. Processes text in 1024-bit blocks. HAVAL compresses a message of arbitrary length into a digest of 128, 160, 192, 224 or 256 bits. In addition, HAVAL has a parameter that controls the number of passes in which a message block (of 1024 bits) is processed: a block can be processed in 3, 4 or 5 passes.

Hash Salting: The process of adding random data (a salt) to the input before it is hashed. Many hashes have weaknesses or could be looked up in a precomputed hash lookup table (if the table were big enough and the computer fast enough). Salting negates this weakness, since identical inputs no longer produce identical hashes. Cryptographic protocols that use salts include SSL.
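A minimal Python sketch of salting (the password and the salt length are invented):

    import hashlib, os

    password = b"hunter2"
    salt = os.urandom(16)    # random per-user salt

    # The salt is mixed into the input *before* hashing, so identical
    # passwords produce different digests and precomputed lookup
    # tables become useless.
    digest = hashlib.sha256(salt + password).hexdigest()

    # The salt is stored alongside the digest so the same computation
    # can be repeated when checking a log-on attempt.
    stored = (salt, digest)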

    5.5 Link and end-to-end encryption

Link encryption: Encrypts all the data along a specific communication path, like a T3 line or telephone circuit. Data, headers, trailers, addresses and routing data that are part of the packets are all encrypted. Provides protection against packet sniffers and eavesdroppers. Packets have to be decrypted at each hop and encrypted again. Operates at the lower (physical/data link) levels of the OSI model.
End-to-end encryption: Only the data is encrypted. Usually initiated at the application layer of the originating computer. The data stays encrypted from one end of its journey to the other. Higher granularity of encryption is available because each application or user can use a different key. Operates at the application level of the OSI model.

    5.6 Cryptography for Emails

Privacy-Enhanced Mail (PEM): Provides confidentiality, authentication, and non-repudiation. Specific components that can be used:
- Messages encrypted with DES in CBC mode
- Authentication provided by MD2 or MD5
- Public key management provided by RSA
- X.509 standard used for certification structure and format

Message Security Protocol (MSP): Can sign and encrypt messages and perform hashing functions.
Pretty Good Privacy (PGP): Developed by Phil Zimmermann. Uses RSA public key encryption for key management and the IDEA symmetric cipher for bulk encryption of data. PGP uses pass-phrases to encrypt the user's private key that is stored on their hard drive. It also provides digital signatures.
S/MIME - Secure Multipurpose Internet Mail Extensions: S/MIME is the RSA-developed standard for encrypting and digitally signing electronic mail that contains attachments and for providing secure electronic data interchange (EDI). Provides confidentiality through the user's encryption algorithm, integrity through the user's hashing algorithm, authentication through the use of X.509 public key certificates, and non-repudiation through cryptographically signed messages - i.e. it uses a public-key based, hybrid encryption scheme.

    5.7 Internet Security

    S-HTTP - Secure Hypertext Transport Protocol: Encrypts messages with session keys. Provides integrity and sender authentication capabilities. Used when an individual message needs to be encrypted. HTTPS: Protects the communication channel between two computers. Uses SSL and HTTP. Used when all information that passes between two computers needs to be encrypted.

SSL (Secure Sockets Layer): Protects a communication channel by use of public key encryption. Uses public-key (asymmetric) cryptography for key exchange and certificate-based authentication, and private-key (symmetric) cryptography for traffic encryption. Provides data encryption, server authentication, message integrity and optional client authentication. Keeps the communication path open until one of the parties requests to end the session (uses TCP). Lies beneath the application layer and above the transport layer of the OSI model. Originally developed by Netscape - version 3 was designed with public input and subsequently became the Internet standard known as TLS (Transport Layer Security). If asked at what layer of the OSI model SSL operates, the answer is Transport.
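A sketch of an SSL/TLS client using Python's standard ssl module (assumes network access to example.com; the handshake performs the key exchange and server authentication described above):

    import socket, ssl

    context = ssl.create_default_context()   # verifies the server certificate

    with socket.create_connection(("example.com", 443)) as sock:
        # wrap_socket runs the TLS handshake: certificate-based server
        # authentication plus negotiation of the symmetric session keys.
        with context.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())                    # e.g. 'TLSv1.3'
            print(tls.getpeercert()["subject"])     # the server's certificate DN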

    SET - Secure Electronic Transaction: System for ensuring the security of financial transactions on the Internet. Mastercard, Visa, Microsoft, and others supported it initially. With SET, a user is given an electronic wallet (digital certificate) and a transaction is conducted and verified using a combination of digital certificates and digital signatures in a way that ensures privacy and confidentiality. Uses some but not all aspects of a PKI. SSH: Used to securely login and work on a remote computer over a network. Uses a tunneling mechanism that provides terminal like access to computers. Should be used instead of telnet, ftp, rsh etc.

IPSec (Internet Protocol Security): A method of setting up a secure channel for protected data exchange between two devices. Provides security to the actual IP packets at the network layer. Usually used to establish VPNs. It is an open, modular framework that provides a lot of flexibility. Suitable only for protecting upper-layer protocols. IPSec uses two protocols: AH and ESP.


AH (Authentication Header): Supports access control, data origin authentication, and connectionless integrity. AH provides integrity, authentication and non-repudiation - it does NOT provide confidentiality.
ESP (Encapsulating Security Payload): Uses cryptographic mechanisms to provide source authentication, confidentiality and message integrity.

    IPSec works in two modes: 1. Transport mode: Only the payload of the message is encrypted. (for peer-to-peer) 2. Tunnel mode: Payload, routing and header information is encrypted. (for gateway-to-gateway)

    5.8 PKI (p. 355 SSCP)

Public key cryptography was introduced in 1976 by Diffie and Hellman, and in 1977 Rivest, Shamir and Adleman designed the RSA cryptosystem (the first public key system). Each public key cryptosystem has its own policies, procedures and technology required to manage the system. The X.509 standard provides a basis for defining data formats and procedures for the distribution of public keys via certificates that are digitally signed by CAs.

    5.8.1 X.509

    X.509 is the standard used to define what makes up a digital certificate. It was developed from the X.500 standard for Directory Services. Section 11.2 of X.509 describes a certificate as allowing an association between a user's distinguished name (DN) and the user's public key. A common X.509 certificate would include: DN, Serial Number, Issuer, Valid From, Valid To, Public Key, Subject etc.
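A sketch of reading these fields from a certificate with the third-party Python cryptography package (assumes a PEM file named cert.pem exists; recent versions of the package need no explicit backend argument):

    from cryptography import x509

    with open("cert.pem", "rb") as f:
        cert = x509.load_pem_x509_certificate(f.read())

    print(cert.subject)           # the subject's distinguished name (DN)
    print(cert.issuer)            # the issuing CA
    print(cert.serial_number)
    print(cert.not_valid_before, cert.not_valid_after)   # validity period
    print(cert.public_key())      # the public key bound to the DN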

    The following are the components of a PKI:

    Digital certificate: An electronic file issued by a trusted third party Certificate Authority (CA). It contains credentials of that individual along with other identifying information (i.e. a user's public key). There are two types of digital certificates: server certificates and personal certificates.

Certificate Authority (CA): An organization that maintains and issues public key certificates; it is equivalent to a passport office. CAs are responsible for the lifetime of a certificate - i.e. issuing, expiration etc. CAs issue certificates validating the identity of a user or system with a digital signature. CAs also revoke certificates by publishing them to the CRL. Cross-certification is the act or process by which two CAs each certify a public key of the other, issuing a public-key certificate to that other CA, enabling users that are certified under different certification hierarchies to validate each other's certificates.

    Note: A key is renewed at or near the end of key's lifetime, provided none of the information has changed. If any information used to issue the key changes it should be revoked and a new key issued.

Certificate Revocation List (CRL): A list of every certificate that has been revoked, for whatever reason. This list is maintained periodically and made available to concerned parties. CRLs are usually based on an LDAP server.

Registration Authority (RA): Performs the certificate registration duties. An RA is internal to a CA and provides the interface between the user and the CA. It authenticates the identity of the users and submits the certificate request to the CA.

    PKI provides confidentiality, access control, integrity, authentication and non-repudiation. PKI enabled applications and standards that rely on PKI include SSL, S/MIME, SET, IPSec and VPN.

    5.9 Cryptographic Attacks

Ciphertext-only attack: Capturing several samples of ciphertext encrypted using the same algorithm and analyzing them to determine the key.
Known-plaintext attack: The attacker has both the plaintext and the ciphertext produced from it, and can analyze the pair to determine the key.
Chosen-plaintext attack: The attacker can choose the plaintext that gets encrypted. This is typically used when dealing with a black-box type of encryption algorithm.
Man-in-the-middle attack: An attacker positions themselves in the communication path and eavesdrops on (or relays) the conversation between two parties. Using digital signatures during the session-key exchange can circumvent the attack.
Dictionary attack: Takes a password file containing one-way function values, runs the most commonly used passwords through the same one-way function, and compares the two sets of results.
Replay attack: An attacker copies a ticket, breaks the encryption, and then tries to impersonate the client by resubmitting the ticket at a later time to gain unauthorized access to a resource.
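A minimal sketch of the dictionary attack in Python (the stolen hash and wordlist are invented, and the hash is deliberately unsalted - salting, covered in 5.4.1, is the standard defence):

    import hashlib

    # Hypothetical stolen entry: an unsalted SHA-1 password hash.
    stolen_hash = hashlib.sha1(b"letmein").hexdigest()

    # The attacker runs common passwords through the same one-way
    # function and compares the results against the stolen file.
    wordlist = ["password", "123456", "qwerty", "letmein"]
    for guess in wordlist:
        if hashlib.sha1(guess.encode()).hexdigest() == stolen_hash:
            print("Password recovered:", guess)
            break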


    6.0 DATA COMMUNICATIONS

    6.1 Data Communication Models:

TCP/IP layer vs OSI layer - description (example protocols/devices):
Application = 7 Application: Provides different services to the applications (HTTP, FTP, Telnet, SET, HTTP-S). Provides non-repudiation at the application level.
Application = 6 Presentation: Converts the information (ASCII, JPEG, MIDI, MPEG, GIF).
Application = 5 Session: Handles problems which are not communication issues (SQL, gateways, NetBEUI).
Transport = 4 Transport: Provides end-to-end communication control (TCP, UDP, TLS/SSL).
Internet = 3 Network (packets): Routes the information in the network (IP, IPX, ICMP, RIP, OSPF, IPSec; routers).
Network = 2 Data Link (frames): Provides error control between adjacent nodes (Ethernet, Token Ring, FDDI, SLIP, PPP, RARP, L2F, L2TP, PPTP, ISDN, 802.11; switches, bridges).
Network = 1 Physical (bits): Connects the entity to the transmission media (UTP, coax, voltage levels, signaling; hubs, repeaters); converts bits into voltage for transmission.

The session layer enables communication between two computers to happen in three different modes:
1. Simplex: Communication takes place in one direction only.
2. Half-duplex: Communication takes place in both directions, but only one system can send information at a time.
3. Full-duplex: Communication takes place in both directions, and both systems can send information at the same time.
Data Link (Layer 2) is primarily responsible for error correction at the bit level; Transport (Layer 4) is primarily responsible for error correction at the packet level.

    6.2 TCP/IP - Transmission Control Protocol/Internet Protocol

IP: Its main task is to support inter-network addressing and packet forwarding and routing. It is a connectionless protocol that envelops data passed to it from the transport layer. IPv4 uses 32 bits for its addresses; IPv6 uses 128 bits.
TCP: A reliable, connection-oriented protocol that ensures packets are delivered to the destination computer. If a packet is lost during transmission, TCP has the capability to resend it. Provides reliability and ensures that the packets are delivered. There is more overhead in a TCP packet.

Encapsulation Process: [diagram omitted - as data moves down the stack, each layer wraps the data from the layer above with its own header]

    Note: The IP header contains a protocol field. Common values are 1=ICMP 2=IGMP 6=TCP 17=UDP
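A small Python sketch of reading that protocol field from a raw IPv4 header (the header bytes are invented for illustration):

    # 20 bytes of a hypothetical IPv4 header with no options.
    header = bytes.fromhex("450000544fd940004006b1e6c0a80001c0a80002")

    protocol = header[9]   # the protocol field sits at byte offset 9
    names = {1: "ICMP", 2: "IGMP", 6: "TCP", 17: "UDP"}
    print(protocol, names.get(protocol))   # prints: 6 TCP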

    TCP Handshake: 1. Host sends a SYN packet 2. Receiver answers with a SYN/ACK packet 3. Host sends an ACK packet
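In Python, this three-way handshake happens implicitly inside connect(); a sketch (assumes network access to example.com):

    import socket

    # create_connection() performs the SYN, SYN/ACK, ACK exchange under
    # the hood before any application data is sent.
    with socket.create_connection(("example.com", 80), timeout=5) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        print(sock.recv(200))   # first bytes of the HTTP response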

UDP: A best-effort, connectionless protocol. It does not provide packet sequencing or flow and congestion control, and the destination does not acknowledge every packet it receives. The