
Business Environment & Concepts: Terminology

Access Controls

An access control system enables an authority to control access to areas and resources in a given physical facility or computer-based information system. Within the field of physical security, an access control system is generally seen as the second layer in the security of a physical structure.

Access control is, in reality, an everyday phenomenon. A lock on a car door is essentially a form of access control. A PIN on an ATM system at a bank is another means of access control. Bouncers standing in front of a nightclub are perhaps a more primitive mode of access control (given the evident lack of information technology involved). Access control is of prime importance when persons seek to secure important, confidential, or sensitive information and equipment.

Item control or electronic key management is an area within (and possibly integrated with) an access control system which concerns the managing of possession and location of small assets or physical (mechanical) keys.

Physical access


Physical access by a person may be allowed depending on payment, authorization, etc. There may also be one-way traffic of people. These controls can be enforced by personnel such as a border guard, a doorman, a ticket checker, etc., or with a device such as a turnstile. There may be fences to prevent circumventing this access control. An alternative to access control in the strict sense (physically controlling access itself) is a system of checking authorized presence; see, e.g., ticket controllers in transportation. A variant is exit control, e.g., of a shop (checkout) or a country.

In physical security, the term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human (a guard, bouncer, or receptionist), through mechanical means such as locks and keys, or through technological means such as access control systems like the access control vestibule. Within these environments, physical key management may also be employed as a means of further managing and monitoring access to mechanically keyed areas or access to certain small assets.

Physical access control is a matter of who, where, and when. An access control system determines who is allowed to enter or exit, where they are allowed to exit or enter, and when they are allowed to enter or exit. Historically this was partially accomplished through keys and locks. When a door is locked, only someone with a key can enter through the door, depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates, and they do not provide records of the key used on any specific door; moreover, the keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed.

Electronic access control uses computers to overcome the limitations of mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys. The electronic access control system grants access based on the credential presented. When access is granted, the door is unlocked for a predetermined time and the transaction is recorded. When access is refused, the door remains locked and the attempted access is recorded. The system will also monitor the door and raise an alarm if the door is forced open or held open too long after being unlocked.

Access control system operation

When a credential is presented to a reader, the reader sends the credential's information, usually a number, to a control panel, a highly reliable processor. The control panel compares the credential's number to an access control list, grants or denies the presented request, and sends a transaction log to a database. When access is denied based on the access control list, the door remains locked. If there is a match between the credential and the access control list, the control panel operates a relay that in turn unlocks the door. The control panel also ignores a door-open signal to prevent an alarm. Often the reader provides feedback, such as a flashing red LED for access denied and a flashing green LED for access granted.
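As a concrete illustration, the grant-or-deny decision just described can be sketched in a few lines of C. This is a minimal sketch, assuming a simple in-memory access control list of authorized credential numbers; the names and numbers (acl, present_credential, 10482) are illustrative, not taken from any real panel firmware.

#include <stdio.h>
#include <stdbool.h>

#define ACL_SIZE 3

/* the access control list: credential numbers authorized for this door */
static const unsigned long acl[ACL_SIZE] = { 10482, 20931, 77215 };

static bool acl_contains(unsigned long credential)
{
    for (int i = 0; i < ACL_SIZE; i++)
        if (acl[i] == credential)
            return true;
    return false;
}

static void present_credential(unsigned long credential)
{
    if (acl_contains(credential)) {
        printf("credential %lu: access granted, relay unlocks door\n", credential);
        /* a real panel would also suppress the door-open alarm here */
    } else {
        printf("credential %lu: access denied, door stays locked\n", credential);
    }
    /* every attempt, granted or refused, goes to the transaction log */
}

int main(void)
{
    present_credential(10482);  /* on the list: unlocked */
    present_credential(55555);  /* not on the list: refused */
    return 0;
}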

The above description illustrates a single-factor transaction. Credentials can be passed around, thus subverting the access control list. For example, Alice has access rights to the server room but Bob does not. Alice either gives Bob her credential or Bob takes it; he now has access to the server room. To prevent this, two-factor authentication can be used. In a two-factor transaction, the presented credential and a second factor are needed for access to be granted. The second factor can be a PIN, a second credential, operator intervention, or a biometric input. Often the factors are characterized as:

something you have, such as an access badge or passcard;
something you know, such as a PIN or password;
something you are, typically a biometric input.


Credential

A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being that enables an individual to access a given physical facility or computer-based information system. Typically, credentials can be something you know (such as a number or PIN), something you have (such as an access badge), something you are (such as a biometric feature), or some combination of these items. The typical credential is an access card, key fob, or other key. There are many card technologies, including magnetic stripe, bar code, Wiegand, 125 kHz proximity, 26-bit card-swipe, contact smart cards, and contactless smart cards. Also available are key fobs, which are more compact than ID cards and attach to a key ring. Typical biometric technologies include fingerprint, facial recognition, iris recognition, retinal scan, voice, and hand geometry.

Credentials for an access control system are typically held within a database, which stores access credentials for all staff members of a given firm or organisation. Assigning access control credentials derives from the basic questions of access control: who has access to a given area, why the person should have access to that area, and where given persons should have access. As an example, in a given firm, senior management figures may need general access to all areas of an organisation. ICT staff may need access primarily to computer software, hardware, and general computer-based information systems. Janitors and maintenance staff may need access chiefly to service areas, cleaning closets, electrical and heating apparatus, etc.

Access control system components

An access control point, which can be a door, turnstile, parking gate, elevator, or other physical barrier where granting access can be electrically controlled. Typically the access point is a door. An electronic access control door can contain several elements. At its most basic there is a stand-alone electric lock. The lock is unlocked by an operator with a switch. To automate this, operator intervention is replaced by a reader. The reader could be a keypad where a code is entered, it could be a card reader, or it could be a biometric reader. Readers do not usually make an access decision but send a card number to an access control panel that verifies the number against an access list.

To monitor the door position a magnetic door switch is used. In concept the door switch is not unlike those on refrigerators or car doors. Generally only entry is controlled and exit is uncontrolled. In cases where exit is also controlled, a second reader is used on the opposite side of the door. In cases where exit is not controlled (free exit), a device called a request-to-exit (REX) is used. Request-to-exit devices can be a pushbutton or a motion detector. When the button is pushed or the motion detector detects motion at the door, the door alarm is temporarily ignored while the door is opened.

Exiting a door without having to electrically unlock the door is called mechanical free egress. This is an important safety feature. In cases where the lock must be electrically unlocked on exit, the request-to-exit device also unlocks the door.


Accounting Information System (AIS)

An accounting information system (AIS) is the system of records a business keeps to maintain its accounting system. This includes the purchasing, sales, and other financial processes of the business. The purpose of an AIS is to accumulate data and provide decision makers (investors, creditors, and managers) with information.

While this was previously a paper-based process, most businesses now use accounting software. In an electronic financial accounting system, the steps in the accounting cycle are dependent upon the system itself. For example, some systems allow direct journal posting to the various ledgers and others do not.

Accounting Information Systems (AISs) combine the study and practice of accounting with the design, implementation, and monitoring of information systems. Such systems use modern information technology resources together with traditional accounting controls and methods to provide users the financial information necessary to manage their organizations.

AIS Technology

Input

The input devices commonly associated with AIS include standard personal computers or workstations running applications; scanning devices for standardized data entry; and electronic communication devices for electronic data interchange (EDI) and e-commerce. In addition, many financial systems come "Web-enabled" to allow devices to connect to the World Wide Web.

Process

Basic processing is achieved through computer systems ranging from individual personal computers to large-scale enterprise servers. However, conceptually, the underlying processing model is still the "double-entry" accounting system initially introduced in the fifteenth century.
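To make the double-entry model concrete, here is a minimal C sketch of the rule at its core: a journal entry posts only if total debits equal total credits. The record layout and account names are illustrative assumptions, not a description of any particular AIS.

#include <stdio.h>
#include <math.h>
#include <stdbool.h>

/* one line of a journal entry: an account is debited or credited */
struct line { const char *account; double debit; double credit; };

static bool entry_balances(const struct line *lines, size_t n)
{
    double debits = 0.0, credits = 0.0;
    for (size_t i = 0; i < n; i++) {
        debits  += lines[i].debit;
        credits += lines[i].credit;
    }
    return fabs(debits - credits) < 0.005;  /* equal to the cent */
}

int main(void)
{
    /* a cash sale: debit Cash, credit Sales Revenue */
    struct line sale[] = {
        { "Cash",          500.00,   0.00 },
        { "Sales Revenue",   0.00, 500.00 },
    };
    printf("entry %s\n", entry_balances(sale, 2)
           ? "posts: debits equal credits" : "rejected: out of balance");
    return 0;
}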

Output

Output devices used include computer displays, impact and nonimpact printers, and electronic communication devices for EDI and e-commerce. The output content may encompass almost any type of financial report, from budgets and tax reports to multinational financial statements.

Application Controls

Application controls apply to the processing of individual accounting applications and help ensure the completeness, accuracy, authorization, and validity of transaction processing. Types of application controls include:

Data Capture Controls – ensure that all transactions are recorded in the application system, that transactions are recorded only once, and that rejected transactions are identified, controlled, corrected, and reentered into the system.

Data Validation Controls – ensure that all transactions are properly valued (see the sketch after this list).

Processing Controls – ensure the proper processing of transactions.


Output Controls – ensure that computer output is not distributed or displayed to unauthorized users.

Error Controls – ensure that errors are corrected and resubmitted to the application system at the correct point in processing.
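Here is the data validation sketch referred to above: a minimal C example, assuming each transaction record carries an account code, an amount, and a month, and that validation means a positive amount, an account code in a plausible range, and a month between 1 and 12. All field names and limits are assumptions made for the illustration.

#include <stdio.h>
#include <stdbool.h>

struct transaction {
    int    account;  /* account code, assumed valid range 1000-9999 */
    double amount;   /* must be positive */
    int    month;    /* must be 1..12 */
};

static bool is_valid(const struct transaction *t)
{
    return t->account >= 1000 && t->account <= 9999
        && t->amount > 0.0
        && t->month >= 1 && t->month <= 12;
}

int main(void)
{
    struct transaction good = { 4210,  125.50,  7 };
    struct transaction bad  = { 4210, -125.50, 13 };  /* negative amount, bad month */

    printf("first record:  %s\n", is_valid(&good) ? "accepted" : "rejected");
    printf("second record: %s\n", is_valid(&bad)  ? "accepted" : "rejected");
    return 0;
}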

Application controls may be compromised by the following application risks:

Weak security
Unauthorized access to data and unauthorized remote access
Inaccurate information and erroneous or falsified data input
Misuse by authorized end users
Incomplete processing and/or duplicate transactions
Untimely processing
Communication system failure
Inadequate training and support

Tests of Controls

Tests of controls are audit procedures performed to evaluate the effectiveness of either the design or the operation of an internal control. Tests of controls directed toward the design of the control focus on evaluating whether the control is suitably designed to prevent material weaknesses. Tests of controls directed toward the operation of the control focus on assessing how the control was applied, the consistency with which it was applied, and who applied it. In addition to inquiring with appropriate personnel and observing the application of the control, an IT auditor's main focus when testing the controls is to re-perform the application of the control themselves.

Application Firewall

A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices configured to permit, deny, encrypt, decrypt, or proxy all (in and out) computer traffic between different security domains based upon a set of rules and other criteria.

Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.

There are several types of firewall techniques:

1. Packet filter: Packet filtering inspects each packet passing through the network and accepts or rejects it based on user-defined rules (see the sketch after this list). Although difficult to configure, it is fairly effective and mostly transparent to its users. However, it is susceptible to IP spoofing.

2. Application gateway: Applies security mechanisms to specific applications, such as FTP and Telnet servers. This is very effective, but can degrade performance.

3. Circuit-level gateway: Applies security mechanisms when a TCP or UDP connection is established. Once the connection has been made, packets can flow between the hosts without further checking.

4. Proxy server: Intercepts all messages entering and leaving the network. The proxy server effectively hides the true network addresses.
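The packet-filter technique (item 1 above) can be sketched as a rule table evaluated top-down with a default-deny policy. The following C fragment matches only the destination port; real firewalls match many more header fields, and the rule set shown is an assumption for the example.

#include <stdio.h>
#include <stdbool.h>

struct rule {
    unsigned short port;  /* destination port the rule applies to */
    bool allow;           /* permit or deny matching packets */
};

/* user-defined rules, checked in order */
static const struct rule rules[] = {
    { 80,  true  },  /* allow HTTP */
    { 443, true  },  /* allow HTTPS */
    { 23,  false },  /* explicitly block Telnet */
};

static bool packet_allowed(unsigned short dst_port)
{
    for (size_t i = 0; i < sizeof rules / sizeof rules[0]; i++)
        if (rules[i].port == dst_port)
            return rules[i].allow;
    return false;  /* default deny: no rule matched */
}

int main(void)
{
    printf("port 443  -> %s\n", packet_allowed(443)  ? "accept" : "reject");
    printf("port 23   -> %s\n", packet_allowed(23)   ? "accept" : "reject");
    printf("port 8080 -> %s\n", packet_allowed(8080) ? "accept" : "reject");
    return 0;
}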

Application Software

Application software is computer software designed to help the user perform a particular task. Such programs are also called software applications, applications or apps. Typical examples are word processors, spreadsheets, media players and database applications.

Application software should be contrasted with system software (infrastructure) or middleware (computer services/process integrators), which is involved in integrating a computer's various capabilities but typically does not directly apply them in the performance of tasks that benefit the user. A simple, if imperfect, analogy in the world of hardware would be the relationship of an electric light bulb (an application) to an electric power generation plant (a system). The power plant merely generates electricity, which is not itself of any real use until harnessed to an application like the electric light, which performs a service that benefits the user.

In computer science, an application is a computer program designed to help people perform a certain type of work. An application thus differs from an operating system (which runs a computer), a utility (which performs maintenance or general-purpose chores), and a programming language (with which computer programs are created). Depending on the work for which it was designed, an application can manipulate text, numbers, graphics, or a combination of these elements. Some application packages offer considerable computing power by focusing on a single task, such as word processing; others, called integrated software, offer somewhat less power but include several applications. [1]

User-written software tailors systems to meet the user's specific needs. User-written software includes spreadsheet templates, word processor macros, scientific simulations, and graphics and animation scripts. Even email filters are a kind of user software. Users create this software themselves and often overlook how important it is.

The delineation between system software such as operating systems and application software is not exact, however, and is occasionally the object of controversy. For example, one of the key questions in the United States v. Microsoft antitrust trial was whether Microsoft's Internet Explorer web browser was part of its Windows operating system or a separable piece of application software. As another example, the GNU/Linux naming controversy is, in part, due to disagreement about the relationship between the Linux kernel and the operating systems built over this kernel. In some types of embedded systems, the application software and the operating system software may be indistinguishable to the user, as in the case of software used to control a VCR, DVD player, or microwave oven.

Audit Software

Computer Assisted Audit Techniques or Computer Aided Audit Tools (CAATs), also known as Computer Assisted Audit Tools and Techniques (CAATTs), is a growing field within the financial audit profession. CAATTs is the practice of using computers to automate or simplify the audit process. In the broadest sense of the term, CAATTs can refer to any use of a computer during the audit. This would include utilizing basic software packages such as SAS, Excel, Access, Crystal Reports, Cognos, Business Objects, and also word processors. In practice, however, CAATTs has become synonymous with incorporating data analytics into the audit process. This is one of the emerging fields within the audit profession.

Traditional Audit Example

Traditionally, auditors have been criticized because they reach conclusions based upon limited samples. It is not uncommon for an auditor to sample 30-50 transactions and declare a problem or conclude that "controls appear to be effective." Management, upon hearing the verdict of the auditors, may question the validity of the audit conclusions. Management realizes that they conduct thousands or perhaps millions of transactions a year and the auditor only sampled a handful. The auditor will then state that they conducted the sample based upon Generally Accepted Auditing Standards (GAAS) and that their sample was statistically valid. The auditor is then forced to defend their methodology.

Another common criticism of the audit profession occurs after a problem emerges. Whenever a problem emerges within a department, management might ask, "Where were the auditors?" If the audit department had reviewed the area recently, it becomes a sticky situation as the audit manager attempts to explain that the problem wasn't identified because it was outside the scope of the audit. The audit manager might also try to explain that the sample was "a statistically valid sample with a 95% confidence level." The audit committee doesn't care that the audit was conducted according to GAAS; they only care that a problem went unnoted by the audit department.

CAATTs Alternative

CAATTs addresses these problems. CAATTs, as it is commonly used, is the practice of analyzing large volumes of data looking for anomalies. A well-designed CAATTs audit will not be a sample, but rather a complete review of all transactions. Using CAATTs, the auditor will extract every transaction the business unit performed during the period reviewed. The auditor will then test that data to determine if there are any problems in the data. For example, using CAATTs the auditor can find invalid Social Security Numbers (SSN) by comparing the SSN to the issuing criteria of the Social Security Administration. The CAATTs auditor could also easily look for duplicate vendors or transactions, as sketched below. When such a duplicate is identified, they can approach management with the knowledge that they tested 100% of the transactions and that they identified 100% of the exceptions.
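A minimal sketch of the duplicate test just mentioned: sort the full payment file so that identical vendor/amount pairs become adjacent, then compare neighbors. The record layout and figures are illustrative assumptions.

#include <stdio.h>
#include <stdlib.h>

struct payment { int vendor; double amount; };

/* order by vendor, then by amount, so duplicates end up adjacent */
static int cmp(const void *a, const void *b)
{
    const struct payment *p = a, *q = b;
    if (p->vendor != q->vendor)
        return p->vendor - q->vendor;
    return (p->amount > q->amount) - (p->amount < q->amount);
}

int main(void)
{
    struct payment pays[] = {
        { 301, 500.00 }, { 417, 75.25 }, { 301, 500.00 }, { 520, 99.99 },
    };
    size_t n = sizeof pays / sizeof pays[0];

    qsort(pays, n, sizeof pays[0], cmp);
    for (size_t i = 1; i < n; i++)
        if (pays[i].vendor == pays[i-1].vendor && pays[i].amount == pays[i-1].amount)
            printf("possible duplicate: vendor %d, amount %.2f\n",
                   pays[i].vendor, pays[i].amount);
    return 0;
}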


Traditional Audit vs CAATTs on Specific Risks

Another advantage of CAATTs is that it allows auditors to test for specific risks. For example, an insurance company may want to ensure that it doesn't pay any claims after a policy is terminated. Using traditional audit techniques, this risk would be very difficult to test. The auditor would "randomly select" a "statistically valid" sample of claims (usually 30-50). They would then check to see if any of those claims were processed after a policy was terminated. Since the insurance company might process millions of claims, it is extremely unlikely that any of those 30-50 "randomly selected" claims occurred after the policy was terminated. And even if one or two of those claims were for a date of service after the policy termination date, what does that tell the auditor?

Using CAATTs, the auditor can select every claim that had a date of service after the policy termination date. The auditor then can determine if any claims were inappropriately paid. If they were, the auditor can then figure out why the controls to prevent this failed. In a real-life audit, the CAATTs auditor noted that a number of claims had been paid after policies were terminated. Using CAATTs, the auditor was able to identify every claim that was paid and the exact dollar amount incorrectly paid by the insurance company. Furthermore, the auditor was able to identify the reason why these claims were paid: the participant had paid their premium. The insurance company, having received a payment, paid the claims. Then, after the claim was paid, the participant's check bounced. When the check bounced, the participant's policy was retroactively terminated, but the claim was still paid, costing the company hundreds of thousands of dollars per year.
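The claims test described above reduces to a single pass over the complete claim file. This C sketch flags every claim whose date of service falls after the policy's termination date; the YYYYMMDD date encoding and record layout are simplifying assumptions.

#include <stdio.h>

struct claim {
    int    policy_id;
    long   service_date;      /* date of service, YYYYMMDD */
    long   termination_date;  /* policy termination date, YYYYMMDD */
    double amount_paid;
};

int main(void)
{
    struct claim claims[] = {
        { 101, 20080110, 20081231,  250.00 },
        { 102, 20080405, 20080331, 1800.00 },  /* service after termination */
        { 103, 20080620, 20080601,  940.00 },  /* service after termination */
    };
    int n_exceptions = 0;
    double total = 0.0;

    for (size_t i = 0; i < sizeof claims / sizeof claims[0]; i++) {
        if (claims[i].service_date > claims[i].termination_date) {
            n_exceptions++;
            total += claims[i].amount_paid;
            printf("exception: policy %d, service %ld after termination %ld\n",
                   claims[i].policy_id, claims[i].service_date,
                   claims[i].termination_date);
        }
    }
    printf("%d exceptions totaling %.2f paid on terminated policies\n",
           n_exceptions, total);
    return 0;
}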

Which looks better in an audit report:

"Audit reviewed 50 transactions and noted one transaction that was processed incorrectly"

or

"Audit utilized CAATTS and tested every transaction over the past year. We noted XXX exceptions wherein the company paid YYY dollars on terminated policies."

However, the CAATTs-driven review is limited to the data saved in files in accordance with a systematic pattern. Much data is never documented this way. In addition, saved data often contains deficiencies, is poorly classified, is not easy to get at, and it might be hard to become convinced of its integrity. So, for the present, CAATTs is a complement to an auditor's tools and techniques. In certain audits CAATTs can't be used at all, but there are also audits which simply can't be performed with due care and efficiency without CAATTs.

Specialized Software

In the most general terms, CAATTs can refer to any computer program utilized to improve the audit process. Generally, however, it is used to refer to any data extraction and analysis software. This would include programs such as SAS, Excel, Access, Crystal Reports, Business Objects, etc. There are, however, two main companies that have developed specialized data analytic software specifically for auditors: Audit Command Language (ACL) and Interactive Data Extraction and Analysis (IDEA). InformationActive's ActiveData For Excel, sold under CCH's ProSystem fx and CorpSystem brands, implements data analytics for auditors in the Excel environment. Other players include Monarch and the newly released (Q4 2008) TopCAATs.

Benefits of audit software include:

They are independent of the system being audited and will use a read-only copy of the file to avoid any corruption of an organization's data.
Many audit-specific routines are used, such as sampling.
They provide documentation of each test performed in the software that can be used as documentation in the auditor's work papers.

Audit specialized software can easily perform the following functions:

Data queries
Data stratification
Sample extractions
Missing sequence identification
Statistical analysis
Calculations
Duplicate inquiries
Pivot tables
Cross tabulation

B2B

Business-to-business (B2B) describes commerce transactions between businesses, such as between a manufacturer and a wholesaler, or between a wholesaler and a retailer. Contrasting terms are business-to-consumer (B2C) and business-to-government (B2G).

The volume of B2B transactions is much higher than the volume of B2C transactions. The primary reason for this is that in a typical supply chain there will be many B2B transactions involving subcomponents or raw materials, and only one B2C transaction, specifically the sale of the finished product to the end customer. For example, an automobile manufacturer makes several B2B transactions such as buying tires, glass for windshields, and rubber hoses for its vehicles. The final transaction, a finished vehicle sold to the consumer, is a single (B2C) transaction.

B2C

Business-to-consumer (B2C, sometimes also called Business-to-Customer)[1] describes activities of businesses serving end consumers with products and/or services.


An example of a B2C transaction would be a person buying a pair of shoes from a retailer. The transactions that led to the shoes being available for purchase, that is, the purchase of the leather, laces, rubber, etc., as well as the sale of the shoes from the shoemaker to the retailer, would be considered (B2B) transactions.

Backdoor

A backdoor in a computer system (or cryptosystem or algorithm) is a method of bypassing normal authentication, securing remote access to a computer, obtaining access to plaintext, and so on, while attempting to remain undetected. The backdoor may take the form of an installed program or could be a modification to an existing program or hardware device.

The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference.[1] They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor. However, since the advent of public key cryptography the term trapdoor has acquired a different meaning. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under ARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970.[2]

A backdoor in a login system might take the form of a hard coded user and password combination which gives access to the system. A famous example of this sort of backdoor was as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hardcoded password (his dead son's name) which gave the user access to the system, and to undocumented parts of the system (in particular, a video game–like simulation mode and direct interaction with the artificial intelligence).
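A hardcoded-credential backdoor of the kind described is easy to picture in code. The sketch below is deliberately simplified and purely illustrative; the function name and password are invented (the password nods to the WarGames plot).

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

static bool check_password(const char *user, const char *pass)
{
    (void)user;  /* the backdoor does not care who is logging in */
    if (strcmp(pass, "joshua") == 0)  /* hardcoded backdoor password */
        return true;
    /* ... the legitimate credential lookup would go here ... */
    return false;
}

int main(void)
{
    printf("%s\n", check_password("guest", "joshua") ? "access granted" : "access denied");
    return 0;
}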

An attempt to plant a backdoor in the Linux kernel, exposed in November 2003, showed how subtle such a code change can be.[3] In this case a two-line change appeared to be a typographical error, but actually gave the caller to the sys_wait4 function root access to the system.[4]

Although the number of backdoors in systems using proprietary software (software whose source code is not publicly available) is not widely credited, they are nevertheless frequently exposed. Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission.

It is also possible to create a backdoor without modifying the source code of a program, or even modifying it after compilation. This can be done by rewriting the compiler so that it recognizes code during compilation that triggers inclusion of a backdoor in the compiled output. When the compromised compiler finds such code, it compiles it as normal, but also inserts a backdoor (perhaps a password recognition routine). So, when the user provides that input, he gains access to some (likely undocumented) aspect of program operation. This attack was first outlined by Ken Thompson in his famous paper Reflections on Trusting Trust (see below).


Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a PC on broadband running insecure versions of Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony/BMG rootkit distributed silently on millions of music CDs through late 2005, are intended as DRM measures — and, in that case, as data gathering agents, since both surreptitious programs they installed routinely contacted central servers.

A traditional backdoor is a symmetric backdoor: anyone that finds the backdoor can in turn use it. The notion of an asymmetric backdoor was introduced by Adam Young and Moti Yung in the Proceedings of Advances in Cryptology: Crypto '96. An asymmetric backdoor can only be used by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g., via publishing, being discovered and disclosed by reverse engineering, etc.). Also, it is computationally intractable to detect the presence of an asymmetric backdoor under black-box queries. This class of attacks has been termed kleptography; they can be carried out in software, hardware (for example, smartcards), or a combination of the two. The theory of asymmetric backdoors is part of a larger field now called cryptovirology.

There exists an experimental asymmetric backdoor in RSA key generation. This OpenSSL RSA backdoor was designed by Young and Yung, utilizes a twisted pair of elliptic curves, and has been made available.

Batch Processing

Batch processing is execution of a series of programs ("jobs") on a computer without manual intervention.

Batch jobs are set up so they can be run to completion without manual intervention, so all input data is preselected through scripts or command-line parameters. This is in contrast to "online" or interactive programs which prompt the user for such input. A program takes a set of data files as input, processes the data, and produces a set of output data files. This operating environment is termed "batch processing" because the input data are collected into batches of files and are processed in batches by the program.

Batch processing has these benefits:

It allows sharing of computer resources among many users and programs.
It shifts the time of job processing to when the computing resources are less busy.
It avoids idling the computing resources with minute-by-minute manual intervention and supervision.
By keeping the overall rate of utilization high, it better amortizes the cost of a computer, especially an expensive one.

History


Batch processing has been associated with mainframe computers since the earliest days of electronic computing in the 1950s. Because such computers were enormously costly, batch processing was the only economically viable way to use them. In those days, interactive sessions with either text-based computer terminal interfaces or graphical user interfaces were not widespread. Initially, computers were not even capable of having multiple programs loaded into main memory.

Batch processing has grown beyond its mainframe origins and is now frequently used in UNIX environments and Microsoft Windows too. UNIX systems use shells and other scripting languages. DOS systems use batch files powered by COMMAND.COM; Microsoft Windows has cmd.exe, Windows Script Host, and the more advanced Windows PowerShell.

Modern Systems

Despite their long history, batch applications are still critical in most organizations. While online systems are now used when manual intervention is not desired, they are not well suited to high-volume, repetitive tasks. Therefore, even new systems usually contain a batch application for cases such as updating information at the end of the day, generating reports, and printing documents.

Modern batch applications make use of modern batch frameworks such as Spring Batch, which is written for Java, to provide the fault tolerance and scalability required for high-volume processing. In order to ensure high-speed processing, batch applications are often integrated with grid computing solutions to partition a batch job over a large number of processors.

Common batch processing usage

Printing

A popular computerized batch processing procedure is printing. This normally involves the operator selecting the documents they need printed and indicating to the batch printing software when and where they should be output, as well as the priority of the print job. The jobs are then sent to the print queue, from which the printing daemon sends them to the printer.

Databases

Batch processing is also used for efficient bulk database updates and automated transaction processing, as contrasted to interactive online transaction processing (OLTP) applications.

Images

Batch processing is often used to perform various operations with digital images. There exist computer programs like Batch Image Processor that let one resize, convert, watermark, or otherwise edit image files.


Converting

Batch processing is also used for converting a number of computer files from one format to another. This makes files portable and versatile, especially for proprietary and legacy formats where viewers are not easy to come by.

Job scheduling

UNIX utilizes cron and at facilities to allow for scheduling of complex job scripts. Windows has a job scheduler. Most high-performance computing clusters use batch processing to maximize cluster usage.

Batch Total

A batch total checks for missing records. Numerical fields may be added together for all records in a batch. The batch total is entered, and the computer checks that the total is correct by, e.g., adding the 'Total Cost' field of a number of transactions together.
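A minimal C sketch of the batch total check: the computer re-adds the 'Total Cost' field across all records in the batch and compares the result with the control total keyed in by the operator. The figures are illustrative.

#include <stdio.h>
#include <math.h>

int main(void)
{
    double total_cost[] = { 120.00, 89.95, 342.10, 15.00 };  /* one per record */
    double entered_batch_total = 567.05;                     /* keyed in separately */
    double computed = 0.0;

    for (size_t i = 0; i < sizeof total_cost / sizeof total_cost[0]; i++)
        computed += total_cost[i];

    if (fabs(computed - entered_batch_total) < 0.005)
        printf("batch accepted: totals agree (%.2f)\n", computed);
    else
        printf("batch rejected: computed %.2f vs entered %.2f (missing or miskeyed record?)\n",
               computed, entered_batch_total);
    return 0;
}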

Bit

In computing and telecommunications a bit is a basic unit of information storage and communication (a contraction of "binary digit"). It is the maximum amount of information that can be stored by a device or other physical system that can normally exist in only two distinct states. These states are often interpreted (especially in the storage of numerical data) as the binary digits 0 and 1. They may be interpreted also as logical values, either "true" or "false"; or two settings of a flag or switch, either "on" or "off".

In information theory, "one bit" is typically defined as the uncertainty of a binary random variable that is 0 or 1 with equal probability,[1] or the information that is gained when the value of such a variable becomes known.[2]

Byte

There are several units of information which are defined as multiples of bits, such as the byte (8 bits), the kilobit (either 1,000 or 2^10 = 1,024 bits), the megabyte (either 8,000,000 or 8×2^20 = 8,388,608 bits), etc.

Centralized Processing

Centralized computing is computing done at a central location, using terminals that are attached to a central computer. The computer itself may control all the peripherals directly (if they are physically connected to the central computer), or they may be attached via a terminal server.


Alternatively, if the terminals have the capability, they may be able to connect to the central computer over the network. The terminals may be text terminals or thin clients, for example.

It offers greater security than decentralized systems because all of the processing is controlled in a central location. In addition, if one terminal breaks down, the user can simply go to another terminal and log in again, and all of their files will still be accessible. Depending on the system, they may even be able to resume their session from the point they were at before, as if nothing had happened.

This type of arrangement does have some disadvantages. The central computer performs the computing functions and controls the remote terminals. This type of system relies totally on the central computer. Should the central computer crash, the entire system will "go down" (i.e. will be unavailable).

History

The very first computers did not have separate terminals as such; their primitive input/output devices were built in. However, it was soon found to be extremely useful for multiple people to be able to use a computer at the same time, for reasons of cost: early computers were very expensive, both to produce and maintain, and occupied large amounts of floor space. The idea of centralized computing was born. Early text terminals used electro-mechanical teletypewriters, but these were replaced by cathode ray tube displays (as found in 20th-century televisions and computers). The text terminal model dominated computing from the 1960s until the rise to dominance of home computers and personal computers in the 1980s.

Contemporary status

As of 2007, centralized computing is coming back into fashion, to a certain extent. Thin clients have been used for many years by businesses to reduce total cost of ownership, while web applications are becoming more popular because they can potentially be used on many types of computing device without any need for software installation. Already, however, there are signs that the pendulum is swinging back again, away from pure centralization, as thin client devices become more like diskless workstations due to increased computing power, and web applications start to do more processing on the client side, with technologies such as AJAX and rich clients.

In addition, mainframes are still being used for some mission-critical applications, such as payroll, or for processing day-to-day account transactions in banks. These mainframes will typically be accessed either using terminal emulators (real terminal devices are not used much any more), via modern front-ends such as web applications, or (in the case of automated access) via protocols such as web services.

Hybrid client model


Some organisations use a hybrid client model partway between centralized computing and conventional desktop computing, in which some applications (such as web browsers) are run locally, while other applications (such as critical business systems) are run on the terminal server. One way to implement this is simply by running remote desktop software on a standard desktop computer.

Check Digit

A check digit is a form of redundancy check used for error detection, the decimal equivalent of a binary checksum. It consists of a single digit computed from the other digits in the message.

With a check digit, one can detect simple errors in the input of a series of digits, such as a single mistyped digit or the permutation of two successive digits.

The final digit of a Universal Product Code is a check digit computed as follows:[1]

1. Add the digits (up to but not including the check digit) in the odd-numbered positions (first, third, fifth, etc.) together and multiply by three.

2. Add the digits (up to but not including the check digit) in the even-numbered positions (second, fourth, sixth, etc.) to the result.

3. If the last digit of the result is 0, then the check digit is 0.

4. Otherwise, the check digit is the smallest number required to round the sum up to the nearest multiple of 10.

For instance, the UPC-A barcode for a box of tissues is "036000241457". The last digit is the check digit "7", and if the other numbers are correct then the check digit calculation must produce 7, as the worked steps and the code sketch below show.

1. Add the digits in the odd-numbered positions: 0+6+0+2+1+5 = 14.

2. Multiply the result by 3: 14 × 3 = 42.

3. Add the digits in the even-numbered positions: 3+0+0+4+4 = 11.

4. Add the two results together: 42 + 11 = 53.

5. The nearest multiple of 10 is 60, and 60 - 53 = 7. Therefore, 7 is the check digit.[2]
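The calculation above can be expressed as a small C function that takes the first 11 digits and returns the check digit; it reproduces the 7 for the tissue-box example.

#include <stdio.h>

static int upc_check_digit(const char *digits)  /* first 11 digits as a string */
{
    int sum = 0;
    for (int i = 0; i < 11; i++) {
        int d = digits[i] - '0';
        /* positions count from 1, so even array index = odd position */
        sum += (i % 2 == 0) ? 3 * d : d;
    }
    return (10 - sum % 10) % 10;  /* smallest addend reaching a multiple of 10 */
}

int main(void)
{
    printf("check digit: %d\n", upc_check_digit("03600024145"));  /* prints 7 */
    return 0;
}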

ISBN 10

The final character of a ten digit International Standard Book Number is a check digit computed so that multiplying each digit by its position in the number (counting from the right) and taking the sum of these products modulo 11 is 0. The last digit (which is multiplied by 1) is the check digit, chosen to make the sum correct. It may need to have the value 10, which is represented as the letter X. For example, take the ISBN 0-201-53082-1. The sum of products is 0×10 + 2×9 + 0×8 + 1×7 + 5×6 + 3×5 + 0×4 + 8×3 + 2×2 + 1×1 = 99 ≡ 0 modulo 11. So the ISBN is valid.


While this may seem more complicated than the first scheme, it can be validated very simply by adding all the products together then dividing by 11. The sum can be computed without any multiplications by initializing two variables, t and sum, to 0 and repeatedly performing t = t + digit; sum = sum + t; (which can be expressed in C as sum += t += digit;). If the final sum is a multiple of 11, then the ISBN is valid.
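That shortcut translates directly into C using the sum += t += digit; expression quoted above; 'X' as the final character stands for 10, and hyphens are skipped.

#include <stdio.h>
#include <stdbool.h>

static bool isbn10_valid(const char *isbn)
{
    int t = 0, sum = 0;
    for (; *isbn; isbn++) {
        if (*isbn == '-')  /* hyphens are not part of the number */
            continue;
        int digit = (*isbn == 'X') ? 10 : *isbn - '0';
        sum += t += digit;  /* running sum of the running sum */
    }
    return sum % 11 == 0;
}

int main(void)
{
    /* the example from the text: 0-201-53082-1 is a valid ISBN */
    printf("0-201-53082-1: %s\n", isbn10_valid("0-201-53082-1") ? "valid" : "invalid");
    return 0;
}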

Chief Security Officer (CSO)

A chief security officer (CSO) is a corporation's top executive who is responsible for security. The CSO serves as the business leader responsible for the development, implementation and management of the organization's corporate security vision, strategy and programs. They direct staff in identifying, developing, implementing and maintaining security processes across the organization to reduce risks, respond to incidents, and limit exposure to liability in all areas of financial, physical, and personal risk; establish appropriate standards and risk controls associated with intellectual property; and direct the establishment and implementation of policies and procedures related to data security. Those primarily responsible for information security may have the title of Chief Information Security Officer (CISO) to differentiate the positions.

Client/Server

Client-server computing or networking is a distributed application architecture that partitions tasks or workloads between service providers (servers) and service requesters, called clients.[1] Often clients and servers operate over a computer network on separate hardware. A server machine is a high-performance host that runs one or more server programs which share its resources with clients. A client does not share any of its resources, but requests a server's content or service function. Clients therefore initiate communication sessions with servers, which await (listen for) incoming requests.

Client-server describes the relationship between two computer programs in which one program, the client program, makes a service request to another, the server program. Standard networked functions such as email exchange, web access, and database access are based on the client-server model. For example, a web browser is a client program at the user computer that may access information at any web server in the world. To check your bank account from your computer, a web browser client program in your computer forwards your request to a web server program at the bank. That program may in turn forward the request to its own database client program that sends a request to a database server at another bank computer to retrieve your account balance. The balance is returned to the bank database client, which in turn serves it back to the web browser client in your personal computer, which displays the information for you.

The client-server model has become one of the central ideas of network computing. Many business applications being written today use the client-server model. So do the Internet's main application protocols, such as HTTP, SMTP, Telnet, DNS. In marketing, the term has been used to distinguish distributed computing by smaller dispersed computers from the "monolithic" centralized computing of mainframe computers. But this distinction has largely disappeared as mainframes and their applications have also turned to the client-server model and become part of network computing.


Each instance of the client software can send data requests to one or more connected servers. In turn, the servers can accept these requests, process them, and return the requested information to the client. Although this concept can be applied for a variety of reasons to many different kinds of applications, the architecture remains fundamentally the same.

The most basic type of client-server architecture employs only two types of hosts: clients and servers. This type of architecture is sometimes referred to as two-tier. It allows devices to share files and resources. In a two-tier architecture, the client acts as one tier and the application running on the server acts as the other tier.
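The request/response relationship can be sketched without any real networking: a service function stands in for the server program and main() plays the client, mirroring the bank-balance example above. This single-process sketch is purely illustrative; the account data and function names are invented.

#include <stdio.h>
#include <string.h>

/* server side: owns the resource and answers requests */
static double server_get_balance(const char *account)
{
    if (strcmp(account, "alice") == 0)
        return 1042.75;  /* illustrative data a real server would fetch from a database */
    return 0.0;
}

/* client side: initiates the session and presents the reply to the user */
int main(void)
{
    const char *request = "alice";                 /* the client sends a request... */
    double balance = server_get_balance(request);  /* ...the server processes it */
    printf("balance for %s: %.2f\n", request, balance);
    return 0;
}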

Cold Site

A cold site is the most inexpensive type of backup site for an organization to operate. It does not include backed-up copies of data and information from the original location of the organization, nor does it include hardware already set up. The lack of hardware contributes to the minimal startup costs of the cold site, but requires additional time following a disaster to have the operation running at a capacity close to that prior to the disaster.

Computer Operator

A role within IT, computer operators oversee the running of computer systems, ensuring that the machines are running and physically secured. The traditional role of a computer operator was to work with mainframes, which required a great deal of day-to-day management. Computer operator positions are distinct from system administrators: they require only a two-year college associate's degree (similar to nuclear power plant operators and car mechanics), are paid significantly less than system administrators, and, before the 1990s, traditionally used operating systems other than UNIX with far more rudimentary operations than a complex UNIX system. The computer operator works in a computer room (nowadays known as a "data center"). The employment of operators has greatly decreased as modern technology has made the more traditional roles obsolete. Most of the duties that operations staff undertake are taught on the job, as the variety of roles is unique to the systems they help manage.

Logging events is also among the operator's roles: listing each backup that is run, machine malfunctions, and the like. Operators assist system administrators and programmers in testing and debugging new systems and programs prior to their becoming production environments.

As modern-day computing has led to a greater proliferation of personal computers, the role of the operator has changed to include these within their duties. Similar roles, such as managing the backup systems, cycling tapes or other media, and filling and maintaining printers, indeed anything that is monotonous or in need of legwork for the system or network administrators, are handled by the operations staff.

The shifting and changing of duties for operators has resulted from the speed of change from older mainframe systems to newer self-managing systems, but overall the operator fills in as a lower-level system administrator.


Computer Programmer

A programmer is someone who writes computer software. The term computer programmer can refer to a specialist in one area of computer programming or to a generalist who writes code for many kinds of software. One who practices or professes a formal approach to programming may also be known as a programmer analyst. A programmer's primary computer language (Lisp, Java, Delphi, C++, etc.) is often prefixed to the above titles, and those who work in a web environment often prefix their titles with web. The term programmer can be used to refer to a software developer, software engineer, computer scientist, or software analyst. However, members of these professions typically possess other software engineering skills, beyond programming; for this reason, the term programmer is sometimes considered an insulting or derogatory oversimplification of these other professions. This has sparked much debate amongst developers, analysts, computer scientists, programmers, and outsiders who continue to be puzzled at the subtle differences in these occupations.[1][2][3][4][5]

Those proficient in computer programming skills may become famous, though this regard is normally limited to software engineering circles. Ada Lovelace is popularly credited as history's first programmer. She was the first to express an algorithm intended for implementation on a computer, Charles Babbage's analytical engine, in October 1842.[6] Her work never ran, though that of Konrad Zuse did in 1941. The ENIAC programming team, consisting of Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman were the first working programmers.[7][8]

Computer programmers write, test, debug, and maintain the detailed instructions, called computer programs, that computers must follow to perform their functions. Programmers also conceive, design, and test logical structures for solving problems by computer. Many technical innovations in programming — advanced computing technologies and sophisticated new languages and programming tools — have redefined the role of a programmer and elevated much of the programming work done today. Job titles and descriptions may vary, depending on the organization.

Programmers work in many settings, including corporate information technology departments, big software companies, and small service firms. Many professional programmers also work for consulting companies at clients' sites as contractors. Licensing is not typically required to work as a programmer, although professional certifications are commonly held by programmers. Programming is widely considered a profession (although some authorities disagree on the grounds that only careers with legal licensing requirements count as a profession).

Programmers' work varies widely depending on the type of business they are writing programs for. For example, the instructions involved in updating financial records are very different from those required to duplicate conditions on an aircraft for pilots training in a flight simulator. Although simple programs can be written in a few hours, programs that use complex mathematical formulas whose solutions can only be approximated or that draw data from many existing systems may require more than a year of work. In most cases, several programmers work together as a team under a senior programmer’s supervision.


Programmers write programs according to the specifications determined primarily by more senior programmers and by systems analysts. After the design process is complete, it is the job of the programmer to convert that design into a logical series of instructions that the computer can follow. The programmer codes these instructions in one of many programming languages. Different programming languages are used depending on the purpose of the program. COBOL, for example, is commonly used for business applications which are run on mainframe and midrange computers, whereas Fortran is used in science and engineering. C++ is widely used for both scientific and business applications. Java, C# and PHP are popular programming languages for Web and business applications. Programmers generally know more than one programming language and, because many languages are similar, they often can learn new languages relatively easily. In practice, programmers often are referred to by the language they know, e.g. as Java programmers, or by the type of function they perform or environment in which they work: for example, database programmers, mainframe programmers, or Web developers.

When making changes to the source code that programs are made up of, programmers need to make other programmers aware of the task that the routine is to perform. They do this by inserting comments in the source code so that others can understand the program more easily. To save work, programmers often use libraries of basic code that can be modified or customized for a specific application. This approach yields more reliable and consistent programs and increases programmers' productivity by eliminating some routine steps.
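
The two practices just described can be shown in a few lines. The following Python sketch (an illustration, not from the source) pairs an explanatory comment with reuse of a standard-library routine in place of hand-written code.

import statistics  # library of "basic code" reused rather than rewritten

def summarize_scores(scores):
    """Return the mean and standard deviation of a list of scores."""
    # A comment like this tells other programmers what the routine does,
    # so the source code is easier to understand and maintain.
    return statistics.mean(scores), statistics.stdev(scores)

print(summarize_scores([88, 92, 79, 85]))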

Testing and debugging

Programmers test a program by running it and looking for bugs. As bugs are identified, the programmer makes the appropriate corrections, then rechecks the program until an acceptably low level and severity of bugs remains. This process is called testing and debugging, and it is an important part of every programmer's job; programmers may continue to fix problems throughout the life of a program. Updating, repairing, modifying, and expanding existing programs is sometimes called maintenance programming. Programmers may contribute to user guides and online help, or they may work with technical writers to do such work.
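
As a minimal sketch of that cycle (an invented example, not from the source), the checks below run the program against known inputs and compare actual with expected output; a failed assertion points at a bug to correct before the checks are run again.

def apply_discount(price, percent):
    return price - price * percent / 100

# Each assertion is one small test of the program's behavior.
assert apply_discount(100, 10) == 90
assert apply_discount(50, 0) == 50
print("all tests passed")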

Certain scenarios or execution paths may be difficult to test, in which case the programmer may elect to test by inspection, which involves a human reading the code on the relevant execution path, perhaps executing it by hand. Test by inspection is also sometimes used as a euphemism for inadequate testing, and it may be difficult to assess whether the term is being used euphemistically.

Customer Relationship Management (CRM)

Customer relationship management (CRM) refers to the methods that companies use to interact with customers. The methods include employee training and special-purpose CRM software. There is an emphasis on handling incoming customer phone calls and e-mail, although the information collected by CRM software may also be used for promotion and for surveys such as those polling customer satisfaction.


Initiatives often fail because implementation was limited to software installation, without providing the context, support and understanding for employees to learn.[1] Tools for customer relationship management should be implemented "only after a well-devised strategy and operational plan are put in place".[2]

Other problems occur when companies fail to think of sales as the output of a process that itself needs to be studied and taken into account when planning automation.[3][4]

From the outside, customers interacting with a company perceive the business as a single entity, despite often interacting with a number of employees in different roles and departments. CRM is a combination of policies, processes, and strategies implemented by an organization to unify its customer interactions and provide a means to track customer information. It involves the use of technology in attracting new and profitable customers, while forming tighter bonds with existing ones.

CRM includes many aspects which relate directly to one another:

Front office operations — Direct interaction with customers, e.g. face to face meetings, phone calls, e-mail, online services etc.

Back office operations — Operations that ultimately affect the activities of the front office (e.g., billing, maintenance, planning, marketing, advertising, finance, manufacturing, etc.)

Business relationships — Interaction with other companies and partners, such as suppliers/vendors and retail outlets/distributors, industry networks (lobbying groups, trade associations). This external network supports front and back office activities.

Analysis — Key CRM data can be analyzed in order to plan target-marketing campaigns, conceive business strategies, and judge the success of CRM activities (e.g., market share, number and types of customers, revenue, profitability).

Proponents of CRM software claim that it allows not only more effective ways of managing customer relationships but also more customer-centric ways of doing business.[5] Executives often cite the lack of proper tools as a barrier to delivering the experience their customers expect. A 2009 study of over 860 corporate executives revealed that only 39% believed their employees had the tools and authority to solve customer problems.[6]

Data

The term data means groups of information that represent the qualitative or quantitative attributes of a variable or set of variables. Data (the plural of "datum", which is seldom used) are typically the results of measurements and can be the basis of graphs, images, or observations of a set of variables. Data are often viewed as the lowest level of abstraction from which information and knowledge are derived.


Database

A database is an integrated collection of logically related records or files consolidated into a common pool that provides data for one or more uses.

One way of classifying databases involves the type of content, for example: bibliographic, full-text, numeric, image. Other classification methods start from examining database models or database architectures: see below.

The data in a database is organized according to a database model. As of 2009 the relational model occurs most commonly. Other models such as the hierarchical model and the network model use a more explicit representation of relationships.

Online transaction processing (OLTP) systems often use a "row-oriented" or an "object-oriented" datastore architecture, whereas data-warehouse and other retrieval-focused applications, such as Google's BigTable or bibliographic database (library catalogue) systems, may use a column-oriented DBMS architecture.

Document-oriented, XML, knowledge-base, frame, and RDF-store (triple-store) databases may also use a combination of these architectures in their implementation.

Not all databases have or need a database schema ("schema-less databases").

Over many years general-purpose database systems have dominated the database industry. These offer a wide range of functions, applicable to many, if not most circumstances in modern data processing. These have been enhanced with extensible datatypes (pioneered in the PostgreSQL project) to allow development of a very wide range of applications.

There are also other types of databases which cannot be classified as relational databases. Most notable is the object database management system, which stores language objects natively without using a separate data definition language and without translating into a separate storage schema. Unlike relational systems, these object databases store the relationship between complex data types as part of their storage model in a way that does not require runtime calculation of related data using relational algebra execution algorithms.

Database management systems

A database management system (DBMS) consists of software that organizes the storage of data. A DBMS controls the creation, maintenance, and use of the database storage structures of organizations and of their end users. It allows organizations to place control of organization-wide database development in the hands of database administrators (DBAs) and other specialists. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way.

Database management systems are usually categorized according to the database model that they support, such as the network, relational or object model. The model tends to determine the query languages that are available to access the database. One commonly used query language for the relational database is SQL, although SQL syntax and function can vary from one DBMS to another. A common query language for the object database is OQL, although not all vendors of object databases implement this. A great deal of the internal engineering of a DBMS is independent of the data model, and is concerned with managing factors such as performance, concurrency, integrity, and recovery from hardware failures. In these areas there are large differences between products.

A relational database management system (RDBMS) implements features of the relational model. In this context, Date's "Information Principle" states: "the entire information content of the database is represented in one and only one way: namely, as explicit values in column positions (attributes) and rows in relations (tuples). Therefore, there are no explicit pointers between related tables." This contrasts with the object database management system (ODBMS), which does store explicit pointers between related types.

Components of DBMS

According to the Wikibooks open-content textbook "Design of Main Memory Database System/Overview of DBMS", most DBMSs as of 2009 are relational DBMSs. Other, less-used DBMS types, such as the object DBMS, are generally used in areas of application-specific data management where performance and scalability take higher priority than the flexibility of ad hoc query capabilities provided via the relational algebra execution algorithms of a relational DBMS.

RDBMS components

Interface drivers - A user or application program initiates either schema modification or content modification through these drivers, which are built on top of SQL. They provide methods to prepare statements, execute statements, fetch results, etc. Examples include DDL, DCL, DML, ODBC, and JDBC. Some vendors provide language-specific proprietary interfaces; for example, MySQL provides drivers for PHP, Python, etc. (A short sketch of this workflow follows the list below.)

SQL engine - This component interprets and executes the SQL query. It comprises three major components (compiler, optimizer, and execution engine).

Transaction engine - Transactions are sequences of operations that read or write database elements, which are grouped together.

Relational engine - Relational objects such as Table, Index, and Referential integrity constraints are implemented in this component.

Storage engine - This component stores and retrieves data records. It also provides a mechanism to store metadata and control information such as undo logs, redo logs, lock tables, etc.
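
A hedged sketch of the driver-and-engine workflow described above, using Python's built-in sqlite3 module as the interface driver (the table and data are invented for illustration):

import sqlite3

conn = sqlite3.connect(":memory:")  # the driver opens a connection
conn.execute("CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT)")
# A parameterized statement: the SQL engine compiles, optimizes, and
# executes it; the transaction engine groups the writes together.
conn.executemany("INSERT INTO customer (name) VALUES (?)",
                 [("Ann",), ("Ben",)])
conn.commit()
for row in conn.execute("SELECT id, name FROM customer"):  # fetch results
    print(row)
conn.close()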


ODBMS components

Language drivers - A user or application program initiates either schema modification or content modification via the chosen programming language. The drivers then provide the mechanism to manage object lifecycle coupling of the application memory space with the underlying persistent storage. Examples include C++, Java, .NET, and Ruby.

Query engine - This component is responsible for interpreting and executing language-specific query commands in the form of OQL, LINQ, JDOQL, JPAQL, and others. The query engine returns language-specific collections of objects which satisfy a query predicate expressed with logical operators, e.g. >, <, >=, <=, AND, OR, NOT, GROUP BY, etc.

Transaction engine - Transactions are sequences of operations that read or write database elements, which are grouped together. The transaction engine is concerned with such things as data isolation and consistency in the driver cache and data volumes by coordinating with the storage engine.

Storage engine - This component stores and retrieves objects in an arbitrarily complex model. It also provides a mechanism to manage and store metadata and control information such as undo logs, redo logs, lock graphs, etc.

Primary tasks of DBMS packages

Database Development: used to define and organize the content, relationships, and structure of the data needed to build a database.

Database Interrogation: can access the data in a database for information retrieval and report generation. End users can selectively retrieve and display information and produce printed reports and documents.

Database Maintenance: used to add, delete, update, correct, and protect the data in a database.

Application Development: used to develop prototypes of data entry screens, queries, forms, reports, tables, and labels for a prototyped application, or to develop program code with a fourth-generation language (4GL) or application generator.

Types

Operational database

These databases store detailed data needed to support the operations of the entire organization. They are also called subject-area databases (SADB), transaction databases, and production databases. Examples include:

Customer databases

Personal databases

Inventory databases


[edit] Analytical database

These databases store data and information extracted from selected operational and external databases. They consist of summarized data and information most needed by an organization's management and other end-users. Some people refer to analytical databases as multidimensional databases, management databases, or information databases.

[edit] Data warehouse

A data warehouse stores data from current and previous years, extracted from the various operational databases of an organization. It becomes the central source of data that has been screened, edited, standardized and integrated so that it can be used by managers and other end-user professionals throughout an organization.

[edit] Distributed database

These are databases of local work-groups and departments at regional offices, branch offices, manufacturing plants and other work sites. These databases can include segments of both common operational and common user databases, as well as data generated and used only at a user’s own site.

[edit] End-user database

These databases consist of a variety of data files developed by end-users at their workstations. Examples of these are collections of documents in spreadsheets, word processing and even downloaded files.

[edit] External database

These databases provide access to external, privately owned data online, available for a fee to end-users and organizations from commercial services. Access to a wealth of information from external databases is available for a fee from commercial online services, and with or without charge from many sources on the Internet.

[edit] Hypermedia databases on the web

These are a set of interconnected multimedia pages at a web site. They consist of a home page and other hyperlinked pages of multimedia or mixed media such as text, graphics, photographic images, video clips, and audio.

Navigational database

Navigational databases are characterized by the fact that objects in them are found primarily by following references from other objects. Traditionally, navigational interfaces are procedural, though one could characterize some modern systems like XPath as being simultaneously navigational and declarative.


In-memory databases

In-memory databases primarily rely on main memory for computer data storage, in contrast with database management systems that employ a disk-based storage mechanism. Main memory databases are faster than disk-optimized databases since the internal optimization algorithms are simpler and execute fewer CPU instructions, and accessing data in memory provides faster and more predictable performance than disk. In applications where response time is critical, such as telecommunications network equipment that operates emergency systems, main memory databases are often used.
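
A small sketch (not from the source): SQLite, run entirely in main memory, behaves like an ordinary SQL database but keeps every page in RAM, so no disk I/O sits on the data path.

import sqlite3
import time

conn = sqlite3.connect(":memory:")  # the whole database lives in RAM
conn.execute("CREATE TABLE reading (sensor TEXT, value REAL)")
start = time.perf_counter()
conn.executemany("INSERT INTO reading VALUES (?, ?)",
                 [("s1", i * 0.5) for i in range(10_000)])
conn.commit()
print(f"10,000 in-memory inserts in {time.perf_counter() - start:.4f}s")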

Document-oriented databases

Document-oriented databases are computer programs designed for document-oriented applications. These systems may be implemented as a layer above a relational database or an object database. As opposed to relational databases, document-based databases do not store data in tables with uniformly sized fields for each record. Instead, they store each record as a document that has certain characteristics. Any number of fields of any length can be added to a document, and fields can also contain multiple pieces of data.
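
A sketch (invented data, not from the source) of that contrast in Python: each record is a free-form document, so two records need not share the same fields, and a field may hold several values.

documents = [
    {"name": "Ann", "emails": ["ann@example.com", "a@work.example"]},
    {"name": "Ben", "phone": "555-0100", "notes": "prefers e-mail"},
]
# Unlike a relational row, each document carries only the fields it needs.
for doc in documents:
    print(doc.get("name"), doc.get("emails", []))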

Real-time databases

A real-time database is a processing system designed to handle workloads whose state may change constantly. This differs from traditional databases containing persistent data, mostly unaffected by time. For example, a stock market changes rapidly and dynamically. Real-time processing means that a transaction is processed fast enough for the result to come back and be acted on right away. Real-time databases are useful for accounting, banking, law, medical records, multi-media, process control, reservation systems, and scientific data analysis. As computers increase in power and can store more data, real-time databases become integrated into society and are employed in many applications.

Data Encryption

In cryptography, encryption is the process of transforming information (referred to as plaintext) using an algorithm (called cipher) to make it unreadable to anyone except those possessing special knowledge, usually referred to as a key. The result of the process is encrypted information (in cryptography, referred to as ciphertext). In many contexts, the word encryption also implicitly refers to the reverse process, decryption (e.g. “software for encryption” can typically also perform decryption), to make the encrypted information readable again (i.e. to make it unencrypted).
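
As a toy illustration of those terms (not a secure cipher), XOR-ing plaintext bytes with a repeating key yields ciphertext, and repeating the operation with the same key recovers the plaintext; real systems use vetted algorithms such as AES instead.

from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # XOR is its own inverse, so the same function encrypts and decrypts.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret-key"
ciphertext = xor_cipher(b"attack at dawn", key)  # unreadable without the key
plaintext = xor_cipher(ciphertext, key)          # readable again
print(ciphertext, plaintext)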

Encryption has long been used by militaries and governments to facilitate secret communication. Encryption is now commonly used in protecting information within many kinds of civilian systems. For example, in 2007 the U.S. government reported that 71% of companies surveyed utilized encryption for some of their data in transit.[1] Encryption can be used to protect data "at rest", such as files on computers and storage devices (e.g. USB flash drives). In recent years there have been numerous reports of confidential data such as customers' personal records being exposed through loss or theft of laptops or backup drives. Encrypting such files at rest helps protect them should physical security measures fail. Digital rights management systems, which prevent unauthorized use or reproduction of copyrighted material and protect software against reverse engineering (see also copy protection), are another, somewhat different, example of using encryption on data at rest.

Encryption is also used to protect data in transit, for example data being transferred via networks (e.g. the Internet, e-commerce), mobile telephones, wireless microphones, wireless intercom systems, Bluetooth devices and bank automatic teller machines. There have been numerous reports of data in transit being intercepted in recent years.[2] Encrypting data in transit also helps to secure it as it is often difficult to physically secure all access to networks.

Encryption, by itself, can protect the confidentiality of messages, but other techniques are still needed to protect the integrity and authenticity of a message; for example, verification of a message authentication code (MAC) or a digital signature. Standards and cryptographic software and hardware to perform encryption are widely available, but successfully using encryption to ensure security may be a challenging problem. A single slip-up in system design or execution can allow successful attacks. Sometimes an adversary can obtain unencrypted information without directly undoing the encryption. See, e.g., traffic analysis, TEMPEST, or Trojan horse.
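
A brief sketch (invented key and message) of the message authentication code mentioned above, using Python's standard hmac module: the receiver recomputes the tag and compares it, which detects tampering but, unlike encryption, does not hide the message.

import hashlib
import hmac

key = b"shared-secret"
message = b"pay 100 to account 42"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# The receiver recomputes the tag over the message it received:
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(tag, hmac.compare_digest(tag, expected))  # True if untampered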

One of the earliest public key encryption applications was called Pretty Good Privacy (PGP), according to Paul Rubens. It was written in 1991 by Phil Zimmermann and was purchased by Network Associates (now PGP Corporation) in 1997.

There are a number of reasons why an encryption product may not be suitable in all cases. First, e-mail must be digitally signed at the point it is created to provide non-repudiation for some legal purposes; otherwise, the sender could argue that the message was tampered with after it left their computer but before it was encrypted at a gateway, according to Rubens. An encryption product may also not be practical when mobile users need to send e-mail from outside the corporate network.[3]

Data Flow Diagram

A data-flow diagram (DFD) is a graphical representation of the "flow" of data through an information system. DFDs can also be used for the visualization of data processing (structured design).

On a DFD, data items flow from an external data source or an internal data store to an internal data store or an external data sink, via an internal process.

A DFD provides no information about the timing or ordering of processes, or about whether processes will operate in sequence or in parallel. It is therefore quite different from a flowchart, which shows the flow of control through an algorithm, allowing a reader to determine what operations will be performed, in what order, and under what circumstances, but not what kinds of data will be input to and output from the system, nor where the data will come from and go to, nor where the data will be stored (all of which are shown on a DFD).


Data Independence: Data independence is the type of data transparency that matters for a centralized DBMS. It refers to the immunity of user applications to changes made in the definition and organization of data, and vice versa.

Physical data independence deals with hiding the details of the storage structure from user applications. The application should not be involved with these issues, since there is no difference in the operation carried out against the data.

Data independence and operation independence together give the feature of data abstraction, and there are two levels of data independence. The logical structure of the data is known as the schema definition; logical data independence indicates that the conceptual schema can be changed without affecting the existing schemas. In general, if a user application operates on a subset of the attributes of a relation, it should not be affected later when new attributes are added to the same relation. The physical structure of the data is referred to as the "physical data description"; physical data independence deals with hiding the details of the storage structure from user applications. The application should not be involved with these issues since, conceptually, there is no difference in the operations carried out against the data.

Data Mart: A data mart is a subset of an organizational data store, usually oriented to a specific purpose or major data subject, that may be distributed to support business needs.[1] Data marts are analytical data stores designed to focus on specific business functions for a specific community within an organization. Data marts are often derived from subsets of data in a data warehouse, though in the bottom-up data warehouse design methodology the data warehouse is created from the union of organizational data marts.

Data Mining: Data mining is the process of extracting patterns from data. As more data are gathered, with the amount of data doubling every three years,[1] data mining is becoming an increasingly important tool to transform these data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection and scientific discovery.

While data mining can be used to uncover patterns in data samples, it is important to be aware that the use of non-representative samples of data may produce results that are not indicative of the domain. Similarly, data mining will not find patterns that may be present in the domain if those patterns are not present in the sample being "mined". There is a tendency for insufficiently knowledgeable "consumers" of the results to attribute "magical abilities" to data mining, treating the technique as a sort of all-seeing crystal ball. Like any other tool, it only functions in conjunction with the appropriate raw material: in this case, indicative and representative data that the user must first collect. Further, the discovery of a particular pattern in a particular set of data does not necessarily mean that pattern is representative of the whole population from which that data was drawn. Hence, an important part of the process is the verification and validation of patterns on other samples of data.

Data Structure: In computer science, a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently.[1][2]

Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to specific tasks. For example, B-trees are particularly well-suited for implementation of databases, while compiler implementations usually use hash tables to look up identifiers.

Data structures are used in almost every program or software system. Specific data structures are essential ingredients of many efficient algorithms, and make possible the management of huge amounts of data, such as large databases and internet indexing services. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design.
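
A minimal sketch (invented names) of the identifier lookup mentioned above, using Python's built-in hash table, the dict: each lookup hashes the key and goes straight to the entry instead of scanning.

symbol_table = {}  # hash table: identifier -> metadata about it
symbol_table["total"] = {"type": "int", "declared_line": 3}
symbol_table["rate"] = {"type": "float", "declared_line": 7}

print(symbol_table["rate"])  # one hashed lookup, no scan of all entries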

Data Warehouse: A data warehouse is a repository of an organization's electronically stored data. Data warehouses are designed to facilitate reporting and analysis.[1]

A data warehouse houses a standardized, consistent, clean and integrated form of data sourced from the various operational systems in use in the organization, structured in a way that specifically addresses reporting and analytic requirements.

This definition of the data warehouse focuses on data storage. However, the means to retrieve and analyze data, to extract, transform and load data, and to manage the data dictionary are also considered essential components of a data warehousing system. Many references to data warehousing use this broader context. Thus, an expanded definition for data warehousing includes business intelligence tools, tools to extract, transform, and load data into the repository, and tools to manage and retrieve metadata.

Database Administrator (DBA): A database administrator (DBA) is a person responsible for the design, implementation, maintenance and repair of an organization's database. DBAs are also known by the titles Database Coordinator or Database Programmer, and the role is closely related to the Database Analyst, Database Modeler, Programmer Analyst, and Systems Manager roles. The role includes the development and design of database strategies, monitoring and improving database performance and capacity, and planning for future expansion requirements. DBAs may also plan, co-ordinate and implement security measures to safeguard the database.[1] Employing organizations may require that a database administrator hold a certification or degree for database systems (for example, the Microsoft Certified Database Administrator).


Database Management System (DBMS): A database management system (DBMS) is a set of computer programs that controls the creation, maintenance, and use of the database of an organization and its end users. It allows organizations to place control of organization-wide database development in the hands of database administrators (DBAs) and other specialists. DBMSs may use any of a variety of database models, such as the network model or relational model. In large systems, a DBMS allows users and other software to store and retrieve data in a structured way. It helps to specify the logical organization for a database and to access and use the information within the database, and it provides facilities for controlling data access, enforcing data integrity, managing concurrency control, and restoring the database.

Database Structure:

Database Tuning: Database tuning describes a group of activities used to optimize and homogenize the performance of a database. It usually overlaps with query tuning, but also refers to the design of the database files and the selection of the database management system (DBMS), the operating system, and the CPU the DBMS runs on.

The goal is to maximize use of system resources to perform work as efficiently and rapidly as possible. Most systems are designed to manage work efficiently, but it is possible to greatly improve performance by customizing settings and the configuration for the database and the DBMS being tuned.

Debugging: Debugging is a methodical process of finding and reducing the number of bugs, or defects, in a computer program or a piece of electronic hardware, thus making it behave as expected. Debugging tends to be harder when various subsystems are tightly coupled, as changes in one may cause bugs to emerge in another.

Decision Support System (DSS): A decision support system is a class of information systems (including but not limited to computerized systems) that support business and organizational decision-making activities. A properly designed DSS is an interactive software-based system intended to help decision makers compile useful information from a combination of raw data, documents, personal knowledge, or business models to identify and solve problems and make decisions.

Typical information that a decision support application might gather and present includes:

an inventory of current information assets (including legacy and relational data sources, cubes, data warehouses, and data marts);

comparative sales figures between one week and the next;

projected revenue figures based on new product sales assumptions.


Denial-of-Service Attack: A denial-of-service attack (DoS attack) or distributed denial-of-service attack (DDoS attack) is an attempt to make a computer resource unavailable to its intended users. Although the means to carry out, motives for, and targets of a DoS attack may vary, it generally consists of the concerted efforts of a person or people to prevent an Internet site or service from functioning efficiently or at all, temporarily or indefinitely. Perpetrators of DoS attacks typically target sites or services hosted on high-profile web servers such as banks, credit card payment gateways, and even root nameservers.

One common method of attack involves saturating the target (victim) machine with external communications requests, such that it cannot respond to legitimate traffic, or responds so slowly as to be rendered effectively unavailable. In general terms, DoS attacks are implemented either by forcing the targeted computer(s) to reset or consuming their resources so that they can no longer provide their intended service, or by obstructing the communication media between the intended users and the victim so that they can no longer communicate adequately.

Denial-of-service attacks are considered violations of the IAB's Internet proper use policy, and also violate the acceptable use policies of virtually all Internet service providers. They also commonly constitute violations of the laws of individual nations.

Digital Certificate: In cryptography, a public key certificate (also known as a digital certificate or identity certificate) is an electronic document which uses a digital signature to bind together a public key with an identity — information such as the name of a person or an organization, their address, and so forth. The certificate can be used to verify that a public key belongs to an individual.

In a typical public key infrastructure (PKI) scheme, the signature will be of a certificate authority (CA). In a web of trust scheme, the signature is of either the user (a self-signed certificate) or other users ("endorsements"). In either case, the signatures on a certificate are attestations by the certificate signer that the identity information and the public key belong together.

For provable security this reliance on something external to the system has the consequence that any public key certification scheme has to rely on some special setup assumption, such as the existence of a certificate authority.[1]

Certificates can be created for Unix-based servers with tools such as OpenSSL's ssl-ca[2] or SuSE's gensslcert. Similarly, Microsoft Windows Server 2003 contains a Certificate Authority for the creation of digital certificates. In Windows Server 2008 the capability is provided by Active Directory Certificate Services.
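
As a hedged sketch, Python's standard ssl module can retrieve the public key certificate a TLS server presents, for inspection; the host name here is illustrative, and the call requires network access.

import ssl

# Fetch the PEM-encoded certificate presented by a TLS server.
pem = ssl.get_server_certificate(("www.example.org", 443))
print(pem[:120], "...")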

Distributed Processing: The word distributed in terms such as "distributed computing", "distributed system", "distributed programming", and "distributed algorithm" originally referred to computer networks where individual computers were physically distributed within some geographical area.[3] The terms are nowadays used in a much wider sense, even when referring to autonomous processes that run on the same physical computer and interact with each other by message passing.[4]

While there is no single definition of a distributed system,[5] the following defining properties are commonly used:

There are several autonomous computational entities, each of which has its own local memory.[6]

The entities communicate with each other by message passing.[7]

Here, the computational entities are called computers or nodes.

A distributed system may have a common goal, such as solving a large computational problem.[8]

Alternatively, each computer may have its own user with individual needs, and the purpose of the distributed system is to coordinate the use of shared resources or provide communication services to the users.[9]

Other typical properties of distributed systems include the following:

The system has to tolerate failures in individual computers.[10]

The structure of the system (network topology, network latency, number of computers) is not known in advance, the system may consist of different kinds of computers and network links, and the system may change during the execution of a distributed program.[11]

Each computer has only a limited, incomplete view of the system. Each computer may know only one part of the input.[12]
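
A small sketch (an invented example) of two autonomous entities with separate local memory cooperating purely by message passing, using Python's multiprocessing queues:

from multiprocessing import Process, Queue

def worker(inbox: Queue, outbox: Queue):
    n = inbox.get()      # receive a message from the other node
    outbox.put(n * n)    # reply with a computed result

if __name__ == "__main__":
    inbox, outbox = Queue(), Queue()
    p = Process(target=worker, args=(inbox, outbox))
    p.start()
    inbox.put(12)        # send work; no memory is shared between processes
    print(outbox.get())  # 144, computed in the other process
    p.join()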

E-Business: Electronic business, commonly referred to as "eBusiness" or "e-Business", may be defined as the utilization of information and communication technologies (ICT) in support of all the activities of business. Commerce constitutes the exchange of products and services between businesses, groups and individuals, and can be seen as one of the essential activities of any business. Electronic commerce focuses on the use of ICT to enable the external activities and relationships of the business with individuals, groups and other businesses.[1]

Louis Gerstner, the former CEO of IBM, in his book, Who Says Elephants Can't Dance? attributes the term "e-Business" to IBM's marketing and Internet teams in 1996.

Electronic business methods enable companies to link their internal and external data processing systems more efficiently and flexibly, to work more closely with suppliers and partners, and to better satisfy the needs and expectations of their customers.

In practice, e-business is more than just e-commerce. While e-business refers to a more strategic focus, with an emphasis on the functions that occur using electronic capabilities, e-commerce is a subset of an overall e-business strategy. E-commerce seeks to add revenue streams, using the World Wide Web or the Internet to build and enhance relationships with clients and partners and to improve efficiency. Often, e-commerce involves the application of knowledge management systems.

E-business involves business processes spanning the entire value chain: electronic purchasing and supply chain management, processing orders electronically, handling customer service, and cooperating with business partners. Special technical standards for e-business facilitate the exchange of data between companies. E-business software solutions allow the integration of intra and inter firm business processes. E-business can be conducted using the Web, the Internet, intranets, extranets, or some combination of these.

E-Commerce: Electronic commerce, commonly known as e-commerce or eCommerce, consists of the buying and selling of products or services over electronic systems such as the Internet and other computer networks. The amount of trade conducted electronically has grown extraordinarily with widespread Internet usage, spurring and drawing on innovations in electronic funds transfer, supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems. Modern electronic commerce typically uses the World Wide Web at least at some point in the transaction's lifecycle, although it can encompass a wider range of technologies, such as e-mail, as well.

A large percentage of electronic commerce is conducted entirely electronically for virtual items such as access to premium content on a website, but most electronic commerce involves the transportation of physical items in some way. Online retailers are sometimes known as e-tailers, and online retail is sometimes known as e-tail. Almost all big retailers have an electronic commerce presence on the World Wide Web.

Electronic commerce that is conducted between businesses is referred to as business-to-business or B2B. B2B can be open to all interested parties (e.g. commodity exchange) or limited to specific, pre-qualified participants (private electronic market). Electronic commerce that is conducted between businesses and consumers, on the other hand, is referred to as business-to-consumer or B2C. This is the type of electronic commerce conducted by companies such as Amazon.com.

Electronic commerce is generally considered to be the sales aspect of e-business. It also consists of the exchange of data to facilitate the financing and payment aspects of the business transactions.

Electronic Access Controls: In computer security, access control includes authentication, authorization and audit. It also includes measures such as physical devices, including biometric scans and metal locks, hidden paths, digital signatures, encryption, social barriers, and monitoring by humans and automated systems.


Electronic Data Interchange (EDI): The National Institute of Standards and Technology, in a 1996 publication,[1] defines electronic data interchange as "the computer-to-computer interchange of strictly formatted messages that represent documents other than monetary instruments. EDI implies a sequence of messages between two parties, either of whom may serve as originator or recipient. The formatted data representing the documents may be transmitted from originator to recipient via telecommunications or physically transported on electronic storage media." It goes on to say that "In EDI, the usual processing of received messages is by computer only. Human intervention in the processing of a received message is typically intended only for error conditions, for quality review, and for special situations. For example, the transmission of binary or textual data is not EDI as defined here unless the data are treated as one or more data elements of an EDI message and are not normally intended for human interpretation as part of online data processing."[2]

EDI can be formally defined as 'The transfer of structured data, by agreed message standards, from one computer system to another without human intervention'. Most other definitions used are variations on this theme. Even in this era of technologies such as XML web services, the Internet and the World Wide Web, EDI is still the data format used by the vast majority of electronic commerce transactions in the world.


Electronic Funds Transfer (EFT)

Electronic funds transfer or EFT refers to the computer-based systems used to perform financial transactions electronically.

The term is used for a number of different concepts:

Cardholder-initiated transactions, where a cardholder makes use of a payment card

Direct deposit payroll payments for a business to its employees, possibly via a payroll services company

Direct debit payments from customer to business, where the transaction is initiated by the business with customer permission

Electronic bill payment in online banking, which may be delivered by EFT or paper check

Transactions involving stored value of electronic money, possibly in a private currency

Wire transfer via an international banking network (generally carries a higher fee)

Electronic Benefit Transfer (EBT), an electronic system in the United States that allows state governments to provide financial and material benefits to authorized recipients via a plastic debit card

Common benefits provided via EBT are typically sorted into two general categories: food stamp and cash benefits. Food stamp benefits are federally authorized benefits that can be used only to purchase food and non-alcoholic beverages. Cash benefits include State General Assistance, TANF (Temporary Aid for Needy Families) benefits, and refugee benefits.

Enterprise Resource Planning (ERP)

Enterprise Resource Planning (ERP) is a term usually used in conjunction with ERP software or an ERP system which is intended to manage all the information and functions of a business or company from shared data stores.[1]

An ERP system typically has modular hardware and software units and "services" that communicate over local area networks, wide area networks, the Internet, and intranets. The modular design allows a business to add or reconfigure modules (perhaps from different vendors) while preserving data integrity in one shared database that may be centralized or distributed.

Ethernet

Ethernet was originally based on the idea of computers communicating over a shared coaxial cable acting as a broadcast transmission medium. The methods used show some similarities to radio systems, although there are fundamental differences, such as the fact that it is much easier to detect collisions in a cable broadcast system than in a radio broadcast. The common cable providing the communication channel was likened to the ether, and it was from this reference that the name "Ethernet" was derived.

From this early and comparatively simple concept, Ethernet evolved into the complex networking technology that today underlies most LANs. The coaxial cable was replaced with point-to-point links connected by Ethernet hubs and/or switches to reduce installation costs, increase reliability, and enable point-to-point management and troubleshooting. StarLAN was the first step in the evolution of Ethernet from a coaxial cable bus to a hub-managed, twisted-pair network. The advent of twisted-pair wiring dramatically lowered installation costs relative to competing technologies, including the older Ethernet technologies.

Above the physical layer, Ethernet stations communicate by sending each other data packets, blocks of data that are individually sent and delivered. As with other IEEE 802 LANs, each Ethernet station is given a single 48-bit MAC address, which is used to specify both the destination and the source of each data packet. Network interface cards (NICs) or chips normally do not accept packets addressed to other Ethernet stations. Adapters generally come programmed with a globally unique address, but this can be overridden, either to avoid an address change when an adapter is replaced, or to use locally administered addresses.
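
As a small sketch (the address value is invented), the 48-bit MAC address described above is conventionally written as six hexadecimal pairs:

mac = 0x001B44113AB7  # a 48-bit address held as an integer
pairs = [(mac >> shift) & 0xFF for shift in range(40, -1, -8)]
print(":".join(f"{p:02x}" for p in pairs))  # 00:1b:44:11:3a:b7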

Despite the significant changes in Ethernet from a thick coaxial cable bus running at 10 Mbit/s to point-to-point links running at 1 Gbit/s and beyond, all generations of Ethernet (excluding early experimental versions) share the same frame formats (and hence the same interface for higher layers), and can be readily interconnected.

Due to the ubiquity of Ethernet, the ever-decreasing cost of the hardware needed to support it, and the reduced panel space needed by twisted pair Ethernet, most manufacturers now build the functionality of an Ethernet card directly into PC motherboards, eliminating the need for installation of a separate network card.

Executive Information System (EIS)

An Executive Information System (EIS) is a type of management information system intended to facilitate and support the information and decision-making needs of senior executives by providing easy access to both internal and external information relevant to meeting the strategic goals of the organization. It is commonly considered a specialized form of Decision Support System (DSS).[1]

The emphasis of EIS is on graphical displays and easy-to-use user interfaces. They offer strong reporting and drill-down capabilities. In general, EIS are enterprise-wide DSS that help top-level executives analyze, compare, and highlight trends in important variables so that they can monitor performance and identify opportunities and problems. EIS and data warehousing technologies are converging in the marketplace.


In recent years, the term EIS has lost popularity in favour of Business Intelligence (with the sub areas of reporting, analytics, and digital dashboards).

Extensible Markup Language (XML)

XML (Extensible Markup Language) is a set of rules for encoding documents electronically. It is defined in the XML 1.0 Specification produced by the W3C and several other related specifications; all are fee-free open standards.[1]

XML’s design goals emphasize simplicity, generality, and usability over the Internet.[2] It is a textual data format, with strong support via Unicode for the languages of the world. Although XML’s design focuses on documents, it is widely used for the representation of arbitrary data structures, for example in web services.

There are a variety of programming interfaces which software developers may use to access XML data, and several schema systems designed to aid in the definition of XML-based languages.

As of 2009, hundreds of XML-based languages have been developed,[3] including RSS, Atom, SOAP, and XHTML. XML-based formats have become the default for most office-productivity tools, including Microsoft Office (Office Open XML), OpenOffice.org (OpenDocument), and Apple's iWork.[4]

XML documents may begin by declaring some information about themselves, as in the following example.

<?xml version="1.0" encoding="UTF-8" ?>
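
A brief sketch (an invented document, not from the source) of reading such a file with Python's standard xml.etree.ElementTree parser; the declaration is consumed along with the rest of the document:

import xml.etree.ElementTree as ET

doc = b"""<?xml version="1.0" encoding="UTF-8"?>
<order id="17"><item qty="2">widget</item></order>"""

root = ET.fromstring(doc)  # parse the whole document into an element tree
print(root.tag, root.attrib["id"], root.find("item").text)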

Extensible Business Reporting Language (XBRL)

XBRL was developed for business and accounting applications. It is an XML-based application used to create, exchange and analyze financial reporting information, developed for worldwide use. The AICPA-led consortium that developed XBRL has promoted the application as a freely licensed product. In typical usage, XBRL consists of an instance document, containing primarily the business facts being reported, and a collection of taxonomies (called a Discoverable Taxonomy Set (DTS)), which define metadata about these facts, such as what the facts mean and how they relate to one another. XBRL uses the XML Schema, XLink, and XPointer standards.

Instance Document


The instance document begins with the <xbrl> root element. There may be more than one XBRL instance embedded in a larger XML document. The XBRL instance document itself holds the following information:

Business Facts – facts can be divided into two categories:

o Items are facts holding a single value. They are represented by a single XML element with the value as its content.

o Tuples are facts holding multiple values. They are represented by a single XML element containing nested Items or Tuples.

<?xml version="1.0" ?> - <inventory> - <inventoryItem>   <itemCode>SG2003</itemCode>   <brandName>Saucony</brandName>   <model>Grid 2003</model>   <type>female</type>   <size>7.5</size>   <price>54.99</price>   <supplier>USA Sports Distributor</supplier>   </inventoryItem> - <inventoryItem>   <itemCode>SJ2002</itemCode>   <brandName>Saucony</brandName>   <model>Jazz 2002</model>   <type>female</type>   <size>8.0</size>   <price>77.50</price>   <supplier>USA Sports Distributor</supplier>   </inventoryItem> - <inventoryItem>   <itemCode>SO2001</itemCode>   <brandName>Saucony</brandName>   <model>Omni 2001</model>   <type>female</type>   <size>7.5</size>   <price>98.99</price>   <supplier>USA Sports Distributor</supplier>   </inventoryItem> - <inventoryItem>   <itemCode>NA2003</itemCode>   <brandName>Nike</brandName>   <model>Air Max 2003</model>   <type>female</type>   <size>7.5</size>   <price>99.50</price>   <supplier>USA Sports Distributor</supplier>   </inventoryItem>   </inventory>


In the design of XBRL, all Item facts must be assigned a context.

Contexts define the entity (e.g. company or individual) to which the fact applies and the period of time the fact is relevant. Date and time information appearing in the period element must conform to ISO 8601. Scenarios provide further contextual information about the facts, such as whether the business values reported are actual, projected, or budgeted.

Units define the units used by numeric or fractional facts within the document, such as USD or shares. XBRL allows more complex units to be defined if necessary. Facts of a monetary nature must use a unit from the ISO 4217 namespace.

Taxonomies are a collection of XML schema documents and XML documents called linkbases by virtue of their use of XLink. The schema must ultimately extend the XBRL instance schema document and typically extend other published XBRL schemas on the xbrl.org website.

Schemas define Item and Tuple "concepts" using <xsd:element> elements. Concepts provide names for facts and indicate whether each is a tuple or an item, its data type (such as monetary, numeric, fractional, or textual), and potentially more metadata. Items and Tuples can be regarded as "implementations" of concepts, or specific instances of a concept. A good analogy for those familiar with object-oriented programming is that concepts are the classes, and Items and Tuples are object instances of those classes; this is the source of the "instance document" terminology. In addition to defining concepts, schemas reference linkbase documents. Tuple instances are 1..n relationships with their parents; their metadata is simply the collection of their attributes.

Linkbases are a collection of Links, which themselves are a collection of locators, arcs, and potentially resources. Locators are elements that essentially reference a concept and provide an arbitrary label for it. In turn, arcs are elements indicating that a concept links to another concept by referencing the labels defined by the locators. Some arcs link concepts to other concepts. Other arcs link concepts to resources, the most common of which are human-readable labels for the concepts. The XBRL 2.1 specification defines five different kinds of linkbases.

o Label Linkbase – This linkbase provides human readable strings for concepts. Using the label linkbase, multiple languages can be supported, as well as multiple strings within each language.

o Reference Linkbase – This linkbase associates concepts with citations of some body of authoritative literature.

o Calculation Linkbase – This linkbase associates concepts with other concepts so that values appearing in an instance document may be checked for consistency (a small sketch of such a check follows this list).

o Definition Linkbase – This linkbase associates concepts with other concepts using a variety of arc roles to express relations such as is-a, whole-part, etc.

o Presentation Linkbase – This linkbase associates concepts with other concepts so that the resulting relations can guide the creation of a user interface, rendering, or visualisation.
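As a hedged illustration of what the calculation linkbase enables, the following Python sketch checks that a parent concept equals the weighted sum of its children. The concept names, weights, and values are invented:

# Parent concepts should equal the weighted sum of their children,
# per the calculation arcs. This data is invented for illustration.
calc_arcs = {  # parent -> [(child, weight)]
    "GrossProfit": [("Revenue", 1.0), ("CostOfSales", -1.0)],
}
facts = {"Revenue": 500.0, "CostOfSales": 320.0, "GrossProfit": 180.0}

for parent, children in calc_arcs.items():
    expected = sum(facts[child] * weight for child, weight in children)
    status = "consistent" if abs(expected - facts[parent]) < 1e-9 else "INCONSISTENT"
    print(f"{parent}: reported {facts[parent]}, computed {expected} -> {status}")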



Extranet

An extranet is a private network that uses Internet protocols, network connectivity, and possibly the public telecommunication system to securely share part of an organization's information or operations with suppliers, vendors, partners, customers or other businesses. An extranet can be viewed as part of a company's intranet that is extended to users outside the company, usually via the Internet. It has also been described as a "state of mind" in which the Internet is perceived as a way to do business with a selected set of other companies (business-to-business, B2B), in isolation from all other Internet users. In contrast, business-to-consumer (B2C) models involve known servers of one or more companies, communicating with previously unknown consumer users.

Fat Client

A fat client or rich client is a computer (client) in client-server architecture networks which typically provides rich functionality independently of the central server. Originally known simply as a 'client' (or 'thick client'), the name contrasts with thin client, which describes a computer heavily dependent on a server's applications.

A fat client still requires at least periodic connection to a network or central server, but is often characterised by the ability to perform many functions without that connection. In contrast, a thin client generally does as little processing as possible and relies on accessing the server each time input data needs to be processed or validated.

Field

In computer science, data that has several parts can be divided into fields. For example, a computer may represent today's date as three distinct fields: the day, the month and the year.

Relational databases arrange data as sets of database records, also called rows. Each record consists of several fields; the fields of all records form the columns.
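For example (a minimal Python sketch; the field and column names are invented), a date can be held as three distinct fields, and table rows can be seen as records whose fields line up with the column names:

# Fields in practice: a date split into three fields, and database
# rows modeled as records whose fields form the columns.
from dataclasses import dataclass

@dataclass
class DateRecord:
    day: int    # field 1
    month: int  # field 2
    year: int   # field 3

today = DateRecord(day=14, month=7, year=2009)
print(today)

# Rows of a relational table: each record has the same fields,
# and the fields of all records form the columns.
columns = ("emp_id", "name", "dept")
rows = [
    (101, "Ada", "Accounting"),
    (102, "Grace", "IT"),
]
for row in rows:
    print(dict(zip(columns, row)))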

Field Check

File

At the lowest level, many modern operating systems consider files simply as a one-dimensional sequence of bytes. At a higher level, where the content of the file is being considered, these binary digits may represent integer values, text characters, image pixels, audio or anything else. It is up to the program using the file to understand the meaning and internal layout of information in the file and present it to a user as more meaningful information (such as text, images, sounds, or executable application programs).

At any instant in time, a file has a size, normally expressed as a number of bytes, that indicates how much storage is associated with the file. In most modern operating systems the size can be any non-negative whole number of bytes up to a system limit. However, the general definition of a file does not require that its instantaneous size have any real meaning, unless the data within the file happens to correspond to data within a pool of persistent storage.

Information in a computer file can consist of smaller packets of information (often called "records" or "lines") that are individually different but share some trait in common. For example, a payroll file might contain information concerning all the employees in a company and their payroll details; each record in the payroll file concerns just one employee, and all the records have the common trait of being related to payroll—this is very similar to placing all payroll information into a specific filing cabinet in an office that does not have a computer. A text file may contain lines of text, corresponding to printed lines on a piece of paper. Alternatively, a file may contain an arbitrary binary image (a BLOB) or it may contain an executable.
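A short Python sketch makes the same point: a file is just bytes until a program imposes meaning (lines, records) on them. The file name and record layout here are invented:

# A file is a byte sequence; meaning is imposed by the reader.
data = "emp1,1000.00\nemp2,1250.50\n"   # two payroll "records"
with open("payroll.txt", "w") as f:
    f.write(data)

with open("payroll.txt", "rb") as f:    # lowest level: raw bytes
    raw = f.read()
print(len(raw), "bytes:", raw[:12], "...")

with open("payroll.txt") as f:          # higher level: lines as records
    for line in f:
        name, amount = line.rstrip("\n").split(",")
        print(name, float(amount))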

The way information is grouped into a file is entirely up to the person designing the file. This has led to a plethora of more or less standardized file structures for all imaginable purposes, from the simplest to the most complex. Most computer files are used by computer programs. These programs create, modify and delete files for their own use on an as-needed basis. The programmers who create the programs decide what files are needed, how they are to be used and (often) their names.

In some cases, computer programs manipulate files that are made visible to the computer user. For example, in a word-processing program, the user manipulates document files that the user personally names. The content of the document file is arranged in a way that the word-processing program understands, but the user chooses the name and location of the file and provides the bulk of the information (such as words and text) that will be stored in the file.

File Attribute

A file attribute is metadata that describes or is associated with a computer file. For example, an operating system often keeps track of the date a file was created and last modified, as well as the file's size and extension (and what application to open it with). File permissions are also kept track of. Users may themselves attach other attributes, such as comments or color labels, as in Apple Computer's Mac OS X (version 10.3 or later).

In MS-DOS, OS/2 and Microsoft Windows the attrib command can be used to change and display file attributes.
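Beyond the attrib command, common attributes can also be read programmatically. A minimal Python sketch (the file name payroll.txt is invented; the file is created first so the sketch is self-contained) reads size, last-modified time, and writability via os.stat and os.access:

# Reading file attributes (metadata) portably with the os module.
import os, time

with open("payroll.txt", "w") as f:   # create a file so the sketch runs
    f.write("example\n")

st = os.stat("payroll.txt")
print("size in bytes:", st.st_size)
print("last modified:", time.ctime(st.st_mtime))
print("writable?    :", os.access("payroll.txt", os.W_OK))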


Financial Risk

Firewall

A firewall is a part of a computer system or network that is designed to block unauthorized access while permitting authorized communications. It is a device or set of devices configured to permit, deny, encrypt, decrypt, or proxy all (in and out) computer traffic between different security domains based upon a set of rules and other criteria.

Firewalls can be implemented in either hardware or software, or a combination of both. Firewalls are frequently used to prevent unauthorized Internet users from accessing private networks connected to the Internet, especially intranets. All messages entering or leaving the intranet pass through the firewall, which examines each message and blocks those that do not meet the specified security criteria.
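A toy sketch (Python; not a real firewall, and the rules and addresses are invented) shows the rule-based permit/deny idea described above, with first-match-wins evaluation and a default deny:

# A toy packet filter illustrating rule-based permit/deny.
RULES = [
    # (src_prefix, dst_port, protocol, action)
    ("10.0.", 80,  "tcp", "permit"),   # intranet hosts may browse
    ("10.0.", 443, "tcp", "permit"),
    ("",      23,  "tcp", "deny"),     # block telnet from anywhere
]

def filter_packet(src_ip: str, dst_port: int, proto: str) -> str:
    for src_prefix, port, rule_proto, action in RULES:
        if src_ip.startswith(src_prefix) and dst_port == port and proto == rule_proto:
            return action               # first matching rule wins
    return "deny"                       # default deny: block the rest

print(filter_packet("10.0.3.7", 443, "tcp"))    # permit
print(filter_packet("192.168.1.5", 23, "tcp"))  # deny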

Flowchart

Flowcharts are used in analyzing, designing, documenting or managing a process or program in various fields

A typical flowchart from Computer Science textbooks may have the following kinds of symbols:

Start and end symbols – Represented as circles, ovals or rounded rectangles, usually containing the word "Start" or "End", or another phrase signaling the start or end of a process, such as "submit enquiry" or "receive product".

Arrows – Showing what is called "flow of control" in computer science. An arrow coming from one symbol and ending at another symbol represents that control passes to the symbol the arrow points to.

Processing steps – Represented as rectangles. Examples: "Add 1 to X"; "replace identified part"; "save changes".

Input/Output – Represented as a parallelogram. Examples: Get X from the user; display X.

Conditional or decision – Represented as a diamond (rhombus). These typically contain a Yes/No question or True/False test. This symbol is unique in that it has two arrows coming out of it, usually from the bottom point and right point, one corresponding to Yes or True, and one corresponding to No or False. The arrows should always be labeled. More than two arrows can be used, but this is normally a clear indicator that a complex decision is being taken, in which case it may need to be broken down further, or replaced with the "pre-defined process" symbol.

There are a number of other symbols with less universal currency, such as:

A Document, represented as a rectangle with a wavy base;
A Manual input, represented by a parallelogram with the top irregularly sloping up from left to right, for example to signify data entry from a form;
A Manual operation, represented by a trapezoid with the longest parallel side at the top, to represent an operation or adjustment to a process that can only be made manually;
A Data File, represented by a cylinder.

Flowcharts may contain other symbols, such as connectors, usually represented as circles, to represent converging paths in the flowchart. Circles will have more than one arrow coming into them but only one going out. Some flowcharts may just have an arrow point to another arrow instead. These are useful to represent an iterative process (what in Computer Science is called a loop). A loop may, for example, consist of a connector where control first enters, processing steps, a conditional with one arrow exiting the loop, and one going back to the connector. Off-page connectors are often used to signify a connection to a (part of another) process held on another sheet or screen. It is important to keep these connections logically ordered: all processes should flow from top to bottom and left to right.

Fourth-Generation Language

A fourth-generation programming language (1970s-1990; abbreviated 4GL) is a programming language or programming environment designed with a specific purpose in mind, such as the development of commercial business software.[1] In the evolution of computing, the 4GL followed the 3GL in an upward trend toward higher abstraction and statement power. The 4GL was followed by efforts to define and use a 5GL.

All 4GLs are designed to reduce programming effort, the time it takes to develop software, and the cost of software development. They are not always successful in this task, sometimes resulting in inelegant and unmaintainable code. However, given the right problem, the use of an appropriate 4GL can be spectacularly successful, as was seen with MARK-IV and MAPPER (in Santa Fe's real-time tracking of its freight cars, the productivity gains were estimated to be 8 times over COBOL). The usability improvements obtained by some 4GLs (and their environments) allowed better exploration for heuristic solutions than did the 3GL.

Gateway

Gateways work on all seven OSI layers. The main job of a gateway is to convert protocols among communications networks. A router by itself transfers, accepts and relays packets only across networks using similar protocols. A gateway, on the other hand, can accept a packet formatted for one protocol (e.g. AppleTalk) and convert it to a packet formatted for another protocol (e.g. TCP/IP) before forwarding it. A gateway can be implemented in hardware, software or both, but is usually implemented by software installed within a router. A gateway must understand the protocols used by each network linked into the router. Gateways are slower than bridges, switches and (non-gateway) routers.

A gateway is a network point that acts as an entrance to another network. On the Internet, a node or stopping point can be either a gateway node or a host (end-point) node. Both the computers of Internet users and the computers that serve pages to users are host nodes, while the nodes that connect the networks in between are gateways. For example, the computers that control traffic between company networks or the computers used by internet service providers (ISPs) to connect users to the internet are gateway nodes.

In the network for an enterprise, a computer server acting as a gateway node is often also acting as a proxy server and a firewall server. A gateway is often associated with both a router, which knows where to direct a given packet of data that arrives at the gateway, and a switch, which furnishes the actual path in and out of the gateway for a given packet.

General Controls

IT General Controls (ITGC)

ITGC represent the foundation of the IT control structure. They help ensure the reliability of data generated by IT systems and support the assertion that systems operate as intended and that output is reliable. General controls are tested prior to testing the application controls as they ensure the proper functioning of the information system and therefore support the application controls. ITGC usually include the following types of controls:

Control Environment, or those controls designed to shape the corporate culture or "tone at the top."

Change management procedures - controls designed to ensure changes meet business requirements and are authorized.

Source code/document version control procedures - controls designed to protect the integrity of program code.

Software development life cycle standards - controls designed to ensure IT projects are effectively managed.

Security policies, standards and processes - controls designed to secure access based on business need.

Incident management policies and procedures - controls designed to address operational processing errors.

Technical support policies and procedures - policies to help users perform more efficiently and report problems.

Hardware/software configuration, installation, testing, management standards, policies and procedures.

Disaster recovery/backup and recovery procedures, to enable continued processing despite adverse conditions.


IT Application Controls

IT application or program controls are fully automated (i.e., performed automatically by the systems) and designed to ensure the complete and accurate processing of data, from input through output. These controls vary based on the business purpose of the specific application. These controls may also help ensure the privacy and security of data transmitted between applications. Categories of IT application controls may include:

Completeness checks - controls that ensure all records were processed from initiation to completion.

Validity checks - controls that ensure only valid data is input or processed (both of these first two controls are illustrated in the sketch following this list).

Identification - controls that ensure all users are uniquely and irrefutably identified.

Authentication - controls that provide an authentication mechanism in the application system.

Authorization - controls that ensure only approved business users have access to the application system.

Problem management - controls that ensure all application problems are recorded and managed in a timely manner.

Change management - controls that ensure all changes on the production environment are implemented with preserved data integrity.

Input controls - controls that ensure the integrity of data fed from upstream sources into the application system.
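A minimal Python sketch of the first two of these controls, a completeness check and a validity check, applied to a single input record (the field names are invented):

# A sketch of two automated application controls on an input record:
# completeness (required fields present) and validity (amount parses
# as a non-negative number). Field names are invented.
REQUIRED = ("invoice_no", "vendor_id", "amount")

def check_record(record: dict) -> list:
    errors = []
    for field in REQUIRED:                       # completeness check
        if not record.get(field):
            errors.append(f"missing {field}")
    try:                                         # validity check
        if float(record.get("amount", "x")) < 0:
            errors.append("amount must be non-negative")
    except ValueError:
        errors.append("amount is not numeric")
    return errors

print(check_record({"invoice_no": "A-17", "vendor_id": "V9", "amount": "120.50"}))  # []
print(check_record({"invoice_no": "", "amount": "-3"}))  # three errors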

Groupware

Collaborative software (also referred to as groupware or workgroup support systems) is software designed to help people involved in a common task achieve their goals. Collaborative software is the basis for computer supported cooperative work. Such software systems as email, calendaring, text chat, wiki, and bookmarking belong to this category. It has been suggested that Metcalfe's law — the more people who use something, the more valuable it becomes — applies to such software.

The more general term social software applies to systems used outside the workplace, for example, online dating services and social networks like Friendster, Twitter and Facebook. The study of computer-supported collaboration includes the study of this software and social phenomena associated with it.

Hardware


Typical PC hardware

[Diagram: hardware of a personal computer – 1. Monitor; 2. Motherboard; 3. CPU; 4. RAM memory; 5. Expansion card; 6. Power supply; 7. CD-ROM drive; 8. Hard disk; 9. Keyboard; 10. Mouse]

Though a PC comes in many different form factors, a typical personal computer consists of a case or chassis in a tower shape (desktop) and the following parts:

Motherboard

The motherboard is the main component inside the case. It is a large rectangular board with integrated circuitry that connects the rest of the parts of the computer including the CPU, the RAM, the disk drives (CD, DVD, hard disk, or any others) as well as any peripherals connected via the ports or the expansion slots.

Components directly attached to the motherboard include:

The central processing unit (CPU) performs most of the calculations which enable a computer to function, and is sometimes referred to as the "brain" of the computer. It is usually cooled by a heat sink and fan.

The chipset mediates communication between the CPU and the other components of the system, including main memory.

RAM (Random Access Memory) stores all running processes (applications) and the currently running OS.

The BIOS (Basic Input Output System) includes boot firmware and power management; on modern systems many of its tasks are handled by operating system drivers.

Internal Buses connect the CPU to various internal components and to expansion cards for graphics and sound.

o Current
  The northbridge memory controller, for RAM and PCI Express
  PCI Express, for expansion cards such as graphics and physics processors, and high-end network interfaces
  PCI, for other expansion cards
  SATA, for disk drives
o Obsolete
  ATA (superseded by SATA)
  AGP (superseded by PCI Express)
  VLB (VESA Local Bus, superseded by AGP)
  ISA (expansion card slot format obsolete in PCs, but still used in industrial computers)

External Bus Controllers support ports for external peripherals. These ports may be controlled directly by the southbridge I/O controller or based on expansion cards attached to the motherboard through the PCI bus.

o USB
o FireWire
o eSATA
o SCSI

Power supply

Includes power cords, switch, and cooling fan. Supplies power at appropriate voltages to the motherboard and internal disk drives. It also converts alternating current to direct current and provides different voltages to different parts of the computer.

Video display controller

Produces the output for the computer monitor. This will either be built into the motherboard or attached in its own separate slot (PCI, PCI-E, PCI-E 2.0, or AGP), in the form of a graphics card.

Most video cards meet common display requirements, and video card manufacturers have largely kept pace with the demands of games, although games continue to evolve faster than video hardware.

Removable media devices

CD (compact disc) - the most common type of removable media, suitable for music and data.
o CD-ROM Drive - a device used for reading data from a CD.
o CD Writer - a device used for both reading and writing data to and from a CD.

DVD (digital versatile disc) - a popular type of removable media that is the same dimensions as a CD but stores up to 12 times as much information. It is the most common way of transferring digital video, and is popular for data storage.
o DVD-ROM Drive - a device used for reading data from a DVD.
o DVD Writer - a device used for both reading and writing data to and from a DVD.
o DVD-RAM Drive - a device used for rapid writing and reading of data from a special type of DVD.

Blu-ray Disc - a high-density optical disc format for data and high-definition video. Can store 70 times as much information as a CD.
o BD-ROM Drive - a device used for reading data from a Blu-ray disc.


o BD Writer - a device used for both reading and writing data to and from a Blu-ray disc.

HD DVD - a discontinued competitor to the Blu-ray format.

Floppy disk - an outdated storage device consisting of a thin disk of a flexible magnetic storage medium. Used today mainly for loading RAID drivers.

Iomega Zip drive - an outdated medium-capacity removable disk storage system, first introduced by Iomega in 1994.

USB flash drive - a flash memory data storage device integrated with a USB interface, typically small, lightweight, removable, and rewritable. Capacities vary, from hundreds of megabytes (in the same ballpark as CDs) to tens of gigabytes (surpassing, at great expense, Blu-ray discs).

Tape drive - a device that reads and writes data on a magnetic tape, used for long-term storage and backups.

Internal storage

Hardware that keeps data inside the computer for later use and remains persistent even when the computer has no power.

Hard disk - for medium-term storage of data.

Solid-state drive - a device similar to a hard disk but containing no moving parts, storing data in solid-state (flash) memory.

RAID array controller - a device to manage several internal or external hard disks and optionally some peripherals in order to achieve performance or reliability improvement in what is called a RAID array.

Sound card

Enables the computer to output sound to audio devices, as well as accept input from a microphone. Most modern computers have sound cards built-in to the motherboard, though it is common for a user to install a separate sound card as an upgrade. Most sound cards, either built-in or added, have surround sound capabilities.

Other peripherals

In addition, hardware devices can include external components of a computer system. The following are either standard or very common.

Includes various input and output devices, usually external to the computer system.

Input

Text input devices


o Keyboard - a device to input text and characters by depressing buttons (referred to as keys), similar to a typewriter. The most common English-language key layout is the QWERTY layout.

Pointing devices
o Mouse - a pointing device that detects two-dimensional motion relative to its supporting surface.
o Optical Mouse - a newer technology that uses lasers, or more commonly LEDs, to track the surface under the mouse to determine its motion, to be translated into mouse movements on the screen.
o Trackball - a pointing device consisting of an exposed protruding ball housed in a socket that detects rotation about two axes.

Gaming devices
o Joystick - a general control device that consists of a handheld stick that pivots around one end, to detect angles in two or three dimensions.
o Gamepad - a general handheld game controller that relies on the digits (especially thumbs) to provide input.
o Game controller - a specific type of controller specialized for certain gaming purposes.

Image and video input devices
o Image scanner - a device that provides input by analyzing images, printed text, handwriting, or an object.
o Webcam - a low-resolution video camera used to provide visual input that can be easily transferred over the internet.

Audio input devices
o Microphone - an acoustic sensor that provides input by converting sound into electrical signals.

Hash Total

Hash functions are primarily used in hash tables, to quickly locate a data record (for example, a dictionary definition) given its search key (the headword). Specifically, the hash function is used to map the search key to the hash. The index gives the place where the corresponding record should be stored. Hash tables, in turn, are used to implement associative arrays and dynamic sets.

In general, a hashing function may map several different keys to the same index. Therefore, each slot of a hash table is associated with (implicitly or explicitly) a set of records, rather than a single record. For this reason, each slot of a hash table is often called a bucket, and hash values are also called bucket indices.

Thus, the hash function only hints at the record's location — it tells where one should start looking for it. Still, in a half-full table, a good hash function will typically narrow the search down to only one or two entries.
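A minimal Python sketch of this bucket idea (the keys and values are invented): the hash function maps a search key to a bucket index, and lookup scans only that bucket:

# A minimal hash-table lookup: several keys may share a bucket, so
# each bucket holds a small list of (key, value) records.
NBUCKETS = 8
buckets = [[] for _ in range(NBUCKETS)]

def bucket_index(key: str) -> int:
    return hash(key) % NBUCKETS          # key -> bucket index

def insert(key: str, value: str) -> None:
    buckets[bucket_index(key)].append((key, value))

def lookup(key: str):
    for k, v in buckets[bucket_index(key)]:  # search only this bucket
        if k == key:
            return v
    return None

insert("cat", "a small domesticated feline")
insert("dog", "a domesticated canine")
print(lookup("cat"))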


Hot Site

A hot site is a duplicate of the original site of the organization, with full computer systems as well as near-complete backups of user data. Real-time synchronization between the two sites may be used to completely mirror the data environment of the original site using wide area network links and specialized software. Following a disruption to the original site, the hot site exists so that the organization can relocate with minimal losses to normal operations. Ideally, a hot site will be up and running within a matter of hours or even less. Personnel may still have to be moved to the hot site, so it is possible that the hot site may be operational from a data processing perspective before staff have relocated. The capacity of the hot site may or may not match the capacity of the original site, depending on the organization's requirements. This type of backup site is the most expensive to operate. Hot sites are popular with organizations that operate real-time processes, such as financial institutions, government agencies and e-commerce providers.

Hypertext Markup Language (HTML)

HTML, which stands for Hyper Text Markup Language, is the predominant markup language for web pages. It provides a means to create structured documents by denoting structural semantics for text such as headings, paragraphs, lists, etc., as well as for links, quotes, and other items. It allows images and objects to be embedded and can be used to create interactive forms. It is written in the form of HTML elements consisting of "tags" surrounded by angle brackets within the web page content. It can include or can load scripts in languages such as JavaScript, which affect the behavior of HTML processors like Web browsers, and Cascading Style Sheets (CSS) to define the appearance and layout of text and other material. The W3C, maintainer of both HTML and CSS standards, encourages the use of CSS over explicit presentational markup.[1]

Hyper Text Markup Language (HTML) is the encoding scheme used to create and format a web document. A user need not be an expert programmer to make use of HTML for creating hypertext documents that can be put on the internet.

Importing Data

Generalized audit software is software designed to read, process and write data with the help of functions performing specific audit routines and with self-made macros. It is a tool used in applying Computer Assisted Auditing Techniques (CAATs). Functions of generalized audit software include importing computerized data; thereafter other functions can be applied: the data can be browsed, sorted, summarized, stratified, analyzed and sampled, and used for calculations, conversions and other operations.

Examples of generalized audit software are Audit Command Language (ACL), Interactive Data Extraction and Analysis (IDEA), Statistical Analysis System (SAS), and Statistical Package for Social Sciences (SPSS). TopCAATs is a new player (released Q4 2008) in this market and runs from within Excel.


Information

Information is a term with many meanings depending on context, but is as a rule closely related to such concepts as meaning, knowledge, instruction, communication, representation, and mental stimulus. Simply stated, information is a message received and understood. In terms of data, it can be defined as a collection of facts from which conclusions may be drawn. There are many other aspects of information since it is the knowledge acquired through study or experience or instruction. But overall, information is the result of processing, manipulating and organizing data in a way that adds to the knowledge of the person receiving it.

Information is the state of a system of interest. Message is the information materialized.

Information is a quality of a message from a sender to one or more receivers. Information is always about something (the size of a parameter, the occurrence of an event, a value, ethics, etc.). Viewed in this manner, information does not have to be accurate; it may be a truth or a lie, or just the sound of a falling tree. Even a disruptive noise used to inhibit the flow of communication and create misunderstanding would in this view be a form of information. However, generally speaking, if the amount of information in the received message increases, the message is more accurate.

This model assumes there is a definite sender and at least one receiver. Many refinements of the model assume the existence of a common language understood by the sender and at least one of the receivers. An important variation identifies information as that which would be communicated by a message if it were sent from a sender to a receiver capable of understanding the message. In another variation, it is not required that the sender be capable of understanding the message, or even cognizant that there is a message, making information something that can be extracted from an environment, e.g., through observation, reading or measurement.

Input Controls

IT application or program controls are fully automated (i.e., performed automatically by the systems) and designed to ensure the complete and accurate processing of data, from input through output. These controls vary based on the business purpose of the specific application. These controls may also help ensure the privacy and security of data transmitted between applications. Categories of IT application controls may include:

Completeness checks - controls that ensure all records were processed from initiation to completion.

Validity checks - controls that ensure only valid data is input or processed.

Identification - controls that ensure all users are uniquely and irrefutably identified.

Authentication - controls that provide an authentication mechanism in the application system.

Authorization - controls that ensure only approved business users have access to the application system.

Problem management - controls that ensure all application problems are recorded and managed in a timely manner.

Change management - controls that ensure all changes on the production environment are implemented with preserved data integrity.

Input controls - controls that ensure the integrity of data fed from upstream sources into the application system.

Information Risk

Information security means protecting information and information systems from unauthorized access, use, disclosure, disruption, modification or destruction.[1]

The terms information security, computer security and information assurance are frequently, and incorrectly, used interchangeably. These fields are often interrelated and share the common goals of protecting the confidentiality, integrity and availability of information; however, there are some subtle differences between them.

These differences lie primarily in the approach to the subject, the methodologies used, and the areas of concentration. Information security is concerned with the confidentiality, integrity and availability of data regardless of the form the data may take: electronic, print, or other forms.

Computer security can focus on ensuring the availability and correct operation of a computer system without concern for the information stored or processed by the computer.

Governments, military, corporations, financial institutions, hospitals, and private businesses amass a great deal of confidential information about their employees, customers, products, research, and financial status. Most of this information is now collected, processed and stored on electronic computers and transmitted across networks to other computers.

Should confidential information about a business' customers or finances or new product line fall into the hands of a competitor, such a breach of security could lead to lost business, lawsuits or even bankruptcy of the business. Protecting confidential information is a business requirement, and in many cases also an ethical and legal requirement.

For the individual, information security has a significant effect on privacy, which is viewed very differently in different cultures.

The field of information security has grown and evolved significantly in recent years. As a career choice there are many ways of gaining entry into the field. It offers many areas for specialization including: securing network(s) and allied infrastructure, securing applications and databases, security testing, information systems auditing, business continuity planning and digital forensics science, to name a few.


Internet

The Internet is a global system of interconnected computer networks that use the standardized Internet Protocol Suite (TCP/IP) to serve billions of users worldwide. It is a network of networks that consists of millions of private and public, academic, business, and government networks of local to global scope that are linked by copper wires, fiber-optic cables, wireless connections, and other technologies. The Internet carries a vast array of information resources and services, most notably the inter-linked hypertext documents of the World Wide Web (WWW) and the infrastructure to support electronic mail. In addition it supports popular services such as online chat, file transfer and file sharing, gaming, commerce, social networking, publishing, video on demand, and teleconferencing and telecommunications. Voice over Internet Protocol (VoIP) applications allow person-to-person communication via voice and video.

The origins of the Internet reach back to the 1960s, when the United States funded research projects of its military agencies to build robust, fault-tolerant and distributed computer networks. This research, together with a period of civilian funding of a new U.S. backbone by the National Science Foundation, spawned worldwide participation in the development of new networking technologies, led to the commercialization of an international network in the mid 1990s, and resulted in the popularization of countless applications in virtually every aspect of modern human life. As of 2009, an estimated quarter of Earth's population used the services of the Internet.

Intranet

An intranet is built from the same concepts and technologies used for the Internet, such as client-server computing and the Internet Protocol Suite (TCP/IP). Any of the well known Internet protocols may be found in an intranet, such as HTTP (web services), SMTP (e-mail), and FTP (file transfer). Internet technologies are often deployed to provide modern interfaces to legacy information systems hosting corporate data.

An intranet can be understood as a private version of the Internet, or as a private extension of the Internet confined to an organization by a firewall. The first intranet websites and home pages began to appear in organizations in 1990 - 1991. Although not officially noted, the term intranet first became common-place with early adopters, such as universities and technology corporations, in 1992.

Intranets are also contrasted with extranets; the former are generally restricted to employees of the organization, while the latter may also be accessed by customers, suppliers, or other approved parties.[1] Extranets extend a private network onto the Internet with special provisions for access, authorization, and authentication.

An organization's intranet does not necessarily have to provide access to the Internet. When such access is provided it is usually through a network gateway with a firewall, shielding the intranet from unauthorized external access. The gateway often also implements user authentication,


encryption of messages, and often virtual private network (VPN) connectivity for off-site employees to access company information, computing resources and internal communications.

Local Area Network (LAN)

A local area network (LAN) is a computer network covering a small physical area, like a home, office, or small group of buildings, such as a school or an airport. The defining characteristics of LANs, in contrast to wide-area networks (WANs), include their usually higher data-transfer rates, smaller geographic area, and lack of a need for leased telecommunication lines.

ARCNET, Token Ring and many other technologies have been used in the past, and G.hn may be used in the future, but Ethernet over twisted pair cabling, and Wi-Fi are the two most common technologies currently in use.

Macro

Keyboard macros and mouse macros allow short sequences of keystrokes and mouse actions to be transformed into other, usually more time-consuming, sequences of keystrokes and mouse actions. In this way, frequently-used or repetitive sequences of keystrokes and mouse movements can be automated. Separate programs for creating these macros are called macro recorders.

During the 1980s, macro programs (originally SmartKey, then SuperKey, KeyWorks and Prokey) were very popular, first as a means to automatically format screenplays, then for a variety of user input tasks. These programs were based on the TSR (terminate and stay resident) mode of operation and applied to all keyboard input, no matter in which context it occurred. They have to some extent fallen into obsolescence following the advent of mouse-driven user interfaces and the availability of keyboard and mouse macros in applications such as word processors and spreadsheets, which make it possible to create application-sensitive keyboard macros.

Keyboard macros have in more recent times come to life as a method of exploiting the economy of massively multiplayer online role-playing game (MMORPG)s. By tirelessly performing a boring, repetitive, but low risk action, a player running a macro can earn a large amount of the game's currency. This effect is even larger when a macro-using player operates multiple accounts simultaneously, or operates the accounts for a large amount of time each day. As this money is generated without human intervention, it can dramatically upset the economy of the game by causing runaway inflation. For this reason, use of macros is a violation of the TOS or EULA of most MMORPGs, and administrators of MMORPGs fight a continual war to identify and punish macro users[3].

Application macros and scripting

Keyboard and mouse macros that are created using an application's built-in macro features are sometimes called application macros. They are created by carrying out the sequence once and letting the application record the actions. An underlying macro programming language, most commonly a scripting language, with direct access to the features of the application may also exist.

The programmers' text editor Emacs (short for "editing macros") follows this idea to a conclusion. In effect, most of the editor is made of macros. Emacs was originally devised as a set of macros in the editing language TECO; it was later ported to dialects of Lisp.

Another programmers' text editor, Vim (a descendant of vi), also has a full implementation of macros. It can record into a register (macro) what a person types on the keyboard, and the recording can be replayed or edited, just like VBA macros for Microsoft Office. Vim also has a scripting language called Vimscript[4] for creating macros.[5]

Visual Basic for Applications (VBA) is a programming language included in Microsoft Office and some other applications. However, its function has evolved from and replaced the macro languages that were originally included in some of these applications.

Macro virus

VBA has access to most Microsoft Windows system calls and executes when documents are opened. This makes it relatively easy to write computer viruses in VBA, commonly known as macro viruses. In the mid-to-late 1990s, this became one of the most common types of computer virus. However, since the late 1990s Microsoft has been patching and updating its programs, and current anti-virus programs immediately counteract such attacks.

Text substitution macros

Languages such as C and assembly language have simple macro systems, implemented as preprocessors to the compiler or assembler. C preprocessor macros work by simple textual search-and-replace at the token, rather than the character, level (illustrated in the sketch after the list below). A classic use of macros is in the computer typesetting system TeX and its derivatives, where most of the functionality is based on macros. MacroML is an experimental system that seeks to reconcile static typing and macro systems. Nemerle has typed syntax macros, and one productive way to think of these syntax macros is as a multi-stage computation. Other examples:

m4, a sophisticated, stand-alone macro processor
TRAC
PHP Macro Extension
TAL, accompanying Template Attribute Language
SMX, for web pages
ML/1 (Macro Language One)
The General Purpose Macroprocessor, a contextual pattern-matching macro processor, which could be described as a combination of regular expressions, EBNF and AWK
SAM76
minimac, a concatenative macro processor
troff and nroff, for typesetting and formatting Unix manpages
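As an illustration of token-level (rather than character-level) search-and-replace, the following Python toy expands object-like macros in the spirit of the C preprocessor. It is not a C-compatible implementation, and the macro names are invented:

# A toy token-level text-substitution macro expander.
import re

MACROS = {"BUFSIZE": "1024", "SQUARE_OF_TWO": "(2 * 2)"}

def expand(source: str) -> str:
    # Replace whole tokens only, so BUFSIZE matches but BUFSIZE2 does not.
    def sub(match):
        token = match.group(0)
        return MACROS.get(token, token)
    return re.sub(r"[A-Za-z_]\w*", sub, source)

print(expand("char buf[BUFSIZE]; int x = SQUARE_OF_TWO; int BUFSIZE2;"))
# -> char buf[1024]; int x = (2 * 2); int BUFSIZE2;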


Procedural macros

Macros in PL/I are written in a subset of PL/I itself: the compiler executes "preprocessor statements" at compilation time, and the output of this execution forms part of the code that is compiled. The ability to use a familiar procedural language as the macro language gives power much greater than that of text substitution macros, at the expense of a larger and slower compiler.

Frame Technology's frame macros have their own command syntax but can also contain text in any language. Each frame is both a generic component in a hierarchy of nested subassemblies, and a procedure for integrating itself with its subassembly frames (a recursive process that resolves integration conflicts in favor of higher level subassemblies). The outputs are custom documents, typically compilable source modules. Frame Technology can avoid the proliferation of similar but subtly different components, an issue that has plagued software development since the invention of macros and subroutines.

Most assembly languages have less powerful procedural macro facilities, for example allowing a block of code to be repeated N times for loop unrolling; but these have a completely different syntax from the actual assembly language.

Lisp macros

Lisp's uniform, parenthesized syntax works especially well with macros. Languages of the Lisp family, such as Common Lisp and Scheme, have powerful macro systems because the syntax is simple enough to be parsed easily. Lisp macros transform the program structure itself, with the full language available to express such transformations. Common Lisp and Scheme differ in their macro systems: Scheme's is based on pattern matching, while Common Lisp macros are functions that explicitly construct sections of the program.

Being able to choose the order of evaluation (see lazy evaluation and non-strict functions) enables the creation of new syntactic constructs (e.g. control structures) indistinguishable from those built into the language. For instance, in a Lisp dialect that has cond but lacks if, it is possible to define the latter in terms of the former using macros.

Macros also make it possible to define data languages that are immediately compiled into code, which means that constructs such as state machines can be implemented in a way that is both natural and efficient.[6]

Macros for machine-independent software


Macros are normally used to map a short string (macro invocation) to a longer sequence of instructions. Another, less common, use of macros is to do the reverse: to map a sequence of instructions to a macro string. This was the approach taken by the STAGE2 Mobile Programming System, which used a rudimentary macro compiler (called SIMCMP) to map the specific instruction set of a given computer to counterpart machine-independent macros. Applications (notably compilers) written in these machine-independent macros can then be run without change on any computer equipped with the rudimentary macro compiler. The first application run in such a context is a more sophisticated and powerful macro compiler, written in the machine-independent macro language. This macro compiler is applied to itself, in a bootstrap fashion, to produce a compiled and much more efficient version of itself. The advantage of this approach is that complex applications can be ported from one computer to a very different computer with very little effort (for each target machine architecture, just the writing of the rudimentary macro compiler).[7][8] The advent of modern programming languages, notably C, for which compilers are available on virtually all computers, has rendered such an approach superfluous. This was, however, one of the first instances (if not the first) of compiler bootstrapping.

Magnetic Ink Character Reader (MICR)

Magnetic Ink Character Recognition, or MICR, is a character recognition technology used primarily by the banking industry to facilitate the processing of cheques. The technology allows computers to read information (such as account numbers) off of printed documents. Unlike barcodes or similar technologies, however, MICR codes can be easily read by humans.

MICR characters are printed in special typefaces with a magnetic ink or toner, usually containing iron oxide. As a machine decodes the MICR text, it first magnetizes the characters in the plane of the paper. The characters are then passed over a MICR read head, a device similar to the playback head of a tape recorder. As each character passes over the head it produces a unique waveform that can be easily identified by the system.

The use of magnetic printing allows the characters to be read reliably even if they have been overprinted or obscured by other marks, such as cancellation stamps. The error rate for the magnetic scanning of a typical check is smaller than with optical character recognition systems. For well printed MICR documents, the "can't read" rate is usually less than 1% while the substitution rate (misread rate) is in the order of 1 per 100,000 characters.

Management Information System (MIS)

A management information system (MIS) is a subset of the overall internal controls of a business covering the application of people, documents, technologies, and procedures by management accountants to solve business problems such as costing a product, service or a business-wide strategy. Management information systems are distinct from regular information systems in that they are used to analyze other information systems applied in operational activities in the organization.[1] Academically, the term is commonly used to refer to the group of information management methods tied to the automation or support of human decision making, e.g. Decision Support Systems, Expert systems, and Executive information systems.[1]

It has been described as, "MIS 'lives' in the space that intersects technology and business. MIS combines tech with business to get people the information they need to do their jobs better/faster/smarter. Information is the lifeblood of all organizations - now more than ever. MIS professionals work as systems analysts, project managers, systems administrators, etc., communicating directly with staff and management across the organization." [2]

Mapping

Data mapping is the process of creating data element mappings between two distinct data models. Data mapping is used as a first step for a wide variety of data integration tasks including:

Data transformation or data mediation between a data source and a destination

Identification of data relationships as part of data lineage analysis

Discovery of hidden sensitive data, such as the last four digits of a social security number hidden in another user ID, as part of a data masking or de-identification project

Consolidation of multiple databases into a single database, identifying redundant columns of data for consolidation or elimination

For example, a company that would like to transmit and receive purchases and invoices with other companies might use data mapping to create data maps from a company's data to standardized ANSI ASC X12 messages for items such as purchase orders and invoices.

X12 standards are generic Electronic Data Interchange (EDI) standards designed to allow a company to exchange data with any other company, regardless of industry. The standards are maintained by the Accredited Standards Committee X12 (ASC X12), with the American National Standards Institute (ANSI) accredited to set standards for EDI. The X12 standards are often called ANSI ASC X12 standards.

In the future, tools based on semantic web languages such as Resource Description Framework (RDF), the Web Ontology Language (OWL) and standardized metadata registry will make data mapping a more automatic process. This process will be accelerated if each application performed metadata publishing. Full automated data mapping is a very difficult problem (see Semantic translation).

Hand-coded, graphical manual

Data mappings can be done in a variety of ways using procedural code, creating XSLT transforms or by using graphical mapping tools that automatically generate executable transformation programs. These are graphical tools that allow a user to "draw" lines from fields in one set of data to fields in another. Some graphical data mapping tools allow users to "auto-connect" a source and a destination. This feature is dependent on the source and destination data element names being the same. Transformation programs are automatically created in SQL, XSLT, the Java programming language or C++. These kinds of graphical tools are found in most ETL (Extract, Transform, Load) tools as the primary means of entering data maps to support data movement.

Data-driven mapping

This is the newest approach in data mapping and involves simultaneously evaluating actual data values in two data sources, using heuristics and statistics to automatically discover complex mappings between two data sets. This approach is used to find transformations between two data sets and will discover substrings, concatenations, arithmetic, case statements and other kinds of transformation logic. This approach also discovers data exceptions that do not follow the discovered transformation logic.

Semantic mapping

Semantic mapping is similar to the auto-connect feature of data mappers, with the exception that a metadata registry can be consulted to look up data element synonyms. For example, if the source system lists FirstName but the destination lists PersonGivenName, the mappings will still be made if these data elements are listed as synonyms in the metadata registry. Semantic mapping is only able to discover exact matches between columns of data and will not discover any transformation logic or exceptions between columns.
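A minimal Python sketch of this idea (the registry contents and column names are invented): exact-name auto-connect first, then a synonym lookup in a toy metadata registry:

# Auto-connect columns by exact name, falling back to a toy
# metadata registry of synonyms.
SYNONYMS = {
    "FirstName": {"PersonGivenName", "GivenName"},
}

def map_columns(source_cols, dest_cols):
    mappings = {}
    for src in source_cols:
        if src in dest_cols:                  # exact-name auto-connect
            mappings[src] = src
            continue
        for dst in SYNONYMS.get(src, ()):     # registry lookup
            if dst in dest_cols:
                mappings[src] = dst
    return mappings

print(map_columns(["FirstName", "Salary"], ["PersonGivenName", "Salary"]))
# -> {'FirstName': 'PersonGivenName', 'Salary': 'Salary'}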

Multiprocessing

Multiprocessing is the use of two or more central processing units (CPUs) within a single computer system. The term also refers to the ability of a system to support more than one processor and/or the ability to allocate tasks between them.[1] There are many variations on this basic theme, and the definition of multiprocessing can vary with context, mostly as a function of how CPUs are defined (multiple cores on one die, multiple chips in one package, multiple packages in one system unit, etc.).

Multiprocessing sometimes refers to the execution of multiple concurrent software processes in a system as opposed to a single process at any one instant. However, the terms multitasking or multiprogramming are more appropriate to describe this concept, which is implemented mostly in software, whereas multiprocessing is more appropriate to describe the use of multiple hardware CPUs. A system can be both multiprocessing and multiprogramming, only one of the two, or neither of the two.
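
As a minimal sketch of the hardware side of the distinction, Python's standard multiprocessing module starts real operating-system processes that can run on separate CPUs at the same instant (the speedup depends on how many CPUs are actually present):

    # Minimal sketch: count the CPUs and run two CPU-bound processes that
    # may execute truly simultaneously on a multiprocessor system.
    import multiprocessing as mp

    def work(name):
        total = sum(i * i for i in range(1000000))    # CPU-bound task
        print(name, "done:", total)

    if __name__ == "__main__":
        print("CPUs available:", mp.cpu_count())
        p1 = mp.Process(target=work, args=("worker-1",))
        p2 = mp.Process(target=work, args=("worker-2",))
        p1.start(); p2.start()    # may run on two CPUs at the same instant
        p1.join(); p2.join()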

Multiprogramming


Computer multiprogramming is the allocation of a computer system and its resources to more than one concurrent application, job or user ("program" in this nomenclature).

Initially, this technology was sought in order to optimize use of a computer system, since time and processing resources were often wasted when a single job waited for human interaction or other data input/output operations.

Multiprogramming capability was developed as a feature of operating systems in the late 1950s and came into common use in mainframe computing in the mid- to late 1960s. This followed the development of hardware systems that possessed the requisite circuit logic and instruction sets to facilitate the transfer of control between the operating system and one or more independent applications, users or job streams.

The use of multiprogramming was enhanced by the arrival of virtual memory and virtual machine technology, which enabled individual programs to make use of memory and operating system resources as if other concurrently running programs were, for all practical purposes, non-existent and invisible to them.

Multiprogramming should be differentiated from multi-tasking, since not all multiprogramming entails, or has the capability for, "true" multi-tasking. This is the case even though the use of multi-tasking generally implies the use of some multiprogramming methods.

In this context, the root word "program" does not necessarily refer to a compiled application but rather to any set of commands submitted for execution by a user or operator. This could include a script or job control stream and any included calls to macro-instructions, system utilities or application program modules. An entire interactive, logged-in user session can be thought of as a "program" in this sense.

A program generally comprises numerous tasks, a task being a relatively small group of processor instructions which together achieve a definable logical step in the completion of a job or the execution of a continuous-running application program. A task frequently ends with some request requiring the moving of data, a convenient opportunity to allow another program to have system resources, particularly CPU time.

In multiprogramming, concurrent running (sharing of the processor) is achieved when the operating system identifies opportunities to interrupt the handling of one program between tasks (e.g., when it is waiting for input/output) and to transfer process control to another program (application, job or user). To a great extent, the ability of a system to share its resources equitably—or according to certain priorities—is dependent upon the design of the programs being handled and how frequently they may be interrupted.

Multi-tasking eliminates that dependency and expands upon multiprogramming by enabling the operating system supervisor to interrupt programs in the middle of tasks and to transfer processor control so rapidly that each program is now assured a portion of each processing second, making the interruptions imperceptible to most human-interactive applications.


Network

A computer network allows computers to communicate with each other and to share resources and information. The Advanced Research Projects Agency (ARPA) funded the design of the "Advanced Research Projects Agency Network" (ARPANET) for the United States Department of Defense. It was the first operational computer network in the world.[1] Development of the network began in 1969, based on designs begun in the 1960s.

Network classification

The following list presents categories used for classifying networks.

Connection method

Computer networks can also be classified according to the hardware and software technology that is used to interconnect the individual devices in the network, such as Optical fiber, Ethernet, Wireless LAN, HomePNA, Power line communication or G.hn. Ethernet uses physical wiring to connect devices. Frequently deployed devices include hubs, switches, bridges and/or routers.

Wireless LAN technology is designed to connect devices without wiring. These devices use radio waves or infrared signals as a transmission medium.

ITU-T G.hn technology uses existing home wiring (coaxial cable, phone lines and power lines) to create a high-speed (up to 1 Gigabit/s) local area network.

Wired Technologies

Twisted-Pair Wire - This is the most widely used medium for telecommunication. Twisted-pair wires are ordinary telephone wires which consist of two insulated copper wires twisted into pairs and are used for both voice and data transmission. The use of two wires twisted together helps to reduce crosstalk and electromagnetic induction. Transmission speeds range from 2 million bits per second to 100 million bits per second.

Coaxial Cable – These cables are widely used for cable television systems, office buildings, and other worksites for local area networks. The cables consist of copper or aluminum wire wrapped with an insulating layer, typically of a flexible material with a high dielectric constant, all of which is surrounded by a conductive layer. The layers of insulation help minimize interference and distortion. Transmission speeds range from 200 million to more than 500 million bits per second.

Fiber Optics – These cables consist of one or more thin filaments of glass fiber wrapped in a protective layer. They transmit light, which can travel over long distances and at higher bandwidths. Fiber-optic cables are not affected by electromagnetic radiation. Transmission speeds can go as high as trillions of bits per second. The speed of fiber optics is hundreds of times faster than coaxial cable and thousands of times faster than twisted-pair wire.

Wireless Technologies

Terrestrial Microwave – Terrestrial microwave uses Earth-based transmitters and receivers, with equipment that looks similar to satellite dishes. Terrestrial microwave operates in the low-gigahertz range, which limits all communications to line-of-sight. Relay stations are spaced approximately 30 miles apart. Microwave antennas are usually placed on top of buildings, towers, hills, and mountain peaks.

Communications Satellites – The satellites use microwave radio as their telecommunications medium, which is not deflected by the Earth's atmosphere. The satellites are stationed in space, typically 22,000 miles above the equator. These Earth-orbiting systems are capable of receiving and relaying voice, data, and TV signals.

Cellular and PCS Systems – These use several radio communications technologies. The systems divide the region covered into multiple geographic areas. Each area has a low-power transmitter or radio relay antenna device to relay calls from one area to the next.

Wireless LANs – Wireless local area networks use high-frequency radio technology similar to digital cellular, as well as low-frequency radio technology. Wireless LANs use spread spectrum technology to enable communication between multiple devices in a limited area. An example of an open-standard wireless radio-wave technology is IEEE 802.11b.

Bluetooth – A short-range wireless technology that operates at approximately 1 Mbps with a range of 10 to 100 meters. Bluetooth is an open wireless protocol for data exchange over short distances.

The Wireless Web – The wireless web refers to the use of the World Wide Web through equipment such as cellular phones, pagers, PDAs, and other portable communications devices. Wireless web services offer an anytime/anywhere connection.

Scale

Networks are often classified as Local Area Network (LAN), Wide Area Network (WAN), Metropolitan Area Network (MAN), Personal Area Network (PAN), Virtual Private Network (VPN), Campus Area Network (CAN), Storage Area Network (SAN), etc. depending on their scale, scope and purpose. Usage, trust levels and access rights often differ between these types of network - for example, LANs tend to be designed for internal use by an organization's internal systems and employees in individual physical locations (such as a building), while WANs may connect physically separate parts of an organization to each other and may include connections to third parties.

Network Administrator


Network administration is a modern profession responsible for the maintenance of the computer hardware and software that comprise a computer network. This normally includes the deployment, configuration, maintenance and monitoring of active network equipment. A related role is that of the network specialist, or network analyst, who concentrates on network design and security.

The Network Administrator is usually the highest level of technical/network staff in an organization and will rarely be involved with direct user support. The Network Administrator will concentrate on the overall health of the network, server deployment, security, and ensuring that network connectivity throughout a company's LAN/WAN infrastructure is on par with technical considerations at the network level of an organization's hierarchy. Network Administrators are considered Tier 3 support personnel who work only on break/fix issues that could not be resolved at the Tier 1 (helpdesk) or Tier 2 (desktop/network technician) levels.

Depending on the company, the Network Administrator may also design and deploy networks. However, these tasks may be assigned to a Network Engineer should one be available to the company.

The actual role of the Network Administrator will vary from company to company, but will commonly include activities and tasks such as network address assignment, assignment of routing protocols and routing table configuration as well as configuration of authentication and authorization – directory services. It often includes maintenance of network facilities in individual machines, such as drivers and settings of personal computers as well as printers and such. It sometimes also includes maintenance of certain network servers: file servers, VPN gateways, intrusion detection systems, etc.

Network specialists and analysts concentrate on the network design and security, particularly troubleshooting and/or debugging network-related problems. Their work can also include the maintenance of the network's authorization infrastructure, as well as network backup systems.

The administrator is responsible for the security of the network and for assigning IP addresses to the devices connected to the networks. Assigning IP addresses gives the subnet administrator some control over the professional who connects to the subnet. It also helps to ensure that the administrator knows each system that is connected and who personally is responsible for the system.

Network Operating System


A network operating system is an operating system that contains components and programs that allow a computer on a network to serve requests from other computers for data and to provide access to other resources such as printers and file systems.

Features

- Add, remove and manage users who wish to use resources on the network.
- Allow users to access the data on the network. This data commonly resides on the server.
- Allow users to access data found on other networks such as the internet.
- Allow users to access hardware connected to the network.
- Protect data and services located on the network.

Network operating system features may include:

- basic support for hardware ports
- security features such as authentication, authorization, login restrictions, and access control
- name services and directory services
- file, print, data storage, backup and replication services
- remote access
- system management
- network administration and auditing tools with graphic interfaces
- clustering capabilities
- fault tolerance and high availability

Examples

- JUNOS, used in routers and switches from Juniper Networks.
- Cisco IOS (formerly "Cisco Internetwork Operating System"), a NOS having a focus on the internetworking capabilities of network devices. It is used on Cisco Systems routers and some network switches.
- BSD, also used in many network servers.
- Linux
- Microsoft Windows Server
- Novell NetWare

Misconception

Some device operating systems, including Mac OS X and all versions of Microsoft Windows since Windows 2000, include NOS features. A NOS, however, is an OS that has been specifically written to implement and maintain networks.

Network Protocol


In computing, a protocol is a set of rules which is used by computers to communicate with each other across a network. A protocol is a convention or standard that controls or enables the connection, communication, and data transfer between computing endpoints. In its simplest form, a protocol can be defined as the rules governing the syntax, semantics, and synchronization of communication. Protocols may be implemented by hardware, software, or a combination of the two. At the lowest level, a protocol defines the behavior of a hardware connection.

While protocols can vary greatly in purpose and sophistication, most specify one or more of the following properties (a minimal framing sketch follows the list):

- Detection of the underlying physical connection (wired or wireless), or the existence of the other endpoint or node
- Handshaking
- Negotiation of various connection characteristics
- How to start and end a message
- Procedures on formatting a message
- What to do with corrupted or improperly formatted messages (error correction)
- How to detect unexpected loss of the connection, and what to do next
- Termination of the session and/or connection
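
The sketch below illustrates a few of these rules with an invented convention, not a real standard: messages are framed with a 4-byte big-endian length header, the payload, and a 1-byte checksum for error detection:

    # Minimal protocol sketch (illustrative convention, not a real standard):
    # 4-byte length header, payload, 1-byte checksum trailer.
    import struct

    def frame(payload: bytes) -> bytes:
        checksum = sum(payload) % 256
        return struct.pack(">I", len(payload)) + payload + bytes([checksum])

    def parse(message: bytes) -> bytes:
        (length,) = struct.unpack(">I", message[:4])
        payload, checksum = message[4:4 + length], message[4 + length]
        if sum(payload) % 256 != checksum:
            raise ValueError("corrupted message")   # error-detection rule
        return payload

    print(parse(frame(b"HELLO")))   # b'HELLO'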

Network Interface Card

Although other network technologies exist, Ethernet has achieved near-ubiquity since the mid-1990s. Every Ethernet network card has a unique 48-bit serial number called a MAC address, which is stored in ROM carried on the card. Every computer on an Ethernet network must have a card with a unique MAC address. Normally it is safe to assume that no two network cards will share the same address, because card vendors purchase blocks of addresses from the Institute of Electrical and Electronics Engineers (IEEE) and assign a unique address to each card at the time of manufacture.
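For illustration, the first three octets of a MAC address form the vendor-assigned block (the Organizationally Unique Identifier) and the last three are the per-card portion assigned at manufacture; a small sketch with a made-up address:

    # Split a 48-bit MAC address into the vendor block (OUI) and the
    # per-card serial portion (the address here is hypothetical).
    def split_mac(mac: str):
        octets = mac.split(":")
        return ":".join(octets[:3]), ":".join(octets[3:])

    oui, serial = split_mac("00:1A:2B:3C:4D:5E")
    print("vendor block (OUI):", oui)      # 00:1A:2B
    print("card serial part:  ", serial)   # 3C:4D:5E
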

Whereas network cards used to be expansion cards that plugged into a computer bus, the low cost and ubiquity of the Ethernet standard mean that most newer computers have a network interface built into the motherboard. These either have Ethernet capabilities integrated into the motherboard chipset or implemented via a low-cost dedicated Ethernet chip, connected through the PCI (or the newer PCI Express) bus. A separate network card is not required unless multiple interfaces are needed or some other type of network is used. Newer motherboards may even have dual network (Ethernet) interfaces built in.

The card implements the electronic circuitry required to communicate using a specific physical layer and data link layer standard such as Ethernet or token ring. This provides a base for a full network protocol stack, allowing communication among small groups of computers on the same LAN and large-scale network communications through routable protocols, such as IP.

Node


In data communication, a physical network node may either be a data circuit-terminating equipment (DCE) such as a modem, hub, bridge or switch; or a data terminal equipment (DTE) such as a digital telephone handset, a printer or a host computer, for example a router, a workstation or a server.

If the network in question is a LAN or WAN, every LAN or WAN node (that is, every device operating at least at the data link layer) must have a MAC address. Examples are computers, packet switches and ADSL modems (with an Ethernet interface). Note that a hub constitutes a physical network node but not a LAN node in this sense, since a hubbed network is logically a bus network. Analogously, a repeater or PSTN modem (with a serial interface) is a physical network node but not a LAN node in this sense.

If the network in question is the Internet, many physical network nodes are host computers, also known as Internet nodes, identified by an IP address, and all hosts are physical network nodes. However, datalink layer devices such as switches, bridges and WLAN access points do not have an IP host address (except sometimes for administrative purposes), and are not considered as Internet nodes, but as physical network nodes or LAN nodes.

If the network in question is a distributed system, the nodes are clients, servers or peers. In a peer-to-peer or overlay network, nodes that actively route data for the other networked devices as well as themselves are called supernodes.

Normalization

In the field of relational database design, normalization is a systematic way of ensuring that a database structure is suitable for general-purpose querying and free of certain undesirable characteristics—insertion, update, and deletion anomalies—that could lead to a loss of data integrity.[1] E.F. Codd, the inventor of the relational model, introduced the concept of normalization and what we now know as the First Normal Form (1NF) in 1970.[2] Codd went on to define the Second Normal Form (2NF) and Third Normal Form (3NF) in 1971,[3] and Codd and Raymond F. Boyce defined the Boyce-Codd Normal Form in 1974.[4] Higher normal forms were defined by other theorists in subsequent years, the most recent being the Sixth Normal Form (6NF) introduced by Chris Date, Hugh Darwen, and Nikos Lorentzos in 2002.[5]

Informally, a relational database table (the computerized representation of a relation) is often described as "normalized" if it is in the Third Normal Form.[6] Most 3NF tables are free of insertion, update, and deletion anomalies, i.e. in most cases 3NF tables adhere to BCNF, 4NF, and 5NF (but typically not 6NF).

A standard piece of database design guidance is that the designer should create a fully normalized design; selective denormalization can subsequently be performed for performance reasons.[7] However, some modeling disciplines, such as the dimensional modeling approach to data warehouse design, explicitly recommend non-normalized designs, i.e. designs that in large part do not adhere to 3NF.[8]
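
As a loose illustration of the anomalies normalization removes (hypothetical data, and Python rather than SQL for compactness): repeating the customer's city on every order row invites an update anomaly, while splitting the data into two relations keyed on customer_id stores each fact exactly once:

    # Denormalized: the customer's city is repeated on every order row,
    # so changing it risks an update anomaly (some rows updated, some not).
    orders_denorm = [
        {"order_id": 1, "customer_id": "C1", "city": "Denver", "amount": 100},
        {"order_id": 2, "customer_id": "C1", "city": "Denver", "amount": 250},
    ]

    # Normalized: each fact is stored once, in its own relation.
    customers = {"C1": {"city": "Denver"}}
    orders = [
        {"order_id": 1, "customer_id": "C1", "amount": 100},
        {"order_id": 2, "customer_id": "C1", "amount": 250},
    ]

    customers["C1"]["city"] = "Boulder"   # one update, no anomaly
    for o in orders:
        print(o["order_id"], customers[o["customer_id"]]["city"])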


Object-Oriented Programming

Object-oriented programming has roots that can be traced to the 1960s. As hardware and software became increasingly complex, quality was often compromised. Researchers studied ways to maintain software quality and developed object-oriented programming in part to address common problems by strongly emphasizing discrete, reusable units of programming logic. The methodology focuses on data rather than processes, with programs composed of self-sufficient modules (objects), each containing all the information needed to manipulate its own data structure. This contrasts with the modular programming that had been dominant for many years, which focused on the function of a module rather than specifically on the data, but which equally provided for code reuse and self-sufficient reusable units of programming logic, enabling collaboration through the use of linked modules (subroutines). This more conventional approach, which still persists, tends to consider data and behavior separately.

An object-oriented program may thus be viewed as a collection of interacting objects, as opposed to the conventional model, in which a program is seen as a list of tasks (subroutines) to perform. In OOP, each object is capable of receiving messages, processing data, and sending messages to other objects and can be viewed as an independent 'machine' with a distinct role or responsibility. The actions (or "operators") on these objects are closely associated with the object. For example, the data structures tend to 'carry their own operators around with them' (or at least "inherit" them from a similar object or class).
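
A minimal sketch of data structures "carrying their own operators around with them" (the class here is hypothetical):

    # An object bundles its data with the operators that act on it; other
    # code interacts by sending messages (calling methods).
    class Account:
        def __init__(self, balance):
            self.balance = balance          # data lives inside the object

        def deposit(self, amount):          # operators travel with the data
            self.balance += amount

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount

    acct = Account(100)
    acct.deposit(50)      # a message to the object, not a free-standing routine
    acct.withdraw(30)
    print(acct.balance)   # 120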

Online Analytical Processing (OLAP)

Online analytical processing, or OLAP (pronounced /ˈoʊlæp/), is an approach to quickly answer multi-dimensional analytical queries.[1] OLAP is part of the broader category of business intelligence, which also encompasses relational reporting and data mining.[2] The typical applications of OLAP are in business reporting for sales, marketing, management reporting, business process management (BPM), budgeting and forecasting, financial reporting and similar areas. The term OLAP was created as a slight modification of the traditional database term OLTP (Online Transaction Processing).[3]

Databases configured for OLAP use a multidimensional data model, allowing for complex analytical and ad-hoc queries with a rapid execution time. They borrow aspects of navigational databases and hierarchical databases that are faster than relational databases.[4]

The output of an OLAP query is typically displayed in a matrix (or pivot) format. The dimensions form the rows and columns of the matrix; the measures form the values.

At the core of any OLAP system is the concept of an OLAP cube (also called a multidimensional cube or a hypercube). It consists of numeric facts called measures which are categorized by dimensions. The cube metadata is typically created from a star schema or snowflake schema of tables in a relational database. Measures are derived from the records in the fact table and dimensions are derived from the dimension tables.

Each measure can be thought of as having a set of labels, or meta-data associated with it. A dimension is what describes these labels; it provides information about the measure.

A simple example would be a cube that contains a store's sales as a measure, and Date/Time as a dimension. Each Sale has a Date/Time label that describes more about that sale.

Any number of dimensions can be added to the structure such as Store, Cashier, or Customer by adding a column to the fact table. This allows an analyst to view the measures along any combination of the dimensions.
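
A toy sketch of the cube idea with hypothetical data: each fact carries dimension labels (store, date), and a query rolls the measure up along any chosen combination of dimensions:

    # Toy OLAP-style aggregation: facts with dimension labels, rolled up
    # along any combination of dimensions.
    from collections import defaultdict

    facts = [
        {"store": "A", "date": "2009-01", "sales": 100},
        {"store": "A", "date": "2009-02", "sales": 150},
        {"store": "B", "date": "2009-01", "sales": 80},
    ]

    def rollup(facts, dims):
        totals = defaultdict(int)
        for f in facts:
            key = tuple(f[d] for d in dims)
            totals[key] += f["sales"]          # "sales" is the measure
        return dict(totals)

    print(rollup(facts, ["store"]))            # {('A',): 250, ('B',): 80}
    print(rollup(facts, ["store", "date"]))    # the full matrix of cells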

Online Realtime Processing (OLRT)

There are a number of differences between real-time and batch processing. These are outlined below:

Each transaction in real-time processing is unique. It is not part of a group of transactions, even though those transactions are processed in the same manner. Transactions in real-time processing are stand-alone both in the entry to the system and also in the handling of output.

Real-time processing requires the master file to be available more often for updating and reference than batch processing. The database is not accessible all of the time for batch processing.

Real-time processing has fewer errors than batch processing, as transaction data is validated and entered immediately. With batch processing, the data is organised and stored before the master file is updated. Errors can occur during these steps.

Infrequent errors may occur in real-time processing; however, they are often tolerated. It is not practical to shut down the system for infrequent errors.

More computer operators are required in real-time processing, as the operations are not centralised. It is more difficult to maintain a real-time processing system than a batch processing system.

Features

Rapid response

Fast performance with a rapid response time is critical. Businesses cannot afford to have customers waiting for a TPS to respond; the turnaround time from the input of the transaction to the production of the output must be a few seconds or less.

Reliability

Many organizations rely heavily on their TPS; a breakdown will disrupt operations or even stop the business. For a TPS to be effective its failure rate must be very low. If a TPS does fail, then quick and accurate recovery must be possible. This makes well–designed backup and recovery procedures essential.

Inflexibility

A TPS processes every transaction in the same way regardless of the user, the customer or the time of day. If a TPS were flexible, there would be too many opportunities for non-standard operations. For example, a commercial airline needs to accept airline reservations consistently from a range of travel agents; accepting different transaction data from different travel agents would be a problem.

Controlled processing

The processing in a TPS must support an organization's operations. For example if an organization allocates roles and responsibilities to particular employees, then the TPS should enforce and maintain this requirement.

Operating Risk

Definition

The Basel Committee defines operational risk as:

"The risk of loss resulting from inadequate or failed internal processes, people and systems or from external events."

However, the Basel Committee recognizes that operational risk is a term that has a variety of meanings and therefore, for internal purposes, banks are permitted to adopt their own definitions of operational risk, provided the minimum elements in the Committee's definition are included.

Scope exclusions

The Basel II definition of operational risk excludes, for example, strategic risk - the risk of a loss arising from a poor strategic business decision.

Other risk terms are seen as potential consequences of operational risk events. For example, reputational risk (damage to an organization through loss of its reputation or standing) can arise as a consequence (or impact) of operational failures - as well as from other events.

Basel II event type categories

The following lists the official Basel II defined event types with some examples for each category:

1. Internal Fraud - misappropriation of assets, tax evasion, intentional mismarking of positions, bribery
2. External Fraud - theft of information, hacking damage, third-party theft and forgery
3. Employment Practices and Workplace Safety - discrimination, workers compensation, employee health and safety
4. Clients, Products, & Business Practice - market manipulation, antitrust, improper trade, product defects, fiduciary breaches, account churning
5. Damage to Physical Assets - natural disasters, terrorism, vandalism
6. Business Disruption & Systems Failures - utility disruptions, software failures, hardware failures
7. Execution, Delivery, & Process Management - data entry errors, accounting errors, failed mandatory reporting, negligent loss of client assets

Difficulties

It is relatively straightforward for an organization to set and observe specific, measurable levels of market risk and credit risk. By contrast, it is relatively difficult to identify or assess levels of operational risk and its many sources. Historically, organizations have accepted operational risk as an unavoidable cost of doing business.

Methods of operational risk management

Basel II and the various supervisory bodies of the countries have prescribed various soundness standards for operational risk management for banks and similar financial institutions. To complement these standards, Basel II has given guidance on three broad methods of capital calculation for operational risk (a sketch of the simplest method's formula follows the list):

- Basic Indicator Approach - based on annual revenue of the financial institution
- Standardized Approach - based on annual revenue of each of the broad business lines of the financial institution
- Advanced Measurement Approaches - based on the internally developed risk measurement framework of the bank, adhering to the standards prescribed (methods include IMA, LDA, scenario-based, scorecard, etc.)
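
As a rough sketch of the simplest of these (the commonly cited form of the rule, not a quotation of the Basel II text), the Basic Indicator Approach sets the capital charge at a fixed fraction of average annual gross income:

    K(BIA) = alpha x [ sum of positive annual gross income over the previous three years ] / n

where n is the number of those three years in which gross income was positive, and alpha is set at 15%.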

The Operational Risk Management framework should include identification, measurement, monitoring, reporting, control and mitigation frameworks for Operational Risk.

Operating System

An operating system (OS) is an interface between hardware and user; it is responsible for the management and coordination of activities and the sharing of the resources of the computer, and it acts as a host for computing applications run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications. Almost all computers (including handheld computers, desktop computers, supercomputers and video game consoles), as well as some robots, domestic appliances (dishwashers, washing machines), and portable media players, use an operating system of some type.[1] Some of the oldest models may, however, use an embedded operating system that may be contained on a data storage device.

Operating systems offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. Users may also interact with the operating system with some kind of software user interface (SUI) like typing commands by using command line interface (CLI) or using a graphical user interface (GUI, commonly pronounced “gooey”). For hand-held and desktop computers, the user interface is generally considered part of the operating system. On large multi-user systems like Unix and Unix-like systems, the user interface is generally implemented as an application program that runs outside the operating system. (Whether the user interface should be included as part of the operating system is a point of contention.)
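
For instance (a minimal sketch using Python's standard library as the API layer), a program can request services such as process and file-system information from the operating system without touching hardware details:

    # Requesting operating-system services through an API instead of
    # driving the hardware directly (Python's os module wraps the
    # underlying system calls).
    import os

    pid = os.getpid()            # ask the OS for our process id
    cwd = os.getcwd()            # ask the OS for the working directory
    entries = os.listdir(".")    # ask the file system for a directory listing
    print(pid, cwd, len(entries))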

Common contemporary operating systems include BSD, Darwin (Mac OS X), Linux, SunOS (Solaris/OpenSolaris), and Windows NT (XP/Vista/7). While servers generally run Unix or some Unix-like operating system, embedded system markets are split amongst several operating systems,[2][3] although the Microsoft Windows line of operating systems has almost 90% of the client PC market.


Packet

A packet consists of two kinds of data: control information and user data (also known as payload). The control information provides data the network needs to deliver the user data, for example: source and destination addresses, error detection codes like checksums, and sequencing information. Typically, control information is found in packet headers and trailers, with user data in between.

Different communications protocols use different conventions for distinguishing between the elements and for formatting the data. In Binary Synchronous Transmission, the packet is formatted in 8-bit bytes, and special characters are used to delimit the different elements. Other protocols, like Ethernet, establish the start of the header and data elements by their location relative to the start of the packet. Some protocols format the information at a bit level instead of a byte level.

A good analogy is to consider a packet to be like a letter: the header is like the envelope, and the data area is whatever the person puts inside the envelope. A difference, however, is that some networks can break a larger packet into smaller packets when necessary (note that these smaller data elements are still formatted as packets).

A network design can achieve two major results by using packets: error detection and multiple host addressing.


Error detection

It is more efficient and reliable to calculate a checksum or cyclic redundancy check over the contents of a packet than to check errors using character-by-character parity bit checking.

The packet trailer often contains error checking data to detect errors that occur during transmission.
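
A minimal sketch of the idea (a simple summing checksum, weaker than a real CRC): the sender computes one value over the whole payload and places it in the trailer, and the receiver recomputes it to detect corruption:

    # Whole-packet checksum versus per-character parity: one value covers
    # the entire payload, computed by the sender, verified by the receiver.
    def checksum(payload: bytes) -> int:
        return sum(payload) % 65536

    payload = b"ledger update 42"
    trailer = checksum(payload)               # carried in the packet trailer

    received = bytearray(payload)
    received[0] ^= 0x01                       # simulate a transmission error
    print(checksum(bytes(received)) == trailer)   # False: error detected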

Host addressing

Modern networks usually connect three or more host computers together; in such cases the packet header generally contains addressing information so that the packet is received by the correct host computer. In complex networks constructed of multiple routing and switching nodes, like the ARPANET and the modern Internet, a series of packets sent from one host computer to another may follow different routes to reach the same destination. This technology is called packet switching.

Packet Filtering

Packet filtering inspects each packet passing through the network and accepts or rejects it based on user-defined rules. Although difficult to configure, it is fairly effective and mostly transparent to its users. However, it is susceptible to IP spoofing.

Packet filters act by inspecting the "packets" which represent the basic unit of data transfer between computers on the Internet. If a packet doesn’t match the packet filter's set of rules, the packet filter will drop (silently discard) the packet, or reject it (discard it, and send "error responses" to the source).

This type of packet filtering pays no attention to whether a packet is part of an existing stream of traffic (it stores no information on connection "state"). Instead, it filters each packet based only on information contained in the packet itself (most commonly using a combination of the packet's source and destination address, its protocol, and, for TCP and UDP traffic, the port number).
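
A stateless filter of this kind can be sketched as a rule table checked against each packet in isolation (the protocols, ports and actions below are hypothetical examples):

    # Stateless packet filtering sketch: each packet is judged on its own
    # header fields against user-defined rules; no connection state is kept.
    RULES = [
        {"proto": "tcp", "dst_port": 80, "action": "accept"},   # allow web
        {"proto": "tcp", "dst_port": 23, "action": "reject"},   # refuse telnet
    ]
    DEFAULT = "drop"    # silently discard anything that matches no rule

    def filter_packet(pkt):
        for rule in RULES:
            if pkt["proto"] == rule["proto"] and pkt["dst_port"] == rule["dst_port"]:
                return rule["action"]
        return DEFAULT

    print(filter_packet({"proto": "tcp", "dst_port": 80}))   # accept
    print(filter_packet({"proto": "udp", "dst_port": 53}))   # drop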

Parallel Processing

The simultaneous use of more than one CPU or processor core to execute a program or multiple computational threads. Ideally, parallel processing makes programs run faster because there are more engines (CPUs or cores) running them. In practice, it is often difficult to divide a program in such a way that separate CPUs or cores can execute different portions without interfering with each other. Most computers have just one CPU, but some models have several, and multi-core processor chips are becoming the norm. There are even computers with thousands of CPUs.


With single-CPU, single-core computers, it is possible to perform parallel processing by connecting the computers in a network. However, this type of parallel processing requires very sophisticated software called distributed processing software.

Note that parallel processing differs from multitasking, in which a CPU provides the illusion of simultaneously executing instructions from multiple different programs by rapidly switching between them, or "interleaving" their instructions.

Parallel processing is also called parallel computing. In the quest for cheaper computing alternatives, parallel processing provides a viable option; the idle processor cycles across a network can be used effectively by sophisticated distributed computing software.
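
A minimal sketch of dividing one job across several engines, using Python's standard multiprocessing pool (the actual speedup depends on how many cores are present):

    # Divide one job across a pool of worker processes.
    from multiprocessing import Pool

    def square(n):
        return n * n

    if __name__ == "__main__":
        with Pool(processes=4) as pool:          # four engines, if available
            print(pool.map(square, range(10)))   # work split across workers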

Permanent File

The information stored in a permanent file is not altered or deleted. An example would be the U.S. Government's file of social security numbers and associated receipts and payments.

Phishing

In the field of computer security, phishing is the criminally fraudulent process of attempting to acquire sensitive information such as usernames, passwords and credit card details by masquerading as a trustworthy entity in an electronic communication. Communications purporting to be from popular social web sites, auction sites, online payment processors or IT administrators are commonly used to lure the unsuspecting public. Phishing is typically carried out by e-mail or instant messaging,[1] and it often directs users to enter details at a fake website whose look and feel are almost identical to the legitimate one. Even when using server authentication, it may require tremendous skill to detect that the website is fake. Phishing is an example of social engineering techniques used to fool users,[2] and exploits the poor usability of current web security technologies.[3] Attempts to deal with the growing number of reported phishing incidents include legislation, user training, public awareness, and technical security measures.

A phishing technique was described in detail in 1987, and the first recorded use of the term "phishing" was made in 1996. The term is a variant of fishing,[4] probably influenced by phreaking [5] [6] or password harvesting fishing, and alludes to baits used to "catch" financial information and passwords.


Physical Access Controls


In physical security, the term access control refers to the practice of restricting entrance to a property, a building, or a room to authorized persons. Physical access control can be achieved by a human (a guard, bouncer, or receptionist), through mechanical means such as locks and keys, or through technological means such as access control systems like the Access control vestibule. Within these environments, physical key management may also be employed as a means of further managing and monitoring access to mechanically keyed areas or access to certain small assets.

Physical access control is a matter of who, where, and when. An access control system determines who is allowed to enter or exit, where they are allowed to enter or exit, and when they are allowed to enter or exit. Historically, this was partially accomplished through keys and locks: when a door is locked, only someone with a key can enter through it, depending on how the lock is configured. Mechanical locks and keys do not allow restriction of the key holder to specific times or dates, and they do not provide records of the key used on any specific door; moreover, keys can be easily copied or transferred to an unauthorized person. When a mechanical key is lost or the key holder is no longer authorized to use the protected area, the locks must be re-keyed.

Electronic access control uses computers to solve the limitations of mechanical locks and keys. A wide range of credentials can be used to replace mechanical keys. The electronic access control system grants access based on the credential presented. When access is granted, the door is unlocked for a predetermined time and the transaction is recorded. When access is refused, the door remains locked and the attempted access is recorded. The system will also monitor the door and alarm if the door is forced open or held open too long after being unlocked.


Access control system operation

When a credential is presented to a reader, the reader sends the credential’s information, usually a number, to a control panel, a highly reliable processor. The control panel compares the credential's number to an access control list, grants or denies the presented request, and sends a transaction log to a database. When access is denied based on the access control list, the door remains locked. If there is a match between the credential and the access control list, the control panel operates a relay that in turn unlocks the door. The control panel also ignores a door open signal to prevent an alarm. Often the reader provides feedback, such as a flashing red LED for an access denied and a flashing green LED for an access granted.
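
A minimal sketch of that single-factor decision loop (the credential numbers and door names are hypothetical):

    # Control-panel logic sketch: compare the credential number from the
    # reader against an access control list, log the transaction, and
    # drive the door relay on a match.
    ACCESS_LIST = {"door-101": {4711, 4712}}   # credential numbers per door
    LOG = []

    def present_credential(door, number):
        granted = number in ACCESS_LIST.get(door, set())
        LOG.append((door, number, "granted" if granted else "denied"))
        if granted:
            print("relay: unlock", door, "(green LED)")
        else:
            print("door stays locked (red LED)")
        return granted

    present_credential("door-101", 4711)   # granted
    present_credential("door-101", 9999)   # denied; attempt is still recorded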

The above description illustrates a single-factor transaction. Credentials can be passed around, thus subverting the access control list. For example, Alice has access rights to the server room but Bob does not. Alice either gives Bob her credential or Bob takes it; he now has access to the server room. To prevent this, two-factor authentication can be used. In a two-factor transaction, the presented credential and a second factor are needed for access to be granted. The second factor can be a PIN, a second credential, operator intervention, or a biometric input. Often the factors are characterized as:

- something you have, such as an access badge or passcard
- something you know, such as a PIN or password
- something you are, typically a biometric input

Credential

A credential is a physical/tangible object, a piece of knowledge, or a facet of a person's physical being that enables an individual access to a given physical facility or computer-based information system. Typically, credentials can be something you know (such as a number or PIN), something you have (such as an access badge), something you are (such as a biometric feature), or some combination of these items. The typical credential is an access card, key fob, or other key. There are many card technologies, including magnetic stripe, bar code, Wiegand, 125 kHz proximity, 26-bit card-swipe, contact smart cards, and contactless smart cards. Also available are key fobs, which are more compact than ID cards and attach to a key ring. Typical biometric technologies include fingerprint, facial recognition, iris recognition, retinal scan, voice, and hand geometry.

Credentials for an access control system are typically held within a database, which stores access credentials for all staff members of a given firm or organisation. Assignment of access control credentials follows from the basic tenets of access control: who has access to a given area, why the person should have access to that area, and where the person should have access. As an example, in a given firm, senior management figures may need general access to all areas of the organisation. ICT staff may need primary access to computer software, hardware and general computer-based information systems. Janitors and maintenance staff may need chief access to service areas, cleaning closets, electrical and heating apparatus, etc.


Access control system components

An access control point can be a door, turnstile, parking gate, elevator, or other physical barrier where granting access can be electrically controlled. Typically the access point is a door. An electronic access control door can contain several elements. At its most basic, there is a stand-alone electric lock, which is unlocked by an operator with a switch. To automate this, operator intervention is replaced by a reader. The reader could be a keypad where a code is entered, a card reader, or a biometric reader. Readers do not usually make an access decision but send a card number to an access control panel that verifies the number against an access list.

To monitor the door position, a magnetic door switch is used; in concept the door switch is not unlike those on refrigerators or car doors. Generally only entry is controlled and exit is uncontrolled. In cases where exit is also controlled, a second reader is used on the opposite side of the door.

In cases where exit is not controlled (free exit), a device called a request-to-exit (REX) is used. Request-to-exit devices can be a pushbutton or a motion detector. When the button is pushed or the motion detector detects motion at the door, the door alarm is temporarily ignored while the door is opened. Exiting a door without having to electrically unlock the door is called mechanical free egress; this is an important safety feature. In cases where the lock must be electrically unlocked on exit, the request-to-exit device also unlocks the door.

Access control topology

Typical access control door wiring

Access control door wiring when using intelligent readers


Access control decisions are made by comparing the credential to an access control list. This lookup can be done by a host or server, by an access control panel, or by a reader. The development of access control systems has seen a steady push of the lookup out from a central host to the edge of the system, i.e., the reader. The predominant topology circa 2009 is hub and spoke, with a control panel as the hub and the readers as the spokes. The lookup and control functions are performed by the control panel. The spokes communicate through a serial connection, usually RS-485. Some manufacturers are pushing the decision making to the edge by placing a controller at the door. The controllers are IP-enabled and connect to a host and database using standard networks.

Types of readers

Access control readers may be classified by functions they are able to perform:

Basic (non-intelligent) readers: simply read a card number or PIN and forward it to a control panel. In the case of biometric identification, such readers output the ID number of a user. Typically the Wiegand protocol is used for transmitting data to the control panel, but other options such as RS-232, RS-485 and Clock/Data are not uncommon. This is the most popular type of access control reader. Examples of such readers are RF Tiny by RFLOGICS, ProxPoint by HID, and P300 by Farpointe Data.

Semi-intelligent readers: have all the inputs and outputs necessary to control door hardware (lock, door contact, exit button), but do not make any access decisions. When a user presents a card or enters a PIN, the reader sends the information to the main controller and waits for its response. If the connection to the main controller is interrupted, such readers stop working or function in a degraded mode. Usually semi-intelligent readers are connected to a control panel via an RS-485 bus. Examples of such readers are InfoProx Lite IPL200 by CEM Systems and AP-510 by Apollo.

Intelligent readers: have all the inputs and outputs necessary to control door hardware, and also have the memory and processing power necessary to make access decisions independently. Like semi-intelligent readers, they are connected to a control panel via an RS-485 bus. The control panel sends configuration updates and retrieves events from the readers. Examples of such readers are InfoProx IPO200 by CEM Systems and AP-500 by Apollo. There is also a new generation of intelligent readers referred to as "IP readers". Systems with IP readers usually do not have traditional control panels; the readers communicate directly with a PC that acts as a host. Examples of such readers are the PowerNet IP Reader by Isonas Security Systems, ID08 by Solus (which has a built-in web service to make it user friendly), the Edge ER40 reader by HID Global, LogLock and UNiLOCK by ASPiSYS Ltd, and the BioEntry Plus reader by Suprema Inc.

Some readers may have additional features such as LCD and function buttons for data collection purposes (i.e. clock-in/clock-out events for attendance reports), camera/speaker/microphone for intercom, and smart card read/write support.


Point-of-Sale System (POS)

Point of sale (POS) or checkout is both a checkout counter in a shop and the location where a transaction occurs. Colloquially, a "checkout" refers to a POS terminal or, more generally, to the hardware and software used for checkouts, the equivalent of an electronic cash register. A POS terminal manages the selling process through a salesperson-accessible interface. The same system allows the creation and printing of the voucher.

Hospitality industry

Hospitality point of sale systems are computerized systems incorporating registers, computers and peripheral equipment, usually on a computer network. Like other point of sale systems, these systems keep track of sales, labor and payroll, and can generate records used in accounting and bookkeeping. They may be accessed remotely by restaurant corporate offices, troubleshooters and other authorized parties.

Point of sales systems have revolutionized the restaurant industry, particularly in the fast food sector. In the most recent technologies, registers are computers, sometimes with touch screens. The registers connect to a server, often referred to as a "store controller" or a "central control unit." Printers and monitors are also found on the network. Additionally, remote servers can connect to store networks and monitor sales and other store data.

Such systems have decreased service times and increased the efficiency of order handling.

Another innovation in technology for the restaurant industry is Wireless POS. Many restaurants with high volume use wireless handheld POS to collect orders which are sent to a server. The server sends required information to the kitchen in real time.

Restaurant business

Restaurant POS refers to point of sale (POS) software that runs on computers, usually touchscreen terminals or wireless handheld devices. Restaurant POS systems assist businesses to track transactions in real time.

Typical restaurant POS software is able to print guest checks, print orders to kitchens and bars for preparation, process credit cards and other payment cards, and run reports. In addition, some systems implement wireless pagers and electronic signature capture devices.

In the fast food industry, registers may be at the front counter, or configured for drive through or walk through cashiering and order taking. Front counter registers take and serve orders at the same terminal, while drive through registers allow orders to be taken at one or more drive through windows, to be cashiered and served at another. In addition to registers, drive through and kitchen monitors may be used by store personnel to view orders. Once orders appear they may be deleted or recalled by "bump bars", small boxes which have different buttons for different uses. Drive through systems are often enhanced by the use of drive through wireless (or headset) systems which enable communications with drive through speakers.


POS systems are often designed for a variety of clients, and can be programmed by the end users to suit their needs. Some large clients write their own specifications for vendors to implement. In some cases, POS systems are sold and supported by third party distributors, while in other cases they are sold and supported directly by the vendor.

Wireless systems consist of drive-through microphones and speakers (often one speaker will serve both purposes), which are wired to a "base station" or "center module." This will, in turn, broadcast to headsets. Headsets may be an all-in-one headset or one connected to a belt pack.

Hotel business

POS software allows for transfer of meal charges from dining room to guest room with a button or two. It may also need to be integrated with property management software.

Primary Storage

Primary storage (or main memory or internal memory), often referred to simply as memory, is the only storage directly accessible to the CPU. The CPU continuously reads instructions stored there and executes them as required. Any data actively operated on is also stored there in a uniform manner.

Historically, early computers used delay lines, Williams tubes, or rotating magnetic drums as primary storage. By 1954, those unreliable methods were mostly replaced by magnetic core memory, which was still rather cumbersome. The invention of the transistor started a revolution, soon enabling a then-unbelievable miniaturization of electronic memory via solid-state silicon chip technology.

This led to modern random-access memory (RAM), which is small and light but comparatively expensive. (The particular types of RAM used for primary storage are also volatile, i.e. they lose the information when not powered.)

Traditionally, there are two further sub-layers of primary storage besides the main large-capacity RAM:

Processor registers are located inside the processor. Each register typically holds a word of data (often 32 or 64 bits). CPU instructions direct the arithmetic and logic unit to perform various calculations or other operations on this data (or with its help). Registers are technically among the fastest of all forms of computer data storage.

Processor cache is an intermediate stage between the ultra-fast registers and the much slower main memory, introduced solely to improve the performance of the computer. The most actively used information in main memory is duplicated in the cache, which is faster but of much smaller capacity; conversely, the cache is much slower but much larger than the processor registers. A multi-level hierarchical cache setup is also commonly used: the primary cache is the smallest and fastest and is located inside the processor, while the secondary cache is somewhat larger and slower.
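
To make the hit/miss behavior concrete, here is a small illustrative model (not from the source text): a direct-mapped cache simulated in Python, with made-up line and cache sizes, showing why sequential access is served mostly from cache while large-stride access misses on every reference.

```python
# Illustrative sketch: a tiny direct-mapped cache model. The line size
# and cache size are hypothetical parameters, not from the source.

LINE_SIZE = 64          # bytes per cache line (assumed)
NUM_LINES = 128         # number of lines in the cache (assumed)

def simulate(addresses):
    """Count hits and misses for a direct-mapped cache over an address trace."""
    cache = [None] * NUM_LINES          # one stored tag per cache slot
    hits = misses = 0
    for addr in addresses:
        line_no = addr // LINE_SIZE     # which memory line this byte is in
        index = line_no % NUM_LINES     # direct-mapped: each line maps to one slot
        tag = line_no // NUM_LINES
        if cache[index] == tag:
            hits += 1
        else:
            misses += 1
            cache[index] = tag          # replace whatever was there
    return hits, misses

sequential = range(0, 64 * 1024, 4)            # walk memory 4 bytes at a time
strided = range(0, 64 * 1024 * 64, 4 * 64)     # same access count, 256-byte stride

print("sequential:", simulate(sequential))     # mostly hits: 15 hits per line loaded
print("strided:   ", simulate(strided))        # every access misses
```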

Main memory is directly or indirectly connected to the CPU via a memory bus, which actually comprises two buses: an address bus and a data bus. The CPU first sends a number through the address bus, called the memory address, that indicates the desired location of the data; it then reads or writes the data itself using the data bus. Additionally, a memory management unit (MMU), a small device between the CPU and RAM, recalculates the actual memory address, for example to provide an abstraction of virtual memory or to perform other tasks.

As the RAM types used for primary storage are volatile (cleared at start-up), a computer containing only such storage would have no source from which to read instructions in order to start. Hence, non-volatile primary storage containing a small startup program (BIOS) is used to bootstrap the computer, that is, to read a larger program from non-volatile secondary storage into RAM and start executing it. A non-volatile technology used for this purpose is called ROM, for read-only memory (the terminology may be somewhat confusing, as most ROM types are also capable of random access).

Many types of "ROM" are not literally read-only, as updates are possible; however, updating is slow, and the memory must be erased in large portions before it can be rewritten. Some embedded systems run programs directly from ROM (or similar), because such programs are rarely changed. Standard computers do not store non-rudimentary programs in ROM; rather, they use large capacities of secondary storage, which is non-volatile as well and not as costly.


Private Network

In Internet Protocol terminology, a private network is typically a network that uses private IP address space, following the standards set by RFC 1918 and RFC 4193. These addresses are common in home and office local area networks (LANs), where globally routable addresses are scarce, expensive to obtain, or simply unnecessary. Private IP address spaces were originally defined in an effort to delay IPv4 address exhaustion, but they are also a feature of the next-generation Internet Protocol, IPv6.

These addresses are private because they are not globally delegated, meaning they are not allocated to any specific organization. Anyone can use these addresses without approval from a regional Internet registry (RIR). Consequently, they are not routable within the public Internet. If such a private network needs to connect to the Internet, it must use either a network address translator (NAT) gateway or a proxy server.

The most common use of these addresses is in home networks, since most Internet service providers (ISPs) allocate only a single IP address to each residential customer, but many homes have more than one networked device (for example, several computers and a printer). In this situation, a NAT gateway is usually used to provide Internet connectivity to multiple hosts. Private addresses are also commonly used in corporate networks which, for security reasons, are not connected directly to the Internet. Often a proxy, SOCKS gateway, or similar device is used to provide restricted Internet access to internal users. In both cases, private addresses are seen as adding security to the internal network, since it is impossible for an Internet host to connect directly to an internal system.
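
As a quick illustration, Python's standard ipaddress module can test whether an address falls within the RFC 1918 private ranges; the sample addresses below are made up for the example.

```python
# Minimal sketch using the standard-library ipaddress module.
import ipaddress

for text in ["10.1.2.3", "172.20.0.5", "192.168.1.10", "8.8.8.8"]:
    addr = ipaddress.ip_address(text)
    print(text, "private" if addr.is_private else "globally routable")

# The RFC 1918 blocks themselves, for reference:
rfc1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
print(any(ipaddress.ip_address("172.20.0.5") in net for net in rfc1918))  # True
```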

Because many private networks use the same private IP address space, a common problem occurs when merging such networks: the collision of address space, resulting in duplication of addresses on multiple devices. In this case, networks must renumber, often a difficult and time-consuming task, or a NAT router must be placed between the networks to masquerade the duplicated addresses.

It is not uncommon for packets originating in private address spaces to leak onto the Internet. Poorly configured private networks often attempt reverse DNS lookups for these addresses, causing extra load on the Internet root nameservers. The AS112 project attempted to mitigate this load by providing special "blackhole" anycast nameservers for private addresses, which only return "not found" answers for these queries. Organizational edge routers are usually configured to drop ingress IP traffic for these networks; such traffic can arrive either by accident or from malicious hosts using a spoofed source address. Less commonly, ISP edge routers drop such egress traffic from customers, which reduces the impact on the Internet of misconfigured or malicious hosts on the customer's network.

Processing Controls

In business and accounting, information technology controls (or IT controls) are specific activities performed by persons or systems designed to ensure that business objectives are met. They are a subset of an enterprise's internal control. IT control objectives relate to the confidentiality, integrity, and availability of data and the overall management of the IT function of the business enterprise. IT controls are often described in two categories: IT general controls (ITGC) and IT application controls. ITGC include controls over the information technology (IT) environment, computer operations, access to programs and data, program development and program changes. IT application controls refer to transaction processing controls, sometimes called "input-processing-output" controls. Information technology controls have been given increased prominence in corporations listed in the United States by the Sarbanes-Oxley Act. The COBIT framework (Control Objectives for Information and related Technology) is a widely used framework promulgated by the IT Governance Institute, which defines a variety of ITGC and application control objectives and recommended evaluation approaches. IT departments in organizations are often led by a chief information officer (CIO), who is responsible for ensuring that effective information technology controls are utilized.

IT controls and the Sarbanes-Oxley Act (SOX)

SOX requires the chief executive and chief financial officers of public companies to attest to the accuracy of financial reports (Section 302) and requires public companies to establish adequate internal controls over financial reporting (Section 404). Passage of SOX resulted in an increased focus on IT controls, as these support financial processing and therefore fall into the scope of management's assessment of internal control under Section 404 of SOX.

The COBIT framework may be used to assist with SOX compliance, although COBIT is considerably wider in scope. The 2007 SOX guidance from the PCAOB[1] and SEC[2] states that IT controls should only be part of the SOX 404 assessment to the extent that specific financial risks are addressed, which significantly reduces the scope of IT controls required in the assessment. This scoping decision is part of the entity's SOX 404 top-down risk assessment. In addition, Statements on Auditing Standards No. 109 (SAS109)[3] discusses the IT risks and control objectives pertinent to a financial audit and is referenced by the SOX guidance.

IT controls that typically fall under the scope of a SOX 404 assessment may include:

Specific application (transaction processing) control procedures that directly mitigate identified financial reporting risks. There are typically a few such controls within major applications in each financial process, such as accounts payable, payroll, general ledger, etc. The focus is on "key" controls (those that specifically address risks), not on the entire application.

IT general controls that support the assertions that programs function as intended and that key financial reports are reliable, primarily change control and security controls;

IT operations controls, which ensure that problems with processing are identified and corrected.

Specific activities that may occur to support the assessment of the key controls above include:

Understanding the organization’s internal control program and its financial reporting processes;

Identifying the IT systems involved in the initiation, authorization, processing, summarization and reporting of financial data;

Identifying the key controls that address specific financial risks;

Designing and implementing controls designed to mitigate the identified risks, and monitoring them for continued effectiveness;

Documenting and testing IT controls;

Ensuring that IT controls are updated and changed, as necessary, to correspond with changes in internal control or financial reporting processes; and

Monitoring IT controls for effective operation over time.


To comply with Sarbanes-Oxley, organizations must understand how the financial reporting process works and must be able to identify the areas where technology plays a critical part. In considering which controls to include in the program, organizations should recognize that IT controls can have a direct or indirect impact on the financial reporting process. For instance, IT application controls that ensure completeness of transactions can be directly related to financial assertions. Access controls, on the other hand, which exist within these applications or within their supporting systems (such as databases, networks and operating systems), are equally important but do not directly align with a financial assertion. Application controls are generally aligned with a business process that gives rise to financial reports. While there are many IT systems operating within an organization, Sarbanes-Oxley compliance focuses only on those that are associated with a significant account or related business process and mitigate specific material financial risks. This focus on risk enables management to significantly reduce the scope of IT general control testing in 2007 relative to prior years.

Push Reporting

Push services are often based on information preferences expressed in advance; this is called a publish/subscribe model. A client "subscribes" to various information "channels", and whenever new content is available on one of those channels, the server pushes that information out to the user.
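
A minimal sketch of this publish/subscribe model in Python follows; the class, channel name and messages are invented for illustration. Subscribers register callbacks in advance, and the server pushes each new item to all of them as soon as it is published.

```python
# Illustrative publish/subscribe sketch (not from the source text).
from collections import defaultdict

class PushServer:
    def __init__(self):
        self.channels = defaultdict(list)   # channel name -> subscriber callbacks

    def subscribe(self, channel, callback):
        """Client expresses its preference in advance."""
        self.channels[channel].append(callback)

    def publish(self, channel, content):
        """Server-initiated delivery: no polling by the client."""
        for deliver in self.channels[channel]:
            deliver(content)

server = PushServer()
server.subscribe("stock-ticker", lambda msg: print("client A got:", msg))
server.subscribe("stock-ticker", lambda msg: print("client B got:", msg))
server.publish("stock-ticker", "ACME 42.10 +0.35")   # pushed to both clients
```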

Synchronous conferencing and instant messaging are typical examples of push services. Chat messages and sometimes files are pushed to the user as soon as they are received by the messaging service. Both decentralised peer-to-peer programs (such as WASTE) and centralised programs (such as IRC or XMPP) allow pushing files, which means the sender initiates the data transfer rather than the recipient.

Email is also a push system: the SMTP protocol on which it is based is a push protocol (see Push e-mail). However, the last step (from mail server to desktop computer) typically uses a pull protocol such as POP3 or IMAP. Modern e-mail clients make this step seem instantaneous by repeatedly polling the mail server for new mail. The IMAP protocol includes the IDLE command, which allows the server to tell the client when new messages arrive. The original BlackBerry was the first popular example of push technology for email in a wireless context.

Another popular type of Internet push technology was the PointCast Network, which gained popularity in the 1990s by delivering news and stock market data. Both Netscape and Microsoft integrated it into their software at the height of the browser wars, but it later faded away and was replaced in the 2000s by RSS (a pull technology).

Other uses are push enabled web applications including market data distribution (stock tickers), online chat/messaging systems (webchat), auctions, online betting and gaming, sport results, monitoring consoles and sensor network monitoring.


Push technology is used in Windows Update to push updates to a user's computer.

Record

A business record is a document that records a business dealing. Business records include meeting minutes, memorandums, employment contracts, and accounting source documents.

A business record must be retrievable at a later date so that the business dealings can be accurately reviewed as required. Since business depends upon confidence and trust, not only must the record be accurate and easily retrieved; the processes surrounding its creation and retrieval must also be perceived by customers and the business community to consistently deliver a full and accurate record with no gaps or additions.

Most business records have specified retention periods based on legal requirements or internal company policies. This is important because in many countries (including the United States) documents may be required by law to be disclosed to government regulatory agencies or to the general public. Likewise, they may be discoverable if the business is sued. Under the business records exception in the Federal Rules of Evidence, certain types of business records, particularly those made and kept with regularity, may be considered admissible in court despite containing hearsay.

In computer science, a record (also called a tuple or struct) is one of the simplest data structures: it consists of two or more values or variables stored in consecutive memory positions, so that each component (called a field or member of the record) can be accessed by applying a different offset to the starting address.

For example, a date may be stored as a record containing a 16-bit numeric field for the year, a three-letter field for the month, and an 8-bit numeric field for the day-of-month. As this example shows, the fields of a record need not all have the same size and encoding; therefore, in general one cannot easily obtain the field which has a run-time computed index in the field sequence, as one can do in an array.
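
This date layout can be sketched with Python's standard struct module; the format string below is one possible encoding, chosen here only to illustrate fields at fixed offsets.

```python
# Sketch of the date record described above: a 16-bit year, a
# three-letter month, and an 8-bit day at fixed byte offsets.
import struct

DATE_RECORD = "<H3sB"            # little-endian: uint16, 3 raw bytes, uint8
packed = struct.pack(DATE_RECORD, 2008, b"JAN", 15)
print(len(packed))               # 6 bytes: field offsets are 0, 2 and 5
year, month, day = struct.unpack(DATE_RECORD, packed)
print(year, month.decode(), day) # 2008 JAN 15
```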

A record type is a data type that describes such values and variables. Most modern computer languages allow the programmer to define new record types. The definition includes specifying the data type of each field, its position in the record, and an identifier (name or label) by which it can be accessed.

Records can exist in any storage medium, including main memory and mass storage devices such as magnetic tapes or hard disks. Records are a fundamental component of most data structures, especially linked data structures. Many computer files are organized as arrays of logical records, often grouped into larger physical records or blocks for efficiency.


Router

A router[1] is a networking device whose software and hardware are usually tailored to the tasks of routing and forwarding information. For example, on the Internet, information is directed along various paths by routers.

For the pure Internet Protocol (IP) forwarding function, router design tries to minimize the state information kept on individual packets. Once a packet is forwarded, the router should no longer retain statistical information about it. It is the sending and receiving endpoints that keep information about such things as errored or missing packets.

Forwarding decisions can involve layers other than the IP internetwork layer or OSI layer 3. Again, the marketing term "switch" can be applied to devices that have these capabilities. A function that forwards based on data link layer (OSI layer 2) information is properly called a bridge. Marketing literature may call it a layer 2 switch, but "switch" has no precise definition.

Among the most important forwarding decisions is deciding what to do when congestion occurs, i.e., packets arrive at the router at a rate higher than the router can process. Three policies commonly used in the Internet are Tail drop, Random early detection, and Weighted random early detection. Tail drop is the simplest and most easily implemented; the router simply drops packets once the length of the queue exceeds the size of the buffers in the router. Random early detection (RED) probabilistically drops datagrams early when the queue exceeds a configured size. Weighted random early detection requires a weighted average queue size to exceed the configured size, so that short bursts will not trigger random drops.
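
The two simplest of these policies can be sketched in a few lines of Python; the queue limit and RED thresholds below are hypothetical parameters, not values from the source.

```python
# Illustrative sketch of the two simplest drop policies named above.
import random

QUEUE_LIMIT = 100   # hypothetical buffer capacity, in packets

def tail_drop(queue_len):
    """Tail drop: drop only once the queue is full."""
    return queue_len >= QUEUE_LIMIT

def red(avg_queue_len, min_th=20, max_th=80, max_p=0.1):
    """Random early detection: drop probabilistically before the queue fills."""
    if avg_queue_len < min_th:
        return False                 # light load: never drop
    if avg_queue_len >= max_th:
        return True                  # heavy load: always drop
    # Drop probability ramps up linearly between the two thresholds.
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < p
```

Weighted RED (not sketched) applies the same idea per traffic class, using a weighted average queue size so short bursts do not trigger drops.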

A router uses a routing table to decide where a packet should be sent; if it cannot find the preferred address, it looks down the routing table and selects the next best address to send the packet to.
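
This "next best address" selection amounts to longest-prefix matching, which the following Python sketch illustrates using the standard ipaddress module; the routes and gateway names are made up.

```python
# Sketch of a routing-table lookup (longest-prefix match).
import ipaddress

routes = {                                   # hypothetical routing table
    ipaddress.ip_network("10.0.0.0/8"):   "gateway-a",
    ipaddress.ip_network("10.1.0.0/16"):  "gateway-b",
    ipaddress.ip_network("0.0.0.0/0"):    "default-gateway",
}

def next_hop(dest):
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)   # most specific wins
    return routes[best]

print(next_hop("10.1.2.3"))   # gateway-b (more specific than 10.0.0.0/8)
print(next_hop("8.8.8.8"))    # default-gateway
```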


Supply Chain Management (SCM)

Supply chain management (SCM) is the management of a network of interconnected businesses involved in the ultimate provision of product and service packages required by end customers (Harland, 1996). Supply Chain Management spans all movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption (supply chain).

Another definition is provided by the APICS Dictionary when it defines SCM as the "design, planning, execution, control, and monitoring of supply chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand, and measuring performance globally."

Supply chain management encompasses the planning and management of all activities involved in sourcing, procurement, conversion, and logistics management. It also includes the crucial components of coordination and collaboration with channel partners, which can be suppliers, intermediaries, third-party service providers, and customers. In essence, supply chain management integrates supply and demand management within and across companies. More recently, the loosely coupled, self-organizing network of businesses that cooperate to provide product and service offerings has been called the Extended Enterprise.

Supply chain management can also refer to supply chain management software which includes tools or modules used to execute supply chain transactions, manage supplier relationships and control associated business processes.

Supply chain event management (abbreviated SCEM) is the consideration of all possible events and factors that can disrupt a supply chain. With SCEM, possible scenarios can be created and solutions devised.

Supply chain management problems

Supply chain management must address the following problems:

Distribution Network Configuration: number, location and network missions of suppliers, production facilities, distribution centers, warehouses, cross-docks and customers.

Distribution Strategy: questions of operating control (centralized, decentralized or shared); delivery scheme, e.g., direct shipment, pool point shipping, cross docking, DSD (direct store delivery), closed loop shipping; mode of transportation, e.g., motor carrier, including truckload, LTL, parcel; railroad; intermodal transport, including TOFC (trailer on flatcar) and COFC (container on flatcar); ocean freight; airfreight; replenishment strategy (e.g., pull, push or hybrid); and transportation control (e.g., owner-operated, private carrier, common carrier, contract carrier, or 3PL).

Trade-Offs in Logistical Activities: The above activities must be well coordinated in order to achieve the lowest total logistics cost. Trade-offs may increase the total cost if only one of the activities is optimized. For example, full truckload (FTL) rates are more economical on a cost per pallet basis than less than truckload (LTL) shipments. If, however, a full truckload of a product is ordered to reduce transportation costs, there will be an increase in inventory holding costs which may increase total logistics costs. It is therefore imperative to take a systems approach when planning logistical activities. These trade-offs are key to developing the most efficient and effective Logistics and SCM strategy.
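
The FTL-versus-LTL example can be made concrete with a small worked calculation; all of the figures below are hypothetical, chosen only to show how optimizing freight cost alone can raise the total.

```python
# Hypothetical worked example of the transportation-vs-holding trade-off.

pallets_needed_per_week = 5

# Option 1: weekly LTL shipments of only what is needed.
ltl_cost_per_pallet = 120
weekly_ltl_cost = pallets_needed_per_week * ltl_cost_per_pallet      # 600

# Option 2: one full truckload (26 pallets) roughly every 5.2 weeks.
ftl_flat_cost = 1500                        # ~58 per pallet: cheaper freight
avg_extra_inventory = 26 / 2                # average extra pallets on hand
holding_cost_per_pallet_week = 25
weeks_covered = 26 / pallets_needed_per_week
weekly_ftl_cost = (ftl_flat_cost / weeks_covered
                   + avg_extra_inventory * holding_cost_per_pallet_week)

print(round(weekly_ltl_cost), round(weekly_ftl_cost))   # 600 vs ~613
# Cheaper freight per pallet, yet a higher total weekly logistics cost.
```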

Information: Integration of processes through the supply chain to share valuable information, including demand signals, forecasts, inventory, transportation, potential collaboration, etc.

Inventory Management: Quantity and location of inventory, including raw materials, work-in-progress (WIP) and finished goods.

Cash-Flow: Arranging the payment terms and methodologies for exchanging funds across entities within the supply chain.

Supply chain execution means managing and coordinating the movement of materials, information and funds across the supply chain. The flow is bi-directional.


Activities/functions

Supply chain management is a cross-functional approach that includes managing the movement of raw materials into an organization, certain aspects of the internal processing of materials into finished goods, and the movement of finished goods out of the organization and toward the end-consumer. As organizations strive to focus on core competencies and become more flexible, they reduce their ownership of raw materials sources and distribution channels. These functions are increasingly outsourced to other entities that can perform the activities better or more cost-effectively. The effect is to increase the number of organizations involved in satisfying customer demand, while reducing management control of daily logistics operations. Less control and more supply chain partners led to the creation of supply chain management concepts. The purpose of supply chain management is to improve trust and collaboration among supply chain partners, thus improving inventory visibility and the velocity of inventory movement.

Several models have been proposed for understanding the activities required to manage material movements across organizational and functional boundaries. SCOR is a supply chain management model promoted by the Supply Chain Council. Another model is the SCM Model proposed by the Global Supply Chain Forum (GSCF). Supply chain activities can be grouped into strategic, tactical, and operational levels.

Strategic

Strategic network optimization, including the number, location, and size of warehousing, distribution centers, and facilities.

Strategic partnerships with suppliers, distributors, and customers, creating communication channels for critical information and operational improvements such as cross docking, direct shipping, and third-party logistics.

Product life cycle management, so that new and existing products can be optimally integrated into the supply chain and capacity management activities.

Information technology infrastructure to support supply chain operations.

Where-to-make and what-to-make-or-buy decisions.

Aligning overall organizational strategy with supply strategy.

Tactical

Sourcing contracts and other purchasing decisions.

Production decisions, including contracting, scheduling, and planning process definition.

Inventory decisions, including quantity, location, and quality of inventory.

Transportation strategy, including frequency, routes, and contracting.

Benchmarking of all operations against competitors and implementation of best practices throughout the enterprise.

Milestone payments.

Focus on customer demand.


Operational

Daily production and distribution planning, including all nodes in the supply chain.

Production scheduling for each manufacturing facility in the supply chain (minute by minute).

Demand planning and forecasting, coordinating the demand forecast of all customers and sharing the forecast with all suppliers.

Sourcing planning, including current inventory and forecast demand, in collaboration with all suppliers.

Inbound operations, including transportation from suppliers and receiving inventory.

Production operations, including the consumption of materials and flow of finished goods.

Outbound operations, including all fulfillment activities, warehousing and transportation to customers.

Order promising, accounting for all constraints in the supply chain, including all suppliers, manufacturing facilities, distribution centers, and other customers.

Supply chain management

Organizations increasingly find that they must rely on effective supply chains, or networks, to compete successfully in the global market and networked economy.[1] In Peter Drucker's (1998) new management paradigms, this concept of business relationships extends beyond traditional enterprise boundaries and seeks to organize entire business processes throughout a value chain of multiple companies.

During the past decades, globalization, outsourcing and information technology have enabled many organizations, such as Dell and Hewlett Packard, to successfully operate solid collaborative supply networks in which each specialized business partner focuses on only a few key strategic activities (Scott, 1993). This inter-organizational supply network can be acknowledged as a new form of organization. However, with the complicated interactions among the players, the network structure fits neither "market" nor "hierarchy" categories (Powell, 1990). It is not clear what kind of performance impacts different supply network structures could have on firms, and little is known about the coordination conditions and trade-offs that may exist among the players. From a systems perspective, a complex network structure can be decomposed into individual component firms (Zhang and Dilts, 2004). Traditionally, companies in a supply network concentrate on the inputs and outputs of the processes, with little concern for the internal management working of other individual players. Therefore, the choice of an internal management control structure is known to impact local firm performance (Mintzberg, 1979).

In the 21st century, changes in the business environment have contributed to the development of supply chain networks. First, as an outcome of globalization and the proliferation of multinational companies, joint ventures, strategic alliances and business partnerships, significant success factors were identified, complementing the earlier "Just-In-Time", "Lean Manufacturing" and "Agile Manufacturing" practices.[2] Second, technological changes, particularly the dramatic fall in information communication costs, which are a significant component of transaction costs, have led to changes in coordination among the members of the supply chain network (Coase, 1998).

Many researchers have recognized these kinds of supply network structures as a new organization form, using terms such as "Keiretsu", "Extended Enterprise", "Virtual Corporation", "Global Production Network", and "Next Generation Manufacturing System".[3] In general, such a structure can be defined as "a group of semi-independent organizations, each with their capabilities, which collaborate in ever-changing constellations to serve one or more markets in order to achieve some business goal specific to that collaboration" (Akkermans, 2001).

Secondary Storage

Secondary storage (or external memory) differs from primary storage in that it is not directly accessible by the CPU. The computer usually uses its input/output channels to access secondary storage and transfers the desired data using an intermediate area in primary storage. Secondary storage does not lose the data when the device is powered down; it is non-volatile. Per unit, it is typically also an order of magnitude less expensive than primary storage. Consequently, modern computer systems typically have an order of magnitude more secondary storage than primary storage, and data are kept there for a longer time.

In modern computers, hard disk drives are usually used as secondary storage. The time taken to access a given byte of information stored on a hard disk is typically a few thousandths of a second, or milliseconds. By contrast, the time taken to access a given byte of information stored in random access memory is measured in billionths of a second, or nanoseconds. This illustrates the very significant access-time difference which distinguishes solid-state memory from rotating magnetic storage devices: hard disks are typically about a million times slower than memory. Rotating optical storage devices, such as CD and DVD drives, have even longer access times. With disk drives, once the disk read/write head reaches the proper placement and the data of interest rotates under it, subsequent data on the track are very fast to access. As a result, in order to hide the initial seek time and rotational latency, data are transferred to and from disks in large contiguous blocks.

When data reside on disk, accessing them in blocks to hide latency is the key to designing efficient external-memory algorithms. Sequential or block access on disks is orders of magnitude faster than random access, and many sophisticated paradigms have been developed to design efficient algorithms based upon sequential and block access. Another way to reduce the I/O bottleneck is to use multiple disks in parallel in order to increase the bandwidth between primary and secondary memory.[2]
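
As a small illustration of block access (a sketch under assumed parameters, not an algorithm from the source), the following Python function reads a file in large contiguous blocks rather than byte by byte, the same pattern external-memory algorithms exploit to amortize seek time and rotational latency.

```python
# Illustrative block-oriented read; block size is a made-up parameter.

def checksum_by_blocks(path, block_size=1 << 20):   # 1 MiB blocks
    """Sum all bytes of a file, reading in large sequential blocks."""
    total = 0
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)   # one large sequential transfer
            if not block:
                break                    # end of file
            total = (total + sum(block)) % (1 << 32)
    return total
```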

Some other examples of secondary storage technologies are: flash memory (e.g. USB flash drives or keys), floppy disks, magnetic tape, paper tape, punched cards, standalone RAM disks, and Iomega Zip drives.


Secondary storage is often formatted according to a file system format, which provides the abstraction necessary to organize data into files and directories, while also providing additional information (called metadata) describing the owner of a certain file, the access time, the access permissions, and other information.

Most computer operating systems use the concept of virtual memory, allowing the utilization of more primary storage capacity than is physically available in the system. As primary memory fills up, the system moves the least-used chunks (pages) to secondary storage devices (to a swap file or page file), retrieving them later when they are needed. The more of these retrievals from slower secondary storage are necessary, the more overall system performance is degraded.

Tertiary storage

[Image: a large tape library, with tape cartridges placed on shelves in the front and a robotic arm moving in the back; the visible height of the library is about 180 cm.]

Tertiary storage or tertiary memory[3] provides a third level of storage. Typically it involves a robotic mechanism which will mount (insert) and dismount removable mass storage media into a storage device according to the system's demands; the data are often copied to secondary storage before use. It is primarily used for archiving rarely accessed information, since it is much slower than secondary storage (e.g. 5–60 seconds vs. 1–10 milliseconds), and it is primarily useful for extraordinarily large data stores accessed without human operators. Typical examples include tape libraries and optical jukeboxes.

When a computer needs to read information from the tertiary storage, it will first consult a catalog database to determine which tape or disc contains the information. Next, the computer will instruct a robotic arm to fetch the medium and place it in a drive. When the computer has finished reading the information, the robotic arm will return the medium to its place in the library.


Security Administrator

Computer security is a branch of technology known as information security as applied to computers and networks. The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive to its intended users. The term computer system security means the collective processes and mechanisms by which sensitive and valuable information and services are protected from publication, tampering or collapse by unauthorized activities, untrustworthy individuals and unplanned events.

A chief security officer (CSO) is a corporation's top executive responsible for security. The CSO serves as the business leader responsible for the development, implementation and management of the organization's corporate security vision, strategy and programs. CSOs direct staff in identifying, developing, implementing and maintaining security processes across the organization to reduce risks, respond to incidents, and limit exposure to liability in all areas of financial, physical, and personal risk; establish appropriate standards and risk controls associated with intellectual property; and direct the establishment and implementation of policies and procedures related to data security. Those primarily responsible for information security may have the title of Chief Information Security Officer (CISO) to differentiate the positions.

Segregation of Duties

Separation of duties (SoD) is the concept of having more than one person required to complete a task. It is alternatively called segregation of duties or, in the political realm, separation of powers.

Separation of duties is one of the key concepts of internal control and is the most difficult, and sometimes the most costly, one to achieve. In essence, SoD implements an appropriate level of checks and balances upon the activities of individuals. R. A. Botha and J. H. P. Eloff, writing in the IBM Systems Journal, describe SoD as follows.

Separation of duty, as a security principle, has as its primary objective the prevention of fraud and errors. This objective is achieved by disseminating the tasks and associated privileges for a specific business process among multiple users. This principle is demonstrated in the traditional example of separation of duty found in the requirement of two signatures on a cheque.[1]

Actual job titles and organizational structure may vary greatly from one organization to another, depending on the size and nature of the business. With the concept of SoD, business critical duties can be categorized into four types of functions: authorization, custody, record keeping, and reconciliation. In a perfect system, no one person should handle more than one type of function.

The term SoD is already well known in financial accounting systems. Companies of all sizes understand not to combine roles such as receiving checks (payment on account) and approving write-offs, depositing cash and reconciling bank statements, or approving time cards and having custody of paychecks. SoD is fairly new to the IS department, and a high portion of SOX internal control issues come from IT.[2]


In information systems, segregation of duties helps reduce the potential damage from the actions of one person. The IS or end-user department should be organized in a way that achieves adequate separation of duties. According to ISACA's Segregation of Duties Control matrix[3], some duties should not be combined into one position. This matrix is not an industry standard, just a general guideline suggesting which positions should be separated and which require compensating controls when combined.

Depending on a company's size, functions and designations may vary. When duties cannot be separated, compensating controls should be in place. Compensating controls are internal controls that are intended to reduce the risk of an existing or potential control weakness. If a single person can carry out and conceal errors and/or irregularities in the course of performing their day-to-day activities, they have been assigned SoD-incompatible duties. There are several control mechanisms that can help to enforce the segregation of duties:

1. Audit trails enable IT managers or auditors to recreate the actual transaction flow from the point of origination to its existence on an updated file. Good audit trails should be enabled to provide information on who initiated the transaction, the time of day and date of entry, the type of entry, what fields of information it contained, and what files it updated.

2. Reconciliation of applications and an independent verification process are ultimately the responsibility of users; they can be used to increase the level of confidence that an application ran successfully.

3. Exception reports are handled at the supervisory level, backed up by evidence noting that exceptions are handled properly and in a timely fashion. A signature of the person who prepares the report is normally required.

4. Manual or automated system or application transaction logs should be maintained, which record all processed system commands or application transactions.

5. Supervisory review should be performed through observation and inquiry.

6. To compensate for mistakes or intentional failures, independent reviews that follow a prescribed procedure are recommended. Such reviews can help detect errors and irregularities.

Pattern

The separation of duties pattern is applied to functions whose performance requires power that can be abused. The pattern is:

1. Start with a function that is indispensable, but potentially subject to abuse.

2. Divide the function into separate steps, each necessary for the function to work or for the power that enables that function to be abused.

3. Assign each step to a different person or organization.

Three general categories of functions must be separated:

the authorization function;

the recording function, e.g. preparing source documents or code or performance reports; and

custody of an asset, whether direct or indirect, e.g. receiving checks in the mail or implementing source code or database changes.
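
These categories suggest a simple automated check. The Python sketch below is illustrative only: the duty types echo the four functions named earlier in this entry, and the staff assignments are invented. It flags anyone assigned more than one incompatible duty type.

```python
# Illustrative SoD conflict check (not an industry standard).

DUTY_TYPES = {"authorization", "custody", "record_keeping", "reconciliation"}

assignments = {                                # hypothetical staff duties
    "alice": {"authorization"},
    "bob":   {"custody", "record_keeping"},    # incompatible pair
    "carol": {"reconciliation"},
}

def sod_violations(assignments):
    """Return every person holding more than one incompatible duty type."""
    return {person: duties
            for person, duties in assignments.items()
            if len(duties & DUTY_TYPES) > 1}

print(sod_violations(assignments))   # {'bob': ...} -> needs compensating controls
```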

Application

The accounting profession has invested significantly in separation of duties because of the understood risks accumulated over hundreds of years of accounting practice.

By contrast, many corporations in the United States found that an unexpectedly high proportion of their Sarbanes-Oxley internal control issues came from IT. Separation of duties is commonly used in large IT organizations so that no single person is in a position to introduce fraudulent or malicious code or data without detection. Role-based access control is frequently used in IT systems where SoD is required. Strict control of software and data changes requires that the same person or organization perform only one of the following roles:

Identification of a requirement (or change request), e.g. a business person;

Authorization and approval, e.g. an IT governance board or manager;

Design and development, e.g. a developer;

Review, inspection and approval, e.g. another developer or architect;

Implementation in production, typically a software change or system administrator.

This is not an exhaustive presentation of the software development life cycle, but a list of critical development functions applicable to separation of duties.

To successfully implement separation of duties in information systems a number of concerns need to be addressed:

The process used to ensure that a person's authorization rights in the system are in line with his or her role in the organization.

The authentication method used, such as knowledge of a password, possession of an object (key, token), or a biometric characteristic.

Circumvention of rights in the system can occur through database administration access, user administration access, tools which provide back-door access, or supplier-installed user accounts. Specific controls, such as a review of an activity log, may be required to address this specific concern.

Server

Server may refer to:

In computing:


Server (computing), a server application, operating system, computer, or appliance
o Server computer, a computer dedicated to server applications
o Application server, a server dedicated to running certain software applications
o Communications server, a carrier-grade computing platform for communications networks
o Database server, provides database services
o Fax server, provides fax services for clients
o File server, provides file services
o Game server, a server that video game clients connect to in order to play online together
o Home server, a server for the home
o Newsreader server, a server that feeds Usenet groups to client newsreaders
o Name server or DNS server
o Print server, provides printer services
o Proxy server, acts as an intermediary for requests from clients
o Sound server, provides multimedia broadcasting/streaming
o Standalone server, an emulator for client-server (web-based) programs
o Web server, a server that HTTP clients connect to in order to send commands and receive responses along with data contents
o Web feed server, a server that distributes, manages, and tracks internal and external RSS feeds in an enterprise
o Client-server, a software architecture that separates "server" functions from "client" functions
o The X Server, part of the X Window System
o Peer-to-peer, a network of computers running as both clients and servers
o Catalog server, a central search point for information across a distributed network

A server computer, sometimes called an enterprise server, is a computer system that provides essential services across a network, to private users inside a large organization or to public users on the Internet.

Many servers have dedicated functionality such as web servers, print servers, and database servers.

Enterprise servers are known to be very fault tolerant, for even a short-term failure can cost more than purchasing and installing the system. For example, it may take only a few minutes' down time at a national stock exchange to justify the expense of entirely replacing the system with something more reliable.

Service Level Agreement (SLA)

A service level agreement (SLA) is a negotiated agreement between two parties where one is the customer and the other is the service provider. This can be a legally binding formal or informal "contract" (see internal department relationships). Contracts between the service provider and other third parties are often (incorrectly) called SLAs; because the level of service has been set by the (principal) customer, there can be no "agreement" between third parties (these agreements are simply "contracts"). Operating Level Agreements (OLAs), however, may be used by internal groups to support SLAs.

The SLA records a common understanding about services, priorities, responsibilities, guarantees, and warranties. Each area of service scope should have the "level of service" defined. The SLA may specify the levels of availability, serviceability, performance, operation, or other attributes of the service, such as billing. The "level of service" can also be specified as "target" and "minimum," which allows customers to be informed what to expect (the minimum), whilst providing a measurable (average) target value that shows the level of organization performance. In some contracts, penalties may be agreed upon in the case of non-compliance of the SLA (but see "internal" customers below). It is important to note that the "agreement" relates to the services the customer receives, and not how the service provider delivers that service.

SLAs have been used since the late 1980s by fixed-line telecom operators as part of their contracts with their corporate customers. This practice has spread such that it is now common for a customer to engage a service provider by including a service-level agreement in a wide range of service contracts, in practically all industries and markets. Internal departments (such as IT, HR, and Real Estate) in larger organizations have adopted the idea of using service-level agreements with their "internal" customers, i.e., users in other departments within the same organization. One benefit of this can be to enable the quality of service to be benchmarked with that agreed to across multiple locations or between different business units. This internal benchmarking can also be used to market-test and provide a value comparison between an in-house department and an external service provider.

Service-level agreements are, by their nature, "output" based: the result of the service as received by the customer is the subject of the "agreement." The (expert) service provider can demonstrate their value by organizing themselves with ingenuity, capability, and knowledge to deliver the service required, perhaps in an innovative way. Organizations can also specify the way the service is to be delivered, through a specification (a service-level specification) and using subordinate "objectives" other than those related to the level of service. This type of agreement is known as an "input" SLA. This latter type of requirement is becoming obsolete as organizations become more demanding and shift the delivery-methodology risk onto the service provider.

Common metrics

Service-level agreements can contain numerous service performance metrics with corresponding service level objectives. A common case in IT service management is a call center or service desk. Metrics commonly agreed to in these cases include:

ABA (Abandonment Rate): Percentage of calls abandoned while waiting to be answered.


ASA (Average Speed to Answer): Average time (usually in seconds) it takes for a call to be answered by the service desk.

TSF (Time Service Factor): Percentage of calls answered within a definite timeframe, e.g., 80% in 20 seconds.

FCR (First Call Resolution): Percentage of incoming calls that can be resolved without the use of a callback or without having the caller call back the helpdesk to finish resolving the case.

TAT (Turn Around Time): Time taken to complete a certain task.

Uptime Agreements are another very common metric, often used for data services such as shared hosting, virtual private servers and dedicated servers. Common agreements include percentage of network uptime, power uptime, amount of scheduled maintenance windows, etc.
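
As a quick illustration of how the call-center metrics above are computed, the following Python sketch derives ABA, ASA, and TSF from a hypothetical call log (answer times in seconds; None marks an abandoned call). Computing TSF over answered calls only is one possible convention.

```python
# Hypothetical call log: answer time in seconds, None = abandoned call.
answer_times = [12, 25, 8, None, 40, 15, None, 19, 22, 5]

answered = [t for t in answer_times if t is not None]
aba = (len(answer_times) - len(answered)) / len(answer_times) * 100  # abandoned %
asa = sum(answered) / len(answered)                                  # avg speed to answer
tsf = sum(1 for t in answered if t <= 20) / len(answered) * 100      # % within 20 s

print(f"ABA {aba:.0f}%  ASA {asa:.1f}s  TSF(20s) {tsf:.0f}%")
# ABA 20%  ASA 18.2s  TSF(20s) 62%
```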

Many SLAs track to the ITIL specifications when applied to IT services.

Typical contents

SLAs commonly include segments to address: a definition of services, performance measurement, problem management, customer duties, warranties, disaster recovery, and termination of agreement.[1]

From a business perspective, you may need to look at Service Level Management (SLM) if you need to differentiate the service (e.g., to Gold, Silver, or Bronze) and have a differentiated price for each level of service.[2] Key points are to write the SLA in the language that the user understands and to have regular service reviews.

In Cloud Computing

Cloud computing (also grid computing and service-oriented architecture) uses the concept of service level agreements to control the use and receipt of (computing) resources from and by third parties.

Any SLA management strategy considers two well-differentiated phases: the negotiation of the contract and the monitoring of its fulfillment in run-time. Thus, SLA Management encompasses the SLA contract definition (basic schema with the QoS (quality of service) parameters), SLA negotiation, SLA monitoring, and SLA enforcement, according to defined policies.

The main point is to build a new layer upon the grid, cloud, or SOA middleware that is able to create a negotiation mechanism between providers and consumers of services.[3] A European Union-funded Framework 7 research project, SLA@SOI[4], is researching aspects of multi-level, multi-provider SLAs within service-oriented infrastructure and cloud computing.

In outsourcing


Outsourcing involves the transfer of responsibility from an organization to a supplier. The management of this new arrangement is through a contract that may include a Service Level Agreement (SLA). The contract may involve financial penalties and the right to terminate if SLAs are consistently missed. Setting, tracking, and managing SLAs is an important part of Outsourcing Relationship Management (ORM) discipline. It is typical that specific SLAs are negotiated up front as part of the outsourcing contract, and they are utilized as one of the primary tools of outsourcing governance.

Shareware

The term shareware, popularized by Bob Wallace,[1] refers to proprietary software that is provided to users without payment on a trial basis and is often limited by any combination of functionality, availability or convenience. Shareware is often offered as a download from an Internet website or as a compact disc included with a periodical such as a newspaper or magazine. The aim of shareware is to give buyers the opportunity to use the program and judge its usefulness before purchasing a license for the full version of the software.

Shareware is usually offered as a trial version with certain features only available after the license is purchased, or as a full version, but for a trial period. Once the trial period has passed the program may stop running until a license is purchased. Shareware is often offered without support, updates, or help menus, which only become available with the purchase of a license. The words "free trial" or "trial version" are indicative of shareware.

The term shareware is used in contrast to retail software, which refers to commercial software available only with the purchase of a license and which may not be copied for others; public domain software, which refers to software not protected by copyright; and freeware, which refers to copyrighted software for which the author solicits no payment (though he or she may request donations).

Software

Computer software, or just software, is a general term used to describe the role that computer programs, procedures, and documentation play in a computer system.[1]

The term includes:

- Application software, such as word processors, which performs productive tasks for users.
- Firmware, which is software resident in electrically programmable memory devices on mainboards or other types of integrated hardware carriers.
- Middleware, which controls and coordinates distributed systems.
- System software, such as operating systems, which interfaces with hardware to provide the necessary services for application software.


- Software testing, a domain that depends on development and programming. Software testing consists of various methods to test a software product and declare it fit before it can be launched for use by either an individual or a group.

- Testware, an umbrella term for all utilities and application software that serve in combination for testing a software package but do not necessarily contribute to operational purposes. As such, testware is not a standing configuration but merely a working environment for application software or subsets thereof.

Software includes things such as websites, programs, and video games that are written in programming languages such as C or C++.

"Software" is sometimes used in a broader context to mean anything which is not hardware but which is used with hardware, such as film, tapes and records.[2]

Spam

Spam is the abuse of electronic messaging systems (including most broadcast media and digital delivery systems) to send unsolicited bulk messages indiscriminately. While the most widely recognized form of spam is e-mail spam, the term is applied to similar abuses in other media: instant messaging spam, Usenet newsgroup spam, Web search engine spam, spam in blogs, wiki spam, online classified ads spam, mobile phone messaging spam, Internet forum spam, junk fax transmissions, social networking spam, and file sharing network spam.

Spamming remains economically viable because advertisers have no operating costs beyond the management of their mailing lists, and it is difficult to hold senders accountable for their mass mailings. Because the barrier to entry is so low, spammers are numerous, and the volume of unsolicited mail has become very high. The costs, such as lost productivity and fraud, are borne by the public and by Internet service providers, which have been forced to add extra capacity to cope with the deluge. Spamming is widely reviled, and has been the subject of legislation in many jurisdictions.[1]

People who create electronic spam are called spammers.[2]

Structured Query Language (SQL)

Structured Query Language (SQL) is a standard language for defining, manipulating, and retrieving data in relational database management systems. The most common operation in SQL is the query, which is performed with the declarative SELECT statement. SELECT retrieves data from one or more tables, or expressions. Standard SELECT statements have no persistent effects on the database. Some non-standard implementations of SELECT can have persistent effects, such as the SELECT INTO syntax that exists in some databases.[11]


Queries allow the user to describe desired data, leaving the database management system (DBMS) responsible for planning, optimizing, and performing the physical operations necessary to produce that result as it chooses.

A query includes a list of columns to be included in the final result immediately following the SELECT keyword. An asterisk ("*") can also be used to specify that the query should return all columns of the queried tables. SELECT is the most complex statement in SQL, with optional keywords and clauses that include:

The FROM clause indicates the table(s) from which data is to be retrieved. The FROM clause can include optional JOIN subclauses to specify the rules for joining tables.

The WHERE clause includes a comparison predicate, which restricts the rows returned by the query. The WHERE clause eliminates all rows from the result set for which the comparison predicate does not evaluate to True.

The GROUP BY clause is used to project rows having common values into a smaller set of rows. GROUP BY is often used in conjunction with SQL aggregation functions or to eliminate duplicate rows from a result set. The WHERE clause is applied before the GROUP BY clause.

The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate.

The ORDER BY clause identifies which columns are used to sort the resulting data, and in which direction they should be sorted (options are ascending or descending). Without an ORDER BY clause, the order of rows returned by an SQL query is undefined.

The following is an example of a SELECT query that returns a list of expensive books. The query retrieves all rows from the Book table in which the price column contains a value greater than 100.00. The result is sorted in ascending order by title. The asterisk (*) in the select list indicates that all columns of the Book table should be included in the result set.

SELECT *
FROM Book
WHERE price > 100.00
ORDER BY title;

The example below demonstrates a query of multiple tables, grouping, and aggregation, by returning a list of books and the number of authors associated with each book.

SELECT Book.title, count(*) AS Authors
FROM Book
JOIN Book_author ON Book.isbn = Book_author.isbn
GROUP BY Book.title;

Example output might resemble the following:

Title                    Authors
----------------------   -------
SQL Examples and Guide    4
The Joy of SQL            1
An Introduction to SQL    2
Pitfalls of SQL           1

System Analyst

A systems analyst is responsible for researching, planning, coordinating, and recommending software and system choices to meet an organization's business requirements. The systems analyst plays a vital role in the systems development process. A successful systems analyst must acquire four skills: analytical, technical, managerial, and interpersonal. Analytical skills enable systems analysts to understand the organization and its functions, helping them to identify opportunities and to analyze and solve problems. Technical skills help systems analysts understand the potential and the limitations of information technology. The systems analyst must be able to work with various programming languages, operating systems, and computer hardware platforms. Management skills help systems analysts manage projects, resources, risk, and change. Interpersonal skills help systems analysts work with end users as well as with analysts, programmers, and other systems professionals.

Because they must translate user requests into technical specifications, systems analysts are the liaisons between vendors and the IT professionals of the organizations they represent.[1] They may be responsible for developing cost analyses, design considerations, and implementation timelines. They may also be responsible for feasibility studies of a computer system before making recommendations to senior management.

A systems analyst performs the following tasks:

- Interact with customers to learn their requirements
- Interact with designers to convey the possible interface of the software
- Interact with and guide coders/developers to keep track of system development
- Perform system testing with sample/live data with the help of testers
- Implement the new system
- Prepare high-quality documentation

Many systems analysts have morphed into business analysts, and the Bureau of Labor Statistics reports that "Increasingly, employers are seeking individuals who have a master's degree in business administration (MBA) with a concentration in information systems."[2]

System Programmer

System programming (or systems programming) is the activity of programming system software. The primary distinguishing characteristic of systems programming when compared to application programming is that application programming aims to produce software which provides services to the user (e.g., a word processor), whereas systems programming aims to produce software which provides services to the computer hardware (e.g., a disk defragmenter). It thus requires a greater degree of hardware awareness.


System Software

Systems software refers to the operating system and all utility programs (such as compilers, loaders, linkers, and debuggers) that manage computer resources at a low level.[1][2][3] Operating systems, such as GNU, Microsoft Windows, Mac OS X, or Linux, are prominent examples of system software.

System software is software that basically allows the parts of a computer to work together. Without the system software the computer cannot operate as a single unit. In contrast to system software, software that allows you to do things like create text documents, play games, listen to music, or surf the web is called application software.[4]

In general, application programs are software that enable the end-user to perform specific, productive tasks, such as word processing or image manipulation. System software performs tasks like transferring data from memory to disk, or rendering text onto a display device.

System software is generally not what a user buys a computer for; instead, it is the basic layer of the computer, which usually comes built in. Application software is what the user buys the computer to run; such programs may include word processors and web browsers.

Types of system software

System software helps the user operate the computer system. It includes diagnostic tools, compilers, servers, windowing systems, utilities, language translators, data communication programs, data management programs, and more. The purpose of systems software is to insulate the applications programmer as much as possible from the details of the particular computer complex being used, especially memory and other hardware features, and such accessory devices as communications, printers, readers, displays, and keyboards.

Temporary File

Temporary files may be created by computer programs for a variety of purposes: principally when a program cannot allocate enough memory for its tasks, when the program is working on data bigger than the architecture's address space, or as a primitive form of inter-process communication.

Auxiliary memory

Modern operating systems employ virtual memory; however, programs that use large amounts of data (e.g., video editing) may still need to create temporary files.

Inter-process communication


Most operating systems offer primitives such as pipes, sockets or shared memory to pass data among programs, but often the simplest way (especially for programs that follow the Unix philosophy) is to write data into a temporary file and inform the receiving program of the location of the temporary file.
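A minimal sketch of this pattern using Python's standard tempfile module; the file contents and the receiver function are invented for illustration:

import os
import tempfile

# The "sender" writes its output to a named temporary file and passes
# the file's location to the receiver, rather than piping data directly.
with tempfile.NamedTemporaryFile(mode="w", suffix=".tmp", delete=False) as tmp:
    tmp.write("intermediate results\n")
    path = tmp.name          # the location handed to the receiving program

# The "receiver" only needs the path to pick up the data.
def receive(path):
    with open(path) as f:
        return f.read()

print(receive(path))
os.remove(path)              # explicit cleanup avoids leftover .TMP files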

Cleanup

Some programs create temporary files and then leave them behind without deleting them. This can happen because the program crashed or because the developer of the program simply forgot to add the code needed to delete the temporary files after the program is done with them. In Microsoft Windows, the temporary files left behind by programs accumulate over time and can take up a lot of disk space. System utilities, called temporary file cleaners or disk cleaners, can be used to address this issue. UNIX-based operating systems generally suffer less from this problem because their temporary files are typically wiped at boot.

Usage

The usual filename extension for temporary files is ".TMP". Temporary files are normally created in a designated temporary directory reserved for that purpose.

Thin Client

A thin client (sometimes also called a lean or slim client) is a computer or a computer program which depends heavily on some other computer (its server) to fulfill its traditional computational roles.[1] This stands in contrast to the traditional fat client, a computer designed to take on these roles by itself. The exact roles assumed by the server may vary, from providing data persistence (for example, for diskless nodes) to actual information processing on the client's behalf.

Thin clients occur as components of a broader computer infrastructure, where many clients share their computations with the same server. As such, thin client infrastructures can be viewed as the amortization of some computing service across several user-interfaces. This is desirable in contexts where individual fat clients have much more functionality or power than the infrastructure either requires or uses. This can be contrasted, for example, with grid computing.

An Aleutia E3 thin client, with flash memory


The most common sort of modern thin client is a low-end microcomputer which concentrates solely on providing a graphical user interface to the end-user. The remaining functionality, in particular the operating system, is provided by the server.

Thin clients as programs

The notion of a thin client extends directly to any client-server architecture, in which case a thin-client application is simply one which relies on its server to process most or all of its business logic. This idiom is relatively common for computer security reasons: a client obviously cannot be trusted with the logic that determines how trustworthy it is; an adversary would simply skip the logic and claim, "I'm as trustworthy as possible!"

However, in web development in particular, client applications are becoming fatter. This is due to the adoption of heavily client-side technologies such as Ajax and Flash, which are themselves strongly driven by the highly interactive nature of Web 2.0 applications.

A renewed interest in virtual private servers, with many virtualization programs maturing, means that servers on the web today may handle many different client businesses. This can be thought of as a thin-client "virtual server" which depends on the actual host in which it runs to do all of its computation for it. The end result, at least, is the same: amortization of the computing service across many clients.

Characteristics

A Neoware m100 thin client

The advantages and problems of centralizing a computational resource are varied and cannot be exhaustively enumerated here. However, they tend to be related to certain characteristics of the thin-client architecture itself.

Single point of failure

The server, in taking on the entire processing load of several clients, forms a single point of failure for those clients. This has both positive and negative aspects. On the one hand, the security threat model for the software becomes entirely confined to the servers: the clients simply


don't run the software. Thus, only a small number of computers can be very rigorously secured, rather than securing every single client computer. On the other hand, any denial of service attack against the server will harm many clients: so, if one user crashes the system, everyone else loses their volatile data; if one user infects their computer with a virus, then the entire server is infected with that virus.

For small networks, this single-point of failure property might even be expanded: the server can be integrated with file servers and print servers particular to its clients. This simplifies the network and its maintenance, but might increase the risk against that server.

Cheap client hardware

While the server must be robust enough to handle several client sessions at once, the clients can be made from much cheaper hardware than a fat client requires. This reduces the power consumption of those clients and makes the system marginally scalable: it is relatively cheap to add a couple more client terminals. The thin clients themselves generally have a very low total cost of ownership, but some of that saving is offset by the need for a robust server infrastructure with backups and so forth.[2] This is also reflected in power consumption: the thin clients are generally very low-power and may not even require cooling fans, but the servers are higher-power and require an air-conditioned server room.

On the other hand, while the total cost of ownership is low, the individual performance of the clients is also low. Thin clients, for example, are not suited to any real form of distributed computing. The costs of compiling software, rendering video, or any other computationally intensive task will be shared by all clients via the server.

Client simplicity

Since the clients are made from low-cost hardware with few moving parts, they can operate in more hostile environments than conventional computers. However, they inevitably need a network connection to their server, which must be isolated from such hostile environments. Since thin clients are cheap, they offer a low risk of theft in general, and are easy to replace when they are stolen or broken. Since they don't have any complicated boot images, the problem of boot image control is centralized to the central servers.

Three-Tier Architecture

Three-tier[2] is a client-server architecture in which the user interface, functional process logic ("business rules"), computer data storage, and data access are developed and maintained as independent modules, most often on separate platforms.

The three-tier model is considered to be a software architecture and a software design pattern.

Apart from the usual advantages of modular software with well defined interfaces, the three-tier architecture is intended to allow any of the three tiers to be upgraded or replaced independently


as requirements or technology change. For example, a change of operating system in the presentation tier would only affect the user interface code.

Typically, the user interface runs on a desktop PC or workstation and uses a standard graphical user interface, functional process logic may consist of one or more separate modules running on a workstation or application server, and an RDBMS on a database server or mainframe contains the computer data storage logic. The middle tier may be multi-tiered itself (in which case the overall architecture is called an "n-tier architecture").

Three-tier architecture has the following three tiers:

Presentation tier

This is the topmost level of the application. The presentation tier displays information related to such services as browsing merchandise, purchasing, and shopping cart contents. It communicates with the other tiers by outputting results to the browser/client tier and all other tiers in the network.

Application tier (Business Logic/Logic Tier)

The logic tier is pulled out from the presentation tier and, as its own layer, controls an application's functionality by performing detailed processing.

Data tier

This tier consists of database servers, where information is stored and retrieved. This tier keeps data neutral and independent from application servers and business logic. Giving data its own tier also improves scalability and performance.
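As a toy illustration of this separation, here is a Python sketch in which each tier is a small module; in practice each tier typically runs on its own platform, and the data tier would be an RDBMS rather than an in-memory dict. The product data and tax rate are invented:

# Data tier: storage and retrieval only (an in-memory dict stands in
# for a database server).
_DB = {"42": {"name": "Widget", "price": 9.99}}

def fetch_product(product_id):
    return _DB.get(product_id)

# Application tier: business rules, with no UI code and no knowledge of
# how the data is physically stored.
def price_with_tax(product_id, tax_rate=0.08):
    product = fetch_product(product_id)
    if product is None:
        raise KeyError(product_id)
    return round(product["price"] * (1 + tax_rate), 2)

# Presentation tier: formatting for the user, with no business logic.
def render(product_id):
    return "Total due: $%.2f" % price_with_tax(product_id)

print(render("42"))   # Total due: $10.79

Because each tier talks only to the one directly below it, the dict could be swapped for a real database server without touching the presentation or logic code, which is exactly the independent-upgrade property described above.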


Overview of a three-tier application

Value Added Reseller (VAR)

A value-added reseller (VAR) is a company that adds some feature(s) to an existing product(s), then resells it (usually to end-users) as an integrated product or complete "turn-key" solution. This practice is common in the electronics industry, where, for example, a software application might be added to existing hardware.

This value can come from professional services such as integrating, customizing, consulting, training, and implementation. The value can also be added by developing a specific application for the product designed for the customer's needs, which is then resold as a new package.


The term is often used in the computer industry, where a company purchases computer components and builds a fully operational personal computer system usually customized for a specific task such as non-linear video editing. By doing this, the company has added value above the cost of the individual computer components. Customers would purchase the system from the reseller if they lack the time or experience to assemble the system themselves.

Resellers also have pre-negotiated pricing that enables them to discount more than a customer would see going direct. This is because a reseller has already qualified for higher-tier discounts through previous engagements with other clients, and because the strategic partnership between the vendor and the VAR inherently brings the vendor more business. The VAR can also partner with many vendors, helping the client decide which solution best fits its unique environment rather than relying on any single vendor's claim that its own solution is best.

Validity Check

A validation rule is a criterion used in the process of data validation, carried out after the data has been encoded onto an input medium; it involves a data vet or validation program. This is distinct from formal verification, in which the operation of a program is shown to be that which was intended and to meet its purpose.

The method is to check that data falls within the appropriate parameters defined by the systems analyst. A judgement as to whether data is valid is made possible by the validation program, but it cannot ensure complete accuracy; that can be achieved only through the use of all the clerical and computer controls built into the system at the design stage.

The difference between data validity and accuracy can be illustrated with a trivial example. A company has established a personnel file, and each record contains a field for the job grade. The permitted values are A, B, C, or D. An entry in a record may be valid and accepted by the system if it is one of these characters, but it may not be the correct grade for the individual worker concerned. Whether a grade is correct can only be established by clerical checks or by reference to other files. During systems design, therefore, data definitions are established which place limits on what constitutes valid data. Using these data definitions, a range of software validation checks can be carried out.

Criteria

An example of a validation check is the procedure used to verify an ISBN.[1]

Size. The number of characters in a data item value is checked; for example, an ISBN must consist of exactly 10 characters (in the previous version of the standard; since 2007 the standard has used 13 characters).

Format checks. Data must conform to a specified format. Thus, the first 9 characters must be the digits 0 through 9; the 10th must be either one of those digits or an X.


Consistency. Codes in data items which are related in some way can be checked for the consistency of their relationship. The first number of the ISBN designates the language of publication; for example, books published in French-speaking countries carry the digit "2". This must match the address of the publisher, as given elsewhere in the record.

Range. This does not apply to an ISBN, but typically data must lie within maximum and minimum preset values. For example, customer account numbers may be restricted to the values 10000 to 20000, if this is the arbitrary range of the numbers used for the system.

Check digit. An extra digit calculated on, for example, an account number, can be used as a self-checking device. When the number is input to the computer, the validation program carries out a calculation similar to that used to generate the check digit originally and thus checks its validity. This kind of check will highlight transcription errors where two or more digits have been transposed or put in the wrong order. The 10th character of the 10-character ISBN is the check digit.
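For the ISBN-10 case, a minimal validation sketch in Python, covering the size, format, and check-digit criteria above (the helper name is invented; the weighted sum of all ten characters, with weights 10 down to 1, must be divisible by 11, and 'X' stands for 10 in the last position):

def is_valid_isbn10(isbn):
    """Validate a 10-character ISBN using its check digit."""
    if len(isbn) != 10:                      # size check
        return False
    total = 0
    for weight, ch in zip(range(10, 0, -1), isbn):
        if ch == 'X' and weight == 1:        # format: X allowed only last
            value = 10
        elif ch.isdigit():                   # format: otherwise digits only
            value = int(ch)
        else:
            return False
        total += weight * value
    return total % 11 == 0                   # check-digit test

print(is_valid_isbn10("0306406152"))  # True: 132 is divisible by 11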

Virtual Memory

Virtual memory is a computer system technique which gives an application program the impression that it has contiguous working memory (an address space), while in fact it may be physically fragmented and may even overflow onto disk storage.

Developed for multitasking kernels, virtual memory provides two primary functions:

1. Each process has its own address space, so it is not required to be relocated and is not required to use relative addressing.

2. Each process sees one contiguous block of free memory upon launch. Fragmentation is hidden.

All implementations (excluding emulators) require hardware support. This is typically in the form of a Memory Management Unit built into the CPU.

Systems that use this technique make programming of large applications easier and use real physical memory (e.g. RAM) more efficiently than those without virtual memory. Virtual memory differs significantly from memory virtualization in that virtual memory allows resources to be virtualized as memory for a specific system, as opposed to a large pool of memory being virtualized as smaller pools for many different systems.

Note that "virtual memory" is more than just "using disk space to extend physical memory size" - that is merely the extension of the memory hierarchy to include hard disk drives. Extending memory to disk is a normal consequence of using virtual memory techniques, but could be done by other means such as overlays or swapping programs and their data completely out to disk while they are inactive. The definition of "virtual memory" is based on redefining the address space with a contiguous virtual memory addresses to "trick" programs into thinking they are using large blocks of contiguous addresses.


Modern general-purpose computer operating systems generally use virtual memory techniques for ordinary applications, such as word processors, spreadsheets, multimedia players, and accounting packages, except where the required hardware support (a memory management unit) is unavailable. Older operating systems, such as DOS[1] of the 1980s, or those for the mainframes of the 1960s, generally had no virtual memory functionality; notable exceptions are the Atlas, the Burroughs B5000, and Apple Computer's Lisa.

Embedded systems and other special-purpose computer systems which require very fast and/or very consistent response times may opt not to use virtual memory because of the loss of determinism: unpredictable processor exceptions produce unwanted jitter on CPU-operated I/O, which smaller embedded processors often perform directly to keep cost and power consumption low. In addition, the associated simple applications have little use for multitasking features.

Paged virtual memory

Almost all implementations of virtual memory divide the virtual address space of an application program into pages; a page is a block of contiguous virtual memory addresses. Pages are usually at least 4 KB in size, and systems with large virtual address ranges or large amounts of real memory (e.g., RAM) generally use larger page sizes.

Page tables

Almost all implementations use page tables to translate the virtual addresses seen by the application program into physical addresses (also referred to as "real addresses") used by the hardware to process instructions. Each entry in the page table contains a mapping for a virtual page to either the real memory address at which the page is stored, or an indicator that the page is currently held in a disk file. (Although most do, some systems may not support use of a disk file for virtual memory.)

Systems can have one page table for the whole system or a separate page table for each application. If there is only one, different applications which are running at the same time share a single virtual address space, i.e., they use different parts of a single range of virtual addresses. Systems which use multiple page tables provide multiple virtual address spaces; concurrent applications think they are using the same range of virtual addresses, but their separate page tables redirect to different real addresses.

Dynamic address translation

If, while executing an instruction, a CPU fetches an instruction located at a particular virtual address, or fetches data from a specific virtual address or stores data to a particular virtual address, the virtual address must be translated to the corresponding physical address. This is done by a hardware component, sometimes called a memory management unit, which looks up the real address (from the page table) corresponding to a virtual address and passes the real address to the parts of the CPU which execute instructions. If the page tables indicate that the


virtual memory page is not currently in real memory, the hardware raises a page fault exception (special internal signal) which invokes the paging supervisor component of the operating system (see below).
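A toy model of this translation in Python; the page size matches the 4 KB figure above, but the page-table contents are invented, and a real MMU does all of this in hardware:

PAGE_SIZE = 4096   # 4 KB pages, as noted above

# Hypothetical page table: virtual page number -> physical frame number,
# or None when the page is currently held on disk.
page_table = {0: 7, 1: 3, 2: None}

def translate(virtual_addr):
    """Translate a virtual address the way an MMU would (much simplified)."""
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)
    frame = page_table.get(vpn)
    if frame is None:
        # In hardware this raises a page-fault exception, which invokes
        # the operating system's paging supervisor (see below).
        raise RuntimeError("page fault: virtual page %d" % vpn)
    return frame * PAGE_SIZE + offset

print(hex(translate(0x1234)))   # page 1, offset 0x234 -> frame 3 -> 0x3234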

Paging supervisor

This part of the operating system creates and manages the page tables. If the dynamic address translation hardware raises a page fault exception, the paging supervisor searches the page space on secondary storage for the page containing the required virtual address, reads it into real physical memory, updates the page tables to reflect the new location of the virtual address and finally tells the dynamic address translation mechanism to start the search again. Usually all of the real physical memory is already in use and the paging supervisor must first save an area of real physical memory to disk and update the page table to say that the associated virtual addresses are no longer in real physical memory but saved on disk. Paging supervisors generally save and overwrite areas of real physical memory which have been least recently used, because these are probably the areas which are used least often. So every time the dynamic address translation hardware matches a virtual address with a real physical memory address, it must put a time-stamp in the page table entry for that virtual address.

Permanently resident pages

All virtual memory systems have memory areas that are "pinned down", i.e. cannot be swapped out to secondary storage, for example:

- Interrupt mechanisms generally rely on an array of pointers to the handlers for various types of interrupt (I/O completion, timer event, program error, page fault, etc.). If the pages containing these pointers or the code that they invoke were pageable, interrupt handling would become even more complex and time-consuming, and it would be especially difficult in the case of page fault interrupts.

- The page tables themselves are usually not pageable.

- Data buffers that are accessed outside of the CPU, for example by peripheral devices that use direct memory access (DMA) or by I/O channels. Usually such devices and the buses (connection paths) to which they are attached use physical memory addresses rather than virtual memory addresses. Even on buses with an IOMMU, which is a special memory management unit that can translate virtual addresses used on an I/O bus to physical addresses, the transfer cannot be stopped if a page fault occurs and then restarted when the page fault has been processed. So pages containing locations to which or from which a peripheral device is transferring data are either permanently pinned down or pinned down while the transfer is in progress.

- Timing-dependent kernel or application areas that cannot tolerate the varying response time caused by paging.


Virtual Private Network

A virtual private network (VPN) is a computer network that is implemented in an additional software layer (overlay) on top of an existing larger network for the purpose of creating a private scope of computer communications or providing a secure extension of a private network into an insecure network such as the Internet.

The links between nodes of a virtual private network are formed over logical connections or virtual circuits between hosts of the larger network. The Link Layer protocols of the virtual network are said to be tunneled through the underlying transport network.

One common application is to secure communications through the public Internet, but a VPN does not need to have explicit security features such as authentication or traffic encryption. For example, VPNs can also be used to separate the traffic of different user communities over an underlying network with strong security features, or to provide access to a network via customized or private routing mechanisms.

VPNs are often installed by organizations to provide remote access to a secure organizational network. Generally, a VPN has a network topology more complex than a point-to-point connection. VPNs are also used to mask the IP address of individual computers within the Internet in order, for instance, to surf the World Wide Web anonymously or to access location restricted services, such as Internet television.

Vulnerability

In computer security, the term vulnerability refers to a weakness which allows an attacker to reduce a system's information assurance. Vulnerability is the intersection of three elements: a system susceptibility or flaw, attacker access to the flaw, and attacker capability to exploit the flaw.[1] For a system to be vulnerable, an attacker must have at least one applicable tool or technique that can connect to a system weakness. In this frame, vulnerability is also known as the attack surface.

A security risk may be classified as a vulnerability. A vulnerability with one or more known instances of working and fully implemented attacks is classified as an exploitable vulnerability. The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software to when access was removed, a security fix was made available or deployed, or the attacker was disabled.

Constructs in programming languages that are difficult to use properly can be a large source of vulnerabilities.

Complexity: Large, complex systems increase the probability of flaws and unintended access points.

Familiarity: Using common, well-known code, software, operating systems, and/or hardware increases the probability that an attacker has or can find the knowledge and tools to exploit the flaw.


Connectivity: More physical connections, privileges, ports, protocols, and services, and the longer each of those is accessible, increase vulnerability.

Password management flaws: The computer user uses weak passwords that could be discovered by brute force. The computer user stores the password on the computer where a program can access it. Users re-use passwords between many programs and websites.

Fundamental operating system design flaws: The operating system designer chooses to enforce suboptimal policies on user/program management. For example, operating systems with policies such as default permit grant every program and every user full access to the entire computer. This operating system flaw allows viruses and malware to execute commands on behalf of the administrator.[1]

Internet website browsing: Some websites may contain harmful spyware or adware that can be installed automatically on visiting computer systems. After a user visits one of those websites, the systems become infected, and personal information will be collected and passed on to third parties.

Software bugs: The programmer leaves an exploitable bug in a software program. The software bug may allow an attacker to misuse an application.

Unchecked user input: The program assumes that all user input is safe. Programs that do not check user input can allow unintended direct execution of commands or SQL statements, leading to buffer overflows, SQL injection, or other attacks based on non-validated input.
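To make the SQL injection case concrete, here is a small sketch using Python's built-in sqlite3 module; the table and data are invented. The first query pastes user input directly into the statement and is subverted, while the parameterized version treats the same input as plain data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"   # hostile input

# Vulnerable: the input rewrites the query's logic via the OR clause.
unsafe = "SELECT * FROM users WHERE name = '%s'" % user_input
print(conn.execute(unsafe).fetchall())   # returns every row in the table

# Safer: a parameterized query keeps the input out of the SQL syntax.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing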

Wide Area Network (WAN)

A wide area network (WAN) is a computer network that covers a broad area (i.e., any network whose communications links cross metropolitan, regional, or national boundaries [1]). This is in contrast with personal area networks (PANs), local area networks (LANs), campus area networks (CANs), or metropolitan area networks (MANs) which are usually limited to a room, building, campus or specific metropolitan area (e.g., a city) respectively.

WANs are used to connect LANs and other types of networks together, so that users and computers in one location can communicate with users and computers in other locations. Many WANs are built for one particular organization and are private. Others, built by Internet service providers, provide connections from an organization's LAN to the Internet. WANs are often built using leased lines. At each end of the leased line, a router connects to the LAN on one side and a hub within the WAN on the other. Leased lines can be very expensive. Instead of using leased lines, WANs can also be built using less costly circuit switching or packet switching methods. Network protocols including TCP/IP deliver transport and addressing functions. Protocols including Packet over SONET/SDH, MPLS, ATM and Frame relay are often used by service providers to deliver the links that are used in WANs. X.25 was an important early WAN protocol, and is often considered to be the "grandfather" of Frame Relay as many of the underlying protocols and functions of X.25 are still in use today (with upgrades) by Frame Relay.


Academic research into wide area networks can be broken down into three areas: Mathematical models, network emulation and network simulation.

Performance improvements are sometimes delivered via WAFS or WAN optimization.

Several options are available for WAN connectivity:[2]

Leased line: A point-to-point connection between two computers or Local Area Networks (LANs). Advantages: most secure. Disadvantages: expensive. Sample protocols used: PPP, HDLC, SDLC, HNAS.

Circuit switching: A dedicated circuit path is created between end points; the best example is a dial-up connection. Advantages: less expensive. Disadvantages: call setup required. Bandwidth range: 28 kbps - 144 kbps. Sample protocols used: PPP, ISDN.

Packet switching: Devices transport packets via a shared single point-to-point or point-to-multipoint link across a carrier internetwork. Variable-length packets are transmitted over Permanent Virtual Circuits (PVC) or Switched Virtual Circuits (SVC). Advantages: shared media across the link. Sample protocols used: X.25, Frame Relay.

Cell relay: Similar to packet switching, but uses fixed-length cells instead of variable-length packets. Data is divided into fixed-length cells and then transported across virtual circuits. Advantages: best for simultaneous use of voice and data. Disadvantages: overhead can be considerable. Sample protocol used: ATM.

Transmission rates usually range from 1,200 bps to 6 Mbps, although some connections, such as ATM and leased lines, can reach speeds greater than 156 Mbps. Typical communication links used in WANs are telephone lines, microwave links, and satellite channels.

Recently, with the proliferation of low-cost Internet connectivity, many companies and organizations have turned to VPNs to interconnect their networks, creating a WAN in that way. Companies such as Cisco, New Edge Networks, and Check Point offer solutions to create VPN networks.

Web Administrator

A webmaster (a portmanteau of web and postmaster), also called a web architect, web developer, site author, website administrator, or (informally) webmeister, is a person responsible for maintaining one or more websites. The duties of the webmaster may include ensuring that the web servers, hardware, and software are operating correctly, designing the website, generating and revising web pages, replying to user comments, and examining traffic through the site.

Webmasters may be generalists with HTML expertise who manage most or all aspects of Web operations. Depending on the nature of the websites they manage, webmasters typically know scripting languages such as PHP, Perl, and JavaScript. They may also be required to know how to configure web servers such as Apache and to serve as the server administrator.

An alternative definition of webmaster is a businessperson who uses online media to sell products and/or services. This broader definition of webmaster covers not just the technical aspects of overseeing Web site construction and maintenance but also management of content, advertising, marketing and order fulfilment for the Web site.[1]

Core responsibilities of the webmaster may include the regulation and management of access rights for different users of a website, the website's appearance, and the setting up of website navigation. Content placement can be part of a webmaster's responsibilities, while content creation may not be.

Workstation

A workstation is a high-end microcomputer designed for technical or scientific applications. Intended primarily to be used by one person at a time, they are commonly connected to a local area network and run multi-user operating systems. The term workstation has also been used to refer to a mainframe computer terminal or a PC connected to a network.


Historically, workstations offered higher performance than personal computers, especially with respect to CPU, graphics, memory capacity, and multitasking capability. They are optimized for the visualization and manipulation of different types of complex data, such as 3D mechanical design, engineering simulation (e.g., computational fluid dynamics), animation and rendering of images, and mathematical plots. Consoles consist of a high-resolution display, a keyboard, and a mouse at a minimum, but may also offer multiple displays, graphics tablets, 3D mice (devices for manipulating and navigating 3D objects and scenes), etc. Workstations were the first segment of the computer market to present advanced accessories and collaboration tools.

Presently, the workstation market is highly commoditized and is dominated by large PC vendors, such as Dell and HP, selling Microsoft Windows or Linux systems running on Intel Xeon or AMD Opteron processors. Alternative UNIX-based platforms are provided by Apple Inc., Sun Microsystems, and SGI.

Source – Becker CPA Review, 2008 and Uniform CPA Examination Content Specifications, 2002 and Wikipedia.com