Introduction to Computer Fundamentals


FACULTY OF INDUSTRIAL ENGINEERING AND MANAGEMENT

Dr. Pramod M

(MMEIM-202) COMPUTER APPLICATIONS IN MANAGEMENT

    Module 1 :

    Fundamentals of computers

Evolution of computing machines, generations of computers, classifications of computers, overview of the internal and external components of a computer system, the binary number system, generations of chips and programming languages, overview of the operating system along with its types and functions, and applications of computers.

    Session 1: Introduction to Computer Fundamentals

Man is still the most extraordinary computer of all. -- John F. Kennedy

Introduction

The computer has been the premier invention of the past century. It plays an important role in almost every part of our lives and has become so important that without it we would not be able to live the way we do. Look around and you will find computers all over the place: in washing machines, refrigerators, cars, mobile phones and the life-saving devices used by doctors. Everywhere, a small computer is working for your convenience, and together they seem to perform almost any task in the world. Computers have had a tremendous impact on the way information is processed within an organization. Although information was processed manually throughout history, modern management, with its fast decision-making and the demands of corporate governance, is not possible without information systems managed by computers.

Computer

The word computer comes from the word compute, which means to calculate. By definition, a computer is a programmable machine (or, more precisely, a programmable sequential state machine) that operates on data and is used for a wide range of activities. A computer is an electronic device, or a combination of electronic devices, which solves problems by accepting data and supplying results to the user. It is a tool which can be used to read and write stories, draw and look at images, and send and receive e-mail. It can store a large amount of information and perform various scientific and mathematical tasks.

    Basically, a computer system comprises the following five elements:

    Hardware

    Software

    People

Procedures

Data/information

A computer's organization is often compared with the human brain. Think of how the brain works: it gathers data through the five senses (like the input devices of a computer), processes the gathered information and draws conclusions from the raw data (like the processing of a computer system). Then it delivers an output or result through speech or expression (like an output device).


    Characteristics of Computers

    The ever-increasing use of computers is due to their special characteristics. A computer is not just a

    calculating machine. It is also capable of doing complex activities and operations.

    The main characteristics of a computer are given below:

    1. Speed

A computer is a very fast and accurate device. Since computers are electronic devices and electronic pulses travel at incredible speed, their internal speed is virtually instantaneous. A microcomputer can process millions of instructions per second, over and over again, without any mistake.

    2. Accuracy

A computer's physical circuits rarely make errors if the data and instructions are fed in correctly. Most of the errors occurring in computers are either hardware errors or human errors.

    3. Storage

Computers have a large memory to hold very large amounts of data. Even larger amounts of data/information can be stored on secondary storage devices.

    4. Programmability

A computer is a programmable device, i.e. what it does depends on the program (lines of instruction) it is using.

5. Diligence

A computer is free from problems like lack of concentration and confusion. It is never confused like humans and can carry out instructions one after another without failing or getting bored.

    6. Versatility

Many different types of tasks can be performed on a computer. At one point in time, it might be busy calculating statistical data for the annual performance evaluation of a business organization, and at another point in time it might be working on inventory control.

    7. Power of Remembrance

Unlike humans, computers can store things for an unlimited period of time. They have great remembering power.

    Classification

Computers can be classified on the basis of different factors. By the way they represent data, there are two categories of computers, as follows:

    1. Analog Computers

Analog computers are analog devices (refer to figure 1.1), meaning that they have continuous states rather than discrete numbered states. An analog computer can represent fractional or irrational values exactly, i.e. with no round-off. Analog computers are almost never used outside of experimental settings. They handle or process information of a physical nature, such as temperature or pressure.

    2. Digital Computers

    A digital computer is a programmable-clocked sequential state machine (refer to figure 1.2). It uses

    discrete states. A binary digital computer uses two discrete states, such as positive/negative, high/low,

on/off, to represent the binary digits zero and one. They process information which is essentially in a binary state.
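To make binary representation concrete, here is a small sketch (in Python, chosen purely for illustration) that converts a decimal value into the two digits 0 and 1 and back:

    # Convert a decimal number to binary by repeated division by 2,
    # and convert it back by summing powers of 2.
    def to_binary(n):
        digits = []
        while n > 0:
            digits.append(n % 2)   # the remainder is the next binary digit
            n //= 2
        return digits[::-1] or [0]

    def from_binary(bits):
        value = 0
        for bit in bits:
            value = value * 2 + bit  # shift left and add the new digit
        return value

    print(to_binary(13))              # [1, 1, 0, 1], i.e. 1101 in binary
    print(from_binary([1, 1, 0, 1]))  # 13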

    Another Classification

    Computers can also be classified on the basis of size and speed. Based on this classification, five types of

    computers are as follows:

    1. Micro Computers

A microcomputer's CPU is a microprocessor. The microcomputer originated in the late 1970s. The first microcomputers were built around 8-bit microprocessor chips. An 8-bit chip is a chip that can retrieve


instructions/data from storage and manipulate and process 8 bits of data at a time. One can also say that the chip has a built-in 8-bit data transfer path. The 8088 is an 8/16-bit chip, i.e. an 8-bit path is used to move data between the chip and primary storage (the external path), but processing is done within the chip using a 16-bit path (the internal path). The 8086 is a 16/16-bit chip, i.e. both the internal and external paths are 16 bits wide. Both these chips can support a primary storage capacity of up to 1 Megabyte (MB).
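The 1 MB figure follows from the 20-bit address bus of these chips, since an n-bit address bus can select 2^n distinct memory locations. A quick check in Python:

    # An n-bit address bus can select 2**n distinct memory locations.
    # The 8086/8088 have a 20-bit address bus, so:
    address_lines = 20
    locations = 2 ** address_lines           # 1,048,576 byte addresses
    print(locations // 1024, "KB")           # 1024 KB
    print(locations // (1024 * 1024), "MB")  # 1 MB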

Most of the popular microcomputers are developed around Intel's chips, while most of the minis and super minis are built around Motorola's 68000 series chips. There are, however, new trends developing. With the advancement of display and VLSI technology, a microcomputer is now available in a very small size; laptop and notebook computers are examples. Most of these are the size of a small notebook but have a capacity equivalent to that of an older mainframe.

    2. Minicomputers

The term minicomputer originated in the 1960s, when it was realized that many computing tasks do not require an expensive contemporary mainframe but can be solved by a small, inexpensive computer. Initial minicomputers were 8-bit and 12-bit machines, but by the 1970s almost all minicomputers were 16-bit machines. The 16-bit minicomputers had the advantages of a larger instruction set and address field, and efficient storage and handling of text. Thus, a 16-bit minicomputer was

a more powerful machine and could be used in a variety of applications. It could support business applications along with scientific ones. With advancements in technology, the speed, memory size and other characteristics improved, and the minicomputer came to be used for various stand-alone or dedicated applications, and later as a multi-user system serving several users at the same time. Gradually, the architectural requirements of minicomputers grew, and a 32-bit minicomputer, called the super mini, was introduced. The super mini had more peripheral devices and a larger memory, and could support more users working simultaneously on the computer in comparison to previous minicomputers.

    3. Workstation

    It is a powerful stand-alone computer of the sort used in computer-aided design and other applications

requiring a high-end, expensive machine with considerable calculating or graphics capability. A machine using an Intel Pentium II processor at 400 MHz is an example of a workstation.

    4. Mainframe Computers

They are very powerful, large-scale, general-purpose computers. Their word length may be 48, 60 or 64 bits, memory capacity 256 to 512 MB, hard disk capacity 1 to 100 GB or more, and processing speed 100 to 200 MIPS. They are used where large amounts of data are to be processed or very complex calculations are to be made; such tasks are beyond the capacities of minicomputers. They are used in research organizations, large industries, airline reservation systems etc., where a large database has to be maintained. Examples include the IBM 4300 series and the IBM Enterprise System/9000 series.

    5. Super Computers

A supercomputer's processing capability lies in the range of 400-10,000 MIPS, with a word length of 64-96 bits, memory capacity of 1024 MB or more, and hard disk capacity of 1000 GB or more. It contains a number of CPUs that operate in parallel to make it faster, i.e. the CPUs give the machine its speed through parallel processing. Supercomputers are used for weather forecasting, weapons research and development, rocketry, aerodynamics, and atomic, nuclear and plasma physics. They have limited use and a limited market because of their very high price, and are found at research centres and government agencies carrying out sophisticated scientific and engineering tasks.


    Need for Computer Literacy

Computers are found nearly everywhere in our personal lives. Unless you intend to be a hermit, computers will affect you. Computer literacy means having a general knowledge about computers: knowing who uses them, what kinds of functions they perform, how others use them, where they are, how they are affecting society and how they can be beneficial to your own life or work. Some experts think that a person who does not know how to use a computer will be just as handicapped in performing his or her job as a person today who cannot read.

Several microcomputer application packages, such as word processors, data managers, spreadsheets, and graphics and communication programs, will be introduced. These programs give people who are not mathematical wizards or computer programmers an opportunity to use computers and take advantage of them. Although the courses you chose in school might not have demanded technical knowledge of computers and programming, you are likely to be directly or indirectly involved with them in your work. Many jobs and careers depend on some familiarity with the use of computers. For those who are interested in careers or jobs directly involving information or computer technology, all kinds of possibilities exist. These include keying in data, defining the way data are processed, managing the computer system, or managing the information system.

    Computer Limitations

You have studied that the computer is one of the most powerful tools ever developed. But we have all read articles similar to the one about the man who was treated for pneumonia and then charged by the hospital's computer for the use of the delivery room and nursery. Such computer failures may be amusing, but most foul-ups happen because people fail to consider some basic computer limitations. Without reliable programs and sound logic, no computer system can perform satisfactorily. A computer cannot think on its own; it only does what it is instructed to do.

    Components of a Computer System

    Introduction

We have seen that the computer affects our lives in a big way by increasing our efficiency and enhancing our abilities. Now we will look at the anatomy of the computer. What is it made of? The parts of the computer did not all appear at once, in one machine, from one person; it has been a continuously evolving process, starting as early as the 17th century, when people began to work on machines that would automate tasks. The first such machine was developed in the 17th century by the mathematician and philosopher Blaise Pascal, but it was not an electronic device. It was a purely mechanical machine which used meshed gears to add and multiply numbers. After him there was a long gap before the idea of a machine to process information emerged from Charles Babbage. Although he could never successfully build such a mechanical machine, his idea was worth millions; that is why he is known as the father of the computer. The modern electronic computer started taking shape in the 1940s with the invention of the Mark I computer. Since then, there has been a great deal of research and many new inventions in computer technology.

Components of a Computer

Components of a computer can be broadly divided into the following two categories:

    1. Software

Software refers to the programs required to operate a computer. For example, DOS (Disk Operating System), BASIC, COBOL, dBase, accounting packages etc. are all software. An analogy for hardware is the book which you are reading; in this case, the software would be the text written in it. Another analogy could be that the brain is hardware, while the memory stored in the brain is software.


Both hardware and software are dependent on each other. The CPU, memory unit, hard disk etc. are useless unless they are provided with instructions and data for storage and processing. Similarly, BASIC or COBOL has no importance unless used along with the various hardware components of the computer.

    2. Hardware

Hardware refers to any physical component of a computer. For example, the CPU, monitor (VDU), keyboard, hard disk, floppy disk, printer etc. are physical components and, thus, are all hardware.

Hardware can be compared to a human body capable of doing any activity; without the presence of blood and oxygen, however, the body cannot do anything. The same is the case with computer hardware. It is capable of doing many things, but without software it simply cannot work. Thus, both software and hardware components are essential for a computer.

    Organization of Computer

    We will discuss the basic structure of a computer system. The diagram of a generalized architecture of a

    computer system is shown below. A computer system has the following main components (refer to

    figure 2.1):

    Input/output unit

Central Processing Unit (CPU)

Memory unit

    In order to solve a computational problem, a computer has to perform the following four

    major tasks:

    Input

    Process

    Output

    Storage

    Input/output Unit

The computer is a machine which processes input data according to a given set of instructions and gives output. Before a computer does any processing, it must be given data and instructions. After processing, the output must be displayed or printed by the computer. The unit used for getting data and instructions into the computer and for displaying or printing the output is known as the Input/Output Unit (I/O Unit). Many peripheral devices are used as input/output units for a computer. The most common form of input device is known as a terminal. A terminal has an electronic typewriter-like device called a


keyboard and a display screen called a Visual Display Unit (VDU) or monitor. The keyboard is the main input device, while the monitor can be considered both an input and an output device. Other common input devices include the mouse, punched card reader, tape, joystick, scanner and modem. The monitor, printer and plotter are the main peripheral devices used as output units for a computer.

Central Processing Unit

The Central Processing Unit (CPU) is the main component, or brain, of a computer. It performs all the processing of input data. Its function is to fetch, examine and execute the instructions stored in the main memory of the computer. In microcomputers, the CPU is built on a single chip or Integrated Circuit (IC) and is called a microprocessor. A CPU consists of the following distinct parts:

Arithmetic Logic Unit (ALU)

Control Unit (CU)

Registers

Buses

Clock

    Arithmetic Logic Unit (ALU)

The arithmetic and logic unit of the CPU is responsible for all arithmetic operations, such as addition, subtraction, multiplication and division, as well as logical operations, such as less than, equal to and greater than. All calculations and comparisons are performed in the arithmetic logic unit.

    Control Unit

The control unit is responsible for controlling the transfer of data and instructions among the other units of a computer. It is considered the central nervous system of the computer, as it manages and coordinates all its units. It obtains instructions from memory, interprets them and directs the operation of the computer. It also performs the physical data transfer between memory and the peripheral devices.

    Registers

Registers are small, high-speed circuits (memory locations). They are used to store data, instructions and memory addresses (memory location numbers) while the ALU performs arithmetic and logical operations. A register can store one word of data (1 word = 2 bytes and 1 byte = 8 bits) until it is overwritten by another word. Depending on the processor's capability, the number and type of registers vary from one CPU to another. Depending upon their functions, registers can be divided into the following six categories:

    General purpose registers

    Pointer registers

    Segment registers

    Index registers

    Flags registers

    Instruction pointer registers

    Buses

    Data is stored as a unit of eight bits (bit stands for binary digit, i.e. 0 or 1) in a register. Each bit is

    transferred from one register to another by means of a separate wire. This group of eight wires that is

    used as a common way to transfer data between registers is known as a bus. It is actually a connection

between two components to transmit signals between them. A bus can be of three major types, as follows:

1. Data bus -- It is used to move data.

2. Address bus -- It is used to carry addresses (memory locations).

3. Control bus -- It is used to send control signals between the various components of a computer.
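As a rough illustration (a simplified model of a hypothetical machine, not of any specific processor), the widths of the data and address buses determine what a machine can transfer and address:

    # Simplified bus model: an 8-bit data bus transfers one byte at a time,
    # a 20-bit address bus selects one of 2**20 memory locations, and the
    # control lines (modelled here only as comments) say read or write.
    DATA_BUS_BITS = 8
    ADDRESS_BUS_BITS = 20

    memory = {}  # address -> byte value

    def write(address, value):
        assert 0 <= address < 2 ** ADDRESS_BUS_BITS  # must fit the address bus
        assert 0 <= value < 2 ** DATA_BUS_BITS       # must fit the data bus
        memory[address] = value                      # control line: WRITE

    def read(address):
        assert 0 <= address < 2 ** ADDRESS_BUS_BITS
        return memory.get(address, 0)                # control line: READ

    write(0x12345, 0xAB)
    print(hex(read(0x12345)))  # 0xab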

    Clock

A clock is another important component of the CPU. It measures and allocates a fixed time slot for

    processing each and every micro-operation (smallest functional operation). In simple terms, CPU is

    allocated one or more clock cycles to complete a micro-operation. CPU executes the instructions in


synchronization with the clock pulse. The clock speed of a CPU is measured in MegaHertz (MHz), i.e. millions of cycles per second. The clock speed varies from one model to another, e.g. from 4.77 MHz (in the 8088 processor) to 266 MHz (in the Pentium II). The speed of a CPU is also specified in terms of Millions of Instructions Per Second (MIPS) or Millions of Floating-Point Operations Per Second (MFLOPS).
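The relationship between clock speed and MIPS can be illustrated with a back-of-the-envelope calculation (the cycles-per-instruction figure below is assumed for the example, not taken from any particular processor):

    # MIPS = (clock cycles per second) / (cycles per instruction) / 1e6
    clock_hz = 266e6            # a 266 MHz clock
    cycles_per_instruction = 2  # assumed average CPI for this example
    mips = clock_hz / cycles_per_instruction / 1e6
    print(f"{mips:.0f} MIPS")   # 133 MIPS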

Memory Unit

The memory unit is the component of a computer system which is used to store data, instructions and information before, during and after processing by the ALU. It is actually a work area (physically, a collection of integrated circuits) within the computer where the CPU stores data and instructions. It is also known as main/primary/internal memory.

    Input Devices

    Input devices are used to input data, information and instructions into RAM. These

    devices can be classified into the following two broad categories:

    Basic input devices

    Special input devices

The structure and function of common input devices in these two categories are discussed below in detail.

    Basic Input Devices

The input devices which have nowadays become essential to operate a PC (personal computer) may be called basic input devices. These devices are always required for basic input operations. They include the keyboard and the mouse.

    Keyboard

    Keyboard (similar to a typewriter) is the main input device of a computer (refer to figure 2.2). It contains

three types of keys: alphanumeric keys, special keys and function keys. Alphanumeric keys are used to type all the alphabets, numbers and special symbols such as $, %, @ etc. Special keys such as Shift, Ctrl, Alt and Enter are used for special functions. Function keys such as F1, F2 and F3 are used to give special commands depending upon the software being used. The function of each key can be well understood only after working on a PC. When any key is pressed, an electronic signal is produced. This signal is detected by a keyboard encoder, which sends a binary code corresponding to the pressed key to the CPU. There are many types of keyboards, but the 101-key keyboard is the most popular one.

    Mouse

The mouse is another important input device (refer to figure 2.3). It is a pointing device used to move the cursor, draw sketches and diagrams, select text, objects or menu items etc. on the monitor screen while working in Windows (the graphics-based environment of a computer). A mouse is a small, palm-sized box containing three buttons and, underneath, a ball which senses the movement of the mouse and sends the corresponding signals to the CPU when the buttons are pressed.

    Special Input Devices

The input devices which are not essential to operate a PC are called special input devices. These devices are used for various special purposes and are generally not required for basic input operations.

    These devices include trackball, light pen, touch screen, joystick, digitizer, scanner, OMR, OCR, bar code

    reader, MICR and voice input devices.

    Output Devices

Output devices are hardware components which are used to display or print the processed information. The structure, working and uses of common output devices are discussed below.


    Monitor

The Visual Display Unit (VDU), commonly known as the monitor, is the main output device of a computer (refer to figure 2.14). It consists of a Cathode Ray Tube (CRT) which displays characters as output. It forms images from tiny dots, called pixels, arranged in a rectangular grid. The sharpness of the image (screen resolution) depends upon the number of pixels.
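For example (an illustrative calculation), a display of 640 x 480 pixels contains:

    # Total pixels on a 640 x 480 display; more pixels give a sharper image.
    width, height = 640, 480
    print(width * height)  # 307200 pixels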

Types of Monitors

Depending upon the resolution, monitors can be classified as follows:

    (a) CGA (Color Graphics Adapter)

    (b) MDA (Monochrome Display Adapter)

    (c) HGA (Hercules Graphics Adapter)

    (d) EGA (Enhanced Graphics Adapter)

These adapters differ mainly in the resolution and the number of colours they can display. Depending upon the colour of the display, monitors can also be classified as monochrome (with a single-colour/black-and-white display) and colour (with a full-colour display) monitors.

    Printer

The printer is the most important output device for printing information on paper. It is essential for getting printed output from any computer-based application.

    Types of Printers

    Printers can be broadly categorized into the following two types:

    1. Impact Printers

The printers that print characters by striking against a ribbon onto the paper are called impact printers. These are of two types:

    (a) Character Printers

These printers print one character at a time. They are further of two types:

(i) Daisy Wheel Printers

These printers print characters by a mechanism that uses a plastic or metal hub with spokes, called a daisy wheel (refer to figure 2.20). The characters are embossed on the radiating spokes and printed by striking these spokes against the ribbon and paper. These printers give good print quality but are more expensive than dot matrix printers.


    (ii) Dot Matrix Printers

These printers print characters by putting dots onto the paper. They do not give as good a print quality as daisy wheel printers but are faster. The printing speed of a dot matrix printer can be up to 360 cps (characters per second). They are widely used with microcomputers.
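As a rough throughput estimate (the page dimensions here are assumed for illustration):

    # A page of 60 lines x 80 characters holds 4800 characters; at 360 cps
    # a dot matrix printer needs roughly 13 seconds for a full page.
    chars_per_page = 60 * 80
    print(chars_per_page / 360)  # about 13.3 seconds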

    (b) Line Printers

These printers print one line at a time. Their printing speed is much higher than that of character printers. They are also of two types:

    (i) Drum Printers

These printers print a line by means of a rotating drum which has a ring of characters for each print position (refer to figure 2.22). The hammers strike each character of the drum simultaneously, so that an entire line is printed in one full rotation of the drum. These printers are also called barrel printers. The printouts obtained from them have even character spacing but uneven line height.

    (ii) Chain Printers

These printers print a line by means of a rotating chain which has a ring of characters for each print position. Their printing mechanism is similar to that of drum printers. The printouts obtained from them have uneven character spacing but even line height.

2. Non-Impact Printers

The printers that print characters without striking against a ribbon onto the paper are called non-impact printers. These printers print a complete page at a time and are therefore also called page printers. Page printers are of three types:

    (a) Laser Printers

These printers look and work like photocopiers. They are based on laser technology, the latest development in high-speed, high-quality printing. In these printers, a laser beam writes the image, as thousands of electrically charged dots, onto a photosensitive surface; toner with the opposite charge then sticks to the image, is transferred to the paper, and is made permanent by passing the paper over a heated roller. Laser printers are very popular and have become an essential part of DTP (desktop publishing). Although laser printers are costlier than dot matrix printers, they are generally preferred in offices due to their high print quality. There are many models of laser printers, varying in speed and the number of dots printed. A recent model prints at 1200 DPI (dots per inch) and 10 pages per minute, while some high-speed laser printers reach speeds of up to 100 pages per minute.

    (b) Inkjet Printers

These printers print characters by spraying electrically charged ink onto the paper. They give better quality than character printers but not as good as laser printers. They are cheaper than laser printers and hence are widely used in many offices. They also offer the option of colour cartridges for multi-colour printing.

    (c) Thermal Printers

These printers print characters by melting a wax-based ink off a ribbon onto special heat-sensitive paper. They give letter-quality printing, but their maintenance is relatively more expensive than that of other printers.

    Computer Generations

The evolution of the computer over time has resulted in the development of various generations and devices. Different technologies have been used for manufacturing computer hardware. Based on the component technology, computers are classified into five generations. Each computer generation is characterized by a major technological development that fundamentally changed the way computers operate and their architectural structure, resulting in increasingly smaller, cheaper, more powerful, more efficient and more reliable devices. The study of these aspects helps one distinguish between the past and the present dimensions of the computer.

    First Generation Computers (1937-1953)

These computers were pure hardware machines which contained no operating system. Programming was done in machine language, which differed from one computer to another. The user dealt with several switches on the front panel to start, run or halt the computer, and the internal status of the computer was displayed on several lights on the front panel. Invariably, only a designer or programmer could operate the computer, owing to the complexities involved. These machines used electronic switches, in the form of vacuum tubes, instead of electromechanical relays. In principle, the electronic switches would be more reliable, since they have no moving parts to wear out, but the technology was still new at the time and the vacuum tubes were comparable to relays in reliability. Electronic components had one major benefit, however: they could open and close about 1,000 times faster than mechanical switches. The earliest attempt to build an electronic computer was made by J. V. Atanasoff, a professor of physics and mathematics at Iowa State, in 1937. Atanasoff set out to build a machine that would help his graduate students solve systems of partial differential equations. By 1941, he and his graduate student Clifford Berry had succeeded in building a machine that could solve 29 simultaneous equations with 29 unknowns. However, the machine was not programmable and was more of an electronic calculator. The Electronic Numerical Integrator and Calculator (ENIAC) was the first general-purpose electronic computer. It was an enormous machine, weighing about 30 tons and containing more than 18,000 vacuum tubes.

Second Generation Computers (1954-1962)

The invention of the transistor at Bell Labs was a boon to second generation computers, which were smaller in size and consumed less power. Several companies, such as IBM, NCR and RCA, quickly introduced transistor technology, which also improved the reliability of computers. Instead of hand-wired circuits, photo-printing was used to build Printed Circuit Boards (PCBs).

Both the production and the maintenance of computers became easier. The second generation saw several important developments at all levels of computer system design, from the technology used to build the basic circuits to the programming languages used to develop scientific applications. Electronic switches in this era were based on discrete diode and transistor technology, with a switching time of approximately 0.3 microseconds. The first machines built with this technology include the TRADIC (TRAnsistor DIgital Computer) at Bell Laboratories in 1954 and the TX-0 at MIT's Lincoln Laboratory. Memory technology was based on magnetic cores, which could be accessed in random order, as opposed to mercury delay lines, in which data was stored as an acoustic wave that passed sequentially through the medium and could be accessed only when the data moved past the I/O interface. Important innovations in computer architecture were index registers for controlling loops and floating-point units for calculations based on real numbers. Floating-point operations were performed by libraries of software routines in early computers but were done in hardware in second generation machines.

    Third Generation Computers (1963-1972)

The invention of the Integrated Circuit (IC) chip was a great event for the electronics field, giving rise to microelectronics. The IC has multiple advantages over discrete components: smaller size, higher speed, lower hardware cost, improved reliability etc. Digital computer design became more attractive and interesting. The use of computers in continuous processing and manufacturing sectors, such as petroleum refining and electrical power distribution, became popular. The computer families of leading companies such as IBM, UNIVAC, HP, ICL and DEC dominated the computer industry. The third generation brought huge gains in


computational power: Integrated Circuits, or ICs (semiconductor devices with several transistors built into one physical component); semiconductor memories instead of magnetic cores; microprogramming for efficiently designing complex processors; and pipelining and other forms of parallel processing. Operating system software allowed efficient sharing of a computer system by several user programs. The first ICs were based on Small-Scale Integration (SSI) circuits, which had around 10 devices per circuit (or chip), and evolved to Medium-Scale Integration (MSI) circuits, which had up to 100 devices per chip. Multilayered printed circuits were developed, and core memory was replaced by faster, solid-state memories.

    Computer designers began to take advantage of parallelism by using multiple functional units,

    overlapping CPU and I/O operations, and pipelining (internal parallelism) in both the instruction stream

    and the data stream. In 1964, Seymour Cray developed the CDC 6600, which was the first architecture to

    use functional parallelism. By using 10 separate functional units that could operate simultaneously and

32 independent memory banks, the CDC 6600 was able to attain a computation rate of 1 million floating-point operations per second (1 MFLOPS).
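The arithmetic behind such a figure is simple (the per-unit rate below is assumed for illustration, not a CDC 6600 specification):

    # If each functional unit completes an operation every 10 microseconds,
    # ten units operating simultaneously can, at best, deliver:
    units = 10
    ops_per_unit_per_second = 100_000        # assumed per-unit rate
    print(units * ops_per_unit_per_second)   # 1000000 ops/sec = 1 MFLOPS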

    Fourth Generation Computers (1972- 1984)

Computers built after 1972, called fourth generation computers, were based on LSI (Large-Scale Integration) circuits, such as microprocessors, with typically 500 or more transistors on a chip. Later developments included VLSI (Very Large-Scale Integration) circuits, with typically 10,000 transistors. Modern circuits may now contain millions of components. This has led to very small yet incredibly powerful computers. The fourth generation is generally viewed as running right up to the present, since, although computing power has increased, the basic technology has remained virtually the same. By the late 1990s, many people began to suspect that this technology was reaching its limit, as further miniaturization could only achieve so much. 1 GB RAM chips have circuitry so small that it can be measured in terms of atoms. Such small circuits pose many technical problems, such as increased temperature and radiation.

    Fifth Generation Computers (1984-1990)

VLSI and the concept of artificial intelligence are used in this generation of computers. Expert systems, pattern recognition, voice recognition, signature capture and recognition, microprocessor-controlled robots etc. are some of the sophisticated developments in the field of computers. These machines are expected to take commands audio-visually and carry out instructions, and many operations which require low human intelligence will be performed by them. The development of this generation of computer systems is characterized mainly by the acceptance of parallel processing. Until this time, parallelism was limited to pipelining and vector processing, or at most to a few processors sharing jobs. The fifth generation saw the introduction of machines with hundreds of processors that could all work on different parts of a single program. The scale of integration in semiconductors continued at an incredible pace; by 1990, it was possible to build chips with a million components, and semiconductor memories became standard on all computers. Other new developments were the widespread use of computer networks and the increasing use of single-user workstations. Large-scale parallel processing was employed in commercial products.

    A typical Computer System

Let us consider a real-life situation. In a college, the principal is the person who instructs others to do their corresponding work, but he does not do all the work himself. The figure shows this structure.


In the above structure, the principal is instructed by the management or government; without their knowledge he does not take any action, so the principal depends upon the management or government. The principal, in turn, instructs the staff working under him to carry out the administrative activity in a satisfactory manner. The staff members can interact with the students, and vice versa. From this example we know what the work of the principal is. Now we will compare this with the computer system. A computer system may be divided into four major components:

1. Hardware (Principal)

2. Operating System (Management)

3. Application Programs (Teaching and Non-Teaching Staff)

4. Users (Students)

The computer is an electronic machine with built-in intelligence to execute instructions. A computer system is an arrangement of hardware and software. The term hardware generally refers to the electronic circuits in the computer; the main hardware modules are the keyboard, CRT monitor, disk drive, printer and other peripherals. In practice, the term hardware is used for all physical items in a computer, including mechanical, electrical and electronic assemblies and components. The electrical components are motors, power supplies, transformers, relays, fans, PCBs, wires and cables; the mechanical components are switches, panels, covers, chassis, nuts and screws; and the electronic components are resistors, capacitors, coils, diodes, transistors, ICs, crystals, LEDs, speakers and the CRT. Figure 1.3 shows the components of a typical computer system.


Any program is software. Software is developed to solve a problem, and it controls the hardware when the program is executed. The hardware can be seen visually, whereas the software is a logical action plan that is not visually noticeable. Computer software is classified into two types: application software and system software. An application program is a program that solves a user's problems. Typical examples are payroll programs, inventory control programs, tax calculators, classroom schedulers, library management software, train reservation software, billing software and games. A system program is a program which helps other programs and users utilize the system efficiently. It is generally developed for a given type of computer and is not concerned with a specific application or user. Operating systems and compilers are examples of system software.

Another way of looking at a typical personal computer is shown in Figure 1.4. At the core of this computer is a single-chip microprocessor, such as a Pentium Dual-Core or an AMD processor. The microprocessor's internal (micro)architecture usually contains a number of speed-up features not found in its earlier versions. A system bus connects the microprocessor to a main memory based on semiconductor DRAM technology and to an I/O subsystem. The widely used I/O bus (peripheral bus), found in computers of all sizes, provides a shared data path between the Central Processing Unit (CPU) and peripheral controllers, such as network, display, SCSI and RAID cards.


A separate I/O bus, such as the industry-standard PCI Express (Peripheral Component Interconnect Express) local bus, connects the I/O devices and their controllers. The I/O bus is linked to the system bus via a special bus-to-bus control unit sometimes referred to as a bridge. The I/O devices of a personal computer include the traditional keyboard, a TFT-based or flat-panel video monitor, and USB and disk drive units for the hard and flexible (floppy) disks that constitute secondary memory. More recent additions to the I/O devices include drive units for DVDs (Digital Versatile Disks), which have extremely high capacity and allow sound and video images to be stored and retrieved efficiently. Other common audiovisual I/O devices in personal computers are microphones, loudspeakers, LCD projectors, video scanners and webcams, which are referred to as multimedia equipment.

    Computer Types

Mainframe: Mainframes are computers used mainly by large organizations for critical applications, typically bulk data processing such as censuses, industry and consumer statistics, ERP and financial transaction processing. They require large power and storage capacity.

Desktop: A personal computer made for use on a desk in an office or home. Desktop systems are normally set up in a permanent location. Most desktops offer more power, storage and versatility for less cost than their portable counterparts, and are currently the most affordable computers. Nearly all desktop computers are modular, with components that can easily be replaced or upgraded.

Portable: A computer designed to be moved from one place to another. Also called notebooks or laptops, portable computers integrate the display, keyboard, a pointing device or trackball, processor, memory and hard disk drive in a battery-operated package slightly larger than an average hardcover book.

    Workstation: A desktop computer that has a more powerful processor, additional memory and

    enhanced capabilities for performing a special group of tasks, such as 3D Graphics or game

    development.

Supercomputer: This type of computer usually costs hundreds of thousands or even millions of dollars. Although some supercomputers are single computer systems, most comprise multiple high-performance computers working in parallel as a single system. Supercomputers are used for the large-scale numerical calculations required in applications such as weather forecasting and aircraft design.

What is an Operating System?

An operating system is a software program, or set of programs, that acts as the central control program for the computer. It mediates access between physical devices (such as the keyboard, mouse, monitor, disk drive or network connection) and application programs (such as a word processor, World-Wide Web browser or electronic mail client).

An operating system (sometimes abbreviated as "OS") is the program that, after being initially loaded into the computer by a boot program, manages all the other programs in the computer. The other programs are called applications or application programs. Application programs make use of the operating system by making requests for services through a defined Application Program Interface (API): a set of routines, protocols and tools for building software applications, which provides the building blocks for the programmer to put together. Most operating environments, such as MS-Windows, provide an API so that programmers can write applications consistent with the operating environment. Although APIs are designed for programmers, they are ultimately good for users, because similar interfaces make it easier for users to learn new programs.
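As a small illustration of a program requesting services through an API rather than touching hardware directly, the sketch below uses Python's standard os module, whose calls are thin wrappers over the operating system's services (the file name is invented for the example):

    import os

    # Each call below is a request to the operating system through a defined
    # API; the program never manipulates the disk hardware itself.
    fd = os.open("example.txt", os.O_WRONLY | os.O_CREAT)  # ask the OS for a file
    os.write(fd, b"hello, operating system\n")             # ask the OS to write
    os.close(fd)                                           # release the resource

    print(os.getpid())  # ask the OS for our process identifier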

    In addition, users can interact directly with the operating system through a user interface such as a

    command language or a graphical user interface (GUI). The basic resources of a computer system are

    provided by its hardware, software and data. The operating system provides the means for the proper


    use of these resources in the operation of the computer system. It simply provides an environment

    within which other programs can do useful work.

We can view an operating system as a resource allocator. A computer system has many resources (hardware and software) that may be required to solve a problem: CPU time, memory space, file storage space, input/output devices etc.

The operating system acts as the manager of these resources and allocates them to specific programs and users as necessary for their tasks. An operating system is also a control program, i.e. it acts as a controller: it controls the execution of user programs to prevent errors and improper use of the computer.

    OS Objectives

    The primary goal of an operating system is convenience for the user. A secondary goal is the efficient

    operation of a computer system. This goal is particularly important for large, shared multi-user systems.

    It is known that sometimes these two goals, convenience and efficiency, are contradictory.

    OS Functions

A computer's operating system (OS) is a group of programs designed to serve two basic purposes:

To control the allocation and use of the computing system's resources among the various users and tasks, and

To provide an interface between the computer hardware and the programmer that simplifies and makes feasible the creation, coding, debugging and maintenance of application programs.

    An Operating System does the following:

Facilitate the creation and modification of program and data files through an editor program,

Provide access to compilers to translate programs from high-level languages to machine language,

Provide a loader program to move the compiled program code to the computer's memory for execution,

Provide routines that handle the intricate details of I/O programming,

Assure that when there are several active processes in the computer, each will get fair and non-interfering access to the central processing unit for execution,

Take care of storage and device allocation,

Provide for long-term storage of user information in the form of files, and

Permit system resources to be shared among users when appropriate, and be protected from unauthorized or mischievous intervention as necessary.


    An operating system performs these services for applications:

In a multitasking operating system, where multiple programs can be running at the same time, the operating system determines which applications should run in what order and how much time should be allowed for each application before giving another application a turn (a simple sketch of such time slicing follows this list).

It manages the sharing of internal memory among multiple applications.

It handles input and output to and from attached hardware devices, such as hard disks, printers and dial-up ports.

It sends messages to each application or interactive user (or to a system operator) about the status of operation and any errors that may have occurred.

On computers that can provide parallel processing, an operating system can manage how to divide the program so that it runs on more than one processor at a time.
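The following toy round-robin scheduler (the process names and time slice are invented for illustration) shows the idea of giving each application a turn:

    from collections import deque

    # Each "process" has a name and an amount of work (time units) left.
    processes = deque([("editor", 3), ("spooler", 2), ("browser", 4)])
    TIME_SLICE = 2  # maximum units a process may run before losing its turn

    while processes:
        name, remaining = processes.popleft()
        run = min(TIME_SLICE, remaining)
        print(f"running {name} for {run} unit(s)")
        remaining -= run
        if remaining > 0:
            processes.append((name, remaining))  # unfinished: back of the queue
        else:
            print(f"{name} finished")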

    All major computer platforms (hardware and software) require and sometimes include an operating

    system. Linux, Windows 2000, VMS, OS/400, AIX, and z/OS are all examples of operating systems.

Evolution of OS

The history of the OS is linked with the development of the various computer generations. By tracing that evolution, we can identify the common elements of operating systems and see how and why they evolved as they are now. Operating systems and computer architecture have a great deal of influence on each other: to facilitate the use of the hardware, operating systems were designed, developed and simplified.

Operating systems have historically been closely tied to the architecture of the computers on which they run. The mapping of operating systems to computer generations is admittedly crude, but it does provide some structure where there would otherwise be none. Because the history of computer operating systems parallels that of computer hardware, it can generally be divided into five distinct time periods, called generations, which are characterized by hardware component technology, software development, and the mode of delivery of computer services.

    The Zeroth Generation

    The term Zeroth Generation is used to refer to the period of development of computing, which predated

    the commercial production and sale of computer equipment.

In particular, this period witnessed the emergence of the first electronic digital computers: the Atanasoff-Berry Computer (ABC), designed by John Atanasoff in 1940; the Mark I, built by Howard Aiken and a group of IBM engineers at Harvard in 1944; and the Electronic Numerical Integrator And Computer (ENIAC), designed and constructed at the University of Pennsylvania by J. Presper Eckert and John Mauchly. Perhaps the most significant of these early computers was the Electronic Discrete Variable Automatic Computer (EDVAC), developed in 1944-46 by John von Neumann, Arthur Burks and Herman Goldstine, since it was the first to fully implement the idea of the stored program and serial execution of instructions.

The development of EDVAC set the stage for the evolution of commercial computing and operating system software. The hardware component technology of this period was the electronic vacuum tube. The actual operation of these early computers took place without the benefit of an operating system. Early programs were written in machine language, and each contained code for initiating the operation of the computer itself.


    The First Generation, 1951-1956

    The first generation marked the beginning of commercial computing, including the introduction of

Eckert and Mauchly's UNIVersal Automatic Computer I (UNIVAC I) in early 1951 and, a bit later, the IBM 701, which was also known as the Defense Calculator.

    Operation continued without the benefit of an operating system for a time.

Application programs were run one at a time and were translated with absolute computer addresses that bound them to be loaded and run from pre-assigned storage addresses set by the translator, obtaining their data from specific physical I/O devices. There was no provision for moving a program to a different location in storage for any reason. Similarly, a program bound to specific devices could not be run at all if any of these devices was busy or broken down.

The inefficiencies inherent in the above methods of operation led to the development of the mono-programmed operating system, which eliminated some of the human intervention in running a job and provided programmers with a number of desirable functions.

    The OS consisted of a permanently resident kernel in main storage, and a job scheduler and a number of

    utility programs kept in secondary storage.

User application programs were preceded by control or specification cards (in those days, computer programs were submitted on data cards) which informed the OS of what system resources (software resources such as compilers and loaders, and hardware resources such as tape drives and printers) were needed to run a particular application.

The systems were designed to be operated as batch processing systems. They continued to operate under the control of a human operator, who initiated operation by mounting a magnetic tape containing the operating system's executable code onto a boot device and then pushing the IPL (Initial Program Load) or boot button to initiate the bootstrap loading of the operating system.

Once the system was loaded, the operator entered the date and time and then initiated the operation of the job scheduler program, which read and interpreted the control statements, secured the needed resources, executed the first user program, recorded timing and accounting information, and then went back to begin processing another user program, and so on, as long as there were programs waiting in the input queue to be executed.

At the same time, the development of programming languages was moving away from basic machine languages, first to assembly language and later to procedure-oriented languages, the most significant being the development of FORTRAN by John W. Backus in 1956.

Several problems remained, however. The most obvious was the inefficient use of system resources, which was most evident when the CPU waited while the relatively slow, mechanical I/O devices were reading or writing program data. In addition, system protection was a problem, because the operating system kernel was not protected from being overwritten by an erroneous application program. Moreover, other user programs in the queue were not protected from destruction by executing programs.

The Second Generation, 1956-1964 (Batch Operating Systems):

The second generation of computer hardware was most notably characterized by transistors replacing vacuum tubes as the hardware component technology. In addition, some very important changes in hardware and software architectures occurred during this period. For the most part, computer systems remained card- and tape-oriented. Significant use of random access devices, that is, disks, did not appear until towards the end of the second generation.


Program processing was, for the most part, provided by large centralized computers operated under mono-programmed batch processing operating systems. The most significant innovations addressed the problem of excessive central processor delay due to waiting for input/output operations. Recall that programs were executed by processing the machine instructions in a strictly sequential order. As a result, the CPU, with its high-speed electronic components, was often forced to wait for the completion of I/O operations which involved mechanical devices (card readers and tape drives) that were orders of magnitude slower.

This problem led to the introduction of the data channel, an integral and special-purpose computer with its own instruction set, registers and control unit, designed to process input/output operations asynchronously from the operation of the computer's main CPU. It appeared near the end of the first generation and was widely adopted in the second generation.

The data channel allowed some I/O to be buffered. That is, a program's input data could be read ahead from data cards or tape into a special block of memory called a buffer. Then, when the user's program came to an input statement, the data could be transferred from the buffer locations at the faster main memory access speed rather than the slower I/O device speed. Similarly, a program's output could be written into another buffer and later moved from the buffer to the printer, tape, or card punch.

What made this all work was the data channel's ability to work asynchronously and concurrently with

    the main processor. Thus, the slower mechanical I/O could be happening concurrently with main

    program processing. This process was called I/O overlap.
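
The flavor of this buffering scheme can be conveyed by a minimal Python sketch, with a background thread standing in for the data channel. All the names here (data_channel, buffers, and so on) are invented for illustration and do not correspond to any historical system.

    import queue
    import threading
    import time

    def data_channel(records, buffers):
        # The "channel" reads records ahead of the CPU, at slow device
        # speed, and deposits them into buffers in fast main memory.
        for record in records:
            time.sleep(0.01)        # stand-in for slow, mechanical I/O
            buffers.put(record)
        buffers.put(None)           # signal end of input

    def main():
        records = ["card %d" % i for i in range(5)]
        buffers = queue.Queue(maxsize=2)   # bounded read-ahead: double buffering
        channel = threading.Thread(target=data_channel, args=(records, buffers))
        channel.start()
        while True:
            record = buffers.get()  # fetched at main-memory speed
            if record is None:
                break
            print("CPU processing", record)   # overlaps with the channel's reads
        channel.join()

    main()

While the "CPU" loop is processing one record, the "channel" thread is already reading the next one ahead, which is exactly the overlap described above.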

    The data channel was controlled by a channel program set up by the operating system I/O control

    routines and initiated by a special instruction executed by the CPU. Then, the channel independently

    processed data to or from the buffer. This provided communication from the CPU to the data channel to

    initiate an I/O operation.

    It remained for the channel to communicate to the CPU such events as data errors and the completion

of a transmission. At first, this communication was handled by polling: the CPU stopped its work periodically and polled the channel to determine if there was any message.

    Polling was obviously inefficient (imagine stopping your work periodically to go to the post office to see

    if an expected letter has arrived) and led to another significant innovation of the second generation - the

interrupt. The data channel was now able to interrupt the CPU with a message, usually "I/O complete".

In fact, the interrupt idea was later extended from I/O to allow signaling of a number of exceptional conditions such as arithmetic overflow, division by zero and time run-out. Of course, interval clocks were added in conjunction with the latter, and thus the operating system came to have a way of regaining control from an exceptionally long or indefinitely looping program.
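
The contrast between polling and interrupts can also be sketched in a few lines of Python, again with invented names and a thread standing in for the channel; this is only an analogy, not how real hardware is programmed.

    import threading
    import time

    def channel_transfer(done):
        time.sleep(0.05)   # the slow I/O transfer
        done.set()         # the "interrupt": signal that I/O is complete

    # Polling: the CPU periodically stops its work to ask the channel.
    done = threading.Event()
    threading.Thread(target=channel_transfer, args=(done,)).start()
    checks = 0
    while not done.is_set():
        checks += 1        # wasted work, like repeated trips to the post office
        time.sleep(0.01)
    print("polling saw completion after", checks, "checks")

    # Interrupt-style: the CPU simply waits until it is signaled.
    done = threading.Event()
    threading.Thread(target=channel_transfer, args=(done,)).start()
    done.wait()            # woken exactly when the channel signals completion
    print("interrupt-style wait: I/O complete")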

    Towards the end of this period, as random access devices became available, tape-oriented operating

    systems began to be replaced by disk-oriented systems. With the more sophisticated disk hardware and

the operating system supporting a greater portion of the programmer's work, the computer system that

    users saw was more and more removed from the actual hardware - users saw a virtual machine.

The second generation was a period of intense operating system development. It was also the period of sequential batch processing. But the sequential processing of one job at a time remained a significant

    limitation. Thus, there continued to be low CPU utilization for I/O bound jobs and low I/O device

    utilization for CPU bound jobs. This was a major concern, since computers were still very large (room-

    size) and expensive machines.

Researchers began to experiment with multiprogramming and multiprocessing in their computing services, in what came to be called time-sharing systems. A noteworthy example is the Compatible Time Sharing System (CTSS), developed at MIT during the early 1960s.


    The Third Generation, 1964-1979 (Multiprogramming and Time Sharing Systems):

The third generation officially began in April 1964 with IBM's announcement of its System/360 family of

    computers.

    Hardware technology began to use integrated circuits (ICs), which yielded significant advantages in both

speed and economy. Operating system development continued with the introduction and widespread adoption of

    multiprogramming.

These systems worked by introducing two new system programs: a system reader to move input jobs from cards to disk, and a system writer to move job output from disk to printer, tape, or cards. This staging of jobs through disk is known as spooling. Operation of the spooling system was, as before, transparent to the computer user, who perceived input as coming directly from the cards and output going directly to the printer.

The idea of taking fuller advantage of the computer's data channel and I/O capabilities continued to develop. That is, designers recognized that I/O needed only to be initiated by a CPU instruction - the actual I/O data transmission could take place under control of a separate and asynchronously operating channel program.

Thus, by switching control of the CPU between the currently executing user program, the system reader program, and the system writer program, it was possible to keep the slower mechanical I/O devices running and minimize the amount of time the CPU spent waiting for I/O completion.

The net result was an increase in system throughput and resource utilization, to the benefit of both users and providers of computer services.

This concurrent operation of three programs (more properly, apparent concurrent operation, since the systems had only one CPU and could therefore execute just one instruction at a time) required that additional features and complexity be added to the operating system.

The Fourth Generation, 1980-1994 (Personal Computers and Workstations):

    The fourth generation is characterized by the appearance of the personal computer and the

    workstation.

    Miniaturization of electronic circuits and components continued and large-scale integration (LSI), the

    component technology of the third generation, was replaced by very large scale integration (VLSI),

    which characterizes the fourth generation.

    VLSI with its capacity for containing thousands of transistors on a small chip, made possible the

development of desktop computers with capabilities exceeding those that filled entire rooms and floors of buildings just twenty years earlier.

    The microprocessor brought the fourth generation of computers, as thousands of integrated circuits

    were built onto a single silicon chip. What in the first generation filled an entire room could now fit in

the palm of the hand. The Intel 4004 chip, developed in 1971, located all the components of the computer - from the central processing unit and memory to input/output controls - on a single chip.

    In 1981 IBM introduced its first computer for the home user, and in 1984 Apple introduced the

Macintosh. Microprocessors also moved out of the realm of desktop computers and into many areas of life as more and more everyday products began to use microprocessors.

    As these small computers became more powerful, they could be linked together to form networks,

    which eventually led to the development of the Internet. Fourth generation computers also saw the

    development of GUIs, the mouse and handheld devices.


    Fifth Generation - Present and Beyond: Artificial Intelligence

    Fifth generation computing devices, based on artificial intelligence, are still in development, though

    there are some applications, such as voice recognition, that are being used today.

    The use of parallel processing and superconductors is helping to make artificial intelligence a reality.

    Quantum computation and molecular and nanotechnology will radically change the face of computers in

years to come. The goal of fifth-generation computing is to develop devices that respond to natural language input and

    are capable of learning and self-organization.

    Types of Operating Systems

    Modern computer operating systems may be classified into three groups, which are distinguished by the

    nature of interaction that takes place between the computer user and his or her program during its

    processing. The three groups are called batch, time-shared and real time operating systems.

    In a batch processing operating system environment, users submit jobs to a central place where these

    jobs are collected into a batch, and subsequently placed on an input queue at the computer where they

will be run. In this case, the user has no interaction with the job during its processing, and the computer's response time is the turnaround time - the time from submission of the job until execution is complete and the results are ready for return to the person who submitted the job.

    Another mode for delivering computing services is provided by time sharing operating systems. In this

    environment a computer provides computing services to several or many users concurrently on-line.

    Here, the various users are sharing the central processor, the memory, and other resources of the

    computer system in a manner facilitated, controlled, and monitored by the operating system. The user,

in this environment, has nearly full interaction with the program during its execution, and the computer's response time may be expected to be no more than a few seconds.

The third class of operating systems, real-time operating systems, is designed to service those applications where response time is of the essence in order to prevent error, misrepresentation or even disaster. Examples of real-time operating systems are those which handle airline reservations, machine tool control, and monitoring of a nuclear power station. The systems, in this case, are designed to be interrupted by external signals that require the immediate attention of the computer system.

    In fact, many computer operating systems are hybrids, providing for more than one of these types of

    computing services simultaneously. It is especially common to have a background batch system running

    in conjunction with one of the other two on the same computer. A number of other definitions are

    important to gaining an understanding of operating systems:

    A multiprogramming operating system is a system that allows more than one active user program (or

part of a user program) to be stored in main memory simultaneously. Thus, it is evident that a time-

    sharing system is a multiprogramming system, but note that a multiprogramming system is not

    necessarily a time-sharing system. A batch or real time operating system could, and indeed usually does,

    have more than one active user program simultaneously in main storage. Another important, and all too

    similar, term is multiprocessing.

A multiprocessing system is a computer hardware configuration that includes more than one independent processing unit. The term multiprocessing is generally used to refer to large computer

    hardware complexes found in major scientific or commercial applications.

    A networked computing system is a collection of physically interconnected computers. The operating

system of each of the interconnected computers must contain, in addition to its own stand-alone functionality, provisions for handling communication and the transfer of programs and data among the other

    computers with which it is connected.


    A distributed computing system consists of a number of computers that are connected and managed so

that they automatically share the job processing load among the constituent computers, or separate the job load as appropriate for particularly configured processors. Such a system requires an operating system,

    which in addition to the typical stand-alone functionality provides coordination of the operations and

    information flow among the component computers.

The networked and distributed computing environments and their respective operating systems are designed with more complex functional capabilities. In a network operating system the users are aware

    of the existence of multiple computers, and can log in to remote machines and copy files from one

    machine to another. Each machine runs its own local operating system and has its own user (or users).

    A distributed operating system, in contrast, is one that appears to its users as a traditional uniprocessor

    system, even though it is actually composed of multiple processors. In a true distributed system, users

    should not be aware of where their programs are being run or where their files are located; that should

    all be handled automatically and efficiently by the operating system.

    Network operating systems are not fundamentally different from single processor operating systems.

They obviously need a network interface controller and some low-level software to drive it, as well as programs to achieve remote login and remote file access, but these additions do not change the essential structure of the operating system.

    What OSs Are Available Today

Below is a list of the different types of operating systems, with a few examples of operating systems that fall into each category. Many operating systems fall into more than one of the categories given below.

    GUI - Short for Graphical User Interface, a GUI Operating System contains graphics and icons and is

    commonly navigated using a computer mouse. Below are some examples of GUI Operating Systems.

    System 7.x

    Windows 98

    Windows CE

    Multi-user - A multi-user Operating System allows for multiple users to use the same computer at the

    same time and/or different times. Below are some examples of multi-user Operating Systems.

    Linux

    UNIX

    Windows 2000

    VMS

    MVS

    Multiprocessing - An Operating System capable of supporting and utilizing more than one computer

    processor. Below are some examples of multiprocessing Operating Systems.

    Linux

    UNIX

    Windows 2000

Multitasking - An operating system that is capable of allowing multiple software processes to be run at the same time. Below are some examples of multitasking Operating Systems.

    UNIX

    Windows 2000

    Multithreading - Operating systems that allow different parts of a software program to run

    concurrently. Operating systems that would fall into this category are:

    Linux

    UNIX


    Windows 2000

    Networking OS:

    Windows 2000

    Novell Netware

OS Layers

An operating system provides the environment within which programs are executed. To construct such

    an environment, the system is partitioned into small modules with a well-defined interface. The design

    of a new operating system is a major task. It is very important that the goals of the system be well

    defined before the design begins. The type of system desired is the foundation for choices between

    various algorithms and strategies that will be necessary. The important modules for an operating system

    are listed below.

Process Management

Memory Management

Secondary Storage Management

I/O System

File Management

Protection System

Networking

Command Interpreter System

    Obviously, not all systems have the same structure.

    The Binary Number System

    Why Binary?

    The number system that you are familiar with, that you use every day, is the decimal number system,

also commonly referred to as the base-10 system. When you perform computations such as 3 + 2 = 5, or 21 - 7 = 14, you are using the decimal number system. This system, which you likely learned in first or second grade, is ingrained into your subconscious; it's the natural way that you think about numbers. Of course it is not just you: It is the way that everyone thinks, and has always thought, about numbers

    and arithmetic. Evidence exists that Egyptians were using a decimal number system five thousand years

    ago. The Roman numeral system, predominant for hundreds of years, was also a decimal number

    system (though organized differently from the Arabic base-10 number system that we are most familiar

    with). Indeed, base-10 systems, in one form or another, have been the most widely used number

    systems ever since civilization started counting. In dealing with the inner workings of a computer,

    though, you are going to have to learn to think in a different number system, the binary number system,

    also referred to as the base-2 system. Before considering why we might want to use a different number

system, let's first consider: Why do we use base-10? The simple answer: We have 10 fingers. Before the days of calculators and computers, we counted on our hands (many of us still do!). Consider a child counting a pile of pennies. He would begin: One, two, three, ..., eight, nine. Upon reaching nine, the next penny counted makes the total one single group of ten pennies. He then keeps counting: One group of ten pennies ... two groups of ten pennies ... three groups of ten pennies ... eight groups of ten pennies ... nine groups of ten pennies. Upon reaching nine groups of ten pennies plus nine additional pennies, the next penny counted makes the total thus far: one single group of one hundred pennies.

    Upon completing the task, the child might find that he has three groups of one hundred pennies, five

    groups of ten pennies, and two pennies left over: 352 pennies. More formally, the base-10 system is a

    positional system, where the rightmost digit is the ones position (the number of ones), the next digit to

    the left is the tens position (the number of groups of 10), the next digit to the left is the hundreds

position (the number of groups of 100), and so forth. The base-10 number system has 10 distinct symbols, or digits (0, 1, 2, 3, ..., 8, 9). In decimal notation, we write a number as a string of symbols,


where each symbol is one of these ten digits, and to interpret a decimal number, we multiply each digit by the power of 10 associated with that digit's position. For example, consider the decimal number 6349. This number is:

6349 = (6 × 10^3) + (3 × 10^2) + (4 × 10^1) + (9 × 10^0) = 6000 + 300 + 40 + 9

    There is nothing essentially easier about using the base-10 system. It just seems more intuitive only

    because it is the only system that you have used extensively, and, again, the fact that it is used

    extensively is due to the fact that humans have 10 fingers. If humans had six fingers, we would all be

    using a base-6 system, and we would all find that system to be the most intuitive and natural.

    So, long ago, humans looked at their hands, saw ten fingers, and decided to use a base-10 system. But

how many fingers does a computer have? Consider: Computers are built from transistors, and an individual transistor can only be ON or OFF (two options). Similarly, data storage devices can be optical

    or magnetic. Optical storage devices store data in a specific location by controlling whether light is

    reflected off that location or is not reflected off that location (two options). Likewise, magnetic storage

    devices store data in a specific location by magnetizing the particles in that location with a specific

    orientation. We can have the north magnetic pole pointing in one direction, or the opposite direction

    (two options). Computers can most readily use two symbols, and therefore a base-2 system, or binary

    number system, is most appropriate. The base-10 number system has 10 distinct symbols: 0, 1, 2, 3, 4, 5,

    6, 7, 8 and 9. The base-2 system has exactly two symbols: 0 and 1. The base-10 symbols are termed

    digits. The base-2 symbols are termed binary digits, or bits for short. All base-10 numbers are built as

    strings of digits (such as 6349). All binary numbers are built as strings of bits (such as 1101). Just as we

    would say that the decimal number 12890 has five digits, we would say that the binary number 11001 is

a five-bit number. The point: All data in a computer is represented in binary. The pictures of your last vacation stored on your hard drive: it's all bits. The YouTube video of the cat falling off the chair that you saw this morning: bits. Your Facebook page: bits. The tweet you sent: bits. The email from your professor telling you to spend less time on vacation, browsing YouTube, updating your Facebook page and sending tweets: that's bits too. Everything is bits. To understand how computers work, you have to speak the language. And the language of computers is the binary number system.

    The Binary Number System

    Consider again the example of a child counting a pile of pennies, but this time in binary. He would begin

with the first penny: 1. The next penny counted makes the total one single group of two pennies. What number is this? When the base-10 child reached nine (the highest symbol in his scheme), the next penny gave him one group of ten, denoted as "10", where the "1" indicated one collection of ten. Similarly, when the base-2 child reaches one (the highest symbol in his scheme), the next penny gives him one group of two, denoted as "10", where the "1" indicates one collection of two. Back to the base-2 child: The next penny makes one group of two pennies and one additional penny: 11. The next penny added makes two groups of two, which is one group of 4: 100. The "1" here indicates a collection of two groups of two, just as the "1" in the base-10 number 100 indicates ten groups of ten. Upon completing the counting task, the base-2 child might find that he has one group of four pennies, no groups of two pennies, and one penny left over: 101 pennies. The child counting the same pile of pennies in base-10


would conclude that there were 5 pennies. So, 5 in base-10 is equivalent to 101 in base-2. To avoid confusion when the base in use is not clear from the context, or when using multiple bases in a single expression, we append a subscript to the number to indicate the base, and write:

5₁₀ = 101₂

Just as with decimal notation, we write a binary number as a string of symbols, but now each symbol is a 0 or a 1. To interpret a binary number, we multiply each digit by the power of 2 associated with that digit's position.

For example, consider the binary number 1101. This number is:

1101₂ = (1 × 2^3) + (1 × 2^2) + (0 × 2^1) + (1 × 2^0) = 8 + 4 + 0 + 1 = 13₁₀

Since binary numbers can only contain the two symbols 0 and 1, numbers such as 25 and 1114000 cannot be binary numbers. We say that all data in a computer is stored in binary, that is, as 1s and 0s. It is important to keep in mind that values of 0 and 1 are logical values, not the values of a physical quantity, such as a voltage. The actual physical binary values used to store data internally within a computer might be, for instance, 5 volts and 0 volts, or perhaps 3.3 volts and 0.3 volts, or perhaps reflection and no reflection. The two values that are used to physically store data can differ within

    different portions of the same computer. All that really matters is that there are two different symbols,

    so we will always refer to them as 0 and 1. A string of eight bits (such as 11000110) is termed a byte. A

    collection of four bits (such as 1011) is smaller than a byte, and is hence termed a nibble. (This is the sort

    of nerd-humor for which engineers are famous.)
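
This positional rule is easy to express in code. The following short Python function is an illustration (the function name is invented here): it interprets a bit string by accumulating positional weights, left to right. Python's built-in int(bits, 2) performs the same conversion.

    def binary_to_decimal(bits: str) -> int:
        value = 0
        for bit in bits:                  # leftmost (most significant) bit first
            value = value * 2 + int(bit)  # shift previous bits up one position
        return value

    print(binary_to_decimal("1101"))      # 13, i.e. 8 + 4 + 0 + 1
    print(int("1101", 2))                 # Python's built-in agrees: 13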

    Decimal and Binary Numbers

When we write decimal (base 10) numbers, we use a positional notation system. Each digit is multiplied by an appropriate power of 10 depending on its position in the number; for example, 6349 = (6 × 10^3) + (3 × 10^2) + (4 × 10^1) + (9 × 10^0).

For whole numbers, the rightmost digit position is the ones position (10^0 = 1). The numeral in that

    position indicates how many ones are present in the number. The next position to the left is tens, then

    hundreds, thousands, and so on. Each digit position has a weight that is ten times the weight of the

    position to its right. In the decimal number system, there are ten possible values that can appear in each

    digit position, and so there are ten numerals required to represent the quantity in each digit position.

    The decimal numerals are the familiar zero through nine (0, 1, 2, 3, 4, 5, 6, 7, 8, 9).

In a positional notation system, the number base is called the radix. Thus, the base-ten system that we normally use has a radix of 10. The terms radix and base can be used interchangeably.

When writing numbers in a radix other than ten, or where the radix isn't clear from the context, it is customary to specify the radix using a subscript. Thus, in a case where the radix isn't understood, decimal numbers would be written, for example, like this: 6349₁₀.

    Generally, the radix will be understood from the context and the radix specification is left off.


The binary number system is also a positional notation numbering system, but in this case the base is not ten, but is instead two. Each digit position in a binary number represents a power of two. So, when we write a binary number, each binary digit is multiplied by an appropriate power of 2 based on its position in the number, for example:

101101₂ = (1 × 2^5) + (0 × 2^4) + (1 × 2^3) + (1 × 2^2) + (0 × 2^1) + (1 × 2^0) = 32 + 8 + 4 + 1 = 45₁₀

    In the binary number system, there are only two possible values that can appear in each digit position

    rather than the ten that can appear in a decimal number. Only the numerals 0 and 1 are used in binary

numbers. The term bit is a contraction of the words binary and digit, and when talking about binary

    numbers the terms bit and digit can be used interchangeably. When talking about binary numbers, it is

    often necessary to talk of the number of bits used to store or represent the number. This merely

    describes the number of binary digits that would be required to write the number. The number in the

above example is a 6-bit number.
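
The reverse direction, finding the bits of a decimal number, can be sketched the same way. The following Python function is illustrative (the name is invented here): it repeatedly divides by 2 and collects the remainders from right to left.

    def decimal_to_binary(n: int) -> str:
        if n == 0:
            return "0"
        bits = ""
        while n > 0:
            bits = str(n % 2) + bits   # each remainder is the next bit, right to left
            n //= 2
        return bits

    print(decimal_to_binary(45))       # "101101", the 6-bit number used above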


    A programming language is an artificial language designed to communicate instructions to a machine,

    particularly a computer. Programming languages can be used to create programs that control the

    behavior of a machine and/or to express algorithms precisely.

    The earliest programming languages predate the invention of the computer, and were used to direct the

    behavior of machines such as Jacquard looms and player pianos. Thousands of different programming

    languages have been created, mainly in the computer field, with many being created every year. Most

programming languages describe computation in an imperative style, i.e., as a sequence of commands, although some languages, such as those that support functional programming or logic programming, use

    alternative forms of description.

The description of a programming language is usually split into the two components of syntax (form)

    and semantics (meaning). Some languages are defined by a specification document (for example,

    the C programming language is specified by an ISO Standard), while other languages, such as Perl 5 and

    earlier, have a dominant implementation that is used as a reference.

    The first programming languages predate the modern computer. The 19th century saw the invention of

    "programmable" looms and player piano scrolls, both of which implemented examples ofdomain-

    specific languages. By the beginning of the twentieth century, punch cards encoded data and directed

mechanical processing. In the 1930s and 1940s, the formalisms of Alonzo Church's lambda calculus and Alan Turing's Turing machines provided mathematical abstractions for expressing algorithms; the lambda calculus remains influential in language design.

In the 1940s, the first electrically powered digital computers were created. Grace Hopper, one of the first programmers of the Harvard Mark I computer and a pioneer in the field, developed the first compiler for a computer programming language around 1952. Nevertheless, the idea of a programming language existed earlier; the first high-level programming language to be designed for a computer was Plankalkül, developed for the German Z3 by Konrad Zuse between 1943 and 1945. However, it was not implemented until 1998, and again in 2000.
