Ch20 Database System Architectures


    Database System Concepts, 5th Ed.

    Silberschatz, Korth and Sudarshan

    See www.db-book.com for conditions on re-use

    Chapter 20: Database System Architectures

    Version: Oct 5, 2006

    http://www.db-book.com/

    Database System Architectures

    Centralized and Client-Server Systems

    Server System Architectures

    Parallel Systems

    Distributed Systems

    Network Types


    Centralized Systems

    Run on a single computer system and do not interact with other computer systems.

    General-purpose computer system: one to a few CPUs and a number of device controllers that are connected through a common bus that provides access to shared memory.

    Single-user system (e.g., personal computer or workstation): desktop unit, single user, usually has only one CPU and one or two hard disks; the OS may support only one user.

    Multi-user system: more disks, more memory, multiple CPUs, and a multi-user OS. Serves a large number of users who are connected to the system via terminals. Often called server systems.


    A Centralized Computer System


    Client-Server Systems

    Server systems satisfy requests generated at m client systems, whose general structure is shown below:


    Client-Server Systems (Cont.)

    Database functionality can be divided into:

    Back-end: manages access structures, query evaluation and optimization, concurrency control and recovery.

    Front-end: consists of tools such as forms, report writers, and graphical user interface facilities.

    The interface between the front-end and the back-end is through SQL or through an application program interface.


    Client-Server Systems (Cont.)

    Advantages of replacing mainframes with networks of workstations or personal computers connected to back-end server machines:

    better functionality for the cost

    flexibility in locating resources and expanding facilities

    better user interfaces

    easier maintenance


    Server System Architecture

    Server systems can be broadly categorized into two kinds:

    transaction servers, which are widely used in relational database systems, and

    data servers, used in object-oriented database systems


    Transaction Servers

    Also called query server systems or SQL server systems

    Clients send requests to the server

    Transactions are executed at the server

    Results are shipped back to the client.

    Requests are specified in SQL, and communicated to the server through a remote procedure call (RPC) mechanism.

    Transactional RPC allows many RPC calls to form a transaction.

    Open Database Connectivity (ODBC) is a C language application program interface standard from Microsoft for connecting to a server, sending SQL requests, and receiving results.

    JDBC standard is similar to ODBC, for Java
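    For illustration, a minimal JDBC sketch of this interaction: the client opens a connection to the server, ships an SQL request, and reads back the results. The connection URL, table, and credentials below are hypothetical.

        import java.sql.*;

        public class JdbcExample {
            public static void main(String[] args) throws SQLException {
                // Hypothetical connection URL and credentials; DriverManager locates a
                // suitable JDBC driver on the classpath for this URL.
                String url = "jdbc:postgresql://dbserver:5432/university";
                try (Connection conn = DriverManager.getConnection(url, "dbuser", "dbpassword");
                     Statement stmt = conn.createStatement();
                     // The SQL request is executed at the server; only results are shipped back.
                     ResultSet rs = stmt.executeQuery("SELECT id, name FROM instructor")) {
                    while (rs.next()) {
                        System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                    }
                }
            }
        }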


    Transaction Server Process Structure

    A typical transaction server consists of multiple processes accessing data in shared memory.

    Server processes

    These receive user queries (transactions), execute them, and send results back

    Processes may be multithreaded, allowing a single process to execute several user queries concurrently

    Typically multiple multithreaded server processes

    Lock manager process

    More on this later

    Database writer process

    Output modified buffer blocks to disks continually


    Transaction Server Processes (Cont.)

    Log writer process

    Server processes simply add log records to log record buffer

    Log writer process outputs log records to stable storage.

    Checkpoint process

    Performs periodic checkpoints

    Process monitor process

    Monitors other processes, and takes recovery actions if any of the other processes fail

    E.g., aborting any transactions being executed by a server process and restarting it
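    A minimal sketch of this division of labor, assuming a hypothetical LogWriter class and log file name: server threads only append records to a shared in-memory buffer, and a single log writer thread outputs them to stable storage.

        import java.io.IOException;
        import java.nio.file.*;
        import java.util.concurrent.BlockingQueue;
        import java.util.concurrent.LinkedBlockingQueue;

        public class LogWriter implements Runnable {
            // Shared log record buffer: server processes append, the log writer drains.
            private final BlockingQueue<String> logBuffer = new LinkedBlockingQueue<>();
            private final Path logFile = Paths.get("db.log");   // hypothetical log file name

            // Called by server processes/threads: just add the record to the buffer.
            public void append(String logRecord) {
                logBuffer.add(logRecord);
            }

            // The log writer loop: output buffered log records to stable storage.
            @Override
            public void run() {
                try {
                    while (true) {
                        String record = logBuffer.take();       // wait for the next record
                        Files.writeString(logFile, record + System.lineSeparator(),
                                StandardOpenOption.CREATE, StandardOpenOption.APPEND,
                                StandardOpenOption.SYNC);       // force the write to disk
                    }
                } catch (InterruptedException | IOException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }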


    Transaction System Processes (Cont.)


    Transaction System Processes (Cont.)

    Shared memory contains shared data

    Buffer pool

    Lock table

    Log buffer

    Cached query plans (reused if same query submitted again)

    All database processes can access shared memory

    To ensure that no two processes are accessing the same data structure at the same time, database systems implement mutual exclusion using either

    Operating system semaphores

    Atomic instructions such as test-and-set

    To avoid overhead of interprocess communication for lock request/grant, each database process operates directly on the lock table

    instead of sending requests to lock manager process

    Lock manager process still used for deadlock detection
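    A minimal sketch of such mutual exclusion, using Java's AtomicBoolean compareAndSet to play the role of an atomic test-and-set instruction (the guarded lock-table update is a hypothetical placeholder):

        import java.util.concurrent.atomic.AtomicBoolean;

        public class SpinLatch {
            // false = free, true = held; compareAndSet acts as an atomic test-and-set.
            private final AtomicBoolean held = new AtomicBoolean(false);

            public void acquire() {
                // Spin until the flag is atomically flipped from free to held.
                while (!held.compareAndSet(false, true)) {
                    Thread.onSpinWait();    // hint that this thread is busy-waiting
                }
            }

            public void release() {
                held.set(false);
            }
        }

        // Usage sketch, guarding a (hypothetical) lock table shared by all processes:
        //     latch.acquire();
        //     try { lockTable.grant(txn, item); } finally { latch.release(); }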


    Data Servers (Cont.)

    Page-shipping versus item-shipping

    Smaller unit of shipping implies more messages

    Worth prefetching related items along with requested item

    Page shipping can be thought of as a form of prefetching

    Locking

    Overhead of requesting and getting locks from server is high due to message delays

    Can grant locks on requested and prefetched items; with page shipping, transaction is granted lock on whole page.

    Locks on a prefetched item can be called back by the server, and returned by client transaction if the prefetched item has not been used.

    Locks on the page can be de-escalated to locks on items in the page when there are lock conflicts. Locks on unused items can then be returned to the server.


    Data Servers (Cont.)

    Data Caching

    Data can be cached at client even in between transactions

    But check that data is up-to-date before it is used (cache coherency)

    Check can be done when requesting lock on data item

    Lock Caching

    Locks can be retained by client system even in between transactions

    Transactions can acquire cached locks locally, without contacting server

    Server calls back locks from clients when it receives a conflicting lock request. Client returns lock once no local transaction is using it.

    Similar to de-escalation, but across transactions.
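    A minimal sketch of client-side lock caching under these rules, with hypothetical names and a placeholder for the server RPC: cached locks are reused locally, and a call-back returns a lock only if no local transaction still holds it.

        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;

        public class ClientLockCache {
            private final Set<String> cachedLocks = ConcurrentHashMap.newKeySet();
            private final Set<String> locksInUse  = ConcurrentHashMap.newKeySet();

            // A transaction asks for a lock: reuse a cached lock, else request it from the server.
            public void acquire(String item) {
                if (!cachedLocks.contains(item)) {
                    requestLockFromServer(item);    // placeholder for the RPC to the server
                    cachedLocks.add(item);
                }
                locksInUse.add(item);
            }

            // The transaction is done with the item; the lock stays cached for later transactions.
            public void release(String item) {
                locksInUse.remove(item);
            }

            // Server call-back on a conflicting request: return the lock only if it is unused locally.
            public boolean callBack(String item) {
                if (locksInUse.contains(item)) {
                    return false;                   // still in use; cannot return it yet
                }
                cachedLocks.remove(item);
                return true;                        // lock returned to the server
            }

            private void requestLockFromServer(String item) { /* hypothetical RPC */ }
        }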


    Parallel Systems

    Parallel database systems consist of multiple processors and multiple disks connected by a fast interconnection network.

    A coarse-grain parallel machine consists of a small number of powerful processors

    A massively parallel or fine-grain parallel machine utilizes thousands of smaller processors.

    Two main performance measures:

    throughput --- the number of tasks that can be completed in a given time interval

    response time --- the amount of time it takes to complete a single task from the time it is submitted


    Speed-Up and Scale-Up

    Speedup: a fixed-sized problem executing on a small system is given to a system which is N times larger.

    Measured by:

    speedup = (small system elapsed time) / (large system elapsed time)

    Speedup is linear if the ratio equals N.

    Scaleup: increase the size of both the problem and the system

    N-times larger system used to perform an N-times larger job

    Measured by:

    scaleup = (small system, small problem elapsed time) / (big system, big problem elapsed time)

    Scaleup is linear if the ratio equals 1.
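    As an illustrative (made-up) numeric example: if a task takes 100 seconds on the small system and 30 seconds on a system 4 times larger, speedup = 100 / 30, or about 3.3, which is sublinear since linear speedup would be 4. If a 4-times-larger job on the 4-times-larger system takes 110 seconds instead of 100, scaleup = 100 / 110, or about 0.91, slightly below the linear value of 1.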


    Speedup


    Scaleup


    Batch and Transaction Scaleup

    Batch scaleup:

    A single large job; typical of most decision support queries and scientific simulation.

    Use an N-times larger computer on N-times larger problem.

    Transaction scaleup:

    Numerous small queries submitted by independent users to a shared database; typical of transaction processing and timesharing systems.

    N times as many users submitting requests (hence, N times as many requests) to an N-times larger database, on an N-times larger computer.

    Well-suited to parallel execution.


    Factors Limiting Speedup and Scaleup

    Speedup and scaleup are often sublinear due to:

    Startup costs: Cost of starting up multiple processes may dominate computation time, if the degree of parallelism is high.

    Interference: Processes accessing shared resources (e.g., system bus, disks, or locks) compete with each other, thus spending time waiting on other processes, rather than performing useful work.

    Skew: Increasing the degree of parallelism increases the variance in service times of tasks executing in parallel. Overall execution time is determined by the slowest of the tasks executing in parallel.


    Interconnection Network Architectures

    Bus. System components send data on and receive data from a single communication bus;

    Does not scale well with increasing parallelism.

    Mesh. Components are arranged as nodes in a grid, and each component is connected to all adjacent components

    Communication links grow with the growing number of components, and so it scales better.

    But may require 2√n hops to send a message to a node (or √n with wraparound connections at the edge of the grid).

    Hypercube. Components are numbered in binary; components are connected to one another if their binary representations differ in exactly one bit.

    n components are connected to log(n) other components and can reach each other via at most log(n) links; reduces communication delays.
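    A small sketch of this numbering (hypothetical class and method names): flipping each of the log(n) bits of a component's number enumerates exactly the components whose binary representations differ from it in one bit.

        import java.util.ArrayList;
        import java.util.List;

        public class Hypercube {
            // Neighbors of component `id` in a hypercube with n = 2^dimension components.
            public static List<Integer> neighbors(int id, int dimension) {
                List<Integer> result = new ArrayList<>();
                for (int bit = 0; bit < dimension; bit++) {
                    result.add(id ^ (1 << bit));   // flip one bit -> a directly connected component
                }
                return result;
            }

            public static void main(String[] args) {
                // In an 8-component (3-bit) hypercube, component 5 = 101 is connected to
                // components 4 (100), 7 (111), and 1 (001): log(8) = 3 links.
                System.out.println(neighbors(5, 3));   // prints [4, 7, 1]
            }
        }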


    Interconnection Architectures


    Parallel Database Architectures

    Shared memory -- processors share a common memory

    Shared disk -- processors share a common disk

    Shared nothing -- processors share neither a common memory nor common disk

    Hierarchical -- hybrid of the above architectures


    Parallel Database Architectures


    Shared Memory

    Processors and disks have access to a common memory, typically via a bus or through an interconnection network.

    Extremely efficient communication between processors -- data in shared memory can be accessed by any processor without having to move it using software.

    Downside -- architecture is not scalable beyond 32 or 64 processors since the bus or the interconnection network becomes a bottleneck

    Widely used for lower degrees of parallelism (4 to 8).


    Shared Disk

    All processors can directly access all disks via an interconnection network, but the processors have private memories.

    The memory bus is not a bottleneck

    Architecture provides a degree of fault tolerance -- if a processor fails, the other processors can take over its tasks since the database is resident on disks that are accessible from all processors.

    Examples: IBM Sysplex and DEC clusters (now part of Compaq) running Rdb (now Oracle Rdb) were early commercial users

    Downside: bottleneck now occurs at interconnection to the disk subsystem.

    Shared-disk systems can scale to a somewhat larger number of processors, but communication between processors is slower.


    Shared Nothing

    Node consists of a processor, memory, and one or more disks. Processors at one node communicate with another processor at another node using an interconnection network. A node functions as the server for the data on the disk or disks the node owns.

    Examples: Teradata, Tandem, Oracle nCUBE

    Data accessed from local disks (and local memory accesses) do not pass through the interconnection network, thereby minimizing the interference of resource sharing.

    Shared-nothing multiprocessors can be scaled up to thousands of processors without interference.

    Main drawback: cost of communication and non-local disk access; sending data involves software interaction at both ends.


    Hierarchical

    Combines characteristics of shared-memory, shared-disk, and shared-nothing architectures.

    Top level is a shared-nothing architecture: nodes connected by an interconnection network, and do not share disks or memory with each other.

    Each node of the system could be a shared-memory system with a few processors.

    Alternatively, each node could be a shared-disk system, and each of the systems sharing a set of disks could be a shared-memory system.

    Reduce the complexity of programming such systems by distributed virtual-memory architectures

    Also called non-uniform memory architecture (NUMA)


    Distributed Systems

    Data spread over multiple machines (also referred to as sites or nodes).

    Network interconnects the machines

    Data shared by users on multiple machines


    Distributed Databases

    Homogeneous distributed databases

    Same software/schema on all sites, data may be partitioned among sites

    Goal: provide a view of a single database, hiding details of distribution

    Heterogeneous distributed databases

    Different software/schema on different sites

    Goal: integrate existing databases to provide useful functionality

    Differentiate between local and global transactions

    A local transaction accesses data in the single site at which the transaction was initiated.

    A global transaction either accesses data in a site different from the one at which the transaction was initiated or accesses data in several different sites.


    Trade-offs in Distributed Systems

    Sharing data -- users at one site are able to access the data residing at some other sites.

    Autonomy -- each site is able to retain a degree of control over data stored locally.

    Higher system availability through redundancy -- data can be replicated at remote sites, and the system can function even if a site fails.

    Disadvantage: added complexity required to ensure proper coordination among sites.

    Software development cost.

    Greater potential for bugs.

    Increased processing overhead.


    Implementation Issues for Distributed Databases

    Atomicity needed even for transactions that update data at multiple sites

    The two-phase commit protocol (2PC) is used to ensure atomicity (a minimal coordinator sketch follows this list)

    Basic idea: each site executes the transaction until just before commit, and then leaves the final decision to a coordinator

    Each site must follow the decision of the coordinator, even if there is a failure while waiting for the coordinator's decision

    2PC is not always appropriate: other transaction models based on persistent messaging, and workflows, are also used

    Distributed concurrency control (and deadlock detection) required

    Data items may be replicated to improve data availability

    Details of above in Chapter 22
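    A minimal sketch of the coordinator side of 2PC, assuming a hypothetical Participant interface for the sites; it only illustrates the two phases (prepare, then commit or abort) and omits logging, timeouts, and failure recovery.

        import java.util.List;

        // Hypothetical interface for a participating site.
        interface Participant {
            boolean prepare();   // phase 1: the site votes yes (ready to commit) or no
            void commit();       // phase 2: the coordinator's decision was commit
            void abort();        // phase 2: the coordinator's decision was abort
        }

        public class TwoPhaseCommitCoordinator {
            // Returns true if the transaction committed at all sites.
            public boolean execute(List<Participant> sites) {
                // Phase 1: ask every site to prepare; any "no" vote forces an abort.
                boolean allPrepared = true;
                for (Participant site : sites) {
                    if (!site.prepare()) {
                        allPrepared = false;
                        break;
                    }
                }
                // Phase 2: broadcast the final decision; every site must follow it.
                for (Participant site : sites) {
                    if (allPrepared) {
                        site.commit();
                    } else {
                        site.abort();
                    }
                }
                return allPrepared;
            }
        }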


    Network Types

    Local-area networks (LANs) -- composed of processors that are distributed over small geographical areas, such as a single building or a few adjacent buildings.

    Wide-area networks (WANs) -- composed of processors distributed over a large geographical area.


    Network Types (Cont.)

    WANs with continuous connection (e.g., the Internet) are needed for implementing distributed database systems

    Groupware applications such as Lotus Notes can work on WANs with discontinuous connection:

    Data is replicated.

    Updates are propagated to replicas periodically.

    Copies of data may be updated independently.

    Non-serializable executions can thus result. Resolution is application dependent.


    Database System Concepts, 5th Ed.

    Silberschatz, Korth and Sudarshan

    See www.db-book.com for conditions on re-use

    End of Chapter

    http://www.db-book.com/