Choosing a Computing Architecture


Chapter 8

Architectural Requirements

- Scalability
- Manageability
- Availability
- Extensibility
- Flexibility
- Integration

These requirements are shaped by user, business, budget, and technology considerations.

Strategy for Architecture Definition

- Obtain existing architecture plans
- Obtain existing capacity plans
- Document existing interfaces
- Prepare a capacity plan
- Prepare the technical architecture
- Document operating system requirements
- Develop recovery plans
- Develop security and control plans
- Create the architecture
- Create a technical risk assessment

Hardware Architecture

- Involve all experts:
  - New technology
  - Old technology
  - Networking

Hardware Architectures

- Robust
- Available
- Reliable
- Extensible
- Scalable
- Supportable
- Recoverable
- Parallel

- VLM
- 64-bit
- Connective
- Open

Hardware Architectures

- SMP
- Cluster
- MPP
- NUMA
- Hybrids use SMP and MPP

Evaluation Criteria

Determine the platform for your needs

[Figure: SMP, clusters, NUMA, and MPP positioned by scalability (low to high) and maturity (low to high)]

Parallel Processing

- Parallel daily operations
- Shared resources:
  - Memory
  - Disk
  - Nothing
- Loosely or tightly coupled
- Layers: database, application, hardware, operating system

Making the Right Choice

- Requirements differ from those of operational systems
- Benchmark:
  - Available from vendors
  - Develop your own
  - Use realistic queries (see the example below)
- Scalability is important
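As an illustration of what a realistic query might look like, the hedged sketch below joins a hypothetical sales fact table to product and time dimensions; the schema and column names are assumptions for the example, not part of the chapter.

```sql
-- Hypothetical benchmark query against an assumed star schema.
SELECT p.category,
       t.fiscal_quarter,
       SUM(s.amount) AS total_sales,
       COUNT(*)      AS transaction_count
FROM   sales s
       JOIN products p ON p.product_id = s.product_id
       JOIN times    t ON t.time_id    = s.time_id
WHERE  t.fiscal_year = 1999
GROUP  BY p.category, t.fiscal_quarter
ORDER  BY p.category, t.fiscal_quarter;
```

A benchmark built from queries like this exercises scans, joins, aggregation, and sorting, which is where scalability differences between platforms tend to show up.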

SMP

- Communication by shared memory
- Disk controllers accessible to all CPUs
- Proven technology

[Figure: SMP architecture with multiple CPUs on a common bus, shared memory, and shared disks]

SMP

Benefits:
- High concurrency
- Workload balancing
- Moderate scalability
- Easy administration

Limitations:
- Memory (cluster for improvements)
- Bandwidth

NUMA

Loosely coupled shared memory

[Figure: NUMA architecture with groups of CPUs, each group having its own shared memory and disk, connected by a shared bus that gives nonuniform memory access]

NUMA

Benefits:
- Fully scalable: incremental additions to disk, CPU, and bandwidth
- Performs better than MPP
- Suited for Oracle server

Limitations:
- The technology is new and less proven
- You need new tools for easy system management
- NUMA is more expensive than SMP

Clusters

[Figure: Clustered architecture with three SMP nodes, each with its own CPUs and shared memory, connected over a common high-speed bus]

Clusters

- Shared disk, loosely coupled
- Dedicated memory
- High-speed bus
- Shared resources
- SMP nodes

Benefits:
- High availability
- Single database concept, incremental growth

Limitations:
- Scalability: internode synchronization needed
- Operating system overhead

MPP

[Figure: MPP (shared nothing) architecture in which each node has its own CPU, memory, and disk]

MPP

- A shared-nothing architecture
- Many nodes
- Fast access
- Exclusive memory on a node
- Low cost per node
- Scalable
- nCUBE configuration

MPP Benefits

- Unlimited incremental growth
- Very scalable
- Fast access
- Low cost per node
- Good for DSS

MPP Limitations

- Rigid partitioning
- Cache consistency
- Restricted disk access
- High memory cost per node
- High management burden
- Careful data placement

Windows NT

Architecture based on the client-server model

Benefits:
- Built-in Web services
- Scalability
- Ease of management and control

Limitations:
- Not as secure
- Cannot execute programs remotely
- Lacks linear scalability beyond four processors
- Addressing space for applications is limited to two gigabytes

Architectural Tiers

Tiered structures:
- Modular
- Logical separation

Distributed structures:
- Two-tier
- Three-tier
- Four-tier (and more)

Middleware

- Technologies for integration
- Gateway

Database Server Requirements

- Robust
- Available
- Reliable
- Extensible
- Scalable
- Supportable
- Recoverable
- Parallel

Parallelism

- Database
- Query
- Load
- Index
- Sort
- Backup
- Recovery

Further Considerations

- Optimization strategy
- Partitioning strategy (see the sketch after this list)
- Summarization strategy
- Indexing techniques
- Hardware and software scalability
- Availability
- Administration
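To make the partitioning, summarization, and indexing items concrete, here is a hedged Oracle-style sketch; the range-partitioned fact table, local index, and summary table below are illustrative assumptions, not recommendations from the chapter.

```sql
-- Hypothetical fact table partitioned by date range.
CREATE TABLE sales (
    sale_date   DATE,
    product_id  NUMBER,
    store_id    NUMBER,
    amount      NUMBER(12,2)
)
PARTITION BY RANGE (sale_date) (
    PARTITION sales_1998 VALUES LESS THAN (TO_DATE('1999-01-01', 'YYYY-MM-DD')),
    PARTITION sales_1999 VALUES LESS THAN (TO_DATE('2000-01-01', 'YYYY-MM-DD'))
);

-- Local index: one index segment per partition, which simplifies
-- maintenance when partitions are added or dropped.
CREATE INDEX sales_prod_idx ON sales (product_id) LOCAL;

-- Summary table that pre-aggregates sales by month and product.
CREATE TABLE sales_by_month AS
SELECT TRUNC(sale_date, 'MM') AS sale_month,
       product_id,
       SUM(amount)            AS total_amount
FROM   sales
GROUP  BY TRUNC(sale_date, 'MM'), product_id;
```

Pruning untouched partitions, maintaining indexes partition by partition, and querying the summary instead of the detail table are the main payoffs of a layout like this.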

Server Environments

Operational servers:
- Open DBMS
- Network, relational, hierarchical
- Mainframe proprietary DBMS
- Oracle, IMS, DB2, VSAM, Rdb, NonStop SQL, RMS

Warehouse servers:
- Open DBMS
- Relational
- General-purpose and warehouse-specific DBMS
- Oracle, Informix, Sybase, IBM DB2, NCR/AT&T Teradata, Red Brick

Data mart servers:
- Open DBMS
- Relational and multidimensional
- General-purpose and warehouse-specific DBMS
- Oracle, Oracle Express, Arbor Essbase, MS SQL Server, NT

Parallel Processing

A large task is broken into smaller tasks:
- Concurrent execution
- One or more processors

[Figure: Elapsed time for a task run serially on one processor compared with the same task run in parallel on four processors]

Parallel Database

- Increased speed
- Improved scalability
- Performance gains:
  - Availability
  - Flexibility
  - More users


Parallel Query

SQL code is split among server processes, as sketched below.

[Figure: A query decomposed into subqueries that run in parallel]
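As a hedged illustration of the idea, the Oracle-style query below uses a hint to request that the scan and aggregation be spread over several server processes; the sales table and the degree of 4 are assumptions for the example.

```sql
-- Request four parallel query processes for the (hypothetical)
-- sales table; each process scans part of the table and a
-- coordinator merges the partial results.
SELECT /*+ PARALLEL(s, 4) */
       product_id,
       SUM(amount) AS total_amount
FROM   sales s
GROUP  BY product_id;
```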

Parallel Load

Bypass SQL processing to speed throughput.
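One common way to do this in Oracle environments is a direct-path, parallel insert from a staging table, which writes formatted blocks directly rather than going through the conventional row-by-row SQL path; the statements below are a sketch under that assumption, with hypothetical table names.

```sql
-- Enable parallel DML for this session, then append rows
-- directly into the fact table, bypassing the conventional
-- SQL insert processing.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(sales, 4) */ INTO sales
SELECT sale_date, product_id, store_id, amount
FROM   sales_staging;

COMMIT;
```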

Parallel Processing

- Index: reduces the time to create
- Sort: allocates memory in cache efficiently
- Backup: runs simultaneously from any node (offline or online)
- Recovery: runs simultaneously from redo logs
- Summaries: use the CREATE TABLE AS SELECT statement (see the sketch below)
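The index and summary rows above can be illustrated with the hedged Oracle-style statements below; the degree of parallelism, the NOLOGGING option, and the object names are assumptions added for the example.

```sql
-- Build an index with four parallel processes and minimal
-- redo logging, which is what shortens creation time.
CREATE INDEX sales_store_idx ON sales (store_id)
    PARALLEL 4 NOLOGGING;

-- Build a summary table in parallel with CREATE TABLE AS SELECT.
CREATE TABLE sales_by_store
    PARALLEL 4 NOLOGGING
AS
SELECT store_id,
       SUM(amount) AS total_amount
FROM   sales
GROUP  BY store_id;
```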

Summary

This lesson discussed the following topics:

- Outlining the basic architectural requirements for a warehouse
- Highlighting the benefits and limitations of the different hardware architectures