1
What Happens When Processing, Storage, and Bandwidth are Free and Infinite?
Jim Gray
Microsoft Research
2
Outline
- Hardware CyberBricks: all nodes are very intelligent
- Software CyberBricks: standard way to interconnect intelligent nodes
- What next? Processing migrates to where the power is
  - Disk, network, display controllers have a full-blown OS
  - Send RPCs (SQL, Java, HTTP, DCOM, CORBA) to them
  - The computer is a federated distributed system.
3
A Hypothetical Question: Taking Things to the Limit
- Moore's law, 100x per decade:
  - Exa-instructions per second in 30 years
  - Exa-bit memory chips
  - Exa-byte disks
- Gilder's Law of the Telecosm, 3x/year more bandwidth (60,000x per decade!):
  - 40 Gbps per fiber today
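The two growth rates above compound very differently; a quick sketch of the arithmetic behind the slide's numbers (the rates themselves are the slide's assumptions):

```python
# Moore's law: ~100x per decade -> 100^3 = 1,000,000x over 30 years,
# taking giga-scale machines to exa-scale.
moore_30_years = 100 ** 3

# Gilder's Telecosm law: 3x per year -> 3^10 per decade.
gilder_per_decade = 3 ** 10

print(moore_30_years)     # 1000000
print(gilder_per_decade)  # 59049, i.e. the slide's "60,000x per decade"
```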
4
Grove's Law
- Link bandwidth doubles every 100 years!
- Not much has happened to telephones lately
- Still twisted pair
5
Gilder's Telecosm Law: 3x bandwidth/year for 25 more years
- Today: 10 Gbps per channel; 4 channels per fiber = 40 Gbps; 32 fibers/bundle = 1.2 Tbps/bundle
- In the lab: 3 Tbps/fiber (400-way WDM)
- In theory: 25 Tbps per fiber
- 1 Tbps = USA 1996 WAN bisection bandwidth; 1 fiber = 25 Tbps
6
Thesis: Many Little Beat Few Big
- "Smoking, hairy golf ball"
- How to connect the many little parts?
- How to program the many little parts?
- Fault tolerance?

[Figure: price classes mainframe ($1 million), mini ($100 K), micro ($10 K), nano; disk diameters 14", 9", 5.25", 3.5", 2.5", 1.8". 1 M SPECmarks, 1 TFLOP; 10^6 clocks to bulk RAM; event horizon on chip; VM reincarnated; multi-program cache, on-chip SMP. Storage hierarchy: pico processor (1 mm^3), 10 ps RAM (1 MB), 10 ns RAM (100 MB), 10 µs RAM (10 GB), 10 ms disc (1 TB), 10 s tape archive (100 TB).]
7
Year 2000: the 4B Machine
The Year 2000 commodity PC:
- Billion instructions/sec (1 Bips processor)
- .1 billion bytes RAM
- Billion bits/s net (LAN/WAN)
- 10 B bytes disk
- Billion-pixel display (3000 x 3000 x 24)
- 1,000 $
8
4 B PCs: The Bricks of Cyberspace
- Cost 1,000 $
- Come with:
  - OS (NT, POSIX, ...)
  - DBMS
  - High-speed net
  - System management
  - GUI / OOUI
  - Tools
- Compatible with everyone else: CyberBricks
9
Super Server: 4T Machine
- Array of 1,000 4B machines:
  - 1 Bips processors
  - 1 B B DRAM
  - 10 B B disks
  - 1 Bbps comm lines
  - 1 TB tape robot
- A few megabucks
- Challenge: manageability, programmability, security, availability, scaleability, affordability
- As easy as a single system
- Future servers are CLUSTERS of processors and discs
- Distributed database techniques make clusters work

[Figure: CyberBrick (a 4B machine): CPU, 5 GB RAM, 50 GB disc]
10
Functionally Specialized Cards (storage, network, display)
- M MB DRAM + P mips processor + ASIC
- Today: P = 50 mips, M = 2 MB
- In a few years: P = 200 mips, M = 64 MB
11
It's Already True of Printers: Peripheral = CyberBrick
- You buy a printer; you get:
  - several network interfaces
  - a PostScript engine: CPU, memory, software, a spooler (soon)
  - and... a print engine.
12
System On A Chip
- Integrate processing with memory on one chip
  - chip is 75% memory now
  - 1 MB cache >> 1960s supercomputers
  - 256 Mb memory chip is 32 MB!
  - IRAM, CRAM, PIM, ... projects abound
- Integrate networking with processing on one chip
  - system bus is a kind of network
  - ATM, FiberChannel, Ethernet, ... logic on chip
  - direct IO (no intermediate bus)
- Functionally specialized cards shrink to a chip.
13
Tera Byte Backplane
- TODAY: disk controller is a 10 mips RISC engine with 2 MB DRAM; the NIC is similar power
- SOON: they will become 100 mips systems with 100 MB DRAM; they are nodes in a federation (can run Oracle on NT in the disk controller)
- Advantages:
  - uniform programming model
  - great tools
  - security
  - economics (CyberBricks)
  - move computation to data (minimize traffic)
- All device controllers will be Cray-1's

[Figure: central processor & memory on a terabyte backplane]
14
With Tera Byte Interconnect and Super Computer Adapters
- Processing is incidental to networking, storage, UI
- The disk controller/NIC is faster than the device, close to the device, and can borrow the device's package & power
- So use the idle capacity for computation: run the app in the device.
15
Implications
- Conventional: offload device handling to the NIC/HBA; higher-level protocols (I2O, NASD, VIA, ...); SMP and cluster parallelism is important.
- Radical: move the app to the NIC/device controller; higher-higher-level protocols (CORBA / DCOM); cluster parallelism is VERY important.

[Figure: terabyte backplane with central processor & memory]
16
How Do They Talk to Each Other?
- Each node has an OS
- Each node has local resources: a federation
- Each node does not completely trust the others
- Nodes use RPC to talk to each other: CORBA? DCOM? IIOP? RMI? One or all of the above.
- Huge leverage in high-level interfaces
- Same old distributed-system story.

[Figure: applications on each node talk over the wire(s) via VIAL/VIPL, streams, datagrams, RPC]
18
Objects!
- It's a zoo: ORBs, COM, CORBA, ...
- Object-relational databases
- Objects and 3-tier computing
19
History and Alphabet Soup

[Figure: timeline 1985-1995. UNIX International: Solaris. Open Software Foundation (OSF): OSF DCE (DCE RPC, GUIDs, IDL, DNS, Kerberos), leading to NT and COM. X/Open: ODBC, XA / TX; with OSF becomes the Open Group. Object Management Group (OMG): CORBA. Microsoft DCOM is based on OSF-DCE technology; DCOM and ActiveX extend it.]
20
The Promise
- Objects are Software CyberBricks
  - productivity breakthrough (plug-ins)
  - manageability breakthrough (modules)
- Microsoft promises Cairo: distributed objects; secure, transparent, fast invocation
- IBM/Sun/Oracle/Netscape promise CORBA + OpenDoc + Java Beans + ...
- All will deliver; customers can pick the best one
- Both camps share key goals:
  - Encapsulation: hide implementation
  - Polymorphism: generic ops, key to GUI and reuse
  - Uniform naming
  - Discovery: finding a service
  - Fault handling: transactions
  - Versioning: allow upgrades
  - Transparency: local/remote
  - Security: who has authority
  - Shrink-wrap: minimal inheritance
  - Automation: easy
21
The OLE-COM Experience
- Macintosh had Publish & Subscribe
- PowerPoint needed graphs: plugged MS Graph in as a component
- Office adopted OLE: one graph program for all of Office
- The Internet arrived: URLs are object references, so Office was Web-enabled right away!
- Office 97 is smaller than Office 95 because of shared components
- It works!!
22
Linking And Embedding
Objects are data modules; transactions are execution modules.
- Link: pointer to an object somewhere else (think URL on the Internet)
- Embed: the bytes are here
- Objects may be active; they can call back to subscribers
23
Objects Meet Databases
The basis for universal data servers, access, & integration.
- Object-oriented (COM-oriented) interface to data
- Breaks the DBMS into components
- Anything can be a data source
- Optimization/navigation "on top of" other data sources
- Makes an RDBMS an O-R DBMS, assuming the optimizer understands objects

[Figure: DBMS engine federating database, spreadsheet, photos, mail, map, and document sources]
24
The BIG Picture: Components and Transactions
- Software modules are objects
- The Object Request Broker (a.k.a. Transaction Processing Monitor) connects objects (clients to servers)
- Standard interfaces allow software plug-ins
- A transaction ties execution of a "job" into an atomic unit: all-or-nothing, durable, isolated
- ActiveX components are a 250 M$/year business.
25
Object Request Broker (ORB): Orchestrates RPC
- Registers servers
- Manages pools of servers
- Connects clients to servers
- Does naming and request-level authorization
- Provides transaction coordination
- Direct and queued invocation
- Old names: Transaction Processing Monitor, Web server, NetWare
26
The OO Points So Far
- Objects are software CyberBricks
- Object interconnect standards are emerging
- CyberBricks become federated systems
- Next points: put processing close to data; do parallel processing.
27
Three-Tier Computing
- Clients do presentation, gather input
- Clients do some workflow (Xscript)
- Clients send high-level requests to the ORB
- The ORB dispatches workflows and business objects: proxies for the client that orchestrate flows & queues
- Server-side workflow scripts call on distributed business objects to execute the task

[Figure: presentation -> workflow -> business objects -> database]
28
The Three Tiers

[Figure: (1) Web client: HTML, VB or Java script engine, VB or Java virtual machine, VBScript/JavaScript, VB/Java plug-ins. (2) Middleware ORB (TP monitor, Web server, ...) with an object-server pool, reached over the Internet via HTTP+DCOM, speaking DCOM (OLE DB, ODBC, ...) downstream. (3) Object & data server, with LU6.2 gateways to IBM legacy systems.]
29
Transaction Processing Evolution to Three Tier
Intelligence migrated to clients:
- Mainframe batch processing (centralized): cards
- Dumb terminals & remote job entry: green screen (3270)
- Intelligent terminals, database backends: server + TP monitor
- Workflow systems, object request brokers, application generators: ORB + active clients
30
Web Evolution to Three Tier
Intelligence migrated to clients (like TP):
- Character-mode clients, smart servers: archie, gopher, green screen
- GUI browsers - Web file servers: Mosaic, WAIS, Web server
- GUI plug-ins - Web dispatchers - CGI
- Smart clients - Web dispatcher (ORB), pools of app servers (ISAPI, Viper), workflow scripts at client & server: NS & IE, active clients
31
PC Evolution to Three Tier
Intelligence migrated to the server:
- Stand-alone PC (centralized): disk I/O
- PC + file & print server: message per I/O (IO request / reply)
- PC + database server: message per SQL statement
- PC + app server: message per transaction
- ActiveX client, ORB, ActiveX server, Xscript
32
Why Did Everyone Go To Three-Tier?
- Manageability: business rules must be with the data; middleware operations tools
- Performance (scaleability): server resources are precious; the ORB dispatches requests to server pools
- Technology & physics: put UI processing near the user; put shared-data processing near the shared data; minimizes data moves; encapsulation / modularity

[Figure: presentation -> workflow -> business objects -> database]
33
Why Put Business Objects at the Server?
- DAD's Raw Data: the customer comes to the store, takes what he wants, fills out an invoice, and leaves money for the goods. Easy to build; no clerks.
- MOM's Business Objects: the customer comes to the store with a list and gives it to a clerk; the clerk gets the goods and makes the invoice; the customer pays the clerk and gets the goods. Easy to manage; clerks control access; encapsulation.
34
The OO Points So Far
- Objects are software CyberBricks
- Object interconnect standards are emerging
- CyberBricks become federated systems
- Put processing close to data
- Next point: do parallel processing.
35
Parallelism: the OTHER Half of Super-Servers
- Clusters of machines allow two kinds of parallelism:
  - Many little jobs: online transaction processing (TPC-A, B, C, ...)
  - A few big jobs: data search & analysis (TPC-D, DSS, OLAP)
- Both give automatic parallelism
36
Why Parallel Access To Data?
- At 10 MB/s, it takes 1.2 days to scan 1 terabyte.
- With 1,000-way parallelism, the same terabyte is a 100-second scan.
- Parallelism: divide a big problem into many smaller ones to be solved in parallel.
- The point is BANDWIDTH.
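The scan arithmetic above checks out; a quick sketch (assuming 1 TB = 10^12 bytes):

```python
terabyte = 10**12
rate = 10 * 10**6                 # 10 MB/s per disk

serial_days = terabyte / rate / 86400          # one disk, sequential
parallel_seconds = terabyte / (1000 * rate)    # 1,000 disks in parallel

print(round(serial_days, 2))      # ~1.16 days: the slide's "1.2 days"
print(parallel_seconds)           # 100.0 seconds
```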
37
Kinds of Parallel Execution
- Pipeline: one sequential program feeds its output stream into the next.
- Partition: outputs split N ways, inputs merge M ways; many copies of any sequential program run on the partitions.
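The two kinds can be sketched as plain dataflow over Python iterators (an illustration, not from the deck): operators chained on one stream give pipeline parallelism; splitting a stream N ways and merging it back gives partition parallelism.

```python
def scan(rows):
    """Any sequential program producing a stream."""
    for r in rows:
        yield r

def select(rows, pred):
    """Another sequential program, pipelined after the first."""
    for r in rows:
        if pred(r):
            yield r

def partition(rows, n):
    """Split one input stream n ways (here by hash)."""
    parts = [[] for _ in range(n)]
    for r in rows:
        parts[hash(r) % n].append(r)
    return parts

def merge(streams):
    """Merge the partitioned streams back into one."""
    for s in streams:
        yield from s

pipe = select(scan(range(100)), lambda r: r % 2 == 0)  # pipeline
result = sorted(merge(partition(pipe, 4)))             # partition + merge
print(len(result))  # 50
```

In a real parallel DBMS each partition would run on its own node; here the lists just stand in for the N parallel streams.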
38
Why Are Relational Operators Successful for Parallelism?
- The relational data model: uniform operators on uniform data streams, closed under composition.
- Each operator consumes 1 or 2 input streams; each stream is a uniform collection of data; sequential data in and out: pure dataflow.
- Partitioning some operators (e.g. aggregates, non-equi-join, sort, ...) requires innovation.
- The result: AUTOMATIC PARALLELISM.
39
Database Systems "Hide" Parallelism
- Automate system management via tools: data placement, data organization (indexing), periodic tasks (dump / recover / reorganize)
- Automatic fault tolerance: duplex & failover, transactions
- Automatic parallelism: among transactions (locking), within a transaction (parallel execution)
40
SQL: a Non-Procedural Programming Language
- SQL is a functional programming language: it describes the answer set.
- The optimizer picks the best execution plan: the dataflow web (pipeline), the degree of parallelism (partitioning), and other execution parameters (process placement, memory, ...).

[Figure: GUI and schema feed the optimizer; the plan goes through execution planning to rivers and executors, watched by a monitor]
41
Automatic Data Partitioning
Split a SQL table across a subset of nodes & disks. Partition within the set by:
- Range (A...E | F...J | K...N | O...S | T...Z): good for equijoins, range queries, group-by
- Hash: good for equijoins
- Round robin: good to spread load
Shared-disk and shared-memory systems are less sensitive to partitioning; shared-nothing benefits from "good" partitioning.
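The three schemes can be sketched as placement functions (an illustration; the five-way A...Z split mirrors the slide's figure):

```python
import bisect
import itertools

def range_part(key, bounds):
    """Range: node = first range the key sorts into (bounds sorted)."""
    return bisect.bisect_right(bounds, key)

def hash_part(key, n):
    """Hash: node = hash of the key, modulo node count."""
    return hash(key) % n

def round_robin(counter, n):
    """Round robin: rows dealt to nodes in turn."""
    return next(counter) % n

bounds = ["E", "J", "N", "S"]       # 5 nodes: A...E, F...J, K...N, O...S, T...Z
print(range_part("G", bounds))      # 1  -> the F...J node
print(range_part("T", bounds))      # 4  -> the T...Z node
rr = itertools.count()
print([round_robin(rr, 5) for _ in range(7)])   # [0, 1, 2, 3, 4, 0, 1]
```

Range keeps sorted neighborhoods together (hence range queries and group-by); hash spreads equal keys to the same node (hence equijoins); round robin ignores the key entirely and just balances load.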
42
N x M Way Parallelism

[Figure: five range partitions (A...E through T...Z) each feed a sort and join; the N sorted/joined streams fan into M merges. N inputs, M outputs, no bottlenecks.]
43
Parallel Objects?
How does all this DB parallelism connect to hardware/software CyberBricks?
- To scale to large client sets: need lots of independent parallel execution. Comes for free from the ORB.
- To scale to large data sets: need intra-program parallelism (like parallel DBs). Requires some invention.
44
Outline
- Hardware CyberBricks: all nodes are very intelligent
- Software CyberBricks: standard way to interconnect intelligent nodes
- What next? Processing migrates to where the power is
  - Disk, network, display controllers have a full-blown OS
  - Send RPCs (SQL, Java, HTTP, DCOM, CORBA) to them
  - The computer is a federated distributed system.
  - Parallel execution is important
45
MORE SLIDES
but there is only so much time. Too bad.
46
The Disk Farm On a Card
The 100 GB disc card: an array of discs. Can be used as 100 discs, 1 striped disc, 10 fault-tolerant discs, ... etc. LOTS of accesses/second and bandwidth.

[Figure: 14" card]

Life is cheap; it's the accessories that cost ya. Processors are cheap; it's the peripherals that cost ya (a 10 k$ disc card).
47
Parallelism: Performance is the Goal
- The goal is to get "good" performance: trade time for money.
- Law 1: a parallel system should be faster than the serial system.
- Law 2: a parallel system should give near-linear scaleup, or near-linear speedup, or both.
- Parallel DBMSs obey these laws.
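"Law 2" names the two standard metrics; a sketch of their usual definitions (the example timings are made up):

```python
def speedup(t_serial, t_parallel):
    """Same problem size, N processors: ideal speedup is N."""
    return t_serial / t_parallel

def scaleup(t_small_1proc, t_big_nproc):
    """N-times-bigger problem on N processors: ideal scaleup is 1.0."""
    return t_small_1proc / t_big_nproc

# e.g. a 1000-second job finishing in 110 s on 10 processors:
print(speedup(1000, 110))    # ~9.1x on 10 processors: near-linear speedup
# e.g. a 10x-bigger job on 10 processors taking 105 s vs. 100 s:
print(scaleup(100, 105))     # ~0.95: near-linear scaleup
```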
48
Success Stories
- Online transaction processing: many little jobs; SQL systems support 50 k tpm-C (44 CPUs, 600 disks, 2 nodes)
- Batch (decision support and utility): few big jobs, parallelism inside; scan data at 100 MB/s; linear scaleup to 1,000 processors

[Figure: transactions/sec vs. hardware; records/sec vs. hardware]
49
The New Law of Computing
- Grosch's Law: 2x $ is 4x performance. 1 MIPS for 1 $; 1,000 MIPS for 32 $ (.03 $/MIPS).
- Parallel Law: 2x $ is 2x performance. 1 MIPS for 1 $; 1,000 MIPS for 1,000 $. Needs linear speedup and linear scaleup; not always possible.
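Grosch's "2x $ is 4x performance" means performance grows as the square of cost, so cost grows as the square root of performance; that is where the slide's 32 $ comes from:

```python
import math

# Under Grosch's law, relative to 1 $ for 1 MIPS:
cost_of_1000_mips = math.sqrt(1000)

print(round(cost_of_1000_mips))            # 32 $, as on the slide
print(round(cost_of_1000_mips / 1000, 3))  # 0.032 -> ~.03 $/MIPS
```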
50
Clusters Being Built
- Teradata: 1,000 nodes (30 k$/slice)
- Tandem, VMScluster: 150 nodes (100 k$/slice)
- Intel: 9,000 nodes @ 55 M$ (6 k$/slice)
- Teradata, Tandem, DEC: moving to NT + low slice price
- IBM: 512-node ASCI @ 100 M$ (200 k$/slice)
- PC clusters (bare-handed) at dozens of nodes: web servers (MSN, PointCast, ...), DB servers
- KEY TECHNOLOGY HERE IS THE APPS: apps distribute data; apps distribute execution.
52
BOTH SMP and Cluster?
- Grow up with SMP: 4xP6 is now standard
- Grow out with cluster: clusters have inexpensive parts

[Figure: cluster of PCs, personal system, departmental server, SMP super server]
53
Clusters Have Advantages
- Clients and servers made from the same stuff
- Inexpensive: built with commodity components
- Fault tolerance: spare modules mask failures
- Modular growth: grow by adding small modules
54
Meta-Message: Technology Ratios Are Important
- If everything gets faster & cheaper at the same rate, THEN nothing really changes.
- Things getting MUCH BETTER:
  - communication speed & cost: 1,000x
  - processor speed & cost: 100x
  - storage size & cost: 100x
- Things staying about the same:
  - speed of light (more or less constant)
  - people (10x more expensive)
  - storage speed (only 10x better)
55
Storage Ratios Changed
- 10x better access time
- 10x more bandwidth
- 4,000x lower media price
- DRAM/disk price ratio: 100:1, to 10:1, to 50:1

[Figures, 1980-2000: disk performance vs. time (seeks per second, bandwidth in MB/s, capacity in GB); disk accesses/second vs. time; storage price vs. time (megabytes per kilo-dollar, 0.1 to 10,000)]
59
Performance = Storage Accesses, not Instructions Executed
- In the "old days" we counted instructions and IOs; now we count memory references. Processors wait most of the time.
- Where the time goes: clock ticks used by AlphaSort: sort, disc wait, OS, memory wait (D-cache misses, I-cache misses, B-cache data misses).
- 70 MIPS; "real" apps have worse I-cache misses, so they run at 60 MIPS if well tuned, 20 MIPS if not.
60
Storage Latency: How Far Away is the Data?

[Figure, latency in clock ticks with distance analogies: registers (1) = my head (1 min); on-chip cache (2) = this room; on-board cache (10) = this campus (10 min); memory (100) = Sacramento (1.5 hr); disk (10^6) = Pluto (2 years); tape/optical robot (10^9) = Andromeda (2,000 years)]
61
Tape Farms for Tertiary Storage, Not Mainframe Silos
- Many independent tape robots (like a disc farm); scan in 27 hours.
- 10 k$ robot: 14 tapes, 500 GB, 5 MB/s, 20 $/GB, 30 Maps
- 100 robots (1 M$): 50 TB, 50 $/GB, 3 K Maps, 27-hr scan
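The 27-hour figure follows from the farm's aggregate bandwidth; a quick check (assuming 1 TB = 10^12 bytes):

```python
robots = 100
rate = 5 * 10**6              # 5 MB/s per robot
farm_bytes = 50 * 10**12      # 50 TB across the farm

scan_hours = farm_bytes / (robots * rate) / 3600
print(round(scan_hours, 1))   # ~27.8 hours: the slide's "scan in 27 hours"
```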
62
The Metrics: Disk and Tape Farms Win

[Figure, log scale from 0.01 to 1,000,000 comparing GB/k$, Kaps, Maps, and scans/day for a 1000x disc farm, an STC tape robot (6,000 tapes, 8 readers), and a 100x DLT tape farm]

Data Motel: data checks in, but it never checks out.
63
Tape & Optical: Beware of the Media Myth
- Optical is cheap: 200 $/platter, 2 GB/platter => 100 $/GB (2x cheaper than disc)
- Tape is cheap: 30 $/tape, 20 GB/tape => 1.5 $/GB (100x cheaper than disc).
64
Tape & Optical Reality: Media is 10% of System Cost
- Tape needs a robot (10 k$ ... 3 M$); 10 ... 1,000 tapes (at 20 GB each) => 20 $/GB ... 200 $/GB (1x ... 10x cheaper than disc)
- Optical needs a robot (100 k$); 100 platters = 200 GB (TODAY) => 400 $/GB (more expensive than magnetic disc)
- Robots have poor access times
- Not good for the Library of Congress (25 TB)
- Data motel: data checks in, but it never checks out!
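The gap between the media myth and the system reality is just the robot's cost amortized over the media it holds; a sketch using the slide's tape numbers (the pairing of robot prices to tape counts is an assumption):

```python
def system_dollars_per_gb(robot_cost, n_media, gb_per_media, media_cost):
    """Total system cost per GB, robot included."""
    return (robot_cost + n_media * media_cost) / (n_media * gb_per_media)

media_only = 30 / 20    # 1.5 $/GB: the "media myth" (tape alone)

small = system_dollars_per_gb(10_000, 10, 20, 30)      # 10 k$ robot, 10 tapes
big = system_dollars_per_gb(3_000_000, 1000, 20, 30)   # 3 M$ robot, 1,000 tapes

print(media_only, round(small, 1), round(big, 1))
# The robot, not the media, dominates the cost per gigabyte.
```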
65
The Access Time Myth
- The myth: seek or pick time dominates.
- The reality: (1) queuing dominates, (2) transfer dominates for BLOBs, (3) disk seeks are often short.
- Implication: many cheap servers beat one fast expensive server: shorter queues, parallel transfer, lower cost/access and cost/byte.
- This is now obvious for disk arrays; it will be obvious for tape arrays.

[Figure: request time = wait + seek + rotate + transfer]
66
Billions Of Clients
- Every device will be "intelligent": doors, rooms, cars, ...
- Computing will be ubiquitous
67
Billions Of Clients Need Millions Of Servers
- All clients (mobile and fixed) networked to servers; may be nomadic or on-demand
- Fast clients want faster servers
- Servers provide shared data, control, coordination, communication

[Figure: mobile and fixed clients connect to servers and a super server]
68
1987: 256 tps Benchmark
- 14 M$ computer (Tandem)
- A dozen people: OS expert, network expert, DB expert, performance expert, hardware experts, admin expert, auditor, manager
- False floor, 2 rooms of machines
- Simulated 25,600 clients
- A 32-node processor array
- A 40 GB disk array (80 drives)
69
1988: DB2 + CICS Mainframe, 65 tps
- IBM 4391, a 2 M$ computer (refrigerator-sized CPU)
- Simulated network of 800 clients
- Staff of 6 to do the benchmark
- 2 x 3725 network controllers
- 16 GB disk farm: 4 x 8 x .5 GB
70
1997: 10 Years Later, 1 Person and 1 Box = 1,250 tps
- 1 breadbox ~ 5x the 1987 machine room; 23 GB is hand-held
- One person does all the work (hardware, OS, net, DB, and app expert)
- Cost/tps is 1,000x less: 25 micro-dollars per transaction
- 4 x 200 MHz CPUs, 1/2 GB DRAM, 12 x 4 GB disks (3 x 7 x 4 GB disk arrays)
71
What Happened?
- Moore's law: things get 4x better every 3 years (applies to computers, storage, and networks)
- New economics:
  class          | price/mips | software k$/year
  mainframe      | 10,000     | 100
  minicomputer   | 100        | 10
  microcomputer  | 10         | 1
- GUI: the human-computer tradeoff; optimize for people, not computers

[Figure: price vs. time for mainframe, mini, micro]
72
What Happens Next
- Last 10 years: 1,000x improvement. Next 10 years: ????
- Today: text and image servers are free (25 $/hit => advertising pays for them)
- Future: video, audio, ... servers are free. "You ain't seen nothing yet!"

[Figure: performance, 1985-2005]
73
Smart Cards
- Then (1979): Bull CP8 two-chip card; first public demonstration 1979
- Now (1997): EMV card with dynamic authentication (EMV = Europay, MasterCard, Visa standard); door key, vending machines, photocopiers
- Courtesy of Dennis Roberson, NCR.
74
Smart Card Memory Capacity & Applications
- Cards will be able to store data (e.g. medical), books, movies, ... money
- 16 KB today, but growing super-exponentially

[Figure: memory size in bits vs. year, 1990-2004, rising from 3 K through 10 K and 1 M toward 300 M; "you are here" at 16 KB. Source: PIN/Card-Tech; courtesy of Dennis Roberson, NCR]