
Page 1

Tanenbaum

• Andrew S. Tanenbaum has an S.B. degree from M.I.T. and a Ph.D. from the University of California at Berkeley.

• He is currently a Professor of Computer Science at the Vrije Universiteit in Amsterdam, The Netherlands.

• His current research focuses primarily on computer security, especially in operating systems, networks, and large wide-area distributed systems.

• Amsterdam Compiler Kit, Minix

Page 2

1 Introduction

Page 3

Background: Two Advances in Technology

• The first was the development of powerful microprocessors

• The second development was the invention of high-speed computer networks

• The result of these technologies is that it is now not only feasible, but easy, to put together computing systems composed of large numbers of computers connected by a high-speed network

Page 4

1.1 Definition of a Distributed System

• How to establish this collaboration lies at the heart of developing distributed systems.

• No assumptions are made concerning
  – the type of computers
  – the way that computers are interconnected

A distributed system is: a collection of independent computers that appears to its users as a single coherent system.

Page 5

Distributed system

• A distributed system is a software system in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal.

• An important goal and challenge of distributed systems is location transparency

• Three significant characteristics of distributed systems are
  – concurrency of components
  – lack of a global clock
  – independent failure of components

Page 6

Independent failure

• An independent failure is a malfunction of a system component that does not affect any other component in the system. It is a key concept in assembly design for manufacturing. Although it is not feasible to build a system whose components are completely independent of one another, building in a certain level of independence can be helpful.

• This potentially ensures that a single failure does not stop the entire process, that a failed component can be fixed without replacing the entire system, and that certain safety protocols remain in place.

Page 7

1.1.1 Characteristics of a Distributed System

• Hide
  – differences between the various computers
  – the way they communicate

• Consistent and uniform way
  – users and applications can interact with a distributed system in a consistent and uniform way, regardless of where and when interaction takes place

• Easy to expand or scale

• Availability (the proportion of time a system is functioning)

• Connecting heterogeneous computers
  – middleware

Page 8

1.1.2 Middleware

Page 9

Page 10

1.2 Goals

With current technology it is also possible to put four floppy disk drives on a personal computer. It is just that doing so would be pointless.

A distributed system should make resources easily accessible; it should reasonably hide the fact that resources are distributed across a network; it should be open; and it should be scalable.

Page 11

1.2 Goals

• 1.2.1 Making Resources Accessible
• 1.2.2 Distribution Transparency
• 1.2.3 Openness
• 1.2.4 Scalability

Page 12

1.2.1 Making Resources Accessible

• Resources
  – remote resources for a distributed system

• Why? Economics: sharing
• Way: collaborate and exchange information
• Hot point: security
  – e.g. credit-card payments; tracking communication

Sharing resources in an easy, controlled, and efficient way

Page 13

1.2.2 Distribution Transparency

A distributed system that is able to present itself to users and applications as if it were only a single computer system is said to be transparent.

Types of Transparency

Page 14

1.2.2 Distribution Transparency

• Types of Transparency

• Degree of Transparency

Page 15

Types of Transparency

• Access transparency, hiding differences in
  – data representation
  – the way that resources can be accessed

• Location transparency
  – logical names
  – migration transparency
  – relocation transparency

Page 16

Types of Transparency (cont.)

• Replication transparency
  – resources may be replicated to increase availability or to improve performance by placing a copy close to the place where it is accessed

• Concurrency transparency
  – each user does not notice that the other is making use of the same resource

• Failure transparency
  – a user does not notice that a resource fails to work properly, and that the system subsequently recovers from that failure

Page 17

Degree of Transparency

• There are situations in which attempting to completely hide all distribution aspects from users is not a good idea. e.g.
  – a morning paper that automatically adapts to the reader's time zone
  – on the Internet, a single update operation may take seconds to complete, something that cannot be hidden from users
  – many Internet applications repeatedly try to contact a server before finally giving up. Consequently, attempting to mask a transient server failure before trying another one may slow down the system as a whole
  – consider an office worker who wants to print a file from her notebook computer. It is better to send the print job to a busy nearby printer than to an idle one at corporate headquarters in a different country

Page 18

Degree of Transparency (cont.)

• There is a trade-off between a high degree of transparency and the performance of a system.

• Aiming for distribution transparency may be a nice goal when designing and implementing distributed systems, but it should be considered together with other issues such as performance and comprehensibility.

Page 19

1.2.3 Openness

• An open distributed system is a system that offers services according to standard rules that describe the syntax and semantics of those services.
  – networks: protocols
  – distributed systems: interfaces, specified in an IDL (Interface Definition Language)

• Services are generally specified through interfaces
• Syntax: IDL
• Semantics: natural language
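The split between syntax (the interface) and implementation can be sketched in Python, using an abstract base class as a stand-in for an IDL definition. The `FileService` name and its methods are hypothetical, purely for illustration; two independent parties implementing this interface could interoperate without sharing any code:

```python
from abc import ABC, abstractmethod

# The "IDL" part: fixes the syntax of the service.
# The semantics live in natural language (the docstring).
class FileService(ABC):
    @abstractmethod
    def read(self, path: str) -> bytes:
        """Return the full contents of the file stored at `path`."""

# One of many possible implementations; clients depend only on FileService.
class InMemoryFileService(FileService):
    def __init__(self):
        self._files = {}

    def write(self, path: str, data: bytes) -> None:
        self._files[path] = data

    def read(self, path: str) -> bytes:
        return self._files[path]
```

Because clients program against `FileService` alone, a disk-backed or remote implementation could be substituted without changing any client code.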

Page 20

Interface definition

It also allows two independent parties to build completely different implementations of those interfaces, leading to two separate distributed systems that operate in exactly the same way

Page 21

Interface definition (cont.)

• Complete
  – everything that is necessary to make an implementation has indeed been specified
  – however, many interface definitions are not at all complete

• Neutral
  – do not prescribe what an implementation should look like

• Interoperability
  – two implementations of systems or components from different manufacturers can co-exist and work together by merely relying on each other's services as specified by a common standard

Page 22

Interface definition (cont.)

• Portability
  – an application developed for a distributed system A can be executed, without modification, on a different distributed system B that implements the same interfaces as A

• Easy to configure
  – configure the system out of different components
  – easy to add new components or replace existing ones
  – extensible

Page 23

Separating Policy from Mechanism

• Organizing the system for flexibility
  – to achieve flexibility, it is crucial that the system is organized as a collection of relatively small and easily replaceable or adaptable components

• Changing the organization
  – the need for changing the organization of a distributed system is often caused by a component that does not provide the optimal policy for a specific user or application

• Separating
  – parameterization
  – plugging in a component
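The separation can be sketched as follows: the cache below is the *mechanism* (how entries are stored and looked up), while which entry to evict is the *policy*, plugged in as a replaceable component. All names here are hypothetical, for illustration only:

```python
# Mechanism: fixed storage and lookup logic.
class Cache:
    def __init__(self, capacity, evict_policy):
        self.capacity = capacity
        self.evict_policy = evict_policy   # policy plugged in by the user
        self.data = {}

    def put(self, key, value):
        # When full, ask the plugged-in policy which key to sacrifice.
        if key not in self.data and len(self.data) >= self.capacity:
            victim = self.evict_policy(list(self.data))
            del self.data[victim]
        self.data[key] = value

# Two interchangeable policies (dicts preserve insertion order):
evict_oldest = lambda keys: keys[0]    # FIFO-like policy
evict_newest = lambda keys: keys[-1]   # LIFO-like policy
```

Swapping `evict_oldest` for `evict_newest` changes the policy without touching the caching mechanism, which is exactly the flexibility the slide describes.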

Page 24

1.2.4 Scalability

• Size
  – more users and resources

• Geographically scalable
  – users and resources may lie far apart

• Administratively scalable
  – easy to manage

Page 25

Scalability Problems

• A single server can become a bottleneck. Unfortunately, using only a single server is sometimes unavoidable.

Page 26

Examples

• Geographical scalability
  – speed
  – reliability
  – many centralized components will be limited due to the performance and reliability problems resulting from wide-area communication

• Scalability across domains
  – resource usage (and payment), management, and security

Page 27

Scaling Techniques (1)

• Hiding communication latencies
  – asynchronous communication is used (handlers)
  – shipping code
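Hiding latency with asynchronous communication can be sketched with a thread plus a handler that is invoked when the reply arrives. This is a deliberately simplified model (real systems would use futures, callbacks in an event loop, or async I/O); the function names are made up for illustration:

```python
import threading
import time

def async_request(remote_call, handler):
    """Issue a remote call without blocking; invoke handler on the reply."""
    def worker():
        handler(remote_call())      # runs when the (slow) reply arrives
    t = threading.Thread(target=worker)
    t.start()
    return t                        # caller may join later if it must wait

def slow_server():
    time.sleep(0.1)                 # simulated wide-area network latency
    return "reply"
```

The caller registers a handler and immediately continues with other work instead of stalling for the round-trip, which is the essence of this scaling technique.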

Page 28

Scaling Techniques (2)

• Distribution
  – distribution involves taking a component, splitting it into smaller parts, and subsequently spreading those parts across the system (branch offices, delegation)
  – example:
    • the Domain Name System (DNS), which is organized into a tree of domains, divided into non-overlapping zones
    • the names in each zone are handled by a single name server
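Zone-by-zone resolution can be sketched with a toy zone table, writing names root-first as in nl.vu.cs.flits. The table and the address for flits are made-up placeholders; the point is that each lookup step is answered by a different zone's server, so no single server handles the whole name space:

```python
# Hypothetical zone table: each zone is handled by its own name server.
ZONES = {
    "nl":       {"vu": "ns.vu"},               # nl zone delegates vu
    "nl.vu":    {"cs": "ns.cs"},               # vu zone delegates cs
    "nl.vu.cs": {"flits": "130.37.16.112"},    # placeholder address
}

def resolve(name):
    """Walk the name label by label, consulting one zone per step."""
    labels = name.split(".")        # e.g. ["nl", "vu", "cs", "flits"]
    zone = labels[0]
    answer = None
    for label in labels[1:]:
        answer = ZONES[zone][label] # ask the server responsible for `zone`
        zone = zone + "." + label   # descend into the next zone
    return answer
```

Because the zones are non-overlapping, adding a new subtree only loads the servers of that subtree, which is why this distribution scales.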

Page 29

Example: nl.vu.cs.flits

Page 30

Scaling Techniques (3)

• Replication
  – increases availability
  – better performance

• Caching
  – caching is a decision made by the client of a resource, not by the owner of the resource
  – happens on demand, whereas replication is often planned in advance

• Consistency problems
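On-demand, client-driven caching can be sketched as below (names hypothetical). Note the consistency problem the slide mentions: once an item is cached, this naive client never revisits the owner, so the local copy can silently go stale:

```python
class Client:
    """Caches results of an expensive remote lookup, on demand."""
    def __init__(self, fetch):
        self.fetch = fetch          # expensive call to the resource owner
        self.cache = {}             # client's own decision, owner not involved
        self.misses = 0

    def get(self, key):
        if key not in self.cache:   # miss: contact the remote owner
            self.misses += 1
            self.cache[key] = self.fetch(key)
        return self.cache[key]      # hit: served locally, no network traffic
```

Contrast with replication: here nothing is placed in advance; the cache fills only with what this particular client actually asked for.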

Page 31

1.2.5 Pitfalls

• 1. The network is reliable.
• 2. The network is secure.
• 3. The network is homogeneous.
• 4. The topology does not change.
• 5. Latency is zero.
• 6. Bandwidth is infinite.
• 7. Transport cost is zero.
• 8. There is one administrator.

Page 32

1.3 TYPES OF DISTRIBUTED SYSTEMS

• 1.3.1 Distributed Computing Systems
• 1.3.2 Distributed Information Systems
• 1.3.3 Distributed Pervasive Systems

Page 33

1.3.1 Distributed Computing Systems

• High-performance computing

• Cluster computing (homogeneity)
  – the underlying hardware consists of a collection of similar workstations or PCs, closely connected by means of a high-speed local-area network. In addition, each node runs the same operating system

• Grid computing (heterogeneity)
  – no assumptions are made concerning hardware, operating systems, networks, administrative domains, security policies, etc.

Page 34

Linux-based Beowulf clusters

Page 35

Functions of nodes

• Compute nodes are controlled by a master node. The master typically
  – handles the allocation of nodes to a particular parallel program
  – provides an interface for the users of the system

• The master actually runs the middleware needed for the execution of programs and management of the cluster
  – an important part of this middleware is formed by the libraries for executing parallel programs

• The compute nodes often need nothing else but a standard operating system

Page 36

A key issue in a grid system

Resources from different organizations are brought together to allow the collaboration of a group of people or institutions.

Page 37

A layered architecture for grid computing systems

Page 38

A layered architecture for grid computing systems

• Fabric layer
  – provides interfaces to local resources at a specific site: querying the state and capabilities of a resource, resource management

• Connectivity layer
  – communication protocols (transferring data between resources)
  – security protocols

Page 39

A layered architecture for grid computing systems

• Resource layer
  – managing a single resource; access control

• Collective layer
  – deals with handling access to multiple resources and typically consists of services for resource discovery, allocation and scheduling of tasks onto multiple resources, data replication, and so on

• Application layer
  – consists of the applications

Page 40

Cloud computing, or the cloud

• It is a colloquial expression used to describe a variety of different types of computing concepts that involve a large number of computers connected through a real-time communication network

• In science, cloud computing is a synonym for distributed computing over a network and means the ability to run a program on many connected computers at the same time

Page 41

Cloud computing, or the cloud

• The term can also refer to network-based services that
  – appear to be provided by real server hardware
  – are in fact served up by virtual hardware, simulated by software running on one or more real machines

• Such virtual servers do not physically exist and can therefore be moved around and scaled up (or down) on the fly without affecting the end user, arguably rather like a cloud

Page 42

Hosted services

• In marketing, cloud computing is mostly used to sell hosted services, in the sense of application service provisioning, that run client-server software at a remote location
  – SaaS (Software as a Service)
  – PaaS (Platform as a Service)
  – IaaS (Infrastructure as a Service)
  – HaaS (Hardware as a Service)
  – EaaS (Everything as a Service)

• End users access cloud-based applications through a web browser, thin client, or mobile app, while the business software and the user's data are stored on servers at a remote location.

Page 43

1.3.2 Distributed Information Systems

• Achieving interoperability for a wealth of networked applications turned out to be a painful experience
  – interoperability: the ability of two or more systems or components to exchange information and to use the information that has been exchanged

• Many of the existing middleware solutions are the result of working with an infrastructure in which it was easier to integrate applications into an enterprise-wide information system

Page 44

1.3.2 Distributed Information Systems

• Integration at the lowest level allows clients to wrap a number of requests, possibly for different servers, into a single larger request and have it executed as a distributed transaction

• As applications became more sophisticated and were gradually separated into independent components (notably distinguishing database components from processing components), it became clear that integration should also take place at the level of the applications themselves

Page 45

1.3.2 Distributed Information Systems

• Forms of DIS
  – Transaction Processing Systems, supported by
    • the underlying distributed system
    • the language runtime system
  – Enterprise Application Integration (EAI)
    • an integration framework composed of a collection of technologies and services which form a middleware to enable integration of systems and applications across the enterprise

Page 46

Transaction

Page 47

Transaction

• Atomic: to the outside world, the transaction happens indivisibly

• Consistent: the transaction does not violate system invariants
  – e.g. the total amount of money in the bank

• Isolated: concurrent transactions do not interfere with each other
  – if two or more transactions are running at the same time, then to each of them and to other processes the final result looks as though all transactions ran sequentially in some (system-dependent) order

• Durable: once a transaction commits, the changes are permanent
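Atomicity and consistency can be sketched by staging every update on a private copy and installing the copy only if all steps succeed. This is a toy model, not a real transaction manager (no locking, no logging); all names are illustrative:

```python
def run_transaction(state, operations):
    """Apply all operations atomically: commit all of them, or none."""
    staged = dict(state)            # private working copy
    try:
        for op in operations:
            op(staged)              # each op mutates only the staged copy
    except Exception:
        return state                # abort: original state is untouched
    return staged                   # commit: staged copy becomes the state

def withdraw(acct, amount):
    def op(state):
        if state[acct] < amount:
            raise ValueError("insufficient funds")  # forces an abort
        state[acct] -= amount
    return op

def deposit(acct, amount):
    def op(state):
        state[acct] += amount
    return op
```

A paired withdraw/deposit either both happen or neither does, so the invariant (total money in the bank) is preserved even when a step fails midway.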

Page 48

Sub-transactions

A nested transaction is constructed from a number of sub-transactions.

Sub-transactions give rise to a subtle, but important, problem.

Nested transactions are important in distributed systems, for they provide a natural way of distributing a transaction across multiple machines.

Page 49

Sub-transactions (cont.)

When any transaction or sub-transaction starts, it is conceptually given a private copy of all data in the entire system for it to manipulate as it wishes.

If it aborts, its private universe just vanishes, as if it had never existed. If it commits, its private universe replaces the parent's universe. Thus if a sub-transaction commits and then later a new sub-transaction is started, the second one sees the results produced by the first one.

Likewise, if an enclosing (higher-level) transaction aborts, all its underlying sub-transactions have to be aborted as well.
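The private-universe model can be sketched directly. This is purely conceptual: real systems never copy all data, they use logs or copy-on-write, and the class name here is made up for illustration:

```python
class Transaction:
    """Conceptual model: each (sub-)transaction works on a private copy."""
    def __init__(self, parent_data):
        self.parent = parent_data
        self.data = dict(parent_data)   # private universe: copy of all data

    def commit(self):
        # The private universe replaces the parent's universe.
        self.parent.clear()
        self.parent.update(self.data)

    def abort(self):
        # The private universe just vanishes, as if it had never existed.
        self.data = dict(self.parent)
```

A sub-transaction of a sub-transaction is just `Transaction(sub.data)`, so commits propagate one level up at a time, matching the nesting described above.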

Page 50

Sub-transactions (cont.)

Sub-transactions give rise to a subtle, but important, problem. Imagine that a transaction starts several sub-transactions in parallel, and one of these commits, making its results visible to the parent transaction.

After further computation, the parent aborts, restoring the entire system to the state it had before the top-level transaction started. Consequently, the results of the sub-transaction that committed must nevertheless be undone.

Page 51

Transaction Processing (TP) monitor

In the early days of enterprise middleware systems, the component that handled distributed (or nested) transactions formed the core for integrating applications at the server or database level. This component was called a transaction processing monitor, or TP monitor for short. Its main task was to allow an application to access multiple servers/databases by offering it a transactional programming model.

Page 52

Transaction Processing (TP) monitor (cont.)

Page 53

Enterprise Application Integration

• Facilities were needed to integrate applications independently of their databases

• Applications could directly exchange information
  – RPC, RMI

• Disadvantages (tight coupling)
  – both applications need to be up and running at the time of communication
  – they need to know exactly how to refer to each other

• Messaging
  – this need for inter-application communication led to many different communication models

Page 54

Communication middleware

Page 55

Message-oriented middleware (MOM)

• Applications simply send messages to logical contact points

• Applications can indicate their interest in a specific type of message

• The communication middleware takes care that those messages are delivered to the interested applications

• Publish/subscribe
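A minimal publish/subscribe sketch: senders and receivers never refer to each other, only to a topic, and the middleware matches messages to declared interests. The `Broker` class and topic names are hypothetical, for illustration:

```python
class Broker:
    """Toy message-oriented middleware using publish/subscribe."""
    def __init__(self):
        self.subscribers = {}           # topic -> list of handler callables

    def subscribe(self, topic, handler):
        # An application declares interest in one type of message.
        self.subscribers.setdefault(topic, []).append(handler)

    def publish(self, topic, message):
        # The middleware delivers to every interested application;
        # the publisher never learns who (if anyone) received it.
        for handler in self.subscribers.get(topic, []):
            handler(message)
```

Unlike RPC/RMI, the publisher does not need the subscriber to be known to it, which is exactly the loose coupling the previous slide contrasts with tight coupling.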

Page 56

1.3.3 Distributed Pervasive Systems

• System stability: nodes are fixed and have a more or less permanent and high-quality connection to a network

• However, matters have become very different with the introduction of mobile and embedded computing devices
  – cell phones

• Distributed pervasive systems are often characterized by being small, battery-powered, mobile, and having only a wireless connection

Page 57

Features of distributed pervasive systems

• Embrace contextual changes
  – a device's environment may change all the time

• Encourage ad hoc composition
  – refers to the fact that many devices in pervasive systems will be used in very different ways by different users
  – device-dependent

• Recognize sharing as the default

Page 58

Non-transparency

• Distribution transparency is not really in place in pervasive systems. In fact, distribution of data, processes, and control is inherent to these systems, for which reason it may be better simply to expose it rather than trying to hide it

Page 59

Home Systems

• Increasingly popular

• Challenges
  – self-configuring and self-managing
    • discovery
    • updates
    • Wi-Fi <-> 3G
  – personal space
    • sharing restrictions

• Architecture
  – master
  – personal devices

Page 60

Architecture

With these continuously increasing capacities, we may see pervasive home systems adopt an architecture in which a single machine acts as a master (and is hidden away somewhere in the basement next to the central heating), while all other fixed devices simply provide a convenient interface for humans.

Personal devices will then be crammed with daily needed information, but will never run out of storage.

Page 61

Recommenders

• Being able to store huge amounts of data shifts the problem to storing relevant data and being able to find it later

• Recommenders
  – programs that consult what other users have stored in order to identify similar taste, and from that subsequently derive which content to place in one's personal space
  – an interesting observation is that the amount of information that recommender programs need to do their work is often small enough to allow them to be run on PDAs

Page 62

Electronic Health Care Systems

• With the increasing cost of medical treatment, new devices are being developed to monitor the well-being of individuals and to automatically contact physicians when needed. In many of these systems, a major goal is to prevent people from being hospitalized

• body-area network (BAN). An important issue is that such a network should at worst only minimally hinder a person

Page 63

Electronic Health Care Systems

Page 64

Electronic Health Care Systems

• In the first scenario, a central hub is part of the BAN and collects data as needed. This data is later offloaded to a larger storage device
  – the advantage of this scheme is that the hub can also manage the BAN

• In the second scenario, the BAN is continuously hooked up to an external network, again through a wireless connection, to which it sends monitored data. Separate techniques will need to be deployed for managing the BAN

Page 65

Electronic Health Care Systems

• Where and how should monitored data be stored?
• How can we prevent loss of crucial data?
• What infrastructure is needed to generate and propagate alerts?
• How can physicians provide online feedback?
• How can extreme robustness of the monitoring system be realized?
• What are the security issues and how can the proper policies be enforced?

Page 66

Sensor Networks

• Hundreds or thousands of small nodes
• Equipped with a sensing device
• Wireless data transmission
• Battery powered
• Sensor networks can be considered as distributed databases

Page 67

Organizing a sensor network database

Page 68

Problems

Neither of these solutions is very attractive. The first one requires that sensors send all their measured data through the network, which may waste network resources and energy.

The second solution may also be wasteful, as it discards the aggregation capabilities of sensors, which would allow much less data to be returned to the operator.
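In-network aggregation, the alternative that avoids both wasteful extremes, can be sketched as each node combining its own reading with its children's partial results, so only one value per node travels toward the operator. The tree layout and readings below are made up for illustration:

```python
def aggregate(node):
    """Each node forwards one combined value, not all raw readings."""
    readings = [node["reading"]]
    for child in node.get("children", []):
        readings.append(aggregate(child))   # child sends one value upward
    return max(readings)                    # example query: maximum reading

# Hypothetical sensor tree rooted at the node nearest the operator.
sensor_tree = {
    "reading": 20,
    "children": [
        {"reading": 23},
        {"reading": 21, "children": [{"reading": 25}]},
    ],
}
```

For a tree of N sensors, the operator receives a single value instead of N raw readings, saving both network resources and the sensors' battery energy.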

Page 69

1.4 SUMMARY

• Advantages
  – integrate different applications running on different computers into a single system
  – scaling

• Disadvantages
  – complex software
  – degradation of performance
  – weaker security

Distributed systems consist of autonomous computers that work together to give the appearance of a single coherent system.

Page 70

1.4 SUMMARY

• Transparency
  – comes at a performance price
  – fully achieving it is not a good idea
  – trade-off

• Making assumptions about the underlying network that are fundamentally wrong
  – assuming that the network is reliable, static, secure, and homogeneous

• Types of distributed systems
  – computing
  – information processing
  – pervasive