UNIT-4 Distributed Scheduling


UNIT-4

Distributed Load Scheduling & Deadlock


Computer systems today allow multiple programs to be loaded into memory and executed concurrently, but at any instant only one program is actually in execution, or rather at most one instruction is executed on behalf of a process.

• A source program stored on disk is called passive because it does not demand any resources.

• A process is an active entity, i.e. a program in execution. During execution a process demands resources such as CPU, memory, and I/O devices.


The user feels that all of them run simultaneously; however, the operating system allocates the processor to one process at a time.


• New: Here the operating system recognizes the process but does not assign resources to it.

• Ready: The process is ready and waiting for the processor for execution.

• Run: When a process is selected by the CPU for execution, it moves to the run state.

• Blocked: When a running process does not have immediate access to a resource, it is said to be in the blocked state.

• Terminated: This is the state reached when the process finishes execution.
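This life cycle can be summarized as a small state machine. A minimal Python sketch, using only the state names and transitions implied by the list above (the transition table itself is an illustration, not taken from the slides):

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()         # recognized by the OS, no resources assigned yet
    READY = auto()       # waiting for the processor
    RUN = auto()         # selected for execution on the CPU
    BLOCKED = auto()     # waiting for a resource it cannot access immediately
    TERMINATED = auto()  # finished execution

# Allowed transitions implied by the descriptions above (illustrative).
TRANSITIONS = {
    State.NEW: {State.READY},
    State.READY: {State.RUN},
    State.RUN: {State.BLOCKED, State.READY, State.TERMINATED},
    State.BLOCKED: {State.READY},
    State.TERMINATED: set(),
}

def move(current: State, target: State) -> State:
    """Return the new state, or raise if the transition is not allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```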


    Distributed Scheduling

• Distributed scheduling: since wide area networks have high communication delays, distributed scheduling is more suitable for distributed systems based on local area networks.

A locally distributed system consists of a collection of autonomous computers connected by a local area communication network. Users submit tasks at their host computers for processing. The need for load distributing arises in such an environment because, due to the random arrival of tasks and their random CPU service time requirements, there is a good possibility that several computers are heavily loaded while others are idle or lightly loaded.


    Issues in Load Distribution

• Task assignment approach

• Load-balancing approach

• Load-sharing approach

• Classification of load distributing approaches


    Types of process scheduling techniques

– The task assignment approach deals with the assignment of tasks in order to minimize inter-process communication costs and improve turnaround time for the complete process, by taking some constraints into account.

– In the load-balancing approach the process assignment decisions attempt to equalize the average workload on all the nodes of the system.

– In the load-sharing approach the process assignment decisions attempt to keep all the nodes busy if there are sufficient processes in the system for all the nodes.


    Task assignment approach

• Main assumptions:

 – Processes have been split into tasks

 – Computation requirements of tasks and speeds of processors are known

 – Costs of processing tasks on nodes are known

 – Communication costs between every pair of tasks are known

 – Resource requirements and available resources on nodes are known

• Reassignment of tasks is not possible

• Basic idea: finding an optimal assignment to achieve goals such as the following (see the sketch after this list):

 – Minimization of IPC costs

 – Quick turnaround time of the process

 – High degree of parallelism

 – Efficient utilization of resources
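Under the assumptions above, an optimal assignment can in principle be found by exhaustive search over all task-to-node mappings. The sketch below is only an illustration of what "optimal" means in terms of the two cost components; the cost matrices `exec_cost` and `ipc_cost` are made-up inputs, not values from the slides.

```python
from itertools import product

# exec_cost[t][n]: cost of running task t on node n (assumed known)
exec_cost = [[5, 10], [4, 4], [6, 3]]
# ipc_cost[t1][t2]: communication cost between tasks t1 and t2,
# paid only when they end up on different nodes (assumed known)
ipc_cost = [[0, 6, 4], [6, 0, 0], [4, 0, 0]]

def total_cost(assignment):
    """Execution cost of each task plus IPC cost for split task pairs."""
    cost = sum(exec_cost[t][n] for t, n in enumerate(assignment))
    for t1 in range(len(assignment)):
        for t2 in range(t1 + 1, len(assignment)):
            if assignment[t1] != assignment[t2]:
                cost += ipc_cost[t1][t2]
    return cost

tasks, nodes = len(exec_cost), len(exec_cost[0])
best = min(product(range(nodes), repeat=tasks), key=total_cost)
print("best assignment:", best, "cost:", total_cost(best))
```

Exhaustive search is exponential in the number of tasks, so it is only feasible for small task graphs; real task assignment algorithms use heuristics or graph-cut formulations.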


    Static versus Dynamic

 – Static algorithms use only information about the average behavior of the system

 – Static algorithms ignore the current state or load of the nodes in the system

 – Dynamic algorithms collect state information and react to the system state if it changes

 – Static algorithms are much simpler

 – Dynamic algorithms are able to give significantly better performance

     


    Deterministic versus Probabilistic

 – Deterministic algorithms use information about the properties of the nodes and the characteristics of the processes to be scheduled

 – Probabilistic algorithms use information about static attributes of the system (e.g. number of nodes, processing capability, topology) to formulate simple process placement rules


Centralized versus Distributed

 – The centralized approach collects information at a server node, which makes the assignment decisions

 – The distributed approach contains entities that make decisions on a predefined set of nodes

 – Centralized algorithms can make efficient decisions but have lower fault tolerance

 – Distributed algorithms avoid the bottleneck of collecting state information and react faster

    ti " ti

  • 8/17/2019 UNIT-4 Distributed Scheduling

    17/51

Cooperative versus Non-cooperative

 – In non-cooperative algorithms, entities act autonomously and make scheduling decisions independently of the other entities

 – In cooperative algorithms, the distributed entities cooperate with each other

 – Cooperative algorithms are more complex and involve larger overhead

 – The stability of cooperative algorithms is better


Components of a load distribution algorithm

• Load estimation policy

 – determines how to estimate the workload of a node

• Process transfer policy

 – determines whether to execute a process locally or remotely

• State information exchange policy

 – determines how to exchange load information among nodes

• Location policy

 – determines to which node a transferable process should be sent

• Priority assignment policy

 – determines the priority of execution of local and remote processes

• Migration limiting policy

 – determines the total number of times a process can migrate
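These six policies can be thought of as pluggable components of a load distribution algorithm. A minimal structural sketch, assuming nothing beyond the names above (all method signatures are hypothetical):

```python
from abc import ABC, abstractmethod

class LoadDistributionPolicy(ABC):
    """One abstract method per policy component listed above."""

    @abstractmethod
    def estimate_load(self, node): ...              # load estimation policy

    @abstractmethod
    def should_transfer(self, node) -> bool: ...    # process transfer policy

    @abstractmethod
    def exchange_state(self, nodes): ...            # state information exchange policy

    @abstractmethod
    def select_destination(self, process, nodes): ...  # location policy

    @abstractmethod
    def priority(self, process) -> int: ...         # priority assignment policy

    @abstractmethod
    def may_migrate(self, process) -> bool: ...     # migration limiting policy
```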


Load estimation policy for load-balancing algorithms

• To balance the workload on all the nodes of the system, it is necessary to decide how to measure the workload of a particular node

• Some measurable parameters (with time and node dependent factors) can be the following:

 – Total number of processes on the node

 – Resource demands of these processes

 – Instruction mixes of these processes

 – Architecture and speed of the node's processor


    Process transfer policy

• Most of the algorithms use a threshold policy to decide whether a node is lightly loaded or heavily loaded

• The threshold value is a limiting value of the workload of a node, which can be determined by:

 – Static policy: a predefined threshold value for each node, depending on its processing capability

 – Dynamic policy: the threshold value is calculated from the average workload and a predefined constant

• Below the threshold value a node accepts processes to execute; above the threshold value the node tries to transfer processes to a lightly loaded node

• Double-threshold policy (see the sketch after this list):

 – When the node is in the overloaded region, new local processes are sent to run remotely and requests to accept remote processes are rejected

 – When the node is in the normal region, new local processes run locally and requests to accept remote processes are rejected

 – When the node is in the underloaded region, new local processes run locally and requests to accept remote processes are accepted
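A sketch of the single- and double-threshold decisions, assuming the threshold values themselves are already known (the numeric values in the example call are placeholders):

```python
def single_threshold(load, threshold):
    """Below the threshold the node accepts remote processes;
    above it the node tries to transfer processes away."""
    return "accept" if load < threshold else "transfer"

def double_threshold(load, low, high):
    """Classify the node into the three regions described above."""
    if load > high:
        return "overloaded"    # send new local processes remotely, reject remote ones
    if load < low:
        return "underloaded"   # run local processes locally, accept remote ones
    return "normal"            # run local processes locally, reject remote ones

print(double_threshold(load=7, low=2, high=5))   # -> overloaded
```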


    Location policy

• Threshold method

 – The policy selects a random node and checks whether that node is able to receive the process, then transfers the process. If the node rejects, another node is selected randomly. This continues until the probe limit is reached.

• Shortest method

 – Distinct nodes are chosen at random and each is polled to determine its load. The process is transferred to the node having the minimum load value, unless that node's workload prohibits it from accepting the process.

 – A simple improvement is to discontinue probing whenever a node with zero load is encountered.
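Both methods can be sketched as random probing with a limit. The `can_accept` and `load_of` callables stand in for whatever state-exchange mechanism the system uses; they, and the demo values, are assumptions for illustration only.

```python
import random

def threshold_location(nodes, can_accept, probe_limit=5):
    """Probe random nodes until one agrees to accept the process."""
    for _ in range(probe_limit):
        candidate = random.choice(nodes)
        if can_accept(candidate):
            return candidate
    return None  # probe limit reached: execute locally

def shortest_location(nodes, load_of, sample_size=3):
    """Poll a few random nodes and pick the least loaded one;
    stop early if an idle (zero-load) node is found."""
    best, best_load = None, float("inf")
    for candidate in random.sample(nodes, min(sample_size, len(nodes))):
        load = load_of(candidate)
        if load == 0:
            return candidate          # the simple improvement noted above
        if load < best_load:
            best, best_load = candidate, load
    return best

loads = {"n1": 3, "n2": 0, "n3": 5}        # hypothetical node loads
print(threshold_location(list(loads), lambda n: loads[n] < 2))
print(shortest_location(list(loads), loads.get))
```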


    Location policy

• Bidding method

 – Nodes contain managers (to send processes) and contractors (to receive processes)

 – Managers broadcast a request for bids; contractors respond with bids (prices based on the capacity of the contractor node) and the manager selects the best offer

 – The winning contractor is notified and asked whether it accepts the process for execution or not


    Location policy

• Pairing

 – Contrary to the former methods, the pairing policy aims to reduce the variance of load only between pairs of nodes

 – Each node asks some randomly chosen node to form a pair with it

 – If it receives a rejection, it randomly selects another node and tries to pair again

 – Two nodes that differ greatly in load are temporarily paired with each other, and migration starts

 – The pair is broken as soon as the migration is over

 – A node only tries to find a partner if it has at least two processes


State information exchange policy

• Dynamic policies require frequent exchange of state information, but these extra messages have two opposite impacts:

 – Increasing the number of messages gives more accurate scheduling decisions

 – Increasing the number of messages raises the queuing time of messages

• State information policies can be the following:

 – Periodic broadcast

 – Broadcast when state changes

 – On-demand exchange

 – Exchange by polling


State information exchange policy

• Periodic broadcast

 – Each node broadcasts its state information after every t units of time have elapsed

 – Problem: heavy traffic, fruitless messages, and poor scalability, since the information exchange is too large for networks with many nodes

• Broadcast when state changes

 – Avoids fruitless messages by broadcasting the state only when a process arrives or departs

 – A further improvement is to broadcast only when the state switches to another region (double-threshold policy)


State information exchange policy

• On-demand exchange

 – In this method a node broadcasts a State-Information-Request message when its state switches from the normal region to either the underloaded or the overloaded region

 – On receiving this message, the other nodes reply with their own state information to the requesting node

 – A further improvement is that only those nodes reply which are useful to the requesting node

• Exchange by polling

 – To avoid the poor scalability that comes from broadcast messages, the partner node is searched for by polling the other nodes one by one, until a poll limit is reached


    Priority assignment policy

• Selfish

 – Local processes are given higher priority than remote processes. Worst response-time performance of the three policies.

• Altruistic

 – Remote processes are given higher priority than local processes. Best response-time performance of the three policies.

• Intermediate

 – When the number of local processes is greater than or equal to the number of remote processes, local processes are given higher priority than remote processes. Otherwise, remote processes are given higher priority than local processes.
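The three rules reduce to a small decision function. A minimal sketch (the string labels are just illustrative):

```python
def priority(policy, n_local, n_remote):
    """Return which class of processes gets higher priority under
    the selfish, altruistic, or intermediate rule described above."""
    if policy == "selfish":
        return "local"
    if policy == "altruistic":
        return "remote"
    if policy == "intermediate":
        return "local" if n_local >= n_remote else "remote"
    raise ValueError(f"unknown policy: {policy}")

print(priority("intermediate", n_local=3, n_remote=5))  # -> remote
```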


Migration limiting policy

• This policy determines the total number of times a process can migrate

 – Uncontrolled

  • A remote process arriving at a node is treated just like a process originating at that node, so a process may be migrated any number of times

 – Controlled

  • Avoids the instability of the uncontrolled policy

  • Uses a migration count parameter to fix a limit on the number of times a process can migrate

  • Irrevocable migration policy: the migration count is fixed to 1

  • For long-running processes the migration count must be greater than 1 to adapt to dynamically changing states


    Load-sharing approach

• Drawbacks of the load-balancing approach

 – A load-balancing technique that attempts to equalize the workload on all the nodes is not an appropriate objective, since a big overhead is generated by gathering exact state information

 – Load balancing is not achievable, since the number of processes in a node is always fluctuating and a temporal unbalance among the nodes exists at every moment

• Basic ideas for the load-sharing approach

 – It is necessary and sufficient to prevent nodes from being idle while some other nodes have more than two processes

 – Load sharing is much simpler than load balancing, since it only attempts to ensure that no node is idle while a heavily loaded node exists

 – The priority assignment policy and the migration limiting policy are the same as those for the load-balancing algorithms


    Load estimation policy for Load-sharing algorithms

• Since load-sharing algorithms simply attempt to avoid idle nodes, it is sufficient to know whether a node is busy or idle

• Thus these algorithms normally employ the simplest load estimation policy of counting the total number of processes

• In modern systems, where the permanent existence of several processes on an idle node is possible, algorithms measure CPU utilization to estimate the load of a node


    Process transfer policy

    • "lgorithms normally use all-or-nothing strategy

    •This strategy uses the threshold #alue o! all the nodes !ied to 9

    •  Nodes become recei#er node when it has no process% and become

    sender node when it has more than 9 process

    • *hen +,U utili(ation is used as the load estimation policy% the

    double-threshold policy should be used as the process trans!er policy
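A sketch of the all-or-nothing rule with the threshold fixed at 1, using the process count as the load estimate as discussed above:

```python
def role(process_count, threshold=1):
    """All-or-nothing transfer policy for load sharing:
    a node with no processes is a receiver, a node with more than
    one process is a sender, otherwise it neither sends nor receives."""
    if process_count == 0:
        return "receiver"
    if process_count > threshold:
        return "sender"
    return "neutral"

print([role(n) for n in (0, 1, 3)])  # ['receiver', 'neutral', 'sender']
```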


    Location policy

• The location policy decides whether the sender node or the receiver node of the process takes the initiative to search for a suitable node in the system, and this policy can be the following:

 – Sender-initiated location policy

  • The sender node decides where to send the process

  • Heavily loaded nodes search for lightly loaded nodes

 – Receiver-initiated location policy

  • The receiver node decides from where to get the process

  • Lightly loaded nodes search for heavily loaded nodes

• Sender-initiated location policy

 – When a node becomes overloaded, it either broadcasts a message or randomly probes the other nodes one by one to find a node that is able to receive remote processes

 – When broadcasting, a suitable node is known as soon as a reply arrives


State information exchange policy

• In load-sharing algorithms it is not necessary for the nodes to exchange state information periodically, but a node needs to know the state of other nodes when it is either underloaded or overloaded

• Broadcast when state changes

 – In the sender-initiated/receiver-initiated location policy a node broadcasts a State Information Request when it becomes overloaded/underloaded

 – It is called the broadcast-when-idle policy when the receiver-initiated policy is used with a fixed threshold value of 1

• Poll when state changes

 – In large networks a polling mechanism is used

 – The polling mechanism randomly asks different nodes for state information until it finds an appropriate one or the probe limit is reached

 – It is called the poll-when-idle policy when the receiver-initiated policy is used with a fixed threshold value of 1

     


Classification/Types of Load Distribution Algorithms


    Sender Initiated LD Algorithms

• The overloaded node attempts to send tasks to a lightly loaded node (see the sketch after this list)

 – Transfer policy: if a new task takes you above the threshold, become a sender. If receiving a task will not lead to crossing over the threshold, then become a receiver

 – Selection policy: newly arrived tasks

 – Location policy

  • Random: still better than no sharing. Constrain it by limiting the number of transfers

  • Threshold: choose nodes randomly but poll them before sending a task. Limited number of polls; if the process fails, execute locally

  • Shortest: poll all randomly selected nodes and transfer to the least loaded one. Doesn't improve much over Threshold

 – Information policy
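A sketch of the sender-initiated scheme with a threshold transfer policy and a limited-poll (Threshold) location policy. The helper names `load_of` and `send_task`, and the demo values, are assumptions for illustration.

```python
import random

def sender_initiated(local_load, threshold, nodes, load_of, send_task, poll_limit=5):
    """Sender-initiated load distribution: an overloaded node polls
    randomly chosen nodes and transfers a newly arrived task to the
    first node that would stay at or below the threshold after receiving it."""
    if local_load <= threshold:
        return "run locally"            # transfer policy: not overloaded
    for _ in range(poll_limit):         # location policy: limited polling
        candidate = random.choice(nodes)
        if load_of(candidate) + 1 <= threshold:
            send_task(candidate)        # selection policy: the new task
            return f"transferred to {candidate}"
    return "run locally"                # all polls failed

loads = {"n1": 0, "n2": 4}              # hypothetical remote loads
print(sender_initiated(local_load=3, threshold=2, nodes=list(loads),
                       load_of=loads.get, send_task=lambda n: None))
```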


Receiver-initiated

• The load-sharing process is initiated by a lightly loaded node

 – Transfer policy: threshold based

 – Selection policy: can be anything

 – Location policy: the receiver selects up to N nodes and polls them, transferring a task from the first sender found. If none are found, wait for a predetermined time, check the load, and try again

 – Information policy


    Symmetric Algorithms

 – Simple idea: combine the previous two. One works well at high loads, the other at low loads.

 – Above-Average Algorithm: keep the load within a range

  • Transfer policy: maintain two thresholds equidistant from the average. Nodes with load above the upper threshold are senders; nodes with load below the lower threshold are receivers.

  • Location policy: sender initiated:

   – The sender broadcasts a "too high" message and sets up a too-high alarm

   – A receiver getting the "too high" message replies with an accept, cancels its too-low alarm, starts an awaiting-task alarm, and increments its load value

   – A sender which gets an accept message will transfer a task as appropriate. If it gets a "too low" message, it responds with a "too high" to the sender.

   – If no accept has been received within the timeout, send out a "change average" message.


    Adaptive Algorithms

• Stable Symmetric Algorithm

 – Use information gathered during polling to change behaviour. Start by assuming that every node is a receiver.

 – Transfer policy: range based, with an upper and a lower threshold

 – Location policy: the sender-initiated component polls the node at the head of the receiver list. Depending on the answer, either a task is transferred or the node is moved to the OK or sender list. The same thing happens at the receiving end. The receiver-initiated component polls the sender list in order, then the OK list and the receiver list in reverse order. Nodes are moved in and out of the lists at both sender and receiver.

 – Selection policy: any; information policy: demand driven

 – At high loads, receiver lists become empty, preventing future polling and "deactivating" the sender component. At low loads, receiver-initiated polling is deactivated, but not before updating the receiver lists.


Task Migration

• Task migration refers to the transfer of a task that has already begun execution to a new location, where its execution continues. To migrate a partially executed task, the task's state must be made available at the new location. The steps involved in task migration are:

• Task transfer: the transfer of the task's state to the new machine.

• Unfreeze: the task is installed at the new machine and put in the ready queue, so that it can continue executing.


Issues in Task Migration

• In the design of a task migration mechanism, several issues play an important role in determining the efficiency of the mechanism. These issues include:

• State transparency: there are two issues to be considered

• 1. The cost to support remote execution, which includes delays due to freezing the task.


    Traffic Shaping

    "nother method o! congestion control is to ?shape@ thetra!!ic be!ore it enters the network$

    • Tra!!ic shaping controls the rate at which packets are sentnot 7ust how many5$ Used in "T0 and Integrated Ser#icesnetworks$

    • "t connection set-up time% the sender and carrier negotiatea tra!!ic pattern shape5$

    • Two tra!!ic shaping algorithms are.

     – Leaky ucket

     –

    Token ucket


The Leaky Bucket Algorithm

• The Leaky Bucket Algorithm is used to control the rate in a network. It is implemented as a single-server queue with constant service time. If the bucket (buffer) overflows, then packets are discarded.
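A minimal simulation of the leaky bucket as a fixed-capacity queue drained at a constant rate per clock tick; the capacity and rate values are placeholders chosen for illustration.

```python
from collections import deque

class LeakyBucket:
    """Single-server queue with constant service rate; arrivals that
    find the bucket (buffer) full are discarded."""
    def __init__(self, capacity=4, rate_per_tick=1):
        self.queue = deque()
        self.capacity = capacity
        self.rate = rate_per_tick

    def arrive(self, packet):
        if len(self.queue) >= self.capacity:
            return False          # bucket overflow: packet discarded
        self.queue.append(packet)
        return True

    def tick(self):
        """Emit at most `rate` packets per clock tick (constant output)."""
        return [self.queue.popleft() for _ in range(min(self.rate, len(self.queue)))]

bucket = LeakyBucket()
for i in range(6):                # a burst of 6 packets arrives at once
    bucket.arrive(f"p{i}")        # the last two overflow and are dropped
print(bucket.tick(), bucket.tick())  # drained one packet per tick
```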


The Leaky Bucket Algorithm

(a) A leaky bucket with water. (b) A leaky bucket with packets.


Leaky Bucket Algorithm, cont.

• The leaky bucket enforces a constant output rate (the average rate) regardless of the burstiness of the input. It does nothing when the input is idle.

• The host injects one packet per clock tick onto the network. This results in a uniform flow of packets, smoothing out bursts and reducing congestion.

• When packets are all the same size (as in ATM cells), one packet per tick is fine. For variable-length packets, though, it is better to allow a fixed number of bytes per tick.


Token Bucket Algorithm

• In contrast to the LB, the Token Bucket Algorithm allows the output rate to vary, depending on the size of the burst.

• In the TB algorithm, the bucket holds tokens. To transmit a packet, the host must capture and destroy one token. Tokens are generated by a clock at the rate of one token every ∆t sec.

• Idle hosts can capture and save up tokens (up to the maximum size of the bucket) in order to send larger bursts later.


The Token Bucket Algorithm

(a) Before. (b) After.


Leaky Bucket vs Token Bucket

• The LB discards packets; the TB does not. The TB discards tokens.

• With the TB, a packet can only be transmitted if there are enough tokens to cover its length in bytes.

• The LB sends packets at an average rate. The TB allows large bursts to be sent faster by speeding up the output.

• The TB allows saving up tokens (permissions) to send large bursts. The LB does not allow saving.


DEADLOCK

    • " process re/uests resourcesG i! the resources are not a#ailable at that

    time% the process enters a wait state$ It may happen that waiting

     processes will ne#er again change state% because the resources they

    ha#e re/uested are held by other waiting processes$ This situation is

    called deadlock$


DEADLOCK CHARACTERIZATION

    • " deadlock situation can arise i! the !ollowing !our conditions

    hold simultaneously in a system.• 9$ %utual e$clusion1 "t least one resource must be held in a non-

    sharable modeG that is% only one process at a time can use the resource$

    I! another process re/uests that resource% the re/uesting process must

     be delayed until the resource has been released$
