
Source: …anirban.org/sosp07-poster.pdf (2016-05-17)


Anirban Sinha, Charles ‘Buck’ Krasic (University of British Columbia); Ashvin Goel (University of Toronto)

A. Traditional Scheduling Approach

  • Multi-level feedback queue algorithm: it hasn't changed in 30 years (a minimal sketch of the idea follows below).
  • Separates CPU-intensive and IO-intensive jobs.
  • Priority based.
  • Breaks down for mixed CPU- and IO-intensive jobs, like video applications, security-enabled web servers, databases, etc.
  • Using real-time priority leads to starvation and live locks.
  • Behavior can be hard to predict: deadlocks, live locks, or priority inversion may occur.
  • Poor adaptation for adaptive time-sensitive workloads.
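The poster itself contains no code; as a hedged illustration of the multi-level feedback queue idea above, the sketch below shows the classic demote-on-full-quantum, promote-on-early-block policy that separates CPU-bound from IO-bound jobs. The structures, level count, and linked-list queues are illustrative assumptions, not the Linux scheduler's actual implementation.

```c
/* Hypothetical MLFQ sketch: demote CPU-bound tasks, promote IO-bound ones.
 * Not the Linux scheduler's actual code. */
#include <stdbool.h>
#include <stddef.h>

#define NUM_LEVELS 3                /* level 0 = highest priority        */

struct task {
    int level;                      /* current queue level               */
    bool used_full_quantum;         /* ran until preempted (CPU-bound)   */
    struct task *next;              /* singly linked run queue           */
};

static struct task *queues[NUM_LEVELS];   /* one FIFO per level */

void enqueue(struct task *t)
{
    struct task **pp = &queues[t->level];
    t->next = NULL;
    while (*pp)                     /* append at the tail of its level   */
        pp = &(*pp)->next;
    *pp = t;
}

/* Pick the first task from the highest non-empty priority level. */
struct task *pick_next(void)
{
    for (int lvl = 0; lvl < NUM_LEVELS; lvl++) {
        if (queues[lvl]) {
            struct task *t = queues[lvl];
            queues[lvl] = t->next;
            return t;
        }
    }
    return NULL;
}

/* After a task's quantum: demote CPU-bound tasks, boost IO-bound ones. */
void requeue(struct task *t)
{
    if (t->used_full_quantum && t->level < NUM_LEVELS - 1)
        t->level++;                 /* looked CPU-bound: lower priority  */
    else if (!t->used_full_quantum && t->level > 0)
        t->level--;                 /* blocked early (IO-bound): raise   */
    enqueue(t);
}
```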

B. O(1) scheduler

C. Pure Fairshare Scheduling

  • Time-based approach, as opposed to priority (a minimal virtual-time sketch follows below).
  • No starvation. Overall fairness in the system.
  • Better balance between desktop and server performance needs.
  • Benefits from recent infrastructural components:
      • fine-grained time accounting;
      • high-resolution timers;
      • effective data structures (heaps, red-black trees, etc.).
  • Q: Can we do better? A: Yes, by combining fair sharing with cooperation.
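Fairshare scheduling, as described above, is time-based: the task with the smallest accumulated virtual time runs next. The sketch below is a minimal illustration under common assumptions (charging runtime divided by a per-task share); the field names, weight, and linear scan are hypothetical, and the implementation described in section E uses heap-backed queues instead.

```c
/* Hypothetical fairshare sketch: pick the runnable task with the smallest
 * virtual time; charge CPU usage scaled by the task's share (weight). */
#include <stddef.h>
#include <stdint.h>

struct fs_task {
    uint64_t vtime_ns;   /* accumulated virtual time                  */
    uint32_t weight;     /* relative share; larger weight runs longer */
};

/* Linear scan for clarity; a real implementation would keep tasks in a
 * heap or red-black tree ordered by virtual time. */
struct fs_task *pick_min_vtime(struct fs_task *tasks, size_t n)
{
    struct fs_task *best = NULL;
    for (size_t i = 0; i < n; i++)
        if (!best || tasks[i].vtime_ns < best->vtime_ns)
            best = &tasks[i];
    return best;
}

/* Charge the task for the CPU time it actually used. */
void account(struct fs_task *t, uint64_t ran_ns)
{
    t->vtime_ns += ran_ns / t->weight;
}
```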

D. Overview of Our Approach: Cooperative Polling

  • coop_poll() system call.
  • The cooperative approach leverages application information to context switch in a much more strategic fashion (a kernel-side selection sketch follows the steps below).
  • Diagram: user space holds best-effort tasks and cooperative tasks; kernel space holds a coop event queue, a virtual time queue (fairshare), and the kernel task scheduler.
  • Step 1: Cooperative tasks inform the kernel of their most important event parameters.
  • Step 2: The kernel inserts this information into its own event queue, which contains event info for all coop tasks.
  • Step 3: The kernel chooses the next task to run by inspecting the head of the virtual time queue. The task with the smallest virtual time gets chosen.
  • Step 4: If a cooperative task is selected from the virtual time queue, the kernel selects the task that is at the top of the coop event queue. It calculates the task's timeslice based on the nearest deadline of the other coop tasks.
  • Step 5: The kernel informs the most important task of the next most important event of the other coop tasks.
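A hedged sketch of the selection logic in Steps 3 and 4, assuming all tasks sit in one array and deadlines are absolute nanosecond timestamps; the names, fields, and linear scans are illustrative (the actual implementation keeps separate heap-backed queues). Fairness decides who runs; among cooperative tasks, the most urgent event is served first, and the timeslice is bounded by the nearest deadline of the other cooperative tasks.

```c
/* Hypothetical sketch of Steps 3-4; not the actual kernel code. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct task {
    uint64_t vtime_ns;         /* fairshare virtual time                   */
    uint64_t deadline_ns;      /* most important event, reported in Step 1 */
    bool     is_coop;
};

struct task *schedule_next(struct task *tasks, size_t n, uint64_t now_ns,
                           uint64_t *slice_ns, uint64_t default_slice_ns)
{
    struct task *next = NULL;

    /* Step 3: fairness first - the task with the smallest virtual time. */
    for (size_t i = 0; i < n; i++)
        if (!next || tasks[i].vtime_ns < next->vtime_ns)
            next = &tasks[i];

    *slice_ns = default_slice_ns;
    if (!next || !next->is_coop)
        return next;

    /* Step 4: among cooperative tasks, run the one with the most urgent
     * event (the head of the coop event queue). */
    for (size_t i = 0; i < n; i++)
        if (tasks[i].is_coop && tasks[i].deadline_ns < next->deadline_ns)
            next = &tasks[i];

    /* Bound the timeslice by the nearest deadline of the *other* coop
     * tasks, so their events are not missed. */
    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].is_coop || &tasks[i] == next)
            continue;
        if (tasks[i].deadline_ns > now_ns &&
            tasks[i].deadline_ns - now_ns < *slice_ns)
            *slice_ns = tasks[i].deadline_ns - now_ns;
    }
    return next;
}
```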

E. Overview of our implementation

  • Virtual time based.
  • One new system call: coop_poll() (a hypothetical usage sketch follows below).
  • Uses efficient heaps for priority queues.
  • Benefits from high-resolution one-shot timers and precise time accounting in the kernel.
  • We use playback of multiple videos to represent a rich workload of multiple time-sensitive applications.
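The poster names coop_poll() but does not give its signature, so the loop below is a purely hypothetical illustration of how an adaptive video task might use such a call: report the deadline of its own most important event (Step 1), learn the nearest event of the other cooperative tasks (Step 5), do one unit of work, and return to coop_poll() at a strategic point rather than being preempted blindly. The prototype, the stub, and the fixed 33 ms deadline are assumptions.

```c
/* Purely hypothetical coop_poll() interface and event loop. */
#include <stdint.h>
#include <stdio.h>

/* Assumed semantics: report our most important event (Step 1) and receive
 * the nearest event deadline of the other coop tasks (Step 5). A stub
 * stands in for the real system call so the sketch is self-contained. */
static int coop_poll(uint64_t my_next_deadline_ns,
                     uint64_t *others_next_deadline_ns)
{
    (void)my_next_deadline_ns;
    *others_next_deadline_ns = 0;            /* stub: no other coop tasks */
    return 0;
}

static uint64_t next_frame_deadline_ns(void) { return 33000000; /* stub */ }
static void decode_and_display_frame(void)   { puts("frame"); }

int main(void)
{
    for (int i = 0; i < 3; i++) {            /* a few iterations only */
        uint64_t others_deadline_ns = 0;

        /* Tell the kernel about our deadline; learn about the others'. */
        coop_poll(next_frame_deadline_ns(), &others_deadline_ns);

        /* Do one unit of work, then voluntarily return to coop_poll()
         * so context switches happen at strategic points. */
        decode_and_display_frame();
    }
    return 0;
}
```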

F. Pure Fairshare vs. Cooperative Approach

  • Figures (Fairshare vs. Cooperative): dispatcher latency; context switch rate; throughput as a % of the single-player case; frame rate of all 10 simultaneous videos; dispatcher latency with increasing videos.
  • Dispatcher latency: actual minus requested dispatch time (a small measurement sketch follows below).
  • Fairshare at the finest granularity has 5x the latency of coop, yet its context switch rate is 2x worse.
  • The latency increases quickly under heavy load with an increasing number of videos.
  • Some of the videos experience noticeable interruptions.
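As a hedged illustration of the dispatcher latency metric above (actual minus requested dispatch time), the snippet below measures it from user space; the use of CLOCK_MONOTONIC and nanosleep() is an assumption for illustration, not the poster's actual instrumentation.

```c
/* Illustrative dispatcher-latency measurement: how long after the
 * requested dispatch time the task actually gets to run. */
#include <stdint.h>
#include <stdio.h>
#include <time.h>

static uint64_t now_ns(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

int main(void)
{
    const uint64_t period_ns = 10 * 1000000ull;  /* request: run in 10 ms */
    uint64_t requested = now_ns() + period_ns;

    struct timespec delay = { 0, (long)period_ns };
    nanosleep(&delay, NULL);                     /* wait to be dispatched */

    uint64_t actual = now_ns();
    printf("dispatcher latency: %lld ns\n",
           (long long)(actual - requested));
    return 0;
}
```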

G. Coordinated Adaptation

  • Have overall fairness.
  • Allow cooperation between time-sensitive tasks via the kernel:
      • gives preferential treatment to time-sensitive (TS) tasks within the boundaries of fairness;
      • facilitates uniform fidelity across tasks.
  • Figure: frame rate of all 12 videos at overload. The videos are able to maintain a uniform quality even at overload.

H. Conclusion

Coop + fairshare:
  • Gives better timeliness (smaller latency) even under overload.
  • Facilitates coordinated adaptation for multiple adaptive tasks.
  • Informed context switching is cache efficient, leading to a better timeliness-throughput balance.