
Harini Ramaprasad, Frank Mueller

North Carolina State University

Center for Embedded Systems Research

Tightening the Bounds on Feasible Preemption Points

2

Motivation

Timing Analysis
— Calculation of Worst-Case Execution Times (WCETs) of tasks
— Required for scheduling of real-time tasks
  – Schedulability theory requires a-priori knowledge of WCET
— Estimates need to be safe
— Static Timing Analysis: an efficient method to calculate the WCET of a program
— Data caches (D$) introduce unpredictability into timing analysis

Data caches:
• Improve performance significantly
• Complicate static timing analysis for a task

3

Preemptive scheduling

Practical real-time systems
— Multiple tasks with varying priorities
— A higher-priority task may preempt a lower-priority task at any time
— Additional cache misses occur when the lower-priority task is restarted
— WCET with preemption delay is required

Static Timing Analysis becomes even more complicated!

4

Data Cache Reference Patterns (Prior Work)

Data Cache Analyzer added to Static Timing Analysis framework

Enhanced Cache Miss Equations (Ghosh et al.) framework produces D$ miss/hit patterns for memory references

Used for loop-nest oriented code; scalar and array references analyzed

Considers only a single task with no preemptions

Patterns fed to timing analyzer to tighten WCET estimate

Necessary terminology (illustrated below):
— Iteration point: represents one iteration of a loop nest
— Iteration space: the set of all iteration points
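As a minimal illustration of these two terms, the snippet below enumerates the iteration space of a small, hypothetical two-level loop nest; the bounds N and M are assumptions made only for this example.

```python
# Hypothetical 4x3 loop nest: each (i, j) pair is one iteration point,
# and the list of all pairs is the iteration space.
N, M = 4, 3
iteration_space = [(i, j) for i in range(N) for j in range(M)]

print(len(iteration_space))   # 12 iteration points
print(iteration_space[:4])    # [(0, 0), (0, 1), (0, 2), (1, 0)]
```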

5

Static Timing Analyzer Framework

6

Methodology

Task schedulability: Response Time Analysis (RTA) is used (a minimal sketch follows below)

Steps involved in the calculation of WCET with preemption delay:
— Calculate the max. # of preemptions possible for a task
— Identify the placement of preemption points in the iteration space
— Calculate the preemption delay at each such point
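For illustration, the sketch below applies the standard response-time recurrence R = C_i + Σ_{j∈hp(i)} ⌈R/T_j⌉·(C_j + δ) with a per-preemption delay term δ. The task representation and the uniform delay_per_preemption value are simplifying assumptions; the paper instead bounds the delay per feasible preemption point.

```python
# Minimal response-time-analysis sketch with a per-preemption cache-delay term.
def response_time(task, hp_tasks, delay_per_preemption=0, max_iter=1000):
    """task / hp_tasks: dicts with 'wcet' and 'period' (in cycles)."""
    r = task['wcet']
    for _ in range(max_iter):
        interference = sum(
            # Ceiling division: number of hp releases within the window r,
            # each adding its WCET plus one bounded preemption delay.
            -(-r // hp['period']) * (hp['wcet'] + delay_per_preemption)
            for hp in hp_tasks
        )
        r_next = task['wcet'] + interference
        if r_next == r:
            return r            # fixed point reached: response time found
        r = r_next
    return None                 # no convergence within the iteration bound

# Example with T0 and T1 from the task set on a later slide
# (T0: period 20, WCET 7; T1: period 50, WCET 12), no cache delay:
print(response_time({'wcet': 12, 'period': 50},
                    [{'wcet': 7, 'period': 20}]))   # -> 19
```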

7

Methodology: Analysis Phases

Phase 1: Single-Task Analysis
— For every task:
  – Build D$ reference patterns assuming NO preemptions
  – Calculate the stand-alone WCET and BCET
— Performed once for each task using the D$ analyzer + static timing analyzer

Phase 2: Preemption Delay Calculation (in task-set context)
— Per-job analysis performed
— All jobs within the hyperperiod are considered
— Proof of correctness is in the paper

8

Identification of Preemption Points

Identification of preemption points for a job
— All higher-priority (hp) jobs can potentially preempt
— Eliminate infeasible points
— In every interval between potential preemption points (see the sketch below):
  – Check whether the job can be scheduled in the interval, using the BCET and WCET of hp jobs
  – Check whether a portion of the job remains beyond the interval
  – Count the preemption point only if both criteria are satisfied
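A hedged sketch of these two checks for a single candidate point is given below; the function name, parameters, and interval bookkeeping are illustrative assumptions rather than the paper's implementation, which performs this per job over the hyperperiod.

```python
def is_feasible_point(interval_len, hp_bcet_in_interval, hp_wcet_in_interval,
                      min_exec_done_before, job_wcet):
    """interval_len: length of the interval ending at the candidate point.
    hp_bcet_in_interval / hp_wcet_in_interval: total best-/worst-case demand of
    higher-priority jobs released in that interval.
    min_exec_done_before: guaranteed execution of this job completed earlier.
    job_wcet: the job's WCET."""
    # Time this job can execute in the interval, best and worst case.
    max_exec = max(interval_len - hp_bcet_in_interval, 0)
    min_exec = max(interval_len - hp_wcet_in_interval, 0)
    # Check 1: the job must be scheduled in the interval at all; if even with
    # best-case hp demand no time is left, it cannot be preempted here.
    if max_exec == 0:
        return False
    # Check 2: a portion of the job must remain beyond the interval; if the
    # guaranteed (minimum) execution already covers the WCET, the job is
    # certainly finished before the point.
    if min_exec_done_before + min_exec >= job_wcet:
        return False
    return True

# Example with T1 from the task set below (WCET 12), in an interval of length
# 10 containing one release of T0 (BCET 5, WCET 7):
print(is_feasible_point(10, 5, 7, 0, 12))    # True: T1 runs here, work remains
print(is_feasible_point(10, 5, 7, 10, 12))   # False: T1 is certainly done
```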

9

Eliminating Infeasible Preemption Points

Task | Period (= deadline) | WCET | BCET
T0   | 20                  | 7    | 5
T1   | 50                  | 12   | 10
T2   | 200                 | 30   | 25

[Figure: partial timeline for task T1 over 0-50 cycles, showing the best-case and worst-case schedules of T0 and T1. A candidate preemption point is infeasible because T1 is already done before that point.]

10

Eliminating Infeasible Preemption Points

[Figure: partial timeline for task T2 over 0-60 cycles, showing the best-case and worst-case schedules of T0, T1, and T2, and the min./max. execution time available to T2 in each interval. A point is infeasible where T2 is not scheduled in the interval and therefore cannot be preempted.]

11

Placement of Points within Job

Identification of the worst-case scenario
— Preemption point placement
  – Bounded by the range of execution time available for the task in the interval
  – Interact with the Timing Analyzer to find the iteration point corresponding to a point in time (see the sketch below):
    1. Min. iteration point reached in the shortest possible time
    2. Min. iteration point reached in the longest possible time
    3. Max. iteration point reached in the shortest possible time
    4. Max. iteration point reached in the longest possible time

[Figure: access space for the task, with points 1-4 delimiting the range for preemption.]
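As a rough illustration of how a time range maps back to a range of iteration points, the sketch below assumes a hypothetical loop with fixed best- and worst-case per-iteration times; the actual framework obtains this mapping by querying the static timing analyzer.

```python
def iteration_point_range(t_min, t_max, bcet_per_iter, wcet_per_iter):
    """Return (min_iter, max_iter): the earliest and latest iteration point the
    task could be at when preempted somewhere in the time window [t_min, t_max]."""
    # Fewest iterations completed: shortest time budget, slowest iterations.
    min_iter = t_min // wcet_per_iter
    # Most iterations completed: longest time budget, fastest iterations.
    max_iter = t_max // bcet_per_iter
    return min_iter, max_iter

# Example: preemption may occur between 400 and 600 cycles into the job, and
# each iteration takes 8-10 cycles -> preempted somewhere in iterations 40-75.
print(iteration_point_range(400, 600, bcet_per_iter=8, wcet_per_iter=10))
```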

12

Preemption Delay at a Point

— Access chain building
  – Build a time-ordered list of all memory references in the task
  – Connect all references accessing the same D$ set to form a chain
  – Different cache sets are shown with different colors
— Assign a weight to every access point (see the sketch below)
  – Weight = # of distinctly colored chains that cross the point
  – Indicates the # of misses incurred if a preemption occurs at that point
  – Count only chains for D$ sets used by a higher-priority task
  – Count only if the next point in the chain is a HIT
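The sketch below illustrates this weight computation on a time-ordered access list. The data layout (tuples of time, cache set, hit flag) and the function name are assumptions made for the example, not the D$ analyzer's actual interface.

```python
def preemption_weights(accesses, hp_sets):
    """accesses: time-ordered (time, cache_set, is_hit) tuples for the task.
    hp_sets: D$ sets that a higher-priority task may touch.
    Returns, per access point, the number of chains that cross the point and
    whose next access is a HIT, i.e. extra misses if preempted there."""
    all_sets = {s for _, s, _ in accesses}
    weights = []
    for i in range(len(accesses)):
        crossing = 0
        for cache_set in all_sets:
            if cache_set not in hp_sets:
                continue  # a preemption cannot evict lines of unused sets
            # The chain for this set crosses point i if it has an access at or
            # before i and another access after i.
            before = any(s == cache_set for _, s, _ in accesses[:i + 1])
            nxt = next((a for a in accesses[i + 1:] if a[1] == cache_set), None)
            # Count the chain only if the next access in it would be a HIT:
            # that hit turns into a miss once the preemption evicts the line.
            if before and nxt is not None and nxt[2]:
                crossing += 1
        weights.append(crossing)
    return weights

# Example: three references to set 0 and two to set 1; hp task uses both sets.
refs = [(0, 0, False), (1, 1, False), (2, 0, True), (3, 1, True), (4, 0, True)]
print(preemption_weights(refs, hp_sets={0, 1}))   # -> [1, 2, 2, 1, 0]
```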

13

Experimental Results

Benchmark         | Period (cycles) | WCET w/o delay (cycles) | BCET (cycles) | #jobs | # preemptions, new method (feasible points) Avg/Min/Max | HJ Bound | Old method | Staschulat method
n-real-updates    | 100000          | 16738                   | 16738         | 50    | 0 / 0 / 0                                               | 0        | 0          | 0
900convolution    | 625000          | 76391                   | 61091         | 8     | 0.75 / 0 / 1                                            | 7        | 7          | 1
Matrix1           | 625000          | 59896                   | 54015         | 8     | 1 / 1 / 1                                               | 8        | 8          | 3
1000convolution   | 625000          | 87091                   | 67791         | 8     | 1.25 / 1 / 2                                            | 9        | 9          | 5
600convolution    | 1000000         | 45291                   | 40991         | 5     | 0.4 / 0 / 1                                             | 16       | 15         | 7
300n-real-updates | 1000000         | 56538                   | 47338         | 5     | 1.2 / 1 / 2                                             | 17       | 16         | 9
800fir            | 1250000         | 77037                   | 69737         | 4     | 1.5 / 1 / 2                                             | 23       | 21         | 18
900lms            | 1250000         | 158636                  | 118536        | 4     | 3 / 2 / 4                                               | 24       | 22         | -
1000fir           | 2500000         | 99237                   | 86937         | 2     | 4 / 4 / 4                                               | 47       | 41         | -
500fir            | 5000000         | 43937                   | 43937         | 1     | 3 / 3 / 3                                               | 94       | 80         | -

Task set with U = 0.8, hyperperiod = 5000000 cycles

Our new method gives the tightest bound on # of preemptions in all cases

14

Maximum # of preemptions (U = 0.8)

[Figure: maximum # of preemptions per task across the task sets, comparing HJ Bound, Old Method, Staschulat, and New Method.]

Our method gives tightest bound on # preemptions

15

WCET w/ delay (U = 0.8)

[Figure: WCET with preemption delay per task across the task sets, comparing HJ Bound, Old Method, Staschulat, and New Method.]

• Our method gives the lowest preemption delay and hence the lowest WCET
• Since the WCET is unique to each task, there is no pattern of increase/decrease

16

Response Time (U = 0.8)

[Figure: response time per task across the task sets, comparing HJ Bound, Old Method, Staschulat, and New Method.]

• Response times monotonically increase as task priority decreases
• Our method has the lowest rate of increase
• All task sets are deemed schedulable by our method

17

Varying WCET/BCET Ratios

# Feasible preemptions (min/max/avg) for varying WCET/BCET (W/B) ratios; the last three columns give the # of preemptions bounded by prior methods.

U = 0.5
Task ID | Period (cycles) | WCET (cycles) | W/B=1    | W/B=1.5  | W/B=2    | W/B=2.5 | W/B=3   | HJ Bound | Old Method | Staschulat
1       | 80000           | 16000         | 1/1/1    | 1/1/1    | 1/1/1    | 1/1/1   | 1/1/1   | 8        | 8          | 2
2       | 100000          | 5000          | 0/1/0.25 | 0/1/0.25 | 0/2/0.5  | 0/2/0.5 | 0/2/0.5 | 12       | 12         | 4
3       | 200000          | 30000         | 3/3/3    | 3/4/3.5  | 3/4/3.5  | 3/5/4   | 3/5/4   | 25       | 25         | 8

U = 0.8
Task ID | Period (cycles) | WCET (cycles) | W/B=1    | W/B=1.5  | W/B=2    | W/B=2.5 | W/B=3   | HJ Bound | Old Method | Staschulat
1       | 80000           | 20000         | 2/2/2    | 2/2/2    | 2/2/2    | 2/2/2   | 2/2/2   | 8        | 8          | 3
2       | 100000          | 15000         | 1/2/1.5  | 1/3/1.75 | 1/3/1.75 | 1/4/2   | 1/4/2   | 12       | 12         | 6
3       | 200000          | 50000         | 6/7/6.5  | 8/8/8    | 8/9/8.5  | 8/9/8.5 | 8/8/8   | 25       | 25         | 19

• Our method produces a significantly lower # of preemptions
• As the WCET/BCET ratio increases:
  – # preemptions increases up to a point
  – # preemptions decreases slightly beyond that point
• # preemptions is lowest when WCET/BCET = 1
• Max. increase beyond the lowest ~ 30%

18

Critical Instant

Does the critical instant (CI) occur on simultaneous task release?
— Not when preemption delay is considered!

When preemption delay is considered:
— The CI occurs when tasks are released in reverse priority order
— Similar to the effect of critical sections/blocking!

Considering all jobs in the hyperperiod eliminates safety concerns

19

Critical Instant

Task set:
Task | Ф | P  | C     | Δ
T1   | 2 | 3  | 1     | 0
T2   | 1 | 15 | 5.125 | 0.125
T3   | 0 | 20 | 1.25  | 0.75
T4   | 0 | 25 | 1.25  | 0.25

[Figure: two schedules compared:
1. Preemption with WCET, no phasing
2. Preemption with WCET, phasing
Annotated response times: RT = 12 and RT = 12.25.]

• In 1, the response time of T3 is larger
• In 1, the response time of T4 is shorter

20

Conclusions

Contributions:
— Determination of the critical instant under cache preemption
— Calculation of a tighter bound on the max. # of preemptions
— Construction of a realistic worst-case scenario for preemptions

Results show significant improvements in:
— Max. # of preemptions
— WCET with preemption delay
— Response time

Improvements:
— Order of magnitude over simplistic methods
— Half an order of magnitude over the best prior method

Observations:
— As WCET/BCET increases, # preemptions increases by ~30%
— Some tools don't provide BCET
— Compromise: use BCET = 0 and get slightly pessimistic results

21

Future Work

Consider phased task-sets in experiments

Extend the framework to deal with dynamic scheduling policies
— Recalculate priorities at every interval

22

Related Work

C.-G. Lee et al.

1. Analysis of cache-related preemption delay in fixed-priority preemptive scheduling.
2. Bounding cache-related preemption delay for real-time systems.

Basic ideas involved in calculating cache related preemption delay

Works only with instruction caches

J. Staschulat et al.

1. Multiple process execution in cache-related preemption delay analysis.
2. Scheduling analysis of real-time systems with precise modeling of cache-related preemption delay.

Complete framework to calculate cache related preemption delay

Works only with instruction caches

Takes indirect preemption effects into consideration

23

Thank you!

Questions?