SAIU: An Efficient Cache Replacement Policy for Wireless On-demand Broadcasts


Jianliang Xu, Qinglong Hu, Dik Lun Lee (Department of Computer Science, Hong Kong University of Science and Technology)
Wang-Chien Lee (GTE Laboratories)

Proceedings of the Ninth International Conference on Information and Knowledge Management (CIKM 2000)

Outline

• Introduction
• Background
• Cache replacement algorithm
• Implementation issues
• Simulation model
• Performance evaluation
• Conclusion
• My comment

Introduction

Wireless data dissemination
• Broadcast-based information dissemination
• On-demand services

Research in wireless on-demand broadcast systems
• On-demand broadcast scheduling
• Wireless data caching

Wireless caching policy
• Cache replacement is an important issue to be tackled in cache management.
• Previous studies are based on unrealistic assumptions, such as fixed data sizes, no updates, and no disconnections.

Background

Performance metrics

Traditional cache management
• Cache hit ratio
• Access latency

On-demand broadcast systems
• Access latency
• Stretch: the ratio of the access latency of a request to its service time (size / bandwidth)
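For example, if a request for a 100 KB item on a 100 KB/s broadcast channel (service time 1 s) is answered after a total latency of 4 s, its stretch is 4 / 1 = 4.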

Scheduling algorithms
• Longest Wait First (LWF)
• Longest Total Stretch First (LTSF)
• RxW

In this paper, LTSF is the default scheduling algorithm.
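As a rough illustration of the LTSF idea only (my own sketch, not code from the paper), the function below sums, for every item with pending requests, the stretch each request would have if that item were broadcast now, and picks the item with the largest total. The request list, size table, and bandwidth parameter are assumed names.

import time

def pick_next_item_ltsf(pending, item_size, bandwidth, now=None):
    """Return the id of the item whose pending requests have the longest total stretch.

    pending   : list of (item_id, arrival_time) tuples for outstanding requests
    item_size : dict mapping item_id -> size (e.g. in bytes)
    bandwidth : broadcast bandwidth (same size unit per second)
    """
    now = time.time() if now is None else now
    total_stretch = {}
    for item_id, arrival in pending:
        service_time = item_size[item_id] / bandwidth    # size / bandwidth
        wait = now - arrival                             # how long this request has waited
        stretch = (wait + service_time) / service_time   # access latency / service time
        total_stretch[item_id] = total_stretch.get(item_id, 0.0) + stretch
    if not total_stretch:
        return None
    # Broadcast the item with the largest total current stretch
    return max(total_stretch, key=total_stretch.get)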

Invalidation propagation

To maintain cache consistency, periodically broadcasting invalidation reports (IRs) is an efficient method.

The adaptive cache invalidation algorithm AAW_AT is used.
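The slides do not describe AAW_AT itself; purely as an illustration of the general IR idea, under the simplifying assumption that a report is just the list of item ids updated since the previous report, a client could process it as follows.

# Generic invalidation-report (IR) processing sketch; this is not the AAW_AT algorithm.
def apply_invalidation_report(cache, report):
    """Drop cached copies of items that the IR marks as updated.

    cache  : dict mapping item_id -> cached entry
    report : iterable of item ids updated on the server since the last IR
    """
    for item_id in report:
        cache.pop(item_id, None)   # invalidate (evict) the stale copy, if present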

Cache replacement algorithm

In traditional cache management methods, access probability is the primary factor used to determine a cache replacement policy.

In an on-demand broadcast environment, three additional factors, namely data retrieval delay, data update frequency, and data item size, need to be considered in the design of cache replacement policies.

Design Guide

Observation (which object should be replaced first?)
• Lower access probability
• Lower miss penalty (shorter data retrieval delay)
• Higher update frequency
• Larger data size

The SAIU replacement policy

Stretch * Access-rate * Inverse Update-frequency (SAIU):

gain(i) = (Li * Ai) / (Si * Ui)

where Li is the data retrieval delay, Ai the access rate, Si the size, and Ui the update frequency of item i.

The algorithm removes the item with the minimum gain(i) value repeatedly until the free space is sufficient to accommodate the incoming item.
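A minimal sketch of this eviction loop, assuming each cached item carries the four estimates as attributes L, A, S, and U (the attribute names and the zero-division guard are my assumptions):

def saiu_gain(item):
    # gain(i) = (Li * Ai) / (Si * Ui); the small constant guards against a
    # zero update-frequency estimate (an assumption, not from the slides)
    return (item.L * item.A) / (item.S * max(item.U, 1e-9))

def make_room(cache, free_space, needed):
    """Evict minimum-gain items until an incoming item of size `needed` fits.

    cache      : list of cached item objects with attributes L, A, S, U
    free_space : currently free cache space, in the same unit as item sizes
    """
    while free_space < needed and cache:
        victim = min(cache, key=saiu_gain)   # the item with the smallest gain value
        cache.remove(victim)
        free_space += victim.S
    return free_space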

Implementation issues

Heap management
A min-heap data structure is used to implement SAIU. The time complexity is O(log N).
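For illustration (my own sketch), a min-heap keyed by gain keeps the eviction victim at the root, so insertions and removals cost O(log N):

import heapq

heap = []                                   # entries are (gain, item_id) pairs

def heap_insert(item_id, gain):
    heapq.heappush(heap, (gain, item_id))   # O(log N)

def pop_min_gain():
    return heapq.heappop(heap)              # smallest-gain entry first, O(log N)

When an item's gain estimate changes, a common choice is lazy deletion: push a fresh entry and skip stale ones when they are popped.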

Estimate of running parameters

An exponential aging method is used to estimate Ui, Li, and Ai. Initially, Ui and Li are set to 0.

Ui = αu / (tc - ti_lu) + (1 - αu) * Ui
Li = αs / (tc - ti_qt) + (1 - αs) * Li
Ai = αa / (tc - ti_la) + (1 - αa) * Ai

where tc is the current time, ti_lu the last update time, ti_qt the query issue time, and ti_la the last access time of item i. Some of these parameters are maintained on the server side and the others on the client side.
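As an illustration of one aging step for the access-rate estimate Ai (the function name, the zero-interval guard, and reusing the αa = 0.25 value from the evaluation section are assumptions of this sketch):

def update_access_rate(A_i, t_last_access, t_now, alpha_a=0.25):
    """One exponential-aging step for the access-rate estimate Ai of an item.

    A_i           : previous estimate of the access rate
    t_last_access : time of the item's previous access (ti_la)
    t_now         : current time (tc)
    """
    interval = max(t_now - t_last_access, 1e-9)        # guard against a zero interval
    return alpha_a / interval + (1 - alpha_a) * A_i    # Ai = αa/(tc - ti_la) + (1 - αa)*Ai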

Implementation issues (cont.)

Maintenance of cache item attributes

A cache item has six attributes to maintain, namely si, Ui, ti_lu, Li, Ai, and ti_la. The attributes for cached data items are stored in the client cache.

To avoid the starvation problem, a GAINmin value is maintained, which is the minimum gain(i) value among the cached items. When an item is to be evicted, its gain value is compared with GAINmin: if the gain is larger than GAINmin, its attributes are kept; if not, they are dropped.

Simulation model

A single server and a number of clients. Two types of item size distributions:

Increasing Distribution (INCRT): Sizei = Smin + [(i - 1) * (Smax - Smin + 1)] / DbSize,  i = 1, ..., DbSize
Decreasing Distribution (DECRT): Sizei = Smax - [(i - 1) * (Smax - Smin + 1)] / DbSize,  i = 1, ..., DbSize
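A small sketch of these two distributions (the integer rounding of the bracketed term and the example parameter values are my assumptions):

def item_size(i, db_size, s_min, s_max, increasing=True):
    """Size of item i (1-based) under the INCRT (increasing) or DECRT (decreasing) distribution."""
    step = ((i - 1) * (s_max - s_min + 1)) // db_size   # assumed integer rounding
    return s_min + step if increasing else s_max - step

# Hypothetical example: 1000 items with sizes between 1 and 10 (in some unit)
print(item_size(1, 1000, 1, 10))                     # 1  (smallest item under INCRT)
print(item_size(1000, 1000, 1, 10))                  # 10 (largest item under INCRT)
print(item_size(1, 1000, 1, 10, increasing=False))   # 10 (largest item under DECRT)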

Default system parameter settings

Client model

Server model

Performance evaluation

αa = αs = αu = 0.25. AAW_AT is used to propagate invalidation information and LTSF for on-demand broadcast scheduling.

Two variants are evaluated: SAIU(EST) and SAIU(IDL) (SAIU with estimated and with ideal parameter values, respectively).

Experiment 1: Impact of the cache size (INCRT)

Experiment 1: Impact of the cache size (DECRT)


Experiment 2: Impact of the broadcast bandwidth (INCRT)

Experiment 2: Impact of the broadcast bandwidth (DECRT)

Experiment 3: Influence of the item size (INCRT)

Experiment 3: Influence of the item size (DECRT)

Experiment 4: Influence of the update frequency (INCRT)

Experiment 4: Influence of the update frequency (DECRT)

Experiment 5: Algorithm complexity

Conclusion

SAIU performs substantially better than the well-known LRU and LRU-MIN policies, especially for clients that favor access to comparatively smaller data items.

Future work

They are incorporating the factor of cache validation delay.

They plan to conduct simulations for clients with heterogeneous access patterns.

They also plan to combine prefetching with the current scheme.

My comment

Unfortunately, the evaluation does not use real traces, so the results cannot be compared with those of other experiments.

It points out three additional factors that should be considered in a wireless environment, namely data retrieval delay, data update frequency, and data item size.

It also addresses cache invalidation and the starvation problem (attributes whose gain is greater than GAINmin are saved).
