
Page 1: Web Cache Behavior

Web Cache Behavior

The Laboratory of Computer Communication and Networking

Submitted by:

Lena Salman sihaya@t2

Vardit Zadik svardit@t2

Liraz Perlmooter slirazp@t2

Page 2: Web Cache Behavior

Outline

Introduction – Web Caching Motivation
Project flow design
Project Modules
Prowgen – producing the requests
Network topology
WebCache Tool
NS simulation part
Statistics and graphs of the simulation results
Evaluation of the cache behavior and the different algorithms

Page 3: Web Cache Behavior

Motivation

The World-Wide Web has grown tremendously in the past few years to become the most prevalent source of traffic on the Internet today.

One solution that could help relieve these problems of congestion and overloaded web servers is web caching.

Page 4: Web Cache Behavior

Motivation (2)

A web proxy cache sits between Web servers and clients, and stores frequently accessed web objects.

Having received a request from a client, the proxy attempts to fulfill the request from among the files stored in the proxy's cache.

If the requested file is found (a cache hit), the proxy can immediately respond to the client's request. If the requested file is not found (a cache miss), the proxy then attempts to retrieve the file from its original location.

Once the proxy gets the file from the original server, it can satisfy the request made by the client.

Page 5: Web Cache Behavior

Proxy Server

client

Server A

Server B

Server C

Server D

Server E

Internet

Web Caching Illustration

Page 6: Web Cache Behavior

Motivation (3)

When the cache is full, replacement decisions are made regarding which file to evict from the cache.

The pruning algorithm is mainly cache-management dependent, and plays a major role in reducing both latency and network traffic on the Internet.

Page 7: Web Cache Behavior

Motivation (4)

The cache concept helps the end user, the service provider and the content provider by reducing server load, alleviating network congestion, reducing bandwidth consumption and reducing network latency.

Page 8: Web Cache Behavior

What is a Web proxy cache?

An intermediary between Web clients (browsers) and Web servers.

Stores local copies of popular documents; forwards requests to servers only if needed.

Page 9: Web Cache Behavior

The project purpose:

Simulate the web cache behavior of a proxy, and measure the hit rate and the cost of different cache-pruning algorithms.

Simulate a network, and run the simulator to estimate the time the misses take.

Page 10: Web Cache Behavior

Project Flow:

Prowgen – generates the requests

Prowgen Parsing – creates a database of requests

WebCache Tool – simulates cache behavior (LRU/LFU/HYB/FIFO)

NS simulator – runs the miss requests on the network (10, 50, 100 servers)

Statistics and conclusions from the results

Page 11: Web Cache Behavior

Prowgen:

Page 12: Web Cache Behavior

Prowgen Part

ProWGen uses mathematical models to capture the salient characteristics of web proxy workload, as defined in previous studies of web proxy servers.

The main purpose of ProWGen is to generate synthetic workload for evaluating proxy caching techniques. This approach reduces the complexity of the models, as well as the time and space required for the generation and storage of the synthetic workload.

The following parameters can be changed in ProWGen:
1. One-time referencing – set to 50% of the files
2. File popularity – 0.75 (medium distribution)
3. File size distribution – 1.4 (lighter tail index)
4. Correlation between file popularity and file size – we used normal correlation
5. Temporal locality – used the static configuration, which seems to have more temporal locality

Page 13: Web Cache Behavior

Network Topology

PROXY

10% of the network – closest servers – 3-4 hops to the proxy

20% of the network – medium servers – 5-7 hops to the proxy

70% of the network – "the rest of the world" – 10-15 hops to the proxy

Page 14: Web Cache Behavior

Division of files to servers:

10,000 file requests are related to servers from group 1 (10% – the closest servers)

20,000 file requests are related to servers from group 2 (20% – the medium-distance servers)

70,000 file requests are related to group 3 (70% – "the rest of the internet")

Division is done with the help of a hash function, so the division won't be influenced by the order of the files in the ProWGen input.
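The hash-based division can be sketched as follows. This is a hypothetical reconstruction – the multiplicative hash and the exact boundaries are assumptions, not the project's actual code – showing only the idea that group assignment depends on the file name alone, not on request order:

```cpp
// Hypothetical sketch: hash the numeric file name into [0,100) and split it
// 10% / 20% / 70% as on the slide. The multiplier is an assumption; the
// project's actual hash function is not shown in the slides.
int fileGroup(unsigned long long fileName) {
    unsigned long long h = (fileName * 2654435761ULL) % 100;
    if (h < 10) return 1;   // closest servers (10%)
    if (h < 30) return 2;   // medium-distance servers (20%)
    return 3;               // "the rest of the internet" (70%)
}
```

Because the group depends only on the hashed name, the same file always maps to the same server group regardless of where it appears in the ProWGen trace.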

Page 15: Web Cache Behavior

ProWGen Output: The output of ProWGen is a list of requests (file_name file_size):

9209 403
14722 11359
9209 403
34733 6544
4106 4041
24315 3653
5444 1220
29904 5266
8838 1485
18058 1570
33151 3577
24416 15669
6701 4075
18026 7172

Page 16: Web Cache Behavior

WebCache TOOL:

Page 17: Web Cache Behavior

Databases:

File: size, name, server, … (the requests, and the files in the cache)
Server: latency, bandwidth (the servers)
WebCache: list of files in cache, cache_size

Cache algorithms: LRU, FIFO, LFU, HYB

Page 18: Web Cache Behavior

Database
We have 3 classes: class File, class Server, class Cache

Page 19: Web Cache Behavior

class File
This class contains the file information:

double name : name of the file
double size : size of the file
int server : any value between 0 and num_serv is valid
double prio : the priority of the file
int nref : number of references to the document since it last entered the cache

We used this DB for the list we receive from ProWGen, and for the list of files that are in the cache.

Page 20: Web Cache Behavior

class Server
This class contains the server information:

double lat : the latency to this server
int band : the bandwidth to this server

We use this DB to contain the list of servers.

Page 21: Web Cache Behavior

class Cache
This class contains the cache information:

list<File> FileList : a list of files that are in the cache
int CacheFreeSpace : the remaining space in the cache

We use this class to simulate the cache itself.
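The three classes above might look like this in C++. The field names follow the slides; the layout is a sketch, not the project's actual source:

```cpp
#include <list>

// Sketch of the three database classes described on these slides.
struct File {
    double name;    // name of the file (ProWGen emits numeric names)
    double size;    // size of the file
    int    server;  // any value between 0 and num_serv is valid
    double prio;    // the priority of the file (used by LFU/HYB)
    int    nref;    // references since the document last entered the cache
};

struct Server {
    double lat;     // the latency to this server
    int    band;    // the bandwidth to this server
};

struct Cache {
    std::list<File> FileList;   // files currently in the cache
    int CacheFreeSpace;         // the remaining space in the cache
};
```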

Page 22: Web Cache Behavior

Evaluating the cache behavior
We used the following modules:

Prowgen, cacheLRU, cacheLFU, cacheHYB, cacheFIFO

Page 23: Web Cache Behavior

cacheLRU
An implementation of the LRU algorithm using an STL list.

The main idea is:
If the file is in the cache – HIT:
1. Move the file to the beginning of the list
Otherwise – MISS:
1. "Make room" for the requested file (by deleting the last files in the list)
2. Print a request to the ns file for the requested file
3. Insert the file into the cache DB (at the beginning of the list)
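The steps above can be sketched with an STL list, as the slide describes. This is a minimal sketch: the struct name and method signature are assumptions, and the "print a request to the ns file" step is omitted here:

```cpp
#include <algorithm>
#include <list>

struct File {
    double name;   // numeric file name, as ProWGen emits
    double size;   // file size
};

class LruCache {
    std::list<File> files;   // front = most recently used
    double freeSpace;        // remaining room in the cache
public:
    explicit LruCache(double capacity) : freeSpace(capacity) {}

    // Returns true on a HIT (file moved to the front of the list); on a MISS,
    // evicts tail files until the new file fits, then inserts it at the front.
    // (The real tool would also print the miss to the ns input file here.)
    bool request(const File& f) {
        auto it = std::find_if(files.begin(), files.end(),
                               [&](const File& x) { return x.name == f.name; });
        if (it != files.end()) {
            files.splice(files.begin(), files, it);  // HIT: move to front
            return true;
        }
        while (freeSpace < f.size && !files.empty()) {  // MISS: make room
            freeSpace += files.back().size;
            files.pop_back();
        }
        files.push_front(f);
        freeSpace -= f.size;
        return false;
    }
};
```

`std::list::splice` moves the hit file to the front without copying it, which is why an STL list is a natural fit for LRU.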

Page 24: Web Cache Behavior

cacheLRU
Replaces the least recently used page, with the assumption that the page that has not been referenced for the longest time will not be very likely to be referenced in the future.
Each newly fetched page is put at the head of the list.

The tail page is deleted when storage is exceeded.
Performs better than LFU in practice.

Used in today's caches (e.g., the Squid Web Proxy Cache).

Page 25: Web Cache Behavior

cacheLFU
An implementation of the LFU algorithm using an STL list.
The main idea is:
If the file is in the cache – HIT:
1. Update the file priority (increment by 1)
2. Update the file's place in the list according to its prio
Otherwise – MISS:
1. "Make room" for the requested file (by deleting the last files in the list)
2. Print a request to the ns file for the requested file
3. Initialize the file priority to 1
4. Insert the file into the cache DB according to its prio

Page 26: Web Cache Behavior

cacheLFU
Replaces the least frequently used page, with the assumption that the page that has been used least often will not be likely to be referenced again in the future.

Optimal replacement policy if all pages have the same size and page popularity does not change.

In practice it has disadvantages:
slow to react to popularity changes
needs to keep statistics (a counter) for every page
does not consider page size

Page 27: Web Cache Behavior

cacheHYB
An implementation of the HYB algorithm using an STL list.
The main idea is:
If the file is in the cache – HIT:
1. Update the file priority according to the algorithm
2. Update the file's place in the list according to its prio
Otherwise – MISS:
1. "Make room" for the requested file (by deleting the last files in the list)
2. Print a request to the ns file for the requested file
3. Update the file priority according to the algorithm
4. Insert the file into the cache DB according to its prio

Page 28: Web Cache Behavior

cacheHYBcacheHYB

The three factors which Hybrid takes into account are size, transfer time, and number of references.

The Hybrid algorithm offers the best combination of guaranteed performance for frequently used objects and overall cache size.

Drawback: needs to keep statistics (a counter and other values) for every page.

Page 29: Web Cache Behavior

cacheHYB
WB = 8KB and WN = 0.9 for the HYB algorithm (100 servers)
WB = 1 and WN = 1 for the HYB algorithm (50 servers)

HYB selects for replacement the document with the lowest value of the following expression:

Weight = ((nref ^ WN) * (latency + WB/bandwidth)) / FileSize

Therefore, a file is not likely to be removed if the expression above is large, which occurs if the file has been referenced frequently and if the document size is small.
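As a sketch, the weight can be computed directly from the File and Server fields. The function name is ours, and the units (latency in seconds, bandwidth in bytes/second) are assumptions; only the formula itself comes from the slide:

```cpp
#include <cmath>

// The HYB weight from the slide:
//   Weight = (nref^WN * (latency + WB/bandwidth)) / FileSize
// The document with the LOWEST weight is selected for replacement, so a small,
// frequently referenced, expensive-to-fetch file tends to stay in the cache.
double hybWeight(int nref, double latency, double bandwidth,
                 double fileSize, double WB, double WN) {
    return std::pow(static_cast<double>(nref), WN)
           * (latency + WB / bandwidth) / fileSize;
}
```

With the 100-server parameters (WB = 8KB, WN = 0.9), a small file referenced ten times outweighs a large file referenced once from the same server, matching the intuition on the slide.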

Page 30: Web Cache Behavior

cacheHYB
The constant WB, whose units are bytes, is used for setting the relative importance of the connection time versus the connection bandwidth.

The constant WN, which is dimensionless, is used for setting the relative importance of nref versus size. As WN → 0, the emphasis is placed upon size.

If WN = 0, nref is not taken into account at all. If WN > 1, then the emphasis is placed more greatly upon nref than size.

Page 31: Web Cache Behavior

cacheFIFO
An implementation of the FIFO algorithm using an STL list.
The main idea is:
If the file is in the cache – HIT: nothing is updated
Otherwise – MISS:
1. "Make room" for the requested file (by deleting the last files in the list)
2. Print a request to the ns file for the requested file
3. Insert the file into the cache DB (at the beginning of the list)

Page 32: Web Cache Behavior

cacheFIFO
Replaces the page that has been cached for the longest time, with the assumption that old cache entries will not be referenced again in the future.

This is regardless of the frequency with which the page is requested, the size of the page, and the cost of bringing it back.

Since it does not take the frequency of page requests into consideration, this policy can result in the same popular page being brought into the cache over and over again.

Page 33: Web Cache Behavior

NS implementation:

Page 34: Web Cache Behavior

Network Topology

PROXY

10% of the network – closest servers – 3-4 hops to the proxy – 10 MB

20% of the network – medium servers – 5-7 hops to the proxy – 2 MB

70% of the network – "the rest of the world" – 10-15 hops to the proxy – 2 MB

Page 35: Web Cache Behavior

Network Simulation:

Latency between hops – 10 ms
The latency to each server is decided by:
the group it belongs to – 10%, 20%, 70%
inside the group – it is distributed uniformly:

• Group 1 – 3-4 hops * 10 ms
• Group 2 – 5-7 hops * 10 ms
• Group 3 – 10-15 hops * 10 ms

The algorithm responsible for the distribution uses a counter and a modulo calculation.
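The per-group latency assignment above can be sketched as follows. The slides describe a counter-and-modulo scheme; a seeded uniform RNG stands in for it here, and the function name is ours:

```cpp
#include <random>

// Sketch of the latency assignment: draw a hop count uniformly within the
// group's range and multiply by the 10 ms per-hop latency from the slide.
double serverLatencyMs(int group, std::mt19937& rng) {
    int lo = 10, hi = 15;                 // group 3: "the rest of the world"
    if (group == 1) { lo = 3; hi = 4; }   // closest servers
    if (group == 2) { lo = 5; hi = 7; }   // medium servers
    std::uniform_int_distribution<int> hops(lo, hi);
    return hops(rng) * 10.0;              // 10 ms per hop
}
```

So a group-1 server ends up 30-40 ms from the proxy, while a group-3 server is 100-150 ms away.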

Page 36: Web Cache Behavior

Network Simulation:

The bandwidth is also decided, depending on the group the server belongs to:

Group 1 – 10 MB (closest servers)
Groups 2 & 3 – 2 MB

Page 37: Web Cache Behavior

Connection Implementation:

TCP agents are created:
Agent/TCP/Newreno for the server
• implements the TCP New Reno protocol
Agent/TCPSink for the proxy (the receiver)

On top of the agents:
an FTP/Application was attached to the TCP agent.

Page 38: Web Cache Behavior

Requests:

When a miss has occurred in the CacheTool part, it will write to the NS input file a fetch request from the relevant server.

Those requests will be issued at varying times. At the particular time, the server will start sending the file.

Page 39: Web Cache Behavior

Requests Times:

Request times are distributed exponentially using the random generator implemented in ns:
Average – 0.5
Seed – 0 (default)

Each file request will be treated within this time, counted from the previous request.
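The request-time generation above amounts to cumulative exponential gaps. In this sketch a seeded std::mt19937 stands in for the ns random generator (which the simulator provides itself), with the mean gap of 0.5 s from the slide:

```cpp
#include <random>
#include <vector>

// Sketch: inter-request gaps are exponentially distributed with mean 0.5 s,
// and each request time is counted from the previous request.
std::vector<double> requestTimes(int n, double meanGap = 0.5, unsigned seed = 0) {
    std::mt19937 rng(seed);
    std::exponential_distribution<double> gap(1.0 / meanGap);  // rate = 1/mean
    std::vector<double> t(n);
    double now = 0.0;
    for (int i = 0; i < n; ++i) {
        now += gap(rng);   // gap from the previous request
        t[i] = now;
    }
    return t;
}
```

The resulting times are strictly increasing, and over many requests the average gap converges to the configured 0.5 s.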

Page 40: Web Cache Behavior

Requests:

When a miss is indicated by the cache-managing algorithm, it will place a request to the simulator to fetch the file:
NS will decide at which time to fetch this file (at the request time decided randomly)
When this file request is completed (all the acks have been received), the done procedure will be called.


Page 42: Web Cache Behavior

done procedure:

The done procedure is called every time a send command finishes.
It updates the timer for this request – counting how long the request took.
The duration of the request is counted as the difference between the beginning time and the end time (when done is called).
It writes this time to the statistics file.

Page 43: Web Cache Behavior

Screenshots:

Page 44: Web Cache Behavior
Page 45: Web Cache Behavior
Page 46: Web Cache Behavior

Statistics and Evaluation:

Page 47: Web Cache Behavior

Statistics:

3 types of network: 10, 50, 100 servers
LFU, LRU, HYB, FIFO algorithms

Hit count
Byte hit count

The middle 4,000 requests are run over the simulator; the total time for the misses among those requests is counted.

Page 48: Web Cache Behavior

Conclusions:

Cache sizes from 1MB to 256MB are tested for the different algorithms.

Page 49: Web Cache Behavior

Performance metrics:

Page 50: Web Cache Behavior

Hit Ratio

The cache hit ratio is the number of requests satisfied by the cache divided by the total number of requests from the user.

The higher the hit ratio, the better the replacement policy is, because fewer requests are forwarded to the web server, thus reducing network traffic.
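The hit ratio above (and the byte-hit ratio used later in the evaluation) reduce to simple divisions over the collected counters; a sketch, with function names of our choosing:

```cpp
// The two evaluation metrics as defined in this section: hit ratio over
// request counts, byte-hit ratio over byte counts. Both guard against an
// empty workload.
double hitRatio(long hits, long totalRequests) {
    return totalRequests == 0 ? 0.0
                              : static_cast<double>(hits) / totalRequests;
}

double byteHitRatio(double bytesServedFromCache, double totalBytesRequested) {
    return totalBytesRequested == 0 ? 0.0
                                    : bytesServedFromCache / totalBytesRequested;
}
```

Note the two can diverge: a policy that keeps many small files scores a high hit ratio but may still have a low byte-hit ratio, which is exactly what the HYB results below show.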

Page 51: Web Cache Behavior

10 SERVERS - Hit Rate

[Graph: hit rate vs. cache size (MB) for LFU, LRU, HYB and FIFO]

Page 52: Web Cache Behavior

50 SERVERS - Hit Rate

[Graph: hit rate vs. cache size (MB) for LFU, LRU, HYB and FIFO]

Page 53: Web Cache Behavior

100 SERVERS - Hit Rate

[Graph: hit rate vs. cache size (MB) for LFU, LRU, HYB and FIFO]

Page 54: Web Cache Behavior

Conclusions

LRU, FIFO and HYB seem to get close results.
HYB seems to be a little lower, maybe since it takes into account the number of references, which doesn't seem to be an efficient idea.

LFU is the worst algorithm.
At 256MB, all the algorithms seem to reach the same results, since the cache size seems to be big enough to contain a reasonable number of files, for an optimal number of misses.

Page 55: Web Cache Behavior

Byte-Hit Ratio

The ratio of the total bytes satisfied by the cache divided by the total bytes transferred to the user.

A higher byte hit ratio means a lower volume of data flowing between the proxy and the web server, thus reducing network traffic.

Page 56: Web Cache Behavior

10 SERVERS - Byte Hit Rate

[Graph: byte hit rate vs. cache size (MB) for LFU, LRU, HYB and FIFO]

Page 57: Web Cache Behavior

50 SERVERS - Byte Hit Rate

[Graph: byte hit rate vs. cache size (MB) for LFU, LRU, HYB and FIFO]

Page 58: Web Cache Behavior

100 SERVERS - Byte Hit Rate

[Graph: byte hit rate vs. cache size (MB) for LFU, LRU, HYB and FIFO]

Page 59: Web Cache Behavior

Conclusions

LRU and FIFO seem to get good results again.
HYB seems to get lower results – since it prefers to evict bigger files, it will obviously achieve a lower byte-hit rate. It "pays" more for each miss.

LFU seems to be worse than LRU and FIFO.
Again, at 256MB all the algorithms achieve similar results.

Page 60: Web Cache Behavior

NS Latency

The simulated time that it takes to fetch the files from the internet.

The lower the latency, the better the algorithm is at reducing network traffic, thus taking load off the internet. Here we are reducing both latency and traffic.

Page 61: Web Cache Behavior

50 SERVERS - NS Latency Time

[Graph: NS latency time vs. cache size (MB) for LFU, LRU, HYB and FIFO]

Page 62: Web Cache Behavior

100 SERVERS - NS Latency Time

[Graph: NS latency time vs. cache size (MB) for LFU, LRU, HYB and FIFO]

Page 63: Web Cache Behavior

Conclusions

LFU, FIFO and LRU seem to get very close results on the files that ran on the simulator.

HYB seems to get worse results:
this might be because of parameters not suited to the generated workload,
or perhaps because of HYB's preference for small files, which causes more time to bring in the larger files.

Page 64: Web Cache Behavior

Conclusions

LFU does not achieve good results in either Hit Ratio or Byte Hit Ratio. This implies that the assumption that the user will request the same frequently requested document over and over again is not a very good one.

As for FIFO performance, the results were surprisingly good, taking into account its simple and unsophisticated implementation.

Page 65: Web Cache Behavior

Conclusions

HYB achieves a good hit rate, but achieves neither a good byte-hit rate nor a low latency time.

Since HYB prefers files which have a high reference count and are relatively small, the byte-hit ratio is not expected to be high.

As for network latency, it should be dependent on the network, and more parameters should be tested.