
SSUM: Smart Server Update Mechanism for Maintaining Cache Consistency in Mobile Environments

Khaleel Mershad and Hassan Artail, Senior Member, IEEE

Abstract—This paper proposes a cache consistency scheme based on a previously proposed architecture for caching database data in MANETs. The original scheme for data caching stores the queries that are submitted by requesting nodes in special nodes, called query directories (QDs), and uses these queries to locate the data (responses) that are stored in the nodes that requested them, called caching nodes (CNs). The consistency scheme is server-based, in which control mechanisms are implemented to adapt the process of caching a data item and updating it by the server to its popularity and its data update rate at the server. The system implements methods to handle disconnections of QD and CN nodes from the network and to control how the cache of each node is updated or discarded when it returns to the network. Estimates for the average response time of node requests and the average node bandwidth utilization are derived in order to determine the gains (or costs) of employing our scheme in the MANET. Moreover, ns2 simulations were performed to measure several parameters, like the average data request response time, cache update delay, hit ratio, and bandwidth utilization. The results demonstrate the advantage of the proposed scheme over existing systems.

Index Terms—Data caching, cache consistency, invalidation, server-based approach, MANET.


1 INTRODUCTION

In a mobile ad hoc network (MANET), data caching is essential as it reduces contention in the network, increases the probability of nodes getting desired data, and improves system performance [1], [5]. The major issue that faces cache management is the maintenance of data consistency between the client cache and the server [5]. In a MANET, all messages sent between the server and the cache are subject to network delays, thus impeding consistency by download delays that are considerably noticeable and more severe in wireless mobile devices.

All cache consistency algorithms are developed with the same goal in mind: to increase the probability of serving data items from the cache that are identical to those on the server. A large number of such algorithms have been proposed in the literature, and they fall into three groups: server invalidation, client polling, and time to live (TTL). With server invalidation, the server sends a report upon each update to the client. Two examples are the Piggyback server invalidation [19] and the Invalidation report [31] mechanisms. In client polling, like the Piggyback cache validation of [20], a validation request is initiated according to a schedule. If the copy is up to date, the server informs the client that the data have not been modified; else the update is sent to the client. Finally, with TTL algorithms, a server-assigned TTL value (e.g., T) is stored alongside each data item d in the cache. The data d are considered valid until T time units pass since the cache update. Usually, the first request for d submitted by a client after the TTL expiration will be treated as a miss and will cause a trip to the server to fetch a fresh copy of d. Many algorithms were proposed to determine TTL values, including the fixed TTL approach [17], adaptive TTL [11], and Squid's LM-factor [29]. TTL-based consistency algorithms are popular due to their simplicity, sufficiently good performance, and flexibility to assign TTL values for individual data items [4]. However, TTL-based algorithms, like client polling algorithms, are weakly consistent, in contrast to server invalidation schemes that are generally strongly consistent. According to [11], with strong consistency algorithms, users are served strictly fresh data items, while with weak algorithms, there is a possibility that users may get inconsistent (stale) copies of the data.

This work describes a server-based scheme implemented on top of the COACS caching architecture we proposed in [2]. In COACS, elected query directory (QD) nodes cache submitted queries and use them as indexes to data stored in the nodes that initially requested them (CN nodes). Since COACS did not implement a consistency strategy, the system described in this paper fills that void and adds several improvements: 1) enabling the server to be aware of the cache distribution in the MANET, 2) making the cached data items consistent with their version at the server, and 3) adapting the cache update process to the data update rate at the server relative to the request rate by the clients. With these changes, the overall design provides a complete caching system in which the server sends to the clients selective updates that adapt to their needs and reduces the average query response time.

Next, Section 2 describes the proposed system while Section 3 analyzes its performance using key measures. Section 4 discusses the experimental results, and Section 5

778 IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 9, NO. 6, JUNE 2010

The authors are with the Electrical and Computer Engineering Department, American University of Beirut, PO Box 11-0236, Riad El-Solh, Beirut 1107-2020, Lebanon. E-mail: {kwm03, hartail}@aub.edu.lb.

Manuscript received 27 Mar. 2009; revised 15 Sept. 2009; accepted 12 Oct. 2009; published online 11 Jan. 2010. For information on obtaining reprints of this article, please send e-mail to: [email protected], and reference IEEECS Log Number TMC-2009-03-0102. Digital Object Identifier no. 10.1109/TMC.2010.18.

1536-1233/10/$26.00 © 2010 IEEE Published by the IEEE CS, CASS, ComSoc, IES, & SPS


reviews some previous cache consistency techniques before concluding the paper in Section 6.

2 SMART SERVER UPDATE MECHANISM (SSUM)

SSUM is a server-based approach that avoids many issues associated with push-based cache consistency approaches. Specifically, traditional server-based schemes are not usually aware of what data items are currently cached, as they might have been replaced or deleted from the network due to node disconnections. Also, if the server data update rate is high relative to the nodes' request rate, unnecessary network traffic would be generated, which could increase the packet drop rate and cause longer delays in answering node queries. SSUM reduces wireless traffic by tuning the cache update rate to the request rate for the cached data.

2.1 Basic Operations

Before detailing the operations of SSUM, we list in Table 1 the messages used in the COACS architecture, as we refer to them in the paper. Given that no consistency mechanism was implemented in COACS, it was necessary to introduce four additional messages.

In SSUM, the server autonomously sends data updates to the CNs, meaning that it has to keep track of which CNs cache which data items. This can be done using a simple table in which an entry consists of the id of a data item (or query) and the address of the CN that caches the data. A node that desires a data item sends its request to its nearest QD. If this QD finds the query in its cache, it forwards the request to the CN caching the item, which, in turn, sends the item to the requesting node (RN). Otherwise, it forwards it to its nearest QD that has not received the request yet. If the request traverses all QDs without being found, a miss occurs and it gets forwarded to the server, which sends the data item to the RN. In the latter case, after the RN receives the confirmation from the last traversed QD that it has cached the query, it becomes a CN for this data item, associates the address of this QD with the item, and then sends a Server Cache Update Packet (SCUP) to the server, which, in turn, adds the CN's address to the data item in its memory. This setup allows the server to send updates to the CNs directly whenever the data items are updated. Fig. 1 illustrates a few data request and update scenarios that are described below.

In the figure, the requesting nodes (RNs) submit queries to their nearest QDs, as shown in the cases of RN1, RN2, and RN3. The query of RN1 was found in QD1, and so the latter forwarded the request to CN1, which returned the data directly to the RN. However, the query of RN2 was not found in any of the QDs, which prompted the last searched QD (QD1) to forward the request to the server, which, in turn, replied to RN2, which became a CN for this data afterward. The figure also shows data updates (key-data pairs) sent from the server to some of the CNs.
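The request path just described (QD hit forwarded to a CN; a full QD traversal miss forwarded to the server, after which the RN becomes a CN and announces itself with an SCUP) can be sketched as follows. This is a minimal illustration of the control flow only; the class and message names are simplifications, not the actual COACS packet formats.

```python
# Illustrative sketch of the COACS/SSUM request path (not the real packet formats).
class QueryDirectory:
    def __init__(self, name):
        self.name = name
        self.index = {}  # query -> address of the CN caching the response

class Network:
    def __init__(self, qds, server_db):
        self.qds = qds          # ordered list of QDs a request may traverse
        self.server_db = server_db
        self.cn_caches = {}     # CN address -> {query: data}
        self.server_table = {}  # query -> CN address (lets the server push updates)

    def request(self, rn_addr, query):
        for qd in self.qds:                      # forwarded QD to QD until found
            cn_addr = qd.index.get(query)
            if cn_addr is not None:              # hit: the CN answers the RN
                return self.cn_caches[cn_addr][query]
        # miss: last QD forwards to the server; the RN becomes a CN for this item
        data = self.server_db[query]
        self.cn_caches.setdefault(rn_addr, {})[query] = data
        self.qds[-1].index[query] = rn_addr      # last traversed QD caches the query
        self.server_table[query] = rn_addr       # SCUP: server now knows the CN
        return data

    def server_update(self, query, new_data):
        cn_addr = self.server_table.get(query)
        if cn_addr is not None:                  # push the fresh copy to the CN
            self.cn_caches[cn_addr][query] = new_data
```

After one miss, the server holds a pointer to the new CN and can push every subsequent update directly, so later requests for the same query are hits served entirely inside the MANET.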


TABLE 1
Packets Used in COACS After Integrating SSUM into It

Fig. 1. Scenarios for requesting and getting data in the COACS architecture.


2.2 Dealing with Query Replacements and Node Disconnections

A potential issue concerns the server sending the CN updates for data that have been deleted (replaced), or sending the data out to a CN that has gone offline. To avoid this and reduce network traffic, cache updates can be stopped by sending the server Remove Update Entry Packets (RUEPs). This could occur in several scenarios. For example, if a CN leaves the network, the QD that first tries to forward it a request and fails will set the addresses of all queries whose items are cached by this unreachable CN in its cache to -1, and sends an RUEP to the server containing the IDs of these queries. The server, in turn, changes the address of that CN in its cache to -1 and stops sending updates for these items. Later, if another node A requests and then caches one of these items, the server, upon receiving an SCUP from A, will associate A with this data item. Also, if a CN runs out of space when trying to cache a new item i_n, it applies a replacement mechanism to replace an existing item i_d with i_n and instructs the QD that caches the query associated with i_d to delete its entry. This causes the QD to send an RUEP to the server to stop sending updates for i_d in the future.
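The server-side bookkeeping behind SCUP and RUEP handling amounts to a per-item pointer that is set, invalidated with the -1 sentinel, and re-set. A minimal sketch, with illustrative method names of my own choosing:

```python
# Sketch of the server's update table and RUEP/SCUP handling (names illustrative).
UNREACHABLE = -1

class ServerTable:
    def __init__(self):
        self.cn_of = {}  # query id -> CN address, or UNREACHABLE

    def handle_scup(self, query_id, cn_addr):
        # a node cached this item and announced itself as its CN
        self.cn_of[query_id] = cn_addr

    def handle_ruep(self, query_ids):
        # a QD reported these items unreachable (CN left, or entry replaced)
        for qid in query_ids:
            self.cn_of[qid] = UNREACHABLE

    def push_targets(self, query_id):
        # updates are pushed only while a live CN is associated with the item
        cn = self.cn_of.get(query_id, UNREACHABLE)
        return [] if cn == UNREACHABLE else [cn]
```

The sentinel keeps the entry in place so that a later SCUP from another node can revive it without the server re-learning the query.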

If a caching node CN_d returns to the MANET after disconnecting, it sends a Cache Invalidation Check Packet (CICP) to each QD that caches queries associated with items held by this CN. A QD that receives a CICP checks for each item to see if it is cached by another node and then sends a Cache Invalidation Reply Packet (CIRP) to CN_d containing all items not cached by other nodes. CN_d then deletes from its cache those items whose IDs are not in the CIRP but were in the CICP. After receiving a CIRP from all QDs to which it sent a CICP and deleting nonessential data items from its cache, CN_d sends a CICP containing the IDs of all queries with data remaining in its cache to the server along with their versions. In the meanwhile, if CN_d receives a request from a QD for an item in its cache, it adds the request to a waiting list. The server then creates a CIRP, includes in it fresh copies of the outdated items, and sends it to CN_d, which, in turn, updates its cache and answers all pending requests.

Finally, and as described in [2], QD disconnections and reconnections do not alter the cache of the CNs, and hence, the pointers that the server holds to the CNs remain valid.

2.3 Adapting to the Ratio of Update Rate and Request Rate

SSUM suspends server updates when it deems that they are unnecessary. The mechanism requires the server to monitor the rate of local updates, R_u, and the rate of RN requests, R_r, for each data item d_i. Each CN also monitors these values for each data item that it caches. Whenever a CN receives an update from the server, it calculates R_u/R_r and compares it to a threshold σ. If this ratio is greater than or equal to σ, the CN will delete d_i and the associated information from its cache and will send an Entry Deletion Packet (EDP) to the QD (say, QD_d) that caches query q_i. The CN includes in the header of the EDP a value for R_u, which tells QD_d that d_i is being removed due to its high update-to-request ratio. Normally, when a QD gets an EDP, it removes the cached query from its cache, but here, the nonzero value of R_u in the EDP causes QD_d to keep the query cached, but with no reference to a CN. Next, QD_d will ask the server to stop sending updates for d_i. Afterward, when QD_d receives a request from an RN node that includes q_i, it forwards it to the server along with a DONT_CACHE flag in the header, to be later passed in the reply (which includes the results) to the RN. Under normal circumstances in COACS, when an RN receives a data item from the server in response to a query it had submitted, it assumes the role of a CN for this item and will ask the nearest QD to cache the query. The DONT_CACHE flag instructs the RN to treat the result as if it were coming from the cache and not become a CN for it. Now, at the server, each time an update for q_i occurs and a new R_u/R_r is computed, if this ratio falls below a second threshold, τ (τ < σ), the server will reply to the RN with a DREP that includes the CACHE_NEW flag in the header. Upon receiving the DREP, the RN sends a QCRP with the CACHE_NEW flag to its nearest QD. If this QD caches the query of this item (with -1 as its CN address), it sets its address to its new CN, else it forwards the request to its own nearest QD. If the QCRP traverses all QDs without being processed (implying that the QD caching this item has gone offline), the last QD at which the QCRP arrives will cache the query with the new CN address.

By appropriately selecting the values of σ and τ, the system can reduce unnecessary network traffic. The processing time of q_i will suffer, though, while R_u/R_r remains above τ after it had passed σ, since QD_d will be sending q_i to the server each time it receives it. However, the two thresholds allow for favoring bandwidth consumption over response time, or vice versa. This makes SSUM suitable for a variety of mobile computing applications: a large σ may be used when disconnections are frequent and data availability is important, while a low σ could be used in congested environments where requests for data are infrequent or getting fresh data is not critical. Fig. 2 summarizes the interactions among the entities of the system.
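The two-threshold rule above is a hysteresis band: caching is dropped once R_u/R_r reaches σ and re-established only after the ratio falls below the lower threshold τ. A minimal sketch of that decision, with hypothetical action names standing in for the EDP/CACHE_NEW message exchanges:

```python
# Sketch of SSUM's two-threshold adaptation: suspend updates when Ru/Rr >= sigma,
# re-cache only when the ratio drops below tau (tau < sigma). The gap between the
# thresholds prevents oscillation around a single cutoff.
def update_decision(ru, rr, sigma, tau, currently_cached):
    ratio = ru / rr if rr > 0 else float("inf")
    if currently_cached and ratio >= sigma:
        return "DELETE_AND_SUSPEND"   # CN sends an EDP; QD asks server to stop updates
    if not currently_cached and ratio < tau:
        return "CACHE_NEW"            # server flags the next reply for re-caching
    return "NO_CHANGE"
```

Note that an item whose ratio sits between τ and σ keeps whatever state it is in, which is exactly the interval in which QD_d forwards requests to the server.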

2.4 Accounting for Latency in Receiving Server Updates

Given the different processes running at the server, and since it sends the updates to the CNs via unicasts, there may be a time gap between when an update occurs and when the CN actually receives the updated data item d. Hence, if the CN gets a request for d during this time, it will deliver a stale copy of d to the RN. Our design uses the time stamp that the server sends with each update in an attempt to mitigate this issue. To explain this, suppose that the time stamp sent with d is t_s and the time of receiving d by the CN is t_c. Upon getting an update, the CN checks if it had served any RN a copy of d from its cache in the past t_c - t_s milliseconds. If that is the case, the CN sends a new DREP to the RN, but now it includes the fresh copy of d.
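The CN-side check reduces to scanning a per-item serve log for requests answered inside the window [t_s, t_c]. A minimal sketch under the synchronized-clock assumption discussed next; the log structure is an assumption of this illustration:

```python
# Sketch of the CN-side stale-copy correction: on receiving an update stamped t_s
# at local time t_c, find every RN that was served from cache during [t_s, t_c]
# so a fresh DREP can be re-sent to it.
def rns_to_refresh(serve_log, t_s, t_c):
    # serve_log: list of (rn_address, serve_time) entries for this data item
    return [rn for rn, t in serve_log if t_s <= t <= t_c]
```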

The above solution assumes that the clocks of the nodes in the MANET and that of the server are synchronized. This assumption is realistic given that node clock synchronization is part of the MAC layer protocol, as specified by the IEEE 802.11 standards [15]. In particular, IEEE 802.11 specifies a Timing Synchronization Function (TSF) through which nodes synchronize their clocks by broadcasting their timing information using periodic beacons. Since the Access



Point (AP) is considered a node in the MANET and it can synchronize its clock with that of the server asynchronously with respect to the MANET through the wired network, it will be possible to synchronize the clocks of the mobile nodes with that of the server at almost a zero cost to them (no protocol besides the MAC layer's TSF is needed).

The suitability of TSF for SSUM depends on its effectiveness. It was shown in [33] that using TSF, the maximum clock offset in the case of 500 nodes is 700 μs. Several approaches were proposed to reduce this error. The Automatic Self-time Correcting Procedure (ASP) [27] reportedly reduced the maximum offset to 300 μs, while the Multihop Adaptive TSF (MATSF) method [33] cut it down to 50 μs, but at the expense of adding 8 bits to the IEEE 802.11 frame. In Section 2.6, we show that SSUM is best characterized by the delta consistency model [21], where the upper bound for the delta


Fig. 2. Detailed operations of the entities of SSUM using pseudocode.


between the time of the server's update and the time the RN gets a fresh copy is in tens of milliseconds. It follows that TSF will virtually not increase this delta, especially in small to moderately sized networks.

2.5 Overhead Cost

When a node joins the network, the server will know about it when it first gets a query from it in a DRP that is forwarded by one of the QD nodes. Each data item at the server that is cached in the network is associated with a query id, a request rate, an update rate, and the address of the CN caching it. This additional information could cost the server about 16 bytes of extra storage per record. Hence, from a storage perspective, this cost may be deemed insignificant when considering the capabilities of modern servers. In terms of communication cost, the server communicates to the CNs information about the rates using header information in the exchanged packets, and uses control packets (RUEP, SCUP, CICP, and CIRP) to manage the updating process. Here, it suffices to state that the simulation results (Section 4.10) indicate that the overall overhead traffic (including other packets that do not concern the server) is a small portion of the data traffic. Finally, from a processing load point of view, the server is only required to manipulate the update rate when appropriate. In conclusion, the server will not incur any notable additional load due to its role in this architecture. Hence, the system should be able to scale to a large number of cached items.

A similar argument can be made for the CNs, although the main concern here is the impact on the cache space and replacement frequency. Using the same value of 5 KB for the average data item size (as in the simulations of Section 4), with a caching capacity of 200 KB, a CN can cache about 40 data items. The additional overhead required for storing the request and update rates of one single data item is 8 bytes, and therefore, the overhead for storing the request and update rates at the CN is less than 0.16 percent of the available space. It follows that the space for caching at the CNs is minimally impacted by this additional information. Also, the frequency of cache replacements will not increase in a major way because of this.
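The CN storage-overhead figures quoted above can be checked directly from the stated sizes (5 KB items, 200 KB cache, 8 bytes of rate counters per item):

```python
# Checking the CN storage-overhead figures quoted in the text.
cache_capacity = 200 * 1024      # bytes
item_size = 5 * 1024             # average data item size, as in the simulations
items = cache_capacity // item_size
overhead = items * 8             # 8 bytes of request/update rates per item
fraction = overhead / cache_capacity
print(items, fraction)           # 40 items; 0.0015625, i.e., under 0.16 percent
```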

2.6 Consistency Model

From the characteristics of SSUM described above, we may relate it to the delta consistency model [21]. Assuming the RN always requires a fresh copy of item d, a CN may end up sending two copies of d to an RN in case the CN serves a copy of d from its cache shortly before receiving a server update for it. First, we emphasize that from the perspective of the RN, d will be considered fresh only if it is a duplicate of the server's version at the time it is received (i.e., the server has not updated it just yet). We now define two boundary situations in which the RN gets a stale copy of d. In the first one, immediately after the server updates d at t_s and sends it into the network, the RN receives a copy of it from the CN based on a request it had sent. In the second one, right after the CN serves the RN a copy of d, it receives an updated version from the server. Assuming that when the server updates d, it immediately sends it out, and that it is connected to the MANET via an access point in the corner of the network, we can now compute an interval for the maximum "delta" between t_s and the time the RN gets a fresh copy of it. To do this, we borrow some definitions from our work in [2] in relation to the average number of hops. The top two definitions below are used here while the other two are used in Section 3, but all are detailed in Appendices A, B, and C:

- H_C is the average number of hops between the corner of the topology and a random node in the MANET. It applies when a packet is sent between the server and the random node.

- H_R is the expected number of hops between any two randomly selected nodes.

- H_A is the expected number of hops to traverse all the QDs in the system, which usually occurs in the case of a cache miss.

- H_D is the expected number of hops to reach the QD which holds the reference to the requested data, in the case of a hit.

Hence, if T_in is the transmission delay between two neighboring nodes in the wireless network and T_out is the delay in the wired network to/from the server, then the delay until the RN gets a fresh copy of d in the first situation is T_out + H_C T_in + H_R T_in, while in the second situation, it is simply H_R T_in. Using the values of H_C and H_R in [2], we can deduce that the maximum delta will be expressed as Δ_max ∈ [0.52 T_in a/r_0, T_out + 1.29 T_in a/r_0], where a is the side length of the square topography and r_0 is the wireless transmission range. Taking a as 1,000 m, r_0 as 100 m, T_in as 5 ms, and T_out as 40 ms, Δ_max will be in the range of 25-105 ms.

3 ANALYSIS

We evaluate our scheme in terms of bandwidth and query response time gains. These are the differences between the corresponding measures when no cache updating is in place and when SSUM is employed. Requests for data in the ad hoc network and data updates at the server are assumed to be random processes and may be represented by exponential random variables, as was suggested in [4] and [25]. We use λ_R to denote the rate of requests and λ_U the rate of updates. The probability density functions of requests and updates are thus given:

P_R(t) = λ_R e^{-λ_R t},  P_U(t) = λ_U e^{-λ_U t}.  (1)

Both the bandwidth gain G_b and response time gain G_t are influenced by the number of data requests issued by requesting nodes relative to the number of data updates that occur at the server. In the remainder of the paper, we refer to no cache updating as NCU. We now consider two cases: λ_U/λ_R < 1 (C1) and λ_U/λ_R ≥ 1 (C2). C2, in turn, comprises two scenarios: after λ_U/λ_R ≥ σ and then while λ_U/λ_R ≥ τ (C2S2), where updates are suspended by the server, and the remaining scenario (C2S1), where updates are sent by the server (illustrated in Fig. 3). Finally, the average update and request rates can be related as follows: in the first case (i.e., C1), λ_R = N λ_U, N > 1, while in the first scenario of C2 (i.e., C2S1), λ_U = M λ_R, 1 ≤ M < σ.



Starting with C2S2, SSUM suspends the server updates to the RN, thus acting like NCU and resulting in a zero bandwidth gain. The cache entry would have been outdated upon getting the RN's next request, thus causing a cache miss. This causes a traversal of all QDs, then a transmission to and from the server, a transmission back to the requesting node, and then a packet sent to a QD asking it to cache the query. In C2S1, where λ_U/λ_R ≥ 1 and the updates are sent to the RNs, the requests are served from the cache. To compute the gain, we take a time period that includes K update periods such that K > M. The cache entry will be updated, and hence, an update packet is sent to the CN from the server, plus the request sent from the RN to the QD caching the query and then forwarded to the CN holding the data, and finally, the reply from the CN. Thus, the bandwidth gains for C2S1 and C2S2 are stated as shown in (2):

G_b(C2S1) = (K/M)(S_R H_A + S_R H_C + S_D H_C + S_R H_R) - K[S_D H_C + (1/M)(S_R H_D + S_R H_R + S_D H_R)]
          = (K/M)(S_R H_A + S_R H_C + (1 - M) S_D H_C - S_R H_D - S_D H_R),
G_b(C2S2) = 0,  (2)

where S_R denotes the size of a request packet (in bytes) and S_D the size of a data packet. For the first case (C1), we can develop a similar expression as follows:

G_b(C1) = K(S_R H_A + S_R H_C + S_D H_C + S_R H_R) + K(N - 1)(S_R H_D + S_R H_R + S_D H_R) - K[S_D H_C + N(S_R H_D + S_R H_R + S_D H_R)]
        = K(S_R H_A + S_R H_C - S_R H_D - S_D H_R).  (3)

Note that if we make λ_U = λ_R (i.e., M = N = 1), both of the above expressions yield the same result. Also, although H_A > H_D and H_C > H_R, G_b may be negative if the larger size of the result compared to the size of the request (S_D versus S_R) outweighs the gain in the number of hops.

Next, to compute the response time gain, we consider the transmission delay measures T_in and T_out, which were defined in Section 2.6. We also note that since the updates are issued by the server in the background, they do not impact the query response time.

The response time gain for C2S2 is also zero, while in C2S1, each request generates a miss in NCU and a hit in SSUM. Using the above description of request processing, we have

G_t(C2S1) = (T_in H_A + T_in H_C + T_out + T_in H_C) - (T_in H_D + T_in H_R + T_in H_R)
          = T_out + T_in(H_A + 2 H_C - H_D - 2 H_R),
G_t(C2S2) = 0.  (4)

For Case 1, we account for the average of one miss and N - 1 hits in the case of NCU:

G_t(C1) = (1/N)[(T_in H_A + T_in H_C + T_out + T_in H_C) + (N - 1)(T_in H_D + T_in H_R + T_in H_R)] - (T_in H_D + T_in H_R + T_in H_R)
        = T_out/N + (T_in/N)(H_A + 2 H_C - H_D - 2 H_R).  (5)

With the above expressions, we can make two observations: when λ_U = λ_R (i.e., N = 1), both expressions give the same result, and the response time gains are always positive.
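Both observations about (4) and (5) can be checked directly; again the hop counts and delays below are arbitrary test values:

```python
# Numerical check of the response-time gains (4) and (5); delays in ms.
def gt_c2s1(t_in, t_out, HA, HC, HD, HR):
    return t_out + t_in * (HA + 2*HC - HD - 2*HR)

def gt_c1(N, t_in, t_out, HA, HC, HD, HR):
    return t_out / N + (t_in / N) * (HA + 2*HC - HD - 2*HR)
```

Since H_A > H_D and H_C > H_R, the parenthesized term is positive, so both gains are positive for any N ≥ 1, and at N = 1 the two expressions coincide.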

We now turn our attention to calculating the probabilities of the three possible scenarios. Since we are dealing with two independent processes, we use double integrals, as follows:

P_R(C1) = P_R(λ_U < λ_R) = ∫_0^∞ ∫_{t_R}^∞ P_R(t_R) P_U(t_U) dt_U dt_R
        = ∫_0^∞ ∫_{t_R}^∞ λ_R e^{-λ_R t_R} λ_U e^{-λ_U t_U} dt_U dt_R
        = λ_U λ_R ∫_0^∞ e^{-λ_R t_R} ∫_{t_R}^∞ e^{-λ_U t_U} dt_U dt_R
        = λ_U λ_R ∫_0^∞ e^{-λ_R t_R} [-e^{-λ_U t_U}/λ_U]_{t_R}^∞ dt_R
        = λ_R ∫_0^∞ e^{-λ_R t_R} e^{-λ_U t_R} dt_R = λ_R ∫_0^∞ e^{-(λ_U + λ_R) t_R} dt_R
        = λ_R [-e^{-(λ_U + λ_R) t_R}/(λ_U + λ_R)]_0^∞ = λ_R/(λ_U + λ_R),  (6)


Fig. 3. Illustration of the behavior of SSUM as λ_U/λ_R changes (λ_R is shown fixed).


P_R(C2S2) = P_R(λ_U ≥ σλ_R) − P_R[(ρλ_R < λ_U ≤ σλ_R) ∧ (P_U^{−1}(σλ_R) ≤ t_U ≤ P_U^{−1}(ρλ_R))]
          = ∫₀^∞ ∫_{σ t_U}^∞ P_R(t_R) P_U(t_U) dt_R dt_U − ∫_{−ln(σλ_R/λ_U)/λ_U}^{−ln(ρλ_R/λ_U)/λ_U} ∫_{ρ t_U}^{σ t_U} P_R(t_R) P_U(t_U) dt_R dt_U
          = [λ_U/(λ_U + σλ_R)] [1 + (σλ_R/λ_U)^{(λ_U + σλ_R)/λ_U} − (ρλ_R/λ_U)^{(λ_U + σλ_R)/λ_U}]
            + [λ_U/(λ_U + ρλ_R)] [(ρλ_R/λ_U)^{(λ_U + ρλ_R)/λ_U} − (σλ_R/λ_U)^{(λ_U + ρλ_R)/λ_U}],

P_R(C2S1) = P_R(λ_U ≥ λ_R) − P_R(C2S2) = λ_U/(λ_U + λ_R) − P_R(C2S2).   (7)

With respect to the above expression, we note that if the server is permitted to send updates to the clients at all times (i.e., without regard to the σ and ρ parameters), then the probability of Case 2 (in this situation, there will be no distinct scenarios) will be λ_U/(λ_U + λ_R). With the above, the average gains of the system can now be computed:

E[G_b] = G_b^C1 P_R(C1) + G_b^C2S1 P_R(C2S1) + G_b^C2S2 P_R(C2S2),   (8)

E[G_t] = G_t^C1 P_R(C1) + G_t^C2S1 P_R(C2S1) + G_t^C2S2 P_R(C2S2).   (9)

The gain expressions indicate that the wireless bandwidth gain may be negative, but the response time gain is always positive. This may form a reasonable trade-off, especially when bandwidth can be sacrificed in order to speed up the query response time.
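Equations (8) and (9) are simply probability-weighted sums of the per-scenario gains. A minimal sketch (the numeric values in the usage example are illustrative assumptions, not results from the model):

```python
def expected_gain(gains, probs):
    """E[G] = sum over scenarios s in {C1, C2S1, C2S2} of G_s * Pr(s)."""
    # The three scenarios are exhaustive and mutually exclusive.
    assert abs(sum(probs) - 1.0) < 1e-9, "scenario probabilities must sum to 1"
    return sum(g * p for g, p in zip(gains, probs))

# Example with made-up per-scenario gains and probabilities:
e_gt = expected_gain([0.11, 0.08, 0.0], [0.5, 0.3, 0.2])
```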

Fig. 4 confirms the above observations and illustrates the average bandwidth and response time gains under two situations: when SSUM is applied as described, and with a modified version that lets the server send updates to the RNs at all times (i.e., without considering σ and ρ). Before analyzing the graphs, we note that several factors play a role in determining the gain values, most importantly the number of QDs, the λ_U/λ_R ratio, and the data packet size (S_D) relative to the request packet size (S_R). The effect of the number of QDs is intuitive, because with more QDs, more data will be cached in the network, and therefore both the bandwidth and response time gains will increase. Next, as the S_D/S_R ratio increases, the bandwidth gain will decrease, while the response time gain will generally not be affected.

The graphs of Fig. 4 generally show that as λ_R increases, the bandwidth gain improves, while normally, the response time gain suffers. The explanation is intuitive, as additional

784 IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 9, NO. 6, JUNE 2010

Fig. 4. Bandwidth and response time gains versus update and request rates in two cases.


requests will be served from the cache with consistent data (versus going to the server for each request, as in NCU). Given this characteristic, SSUM is most beneficial in environments with high user activity (high request rate) and relatively low data volatility (low server update rate). Moreover, these graphs illustrate the influence of the selective updating mechanism that SSUM employs (left graphs) versus the all-time updating version (right graphs). Clearly, selective updating saves bandwidth at a moderate expense in response time.

Before leaving this section, we point out that in Appendix D, we proved that the effect of QD disconnections on bandwidth consumption and response time in SSUM is insignificant. Moreover, since QD disconnection is a network issue that applies to both SSUM and NCU, the extra delay and bandwidth do not contribute to the gain expressions presented above.

4 PERFORMANCE EVALUATION

We used the ns2 software to implement the SSUM system and to also simulate the Updated Invalidation Report (UIR) mechanism [10], in addition to a version of it implemented on top of COACS so that a presumably fairer comparison with SSUM can be made. To start, we list the default values of the main simulation parameters in Table 2.

4.1 Network and Cache Simulation Parameters

A single database server is connected to the wireless network through a fixed access point, while the mobile nodes are randomly distributed. The client cache size was fixed to 200 KB, meaning that a CN can cache between 20 and 200 items, while the QD cache size was set to 300 KB, and therefore, a QD can cache about 600 queries. We used the least recently used (LRU) cache replacement policy when the cache is full and a data item needs to be cached. Each scenario started with electing one QD, but more were elected when space was needed. Each scenario lasted for 2,000 seconds and was repeated 10 times with the seed of the simulation set to a new value each time, and the final result was taken as the average of the 10 runs.
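The LRU replacement policy mentioned above can be sketched in a few lines. This is a generic textbook LRU, not the actual ns2 implementation:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: evicts the least recently used entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.items = OrderedDict()

    def get(self, key):
        if key not in self.items:
            return None                     # cache miss
        self.items.move_to_end(key)         # mark as most recently used
        return self.items[key]

    def put(self, key, value):
        if key in self.items:
            self.items.move_to_end(key)
        self.items[key] = value
        if len(self.items) > self.capacity:
            self.items.popitem(last=False)  # evict least recently used
```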

The SSUM system was implemented as a new C++ agent in ns2 that gets attached to the node class in the tcl code at simulation runtime. This implementation includes a cache class that defines and sets the needed data items as well as the operations of the caching methods that were described. Also, the routing protocols in ns2 were modified to process the SSUM packets and to implement the functions of the MDPF algorithm used for traversing the QD system [3]. Other changes to the ns2 C++ code included modifying the packet header information, which is used to control the cache update process. After implementing the changes in the C++ code, tcl scripts were written to run the various described scenarios.

4.2 The Query Model Parameters

The client query model was chosen such that each node in the network generates a new request every T_q seconds. When the simulation starts, each node generates a new request, and after T_q seconds, it checks whether it has received a response for the request it generated; if not, it discards it and generates a new request. We chose a default value for T_q equal to 20 seconds, but in order to examine the effect of the request rate on system performance, we simulated several scenarios with various request rates. The process of generating a new request followed a Zipf-like access pattern, which has been used frequently to model nonuniform distributions [34]. In Zipf's law, an item ranked i (1 ≤ i ≤ n_q) is accessed with probability 1/(i^θ · Σ_{k=1}^{n_q} 1/k^θ), where θ ranges between 0 (uniform distribution) and 1 (strict Zipf distribution). The default value of the Zipf parameter θ was set to 0.5.
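The Zipf-like request generator described above can be sketched as follows (function names are illustrative):

```python
import random

def zipf_probabilities(nq, theta):
    """Pr[item of rank i] = 1 / (i**theta * sum_k 1/k**theta), 1 <= i <= nq."""
    h = sum(1.0 / k ** theta for k in range(1, nq + 1))
    return [1.0 / (i ** theta * h) for i in range(1, nq + 1)]

def draw_request(probs, rng=random):
    """Pick a 1-based item rank according to the Zipf-like weights."""
    return rng.choices(range(1, len(probs) + 1), weights=probs, k=1)[0]
```

With θ = 0 every item is equally likely; as θ approaches 1, the low-ranked (popular) items dominate the request stream.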

Every second, the server updates a number of randomly chosen data items, equal to a default value of 20. The default values of σ and ρ were set to 1.25 and 0.75, respectively, while the default number of node disconnections is one every two minutes, with a period of 10 seconds after which the node returns to the network. However, in the experiments that we present below, these parameter values are varied to study their effects on performance.

4.3 Compared Systems

In addition to SSUM, we simulated two other systems: UIR and C_UIR. C_UIR is a version of UIR [10] that we implemented on top of COACS (with the parameter values given above). It should be noted, though, that several differences exist between the implementation of UIR in our simulation environment and that used in [10]. This produced different results than those reported in [10]. Some of the differences are: the number of data items at the server was set to 1,000 in [10] and 10,000 in our simulations. The value used for the Zipf θ parameter was 1 in [10] and 0.5 in our case. We did not divide the database into hot and cold subsets, but instead relied on Zipf to naturally determine the access frequency of data. The size of a data item was randomly varied in our simulations between 1 and 10 KB, while in [10], it was fixed to 1 KB. Also, we set the IR interval L to 10 s and replicated the UIR four times within each IR interval, while in [10], L was set to


TABLE 2: Summary of Simulation Parameters' Values


20 s. Hence, the simulation parameters in our environment put more stress on the network and are more realistic.

We analyze the performance of SSUM, UIR, and C_UIR from four different perspectives. The first two are the query delay and the update delay. In this regard, it is important to note that in SSUM, the server unicasts the actual data items to the CNs, while in UIR and C_UIR, it broadcasts the IDs of the updated items, not their contents. Another major difference is that in C_UIR, if the CN receives the request from the RN after the update takes place at the server but before the CN receives the update report, the RN will receive a stale copy of the data item, while, as was explained in Section 2.4, SSUM implements a provision that fixes this issue. We stress that the update delay of UIR and C_UIR as presented in Sections 4.4-4.8 is the delay until the item's ID reaches the node, not the item itself. If we were to incorporate the delay of fetching the data item from the server, the update delay would greatly increase. The third performance measure is the hit ratio, and the fourth is the average bandwidth consumption per node. The latter was computed by recording the total number of messages of each packet type issued, received, or passed by a node, multiplying it by the respective message size, and then dividing by the number of nodes and the simulation time.
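The per-node bandwidth metric described in the last sentence reduces to a one-line computation. In the sketch below, the packet-type names and sizes are illustrative assumptions:

```python
def avg_bandwidth_per_node(counts, sizes, num_nodes, sim_time):
    """Total bytes of all messages (issued, received, or forwarded by any
    node), divided by the number of nodes and the simulation time."""
    total_bytes = sum(counts[t] * sizes[t] for t in counts)
    return total_bytes / (num_nodes * sim_time)  # bytes per node per second

# Hypothetical tallies for a 100-second run over 10 nodes:
usage = avg_bandwidth_per_node(
    {"request": 100, "reply": 100},
    {"request": 64, "reply": 1024},
    num_nodes=10, sim_time=100)
```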

In the following sections, we vary some of the key parameters of the simulation environment and observe the effects on the performance of the three systems.

4.4 Varying the Number of Nodes

This section presents the effects of varying the node density in the fixed network area. Fig. 5a shows that the query delay of UIR is much greater than that of SSUM and C_UIR. The reason for this is that an issued query in SSUM does not have to wait for any report from the server, as it is always served directly after it is issued, whereas in UIR, it must wait for the next UIR to arrive from the server before getting processed. Moreover, in the event of a local cache miss in UIR, the data item must be fetched from the server, but in COACS and C_UIR, the data are next searched for in the QD system for a possible network cache hit. Fig. 5b shows that the update delays of UIR and C_UIR are less than that of SSUM when there are fewer than 100 nodes, simply because updates in UIR are broadcasted. But when the number of nodes increases, congestion starts to build up, which is why the hit ratio of this scheme drops rapidly and its traffic increases, as shown in Figs. 5c and 5d.

Finally, we reiterate that the presented update delay for UIR and C_UIR is not the total delay, since it measures the time until the item's ID reaches the RN/CN, whereas for SSUM, it is the entire delay, since it is the time until the data item itself reaches the CN.

4.5 Varying the Query Request Rate

When the request rate is increased, the query delay of SSUM rises, as shown in Fig. 6a, due to more packets queuing in the nodes, while the update delay decreases initially and then settles down because, as more items are cached, new CNs are set up. This increases the probability of having more CNs closer to the access point, which, in turn, results in a smaller number of hops, on average, for the update packets to reach their destinations.

Unlike SSUM, the update delay of C_UIR increases due to the increase in network traffic, as seen in Fig. 6d. This large traffic, which is the cumulative result of requests, update reports, and control packets, can result in congestion that delays the query and update packets of C_UIR. It can also lead to more unsuccessful requests, which, in turn, decreases the hit ratio. In contrast, the hit ratio of SSUM keeps increasing as the request rate increases for two reasons: first, the number of cached queries increases, and


Fig. 5. Query and update delay, hit ratio, and bandwidth usage versus number of nodes.


second, the network is not overcome by congestion as in the other two schemes. The query delay of UIR drops as its hit ratio increases, while its update delay is generally unaffected due to the broadcasting of updates.

4.6 Varying the Data Update Rate

The results of varying the update rate are shown in Fig. 7. We notice that the query delay of SSUM remains almost unchanged as the update rate increases, since its traffic (Fig. 7d) increases only slightly. On the other hand, its hit ratio drops quickly (Fig. 7c) since, as the request rate is held constant, more data items will be fetched from the server (their R_u/R_r ratio will exceed σ). The update delay of SSUM worsens (Fig. 7b) since with increased server updates, more contention will occur at the access point, which will be busier forwarding more requests to the server. The update delay and hit ratio of UIR and its variant remain almost unchanged, but they induce more traffic due to larger update reports.


Fig. 7. Query and update delay, hit ratio, and bandwidth usage versus data update rate.

Fig. 6. Query and update delay, hit ratio, and bandwidth usage versus query request rate.


4.7 Varying the Node Velocity

The mobility pattern of nodes in a MANET can vary at any time and unexpectedly. Here, we study the effect of varying the node speed from 0 to 20 m/sec and show the results in Fig. 8.

We notice from the results in Fig. 8 that the delay of the three systems increases with increased speed, while their hit ratios decrease, which is normal for most mobile systems: as nodes move faster, the network topology changes, and connections between nodes are maintained for shorter periods of time. The traffic of UIR and C_UIR increases, while that of SSUM generally remains unchanged. This may be because the CNs and QDs can move closer to or farther from each other and from the AP, resulting in a minimal net effect. It is noted that although the traffic of C_UIR increases, it still follows the trend of SSUM, which is understandable since both schemes are built on the same caching system.

4.8 Varying the Node Disconnection Rate

We now examine the effect of another property of the dynamic environment of MANETs, namely nodes disconnecting and reconnecting. The period of disconnection was fixed at 10 seconds, but we varied the number of nodes disconnecting per minute between 0.1 and 20.

Fig. 9 illustrates that the delays of UIR and C_UIR are more affected than that of SSUM, while the hit ratio and traffic of SSUM are more affected. The hit ratios of SSUM, C_UIR, and UIR drop to 11, 9, and 6 percent, respectively, as the disconnection rate increases. The traffic of SSUM increases over the traffic of UIR at high disconnection rates, a phenomenon that may be attributed to UIR having no recovery mechanism: a node that reconnects to the network either waits for the next UIR to invalidate its cache, if it has gone offline for less than a given number of seconds [10], or drops its entire cache. On the other hand, the recovery process for reconnections in SSUM requires extra overhead, as was discussed in Section 2.2, but keeps the increase in the response time under control.

4.9 Varying � and �

As described in Section 2.3, when the ratio R_u/R_r of a data item reaches σ, the item is fetched from the server whenever it is requested, until the ratio drops below ρ, at which point the request is cached again. The values of σ and ρ were set to 1.5 and 0.75, respectively, but here, we analyze the effects of varying σ between 0.5 and 4, while setting ρ to σ/5. We ran simulations for three different R_u values: 0.1, 20, and 50 updates per second (which, respectively, represent the minimum, average, and maximum values of the update rate presented in Section 4.6), while fixing R_r to 0.1 requests per second. The results in Fig. 10 indicate that for low update rate values, the performance of SSUM is not affected in a major way, but for high update rates, the query delay decreases as σ and ρ increase, because fewer packets are fetched from the server directly, which produces higher hit ratios and smaller delays.

We also note in Fig. 10a that when R_u = 20, the query delay decreases significantly as σ changes from 1 to 2, but settles afterward. The explanation for this is that any further increase in σ will keep the R_u/R_r ratio of all data items below σ. On the other hand, when the update rate was set to 50, the query delay kept decreasing as σ increased, although it is expected to stabilize at some σ value, as was the case when the update rate was set to 20. As for the update delay, it increases because more update packets are sent, and this, in turn, increases the traffic. The hit ratio also increases because, as σ increases, fewer data items are fetched from outside the MANET. In conclusion, we learn that increasing σ and ρ allows for favoring query delay and hit ratio over update delay and network traffic, or vice versa.
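The two-threshold switching described in Section 2.3 behaves like a hysteresis loop. The sketch below is an illustrative reading of that rule, not the paper's implementation; the class and method names are assumptions:

```python
class UpdateRateController:
    """Hysteresis on the update-to-request ratio Ru/Rr: stop caching an
    item once the ratio reaches sigma; resume once it drops below rho."""

    def __init__(self, sigma, rho):
        assert rho < sigma
        self.sigma = sigma
        self.rho = rho
        self.serve_from_server = False

    def should_cache(self, ru, rr):
        ratio = ru / rr
        if self.serve_from_server:
            if ratio < self.rho:       # ratio fell back below rho
                self.serve_from_server = False
        elif ratio >= self.sigma:      # ratio reached sigma
            self.serve_from_server = True
        return not self.serve_from_server
```

For a ratio between ρ and σ, the decision depends on history: the item keeps its previous mode, which is exactly what extends the update suspension period.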


Fig. 8. Query and update delay, hit ratio, and bandwidth usage versus node velocity.


The results of Fig. 10 can be used to configure the values of σ and ρ when certain characteristics of the data and the client application are known. If the application requires quick access to the data, then large values of σ and ρ should be chosen. On the other hand, if there is contention in the network and/or if the mobile devices need to conserve battery power, then relatively small values can be selected. Concerning the difference in value between σ and ρ, we use it to extend the data update suspension period. One possible scenario is when the database at the server has certain volatile data that generally change rapidly (like the price of an actively traded stock), while at the same time, the mobile users are regularly checking the price. In this case, the difference between σ and ρ may be widened to "latch onto" the general behavior of the data update frequency. In general, though, the closer the values of σ and ρ are, the more updates will reach the network and the longer the cache will remain valid (i.e., before expiring). This will, in turn, reduce the query delays.


Fig. 10. Query and update delay, hit ratio, and bandwidth usage versus the threshold σ.

Fig. 9. Query and update delay, hit ratio, and bandwidth usage versus disconnection rate.


4.10 Data and Overhead Traffic

In this set of experiments, we separated the produced traffic into data traffic, which includes the request, forwarding, and reply packets, and overhead traffic, which is generated by CSP, CIP, SCUP, CICP, CIRP, and update packets. The graphs in Fig. 11 show that the overhead traffic in SSUM makes up between 2 and 25 percent of the total traffic when the request rate is varied, and between 15 and 25 percent when the data update rate is changed. In all, it is shown that SSUM does not generally produce substantial overhead traffic.

4.11 System Considerations

We discuss in this section some considerations for the implementation of SSUM that we derive from the system design and the simulation results. The first concerns node disconnections: Section 4.8 demonstrated that node disconnections led to major increases in bandwidth consumption. Note that the scenarios in Section 4.8 included disconnections of random nodes, including QDs (in such cases, new QDs are elected). The major limitation that we point out is that an implementation of SSUM in a MANET where all nodes have short battery lifetimes will not be efficient, since this will result in frequent disconnections and could lead to situations where no more nodes qualify to become QDs. Hence, the requirement is that at any time, a portion of the nodes in the network must have sufficient energy to take on QD roles (in our simulations, the average number of QD nodes considered was 7 out of 100 nodes). This requirement, though, should be satisfied in most current MANETs, since major technological improvements have been made to mobile device batteries, including those of cell phones. As an example, a survey of some recent cell phone models, including the Nokia N86, Nokia N97, Sony Ericsson XPERIA X2, and Samsung I8000, revealed an average battery capacity of 430 hours in standby mode and 9 hours in talk time.

Another issue is the network size. Like most server-based consistency schemes, SSUM will not be efficient in very large networks with thousands of nodes, where routing paths from the server to distant nodes can be lengthy, and more nodes will disconnect from and reconnect to the network. In these conditions, maintaining information about data items at all caching nodes will be expensive for the server. One possible solution for such networks is the implementation of clustering in the MANET and the addition of more access points. Clustering can reduce overhead at the server, since clusterheads can represent the CNs within their clusters and act as single points of contact with the server. Additional APs will serve to reduce contention at the connections of the MANET to the wired network.

5 LITERATURE REVIEW

Several cache consistency (invalidation) schemes have been proposed in the literature for MANETs. In general, these schemes fall into three categories: 1) pull- or client-based, where a caching node (CN) asks for updates from the server, 2) push- or server-based, where the server sends updates to the CN, and 3) cooperative, where both the CN and the server cooperate to keep the data up to date. In general, pull-based strategies achieve smaller query delays at the cost of higher traffic load, whereas push-based strategies achieve lower traffic load at the cost of larger query delays. Cooperative strategies tend to be halfway between both ends. In this section, we restrict our literature survey to server-based strategies so as to provide a comparative study in relation to the approach of our proposed system.

5.1 Invalidation Reports

Server-based approaches generally employ invalidation reports (IRs) that are periodically broadcasted by the server. An IR normally carries the IDs of the updated data items and the time stamps of the updates. When a query is generated, the node waits for the periodic IR to invalidate its cache (if connected) and answers the query if it has a valid item. If the requested item is invalid, the node usually waits for the periodic IR or, as in some proposed schemes like the Modified Time Stamp (MTS) mechanism of [23], broadcasts a request packet that gets forwarded to the server without waiting for the periodic IR. Such schemes generally suffer from large average delays due to waiting for the periodic IR, or from high traffic when broadcasts are employed upon misses and the request rate is high. Most research in this area has focused on reducing the time intervals between updates or on making the process of sending update reports less static. In the improved IR technique that was presented by Cao [10], and which we implemented to compare SSUM against, the time between two consecutive IRs was divided into intervals. At the beginning of each interval, the server broadcasts a UIR, which contains the IDs of data items updated since the last IR. This UIR reduces the query latency, since a node that needs to answer a query only waits until the next UIR to see whether the item has been updated, instead of waiting until the next IR (the UIR interval may be adjusted dynamically according to the average request rate of the network). This approach can also decrease generated traffic by making the server


Fig. 11. Data and overhead traffic of SSUM versus the request and update rates.


save a list of the submitted requests for each item, after which it broadcasts this list followed by the updated data items. This approach may be best suited to high update rate situations, because it does not overwhelm the network with updates immediately upon their occurrence. Another improvement over the basic IR approach was proposed by Li et al. [22]. The basic idea is for each node to cache the last K reports broadcasted by the server, and for the server to store the IRs for future use. Using the time stamps of the IRs it caches, when a node goes offline and comes back online, it determines the IRs that it missed and pulls them from the server to validate its cache. If the node missed more than K IRs, it drops the entire cache.
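The recovery rule of Li et al. [22] described above can be sketched as follows. The data structures and the value of K are illustrative assumptions, not taken from [22]:

```python
K = 8  # number of recent IRs a node keeps (sample value, not from [22])

def recover_cache(cache, last_seen_ir, server_irs):
    """cache: {item_id: value}; server_irs: ordered list of
    (ir_id, updated_item_ids). Returns the validated cache."""
    missed = [ir for ir in server_irs if ir[0] > last_seen_ir]
    if len(missed) > K:
        return {}                      # too far behind: drop the whole cache
    for _, updated_ids in missed:      # replay the missed IRs in order
        for item_id in updated_ids:
            cache.pop(item_id, None)   # invalidate each updated item
    return cache
```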

Several approaches have attempted to make the process of sending reports dynamic [11], [12], [18], [32]. Yuen et al. [32] proposed a scheme based on an absolute validity interval (AVI) that is calculated dynamically by the server from the update rate. A client considers an item invalid after its AVI expires, and may then drop it or update it from the server. The server periodically broadcasts an IR that contains the changes to the AVIs of items, so that clients caching these items replace the cached AVI with the new value carried in the IR. Thus, the size and interval of the IR can be improved. This approach, however, still exhibits the access delays introduced by periodic broadcasts. Similarly, Kai and Lu [18] proposed to broadcast the reports dynamically based on the updating frequency of the data items. Chung and Hwang [12] took this concept further by adapting the broadcast of reports to the update pattern and data locality on the server. A serious issue, however, arises with the above approaches when the frequency of updates at the server is high, which could overload the clients with update reports.

5.2 Energy-Efficient Cache Invalidation

A different approach was taken by Cai and Tan [8], who proposed three energy-efficient selective cache invalidation schemes based on the group invalidation method, in which a separate IR is broadcasted for each group of items. These strategies allow clients to selectively tune in to the portions of the invalidation report that are of interest to them, while returning to sleep or idle modes during the rest of the IR. This can be done by including, at the beginning of the IR, an index of the remaining data in the report. This technique produces good results when the update rate is high, but makes the size of the IR quite large.

5.3 Cache Invalidation Using Bit Sequences

In order to reduce the size of the IR, Jing et al. [16] proposed using bit sequences (BSs), according to which the data server broadcasts a sequence of bits with a time stamp, where each bit represents a data item, and its position in the sequence represents an index to its corresponding item. The clients get the mapping of bits to the names of data items from the server. A value of 1 in the BS means that the corresponding item has been updated since the last IR, and a value of 0 indicates otherwise. The authors improved on the BS algorithm in [13] through a multidimensional bit sequence. The granularity of each bit in the BS was varied according to the size of the database at the server. For example, for an 8 GB database, each bit can represent an 8 MB data block, which allows for representing the database with a BS of 1,024 bits. The client retrieves data from the database as data pages and updates each cached item that contains at least one page from an updated block (one whose corresponding bit equals 1 in the BS). Moreover, the granularity of each bit was dynamically adjusted according to the popularity of the corresponding block of the database that it represents (whether it is hot or cold). All the BS algorithms perform best when the update rate is low, but suffer from drawbacks when the update rate is high, as was stated by the authors.
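The basic BS mechanism of [16] can be sketched as follows; this is a direct, simplified reading of the description above, with illustrative names:

```python
def build_bs(num_items, updated_indices):
    """Server side: bit i is 1 iff the item at index i changed since last IR."""
    bits = [0] * num_items
    for i in updated_indices:
        bits[i] = 1
    return bits

def invalidate_with_bs(cache, bits, index_of):
    """Client side: drop every cached item whose bit is set; index_of maps
    item name -> bit position (the mapping comes from the server)."""
    return {item: v for item, v in cache.items() if bits[index_of[item]] == 0}
```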

5.4 Selective Data Push

Huang et al. present in [14] a Greedy Walk-Based Selective Push (GWSP) method that attempts to reduce the redundancy in conventional push mechanisms by pushing updates only if the CN is expected to serve queries and if there are no more data updates before the Time-to-Refresh (TTR) expires. This is done by maintaining the state of the cached data at the data server, which stores the TTR value and the query rate associated with the cached data. Based on this state information, the server selects the CNs to send the updates to and creates a push set that contains the CNs that should receive updates. This method is compared with a dynamic pull mechanism in which the CN maintains a TTR for the cached data and pulls updates accordingly. Reported results show that GWSP exhibits lower generated traffic load and lower query latency at low query rates, but tends to have similar delays at larger rates.

5.5 Consistency Based on Location

The consistency of location-dependent data was studied in [30], which proposes methods to validate and update information whose values change depending on the location of the client, like traffic reports and parking information. In this context, not only temporal consistency but also location-dependent consistency (due to client mobility) is taken into consideration. A data item can have different values for different locations: when a mobile host caching an item moves from one cell to another, the value of the item may become obsolete and need to be updated. One of the methods proposed by the authors is the Implicit Scope Information (ISI) scheme, in which the client stores the valid scopes of each item and the server periodically broadcasts the changes in the validity of each data item.

Finally, we conclude that besides the latency issue associated with push-based techniques, their basic problem is that the IR or UIR reports typically contain information that is not of interest to a large number of nodes, thus wasting their processing time and bandwidth. Even cooperative mechanisms, which try to combine the advantages of push and pull strategies, inherit their disadvantages halfway as well. For example, the simulation results of the scheme in [9] show that both the query delay and traffic load fall between the measures of pure push and pull strategies. To our knowledge, no existing scheme actually adapts the frequency of update reports to both the rate of updates at the server and the rate of data requests by the clients, to render a truly dynamic system. Our work addresses both of these issues, which is made possible in part by the architecture of COACS.



Table 3 summarizes the key systems that we presented. All systems are push-based except RPCC, which is cooperative, and all are stateless except GWSP, which is stateful.

6 CONCLUSIONS AND FUTURE WORKS

In this paper, we presented a novel mechanism for maintaining cache consistency in a mobile ad hoc network. Our approach was built on top of the COACS architecture for caching data items in MANETs and searching for them. We analyzed the performance of the system through a gain-loss mathematical model and then evaluated it while comparing it with the Updated Invalidation Report mechanism. The presented results illustrated the advantage of our proposed system in most scenarios.

The evaluation results confirmed our analysis of the scalability of the system (Section 2.5). That is, they indicate that even when the node density increases or the node request rate goes up, the query request delay in the system is either reduced or remains practically unaffected, while the cache update delay experiences a moderate rise. Yet, even if a higher update delay means that the probability of nodes getting stale data increases, the proposed system includes a provision for keeping track of such nodes and supplying them with fresh data within a delta time limit, as described in Section 2.6. Hence, it can be concluded that the system can scale to a moderately large network even when nodes are requesting data frequently. Moreover, increases in node speed and disconnection rate also only affected the cache update delay and, to a lesser extent, the traffic in the network. This reflects the robustness of the architecture and its ability to cope with dynamic environments.

For future work, we propose investigating the effects of other cache placement strategies and of cache replication on performance. The COACS system and its successor, SSUM, did not implement cache replication: only one copy of the version at the server is maintained in the MANET. By adding replication to the design, the query delay is expected to go down, but the overhead cost will surely go up. Concerning cache placement, as was described in [2], the node that first requests a data item not found in the network becomes the CN for it. This simple strategy may not necessarily be optimal, but if that same node or other neighboring nodes are likely to request the same data item in the future, then our strategy could be advantageous. In any case, other strategies, like trying to distribute the cached data as uniformly as possible in the network, could be studied.

Another important topic for future consideration is the security of the system. The design we presented in this paper did not take into consideration the issues of node and network security. Since nodes in SSUM need to cooperate with each other in order to answer a query, it is important that they trust each other. Also, since QDs are the central component of the system, they could be the target of malicious attacks. For this reason, we present some ideas that improve the security of the system and leave their implementation to a future paper.

In [2], we required that each QD to be elected have a sufficiently high score, computed as a combination of the resources available to the node. Just as we focused on the resources of the QDs, we should also focus on their trustworthiness. For this, a node that is to be considered a candidate QD must have a high enough trust value. In this regard, several works, such as [7], [26], and [28], have proposed different solutions to ensure trust between nodes. For our system, we can, for example, adopt the method in [28] when electing a new QD. It is based on the notion of having multiple nodes jointly monitor a particular node and cooperate to compute a trust value for it, thereby ensuring its trustworthiness. Therefore, the additional condition that a node must meet in order to be considered by the QD election process is for its trust value to be above a certain threshold.
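To make this election condition concrete, here is a minimal sketch (ours; the node structure, scores, trust values, and threshold are hypothetical and not taken from [2] or [28]) that filters QD candidates by a trust threshold before selecting on resource score:

```python
# Illustrative sketch of a trust-gated QD election. A candidate is eligible
# only if its jointly computed trust value meets a threshold; among eligible
# candidates, the one with the highest resource score is elected.

def elect_qd(candidates, trust_threshold=0.6):
    """Return the highest-scoring candidate whose trust value meets the
    threshold, or None if no candidate is trustworthy enough."""
    eligible = [c for c in candidates if c["trust"] >= trust_threshold]
    if not eligible:
        return None  # no trustworthy candidate; the election is postponed
    return max(eligible, key=lambda c: c["score"])

nodes = [
    {"id": 1, "score": 0.9, "trust": 0.4},  # strong resources, low trust
    {"id": 2, "score": 0.7, "trust": 0.8},
    {"id": 3, "score": 0.6, "trust": 0.9},
]
print(elect_qd(nodes)["id"])  # 2: best score among trusted candidates
```

Note that node 1, despite having the best resources, is excluded by the trust gate, which is precisely the behavior the additional condition is meant to enforce.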

Although applying a trust-based scheme as in [28] helps in detecting the behavior of malicious nodes and preventing them from performing further damage, it cannot avoid the initial damage that such nodes could inflict. Moreover, some attacks are very dangerous and should be prevented before they succeed (like performing a denial of service (DoS) attack on a QD, or submitting fake requests to learn if and where particular sensitive data are cached in the network). In order to prevent such attacks, tight security schemes must be deployed. For example, the approach in [7] proposes a distributed firewall mechanism for detecting and recovering from DoS and flooding attacks. The approach in [26], on the other hand, ensures data confidentiality, integrity, and availability by establishing a security association (SA) between each pair of end-to-end communicating nodes. Both approaches were discussed and evaluated in [24], from which we can deduce that [7] is better suited since it satisfies many security requirements, while [26] introduces much overhead as it requires data redundancy. For our future work, we will thoroughly investigate the possible threats that could disrupt the operations of SSUM and devise security mechanisms that safeguard it, either by building on existing approaches, like [7] and [28], and/or by developing new ones.

792 IEEE TRANSACTIONS ON MOBILE COMPUTING, VOL. 9, NO. 6, JUNE 2010

TABLE 3
Summary of the Discussed Consistency Methods

APPENDIX A
EXPECTED NUMBER OF HOPS TO A RANDOMLY SELECTED NODE

We assume a rectangular topology with area a × b and a uniform distribution of nodes. Two nodes can form a direct link if the distance X between them is less than or equal to r0, where r0 is the maximum node transmission range. Using stochastic geometry, the probability density function of X is given in [6] as follows:

\[
P(x) = \frac{4x}{a^2 b^2}\left(\frac{\pi}{2}ab - ax - bx + \frac{x^2}{2}\right), \qquad 0 \le x < b \le a.
\]

It is concluded that if two nodes are a distance x0 from each other, the number of hops between them, when there is a sufficient number of nodes to form a connected network, would tend toward x0/r0. Hence, H_R, the expected minimum number of hops between any two nodes in the network, is equivalent to dividing E[X], the expected distance, by r0. It should be noted that the value of H_R represents a lower bound, because when nodes are sparse in the network, the number of hops will inevitably increase due to having to route through longer distances to reach a certain node. When a = b, the expected number of hops, as depicted in [6], is H_R = 0.521a/r0.
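The closed-form value H_R = 0.521a/r0 for a square region can be checked numerically. The sketch below (ours, not from [6]; the area and range values are the defaults used later in Appendix D) estimates the mean distance between two uniformly placed nodes by Monte Carlo simulation:

```python
# Monte Carlo estimate of the expected distance between two uniform points
# in an a-by-a square; the ratio E[X]/a should approach 0.5214, giving
# H_R = E[X]/r0 ≈ 0.521a/r0 hops.
import math
import random

def mean_pair_distance(a, trials=200_000, seed=1):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        x1, y1 = rng.uniform(0, a), rng.uniform(0, a)
        x2, y2 = rng.uniform(0, a), rng.uniform(0, a)
        total += math.hypot(x2 - x1, y2 - y1)
    return total / trials

a, r0 = 1000.0, 100.0
E_X = mean_pair_distance(a)
print(E_X / a)   # ≈ 0.5214
print(E_X / r0)  # H_R ≈ 5.21 hops
```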

APPENDIX B
EXPECTED NUMBER OF HOPS TO THE EXTERNAL NETWORK

Assuming an a × a topography filled with a sufficient number of nodes that are uniformly distributed, the expected distance to the AP, assumed to be at the corner, is

\[
E[X_{AP}] = \int_0^a \int_0^a \frac{1}{a^2}\sqrt{x^2 + y^2}\, dx\, dy.
\]

This seemingly trivial problem has a complex solution, which is the outcome of using tables of integration and the Mathematica software:

\[
\begin{aligned}
E[X_{AP}] &= \frac{1}{a^2}\int_0^a \frac{1}{2}\left[x\sqrt{x^2+y^2} + y^2\log\!\left(x+\sqrt{x^2+y^2}\right)\right]_0^a dy \\
&= \frac{1}{2a^2}\int_0^a \left[a\sqrt{a^2+y^2} + y^2\log\!\left(a+\sqrt{a^2+y^2}\right) - y^2\log y\right] dy \\
&= \frac{1}{2a^2}\left[\frac{2}{3}ay\sqrt{a^2+y^2} - \frac{1}{3}y^3\log y + \frac{1}{3}y^3\log\!\left(a+\sqrt{a^2+y^2}\right) + \frac{1}{3}a^3\log\!\left(y+\sqrt{a^2+y^2}\right)\right]_0^a \\
&= \frac{1}{2a^2}\left[\frac{2}{3}a^2\sqrt{2a^2} - \frac{1}{3}a^3\log a + \frac{2}{3}a^3\log\!\left(a+\sqrt{2a^2}\right) - \frac{1}{3}a^3\log a\right] \\
&= \frac{1}{3}\left[\sqrt{2}\,a - a\log a + a\log\!\left(a\left(1+\sqrt{2}\right)\right)\right] \\
&= \frac{1}{3}\left(\sqrt{2} + \log\!\left(1+\sqrt{2}\right)\right)a = 0.7652a.
\end{aligned}
\]

After dividing the result by the transmission range r0, we get the expected number of hops H_C = E[X_AP]/r0 = 0.7652a/r0.
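The closed-form result above can be verified numerically. The following sketch (ours, for illustration only) approximates the double integral for E[X_AP] with a midpoint-rule summation and compares it against (√2 + log(1 + √2))a/3:

```python
# Midpoint-rule approximation of the expected distance to a corner AP in an
# a-by-a square; the result should approach 0.7652a, matching the closed form.
import math

def expected_corner_distance(a, n=400):
    h = a / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h  # midpoint of cell i along x
        for j in range(n):
            y = (j + 0.5) * h  # midpoint of cell j along y
            total += math.hypot(x, y)
    # multiply by cell area h*h, divide by a^2 for the uniform density 1/a^2
    return total * h * h / (a * a)

a = 1.0
closed_form = (math.sqrt(2) + math.log(1 + math.sqrt(2))) / 3 * a
print(expected_corner_distance(a))  # ≈ 0.7652
print(closed_form)                  # ≈ 0.7652
```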

APPENDIX C
EXPECTED NUMBER OF HOPS TO THE QD WITH DATA AND TO THE LAST QD

Referring to the probability density function P(x) in Appendix A, the probability of the distance being greater than a value r is P_{x>r}(r) = ∫_r^∞ P(x)dx. Minimum-distance routing amounts to selecting the minimum from a set of multiple independent random variables. Given N_QD random variables, we can represent this by a vector of random variables X = (x_1, x_2, x_3, ..., x_{N_QD}). The random variables are iid with pdf P(x), and the pdf of selecting the minimum of X is

\[
\begin{aligned}
P_{\min(X)}(r) &= \sum_{i=1}^{N_{QD}} P_X\!\left(x_i = r \mid x_j > r,\ \forall j \ne i\right) \\
&= \sum_{i=1}^{N_{QD}} \left( P(x_i = r) \prod_{j=1,\, j \ne i}^{N_{QD}} P_{x_j > r}(r) \right) \\
&= \sum_{i=1}^{N_{QD}} P(x_i = r)\left(P_{x>r}(r)\right)^{N_{QD}-1} \\
&= N_{QD}\, P(r)\left(P_{x>r}(r)\right)^{N_{QD}-1}.
\end{aligned}
\]



Hence, the expected distance to the nearest QD given N_QD choices is

\[
E[\min(X)] = \int_0^{\infty} r\, P_{\min(X)}(r)\, dr
= N_{QD} \int_0^{\infty} r P(r) \left( \int_r^{\infty} P(x)\, dx \right)^{N_{QD}-1} dr.
\]

Assuming a sufficiently large number of nodes in the network, the expected number of hops to the nearest QD is E[min(X)] divided by r0:

\[
E[H_{QD_{nearest}}] = \frac{N_{QD}}{r_0} \int_0^{\infty} r P(r) \left( \int_r^{\infty} P(x)\, dx \right)^{N_{QD}-1} dr.
\]

To calculate the average number of hops to get to the QD with the desired data, we multiply the probability P_i that QD_i has the data by the average number of hops to contact each QD and then take the sum. Hence, the expected number of hops to get to the QD with the data is

\[
E[H_{QD_{Data}}] = \sum_{i=1}^{N_{QD}} P_i\, E\!\left[H^i_{QD_{nearest}}\right],
\]

where the superscript i signifies that the number of QDs is i when computing the expected number of hops E[H^i_{QD_nearest}] inside the summation.

Finally, we calculate the average number of hops to get to the last-traversed QD. This is simply the sum of the expected number of hops when there are N_QD choices, then N_QD − 1 choices, and so on, until one choice is left. This is expressed below:

\[
E[H_{QD_{Last}}] = \sum_{i=1}^{N_{QD}} E\!\left[H^i_{QD_{nearest}}\right].
\]
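As a sanity check on these order-statistic expressions, the following Monte Carlo sketch (ours, not part of the paper; the square topology and parameter values are illustrative) estimates the expected hop count to the nearest of N_QD uniformly placed QDs:

```python
# Monte Carlo estimate of the distance from a random requester to the nearest
# of n_qd uniformly placed QDs in an a-by-a square. For n_qd = 1 this reduces
# to the pairwise mean distance of Appendix A (≈ 0.5214a); it shrinks as
# n_qd grows, mirroring the minimum-of-iid-variables formula.
import math
import random

def mean_nearest_qd_distance(a, n_qd, trials=50_000, seed=7):
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        rx, ry = rng.uniform(0, a), rng.uniform(0, a)  # requesting node
        total += min(
            math.hypot(rng.uniform(0, a) - rx, rng.uniform(0, a) - ry)
            for _ in range(n_qd)  # one candidate distance per QD
        )
    return total / trials

a, r0 = 1000.0, 100.0
for n_qd in (1, 3, 7):
    d = mean_nearest_qd_distance(a, n_qd)
    print(n_qd, round(d / r0, 2))  # expected hops shrink as N_QD grows
```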

APPENDIX D
EFFECT OF QD DISCONNECTIONS ON NETWORK TRAFFIC AND RESPONSE TIME

As was explained in Section 2.2, when a QD disconnects from the network, a new QD is elected, and the CNs that cache data items whose queries were stored in the disconnected QD cooperate to build the cache of the new QD. This process affects both the response time and the bandwidth consumption of the system. In [2], we derived the expected node disconnection rate in the network, E[N_disc], and showed that for a = 1,000 m, r0 = 100 m, 100 nodes, and an average speed of 2 m/s, we get E[N_disc] = 0.067 disconnections per second. We now compute the resultant response time and traffic overhead.

Suppose that the average number of queries held by a QD is n_QD, the total number of data items cached at the server is n, the number of QDs is N_QD, and the total number of nodes is N. Then the extra delay T* attributed to QD disconnections in a one-second time period is

\[
T^{*} = \frac{n_{QD}}{n} \times E[N_{disc}] \times \frac{N_{QD}}{N} \times \left(T_{in}H_A + T_{in}H_C + T_{out} + T_{in}H_C\right).
\]

The above expression reflects the fact that a request for a query that was cached on the departed QD will be forwarded to the server the first time it is submitted after the disconnection. As a realistic scenario, if we take N = 100, N_QD = 7, n = 10,000, n_QD = 250, and the default values of T_in, T_out, H_C, and H_A that we considered in [2] (T_in = 5 ms, T_out = 40 ms, H_C = 5.21 hops, and H_A = 21 hops), then T* will be equal to 0.023 ms. This value is insignificant compared to the average response time of the system (70 ms, as shown in the simulations).

Similarly, the extra bandwidth consumed due to QD disconnections can be shown to be

\[
BW^{*} = E[N_{disc}] \times \frac{N_{QD}}{N} \times \left(B_{QD_{elect}} + N_{CN} \times S_R \times H_C\right),
\]

where B_QDelect is the extra traffic required to elect a new QD, N_CN is the average number of CNs that cache data items whose queries are cached in the disconnected QD, and S_R is the size of the packet sent by a CN to the new QD to help it build its cache. Since the QD election packet traverses all nodes sequentially (see [2] for details), B_QDelect can be computed as (N − 1) × S_E + S_A, where S_E is the size of the election packet and S_A is the size of the QD assignment packet, and N_CN can be approximated by N_CN = N/N_QD. Again, if we consider the default values of S_R, S_E, and S_A as in [2] (S_R = 2 Kb, S_E = 1 Kb, and S_A = 1 Kb) with the same values of N_QD, N, and H_C as above, then BW* becomes 1.12 Kb/s. From Section 4, the average bandwidth consumption of SSUM is normally around 10 Kb/s. Hence, the extra traffic is much smaller than the total bandwidth consumption (a fact that we confirm in Section 4.10).
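The two worked examples above can be reproduced directly. The snippet below (illustrative; it simply plugs in the default values quoted from [2]) recomputes T* and BW*:

```python
# Reproducing the worked example in Appendix D with the paper's default values.
E_Ndisc = 0.067          # QD disconnections per second (from [2])
N, N_QD = 100, 7         # total nodes and query directories
n, n_QD = 10_000, 250    # data items at the server / queries per QD
T_in, T_out = 5.0, 40.0  # in-MANET per-hop delay and external delay (ms)
H_C, H_A = 5.21, 21.0    # expected hops inside the MANET / to the AP

# Extra delay per second attributed to QD disconnections
T_star = (n_QD / n) * E_Ndisc * (N_QD / N) * (
    T_in * H_A + T_in * H_C + T_out + T_in * H_C)
print(round(T_star, 3))  # ≈ 0.023 ms

# Extra bandwidth: election traffic plus CN cache-rebuild packets
S_R, S_E, S_A = 2.0, 1.0, 1.0    # packet sizes in Kb
B_QDelect = (N - 1) * S_E + S_A  # sequential election traversal
N_CN = N / N_QD                  # CNs tied to the departed QD
BW_star = E_Ndisc * (N_QD / N) * (B_QDelect + N_CN * S_R * H_C)
print(round(BW_star, 2))  # ≈ 1.17 Kb/s (the paper quotes 1.12; the small gap
                          # comes from rounding the intermediate quantities)
```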

REFERENCES

[1] S. Acharya, R. Alonso, M. Franklin, and S. Zdonik, "Broadcast Disks: Data Management for Asymmetric Communications Environments," Proc. ACM SIGMOD, pp. 199-210, May 1995.

[2] H. Artail, H. Safa, K. Mershad, Z. Abou-Atme, and N. Sulieman, "COACS: A Cooperative and Adaptive Caching System for MANETs," IEEE Trans. Mobile Computing, vol. 7, no. 8, pp. 961-977, Aug. 2008.

[3] H. Artail and K. Mershad, "MDPF: Minimum Distance Packet Forwarding for Search Applications in Mobile Ad Hoc Networks," IEEE Trans. Mobile Computing, vol. 8, no. 10, pp. 1412-1426, Oct. 2009.

[4] O. Bahat and A. Makowski, "Measuring Consistency in TTL-Based Caches," Performance Evaluation, vol. 62, pp. 439-455, 2005.

[5] D. Barbara and T. Imielinski, "Sleepers and Workaholics: Caching Strategies for Mobile Environments," Proc. ACM SIGMOD, pp. 1-12, May 1994.

[6] C. Bettstetter and J. Eberspacher, "Hop Distances in Homogeneous Ad Hoc Networks," Proc. 57th IEEE Semiann. Vehicular Technology Conf., vol. 4, pp. 2286-2290, Apr. 2003.

[7] N.A. Boudriga and M.S. Obaidat, "Fault and Intrusion Tolerance in Wireless Ad Hoc Networks," Proc. IEEE Wireless Comm. and Networking Conf. (WCNC), vol. 4, pp. 2281-2286, 2005.

[8] J. Cai and K. Tan, "Energy-Efficient Selective Cache Invalidation," Wireless Networks J., vol. 5, no. 6, pp. 489-502, Dec. 1999.

[9] J. Cao, Y. Zhang, L. Xie, and G. Cao, "Consistency of Cooperative Caching in Mobile Peer-to-Peer Systems over MANETs," Proc. Third Int'l Workshop Mobile Distributed Computing, vol. 6, pp. 573-579, 2005.

[10] G. Cao, "A Scalable Low-Latency Cache Invalidation Strategy for Mobile Environments," IEEE Trans. Knowledge and Data Eng., vol. 15, no. 5, pp. 1251-1265, Sept. 2003.

[11] P. Cao and C. Liu, "Maintaining Strong Cache Consistency in the World-Wide Web," IEEE Trans. Computers, vol. 47, no. 4, pp. 445-457, Apr. 1998.



[12] Y. Chung and C. Hwang, "Transactional Cache Management with Aperiodic Invalidation Scheme in Mobile Environments," Advances in Computing Science, pp. 50-61, Springer, 1999.

[13] A. Elmagarmid, J. Jing, A. Helal, and C. Lee, "Scalable Cache Invalidation Algorithms for Mobile Data Access," IEEE Trans. Knowledge and Data Eng., vol. 15, no. 6, pp. 1498-1511, Nov. 2003.

[14] H. Jin, J. Cao, and S. Feng, "A Selective Push Algorithm for Cooperative Cache Consistency Maintenance over MANETs," Proc. Third IFIP Int'l Conf. Embedded and Ubiquitous Computing, Dec. 2007.

[15] IEEE Standard 802.11, Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) Specification, IEEE, 1999.

[16] J. Jing, A. Elmagarmid, A. Helal, and R. Alonso, "Bit-Sequences: An Adaptive Cache Invalidation Method in Mobile Client/Server Environments," Mobile Networks and Applications, vol. 15, no. 2, pp. 115-127, 1997.

[17] J. Jung, A.W. Berger, and H. Balakrishnan, "Modeling TTL-Based Internet Caches," Proc. IEEE INFOCOM, Mar. 2003.

[18] X. Kai and Y. Lu, "Maintain Cache Consistency in Mobile Database Using Dynamical Periodical Broadcasting Strategy," Proc. Second Int'l Conf. Machine Learning and Cybernetics, pp. 2389-2393, 2003.

[19] B. Krishnamurthy and C.E. Wills, "Piggyback Server Invalidation for Proxy Cache Coherency," Proc. Seventh World Wide Web (WWW) Conf., Apr. 1998.

[20] B. Krishnamurthy and C.E. Wills, "Study of Piggyback Cache Validation for Proxy Caches in the World Wide Web," Proc. USENIX Symp. Internet Technologies and Systems, Dec. 1997.

[21] D. Li, P. Cao, and M. Dahlin, "WCIP: Web Cache Invalidation Protocol," IETF Internet Draft, http://tools.ietf.org/html/draft-danli-wrec-wcip-01, Mar. 2001.

[22] W. Li, E. Chan, Y. Wang, and D. Chen, "Cache Invalidation Strategies for Mobile Ad Hoc Networks," Proc. Int'l Conf. Parallel Processing, Sept. 2007.

[23] S. Lim, W.-C. Lee, G. Cao, and C.R. Das, "Performance Comparison of Cache Invalidation Strategies for Internet-Based Mobile Ad Hoc Networks," Proc. IEEE Int'l Conf. Mobile Ad-Hoc and Sensor Systems, pp. 104-113, Oct. 2004.

[24] M.N. Lima, A.L. dos Santos, and G. Pujolle, "A Survey of Survivability in Mobile Ad Hoc Networks," IEEE Comm. Surveys and Tutorials, vol. 11, no. 1, pp. 66-77, First Quarter 2009.

[25] H. Maalouf and M. Gurcan, "Minimisation of the Update Response Time in a Distributed Database System," Performance Evaluation, vol. 50, no. 4, pp. 245-266, 2002.

[26] P. Papadimitratos and Z.J. Haas, "Secure Data Transmission in Mobile Ad Hoc Networks," Proc. ACM Workshop Wireless Security (WiSe '03), pp. 41-50, 2003.

[27] J.P. Sheu, C.M. Chao, and C.W. Sun, "A Clock Synchronization Algorithm for Multi-Hop Wireless Ad Hoc Networks," Proc. 24th Int'l Conf. Distributed Computing Systems, pp. 574-581, 2004.

[28] W. Stallings, Cryptography and Network Security, fourth ed., Prentice Hall, 2006.

[29] D. Wessels, "Squid Internet Object Cache," http://www.squid-cache.org, Aug. 1998.

[30] J. Xu, X. Tang, and D. Lee, "Performance Analysis of Location-Dependent Cache Invalidation Schemes for Mobile Environments," IEEE Trans. Knowledge and Data Eng., vol. 15, no. 2, pp. 474-488, Feb. 2003.

[31] L. Yin, G. Cao, and Y. Cai, "A Generalized Target Driven Cache Replacement Policy for Mobile Environments," Proc. Int'l Symp. Applications and the Internet (SAINT '03), Jan. 2003.

[32] J. Yuen, E. Chan, K. Lain, and H. Leung, "Cache Invalidation Scheme for Mobile Computing Systems with Real-Time Data," SIGMOD Record, vol. 29, no. 4, pp. 34-39, Dec. 2000.

[33] D. Zhou and T.H. Lai, "An Accurate and Scalable Clock Synchronization Protocol for IEEE 802.11-Based Multihop Ad Hoc Networks," IEEE Trans. Parallel and Distributed Systems, vol. 18, no. 12, pp. 1797-1808, Dec. 2007.

[34] G. Zipf, Human Behavior and the Principle of Least Effort. Addison-Wesley, 1949.

Khaleel Mershad received the BE degree with high distinction in computer engineering and informatics from Beirut Arab University, Lebanon, in 2004, and the ME degree in computer and communications engineering from the American University of Beirut (AUB) in 2007. He is currently working toward the PhD degree at AUB, where he is doing research in the area of data availability in mobile ad hoc networks.

Hassan Artail received the BS and MS degrees in electrical engineering with high distinction from the University of Detroit in 1985 and 1986, respectively, and the PhD degree in electrical and computer engineering from Wayne State University in 1999. He is currently an associate professor at the American University of Beirut (AUB), doing research in the areas of Internet and mobile computing, mobile ad hoc networks, and vehicle ad hoc networks. During the past six years, he has published more than 80 papers in top conferences and reputable journals. Before joining AUB in 2001, he was a system development supervisor at the Scientific Labs of DaimlerChrysler, where he worked for 11 years in the field of software and system development for vehicle testing applications. He is a senior member of the IEEE.

