


Int. J. Inf. Secur.
DOI 10.1007/s10207-013-0221-x

REGULAR CONTRIBUTION

Randomized gossip algorithms under attack

Mousa Mousazadeh · Behrouz Tork Ladani

© Springer-Verlag Berlin Heidelberg 2013

Abstract Recently, gossip-based algorithms have received significant attention for data aggregation in distributed environments. The main advantage of gossip-based algorithms is their robustness in dynamic and fault-prone environments with unintentional faults such as link failure and channel noise. However, the robustness of such algorithms in hostile environments with intentional faults has remained unexplored. In this paper, we call attention to the risks which may be caused by the use of gossip algorithms in hostile environments, i.e., when some malicious nodes collude to skew aggregation results by violating the normal execution of the protocol. We first introduce a model of hostile environment and then examine the behavior of randomized gossip algorithms in this model using probabilistic analysis. Our model of hostile environment is general enough to cover a wide range of attacks. However, to achieve stronger results, we focus our analysis on fully connected networks and some powerful attacks. Our analysis shows that in the presence of malicious nodes, after some initial steps, randomized gossip algorithms reach a point at which the lengthening of gossiping is harmful, i.e., the average accuracy of the estimates of the aggregate value begins to decrease strictly.

Keywords Distributed averaging · Randomized gossip algorithms · Hostile environment · Malicious node · Harmful gossiping

M. Mousazadeh (B) · B. Tork Ladani
Department of Computer Engineering, University of Isfahan, Hezar Jerib Ave., Isfahan, Iran
e-mail: [email protected]

B. Tork Ladani
e-mail: [email protected]

1 Introduction

In the last decade, various gossip-based schemes have been introduced for aggregating data in distributed systems. The Push-Sum algorithm [14] and randomized gossip algorithms [5] are highly cited cases. In gossip-based algorithms, the involved nodes iteratively contact their neighbors to exchange estimates of the aggregate value, and based on the received values, the nodes update their current estimates. Over time, the local estimates of the nodes asymptotically converge to the actual aggregate value.

Gossip-based data aggregation algorithms are robust in fault-prone environments and have no single point of failure or bottleneck [8]. Besides, when dealing with a particular environment, other advantages also emerge. For example, when gossip-based algorithms are used in sensor and ad hoc networks, there is no need for routing, and the energy required for running the aggregation algorithm is fairly divided among the nodes. Another example is the case of peer-to-peer networks. In this example, when gossip-based algorithms are used, there is no need for each individual node to know the identities of all participating nodes; a local and restricted view of the whole network is sufficient. In this case, fast convergence and stability in the presence of frequent churn (i.e., the joining and leaving of nodes) are two other advantages that come from using gossip algorithms. Due to their various desirable properties, gossip-based schemes have been utilized to solve different problems including load balancing in parallel computing [7], data aggregation in general [14, 5] and in particular in sensor networks [3, 4, 6, 18], trust and reputation management in peer-to-peer networks [22], and privacy-preserving data mining [17].

Ensuring the integrity of aggregate values in hostile (i.e., insecure) environments is a fundamental problem in computer science that has been much considered over the years.


For instance, the Byzantine Generals problem [12] is a well-known problem in computer science that deals with the integrity of the aggregate values when flooding is used for aggregating data in distributed systems. As another example, ensuring the integrity of aggregate values is a well-studied problem in sensor networks, especially when in-network aggregation is considered [15]. When gossip-based algorithms are used for data aggregation, the integrity of aggregate values may be even more crucial. Note that gossip algorithms have been adopted to work in fully distributed systems in which, due to the lack of a central authority, the detection and isolation of violating nodes are not trivial tasks. Furthermore, an individual malicious node can take part in several steps of the gossip algorithm, which gives it more opportunity to distort the aggregate value. As an illustrative example, consider a fully distributed peer-to-peer network with a gossip-based data aggregation subsystem, which is used to compute the reputation scores of the nodes as considered in [22]. In this case, malicious nodes are naturally motivated to misuse the potential vulnerabilities of gossip algorithms for both raising their own scores and reducing those of others. This can be accomplished simply by ignoring the updates of the gossip algorithm and repeatedly declaring fake values. When malicious nodes collude, the situation can be even worse. As another example, in a critical sensor network, some nodes may be compromised by an attacker to distort the aggregate value. In addition to reporting false sensed values, these nodes may be forced to misuse the vulnerabilities of gossip algorithms to distort the aggregate value even further.

Recently, a series of remarkable works have been published on the analysis of existing gossip-based data aggregation algorithms in fault-prone environments and the design of more robust algorithms. Packet-dropping channels [10], noisy channels [11, 13, 20], quantization error [2, 21], and general disturbances [1, 16] are some investigated problems. In this paper, we go beyond the conventional disturbances to model intentional misbehaviors and to analyze the behavior of gossip-based data aggregation algorithms under such a model. To this end, we consider the behavior of the system when two types of nodes are involved: honest nodes that exactly obey the predefined algorithm and malicious nodes that violate the algorithm by declaring fake values. Due to the diversity of gossip algorithms, we restrict our analysis to randomized gossip algorithms in the asynchronous time model as a prototype of gossip algorithms. Randomized gossip algorithms are distributed averaging algorithms which have been analyzed by Boyd et al. [5]. Also, due to the involvement of more than one type of node in the hostile environment, conventional theoretical tools (such as spectral graph theory) that are usually used for analyzing gossip algorithms can hardly be utilized here. Therefore, we restrict our analysis to some powerful attacks on fully connected networks and use the basic results of probability theory to obtain stronger results.

To realize the contributions of the paper, consider an environment with N nodes in which a gossip algorithm is used for aggregating data. In this case, there are N estimators for computing the aggregate value, i.e., one estimator per node. Hence, we use the average mean-square error (average MSE) of the estimators for measuring the accuracy of the whole system. We show how the average MSE evolves over time when malicious nodes are involved. The analysis shows that in the presence of malicious nodes, the average MSE initially decreases until it reaches a special point at which it starts to increase strictly. Also, it will be shown that under a mild condition, the number of gossip steps required to reach the increasing phase is at most linearly proportional to both the number of nodes and the inverse of the malicious node rate.

The rest of the paper is organized as follows: Sect. 2 presents a model for hostile environment and discusses other problem-related issues. To provide insight into the problem, the results of two simulations are presented in Sect. 3. Analysis of the algorithm's convergence in the modeled hostile environment takes place in Sect. 4. In Sect. 5, we formulate the evolution of the average MSE over time. In Sect. 6, we discuss the effects of relaxing some of the assumptions inherent in the analytical model. Finally, Sect. 7 concludes the paper.

2 Modeling hostile environment

To model hostile environments, we assume two types of nodes, honest and malicious, that participate in the gossip algorithm. Honest nodes are those which exactly obey the gossip algorithm, while malicious nodes are those which are able to violate it. Consider an environment with N nodes where some of them are malicious and each node is connected to all others. Let H and M be the sets of honest and malicious nodes, respectively, where |H| = n, |M| = m, and N = n + m. Also, p = m/N represents the malicious node rate of the environment.

First, let all nodes be honest and run the randomized gossip algorithm in the asynchronous time model [5]. In this case, each node has an independent Poisson clock, and all clocks tick at the same rate. A Poisson clock has no memory: the time intervals between consecutive events (ticks) are independent random variables, but the expected number of events (the rate of the events) over any specific time interval is constant. When the clock of a node ticks, the node randomly selects one of its neighboring nodes and contacts it. At this time, both nodes update their estimates of the aggregate value to be the average of their current estimates. At any given point in time, each node participates in at most one transaction and ignores other requests. Let t_1, t_2, . . . be the absolute points in time at which updates take place. The time intervals between two consecutive updates do not matter in our analysis, so for the sake of convenience and without loss of generality, we set t_k = k for every k = 1, 2, . . .. Let nodes i, j ∈ H be the two participants of a gossip update at time t + 1, with x_i^t and x_j^t as their estimates of the aggregate value at t. Then, the transactions (updates) of the randomized gossip algorithm can be formalized as follows:

x_k^{t+1} = (x_i^t + x_j^t)/2,  if k = i or k = j,
x_k^{t+1} = x_k^t,  otherwise.   (1)
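For concreteness, the update rule (1) on a fully connected network of honest nodes can be sketched in a few lines of Python (an illustrative sketch, not part of the original algorithm specification; the initial values in the usage example below are arbitrary):

```python
import random

def gossip_step(x, rng=random):
    """One asynchronous gossip update, following Eq. (1): a random
    node's clock ticks, it contacts a uniformly chosen other node,
    and both adopt the average of their two current estimates."""
    i = rng.randrange(len(x))
    j = rng.choice([k for k in range(len(x)) if k != i])
    x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x
```

Note that each update preserves the sum of the estimates, which is why, with only honest nodes, all local estimates converge to the initial average.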

Before considering gossip algorithms in the hostile environment, we should note that the average is not inherently a secure aggregation function, regardless of the employed aggregation algorithm [19]. This is because a malicious node can easily skew the average toward any arbitrary amount by appropriately choosing its own value. Hence, as a presumption in our model of hostile environment, we consider that there are lower and upper bounds for the values declared by nodes during the aggregation process. In this way, although malicious nodes are free to ignore the bounds, since honest nodes check the received values, out-of-bound values are automatically dropped. In any case, defining such bounds is usually not a difficult task for a particular application. So, in the following discussions, we suppose that node values are bounded by the interval [L, U].

Now let us focus on the activities of malicious nodes. Informally, we say a node is malicious if it violates the gossip algorithm by sending fake values to its participants in order to distort the aggregate value. As an example, consider a system with three nodes where the initial values of the nodes lie between 0 and 1. Now, consider that one of the nodes is malicious and intends to skew the aggregate value toward the maximum value, i.e., 1. Then, when this node is on one side of a gossip transaction, it simply declares 1 without updating its value. The malicious node expects that, regardless of the initial values, the values of the other nodes converge to 1 after a few updates.

In our model, we allow each malicious node to have its own strategy for generating fake values over time. The strategy of a malicious node is defined as a sequence of values that this node sends to other nodes over time. More formally, the attack strategy of a malicious node i is the sequence A_i = (r_i^0, r_i^1, . . .), where r_i^t is the value that node i sends to the other participant of the gossip transaction at time t, provided that node i is involved in a gossip transaction. We define the joint attack strategy of the malicious nodes as the set of all malicious nodes' strategies, i.e., A = {A_i | i ∈ M}.

Our main goal in this paper is to call attention to the risks which may be introduced by using gossip algorithms in the absence of proper defensive measures. On the other hand, the notion of joint attack strategy is too general. Hence, without considering some specific strategies, the study of gossip algorithms in hostile environments can hardly result in significant outcomes. Therefore, we focus on two powerful and easy-to-run strategies. In these strategies, malicious nodes collude to skew the aggregation result toward their desired value.

In the first strategy, all malicious nodes declare an identical and fixed value over time. This strategy can be formally defined as follows:

Strategy 1 Let μ_m be a fixed value and A = {(r_i^0, r_i^1, . . .) | i ∈ M} be the joint attack strategy of the malicious nodes. In this strategy, ∀i ∈ M and ∀t ≥ 0, r_i^t = μ_m.

As we will see in Sect. 4, if malicious nodes intend to skew the aggregation result toward a fixed value v ∈ [L, U], they can simply use Strategy 1 with μ_m = v. Furthermore, if malicious nodes aim to cause maximum disturbance to the system by skewing the aggregation results toward the lower bound (upper bound), they can use Strategy 1 with μ_m = L (μ_m = U), which is not only an easy-to-run strategy but also the most effective one.
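As a concrete (hypothetical) illustration, the following sketch simulates gossip under Strategy 1 on a fully connected network: two distinct nodes are paired uniformly at random in each step, and a malicious participant always declares the fixed fake value μ_m. This is an assumption-laden toy model, not the authors' implementation:

```python
import random

def hostile_gossip(x_honest, n_malicious, mu_m, steps, seed=1):
    """Randomized gossip under Strategy 1: a malicious participant
    always declares mu_m and never updates; honest participants
    average as usual.  Returns the final honest values."""
    rng = random.Random(seed)
    x = list(x_honest)
    n = len(x)                       # honest nodes: indices 0 .. n-1
    N = n + n_malicious              # malicious nodes: indices n .. N-1
    for _ in range(steps):
        i, j = rng.sample(range(N), 2)
        if i < n and j < n:          # honest-honest: average both
            x[i] = x[j] = (x[i] + x[j]) / 2
        elif i < n:                  # honest i meets a malicious node
            x[i] = (x[i] + mu_m) / 2
        elif j < n:                  # honest j meets a malicious node
            x[j] = (x[j] + mu_m) / 2
        # malicious-malicious pairs leave honest values unchanged
    return x
```

With three honest nodes and one malicious node pushing μ_m = 0.6 (the setting of the first experiment in Sect. 3), all honest values end up near 0.6 after sufficiently many steps.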

Although Strategy 1 is easy to run and effective, malicious nodes using it may be easily detected because they declare a fixed fake value. The next step is to allow malicious nodes to generate fake values with identical and fixed mean values but non-zero variances. A non-zero variance allows malicious nodes to avoid being detected easily. This strategy can be defined more formally as follows:

Strategy 2 Let μ_m and σ_m be two fixed values and A = {(r_i^0, r_i^1, . . .) | i ∈ M} be the joint attack strategy of the malicious nodes. In this strategy, each r_i^t is drawn from a probability distribution with E[r_i^t] = μ_m and Var(r_i^t) = σ_m^2.

Strategy 1 is a special case of Strategy 2. Again, suppose that malicious nodes intend to skew the aggregation results toward a fixed value v ∈ [L, U]. Instead of using Strategy 1, they can use Strategy 2 with μ_m = v and a non-zero variance to remain undetectable. In Sect. 4, we show that in this case, the average of the values of the honest nodes converges to v in expectation.

If defensive measures are used for detecting and isolating malicious nodes, the malicious nodes will naturally use smarter attack strategies. This is the start of a cat-and-mouse game: smarter attacks need more powerful defensive measures to be prevented, and more powerful measures lead malicious nodes to smarter strategies. Other strategies, including those which are used to tackle specific defensive measures, are outside the scope of this paper.

Now, we are going to define a metric for measuring the accuracy of gossip algorithms in hostile environments. Let x(t) = (x_1^t, x_2^t, . . . , x_n^t) be the vector of the values of the honest nodes at time t and x̄(t) be its average, i.e., x̄(t) = (1/n) Σ_{i=1}^n x_i^t. In the randomized gossip algorithm, at any time t, each x_i^t, i = 1, . . . , n, is an estimator of x̄(0). On the other hand, for an individual node i, the mean squared error (MSE), i.e., E[(x_i^t − x̄(0))^2], is a usual quantity for measuring the quality of the estimator x_i^t. Therefore, we use the average of the mean squared errors of the nodes as the performance measure of the gossip algorithm at any time t. First, let us define the average squared error (ASE) as follows:

ASE(t) = (1/n) Σ_{i=1}^n (x_i^t − x̄(0))^2.   (2)

Thus,

E[ASE(t)] = (1/n) Σ_{i=1}^n E[(x_i^t − x̄(0))^2].   (3)

Therefore, E[ASE(t)] is the average of the mean squared errors of the nodes (the average MSE).

Note that in the definition of the average MSE, we use the average of the initial values of only the honest nodes (i.e., x̄(0)) instead of the average of the initial values of all nodes. This is because the accuracy of the initial values of malicious nodes is dubious, and thus obtaining x̄(0) can be considered the ultimate goal of the honest nodes.

If all nodes are honest, the average MSE of the nodes is in fact the expected variance of the nodes, and it therefore decreases over time [5]. However, as we will see in the next sections, in the modeled hostile environment, after some initial steps, the average MSE reaches a point at which it begins to increase strictly. This means that after this point, the lengthening of gossiping is harmful. We formally define harmful gossiping as follows:

Definition 1 (Harmful Gossiping) Gossiping is harmful at time t iff E[ASE(t)] > E[ASE(t − 1)]. Otherwise, it is useful at t.

3 Simulation examples

In this section, to provide an intuitive insight into the behavior of the randomized gossip algorithm in the modeled hostile environment, the results of two simulation experiments are presented. The first experiment deals with the convergence of the randomized gossip algorithm when malicious nodes use Strategy 1 to skew aggregation results. In this scenario, there are 4 nodes in the environment. Nodes 1–3 are honest, and node 4 is malicious. The initial values of the honest nodes are 0.1, 0.3, and 0.5, respectively. Node 4 tries to skew the honest node values using Strategy 1 with μ_m = 0.6. Figure 1 shows the mean values of the nodes over time. To approximate the mean values of the nodes, we have used the average of 10,000 runs. As can be predicted, under Strategy 1, the values of all honest nodes converge to the malicious node's value. Figure 2 shows an approximation of E[ASE(t)], which is also obtained by averaging ASE(t) over 10,000 runs. In expectation, the average MSE of the honest nodes reaches its minimum at the fourth step and after this point begins to increase strictly. In other words, after 4 steps, gossiping will be harmful.

Fig. 1 Curves a, b, and c show the mean values of nodes 1, 2, and 3, respectively, over time. Curve d shows the value of the malicious node (i.e., node 4), which remains unchanged over time

Fig. 2 The average MSE of the honest nodes over time when the malicious node uses Strategy 1 with μ_m = 0.6 to skew the aggregation results
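This first experiment is easy to reproduce. The following Monte Carlo sketch (an illustration using the paper's stated parameters, not the authors' code) estimates E[ASE(t)] by averaging ASE(t) over many independent runs:

```python
import random

def avg_ase(n_runs=2000, t_max=30, seed=7):
    """Estimate E[ASE(t)] for the first experiment: honest initial
    values 0.1, 0.3, 0.5, and one malicious node using Strategy 1
    with mu_m = 0.6.  ASE is measured against xbar(0) = 0.3 (Eq. 2).
    Returns the estimated E[ASE(t)] for t = 0 .. t_max."""
    rng = random.Random(seed)
    mu_m, N, n, xbar0 = 0.6, 4, 3, 0.3
    total = [0.0] * (t_max + 1)
    for _ in range(n_runs):
        x = [0.1, 0.3, 0.5]          # node 4 (index 3) is malicious
        for t in range(t_max + 1):
            total[t] += sum((v - xbar0) ** 2 for v in x) / n
            i, j = rng.sample(range(N), 2)   # one gossip transaction
            if i < n and j < n:
                x[i] = x[j] = (x[i] + x[j]) / 2
            elif i < n:
                x[i] = (x[i] + mu_m) / 2
            elif j < n:
                x[j] = (x[j] + mu_m) / 2
    return [s / n_runs for s in total]
```

The estimated curve first decreases and then increases, reproducing the harmful-gossiping shape of Fig. 2.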

In the second experiment, there are 1,000 nodes, of which 50 are malicious. The node values are restricted to the interval [0, 1], i.e., L = 0 and U = 1. The initial values of honest nodes have been drawn at random from the interval [0, 1] using a uniform distribution. The initial average and variance of the values of the honest nodes are 0.4924 and 0.0833, respectively, which means that x̄(0) = 0.4924 and ASE(0) = 0.0833. On the other hand, malicious nodes use Strategy 2 to attack the system. The fake values of malicious nodes are governed by a uniform distribution over the interval [0.8, 1.0], and thus μ_m = 0.9 and σ_m = 0.0577.

When malicious nodes use Strategy 2 with σ_m > 0, fake values are not fixed and we do not expect the values of the honest nodes to converge. However, in the presence of malicious nodes, harmful gossiping is unavoidable. Figure 3 shows an approximation of E[ASE(t)] generated by averaging ASE(t) over 100 complete runs of the gossip algorithm, with 30,000 gossip transactions in each single run. As Fig. 3 shows, E[ASE(t)] reaches its minimum value at t = 3,615; thereafter, gossiping is harmful. Also, after 24,469 steps, E[ASE(t)] exceeds the initial variance of the honest nodes.

Fig. 3 The average MSE of the honest nodes over time when the malicious nodes use Strategy 2 with μ_m = 0.9 and σ_m = 0.0577

In the next two sections, both the convergence of the randomized gossip algorithm and the evolution of the average MSE will be investigated analytically.

4 Convergence in the hostile environment

In this section, we investigate the evolution of x_i^t, ∀i = 1, . . . , n, and x̄(t) over time. In the following discussions, conditional expectation is widely used. Suppose at time t, nodes i and j are the two participants of a gossip update. For the sake of convenience in the mathematical analysis, we allow the special case of i = j, in which nothing happens in the gossip update. If we allow this special case, one of the following events occurs at t:

A = {(i, j)|i ∈ H, j ∈ H}, (4)

B = {(i, j)|i ∈ H, j ∈ M} ∪ {(i, j)|i ∈ M, j ∈ H}, (5)

C = {(i, j)|i ∈ M, j ∈ M}. (6)

Obviously, these events obey a binomial distribution with two experiments, where in each experiment the probability of success (a node being malicious) is p. Hence,

Pr(A) = (1 − p)2, (7)

Pr(B) = 2p(1 − p), and (8)

Pr(C) = p2. (9)

In the following discussions, the term 1 − p/N frequently appears, so we define

c = 1 − p/N.   (10)

Let us first formulate the evolution of x̄(t) in the hostile environment using the following theorem. Since Strategy 1 is a special case of Strategy 2, we consider only Strategy 2.

Theorem 1 If malicious nodes use Strategy 2 to attack the randomized gossip algorithm in the asynchronous time model, then

E[x̄(t)] = c^t x̄(0) + (1 − c^t) μ_m.   (11)

Proof Let the event A occur at time t + 1, let i and j be the participating nodes of the gossip update at t + 1, and let x(t) = x = (x_1, . . . , x_n). Then,

x̄(t + 1 | i ∈ H, j ∈ H, x(t) = x) = (1/n) Σ_{k=1}^n x_k^{t+1}
 = x̄(t) − (1/n)(x_i + x_j) + (2/n)((x_i + x_j)/2)
 = x̄(t).

Then, x̄(t + 1 | i ∈ H, j ∈ H, x(t) = x) is independent of i and j. So, taking the expectation over x, we have

E[x̄(t + 1) | A] = E[x̄(t)].   (12)

Now, let the event B occur at t + 1, let i ∈ H and j ∈ M be the participants of the gossip update, x_j^t = x_m, and x(t) = x = (x_1, . . . , x_n). Then,

x̄(t + 1 | i ∈ H, x_j^t = x_m, x(t) = x) = (1/n) Σ_{k=1}^n x_k^{t+1}
 = x̄(t) − (1/n) x_i + (1/n)((x_i + x_m)/2)
 = x̄(t) − x_i/(2n) + x_m/(2n).

Taking the expectation over x_m, we have

E[x̄(t + 1) | i ∈ H, j ∈ M, x(t) = x] = x̄(t) − x_i/(2n) + μ_m/(2n).

Again, by taking the expectation over all i ∈ H and j ∈ M, we have

E[x̄(t + 1) | B, x(t) = x] = x̄(t) − (1/(2n^2)) Σ_{i=1}^n x_i + μ_m/(2n)
 = (1 − 1/(2n)) x̄(t) + μ_m/(2n).

Taking the expectation over x, we have

E[x̄(t + 1) | B] = (1 − 1/(2n)) E[x̄(t)] + μ_m/(2n).   (13)

If the event C occurs, x̄(t) remains unchanged, and then

E[x̄(t + 1) | C] = E[x̄(t)].   (14)


Putting together all the pieces yields

E[x̄(t + 1)] = Pr(A) E[x̄(t + 1) | A] + Pr(B) E[x̄(t + 1) | B] + Pr(C) E[x̄(t + 1) | C]
 = (1 − p(1 − p)/n) E[x̄(t)] + p(1 − p) μ_m/n.

Since n = (1 − p)N, we have

E[x̄(t + 1)] = c E[x̄(t)] + p μ_m/N.   (15)

Repeatedly using the above recursive equation, we have

E[x̄(t)] = c^t E[x̄(0)] + (p μ_m/N) Σ_{i=0}^{t−1} c^i
 = c^t x̄(0) + (p μ_m/N)((1 − c^t)/(1 − c)).

And hence,

E[x̄(t)] = c^t x̄(0) + (1 − c^t) μ_m.  ∎
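The closed form of Theorem 1 can be checked empirically. The sketch below (a hypothetical illustration using the four-node example of Sect. 3, not the authors' code) compares the simulated mean of x̄(t) with Eq. (11); pairs (i, j) are drawn independently, so the no-op case i = j is allowed exactly as in the analysis:

```python
import random

def empirical_mean(t, n_runs=4000, seed=3):
    """Average of xbar(t) over many runs for the four-node example:
    honest values 0.1, 0.3, 0.5; one malicious node using Strategy 1
    with mu_m = 0.6.  (i, j) are independent uniform draws, so the
    i = j no-op case is included, matching the analysis."""
    rng = random.Random(seed)
    N, n, mu_m = 4, 3, 0.6
    acc = 0.0
    for _ in range(n_runs):
        x = [0.1, 0.3, 0.5]
        for _ in range(t):
            i, j = rng.randrange(N), rng.randrange(N)
            if i < n and j < n and i != j:
                x[i] = x[j] = (x[i] + x[j]) / 2
            elif i < n and j >= n:
                x[i] = (x[i] + mu_m) / 2
            elif j < n and i >= n:
                x[j] = (x[j] + mu_m) / 2
        acc += sum(x) / n
    return acc / n_runs

def theoretical_mean(t, xbar0=0.3, mu_m=0.6, p=0.25, N=4):
    c = 1 - p / N                                 # Eq. (10)
    return c ** t * xbar0 + (1 - c ** t) * mu_m   # Eq. (11)
```

For moderate t, the two quantities agree to within Monte Carlo noise.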

The above theorem shows that if p > 0 and malicious nodes use Strategy 2, then in expectation, x̄(t) moves from x̄(0) toward μ_m over time, so E[x̄(t)] converges. Since Strategy 1 is an instance of Strategy 2, the above result also holds for Strategy 1. To deal with the convergence of the values of the individual nodes as well, let us first define the average distance of the honest nodes from μ_m as follows:

D(t) = (1/n) Σ_{k=1}^n |x_k^t − μ_m|.   (16)

The following lemma deals with an upper bound for D(t).

Lemma 1 If malicious nodes use Strategy 2 to attack the randomized gossip algorithm in the asynchronous time model, then

E[D(t)] ≤ c^t D(0) + (1 − c^t) σ_m.   (17)

Proof Let the event A occur at time t + 1, let i and j be the participants of the gossip update, and x(t) = x = (x_1, . . . , x_n). Then,

D(t + 1 | i ∈ H, j ∈ H, x(t) = x) = (1/n) Σ_{k=1}^n |x_k^{t+1} − μ_m|
 = D(t) + (1/n)(−|x_i − μ_m| − |x_j − μ_m| + 2 |(x_i + x_j)/2 − μ_m|).

Since |x_i + x_j − 2μ_m| ≤ |x_i − μ_m| + |x_j − μ_m|, we have

D(t + 1 | i ∈ H, j ∈ H, x(t) = x) ≤ D(t).

Thus,

E[D(t + 1) | A] ≤ E[D(t)].   (18)

Now, let the event B occur at t + 1, let i ∈ H and j ∈ M be the participants of the gossip update, x_j^t = x_m, and x(t) = x = (x_1, . . . , x_n). Then,

D(t + 1 | i ∈ H, x_j^t = x_m, x(t) = x) = (1/n) Σ_{k=1}^n |x_k^{t+1} − μ_m|
 = D(t) + (1/n)(−|x_i − μ_m| + |(x_i + x_m)/2 − μ_m|).

Since |(x_i + x_m)/2 − μ_m| ≤ (1/2)|x_i − μ_m| + (1/2)|x_m − μ_m|, we have

D(t + 1 | i ∈ H, x_j^t = x_m, x(t) = x) ≤ D(t) − (1/(2n))|x_i − μ_m| + (1/(2n))|x_m − μ_m|.

Using Jensen's inequality, we have

(E[|x_m − μ_m|])^2 ≤ E[|x_m − μ_m|^2] = E[(x_m − μ_m)^2] = σ_m^2.

Therefore,

E[|x_m − μ_m|] ≤ σ_m.

Hence, relaxing the condition x_j^t = x_m yields

E[D(t + 1) | i ∈ H, j ∈ M, x(t) = x] ≤ D(t) − (1/(2n))|x_i − μ_m| + (1/(2n)) σ_m.

Taking the expectation over all i ∈ H and j ∈ M implies

E[D(t + 1) | B, x(t) = x] ≤ D(t) − (1/(2n^2)) Σ_{i=1}^n |x_i − μ_m| + (1/(2n)) σ_m
 = (1 − 1/(2n)) D(t) + (1/(2n)) σ_m.

Relaxing the condition x(t) = x, we have

E[D(t + 1) | B] ≤ (1 − 1/(2n)) E[D(t)] + (1/(2n)) σ_m.   (19)

If the event C occurs, D(t) remains unchanged, and then

E[D(t + 1) | C] = E[D(t)].   (20)

Putting together all the pieces yields

E[D(t + 1)] = Pr(A) E[D(t + 1) | A] + Pr(B) E[D(t + 1) | B] + Pr(C) E[D(t + 1) | C]
 ≤ c E[D(t)] + (p/N) σ_m.


Repeatedly using the above recursive inequality, we have

E[D(t)] ≤ c^t E[D(0)] + (p σ_m/N) Σ_{i=0}^{t−1} c^i
 = c^t D(0) + (p/N)((1 − c^t)/(1 − c)) σ_m.

This yields

E[D(t)] ≤ c^t D(0) + (1 − c^t) σ_m.  ∎

Now, we use the above lemma to prove the following theorem, which deals with the convergence of the values of the individual honest nodes when malicious nodes use Strategy 1.

Theorem 2 If malicious nodes use Strategy 1 to attack the randomized gossip algorithm in the asynchronous time model, and if p > 0, then for any i ∈ H, x_i^t converges in probability to μ_m.

Proof In Strategy 1, σ_m = 0. Then, using Lemma 1, we have

E[D(t)] ≤ c^t D(0).   (21)

Since p > 0, we have 0 ≤ c < 1, and then (21) implies that lim_{t→+∞} E[D(t)] = 0. Therefore, for any i ∈ H, lim_{t→+∞} E[|x_i^t − μ_m|] = 0.

On the other hand, using Markov's inequality, for every ε > 0, we have

Pr(|x_i^t − μ_m| ≥ ε) ≤ E[|x_i^t − μ_m|] / ε.

Therefore, for every ε > 0, lim_{t→+∞} Pr(|x_i^t − μ_m| ≥ ε) = 0. Hence, by definition [9], x_i^t converges to μ_m in probability.  ∎
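Convergence in probability can also be illustrated empirically. The following sketch (an illustration on the hypothetical four-node example; all parameters are assumptions) estimates Pr(|x_1^t − μ_m| ≥ ε) and shows it shrinking with t:

```python
import random

def prob_far(t, eps=0.05, n_runs=2000, seed=5):
    """Estimate Pr(|x_1^t - mu_m| >= eps) for honest node 1 in the
    four-node Strategy 1 example (honest values 0.1, 0.3, 0.5;
    mu_m = 0.6).  (i, j) are independent draws, as in the analysis."""
    rng = random.Random(seed)
    N, n, mu_m = 4, 3, 0.6
    far = 0
    for _ in range(n_runs):
        x = [0.1, 0.3, 0.5]
        for _ in range(t):
            i, j = rng.randrange(N), rng.randrange(N)
            if i < n and j < n and i != j:
                x[i] = x[j] = (x[i] + x[j]) / 2
            elif i < n and j >= n:
                x[i] = (x[i] + mu_m) / 2
            elif j < n and i >= n:
                x[j] = (x[j] + mu_m) / 2
        far += abs(x[0] - mu_m) >= eps
    return far / n_runs
```

For small t most runs are still far from μ_m, while for large t the estimated probability is essentially zero, in line with Theorem 2.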

5 Harmful gossiping

In this section, we first derive a recursive formulation for computing the average MSE at any time. Then, we use this formulation to analyze the behavior of the randomized gossip algorithm, particularly by dealing with harmful gossiping. In the following formulation, E[ASE(t)] has a close relation with E[(x̄(t) − x̄(0))^2]. The results are summarized in Theorem 3.

Theorem 3 If malicious nodes use Strategy 2 to attack the randomized gossip algorithm in the asynchronous time model, then

[ E[ASE(t + 1)] ]   [ 1 − (1 + p)/(2N)   (1 − p)/N ] [ E[ASE(t)] ]
[ E[S(t + 1)]   ] = [ p/(2nN)            1 − 2p/N  ] [ E[S(t)]   ]

                    [ (p/N)(3/2 − c^t)(μ_m − x̄(0))^2 + (p/(2N)) σ_m^2                      ]
                  + [ (2p/N)(1 − c^t − 1/(4n) + c^t/(2n))(μ_m − x̄(0))^2 + (p/(2nN)) σ_m^2 ]   (22)

where

S(t) = (x̄(t) − x̄(0))^2.   (23)

Proof The proof is straightforward; it can be found in Appendix A.

Using Theorem 3, one can find time intervals in which gossiping is harmful. However, this formulation of the average mean squared error is too complex to be used directly for investigating the randomized gossip algorithm in hostile environments. Thus, we first obtain a lower bound for the average MSE in Lemma 2. Then, using this lower bound and some other auxiliary lemmas, we obtain a point after which gossiping is harmful.
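For instance, the recursion of Theorem 3 can be iterated numerically. The sketch below (an illustration, not the authors' code) uses the parameters of the second experiment in Sect. 3 with E[S(0)] = 0 and locates the step at which E[ASE(t)] starts to increase:

```python
def iterate_mse(steps=30000, N=1000, p=0.05, ase0=0.0833,
                mu_dist2=(0.9 - 0.4924) ** 2, sigma2=0.0577 ** 2):
    """Iterate Eq. (22) for E[ASE(t)] and E[S(t)], starting from
    E[ASE(0)] = ase0 and E[S(0)] = 0; return the E[ASE(t)] trajectory.
    mu_dist2 is (mu_m - xbar(0))^2 and sigma2 is sigma_m^2."""
    n = round((1 - p) * N)        # number of honest nodes
    c = 1 - p / N                 # Eq. (10)
    ase, s = ase0, 0.0
    out = [ase]
    ct = 1.0                      # c ** t, updated incrementally
    for _ in range(steps):
        ase_next = ((1 - (1 + p) / (2 * N)) * ase + ((1 - p) / N) * s
                    + (p / N) * (1.5 - ct) * mu_dist2
                    + (p / (2 * N)) * sigma2)
        s_next = ((p / (2 * n * N)) * ase + (1 - 2 * p / N) * s
                  + (2 * p / N) * (1 - ct - 1 / (4 * n) + ct / (2 * n)) * mu_dist2
                  + (p / (2 * n * N)) * sigma2)
        ase, s = ase_next, s_next
        ct *= c
        out.append(ase)
    return out
```

The trajectory attains an interior minimum after a few thousand steps and then increases, matching the qualitative behavior reported for Fig. 3.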

Lemma 2 If malicious nodes use Strategy 2 to attack the randomized gossip algorithm in the asynchronous time model, then

E[ASE(t)] ≥ (1 − c^t)^2 (μ_m − x̄(0))^2.   (24)

Proof We know that ∀y ∈ R^n: ‖y‖_1 ≤ n^{1/2} ‖y‖_2, where ‖·‖_p is the p-norm. Thus,

Σ_{i=1}^n |x_i^t − x̄(0)| ≤ n^{1/2} (Σ_{i=1}^n (x_i^t − x̄(0))^2)^{1/2} = n (ASE(t))^{1/2}.

Hence,

(ASE(t))^{1/2} ≥ (1/n) Σ_{i=1}^n |x_i^t − x̄(0)|.

Thus, using the triangle inequality for the L1-norm, we have

(ASE(t))^{1/2} ≥ |(1/n) Σ_{i=1}^n (x_i^t − x̄(0))| = |x̄(t) − x̄(0)|,

and so ASE(t) ≥ (x̄(t) − x̄(0))^2. Therefore,

E[ASE(t)] ≥ E[(x̄(t) − x̄(0))^2].

Hence, using Jensen's inequality and (11), we have

E[ASE(t)] ≥ (E[x̄(t)] − x̄(0))^2 = (c^t x̄(0) + (1 − c^t) μ_m − x̄(0))^2.

Hence, E[ASE(t)] ≥ (1 − c^t)^2 (μ_m − x̄(0))^2.  ∎

Now, we show that if malicious nodes use Strategy 2 and gossiping is harmful at t_h, then it is also harmful at any time t ≥ t_h. We use Lemmas 3 and 4 to prove this statement.

Lemma 3 Suppose malicious nodes use Strategy 2 to attack the randomized gossip algorithm in the asynchronous time model. Let p > 0, and

$$y(t) = \begin{bmatrix} E[ASE(t)] \\ E[S(t)] \end{bmatrix}.$$

If $y(t) > y(t-1)$, then $y(t+1) > y(t)$, where $\begin{bmatrix} a \\ b \end{bmatrix} > \begin{bmatrix} c \\ d \end{bmatrix}$ means that both $a > c$ and $b > d$.


Proof Let

$$A = \begin{bmatrix} 1 - \frac{1+\frac{p}{2}}{N} & \frac{1-p}{N} \\ \frac{p}{2nN} & 1 - \frac{2p}{N} \end{bmatrix},$$

and

$$f(t) = \begin{bmatrix} \frac{p}{N}\left(\frac{3}{2} - c^t\right)(\mu_m - \bar{x}(0))^2 + \frac{p}{2N}\sigma_m^2 \\ \frac{2p}{N}\left(1 - c^t - \frac{1}{4n} + \frac{c^t}{2n}\right)(\mu_m - \bar{x}(0))^2 + \frac{p}{2nN}\sigma_m^2 \end{bmatrix}.$$

Then, by Theorem 3, y(t) = Ay(t − 1) + f (t − 1). Also, y(t) > y(t − 1) yields

Ay(t − 1) + f (t − 1) > y(t − 1).

Thus,

(A − I )y(t − 1) + f (t − 1) > 0. (25)

On the other hand,

y(t + 1) − y(t) = (A − I )y(t) + f (t)

= (A − I )(Ay(t − 1) + f (t − 1)) + f (t)

= A((A − I )y(t − 1) + f (t − 1))

+ f (t) − f (t − 1). (26)

A is a matrix with positive entries; hence, (25) and (26) imply

y(t + 1) − y(t) > f (t) − f (t − 1).

Obviously, f (t) is an increasing function of t, and hence f (t) − f (t − 1) > 0. So, y(t + 1) > y(t). □

Lemma 4 If malicious nodes use Strategy 2 to attack the randomized gossip algorithm in the asynchronous time model, and p > 0, then E[S(t)] is a strictly increasing function of t.

Proof The proof is straightforward. You can find it in “Appendix B”.

Lemma 5 If malicious nodes use Strategy 2 to attack the randomized gossip algorithm in the asynchronous time model, and gossiping is harmful at $t_h$, then it is also harmful at any time $t \ge t_h$.

Proof Direct result of Lemmas 3 and 4. □

Corollary 1 If malicious nodes use Strategy 2 to attack the randomized gossip algorithm in the asynchronous time model, then gossiping is harmful at any time t > 0 iff

$$\frac{(\mu_m - \bar{x}(0))^2 + \sigma_m^2}{ASE(0)} > 1 + \frac{2}{p}. \quad (27)$$

Proof By definition, gossiping is harmful at t = 1 iff

$$E[ASE(1)] > E[ASE(0)].$$

Then, using Theorem 3, we have

$$\left(1 - \frac{1+\frac{p}{2}}{N}\right) E[ASE(0)] + \frac{1-p}{N} E[S(0)] + \frac{p}{2N}\sigma_m^2 + \frac{p}{N}\left(\frac{3}{2} - c^0\right)(\mu_m - \bar{x}(0))^2 > ASE(0).$$

On the other hand, S(0) = 0, and so the above inequality is equivalent to

$$\frac{(\mu_m - \bar{x}(0))^2 + \sigma_m^2}{ASE(0)} > 1 + \frac{2}{p}.$$

Hence, gossiping is harmful at t = 1 iff (27) holds. So, due to Lemma 5, if (27) holds, then gossiping is harmful at any time t > 0. □
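Condition (27) reduces to a one-line check on the attack parameters; the helper below (names ours) evaluates it.

```python
def harmful_for_all_t(mu_m, xbar0, sigma_m_sq, ase0, p):
    """Corollary 1: gossiping is harmful at every t > 0 iff
    ((mu_m - xbar(0))**2 + sigma_m**2) / ASE(0) > 1 + 2/p."""
    return ((mu_m - xbar0) ** 2 + sigma_m_sq) / ase0 > 1 + 2 / p
```

For instance, a large gap between $\mu_m$ and the true average relative to the initial dispersion makes gossiping harmful from the first step, while a small gap does not.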

In Lemma 5, we stated that if gossiping at a given point is harmful, then at any time after that point, it will be harmful as well. Now, we show that under a mild condition, the number of gossip steps required to reach the mentioned point is in $O(\frac{N}{p})$.

Theorem 4 If malicious nodes use Strategy 2 to attack the randomized gossip algorithm in the asynchronous time model, and $|\mu_m - \bar{x}(0)| > \sqrt{ASE(0)}$, then at any time t such that

$$t \ge \frac{N}{p} \log\left(1 - \frac{\sqrt{ASE(0)}}{|\mu_m - \bar{x}(0)|}\right)^{-1}, \quad (28)$$

gossiping is harmful.

Proof If at time t, $E[ASE(t)] > ASE(0)$, then there is $t_h$ such that $0 < t_h \le t$ and $E[ASE(t_h)] > E[ASE(t_h - 1)]$, i.e., gossiping is harmful at $t_h$, and hence Lemma 5 guarantees that it is also harmful at t. Therefore, if $E[ASE(t)] > ASE(0)$, then gossiping is harmful at t.

On the other hand, if

$$t > \frac{-\log\left(1 - \frac{\sqrt{ASE(0)}}{|\mu_m - \bar{x}(0)|}\right)}{-\log\left(1 - \frac{p}{N}\right)}, \quad (29)$$

then $(1 - c^t)^2 (\mu_m - \bar{x}(0))^2 > ASE(0)$, and Lemma 2 guarantees that $E[ASE(t)] > ASE(0)$.

Furthermore, for any x > 1, we have

$$\frac{1}{-\log\left(1 - \frac{1}{x}\right)} < x.$$

Hence, if the inequality (28) holds, then (29) holds as well, and therefore $E[ASE(t)] > ASE(0)$, i.e., gossiping is harmful at t. □
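The bound (28) can be evaluated directly. A minimal sketch (function and parameter names are ours):

```python
import math

def harmful_onset_bound(N, p, ase0, mu_m, xbar0):
    """Bound (28) of Theorem 4: gossiping is harmful at every time
    t >= this value, provided |mu_m - xbar(0)| > sqrt(ASE(0))."""
    r = math.sqrt(ase0) / abs(mu_m - xbar0)
    if r >= 1:
        raise ValueError("Theorem 4 requires |mu_m - xbar(0)| > sqrt(ASE(0))")
    return (N / p) * math.log(1 / (1 - r))   # (N/p) * log(1 - r)^(-1)
```

For example, with N = 1000, p = 0.1, ASE(0) = 0.01 and $|\mu_m - \bar{x}(0)| = 0.4$, the bound is about 2877 steps, illustrating the $O(\frac{N}{p})$ growth claimed above.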

6 Beyond the analytical model

In the previous sections, in order to make the model analytically tractable, we made some assumptions about both


the adversary model and the network topology. In this section, we discuss the effects of relaxing some of these assumptions.

6.1 Adversary model

In the modeled hostile environment, malicious nodes can deliberately report any fake value in [L , U], but they comply with the protocol otherwise. We showed that this type of attack can be very damaging. In practice, however, the situation can be even worse. Indeed, malicious nodes can be strengthened by increasing their participation in the averaging algorithm. The Sybil attack, denial of service (DoS), and what we call the hyperactivity of malicious nodes are some ways to do so. In the Sybil attack, a single malicious node acts as several nodes by forging different identities to increase its influence. Using a DoS attack, malicious nodes can decrease the influence of honest nodes. Finally, hyperactive malicious nodes increase their influence by participating in multiple gossip transactions simultaneously.

The above attacks can be at least partially mitigated using different techniques.¹ However, to see how these attacks can affect the system performance, we consider a system with 1,000 nodes, of which 10 are malicious. Malicious nodes use Strategy 2 to attack the system. The experiment setting is similar to that of the previous experiment presented in Sect. 3. But this time, malicious nodes use the above-mentioned attacks (i.e., Sybil, DoS, and hyperactivity) to increase their participation in the averaging algorithm in order to intensify the attack. If the attacks are successful, malicious nodes can act as a larger group. Suppose the malicious nodes can pretend to be a group of 10, 20, 50, 100, 200, or 500 nodes. For each case, Theorem 3 can be used to evaluate the system performance. The results are presented in Fig. 4 in terms of the average MSE.

As expected, the above attacks can significantly intensify the adverse effects of the previous ones. However, evaluating the exact effects of these attacks requires more details about the real environment, such as the capabilities of malicious nodes and the selected defensive measures, which are out of the scope of this work. Once such details are known or estimated, one can use the results of the previous sections to evaluate the system performance exactly.
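The qualitative effect is easy to reproduce with a rough Monte Carlo sketch. This is our own illustrative setup, not the paper's Fig. 4 computation (which evaluates Theorem 3 directly): a coalition that successfully inflates its participation is simply modeled as `claimed` malicious identities on the complete graph, and the Strategy 2 fake values are drawn here as Gaussians with mean $\mu_m$ and standard deviation $\sigma_m$, an assumption on our part since the paper only fixes those two moments.

```python
import random

def mse_after(n_honest, claimed, mu_m=0.9, sigma_m=0.0577, steps=1000, seed=7):
    """MSE of honest estimates about the true initial average after
    `steps` asynchronous gossip updates, when the malicious coalition
    behaves like `claimed` distinct identities on a complete graph."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(n_honest)]
    target = sum(x) / n_honest          # true average of honest inputs
    N = n_honest + claimed
    for _ in range(steps):
        i = rng.randrange(n_honest)     # honest node wakes up
        j = rng.randrange(N)            # contacts a uniform random peer
        if j < n_honest:
            if i != j:                  # honest peer: symmetric averaging
                x[i] = x[j] = (x[i] + x[j]) / 2
        else:                           # malicious peer: fake Strategy 2 value
            x[i] = (x[i] + rng.gauss(mu_m, sigma_m)) / 2
    return sum((v - target) ** 2 for v in x) / n_honest
```

Running this with increasing `claimed` shows the same trend as Fig. 4: the larger the claimed coalition, the faster the honest estimates are skewed away from the true average.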

1 For example, the Sybil attack can be prevented by assigning a central authority for managing identities, or mitigated by increasing the cost of obtaining new identities. DoS attacks can be mitigated by enhancing computation and communication capacities. Hyperactive nodes can be detected through data sharing between nodes. Moreover, when data sharing is not feasible, the effect of hyperactive nodes can be reduced by forcing nodes to run the averaging algorithm as fast as possible.

Fig. 4 The average MSE of the honest nodes over time where the malicious nodes use Strategy 2 with μm = 0.9 and σm = 0.0577 and pretend to be a group of 10, 20, 50, 100, 200, or 500 nodes

Fig. 5 A random geometric graph in the unit square with 100 nodes where R = 0.15. The circles show honest nodes, and the squares show malicious ones

6.2 Network topology

In the previous sections, the underlying communication network was modeled as a complete graph. Although this is an acceptable assumption in many cases (e.g., in peer-to-peer networks), there are important networks in which complete graphs cannot be considered as the network topology. Particularly, for the case of wireless sensor networks, random geometric graphs are typically used to model the network topology. In these graphs, vertices are placed uniformly at random on a plane, and an edge is created between vertices i and j if and only if the Euclidean distance between i and j is less than a threshold value R. Figure 5 shows an example of these graphs, where 100 nodes are randomly placed on the unit square and R is defined to be 0.15.
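Such a graph can be generated in a few lines. A minimal sketch (our own helper, not the paper's simulation code):

```python
import math
import random

def random_geometric_graph(n=100, R=0.15, seed=3):
    """Drop n vertices uniformly at random in the unit square and
    connect i and j iff their Euclidean distance is less than R.
    Returns the point list and an adjacency list."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) < R:   # threshold rule
                adj[i].append(j)
                adj[j].append(i)
    return pts, adj
```

Note that with $R = \sqrt{2}$ every pair of points in the unit square is connected, which recovers the complete graph used in the earlier sections.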

When there is no attack, it is known that the convergence rate of node values for the case of random geometric graphs is slower than in the case of the complete graph [5]. When there is an attack, we use computer simulation to see how


Fig. 6 The average MSE of the honest nodes over time where the malicious nodes use Strategy 2 with μm = 0.9 and σm = 0.0577, and the network topology is a random geometric graph with different values of R

the network topology can affect the system performance. To this end, we consider a system with 1,000 nodes, of which 50 are malicious. The setting is as before, except that the network topology is now modeled as a random geometric graph in the unit square with different values of R. For each value of R, we randomly generate 100 geometric graphs and report the average performance in terms of the average MSE. Figure 6 shows the results. The lower curve corresponds to the case where R = √2. In this case, the random geometric graph is indeed a complete graph. By comparing the upper curves with the lower one, we can conclude that not only do the node values converge more slowly on random geometric graphs, but also the average MSE never reaches the minimum value obtained in the case of the complete graph.

7 Conclusion and future work

In this paper, we investigated the behavior of randomized gossip algorithms in hostile environments. To deal with hostile environments, we first modeled malicious nodes as the source of fake values. Then, we observed how the convergence of randomized gossip algorithms may be affected by malicious nodes when such nodes use particular strategies for generating fake values. Advancing the notion of harmful gossiping was another part of the work. Although in hostile environments limited gossiping may be useful, the analysis confirmed that unlimited gossiping may end in disastrous results, even in the presence of a small group of malicious nodes. Limited gossiping and the use of safeguards for detecting and isolating malicious nodes should be considered to mitigate potential harms in hostile environments. Designing such safeguards will be considered in our future work.

Acknowledgments This work was supported in part by the Iran Telecommunication Research Center Grant T-500-19242.

Appendix A: Proof of Theorem 3

Proof We use conditional expectation to prove the theorem. Let the event A occur at t + 1, let i and j be the participants of the gossip update, and let $x(t) = x = (x_1, \ldots, x_n)$. Then,

$$ASE(t+1 \mid i \in H, j \in H, x(t) = x) = \frac{1}{n} \sum_{k=1}^{n} (x_k^{t+1} - \bar{x}(0))^2 = ASE(t) + \frac{1}{n}\left( 2\left( \frac{x_i + x_j}{2} - \bar{x}(0) \right)^2 - (x_i - \bar{x}(0))^2 - (x_j - \bar{x}(0))^2 \right).$$

Then, by taking the expectation over all $i, j \in H$, and applying straightforward calculations, we have

$$E[ASE(t+1) \mid A, x(t) = x] = \left(1 - \frac{1}{n}\right) ASE(t) + \frac{1}{n} S(t).$$

Taking the expectation over x yields

$$E[ASE(t+1) \mid A] = \left(1 - \frac{1}{n}\right) E[ASE(t)] + \frac{1}{n} E[S(t)]. \quad (30)$$

Now, suppose the event B occurs at t + 1, $i \in H$ and $j \in M$ are the participants of the gossip update, $x_j^t = x_m$, and $x(t) = x = (x_1, \ldots, x_n)$. Then,

$$ASE(t+1 \mid i \in H, x_j^t = x_m, x(t) = x) = \frac{1}{n} \sum_{k=1}^{n} (x_k^{t+1} - \bar{x}(0))^2 = ASE(t) + \frac{1}{n}\left( -(x_i - \bar{x}(0))^2 + \left( \frac{x_i + x_m}{2} - \bar{x}(0) \right)^2 \right) = ASE(t) - \frac{3}{4n}(x_i - \bar{x}(0))^2 + \frac{1}{4n}(x_m - \bar{x}(0))^2 + \frac{1}{2n}(x_m - \bar{x}(0))(x_i - \bar{x}(0)).$$

On the other hand,

$$E[(x_m - \bar{x}(0))^2] = \sigma_m^2 + (\mu_m - \bar{x}(0))^2.$$

Then, taking the expectation over $x_m$ yields

$$ASE(t+1 \mid i \in H, j \in M, x(t) = x) = ASE(t) - \frac{3}{4n}(x_i - \bar{x}(0))^2 + \frac{1}{4n}\left( \sigma_m^2 + (\mu_m - \bar{x}(0))^2 \right) + \frac{1}{2n}(\mu_m - \bar{x}(0))(x_i - \bar{x}(0)).$$


Taking the expectation over all $i \in H$ and $j \in M$, and then taking the expectation over x, implies

$$E[ASE(t+1) \mid B] = \left(1 - \frac{3}{4n}\right) E[ASE(t)] + \frac{1}{4n}\left( \sigma_m^2 + (\mu_m - \bar{x}(0))^2 \right) + \frac{1}{2n}(\mu_m - \bar{x}(0)) \left( E[\bar{x}(t)] - \bar{x}(0) \right). \quad (31)$$

If the event C occurs, ASE(t) remains unchanged, and then

$$E[ASE(t+1) \mid C] = E[ASE(t)]. \quad (32)$$

Putting together all the pieces yields

$$E[ASE(t+1)] = \Pr(A) E[ASE(t+1) \mid A] + \Pr(B) E[ASE(t+1) \mid B] + \Pr(C) E[ASE(t+1) \mid C].$$

Then, using (11), (30), (31), and (32), we have

$$E[ASE(t+1)] = \left(1 - \frac{1+\frac{p}{2}}{N}\right) E[ASE(t)] + \frac{1-p}{N} E[S(t)] + \frac{p}{2N} \sigma_m^2 + \frac{p}{N}\left(\frac{3}{2} - c^t\right)(\mu_m - \bar{x}(0))^2.$$

To obtain E[S(t + 1)], we should first compute E[S(t + 1)|A], E[S(t + 1)|B], and E[S(t + 1)|C]. Due to its similarity to E[ASE(t + 1)], we present only the results and omit the proofs.

$$E[S(t+1) \mid A] = E[S(t)], \quad (33)$$

$$E[S(t+1) \mid B] = \left(1 - \frac{1}{n}\right) E[S(t)] + \frac{1}{4n^2} E[ASE(t)] + \frac{1}{n}(1 - c^t)(\mu_m - \bar{x}(0))^2 + \frac{1}{4n^2}\sigma_m^2 - \frac{1}{2n^2}\left(\frac{1}{2} - c^t\right)(\mu_m - \bar{x}(0))^2, \quad (34)$$

$$E[S(t+1) \mid C] = E[S(t)]. \quad (35)$$

On the other hand,

$$E[S(t+1)] = \Pr(A) E[S(t+1) \mid A] + \Pr(B) E[S(t+1) \mid B] + \Pr(C) E[S(t+1) \mid C].$$

Thus, using (33), (34), and (35), we have

$$E[S(t+1)] = \left(1 - \frac{2p}{N}\right) E[S(t)] + \frac{p}{2nN} E[ASE(t)] + \frac{2p}{N}\left(1 - c^t - \frac{1}{4n} + \frac{c^t}{2n}\right)(\mu_m - \bar{x}(0))^2 + \frac{p}{2nN} \sigma_m^2.$$

This relation completes the proof. □

Appendix B: Proof of Lemma 4

Proof Let the event B occur at t + 1, let $i \in H$ and $j \in M$ be the participants of the gossip update, $x_j^t = x_m$, and $x(t) = x = (x_1, \ldots, x_n)$. Then,

$$S(t+1 \mid i \in H, x_j^t = x_m, x(t) = x) = \left( \frac{1}{n} \sum_{k=1}^{n} x_k^{t+1} - \bar{x}(0) \right)^2 = \left( \frac{1}{n} \sum_{k=1}^{n} x_k - \frac{x_i}{n} + \frac{x_i + x_m}{2n} - \bar{x}(0) \right)^2 = \left( \bar{x}(t) - \bar{x}(0) + \frac{x_m - x_i}{2n} \right)^2 = (\bar{x}(t) - \bar{x}(0))^2 + 2(\bar{x}(t) - \bar{x}(0)) \left( \frac{x_m - x_i}{2n} \right) + \left( \frac{x_m - x_i}{2n} \right)^2 \ge S(t) + \frac{1}{n}(\bar{x}(t) - \bar{x}(0))(x_m - x_i).$$

Taking the expectation over $x_m$, we have

$$E[S(t+1) \mid i \in H, j \in M, x(t) = x] \ge S(t) + \frac{1}{n}(\bar{x}(t) - \bar{x}(0))(\mu_m - x_i).$$

Taking the expectation over all $i \in H$ and $j \in M$, we have

$$E[S(t+1) \mid B, x(t) = x] \ge S(t) + \frac{1}{n}(\bar{x}(t) - \bar{x}(0))(\mu_m - \bar{x}(t)).$$

Taking the expectation over x, and using (11), we have

$$E[S(t+1) \mid B] \ge E[S(t)] + \frac{1}{n}\left( E[\bar{x}(t)] - \bar{x}(0) \right)\left( \mu_m - E[\bar{x}(t)] \right) = E[S(t)] + \frac{1}{n}(1 - c^t)\, c^t (\mu_m - \bar{x}(0))^2.$$

Hence, E[S(t + 1)|B] > E[S(t)]. On the other hand, due to (33) and (35), if the event A or the event C occurs at t + 1, then E[S(t + 1)] = E[S(t)]. Therefore, E[S(t + 1)] > E[S(t)] for any t ≥ 0. □

References

1. Aysal, T.C., Barner, K.E.: Convergence of consensus models with stochastic disturbances. IEEE Trans. Inf. Theory 56(8), 4101–4113 (2010)

2. Aysal, T.C., Coates, M.J., Rabbat, M.G.: Distributed average consensus with dithered quantization. IEEE Trans. Signal Process. 56(10), 4905–4918 (2008)

3. Aysal, T.C., Yildiz, M., Sarwate, A., Scaglione, A.: Broadcast gossip algorithms for consensus. IEEE Trans. Signal Process. 57(7), 2748–2761 (2009)


4. Benezit, F., Dimakis, A.G., Thiran, P., Vetterli, M.: Order-optimal consensus through randomized path averaging. IEEE Trans. Inf. Theory 56(10), 5150–5167 (2010)

5. Boyd, S., Ghosh, A., Prabhakar, B., Shah, D.: Randomized gossip algorithms. IEEE Trans. Inf. Theory 52(6), 2508–2530 (2006)

6. Chen, J., Pandurangan, G., Xu, D.: Robust computation of aggregates in wireless sensor networks: distributed randomized algorithms and analysis. IEEE Trans. Parallel Distrib. Syst. 17(9), 987–1000 (2006)

7. Cybenko, G.: Load balancing for distributed memory multiprocessors. J. Parallel Distrib. Comput. 7, 279–301 (1989)

8. Dimakis, A., Kar, S., Moura, J., Rabbat, M., Scaglione, A.: Gossip algorithms for distributed signal processing. Proc. IEEE 98(11), 1847–1864 (2010)

9. Dudley, R.M.: Real Analysis and Probability. Cambridge University Press, Cambridge (2002)

10. Fagnani, F., Zampieri, S.: Average consensus with packet drop communication. SIAM J. Control Optim. 48(1), 102–133 (2007)

11. Hatano, Y., Das, A.K., Mesbahi, M.: Agreement in presence of noise: pseudogradients on random geometric networks. In: Proceedings of the 44th IEEE Conference on Decision and Control, and 2005 European Control Conference (2005)

12. Lamport, L., Shostak, R., Pease, M.: The Byzantine generals problem. ACM Trans. Program. Lang. Syst. 4(3), 382–401 (1982)

13. Kar, S., Moura, J.M.F.: Distributed average consensus in sensor networks with random link failures and communication channel noise. In: Proceedings of the 41st Asilomar Conference on Signals, Systems and Computers, pp. 676–680 (2007)

14. Kempe, D., Dobra, A., Gehrke, J.: Gossip-based computation of aggregate information. In: Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, pp. 482–491 (2003)

15. Przydatek, B., Song, D., Perrig, A.: SIA: secure information aggregation in sensor networks. In: Proceedings of the 1st International Conference on Embedded Networked Sensor Systems, pp. 255–265 (2003)

16. Rajagopal, R., Wainwright, M.J.: Network-based consensus averaging with general noisy channels. IEEE Trans. Signal Process. 59(1), 350–364 (2011)

17. Sakuma, J., Kobayashi, S.: Large-scale k-means clustering with user-centric privacy-preservation. Knowl. Inf. Syst. 25, 253–279 (2010)

18. Ustebay, D., Oreshkin, B., Coates, M., Rabbat, M.: Greedy gossip with eavesdropping. IEEE Trans. Signal Process. 58(7), 3765–3776 (2010)

19. Wagner, D.: Resilient aggregation in sensor networks. In: Proceedings of the 2nd ACM Workshop on Security of Ad Hoc and Sensor Networks, pp. 78–87 (2005)

20. Xiao, L., Boyd, S., Kim, S.-J.: Distributed average consensus with least-mean-square deviation. J. Parallel Distrib. Comput. 67(1), 33–46 (2007)

21. Yildiz, M.E., Scaglione, A.: Coding with side information for rate-constrained consensus. IEEE Trans. Signal Process. 56(8), 3753–3764 (2008)

22. Zhou, R., Hwang, K., Cai, M.: GossipTrust for fast reputation aggregation in peer-to-peer networks. IEEE Trans. Knowl. Data Eng. 20(9), 1282–1295 (2008)
