
Wireless Pers Commun (2014) 75:1689–1713
DOI 10.1007/s11277-013-1299-1

An Effective Model for Indirect Trust Computation in Pervasive Computing Environment

Naima Iltaf · Abdul Ghafoor · Usman Zia · Mukhtar Hussain

Published online: 4 July 2013
© Springer Science+Business Media New York 2013

Abstract The performance of indirect trust computation models (based on recommendations) can be easily compromised due to the subjective and social-based prejudice of the provided recommendations. Eradicating the influence of such recommendations remains an important and challenging issue in indirect trust computation models. An effective model for indirect trust computation is proposed which is capable of identifying dishonest recommendations. Dishonest recommendations are identified by using a deviation based detection technique. The concept of measuring the credibility of a recommendation (rather than the credibility of the recommender) using a fuzzy inference engine is also proposed to determine the influence of each honest recommendation. The proposed model has been compared with other existing evolutionary recommendation models in this field, and it is shown that the model is more accurate in measuring the trustworthiness of an unknown entity.

Keywords Recommendation model · Pervasive computing · Malicious recommendation detection

N. Iltaf · M. Hussain
Department of Software Engineering, National University of Sciences and Technology (NUST), Islamabad, Pakistan
e-mail: [email protected]

M. Hussain
e-mail: [email protected]

A. Ghafoor (B)
Department of Electrical Engineering, National University of Sciences and Technology (NUST), Islamabad, Pakistan
e-mail: [email protected]

U. Zia
Department of Computer Engineering, Center of Advanced Studies and Engineering (CASE), Islamabad, Pakistan
e-mail: [email protected]


1 Introduction

Access control is a primary concern of collaborative, dynamic and open environments (pervasive environments) with no definite security boundaries. The dynamic properties envisioned for pervasive computing imply that traditional access control mechanisms are often inappropriate as they are static in nature [1]. Trust based on the human notion is applied to cope with new security concerns in pervasive environments. Much research on trust based access control models for pervasive environments has been carried out [2–6], which uses trust as an elementary criterion for authorizing known, partially known and unknown entities to interact with each other.

Direct trust computational models proposed in the literature evaluate the trustworthiness of known and partially known entities on the basis of their personal experience [3]. However, if the entity is completely unknown, trustworthiness is defined based on the recommendations received from peer services. Reliance on peer services for seeking recommendations about an unfamiliar service requestor can lead to erroneous decisions if the recommenders provide recommendations that deviate from their experience. The recommenders can falsely provide dishonest recommendations either to elevate the trust value of malicious entities or to lessen the trust value of honest entities [7]. Therefore, a mechanism to avoid the influence of dishonest recommendations from malicious recommenders is a fundamental problem for trust models.

Moreover, in existing trust models the recommending services often provide a single trust value (computed by them during their interaction with the unknown entity in question) as a recommendation [4–6] and [22]. However, a single trust value recommended by a recommender represents its subjective opinion about the unknown entity and cannot depict the real trust level very well under certain circumstances. For example, S requests recommendations about an unknown entity C and receives replies from two recommenders A and B with their recommended trust values TA = 0.7 and TB = 0.5. Does it mean S should give both recommendations an equal weight in aggregated recommendation computing? Suppose TA was evaluated at time tA and TB at tB, where tcurrent − tA > tcurrent − tB, and A also has a very modest history of past interactions with C as compared to B. It is obvious that the recommendation provided by B is more reliable than that of A in terms of predicting the trustworthiness of C, as it is more recent and based on more experience. Moreover, it is also possible that the context of interaction between B and C was more sensitive vis-a-vis that between A and C, thus further strengthening the trustworthiness of B's opinion about C.

To address these issues, an effective model for indirect trust computation is proposed. The proposed approach is based on the premise that only honest recommendations should be considered for recommended trust computation. Moreover, honest and highly credible recommendations should be given more weight than recommendations of low credibility in computing an aggregated recommended trust value for an unknown entity. The effectiveness of the proposed model is demonstrated through experiments. To the best of our knowledge, the proposed model is the first that can not only distinguish between honest and dishonest recommendations but also incorporates a mechanism to determine the weight each valid recommendation should carry in the aggregation process.

2 Related Work

In the literature, trust value is mostly modeled by two parts: a direct trust value based on the entity's own experience and an indirect trust value based on recommendations from others [3]. However, when an entity has no experience with the service requester, it considers recommendations from other entities to determine its trustworthiness. The service providers often receive recommendations from a number of recommenders, and therefore require a method to determine the aggregated reputation for the unknown service.

Summation is often used to produce an aggregated recommendation value. The models proposed in [8] and [9] use the single factor of feedback as the recommendation measure, which often fails to capture the trustworthiness of an entity effectively. They are designed on the assumption that the given recommendation is always honest and without bias. However, aggregating the recommendations without evaluating them can lead to erroneous decisions. The recommenders can often provide recommendations that deviate from their experience due to intentional or unintentional errors (misinterpretations and misunderstandings).

Recently, research has been carried out in designing defense mechanisms to detect dishonest recommendations in these open distributed environments [11–23]. The defense mechanisms against dishonest recommendations have been grouped into two broad categories, namely the exogenous method and the endogenous method [11]. The approaches that fall in the exogenous method use external factors along with the recommendations (reputation of the recommender and credibility of the recommender) to decide the trustworthiness of the given recommendation. However, these approaches assume that only highly reputed recommenders can give honest recommendations and vice versa. In [12], Xiong and Liu presented an approach (PeerTrust) that avoids aggregation of the individual interactions. Their model computes the trustworthiness of a given peer based on the community feedback about the participant's past behavior. The credibility factor of the feedback source is computed using a function of its trust value as its credibility value. The model also incorporates the personalized similarity between the experience with other partners for reputation on ranking discrepancy. Chen et al. [13] distinguish between recommendations by computing the reputation of each recommender. The reputation is measured on the basis of the quality and quantity of the recommendations it provides. The recommender's reputation is used as its weight when aggregating the recommendations of all the recommenders. However, the model does not consider the service type of the recommender on which its recommendation is based. In [14], rater credibility is proposed for recommendation assessment. It believes that only highly reputed recommenders can give honest recommendations. These models use other external information sources to gather the reputations of recommenders. Ganeriwal et al. [15] believe that the weight of a recommendation given by a service provider depends on its own reputation for service provisioning. In other words, if it provides a reliable service then the recommendations it provides are also reliable. In [16], the global reputation of a node is aggregated from local trust scores weighted by the global reputation scores of all senders. Since these models are based on the assumption that entities with high reputation provide honest recommendations, they are vulnerable to attack. A smart attacker may behave well for a while to get a high reputation and then provide dishonest recommendations that cannot be detected by schemes using reputation [17]. That is, a recommender can build repute with different expectations and intentions, and the recommendations it provides can be different from its experience.

In the endogenous method, the recommendation seeker has no personal experience with the entity in question. It relies only on the recommendations provided by the recommenders to detect dishonest recommendations. The method believes that dishonest recommendations have different statistical patterns from honest recommendations. Therefore, in this method the filtering of dishonest recommendations is based on analyzing and comparing the recommendations themselves. In trust models where indirect trust based on recommendations is used only once to allow a stranger entity to interact, the endogenous method based on the majority rule is commonly used.


Dellarocas [18] has proposed an approach based on controlled anonymity to separate unfairly high ratings from fair ratings. This approach is unable to handle unfairly low ratings [19]. In [20], a filtering algorithm based on the beta distribution is proposed to determine whether each recommendation Ri falls between the q quartile (lower) and the (1 − q) quartile (upper). Whenever a recommendation does not lie between the lower and upper quartiles it is considered malicious, and the recommendation is excluded. The technique assumes that recommendations follow the Beta distribution, and is effective only if there is an effectively large number of recommendations. Weng et al. in [21] proposed a filtering mechanism based on entropy. The basic idea is that if a recommendation is too different from the majority opinion then it could be unfair. The approach is similar to other reputation based models except that it uses entropy to differentiate between different recommendations. A context specific and reputation-based trust model for pervasive computing environments is proposed in [22] for detecting malicious recommendations, based on the control chart method. The control chart method uses the mean and standard deviation to calculate the Lower Confidence Limit (LCL) and Upper Confidence Limit (UCL). It is assumed that the recommendation values that lie outside the interval defined by LCL and UCL are malicious, and they are therefore discarded from the set of valid recommendations. It considers that a metrical distance exists between valid and invalid recommendations. As a result, the rate of filtering out false positive and false negative recommendations is really high. Deno et al. [23] proposed an iterative filtering method for the process of detecting malicious recommendations. The effectiveness of this approach depends on choosing a suitable value for a predefined threshold S. These detection mechanisms based on the majority rule can easily be bypassed if a relatively small bias is introduced in dishonest recommendations.

In this paper, we propose a hybrid model that not only identifies dishonest recommendations but also uses the content of the recommendation itself to determine its credibility without relying on external factors.

3 Framework for Proposed Model

The existing trust and reputation models proposed for the pervasive computing environment do investigate measures to address the issues related to dishonest recommenders who use bad-mouthing and ballot stuffing attacks to deceive other services. However, they insufficiently handle the issue of measuring the reliability of the recommendation. They generally believe that the recommendation is only as reliable as its source (the recommendation provider), and thus include the reputation of the recommender as a factor to measure the significance of the recommendation made by it. This motivates us to design our fuzzy based credibility evaluation model for indirect trust computation to allow access to resources in an uncertain environment. In our model, we assume that all entities are autonomous and some of them are mobile. Entities in our model try to access the services. Thus, we establish trust relationships between entities and the services. Each service maintains a list of trustworthy and untrustworthy entities, the trust value associated with them, the time when the trust value was last revised and the number of interactions the entity had with the service. An overview of our proposed framework is shown in Fig. 1. In the proposed framework, when an entity requests access to a service (from the service provider), the service provider seeks recommendations from other peer services for the entity to determine its trustworthiness. All these recommendations then undergo a filtration mechanism in the outlier detection engine to determine dishonest recommendations. The recommendation evaluation model then computes the credibility of each honest recommendation. The model uses a weighted average (taking credibility as the weight) to determine the aggregated recommended trust value. The basic function of the policy analyzer is to process the request, i.e. to determine whether the entity requesting a service is permitted to do the requested action in presence of the computed recommended trust value and the policies defined for that service.

Fig. 1 Proposed model

3.1 Recommendation Request/Response Cycle

In the proposed model, a service provider seeks recommendations before allowing access to an unknown entity, to know what other services think about the prospect. To request a recommendation, the service provider creates a recommendation request message (R_REQST) and broadcasts it to all peer services in its vicinity. The service provider then waits for some time to receive recommendations. The peer services which have had interactions with the target entity may answer this R_REQST message with a recommendation response message (R_RESP). The recommendation provider sends a collection of attributes that define the significance of the interactions they had with the entity, rather than a single recommended trust value. The R_REQST and R_RESP messages are defined as:

Recommendation Request: R_REQST[SvcID, EntityID, ReqTime]
Recommendation Response: R_RESP[RecommID, EntityID, T_i, t, n_t, SS]

where, in the R_REQST message, SvcID represents the identity of the service provider requesting recommendations, EntityID is the identity of the target entity for which recommendations are requested (the prospect), and ReqTime is the current time when the recommendations are requested. In the R_RESP message, RecommID represents the identity of the recommender, T_i is the recommendation value provided by recommender i, t is the time when the recommendation value was recorded, n_t indicates the total number of interactions on which the recommendation is based and SS is the sensitivity level of the recommender. All these attributes are explained in detail in Sect. 3.3.
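For concreteness, a minimal Python sketch of the two message types is given below. The field names (svc_id, entity_id, trust_value, and so on) are illustrative renderings of the attributes listed above, not identifiers fixed by the paper.

```python
from dataclasses import dataclass

@dataclass
class RecommendationRequest:      # R_REQST
    svc_id: int                   # SvcID: identity of the requesting service provider
    entity_id: int                # EntityID: identity of the target entity (prospect)
    req_time: str                 # ReqTime: time at which recommendations are requested

@dataclass
class RecommendationResponse:     # R_RESP
    recomm_id: int                # RecommID: identity of the recommender
    entity_id: int                # EntityID: identity of the target entity
    trust_value: float            # T_i: recommended trust value in [0, 1]
    time: str                     # t: when the trust value was last recorded
    interactions: int             # n_t: number of interactions behind the value
    sensitivity: float            # SS: sensitivity level of the recommending service
```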


3.2 Outlier Detection Engine

The objective of indirect trust computation is to determine the trustworthiness of an unfamiliar entity from the set of recommendations in a way that narrows the gap between the derived recommendation and the actual trustworthiness of the entity. In the proposed framework, the outlier detection engine is responsible for detecting dishonest recommendations. The outlier detection engine defines a dishonest recommendation as an outlier that appears to be inconsistent with the other recommendations and has a low probability of originating from the same statistical distribution as the other recommendations in the data set. The importance of detecting outliers in data has long been recognized in the fields of databases and data mining. The deviation based outlier approach was first proposed by [24], in which an exact exception problem was discussed. In [10], a new method for deviation based outlier detection in large databases is presented; the algorithm locates the outliers by a dynamic programming method. In our model, we have extended this outlier detection technique to filter out dishonest recommendations. Our approach is based on the fact that if a recommendation is far from the median value of the given recommendation set and has a low frequency of occurrence, it is filtered out as a dishonest recommendation. Suppose that an entity X requests access to service A. If service A has no previous interaction history with X, it will broadcast the request for recommendations with respect to X. Let R denote the set of recommendations collected from the recommenders.

R = {r1, r2, r3, . . . , rn}

where n is the total number of recommendations. Since smart attackers can give recommendations with a little bias to go undetected, we divide the range of possible values of recommendations into 10 intervals (or bins), such that Rc1 comprises all recommendations that lie in the interval [0, 0.1], Rc2 comprises all recommendations in the interval [0.1, 0.2], and so on for (Rc3, . . . , Rc10). These bins define which recommendations we consider to be similar to each other, such that all recommendations that lie in the same bin are considered alike. After grouping the recommendations in their respective bins, we compute a histogram that shows a count fi of the recommendations falling in each bin. Let H be the histogram of the set of recommendation classes, where

H(R) = {〈Rc1, f1〉, 〈Rc2, f2〉, 〈Rc3, f3〉, 〈Rc4, f4〉, 〈Rc5, f5〉, 〈Rc6, f6〉, 〈Rc7, f7〉, 〈Rc8, f8〉, 〈Rc9, f9〉, 〈Rc10, f10〉}

where fi is the total number of recommendations falling in Rci. From this histogram H(R), we remove all the recommendation classes with zero frequencies and get the domain set (Rdomain) and the frequency set (f):

Rdomain = {Rc1, Rc2, Rc3, . . . , Rc10}
f = {f1, f2, f3, . . . , f10}.

Definition 1 The dissimilarity function DF(xi) is defined as:

DF(xi) = |xi − median(x)|^2 / fi    (1)

where xi is a recommendation class from a recommendation set x. Under the proposed approach, the dissimilarity value of xi is directly proportional to the Median Absolute Deviation (MAD), i.e. |xi − median(x)|. MAD is used for deviation detection because it is resistant to outliers. The presence of outliers does not change the value of the MAD. Moreover, the dissimilarity value of xi is inversely proportional to its frequency. In Eq. (1), the MAD term is divided by the frequency fi. In this way, if a recommendation is very far from the rest of the recommendations and its frequency of occurrence is also low, Eq. (1) will return a high value. Similarly, if a recommendation is close to the rest of the recommendations (i.e. similar to the others) and its frequency of occurrence is high, Eq. (1) will return a low value.

For each Rci a dissimilarity value is computed using Eq. (1), to represent its dissimilarity with the rest of the recommendations with regard to their frequency of occurrence. All the recommendation classes in Rdomain are then sorted with respect to their dissimilarity value DF(Rci) in descending order. The recommendation class at the top of the sorted Rdomain with respect to its DF(xj) is considered to be the most suspicious one to be filtered out as a dishonest recommendation. Once Rdomain is sorted, the next step is to determine the set of dishonest recommendation classes from the Rdomain set. To help find the set of dishonest recommendation classes from the set of recommendations in Rdomain, [24] has defined a measure called the Smoothing Factor (SF).

Definition 2 The SF for each SRdomain is computed as:

SF(SRdomain_j) = C(Rdomain − SRdomain_j) ∗ (DF(Rdomain) − DF(Rdomain − SRdomain_j))    (2)

where j = 1, 2, 3, . . . , m, and m is the total number of distinct elements in SRdomain. C is the cardinality function and is taken as the total frequency of the elements in the set {Rdomain − SRdomain_j}, while DF applied to a set denotes the total dissimilarity of the recommendation classes in that set. The SF indicates how much the dissimilarity can be reduced by removing a suspicious set of recommendations (SRdomain) from Rdomain.

Definition 3 The Dishonest Recommendation Domain (Rdomain_dishonest) is a subset of Rdomain that contributes most to the dissimilarity of Rdomain with the least number of recommendations, i.e. Rdomain_dishonest ⊆ Rdomain. We say that SRdomain_x is a set of dishonest recommendation classes with respect to SRdomain, C and DF(SRdomain_j) if

SF(SRdomain_x) ≥ SF(SRdomain_j),  x, j ∈ m

for all Rdomain, C and SRdomain_j.

In order to find the set of dishonest recommendations Rdomain_dishonest from Rdomain, the mechanism defined by the proposed approach is:

– Let Rck be the kth recommendation class of Rdomain and SRdomain be the set of suspicious recommendation classes from Rdomain, i.e. SRdomain ⊆ Rdomain.
– Initially, SRdomain is an empty set, SRdomain_0 = {}.
– Compute SF(SRdomain_k) for each SRdomain_k formed by taking the union of SRdomain_{k−1} and Rck:

SRdomain_k = SRdomain_{k−1} ∪ Rck    (3)

where k = 1, 2, 3, . . . , m − 1, and m is the number of distinct recommendation class values in the sorted Rdomain.
– The subset SRdomain_k with the largest SF(SRdomain_k) is considered as the set containing the dishonest recommendation classes.

After detecting the set Rdomain_dishonest, we remove all the recommendations that fall in the dishonest recommendation classes.
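The following Python sketch puts the whole deviation based filtering procedure of this section together (binning, Eq. (1), sorting by DF, and the smoothing factor of Eq. (2)). It is only an illustration of the mechanism: the choice of the bin's upper edge as the class value, the use of the lower median, and the tie handling are assumptions inferred from the worked example in Sect. 4, not details fixed by the paper.

```python
import math
from statistics import median_low

def to_class(r, n_bins=10):
    """Map a recommendation in [0, 1] to the representative value of its bin (upper edge)."""
    return max(1, min(n_bins, math.ceil(r * n_bins))) / n_bins

def total_df(classes):
    """Total dissimilarity of a set of (class value, frequency) pairs, built from Eq. (1)."""
    if not classes:
        return 0.0
    values = [v for v, f in classes for _ in range(f)]
    m = median_low(values)                        # lower median, robust to outliers
    return sum(abs(v - m) ** 2 / f for v, f in classes)

def filter_dishonest(recs, n_bins=10):
    """Deviation based filtering of dishonest recommendations (Sect. 3.2)."""
    freq = {}
    for r in recs:                                # histogram of recommendation classes
        c = to_class(r, n_bins)
        freq[c] = freq.get(c, 0) + 1
    classes = list(freq.items())                  # Rdomain with non-zero frequencies
    df_all = total_df(classes)
    m = median_low([v for v, f in classes for _ in range(f)])
    # rank classes by their individual dissimilarity DF, most suspicious first
    ranked = sorted(classes, key=lambda c: abs(c[0] - m) ** 2 / c[1], reverse=True)
    best_sf, best_k = float("-inf"), 0
    for k in range(1, len(ranked)):               # grow the suspicious prefix SRdomain_k
        remaining = ranked[k:]                    # Rdomain - SRdomain_k
        cardinality = sum(f for _, f in remaining)
        sf = cardinality * (df_all - total_df(remaining))   # smoothing factor, Eq. (2)
        if sf > best_sf:
            best_sf, best_k = sf, k
    dishonest_classes = {v for v, _ in ranked[:best_k]}
    return [r for r in recs if to_class(r, n_bins) not in dishonest_classes]
```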


3.3 Recommendation Evaluation

After filtering the dishonest recommendations, the next step is to evaluate the effectiveness of each honest recommendation. Each recommender provides, as its recommendation, a subjective opinion based on its experience with the target entity. The gathered recommendation therefore represents a specific, discerning interpretation of the experience that the recommender had with the entity. These subjective judgments are a major source of uncertainty in indirect trust computational models as they depend heavily upon each recommender's experience, intuition and other subjective factors. Recommendations can also be incorrect due to unintentional errors, misinterpretations and misunderstandings. Therefore, in the proposed model a recommendation evaluation module is introduced to evaluate the credibility of the provided recommendation. The proposed approach determines how strongly a recommendation requestor can believe in a valid recommendation and how much influence it should have in determining the trustworthiness. The proposed framework introduces fuzzy logic to evaluate the credibility of the recommendation, which eliminates subjectivity from the valid recommendation.

3.3.1 Credibility

In our proposed model, we use credibility as a parameter to determine the capability of a recommendation in delivering the correct opinion about the trustworthiness of the target entity. The inspiration for using credibility to determine the quality of each recommendation comes from the intuitive notion that wisdom comes with experience but decays with the passage of time if not invigorated. It seems natural that more weight should be given to the recommendation made by the recommender who is the most expert and has recent experience with the target entity. In the literature, credibility is often used to rate the recommender in order to ascertain the influence of its recommendation. However, if the service provider has no prior knowledge of the recommender, it cannot measure the credibility of the recommender. In the proposed approach, the concept of measuring the credibility of the recommendation rather than the credibility of the recommender resolves this issue of assessing a recommendation without any prior knowledge of the recommendation provider.

3.3.2 Factors for Measuring Credibility

The credibility of a recommendation is established by taking into account (i) the number of interactions on which the recommendation is based (Experience (E)), (ii) the time when the trust value was last updated (Time based experience (TBE)) and (iii) the sensitivity of the service offering the recommendation (SS).

TBE: It represents the time the recommender last interacted with the target entity and calculated its trust in the target on the basis of its current experience and past behavior. The motivation for introducing time based experience in credibility computation is to integrate the aspect of fading experience. As much as change is about adapting to the new, it is about detaching from the old. The study of human nature shows that all relations exhibit a liability of newness in which the rate of decay slows over time. The indirect trust computation mechanism is based on the human notion and uses the history of interactions for trust value computation. Therefore, an old experience with the target entity should influence the current opinion about the target entity, but newly made experiences should weigh more in decision making. To compute TBE, the recommendation requesting service extracts the time of interaction (t) from each R_RESP message. Let t and tc denote the interaction time and the current time respectively; then the TBE for the target entity is computed as:

TBE = α(1 − β)^(Δt/α)

where Δt = tc − t, and α and β are adjustable positive constants that can be tuned to define the rate of decay.

E: It represents the total number of interactions that a service provider has had with an entity in its lifetime. It is the measure of an entity's activity count with the service provider. The service provider that has more interactions with the target entity is taken as more experienced than the one that has a smaller number of interactions. Accordingly, the recommendation provided by a service provider with a high interaction count is more credible. In the proposed approach, the recommendation requesting service extracts the total number of interactions (n_t) from the R_RESP message. Since credibility is in the continuous range [0, 1], we require a normalization function that can limit the number of interactions to this range. The function we have used to normalize the interaction value is given as:

E = (n_t − n_t^min) / (n_t^max − n_t^min),  where 0 ≤ E ≤ 1

where n_t^max and n_t^min represent the maximum and minimum number of interactions, n_t^min = 1 and 1 ≤ n_t^max ≤ ∞.

SS: The pervasive environment is envisioned as a physical space rich in devices and services that is capable of interacting with people, the physical environment and external networked services. We categorize the services on the basis of the type of service they provide. For example, a simple scan service has less sensitivity than a file service. Accordingly, the recommendation provided by a service with a high sensitivity level is given more weight than the recommendation from a service with a low sensitivity level. The spur behind this idea is that a malicious service requestor can engage in a large number of interactions with a service with a low sensitivity level in order to falsely elevate its trust value. However, the use of experience combined with the sensitivity of the recommender can be effective in detecting such malevolent intentions.
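A small sketch of how the two quantitative factors could be extracted from an R_RESP message is shown below. The constants α = 1.1, β = 0.1, n_t^min = 1 and n_t^max = 50 are the configuration values of Table 2; treating Δt in whole days is an assumption that matches the worked example in Sect. 4, and the timestamp format string is illustrative.

```python
from datetime import datetime

ALPHA, BETA = 1.1, 0.1          # decay constants (Table 2)
N_MIN, N_MAX = 1, 50            # minimum / maximum number of interactions (Table 2)
FMT = "%b %d %Y, %H:%M:%S"      # assumed timestamp format, e.g. "May 20 2012, 10:20:30"

def tbe(t: str, t_current: str) -> float:
    """Time based experience: TBE = alpha * (1 - beta) ** (dt / alpha), with dt in days."""
    dt = (datetime.strptime(t_current, FMT) - datetime.strptime(t, FMT)).days
    return ALPHA * (1 - BETA) ** (dt / ALPHA)

def experience(n_t: int) -> float:
    """Normalized experience E = (n_t - n_min) / (n_max - n_min), clipped to [0, 1]."""
    e = (n_t - N_MIN) / (N_MAX - N_MIN)
    return min(max(e, 0.0), 1.0)

# Example with the values of recommender R1 from Table 3 (30 interactions, about 10 days old):
print(round(experience(30), 2), round(tbe("May 20 2012, 10:20:30", "May 30 2012, 10:20:30"), 3))
```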

3.3.3 Fuzzy Based Credibility Evaluation

Fuzzy logic is a multi-valued logic that allows values to be defined between conventional evaluations, making an attempt to apply a more human-like way of thinking in the programming of computers. Since the measure of credibility involves vagueness, ambiguity and subjectivity, it cannot be measured as a crisp value. The proposed approach takes benefit of the distinct advantages of fuzzy logic and introduces fuzzy inference for evaluating the credibility of recommendations. Fuzzy inference is the process of formulating the mapping from a given input to an output using fuzzy logic. The mapping then provides a basis from which decisions can be made, or patterns discerned. In the proposed approach, E, TBE and SS are fed as inputs into the fuzzy inference engine. The output of the fuzzy inference engine is the measure of credibility. There are three main steps involved in the fuzzy inference process: fuzzification, rule evaluation, and defuzzification (Fig. 2).

Fuzzification The first step of the proposed approach is fuzzification, which transforms a real world variable into a fuzzy set using a fuzzifier. Let the three input variables E, TBE and SS be represented in vector notation as:

x* = [x*_1  x*_2  x*_3] = [E  TBE  SS]    (4)


Fig. 2 Fuzzy based credibility evaluation model

where x* ∈ R^3 represents a real valued point. Each input variable is defined by three fuzzy sets, i.e. Low (L), Medium (M) and High (H). Each fuzzy set is characterized by a Membership Function (MF) which associates with each point in x a real number in the interval [0, 1]. In the proposed model we define gaussian MFs μ_{TBE^d}(x_1), μ_{E^d}(x_2) and μ_{SS^d}(x_3) for the inputs as:

μ_{TBE^d}(x_1) = exp(−((x_1 − x_1^d) / σ_1^d)^2)    (5)

μ_{E^d}(x_2) = exp(−((x_2 − x_2^d) / σ_2^d)^2)    (6)

μ_{SS^d}(x_3) = exp(−((x_3 − x_3^d) / σ_3^d)^2)    (7)

where d = {L, M, H} represents the fuzzy sets, and x_1^d, x_2^d, x_3^d and σ_1^d, σ_2^d, σ_3^d are constant parameters representing the means and variances of the input fuzzy sets. In the proposed model, a gaussian fuzzifier is used to map x* ∈ R^3 into a fuzzy set X.

μ_X(x_1, x_2, x_3) = exp(−((x_1 − x_1^*) / a_1)^2) ⋆ exp(−((x_2 − x_2^*) / a_2)^2) ⋆ exp(−((x_3 − x_3^*) / a_3)^2)    (8)

where a_1, a_2 and a_3 are positive parameters taken as a_1 = 2 max_{d∈{L,M,H}} σ_1^d, a_2 = 2 max_{d∈{L,M,H}} σ_2^d and a_3 = 2 max_{d∈{L,M,H}} σ_3^d. The t-norm operator ⋆ is taken as the algebraic product. In the proposed approach the output variable consists of five fuzzy sets (Cr^g), i.e. VL (Very Low), L, M, H and VH (Very High). The gaussian output MFs for the five fuzzy sets are defined as:

μ_{Cr^g}(y_m) = exp(−((y_m − y^g) / σ^g)^2)    (9)

where g = {VL, L, M, H, VH}, and y^g and σ^g are constant parameters representing the means and variances of the output fuzzy sets.


Rule Evaluation: Fuzzy inference is the calculation of fuzzy relations based on logical rules in a fuzzy rule bank. In this approach we choose to use a Product Inference Engine (PIE) to process the fuzzy inputs. The fuzzy rule bank for measuring the credibility of a recommendation is based on the following axioms:

A-1: Credibility (Cr) is high if it is computed for a highly sensitive service (SS) which has a high number of interactions (E) with the target entity and most of them are recent interactions (TBE).
A-2: Credibility (Cr) is low if it is provided by a service with low sensitivity (SS) having a smaller number of interactions (E) with the target entity, most of them in the distant past (TBE).

Based on these axioms, twenty-seven fuzzy inference rules based on three input membership functions and five output membership functions are proposed for measuring credibility (Table 1). The PIE is used to process the fuzzy inputs based on the fuzzy and linguistic rules. The PIE structure consists of individual rule based inference with union combination, Mamdani's product implication, the algebraic product for the t-norm and the max operator for the s-norm [25]. The PIE is defined as:

μ_{Cr'}(y_m) = max_{l=1..M} [ sup_{x_1,x_2,x_3} μ_X(x_1, x_2, x_3) μ_{TBE^l}(x_1) μ_{E^l}(x_2) μ_{SS^l}(x_3) μ_{Cr^l}(y_m) ]    (10)

where the fuzzy rules are denoted by l and, for the proposed model, l = 1, 2, . . . , 27. By substituting μ_X(x_1, x_2, x_3), μ_{TBE^l}(x_1), μ_{E^l}(x_2), μ_{SS^l}(x_3) and μ_{Cr^l}(y_m) and simplifying, the above equation reduces to:

μ_{Cr'}(y_m) = max_{l=1..27} [ exp(−((x_{1P}^l − x_1^l) / σ_1^l)^2) · exp(−((x_{1P}^l − x_1^*) / a_1)^2) · exp(−((x_{2P}^l − x_2^l) / σ_2^l)^2) · exp(−((x_{2P}^l − x_2^*) / a_2)^2) · exp(−((x_{3P}^l − x_3^l) / σ_3^l)^2) · exp(−((x_{3P}^l − x_3^*) / a_3)^2) · μ_{Cr^l}(y_m) ]    (11)

where

x_{1P}^l = (a_1^2 x_1^l + (σ_1^l)^2 x_1^*) / (a_1^2 + (σ_1^l)^2),  x_{2P}^l = (a_2^2 x_2^l + (σ_2^l)^2 x_2^*) / (a_2^2 + (σ_2^l)^2)  and  x_{3P}^l = (a_3^2 x_3^l + (σ_3^l)^2 x_3^*) / (a_3^2 + (σ_3^l)^2)

Defuzzification It is the process of converting fuzzy outputs into real world outputs. After computation of the fuzzy set through the PIE by Eq. (11), the defuzzifier maps this output of the PIE to a crisp point y*, the best single point representing μ_{Cr'}(y_m). The center of gravity defuzzifier is often used to specify y* by computing the center of the area covered by the membership function of Cr. However, due to its computational complexity, the center average defuzzifier has been proposed in the literature, which can approximate y* as:


Table 1 Rules for fuzzy trust evaluation

Ru(1): IF E is L AND TBE is L AND S is L THEN Cr is VL.
Ru(2): IF E is M AND TBE is L AND S is L THEN Cr is L.
Ru(3): IF E is L AND TBE is M AND S is L THEN Cr is L.
Ru(4): IF E is L AND TBE is L AND S is M THEN Cr is L.
Ru(5): IF E is M AND TBE is M AND S is L THEN Cr is L.
Ru(6): IF E is L AND TBE is M AND S is M THEN Cr is L.
Ru(7): IF E is M AND TBE is L AND S is M THEN Cr is L.
Ru(8): IF E is H AND TBE is L AND S is L THEN Cr is L.
Ru(9): IF E is L AND TBE is H AND S is L THEN Cr is L.
Ru(10): IF E is L AND TBE is L AND S is H THEN Cr is M.
Ru(11): IF E is L AND TBE is M AND S is H THEN Cr is M.
Ru(12): IF E is L AND TBE is H AND S is M THEN Cr is M.
Ru(13): IF E is M AND TBE is L AND S is H THEN Cr is M.
Ru(14): IF E is M AND TBE is H AND S is L THEN Cr is M.
Ru(15): IF E is H AND TBE is L AND S is M THEN Cr is M.
Ru(16): IF E is H AND TBE is M AND S is L THEN Cr is M.
Ru(17): IF E is M AND TBE is M AND S is M THEN Cr is H.
Ru(18): IF E is H AND TBE is H AND S is L THEN Cr is H.
Ru(19): IF E is L AND TBE is H AND S is H THEN Cr is H.
Ru(20): IF E is H AND TBE is L AND S is H THEN Cr is H.
Ru(21): IF E is H AND TBE is M AND S is M THEN Cr is H.
Ru(22): IF E is M AND TBE is M AND S is H THEN Cr is H.
Ru(23): IF E is M AND TBE is H AND S is M THEN Cr is H.
Ru(24): IF E is M AND TBE is H AND S is H THEN Cr is H.
Ru(25): IF E is H AND TBE is H AND S is M THEN Cr is H.
Ru(26): IF E is H AND TBE is M AND S is H THEN Cr is H.
Ru(27): IF E is H AND TBE is H AND S is H THEN Cr is VH.

Cr = y* = ( Σ_{g=1}^{5} y^g w_g ) / ( Σ_{g=1}^{5} w_g )    (12)

where y^g represents the center of the gth output fuzzy set and w_g is its height.
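A simplified sketch of the whole credibility computation is given below, using the rule bank of Table 1, product firing strengths (cf. the Ru1 computation in Sect. 4.2.2) and center average defuzzification as in Eq. (12). It replaces the full product inference of Eq. (11) with a singleton-fuzzifier approximation; the input membership parameters (centers 0, 0.5, 1 with σ = 0.5) are assumptions, while the output centers 0, 0.25, 0.5, 0.75, 1 follow the defuzzification step of the worked example, so the resulting values will not coincide exactly with those reported in Table 8.

```python
import math

# Assumed input fuzzy-set centres (L, M, H) and spread; the paper does not list them.
INPUT_CENTRES = {"L": 0.0, "M": 0.5, "H": 1.0}
SIGMA = 0.5
# Output centres y^g for VL, L, M, H, VH, as used in the defuzzification example (Sect. 4.2.2).
OUTPUT_CENTRES = {"VL": 0.0, "L": 0.25, "M": 0.5, "H": 0.75, "VH": 1.0}

# Rule bank of Table 1: (E, TBE, S) -> Cr
RULES = {
    ("L", "L", "L"): "VL", ("M", "L", "L"): "L", ("L", "M", "L"): "L", ("L", "L", "M"): "L",
    ("M", "M", "L"): "L",  ("L", "M", "M"): "L", ("M", "L", "M"): "L", ("H", "L", "L"): "L",
    ("L", "H", "L"): "L",  ("L", "L", "H"): "M", ("L", "M", "H"): "M", ("L", "H", "M"): "M",
    ("M", "L", "H"): "M",  ("M", "H", "L"): "M", ("H", "L", "M"): "M", ("H", "M", "L"): "M",
    ("M", "M", "M"): "H",  ("H", "H", "L"): "H", ("L", "H", "H"): "H", ("H", "L", "H"): "H",
    ("H", "M", "M"): "H",  ("M", "M", "H"): "H", ("M", "H", "M"): "H", ("M", "H", "H"): "H",
    ("H", "H", "M"): "H",  ("H", "M", "H"): "H", ("H", "H", "H"): "VH",
}

def mu(x, centre, sigma=SIGMA):
    """Gaussian membership function, Eqs. (5)-(7)."""
    return math.exp(-(((x - centre) / sigma) ** 2))

def credibility(E, TBE, SS):
    """Fire all 27 rules, group by consequent with max, then center-average defuzzify (Eq. 12)."""
    heights = {g: 0.0 for g in OUTPUT_CENTRES}
    for (e_set, t_set, s_set), out_set in RULES.items():
        # firing strength = product (algebraic t-norm) of the three input memberships
        w = mu(E, INPUT_CENTRES[e_set]) * mu(TBE, INPUT_CENTRES[t_set]) * mu(SS, INPUT_CENTRES[s_set])
        heights[out_set] = max(heights[out_set], w)
    num = sum(OUTPUT_CENTRES[g] * w for g, w in heights.items())
    den = sum(heights.values())
    return num / den if den else 0.0

# e.g. credibility(0.6, 0.422, 0.11) for the E, TBE, SS of recommendation R1 (Table 7)
```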

3.4 Recommended Trust Value

After computing the credibility of each recommendation, the recommendations are aggregated by the service provider to compute a single recommended trust value (T_recom) for the entity. The computed credibility factor of each recommendation is used as a weight to determine the influence that the recommendation will have in the aggregation process. Therefore a recommendation with higher credibility has more weight in the aggregation process than one with lower credibility. The service provider computes T_recom using each recommendation's credibility (Cr_i) and the recommendation received (T_i) as:


Table 2 Parameters for example

Parameter Value

Current time tc “30 May 2012, 10:20:30:45”

Minimum number of interactions nmin 1

Maximum number of interactions nmax 50

α 1.1

β 0.1

T_recom =
  undefined                                          if i = 0
  0                                                  if i > 0 and Σ_{i=1}^{n} Cr_i = 0
  (Σ_{i=1}^{n} Cr_i · T_i) / (Σ_{i=1}^{n} Cr_i)      if i > 0 and Σ_{i=1}^{n} Cr_i ≠ 0        (13)

where i denotes the number of recommenders. According to Eq. (13), if the recommendation requestor receives no recommendations for the target entity, i.e. i = 0, the recommended trust value T_recom is set to undefined. This usually happens when the service requestor is a new service and has had no previous interaction with any other service in the environment. However, if the cumulative credibility of all the recommendations is zero, then T_recom is set to 0. The credibility of a service can be 0 if it has interacted with the entity only a few times, in the very distant past, and has the lowest sensitivity level.
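The aggregation of Eq. (13) amounts to a credibility-weighted average, as in the following sketch. The usage example re-checks the worked computation of Sect. 4.2.3 using the (credibility, trust value) pairs of Table 8.

```python
def recommended_trust(pairs):
    """Aggregate (Cr_i, T_i) pairs into T_recom according to Eq. (13)."""
    if not pairs:
        return None                              # undefined: no recommendations received
    total_cr = sum(cr for cr, _ in pairs)
    if total_cr == 0:
        return 0.0                               # all credibilities are zero
    return sum(cr * t for cr, t in pairs) / total_cr

# Worked example (Sect. 4.2.3 / Table 8): the seven honest recommendations
pairs = [(0.367, 0.23), (0.475, 0.19), (0.54, 0.28), (0.805, 0.11),
         (0.291, 0.09), (0.613, 0.26), (0.448, 0.14)]
print(round(recommended_trust(pairs), 3))        # -> 0.187
```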

4 An Illustrative Example

To illustrate the working of the proposed framework, this section provides an example that goes through each step of the proposed approach. Let A be a service provider in a pervasive environment that is using the proposed model for the detection of malicious recommendations and the computation of the recommended trust value of an unknown entity.

4.1 Configuration of Model

In order to apply the model, A first configures the model parameters to implement its own security policies. The proposed model requires these parameters (Table 2) during the computation of the credibility of a recommendation.

4.2 Scenario

In this scenario, we assume that X is an unknown entity for service provider A and wants to request a service. Since A has no previous interaction with X, in order to determine the trustworthiness of X and decide what level of access it should give to X, A will request recommendations from its peer services that have previous interactions with X. Table 3 shows the R_REQST message sent by A and the R_RESP messages received from the peer services. The recommendations received from the peer services are then processed by the proposed model. The steps involved in the computation of the recommended trust value using the proposed model are elaborated below:


Table 3 Recommendation request and response messages for scenario

Recommendation Request

R_REQST[SvcID: 1023, EntityID: 8745, ReqTime: "May 30 2012, 10:20:30:45"]

Recommendation Responses

R1 R_RESP[RecommID: 1067, EntityID: 8745, 0.23, "May 20 2012, 10:20:30:47", 30, 0.11]
R2 R_RESP[RecommID: 1238, EntityID: 8745, 0.67, "May 25 2012, 12:45:21:41", 45, 0.7]
R3 R_RESP[RecommID: 2154, EntityID: 8745, 0.19, "May 19 2012, 08:16:38:15", 17, 0.6]
R4 R_RESP[RecommID: 3152, EntityID: 8745, 0.28, "Apr 30 2012, 18:40:39:13", 37, 0.8]
R5 R_RESP[RecommID: 1863, EntityID: 8745, 0.78, "May 17 2012, 13:29:12:55", 45, 0.9]
R7 R_RESP[RecommID: 2041, EntityID: 8745, 0.09, "May 06 2012, 11:34:04:42", 21, 0.2]
R8 R_RESP[RecommID: 1009, EntityID: 8745, 0.95, "Apr 20 2012, 15:20:59:08", 36, 0.4]
R9 R_RESP[RecommID: 7130, EntityID: 8745, 0.26, "May 26 2012, 06:33:09:45", 29, 0.5]
R10 R_RESP[RecommID: 5308, EntityID: 8745, 0.14, "May 09 2012, 19:45:01:31", 31, 0.7]

Table 4 Frequency distribution of recommendations

Rci  Rec value rci  Frequency fi

Rc1 0.1 1

Rc2 0.2 3

Rc3 0.3 3

Rc4 0.4 0

Rc5 0.5 0

Rc6 0.6 0

Rc7 0.7 1

Rc8 0.8 1

Rc9 0.9 0

Rc10 1.0 1

4.2.1 Outlier Detection

After receiving the recommendations, they are passed to the outlier detection engine for filtering dishonest recommendations using the deviation based detection mechanism.

Step 1 All the recommendations received from the peer services are grouped into their respective bins as per their recommendation values (Ti). Table 4 shows how the received recommendations are grouped in their respective classes, thus forming Rdomain.

Step 2 After arranging the recommendations in their respective recommendation classes Rci, we remove the recommendation classes with zero frequencies and calculate DF(Rci) for each recommendation class using Eq. (1). Table 5 shows the sorted list of recommendation classes with respect to their dissimilarity values.

Step 3 The next step is the computation of the SF for each SRdomain derived from the sorted Rdomain. In Table 5 the recommendation class Rc10 has the highest deviation value, so it is taken as a suspicious recommendation class, added to the suspicious recommendation domain (SRdomain_1), and its SF is calculated. Subsequently, the engine forms SRdomain_2 by taking the union of SRdomain_1 and the next recommendation class in the sorted list, i.e. Rc8, and calculates the SF for SRdomain_2 using Eq. (2). This process


Table 5 Recommendation classes sorted with respect to their DF

Rci  Rec value rci  Frequency fi  DF(Rci)

Rc10 1.0 1 0.49

Rc8 0.8 1 0.25

Rc7 0.7 1 0.16

Rc1 0.1 1 0.04

Rc2 0.2 3 0.003

Rc3 0.3 3 0.0

Table 6 Smoothing factor computation

SRdomain  Rdomain − SRdomain  DF(Rdomain − SRdomain)  SF

{1.0}  {0.8, 0.7, 0.1, 0.2, 0.3}  0.453  4.41

{1.0, 0.8}  {0.7, 0.1, 0.2, 0.3}  0.263  5.439

{1.0, 0.8, 0.7}  {0.1, 0.2, 0.3}  0.013  6.51

{1.0, 0.8, 0.7, 0.1}  {0.2, 0.3}  0.003  5.64

{1.0, 0.8, 0.7, 0.1, 0.2}  {0.3}  0.0  2.29

is repeated for each Rci of Rdomain where i < m and m = 6 (the total number of classes in Rdomain). Table 6 shows that the SF of SRdomain_3 has the highest value. Therefore, the recommendation classes {1.0, 0.8, 0.7} in SRdomain_3 are considered as dishonest recommendation classes and are removed from Rdomain.
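Running the filtering sketch from Sect. 3.2 on the nine recommendation values listed in Table 3 (the message from recommender R6 is not reproduced in that table and is therefore omitted here) identifies the same dishonest classes {1.0, 0.8, 0.7} and discards the corresponding recommendations:

```python
# Recommendation values T_i taken from Table 3 (R6 omitted, as it is not listed there)
recs = [0.23, 0.67, 0.19, 0.28, 0.78, 0.09, 0.95, 0.26, 0.14]
print(filter_dishonest(recs))   # 0.67, 0.78 and 0.95 are filtered out as dishonest
```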

4.2.2 Recommendation Evaluation

Let Rv represent the set of recommendations after removing all the dishonest recommendations identified in Step 3 of the outlier detection engine. The subsequent steps illustrate the process of computing the recommended trust value by the recommendation evaluation engine.

Step 1 Since credibility and the recommended trust value are defined in the interval [0, 1], we need to scale the number of interactions and the time into the interval [0, 1]. For this, each valid recommendation from Rv is taken and its TBE and E are computed using the equations defined in the above section. Table 7 shows the E, TBE and SS calculation for all the recommendations in Rv.

Step 2 For each honest recommendation, the calculated E, TBE and SS are used to compute its credibility using fuzzy logic. The first step is to take the three inputs (E, TBE and SS) and determine the degree to which they belong to each of the appropriate fuzzy sets through the gaussian MFs using Eq. (8). The inputs are taken as crisp numerical values limited to the interval [0, 1].

Step 3 After the inputs are fuzzified, we know the degree to which each part of the input is satisfied for each rule. The proposed model is built on 27 rules, and each rule resolves the inputs into three fuzzy linguistic sets, i.e. L, M and H. Each rule is taken and the logical AND operation (taken as the algebraic product) is applied on the fuzzified inputs. The calculation of Ru1 for recommendation R1 is:


Table 7 Computation of TBE, E and SS

Ri  nt  ti  E  TBE  SS

R1  30  May 20 2012, 10:20:30:45  0.6  0.422  0.11

R3  17  May 19 2012, 08:16:38:15  0.34  0.383  0.6

R4  37  Apr 30 2012, 18:40:39:13  0.74  0.062  0.8

R6  45  May 27 2012, 21:58:30:27  0.9  0.825  0.9

R7  21  May 06 2012, 11:34:04:42  0.42  0.11  0.2

R9  29  May 26 2012, 06:33:09:45  0.58  0.749  0.5

R10  31  May 09 2012, 19:45:01:31  0.62  0.147  0.7

Ru1 = prod{μ_{E^L}, μ_{TBE^L}, μ_{SS^L}} = prod{0.226, 0.482, 0.951} = 0.1036

Step 4 Since the model uses a gaussian fuzzifier, by solving Eq. (11) for recommender R1 we get:

μ_{Cr'}(y_m) = max_{l=1..27} [0.1036 μ_{Cr^VL}(y_m), 0.439 μ_{Cr^L}(y_m), 0.208 μ_{Cr^L}(y_m), . . . , 0.008 μ_{Cr^M}(y_m), 0.028 μ_{Cr^M}(y_m), . . . , 0.122 μ_{Cr^H}(y_m), 0.002 μ_{Cr^H}(y_m), . . . , 0.005 μ_{Cr^VH}(y_m)]    (14)

Next, in the above equation the rules are grouped according to the output fuzzy set (VL, L, M, H, VH) and from each group the maximum value is selected to represent the impact of that group.

μ_{Cr'}(y_m) = [0.103 μ_{Cr^VL}(y_m), 0.884 μ_{Cr^L}(y_m), 0.309 μ_{Cr^M}(y_m), 0.256 μ_{Cr^H}(y_m), 0.0048 μ_{Cr^VH}(y_m)]    (15)

Figure 3 depicts the working of the PIE in five steps for the data set provided by recommender R1.

Step 5 The output of the PIE is an aggregated output fuzzy set encompassing a range of output values, and so it must be defuzzified in order to get a single output value (Fig. 4). For example, the credibility calculation for R1 using Eqs. (12) and (15) is shown below.

Cr = (0.103 · 0 + 0.884 · 0.25 + 0.309 · 0.5 + 0.256 · 0.75 + 0.0048 · 1) / (0.103 + 0.884 + 0.309 + 0.256 + 0.0048) = 0.367

The recommendation evaluation process is then repeated for each recommendation in the set Rv to compute its credibility. This credibility acts as a weight to scale the effect of the respective recommendation in the evaluation of the recommended trust value. Table 8 shows the computed credibility for each recommendation in the Rv set.


Fig. 3 Credibility evaluation of the recommendation provided by R1

Fig. 4 Defuzzification

4.2.3 Recommended Trust Value

Using the recommended value (Ti) of each recommendation in the set Rv and the credibility of each recommendation as computed in the credibility computation process, the recommended trust value (T_recom) of the entity is evaluated using Eq. (13) as:


Table 8 Evaluated credibility for each recommendation

Ri  E  TBE  SS  Cr

R1 0.6 0.422 0.11 0.367

R3 0.34 0.383 0.6 0.475

R4 0.74 0.062 0.8 0.54

R6 0.9 0.825 0.9 0.805

R7 0.42 0.11 0.2 0.291

R9 0.58 0.749 0.5 0.613

R10 0.62 0.147 0.7 0.448

T_recom = (0.367 · 0.23 + 0.475 · 0.19 + 0.54 · 0.28 + 0.805 · 0.11 + 0.291 · 0.09 + 0.613 · 0.26 + 0.448 · 0.14) / (0.367 + 0.475 + 0.54 + 0.805 + 0.291 + 0.613 + 0.448) = 0.187

5 Experimental Validation

5.1 Effectiveness of Outlier Detection Engine

To illustrate the effectiveness of the proposed deviation based approach for detecting dishonest recommendations, we have compared our approach with other approaches proposed in the literature based on the Quartile method [20], the Control Limit Chart [22] and Iterative Filtering [23] for detecting dishonest recommendations in indirect trust computation. A set of experiments has been carried out by applying the approaches for detecting dishonest recommendations in two different scenarios. For the first set of experiments, we assume that a certain percentage of the recommenders is dishonest and launches a bad mouthing attack by giving recommendations between 0.1 and 0.3. For the second set of experiments, the dishonest recommenders are assumed to give a high recommendation value between 0.8 and 1.0, thus launching a ballot stuffing attack. In both sets of experiments the percentage of dishonest recommenders is varied from 10 to 45 %. For comparison, we have used the Matthews Correlation Coefficient (MCC) to measure the accuracy of all four approaches in detecting dishonest recommendations [26]. The MCC is defined as a measure of the quality of binary (two-class) classifications. It takes into account true and false positives and negatives. The formula used for the MCC calculation is

MCC = (TP · TN − FP · FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN))

where TP is the number of true positives, TN the number of true negatives, FP the number of false positives and FN the number of false negatives. MCC returns a value between −1 and 1 (1 means perfect filtering, 0 indicates no better than random filtering and −1 represents total inverse filtering). To avoid infinite results while calculating the MCC, it is assumed that if any of the four sums (TP + FP, TP + FN, TN + FP, TN + FN) in the denominator is zero, the denominator is arbitrarily set to one. Figure 5 shows the comparison of the MCC values of the proposed approach with the different models for a varying percentage of dishonest recommendations (from 10 to 45 %). According to the results, the proposed approach can effectively detect dishonest recommendations, as is evident from the constant MCC of +1 for both sets of experiments. Whereas, in [22], in case of the bad mouthing attack (Fig. 5a) the MCC increases slowly as the percentage


Fig. 5 Filtering accuracy in terms of MCC (MCC versus the percentage of dishonest recommenders, for the Proposed Approach, Quartile, Iterative and UCL/LCL methods). a Bad mouthing attack. b Ballot stuffing attack

of dishonest recommenders increases from 10 to 30 %, but then decreases promptly to 0 as the percentage of dishonest recommenders increases from 30 to 45 %. The same behavior is observed in case of the ballot stuffing attack (Fig. 5b). In [23], when the percentage of dishonest recommenders increases to 40 % the MCC rate starts to decrease as well. Thus all three approaches ([22,23] and [20]) fail to achieve perfect filtering of the dishonest recommendations as the percentage of dishonest recommenders increases.
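The MCC score used above, including the convention of setting a vanishing denominator to one, is straightforward to compute; a minimal helper is sketched below.

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews Correlation Coefficient with the zero-denominator convention used in the text."""
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / (den if den else 1.0)
```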

5.2 Effectiveness of Proposed Model in Indirect Trust Computation

To demonstrate the effectiveness of the proposed model, a software simulation has been carried out. The objective of the simulation is to determine whether the use of the deviation based dishonest recommendation approach along with the fuzzy based credibility value enables the service provider to effectively determine the trustworthiness of the target entity. For comparison purposes, we classify the recommendation models into four major categories as explained in Table 9. In the experimental evaluation, we simulate a multi-agent environment where agents (offering and requesting services) are continuously joining and leaving the environment. For clarity, we categorize the agents into two groups, i.e. agents offering services as Service Provider Agents (SPA) and agents consuming services as Service Requesting Agents (SRA). We conduct a series of experiments for a new SPA to evaluate the trustworthiness of an unknown SRA by requesting recommendations from the other SPAs in the environment. All SPAs can also act as Recommending Agents (RA) for other SPAs.


Table 9 Categorization of recommendation models for experimental validation

Category | Dishonest recommendation detection | Recommendation evaluation
Average ([8] and [9]) | NA | NA
Reputation ([12,15,16]) | NA | Reputation of the recommender
Malicious recommendation filtering ([20–23]) | Control Limit Charts, Quartile, Iterative Filtering | NA
Proposed model | Deviation based filtering | Credibility of recommendation

The RAs give recommendations, in the continuous range [0, 1], for a given SRA on the request of an SPA. An RA can either be honest or dishonest depending on the trustworthiness of its recommendation. An honest RA truthfully provides a recommendation based on its personal experience, whereas a dishonest RA insinuates a true experience into a high, low or erratic recommendation with malicious intent. Each SPA and RA is assumed to exchange recommendations in the R_REQST and R_RESP format described by the proposed approach. To define the credibility of the recommendations, each recommendation is randomly assigned one of the types VL [0, 0.2], L [0.2, 0.4], M [0.4, 0.6], H [0.6, 0.8] and VH [0.8, 1.0].

The environment is initialized with a set number of honest (N_honest_RA) and dishonest (N_dishonest_RA) recommenders. The simulation is run in steps, the total number of which is defined by N_STEPS. In each step, the percentage of dishonest RAs is progressively increased. During each step, the SPA broadcasts an R_REQST which is responded to by a randomly chosen set of RAs. After each simulation run, the SPA computes the recommended trust value of the SRA on the basis of the four approaches described in Table 9. To analyze the effectiveness of the proposed approach, two inherent attack scenarios (bad mouthing and ballot stuffing) for recommendation models have been implemented in the above defined simulation environment.

5.2.1 Attack Scenario 1

This scenario is defined to examine the performance of the proposed model when the intent of the dishonest recommenders is to launch a bad mouthing attack against the SRA in order to decrease its trustworthiness. The environmental variables for the scenario are presented in Table 10. In this scenario, an SPA gathers recommendations from the RAs about an unknown SRA. It is assumed that the actual trust value of the SRA is 0.8. At the initial step of the simulation, the environment has 10 % dishonest RAs who attempt to launch a bad mouthing attack against the SRA by providing low recommended trust values (in the range [0, 0.3]). After each of the seven steps, the percentage of dishonest RAs is increased by 5 %. Figure 6a shows the comparison of the recommended trust values evaluated by an approach from each category mentioned in Table 9.
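A compact sketch of how such a simulation step can be driven is shown below for Scenario 1. It reuses the filter_dishonest sketch from Sect. 3.2 and, for brevity, compares only plain averaging with filtered averaging (the credibility weighting and the other two categories of Table 9 are omitted); drawing the recommendation values uniformly within the honest and dishonest ranges is an assumption consistent with Table 10.

```python
import random
random.seed(1)

N_STEPS, N_RECOMM = 7, 100          # simulation steps and recommendations per step (Table 10)
ACTUAL_TRUST = 0.8                  # actual trust value of the SRA in Scenario 1

for step in range(N_STEPS):
    dishonest_pct = 10 + 5 * step                                          # 10 %, 15 %, ..., 40 %
    n_bad = N_RECOMM * dishonest_pct // 100
    honest = [random.uniform(0.6, 1.0) for _ in range(N_RECOMM - n_bad)]   # H_range
    bad = [random.uniform(0.0, 0.3) for _ in range(n_bad)]                 # bad mouthing, D_range
    recs = honest + bad
    plain_avg = sum(recs) / len(recs)                                      # Average category
    kept = filter_dishonest(recs)                                          # sketch from Sect. 3.2
    filtered_avg = sum(kept) / len(kept)
    print(dishonest_pct, round(plain_avg, 2), round(filtered_avg, 2))
```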

5.2.2 Attack Scenario 2

In this scenario the performance of the proposed model is examined when the intent of the dishonest recommenders is to launch a ballot stuffing attack against the SPA by falsely recommending a high trustworthiness of the SRA. The environmental variables for the scenario are presented in Table 10. In this scenario, the simulation is again run for 7 steps. At each step,


Table 10 Experimental variables

Simulation variable Symbol Scenario 1 Scenario 2

Number of simulation runs  N_STEPS  7  7

Total number of RA  N_RA  500  500

Percentage of honest RA  N_honest_RA (%)  90  90

Honest recommendation range  H_range  [0.6 1.0]  [0 0.3]

Percentage of dishonest RA  N_dishonest_RA (%)  10  10

Dishonest recommendation range  D_range  [0 0.3]  [0.6 1.0]

Percentage increase in dishonest RA  N_dishonest_RA_inc  5  5

Actual trust value of SRA  T_SRA  0.8  0.15

Number of recommendations gathered in each step  N_Recomm  100  100

Percentage of recommendations with very high credibility  CR_VH  20  20

Percentage of recommendations with high credibility  CR_H  20  20

Percentage of recommendations with medium credibility  CR_M  20  20

Percentage of recommendations with low credibility  CR_L  20  20

Percentage of recommendations with very low credibility  CR_VL  20  20

dishonest RAs launch the ballot stuffing attack by recommending very high trust values (in the range [0.6 1]). The actual trust value of the SRA in this scenario is assumed to be 0.15. The comparison of the recommended trust values evaluated by the SPA using approaches from each of the four categories is shown in Fig. 6b; a usage sketch for this scenario is given below.
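Under the same assumptions, Scenario 2 only swaps the Table 10 parameters. For example, driving the hypothetical run_scenario from the previous sketch with plain averaging as the aggregation:

```python
# Ballot stuffing: honest values cluster near 0.15, dishonest ones fall in [0.6, 1.0].
trecom_per_step = run_scenario(
    aggregate=lambda recs, creds: sum(recs) / len(recs),   # plain averaging
    actual_trust=0.15,
    dishonest_range=(0.6, 1.0))
```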

The results from both attack scenarios (Fig. 6a, b) illustrate that the proposed model outperforms the other approaches in mitigating the impact of the attack. The average based recommendation model computes Trecom by simply taking the average of the received recommendations; therefore, as the percentage of dishonest recommendations increases, Trecom is skewed towards the dishonest recommendations. It is observed from Fig. 6a that this approach is affected by the badmouthing attack, as the computed recommended trust value falls well below the actual trust value. In reputation based recommendation models, reputation is believed to precisely reflect the reliability or trustworthiness of the recommender. However, Fig. 6a, b shows that this model is still unable to correctly judge the trustworthiness of the entity, as it cannot detect divergent behavior of an entity with a high reputation. The models that incorporate a technique to filter dishonest recommendations are also unable to produce accurate results: since a recommendation can be honest but old, or based on very limited experience, these models fail to measure the reliability of an honest recommendation. The results illustrate that the proposed approach computes the optimum recommended trust value because it detects 100 % of the dishonest recommendations (provided that honest recommendations are in the majority) and uses the credibility (based on experience, interaction recency and sensitivity) of each honest recommendation to weigh its influence. A simplified contrast between plain averaging and the filtering-plus-weighting strategy is sketched below. The experimental results demonstrate that the proposed approach can supplement, and hence improve, traditional trust and recommendation models.
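The sketch below illustrates this contrast. It is a simplified stand-in, not the paper's exact method: a median based deviation test replaces the deviation based filtering defined earlier in the paper, and fixed per-label weights replace the fuzzy inference engine; the threshold and weight values are assumptions chosen for illustration only.

```python
# Plain averaging versus deviation filtering followed by credibility weighting.
# Assumed stand-ins: median deviation test and fixed credibility weights.
CRED_WEIGHT = {"VL": 0.1, "L": 0.3, "M": 0.5, "H": 0.7, "VH": 0.9}

def average_trecom(recommendations, credibilities):
    """Average based model: every recommendation counts equally."""
    return sum(recommendations) / len(recommendations)

def filtered_weighted_trecom(recommendations, credibilities, threshold=0.25):
    """Drop recommendations deviating too far from the majority view,
    then weight the survivors by their credibility label."""
    median = sorted(recommendations)[len(recommendations) // 2]
    kept = [(r, CRED_WEIGHT[c]) for r, c in zip(recommendations, credibilities)
            if abs(r - median) <= threshold]          # median itself always survives
    total_weight = sum(w for _, w in kept)
    return sum(r * w for r, w in kept) / total_weight
```

As the share of dishonest RAs grows, average_trecom drifts towards the attack range, while filtered_weighted_trecom stays near the honest consensus as long as honest recommendations remain in the majority, which mirrors the behaviour reported in Fig. 6.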


Fig. 6 Experimental results. a Attack Scenario 1. b Attack Scenario 2. Both panels plot the recommended trust value against the percentage of dishonest recommenders (10–40 %) for the proposed approach, the average based model, UCL/LCL filtering and the reputation based model [R = 0.8, 0.9], together with the actual recommended trust value.

6 Conclusion

Indirect trust computation based on recommendations plays an important role in trust based access control models. It gives the service provider the confidence to interact with an unknown service requestor. We have proposed an effective mechanism for computing indirect trust. The model not only identifies dishonest recommendations but also determines the significance of each honest recommendation. A deviation based detection technique is proposed to identify dishonest recommendations. We also propose the concept of measuring the credibility of a recommendation to determine the influence of each honest recommendation. The credibility is measured by introducing three new parameters in the


recommendation response message, i.e., experience (on the basis of which the recommendation is given), time based experience (recency of interaction) and the sensitivity of the service for which the recommendation is offered. A fuzzy inference engine is used to compute the credibility, assigning a weight to each parameter based on fuzzy reasoning. The comparison of experimental results against other existing approaches clearly indicates the better performance of the proposed approach. Our future research will focus on integrating the proposed indirect trust computation model with the direct trust computation model and investigating the efficiency of the model in the face of different types of attacks.

References

1. Wagealla, W., Carbone, M., English, C., Terzis, S., & Nixon, P. (2003). A formal model on trust lifecycle management. In Workshop on formal aspects of security and trust, Italy, pp. 184–195, September.

2. Shand, B., Dimmock, N., & Bacon, J. (2003). Trust for ubiquitous, transparent collaboration. In 1st IEEE international conference on pervasive computing and communications, MA, USA, pp. 1–6, March 23–25.

3. Iltaf, N., Mahmud, U., & Kamran, F. (2006). Security & enforcement in pervasive computing environment (STEP). In 2006 international symposium on high-capacity optical networks and enabling technologies, USA, pp. 1–5, September 6–8.

4. Deno, M. K., & Sun, T. (2008). Probabilistic trust management in pervasive computing. In IEEE/IFIP international conference on embedded and ubiquitous computing, Guelph, ON, pp. 610–615, December 17–20.

5. Komarova, M., & Riguidel, M. (2008). Adjustable trust model for access control. In 5th international conference on autonomic and trusted computing, Oslo, Norway, pp. 429–443, June 23–25.

6. Almenarez, F., Marin, A., Diaz, D., Cortes, A., Campo, C., & Garcia, C. (2011). Trust management for multimedia P2P applications in autonomic networking. Ad Hoc Networks, 9(4), 687–690.

7. Sun, Y., Han, Z., & Ray Liu, K. J. (2008). Defense of trust management vulnerabilities in distributed networks. IEEE Communications Magazine, Feature Topic on Security in Mobile Ad Hoc and Sensor Networks, 46(2), 112–119.

8. Paul, R., & Richard, Z. (2002). Trust among strangers in Internet transactions: Empirical analysis of eBay's reputation system. Advances in Applied Microeconomics: A Research Annual, 11, 127–157.

9. "Amazon Auctions", http://auctions.amazon.com.

10. Zhang, Z., & Feng, X. (2009). New methods for deviation-based outlier detection in large database. In 6th international conference on fuzzy systems and knowledge discovery, China, pp. 495–499, August 14–16.

11. Josang, A., Ismail, R., & Boyd, C. (2007). A survey of trust and reputation systems for online service provision. Decision Support Systems, 43(2), 618–644.

12. Xiong, L., & Liu, L. (2004). PeerTrust: Supporting reputation-based trust for peer-to-peer electronic communities. IEEE Transactions on Knowledge and Data Engineering, 16(7), 843–857.

13. Chen, M., & Singh, J. P. (2001). Computing and using reputations for internet ratings. In 3rd ACM conference on electronic commerce, NY, USA, pp. 154–162.

14. Malik, Z., & Bouguettaya, A. (2007). Evaluating rater credibility for reputation assessment of web services. In 8th international conference on web information systems engineering, Nancy, France, pp. 38–49, December 3–7.

15. Ganeriwal, S., Balzano, L. K., & Srivastava, M. B. (2008). Reputation-based framework for high integrity sensor networks. ACM Transactions on Sensor Networks, 4, 1–37.

16. Zhou, R., & Hwang, K. (2007). PowerTrust: A robust and scalable reputation system for trusted peer-to-peer computing. IEEE Transactions on Parallel and Distributed Systems, 18(4), 460–473.

17. Liu, X., Datta, A., Fang, H., & Zhang, J. (2012). Detecting imprudence of reliable sellers in online auction sites. In 11th IEEE international conference on trust, security and privacy in computing and communications, Liverpool, UK, June 25–27.

18. Dellarocas, C. (2000). Immunizing online reputation reporting systems against unfair ratings and discriminatory behavior. In 2nd ACM conference on electronic commerce, MN, USA, pp. 150–157, October 17–20.

19. Liu, S., Zhang, J., Miao, C., Theng, Y., & Kot, A. (2012). An integrated clustering-based approach to filtering unfair multi-nominal testimonies. Computational Intelligence.

20. Whitby, A., Josang, A., & Indulska, J. (2005). Filtering out unfair ratings in Bayesian reputation systems. In 3rd international joint conference on autonomous agents and multi agent systems, pp. 106–117.

21. Weng, C. M., & Goh, A. (2006). An entropy-based approach to protecting rating systems from unfair testimonies. IEICE Transactions on Information and Systems, 89(9), 2502–2511.

22. Ahamed, S. I., Haque, M., Endadul, M., Rahman, F., & Talukder, N. (2010). Design, analysis, and deployment of omnipresent formal trust model (FTM) with trust bootstrapping for pervasive environments. Journal of Systems and Software, 83(2), 253–270.

23. Deno, M. K., Sun, T., & Woungang, I. (2011). Trust management in ubiquitous computing: A Bayesian approach. Computer Communications, 34(3), 398–406.

24. Arning, A., Agrawal, R., & Raghavan, P. (1996). A linear method for deviation detection in large databases. In Data Mining and Knowledge Discovery, Portland, Oregon, pp. 164–169.

25. Wang, L. X. (1997). A course in fuzzy systems and control. USA: Prentice Hall.

26. Matthews, B. W. (1975). Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochimica et Biophysica Acta, 405(2), 442–451.

Author Biographies

Naima Iltaf is a Ph.D. student of Software Engineering at the National University of Sciences and Technology, Pakistan. She received her MS degree in Software Engineering from the National University of Sciences and Technology, Pakistan, in 2006. Her fields of interest encompass security in pervasive environments and trust models in pervasive computing.

Abdul Ghafoor received the B.S. degree in electrical engineering from the University of Engineering and Technology, Pakistan, in 1994, the M.S. degree in electrical engineering from the National University of Sciences and Technology, Pakistan, in 2003, and the Ph.D. degree from the University of Western Australia in 2007. His research topics include model/controller order reduction, image processing/matching, through wall imaging and cognitive radio. Since 2008 he has been with the National University of Sciences and Technology, Pakistan, where he is currently an associate professor.


Usman Zia received his B.E. degree in Electrical Engineering from the National University of Sciences and Technology, Pakistan, in 2002. He is currently pursuing his M.S. in Computer Engineering at the Centre for Advanced Studies in Engineering, Pakistan. His fields of interest include trust based access control models and service oriented architecture.

Mukhtar Hussain received his Ph.D. in Computer Engineering from the USA in 1991. He has published and presented more than 40 research papers in international journals and conferences in the fields of artificial intelligence, information security and mobile sensor networks. Currently he is working as the CIO of a fertilizer company.
