
Cognitive radio


    1 Introduction

    The radio spectrum is one of the most important resources for communications, so spectrum sensing is essential for wireless communication; it is the key issue in Cognitive Radio.

    1.1 What is cognitive radio?

    Cognitive Radio is a generic term for a radio that is aware of its surrounding environment and can adapt its transmission accordingly. Moreover, a cognitive radio is a flexible system: it can change its communication parameters according to the prevailing conditions.

    1.2 Purpose

    Spectrum sensing plays a central role in CR because it is important to avoid interference with the primary user (PU) and to guarantee the PU a good quality of service. The CR identifies portions of unused spectrum and transmits in them so as to avoid interference with other users. The cognitive radio detects the unused spectrum and reports to the fusion center whether the primary user is present or not. Our aim, then, is to check whether the primary user is present or not. For that we perform hypothesis testing:

    1. Simple hypothesis testing

    2. Composite hypothesis testing.

    Under simple hypothesis testing we have:

    1. Binary hypothesis testing.

    2. Bayesian hypothesis testing.

    Let us study Bayesian hypothesis testing in detail.


    2 Bayesian Binary Hypothesis Testing

    In Bayesian binary hypothesis testing we select one of two hypotheses, H0 and H1, based on an observation of the random vector Y. From the conditional densities of Y we consider

    H0 : Y ~ f(y|H0)   and   H1 : Y ~ f(y|H1)   (1)

    If we consider the discrete case, then

    H0 : Y ~ P(y|H0)

    H1 : Y ~ P(y|H1)

    So, given Y, we decide whether H0 or H1 is true. This is accomplished using a decision function δ(y), which takes values in {0, 1}. Hence,

    δ(y) = 1 if H1 holds, and

    δ(y) = 0 if H0 holds.

    On the basis of the decision function we have disjoint decision regions Y0 and Y1, where

    Y0 = {y : δ(y) = 0},   Y1 = {y : δ(y) = 1}

    The hypotheses have prior probabilities

    π0 = P(H0)   and   π1 = P(H1),

    with π0 + π1 = 1.

    There also exists a cost function Cij: the cost incurred on selecting Hi when Hj holds. On the basis of the cost function, the conditional Bayes risk is given by

    R(δ|Hj) = Σ_{i=0}^{1} Cij P[Y ∈ Yi | Hj]   (2)

    Therefore the average Bayes risk is

    R(δ) = Σ_{j=0}^{1} R(δ|Hj) πj   (3)
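    As a concrete illustration of equations (2) and (3), the following sketch evaluates the average Bayes risk of a decision region Y1 for a small discrete example. The observation alphabet, likelihoods, costs and priors below are hypothetical, chosen only to exercise the formula.

```python
# Average Bayes risk of equations (2)-(3) for a hypothetical discrete example.

# P(y | Hj) for y in {0, 1, 2}
P = {0: [0.6, 0.3, 0.1],   # under H0
     1: [0.1, 0.3, 0.6]}   # under H1
C = [[0.0, 1.0],           # C[i][j] = cost of deciding Hi when Hj holds
     [1.0, 0.0]]
pi = [0.7, 0.3]            # priors (pi0, pi1)

def bayes_risk(Y1):
    """R(delta) = sum_j pij * (C0j * P(Y0|Hj) + C1j * P(Y1|Hj)),
    where Y1 is the region in which H1 is decided and Y0 its complement."""
    risk = 0.0
    for j in (0, 1):
        p1 = sum(P[j][y] for y in Y1)   # P(Y in Y1 | Hj)
        p0 = 1.0 - p1                   # P(Y in Y0 | Hj)
        risk += (C[0][j] * p0 + C[1][j] * p1) * pi[j]
    return risk

# Deciding H1 only on y = 2 versus on y in {1, 2}:
print(round(bayes_risk({2}), 3))       # 0.19
print(round(bayes_risk({1, 2}), 3))    # 0.31
```

    Comparing the two candidate regions shows how the choice of Y1 changes the risk; the optimum rule derived next selects the best such region.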

    2

    Hence the optimum Bayes decision rule δB is obtained by minimizing this risk, i.e.,

    R(δ) = Σ_{i=0}^{1} Σ_{j=0}^{1} Cij P[Y ∈ Yi | Hj] πj   (4)

    Now, as Y0 and Y1 are disjoint sets covering the observation space, we can write

    P(Y1|Hj) + P(Y0|Hj) = 1   (5)

    So for j = 0, 1 we get

    R(δ) = Σ_{j=0}^{1} C0j πj + Σ_{j=0}^{1} (C1j − C0j) πj P[Y ∈ Y1 | Hj]   (6)

    R(δ) = Σ_{j=0}^{1} C0j πj + ∫_{Y1} Σ_{j=0}^{1} (C1j − C0j) πj f(y|Hj) dy   (7)

    Hence by choosing Y1 appropriately we can minimize the risk:

    Y1 = { y ∈ R^n : Σ_{j=0}^{1} (C1j − C0j) πj f(y|Hj) ≤ 0 }   (8)

    Now for j = 1 we see that selecting H1 when H1 holds is less costly than selecting H0 when H1 holds, i.e., C11 < C01. In order to express the decision compactly we introduce the likelihood ratio of the vector Y, given as

    L(y) = f(y|H1) / f(y|H0)   (9)

    The rule can then be stated as: if L(y) < τ then H0 is chosen, and if L(y) ≥ τ then H1 is chosen, where

    τ = π0 (C10 − C00) / ( π1 (C01 − C11) )   (10)

    Then, under the conditions C00 < C10 and C11 < C01, the optimum Bayesian decision rule can be written as

    δB(y) = { 1  if L(y) ≥ τ
              0  if L(y) < τ    (11)

    From this we can derive the expression for the minimum probability of error.
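    The rule (9)-(11) can be sketched in a few lines. The unit-variance Gaussian densities f0 = N(0,1), f1 = N(1,1) and the cost values are hypothetical choices for illustration, not part of the notes.

```python
# Minimal sketch of the Bayes likelihood-ratio rule (9)-(11) for two
# hypothetical Gaussian hypotheses f0 = N(0,1) and f1 = N(1,1).
import math

def gauss(y, mean, var=1.0):
    """Gaussian density N(mean, var) evaluated at y."""
    return math.exp(-(y - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def delta_B(y, pi0=0.5, C00=0.0, C11=0.0, C10=1.0, C01=1.0):
    """Return 1 (decide H1) iff L(y) >= tau, with L(y) = f(y|H1)/f(y|H0)
    and tau = pi0*(C10 - C00) / (pi1*(C01 - C11)) as in (10)."""
    pi1 = 1.0 - pi0
    L = gauss(y, 1.0) / gauss(y, 0.0)
    tau = pi0 * (C10 - C00) / (pi1 * (C01 - C11))
    return 1 if L >= tau else 0

# With equal priors and uniform costs, tau = 1, and since L(y) = exp(y - 0.5)
# here, the rule reduces to a threshold at y = 0.5:
print(delta_B(0.2), delta_B(0.8))   # 0 1
```

    Changing the priors or the costs shifts τ, which moves the decision threshold on y accordingly.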


    2.1 Minimum Probability of Error

    Taking Cij = 0 for i = j and Cij = 1 for i ≠ j, a cost is incurred only when an error occurs, i.e., when H1 is chosen while H0 holds, or H0 is chosen while H1 holds. Then

    R(δ|H0) = P[E|H0]

    R(δ|H1) = P[E|H1]

    Hence, P[E] = π0 P[E|H0] + π1 P[E|H1],

    and consequently the threshold becomes τ = π0/π1.
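    A Monte Carlo sketch can check the minimum-probability-of-error rule numerically. The Gaussian model f0 = N(0,1), f1 = N(1,1) is again a hypothetical choice; for it the minimum error probability with equal priors is Φ(−0.5) ≈ 0.309.

```python
# Monte Carlo check of the minimum-Pe rule: uniform costs give tau = pi0/pi1.
import math
import random

random.seed(0)
pi0, pi1 = 0.5, 0.5
tau = pi0 / pi1                      # threshold for uniform costs

def L(y):
    """Likelihood ratio f1(y)/f0(y) for f0 = N(0,1), f1 = N(1,1)."""
    return math.exp(y - 0.5)

trials = 100_000
errors = 0
for _ in range(trials):
    h = 0 if random.random() < pi0 else 1    # draw the true hypothesis
    y = random.gauss(float(h), 1.0)          # observe Y ~ f(y|Hh)
    decision = 1 if L(y) >= tau else 0
    errors += (decision != h)

p_error = errors / trials
print(round(p_error, 3))             # close to Phi(-0.5), about 0.309
```

    The empirical error rate matches the theoretical minimum, as no other rule can do better under these costs.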

    3 Neyman-Pearson Test

    Bayesian hypothesis testing requires knowledge of the cost functions and of the prior probabilities π0 and π1. In Neyman-Pearson testing, the aim is instead to design the decision function so that it maximizes the probability of detection PD while bounding the probability of false alarm PF by a level α:

    D = { δ : PF(δ) ≤ α }   (12)

    δNP = arg max_{δ ∈ D} PD(δ)   (13)

    The optimization problem can be solved using Lagrangian optimization,

    L(δ, λ) = PD(δ) + λ ( α − PF(δ) )   (14)

    A test is optimal if it maximizes L(δ, λ) and satisfies the KKT condition

    λ ( α − PF(δ) ) = 0   (15)

    Now

    PD(δ) = ∫_{Y1} f(y|H1) dy

    PF(δ) = ∫_{Y1} f(y|H0) dy

    The Lagrangian can be written as

    L(δ, λ) = ∫_{Y1} f(y|H1) dy + λ ( α − ∫_{Y1} f(y|H0) dy )


    L(δ, λ) = ∫_{Y1} ( f(y|H1) − λ f(y|H0) ) dy + λα   (16)

    To maximize L(δ, λ), δ(y) should be chosen so that y is assigned to Y1 whenever

    f(y|H1) − λ f(y|H0) ≥ 0,   i.e.   L(y) = f(y|H1)/f(y|H0) ≥ λ.

    Thus δ(y) can be written as

    δ(y) = { 1  if L(y) ≥ λ
             0  if L(y) < λ    (17)

    where λ is chosen so that PF(δ) = α (ties L(y) = λ may be resolved by randomization when the distribution of L(Y) has point masses).
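    The NP construction can be sketched numerically for the hypothetical densities f0 = N(0,1), f1 = N(1,1). Because L(y) is continuous and monotone increasing in y for this pair, the event L(y) ≥ λ is equivalent to y ≥ t, so the threshold can be set directly on y to meet PF = α, and no randomization is needed.

```python
# Sketch of the Neyman-Pearson rule (17) for hypothetical Gaussians
# f0 = N(0,1) vs f1 = N(1,1): pick the y-threshold t so that PF = alpha.
from statistics import NormalDist

alpha = 0.05
std = NormalDist(0.0, 1.0)
t = std.inv_cdf(1.0 - alpha)            # threshold on y giving PF = alpha

PF = 1.0 - std.cdf(t)                   # = alpha by construction
PD = 1.0 - NormalDist(1.0, 1.0).cdf(t)  # detection probability under H1
print(round(PF, 3), round(PD, 3))       # PF = 0.05, PD about 0.26
```

    Raising α enlarges Y1 and therefore increases PD; the NP test trades false alarms for detections along this curve.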

    4 Bayesian Sequential Hypothesis Testing

    The standard hypothesis testing problem involves a fixed number of observations. In a sequential hypothesis testing problem, on the other hand, the number of observations is not fixed: depending on the observed samples, a decision may be taken after just a few samples, or a large number of samples may be observed if no decision has yet been reached. We assume an infinite sequence of i.i.d. (independent, identically distributed) observations {Yk : k ≥ 1} is available. From these, a sequential decision rule is formed as a pair (φ, δ), where φ = {φn, n ∈ N} is the sampling plan (or stopping rule) and δ = {δn, n ∈ N} denotes the terminal decision rule. The function φn(Y1, Y2, ..., Yn) maps the first n observations into {0, 1}: φn(Y1, ..., Yn) = 0 indicates that we should take one more sample, while φn(Y1, ..., Yn) = 1 indicates that we should stop sampling and make a decision. The terminal decision function δn(Y1, ..., Yn) takes values in {0, 1}, deciding H0 or H1 respectively; moreover, δn is defined only once we have decided to stop sampling. Writing N for the stopping time,

    φn(Y1, ..., Yn) = { 1  for n = N
                        0  for n ≠ N    (18)

    δn(Y1, ..., Yn) = { undefined          for n ≠ N
                        δN(Y1, ..., YN)   for n = N    (19)

    Now we associate costs with decisions in order to determine the sequential decision rule (φ, δ) in the Bayesian setting. To compute the Bayes risk R(φ, δ) of a sequential decision rule, one first needs the conditional Bayes risk under each hypothesis:

    R(φ, δ|H0) = conditional Bayes risk of (φ, δ) under H0

    R(φ, δ|H1) = conditional Bayes risk of (φ, δ) under H1

    The prior probabilities are given by π0 = P[H0] and π1 = P[H1]. Let CM be the cost corresponding to a miss, i.e., choosing H0 when H1 is true, and CF the cost corresponding to a false alarm, i.e., choosing H1 when H0 is true. Unlike in the fixed-sample-size hypothesis testing problem, here we also assume that each observation incurs a strictly positive cost D > 0. In the absence of this per-observation cost, one could collect an infinite number of observations, which would ensure that the decision is error-free.

    With this, one can express the conditional Bayes risk under H0 as

    R(φ, δ|H0) = CF P[δN(Y1, ..., YN) = 1 | H0] + D E[N|H0]   (20)

    where N denotes the random stopping time. The conditional Bayes risk under H1 is given by

    R(φ, δ|H1) = CM P[δN(Y1, ..., YN) = 0 | H1] + D E[N|H1]   (21)

    Therefore the average Bayes risk of the sequential decision rule (φ, δ) is given by

    R(φ, δ) = Σ_{j=0}^{1} R(φ, δ|Hj) πj   (22)

    Our aim is to choose a decision rule (φ, δ) that minimizes the Bayes risk; if (φB, δB) denotes the optimum Bayesian sequential decision rule, then

    R(φB, δB) = min_{(φ,δ)} R(φ, δ) = V(π0)   (23)

    Next we divide the set of all sequential decision rules into the following two categories:

    S = { (φ, δ) : φ0 = 0 }   and   { (φ0 = 1, δ0 = 1), (φ0 = 1, δ0 = 0) }   (24)

    Here S corresponds to the set of sequential decision rules in which at least one sample is taken before deciding between H0 and H1, while {(φ0 = 1, δ0 = 1), (φ0 = 1, δ0 = 0)} corresponds to the case where one does not take any sample. Let

    J(π0) = min_{(φ,δ) ∈ S} R(φ, δ)   (25)


    Note that since N ≥ 1 for all rules in S, we have E[N|H0] ≥ 1 and E[N|H1] ≥ 1. Thus

    R(φ, δ) ≥ D,

    min_{(φ,δ) ∈ S} R(φ, δ) ≥ D,

    J(π0) ≥ D.

    Also note that for π0 = 1 and π0 = 0, the error probabilities

    P[δN(Y1, ..., YN) = 1 | H0]   and   P[δN(Y1, ..., YN) = 0 | H1]

    are equal to zero. Therefore, for π0 = 1, R(φ, δ) = D E[N|H0] = D, so J(1) = D; similarly, for π0 = 0, J(0) = D.

    Next we compute the Bayes risk for the two cases of sequential decision rules in which no sample is taken, implying φ0 = 1. For (φ0 = 1, δ0 = 1), since no sample is taken, we have E[N|H0] = 0. This implies

    R(φ, δ|H0) = CF P[δ0 = 1 | H0] = CF

    and R(φ, δ|H1) = 0, since P[δ0 = 0 | H1] = 0. Hence

    R(φ, δ) = CF π0.   (26)

    Similarly, for (φ0 = 1, δ0 = 0) we get

    R(φ, δ) = CM (1 − π0).   (27)

    Therefore the minimum Bayes risk over the sequential decision rules that take no sample, i.e., over the rules (φ0 = 1, δ0 = 1) and (φ0 = 1, δ0 = 0), is given by the piecewise linear function

    T(π0) = min{ CF π0, CM (1 − π0) }   (28)

    T(π0) = { CF π0         for π0 ≤ CM/(CF + CM)
              CM (1 − π0)   for π0 > CM/(CF + CM)    (29)

    Since any sequential decision rule either takes at least one sample (risk at best J(π0)) or takes none (risk at best T(π0)), the minimum Bayes risk over all rules is

    V(π0) = min{ T(π0), J(π0) }   (30)


    The sketch of V(π0) versus π0 is shown in the figure given below. Two cases are possible, depending on whether J(π0) always remains above T(π0) or intersects T(π0) at two points. In the first case, when J(π0) remains above T(π0), the optimum decision rule is to make a decision immediately, i.e., φ0 = 1, with δ0 given as follows:

    δ0 = { 1  if π0 ≤ CM/(CF + CM)
           0  if π0 > CM/(CF + CM)    (31)

    In the second case the optimum decision is

    φ0 = 1, δ0 = 1   for π0 ≤ πL
    φ0 = 1, δ0 = 0   for π0 ≥ πU

    and φ0 = 0 for πL < π0 < πU.

    Thus the sequential decision rule can be summarized as follows:

    Decide H1 or H0 if π0 falls below the low threshold πL or above the high threshold πU, respectively.

    Keep taking samples while πL < π0 < πU.

    Next we ask what one should do, in the case πL < π0 < πU, after taking one sample Y1 = y1. The situation is the same as at the first stage: CF, CM and D are unchanged. However, with knowledge of the observation y1, one can update the prior probability π0 using Bayes' rule:

    π0(y1) = Pr[H0 | Y1 = y1]

           = Pr[Y1 = y1 | H0] Pr[H0] / Σ_{j=0}^{1} Pr[Y1 = y1 | Hj] Pr[Hj]

           = π0 f0(y1) / ( π0 f0(y1) + (1 − π0) f1(y1) )

    One can now again give the optimum decision rule after taking the sample Y1 = y1 as follows:

    Decide H1 if π0(y1) ≤ πL, and decide H0 if π0(y1) ≥ πU. If πL < π0(y1) < πU, then take another sample.
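    The Bayes update above is a one-liner in code. As before, the unit-variance Gaussian densities f0 = N(0,1) and f1 = N(1,1) are hypothetical stand-ins for the conditional densities.

```python
# Sketch of the posterior update pi0(y1) used in the sequential test,
# for hypothetical densities f0 = N(0,1) and f1 = N(1,1).
import math

def gauss(y, mean):
    """Unit-variance Gaussian density N(mean, 1) at y."""
    return math.exp(-(y - mean) ** 2 / 2.0) / math.sqrt(2.0 * math.pi)

def update_prior(pi0, y1):
    """pi0(y1) = pi0*f0(y1) / (pi0*f0(y1) + (1 - pi0)*f1(y1))."""
    n0 = pi0 * gauss(y1, 0.0)            # weight of H0 explanation
    n1 = (1.0 - pi0) * gauss(y1, 1.0)    # weight of H1 explanation
    return n0 / (n0 + n1)

# A sample near 0 raises the posterior probability of H0; one near 1 lowers it:
print(round(update_prior(0.5, 0.0), 3), round(update_prior(0.5, 1.0), 3))
```

    Iterating this update sample by sample is exactly what drives π0 toward πL or πU in the sequential rule.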


    Evaluation of J(π0)

    J(π0) = D + E_{Y1}[ V(π0(Y1)) ]   (32)

    The PDF/PMF of Y1 required for the evaluation of the above equation can be obtained as follows:

    f(y1) = Σ_{j=0}^{1} f(y1|Hj) P(Hj)   (33)

    f(y1) = π0 f0(y1) + (1 − π0) f1(y1)

    Combining equations (30) and (32) we obtain

    V(π0) = min{ T(π0), D + E_{Y1}[ V(π0(Y1)) ] }   (34)
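    The functional equation (34) can be solved numerically by value iteration on a grid of priors. The sketch below uses a hypothetical binary observation channel with P(y=1|H0) = 0.2, P(y=1|H1) = 0.8, and hypothetical costs CF = CM = 1, D = 0.05; the continuation region it recovers gives the thresholds πL and πU.

```python
# Value iteration for equation (34) on a grid of priors (hypothetical model).
CF, CM, D = 1.0, 1.0, 0.05           # costs and per-sample cost (assumed)
p0 = {0: 0.8, 1: 0.2}                # P(y | H0)
p1 = {0: 0.2, 1: 0.8}                # P(y | H1)
GRID = [i / 1000 for i in range(1001)]

def T(pi):
    """Stop-now risk of equation (28)."""
    return min(CF * pi, CM * (1.0 - pi))

def interp(V, pi):
    """Linear interpolation of the tabulated V on GRID."""
    x = min(max(pi, 0.0), 1.0) * 1000
    i = min(int(x), 999)
    w = x - i
    return V[i] * (1 - w) + V[i + 1] * w

V = [T(pi) for pi in GRID]           # initialize with the stop-now risk
for _ in range(200):                 # iterate V <- min(T, D + E[V(pi0(Y1))])
    newV = []
    for pi in GRID:
        cont = D
        for y in (0, 1):
            fy = pi * p0[y] + (1 - pi) * p1[y]        # equation (33)
            cont += fy * interp(V, pi * p0[y] / fy)   # Bayes-updated prior
        newV.append(min(T(pi), cont))
    V = newV

# The continuation region {pi0 : V(pi0) < T(pi0)} gives (piL, piU):
region = [pi for pi, v in zip(GRID, V) if v < T(pi) - 1e-9]
piL, piU = region[0], region[-1]
print(round(piL, 3), round(piU, 3))
```

    Because this example is symmetric (CF = CM and a symmetric channel), the recovered thresholds satisfy πL ≈ 1 − πU; raising the sampling cost D shrinks the continuation region.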

    Next, suppose n observation samples have been obtained. Given this, let us compute

    π0(y1, ..., yn) = Pr[H0 | y1, ..., yn]

                    = f(y1, ..., yn | H0) Pr[H0] / f(y1, ..., yn)

                    = π0 Π_{k=1}^{n} f(yk|H0) / f(y1, ..., yn)

    π0(y1, ..., yn) = π0 / ( π0 + (1 − π0) Ln(y1, ..., yn) )   (35)

    where Ln(y1, ..., yn) = Π_{k=1}^{n} f1(yk)/f0(yk).

    By the same reasoning as before, the optimum Bayesian rule is given by

    φBn(y1, ..., yn) = { 0  if πL < π0(y1, ..., yn) < πU
                         1  otherwise    (36)

    δBn(y1, ..., yn) = { 1  if π0(y1, ..., yn) ≤ πL
                         0  if π0(y1, ..., yn) ≥ πU    (37)

    Using (35), rules (36) and (37) can be rewritten in terms of the likelihood ratio:

    φBn(y1, ..., yn) = { 0  if A < Ln(y1, ..., yn) < B
                         1  otherwise    (38)

    δBn(y1, ..., yn) = { 1  if Ln(y1, ..., yn) ≥ B
                         0  if Ln(y1, ..., yn) ≤ A    (39)


    where

    A = π0 (1 − πU) / ( (1 − π0) πU )   and   B = π0 (1 − πL) / ( (1 − π0) πL )

    Thus the optimal Bayesian sequential hypothesis rule can be expressed as a sequential probability ratio test (SPRT): we keep taking samples as long as the likelihood ratio Ln stays between A and B, and we select H0 or H1 as soon as Ln falls below A or exceeds B, respectively. The condition πL < π0 < πU, required to ensure that we take at least one sample, can be transformed into a condition on A and B as follows:

    A < 1 < B

    Up to this point we have completely characterized the structure of the optimal Bayesian sequential decision test. But to specify it completely we need the values of πL and πU, or equivalently of A and B. In addition to πL and πU, to determine the worst-case prior

    π0^M = arg max_{π0} V(π0)

    one also requires the average Bayes risk V(π0). Conceptually, V(π0) can be determined from the functional equation (34).
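    The test (38)-(39) translates directly into a sampling loop. The thresholds A = 0.1, B = 10 and the Gaussian densities f0 = N(0,1), f1 = N(1,1) below are hypothetical; Wald's approximations suggest error probabilities of roughly 0.09 for these thresholds.

```python
# Runnable sketch of the SPRT of (38)-(39): sample until Ln leaves (A, B).
import math
import random

def sprt(sample, A=0.1, B=10.0, max_n=10_000):
    """Return (decision, n): decision 1 (H1) once Ln >= B, decision 0 (H0)
    once Ln <= A; keep sampling while A < Ln < B."""
    Ln = 1.0
    for n in range(1, max_n + 1):
        y = sample()
        Ln *= math.exp(y - 0.5)      # f1(y)/f0(y) for f0 = N(0,1), f1 = N(1,1)
        if Ln >= B:
            return 1, n
        if Ln <= A:
            return 0, n
    return (1 if Ln >= 1.0 else 0), max_n   # fallback, rarely reached

random.seed(1)
# Under H0 (samples drawn from N(0,1)) the test should rarely decide H1:
fa_rate = sum(sprt(lambda: random.gauss(0.0, 1.0))[0] for _ in range(200)) / 200
print(round(fa_rate, 3))             # small empirical false-alarm rate
```

    Widening (A, B) lowers both error probabilities at the price of a larger expected stopping time E[N], which is exactly the trade-off priced by D in the Bayesian formulation.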

    5 Sequential Probability Ratio Test

    We now examine properties of SPRTs. To this end, consider an SPRT with lower and upper thresholds A and B such that

    0 < A < 1 < B

    Writing Zk = ln( f1(Yk)/f0(Yk) ) for the log-likelihood increments and λn = Σ_{k=1}^{n} Zk for their partial sum, under H0 we have mean

    m0 = E[Zk|H0] = E[ ln( f1(Yk)/f0(Yk) ) | H0 ] = −D(f0‖f1)

    where D(f‖g) is the relative entropy between f and g. Under H1, we have mean

    m1 = E[Zk|H1] = E[ ln( f1(Yk)/f0(Yk) ) | H1 ] = D(f1‖f0)

    Since A < 1 < B, taking logarithms gives ln A < 0 < ln B; with a = ln A and b = ln B,

    a < 0 < b.

    The original SPRT can then be transformed into an SPRT(a, b) on λn such that:

    We select H0 whenever λn ≤ a. We select H1 whenever λn ≥ b. We keep sampling while a < λn < b.

    λn can be viewed as a zero-mean fluctuation added to the mean trajectory n·m0 or n·m1, depending on whether H0 or H1 holds.

    When H0 holds, λn drifts toward the lower threshold a (since m0 < 0). If λn crosses a, the correct decision is made.

    When H0 holds, an incorrect decision is made if the fluctuations are so large that they cause λn to cross the upper threshold b.

    An obvious way to reduce the chance of an incorrect decision is to choose the thresholds a and b as low and as high as possible, at the cost of a longer expected sampling time.
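    The drift described above can be simulated. For the hypothetical pair f0 = N(0,1), f1 = N(1,1), both relative entropies equal 1/2, so λn drifts at rate −0.5 per sample under H0 and +0.5 under H1.

```python
# Simulation of the mean trajectories n*m0 and n*m1 of the log-likelihood
# sum lambda_n, for hypothetical densities f0 = N(0,1), f1 = N(1,1).
import random

random.seed(2)

def lam_n(hypothesis, n=400):
    """Partial sum of Z_k = ln(f1(Y_k)/f0(Y_k)) = Y_k - 1/2 after n samples."""
    lam = 0.0
    for _ in range(n):
        y = random.gauss(float(hypothesis), 1.0)   # Y_k ~ f(y | H_hypothesis)
        lam += y - 0.5
    return lam

avg_h0 = sum(lam_n(0) for _ in range(50)) / 50     # near n*m0 = 400*(-0.5)
avg_h1 = sum(lam_n(1) for _ in range(50)) / 50     # near n*m1 = 400*(+0.5)
print(round(avg_h0), round(avg_h1))
```

    The averaged trajectories land close to −200 and +200, confirming that the drift separates the two hypotheses linearly in n while the fluctuations around it only grow like √n.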

    11