Csc313 Lecture Complexity(6)

Data Structures: Time Complexity and Formal Notations

    Hikmat Farhat

    March 17, 2014

    Hikmat Farhat Data Structures March 17, 2014 1 / 63


    Efficient Algorithms

Given an algorithm to solve a problem, we ask: is it efficient?

We seek a sensible definition of efficiency. How much more work is needed if the input doubles in size?

For large input sizes, can our algorithm solve the problem in a reasonable time?


    Polynomial Time

    Efficient algorithm = polynomial in the size of the input

Definition

Polynomial Time: an algorithm runs in polynomial time if there exist constants a, b such that for every input of size n, the number of computation steps is at most a·n^b.


Why Polynomial Time?


    Worst Case Analysis

Usually the running time reported is the worst-case running time.

One could analyze the average case, but it is much more difficult and depends on the chosen input distribution.

Therefore an algorithm is efficient if it has a worst-case polynomial time.

There are exceptions, the most important being the simplex algorithm, which works very well in practice despite a bad worst case.


    Informal Example: Union Find

We will introduce the cost of algorithms informally by an example: union-find.

We have a set of N points and a set of M connections between these points. For any two points p and q we would like to answer the question: is there a path from p to q?

Three different algorithms, with different costs, will be presented to solve the above problem.


    Union Find: attempt number 1

The basic idea is to associate an identifier with every point, so we maintain an array id[N].

The identifier of a given point is the group the point belongs to. Initially there are N groups with one point in each, namely id[i] = i.

When two points, p and q, are found to be connected, their respective groups are merged (union).


Find(p)
    return id[p]

Union(p,q)
    idp ← Find(p)
    idq ← Find(q)
    if idp = idq then
        return
    end
    for i = 0 to n-1 do
        if id[i] = idp then
            id[i] ← idq
        end
    end
    count ← count - 1
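The pseudocode above can be made concrete in C. The sketch below is an illustrative assumption, not code from the lecture: the names `id` and `count` follow the slides, the fixed size `N` is chosen here, and the union routine is called `union_pq` because `union` is a C keyword.

```c
#include <assert.h>

#define N 10           /* number of points; fixed here for illustration */

int id[N];             /* id[i] = identifier of the group containing point i */
int count = N;         /* number of distinct groups */

void init(void) {
    for (int i = 0; i < N; i++)
        id[i] = i;     /* initially N groups, one point each */
    count = N;
}

int find(int p) {      /* exactly 1 array reference */
    return id[p];
}

void union_pq(int p, int q) {
    int idp = find(p), idq = find(q);
    if (idp == idq) return;          /* already in the same group */
    for (int i = 0; i < N; i++)      /* relabel p's whole group: N tests */
        if (id[i] == idp)
            id[i] = idq;
    count--;
}
```

Each call to union_pq scans the whole array, which is exactly the cost counted on the next slides.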


    Computational Cost

How much does it cost to run Find and Union on N elements?

What exactly is the unit of cost?

For now let us say that referencing an array element costs 1.

The question becomes: how many array references do Union and Find perform?

For Find it is easy: just 1.

The cost of the Union function is the sum of 2 calls to Find and the cost of the loop.

The if statement is executed N times, whereas the assignment is executed only when id[i] = idp. We know that id[p] = idp and id[q] ≠ idp, therefore the test is true at least once and at most N − 1 times.


Since the comparison in the if statement is always performed, there are N if tests. When the test is true we do one more operation, id[i] ← idq. Therefore the total cost of the loop is at least N + 1 and at most N + (N − 1) = 2N − 1. Finally, the total cost of Union is between N + 3 and 2N + 1.

The above is the cost of a single call to Union.

The number of times the function Union is called depends on the problem.

Consider the case when all sites are connected: then there are at least N − 1 connected pairs and Union is called N − 1 times, for a total cost of at least (N − 1)(N + 3).


    Quick Union

A different approach is to organize all related points in a tree structure.

Two points belong to the same group iff they belong to the same tree.

A tree is uniquely identified by its root. The array id[] has a different meaning: id[i] = k means that site k is the parent of site i.

Only the root of a tree has the property id[i] = i.

In this case Find(p) returns the root of the tree that p belongs to.


    Quick Union Pseudo Code

Find(p)
    while id[p] ≠ p do
        p ← id[p]
    end
    return p

Union(p,q)
    proot ← Find(p)
    qroot ← Find(q)
    if proot = qroot then
        return
    end
    id[proot] ← qroot
    count ← count - 1
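A C sketch of the quick-union variant, in the same illustrative setup as before (names and sizing assumed rather than taken from the lecture): `find` now walks up to the root, and `union_pq` does a single link.

```c
#include <assert.h>

#define N 10

int id[N];             /* id[i] = parent of i; a root satisfies id[i] == i */
int count = N;

void init(void) {
    for (int i = 0; i < N; i++)
        id[i] = i;     /* every node starts as its own root */
    count = N;
}

int find(int p) {      /* 2d+1 array references for a node at depth d */
    while (id[p] != p)
        p = id[p];
    return p;
}

void union_pq(int p, int q) {
    int proot = find(p), qroot = find(q);
    if (proot == qroot) return;
    id[proot] = qroot; /* hang p's tree under q's root */
    count--;
}
```

Union is now cheap, but find can become expensive when a tree degenerates, which is exactly the problem discussed on the next slide.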


    Quick Union Cost

The cost of Find(p) is 2d + 1, where d is the depth of node p. This means that the cost of Union(p, q) is between (2dp + 1) + (2dq + 1) + 1 and (2dp + 1) + (2dq + 1) + 3, where dp and dq are the depths of p and q.

The problem is that in some cases the tree degenerates into a linear list. In that case the height equals the size, and instead of getting log n behavior we get n.

To avoid such a situation we try to keep the trees balanced.

We do this by always attaching the smaller tree to the larger one. To this end we introduce an array tsize, with every entry initialized to 1.


    Union Find: take 3

Union(p,q)
    proot ← Find(p)
    qroot ← Find(q)
    if proot = qroot then
        return
    end
    if tsize[proot] > tsize[qroot] then
        id[qroot] ← proot
        tsize[proot] ← tsize[proot] + tsize[qroot]
    else
        id[proot] ← qroot
        tsize[qroot] ← tsize[proot] + tsize[qroot]
    end
    count ← count - 1
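The weighted version in the same illustrative C setup; the `tsize` array mirrors the slides, everything else (names, fixed `N`) is an assumption.

```c
#include <assert.h>

#define N 10

int id[N];
int tsize[N];          /* tsize[r] = number of nodes in the tree rooted at r */
int count = N;

void init(void) {
    for (int i = 0; i < N; i++) {
        id[i] = i;
        tsize[i] = 1;
    }
    count = N;
}

int find(int p) {
    while (id[p] != p)
        p = id[p];
    return p;
}

void union_pq(int p, int q) {
    int proot = find(p), qroot = find(q);
    if (proot == qroot) return;
    if (tsize[proot] > tsize[qroot]) {   /* always attach the smaller tree */
        id[qroot] = proot;               /* under the root of the larger  */
        tsize[proot] += tsize[qroot];
    } else {
        id[proot] = qroot;
        tsize[qroot] += tsize[proot];
    }
    count--;
}
```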


    Weighted Quick Union

We have shown that in the quick-union version of union-find, Find(p) costs 2d + 1, where d is the depth of node p.

We will now show that during the computation of the weighted quick union for N sites, the depth of ANY node is at most log N. It is sufficient to show that the height of ANY tree of size k is at most log k (this is not the case in the original quick union, where the height can be up to k − 1).


    Proof

    By induction on the size of the tree.

Base case: k = 1. There is only one node and the height is log k = 0.

Inductive step: assume that any tree of size m < k has height at most log m. A tree of size k is formed by merging two trees of sizes i and j, where i ≤ j and i + j = k.


    The size of the combined tree is i+j=k.

Using the weighted quick union method we know that the height of the combined tree is at most max(1 + log i, log j) (why?)

In the first case 1 + log i = log 2i ≤ log(i + j) = log k; in the second case log j ≤ log(i + j) = log k.


    Asymptotic Growth of Functions

    Definition

Big Oh: the set O(g(n)) is defined as all functions f(n) with the property

∃ c, n0 > 0 such that f(n) ≤ c·g(n) for all n ≥ n0

Figure: Graphical definition of O, taken from the CLRS book


    Example

f(n) = 2n^2 + n = O(n^2): let c = 3 and n0 = 1. For n ≥ n0 = 1 we have n ≤ n^2, so

2n^2 + n ≤ 3n^2, i.e. f(n) ≤ c·n^2, hence f(n) = O(n^2)

On the other hand, f(n) = 2n^2 + n ≠ O(n).


    Definition

Big Omega: the set Ω(g(n)) is defined as all functions f(n) with the property

∃ c, n0 > 0 such that f(n) ≥ c·g(n) for all n ≥ n0

Figure: Graphical definition of Ω, taken from the CLRS book


    Example

Consider f(n) = √n and g(n) = log₂ n.

f(n) = Ω(g(n)) because for c = 1, n0 = 16 we have √16 = 4 = log₂ 2⁴ = log₂ 16.

Note that between n = 4 and n = 16 the value of log₂ n is at least √n, so no smaller n0 works.


Abuse of notation: if h(n) ∈ O(g(n)) we write h(n) = O(g(n)); similarly, if h(n) ∈ Ω(g(n)) we write h(n) = Ω(g(n)).

If h(n) = O(g(n)) we say g(n) is an upper bound for h(n). If h(n) = Ω(g(n)) we say g(n) is a lower bound for h(n).


    Definition

Big Theta: the set Θ(g(n)) is defined as all functions f(n) with the property

∃ c1, c2, n0 > 0 such that c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0

Figure: Graphical definition of Θ, taken from the CLRS book


    Example

Consider c1 = 1, c2 = 3 and n0 = 1: it is obvious that

c1·n^2 ≤ 2n^2 + n ≤ c2·n^2 for all n ≥ n0, so 2n^2 + n = Θ(n^2)

We can show that f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).

    If f(n) = (g(n)) we say g(n) is a tight bound for f(n).


Definition

Little Oh: the set o(g(n)) is defined as all functions f(n) with the property

for all c > 0 there exists n0 such that f(n) < c·g(n) for all n ≥ n0


    Definition

Little omega: the set ω(g(n)) is defined as all functions f(n) with the property

for all c > 0 there exists n0 such that c·g(n) < f(n) for all n ≥ n0

We can show that the above definition is equivalent to

lim_{n→∞} f(n)/g(n) = ∞


    Examples

f(n) = 2n^2 + n

f(n) = ω(n) because

lim_{n→∞} (2n^2 + n)/n = ∞

f(n) = o(n^3) because

lim_{n→∞} (2n^2 + n)/n^3 = 0


    Using Limits

Sometimes it is easier to determine the relative growth rate of two functions f(n) and g(n) by using limits:

if lim_{n→∞} f(n)/g(n) = 0 then f(n) = o(g(n)).

if lim_{n→∞} f(n)/g(n) = ∞ then f(n) = ω(g(n)).

if lim_{n→∞} f(n)/g(n) = c for some constant c > 0, then f(n) = Θ(g(n)).

Can we always do that? No.

In many situations the complexity cannot be written in an analytic form.


    Exponential vs Polynomial vs Logarithmic

It is easy to show that for all a > 0, b > 1

lim_{n→∞} n^a / b^n = 0

And polynomials grow faster than logarithms (use m = log n and the previous result; here log^a x = (log x)^a): for all a > 0, b > 0

lim_{n→∞} (log^a n) / n^b = 0


    Arithmetic Properties

transitivity: if f(n) = O(g(n)) and g(n) = O(h(n)) then f(n) = O(h(n)).

e.g.: log n = O(n) and n = O(2^n), therefore log n = O(2^n).

constant factor: if f(n) = O(k·g(n)) for some constant k > 0 then f(n) = O(g(n)).

e.g.: n^2 = O(3n^2), thus n^2 = O(n^2).

sum: if f1(n) = O(g1(n)) and f2(n) = O(g2(n)) then f1(n) + f2(n) = O(max(g1(n), g2(n))).

e.g.: 3n^2 = O(n^2) and 6 log n = O(log n), therefore 3n^2 + 6 log n = O(n^2).

product: if f1(n) = O(g1(n)) and f2(n) = O(g2(n)) then f1(n)·f2(n) = O(g1(n)·g2(n)).

e.g.: 3n^2 = O(n^2) and 6 log n = O(log n), therefore 3n^2 · 6 log n = O(n^2 log n).


    Code Fragments

sum ← 0
for i = 1 ... n do
    sum ← sum + 1
end

The operation sum ← 0 is independent of the input, thus it costs some constant time c1.

The operation sum ← sum + 1 is independent of the input, thus it costs some constant time c2.

Regardless of the input the loop runs n times, therefore the total cost is c1 + c2·n = Θ(n).


The algorithm below for finding the maximum of n numbers is Θ(n).
Input: a1, ..., an
Output: max(a1, ..., an)

max ← a1
for i = 2 ... n do
    if ai > max then
        max ← ai
    end
end

    Try 89,47,80,50,67,102 and 19,15,13,10,8,3

    Do they have the same running time?
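One way to compare the two inputs is to instrument the algorithm. The counter below is an illustrative addition, not part of the slides: the comparison runs n − 1 times for every input, but the number of times max is reassigned depends on the data.

```c
#include <assert.h>

int max_updates;   /* how many times max was reassigned (instrumentation only) */

int find_max(const int a[], int n) {
    int max = a[0];
    max_updates = 0;
    for (int i = 1; i < n; i++) {
        if (a[i] > max) {       /* executed n-1 times regardless of input */
            max = a[i];         /* executed only when a new maximum appears */
            max_updates++;
        }
    }
    return max;
}
```

On 89, 47, 80, 50, 67, 102 the maximum is updated once (at 102); on the decreasing input 19, 15, 13, 10, 8, 3 it is never updated. The two runs differ only in the cheap assignment, not in the Θ(n) comparison count.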


    Sequential Search


Given an array a, check if element x is in the array.

for i = 1 ... n do
    if a[i] = x then
        return True
    end
end
return False

    What is the running time of the above algorithm?

Consider the two extreme cases: x is the first or the last element of the array.
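A direct C rendering of the search (0-based indexing; the function name is an illustrative choice):

```c
#include <assert.h>
#include <stdbool.h>

/* Best case: x is a[0], 1 comparison. Worst case: x is last or absent, n comparisons. */
bool contains(const int a[], int n, int x) {
    for (int i = 0; i < n; i++)
        if (a[i] == x)
            return true;
    return false;
}
```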


If x is the first element then we perform a single operation. This is the best case.

If x is the last element then we perform n operations. This is the worst case.

Now if we run the algorithm on many different (random) inputs and average the results, we get the average case.

Which one do we use? It depends on the application and on feasibility. Real-time and critical applications usually require the worst case. In most other situations we prefer the average case, but it is difficult to calculate and depends on the chosen random distribution!

In light of the above, what are the best case, average case and worst case of the compute-max algorithm we had before?


    Nested Loops


What is the complexity of nested loops?

The cost of a statement in a nested loop is the cost of the statement multiplied by the product of the sizes of the loops.

for i = 1 ... n do
    for j = 1 ... m do
        k ← k + 1
    end
end

The cost is O(n·m).


    Factorial


    Consider the recursive implementation of the factorial function.

factorial(n)
    if n = 1 then
        return 1
    else
        return n * factorial(n-1)
    end

The cost for an input of size n? T(n) = T(n−1) + C

Thus T(n) = Θ(n).


    Complexity of Factorial


T(n) = T(n−1) + C
     = T(n−2) + 2C
     = ...
     = T(n−i) + i·C
     = ...
     = T(1) + (n−1)·C
     = Θ(n)


    Fibonacci


Computing the nth Fibonacci number can be done recursively:

fib(n)
    if n = 0 then
        return 0
    end
    if n = 1 then
        return 1
    end
    return fib(n-1) + fib(n-2)

If T(n) is the cost of computing fib(n) then

T(n) = T(n−1) + T(n−2) + 3

We will show, by induction on n, that T(n) ≥ fib(n), i.e. the cost of computing Fibonacci number n is at least the number itself.


We are assuming that all operations cost the same, so the 3 comes from executing the two if statements and the sum.

First the base cases. If n = 0 the algorithm costs 1 (one if statement); if n = 1 it costs 2 (two if statements). Thus T(0) = 1, T(1) = 2.

In the other cases we have T(n) = T(n−1) + T(n−2) + 3. This means T(2) = 6 ≥ fib(2) = 1. Assume that T(n) ≥ fib(n); then

T(n+1) = T(n) + T(n−1) + 3
       ≥ fib(n) + fib(n−1)    (by the hypothesis)
       = fib(n+1)

One can show that for large enough n, fib(n) ≥ (3/2)^n; thus T(n) ≥ (3/2)^n, which is exponential!


    Fibonacci: take two


Can we compute Fibonacci numbers more efficiently? It turns out yes, by just remembering the values we already computed.

A simple iterative algorithm:

FiboIter(n)
    f[0] ← 0
    f[1] ← 1
    for i = 2 to n do
        f[i] ← f[i-1] + f[i-2]
    end
    return f[n]
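In C the iterative algorithm looks as follows. The buffer of size n + 1 mirrors the slide's array f; using `long` to delay overflow is an illustrative choice.

```c
#include <assert.h>

long fib_iter(int n) {
    if (n < 2) return n;       /* f[0] = 0, f[1] = 1 */
    long f[n + 1];             /* O(n) extra space, as on the slide */
    f[0] = 0;
    f[1] = 1;
    for (int i = 2; i <= n; i++)
        f[i] = f[i - 1] + f[i - 2];   /* each value computed exactly once: O(n) time */
    return f[n];
}
```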


    Comparison


    We had two different algorithms to compute Fibonacci number n

One was Ω((3/2)^n) while the other was O(n).

In the first one we did not need to save anything.

In the second algorithm we used an array of size n: space complexity O(n).

This is a trade-off between time and space.

Obviously in this case the trade-off is worth it.


    Exponentiation


Exponentiation is another example where the simplest algorithm is much less than optimal.

The simplest way to compute x^n is x · x · ... · x (n times); therefore the complexity of that algorithm is Θ(n). We can do (much) better by observing that

if n is even then x^n = (x^2)^(n/2)

if n is odd then x^n = (x^2)^((n−1)/2) · x

(Note that both cases can be written in terms of (x^2)^⌊n/2⌋.)


    Implementation


int power(int x, int n) {
    if (n == 0) return 1;
    int half = power(x, n / 2);
    half = half * half;
    if ((n % 2) != 0) half = half * x;
    return half;
}


    Complexity of Exponentiation


The analysis is simplified by assuming n = 2^k (other cases are similar, albeit more complicated).

Assume the division n/2, the product half*half, and the if statement each cost 1, for a total of 4 (including the test for n == 0) when n is even and 5 when it is odd.

Let T(n) be the computational cost for x^n; then

T(n) = T(n/2) + 4
     = T(n/4) + 8
     = ...
     = T(n/2^i) + 4i
     = ...
     = T(1) + 4k
     = Θ(k) = Θ(log n)


    General case


    In general the problem has the following recursion relation

T(n) = T(⌊n/2⌋) + 4 + (n mod 2)

We will show that the general form

T(n) = T(⌊n/2⌋) + M + (n mod 2)    (1)

has the solution

T(n) = M·⌊log n⌋ + ν(n)    (2)

where ν(n) is the number of 1s in the binary representation of n. Using the facts that ⌊log⌊n/2⌋⌋ = ⌊log n⌋ − 1 and ν(⌊n/2⌋) = ν(n) − (n mod 2), it is easy to check that (2) satisfies (1).


    Fibonacci: Take Three


The previous method for exponentiation can be used to compute Fibonacci(n) in O(log n).

The key is that

[ F(n+1)  F(n)   ]     [ 1  1 ]^n
[ F(n)    F(n-1) ]  =  [ 1  0 ]

    The above is shown by induction on n.

    We know how to compute the power in log n.
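Combining the matrix identity with repeated squaring gives an O(log n) Fibonacci in C. This sketch is an illustrative assumption (2×2 matrices hard-coded, `long` arithmetic):

```c
#include <assert.h>

/* r = a * b for 2x2 matrices; a temporary makes r = a or r = b safe. */
static void mat_mul(const long a[2][2], const long b[2][2], long r[2][2]) {
    long t[2][2];
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            t[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            r[i][j] = t[i][j];
}

/* F(n) = ( [[1,1],[1,0]]^n )[0][1], computed with O(log n) multiplications. */
long fib_log(int n) {
    long result[2][2] = {{1, 0}, {0, 1}};   /* identity */
    long base[2][2]   = {{1, 1}, {1, 0}};
    while (n > 0) {
        if (n % 2 != 0)
            mat_mul(result, base, result);  /* fold in the current power */
        mat_mul(base, base, base);          /* square */
        n /= 2;
    }
    return result[0][1];
}
```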


    Maximum Subarray Sum


Given an array A of n elements, we ask for the maximum value of

    Σ_{k=i}^{j} A_k

For example, if A is -2, 11, -4, 13, -5, -2 then the answer is 20 = Σ_{k=2}^{4} A_k.


Brute Force


Compute the sum of all subarrays of an array A of size n and return the largest.

A subarray starts at index i and ends at index j, where 0 ≤ i ≤ j ≤ n − 1.
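A C sketch of the brute force (function name and 0-based bounds are illustrative assumptions; the innermost statement is the one whose executions the next slide counts):

```c
#include <assert.h>

int max_subarray_brute(const int a[], int n) {
    int best = a[0];                      /* assumes n >= 1 */
    for (int i = 0; i < n; i++)           /* start index */
        for (int j = i; j < n; j++) {     /* end index */
            int sum = 0;
            for (int k = i; k <= j; k++)  /* re-sum a[i..j] from scratch */
                sum += a[k];              /* executed Theta(n^3) times overall */
            if (sum > best)
                best = sum;
        }
    return best;
}
```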


To determine the complexity of the brute force approach, note that there are 3 nested loops; the complexity therefore depends on how many times the innermost statement (the sum update) is executed.

The number of executions is

Σ_{i=0}^{n−1} Σ_{j=i}^{n−1} Σ_{k=i}^{j} 1 = Σ_{i=0}^{n−1} Σ_{j=i}^{n−1} (j − i + 1)

To evaluate the inner sum let m = j − i + 1; then

Σ_{j=i}^{n−1} (j − i + 1) = Σ_{m=1}^{n−i} m = (n − i)(n − i + 1)/2


    Finally, we get

Σ_{i=0}^{n−1} (n − i)(n − i + 1)/2 = (n^3 + 3n^2 + 2n)/6 = Θ(n^3)


    Divide and Conquer


A general technique: divide the problem into 2 or more parts (divide) and patch the subproblem solutions together (conquer).

In this context, if we divide the array into two halves, there are 3 possibilities:

1. the max is entirely in the first half
2. the max is entirely in the second half
3. the max spans both halves.

Therefore the solution is max(left, right, both).


    Both halves


If the sum spans both halves, it must include the last element of the first half and the first element of the second half. This means that we are looking for the sum of

1. the max subsequence in the first half that includes its last element
2. the max subsequence in the second half that includes its first element

computeBoth(A, left, right)
    center ← (left + right)/2
    sum1 ← 0; max1 ← A[center]
    for i = center down to left do
        sum1 ← sum1 + A[i]
        if sum1 > max1 then
            max1 ← sum1
        end
    end
    sum2 ← 0; max2 ← A[center+1]
    for j = center + 1 to right do
        sum2 ← sum2 + A[j]
        if sum2 > max2 then
            max2 ← sum2
        end
    end
    return max1 + max2


    Recursive Algorithm


maxSubarray(A, left, right)
    if left = right then
        return A[left]
    end
    center ← (left + right)/2
    S1 ← maxSubarray(A, left, center)
    S2 ← maxSubarray(A, center + 1, right)
    S3 ← computeBoth(A, left, right)
    return max(S1, S2, S3)
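The whole divide-and-conquer solution in C (a sketch under the usual assumptions: 0-based indices, a base case for one-element ranges, helper names chosen here rather than taken from the slides):

```c
#include <assert.h>

static int max3(int x, int y, int z) {
    int m = x > y ? x : y;
    return m > z ? m : z;
}

/* Best sum of a subarray crossing the midpoint: best suffix of the left
   half (must include a[center]) plus best prefix of the right half. */
static int compute_both(const int a[], int left, int center, int right) {
    int sum = 0, best_left = a[center];
    for (int i = center; i >= left; i--) {
        sum += a[i];
        if (sum > best_left) best_left = sum;
    }
    int best_right = a[center + 1];
    sum = 0;
    for (int j = center + 1; j <= right; j++) {
        sum += a[j];
        if (sum > best_right) best_right = sum;
    }
    return best_left + best_right;
}

int max_subarray(const int a[], int left, int right) {
    if (left == right)
        return a[left];                    /* base case: single element */
    int center = (left + right) / 2;
    return max3(max_subarray(a, left, center),         /* entirely left  */
                max_subarray(a, center + 1, right),    /* entirely right */
                compute_both(a, left, center, right)); /* spans both     */
}
```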


    Complexity


Given an array of size n, the cost of the call to maxSubarray is divided into two computations:

1. the work of computeBoth, which is Θ(n)
2. two recursive calls on problems of half the size

Therefore the total cost can be written as

T(n) = 2T(n/2) + Θ(n)

To solve the above recurrence, we assume for simplicity that n = 2^k.


    Thus

T(2^k) = 2T(2^(k−1)) + C·2^k
       = 2(2T(2^(k−2)) + C·2^(k−1)) + C·2^k
       = 2^2·T(2^(k−2)) + 2·C·2^k
       = ...
       = 2^i·T(2^(k−i)) + i·C·2^k
       = 2^k·T(1) + k·C·2^k
       = Θ(n log n)


    Running time comparison


There is a Θ(n) algorithm for max subarray. Can you find it?
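One well-known linear-time answer is Kadane's algorithm; the sketch below is offered as one possibility (stop reading here if you would rather find it yourself):

```c
#include <assert.h>

/* One pass, Theta(n): ending_here is the best sum of a subarray ending at
   the current index; a negative running sum can never help, so reset it to 0. */
int max_subarray_linear(const int a[], int n) {
    int best = a[0], ending_here = 0;    /* assumes n >= 1 */
    for (int i = 0; i < n; i++) {
        ending_here += a[i];
        if (ending_here > best) best = ending_here;
        if (ending_here < 0) ending_here = 0;
    }
    return best;
}
```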


    Master Theorem (special case)


A generalization of the previous cases is done using a simplified version of the Master theorem:

T(n) = a·T(n/b) + Θ(n^d)


T(n) = a·T(n/b) + c·n^d
     = a(a·T(n/b^2) + c·(n/b)^d) + c·n^d
     = a^2·T(n/b^2) + c·n^d·(a/b^d) + c·n^d
     = a^2(a·T(n/b^3) + c·(n/b^2)^d) + c·n^d·(a/b^d) + c·n^d
     = a^3·T(n/b^3) + c·n^d·(a/b^d)^2 + c·n^d·(a/b^d) + c·n^d
     = a^i·T(n/b^i) + c·n^d·Σ_{l=0}^{i−1} (a/b^d)^l

The above reaches T(1) when b^k = n for some k. We get


T(n) = a^k·T(1) + c·n^d·Σ_{l=0}^{k−1} (a/b^d)^l

There are three cases:

1. a = b^d
2. a < b^d
3. a > b^d


case 1: a = b^d


If a = b^d (i.e. a/b^d = 1) then we get

T(n) = a^k·T(1) + c·n^d·k

Since k = log_b n,

T(n) = a^(log_b n)·T(1) + c·n^d·log_b n
     = n^(log_b a)·T(1) + c·n^d·log_b n
     = n^d·T(1) + c·n^d·log_b n
     = Θ(n^d log n)


case 2: a < b^d


T(n) = a^k·T(1) + c·n^d·Σ_{l=0}^{k−1} (a/b^d)^l
     = a^k·T(1) + c·n^d·((a/b^d)^k − 1)/((a/b^d) − 1)

For large n, i.e. n → ∞, we have k = log_b n, and since a < b^d the ratio a/b^d is less than 1, so the geometric sum is bounded by a constant. Moreover, a^k = a^(log_b n) = n^(log_b a) with log_b a < d. Therefore T(n) = Θ(n^d).


In this case (a > b^d) we can write

T(n) = a^k·T(1) + c·n^d·((a/b^d)^k − 1)/((a/b^d) − 1)
     = n^(log_b a)·T(1) + g·n^d·(a/b^d)^k        (g a constant)
     = n^(log_b a)·T(1) + g·n^d·(a/b^d)^(log_b n)
     = n^(log_b a)·T(1) + g·n^d·n^(log_b(a/b^d))
     = n^(log_b a)·T(1) + g·n^d·n^(−d + log_b a)
     = Θ(n^(log_b a))
