
Chapter 2: Analysis of Algorithms

Compiled by: Sanjay Patel, Assistant Professor, SVBIT.

Outline

The efficient algorithm

Average and worst case analysis

Elementary operations

Asymptotic notation

Analyzing control statements

Amortized analysis

Sorting algorithms

Binary tree search

The efficient algorithm

To decide which algorithm is efficient, we can analyze it in three ways:

Empirical approach (posteriori approach)

Theoretical approach (priori approach)

Hybrid approach

Empirical approach (posteriori approach)

We write programs for the various competing techniques and then test each program on the computer using different instances.

This approach is also called the posteriori approach.

Disadvantages:

Practical considerations may force us to test the algorithms on only a small number of instances.

Discovering which algorithm performs better for large as well as for small instances is critical.

Theoretical approach (priori approach)

The required quantity of resources is determined mathematically. The resources considered in this approach are computing time and storage space.

This approach is also known as the priori approach.

It is independent of the computer being used, the programming language used, and the skill of the programmer.

It saves the time that would otherwise be spent programming an inefficient algorithm and testing it.

Hybrid approach

In this approach the function that describes the algorithm's efficiency is determined theoretically, and then any required numerical parameters are determined empirically when the program executes on some machine.

It is called a hybrid approach because it uses the theoretical and empirical approaches in combination.

The time required by an algorithm on very large instances can then be predicted, but the predictions must be made with the help of the theoretical approach.

Common computing time functions, in order of increasing growth rate:

Function     Name
c            Constant
log n        Logarithmic
log² n       Log-squared
n            Linear
n log n      n log n
n²           Quadratic
n³           Cubic
2ⁿ           Exponential

GTU Dec-2010 (2 marks)

Arrange the following rates of growth in increasing order:

2ⁿ, n log n, n², n, 1, log n, n!, n³

Answer (increasing order): 1, log n, n, n log n, n², n³, 2ⁿ, n!

Resource Utilization

Analysis predicts an algorithm's resource utilization:

1. Memory (space complexity)

2. Running time (time complexity)

Three cases of analysis

Best-case time complexity: the time the algorithm takes on inputs for which it runs for the shortest time.

Worst-case time complexity: the time the algorithm takes on inputs for which it runs for the longest time.

Average-case time complexity: gives information about the behavior of the algorithm on typical or random input.

Elementary operation

Elementary operations are those operations whose execution time is bounded by a constant that depends only on the type of implementation used.

Typical elementary operations are addition, multiplication, and assignment.

Example

Algorithm SUM(n)
{
    sum ← 0
    for i ← 1 to n do
        sum ← sum + i
    return sum
}

In the above algorithm the elementary operation is addition.
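The count of elementary operations can be made explicit with a short runnable sketch (Python is used here for illustration; the additions counter is added to make the elementary-operation count visible and is not part of the slide's pseudocode):

```python
def algorithm_sum(n):
    """Sum 1..n, counting each addition (the elementary operation)."""
    total = 0
    additions = 0
    for i in range(1, n + 1):
        total = total + i   # one elementary addition per iteration
        additions += 1
    return total, additions

total, additions = algorithm_sum(10)
print(total, additions)  # 55 10
```

The operation count grows linearly with n, which is the basis of the analysis that follows.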

Asymptotic notation

To choose the best algorithm, we need to check the efficiency of each algorithm. Efficiency can be measured by computing the time complexity of each algorithm. Asymptotic notation is a shorthand way to represent time complexity.

Classifying functions by their asymptotic growth:

Big Oh (O)

Big Omega (Ω)

Theta (Θ)

Big Oh notation

It is denoted by "O".

Definition: Let f(n) and g(n) be two functions. We say that f(n) ∈ O(g(n)) if there exist constants c > 0 and n₀ > 0 such that f(n) ≤ c·g(n) for all n ≥ n₀.

f(n) is asymptotically less than or equal to g(n).

Big-O gives an asymptotic upper bound.

Example

Consider the functions f(n) = 2n + 2 and g(n) = n².

Start with n = 1, then try n = 2, 3, 4, …: for every n ≥ 1 we have 2n + 2 ≤ 4n², so with c = 4 and n₀ = 1, f(n) ∈ O(n²).

Thus an upper bound on the execution time is obtained with the Big Oh notation.
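The definition can be spot-checked numerically (Python sketch; the witness constants c = 4 and n₀ = 1 are one valid choice, and the finite range check is only a sanity test, not a proof):

```python
def is_big_o_witness(f, g, c, n0, n_max=10_000):
    """Check f(n) <= c*g(n) for all n0 <= n <= n_max (finite spot-check)."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

f = lambda n: 2 * n + 2   # f(n) = 2n + 2
g = lambda n: n * n       # g(n) = n^2
print(is_big_o_witness(f, g, c=4, n0=1))  # True
```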

Omega notation

It is denoted by "Ω".

Definition: Let f(n) and g(n) be two functions. We say that f(n) ∈ Ω(g(n)) if there exist constants c > 0 and n₀ > 0 such that f(n) ≥ c·g(n) for all n ≥ n₀.

f(n) is asymptotically greater than or equal to g(n).

Big-Omega gives an asymptotic lower bound.

Example

Consider f(n) = 2n² + 5 and g(n) = 7n.

Start with n = 0, then try n = 1, 2, 3, 4, …: for all n ≥ 3 we have 2n² + 5 ≥ 7n (with c = 1), so f(n) ∈ Ω(n).

Theta notation

Definition: Let f(n) and g(n) be two functions. We say that f(n) ∈ Θ(g(n)) if there exist constants c₁, c₂, n₀ > 0 such that for every integer n ≥ n₀ we have

c₁·g(n) ≤ f(n) ≤ c₂·g(n)

f(n) is asymptotically equal to g(n).

f(n) is bounded above and below by g(n).

Big-Theta gives an asymptotic equivalence (a tight bound).

Example

Consider f(n) = 2n + 8 and g(n) = n.

For n ≥ 2 we have 2n ≤ 2n + 8 ≤ 7n, so with c₁ = 2, c₂ = 7 and n₀ = 2, f(n) ∈ Θ(n).

Example

Use the informal definitions of O, Ω and Θ to determine whether the following assertions are true or false.

1) n(n+1)/2 ∈ O(n³)

2) n(n+1)/2 ∈ O(n²)

3) n(n+1)/2 ∈ Θ(n³)

4) n(n+1)/2 ∈ Ω(n)

Recurrence Equation

A recurrence equation is an equation that defines a sequence recursively. A recurrence relation can be solved by the following methods:

Substitution method

Master's method

Substitution Method

The substitution method is a method in which a guess for the solution is made. There are two types of substitution:

Forward substitution

Backward substitution

Forward substitution

Consider the recurrence relation T(n) = T(n-1) + n with initial condition T(0) = 0. Computing forward for n = 1, 2, 3, … gives T(1) = 1, T(2) = 3, T(3) = 6, …

By observing these values, we can derive the formula T(n) = n(n+1)/2.

We can denote T(n) in big oh notation as T(n) ∈ O(n²).
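A quick check of the guessed closed form against the recurrence (Python, for illustration):

```python
def t_recursive(n):
    """T(n) = T(n-1) + n with T(0) = 0, computed directly from the recurrence."""
    return 0 if n == 0 else t_recursive(n - 1) + n

def t_closed(n):
    """Guessed closed form T(n) = n(n+1)/2."""
    return n * (n + 1) // 2

# The guess matches the recurrence on every value checked.
print(all(t_recursive(n) == t_closed(n) for n in range(50)))  # True
```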

Backward substitution

Consider the recurrence relation T(n) = T(n-1) + n with initial condition T(0) = 0.

In backward substitution, we substitute for T(n-1), T(n-2), …

T(n-1) = T(n-2) + (n-1) …….(1)

Now put equation (1) into the original T(n):

T(n) = T(n-2) + (n-1) + n

Similarly, T(n-2) = T(n-3) + (n-2).

Now put the value of T(n-2) into the previous equation:

T(n) = T(n-3) + (n-2) + (n-1) + n

After k substitutions:

T(n) = T(n-k) + (n-k+1) + (n-k+2) + … + n

So if k = n then

T(n) = T(0) + 1 + 2 + … + n = 0 + 1 + 2 + … + n

T(n) = n(n+1)/2

From this formula, T(n) = n(n+1)/2, so in big oh notation T(n) ∈ O(n²).

Example

Solve x(n) = x(n-1) + 5 for n > 1, with x(1) = 0, using the recurrence relation.

Solution

Let us solve this example with backward substitution.

x(n) = x(n-1) + 5

x(n-1) = x(n-2) + 5

Put the value of x(n-1) into x(n):

x(n) = [x(n-2) + 5] + 5

= [x(n-3) + 5] + 5 + 5

After i substitutions:

x(n) = x(n-i) + 5·i

Now we have the initial value x(1) = 0, so take i = n-1:

x(n) = x(n-(n-1)) + 5·(n-1)

= x(1) + 5·(n-1)

= 0 + 5·(n-1)      (since x(1) = 0)

= 5(n-1)

Example

Solve the following recurrence relation: T(n) = T(n-1) + 1 with T(0) = 0 as the initial condition, and find the big oh notation.

Master Theorem

The master theorem gives the efficiency analysis directly. Consider the recurrence relation

T(n) = a·T(n/b) + F(n) for n above some constant size.

If F(n) ∈ Θ(n^d) where d ≥ 0, then:

T(n) ∈ Θ(n^d) if a < b^d

T(n) ∈ Θ(n^d · log n) if a = b^d

T(n) ∈ Θ(n^(log_b a)) if a > b^d

Example

Find the growth rate of the solution of the following recurrences.

1. T(n) = 4T(n/2) + n,  T(1) = 1

2. T(n) = 4T(n/2) + n², T(1) = 1

3. T(n) = 4T(n/2) + n³, T(1) = 1

Solution

1. T(n) = 4T(n/2) + n, T(1) = 1

Here a = 4, b = 2 and F(n) = n¹, so d = 1.

This matches case 3 because a > b^d (4 > 2¹ = 2).

i.e. T(n) ∈ Θ(n^(log₂ 4)), and since log₂ 4 = 2,

T(n) ∈ Θ(n²) is the time complexity.
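The three-case analysis above can be encoded in a small helper (Python sketch; the string return values are just labels for the Θ-classes):

```python
import math

def master_theorem(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the simplified master theorem."""
    if a < b ** d:
        return f"Theta(n^{d})"
    if a == b ** d:
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{math.log(a, b):g})"   # exponent is log_b(a)

print(master_theorem(4, 2, 1))  # Theta(n^2)      (case 3: a > b^d)
print(master_theorem(4, 2, 2))  # Theta(n^2 log n) (case 2: a = b^d)
print(master_theorem(4, 2, 3))  # Theta(n^3)      (case 1: a < b^d)
```

This reproduces the answers to all three recurrences in the example.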

Analyzing control statements

The running time of control statements is analyzed case by case:

Sequencing

For loops

While and repeat loops

Recursive calls

Amortized analysis

Amortized analysis means finding the average running time per operation over a worst-case sequence of operations. There are three methods:

Aggregate method

Accounting method

Potential method

Aggregate Method

• Aggregate analysis is a kind of analysis made over a sequence of n operations, where these operations take worst-case time T(n) in total.

• In the worst case, the average cost, or amortized cost, per operation is T(n)/n.

• The amortized cost applies to each operation, even when there are several types of operations in the sequence.

Stack example

Three operations and their worst-case costs:

Push(S,x):        O(1)

Pop(S):           O(1)

Multi-pop(S,k):   O(min(|S|,k)) = O(n)

Accounting method

The accounting method is based on charges that are assigned to each operation:

Assign different charges to different operations.

The amount charged is called the amortized cost.

Each operation also has an actual cost.

The amortized cost can be more or less than the actual cost.

When amortized cost > actual cost, the difference is saved with specific objects as credit.

When an operation's amortized cost < actual cost, the stored credits are used to pay the difference.

• Charge the i-th operation a fictitious amortized cost ĉᵢ, where $1 pays for 1 unit of work (i.e., time).

• Assign different charges (amortized costs) to different operations.

• Some operations are charged more than their actual cost, some are charged less.

• This fee is consumed to perform the operation.

• Any amount not immediately consumed is stored in the bank for use by subsequent operations.

• The bank balance (the credit) must not go negative: we must ensure that the total amortized cost is always at least the total actual cost.

Stack Example

Three operations, with assigned (amortized) and actual costs:

Push(S,x):        assigned cost 2, actual cost 1

Pop(S):           assigned cost 0, actual cost 1

Multi-pop(S,k):   assigned cost 0, actual cost min(|S|,k)

Push(S,x) pays for a possible later pop of x:

• When pushing an object, pay $2.

• $1 pays for the push itself.

• $1 is prepayment for it being popped later by either Pop or Multi-pop.

• Since each object on the stack carries $1 of credit, the credit can never go negative.

• Therefore the total amortized cost, O(n), is an upper bound on the total actual cost.
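The charging argument can be simulated (Python sketch; the operation names and the $2/$0/$0 charges follow the slides, while run_ops and the sample sequence are illustrative):

```python
def run_ops(ops):
    """Run a sequence of stack ops, tracking total actual cost and total charge."""
    stack, actual, charged = [], 0, 0
    for op in ops:
        if op[0] == "push":
            stack.append(op[1]); actual += 1; charged += 2  # $2 per push
        elif op[0] == "pop":
            if stack:
                stack.pop()
            actual += 1                                     # charged $0
        else:  # ("multipop", k): pop min(len(stack), k) items, charged $0
            k = min(len(stack), op[1])
            del stack[len(stack) - k:]
            actual += k
    return actual, charged

ops = [("push", i) for i in range(10)] + [("multipop", 7), ("pop",), ("push", 99)]
actual, charged = run_ops(ops)
print(actual <= charged)  # True: total actual cost never exceeds the total charge
```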

Potential method

This method is similar to the accounting method, in that the concept of "prepayment" is used.

The accounting method stores credit with specific objects, while the potential method stores potential in the data structure as a whole.

Potential can be released to pay for future operations.

It is the most flexible of the amortized analysis methods.

Framework:

• Start with an initial data structure D₀; the n operations produce D₁, D₂, D₃, …

• Operation i transforms Dᵢ₋₁ to Dᵢ.

• The cost of operation i is cᵢ.

• Define a potential function Φ: {Dᵢ} → R such that Φ(D₀) = 0 and Φ(Dᵢ) ≥ 0 for all i.

• The amortized cost ĉᵢ with respect to Φ is defined to be ĉᵢ = cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁).

Stack Example

Define Φ(Dᵢ) = number of items in the stack. Thus Φ(D₀) = 0.

Plug in for the operations (j = number of items after the operation):

Push:      ĉᵢ = cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁) = 1 + j − (j−1) = 2

Pop:       ĉᵢ = cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁) = 1 + (j−1) − j = 0

Multi-pop: ĉᵢ = cᵢ + Φ(Dᵢ) − Φ(Dᵢ₋₁) = k′ + (j−k′) − j = 0, where k′ = min(|S|,k)

Thus O(1) amortized time per operation.

Sorting algorithms

Bubble sort

Selection sort

Radix sort

Insertion sort

Shell sort

Heap sort

Bubble sort

Sorting takes an unordered collection and makes it an ordered one.

Unsorted: 77 42 35 12 101 5

Sorted:   5 12 35 42 77 101

One pass, "bubbling up" the largest element:

77 42 35 12 101 5   (swap 77, 42)

42 77 35 12 101 5   (swap 77, 35)

42 35 77 12 101 5   (swap 77, 12)

42 35 12 77 101 5   (77 < 101: no need to swap)

42 35 12 77 101 5   (swap 101, 5)

42 35 12 77 5 101   (largest value 101 correctly placed)

Bubbling up all the elements takes N − 1 passes; after each pass the next-largest value is in place:

42 35 12 77 5 101

35 12 42 5 77 101

12 35 5 42 77 101

12 5 35 42 77 101

5 12 35 42 77 101

A second trace uses an array of N = 8 elements (67, 45, 23, 14, 6, 33, 98, 42) and the variables to_do (initially 7), index, and a did_swap flag. index runs from 1 up to to_do; each step compares the pair at positions index and index + 1, swaps them if they are out of order, and sets did_swap whenever a swap occurs. When index reaches to_do the first "bubble up" is finished; the did_swap flag lets the sort terminate early as soon as a complete pass performs no swap. (The per-step figures are omitted here.)

Algorithm of Bubble sort

for i ← 0 to n−2 do
{
    for j ← 0 to n−2−i do
    {
        if (A[j] > A[j+1]) then
        {
            temp ← A[j]
            A[j] ← A[j+1]
            A[j+1] ← temp
        }
    }
}
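The pseudocode translates directly into a runnable sketch (Python for illustration):

```python
def bubble_sort(a):
    """In-place bubble sort; after pass i the i-th largest element is placed."""
    n = len(a)
    for i in range(n - 1):            # i = 0 .. n-2
        for j in range(n - 1 - i):    # j = 0 .. n-2-i
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # swap out-of-order neighbours
    return a

print(bubble_sort([77, 42, 35, 12, 101, 5]))  # [5, 12, 35, 42, 77, 101]
```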

Analysis

C(n) = number of times the basic operation executes, summed over the inner and outer loops:

C(n) = Σ (i = 0 to n−2) Σ (j = 0 to n−2−i) 1

Evaluating the sum gives

C(n) = n(n−1)/2 ∈ Θ(n²)

The time complexity of bubble sort is Θ(n²).

Selection sort

20 8 5 10 7

5 8 20 10 7

5 7 20 10 8

5 7 8 10 20

5 7 8 10 20

1. Select the smallest element among data[i] … data[data.length − 1];

2. swap it with data[i];

3. if not finished, repeat steps 1 and 2.

Cont'd: placing the i-th item in its proper position:

temp = data[i]

shift those elements data[j] that are greater than temp right by one position

place temp in its proper position

i = 1, first iteration:  temp = 8;  20 8 5 10 7 → 20 20 5 10 7 → 8 20 5 10 7

i = 2, second iteration: temp = 5;  8 20 5 10 7 → 8 20 20 10 7 → 8 8 20 10 7 → 5 8 20 10 7

i = 3, third iteration:  temp = 10; 5 8 20 10 7 → 5 8 20 20 7 → 5 8 10 20 7

i = 4, fourth iteration: temp = 7;  5 8 10 20 7 → 5 8 10 20 20 → 5 8 10 10 20 → 5 8 8 10 20 → 5 7 8 10 20

Selection sort Algorithm

Input: An array A[1..n] of n elements.

Output: A[1..n] sorted in nondecreasing order.

{
    for i ← 0 to n−2 do
    {
        min ← i
        for j ← i+1 to n−1 do   {Find the i-th smallest element.}
        {
            if A[j] < A[min] then
                min ← j
        }
        temp ← A[i]
        A[i] ← A[min]
        A[min] ← temp
    }
}
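A runnable version of the pseudocode (Python, 0-based indexing):

```python
def selection_sort(a):
    """In-place selection sort following the pseudocode above."""
    n = len(a)
    for i in range(n - 1):                   # i = 0 .. n-2
        min_idx = i
        for j in range(i + 1, n):            # find the i-th smallest element
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]  # swap it into position i
    return a

print(selection_sort([20, 8, 5, 10, 7]))  # [5, 7, 8, 10, 20]
```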

Analysis of selection sort

C(n) = number of times the basic operation executes, summed over the inner and outer loops:

C(n) = Σ (i = 0 to n−2) Σ (j = i+1 to n−1) 1 = n(n−1)/2

The time complexity of selection sort is Θ(n²).

Radix sort

Extra information: every integer can be represented by at most k digits d₁d₂…dₖ, where the dᵢ are digits in base r.

d₁: most significant digit

dₖ: least significant digit

Algorithm:

Sort by the least significant digit first (counting sort): numbers with the same digit go to the same bin.

Reorder all the numbers: the numbers in bin 0 precede the numbers in bin 1, which precede the numbers in bin 2, and so on.

Sort by the next least significant digit.

Continue this process until the numbers have been sorted on all k digits.
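The least-significant-digit-first procedure sketched in Python (base 10, bins as lists; this is illustrative rather than the slides' queue-based version):

```python
def radix_sort(nums, base=10):
    """LSD radix sort for non-negative integers using stable bin distribution."""
    if not nums:
        return nums
    max_digits = len(str(max(nums)))         # k, the number of digit positions
    place = 1
    for _ in range(max_digits):
        bins = [[] for _ in range(base)]     # one bin per digit value
        for x in nums:
            bins[(x // place) % base].append(x)
        nums = [x for b in bins for x in b]  # bin 0 first, then bin 1, ...
        place *= base
    return nums

print(radix_sort([275, 87, 426, 61, 509, 170, 677, 503]))
# [61, 87, 170, 275, 426, 503, 509, 677]
```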

Example: Radix sort

Least-significant-digit-first.

Example: 275, 087, 426, 061, 509, 170, 677, 503

(Figure: distribution of the numbers into bins on each pass.)

Does it work?

Clearly, if the most significant digits of a and b are different and a < b, then a finally comes before b.

If the most significant digits of a and b are the same, and the second most significant digit of b is less than that of a, then b comes before a.

Analysis

We count the total number of enqueue and dequeue operations. Each time through the outer for loop:

In the while loop: n elements are dequeued from q and enqueued somewhere in the array: 2n operations.

In the inner for loop: a total of n elements are dequeued from the queues in the array and enqueued in q: 2n operations.

So we perform 4n enqueue and dequeue operations each time through the outer loop. The outer for loop is executed k times, so we have 4kn enqueue and dequeue operations altogether. But k is a constant, so the time complexity of radix sort is O(n).

Insertion sort

Insertion sort keeps making the left side of the array sorted until the whole array is sorted.

Real life example: an insertion sort occurs in everyday life while playing cards. To sort the cards in your hand, you extract a card, shift the remaining cards, and then insert the extracted card in the correct place.
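The card-sorting idea in runnable form (Python sketch):

```python
def insertion_sort(a):
    """Keep a[0..i-1] sorted; insert a[i] into its proper place on each pass."""
    for i in range(1, len(a)):
        temp = a[i]                     # extract the card
        j = i - 1
        while j >= 0 and a[j] > temp:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = temp                 # place temp in its proper position
    return a

print(insertion_sort([20, 8, 5, 10, 7]))  # [5, 7, 8, 10, 20]
```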

Analysis

Best case: O(n). It occurs when the data is already in sorted order: after making one pass through the data and making no insertions, insertion sort exits.

Average case: Θ(n²), since there is a wide variation in the running time.

Worst case: O(n²), when the numbers are sorted in reverse order.

Shell sort

Shell sort uses a sequence h₁, h₂, …, hₜ called the increment sequence. Any increment sequence is fine as long as h₁ = 1, but some choices are better than others.

Shell sort makes multiple passes through a list and sorts a number of equally sized sets using insertion sort.

Shell sort improves on the efficiency of insertion sort by quickly shifting values toward their destination.

Shell sort is also known as diminishing increment sort.
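A minimal sketch of shell sort (Python; the halving increment sequence n/2, n/4, …, 1 is one simple choice of increments, assumed here):

```python
def shell_sort(a):
    """Shell sort: insertion sort on gap-separated subsequences, shrinking the gap."""
    n = len(a)
    gap = n // 2
    while gap > 0:
        for i in range(gap, n):
            temp = a[i]
            j = i
            while j >= gap and a[j - gap] > temp:  # shift within the subsequence
                a[j] = a[j - gap]
                j -= gap
            a[j] = temp
        gap //= 2                                  # diminishing increment
    return a

print(shell_sort([65, 77, 5, 25, 32, 45, 99, 83, 69, 81]))
# [5, 25, 32, 45, 65, 69, 77, 81, 83, 99]
```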

Pros and cons of Shell sort

A disadvantage of Shell sort is that it is a complex algorithm, and it is not nearly as efficient as merge, heap, and quick sort.

Shell sort is still significantly slower than merge, heap, and quick sort, but its relatively simple algorithm makes it a good choice for sorting lists of fewer than about 5000 items unless speed is important. It is also an excellent choice for repetitive sorting of smaller lists.

Analysis

Best case O(n): the best case for shell sort is when the array is already sorted in the right order; the number of comparisons is then small.

Worst case: the running time of shell sort depends on the choice of increment sequence. The problem with Shell's original increments is that pairs of increments are not necessarily relatively prime, so smaller increments can have little effect.

Heap sort

A heap is also known as a priority queue and can be represented by a binary tree with the following properties:

Structure property: a heap is a completely filled binary tree, with the possible exception of the bottom row, which is filled from left to right.

Heap order property: for every node x in the heap, the value of the parent of x is greater than or equal to the value of x.

Simple binary tree run

Array representation: 21 15 25 3 5 12 7 19 45 2 9

Binary tree representation, level by level: root 21; children 15, 25; next level 3, 5, 12, 7; bottom row 19, 45, 2, 9.

Construction of heap structure (Dec-2010)

For example, we have the following list of elements for sorting with the heap sort method:

65, 77, 5, 25, 32, 45, 99, 83, 69, 81

Now we will build the heap structure for the given list of elements.

Here the heap is built as a min-heap: a complete binary tree in which every parent node has a smaller value than its child nodes.

Constructing a heap

(Figure: the heap is built by inserting 65, 77, 5, 25, 32, 45, … one element at a time, restoring the heap property after each insertion.)

Sorting with heap

Now we swap the root node with the last indexed node. This last node of the tree is then deleted and placed in an output queue.

(Figure: the min-heap built from the list, level by level: 5; 25, 45; 69, 65, 32, 99; 83, 77, 81.)

Delete min Algorithm

1. Delete the root.

2. Compare the two children of the root.

3. Make the lesser of the two the root.

4. An empty spot is created.

5. Bring the lesser of the two children of the empty spot to the empty spot.

6. A new empty spot is created.

7. Continue until the bottom of the tree is reached.
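Python's heapq module implements exactly this kind of array-based min-heap, so repeated delete-min yields the sorted order (illustrative sketch using the slide's element list):

```python
import heapq

# heapify builds the min-heap; heappop deletes the min and percolates down.
data = [65, 77, 5, 25, 32, 45, 99, 83, 69, 81]
heapq.heapify(data)          # after this, the root (data[0]) is 5
out = [heapq.heappop(data) for _ in range(len(data))]
print(out)  # [5, 25, 32, 45, 65, 69, 77, 81, 83, 99]
```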

Delete min (alternative view)

1. Copy the last number to the root (i.e. overwrite the minimum element stored there).

2. Restore the min-heap property by percolating down (also called bubbling down).

Sorting algorithms' complexities

Sorting Algorithm   Asymptotic Complexity

Bubble sort         Best case: O(n); worst case: O(n²)

Selection sort      Best case: O(n²); worst case: O(n²)

Radix sort          Worst case: O(kn), where k is a constant

Insertion sort      Best case: O(n); worst case: O(n²)

Shell sort          Best case: O(n); worst case: O(n (log n)²)

Heap sort           Best case: O(n log n); worst case: O(n log n)

Binary tree search

In a binary search tree:

All items in the left subtree are less than the root.

All items in the right subtree are greater than or equal to the root.

Each subtree is itself a binary search tree.
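A minimal BST insert and search following the property above (Python sketch; duplicate or equal keys go to the right subtree, and the sample keys are illustrative):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Follow branches to an empty subtree and attach the new node there."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    else:                       # keys >= root go right, per the BST property
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Standard BST search: go left for smaller keys, right for larger."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [50, 30, 70, 20, 40, 60, 80]:
    root = insert(root, k)
print(search(root, 40), search(root, 65))  # True False
```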

(Figure: example trees. (a), (b): complete and balanced trees; (d): nearly complete and balanced tree; (c), (e): neither complete nor balanced trees.)

Binary search tree

We discuss three basic BST operations and develop algorithms for each:

• Search

• Insertion

• Deletion

Insertion

To insert data, all we need to do is follow the branches to an empty subtree and then insert the new node.

In other words, all inserts take place at a leaf or at a leaf-like node: a node that has only one null subtree.

Deletion

There are the following possible cases when we delete a node:

The node to be deleted has no children. In this case, all we need to do is delete the node.

The node to be deleted has only a right subtree. We delete the node and attach the right subtree to the deleted node's parent.

The node to be deleted has only a left subtree. We delete the node and attach the left subtree to the deleted node's parent.

The node to be deleted has two subtrees. It is possible to delete a node from the middle of a tree, but the result tends to create very unbalanced trees.