

CSE 101 - Homework 2

Haoxi Fang

A08352536

2.22

Algorithm description:

The kth smallest element must lie within A[1..k] and B[1..k], so we only need to consider these two subarrays. Divide them into four subarrays A[1..k/2], A[k/2..k], B[1..k/2], and B[k/2..k], and compare A[k/2] with B[k/2]. If A[k/2] > B[k/2], then B[1..k/2] can be excluded from consideration: there are at least k elements (all of A[k/2..k] and B[k/2..k]) that are at least as large as every element of B[1..k/2], so among the 2k candidates no element of B[1..k/2] can be the kth smallest. Similarly, if A[k/2] < B[k/2], we exclude A[1..k/2]. The next subproblem is then to find the ⌈k/2⌉th smallest element in the remaining subarrays, which are still in sorted order. We repeat these steps until k = 2. When k = 2, we compare the first two remaining elements of A and of B, find the second smallest among them, and return it.

Pseudo-code design:

function kth(A[1..m], i, B[1..n], j, k)
input: array A, starting index i for A, array B, starting index j for B, k
output: kth smallest element in A and B
if k = 2:
    return the second smallest of A[i], A[i+1], B[j], B[j+1]
mid_A = A[i + ⌊k/2⌋]
mid_B = B[j + ⌊k/2⌋]
if mid_A < mid_B:
    return kth(A, i + ⌊k/2⌋, B, j, ⌈k/2⌉)
else:
    return kth(A, i, B, j + ⌊k/2⌋, ⌈k/2⌉)
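As a sanity check, the pseudocode above can be translated into runnable Python. This is a sketch, not the submitted solution: it uses 0-based indexing, a k = 1 base case in place of the k = 2 one, and it assumes (as the write-up does) that both arrays contain at least k elements.

```python
def kth(A, i, B, j, k):
    # A[i:] and B[j:] are sorted, and each contains at least k elements.
    # Returns the k-th smallest element (1-indexed) of their union.
    if k == 1:
        return min(A[i], B[j])
    half = k // 2  # compare the half-th remaining element of each array
    if A[i + half - 1] < B[j + half - 1]:
        # A's first `half` remaining elements cannot contain the k-th smallest
        return kth(A, i + half, B, j, k - half)
    else:
        # symmetric case: discard B's first `half` remaining elements
        return kth(A, i, B, j + half, k - half)
```

Note that k − ⌊k/2⌋ = ⌈k/2⌉, so passing k − half matches the ⌈k/2⌉ in the pseudocode's recursive calls.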

Time complexity analysis:

The running time of this algorithm depends on the input k. The recurrence is T(k) = T(k/2) + c. Therefore, the time complexity is T(k) = O(log k).

Correctness:

The function kth reduces the problem to finding the ⌈k/2⌉th smallest element at each step while maintaining the invariant that the kth smallest element lies in the remaining ranges. It terminates when k = 2 and returns the second smallest remaining element; this base case is always reached, since the recursion takes the ceiling of k/2.


2.25

(a) function pwr2bin(n)
    if n = 1: return 1010₂
    else:
        z = pwr2bin(n/2)
        return fastmultiply(z, z)

For fastmultiply, the time complexity is O(n^(log₂ 3)).

The recurrence relation is: T(n) = T(n/2) + O(n^(log₂ 3))

By the Master theorem, T(n) = O(n^(log₂ 3))

(b) function dec2bin(x)
    input: x, a decimal number with n digits
    if n = 1: return binary[x]
    else:
        split x into two decimal numbers xL, xR with n/2 digits each
        return fastmultiply(dec2bin(xL), pwr2bin(n/2)) + dec2bin(xR)

The recurrence relation is: T(n) = 2T(n/2) + O(n^(log₂ 3))

By the Master theorem, T(n) = O(n^(log₂ 3))
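The two routines can be sketched in runnable Python. This is only an illustration: the built-in `*` operator stands in for fastmultiply (CPython's arbitrary-precision integers use a Karatsuba-style multiplication internally), a decimal string stands in for the digit array, and the results are held as Python ints rather than explicit bit strings.

```python
def pwr2bin(n):
    # Value of 10^n, for n a power of 2 (`*` stands in for fastmultiply).
    if n == 1:
        return 0b1010  # 10 written in binary
    z = pwr2bin(n // 2)
    return z * z  # 10^n = 10^(n/2) * 10^(n/2)

def dec2bin(s):
    # Convert a decimal string s, whose length is a power of 2, to an int,
    # splitting it into halves exactly as in the pseudocode.
    n = len(s)
    if n == 1:
        return int(s)  # stands in for the binary[] lookup table
    xL, xR = s[:n // 2], s[n // 2:]
    return dec2bin(xL) * pwr2bin(n // 2) + dec2bin(xR)
```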

3.22

Algorithm description:

Idea: Pick one vertex and run depth-first search in both directions, trying to reach all other vertices.

First, pick one vertex v ∈ V arbitrarily. Run a depth-first search from this vertex along the direction of its edges, mark all vertices that are visited, and count the number of vertices visited as N_down. The second part of the algorithm is a depth-first search from the same vertex in the opposite direction, against its edges. Mark all vertices that are visited and count them as N_up. In addition, each vertex visited in the opposite direction also runs a depth-first search in the down direction. The total number of vertices visited is N_down + N_up. We then compare this number to the number of vertices |V|. If the two are equal, then there exists some vertex that can reach all other vertices.

Pseudo-code design:


function explore_down(G, v)
input: G = (V, E) is a graph; v ∈ V
output: number of vertices visited from this node
visited(v) = true
count = 1
for each edge (v, u) ∈ E:
    if not visited(u): count = count + explore_down(G, u)
return count

function explore_up(G, v)
input: G = (V, E) is a graph; v ∈ V
output: number of vertices visited from this node
visited(v) = true
count = 1
for each edge (v, u) ∈ E:
    if not visited(u): count = count + explore_down(G, u)
for each edge (u, v) ∈ E:
    if not visited(u): count = count + explore_up(G, u)
return count

function up_dfs(G, v1)
input: G = (V, E) is a graph; v1 ∈ V
output: whether there exists a vertex that can reach all other vertices
for all v ∈ V:
    visited(v) = false
N = explore_up(G, v1)
if N = |V|: return true
else: return false
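A runnable sketch of these three routines, assuming (my own representation, not from the write-up) that the graph is given as adjacency lists `adj` (out-neighbors) and `radj` (in-neighbors) over vertices 0..n−1:

```python
def up_dfs(adj, radj, v1):
    # adj[v]: out-neighbors of v; radj[v]: in-neighbors of v.
    # Returns True if the combined up/down search from v1 visits every vertex.
    n = len(adj)
    visited = [False] * n

    def explore_down(v):
        # ordinary DFS along edge directions
        visited[v] = True
        count = 1
        for u in adj[v]:
            if not visited[u]:
                count += explore_down(u)
        return count

    def explore_up(v):
        # DFS against edge directions, launching a down-search at each vertex
        visited[v] = True
        count = 1
        for u in adj[v]:
            if not visited[u]:
                count += explore_down(u)
        for u in radj[v]:
            if not visited[u]:
                count += explore_up(u)
        return count

    return explore_up(v1) == n
```

For example, on the chain 0 → 1 → 2, starting from the middle vertex 1, the down search reaches 2 and the up search reaches 0, so all three vertices are visited.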

Time complexity analysis:

Depth-first search on the given graph visits each vertex at most once, so the worst-case time complexity is O(|V| + |E|).

Correctness:

The search starts at some vertex of the graph and proceeds in both directions. Going down, all vertices it can reach are visited. Going up, the algorithm also performs a down search from each upper vertex. The algorithm therefore counts the number of vertices that can be reached in this fashion and compares it to the total number of vertices in V. If all vertices are visited, then there exists a vertex that can reach all other vertices.


3.16

Algorithm description:

First, arrange all the courses into a directed graph. Find all courses that have no prerequisite; these are the starting points, and they are the courses taken in the first semester. Push them into a queue, visit them, and delete them and their edges from the graph. After deleting all the vertices in the queue, again find all courses with no prerequisite (no in-edge). Continue until there is no vertex left in the graph. The number of rounds needed to eliminate all the vertices is the total number of semesters needed to finish all courses.

Pseudo-code design:

function courses(G)
input: G = (V, E) is a graph representing all courses and their prerequisite relation
output: count, the number of semesters needed to finish all courses
count = 0
while V is not empty:
    count = count + 1
    find all vertices v such that (u, v) ∉ E for all u ∈ V, and put them into a queue L
    for all v in L:
        remove all edges (v, u) ∈ E for all u ∈ V
        remove v from V
return count
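The same layer-by-layer removal can be sketched in Python using in-degree counters instead of literally deleting vertices (a Kahn-style topological layering; the function and variable names are my own, and the code assumes the prerequisite graph is acyclic):

```python
def semesters(adj):
    # adj[v] lists the courses that have course v as a prerequisite
    # (i.e., edges v -> u in the graph).
    n = len(adj)
    indeg = [0] * n
    for v in range(n):
        for u in adj[v]:
            indeg[u] += 1
    # courses with no prerequisite: taken in the first semester
    ready = [v for v in range(n) if indeg[v] == 0]
    count = 0
    while ready:
        count += 1  # one semester per round of removals
        nxt = []
        for v in ready:
            for u in adj[v]:
                indeg[u] -= 1
                if indeg[u] == 0:  # all prerequisites of u now cleared
                    nxt.append(u)
        ready = nxt
    return count
```

Decrementing a counter when a course is removed plays the role of deleting its out-edges from the graph.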

Time complexity analysis:

This algorithm visits each vertex and each edge only once; therefore, the time complexity is O(|V| + |E|).

Correctness:

This problem assumes that a student can take any number of courses in one semester, which means we can remove any number of vertices at once. The algorithm puts all vertices that have no in-bound edge into a queue and then removes them, which corresponds to taking every course whose prerequisites are satisfied. Once a vertex is deleted from the graph, it no longer acts as a prerequisite for the remaining courses. By this process, the algorithm correctly counts the semesters needed.