Tuesday, February 23, 2021


Analysis of Recursive Function

In the previous post, we studied the Complexity Analysis of Common Loops.

As you continue your journey through Data Structures, you will frequently use the concept of Recursion to make your code more efficient.

When we analyze recursive functions, we get a recurrence relation for the time complexity. For example, in Binary Search, to find a given element we divide the array/list into two halves and recursively repeat the process on one of the halves until we find the element we are searching for. The recurrence for Binary Search is T(n) = T(n/2) + c, which gives Θ(log n). Many other algorithms, such as the Tower of Hanoi and the Josephus Problem, are analyzed the same way.

We have three methods for calculating the complexity of a recursive function:
1) Substitution Method
2) Tracing Tree (Recursion Tree) Method
3) Master Theorem

First, let's take a simple example, 

void Test(int n) {            --- T(n)
    if (n > 0) {
        printf("%d", n);      --- 1
        Test(n - 1);          --- T(n-1)
    }
}
So, we have the time complexity recurrence T(n) = T(n-1) + 1

Now, let's first discuss it by Tracing Tree Method...

        Test(3)                     --- 1 unit of time
        /      \
  print(3)    Test(2)               --- 1 unit of time
              /      \
        print(2)    Test(1)         --- 1 unit of time
                    /      \
              print(1)    Test(0)   --- 1 unit of time
                             |
                             X

Hence, f(n) = n + 1,
where n is the number of prints and the extra 1 is the final call Test(0), which does no work.
Hence,
Time Complexity is O(n)

Now let's analyze the same algorithm using the Substitution Method.
Let us suppose the time taken by this algorithm is T(n):

T(n) = { 1,            n = 0
       { T(n-1) + 1,   n > 0

So, let's consider
        T(n) = T(n-1) + 1
Now substitute for T(n-1): since T(n) = T(n-1) + 1, we also have
        T(n-1) = T(n-2) + 1
So, we have
        T(n) = [T(n-2) + 1] + 1
        T(n) = T(n-2) + 2
Similarly, T(n) = [T(n-3) + 1] + 2
        T(n) = T(n-3) + 3
            :
            :        continue for k steps
            :
        T(n) = T(n-k) + k
Let's assume n - k = 0
        ∴ n = k
So, we have:
        T(n) = T(n-n) + n
        T(n) = T(0) + n
but according to our definition, if n = 0 then T(n) = 1, i.e., T(0) = 1
        ∴ T(n) = n + 1
And hence,
Time Complexity is O(n)

Now, let's consider the Master Theorem:

The Master Theorem is a formula for solving recurrence relations of the form:

T(n) = aT(n/b) + f(n)
here,
n = size of the input
a = number of subproblems in the recursion
n/b = size of each subproblem.
        (All subproblems are assumed to have the same size.)
f(n) = cost of the work done outside the recursive calls,
            which includes the cost of dividing the problem and
            the cost of merging the solutions.

Here, a ≥ 1 and b > 1 are constants, and f(n) is an asymptotically positive function.

And here,
T(n) has the following asymptotic bounds:

    1) if f(n) = O(n^(log_b a − ε)), then T(n) = Θ(n^(log_b a)).
    2) if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) · log n).
    3) if f(n) = Ω(n^(log_b a + ε)), then T(n) = Θ(f(n)).
here,
ε > 0 is a constant

Each of the above conditions can be understood as follows:
  1. If the total cost of solving the sub-problems increases by a certain factor at each level, f(n) is polynomially smaller than n^(log_b a). The time complexity is then dominated by the cost of the last level, i.e., Θ(n^(log_b a)).
  2. If the cost of solving the sub-problems at each level is nearly equal, f(n) is Θ(n^(log_b a)). The time complexity is then f(n) times the total number of levels, i.e., Θ(n^(log_b a) · log n).
  3. If the total cost of solving the sub-problems decreases by a certain factor at each level, f(n) is polynomially larger than n^(log_b a). The time complexity is then dominated by f(n), i.e., Θ(f(n)).

Note: Pretty Soon I will make a video explaining all these Recursive Analysis Methods with examples.

The link will be provided here ...


By this,
It's Harsh Jaiswal
Signing Off...

