DAY - 01 of 22 Weeks
View the plan Here...
Analysis of Algorithm...
The typical definition of an algorithm is 'a formally defined procedure
for performing some calculation'. If a procedure is formally defined, then it
can be implemented in a formal language, and such a language is known as
a programming language. In general terms, an algorithm provides a
blueprint for writing a program to solve a particular problem.
TIME and SPACE Complexity
Analyzing an algorithm means determining the number of resources (such as time
and memory) needed to execute it. Algorithms are generally designed to work
with an arbitrary number of inputs, so the efficiency or complexity of an
algorithm is stated in terms of time and space complexity.
The time complexity of an algorithm is the running time of a program as a
function of the input size. Similarly, the space complexity of an algorithm is
the amount of computer memory required during program execution, again as a
function of the input size.
The running time/execution time depends on several factors. First, the amount
of data that must be processed directly affects the execution time. As the
data size increases, so does the execution time. Second, the execution time
can vary depending on the hardware and on the system load at the time the
program is run. If we use a multi-process, multi-user system, other programs
executing on the same machine can directly affect the
execution time of our program. Finally, the choice of programming language and
compiler used to implement an algorithm can also influence the execution time.
Some compilers are better optimizers than others and some languages produce
better-optimized code than others.
Thus, we need a method to analyze an algorithm's efficiency independent of the
implementation details.
Worst-case, Average-case, Best-case, and Amortized Time Complexity
Worst-Case running time:
- This denotes the behavior of an algorithm with respect to the worst possible case of the input instance.
- This case of an algorithm is an upper bound on the running time for any input.
- Knowing the worst case gives an assurance that the algorithm will never exceed this time limit.
Average-Case running time:
- Average-Case running time is an estimate of the running time for an 'average' input.
- Average-case running time assumes that all inputs of a given size are equally likely.
Best-Case running time:
- This case is used to analyze an algorithm under optimal conditions. For example, the best case for the linear search of an array occurs when the desired element is the first in the list.
- While selecting an algorithm, we rarely base our decision on best-case performance.
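The linear search mentioned above can be sketched in Python (a small illustration, not code from this post) by counting comparisons, which makes the contrast between the best and worst cases concrete:

```python
# Linear search that also reports how many comparisons it made.
def linear_search(arr, target):
    """Return (index, comparisons); index is -1 if target is absent."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

arr = [7, 3, 9, 1, 5]
print(linear_search(arr, 7))  # best case: target is first -> (0, 1)
print(linear_search(arr, 5))  # worst case: target is last -> (4, 5)
```

The best case costs a single comparison, while the worst case scans the whole list.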
Amortized running time:
- Amortized running time refers to the time required to perform a sequence of (related) operations averaged over all the operations performed.
- Amortized analysis guarantees the average performance of each operation in the worst case.
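A classic example of amortized analysis is a dynamic array that doubles its capacity when full. The sketch below (my illustration, not from the post) counts the total element copies: any single append may copy everything, but the total work over n appends stays linear, so each append is O(1) amortized:

```python
# A minimal dynamic array that doubles capacity when full.
class DynamicArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.data = [None] * self.capacity
        self.copies = 0  # total elements copied across all resizes

    def append(self, value):
        if self.size == self.capacity:      # full: double the capacity
            self.capacity *= 2
            new_data = [None] * self.capacity
            for i in range(self.size):      # copy existing elements over
                new_data[i] = self.data[i]
                self.copies += 1
            self.data = new_data
        self.data[self.size] = value
        self.size += 1

arr = DynamicArray()
n = 1000
for k in range(n):
    arr.append(k)
print(arr.copies)  # 1023 copies in total, less than 2n -> O(1) amortized
```

Even though the resize at size 512 copies 512 elements in one go, the copies sum to 1 + 2 + 4 + ... + 512 = 1023, which is under 2n.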
Expressing Time and Space Complexity
The time and space complexity can be expressed using the function f(n), where
n is the input size for a given instance of the problem being solved.
Expressing the complexity is required when
- We want to predict the rate of growth of complexity as the input size of the problem increases.
- There are multiple algorithms that find solutions to a given problem and we need to find the most efficient algorithm.
Algorithm Efficiency
If an algorithm is linear (contains no loops or recursion), its efficiency or
running time can be given as the number of instructions it contains.
Algorithm efficiency changes with the number and structure of its loops.
Linear Loops
To determine the efficiency of an algorithm with a single loop, we first need
to calculate the number of times the statement inside the loop will be
executed. This is because the number of iterations is directly proportional to
the loop factor.
Example:
In C:
    for (i = 0; i < 500; i++)
        statement block;
In Python:
    for i in range(0, 500):
        statement block
Here, 500 is the loop factor.
As we now know, efficiency is directly proportional to the number of
iterations. So, the general formula in the case of a linear loop can be
written as:
f(n) = n
Nevertheless, calculating efficiency is not always as simple as in the above
example. Consider the following loop...
In C:
    for (i = 0; i < 500; i += 2)
        statement block;
In Python:
    for i in range(0, 500, 2):
        statement block
Here, the number of iterations is half the loop factor. So the efficiency is
given as,
f(n) = n/2
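We can verify f(n) = n/2 with a short Python snippet that simply counts how many times the loop body runs:

```python
# Count the iterations of the step-2 loop from above.
count = 0
for i in range(0, 500, 2):
    count += 1
print(count)  # 250, i.e. half of the loop factor 500
```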
Logarithmic Loops
We now know what linear loops are: loops whose controlling variable is updated
by addition or subtraction. In logarithmic loops, on the other hand, the
loop-controlling variable is either multiplied or divided during each
iteration of the loop.
As shown below...
In C:
    for (i = 1; i < 1000; i *= 2)
        statement block;

    for (i = 1000; i >= 1; i /= 2)
        statement block;
In Python:
    i = 1
    while i < 1000:
        statement block
        i *= 2

    i = 1000
    while i >= 1:
        statement block
        i //= 2
Consider the first loop, in which the loop-controlling variable i is
multiplied by 2. The loop is executed only 10 times and not 1000 times,
because the value of i doubles in each iteration.
In the second loop, the loop-controlling variable i is divided by 2. In this
case also, the loop will be executed 10 times.
Thus, the number of iterations is a function of the number by which the
loop-controlling variable is divided or multiplied. That is, when n = 1000,
the number of iterations can be given by log2 1000, which is approximately
equal to 10. Therefore, when the loop-controlling variable is divided or
multiplied, the efficiency is given as,
f(n) = log n
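The iteration count of the multiplicative loop above can be checked directly in Python:

```python
# Count the iterations of the doubling loop: i = 1, 2, 4, ..., 512.
count = 0
i = 1
while i < 1000:
    count += 1
    i *= 2
print(count)  # 10, which is approximately log2(1000)
```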
Nested Loops
Loops inside loops are nested loops. To determine the efficiency of nested
loops, we need to calculate the number of iterations of each loop. The total
is then obtained as the product of the number of iterations of the two loops.
Linear logarithmic loop
Consider the following loops:
In C:
    for (i = 0; i < 10; i++)            /* n times */
        for (j = 1; j < 10; j *= 2)     /* log n times */
            statement block;
In Python:
    for i in range(0, 10):   # n times
        j = 1
        while j < 10:        # log n times
            statement block
            j *= 2
So, the efficiency is
f(n) = n log n.
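Counting the iterations of the linear logarithmic loop above confirms the product of the two loop counts:

```python
# Outer loop runs n times; inner doubling loop runs about log2(n) times.
n = 10
count = 0
for i in range(n):        # n times
    j = 1
    while j < n:          # log n times (j = 1, 2, 4, 8)
        count += 1
        j *= 2
print(count)  # 40 = 10 * 4, roughly n * log2(n)
```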
Quadratic loop
Consider the following loops:
In C:
    for (i = 0; i < 10; i++)        /* n times */
        for (j = 0; j < 10; j++)    /* n times */
            statement block;
In Python:
    for i in range(0, 10):       # n times
        for j in range(0, 10):   # n times
            statement block
So, the efficiency is f(n) = n^2.
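The same counting trick verifies the quadratic case:

```python
# The inner body runs n times for each of the n outer iterations.
n = 10
count = 0
for i in range(n):
    for j in range(n):
        count += 1
print(count)  # 100 = n^2
```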
Dependent quadratic loop
In a dependent quadratic loop, the number of iterations of the inner loop
depends on the outer loop. Consider the following loop:
In C:
    for (i = 0; i < 10; i++)
        for (j = 0; j <= i; j++)
            statement block;
In Python:
    for i in range(0, 10):
        for j in range(0, i + 1):
            statement block
So, the efficiency is calculated as:
1+2+3+4+...+9+10 = 55
So, we have f(n) = n (n + 1) / 2
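Counting the iterations of the dependent quadratic loop confirms the sum 1 + 2 + ... + 10:

```python
# Inner loop runs i + 1 times for i = 0..9, so the total is 1 + 2 + ... + 10.
n = 10
count = 0
for i in range(n):
    for j in range(i + 1):
        count += 1
print(count)  # 55 = n(n + 1)/2
```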
Next: Asymptotic Notation
See you in the next post.
It's Harsh Jaiswal
Signing off...
January 02, 2021
Tags: Algorithm, C, Data Structure and Algorithm in C, Data Structure and Algorithm in Python, DSA, DSA in C, DSA in Python, for loop, Python