More or less, we all know about algorithm complexity analysis.

Here, we will discuss what we actually need to know in real life about algorithm complexity.

There are different notations available but we will consider big O notation.

Keep in mind that big O conventionally describes the worst-case scenario for an algorithm.

There are two parameters for measuring algorithm complexity: time complexity and space complexity. Here, we will talk about time complexity.

**Time complexity:**

Time complexity means how many steps your program needs to execute.

Most importantly, if your input changes, you need a mathematical expression that relates the input size to the total number of execution steps.

For example:

```typescript
const array = [1, 2, 3];
for (let i = 0; i < array.length; i++) {
  console.log(array[i]);
}
```

This program has an array of length 3, so the loop takes 3 execution steps. If the array had size 10, it would take 10 steps.

More formally: if the array has N elements, the program takes N steps.

Hence, the time complexity of this program will be O(N).

There are mainly 7 types of time complexity we have to deal with.

1. **O(1)**

2. **O( log(N) )**

3. **O(N)**

4. **O (N log(N))**

5. **O(N^2) / O(N^3) / …**

6. **O(2^N)**

7. **O(N!)**

This list is ordered by performance: O(1) is the best, then O(log N), then O(N), and so on.

**O(1) – Constant time:**

This is called constant time complexity. If your input changes but your program's step count does not, the program has constant time complexity.

Constant-time algorithms always take the same amount of time to execute; their execution time is independent of the size of the input.

A simple example of `O(1)` might be `return 23;` — whatever the input, this will return in a fixed, finite time.

Example 1:

```
int index = 5;                                               // cost = 1
int item = list[index];                                      // cost = 1

if (condition true) then
    perform some operation that runs in constant time        // cost = 1
else
    perform some other operation that runs in constant time  // cost = 1

for i = 1 to 100
    for j = 1 to 200
        perform some operation that runs in constant time    // cost = 100 * 200
```

So the total cost is 1 + 1 + 1 + 1 + (100 * 200), which is still O(1). It doesn't matter how large the constant is; we write it as O(1) because the number of execution steps does not change with the input (here there are no variable inputs in the implementation).

Example 2:

```typescript
function sum(a: number, b: number) {
  return a + b;
}
```

This takes 2 units of time (constant): one for the arithmetic operation and one for the return (as per the conventions above). Cost = 2, number of times = 1.

**Case scenarios for O(1) time:**

- Accessing an array index (`int a = ARR[5];`)
- Inserting a node at the head of a Linked List
- Pushing and popping on a Stack
- Insertion and removal from a Queue
- Finding the parent or left/right child of a node in a tree stored in an array
- Jumping to the next/previous element in a Doubly Linked List
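A couple of the constant-time cases above can be sketched in TypeScript (the variable names are illustrative only):

```typescript
// Accessing an array index is O(1): the element is addressed directly,
// regardless of how long the array is.
const arr = [10, 20, 30, 40, 50];
const a = arr[3]; // one step, whatever arr.length is

// Pushing and popping on a stack are O(1): only the top element is touched.
const stack: number[] = [];
stack.push(7);           // one step
stack.push(8);           // one step
const top = stack.pop(); // one step
```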

**O(N) – Linear time:**

The running time grows at most linearly with the size of the input: the time to execute the algorithm is directly proportional to the input size N, so the running time increases proportionally as N increases.

```typescript
export function twoNumberSum(array: number[], targetSum: number) {
  const seen = new Set<number>(); // values visited so far
  for (let i = 0; i < array.length; i++) {
    const complement = targetSum - array[i];
    if (seen.has(complement)) {
      return [complement, array[i]];
    }
    seen.add(array[i]); // one pass with a hash set keeps this O(N)
  }
  return [];
}
```

This program has O(N) time complexity: we don't know the size of the array, but there is exactly 1 loop, and it executes at most once per input element.

There are also some constant-time steps, but since N dominates, N is what we count.

For example, if the total execution cost (steps) of a program is 1 + 4N + 100, we drop the constants, keep the dominant term, and get O(N).

Meaning, O(1 + 2N + 3) => O(N)

Again, if the program costs N^2 + 5N + 100 steps, it is O(N^2).

Again, O(N^3 + 100N^2 + 5N + 100) => O(N^3)

**Case scenarios for O(N) time:**

In a nutshell, most simple brute-force algorithms that make a single pass over the input run in O(N) time.

- Traversing an array
- Traversing a linked list
- Linear Search
- Deletion of a specific element in a Linked List (Not sorted)
- Comparing two strings
- Checking for Palindrome
- Counting/Bucket Sort (and you can find many more such examples)
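Two of the O(N) cases above, sketched in TypeScript — each makes a single pass over the input:

```typescript
// Linear search: in the worst case we inspect all N elements.
function linearSearch(array: number[], target: number): number {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === target) return i; // found: return the index
  }
  return -1; // not found after N steps
}

// Palindrome check: compares at most N/2 character pairs, still O(N).
function isPalindrome(s: string): boolean {
  for (let i = 0, j = s.length - 1; i < j; i++, j--) {
    if (s[i] !== s[j]) return false;
  }
  return true;
}
```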

**O(N^2) – Quadratic time:**

The running time grows quadratically with the size of the input.

```typescript
export function twoNumberSum(array: number[], targetSum: number) {
  for (let i = 0; i < array.length - 1; i++) {
    for (let j = 0; j < array.length; j++) {
      if (i != j) {
        if (array[i] + array[j] == targetSum) {
          return [array[i], array[j]];
        }
      }
    }
  }
  return [];
}
```

This program has O(N^2) time complexity: we don't know the size of the array, but there are exactly 2 nested loops, so the total number of execution steps is N * N if the array has size N.

**Case scenarios for O(N^2) time:**

These are usually the less efficient choices when an O(N log N) counterpart exists. The typical application here is brute force.

- Bubble Sort
- Insertion Sort
- Selection Sort
- Traversing a simple 2D array
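Bubble Sort from the list above, as a minimal TypeScript sketch — the nested passes over the array are what make it O(N^2):

```typescript
// Bubble Sort: up to N passes, each scanning up to N elements => O(N^2).
function bubbleSort(array: number[]): number[] {
  const arr = [...array]; // copy so the input is not mutated
  for (let i = 0; i < arr.length - 1; i++) {
    for (let j = 0; j < arr.length - 1 - i; j++) {
      if (arr[j] > arr[j + 1]) {
        [arr[j], arr[j + 1]] = [arr[j + 1], arr[j]]; // swap neighbours
      }
    }
  }
  return arr;
}
```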

**O(log N) – Logarithmic time:**

Logarithmic running time (O(log N)) means the running time grows in proportion to the logarithm of the input size. For example, if 10 items take at most some amount of time x, 100 items take at most 2x, and 10,000 items take at most 4x, that looks like O(log N) time complexity.

A typical example of `O(log N)` would be looking up a value in a sorted input array by bisection.

Consider binary search on the array [1, 2, 3, 4, 5, 6, 7, 8].

To find 1 in that array, we first cut the array in half and keep one half (say [1, 2, 3, 4]). We cut in half again and keep one half (say [1, 2]). We cut once more and finally reach 1. So in total we need 3 steps for an array of size 8.

If the array has size 16, how many steps do we need? Only 4 — that is, doubling the input size costs just 1 extra step.

If your program behaves like this, it has O(log N) time complexity.

That's why binary search is O(log N): the input can grow a lot while the number of steps barely grows.

**Case scenarios for O(log(N)) time:**

- Binary Search
- Finding the largest/smallest number in a balanced binary search tree
- Certain Divide and Conquer Algorithms based on Linear functionality
- Calculating Fibonacci Numbers – best method

The basic premise here is NOT using the complete data, but reducing the problem size with every iteration.
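The halving walkthrough above maps directly onto binary search; here is a minimal TypeScript sketch (assuming the input array is sorted):

```typescript
// Binary search: each iteration halves the remaining range => O(log N).
function binarySearch(array: number[], target: number): number {
  let lo = 0;
  let hi = array.length - 1;
  while (lo <= hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (array[mid] === target) return mid; // found
    if (array[mid] < target) lo = mid + 1; // discard the left half
    else hi = mid - 1;                     // discard the right half
  }
  return -1; // not present
}
```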

**O(N log N):**

A typical example of `O(N log N)` would be sorting an input array with a good algorithm (e.g. merge sort).

**Case scenarios for O(N log(N)) time:**

The factor of log N comes from divide and conquer. Some of these algorithms are among the best optimized and most frequently used.

- Merge Sort
- Heap Sort
- Quick Sort (O(N log N) on average; its worst case is O(N^2))
- Certain Divide and Conquer Algorithms based on optimizing O(n^2) algorithms
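Merge Sort from the list above, sketched in TypeScript — there are about log N levels of splitting, and each level does O(N) merging work, hence O(N log N):

```typescript
// Merge Sort: split in half (log N levels), merge each level in O(N).
function mergeSort(array: number[]): number[] {
  if (array.length <= 1) return array; // base case: already sorted
  const mid = Math.floor(array.length / 2);
  const left = mergeSort(array.slice(0, mid));
  const right = mergeSort(array.slice(mid));

  // Merge the two sorted halves into one sorted array.
  const merged: number[] = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}
```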

Finally, some worked examples.

*Example 1*

```
x = n
while ( x > 0 ) {
    x = x - 1
}
```

The above is O(n)

*Example 2*

```
x = n
while ( x > 0 ) {
    x = x / 2
}
```

The above is O(log n)

*Example 3*

```
x = n
while ( x > 0 ) {
    y = n
    while ( y > 0 ) {
        y = y - 1
    }
    x = x - 1
}
```

The above is O(n^2)

*Example 4*

```
x = n
while ( x > 0 ) {
    y = x
    while ( y > 0 ) {
        y = y - 1
    }
    x = x - 1
}
```

The above is O(n^2)

*Example 5*

```
x = n
while ( x > 0 ) {
    y = n
    while ( y > 0 ) {
        y = y / 2
    }
    x = x - 1
}
```

The above is O(n log n)

*Example 6*

```
x = n
while ( x > 0 ) {
    y = x
    while ( y > 0 ) {
        y = y / 2
    }
    x = x - 1
}
```

The above is O(n log n)

*Example 7*

```
x = n
while ( x > 0 ) {
    y = n
    while ( y > 0 ) {
        y = y - 1
    }
    x = x / 2
}
```

The above is O(n log n)

*Example 8*

```
x = n
while ( x > 0 ) {
    y = x
    while ( y > 0 ) {
        y = y - 1
    }
    x = x / 2
}
```

The above is O(n)
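One way to convince yourself of results like Example 8 is to count iterations empirically. A small TypeScript sketch (the counting harness is my own, not part of the original examples):

```typescript
// Count the inner-loop iterations of Example 8 for a given n.
// The work is n + n/2 + n/4 + ..., a geometric series bounded by 2n => O(n).
function example8Steps(n: number): number {
  let steps = 0;
  let x = n;
  while (x > 0) {
    let y = x;
    while (y > 0) {
      y = y - 1;
      steps++; // one unit of inner-loop work
    }
    x = Math.floor(x / 2);
  }
  return steps;
}
```

For n = 8 this counts 8 + 4 + 2 + 1 = 15 iterations, safely under the 2n bound of 16.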