Algorithm analysis involves evaluating the efficiency of an algorithm to determine its suitability for a specific task. One common approach to analyzing algorithms is asymptotic analysis.

The word "asymptotic" refers to the idea of approaching a limit but never reaching it. In the context of algorithm analysis, it refers to the trend in the performance of an algorithm as the input size becomes arbitrarily large.  It helps us understand how an algorithm's performance scales with the input size 

Asymptotic analysis allows us to compare different algorithms and choose the most efficient one for a given problem. Selecting an efficient algorithm optimizes execution time and resource usage, leading to better overall performance.

In the next sections, we will look deeper into the concept of asymptotic analysis and explore different techniques and notations used to analyze and describe the efficiency of algorithms.

The "big-oh" notation

The Big O notation provides a way to express the upper bound, or worst-case scenario, of an algorithm's runtime as a function of the input size.

It is represented by the letter 'O' followed by a function that describes the growth rate of the algorithm's runtime. For example, if an algorithm has a runtime of O(n), its runtime grows linearly with the input size.
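
To get a feel for how differently these growth rates behave before we examine each one, here is a short, purely illustrative snippet that prints a few of the functions covered below for increasing input sizes (the chosen sizes are arbitrary):

import math

# Compare how some common growth rates scale as n increases
for n in (10, 100, 1000):
    print(f"n={n:>4}: log2(n) ~ {math.log2(n):.0f}, "
          f"n*log2(n) ~ {n * math.log2(n):.0f}, "
          f"n^2 = {n ** 2}, 2^n ~ 10^{n * math.log10(2):.0f}")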

Let us look at the various functions used in this notation and what each one signifies.

O(1) - Constant complexity

This function represents constant time complexity. Algorithms with O(1) runtime take a fixed amount of time regardless of the input size. In other words, the performance of the algorithm does not depend on the size of the input.

Many basic operations have O(1) complexity. Some examples include accessing an element in an array by its index, performing an arithmetic operation, and assigning a value to a variable.

It represents operations that can be executed in a fixed number of steps, regardless of the size of the input.

# The Python list is implemented as a dynamic array

array = [2, 3, 5, 7, 13, 17, 19]

# Accessing an element from the array by its index
print(array[4])

When accessing an element from an array, as shown above, neither the size of the array nor the index of the element being accessed affects the running time. This is an example of O(1) time complexity.

O(log n) - Logarithmic complexity

The O(log n) complexity is used to describe an algorithm whose runtime increases in proportion to the logarithm of the input size.

This type of complexity is commonly seen in algorithms that repeatedly halve the input at each step. Binary search is one of the best-known examples of such an algorithm.

The binary search algorithm is used to search for a value in a sorted array. On each unsuccessful comparison, the algorithm halves the search space, eliminating the half that cannot contain the value. The steps involved are as follows:

  1. Find the middle element of the sorted array.
  2. Compare the middle element with the value being searched for.
  3. If the middle element is equal to the value, the search is complete.
  4. If the middle element is greater than the value, eliminate the second half of the array and continue the search in the first half.
  5. If the middle element is less than the value, eliminate the first half of the array and continue the search in the second half.
  6. Repeat this process until the value is found or the search space is exhausted.

Binary search in Python

# Returns the index of x in arr, or -1 if not found

def binary_search(arr, x):
    # Initialize start and end pointers
    start = 0
    end = len(arr) - 1

    # Loop until the start and end pointers cross
    while start <= end:
        # Calculate the middle index
        mid = (start + end) // 2

        # Check if x is at the middle index
        if arr[mid] == x:
            return mid

        # If x is smaller than the middle element, it must be in the left half
        elif x < arr[mid]:
            end = mid - 1

        # If x is larger than the middle element, it must be in the right half
        else:
            start = mid + 1

    # If x is not found, return -1
    return -1

# Example usage

arr = [1, 3, 5, 7, 9, 11, 13, 15]
x = 11

result = binary_search(arr, x)
if result != -1:
    print(f"Element {x} found at index {result}")
else:
    print(f"Element {x} not found")

One important requirement of the binary search algorithm is that the array must be sorted. If the array is not sorted, the algorithm will not work correctly.

The O(log n) complexity is very desirable because it scales extremely well: searching a sorted array of a million elements takes only about 20 comparisons.

O(n) - Linear complexity

The O(n) complexity describes an algorithm whose runtime increases linearly as the input size increases. In other words, the running time grows at the same rate as the input size.

For example, if an algorithm has O(n) complexity, its runtime will roughly double when the input size is doubled. This means that as the size of the input increases, the time required to process it also increases linearly.
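
A rough way to observe this behavior is to time a simple loop while doubling the input size. The sketch below is informal: the helper linear_work and the sizes are our own choices, and exact timings will vary by machine.

import time

def linear_work(n):
    # One addition per element: a straightforward O(n) loop
    total = 0
    for i in range(n):
        total += i
    return total

# Doubling n should roughly double the measured time
for n in (1_000_000, 2_000_000, 4_000_000):
    start = time.perf_counter()
    linear_work(n)
    print(f"n = {n}: {time.perf_counter() - start:.3f} s")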

This complexity is commonly associated with algorithms that loop through an array once, performing an operation on each element.

An example of O(n) complexity is finding the largest element in an unsorted array.

Find the largest element in a list

def find_max(arr):
    maximum = arr[0]  # Initialize the maximum with the first element of the array
    for num in arr:  # Iterate through each element of the array
        if num > maximum:  # Compare the current element with the maximum so far
            maximum = num  # Assign a new maximum if the current element is larger

    return maximum

arr = [3, 8, 2, 10, 1, 5, 7] 
print(find_max(arr))

In the above example, the number of iterations of the for loop is directly proportional to the size of the input array (n), giving the function a time complexity of O(n). As the input array gets larger, the number of iterations increases, but only linearly.

The O(n) complexity is considered to be relatively efficient and is commonly found in many popular algorithms.

O(n log n) - Linearithmic complexity

O(n log n) complexity describes an algorithm whose runtime grows in proportion to n multiplied by the logarithm of n.

It is typically found in algorithms that perform an O(log n) operation for each of the n elements, or that repeatedly halve the input while doing O(n) work at each level. Simply put, as the input size increases, the time it takes for the algorithm to run increases at a slightly faster rate than the input size itself.

This complexity is usually found in efficient sorting algorithms such as merge sort and quicksort (which is O(n log n) on average).
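
To make the pattern concrete, here is a minimal merge sort sketch (the function name and test values are our own): the list is halved recursively, which gives about log n levels of recursion, and the merge step touches all n elements at each level, for O(n log n) overall.

def merge_sort(arr):
    # Base case: a list of 0 or 1 elements is already sorted
    if len(arr) <= 1:
        return arr

    # Split the list in half and sort each half recursively
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])
    right = merge_sort(arr[mid:])

    # Merge the two sorted halves back together
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))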

Algorithms with this complexity grow far more slowly than algorithms with quadratic or exponential complexities.

O(n^2) - Quadratic complexity

The O(n^2) complexity describes algorithms whose runtime increases at a rate proportional to the square of the input size. It is commonly found in algorithms that involve a pair of nested loops over the input.

for i in dataset:
    for j in dataset:
        # Do something with the pair (i, j): n * n iterations in total

The bubble sort algorithm is a popular algorithm with O(n^2) complexity. It is commonly used to introduce computer science students to sorting algorithms.

The basic concept of bubble sort is to repeatedly step through the list, compare adjacent elements, and swap them if they are in the wrong order. This process continues until the list is sorted.

Bubble sort example

def bubble_sort(arr):
    n = len(arr)

    # Traverse through all array elements
    for i in range(n - 1):
        # The last i elements are already in their final place
        for j in range(0, n - i - 1):
            # Swap adjacent elements if they are in the wrong order
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]

# Test bubble_sort
arr = [64, 34, 25, 12, 22, 11, 90]

bubble_sort(arr)
print(arr)

The quadratic complexity is generally inefficient for large inputs because of the sheer number of operations involved: doubling the input size quadruples the runtime.

O(2^n) - Exponential complexity

O(2^n) complexity, also known as exponential complexity, is used to describe algorithms whose runtime increases exponentially with the input size. In other words, the runtime of an algorithm with O(2^n) complexity doubles with every increment in the input size.

For example, if an algorithm with this complexity takes 2 seconds to solve a problem with an input size of 10, it would take about 4 seconds for an input size of 11, about 8 seconds for an input size of 12, and so on. Such algorithms can quickly become unmanageable and impractical for real-world applications.

# Naive recursive Fibonacci: each call spawns two further calls,
# so the total number of calls grows exponentially with n
def fib(n):
    if n <= 1:
        return n
    else:
        return fib(n-1) + fib(n-2)

print(fib(20))

Try running the above function with a slightly larger number to see how impractical it becomes.

Conclusion

Apart from the "Big O" notation, there are other, less commonly used notations in algorithm analysis. Some of them include:

  • "Big Omega" notation(): This notation is used to represent the asymptotic lower bound of a function. It is denoted by Ω (pronounced "big omega").
  • "Big Theta" notation: This notation is used to represent both the asymptotic upper and lower bounds of a function. It is denoted by Θ (pronounced "big theta").