Nemanja Milosavljevic

Big O

Do you know what the Big O stands for?

  • Yes! Erm... I have a friend, his name is Orestis, and he is big enough; therefore, we call him Big O, is it not?!
  • Well, not really.
  • Erm... Big Omar? Or, no, no, Big Otto, right?
  • Well, you are almost there. Big O can be our friend, and it usually is, but its name is neither Orestis, nor Otto, nor Omar.

Big O notation is a mathematical notation that describes the limiting behaviour of a function when the argument tends towards a particular value or infinity. It is a member of a family of notations invented by Paul Bachmann, Edmund Landau, and others collectively called Bachmann–Landau notation or asymptotic notation.

  • That is what Wikipedia says, not me; all possible complaints go to them, then.

Simplified, Big O describes complexity in mathematical, more precisely algebraic, terms. We can see it in an example like this: O(n²), which is read "Big O squared". The letter n is nothing but the input size of the given elements, let's say an array of n integers, and the function f(n) = n² shows us how the number of operations grows with that input. Quadratic time complexity has a growth rate of n². If the input is of size 2, it will do four operations; if the size is 4, it will do 16 operations, and so on.
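
To see that growth in action, here is a quick sketch (my addition, not part of the original post) that simply counts the work of a doubly nested loop as the input size doubles:

void demonstrateQuadraticGrowth() {
    for (int n = 2; n <= 16; n *= 2) {
        long operations = 0;
        // A doubly nested loop over n elements performs n * n basic operations.
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                operations++;
            }
        }
        System.out.println("n = " + n + " -> " + operations + " operations");
    }
}

Running it prints 4, 16, 64, and 256 operations for n = 2, 4, 8, and 16: doubling the input quadruples the work.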

Shall we write an example of an algorithm that runs in O(n²) time complexity right now and then discuss it in detail? Just bear with me here. I promise it is going to be fun.

A typical example would be Selection Sort. For the definition and a more detailed description of Selection Sort, you can read here.

Now then

...back to the implementation.

void sort(int[] array) {
    int n = array.length;
    for (int i = 0; i < n - 1; i++) {
        // Find the index of the smallest element in the unsorted part.
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (array[j] < array[minIndex]) {
                minIndex = j;
            }
        }
        // Swap the smallest element into position i.
        int temp = array[minIndex];
        array[minIndex] = array[i];
        array[i] = temp;
    }
}

Alright, let's look at this sorting algorithm in detail. Shall we determine what Big O it has and why?

    for (int i = 0; i < n - 1; i++) {
        // Find the index of the smallest element in the unsorted part.
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            if (array[j] < array[minIndex]) {
                minIndex = j;
            }
        }
        // Swap the smallest element into position i.
        int temp = array[minIndex];
        array[minIndex] = array[i];
        array[i] = temp;
    }

If we look at the outer for loop of our algorithm, it would seem to run in O(n) time, at least at first glance. Why? For n input integers, the loop repeats n−1 times, as defined in the loop condition; since constant differences do not matter, that is O(n). That's all there is to the outer for loop.

        for (int j = i + 1; j < n; j++) {
            if (array[j] < array[minIndex]) {
                minIndex = j;
            }
        }

Alright, let's tackle the last portion of the code now: the inner loop. Over all iterations of the outer loop, the inner loop repeats 1 + 2 + … + (n−1) times, which equals n(n−1)/2 times. If we multiply this out, we end up with n²/2 − n/2.
When we calculate Big O notation, we only care about the dominant term, and we do not care about the coefficients; therefore, we can drop every term except n², which points to our definitive time complexity. We write it as O(n²), and that is our Big O value.
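
If you would like to verify the n(n−1)/2 count, here is a small sketch (my own instrumentation, not from the original post) that adds a comparison counter to the selection sort above:

long countComparisons(int[] array) {
    long comparisons = 0;
    int n = array.length;
    for (int i = 0; i < n - 1; i++) {
        int minIndex = i;
        for (int j = i + 1; j < n; j++) {
            comparisons++; // one comparison per inner-loop step
            if (array[j] < array[minIndex]) {
                minIndex = j;
            }
        }
        int temp = array[minIndex];
        array[minIndex] = array[i];
        array[i] = temp;
    }
    // Always n(n-1)/2, regardless of the input order.
    return comparisons;
}

For an array of 4 elements it returns 6 = 4·3/2, and for 100 elements it returns 4950, exactly n(n−1)/2.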

Definition of dominant term

The mathematical term greater in absolute value than any other (as in a set) or than the sum of the others.

For instance, n² grows faster than n. So if we have the quadratic expression an² + bn + c, where a, b, c are known numbers and a ≠ 0, we only care about the dominant term, n², in numerators and denominators alike. In the end, that leads us to the final O(n²) value.

At this point, we might ask ourselves: is there a little o or something similar?
Well, yes, there is indeed, and we will most certainly mention it.

Big O, Little O, and the rest of the gang...

  • Big O (O()) stands for the upper bound of the complexity.
  • Little o (o()) stands for a strict upper bound, i.e. an upper bound that excludes the exact bound.
  • Omega (Ω()) stands for the lower bound of the complexity.
  • Theta (Θ()) stands for the exact bound of the complexity.
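
To make these concrete (my example, not from the definitions above): take f(n) = 3n² + 2n. Then f(n) is O(n²), Ω(n²), and hence Θ(n²). It is also O(n³), and even o(n³), because n³ grows strictly faster; it is not, however, o(n²). Our selection sort performs n(n−1)/2 comparisons no matter the input, so its running time is Θ(n²), the exact bound.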

It is about time to look at the order of complexity classes across Big O, from the least complex to the most complex ones.

Big O Complexity

  • O(1) has the least complexity (constant time)
  • O(log(n)) (logarithmic time)
  • O(n) (linear time)
  • O(nlog(n)) (linearithmic time)
  • O(n²), O(n³), O(n⁴) ... and so on, through ever higher polynomial degrees
  • At some point, we reach truly inefficient algorithms with a Big O of O(n!) (factorial time), which one rarely faces in day-to-day algorithm problems, especially coding ones, due to its extreme inefficiency.
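
For contrast with the quadratic selection sort, here is a minimal sketch (my example; the method name is my own) of a classic O(log(n)) algorithm, binary search, which assumes the array is already sorted:

int binarySearch(int[] sorted, int target) {
    int low = 0;
    int high = sorted.length - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // avoids overflow of (low + high)
        if (sorted[mid] == target) {
            return mid;                   // found: return its index
        } else if (sorted[mid] < target) {
            low = mid + 1;                // discard the lower half
        } else {
            high = mid - 1;               // discard the upper half
        }
    }
    return -1;                            // target is not present
}

Each iteration halves the remaining search range, so an array of a million elements needs only about 20 steps.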

Time & Space Complexity

So far, we have only been discussing the time complexity of algorithms, that is, how fast an algorithm is. However, the space the program takes to complete the task also matters. Space complexity describes how much memory the program will use, and it is therefore an equally important factor to take into account.

Space complexity works similarly to time complexity. For example, the selection sort we have worked with so far has a space complexity of O(1): it only stores one minimum value and its index for comparison, so the extra space used does not grow with the input size. It needs no additional memory while running. Of course, some algorithms can and will use additional space, and their space complexity can be O(n).
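
To make the O(1) versus O(n) distinction concrete, here is a small sketch (my example; both helpers are hypothetical): reversing an array in place uses constant extra space, while returning a reversed copy allocates a whole new array of size n.

// O(1) extra space: swaps happen in place, using only a few local variables.
void reverseInPlace(int[] array) {
    for (int i = 0, j = array.length - 1; i < j; i++, j--) {
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
    }
}

// O(n) extra space: a new array of the same length is allocated.
int[] reversedCopy(int[] array) {
    int[] copy = new int[array.length];
    for (int i = 0; i < array.length; i++) {
        copy[i] = array[array.length - 1 - i];
    }
    return copy;
}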

An example of such an algorithm would be Bucket Sort. In its simplest form, bucket sort sorts the array by creating a list of counters, one for each possible element value, and incrementing the matching counter whenever that element is encountered. In the end, the sorted array is the list of values, each repeated as many times as its count.

Bucket Sort Visually:

[Image: bucket sort illustration, taken from Wikipedia]

Bucket sort works as follows (a rough sketch of the counting variant appears after the steps):

  • Set up an array of initially empty buckets.
  • Go over the original array, putting each object in its bucket.
  • Sort each non-empty bucket.
  • Visit the buckets in order and put all elements back into the original array.
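
Here is a minimal Java sketch (my own, with a hypothetical maxValue parameter) of the counting variant described above, assuming non-negative integers no larger than maxValue; the counts array is the additional memory that space complexity accounts for:

void bucketCountSort(int[] array, int maxValue) {
    int[] counts = new int[maxValue + 1];   // one "bucket" per possible value
    for (int value : array) {
        counts[value]++;                    // drop each element into its bucket
    }
    int index = 0;
    for (int value = 0; value <= maxValue; value++) {
        for (int k = 0; k < counts[value]; k++) {
            array[index++] = value;         // write each value back, count times
        }
    }
}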

Last but not least

Complexity Expectance

The complexity of an algorithm can be analyzed as best case, worst case, average case, and expected case.
Let's take insertion sort, for example. Insertion sort iterates through all the elements in the list. If an element is smaller than its predecessor, it is moved backwards until it reaches an element that is no larger, and it is inserted there.

[Image: insertion sort illustration, taken from Wikipedia]

If the array is already sorted, then no shifting will be made. The algorithm iterates through the array once, which has a time complexity of O(n). Therefore, we say that the best-case time complexity of insertion sort is O(n), even though its worst case is O(n²).
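
A minimal insertion sort sketch (my addition) makes the best case visible: on an already-sorted array, the while loop body never runs, leaving a single O(n) pass.

void insertionSort(int[] array) {
    for (int i = 1; i < array.length; i++) {
        int current = array[i];
        int j = i - 1;
        // Shift larger predecessors one slot to the right.
        while (j >= 0 && array[j] > current) {
            array[j + 1] = array[j];
            j--;
        }
        array[j + 1] = current; // insert into its correct position
    }
}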

To Wrap up:

Big O notation is a mathematical analysis that we apply to algorithms. The results may differ, so we always have to take edge cases into account. At this point, we should be able to understand how and when to use Big O analysis to reason about, and get the best out of, performance.
The whole story is based on my own experience and research. Please let me know if I have missed something or written something that confused you. There are plenty of articles on this topic, and they are waiting for you to read them, Orestis, Omar, and you too, Otto.
