

Jan van Brügge

What does "Big-O notation" mean anyway?

If you are a programmer, chances are that you have stumbled over the term "Big-O notation". But what does this actually mean?

Big-O notation is used to specify the computational complexity of your program or data structure. Put simply, it answers the question: how many steps do I need to perform a certain action? A "step" in this context does not mean CPU cycles or lines in a program, but an arbitrary operation that takes a fixed amount of time, no matter how big it is.

Let's start with an example: checking whether an element is in an array. In JavaScript you would implement this like so:

function findInArray(array, element) {
    for(let i = 0; i < array.length; i++) {
        if(array[i] === element) {
            return true;
        }
    }
    return false;
}

You can see it quite easily: everything inside the loop takes constant time, namely accessing an element of the array by its index and comparing it. Let's define this as one "step". But because of the for loop, we do this up to n times, where n is the length of the array. In Big-O notation: O(n).
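To make that concrete, here is a hypothetical instrumented variant (not from the article) that counts the comparisons it performs. In the worst case, when the element is not in the array, doubling the length of the array doubles the number of steps:

function findInArrayCounted(array, element) {
    let steps = 0;
    for(let i = 0; i < array.length; i++) {
        steps++; // one comparison per element we look at
        if(array[i] === element) {
            return { found: true, steps };
        }
    }
    return { found: false, steps };
}

findInArrayCounted([1, 2, 3, 4], 9);             // { found: false, steps: 4 }
findInArrayCounted([1, 2, 3, 4, 5, 6, 7, 8], 9); // { found: false, steps: 8 }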

Other standard examples that are described in Big-O notation are sorting algorithms. Take the simplest sorting algorithm as an example, bubble sort. Its implementation could look like this:

function bubblesort(array) {
    for(let i = 0; i < array.length; i++) {
        for(let j = 0; j < array.length - 1; j++) {
            if(array[j] > array[j + 1]) {
                let tmp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = tmp;
            }
        }
    }
}

For every element in the array it runs through the whole array and swaps neighbouring elements if needed. As before, everything inside the loop takes constant time. The inner loop performs this step n times, but we additionally have a second loop that executes the inner loop n times. So we execute n steps n times. In Big-O notation: O(n*n) = O(n²).
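Here too, a hypothetical counting helper (again, just for illustration) makes the quadratic growth visible: the inner loop runs n - 1 times for each of the n outer iterations, so the number of comparisons is n * (n - 1):

function bubblesortCounted(array) {
    let steps = 0;
    for(let i = 0; i < array.length; i++) {
        for(let j = 0; j < array.length - 1; j++) {
            steps++; // one comparison per inner iteration
            if(array[j] > array[j + 1]) {
                let tmp = array[j];
                array[j] = array[j + 1];
                array[j + 1] = tmp;
            }
        }
    }
    return steps;
}

bubblesortCounted([3, 1, 2]);          // 3 * 2 = 6 comparisons
bubblesortCounted([6, 5, 4, 3, 2, 1]); // 6 * 5 = 30 comparisons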

But there are also other cases where the answer is not so straightforward. Take this code snippet for example:

function mapAndFilter(x) {
    return x
        .map(n => n * 2)
        .filter(n => n > 0);
}

What is the complexity of this function? Is it

  • O(1)?
  • O(n)?
  • O(n²)?
  • Something different?

The correct answer is O(n). Why? While our code itself is constant (no direct loops), executing other functions also takes time. In the case of map, the browser will run through all elements and call the function on each of them (so this is O(n)); filter is basically the same as our findInArray, so also O(n). Together our function would be O(n + n) = O(2n), but Big-O notation does not care about constant factors, so the 2 gets dropped, resulting in O(n).
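As a side note, you could also do the doubling and the filtering in a single pass. The sketch below (my own, not from the article) is still O(n), it just avoids running through the array twice:

function mapAndFilterSinglePass(x) {
    const result = [];
    for(const n of x) {
        const doubled = n * 2;
        if(doubled > 0) {
            result.push(doubled);
        }
    }
    return result;
}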

I think this is a good example of why you always have to check not only your own code, but also the code you pull in through a library or through the browser. Every algorithm that operates on a data structure has a computational complexity. If you know it, you can write your code to have the smallest complexity possible. You can also ask yourself: "What operations do I need to perform on my data structure, and is there another data structure with lower complexity for those?"
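As a concrete example of that last question (a sketch of mine, not from the article): if all you ever need are membership checks, a Set gives you an average-case O(1) lookup instead of the O(n) scan that findInArray does:

const numbers = [1, 2, 3, 4, 5];
const numberSet = new Set(numbers);

numberSet.has(3); // true, without looping over all elements
numberSet.has(9); // false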

This is the first article in a series about common data structures and how they work internally. Follow me if you are interested in those kinds of articles, and let me know which data structures you would like to read about.

Top comments (10)

Dave Cridland

I'm sorry, but I'm going to entirely disagree with the primary premise of your article.

"Big-O" notation is absolutely not about how fast (or "how many steps") your function runs. An O(n²) function can easily outperform an O(log(n)) function, for a given n - as you point out, there's a hidden "k" involved here.

"Big-O" notation is about how the run time will change as the system scales.

So if a function takes 28 seconds to run through with 4 inputs, and it's O(n), it'll take 56 seconds with 8 inputs.

Or, if an O(n²) function takes 9 seconds to run through with 3 inputs, it'll take 81 seconds with 9 inputs.

But knowing that, you can sensibly pick that O(n²) function over the previous O(n) function if you happen to know the input size never goes above 7.

Jan van Brügge • Edited

I did not mean to imply that the computational complexity equals speed. The steps are just to visualize how a piece of code gets executed.
But yeah, I should have added a paragraph emphasizing this and also talked about break-even points between two algorithms/data structures.

Per Persson

It's not even true that "if a function takes 28 seconds to run through with 4 inputs, and it's O(n), it'll take 56 seconds with 8 inputs". The quoted statement is more likely true if you replace "4 inputs" with "4 million inputs" and "8 inputs" with "8 million inputs".

The "Big-O" notation tells how the running time (or consumed memory, whatever it is used for) behaves for big numbers of inputs. For small n the time for a O(n) algorithm is probably not proportional to n.

Dave Cridland

Well, yes, or at least sort of. So if an algorithm takes a number of CPU cycles equal to (kn + c), then we call it O(n). For low values of n, the c may dominate, making it not a simple matter of linear scaling. But a typical algorithm has a low value of c.

So the case with 4 million inputs versus 8 million is really only working because the c is just noise at that level.

Luckily, c is going to be pretty small anyway, and much of it will remain constant no matter what the algorithm used - function call overheads, for instance.

Per Persson

An algorithm that is efficient for big n often takes more time for small n than one that is slow for big n. The reason is that the former is more complicated. Also, n is usually small.

Gabriel Magalhães dos Santos

You spoke about a topic that gets little focus in our field, but a very important one. I always had a hard time understanding how to work out the Big O, but you came up with a very simple and straightforward way, my head exploded. Please keep writing things like that, thanks man

Eric Ahlberg

Good read! However, shouldn't it be filter(n => n > 0) in the third example? 🤔

Jan van Brügge

Yeah of course, small typo

Grzegorz Jońca

Nice article, it could briefly give a basic understanding of complexity in algorithms. I'm looking forward to more :)

Rob Waller • Edited

Great post!

I hadn't heard of this before so will now be doing some reading. Thanks.