Big O: How to Determine the Efficiency of a Function
To determine the efficiency of a function, first we have to clarify what we mean by efficient.
Do we mean:
- Faster?
- Less memory intensive?
- Easy to read?
Let’s break these three concepts down into a little more detail.
Why Faster Is Not the Optimal Measure of Efficiency
- There are lots of moving parts in a computer, literally and figuratively. What runs quickly on my machine may not run quickly on yours. Processor speed, available memory: all of these constraints affect the time it takes to process an operation.
- There are a million little processes going on in your own computer affecting the time it takes to complete each operation. Maybe your Mac runs an app with a bunch of nested `for` loops on a Wednesday afternoon in 50 ms; later that day the same code takes 200 ms. We can’t depend on time as a metric because of the sheer number of variables at play when we talk about our computers and their processing power. The sketch below illustrates this.
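As a rough illustration, here is a minimal sketch that times the same work twice with `performance.now()` (available in browsers and modern Node). The exact numbers will differ on every machine and on every run:

```js
// Minimal sketch: timing the same work twice.
// The numbers depend entirely on the machine and the moment.
function countTo(n) {
  let count = 0;
  for (let i = 1; i <= n; i++) {
    count += 1;
  }
  return count;
}

const t1 = performance.now();
countTo(100_000_000);
const t2 = performance.now();
countTo(100_000_000);
const t3 = performance.now();

console.log(`First run:  ${(t2 - t1).toFixed(2)} ms`);
console.log(`Second run: ${(t3 - t2).toFixed(2)} ms`);
```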
So How Do We Measure an Algorithm’s Efficiency?
By the number of simple operations it needs to perform.
Take a look at this code:
```js
function doSomeMath(n) {
  return n * (n + 1) / 2;
}
```
Here we have three simple operations:
- 1 multiplication
- 1 addition
- 1 division
This is the case regardless of `n`’s size.
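To see this in action, calling `doSomeMath` with a tiny input and a huge one performs exactly the same three operations (a quick sanity check, using the function above):

```js
// Each call does one addition, one multiplication, and one division,
// no matter how large n is.
console.log(doSomeMath(5));         // 15
console.log(doSomeMath(5_000_000)); // 12500002500000
```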
Take a look at this code:
```js
function plsIterateOverMe(n) {
  let count = 0;
  for (let i = 1; i <= n; i++) {
    count += 1;
  }
  return count;
}
```
How many operations are there here?
We have:
- `let count = 0`: 1 assignment
- `let i = 1`: 1 assignment
- `count += 1`: n additions and n assignments
- `i <= n`: n comparisons
- `i++`: n additions and n assignments
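Putting those counts directly into the code as comments makes the tally easier to see (this is the same function as above, just annotated):

```js
function plsIterateOverMe(n) {
  let count = 0;  // 1 assignment
  for (
    let i = 1;    // 1 assignment
    i <= n;       // n comparisons
    i++           // n additions and n assignments
  ) {
    count += 1;   // n additions and n assignments
  }
  return count;
}
```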
Depending on what we choose to count, the total can be anywhere from 2n up to 5n + 2.
Regardless of the exact count, the number of operations grows proportionally with n: if n doubles, the number of operations will too.
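Conveniently, `plsIterateOverMe` returns how many times its loop body ran, so we can watch the work double along with the input:

```js
// The return value tracks how many times `count += 1` executed.
console.log(plsIterateOverMe(1000)); // 1000
console.log(plsIterateOverMe(2000)); // 2000 -- double the input, double the work
```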
This is essentially Big O Notation: a way of describing how the runtime of an algorithm grows as its input grows.
The exact inputs don’t matter; what matters is the trend we see when we assess an algorithm’s efficiency.
Technical Definition
We say that an algorithm is O(f(n)) if the number of simple operations the computer must perform is eventually less than a constant times f(n) as n increases.
- f(n) could be linear: f(n) = n, giving O(n)
- f(n) could be quadratic: f(n) = n^2, giving O(n^2)
- f(n) could be constant: f(n) = 1, giving O(1)
- or…f(n) could be something else.
Let’s refer back to our previous `doSomeMath` function:
```js
function doSomeMath(n) {
  return n * (n + 1) / 2;
}
```
This is always just three operations, no matter the size of n. This is O(1), or constant time.
And our `plsIterateOverMe` function:
```js
function plsIterateOverMe(n) {
  let count = 0;
  for (let i = 1; i <= n; i++) {
    count += 1;
  }
  return count;
}
```
The number of operations is eventually bounded by some multiple of n, maybe 8n or so. The exact multiple doesn’t really matter: this is linear time, or O(n).
Here is an example of a slower, less efficient algorithm:
```js
function addAllPairs(n) {
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      console.log(i + j);
    }
  }
}
```
Here we have an O(n) operation nested inside another O(n) operation, giving us O(n^2).
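To make that growth concrete, here is a small sketch that uses a counter instead of `console.log` so we can tally how many times the inner body runs (`countAllPairs` is a made-up name for illustration):

```js
// Counts how many times the inner loop body executes.
function countAllPairs(n) {
  let operations = 0;
  for (let i = 0; i < n; i++) {
    for (let j = 0; j < n; j++) {
      operations += 1;
    }
  }
  return operations;
}

console.log(countAllPairs(10)); // 100
console.log(countAllPairs(20)); // 400 -- double the input, quadruple the work
```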
The important thing here is that constants don’t matter, and neither do lower-order terms: O(n - 3) simplifies to O(n), and O(n^2 + 12n + 6) simplifies to O(n^2).
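For instance, two loops in sequence perform roughly 2n operations, but dropping the constant still leaves us with O(n) (a minimal sketch; `printTwice` is a made-up name):

```js
// Two sequential O(n) loops: roughly 2n operations in total.
// Dropping the constant 2, this is still O(n), not O(2n).
function printTwice(n) {
  for (let i = 0; i < n; i++) {
    console.log(i); // first pass: n operations
  }
  for (let i = 0; i < n; i++) {
    console.log(i); // second pass: n more operations
  }
}
```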
Things to Remember
- Arithmetic operations are constant
- Variable assignment is constant
- Accessing elements in an array or object is constant
- A loop’s complexity is the length of the loop times the complexity of whatever happens inside the loop
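A quick sketch tying those rules together (the names are made up for illustration):

```js
const nums = [1, 2, 3, 4, 5];

const x = 10;          // variable assignment: constant time
const y = x * 2 + 1;   // arithmetic: constant time
const first = nums[0]; // array access by index: constant time

// One loop over n elements doing constant work per element: O(n).
function sumAll(arr) {
  let total = 0;
  for (let i = 0; i < arr.length; i++) {
    total += arr[i]; // constant work, repeated n times
  }
  return total;
}
```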
Conclusion
Big O has some interesting math behind it, but at the end of the day, recognizing common patterns in your algorithm, such as nested loops, should give you a head start on optimizing it for efficiency.