Plain and simple, I don't know Big O that well. I was introduced to the topic in college during one of my intro CS courses, but the concept never really stuck with me. Below is the product of many hours spent trying to completely understand Big O, with a focus on JavaScript. Please note that Big O is language agnostic and does not strictly apply to JavaScript.

Big O is a computer science term used to describe the time or space complexity of an algorithm. Big O describes the worst case scenario in terms of performance for any algorithm. The complexity is measured based upon the input to a function, and how that function's performance changes as the input grows. Big O is a mathematical notation, and seeing how many developers aren't as strong in math as they should be (✋ me), I'll do my best to describe the math associated with the functions below.

Big O typically comes into play when you're working with an iterable data structure such as a collection, stack, queue, or array. These data structures will often require you to perform some kind of search or sort operation on them.

The most efficient function is one that runs in `O(1)`, which is also known as "constant time". Algorithms that are `O(1)` will execute at the same speed regardless of how large the input becomes.

```
function getFirstElementFromArray(array) {
  return array[0];
}
```

In this example, it doesn't matter if the array is of size 10 or 10 million. The function simply returns the first item in the array which means it only takes "one step" to complete its execution.
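Array index access isn't the only constant-time operation. As another sketch (the function and data below are my own, not from the original), looking up a key in a JavaScript `Map` is also `O(1)` on average, no matter how many entries the map holds:

```
function getUserById(userMap, id) {
  // A Map lookup takes the same "one step" whether the map
  // holds 10 entries or 10 million.
  return userMap.get(id);
}

const users = new Map([[1, 'Ada'], [2, 'Linus']]);
getUserById(users, 2); // 'Linus'
```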

Next up in efficiency is `O(log n)`, also known as "logarithmic time". Ahh, logarithms, you remember how these things work, right? Nope? Ok, neither did I until I did a little Google searching, so let's do a quick recap.

While researching logarithms I kept seeing references to their inverse function: exponentials. In my opinion, exponential functions are seen quite often, so let's start there.

`Math.pow(2,3) // 2³`

With this small exponential equation we have a *base* of 2 and an *exponent* of 3, which will evaluate to 8 (2 * 2 * 2). Given the same equation, if we only have the evaluated value (8) and its base (2), we can figure out its exponent with the following equation:

`Math.log(8)/Math.log(2) //Log₂8 = 3`

One thing to note when working with Big O is that we always drop constant factors, and in this case that includes the base of the logarithm (changing the base only multiplies the result by a constant). With that being said, the Big O notation for this equation would simply be O(log n), with no reference to our base of 2.
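To make the logarithm concrete, here's a small helper of my own (not from the original text) that counts how many times a number can be halved before it reaches 1 — which is exactly what log₂ n measures, and exactly what a binary search does to its input:

```
function countHalvings(n) {
  let steps = 0;
  // Halve n until only one element's worth of "space" remains
  while (n > 1) {
    n = Math.floor(n / 2);
    steps++;
  }
  return steps;
}

countHalvings(8);  // 3 — the same answer as Math.log(8) / Math.log(2)
countHalvings(32); // 5
```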

Now that we have the technical explanation out of the way, let's see how this actually works in code using a binary search. Let's assume we have a sorted array, and using a binary search algorithm we want to find a specific value within it.

```
function binarySearch(needle, haystack) {
  if (haystack.length === 0) {
    return undefined; // needle isn't in the array
  }
  const middle = Math.floor(haystack.length / 2);
  if (haystack[middle] > needle) {
    const left = haystack.slice(0, middle);
    return binarySearch(needle, left);
  } else if (haystack[middle] < needle) {
    const right = haystack.slice(middle + 1); // middle was already checked
    return binarySearch(needle, right);
  } else {
    return haystack[middle];
  }
}
```

After each pass through this recursive function, our problem set shrinks by half as we eliminate either the left or the right portion of the array. If you were to map this out on a graph, you would see that as our input size X doubles, the number of steps Y only increases by one:

- X = 4, Y = 2
- X = 8, Y = 3
- X = 16, Y = 4
- X = 32, Y = 5
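You can verify those numbers yourself by instrumenting the search with a step counter. This iterative version is my own sketch (it tracks indexes rather than slicing, so the counting is easy to follow), not part of the original example:

```
function binarySearchSteps(needle, haystack) {
  let steps = 0;
  let low = 0;
  let high = haystack.length - 1;
  while (low <= high) {
    steps++; // one comparison round per halving
    const middle = Math.floor((low + high) / 2);
    if (haystack[middle] === needle) return steps;
    if (haystack[middle] < needle) low = middle + 1;
    else high = middle - 1;
  }
  return steps;
}

const sorted = Array.from({ length: 32 }, (_, i) => i); // [0, 1, ..., 31]
binarySearchSteps(31, sorted); // 6 — doubling the array to 64 elements adds only one more step
```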

O(n) is the next step down in efficiency after O(1) and O(log n). In this case the algorithm grows in a 1:1 ratio with respect to the size of N; this is also known as "linear time". In the example below, as N increases in size, the algorithm's complexity increases at the same rate.

```
function linearSearch(array, searchVal) {
  for (let i = 0; i < array.length; i++) {
    if (array[i] === searchVal) {
      return i;
    }
  }
  return -1; // searchVal isn't in the array
}
```

Even if your array is of size 100 and the value you're searching for is the first one in the array, we aren't concerned about that because Big O is focused on the **worst case scenario**.
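As a quick sketch of my own (not from the original), counting the comparisons makes that worst case visible — when the value is last, or missing entirely, we have to look at every element:

```
function linearSearchSteps(array, searchVal) {
  let steps = 0;
  for (let i = 0; i < array.length; i++) {
    steps++;
    if (array[i] === searchVal) return steps;
  }
  return steps; // not found: we examined all n elements
}

linearSearchSteps([5, 3, 9, 1], 1); // 4 — the best case would be 1, but Big O only cares about the worst
```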

Working off our previous knowledge of **O(log n)** and a simple binary search, we can start to understand what **O(n log n)** will look like.

Imagine we have a two dimensional array that we need to perform a binary search on. Our code might look something like this.

```
const arrays = [
  [1, 2, 3, 5, 6, 9],
  [2, 22, 333, 444, 555, 666, 777]
];

function multipleArraySearch(arrays, searchVal) {
  for (let i = 0; i < arrays.length; i++) {
    let result = binarySearch(searchVal, arrays[i]);
    console.log(result);
  }
}

multipleArraySearch(arrays, 2);
```

Since we know binary search has a complexity of **O(log n)**, that remains in our equation, but now we multiply it by **n** (the size of our `arrays` input) since we will have to execute the binary search that many times.
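The nested-search example above is one way to arrive at O(n log n); sorting is the more classic one. Merge sort isn't covered in the original text, so treat this as a comparison sketch of my own: it splits the array log n times and does O(n) worth of merging at each level, giving O(n log n) overall.

```
function merge(left, right) {
  const result = [];
  // Repeatedly take the smaller head element from either half
  while (left.length && right.length) {
    result.push(left[0] <= right[0] ? left.shift() : right.shift());
  }
  return result.concat(left, right);
}

function mergeSort(array) {
  if (array.length <= 1) return array; // a single element is already sorted
  const middle = Math.floor(array.length / 2);
  // log n levels of splitting, O(n) merge work per level => O(n log n)
  return merge(mergeSort(array.slice(0, middle)), mergeSort(array.slice(middle)));
}

mergeSort([55, 22, 1, 28, 43, 22]); // [1, 22, 22, 28, 43, 55]
```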

Typically when you see nested loops in programming, there's a good chance the time complexity of the algorithm is O(n²), also known as "quadratic time". Below is an example of this in the form of a bubble sort algorithm.

```
const numberArray = [55, 22, 1, 28, 43, 22];

function swap(array, i, j) {
  let tmp = array[i];
  array[i] = array[j];
  array[j] = tmp;
}

function bubbleSort(array) {
  for (let i = 0; i < array.length; i++) {
    for (let j = 1; j < array.length; j++) {
      if (array[j - 1] > array[j]) {
        swap(array, j - 1, j);
      }
    }
  }
  return array;
}

bubbleSort(numberArray); // [1, 22, 22, 28, 43, 55]
```
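To see the quadratic growth directly, you can count how many comparisons those nested loops make. This instrumented version is a sketch of my own (it copies the input so the original array is untouched):

```
function bubbleSortComparisons(array) {
  let comparisons = 0;
  const copy = [...array]; // don't mutate the caller's array
  for (let i = 0; i < copy.length; i++) {
    for (let j = 1; j < copy.length; j++) {
      comparisons++; // the inner comparison runs n * (n - 1) times
      if (copy[j - 1] > copy[j]) {
        [copy[j - 1], copy[j]] = [copy[j], copy[j - 1]];
      }
    }
  }
  return comparisons;
}

bubbleSortComparisons(new Array(10).fill(0)); // 90  (10 * 9)
bubbleSortComparisons(new Array(20).fill(0)); // 380 — doubling n roughly quadruples the work
```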