Big-O notation measures the complexity of an algorithm, or the order of magnitude of the number of operations required to perform a function. In other words, it describes how the number of processing steps increases as the number of items being processed increases. Processing does not necessarily increase at a constant rate: if processing one item takes 2 seconds, processing 100 items does not necessarily take 200 seconds (2 * 100).
Big-O notation is a shared language for discussing an underlying reality: algorithms need to process efficiently. Without knowing Big-O notation, an experienced programmer can look at an algorithm and understand that moving a step outside of a loop reduces the processing steps by the number of loops the algorithm performs, less one. Instead of processing once for each item in the list, it processes that step only one time. For me, this underlying understanding of processing steps is more important than reciting Big-O notation, but we must follow industry standards when applying for jobs. We need a way to communicate about the complexity of an algorithm with other programmers. So I conform.
Common Big-O notation orders:
O(1) represents a function that runs in constant time.
The time to run the algorithm doesn't change when the number of items being processed changes. Whether running 1 or 1,000 items, the time remains constant. This is rare. Looking up an element in an array by index usually takes the same amount of time no matter how many items are in the array, so it is said to run in constant time.
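A minimal sketch of the idea: indexing into a Python list is a single step regardless of the list's size (the function name here is just for illustration).

```python
def constant_time_lookup(items, index):
    # One indexed access: a single step whether the list
    # holds ten elements or a million -- O(1).
    return items[index]

small = list(range(10))
large = list(range(1_000_000))
print(constant_time_lookup(small, 5))        # 5
print(constant_time_lookup(large, 999_999))  # 999999
```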
O(N) represents a function that runs in linear time.
The number of operations is directly proportional to the number of items processed. For instance, if it takes 3 minutes to process 1 item, it will take 3 x 10 = 30 minutes to process 10 items.
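A worst-case linear scan makes this concrete: searching for a value that isn't in the list touches every element once, so ten times the items means ten times the steps (the step counter is added only for illustration).

```python
def linear_scan_steps(items, target):
    # Worst case touches every element once: N items -> N steps, O(N).
    steps = 0
    for item in items:
        steps += 1
        if item == target:
            break
    return steps

print(linear_scan_steps(list(range(10)), -1))   # 10
print(linear_scan_steps(list(range(100)), -1))  # 100
```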
O(N²) represents a function that runs in quadratic time.
The exact step count for the classic quadratic pattern is (N² - N) / 2, or in other words 0 + 1 + 2 + ... + (N - 1). In Big-O notation constants are dropped, so we have N² - N. As the number of items increases, subtracting N from the result becomes a negligible difference, so we drop that too and end up with N². So with 2 items, a function that runs in O(N²) takes roughly 4 processing steps, and with 10 items, roughly 100 processing steps. An example would be inserting each item being processed into the first position of an array, which requires moving every existing item in the array each time a new item is inserted. Functions running in quadratic time are typically not acceptable for interactive applications.
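The front-insertion example above can be sketched by counting the shifts directly; the totals match the (N² - N) / 2 formula.

```python
def front_insert_shifts(n):
    # Inserting each new item at index 0 shifts every existing
    # element right by one. Total shifts: 0 + 1 + ... + (N - 1),
    # i.e. (N^2 - N) / 2, which Big-O simplifies to O(N^2).
    shifts = 0
    items = []
    for value in range(n):
        shifts += len(items)  # every existing element moves over
        items.insert(0, value)
    return shifts

print(front_insert_shifts(2))   # 1  -- (4 - 2) / 2
print(front_insert_shifts(10))  # 45 -- (100 - 10) / 2
```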
O(log N) represents a function that runs in logarithmic time; O(N log N) adds a linear factor on top of that.
The running time of the algorithm increases with the logarithm of the number of items being processed. This generally means the algorithm repeatedly partitions the data set into smaller groups as it is processed, as with a balanced binary tree.
For instance, if you're asked to find the number I'm thinking of between 1 and 100, you could ask, "Is it greater than or less than 50?" I say greater. You ask, "Is it greater than or less than 75?" I say greater. You ask, "Is it greater than or less than 87?" I say greater... and this continues until you reach the number I am thinking of, which is 88. Each question cuts the remaining range roughly in half, which is far more efficient than asking, "Is it one? Is it two? Is it three?" and so on.
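The guessing game above is a binary search; a minimal sketch, with a question counter added for illustration:

```python
def guess_number(secret, low=1, high=100):
    # Halve the remaining range with each question -- O(log N)
    # questions to pin down one number out of N.
    questions = 0
    while low < high:
        questions += 1
        mid = (low + high) // 2
        if secret > mid:
            low = mid + 1
        else:
            high = mid
    return low, questions

print(guess_number(88))  # (88, 6): six questions, not eighty-eight
```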
O(N!) represents a function that runs in factorial time.
The factorial of 5 is 5 x 4 x 3 x 2 x 1 = 120. So if there were five items being processed and the algorithm runs in factorial time, it would require 120 processing steps. If an algorithm performs in factorial time, look for a better solution.
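One way factorial time arises is brute-forcing every ordering of the items; this sketch just counts the orderings to show how fast the work explodes.

```python
from itertools import permutations

def count_orderings(n):
    # Enumerating every ordering of n items is n! work -- O(N!).
    return sum(1 for _ in permutations(range(n)))

print(count_orderings(5))  # 120
print(count_orderings(8))  # 40320 -- growth quickly becomes impractical
```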
Related articles:
Determining Big O Notation
O (n log n)
No matter how much Big-O analysis you do, you'll still ultimately need to test your algorithms on large data sets, since a number of factors outside a single algorithm can affect performance in an application. As the following article states: "the only real way to know for sure is to actually try it with large data sets. There may be performance issues that are not taken into account by big-oh notation, eg, the effect on paging as virtual memory usage grows."
However, during the design process, Big-O notation provides a way to talk about the complexity of an algorithm and how efficient it is expected to be, in advance.