Someone please explain Big-O/Big Omega/Big Theta notation to me

semtex

:)
Mainly big-O.

How the fukk do you find whether one function is big-O of another? I've heard so many different explanations; the only one that makes sense to me is the limit definition.
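(By the limit definition I mean: if lim(n→∞) f(n)/g(n) is finite, then f(n) = O(g(n)); if it's finite and nonzero, f(n) = Θ(g(n)); and if it's nonzero or infinite, f(n) = Ω(g(n)). E.g., for f(n) = 3n^2 + n and g(n) = n^2 the ratio tends to 3, so f = O(g), and in fact f = Θ(g).)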

:upsetfavre:
 

Regular Developer

Supporter
Hmmm, let's see if I remember how to do this. I'm kind of unsure what part you need help with, so bear with me as I just kind of put out information I remember and ask some questions.

So, when dealing with big O in software dev, you're describing the worst-case scenario as far as running time. And n is the variable that describes the size of the input, e.g. the number of elements you're processing.
Based on big O in math, the highest-degree term is what dictates how fast the running time grows, and after that you can just drop the constant coefficient that accompanies it.
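For example: if an algorithm takes T(n) = 3n^2 + 5n + 7 steps, the n^2 term dominates the 5n and the 7 as n grows, and the leading 3 is just a constant factor, so T(n) = O(n^2).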
Are you having trouble figuring out the actual big O of a function, or is it the part about comparing them? I'm a little unsure of the exact point of struggle at the moment.
 

semtex

:)
Finding big O of a function
 

Regular Developer

Supporter
man, that shyt used to get me too when I was learning it.

OK, so you have your function. For the most part, you're going to want to look at the loops, because that's where the variable amounts of work are.

So, let's say that all basic instructions take constant time, i.e. O(1). That's usually stuff like variable declarations, assignment of primitives, and arithmetic.

When you get to a for loop, n is going to be the number of iterations you do. So let's say you have an array. To measure a search function over an array, the worst-case scenario is that the value you want is at the end of the array and you're stepping from one element to the next.
(my pseudo code is not in any particular language, kinda forgot the fundamentals of Java and C++ so I'm just throwing some shyt together)
Code:
String[] exArray = /* shytload of words */;
// walk the array front to back; worst case checks all n elements
for (int i = 0; i < exArray.length; i++) {
    if (exArray[i].equals(someWord)) { return i; }
}
So looking at that: the array of strings exArray has a size of n, and each iteration does a comparison and (possibly) a return, both of which take constant time c to execute. With all this, the worst-case time this function takes to execute grows with n ( O(the function above) = O(n) ).

Makes sense? Would you like me to continue explaining, stop and explain something you don't understand in what I posted, or do you get everything already?
 

Spatial Paradox

All Star
Supporter
You actually asked about this in another thread: http://www.the-coli.com/higher-learning/4179-official-computer-science-software-engineering-thread.html#.T9VQ-Gh5k5Q

Here's what I posted:
I just finished a class on data structures that touched on algorithms. The book we used for the class explained Big-O notation as a way of describing the running time of an algorithm. So here are two examples, based on ones used in the book:

Suppose you're looking for the smallest value in an unordered array. You'd take the first element as the implied minimum and then compare it against every other element in the array. The running time, which I'll call T(n), would be n - 1 comparisons, where n is the number of elements in the array.

For n = 100, T(100) = 100 - 1 = 99
For n = 500, T(500) = 500 - 1 = 499
For n = 1000, T(1000) = 1000 - 1 = 999

In the above example, though the running time could be described precisely as making "one less than the number of array elements" comparisons, the term with the most influence on the run time is the number of elements in the array (n), so you'd say the minimum-finding algorithm above has O(n) running time.
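A minimal Java sketch of that scan (my own illustration, not the book's code):
Code:
// Find the smallest value in an unordered array (assumes a is non-empty).
// One pass over the array, n - 1 comparisons: O(n).
static int findMin(int[] a) {
    int min = a[0];                   // implied minimum
    for (int i = 1; i < a.length; i++) {
        if (a[i] < min) {
            min = a[i];               // found a smaller element
        }
    }
    return min;
}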

As another example, let's look at a selection sort on an unordered array. Here, the minimum value is found and then swapped with the value in the first position of the array. You'd then repeat, finding the next-smallest value and swapping it with the value in the second position, and so on until you get to the second-to-last element in the array.

The total number of comparisons across all the passes is (n - 1) + (n - 2) + ... + 3 + 2 + 1. This is equivalent to n(n - 1)/2, or n^2/2 - n/2, where n is the number of elements in the array. (That second expression comes from the formula for the sum of the first n - 1 positive integers: 1 + 2 + ... + (n - 1) = n(n - 1)/2. But this is just to illustrate how to express running time in terms of Big-O.)

For n = 100, T(100) = 100^2/2 - 100/2 = 10,000/2 - 100/2 = 5,000 - 50 = 4,950
For n = 1000, T(1000) = 1,000^2/2 - 1,000/2 = 1,000,000/2 - 1,000/2 = 500,000 - 500 = 499,500
For n = 10,000, T(10,000) = 10,000^2/2 - 10,000/2 = 100,000,000/2 - 10,000/2 = 50,000,000 - 5,000 = 49,995,000

For the above, the dominant term is n^2/2. Drop the constant coefficient (the 1/2) and the result is n^2. So you'd say the running time for the selection sort is O(n^2).
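A minimal Java sketch of that selection sort (again my own code, following the description above):
Code:
// Selection sort: find the minimum of the unsorted portion and swap it
// into place. The nested scans do about n^2/2 comparisons: O(n^2).
static void selectionSort(int[] a) {
    for (int i = 0; i < a.length - 1; i++) {
        int minIndex = i;
        for (int j = i + 1; j < a.length; j++) {
            if (a[j] < a[minIndex]) {
                minIndex = j;         // track the smallest remaining element
            }
        }
        int tmp = a[i];               // swap the minimum into position i
        a[i] = a[minIndex];
        a[minIndex] = tmp;
    }
}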
 

semtex

:)
So n is just the number of iterations before you get to the returned variable?

O(log n) also throws me off.
 
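The classic phone book example is the way to see O(log n): to find a name, you don't scan every page; you open to the middle, check which half the name falls in, and repeat on that half. Each step cuts the remaining pages in half, so for n pages it takes about log2(n) steps. A minimal binary search sketch in Java (my own illustration, assuming a sorted array):
Code:
// Binary search over a sorted array: each iteration halves the
// remaining range, so the worst case is O(log n) comparisons.
static int binarySearch(int[] a, int target) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2; // midpoint without overflow
        if (a[mid] == target) {
            return mid;               // found it
        } else if (a[mid] < target) {
            lo = mid + 1;             // look in the upper half
        } else {
            hi = mid - 1;             // look in the lower half
        }
    }
    return -1;                        // not present
}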

semtex

:)
Ohhh that phone book example cleared O(log n) up a bit. Still kinda fuzzy though :ohhh:

It deals with binary trees so it might be a while before I get a handle on that.
 

semtex

:)
I get it now, at least how it relates to programming.
 