We want to determine the complexity of an algorithm that checks for duplicates in an array by considering its best- and worst-case performance. Find the number of operations performed in Snippet 1.4 for both the worst and best case. There is no need to work out the algorithmic complexity in big O notation. Assume that the inner loop results in eight operations every time it executes.
For the outer loop, assume four operations:
public boolean containsDuplicates(int[] numbers) {
    for (int i = 0; i < numbers.length; i++) {
        for (int j = 0; j < numbers.length; j++) {
            if (i != j && numbers[i] == numbers[j]) return true;
        }
    }
    return false;
}
To do this, we should perform the following steps:
- In the worst case, the array doesn't contain any duplicates, so we execute the inner loop n times (the array length) for each iteration of the outer loop.
- In the best case, the duplicate numbers are at the front of the input array, so we execute the inner loop only twice.
- The worst case is when the array doesn't contain duplicates and is of size n:
- For the outer loop, we have 4*n operations.
- For the inner loop, we have n*(n*8) operations, since the inner loop runs n times for each of the n outer iterations.
- In total, we have 4n + 8n² operations.
- In the best case, the duplicates are the first two items in the array:
- For the outer loop, we have 4 operations
- For the inner loop, we have 2*8 = 16 operations, since the inner loop executes only twice before reaching the duplicate at the second position in the array.
- In total, we have 4 + 16 = 20 operations.
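The counts above can be checked empirically. The following sketch instruments the loops from Snippet 1.4 to count inner-loop iterations; the class and method names are illustrative, and the per-iteration costs (four and eight) remain the exercise's assumptions rather than measured values:

```java
// Sketch: count inner-loop iterations to verify the best/worst case analysis.
public class DuplicateCost {

    // Returns the number of inner-loop iterations performed before the
    // duplicate check would return (or before both loops finish).
    static long countInnerIterations(int[] numbers) {
        long innerCount = 0;
        for (int i = 0; i < numbers.length; i++) {
            for (int j = 0; j < numbers.length; j++) {
                innerCount++;
                if (i != j && numbers[i] == numbers[j]) return innerCount;
            }
        }
        return innerCount;
    }

    public static void main(String[] args) {
        int n = 100;

        // Worst case: no duplicates, so both loops run to completion.
        int[] distinct = new int[n];
        for (int i = 0; i < n; i++) distinct[i] = i;
        System.out.println(countInnerIterations(distinct)); // n * n = 10000

        // Best case: the duplicate occupies the first two positions.
        int[] frontDup = distinct.clone();
        frontDup[1] = frontDup[0];
        System.out.println(countInnerIterations(frontDup)); // 2
    }
}
```

Multiplying these iteration counts by the assumed costs reproduces the 4n + 8n² and 20-operation totals derived above.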
We have seen how to analyze the number of operations performed by an algorithm and how big O notation can describe the best and worst case. We also discussed how the notation can be used to describe any resource usage. In the next section, we'll cover some basic rules for using the notation.
Notation Rules
There are two simple rules to follow when we want to express an algorithm using the big O notation. In this section, we will understand how to convert the expression from 4n + 8n² to the big O notation equivalent.
The first rule to follow is to drop any constants.
For example, 3n + 4 becomes n, and a single constant such as 5 becomes 1. If an algorithm simply has four constant instructions that don't depend on the input, we say the algorithm is O(1), also known as constant time complexity.
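As an illustration, here is a hypothetical method whose instruction count is fixed regardless of input size; the method name and the four-operation breakdown are assumptions for the example, not part of the exercise:

```java
// Sketch: a method whose work does not depend on the input size,
// so its cost is a constant number of operations, i.e. O(1).
public class ConstantTime {

    static int firstPlusLast(int[] numbers) {
        int first = numbers[0];                 // 1 operation
        int last = numbers[numbers.length - 1]; // 1 operation
        int sum = first + last;                 // 1 operation
        return sum;                             // 1 operation
    }

    public static void main(String[] args) {
        // Four operations whether the array has 3 elements or 3 million.
        System.out.println(firstPlusLast(new int[]{10, 20, 30})); // 40
    }
}
```

By the first rule, the constant 4 drops to 1, giving O(1).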
The second rule is to drop everything except the highest order.
Consider an algorithm that performs n + n² + n³ operations. The highest-order term is n³. Keeping only the highest order, we end up with a big O runtime complexity of O(n³).
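A quick sketch can show why dropping the lower-order terms is justified: as n grows, the n³ term accounts for nearly all of n + n² + n³. The class name is illustrative:

```java
// Sketch: the highest-order term dominates the total operation count,
// which is why big O keeps only that term.
public class HighestOrder {

    public static void main(String[] args) {
        for (long n : new long[]{10, 100, 1000}) {
            long total = n + n * n + n * n * n;
            double cubicShare = (double) (n * n * n) / total;
            // The n^3 share approaches 100% as n grows.
            System.out.printf("n=%d: n^3 is %.1f%% of the total%n",
                              n, cubicShare * 100);
        }
    }
}
```

Already at n = 100, the n³ term contributes over 99% of the operations, so describing the algorithm as O(n³) loses almost nothing.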