Summary

Section 20.1 Introduction

  • Searching data involves determining whether a search key (p. 842) is present in the data and, if so, returning its location.

  • Sorting (p. 842) involves arranging data into order.

  • One way to describe the efficiency of an algorithm is with Big O notation (p. 842), which indicates how much work an algorithm must do to solve a problem.

Section 20.2 Searching Algorithms

  • A key difference among searching algorithms is the amount of effort they require to return a result.

Section 20.2.1 Linear Search

  • The linear search (p. 843) compares each array element with a search key. Because the array is not in any particular order, it’s just as likely that the value will be found in the first element as in the last. On average, the algorithm compares the search key with half of the array’s elements. To determine that a value is not in the array, the algorithm must compare the search key with every element in the array. A Java sketch of this algorithm follows this section’s bullets.

  • Big O describes how an algorithm’s effort varies depending on the number of elements in the data.

  • An algorithm that’s O(1) has a constant runtime (p. 844)—the number of comparisons does not grow as the size of the array increases.

  • An O(n) algorithm is referred to as having a linear runtime (p. 845).

  • Big O highlights dominant factors and ignores terms that become unimportant for large values of n.

  • Big O notation represents the growth rate of algorithm runtimes, so constants are ignored.

  • The linear search algorithm runs in O(n) time.

  • In the worst case for linear search, every element must be checked to determine whether the search key is in the array. This occurs when the search key is the last element in the array or is not present at all.
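
  A minimal Java sketch of linear search over an int array (the class, method, and variable names here are illustrative, not the chapter’s listing):

      public class LinearSearchDemo {
         // compare the key with each element until a match is found
         // or the array is exhausted; return the index or -1
         public static int linearSearch(int[] data, int searchKey) {
            for (int index = 0; index < data.length; index++) {
               if (data[index] == searchKey) {
                  return index; // key found; return its location
               }
            }
            return -1; // key not found
         }

         public static void main(String[] args) {
            int[] data = {34, 56, 4, 10, 77, 51, 93, 30, 5, 52};
            System.out.println(linearSearch(data, 51)); // prints 5
            System.out.println(linearSearch(data, 22)); // prints -1
         }
      }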

Section 20.2.2 Binary Search

  • Binary search (p. 846) is more efficient than linear search, but it requires that the array first be sorted. This is worthwhile only when the array, once sorted, will be searched many times.

  • The first iteration of binary search tests the middle element of the array. If this is the search key, the algorithm returns its location. If the search key is less than the middle element, binary search continues with the first half of the array. If the search key is greater than the middle element, binary search continues with the second half. Each iteration tests the middle element of the remaining portion of the array and, if the element is not found, eliminates half of the remaining elements from consideration. A Java sketch follows this section’s bullets.

  • Binary search is more efficient than linear search, because with each comparison it eliminates from consideration half of the elements in the array.

  • Binary search runs in O(log n) (p. 850) time.

  • If the size of the array is doubled, binary search requires only one extra comparison to complete.
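
  A minimal Java sketch of binary search on an already-sorted int array (names are illustrative, not the chapter’s listing):

      public class BinarySearchDemo {
         // the array must already be sorted in ascending order
         public static int binarySearch(int[] data, int searchKey) {
            int low = 0;                // low end of the search area
            int high = data.length - 1; // high end of the search area

            while (low <= high) {
               int middle = (low + high) / 2; // index of the middle element

               if (searchKey == data[middle]) {
                  return middle; // key found; return its location
               }
               else if (searchKey < data[middle]) {
                  high = middle - 1; // continue with the first half
               }
               else {
                  low = middle + 1; // continue with the second half
               }
            }
            return -1; // key not found
         }

         public static void main(String[] args) {
            int[] data = {5, 10, 30, 34, 51, 52, 56, 77, 93}; // sorted
            System.out.println(binarySearch(data, 51)); // prints 4
            System.out.println(binarySearch(data, 22)); // prints -1
         }
      }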

Section 20.3.1 Insertion Sort

  • The first iteration of an insertion sort (p. 851) takes the second element and, if it’s less than the first element, swaps it with the first element (i.e., the algorithm inserts the second element in front of the first element). The second iteration looks at the third element and inserts it into the correct position with respect to the first two elements, so all three elements are in order. At the start of the ith iteration, the first i elements of the array are already sorted. For small arrays, insertion sort is acceptable, but for larger arrays it’s inefficient compared to other, more sophisticated sorting algorithms. A Java sketch follows these bullets.

  • The insertion sort algorithm runs in O(n²) time.
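
  A minimal Java sketch of insertion sort (names are illustrative, not the chapter’s listing):

      import java.util.Arrays;

      public class InsertionSortDemo {
         public static void insertionSort(int[] data) {
            // start at the second element; a one-element prefix is sorted
            for (int next = 1; next < data.length; next++) {
               int insert = data[next]; // value to insert into the sorted prefix
               int moveIndex = next;    // position where the value will be placed

               // shift larger sorted elements one position to the right
               while (moveIndex > 0 && data[moveIndex - 1] > insert) {
                  data[moveIndex] = data[moveIndex - 1];
                  moveIndex--;
               }
               data[moveIndex] = insert; // place the value in its proper spot
            }
         }

         public static void main(String[] args) {
            int[] data = {34, 56, 4, 10, 77, 51};
            insertionSort(data);
            System.out.println(Arrays.toString(data)); // [4, 10, 34, 51, 56, 77]
         }
      }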

Section 20.3.2 Selection Sort

  • The first iteration of selection sort (p. 853) selects the smallest element in the array and swaps it with the first element. The second iteration selects the second-smallest element (the smallest of the remaining elements) and swaps it with the second element. This continues until the last iteration selects the second-largest element and swaps it with the element at the second-to-last index, leaving the largest element in the last index. After the ith iteration, the smallest i elements are sorted into the first i positions of the array. A Java sketch follows these bullets.

  • The selection sort algorithm runs in O(n²) time.
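
  A minimal Java sketch of selection sort (names are illustrative, not the chapter’s listing):

      import java.util.Arrays;

      public class SelectionSortDemo {
         public static void selectionSort(int[] data) {
            for (int i = 0; i < data.length - 1; i++) {
               int smallest = i; // index of the smallest remaining element

               // find the smallest element in the unsorted portion
               for (int index = i + 1; index < data.length; index++) {
                  if (data[index] < data[smallest]) {
                     smallest = index;
                  }
               }

               // swap the smallest remaining element into position i
               int temp = data[i];
               data[i] = data[smallest];
               data[smallest] = temp;
            }
         }

         public static void main(String[] args) {
            int[] data = {34, 56, 4, 10, 77, 51};
            selectionSort(data);
            System.out.println(Arrays.toString(data)); // [4, 10, 34, 51, 56, 77]
         }
      }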

Section 20.3.3 Merge Sort (A Recursive Implementation)

  • Merge sort (p. 855) is faster, but more complex to implement, than insertion sort and selection sort.

  • The merge sort algorithm sorts an array by splitting it into two equal-sized sub-arrays, sorting each sub-array, then merging the sub-arrays into one larger array. A Java sketch follows this section’s bullets.

  • Merge sort’s base case is an array with one element, which is already sorted. The merge part of merge sort takes two sorted arrays (these could be one-element arrays) and combines them into one larger sorted array.

  • Merge sort performs the merge by looking at the first element in each array, which is also the smallest element in each. Merge sort takes the smallest of these and places it in the first element of the larger, sorted array. If there are still elements in the sub-array, merge sort looks at the second element in that sub-array (which is now the smallest element remaining) and compares it to the first element in the other sub-array. Merge sort continues this process until the larger array is filled.

  • In the worst case, the first call to merge sort has to make O(n) comparisons to fill the n slots in the final array.

  • The merging portion of the merge sort algorithm is performed on two sub-arrays, each of approximately size n/2. Creating each of these sorted sub-arrays requires up to n/2 – 1 comparisons, or O(n) comparisons in total for the level. This pattern continues: each level works on twice as many arrays, but each array is half the size of those at the previous level.

  • Similar to binary search, this halving results in log n levels, each level requiring O(n) comparisons, for a total efficiency of O(n log n) (p. 861).
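
  A minimal recursive Java sketch of merge sort (names are illustrative; the chapter’s listing differs in its details):

      import java.util.Arrays;

      public class MergeSortDemo {
         public static void mergeSort(int[] data) {
            if (data.length <= 1) {
               return; // base case: a one-element array is already sorted
            }

            // split into two roughly equal-sized sub-arrays
            int middle = data.length / 2;
            int[] left = Arrays.copyOfRange(data, 0, middle);
            int[] right = Arrays.copyOfRange(data, middle, data.length);

            mergeSort(left);  // sort the first half
            mergeSort(right); // sort the second half

            // merge the two sorted halves back into data, always taking
            // the smaller of the two front elements
            int leftIndex = 0, rightIndex = 0, target = 0;
            while (leftIndex < left.length && rightIndex < right.length) {
               if (left[leftIndex] <= right[rightIndex]) {
                  data[target++] = left[leftIndex++];
               }
               else {
                  data[target++] = right[rightIndex++];
               }
            }
            // copy whatever remains in the half that is not yet exhausted
            while (leftIndex < left.length) {
               data[target++] = left[leftIndex++];
            }
            while (rightIndex < right.length) {
               data[target++] = right[rightIndex++];
            }
         }

         public static void main(String[] args) {
            int[] data = {34, 56, 4, 10, 77, 51, 93, 30};
            mergeSort(data);
            System.out.println(Arrays.toString(data)); // [4, 10, 30, 34, 51, 56, 77, 93]
         }
      }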
