**How Does Binary Search Work?**

Binary search only works on sorted collections, so we will assume the customer orders have been sorted before the algorithm runs.

Binary search works by repeatedly dividing a sorted collection in half, discarding the half that cannot contain the target, and continuing to halve the remaining portion until either the target is found or only one item is left.

Say we want to find the element 13 in the following collection. The first row shows the indices and the second row the elements:

| Index | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
|---|---|---|---|---|---|---|---|---|---|---|
| Element | 10 | 11 | 12 | 13 | 14 | 15 | 16 | 17 | 18 | 19 |

We could check every index one by one until we find 13, but binary search offers a much faster approach. Here is how it works.

Let’s say we stand at element 12. Since the array is sorted, we know 13 must come after 12, so we can discard all the elements to the left of 12 (including 12 itself) and continue searching only to its right.

First, we initialize two pointers: one at the beginning of the array and one at the end.

Then we find the element in the middle between the two pointers.

If the element at the middle is bigger than our target, we move the end pointer to middle − 1.

If the element at the middle is smaller than our target, we move the start pointer to middle + 1.

We continue this way until the element at the middle is our target, or until the pointers cross, in which case we return none because the element is not in the collection.
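The steps above can be traced on the example array. The helper below, `binary_search_trace`, is written purely for this illustration; it records the (left, right, mid) positions visited while searching for 13:

```python
# Trace binary search, recording each (left, right, mid) triple visited.
def binary_search_trace(arr, target):
    steps = []
    left, right = 0, len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        steps.append((left, right, mid))
        if arr[mid] == target:
            return mid, steps          # found: return index and the trace
        elif arr[mid] < target:
            left = mid + 1             # discard the left half
        else:
            right = mid - 1            # discard the right half
    return -1, steps                   # not found

index, steps = binary_search_trace([10, 11, 12, 13, 14, 15, 16, 17, 18, 19], 13)
# mids visited: 4 (14 too big), 1 (11 too small), 2 (12 too small), 3 (13 found)
```

Only four comparisons are needed to locate 13 at index 3, instead of checking each of the ten elements in turn.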

**Deriving asymptotic analysis of the Binary Search Algorithm**

In this asymptotic analysis I will illustrate the best-case and worst-case scenarios, since for binary search the average case is equal to the worst case.

*Best case Omega notation (Ω)*

This is when the algorithm performs the minimum number of comparisons, that is, it finds the element on the first try. The item is returned after just one iteration, so the best case is Ω(1).

*Worst Case Big O notation (O)*

This happens when the item is only found once the collection can no longer be halved, or when it is not found at all.

Remember, binary search recursively halves the collection. Say, for instance, we have a collection of 128 items. How many times will we call the function?

When n = 128, we call function to reduce to 64

When n = 64, we call function to reduce to 32

When n = 32, we call function to reduce to 16

When n = 16, we call function to reduce to 8

When n = 8, we call function to reduce to 4

When n = 4, we call function to reduce to 2

When n = 2, we call function to reduce to 1

Here, we either return the found item at last, or null.

Thus, we have called the function 7 times for n = 128.
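The halving walkthrough above can be checked with a few lines of Python. `halving_count` is a small helper written for this illustration:

```python
# Count how many times a collection of size n can be halved
# before only one item remains.
def halving_count(n):
    count = 0
    while n > 1:
        n //= 2        # each call to binary search halves the collection
        count += 1
    return count

halving_count(128)  # 7 halvings: 128 -> 64 -> 32 -> 16 -> 8 -> 4 -> 2 -> 1
```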

Remember, 128 = 2^{7}

When we halved the initial list, we called the function 6 times for n = 64

Remember, 64 = 2^{6}

Taking n to be a power of 2 such that n = 2^{k}, and generalizing the pattern above, we see that the function is called k times while n is reduced to 1.

From the above:

2^{k} = n

Solving for k by taking the base-2 logarithm of both sides:

k = log_{2}(n)

Or simply:

k = log n

From the above, we can conclude that the time complexity of binary search is **O(log n)**, also known as logarithmic time.

This time complexity is very good because it means that if the number of orders were to double, the number of operations performed by our function would only increase by one. As the input size grows, the number of operations grows very slowly.
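A quick check with `math.log2` (assuming, as in the derivation above, that n is a power of two) shows each doubling of the input adding exactly one step:

```python
import math

# Doubling the input size adds only one step to the worst case.
for n in (128, 256, 512, 1024):
    print(n, int(math.log2(n)))
# 128 -> 7, 256 -> 8, 512 -> 9, 1024 -> 10
```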

What this means is that the time taken to search for the order will not be significantly impacted by the number of orders in our database.

O(log n) is even better than linear complexity, O(n), even though O(n) itself is pretty efficient.

**Pseudocode that defines the design of the algorithm**

```
Sort the array, or ensure the array is sorted
Let n = number of items
Let left = 0 and right = n - 1
While left <= right:
    Calculate mid = (left + right) / 2
    If array[mid] == target:
        Return mid
    If array[mid] > target:
        right = mid - 1
    If array[mid] < target:
        left = mid + 1
Return null if the item is not found
```

The above pseudocode is sufficient and can be translated into any other programming language.

Here is an implementation in Python to prove it:

```python
def binary_search(arr, target):
    left = 0
    right = len(arr) - 1
    while left <= right:
        mid = (left + right) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            left = mid + 1
        else:
            right = mid - 1
    return -1

array1 = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
num = 15
result = binary_search(array1, num)
if result == -1:
    print(f"We're sorry, the number {num} does not appear in our array")
else:
    print(f"The number {num} appears at index {result}")
```
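Worth noting: Python's standard library already ships a binary search in the `bisect` module. `bisect_left` returns the insertion point for the target, which equals its index when the target is present. A small wrapper, sketched here for comparison, gives the same interface as our function:

```python
from bisect import bisect_left

def bisect_search(arr, target):
    """Binary search built on the standard-library bisect module."""
    i = bisect_left(arr, target)   # leftmost position where target could be inserted
    if i < len(arr) and arr[i] == target:
        return i
    return -1                      # target not in the sorted array

print(bisect_search([10, 11, 12, 13, 14, 15, 16, 17, 18, 19], 15))  # 5
print(bisect_search([10, 11, 12, 13, 14, 15, 16, 17, 18, 19], 42))  # -1
```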