In computer science, selection sort is a sorting algorithm, specifically an in-place comparison sort.
It has O(n^2) time complexity, making it inefficient on large lists, and it generally performs worse than the similar insertion sort.
Selection sort is noted for its simplicity, and it has performance advantages over more complicated algorithms in certain situations, particularly where auxiliary memory is limited.
Now, let's create a new function named SelectionSort which accepts one parameter.
The argument we pass to this function is an unordered list; the function performs the selection sort algorithm on that list and returns the sorted list to the caller.
The logic behind selection sort is that it iterates over the elements of the list, and on each pass the smallest element in the unsorted portion is swapped with the first unsorted element.
For this algorithm, we are going to use two for loops: an outer loop traversing each position from index 0 to n-1.
A nested loop is then used to compare the current minimum against each remaining element, up to the last element, on every iteration.
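The source code for this function isn't reproduced in this excerpt, but a minimal sketch matching the description above (the sample list is an assumption) could look like this:

```python
def SelectionSort(unordered_list):
    # Outer loop: traverse each position from index 0 to n-1.
    for i in range(len(unordered_list) - 1):
        # Assume the current position holds the smallest remaining value.
        min_index = i
        # Nested loop: compare against every remaining element.
        for j in range(i + 1, len(unordered_list)):
            if unordered_list[j] < unordered_list[min_index]:
                min_index = j
        # Swap the smallest element found into the current position.
        unordered_list[i], unordered_list[min_index] = (
            unordered_list[min_index], unordered_list[i])
    return unordered_list

print(SelectionSort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```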
The Complexity of Selection Sort
The time efficiency of selection sort is quadratic, so there are a number of sorting techniques with better time complexity than selection sort.
One thing that distinguishes selection sort from other sorting algorithms is that it makes the minimum possible number of swaps: n − 1 in the worst case.
Now let's define the main condition, where we define the unordered list that needs to be passed to the function we created above.
Pass the user-defined list to the function and print the returned sorted list using the print statement.
Source Code
Output
Insertion sort is good for collections that are very small or nearly sorted. Otherwise, it's not a good sorting algorithm: it moves data around too much.
Each time an insertion is made, all elements in greater positions are shifted.
It is much less efficient on large lists than more advanced algorithms such as quicksort, heapsort, or merge sort.
1. Simple implementation.
2. Efficient for small data sets, much like other quadratic sorting algorithms such as bubble sort and selection sort.
3. Adaptive, that is, efficient for data sets that are already substantially sorted.
4. Stable sorting algorithm.
5. In-place: only O(1) auxiliary space is required.
Now, let's define a new function named insertion_sort which accepts one parameter, the list we pass as an argument to this function.
We are going to use two for loops: one starting from index 1, and an inner loop that walks from the previous element of the list back toward index 0.
On each iteration we compare the element at the inner loop index with the element before it, swapping the smaller one leftward until it reaches its correct position.
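A minimal sketch of the described approach (the exact source isn't shown here, and the sample list is an assumption):

```python
def insertion_sort(values):
    # Outer loop starts from index 1; everything left of i is sorted.
    for i in range(1, len(values)):
        # Inner loop walks from the new element back toward index 0,
        # swapping it leftward until it is in position.
        for j in range(i, 0, -1):
            if values[j] < values[j - 1]:
                values[j], values[j - 1] = values[j - 1], values[j]
            else:
                break  # already in place; the left side stays sorted
    return values

print(insertion_sort([9, 5, 1, 4, 3]))  # [1, 3, 4, 5, 9]
```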
Complexity
Insertion sort has a worst-case and average complexity of O(n^2), where n is the number of items being sorted.
Most practical sorting algorithms have substantially better worst-case or average complexity, often O(n log n).
When the list is already sorted (the best case), the complexity of insertion sort is only O(n).
Now, let's create a main condition where we call the above function and pass the list which needs to be sorted.
So let's manually define the list which we want to pass as an argument to the function.
Source Code
Output
Quicksort (sometimes called partition-exchange sort) is an efficient sorting algorithm, serving as a systematic method for placing the elements of a random-access file or an array in order.
Quicksort works by selecting an element called a pivot and splitting the array around that pivot.
We split the array such that all the elements in the left sub-array are less than the pivot and all the elements in the right sub-array are greater than the pivot.
The splitting continues until the array can no longer be broken into pieces. That's it. Quicksort is done.
1. Easy implementation.
2. High performance.
3. Cache Performance is higher than other sorting algorithms.
4. No extra memory.
Now, let's define a new function named quick_sort which accepts three parameters: a list, a starting index, and an ending index, passed as arguments to this function.
This function sorts an array or list using the quicksort algorithm in Python.
In this tutorial, we are going to provide two solutions: a straightforward one, and one that is more efficient than the first.
Solution 1
In the first solution, we first find the pivot's final position by using a partition function, and then we split the array around that pivot.
We then recursively call the quicksort function on each side, which adds some overhead in Python.
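The original source for Solution 1 isn't shown in this excerpt; a common partition-based version (this sketch assumes the Lomuto scheme with the last element as the pivot) might look like:

```python
def partition(values, start, end):
    # Use the last element as the pivot and move smaller items left of it.
    pivot = values[end]
    boundary = start - 1
    for i in range(start, end):
        if values[i] <= pivot:
            boundary += 1
            values[boundary], values[i] = values[i], values[boundary]
    # Place the pivot just after the last smaller element.
    values[boundary + 1], values[end] = values[end], values[boundary + 1]
    return boundary + 1

def quick_sort(values, start, end):
    if start < end:
        pivot_index = partition(values, start, end)
        # Recursively sort the elements on each side of the pivot.
        quick_sort(values, start, pivot_index - 1)
        quick_sort(values, pivot_index + 1, end)
    return values

numbers = [10, 80, 30, 90, 40, 50, 70]
print(quick_sort(numbers, 0, len(numbers) - 1))
```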
Solution 2
This second solution is much more efficient than the first one.
Complexity
The overall time complexity of quicksort is O(n log n) on average, and O(n^2) in the worst case.
The space complexity of quicksort is O(log n).
Define Main Condition
Now, let's create a main condition where we call the above functions and pass the list which needs to be sorted.
So let's manually define the list which we want to pass as an argument to the functions.
One more thing we want to do is measure the time each solution takes, to check which one works better.
Source Code
Output
In computer science, merge sort is an efficient, general-purpose, comparison-based sorting algorithm.
Most implementations produce a stable sort, which means that the order of equal elements is the same in the input and output.
It is a divide-and-conquer algorithm. In the divide and conquer paradigm, a problem is broken into pieces where each piece still retains all the properties of the larger problem – except its size.
2. Efficient for both small and large data sets.
3. Adaptive, that is, efficient for data sets that are already substantially sorted.
4. Stable Sorting Algorithm
Now, let's define a new function named merge_sort which accepts one parameter, the list we pass as an argument to this function.
This function sorts an array or list using the merge sort algorithm.
As we have discussed above, to solve the original problem, each piece is solved individually and then the pieces are merged back together.
For that, we are going to use recursive calls to a new function named merge which accepts two sorted arrays to form a single sort array.
Now in the merge_sort function, the base condition for our recursive call is that if the length of the array or list is 0 or 1, we simply return the array, since it is already sorted.
Otherwise, we divide the array into two equal halves and pass both halves to recursive calls of merge_sort.
At last, we call the merge function after the recursive calls to join the two sorted arrays.
We keep breaking the array down until the pieces are single elements. The merge function's job is simply to join the two arrays passed to it in sorted order and return the new array as a result.
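The merge and merge_sort functions described above might be sketched as follows; the exact source isn't shown here, so the sample list is an assumption:

```python
def merge(left, right):
    # Join two already-sorted lists into a single sorted list.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    # One of the halves may still have leftover elements.
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

def merge_sort(values):
    # Base case: a list of 0 or 1 elements is already sorted.
    if len(values) <= 1:
        return values
    middle = len(values) // 2
    # Divide into two halves, sort each recursively, then merge.
    return merge(merge_sort(values[:middle]), merge_sort(values[middle:]))

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```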
Complexity
The overall time complexity of merge sort is O(n log n).
The space complexity of merge sort is O(n).
This means the algorithm needs extra space, which may slow down operations for large data sets.
Now, let's create a main condition where we call the above function and pass the list which needs to be sorted.
So let's manually define the list which we want to pass as an argument to the function.
Source Code
Output
Bubble Sort Algorithm.
The goals: to write a program for bubble sort, and to get an understanding of how bubble sort works.
This is a Python program implementing the bubble sort algorithm.
It is written in such a way that it takes user input.
First, a function is written to perform bubble sort; then, outside the function, user input is taken.
Start with the first element and compare the current element with the next element of the array. If the current element is greater than the next element, swap them. If the current element is less than the next element, move on to the next element. Keep comparing each pair of adjacent elements in this way; after the first iteration, the largest element of the array has moved to its final position. Repeat these passes until the array is sorted.
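A sketch of such a program (the bubble_sort name and sample list are assumptions; the user-input line is shown as a comment so the example runs on its own):

```python
def bubble_sort(values):
    n = len(values)
    for _ in range(n - 1):
        swapped = False
        # Compare each pair of adjacent elements, left to right.
        for i in range(n - 1):
            if values[i] > values[i + 1]:
                values[i], values[i + 1] = values[i + 1], values[i]
                swapped = True
        # If no swap happened on this pass, the list is already sorted.
        if not swapped:
            break
    return values

# The program described above reads the numbers from user input, e.g.:
# numbers = [int(x) for x in input("Enter numbers: ").split()]
print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```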
Just clone the repository.
Insertion Sort
When would we use recursive solutions? Tree traversals and quick sort are instances where recursion creates an elegant solution that would be harder to achieve iteratively.
Divide and conquer is when we take a problem, split it into the same type of sub-problem, and run the algorithm on those sub-problems.
If we have an algorithm that runs on a list, we could break the list into smaller lists and run the algorithm on those smaller lists. We will divide the data into more manageable pieces.
We break down our algorithm problems into base cases
-- the smallest possible size of data we can run our algorithm upon to determine the basic way our algorithm should work.
These solutions can give us better time complexity solutions; however, they wouldn't work if a portion of the algorithm's data is dependent upon the rest. If we broke the list into two halves, and one half is required to work on the other half, we could not use recursion.
Recursion requires independent sub-data.
Let's apply recursion to breaking down what a list is. The sum of a list is equal to the first element plus the sum of the rest of the list. We could write that as the add_list function found in this file:
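The file itself isn't reproduced here; a version of add_list consistent with the description might be:

```python
def add_list(lst):
    # Base case: the sum of an empty list is 0.
    if not lst:
        return 0
    # The sum is the first element plus the sum of the rest of the list.
    return lst[0] + add_list(lst[1:])

print(add_list([1, 2, 3, 4]))  # 10
```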
This should print 10, or the sum of the items in our list.
On each pass, the add_list function takes the first item and adds it to the sum of the rest of the list, found by calling add_list on the remainder of the list. It loops through the rest of the list in this manner, only adding the elements together once the final element is reached.
Finding a sum like this is not the most time efficient -- it would be better to do iteratively. But this allows us to understand how recursion works.
Often, iterative solutions are easier to read and more performant.
If we add a print statement into the add_list function:
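For example (a sketch consistent with the terminal output shown below; the exact file isn't reproduced here):

```python
def add_list(lst):
    if not lst:
        return 0
    # Show what is being added at each recursive step.
    print(f"Add {lst[0]} to the sum of {lst[1:]}")
    return lst[0] + add_list(lst[1:])

print(add_list([1, 2, 3, 4]))
```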
The terminal would print:
Add 1 to the sum of [2, 3, 4]
Add 2 to the sum of [3, 4]
Add 3 to the sum of [4]
Add 4 to the sum of []
10
This helps us understand what is happening at each recursive step.
Our base case is an empty list, which we handle at the beginning of our function by returning 0. Filling that in gives us our first concrete return value, so that each previous add_list call can be resolved based on the sum of the next.
Recursion uses a lot of memory: each recursive call allocates a stack frame. Python has a preset recursion limit so that, if we write an infinitely recursive algorithm, it errors out rather than forcing us to reboot the computer to end it.
With Big O, we're interested in the number of times we have to run an operation. add_list just runs basic addition, which is a single operation, and it is called once for every element in the list, so this is O(n).
Quick sort is a great example of a problem where a recursive solution is appropriate. We need to include a base case and have the function call itself.
Quick sort sorts a list using partitioning. The partitioning process involves splitting up the data around a pivot.
Suppose our list is [5, 3, 9, 4, 8, 1, 7].
We'll choose a pivot point to split the list. Let's say we choose 5 as the pivot. One list will contain all the numbers less than 5, and the other will contain all the numbers greater than or equal to 5. This results in two lists like so:
[3, 4, 1] 5 [9, 8, 7]
5 is already sorted into the correct place where it needs to be. All the numbers to the right and left of it are in the area they need to be, just not yet sorted.
This process is partitioning.
Our next step is to repeat this process until we hit our base case, which is an empty list or a list with just one element. When everything is down to one element lists, then we know they are properly sorted.
3 and 9 are our next pivots:
[1] 3 [4] 5 [8, 7] 9
Next, 8 is our pivot:
[1] 3 [4] 5 [7] 8 [] 9
1 3 4 5 7 8 9
The number of sorted items roughly doubles with each pass through this algorithm, and we have to make one complete pass through the data on each loop. That means each pass is O(n), and we have to make log n passes.
It takes O(log n) passes, with each pass taking O(n), so the average case is O(n log n), the fastest we can aim for in a general-purpose comparison sort.
What would be a bad case for quick sort?
[1, 2, 3, 4, 5, 6, 7]
If we look at the order of this on each loop:
[] 1 [2, 3, 4, 5, 6, 7]
1 [] 2 [3, 4, 5, 6, 7]
1 2 [] 3 [4, 5, 6, 7]
1 2 3 [] 4 [5, 6, 7]
1 2 3 4 [] 5 [6, 7]
1 2 3 4 5 [] 6 [7]
1 2 3 4 5 6 7
This took a full 7 passes, for 7 elements, because there was only one sorted item being added with each pass.
Already sorted lists are the worst-case scenario, which results in an order of O(n^2).
Quick sort shines when the pivot chosen is roughly the median value of the list. However, with the traditional quick sort we can't always choose the median value.
We could use quick select to find the median at each step -- but quick select itself takes O(n) time on average, which slows down every pass of our algorithm.
If we choose a random pivot point, we generally do not pick the worst case pivot with each pass. Randomly selecting a pivot point results in the most time efficient average.
If we were to write out our quick sort algorithm in a basic way, it would look something like this:
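The code isn't shown in this excerpt; a basic sketch (assuming the first element is chosen as the pivot, and returning a new list rather than sorting in place) could be:

```python
def quick_sort(lst):
    # Base case: an empty or single-element list is already sorted.
    if len(lst) <= 1:
        return lst
    pivot = lst[0]
    # Split the rest of the list around the pivot.
    left = [el for el in lst[1:] if el < pivot]
    right = [el for el in lst[1:] if el >= pivot]
    # Sort each side recursively and join the results around the pivot.
    return quick_sort(left) + [pivot] + quick_sort(right)

print(quick_sort([5, 3, 9, 4, 8, 1, 7]))  # [1, 3, 4, 5, 7, 8, 9]
```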
Let's define our partition function next:
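A partition helper matching this idea might look like the following (the names are assumptions, since the original file isn't shown):

```python
def partition(lst, pivot):
    # Everything less than the pivot goes left; the rest goes right.
    left = [el for el in lst if el < pivot]
    right = [el for el in lst if el >= pivot]
    return left, right

print(partition([7, 3, 8, 9, 2], 5))  # ([3, 2], [7, 8, 9])
```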
Let's test out a bunch of possible cases like so:
We already know that we have not set up our algorithm to handle edge cases, like an input that is not a list, or a list full of strings, etc.
Our terminal returns back:
So we can see that it handles all of our tests well.
It's important to analyze what you know about your incoming data before choosing a type of algorithm. If you know that your list is almost completely sorted, bubble sort would handle that the quickest. If the list is completely garbled, quick sort would be best.
Even when we aren't handling sort, we need to customize our algorithmic choices to the data anticipated, especially when dealing with large sets of data where time performance can have a huge impact.
The quick sort function we wrote is not an in-place solution. When we sort a list with it, we're actually returning an entirely new list, not the same list we were given.
This isn't time or space efficient because it takes time and data to copy lists over to newly allocated spots in memory. It would be more efficient to move items around within the original given list.
This is in-place sorting -- using the original list to sort items within it and returning that same original list, now sorted. We mutate the original list rather than making new lists.
To do in-place sorting, we need to be able to pass into the function the bounds of the current part of the list that we're working on, to ensure that we are only working on certain segments of the list at a time.
We can give it a low index, and a high index, to indicate the start and stop points of the section of the list to work on.
As we keep going, the low and high indices will change. Our base case should now change to where if the low and high are the same, then our list is sorted.
Let's try it:
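The file isn't reproduced here; a sketch consistent with the two-step swap the text explains next (the list, low, high, and pivot_index names are assumptions):

```python
def quick_sort(lst, low, high):
    # Base case: a section of one element (or none) is already sorted.
    if low >= high:
        return lst
    pivot_index = low
    for i in range(low + 1, high + 1):
        if lst[i] < lst[pivot_index]:
            # Step 1: swap the smaller item with the item one beyond the pivot.
            lst[i], lst[pivot_index + 1] = lst[pivot_index + 1], lst[i]
            # Step 2: swap the pivot with the item now just after it,
            # then advance the pivot index.
            lst[pivot_index], lst[pivot_index + 1] = (
                lst[pivot_index + 1], lst[pivot_index])
            pivot_index += 1
    # Sort the sections on either side of the settled pivot.
    quick_sort(lst, low, pivot_index - 1)
    quick_sort(lst, pivot_index + 1, high)
    return lst

numbers = [5, 3, 9, 4, 8, 1, 7]
quick_sort(numbers, 0, len(numbers) - 1)
print(numbers)  # [1, 3, 4, 5, 7, 8, 9]
```

Note that the function mutates the list it is given and returns that same list, rather than allocating new lists.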
We're iterating through the list and checking whether the item at list[i] is less than the item at list[pivot_index]. If it is, then we need to swap these items.
That has to happen in two steps. First we swap the item at i with the item one beyond the pivot index. Then we swap the pivot with the item after the pivot.
Then we update the pivot index to search for the next item to sort in the array.
In order to call this function without passing in three parameters, we can write a short helper function:
Now we can run this function and it sorts our lists without allocating extra memory.
Let's add some print statements just to see exactly what is happening at each step on one of the sorts:
This helps us visualize why we go through each swapping step and how the list is slowly being sorted, and split apart into smaller sorting lists.
VISUALIZED
![bubble sort](https://s3-us-west-1.amazonaws.com/appacademy-open-assets/data_structures_algorithms/naive_sorting_algorithms/bubble_sort/images/BubbleSort.gif)
This project contains a skeleton for you to implement Bubble Sort. In the file lib/bubble_sort.js, you should implement the Bubble Sort. This is a description of how the Bubble Sort works (and is also in the code file).
Clone the project from https://github.com/appacademy-starters/algorithms-bubble-sort-starter.
cd into the project folder
npm install to install dependencies in the project root directory
npm test to run the specs
You can view the test cases in /test/test.js. Your job is to write code in the /lib/bubble_sort.js that implements the Bubble Sort.
This sorting algorithm is a comparison-based algorithm in which each pair of adjacent elements is compared, and the elements are swapped if they are not in order. This algorithm is not suitable for large data sets, as its average and worst-case complexity are O(n^2), where n is the number of items.
How does Bubble Sort work? We take an unsorted array for our example. Bubble sort takes O(n^2) time, so we're keeping the example short and precise.
Bubble sort starts with the very first two elements, comparing them to check which one is greater.
In this case, value 33 is greater than 14, so it is already in the sorted location. Next, we compare 33 with 27.
We find that 27 is smaller than 33, so these two values must be swapped.
The new array should look like this −
Next we compare 33 and 35. We find that both are already in sorted positions.
Then we move to the next two values, 35 and 10.
We know that 10 is smaller than 35. Hence they are not sorted.
We swap these values, and find that we have reached the end of the array. After one iteration, the array should look like this −
To be precise, we are now showing how the array should look after each iteration. After the second iteration, it should look like this −
Notice that after each iteration, at least one value moves to the end.
And when no swap is required, bubble sort learns that the array is completely sorted.
Now we should look into some practical aspects of bubble sort.
Algorithm
We assume list is an array of n elements. We further assume that the swap function swaps the values of the given array elements.

begin BubbleSort(list)
   for all elements of list
      if list[i] > list[i+1]
         swap(list[i], list[i+1])
      end if
   end for
   return list
end BubbleSort

Pseudocode
We observe in the algorithm that Bubble Sort compares each pair of array elements unless the whole array is completely sorted in ascending order. This may cause a few complexity issues, like: what if the array needs no more swapping, as all the elements are already ascending?
To ease out the issue, we use one flag variable, swapped, which will help us see whether any swap has happened. If no swap has occurred, i.e. the array requires no more processing to be sorted, it will come out of the loop.
Pseudocode of BubbleSort algorithm can be written as follows −
procedure bubbleSort( list : array of items )
   loop = list.count;
   for i = 0 to loop-1 do:
      swapped = false
      for j = 0 to loop-1 do:
         if list[j] > list[j+1] then
            swap( list[j], list[j+1] )
            swapped = true
         end if
      end for
      if (not swapped) then
         break
      end if
   end for
   return list
end procedure
The algorithm bubbles up
As you progress through the algorithms and data structures of this course, you'll eventually notice that there are some recurring funny terms. "Bubbling up" is one of those terms.
When someone writes that an item in a collection "bubbles up," you should infer that:
The item is in motion
The item is moving in some direction
The item has some final resting destination
When invoking Bubble Sort to sort an array of integers in ascending order, the largest integers will "bubble up" to the "top" (the end) of the array, one at a time.
The largest values are captured, put into motion in the direction defined by the desired sort (ascending right now), and traverse the array until they arrive at their end destination.
How does a pass of Bubble Sort work?
Bubble sort works by performing multiple passes to move elements closer to their final positions. A single pass will iterate through the entire array once.
A pass works by scanning the array from left to right, two elements at a time, and checking if they are ordered correctly. To be ordered correctly, the first element must be less than or equal to the second. If the two elements are not ordered properly, we swap them to correct their order. Afterwards, we scan the next two numbers and repeat this process until we have gone through the entire array.
See one pass of bubble sort on the array [2, 8, 5, 2, 6]. On each step the elements currently being scanned are in bold.
**2, 8**, 5, 2, 6 - ordered, so leave them alone
2, **8, 5**, 2, 6 - not ordered, so swap
2, 5, **8, 2**, 6 - not ordered, so swap
2, 5, 2, **8, 6** - not ordered, so swap
2, 5, 2, 6, 8 - the first pass is complete
Because at least one swap occurred, the algorithm knows that the array wasn't sorted. It needs to make another pass. It starts over again at the first entry and goes to the next-to-last entry doing the comparisons, again. It only needs to go to the next-to-last entry because the previous "bubbling" put the largest entry in the last position.
**2, 5**, 2, 6, 8 - ordered, so leave them alone
2, **5, 2**, 6, 8 - not ordered, so swap
2, 2, **5, 6**, 8 - ordered, so leave them alone
2, 2, 5, 6, 8 - the second pass is complete
Because at least one swap occurred, the algorithm knows that the array wasn't sorted. Now, it can bubble from the first position to the last-2 position because the last two values are sorted.
**2, 2**, 5, 6, 8 - ordered, so leave them alone
2, **2, 5**, 6, 8 - ordered, so leave them alone
2, 2, 5, 6, 8 - the third pass is complete
No swap occurred, so the Bubble Sort stops.
Ending the Bubble Sort
During Bubble Sort, you can tell if the array is in sorted order by checking whether a swap was made during the previous pass. If no swap was performed during the previous pass, then the array must be totally sorted and the algorithm can stop.
You're probably wondering why that makes sense. Recall that a pass of Bubble Sort checks if any adjacent elements are out of order and swaps them if they are. If we don't make any swaps during a pass, then everything must be already in order, so our job is done.
This project contains a skeleton for you to implement Selection Sort. In the file lib/selection_sort.js, you should implement the Selection Sort. You can use the same swap function from Bubble Sort; however, try to implement it on your own, first.
The algorithm can be summarized as the following:
Set MIN to location 0
Search the minimum element in the list
Swap with value at location MIN
Increment MIN to point to next element
Repeat until list is sorted
This is a description of how the Selection Sort works (and is also in the code file).
Clone the project from https://github.com/appacademy-starters/algorithms-selection-sort-starter.
cd into the project folder
npm install to install dependencies in the project root directory
npm test to run the specs
You can view the test cases in /test/test.js. Your job is to write code in the /lib/selection_sort.js that implements the Selection Sort.
Starting from the beginning of the list:
1. Find the minimum unsorted element
2. Swap it with the current index (front of the unsorted list)
3. Move to the next index and repeat from step 1
4. Repeat until at the end of the list
The algorithm: select the next smallest
Selection sort works by maintaining a sorted region on the left side of the input array; this sorted region will grow by one element with every "pass" of the algorithm. A single "pass" of selection sort will select the next smallest element of the unsorted region of the array and move it to the sorted region. Because a single pass of selection sort moves an element of the unsorted region into the sorted region, each pass shrinks the unsorted region by one element whilst growing the sorted region by one element. Selection sort is complete when the sorted region spans the entire array and the unsorted region is empty!
This project contains a skeleton for you to implement Insertion Sort. In the file lib/insertion_sort.js, you should implement the Insertion Sort.
The algorithm can be summarized as the following:
If it is the first element, it is already sorted. return 1;
Pick next element
Compare with all elements in the sorted sub-list
Shift all the elements in the sorted sub-list that are greater than the value to be sorted
Insert the value
Repeat until list is sorted
This is a description of how the Insertion Sort works (and is also in the code file).
Clone the project from https://github.com/appacademy-starters/algorithms-insertion-sort-starter.
cd into the project folder
npm install to install dependencies in the project root directory
npm test to run the specs
You can view the test cases in /test/test.js. Your job is to write code in the /lib/insertion_sort.js that implements the Insertion Sort.
The algorithm: insert into the sorted region
Insertion Sort is similar to Selection Sort in that it gradually builds up a larger and larger sorted region at the left-most end of the array.
However, Insertion Sort differs from Selection Sort because this algorithm does not focus on searching for the right element to place (the next smallest in our Selection Sort) on each pass through the array. Instead, it focuses on sorting each element in the order they appear from left to right, regardless of their value, and inserting them in the most appropriate position in the sorted region.
This project contains a skeleton for you to implement Merge Sort. In the file lib/merge_sort.js, you should implement the Merge Sort.
The algorithm can be summarized as the following:
if there is only one element in the list, it is already sorted. return that array.
otherwise, divide the list recursively into two halves until it can no more be divided.
merge the smaller lists into a new list in sorted order.
This is a description of how the Merge Sort works (and is also in the code file).
Clone the project from https://github.com/appacademy-starters/algorithms-merge-sort-starter.
cd into the project folder
npm install to install dependencies in the project root directory
npm test to run the specs
You can view the test cases in /test/test.js. Your job is to write code in the /lib/merge_sort.js that implements the Merge Sort.
it is easy to merge elements of two sorted arrays into a single sorted array
you can consider an array containing only a single element as already trivially sorted
you can also consider an empty array as trivially sorted

The algorithm: divide and conquer
You're going to need a helper function that solves the first major point from above. How might you merge two sorted arrays? In other words, you want a merge function that will behave like so:

let arr1 = [1, 5, 10, 15];
let arr2 = [0, 2, 3, 7, 10];
merge(arr1, arr2); // => [0, 1, 2, 3, 5, 7, 10, 10, 15]

Once you have that, you get to the "divide and conquer" bit.
The algorithm for merge sort is actually really simple.
if there is only one element in the list, it is already sorted. return that array.
otherwise, divide the list recursively into two halves until it can no more be divided.
merge the smaller lists into a new list in sorted order.
This project contains a skeleton for you to implement Quick Sort. In the file lib/quick_sort.js, you should implement the Quick Sort. This is a description of how the Quick Sort works (and is also in the code file).
Clone the project from https://github.com/appacademy-starters/algorithms-quick-sort-starter.
cd into the project folder
npm install to install dependencies in the project root directory
npm test to run the specs
You can view the test cases in /test/test.js. Your job is to write code in the /lib/quick_sort.js that implements the Quick Sort.
it is easy to sort elements of an array relative to a particular target value
an array of 0 or 1 elements is already trivially sorted

Regarding that first point: for example, given [7, 3, 8, 9, 2] and a target of 5, we know [3, 2] are the numbers less than 5 and [7, 8, 9] are the numbers greater than 5.
How does it work?
In general, the strategy is to divide the input array into two subarrays: one with the smaller elements, and one with the larger elements. Then, it recursively operates on the two new subarrays. It continues this process of dividing into smaller arrays until it reaches subarrays of length 1 or smaller. As you have seen with Merge Sort, arrays of such length are automatically sorted.
The steps, when discussed on a high level, are simple:
1. choose an element called "the pivot"; how that's done is up to the implementation
2. take two variables to point left and right of the list, excluding the pivot
3. left points to the low index
4. right points to the high index
5. while the value at left is less than the pivot, move right
6. while the value at right is greater than the pivot, move left
7. if both step 5 and step 6 do not match, swap left and right
8. if left ≥ right, the point where they met is the new pivot
9. repeat, recursively calling this for smaller and smaller arrays
The algorithm: divide and conquer
Formally, we want to partition elements of an array relative to a pivot value. That is, we want elements less than the pivot to be separated from elements that are greater than or equal to the pivot. Our goal is to create a function with this behavior:

let arr = [7, 3, 8, 9, 2];
partition(arr, 5); // => [[3, 2], [7, 8, 9]]

Seems simple enough! Let's implement it in JavaScript:

```javascript
// nothing fancy
function partition(array, pivot) {
  let left = [];
  let right = [];

  array.forEach(el => {
    if (el < pivot) {
      left.push(el);
    } else {
      right.push(el);
    }
  });

  return [ left, right ];
}

// if you fancy
function partition(array, pivot) {
  let left = array.filter(el => el < pivot);
  let right = array.filter(el => el >= pivot);
  return [ left, right ];
}
```

You don't have to use an explicit partition helper function in your Quick Sort implementation; however, we will borrow heavily from this pattern.
This project contains a skeleton for you to implement Binary Search. In the file lib/binary_search.js, you should implement the Binary Search and its cousin Binary Search Index.
The Binary Search algorithm can be summarized as the following:
If the array is empty, then return false
Check the value in the middle of the array against the target value
If the value is equal to the target value, then return true
If the value is less than the target value, then return the binary search on the right half of the array for the target
If the value is greater than the target value, then return the binary search on the left half of the array for the target
This is a description of how the Binary Search works (and is also in the code file).
Then you need to adapt that to return the index of the found item rather than a Boolean value. The pseudocode is also in the code file.
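The project itself is in JavaScript, but the logic of both functions can be sketched in Python (the names and return conventions here are assumptions, not the starter's actual spec):

```python
def binary_search(sorted_list, target):
    # Base case: an empty list cannot contain the target.
    if not sorted_list:
        return False
    mid = len(sorted_list) // 2
    if sorted_list[mid] == target:
        return True
    if sorted_list[mid] < target:
        # Middle value is too small: search the right half.
        return binary_search(sorted_list[mid + 1:], target)
    # Middle value is too large: search the left half.
    return binary_search(sorted_list[:mid], target)

def binary_search_index(sorted_list, target, low=0, high=None):
    # The index variant keeps bounds instead of slicing, so the original
    # indices survive the recursion; returns -1 when the target is absent.
    if high is None:
        high = len(sorted_list) - 1
    if low > high:
        return -1
    mid = (low + high) // 2
    if sorted_list[mid] == target:
        return mid
    if sorted_list[mid] < target:
        return binary_search_index(sorted_list, target, mid + 1, high)
    return binary_search_index(sorted_list, target, low, mid - 1)

print(binary_search([1, 3, 5, 7, 9], 7))        # True
print(binary_search_index([1, 3, 5, 7, 9], 7))  # 3
```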
Clone the project from https://github.com/appacademy-starters/algorithms-binary-search-starter.
cd into the project folder
npm install to install dependencies in the project root directory
npm test to run the specs
You can view the test cases in /test/test.js. Your job is to write code in the /lib/binary_search.js that implements the Binary Search and Binary Search Index.