CST370 Module Seven

Non-Comparison Sorting, Dynamic Programming, Warshall's Algorithm, Floyd's Algorithm, Prim's Algorithm, and the Greedy Technique

What I learned this week:

This week we covered a variety of algorithms that all shared an underlying theme: a tradeoff between time complexity and space complexity.

Non-Comparison Sorting:
We began by covering two methods of sorting an array of integers without making comparisons among the values. Counting Sort's ideal use case is when the values fall within a set range: a smaller range is most ideal, while a more expansive range becomes less practical. The main method is to keep track of the frequency of each value in the range and then accumulate those counts into a distribution array. After doing so, we evaluate the initial input array starting at its last index and working backwards: for each value, we pull its position from the distribution array, place the value at that index in the output array, and decrement the corresponding entry in the distribution array. This continues until the resulting array is filled and we complete our pass at the input array's first index.
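
To make this concrete, here's a minimal C++ sketch of Counting Sort as described above, assuming non-negative integers with a known maximum value (the names countingSort and maxVal are mine, not from lecture):

```cpp
#include <iostream>
#include <vector>

// Counting sort for non-negative integers in [0, maxVal].
// A minimal sketch; assumes maxVal is known in advance.
std::vector<int> countingSort(const std::vector<int>& input, int maxVal) {
    // Count the frequency of each value in the range.
    std::vector<int> dist(maxVal + 1, 0);
    for (int v : input) dist[v]++;

    // Turn frequencies into a cumulative distribution:
    // dist[v] now holds the number of elements <= v.
    for (int v = 1; v <= maxVal; v++) dist[v] += dist[v - 1];

    // Scan the input from the last index backwards, placing each
    // value at its distribution position and decrementing the count.
    std::vector<int> output(input.size());
    for (int i = (int)input.size() - 1; i >= 0; i--)
        output[--dist[input[i]]] = input[i];
    return output;
}

int main() {
    std::vector<int> a = {4, 1, 3, 4, 2, 1};
    for (int v : countingSort(a, 4)) std::cout << v << ' ';
    std::cout << '\n';  // prints: 1 1 2 3 4 4
}
```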

The Radix Sort algorithm pulls from the Counting Sort algorithm, but instead of sorting whole values at once, it analyzes the array one digit at a time, starting at the least significant digit of each value. Since every digit falls in the range [0..9], you can analyze the frequency and distribution of each value's specified digit, similarly to Counting Sort. The resulting array then becomes the new input, and you repeat the process until finally reaching the most significant digit, at which point the final output is a sorted array of your initial values.
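
A sketch of Radix Sort in the same style, where each pass is a stable counting sort keyed on one decimal digit (the helper countingPass and the exp parameter are my naming, not from the course):

```cpp
#include <algorithm>
#include <iostream>
#include <vector>

// One stable counting-sort pass keyed on a single decimal digit.
// exp is 1 for the ones digit, 10 for the tens digit, and so on.
void countingPass(std::vector<int>& a, int exp) {
    std::vector<int> dist(10, 0), output(a.size());
    for (int v : a) dist[(v / exp) % 10]++;              // digit frequencies
    for (int d = 1; d < 10; d++) dist[d] += dist[d - 1]; // cumulative distribution
    for (int i = (int)a.size() - 1; i >= 0; i--)         // stable backward placement
        output[--dist[(a[i] / exp) % 10]] = a[i];
    a = output;  // this pass's output becomes the next pass's input
}

// Radix sort: repeat the pass from least to most significant digit.
void radixSort(std::vector<int>& a) {
    if (a.empty()) return;
    int maxVal = *std::max_element(a.begin(), a.end());
    for (int exp = 1; maxVal / exp > 0; exp *= 10)
        countingPass(a, exp);
}

int main() {
    std::vector<int> a = {170, 45, 75, 90, 802, 24, 2, 66};
    radixSort(a);
    for (int v : a) std::cout << v << ' ';  // 2 24 45 66 75 90 170 802
    std::cout << '\n';
}
```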

Dynamic Programming:
The main idea behind dynamic programming is solving a problem with overlapping subproblems. If you're able to set up a recurrence relation that describes the solution to a problem in terms of smaller subproblems, you can solve each subproblem once and record the solutions in a table that assists in solving the initial problem. A perfect example is the Fibonacci problem, which is naturally defined by a recurrence; instead of recomputing the same values recursively, you can sacrifice space complexity and generate a table to pull from. Another example is the Coin Collecting Problem, where you again sacrifice space complexity to generate a matrix whose values depend on which neighboring option holds the greater value. Afterwards, you can use the generated matrix to backtrack and determine the path that collects the most coins.
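
As a quick illustration of trading space for repeated work, here's a bottom-up Fibonacci sketch in C++ that fills a table of subproblem solutions so each value is computed exactly once:

```cpp
#include <iostream>
#include <vector>

// Bottom-up dynamic programming: record each subproblem's answer
// in a table instead of recomputing it through recursion.
long long fib(int n) {
    if (n < 2) return n;
    std::vector<long long> table(n + 1);
    table[0] = 0;
    table[1] = 1;
    for (int i = 2; i <= n; i++)
        table[i] = table[i - 1] + table[i - 2];  // the recurrence relation
    return table[n];
}

int main() {
    std::cout << fib(50) << '\n';  // 12586269025
}
```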

Warshall's and Floyd's Algorithms:
Warshall's and Floyd's algorithms share a similar technique, but their use cases are in completely different fields. Where Warshall's Algorithm computes the transitive closure of a directed graph (which vertices can reach which), Floyd's Algorithm is focused on finding the shortest paths between all pairs of vertices. Both consist of building a series of matrices using an auxiliary vertex k that acts as a possible intermediate stop between vertices i and j. Depending on k's relationship between i and j, whether going through k connects them or shortens the path, we determine the value set in the resulting matrix at that index. The main thing for both is the relationship between i, k, and j.
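
Here's a sketch of Floyd's Algorithm built around that i, k, j relationship; Warshall's is the same triple loop but updates boolean reachability with OR/AND instead of min/plus. Using INF to stand in for a missing edge is an assumption of this sketch:

```cpp
#include <iostream>
#include <vector>

const int INF = 1e9;  // stands in for "no edge" in this sketch

// Floyd's algorithm: for each intermediate vertex k, check whether
// the path i -> k -> j is shorter than the best i -> j path so far.
void floyd(std::vector<std::vector<int>>& dist) {
    int n = (int)dist.size();
    for (int k = 0; k < n; k++)
        for (int i = 0; i < n; i++)
            for (int j = 0; j < n; j++)
                if (dist[i][k] < INF && dist[k][j] < INF &&
                    dist[i][k] + dist[k][j] < dist[i][j])
                    dist[i][j] = dist[i][k] + dist[k][j];
}

int main() {
    // Small weighted digraph as an adjacency matrix; dist[i][i] = 0.
    std::vector<std::vector<int>> d = {
        {0,   3,   INF, 7  },
        {8,   0,   2,   INF},
        {5,   INF, 0,   1  },
        {2,   INF, INF, 0  }
    };
    floyd(d);
    for (auto& row : d) {
        for (int v : row) std::cout << v << ' ';
        std::cout << '\n';
    }
}
```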

Greedy Technique: 
The greedy technique is perfect for fast approximations; however, it is not always globally optimal, as there are instances in which other techniques can outperform it. It depends entirely on whether each locally best choice can satisfy the constraints of the problem, with the caveat that the choices cannot be changed later. I think a perfect example use case was the cashier's problem, in which you're tasked with providing a minimum number of coins as change. For US coins the greedy approach does provide the optimal solution, but a different country's coin structure could compromise its effectiveness.
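
A minimal sketch of the greedy cashier's approach with US denominations in cents (optimal for this coin system, but not for every system):

```cpp
#include <iostream>
#include <vector>

// Greedy change-making: repeatedly take the largest coin that still
// fits. Optimal for US denominations, but not for coin systems in general.
std::vector<int> makeChange(int amount, const std::vector<int>& coins) {
    std::vector<int> used;
    for (int c : coins) {        // coins listed largest to smallest
        while (amount >= c) {
            amount -= c;
            used.push_back(c);
        }
    }
    return used;  // assumes a 1-unit coin exists so change is always possible
}

int main() {
    std::vector<int> usCoins = {25, 10, 5, 1};  // cents
    for (int c : makeChange(68, usCoins)) std::cout << c << ' ';
    std::cout << '\n';  // 25 25 10 5 1 1 1
}
```

For a system like {4, 3, 1}, this same greedy routine makes change for 6 as 4+1+1 (three coins) even though 3+3 (two coins) is optimal, which is exactly the kind of case where a different coin structure breaks the technique.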
