Algorithm Design Techniques in DAA

When it comes to solving problems, especially in computer science, algorithms play a crucial role. An algorithm is a finite set of well-defined steps for solving a particular problem efficiently. But how do we design an algorithm for a given problem? To meet the expanding demands of industry, practitioners keep refining a number of standard approaches for constructing effective algorithms. This article discusses the most common algorithm design techniques and their applications.

Brute-Force Search

Brute-force search is a simple approach to solving a problem: instead of relying on clever insight, it relies on sheer processing power and tests every possibility. Suppose you forgot the combination of a 4-digit padlock and, to avoid buying a new one, decide to open the lock by brute-force search. You would have to try every possible 4-digit combination using the digits 0 to 9. The combination could be anything between 0000 and 9999, so there are 10,000 candidates, and in the worst case you would have to try all 10,000 of them to find the actual combination.

[Figure: Brute-force search for the combination of a padlock]
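A minimal sketch of this search in Python, with the lock modeled as a hypothetical check function (the secret combination below is made up for the example):

```python
def crack_padlock(opens):
    """Try every 4-digit code from 0000 to 9999 until the lock opens."""
    for guess in range(10_000):
        code = f"{guess:04d}"            # zero-pad, e.g. 7 -> "0007"
        if opens(code):
            return code                  # found the combination
    return None                          # only reached if no code opens the lock

# Hypothetical lock whose secret combination is "4096".
secret = "4096"
print(crack_padlock(lambda code: code == secret))   # -> 4096
```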

As another example, brute-force string matching has time complexity O(m*n): if we need to search for a pattern of n characters in a text of m characters, it can take on the order of m*n character comparisons in the worst case.
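The same idea applied to string matching looks roughly like the sketch below; the function name and the example strings are illustrative:

```python
def brute_force_search(text, pattern):
    """Return the first index at which pattern occurs in text, or -1."""
    m, n = len(text), len(pattern)
    for i in range(m - n + 1):           # try every alignment of the pattern
        if text[i:i + n] == pattern:     # compare up to n characters each time
            return i
    return -1

print(brute_force_search("abracadabra", "cad"))   # -> 4
```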

Divide and Conquer

The divide and conquer technique works top-down and is well suited to large problems. As the name suggests, it follows these steps:

Step 1: Divide the problem into several subproblems.

Step 2: Conquer or solve each sub-problem.

Step 3: Combine the solutions of the sub-problems to get the required result.

Divide and conquer solves each sub-problem recursively, so each sub-problem is a smaller instance of the original problem.
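Merge sort is the textbook instance of these three steps. A compact Python sketch:

```python
def merge_sort(arr):
    """Sort a list using divide and conquer."""
    if len(arr) <= 1:                      # base case: already sorted
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])           # Steps 1 and 2: divide and solve each half
    right = merge_sort(arr[mid:])
    return merge(left, right)              # Step 3: combine the sub-results

def merge(left, right):
    """Merge two sorted lists into one sorted list."""
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i]); i += 1
        else:
            result.append(right[j]); j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))   # -> [3, 9, 10, 27, 38, 43, 82]
```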

[Figure: Divide and Conquer]

Greedy Algorithm

A greedy algorithm allocates resources iteratively, based on what is maximally and immediately available at each stage of execution. This method is used to solve optimization problems, in which a set of input values is given and some objective must be maximized or minimized. A greedy algorithm always chooses the option that appears best at the moment, which is why it is called greedy, and it may not always give the optimal solution. There are two stages to solving a problem with a greedy algorithm:

  1. Examining the list of items.
  2. Optimization, i.e. making the best immediate choice at each step.

This means that a greedy algorithm selects the best immediate option and never reconsiders its decisions. In terms of optimization, the greedy solution converges to a local optimum (of which there may be several) and may miss the global optimum. For example, the greedy algorithm in the figure below aims to find the path with the largest sum.

[Figure: Greedy Algorithm]

Aiming for the largest sum, the greedy algorithm picks what appears to be the best immediate choice at each step, so at the second step it chooses 12 instead of 3 and never reaches the best path, which contains 99.
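A small Python sketch of this behaviour; the tree below is illustrative (the node values other than 3, 12, and 99 are assumed, chosen so that the greedy path misses 99):

```python
# Each node is (value, left_child, right_child); None marks a leaf's children.
# Illustrative numbers: the best path is 7 -> 3 -> 99 = 109,
# but the greedy path is 7 -> 12 -> 6 = 25.
tree = (7,
        (3, (99, None, None), (1, None, None)),
        (12, (5, None, None), (6, None, None)))

def greedy_max_path(node):
    """Follow the locally best-looking child at every step."""
    total = node[0]
    while node[1] is not None:                        # until we reach a leaf
        left, right = node[1], node[2]
        node = left if left[0] > right[0] else right  # pick the bigger child now
        total += node[0]
    return total

def best_max_path(node):
    """Exhaustively compute the true maximum path sum, for comparison."""
    if node[1] is None:
        return node[0]
    return node[0] + max(best_max_path(node[1]), best_max_path(node[2]))

print(greedy_max_path(tree))   # -> 25  (7 + 12 + 6): locally optimal choices
print(best_max_path(tree))     # -> 109 (7 + 3 + 99): the global optimum
```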

Dynamic Programming

Dynamic Programming (DP) is an algorithmic technique for solving optimization problems by breaking them into simpler sub-problems and storing each sub-solution so that every sub-problem is solved only once. It is a good methodology for constrained optimization problems that seek a maximal or minimal solution, because it works through all the relevant sub-problems and never recomputes the answer to any of them.

[Figure: Dynamic Programming]

In other words, the strategy breaks an optimization problem into smaller sub-problems and exploits the fact that the best solution for the overall problem is built from the best solutions of its sub-problems. Take the Fibonacci series, in which each number is the sum of the two preceding numbers and the first two numbers are 0 and 1. If we are asked to find the nth number of the series, we can do so as follows:

[Figure: Dynamic Programming example]

Here, to solve the overall problem, i.e. Fib(n), we break it down into two smaller sub-problems, Fib(n-1) and Fib(n-2). Since these sub-problems overlap, Dynamic Programming applies, as elaborated in more detail in the following figure:

[Figure: Fibonacci series using Dynamic Programming]
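A short memoised (top-down) Python sketch of this idea, storing each Fib(k) so it is computed only once:

```python
def fib(n, memo=None):
    """Return the n-th Fibonacci number (Fib(0) = 0, Fib(1) = 1)."""
    if memo is None:
        memo = {}                       # table of already-solved sub-problems
    if n in memo:
        return memo[n]                  # reuse a stored sub-solution
    if n < 2:
        return n                        # base cases
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print([fib(i) for i in range(10)])      # -> [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
```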

Branch and Bound Algorithm

Branch and bound algorithms are applied to combinatorial, discrete, and general mathematical optimization problems to determine the optimal solution. A branch and bound algorithm systematically searches the set of all candidate solutions before recommending the best one, enumerating candidates in a stepwise manner while discarding regions of the search space that provably cannot contain the optimum.

[Figure: Branch and Bound algorithm example]

First, we build a rooted decision tree in which the root node represents the entire search space and each child node represents a partial solution, i.e. a subset of the solution set. Before and while exploring this tree, we maintain an upper and a lower bound for the problem, and at each level we decide which nodes are worth including in the solution set; branches whose bound cannot beat the best solution found so far are pruned. Finding good upper and lower bounds is therefore essential: an upper bound can be obtained with any local optimization method, by evaluating any point in the search space, or via convex relaxation, while duality can be used to obtain a lower bound.
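As an illustration, here is a small branch and bound sketch in Python for the 0/1 knapsack problem. Since this is a maximization problem, the roles of the bounds flip relative to the minimization setting described above: the relaxed (fractional) solution gives the optimistic upper bound, and the best feasible solution found so far acts as the lower bound used for pruning. The item values and capacity are made up for the example:

```python
def knapsack_branch_and_bound(items, capacity):
    """0/1 knapsack via branch and bound.

    items: list of (value, weight) pairs with positive weights.
    Returns the best achievable total value within the capacity.
    """
    # Sort by value density so the fractional bound is easy to compute.
    items = sorted(items, key=lambda vw: vw[0] / vw[1], reverse=True)

    def upper_bound(i, value, weight):
        """Optimistic bound: fill the remaining space fractionally."""
        bound = value
        for v, w in items[i:]:
            if weight + w <= capacity:
                weight += w
                bound += v
            else:
                bound += v * (capacity - weight) / w   # take a fraction of the item
                break
        return bound

    best = 0
    def branch(i, value, weight):
        nonlocal best
        if weight > capacity:
            return                                   # infeasible branch
        best = max(best, value)                      # update the incumbent solution
        if i == len(items) or upper_bound(i, value, weight) <= best:
            return                                   # prune: this branch cannot beat best
        v, w = items[i]
        branch(i + 1, value + v, weight + w)         # branch: take item i
        branch(i + 1, value, weight)                 # branch: skip item i

    branch(0, 0, 0)
    return best

items = [(60, 10), (100, 20), (120, 30)]             # (value, weight)
print(knapsack_branch_and_bound(items, capacity=50)) # -> 220
```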

Randomized Algorithm

A randomized algorithm is one that uses random numbers to decide what to do next at some point in its logic. Randomness is typically added to a standard algorithm to reduce either the running time (time complexity) or the memory used (space complexity). The algorithm works by drawing a random number r from some range and making decisions based on its value, much like resolving a moment of doubt by flipping a coin or drawing a card from a deck.

[Figure: Randomized algorithm flowchart]

When utilizing a randomized method, keep the following two considerations in mind:

  • In addition to its input, it takes a source of random numbers and makes random choices during execution.
  • Its behavior can vary from run to run, even on a fixed input.
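A classic example is randomized quicksort, sketched below in Python: the pivot is chosen at random, so the exact sequence of comparisons differs from run to run even on the same input, while the expected running time stays O(n log n):

```python
import random

def randomized_quicksort(arr):
    """Quicksort with a randomly chosen pivot."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)                 # the random decision
    less = [x for x in arr if x < pivot]
    equal = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([9, 2, 7, 4, 4, 1, 8]))   # -> [1, 2, 4, 4, 7, 8, 9]
```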

Backtracking Algorithms

Backtracking means that if the current partial solution is not working, you go back and try another option. It is a method for solving problems recursively by constructing a solution incrementally, one piece at a time, and discarding any partial solution that violates the problem's constraints. This approach is used for problems that have multiple solutions. For example, suppose we want to find all possible ways of seating 2 boys and 1 girl on 3 benches, with the constraint that the girl must not sit on the middle bench. There are 3! = 6 possible arrangements in total, and we try them recursively, abandoning a branch as soon as the constraint is violated. The possibilities are as follows:

[Figure: Backtracking example possibilities]

The following diagram shows the valid solutions for the problem described above:

[Figure: Solutions of the backtracking example]
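The same search can be written as a short backtracking routine in Python; the names and the bench numbering below are illustrative:

```python
def arrange(benches, people, solutions):
    """Place people bench by bench; abandon a branch when the constraint fails."""
    if not people:                               # all three benches are filled
        solutions.append(tuple(benches))
        return
    position = len(benches)                      # 0 = left, 1 = middle, 2 = right
    for person in people:
        if person == "Girl" and position == 1:   # constraint: girl not in the middle
            continue                             # backtrack: try someone else here
        arrange(benches + [person],
                [p for p in people if p != person],
                solutions)

solutions = []
arrange([], ["Boy 1", "Boy 2", "Girl"], solutions)
for s in solutions:
    print(s)
# Prints the 4 of the 3! = 6 arrangements that satisfy the constraint,
# e.g. ('Boy 1', 'Boy 2', 'Girl') and ('Girl', 'Boy 1', 'Boy 2').
```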
