In computer science, a single problem can usually be solved by several different algorithms. When several candidate algorithms exist, we evaluate the performance of each so that we can select the best one for the problem at hand. Algorithm performance analysis can therefore be described as the practice of making evaluative judgements about algorithms.
Factors Determining Algorithm’s Performance
To compare algorithms, we consider criteria such as the amount of memory an algorithm requires, its execution speed, and how easy it is to understand and implement. In general, an algorithm’s performance is determined by the following factors:
- Does the algorithm produce a correct solution to the problem?
- Is it easy to understand?
- Is it easy to implement?
- How much memory (space) is needed to solve the problem?
- How long does it take to solve the problem?
When we analyze an algorithm, we consider only its space and time requirements and disregard everything else. On this basis, performance analysis can also be defined as the process of determining an algorithm’s space and time requirements.
The following metrics are used to evaluate the performance of an algorithm:
- The amount of space necessary to perform the algorithm’s task (Space Complexity). It consists of both program and data space.
- The time necessary to accomplish the algorithm’s task (Time Complexity).
When we create a problem-solving algorithm, it demands the use of computer memory to complete its execution. Memory is necessary for the following purposes in any algorithm:
- To store the program’s instructions.
- To store constant values.
- To store variable values.
- For other purposes such as function calls, jump statements, and so on.
When software is running, it often utilizes computer memory for the following reasons:
- The memory needed to store the compiled instructions of the program, referred to as instruction space.
- The memory used to store information about partially executed functions at the time of a function call, known as the environment stack.
- The memory needed to store all variables and constants, referred to as data space.
To compute space complexity, we need to know how much memory is required to store values of the different data types (which depends on the compiler). Consider the following code:
int square(int a)
{
    return a * a;
}
In the preceding code, variable ‘a’ occupies 2 bytes of memory and the return value occupies another 2 bytes (assuming a compiler where an int is 2 bytes wide), i.e., a total of 4 bytes of memory is needed to complete execution, and this 4 bytes is fixed for any input value of ‘a’. This kind of space complexity is called constant space complexity.
Every algorithm needs a certain amount of computer time to carry out its instructions and complete the operation. The amount of computer time required is referred to as time complexity. In general, an algorithm’s execution time is determined by the following:
- Whether the machine has a single processor or multiple processors.
- Whether it is a 32-bit or a 64-bit machine.
- The machine’s read and write speeds.
- The time the machine takes to perform arithmetic, logical, assignment, and return operations, among others.
- The input data.
Calculating an algorithm’s time complexity from a particular system configuration is difficult, since the configuration varies from one system to the next. To overcome this, we assume a model machine with a fixed configuration and compute a generalized time complexity on that model machine. Consider the following code:
int sum(int a, int b)
{
    return a + b;
}
In the preceding code, computing a + b takes 1 unit of time and returning the value takes another 1 unit, i.e., the task takes 2 units of time in total, unaffected by the input values of a and b. The algorithm therefore takes the same amount of time, 2 units, for every input.
Notation of Performance Measurement
To analyze an algorithm, we must compute its complexity. However, computing an algorithm’s complexity does not reveal the exact amount of resources required. Instead of the precise quantity, we express the complexity in a general form (a notation) that captures the algorithm’s essential behavior. This notation, called asymptotic notation, is a mathematical representation of the algorithm’s complexity. Three asymptotic notations are used to express time complexity, each associated with a different situation, namely the best case, the worst case, and the average case:
Big – O (Big-Oh)
Big-O notation represents the upper bound of an algorithm’s running time: the maximum time the algorithm can take over all input values. It expresses an algorithm’s worst-case time complexity.
Big – Ω (Omega)
Omega notation represents the lower bound of an algorithm’s running time: the minimum time the algorithm requires over all input values. It expresses an algorithm’s best-case time complexity.
Big – Θ (Theta)
Theta notation bounds the function from both above and below. It defines the upper and lower bounds of an algorithm’s running time and is used here to express the average-case time complexity.