We are going to start a series of lessons based on Data Structures and Algorithms.

**Objectives of this article:**

- Describe the importance of data structures
- Define the term "algorithm"
- Introduce algorithm complexities
- Outline common approaches to solving problems with algorithms

Data structures and algorithms are widely regarded as one of the most important topics in software engineering, and they have become a primary checkpoint in interviews at most top-level companies.

In this lesson series, we will discuss the idea behind data structures and algorithm concepts, and we will implement several algorithms during upcoming lessons.

## What is a Data Structure?

A simple definition of a data structure is "**a particular way of storing data on your computer**" or "a systematic way of representing and organizing your data".

Importantly, any data structure should support efficient use for a given task, for instance searching, transforming, editing, or updating data.

**Features of a Data Structure**

There are three key features by which we can evaluate a data structure for a given use case.

**Time complexity**

*The running time, or **execution time**, for a given task is defined as its time complexity. We should use the best possible data structure for the given context to keep the time complexity as low as possible.*

**Correctness based on the particular interface**

*Every data structure defines an **interface**: the set of operations the data structure supports. There should also be a correct **implementation** of the data structure based on that interface. Ideally, a data structure comes with a clearly defined interface and a correct, well-documented implementation.*

**Space Complexity**

*Space complexity measures the memory usage of a given data structure. Ultimately, we should optimize our algorithmic solution to minimize space complexity as much as possible, especially for solutions over large data sets.*

**Why do we need data structures?**

Nowadays, with modern processors and computer systems, handling a large number of data records is not a complex or exhausting task for an ordinary computer. But under demanding conditions driven by criteria such as **data size**, **retrieval speed**, and **multi-threaded processing**, we should focus on choosing a proper data structure for the given scenario.

Imagine that you are implementing a simple text search over a big corpus with millions of records, you need to process data objects in parallel, and your execution time must stay in the sub-millisecond range.

A properly designed data structure will help you achieve those kinds of tasks efficiently.

## What is an Algorithm?

An algorithm is a step-by-step procedure for accomplishing a task. In other words, an algorithm is a well-defined set of unambiguous instructions for achieving a given task, without relying on any particular programming language. In this series, we will implement major data structures and algorithms in both Node.js and Python to see how a given algorithm looks across languages.

**Properties of a given Algorithm**

As we previously mentioned, an algorithm should be a well-defined set of instructions for achieving a given task. However, a set of instructions is not considered an algorithm unless it satisfies the following characteristics.

**Unambiguous**

*Every step of the algorithm should be clear, and all of its inputs and outputs should be clear as well.*

**Finiteness**

*The algorithm should terminate after a finite number of steps.*

**Feasibility**

*The algorithm should be able to run with the available resources.*

**Independent**

*All the steps in the algorithm should be language-independent (implementable in any programming language).*

**Input**

*The algorithm should accept zero or more well-defined inputs.*

**Output**

*The algorithm should produce one or more well-defined outputs.*

Problems can be solved in many ways. Let’s design a simple algorithm to identify and describe the procedure that we explained in the above paragraph.

**Example:** Design an algorithm to generate the square value of a given number.

```
Step 1 − START
Step 2 − define a value A which you need to get the square
Step 3 − define the operation of A * A
Step 4 − store output of step 3 to new variable C
Step 5 − return C
Step 6 − STOP
```
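The steps above translate directly into a short program. Here is a minimal Python sketch, where `a` and `c` mirror the A and C of the pseudocode:

```python
def square(a):
    """Return the square of a given number (steps 2-5 of the pseudocode)."""
    c = a * a  # steps 3-4: compute A * A and store it in C
    return c   # step 5: return C

print(square(5))  # prints 25
```

Note that the algorithm maps to any language the same way; the Node.js version would differ only in syntax.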

Now go through all the characteristics of an algorithm and try to justify all the features against the above simple algorithm.

**Self-study:** try to design the following algorithms and check that each one satisfies all the properties we described earlier.

- Design an algorithm to determine the maximum value from given three positive numbers
- Design an algorithm to get the 10th value of the Fibonacci series
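If you want to check your answer to the first exercise, one possible solution sketched in Python (pairwise comparison, assuming three positive numbers as input) looks like this:

```python
def max_of_three(x, y, z):
    """Return the maximum of three positive numbers by pairwise comparison."""
    largest = x        # assume the first number is the largest so far
    if y > largest:    # compare against the second number
        largest = y
    if z > largest:    # compare against the third number
        largest = z
    return largest

print(max_of_three(4, 9, 2))  # prints 9
```

Before peeking, try writing your own version and verifying it against the six properties above.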

When it comes to analyzing your algorithm, theoretical analysis is highly important. ***A priori* analysis** is a type of analysis that focuses on the efficiency of an algorithm before it is ever run.

The efficiency of an algorithm can be measured by its **algorithm complexities**. The complexity of an algorithm is mainly based on two factors: the **time factor** and the **space factor**.

**Space Complexity**

Space complexity is the total amount of memory space required by the algorithm over its execution life cycle.

`Total Space Complexity = "fixed part complexity" + "variable part complexity"`

**The fixed-part complexity** is the amount of space required to store data that is independent of the problem size, for instance constants and simple variables.

**The variable-part complexity** depends on the size of the problem, and this component changes dynamically over the execution time.
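The two components are easy to spot in code. A small illustrative sketch (the function name is just an example):

```python
def running_sums(values):
    """Compute running sums, illustrating fixed vs. variable space.

    Fixed part: the scalar `total`, whose size does not depend on the input.
    Variable part: the `result` list, which grows to len(values) elements.
    """
    total = 0      # fixed part: one integer, regardless of input size
    result = []    # variable part: grows with the size of the input
    for v in values:
        total += v
        result.append(total)
    return result

print(running_sums([1, 2, 3]))  # prints [1, 3, 6]
```

Here the fixed part stays constant whether we pass 3 values or 3 million, while the variable part scales with the input.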

**Time Complexity**

Time complexity is the total amount of time (counted in abstract time units) required to complete the given algorithm.

Time complexity is closely tied to input size: as your input grows, the running time increases with it, and how quickly it increases depends on the algorithm.

**Asymptotic analysis**

In the algorithm world, estimating the running time of a given algorithm as a function of its input size is referred to as asymptotic analysis.

The running time of an algorithm always depends on its parameters and on the operations the algorithm performs.

Normally we can measure algorithmic running time based on three different cases (**Best**, **Average**, and **Worst**).

We mostly consider the **worst-case** scenario, which means the maximum time required to execute the algorithm.
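A plain linear search makes the three cases concrete. In the sketch below (counting comparisons rather than wall-clock time, which is the usual convention in analysis), the best case finds the target immediately, and the worst case scans the whole list:

```python
def linear_search_comparisons(items, target):
    """Return how many comparisons a linear search makes to locate target."""
    for i, item in enumerate(items):
        if item == target:
            return i + 1       # found after i + 1 comparisons
    return len(items)          # not found: every element was compared

data = [7, 3, 9, 1, 5]
print(linear_search_comparisons(data, 7))   # best case: 1 comparison
print(linear_search_comparisons(data, 5))   # worst case when present: 5 comparisons
print(linear_search_comparisons(data, 42))  # worst case overall: 5 comparisons, not found
```

The average case sits between the two, roughly half the list length for a uniformly random target that is present.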

*Asymptotic notations for popular algorithms*

Algorithms are designed to achieve the optimum solution for the desired problem. On the journey to a solution, there are three common approaches we can use to solve a particular problem.

**Greedy Approach**

The greedy approach is a common problem-solving approach that builds a solution by choosing the locally optimal option at each step.

Most of the time, the greedy approach converges to a **local optimum** and rarely reaches the **global optimum**; however, those locally optimal choices often approximate a globally optimal solution over time and iterations.

**Example**: Find the path with the **largest sum** in the following tree structure

Here are some popular greedy algorithm examples:

- Travelling Salesman Problem
- Kruskal’s Minimal Spanning Tree Algorithm
- Dijkstra’s Shortest Path Algorithm
- Knapsack Problem
- Job Scheduling Problem

Try to solve the example above using a proper algorithmic approach.
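Since the tree figure is not reproduced here, the sketch below uses a small assumed tree to show the greedy behaviour: at each level it follows the larger-valued child, which can miss the true largest-sum path.

```python
# A small binary tree as nested tuples: (value, left_subtree, right_subtree).
# This tree is an assumed stand-in for the figure in the original article.
tree = (8,
        (12, (1, None, None), (4, None, None)),
        (2, (99, None, None), (3, None, None)))

def greedy_path_sum(node):
    """Follow the larger-valued child at each step (the local optimum)."""
    total = 0
    while node is not None:
        value, left, right = node
        total += value
        if left is None and right is None:
            break                      # reached a leaf
        # greedily pick whichever child has the larger value
        if right is None or (left is not None and left[0] >= right[0]):
            node = left
        else:
            node = right
    return total

print(greedy_path_sum(tree))  # prints 24 (8 + 12 + 4), missing the 8 + 2 + 99 = 109 path
```

This makes the local-vs-global trade-off visible: the greedy choice of 12 over 2 at the root permanently hides the 99 below.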

**Divide and Conquer Approach**

Divide and conquer is easy to understand. In this approach, the problem is **divided** into **smaller subproblems**, and those subproblems are solved independently. You can keep dividing the main problem until you reach an **atomic problem** (a subproblem that cannot be divided any further). Once you have solved the subproblems individually, you **merge** all the results to produce the solution to your original problem.

**Example**: Find the longest common prefix of a given set of database records

Here are some popular divide-and-conquer algorithm examples:

- Binary Search Algorithm
- Merge Sort Algorithm
- Quick Sort Algorithm
- Closest Pair of Points
- Cooley–Tukey Fast Fourier Transform (FFT)
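Merge sort, from the list above, shows all three phases in a few lines: divide down to atomic one-element lists, solve (a single element is trivially sorted), then merge the results back up.

```python
def merge_sort(items):
    """Sort a list using the divide and conquer approach."""
    if len(items) <= 1:
        return items                 # atomic problem: already sorted
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # conquer each half independently
    right = merge_sort(items[mid:])
    # merge: combine the two sorted halves into one sorted result
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])          # append any leftovers
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 8, 1, 9, 3]))  # prints [1, 2, 3, 5, 8, 9]
```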

**Dynamic Programming Approach**

The dynamic programming approach is quite similar to the divide and conquer approach, but differs slightly in how the subproblems are handled.

Unlike the divide and conquer method, once you divide your problem into atomic-level subproblems, you do not solve each one from scratch every time it appears. Instead, **you remember and reuse the results of those smaller, atomic problems** across overlapping (**repeated**) subproblems.

In the dynamic programming approach, an optimum solution is achieved by combining optimum solutions to atomic problems with a **memoization mechanism**.

**Memo:** Dynamic programming uses the memoized results of previously solved subproblems to resolve similar small problems in a given iteration.

**Example**: **Fibonacci number** series value representation

Here are some popular dynamic programming examples:

- Knapsack problem
- Project scheduling problem
- Shortest path by Dijkstra Algorithm
- Tower of Hanoi problem
- Fibonacci number series
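The Fibonacci example mentioned above is the classic illustration: a naive recursion recomputes the same subproblems over and over, while a memoized version remembers each result. A minimal sketch using a plain dictionary as the memo:

```python
def fib(n, memo=None):
    """Return the nth Fibonacci number, reusing overlapping subproblem results."""
    if memo is None:
        memo = {}
    if n in memo:
        return memo[n]   # subproblem already solved: reuse the memoized result
    if n <= 2:
        return 1         # base cases: fib(1) = fib(2) = 1
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(10))  # prints 55
```

This also answers the second self-study exercise from earlier, and without the memo the same call would make dozens of redundant recursive calls.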

**In conclusion,** we have gone through the basic definitions and ideas behind data structures and algorithms.

In future articles, we will discuss several algorithmic representations and solutions for many real-life problems. In addition, we will implement algorithms to solve the problems mentioned above.

*Read behind a paywall at* *https://medium.com/@pradeephbac/data-structures-and-algorithms-journey-80d225cfbbd8*