Free Study Guide · 2025
Top 20 Data Structures & Algorithms Interview Questions & Answers (2025)
DSA rounds are a staple of product-company and FAANG interviews in India. From Flipkart to Amazon to CRED, these are the most commonly asked algorithmic questions — along with the approach, time complexity, and key patterns you need to know.
✓ 20 questions
✓ Detailed answers
✓ 100% free
1. What is Big-O notation? What does O(n log n) mean?
Big-O notation describes the worst-case time (or space) complexity of an algorithm as input size n grows. O(1) is constant, O(log n) is logarithmic (halving search space each step — binary search), O(n) is linear, O(n log n) is linearithmic (efficient sorting like merge sort), O(n²) is quadratic (nested loops), O(2^n) is exponential (recursive brute force). Focus on eliminating nested loops and repeated work.
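As a concrete illustration of O(log n), here is a minimal binary search sketch (function name and test data are my own, not from the guide):

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.
    The search space halves each iteration, giving O(log n) time."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1
```

On a million elements this takes at most ~20 comparisons, versus a million for a linear O(n) scan.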
2. What is the difference between an array and a linked list?
Arrays store elements in contiguous memory — O(1) random access by index, O(n) insertion/deletion in middle (shifting required). Linked lists store elements in nodes with next pointers — O(n) access by index, O(1) insertion/deletion if you have the node pointer (no shifting). Arrays are better for cache performance and random access; linked lists for frequent insertions/deletions at known positions.
3. What is a stack and what is it used for?
A stack is a LIFO (Last In, First Out) data structure with push, pop, and peek operations, all O(1). Used for: function call stack, undo operations, expression evaluation (infix to postfix), validating brackets (classic problem), DFS traversal (can replace recursion), and backtracking algorithms. Implement with a dynamic array or linked list.
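The classic bracket-validation problem mentioned above can be sketched in a few lines (names are illustrative):

```python
def is_balanced(s):
    """Validate brackets with a stack: push every opener,
    pop and match on every closer. O(n) time, O(n) space."""
    pairs = {')': '(', ']': '[', '}': '{'}
    stack = []
    for ch in s:
        if ch in '([{':
            stack.append(ch)
        elif ch in pairs:
            if not stack or stack.pop() != pairs[ch]:
                return False   # mismatched or unopened bracket
    return not stack           # leftover openers mean unbalanced
```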
4. What is a queue and what is it used for?
A queue is a FIFO (First In, First Out) data structure with enqueue and dequeue operations, O(1) each. Used for: BFS graph traversal, level-order tree traversal, task scheduling, printer/CPU job queues, and sliding window problems (deque). A deque (double-ended queue) supports O(1) operations at both ends and is useful for monotonic queue patterns.
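The monotonic-deque pattern mentioned above can be sketched with the classic sliding-window-maximum problem (a minimal version, with names of my choosing):

```python
from collections import deque

def window_max(nums, k):
    """Maximum of every window of size k using a monotonic deque of
    indices. The front always holds the current window's max: O(n)."""
    dq, out = deque(), []
    for i, x in enumerate(nums):
        if dq and dq[0] <= i - k:          # drop index that left the window
            dq.popleft()
        while dq and nums[dq[-1]] <= x:    # keep deque values decreasing
            dq.pop()
        dq.append(i)
        if i >= k - 1:
            out.append(nums[dq[0]])
    return out
```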
5. What is a binary search tree (BST)?
A BST is a binary tree where every node's left subtree contains only nodes with values less than the node, and the right subtree only greater values. Search, insert, and delete are O(h) where h is height. A balanced BST (AVL, Red-Black Tree) guarantees O(log n) height. In-order traversal of a BST gives sorted output — a key property for many problems.
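A minimal sketch of insertion and the sorted-in-order property (class and function names are my own):

```python
class Node:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def insert(root, val):
    """Standard BST insert: O(h), where h is the tree height."""
    if root is None:
        return Node(val)
    if val < root.val:
        root.left = insert(root.left, val)
    else:
        root.right = insert(root.right, val)
    return root

def inorder(root):
    """In-order traversal (left, node, right) yields sorted output."""
    if root is None:
        return []
    return inorder(root.left) + [root.val] + inorder(root.right)
```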
6. What is a hash map and how does it handle collisions?
A hash map maps keys to values using a hash function to compute an array index. Hash collisions (two keys map to the same index) are handled by chaining (each bucket stores a linked list of entries — Java's HashMap) or open addressing (probe for the next empty slot — linear or quadratic probing). Average O(1) for get/put; O(n) worst case with many collisions.
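A toy separate-chaining map makes the mechanics concrete (this is a simplified sketch, not how production hash maps are implemented — no resizing, fixed capacity):

```python
class ChainedHashMap:
    """Toy hash map with separate chaining: each bucket holds a list
    of (key, value) pairs. Average O(1) get/put with a decent hash."""
    def __init__(self, capacity=16):
        self.buckets = [[] for _ in range(capacity)]

    def _bucket(self, key):
        return self.buckets[hash(key) % len(self.buckets)]

    def put(self, key, value):
        bucket = self._bucket(key)
        for i, (k, _) in enumerate(bucket):
            if k == key:                   # key exists: overwrite
                bucket[i] = (key, value)
                return
        bucket.append((key, value))        # collision: chain in bucket

    def get(self, key, default=None):
        for k, v in self._bucket(key):
            if k == key:
                return v
        return default
```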
7. What is BFS vs DFS and when do you use each?
BFS (Breadth-First Search) explores level by level using a queue — finds the shortest path in unweighted graphs, level-order tree traversal, checking connectivity. DFS (Depth-First Search) explores as deep as possible using a stack/recursion — detects cycles, topological sort, finding connected components, maze-solving, and problems requiring backtracking. BFS for shortest path; DFS for exhaustive search and graph properties.
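A minimal BFS sketch for shortest path length in an unweighted graph (graph shape and names are assumptions for illustration):

```python
from collections import deque

def bfs_distance(graph, start, goal):
    """Shortest edge count from start to goal via BFS.
    graph: dict mapping node -> list of neighbours. O(V + E)."""
    dist = {start: 0}
    q = deque([start])
    while q:
        node = q.popleft()
        if node == goal:
            return dist[node]
        for nxt in graph[node]:
            if nxt not in dist:            # first visit = shortest path
                dist[nxt] = dist[node] + 1
                q.append(nxt)
    return -1                              # goal unreachable
```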
8. What is dynamic programming?
Dynamic programming solves problems by breaking them into overlapping subproblems, computing each once, and storing results (memoization or tabulation) to avoid redundant work. Key insight: identify optimal substructure (optimal solution contains optimal solutions to subproblems) and overlapping subproblems. Classic problems: Fibonacci, coin change, longest common subsequence, knapsack, matrix chain multiplication.
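The coin-change problem listed above, as a short bottom-up sketch (my own minimal version):

```python
def min_coins(coins, amount):
    """Fewest coins summing to amount, or -1 if impossible.
    dp[a] = optimal answer for sub-amount a (optimal substructure);
    each dp[a] is computed once. O(len(coins) * amount) time."""
    INF = float('inf')
    dp = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1      # use coin c on sub-amount a-c
    return dp[amount] if dp[amount] != INF else -1
```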
9. What is the sliding window technique?
The sliding window maintains a window (subarray or substring) over the data and slides it to compute a result without reprocessing elements, reducing an O(n²) brute force to O(n). Fixed-size window: track a running sum/count, add the new element, remove the oldest. Variable-size window: expand the right pointer, shrink the left when a constraint is violated. Used for: maximum subarray sum of size k, longest substring without repeating characters.
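A sketch of the variable-size variant on the classic longest-substring-without-repeats problem (function name is illustrative):

```python
def longest_unique_substring(s):
    """Variable-size sliding window: expand right; on a repeat, jump
    left past the previous occurrence. Each index is visited O(1)
    times, so the whole scan is O(n)."""
    last_seen, left, best = {}, 0, 0
    for right, ch in enumerate(s):
        if ch in last_seen and last_seen[ch] >= left:
            left = last_seen[ch] + 1   # shrink: drop the duplicate
        last_seen[ch] = right
        best = max(best, right - left + 1)
    return best
```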
10. What is the two-pointer technique?
Two pointers start at different positions and move toward each other (or in the same direction) to solve array/string problems in O(n) instead of O(n²). Common patterns: left and right pointers on a sorted array (two-sum, container with most water, three-sum), fast and slow pointers on a linked list (detect cycle, find middle), and meeting in the middle.
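The sorted-array two-sum pattern, as a minimal sketch (names are mine):

```python
def two_sum_sorted(nums, target):
    """Two pointers on a sorted array: a small sum moves left inward,
    a large sum moves right inward. O(n) time, O(1) space."""
    left, right = 0, len(nums) - 1
    while left < right:
        total = nums[left] + nums[right]
        if total == target:
            return (left, right)
        if total < target:
            left += 1      # need a larger value
        else:
            right -= 1     # need a smaller value
    return None
```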
11. How do you detect a cycle in a linked list?
Floyd's Cycle Detection (Tortoise and Hare): use two pointers, slow (moves 1 step) and fast (moves 2 steps). If they ever meet, a cycle exists. If fast reaches null, no cycle. O(n) time, O(1) space. To find the cycle start: once they meet, reset slow to head and move both one step at a time — they meet again at the cycle's entry point.
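The tortoise-and-hare loop is short enough to write out in full (class name is my own):

```python
class ListNode:
    def __init__(self, val):
        self.val, self.next = val, None

def has_cycle(head):
    """Floyd's cycle detection: slow advances 1 node, fast advances 2.
    If fast hits None the list terminates; if the pointers ever meet,
    there is a cycle. O(n) time, O(1) space."""
    slow = fast = head
    while fast and fast.next:
        slow, fast = slow.next, fast.next.next
        if slow is fast:
            return True
    return False
```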
12. What is a heap (priority queue)?
A heap is a complete binary tree satisfying the heap property: in a max-heap, every parent is ≥ its children; in a min-heap, every parent is ≤ its children. Insert and delete-min/max are O(log n); peek (find min/max) is O(1). Built using an array where the parent of index i is at index ⌊(i−1)/2⌋. Used for: priority queues, Dijkstra's algorithm, heap sort, finding the k-th largest element, and merging k sorted lists.
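The k-th largest problem mentioned above, sketched with Python's standard-library min-heap:

```python
import heapq

def kth_largest(nums, k):
    """Keep a min-heap of the k largest values seen so far; the root
    is then the k-th largest overall. O(n log k) time, O(k) space."""
    heap = []
    for x in nums:
        heapq.heappush(heap, x)
        if len(heap) > k:
            heapq.heappop(heap)    # evict the smallest of the k+1
    return heap[0]
```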
13. What is merge sort and why is it preferred for linked lists?
Merge sort recursively splits the array in half, sorts each half, and merges them — O(n log n) time, O(n) space (stable sort). It's preferred for linked lists because it doesn't require random access (unlike quicksort's pivot swapping) — splitting a linked list and merging are natural pointer operations. For arrays, quicksort is usually faster in practice due to better cache performance.
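A minimal array version showing the split-and-merge structure (the linked-list variant replaces the slicing with slow/fast-pointer splitting and pointer-based merging):

```python
def merge_sort(arr):
    """Top-down merge sort: split in half, sort each half, merge.
    O(n log n) in every case; O(n) extra space; stable."""
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left, right = merge_sort(arr[:mid]), merge_sort(arr[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:        # <= keeps equal keys stable
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```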
14. What is quicksort and what is its time complexity?
Quicksort picks a pivot, partitions the array (elements smaller to the left, larger to the right), and recursively sorts each partition. Average case: O(n log n). Worst case: O(n²) with a bad pivot (already sorted array + always picking the first/last element). Fix: random pivot selection or median-of-three. In practice, quicksort is often the fastest comparison sort due to in-place sorting and cache efficiency.
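A sketch with random pivot selection (written out-of-place for clarity; interview follow-ups often ask for the in-place Lomuto or Hoare partition instead):

```python
import random

def quicksort(arr):
    """Quicksort with a random pivot, which defeats the sorted-input
    worst case with high probability. Average O(n log n)."""
    if len(arr) <= 1:
        return arr
    pivot = random.choice(arr)
    less    = [x for x in arr if x < pivot]
    equal   = [x for x in arr if x == pivot]
    greater = [x for x in arr if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```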
15. How do you find the lowest common ancestor (LCA) of two nodes in a binary tree?
Recursive approach: if root is null or equals either node, return root. Recursively search left and right subtrees. If both return non-null, root is the LCA. If only one returns non-null, return that result. O(n) time. For a BST, leverage BST property: if both values are less than root, LCA is in left subtree; if both greater, right subtree; otherwise root is the LCA. O(h) time.
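The recursive approach described above, as a short sketch (class and function names are mine):

```python
class TreeNode:
    def __init__(self, val):
        self.val, self.left, self.right = val, None, None

def lca(root, p, q):
    """LCA in a binary tree. If p and q are found in different
    subtrees, the current root is the answer; if only one side
    returns a node, the LCA lies on that side. O(n) time."""
    if root is None or root is p or root is q:
        return root
    left = lca(root.left, p, q)
    right = lca(root.right, p, q)
    if left and right:
        return root
    return left or right
```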
16. What is a trie (prefix tree) and when is it used?
A trie is a tree where each node represents a character, and paths from root to nodes represent strings. It supports O(m) insert and search (m = string length) and is space-efficient for shared prefixes. Used for: autocomplete, spell checkers, prefix search, longest common prefix, and IP routing tables. More memory-efficient alternatives: compressed tries (Patricia trees) and radix trees.
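A minimal trie built from nested dicts (the `'$'` end-of-word sentinel is my own convention for this sketch):

```python
class Trie:
    """Minimal trie: each node is a dict of child characters, with a
    sentinel key marking end-of-word. Insert and search are O(m)."""
    def __init__(self):
        self.root = {}

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node['$'] = True               # end-of-word marker

    def search(self, word):
        node = self._walk(word)
        return node is not None and '$' in node

    def starts_with(self, prefix):
        return self._walk(prefix) is not None

    def _walk(self, s):
        node = self.root
        for ch in s:
            if ch not in node:
                return None
            node = node[ch]
        return node
```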
17. What is a graph and how is it represented?
A graph G = (V, E) is a set of vertices and edges. Directed (edges have direction) vs undirected; weighted vs unweighted. Representations: Adjacency Matrix (V×V array, O(1) edge lookup, O(V²) space — good for dense graphs), Adjacency List (array of edge lists, O(V+E) space — good for sparse graphs). Most graph problems use adjacency lists.
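An adjacency list as a dict of lists is the usual interview-ready representation; a minimal builder might look like this (names are illustrative):

```python
def build_adjacency_list(edges, directed=False):
    """Build an adjacency list (dict: node -> list of neighbours)
    from an edge list. O(V + E) space."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])        # ensure every node has an entry
        if not directed:
            graph[v].append(u)
    return graph
```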
18. What is Dijkstra's algorithm?
Dijkstra's finds the shortest path from a source vertex to all other vertices in a graph with non-negative edge weights. It uses a min-heap (priority queue): start with distance 0 for source, infinity for others. Repeatedly extract the minimum-distance vertex, relax its neighbors (update if shorter path found), and push to heap. O((V+E) log V) with a binary heap. Fails with negative edges — use Bellman-Ford instead.
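A sketch with Python's standard-library binary heap; the graph shape (dict of `(neighbour, weight)` lists) is an assumption for this example:

```python
import heapq

def dijkstra(graph, source):
    """Shortest distances from source with non-negative weights.
    graph: dict mapping node -> list of (neighbour, weight).
    Stale heap entries are skipped instead of decrease-keyed.
    O((V + E) log V) with a binary heap."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float('inf')):
            continue                   # stale entry, already relaxed
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float('inf')):
                dist[nxt] = nd         # relax the edge
                heapq.heappush(heap, (nd, nxt))
    return dist
```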
19. What is the difference between memoization and tabulation?
Both are dynamic programming techniques. Memoization (top-down) uses recursion and caches results in a hash map/array — natural to write, only computes needed subproblems, but has recursion overhead and stack depth limits. Tabulation (bottom-up) fills a table iteratively starting from base cases — no recursion overhead, O(1) space optimisations possible, but may compute subproblems that aren't needed.
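The two styles side by side on Fibonacci (using `functools.lru_cache` as the memo store):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib_memo(n):
    """Top-down: plain recursion plus a cache (memoization).
    Each subproblem is computed once."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

def fib_tab(n):
    """Bottom-up: iterate from the base cases. Keeping only the last
    two values is the O(1)-space optimisation mentioned above."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```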
20. How do you find all permutations of a string?
Use backtracking: at each position, swap the current character with each subsequent character, recurse on the remainder, then swap back (backtrack). O(n * n!) time complexity — there are n! permutations each of length n. Alternatively, use a boolean visited array and build permutations by picking unused characters at each step. For unique permutations with duplicates, sort first and skip duplicates in the recursion.
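The swap-based backtracking described above, as a minimal sketch (names are mine):

```python
def permutations(s):
    """All permutations of s by backtracking: fix position i, try each
    remaining character there, recurse, then swap back to undo.
    O(n * n!) overall — n! permutations, each of length n."""
    chars, out = list(s), []

    def backtrack(i):
        if i == len(chars):
            out.append(''.join(chars))
            return
        for j in range(i, len(chars)):
            chars[i], chars[j] = chars[j], chars[i]
            backtrack(i + 1)
            chars[i], chars[j] = chars[j], chars[i]  # undo the swap

    backtrack(0)
    return out
```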