
Top 25 DSA interview questions

Data structures and algorithms remain one of the most important topics in software engineering interviews. In-depth knowledge of DSA fundamentals shows that you can critically analyze the efficiency and performance of a system and the amount of load it can handle. Whether you are a developer preparing for your first job or an experienced engineer brushing up for a new job search, here is a compilation of the top 25 DSA interview questions to help you get started.

DSA Interview Questions

What is an array? What are its key operations and time complexities?

Suggested Approach: Define an array first, then cover the common operations (access, insert, and delete) and their key complexities. Next, highlight its limitations and situations in which a different data structure (like a linked list) would be a better fit.

Sample Answer: An array is a contiguous block of memory containing elements of the same type. Basic operations:
  • Access by index: O(1) time.
  • Insertion at end (if capacity): O(1); at arbitrary index: O(n) (because you may need to shift elements).
  • Deletion at an arbitrary index: O(n) for the same reason.

For n elements, the space complexity is O(n). Limitations include fixed size (in certain languages), expensive shifts, and inefficiency for frequent insertions and deletions in the middle. A linked list is a better choice when frequent insertions and deletions are needed.
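
For illustration, here is a minimal Python sketch of inserting at an arbitrary index (the helper name insert_at is just illustrative); the element shifting is what makes the operation O(n):

  def insert_at(arr, index, value):
      # Append a placeholder, then shift elements right one slot at a time: O(n).
      arr.append(None)
      for i in range(len(arr) - 1, index, -1):
          arr[i] = arr[i - 1]
      arr[index] = value

  nums = [1, 2, 4, 5]
  insert_at(nums, 2, 3)
  print(nums)  # [1, 2, 3, 4, 5]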

What is a linked list? What is the difference between singly and doubly linked lists?

Suggested approach: Define a linked list, then describe the node structure, traversal, and insertion/deletion costs. Next, compare singly and doubly linked lists.

Sample answer: A linked list is a collection of nodes, each of which holds data and a reference to the next node (in a singly linked list). Insertion and deletion are O(1) once you have the position; traversal is O(n). Unlike arrays, linked lists do not allow direct access by index and require traversal instead. A doubly linked list keeps two references, next and previous, which makes it easier to add or remove a node once you have it (and lets you traverse backward), but it takes more space and requires a little more bookkeeping.
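
A minimal Python sketch of the two node structures (class names are illustrative):

  class SListNode:
      # Singly linked: data plus a reference to the next node.
      def __init__(self, data):
          self.data = data
          self.next = None

  class DListNode:
      # Doubly linked: adds a reference to the previous node as well.
      def __init__(self, data):
          self.data = data
          self.next = None
          self.prev = None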

Reverse a linked list (singly): describe the algorithm and complexity.

Suggested approach: Go over the recursive or iterative algorithm. Describe the current, previous, and next pointers. Discuss edge cases (empty list, single-node list) and time/space complexity.

Sample answer: Algorithm (iterative):
  1. Initialize prev = null, curr = head.
  2. While curr != null:
    • next = curr.next
    • curr.next = prev
    • prev = curr
    • curr = next
  3. At the end, prev is the new head.
    Time: O(n) because we traverse each node once. Space: O(1) extra space (in-place). Edge cases: if head is null or head.next is null, simply return head. The recursive version uses O(n) call-stack space.
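
A minimal Python sketch of the iterative reversal, assuming a simple ListNode class defined for the example:

  class ListNode:
      def __init__(self, val, next=None):
          self.val = val
          self.next = next

  def reverse_list(head):
      prev, curr = None, head
      while curr:
          nxt = curr.next      # save the next node
          curr.next = prev     # reverse the link
          prev = curr          # advance prev
          curr = nxt           # advance curr
      return prev              # prev is the new head

  # Build 1 -> 2 -> 3, reverse it, and print 3 2 1.
  node = reverse_list(ListNode(1, ListNode(2, ListNode(3))))
  while node:
      print(node.val)
      node = node.next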

What is a stack and a queue? Use-cases and complexities.

Suggested approach: Mention common uses (e.g., recursion/backtracking for stack; BFS/traversal or scheduling for queue), define both, highlight LIFO vs. FIFO, and list common operations (push/pop for stack; enqueue/dequeue for queue).

Sample answer:
  • A stack is a Last-In-First-Out (LIFO) data structure. Operations: push O(1), pop O(1), peek O(1). Use-cases: function call stack, parsing expressions, and backtracking algorithms.
  • A queue is First-In-First-Out (FIFO). Operations: enqueue O(1), dequeue O(1), peek O(1). Use-cases: breadth-first search (BFS) in graphs/trees and scheduling tasks. If implemented with a linked list or circular buffer, you can maintain O(1) operations.

An example: using a stack, you can verify parentheses in an expression; using a queue, you can traverse a tree level by level (BFS).
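
As an illustration, a short Python sketch of the parentheses check with a stack (the helper name is_balanced is illustrative):

  def is_balanced(expr):
      pairs = {')': '(', ']': '[', '}': '{'}
      stack = []
      for ch in expr:
          if ch in '([{':
              stack.append(ch)                     # push opening brackets
          elif ch in pairs:
              if not stack or stack.pop() != pairs[ch]:
                  return False                     # mismatched or missing opener
      return not stack                             # balanced only if nothing is left open

  print(is_balanced("(a[b]{c})"))  # True
  print(is_balanced("(]"))         # False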

What is a hash table (or hash map)? How does collision handling work?

Suggested approach: Define the hash table, its common operations (get, put, remove), and the average and worst-case complexities, then explain collision resolution strategies like open addressing (linear probing, quadratic probing, double hashing, etc.) and chaining (a separate overflow list per bucket).

Sample answer: A hash table maps keys to values using a hash function that computes an index into an array. Operations (get, put) take O(1) time on average. In the worst case (many collisions), they may degrade to O(n). Collision handling:
  • Chaining: Each table entry holds a bucket (linked list) of elements; when a collision occurs, you append to the list.
  • Open addressing: You probe other slots (e.g., double hashing: second hash; linear probing: next slot) if there is a collision.

Also discuss factors like the load factor (the ratio of entries to bucket count) and resizing to maintain performance.
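
A toy Python sketch of a chaining hash map (class and method names are illustrative; resizing is omitted for brevity):

  class ChainedHashMap:
      def __init__(self, buckets=8):
          self.buckets = [[] for _ in range(buckets)]

      def _bucket(self, key):
          return self.buckets[hash(key) % len(self.buckets)]

      def put(self, key, value):
          bucket = self._bucket(key)
          for i, (k, _) in enumerate(bucket):
              if k == key:
                  bucket[i] = (key, value)   # update an existing key
                  return
          bucket.append((key, value))        # collision: append to the chain

      def get(self, key):
          for k, v in self._bucket(key):
              if k == key:
                  return v
          return None

  m = ChainedHashMap()
  m.put("apple", 3)
  print(m.get("apple"))  # 3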

Describe the binary search algorithm. What are its prerequisites and complexities?

Suggested approach: Describe the algorithm, specify that a sorted array is needed, demonstrate how to reduce the search space and time/space complexity, and include variations (search in a rotated array).

Sample answer: Binary search works on a sorted array. We maintain two indices, low and high. While low <= high: compute mid = floor((low + high)/2). If arr[mid] equals target → found; if arr[mid] < target, search the right half (low = mid + 1); else search the left half (high = mid - 1). Time complexity: O(log n). Space complexity: O(1) for the iterative version (or O(log n) for recursion). Prerequisites: the collection must be sorted and support random access.
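
A minimal Python sketch of the iterative version:

  def binary_search(arr, target):
      low, high = 0, len(arr) - 1
      while low <= high:
          mid = (low + high) // 2
          if arr[mid] == target:
              return mid               # found
          elif arr[mid] < target:
              low = mid + 1            # search the right half
          else:
              high = mid - 1           # search the left half
      return -1                        # not found

  print(binary_search([1, 3, 5, 7, 9], 7))  # 3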

What is “sorting”? Name common sorting algorithms and their complexities.

Suggested approach: Give an explanation of sorting, list the main algorithms (such as bubble, selection, insertion, merge, quicksort, heap sort, and counting/radix for special cases), and discuss time and space complexity, stability, and in-place vs. not.

Sample answer: Sorting is the process of arranging elements in a certain order, usually ascending or descending. Common algorithms:
  • Bubble sort: O(n²), in-place, stable.
  • Selection sort: O(n²), in-place, not stable.
  • Insertion sort: O(n²) average/worst, O(n) best (if nearly sorted), in-place, stable.
  • Merge sort: O(n log n) time, O(n) extra space, stable.
  • Quicksort: O(n log n) average, O(n²) worst (if pivot is bad), in-place (with some partition schemes), not stable (unless modified).
  • Heap sort: O(n log n), in-place (usually), not stable.
  • Counting sort/radix sort: For integer keys with a restricted range, it can be O(n + k) where k is the range.
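
For illustration, a short Python sketch of insertion sort, one of the simplest stable, in-place algorithms from the list above:

  def insertion_sort(arr):
      # In-place and stable; close to O(n) when the input is already nearly sorted.
      for i in range(1, len(arr)):
          key = arr[i]
          j = i - 1
          while j >= 0 and arr[j] > key:
              arr[j + 1] = arr[j]   # shift larger elements one slot to the right
              j -= 1
          arr[j + 1] = key
      return arr

  print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]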

What is a binary tree? How is it different from a binary search tree (BST)?

Suggested approach: Emphasize the ordering property and define both.

Sample answer: A binary tree is a tree in which each node may have up to two children. A BST is a binary tree with an ordering property: every value in a node's left subtree is less than the node, and every value in its right subtree is greater. Unlike a general binary tree, which may need O(n) search, a BST enables O(log n) search/insertion in the balanced case.

How would you check if a binary tree is a valid BST?

Suggested approach: Talk about the min/max constraint propagation method or the in-order traversal approach, which should yield a sorted sequence. Bring up time and space complexity.

Sample answer: Perform an in-order traversal; the values must appear in strictly increasing order.

As an alternative, recursively ensure that every node's value falls between the minimum and maximum bounds propagated from its ancestors.

Time O(n), space O(h), where h is the height of the tree.
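
A minimal Python sketch of the min/max bound approach (the TreeNode class is assumed for the example):

  class TreeNode:
      def __init__(self, val, left=None, right=None):
          self.val = val
          self.left = left
          self.right = right

  def is_valid_bst(node, low=float("-inf"), high=float("inf")):
      # Each node's value must stay within the (low, high) bounds inherited from its ancestors.
      if node is None:
          return True
      if not (low < node.val < high):
          return False
      return (is_valid_bst(node.left, low, node.val) and
              is_valid_bst(node.right, node.val, high))

  root = TreeNode(2, TreeNode(1), TreeNode(3))
  print(is_valid_bst(root))  # True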

Describe breadth-first search (BFS) vs depth-first search (DFS) in a tree/graph.

Suggested approach: Explain both common implementations, time/space complexity, when to use one over the other, and queue vs. stack (or recursion).

Sample answer: DFS explores as deeply as possible along each branch (using a stack or recursion), which makes it appropriate for path finding and topological sorting.

BFS visits nodes level by level using a queue, which makes it ideal for shortest paths in unweighted graphs. Both run in O(V+E) time.
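
A minimal Python sketch of both traversals on a small adjacency-list graph (the example graph is illustrative):

  from collections import deque

  graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}

  def bfs(start):
      visited, queue, order = {start}, deque([start]), []
      while queue:
          node = queue.popleft()          # FIFO: visit level by level
          order.append(node)
          for nbr in graph[node]:
              if nbr not in visited:
                  visited.add(nbr)
                  queue.append(nbr)
      return order

  def dfs(node, visited=None):
      visited = visited or set()
      visited.add(node)
      order = [node]
      for nbr in graph[node]:             # recursion acts as the stack
          if nbr not in visited:
              order += dfs(nbr, visited)
      return order

  print(bfs("A"))  # ['A', 'B', 'C', 'D']
  print(dfs("A"))  # ['A', 'B', 'D', 'C']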

What is a heap (priority queue), and what are typical operations/complexities?

Suggested approach: Define min-heap/max-heap, the common operations (insert, extract-min/max, peek), the array representation, complexities, and use cases (Dijkstra, scheduling).

Sample Answer: A heap is a complete binary tree that satisfies the heap property: for a min-heap, each node is ≤ its children; for a max-heap, each node is ≥ its children. It is typically implemented using an array: for a node at index i, its children are at 2i+1 and 2i+2 (0-based). Operations:

  • Insert: O(log n) (insert at end, then “bubble up”).
  • Extract-min (or extract-max): O(log n) (swap root with last, remove last, then “heapify” downward).
  • Peek (min or max): O(1).
    Use-cases: implementing priority queues, Dijkstra’s algorithm, scheduling tasks by priority, finding k-th largest/smallest, streaming median (with two heaps).
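
As an illustration, a short sketch using Python's built-in heapq module, which implements a binary min-heap on top of a list:

  import heapq

  nums = [5, 1, 8, 3, 9]
  heap = []
  for n in nums:
      heapq.heappush(heap, n)        # insert: O(log n) each
  print(heap[0])                     # peek the minimum: 1
  print(heapq.heappop(heap))         # extract-min: 1
  print(heapq.nlargest(2, nums))     # two largest elements: [9, 8]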

What is a trie (prefix tree)? Use-cases and complexity.

Suggested Approach: Define trie (tree for storing strings/prefixes), node children map or array, operations (insert, search, startsWith), complexities. Mention memory trade-offs.

Sample Answer: A trie (also called a prefix tree) is a tree-based data structure used to store a dynamic set of strings, where each child link corresponds to a character. Each node may have multiple children (one for each possible character). Typical operations:

  • Insert a word: O(L) where L = length of the word.
  • Search a word: O(L).
  • Check prefix: O(L).
    Space: O(N * alphabet_size) worst case (for N total characters stored) unless you optimize with compressed trie or other techniques. Use-cases: autocomplete, spell-checker, IP routing (prefix matching), dictionary word checking.
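
A minimal Python sketch of a trie with insert, search, and startsWith (written here as starts_with; class and method names are illustrative):

  class TrieNode:
      def __init__(self):
          self.children = {}        # char -> TrieNode
          self.is_word = False

  class Trie:
      def __init__(self):
          self.root = TrieNode()

      def insert(self, word):                  # O(L)
          node = self.root
          for ch in word:
              node = node.children.setdefault(ch, TrieNode())
          node.is_word = True

      def search(self, word):                  # O(L)
          node = self._walk(word)
          return node is not None and node.is_word

      def starts_with(self, prefix):           # O(L)
          return self._walk(prefix) is not None

      def _walk(self, s):
          node = self.root
          for ch in s:
              node = node.children.get(ch)
              if node is None:
                  return None
          return node

  t = Trie()
  t.insert("car")
  print(t.search("car"), t.starts_with("ca"), t.search("ca"))  # True True False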

What is dynamic programming (DP)? How is it different from recursion/memoization?

Suggested approach: Describe the features of DP (optimal substructure, overlapping subproblems). Compare top-down (memoization) and bottom-up methods. Indicate when DP should be used in place of basic recursion.

Sample answer: Dynamic programming solves a problem by breaking it down into overlapping subproblems, solving each subproblem once, and storing the results. It relies on two characteristics: optimal substructure and overlapping subproblems.

Memoization (top-down DP) stores results to avoid repetition, whereas plain recursion may revisit the same subproblem repeatedly (often exponentially). Bottom-up DP builds from the smallest subproblems up to the goal. When computing Fibonacci numbers, for example, the naïve recursion is O(2ⁿ), while both the memoized version and the bottom-up method are O(n). Use DP whenever naïve recursion leads to excessive repetition of work.
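
For illustration, a short Python sketch contrasting the three approaches on Fibonacci:

  from functools import lru_cache

  def fib_naive(n):                 # O(2^n): recomputes the same subproblems
      return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

  @lru_cache(maxsize=None)
  def fib_memo(n):                  # top-down DP (memoization): O(n)
      return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

  def fib_bottom_up(n):             # bottom-up DP: O(n) time, O(1) space
      a, b = 0, 1
      for _ in range(n):
          a, b = b, a + b
      return a

  print(fib_memo(30), fib_bottom_up(30))  # 832040 832040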

What is the “Longest Increasing Subsequence” (LIS) problem? Describe a solution.

Suggested Approach: Define the problem; describe an O(n²) DP solution and mention an O(n log n) optimized approach. Highlight complexity.

Sample Answer:
  • Problem: Given an array of integers, find the length of the longest strictly increasing subsequence (not necessarily contiguous).
  • Simple DP solution: Let dp[i] = length of LIS ending at index i. Then for each i, for j from 0 to i-1, if arr[j] < arr[i], dp[i] = max(dp[i], dp[j] + 1). Time complexity O(n²), space O(n).
  • Optimized solution: Utilize a tail array and binary search to maintain candidate subsequences; time complexity O(n log n), space O(n). This demonstrates you’re comfortable with advanced algorithmic improvements.
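
A minimal Python sketch of the O(n log n) approach using a tails array and binary search (via the standard bisect module):

  from bisect import bisect_left

  def length_of_lis(nums):
      tails = []                     # tails[i] = smallest tail of an increasing subsequence of length i+1
      for x in nums:
          i = bisect_left(tails, x)  # first position with tails[i] >= x
          if i == len(tails):
              tails.append(x)        # x extends the longest subsequence found so far
          else:
              tails[i] = x           # x improves an existing tail
      return len(tails)

  print(length_of_lis([10, 9, 2, 5, 3, 7, 101, 18]))  # 4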

Describe the “0/1 Knapsack” problem and its DP solution.

Suggested Approach: Describe the DP formulation.

Sample Answer: Given N items (weight, value) and capacity W, select items to maximize value without exceeding W.

DP: dp[i][c] = max(dp[i-1][c], dp[i-1][c-w[i]] + v[i]) if w[i] ≤ c.

Time O(N × W); space O(N × W), or O(W) with the usual 1-D optimization.
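
A minimal Python sketch of the space-optimized 1-D DP (names are illustrative):

  def knapsack(weights, values, capacity):
      dp = [0] * (capacity + 1)                  # dp[c] = best value achievable with capacity c
      for w, v in zip(weights, values):
          for c in range(capacity, w - 1, -1):   # iterate backwards so each item is used at most once
              dp[c] = max(dp[c], dp[c - w] + v)
      return dp[capacity]

  print(knapsack([1, 3, 4, 5], [1, 4, 5, 7], 7))  # 9 (items with weights 3 and 4)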

What is a graph? What are adjacency list vs adjacency matrix representations?

Suggested approach: Define a graph in terms of vertices and edges. Explain the trade-offs (memory/time) of the two popular ways to store a graph in memory.

Sample answer: A graph G=(V,E) is a set of vertices V together with a set of edges E joining pairs of vertices (either directed or undirected). Representations:

Adjacency list: Maintain a list of neighbors for every vertex. Memory: O(V+E). Traversal, or iterating neighbors, is efficient.

Adjacency matrix: A V×V matrix in which cell (i, j) indicates the presence or weight of an edge. Memory: O(V²). Checking whether an edge exists takes O(1), but sparse graphs waste a lot of memory.

For sparse graphs, use an adjacency list; for dense graphs or situations needing fast edge-existence checks, use an adjacency matrix.
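
For illustration, the same small undirected graph (edges 0-1 and 1-2) in both representations:

  adj_list = {0: [1], 1: [0, 2], 2: [1]}   # O(V + E) memory

  adj_matrix = [
      [0, 1, 0],
      [1, 0, 1],
      [0, 1, 0],
  ]                                         # O(V^2) memory

  print(2 in adj_list[1])        # neighbor check via list: O(degree)
  print(adj_matrix[1][2] == 1)   # edge-existence check via matrix: O(1)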

How would you detect a cycle in a directed graph?

Suggested Approach: Explain DFS with a recursion stack or color marking.

Sample Answer: For a directed graph, use DFS with two auxiliary structures: a visited set and a recursion stack, recStack (or white/grey/black coloring).

If a neighbor is already in the recursion stack, a cycle exists.

Time O(V + E), Space O(V).

For undirected graphs, use Union-Find (disjoint set).
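
A minimal Python sketch of the DFS/recursion-stack approach for a directed graph (names are illustrative):

  def has_cycle(graph):
      # graph: dict mapping each vertex to a list of its directed neighbors
      visited, rec_stack = set(), set()

      def dfs(u):
          visited.add(u)
          rec_stack.add(u)
          for v in graph.get(u, []):
              if v not in visited:
                  if dfs(v):
                      return True
              elif v in rec_stack:       # back edge to a vertex on the current path: cycle
                  return True
          rec_stack.remove(u)
          return False

      return any(dfs(u) for u in graph if u not in visited)

  print(has_cycle({0: [1], 1: [2], 2: [0]}))  # True
  print(has_cycle({0: [1], 1: [2], 2: []}))   # False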

What is “topological sort”? Describe when it’s used.

Suggested Approach: Define ordering for DAG and describe the algorithm.

Sample Answer: Topological sort gives a linear order of the vertices such that for every edge u → v, u comes before v.

Applicable only to Directed Acyclic Graphs (DAG).

Implemented via DFS (post-order stack) or Kahn’s BFS (in-degree method). Time O(V + E).
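
A minimal Python sketch of Kahn's in-degree (BFS) method:

  from collections import deque

  def topological_sort(graph):
      # graph: dict mapping each vertex of a DAG to a list of its neighbors
      indegree = {u: 0 for u in graph}
      for u in graph:
          for v in graph[u]:
              indegree[v] += 1
      queue = deque(u for u in graph if indegree[u] == 0)
      order = []
      while queue:
          u = queue.popleft()
          order.append(u)
          for v in graph[u]:
              indegree[v] -= 1
              if indegree[v] == 0:
                  queue.append(v)
      return order if len(order) == len(graph) else None   # None signals a cycle

  print(topological_sort({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}))  # ['a', 'b', 'c', 'd']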

What is a “divide and conquer” algorithmic strategy? Give an example.

Suggested Approach: Explain the strategy and typical algorithm.

Sample Answer: Divide and conquer breaks a problem into smaller independent subproblems, solves each, then combines the results. Examples: merge sort, quicksort, binary search. Merge sort achieves O(n log n) because it halves the problem at each level (log n levels) and does O(n) merging work per level.
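
For illustration, a short Python sketch of merge sort, a classic divide-and-conquer algorithm:

  def merge_sort(arr):
      if len(arr) <= 1:
          return arr
      mid = len(arr) // 2
      left = merge_sort(arr[:mid])        # divide: sort each half
      right = merge_sort(arr[mid:])
      merged, i, j = [], 0, 0             # conquer: merge two sorted halves
      while i < len(left) and j < len(right):
          if left[i] <= right[j]:
              merged.append(left[i])
              i += 1
          else:
              merged.append(right[j])
              j += 1
      return merged + left[i:] + right[j:]

  print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]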

What is the “sliding window” technique? Give an example problem.

Suggested approach: Explain how the window moves and give an example problem.

Sample answer: The sliding window technique uses two pointers to maintain a sub-range of an array or string. For instance, to find the maximum sum of a subarray of size k: compute the first window's sum, then slide the window by subtracting the outgoing element and adding the incoming one. Time O(n).
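
A minimal Python sketch of the fixed-size window example (the helper name max_window_sum is illustrative):

  def max_window_sum(nums, k):
      window = sum(nums[:k])                # sum of the first window
      best = window
      for i in range(k, len(nums)):
          window += nums[i] - nums[i - k]   # slide: add incoming, drop outgoing
          best = max(best, window)
      return best

  print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # 9 (5 + 1 + 3)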

What is the “two-pointer” (or “fast/slow pointer”) technique?

Suggested approach: Explain the concept and give examples.

Sample answer: To avoid nested loops, two pointers iterate at different speeds or from different directions. For instance, left/right pointers can find two numbers with a given sum in a sorted array in O(n). Fast and slow pointers help locate the middle of a linked list or detect cycles in it.
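
For illustration, a short Python sketch of the left/right pointer example on a sorted array:

  def two_sum_sorted(nums, target):
      left, right = 0, len(nums) - 1
      while left < right:
          s = nums[left] + nums[right]
          if s == target:
              return left, right
          if s < target:
              left += 1       # need a larger sum
          else:
              right -= 1      # need a smaller sum
      return None

  print(two_sum_sorted([1, 2, 4, 6, 10], 8))  # (1, 3): 2 + 6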

What is Backtracking?

Suggested approach: Define recursion with pruning.

Sample answer: Backtracking builds solutions incrementally and abandons (prunes) paths that violate constraints. Examples: generating permutations or solving N-Queens. By skipping pointless branches it outperforms brute force, although the worst case can still be exponential.
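
A minimal Python sketch of backtracking that generates permutations of distinct values (names are illustrative):

  def permutations(nums):
      result, current = [], []

      def backtrack():
          if len(current) == len(nums):
              result.append(current[:])   # a complete permutation
              return
          for n in nums:
              if n in current:            # prune: value already used on this path
                  continue
              current.append(n)           # choose
              backtrack()                 # explore
              current.pop()               # un-choose (backtrack)

      backtrack()
      return result

  print(permutations([1, 2, 3]))  # 6 permutations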

Explain the “greedy algorithmic paradigm”. Give an example.

Suggested approach: Define the greedy-choice property and give typical use cases.

Sample answer: A greedy algorithm makes the locally best choice at each step in the hope of reaching a global optimum. It works when the problem possesses both the greedy-choice property and optimal substructure. Examples include Kruskal's and Prim's MST algorithms, Huffman coding, and activity selection.
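
For illustration, a short Python sketch of greedy activity selection (intervals are (start, end) pairs; the helper name is illustrative):

  def max_activities(intervals):
      # Greedy: always pick the compatible activity that finishes earliest.
      count, last_end = 0, float("-inf")
      for start, end in sorted(intervals, key=lambda x: x[1]):
          if start >= last_end:
              count += 1
              last_end = end
      return count

  print(max_activities([(1, 3), (2, 4), (3, 5), (5, 7)]))  # 3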

What is memoization, and how does it relate to dynamic programming?

Suggested approach: Define memoization (storing the results of expensive calls) and illustrate the differences between naïve recursion, memoized recursion, and bottom-up DP. Mention the time/space complexity advantage.

Sample answer: Memoization is top-down DP: the results of expensive recursive calls are cached and reused. For example, storing Fibonacci results in a map reduces exponential time to O(n). Bottom-up DP builds the table iteratively and likewise eliminates redundant work.
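
As an illustration, a short Python sketch of memoization with an explicit cache on a different example, counting lattice paths in a grid (the helper name grid_paths is illustrative):

  def grid_paths(rows, cols, cache=None):
      # Count paths from the top-left to the bottom-right cell, moving only right or down.
      cache = cache if cache is not None else {}
      if rows == 1 or cols == 1:
          return 1
      if (rows, cols) not in cache:
          # Cache the result of each expensive recursive call (memoization).
          cache[(rows, cols)] = (grid_paths(rows - 1, cols, cache) +
                                 grid_paths(rows, cols - 1, cache))
      return cache[(rows, cols)]

  print(grid_paths(3, 3))  # 6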

Describe the “sliding window maximum” problem and give an efficient solution.

Suggested approach: Describe the problem: given an array and a window size k, find the maximum in each sliding window. Describe the naïve O(nk) solution and the optimal O(n) solution using a deque (double-ended queue), and mention complexity.

Sample Answer: Problem: Given an array of length n and window size k, for each index i = 0…n-k, compute the maximum of the sub-array from i to i+k-1. Naïve approach: for each window scan k elements → O(nk). Efficient approach: Use a deque to keep candidates of maximums, as you move the window forward:

  • Remove indices that have fallen out of the current window from the front of the deque.
  • Remove from the back all elements smaller than the incoming element (they can never become the maximum).
  • Add the current index to the back.
  • The front of the deque is the index of the current window’s maximum.
    Time complexity: O(n) because each element is added/removed at most once; space complexity: O(k). This pattern is common in interviews.
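
A minimal Python sketch of the deque-based solution:

  from collections import deque

  def sliding_window_max(nums, k):
      dq, result = deque(), []           # dq holds indices; their values are kept in decreasing order
      for i, x in enumerate(nums):
          if dq and dq[0] <= i - k:      # front index has left the window
              dq.popleft()
          while dq and nums[dq[-1]] < x: # smaller elements can never become the maximum
              dq.pop()
          dq.append(i)
          if i >= k - 1:
              result.append(nums[dq[0]]) # the front holds the current window's maximum
      return result

  print(sliding_window_max([1, 3, -1, -3, 5, 3, 6, 7], 3))  # [3, 3, 5, 5, 6, 7]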

Conclusion

Regular practice, a thorough understanding of the principles, and the ability to express your reasoning confidently within a strict time limit are the three components of DSA interview preparation. Quite often, the interviewer is more interested in how you think than in the correct answer. So handle the edge cases, explain your approach in detail, and demonstrate how you would improve it.
