InterviewStack.io

Data Structures and Complexity Questions

Comprehensive coverage of fundamental data structures, their operations, implementation trade-offs, and algorithmic uses. Candidates should know arrays and strings (including dynamic-array amortized behavior and memory-layout differences), linked lists, stacks, queues, hash tables and collision handling, sets, trees (including binary search trees and balanced trees), tries, heaps as priority queues, and graph representations such as adjacency lists and adjacency matrices.

Understand the typical operations and costs for access, insertion, deletion, lookup, and traversal, and be able to analyze asymptotic time and auxiliary space complexity using Big O notation, covering the constant, logarithmic, linear, linearithmic, quadratic, and exponential classes as well as average-case, worst-case, and amortized behavior. Be able to read code or pseudocode and derive its time and space complexity, identify performance bottlenecks, and propose alternative data structures or algorithmic approaches to improve performance.

Know the common algorithmic patterns that interact with these structures, such as traversal strategies, searching and sorting, two-pointer and sliding-window techniques, divide and conquer, recursion, dynamic programming, greedy methods, and priority processing, and know when to combine structures for efficiency, for example pairing a heap with a hash map for index tracking.

Implementation-focused skills include writing or partially implementing core operations, discussing language-specific considerations such as contiguous versus non-contiguous memory and pointer or manual memory management where applicable, and explaining space-time trade-offs and cache or memory behavior. Interview expectations vary by level: junior candidates should select and implement appropriate structures for routine problems, while senior candidates should optimize naive solutions, design custom structures under constraints, and reason about amortized, average-case, and concurrency implications.
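The "heap with a hash map" combination mentioned above can be sketched in Python. This is an illustrative lazy-deletion variant (the class name and interface are invented for this example, not taken from the question bank): the hash map records each key's latest priority, so stale heap entries can be skipped on pop, giving O(log n) priority updates without an explicit decrease-key.

```python
import heapq

class LazyPriorityQueue:
    """Illustrative sketch: heap + hash map with lazy deletion."""

    def __init__(self):
        self._heap = []     # (priority, key) pairs; may contain stale entries
        self._current = {}  # key -> latest priority

    def set(self, key, priority):
        # O(log n): push a new entry; any older entry for key becomes stale.
        self._current[key] = priority
        heapq.heappush(self._heap, (priority, key))

    def pop_min(self):
        # Discard stale entries until one matches the hash map.
        while self._heap:
            priority, key = heapq.heappop(self._heap)
            if self._current.get(key) == priority:
                del self._current[key]
                return key, priority
        raise KeyError("pop from empty queue")

pq = LazyPriorityQueue()
pq.set("a", 5)
pq.set("b", 2)
pq.set("a", 1)                # updates "a" down to priority 1
print(pq.pop_min())           # ('a', 1) -- the stale (5, 'a') entry is skipped later
```

One design note: lazy deletion trades heap size (stale entries linger until popped) for simplicity; an index-tracking variant that stores each key's heap position supports true in-place decrease-key at the cost of more bookkeeping.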

Hard · Technical
83 practiced
Design an algorithm/data-structure to find top-k frequent items in a high-rate data stream with limited memory. Compare exact counters (hash map), Misra-Gries (k counters), and Count-Min Sketch (approximate). For each approach give space complexity, error bounds, update time, and how to merge results from multiple shards.
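One of the approaches named in this question, Misra-Gries with k counters, can be sketched briefly (a hedged illustration, assuming a Python iterable as the stream): any item occurring more than n/(k+1) times in a stream of length n is guaranteed to survive, and each surviving count is undercounted by at most n/(k+1).

```python
def misra_gries(stream, k):
    """Keep at most k counters; decrement all when a new item arrives full."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k:
            counters[item] = 1
        else:
            # Decrement every counter; drop any that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters  # candidate heavy hitters with undercounted tallies

summary = misra_gries("abacabadab", k=2)
print(summary)  # {'a': 3} plus at most one other survivor
```

Merging shards is straightforward with this structure: sum the counter maps key-wise, then repeatedly apply the same decrement rule until at most k counters remain; the per-shard error bounds add.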
Easy · Technical
71 practiced
Explain the conceptual difference between a set and a map/dictionary. Give three backend use cases where a set is the better choice and three where a map is necessary. Discuss implementation variants (hash-set, tree-set) and when you might use a Bloom filter as an approximate set.
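A tiny illustrative contrast using Python's built-in hash-set and dict (the deduplication and session-store scenarios here are invented examples, not part of the question):

```python
# Set: membership only -- e.g. request-ID deduplication.
seen_ids = set()
seen_ids.add("req-42")
is_dup = "req-42" in seen_ids        # O(1) average membership test

# Map: key -> value association -- e.g. session token -> user ID.
session_store = {}
session_store["tok-9f3"] = "user-17"
user = session_store.get("tok-9f3")  # lookup returns the stored value
```

The distinction the question targets: a set answers only "is this element present?", while a map must additionally carry a payload per key; a tree-backed variant of either adds ordered iteration at O(log n) per operation.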
Medium · Technical
96 practiced
Explain Breadth-First Search (BFS) and Depth-First Search (DFS) on a graph represented with adjacency lists. For an unweighted graph, explain why BFS finds shortest paths measured in edge count. State the time and memory complexity of both, and give a backend example where DFS is preferable to BFS.
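A minimal BFS sketch on an adjacency-list graph (assuming hashable node labels; the graph here is a made-up example). The FIFO queue is exactly why BFS yields shortest edge counts: nodes are settled in non-decreasing order of distance from the source.

```python
from collections import deque

def bfs_distances(adj, source):
    """Return hop counts from source; unreachable nodes are absent."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()               # FIFO order => shortest edge count
        for neighbor in adj.get(node, []):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(bfs_distances(graph, "a"))   # {'a': 0, 'b': 1, 'c': 1, 'd': 2}
```

Both BFS and DFS run in O(V + E) time on an adjacency list; BFS's queue can hold a whole frontier (up to O(V)), while recursive DFS's stack depth is bounded by the longest path.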
Medium · Technical
79 practiced
Implement an LRU cache that supports get(key) and put(key, value) in O(1) time. Specify the interface in your preferred language, include capacity C, and show code or clear pseudo-code for core operations. Explain why your chosen combination of data structures achieves O(1) operations and discuss edge cases (e.g., eviction tie-breaking).
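A compact sketch of the standard answer, hash map plus doubly linked list, using Python's OrderedDict, which provides exactly that combination (the miss-returns-None convention is a choice made for this example):

```python
from collections import OrderedDict

class LRUCache:
    """Hash map + doubly linked list via OrderedDict: O(1) get and put."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)          # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)   # evict the least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")              # touching "a" makes "b" the LRU entry
cache.put("c", 3)           # capacity exceeded: "b" is evicted
print(cache.get("b"))       # None
```

The hash map gives O(1) key lookup; the linked list gives O(1) reordering and O(1) eviction from the stale end, which is why neither structure alone suffices.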
Medium · Technical
81 practiced
Explain the cost of hash-table resizing during inserts. For a hash map that doubles capacity when load factor > 0.75, describe the amortized cost per insert and the worst-case cost for a single insert. For a latency-sensitive backend service, how would you mitigate large, unpredictable resizing pauses?
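The amortized-versus-worst-case contrast in this question can be made concrete with a small cost-accounting simulation (not a real hash map; the initial capacity and doubling policy are assumptions for illustration): total rehash work grows linearly with n, so the amortized cost per insert is O(1), even though a single insert can pay to rehash every element.

```python
def simulate_inserts(n, initial_capacity=8, max_load=0.75):
    """Count rehash operations under double-on-0.75-load growth."""
    capacity, size = initial_capacity, 0
    total_rehash_ops = 0
    worst_single_insert = 0
    for _ in range(n):
        size += 1
        if size / capacity > max_load:
            total_rehash_ops += size                  # rehash every element
            worst_single_insert = max(worst_single_insert, size)
            capacity *= 2
    return total_rehash_ops, worst_single_insert

total, worst = simulate_inserts(1_000_000)
print(total / 1_000_000)   # amortized rehash work per insert: a small constant
print(worst)               # one unlucky insert rehashed ~all existing entries
```

The mitigation part of the question points at exactly this gap: incremental rehashing (move a few buckets per operation), pre-sizing from known workloads, or resizing off the hot path all trade the single large pause for spread-out work.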
