
Top 20 Operating System Interview Questions for Freshers (2025)

Operating Systems is one of the most commonly tested CS fundamentals in campus placements at TCS, Infosys, Amazon, Microsoft, and every company that runs core technical rounds. These 20 questions cover processes, scheduling, deadlock, memory management, and synchronization — exactly what interviewers ask.

1. What is an Operating System? What are its main functions?
An OS is system software that acts as an interface between hardware and user applications. Main functions: Process Management (create, schedule, terminate processes), Memory Management (allocate/deallocate RAM), File System Management (organize files on disk), I/O Management (manage device drivers), Security & Access Control, and providing a User Interface (CLI/GUI). Without an OS, each application would need to directly manage hardware — impractical and insecure.
2. What is the difference between a process and a thread?
A process is an independent program in execution with its own memory space (code, data, heap, stack). Threads are lightweight units of execution within a process — they share the same memory space (code, data, heap) but each has its own stack and registers. Creating a thread is much cheaper than creating a process. Context switching between threads is faster. Processes are isolated; a crash in one doesn't affect another. Threads within a process can corrupt shared memory if not synchronized properly.
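The shared-memory point is easy to see in a few lines of Python. A minimal sketch (the `worker` helper and counts are ours, not from any standard API): four threads increment one shared counter, serialized by a lock.

```python
import threading

# Threads in one process share the same memory: every worker mutates
# the same 'counter' object. The Lock prevents lost updates.
counter = {"value": 0}
lock = threading.Lock()

def worker(increments):
    for _ in range(increments):
        with lock:                      # synchronize access to shared state
            counter["value"] += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["value"])  # 40000 — all four threads saw the same memory
```

Run the same workload with `multiprocessing.Process` instead and each process gets its own copy of `counter`, so the parent's value stays 0 unless you add explicit IPC — exactly the isolation the answer above describes.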
3. What is CPU scheduling? What are the main scheduling algorithms?
CPU scheduling decides which process gets the CPU when multiple processes are ready. Key algorithms: FCFS (First Come First Served) — simple but poor for short jobs behind long ones. SJF (Shortest Job First) — optimal average wait time but requires knowing burst time in advance. Round Robin — each process gets a fixed time quantum, cycled repeatedly — best for time-sharing. Priority Scheduling — highest priority runs first, can cause starvation (fixed with aging). Multilevel Queue — separate queues for different process types (interactive vs batch).
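Round Robin is simple enough to simulate by hand. A toy sketch (assuming all processes arrive at time 0; `round_robin` is our own helper, not a standard API):

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate Round Robin; return the completion time of each process.

    burst_times: {pid: cpu_burst}. All processes are assumed to arrive
    at time 0, a simplification for this sketch.
    """
    remaining = dict(burst_times)
    ready = deque(burst_times)          # FIFO ready queue
    clock = 0
    completion = {}
    while ready:
        pid = ready.popleft()
        run = min(quantum, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] == 0:
            completion[pid] = clock     # process finished
        else:
            ready.append(pid)           # preempted: back of the queue
    return completion

print(round_robin({"P1": 5, "P2": 3, "P3": 1}, quantum=2))
# → {'P3': 5, 'P2': 8, 'P1': 9}
```

Note how the short job P3 finishes at time 5 instead of waiting behind P1 — the responsiveness benefit that makes Round Robin the default choice for time-sharing.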
4. What is a deadlock? What are the four necessary conditions?
A deadlock is a situation where a set of processes are all waiting for resources held by each other — none can proceed. The four Coffman conditions (all must hold simultaneously): 1. Mutual Exclusion — at least one resource is non-shareable. 2. Hold and Wait — a process holds resources while waiting for more. 3. No Preemption — resources cannot be forcibly taken from a process. 4. Circular Wait — a circular chain of processes each waiting for a resource held by the next. Deadlock prevention removes at least one condition; deadlock avoidance uses algorithms like Banker's Algorithm.
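One practical prevention strategy is breaking the Circular Wait condition by imposing a single global lock order. A sketch in Python (the `transfer`/`acquire_in_order` helpers are hypothetical, and ordering by `id()` is just one possible total order):

```python
import threading

lock_a = threading.Lock()
lock_b = threading.Lock()
done = []

# Circular Wait needs a cycle of lock holders. Forcing every thread to
# acquire locks in one global order (here: by id()) makes a cycle impossible.
def acquire_in_order(l1, l2):
    first, second = sorted((l1, l2), key=id)
    first.acquire()
    second.acquire()
    return first, second

def transfer(src, dst):
    first, second = acquire_in_order(src, dst)
    try:
        done.append((src, dst))        # stand-in for the real work
    finally:
        second.release()
        first.release()

# These two threads request the locks in opposite orders — a classic
# deadlock recipe if each simply locked src then dst.
t1 = threading.Thread(target=transfer, args=(lock_a, lock_b))
t2 = threading.Thread(target=transfer, args=(lock_b, lock_a))
t1.start(); t2.start()
t1.join(); t2.join()
print("completed transfers:", len(done))
```

Both transfers complete because neither thread can ever hold the "second" lock while waiting for the "first" — the circular chain can't form.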
5. What is virtual memory? How does it work?
Virtual memory creates an illusion that each process has its own large, contiguous address space, even if physical RAM is limited. It works by mapping virtual addresses to physical RAM pages using a page table maintained by the OS and Memory Management Unit (MMU). Pages not currently in RAM are stored on disk (swap space). When a process accesses a page not in RAM, a page fault occurs and the OS loads the page from disk. This allows running programs larger than physical RAM but disk access causes latency.
6. What is paging? How does it differ from segmentation?
Paging divides physical memory into fixed-size frames and virtual memory into equal-size pages. The page table maps virtual page numbers to physical frame numbers. It eliminates external fragmentation (frames are fixed-size) but causes internal fragmentation (the last page may not be full). Segmentation divides memory into variable-size logical units (code, stack, heap segments) based on program structure. It has no internal fragmentation but suffers from external fragmentation. Modern OSes on x86-64 rely almost entirely on paging; segmentation survives only in vestigial form, with flat segments spanning the whole address space.
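The internal-fragmentation arithmetic is worth being able to do on the spot. A tiny sketch (assuming 4 KB pages; `paging_overhead` is our own helper):

```python
import math

def paging_overhead(process_bytes, page_bytes):
    """Pages needed and internal fragmentation for a fixed page size."""
    pages = math.ceil(process_bytes / page_bytes)
    wasted = pages * page_bytes - process_bytes   # unused tail of the last page
    return pages, wasted

# A 13 KB process with 4 KB pages: four pages (16 KB), 3 KB wasted.
pages, wasted = paging_overhead(13 * 1024, 4 * 1024)
print(pages, wasted // 1024)  # 4 3
```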
7. What is a page fault? What happens when one occurs?
A page fault occurs when a process accesses a virtual page not currently loaded in physical RAM. Steps: 1. The MMU detects the missing page and raises a page fault trap. 2. The OS takes control and checks whether the access is valid (if not → segfault). 3. The OS finds a free frame (or evicts a page using a replacement algorithm like LRU). 4. The OS reads the required page from disk into the frame. 5. The OS updates the page table. 6. The process resumes at the faulting instruction. Frequent page faults (thrashing) kill performance.
8. What is thrashing?
Thrashing occurs when a system spends more time swapping pages in and out of memory than executing actual processes. It happens when the total working set of all running processes exceeds available physical RAM — each process page-faults constantly, causing the OS to continuously swap, leaving no CPU time for real work. Solutions: reduce multiprogramming degree (run fewer processes), increase RAM, improve page replacement algorithm, or use working set model to track how many pages each process actually needs.
9. What is a semaphore? How does it differ from a mutex?
A semaphore is a synchronization primitive that controls access to shared resources. It maintains a counter: wait() (P) decrements it; signal() (V) increments it. Binary semaphore (0 or 1) — works like a mutex. Counting semaphore — allows up to N processes to use a pool of resources simultaneously. Mutex (Mutual Exclusion Lock): a locking mechanism where only the thread that locked it can unlock it — it has ownership. A binary semaphore has no ownership — any thread can signal it. Use a mutex for mutual exclusion; use a semaphore for signaling between threads or resource counting.
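A counting semaphore from Python's `threading` module, used here to cap concurrency at two (the `use_resource` worker and the bookkeeping around `peak` are illustrative, not standard API):

```python
import threading
import time

# A counting semaphore initialized to 2: at most two threads may hold
# a 'slot' at once. A mutex would be the N = 1 case, plus ownership rules.
slots = threading.Semaphore(2)
state_lock = threading.Lock()
active = 0
peak = 0

def use_resource():
    global active, peak
    with slots:                     # wait()/P: decrement, or block at 0
        with state_lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)            # hold the slot briefly so threads overlap
        with state_lock:
            active -= 1
    # leaving the outer 'with' performs signal()/V: increment the counter

threads = [threading.Thread(target=use_resource) for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("peak concurrent holders:", peak)  # never exceeds 2
```

Six threads contend, but `peak` never exceeds the semaphore's initial count — the resource-counting use case that a plain mutex can't express.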
10. What is a critical section? What are Peterson's Solution conditions?
A critical section is a code segment that accesses shared resources and must not be executed by more than one process simultaneously. Requirements for a correct critical section solution: 1. Mutual Exclusion — only one process inside at a time. 2. Progress — if no process is in the critical section, a waiting process should be able to enter without indefinite delay. 3. Bounded Waiting — a process must not wait forever (bounded number of times others can enter before it). Peterson's Solution uses two variables (flag and turn) to satisfy all three for two processes — not used in practice (hardware solutions preferred).
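Peterson's Solution transcribes almost directly into Python. One hedge: this sketch relies on CPython's GIL giving sequentially consistent interleavings; on real hardware, compilers and CPUs reorder memory operations, which is exactly why the algorithm isn't used in practice.

```python
import threading

# Peterson's algorithm for two threads (ids 0 and 1).
flag = [False, False]   # flag[i]: thread i wants to enter
turn = 0                # whose turn it is to yield

counter = 0

def enter(i):
    global turn
    other = 1 - i
    flag[i] = True          # announce intent
    turn = other            # politely let the other go first
    while flag[other] and turn == other:
        pass                # busy-wait until it's safe to enter

def leave(i):
    flag[i] = False

def worker(i):
    global counter
    for _ in range(500):
        enter(i)
        counter += 1        # critical section: one thread at a time
        leave(i)

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 1000 — no lost updates
```

The `flag` array provides intent (needed for mutual exclusion), and `turn` breaks ties (needed for progress and bounded waiting) — remove either and one of the three conditions fails.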
11. What is context switching?
Context switching is the process of saving the state (context) of the currently running process and restoring the context of the next process to run. The context includes: CPU registers, program counter, stack pointer, process state, memory mappings. Context switch overhead is pure overhead — no useful work is done during the switch. Threads switch faster than processes because they share the same virtual address space (no need to flush TLB). Context switches happen due to interrupts, system calls, or the scheduler's time quantum expiring.
12. What are the different states of a process?
The typical process state model: New — process is being created. Ready — process is in memory, waiting for CPU. Running — process is currently executing on the CPU. Waiting/Blocked — process is waiting for an I/O event or resource (not using CPU). Terminated — process has finished execution. Transitions: New → Ready (admitted), Ready → Running (scheduler dispatch), Running → Waiting (I/O request), Waiting → Ready (I/O completion), Running → Ready (preempted/time quantum), Running → Terminated (exit).
13. What is Inter-Process Communication (IPC)?
IPC allows processes to communicate and synchronize. Methods: Shared Memory — fastest, processes map the same memory region (requires synchronization). Message Passing — processes send/receive messages via OS (safer, no shared state). Pipes — unidirectional data stream between parent-child processes. Named Pipes (FIFOs) — like pipes but work between unrelated processes. Sockets — IPC over a network or locally. Signals — asynchronous notifications (e.g., SIGTERM, SIGKILL). Message Queues — persistent message buffers managed by OS. Semaphores — for synchronization, not data transfer.
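The simplest of these to demo is the anonymous pipe. A sketch using `os.pipe` — both ends are kept in one process purely to show the mechanics; after a real `fork()` the parent and child would each close one end and keep the other:

```python
import os

# An anonymous pipe: a unidirectional byte stream managed by the kernel.
read_fd, write_fd = os.pipe()

os.write(write_fd, b"hello from the writer")
os.close(write_fd)                 # closing the write end signals EOF

message = os.read(read_fd, 1024)
os.close(read_fd)
print(message.decode())            # hello from the writer
```

This is exactly what the shell builds for `ls | grep txt`: the kernel buffers bytes between the two ends, and no shared memory or synchronization is exposed to the processes.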
14. What is the difference between preemptive and non-preemptive scheduling?
Preemptive scheduling: the OS can forcibly remove a process from the CPU mid-execution (e.g., when its time quantum expires or a higher-priority process arrives). Examples: Round Robin, Priority Preemptive. Better for interactive systems — ensures responsiveness. Non-preemptive (cooperative) scheduling: once a process has the CPU, it runs until it voluntarily gives it up (I/O request, system call, or completion). Examples: FCFS, SJF (non-preemptive). Simpler but can cause poor response time if a process hogs the CPU.
15. What is memory fragmentation? What are its types?
Fragmentation is wasted memory that cannot be used effectively. Internal Fragmentation: an allocated memory block is larger than required — the unused space inside the block is wasted. Occurs in fixed-size allocation (paging). Example: with 4KB pages, a process that needs 13KB gets four pages (16KB) — 3KB wasted. External Fragmentation: enough total free memory exists but it's scattered in non-contiguous pieces, so large requests can't be satisfied. Occurs in variable-size allocation (segmentation). Solutions: compaction (moving processes to consolidate free space) or paging (fixed-size blocks eliminate external fragmentation).
16. What is a system call?
A system call is the programmatic way a user-space application requests a service from the OS kernel. When an app needs hardware access, file operations, process creation, or network I/O, it makes a system call which switches the CPU from user mode to kernel mode. Examples: fork() (create process), exec() (run a program), open()/read()/write() (file I/O), socket() (networking), malloc() uses brk()/mmap() internally. System calls are expensive relative to regular function calls due to the mode switch and context save.
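Python's `os` module wraps these calls almost directly, which makes the boundary easy to poke at. A sketch (the file name is arbitrary): each `os.open`/`os.write`/`os.read`/`os.close` below triggers a user-to-kernel mode switch via open(2)/write(2)/read(2)/close(2).

```python
import os
import tempfile

path = os.path.join(tempfile.gettempdir(), "syscall_demo.txt")

# Each of these lines crosses into kernel mode and back.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)  # open(2)
os.write(fd, b"written via raw system calls")              # write(2)
os.close(fd)                                               # close(2)

fd = os.open(path, os.O_RDONLY)
data = os.read(fd, 1024)                                   # read(2)
os.close(fd)
os.remove(path)                                            # unlink(2)
print(data.decode())
```

Compare this with Python's buffered `open()`: the buffering exists precisely because each raw system call is expensive, so user-space libraries batch many small reads/writes into fewer kernel crossings.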
17. What is the difference between kernel mode and user mode?
Kernel (privileged) mode: CPU has unrestricted access to all hardware — can execute any instruction, access any memory, manage I/O. OS kernel runs in this mode. User mode: CPU has restricted access — cannot directly access hardware, protected memory, or execute privileged instructions. User applications run in this mode. The separation protects the OS and other processes from buggy/malicious code. Switching from user to kernel mode happens via system calls, hardware interrupts, or exceptions — the hardware enforces this boundary.
18. What are page replacement algorithms?
When physical RAM is full and a new page must be loaded, a page replacement algorithm selects which existing page to evict: FIFO (First In First Out): evicts the oldest page — simplest but can exhibit Bélády's anomaly (more frames → more faults). Optimal: evicts the page that won't be used for the longest time in future — theoretical benchmark, impossible in practice. LRU (Least Recently Used): evicts the page not used for the longest time — good approximation of Optimal, requires tracking access times. Clock/Second Chance: efficient LRU approximation using a circular buffer with reference bits.
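FIFO and LRU are each a few lines to simulate, and FIFO's Bélády's anomaly shows up with the classic reference string. A sketch (`fifo_faults`/`lru_faults` are our own helpers):

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement."""
    memory = deque()
    faults = 0
    for page in refs:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.popleft()            # evict the oldest page
            memory.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults under LRU replacement."""
    memory = OrderedDict()
    faults = 0
    for page in refs:
        if page in memory:
            memory.move_to_end(page)        # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.popitem(last=False)  # evict least recently used
            memory[page] = True
    return faults

# Classic reference string exhibiting Bélády's anomaly under FIFO:
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3), fifo_faults(refs, 4))  # 9 10 — more frames, MORE faults
print(lru_faults(refs, 3), lru_faults(refs, 4))    # 10 8 — LRU has no such anomaly
```

LRU never exhibits the anomaly because it is a stack algorithm: the pages kept with k frames are always a subset of those kept with k+1 frames, which FIFO does not guarantee.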
19. What is the Banker's Algorithm?
Banker's Algorithm is a deadlock avoidance algorithm. The OS maintains: Max (maximum resources each process may need), Allocation (currently allocated), Need (Max - Allocation), and Available (free resources). Before granting a resource request, the OS checks if the resulting state is 'safe' — i.e., there exists a sequence (safe sequence) in which all processes can complete using available resources. If safe, the request is granted; if not, the process must wait. Named after a bank that only lends money if it can still satisfy all customers' maximum withdrawal needs.
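The safety check at the heart of the algorithm is short enough to write out. A sketch using a standard textbook example of 5 processes and 3 resource types (our `is_safe` helper returns a safe sequence, or None if the state is unsafe):

```python
def is_safe(available, allocation, need):
    """Return a safe completion sequence of process indices, or None."""
    work = list(available)              # resources free right now
    n = len(allocation)
    finished = [False] * n
    sequence = []
    progressed = True
    while progressed:
        progressed = False
        for p in range(n):
            # Can process p's remaining need be met from 'work'?
            if not finished[p] and all(need[p][r] <= work[r] for r in range(len(work))):
                # p runs to completion, then releases everything it holds.
                for r in range(len(work)):
                    work[r] += allocation[p][r]
                finished[p] = True
                sequence.append(p)
                progressed = True
    return sequence if all(finished) else None

# Textbook state: 5 processes (P0..P4), 3 resource types (A, B, C).
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
maximum    = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
need = [[m - a for m, a in zip(mx, al)] for mx, al in zip(maximum, allocation)]
available = [3, 3, 2]

print(is_safe(available, allocation, need))  # → [1, 3, 4, 0, 2], a safe sequence
```

Before granting any new request, the OS would tentatively apply it and rerun this check; if `is_safe` returns None, the request is deferred.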
20. What is spooling in OS?
Spooling (Simultaneous Peripheral Operations On-Line) is a technique where data is buffered in a disk area (spool) for a device (like a printer) that can't keep up with the request rate. Instead of sending data directly to a slow device, the OS writes it to a spool on disk; a background daemon (spooler) feeds the spool to the device at its own pace. This lets multiple processes 'print' simultaneously without waiting for the physical printer — each process's print job is queued. Spooling is also used for batch job scheduling.
© 2025 CareerLens