The following contents were generated by ChatGPT based on the midterm hints.
They only include questions that Jackson did not answer correctly or thought were important.
Multiple Choice
- Which statement best describes the relationship between device drivers and the kernel?
A) Device drivers operate independently from the kernel
B) Device drivers are hardware components that control the kernel
C) Device drivers are software components that reside inside the kernel
D) The kernel manages hardware directly without device drivers
- What is the typical flow of control when an I/O operation completes in an interrupt-driven system?
A) CPU → Device Driver → Controller → Device
B) Device → Controller → Interrupt → CPU → Device Driver
C) Device Driver → Interrupt → Controller → Device
D) Memory → System Bus → Controller → CPU
- Which component would be responsible for translating high-level read/write commands into specific hardware control signals?
A) Memory
B) System bus
C) Device driver
D) Graphics adapter
- What architectural design pattern is illustrated by the relationship between device controllers, device drivers, and the kernel?
A) Client-server model
B) Layered abstraction
C) Peer-to-peer communication
D) Event-driven programming
- Which of the following represents the correct relationship between storage devices in terms of cost?
A) Magnetic disk > Main memory > Cache > Hardware register
B) Hardware register > Cache > Main memory > Magnetic disk
C) Magnetic disk < Main memory < Cache < Hardware register
D) Hardware register < Cache < Main memory < Magnetic disk
- How can we describe the structure of the kernel in traditional UNIX?
A. It is highly modular with clear-cut layers.
B. It consists of a single function for system management.
C. It has a large number of functions all at one level.
D. It only interacts with the physical hardware directly.
- In the monolithic structure of traditional UNIX, what role do system programs play in relation to the kernel?
A. They directly modify the kernel code during operation.
B. They interact with the kernel through the system-call interface.
C. They are independent of the kernel and operate separately.
D. They are embedded within the kernel as sub-components.
Answer: B. Distinguishing C from D: C claims system programs do not depend on the kernel at all, which is clearly wrong; D claims system programs are embedded inside the kernel, which is also wrong. The two are separate parts.
- A Process Control Block (PCB) is also known as:
A. Program Control Block
B. Task Control Block
C. Process Information Block
D. System Control Block
Answer: B
- Why is the PCB invisible to the process itself?
A. Because it contains sensitive information for the kernel only.
B. Because it is stored in a different hard-disk partition.
C. Because the process does not have the necessary read permissions.
D. Because it is encrypted by the kernel.
Answer: A (because it contains sensitive information for the kernel only)
The PCB stores the process state, CPU-scheduling information, memory-management information, and similar data. This information is used mainly by the kernel to manage the process and is sensitive from the process's point of view; the kernel does not want the process to read or modify it freely, so the PCB is invisible to the process itself, which makes this option correct.
- Which of the following is part of the accounting information stored in the PCB?
A. The number of threads in the process
B. CPU used and clock time elapsed since start
C. The version of the operating system
D. Network bandwidth used by the process
Answer: B
- In a system where the context-switch time is relatively long and the processes are mainly I/O-bound, which of the following optimizations is most likely to improve overall system performance?
A. Increasing the priority of all I/O-bound processes.
B. Optimizing the data structure of the Process Control Block to reduce its complexity.
C. Upgrading the CPU to a more powerful one.
D. Adding more buffers for I/O operations.
Answer: B
- Suppose a process Pj is in the running state, and a context switch is triggered by a system call. During this context switch, if there is an error in saving the values of the CPU registers into PCBj, what kind of problem may occur when Pj is resumed later?
A. Pj may execute a wrong instruction sequence because the program counter value is incorrect.
B. The I/O devices allocated to Pj may malfunction.
C. The process Pj will be automatically terminated by the operating system.
D. Other processes in the ready queue will be unable to get CPU time.
Answer: A
- The CPU registers hold key information such as the program counter (which indicates the location of the next instruction to execute). If an error occurs while saving the CPU register values into PCBj during the context switch, the saved program-counter value may be wrong. When Pj resumes, it starts fetching instructions from that wrong program-counter value and executes a wrong instruction sequence, so option A is correct.
- In the inter-process communication model of message passing, which operations are controlled by the kernel?
A. Sending messages (send) and receiving messages (receive)
B. Allocation of shared memory
C. Process creation
D. Process termination
Answer: A
In the message-passing IPC model, the send and receive operations are controlled by the kernel. The kernel is responsible for ensuring that messages are delivered correctly between processes, handling issues such as message buffering, synchronization, and security.
- What will happen if the open() function in the reader process of a named-pipe communication fails?
A. The writer process will automatically terminate.
B. The reader process will continue to execute without receiving data.
C. The system will create a new named pipe.
D. The reader process may not be able to receive data from the named pipe as expected.
Answer: D
- A named pipe is created with mkfifo and removed with unlink. A named pipe is bidirectional; an ordinary pipe is unidirectional.
- In SJF scheduling, if two processes have the same predicted next CPU burst length, how might the scheduler decide which one to execute first?
A. Execute the process that arrived earlier.
B. Randomly choose one of the processes.
C. Execute the process with a higher priority.
D. Both A and C could be possible depending on the implementation.
Answer: D. From the algorithm given in the question alone, the tie cannot be resolved; a slightly more complete design introduces the RR time quantum q so that processes with equal priority alternate.
- When using exponential averaging to predict the next CPU burst length (τn+1 = αtn + (1−α)τn), if the actual length of the nth CPU burst (tn) is much larger than the previous predicted value (τn), and α is set to 0.5, what will happen to the new predicted value (τn+1)?
A. τn+1 will be closer to tn.
B. τn+1 will be closer to τn.
C. τn+1 will be exactly the average of tn and τn.
D. τn+1 will be much larger than both tn and τn.
Answer: C. With α = 0.5, τn+1 = 0.5tn + 0.5τn, which is exactly the average of tn and τn. Solve questions like this with a concrete example: if tn = 1000 and τn = 5, then τn+1 = 500 + 2.5 = 502.5.
- In a multilevel feedback queue, what is the main advantage compared to a multilevel queue?
Answer: processes can move between queues, preventing starvation. (Answered correctly; noted here for the wording.)
- The parameters that define a multilevel feedback queue: the number of queues, the scheduling algorithm for each queue, and the policies for moving processes between queues.
- In a multilevel queue system, if the foreground queue has a shorter time slice compared to the background queue, what is the purpose of this setting?
A. To ensure that interactive processes can respond quickly to user input.
B. To give batch processes more time to complete their tasks.
C. Both A and B.
D. To save system resources.
Answer: C
The following contents are written by Jackson based on the midterm hints.
Short-Essay Questions:
- About system-call parameter passing (3 methods: passed in registers; stored in a block in memory, with the block's address passed as a parameter in a register; pushed onto a stack by the program and popped off the stack by the OS)
- About Zombie and Orphan
- A zombie process is half alive and half dead
- It is terminated, but still consumes system resources
- still has an entry in the process table
- The entry is still needed to allow the parent process to read its child’s exit status.
- Once the exit status is read by the parent via the wait system call, the zombie's entry is removed from the process table ("reaped"). If a parent terminates without reaping a child, the child will be reaped by init or a system process.
- An orphan process is a child process that is still running although its parent process has finished or terminated.
- How to tell them apart in code: a zombie's parent process most likely has an infinite loop such as while (1) and never calls wait; an orphan's parent process should finish early.
- About interprocess communication:
- we have independent processes and cooperating processes (which need IPC)
- Two methods: shared memory (controlled by user processes) & message passing (controlled by kernel)
- For shared memory, the major issue is synchronisation
- Producer – Consumer Problem: bounded buffer and unbounded buffer
- Bounded buffer: the producer may wait if the buffer is full (no space), and the consumer may wait if the buffer is empty (no data)
- Unbounded buffer: the producer never waits; only the consumer may need to wait
- For message passing: send & receive ; a communication link must be set up
- Direct (process to process) or indirect (mail box)
- Direct
- Processes must name each other explicitly.
- send(P, message) – send a message to process P
- receive(Q, message) – receive a message from process Q
- Indirect:
- Messages are directed and received from mailboxes (also referred to as ports)
- Each mailbox has a unique id and Processes can communicate only if they share a mailbox
- send(A, message) – send a message to mailbox A
- receive(A, message) – receive a message from mailbox A
- Link established only if processes share a common mailbox
- A link may be associated with many processes
- Each pair of processes may share several communication links
- Link may be unidirectional or bi-directional
- Synchronous (blocking) or asynchronous (non-blocking)
- Synchronisation (blocking)
- the receiver blocks until a message is available
- the sender blocks until the receiver has received the message
- Asynchronous (non-blocking)
- the sender keeps sending messages without checking whether the receiver has received them
- the receiver keeps retrieving messages, which may yield either a valid message or a null message
- If both send and receive are blocking, this case is called rendezvous
- Automatic or explicit buffering
- Zero capacity – no messages are queued on a link.
- Sender must wait for receiver (rendezvous)
- Bounded capacity – finite length of n messages
- Sender must wait if link full
- Unbounded capacity – infinite length
- Sender never waits
- About multicore programming
- Concurrency & Parallelism
- Concurrency is a property of a program where two or more tasks can be in progress simultaneously.
- Parallelism is a run-time property where two or more tasks are being executed simultaneously.
- Parallelism implies Concurrency. Concurrency does not imply Parallelism.
- More about Parallelism:
- Data parallelism: distributes subsets of data on each core, same operation on each.
- Task parallelism: distributes threads across cores, each thread performing a different operation.
Comprehensive Questions
- Be familiar with process creation!
- Be familiar with pthreads!
- Be familiar with the first four CPU scheduling algorithms!