9.8 KEYWORDS

Semaphore: It is a software concurrency control tool.
Race Condition: It is a flaw in a system of processes whereby the output of a process is unexpectedly and critically dependent on the sequence or timing of other processes.
Producer–Consumer Problem: In this problem, two processes, one called the producer and the other called the consumer, run concurrently and share a common buffer.
Mutex: It is a software tool used in concurrency control; the name is short for mutual exclusion. It is a program element that allows multiple program processes to share the same resource, but not simultaneously.
Mutual Exclusion: It means that only one of the processes is allowed to execute its critical section at a time.
Critical Section: Code that accesses a critical resource.

9.9 LEARNING ACTIVITY

1. Illustrate how a binary semaphore can be used to implement mutual exclusion among n processes.
___________________________________________________________________________
____________________________________________________________________

9.10 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. Compare and contrast various software tools of process synchronization.
2. Write code for a typical critical section problem.
3. Describe situations when one of the philosophers starves in the classical dining philosophers problem.
4. What is Peterson's Algorithm?
5. What is Lamport's Bakery Algorithm?
Long Questions
1. Citing one suitable example, explain why synchronization is necessary in multiple-process environments where there is at least one shared resource.
2. Explain what happens when a philosopher (in the classical dining philosophers problem) is so hungry that he keeps on eating always.
3. What are the advantages of using semaphores for process synchronization?
4. Identify the critical section in the following scenario:
Global int i = 0;
P1: Read i; increment i by 1; store the result into i.
P2: Read i; increment i by 2; store the result into i.
5. Write the code to synchronize three processes P1, P2, and P3 all trying to update a global variable Salary.

B. Multiple Choice Questions
1. Which process can be affected by other processes executing in the system?
a. cooperating process
b. child process
c. parent process
d. init process
2. When several processes access the same data concurrently and the outcome of the execution depends on the particular order in which the accesses take place, the situation is called
a. dynamic condition
b. race condition
c. essential condition
d. critical condition

3. If a process is executing in its critical section, then no other processes can be executing in their critical sections. This condition is called
a. mutual exclusion
b. critical exclusion
c. synchronous exclusion
d. asynchronous exclusion

4. Which one of the following is a synchronization tool?
a. thread
b. pipe
c. semaphore
d. socket

5. A semaphore is a shared integer variable
a. that cannot drop below zero
b. that cannot be more than zero
c. that cannot drop below one
d. that cannot be more than one
6. Mutual exclusion can be provided by
a. mutex locks
b. binary semaphores
c. both (a) and (b)
d. None of these

7. When a high-priority task is indirectly pre-empted by a medium-priority task, effectively inverting the relative priorities of the two tasks, the scenario is called
a. priority inversion
b. priority removal
c. priority exchange
d. priority modification

8. Process synchronization can be done at
a. hardware level
b. software level
c. both (a) and (b)
d. None of these

9. A monitor is a module that encapsulates
a. shared data structures
b. procedures that operate on shared data structures
c. synchronization between concurrent procedure invocations
d. All of these
10. To enable a process to wait within the monitor,
a. a condition variable must be declared as condition
b. condition variables must be used as Boolean objects
c. a semaphore must be used
d. All of these

Answers
1 a, 2 b, 3 a, 4 c, 5 a, 6 c, 7 a, 8 c, 9 d, 10 a.

9.11 REFERENCES

A. Silberschatz, P.B. Galvin, G. Gagne (2002). Operating System Concepts, 6th Ed., Addison Wesley Publishing Co., Boston.
H.M. Deitel (1990). An Introduction to Operating Systems, Addison Wesley Publishing Co., Boston.
D.M. Dhamdhere (2002). Operating Systems. Tata McGraw Hill, New Delhi.
A.S. Tanenbaum (2000). Operating Systems: Design and Implementation, Prentice Hall of India, New Delhi.
Nutt (2005). Operating Systems, 3rd Edition, Pearson Education, Delhi.
UNIT 10: DEADLOCKS

Structure
10.0 Learning Objectives
10.1 Introduction
10.2 Deadlock Characterization
10.2.1 Necessary Conditions
10.2.2 Resource Allocation Graphs
10.3 Prevention, Avoidance and Detection
10.3.1 Deadlock Prevention
10.3.2 Deadlock Avoidance
10.3.3 Deadlock Detection and Recovery
10.3.4 Ignore Deadlock
10.4 Summary
10.5 Keywords
10.6 Learning Activity
10.7 Unit End Questions
10.8 References

10.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
Define deadlocks
Explain characterization of deadlock
Discuss deadlock detection
Describe prevention and avoidance
10.1 INTRODUCTION

A deadlock is a situation wherein two or more competing actions are each waiting for the other to finish, and thus neither ever does. It is often illustrated by a paradox like 'the chicken or the egg'. The situation may be likened to two people who are drawing diagrams with only one pencil and one ruler between them. If one person takes the pencil and the other takes the ruler, a deadlock occurs when the person with the pencil needs the ruler and the person with the ruler needs the pencil, and neither will give up what he is holding. Neither request can be satisfied, so a deadlock occurs.

We will usually reason in terms of resources R1, R2, ..., Rm and processes P1, P2, ..., Pn. A process Pi that is waiting for some currently unavailable resource is said to be blocked. Deadlock occurs when we have a set of processes [not necessarily all the processes in the system], each holding some resources, each requesting some resources, and none of them able to obtain what it needs, i.e., to make progress. Those processes are deadlocked because all of them are waiting. None of them will ever cause any of the events that could wake up any of the other members of the set, and all the processes continue to wait forever.

For this model, we assume that processes have only a single thread and that there are no interrupts possible to wake up a blocked process. The no-interrupts condition is needed to prevent an otherwise deadlocked process from being awakened by, say, an alarm, and then causing events that release other processes in the set. In most cases, the event that each process is waiting for is the release of some resource currently possessed by another member of the set. In other words, each member of the set of deadlocked processes is waiting for a resource that is owned by another deadlocked process. None of the processes can run, none of them can release any resources, and none of them can be awakened. The number of processes and the number and kind of resources possessed and requested are unimportant. This result holds for any kind of resource, including both hardware and software.

Recall that one definition of an operating system is a resource allocator. There are many resources that can be allocated to only one process at a time, and we have seen several operating system features that allow this, such as mutexes, semaphores or file locks.
Figure 10.1: Processes in a Deadlock Situation

Sometimes a process has to reserve more than one resource. For example, a process which copies files from one tape to another generally requires two tape drives. A process which deals with databases may need to lock multiple records in a database. In general, resources allocated to a process are not pre-emptible; this means that once a resource has been allocated to a process, there is no simple mechanism by which the system can take the resource back from the process unless the process voluntarily gives it up or the system administrator kills the process. This can lead to a situation called deadlock: a situation in which two or more processes are unable to proceed because they are waiting for each other to do something.

A common example is a program communicating with a server, which may find itself waiting for output from the server before sending anything more to it, while the server is similarly waiting for more input from the controlling program before outputting anything. It is reported that this particular flavour of deadlock is sometimes called a starvation deadlock, though the term starvation is more properly used for situations where a program can never run simply because it never gets high enough priority. Another common flavour is constipation, in which each process is trying to send stuff to the other but all buffers are full because nobody is reading anything.

A set of processes or threads is deadlocked when each process or thread is waiting for a resource to be freed which is controlled by another process. Here is an example of a situation where deadlock can occur.

#include <pthread.h>

pthread_mutex_t M1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t M2 = PTHREAD_MUTEX_INITIALIZER;

/* Thread 1 */
while (1) {
    NonCriticalSection();
    pthread_mutex_lock(&M1);      /* takes M1 first, then M2 */
    pthread_mutex_lock(&M2);
    CriticalSection();
    pthread_mutex_unlock(&M2);
    pthread_mutex_unlock(&M1);
}

/* Thread 2 */
while (1) {
    NonCriticalSection();
    pthread_mutex_lock(&M2);      /* takes the locks in the opposite order */
    pthread_mutex_lock(&M1);
    CriticalSection();
    pthread_mutex_unlock(&M1);
    pthread_mutex_unlock(&M2);
}

Suppose thread 1 is running and locks M1, but before it can lock M2, it is interrupted. Thread 2 starts running; it locks M2, and when it tries to obtain and lock M1, it is blocked because M1 is already locked (by thread 1). Eventually thread 1 starts running again, and it tries to obtain and lock M2, but it is blocked because M2 is already locked by thread 2. Both threads are blocked; each is waiting for an event which will never occur. Traffic gridlock is an everyday example of a deadlock situation. In order for deadlock to occur, four conditions must be true:
1. Mutual exclusion: Each resource is either currently allocated to exactly one process or it is available. (Two processes cannot simultaneously control the same resource or be in their critical section.)
2. Hold and wait: Processes currently holding resources can request new resources.
3. No pre-emption: Once a process holds a resource, it cannot be taken away by another process or the kernel.
4. Circular wait: Each process is waiting to obtain a resource which is held by another process.

The dining philosophers problem discussed in an earlier section is a classic example of deadlock. Each philosopher picks up his or her left fork and waits for the right fork to become available, but it never does.

Figure 10.2: An Everyday Example of a Deadlock Situation

A process which has acquired a resource is shown with an arrow (edge) from the resource to the process. A process which has requested a resource which has not yet been assigned to it is modelled with an arrow from the process to the resource. If these edges create a cycle, there is deadlock. The deadlock situation in the above code can be modelled like this:
Figure 10.3: Deadlock Situation Model

This graph shows an extremely simple deadlock situation, but it is also possible for a more complex situation to create deadlock. Here is an example of deadlock with four processes and four resources.

Figure 10.4: Example of Deadlock with Four Processes and Four Resources

There are a number of ways that deadlock can occur in an operating system. We have seen some examples; here are two more:
Two processes need to lock two files; the first process locks one file, the second process locks the other, and each waits for the other to free up the locked file.
Two processes want to write a file to a print spool area at the same time and both start writing. However, the print spool area is of fixed size, and it fills up before either process finishes writing its file, so both wait for more space to become available.
10.2 DEADLOCK CHARACTERIZATION

We will consider the following aspects under deadlock characterization:

10.2.1 Necessary Conditions

A deadlock situation can arise if the following four conditions hold simultaneously in a system:
1. Resources are used in mutual exclusion.
2. Resources are acquired piecemeal (i.e., not all the resources that are needed to complete an activity are obtained at the same time in a single indivisible action).
3. Resources are not pre-empted (i.e., a process does not take away resources being held by another process).
4. Resources are not spontaneously given up by a process until it has satisfied all its outstanding requests for resources (i.e., a process that cannot obtain some needed resource does not voluntarily give up the resources that it is currently holding).

10.2.2 Resource Allocation Graphs

Resource Allocation Graphs (RAGs) are directed labelled graphs used to represent, from the point of view of deadlocks, the current state of a system.

Figure 10.5: Resource Allocation Graph
State transitions can be represented as transitions between the corresponding resource allocation graphs. Here are the rules for state transitions:
1. Request: If process Pi has no outstanding request, it can request simultaneously any number (up to multiplicity) of resources R1, R2, ..., Rm. The request is represented by adding the appropriate request edges to the RAG of the current state.
2. Acquisition: If process Pi has outstanding requests and they can all be simultaneously satisfied, then the request edges of these requests are replaced by assignment edges in the RAG of the current state.
3. Release: If process Pi has no outstanding request, then it can release any of the resources it is holding, and remove the corresponding assignment edges from the RAG of the current state.

Here are some important propositions about deadlocks and resource allocation graphs:
1. If the RAG of a state of a system is fully reducible (i.e., it can be reduced to a graph without any edges using ACQUISITION and RELEASE operations), then that state is not a deadlock state.
2. If a state is not a deadlock state, then its RAG is fully reducible. [This holds only if we are dealing with reusable resources; it is false if we have consumable resources.]
3. A cycle in the RAG of a state is a necessary condition for that being a deadlock state.
4. A cycle in the RAG of a state is a sufficient condition for that being a deadlock state only in the case of reusable resources with multiplicity one.

Here is an example of reduction of a RAG:
Figure 10.6: Reduction of a RAG

And here is a deadlock-free system with a loop:

Figure 10.7: RAG with Loop but no Deadlock
10.3 PREVENTION, AVOIDANCE AND DETECTION

There are several ways to address the problem of deadlock in an operating system:
1. Prevent
2. Avoid
3. Detect and recover
4. Ignore

10.3.1 Deadlock Prevention

Deadlocks can be prevented by ensuring that at least one of the following four conditions does not hold:
1. Mutual exclusion: Removing the mutual exclusion condition means that no process may have exclusive access to a resource. This proves impossible for resources that cannot be spooled, and even with spooled resources deadlock could still occur. Algorithms that avoid mutual exclusion are called non-blocking synchronization algorithms.
2. Hold and wait: The "hold and wait" condition may be removed by requiring processes to request all the resources they will need before starting up (or before embarking upon a particular set of operations); this advance knowledge is frequently difficult to satisfy and, in any case, is an inefficient use of resources. Another way is to require processes to release all their resources before requesting all the resources they will need. This too is often impractical. (Such algorithms, such as serializing tokens, are known as the all-or-none algorithms.)
3. No pre-emption: A "no pre-emption" (lockout) condition may also be difficult or impossible to avoid, as a process has to be able to hold a resource for a certain amount of time, or the processing outcome may be inconsistent or thrashing may occur. However, inability to enforce pre-emption may interfere with a priority algorithm. (Note: Pre-emption of a "locked out" resource generally implies a rollback, and is to be avoided, since it is very costly in overhead.) Algorithms that allow pre-emption include lock-free and wait-free algorithms and optimistic concurrency control.
4. Circular wait: Algorithms that avoid circular waits include "disable interrupts during critical sections", "use a hierarchy to determine a partial ordering of resources" (where no obvious hierarchy exists, even the memory address of resources has been used to determine ordering), and Dijkstra's solution.

Deadlock prevention means using resources in such a way that we cannot get into deadlocks. In real life we may decide that left turns are too dangerous, so we only turn right. It takes longer to get there but it works. In terms of deadlocks, we may constrain our use of resources so that we do not have to worry about deadlocks. Here we explore this idea with two examples.
Linear Ordering of Resources

Assume that all resources are totally ordered from 1 to r. We may impose the following constraint: a process may request a resource Rj only if j is greater than the index of every resource it currently holds; in other words, each process must acquire its resources in increasing order. It is easy to see that with this rule we will not get into deadlocks (proof by contradiction: in any circular wait, some process in the cycle would have to be requesting a resource with a lower index than one it already holds, violating the rule).

Here is an example of how we apply this rule. We are given a process that uses resources ordered as A, B, C, D, E in the following manner:

Figure 10.8: Linear Ordering of Resources

Then the process can do the following:

acquire(A);
acquire(B);
acquire(C);
/* use C; use A and C; use A, B, C */
release(A);
release(B);
acquire(E);
/* use C and E */
release(C);
release(E);
acquire(D);
/* use D */
release(D);

A strategy such as this can be used when we have a few resources. It is easy to apply and does not reduce the degree of concurrency too much.
Hierarchical Ordering of Resources

Another strategy we may use, in the case that resources are hierarchically structured, is to lock them in hierarchical order. We assume that the resources are organized in a tree (or a forest) representing containment. We can lock any node or group of nodes in the tree. The resources we are interested in are nodes in the tree, usually leaves. Then the following rule will guarantee avoidance of deadlocks: a process must acquire its first lock on the root of the tree, and thereafter it may lock a node only while it is holding the lock on that node's parent (locks may be released at any time). Here is an example of the use of this rule, locking a single resource at a time.

Figure 10.9: Hierarchical Ordering of Resources

Then if a process wants to use the resources e, f, i, k it uses in sequence the commands:

lock(a); lock(b); lock(h); unlock(a);
lock(d); unlock(b); lock(i); lock(j);
unlock(h); lock(k); unlock(j); lock(e);
lock(f); unlock(d);

Of course, by forcing all locking sequences to start at the root of the resource hierarchy we create a bottleneck, since now the root becomes a scarce resource. As always, there are trade-offs to be made. An obvious improvement on this prevention policy is to start not by locking the root, but by instead locking the lowest node subsuming all the nodes used by the current activity. For example, if we had an activity that accessed only the nodes p, r, and s, then we could start by locking n, not a.
10.3.2 Deadlock Avoidance

Deadlock avoidance, assuming that we are in a safe state (i.e., a state from which there is a sequence of allocations and releases of resources that allows all processes to terminate) and that certain resources are requested, simulates the allocation of those resources and determines whether the resultant state is safe. If it is safe, the request is satisfied; otherwise, it is delayed until it becomes safe.

A state is safe if the system can allocate resources to each process (up to its maximum) in some order and still avoid a deadlock. More formally, a system is in a safe state only if there exists a safe sequence. A sequence of processes <P1, P2, ..., Pn> is a safe sequence for the current allocation state if, for each Pi, the resource requests that Pi can still make can be satisfied by the currently available resources plus the resources held by all Pj, with j < i.
1. In this situation, if the resources that Pi needs are not immediately available, then Pi can wait until all Pj have finished.
2. When they have finished, Pi can obtain all of its needed resources, complete its designated task, return its allocated resources, and terminate.
3. When Pi terminates, Pi+1 can obtain its needed resources, and so on.
4. If no such sequence exists, then the system state is said to be unsafe.

A safe state is not a deadlocked state. Conversely, a deadlocked state is an unsafe state. Not all unsafe states are deadlocks, however (see the figure below).

Figure 10.10: Safe, Unsafe, and Deadlock State Spaces
An unsafe state may lead to a deadlock. As long as the state is safe, the OS can avoid unsafe (and deadlocked) states. In an unsafe state, the OS cannot prevent processes from requesting resources in such a way that a deadlock occurs: the behaviour of the processes controls unsafe states. The difference between a safe state and an unsafe state is that from a safe state the system can guarantee that all processes will finish; from an unsafe state, no such guarantee can be given.

For example, let us consider a system with 12 magnetic tape drives and three processes: P0, P1, and P2.
1. Process P0 requires 10 tape drives in all, and holds 5 tape drives.
2. Process P1 may need as many as 4 tape drives, and holds 2 tape drives.
3. Process P2 may need up to 9 tape drives, and holds 2 tape drives.
4. Thus, there are 3 free tape drives.

Table 10.1: Maximum needs and current holdings of P0, P1, P2

Pi    Maximum Needs    Currently Holds
P0    10               5
P1    4                2
P2    9                2

At time t0, the system is in a safe state. The sequence <P1, P0, P2> satisfies the safety condition. A system can go from a safe state to an unsafe state. Suppose that, at time t1, process P2 requests and is allocated one more tape drive. The system is no longer in a safe state:
1. At this point, only process P1 can be allocated all its tape drives.
2. When it returns them, the system will have only 4 available tape drives.
3. Since process P0 is allocated 5 tape drives but has a maximum of 10, it may request 5 more tape drives. Since they are unavailable, process P0 must wait.
4. Similarly, process P2 may request an additional 6 tape drives and have to wait, resulting in a deadlock.

Granting the request from process P2 for one more tape drive led to the deadlock.
The Banker's Algorithm is used to determine whether a request can be satisfied. It requires knowledge of which processes are competing and what their resource needs are. Deadlock avoidance is essentially not used in distributed systems. The Banker's algorithm is run by the operating system whenever a process requests resources. The algorithm prevents deadlock by denying or postponing the request if it determines that accepting the request could put the system in an unsafe state (one where deadlock could occur).

For the Banker's algorithm to work, it needs to know three things:
1. How much of each resource each process could possibly request
2. How much of each resource each process is currently holding
3. How much of each resource the system has available

Some of the resources that are tracked in real systems are memory, semaphores and interface access.

Data Structures:
1. Available:
   - Vector of length m
   - Number of instances of each resource type available in the system
   - If Available[j] = k, there are k instances of resource type Rj available
2. Max:
   - n x m matrix
   - Maximum number of instances of each resource each process can request
   - If Max[i, j] = k, then process Pi may request at most k instances of resource type Rj
3. Allocation:
   - n x m matrix
   - Number of instances of each resource type allocated to each process
   - If Allocation[i, j] = k, then Pi is currently allocated k instances of Rj
4. Need:
   - n x m matrix
   - Number of instances of each resource type each process may still need
   - If Need[i, j] = k, then Pi may need k more instances of Rj to complete its task
   - Need[i, j] = Max[i, j] - Allocation[i, j]

Safety Algorithm:
1. Let Work and Finish be vectors of length m and n, respectively. Initialize:
   Work = Available
   Finish[i] = false for i = 0, 1, 2, ..., n-1
2. Find an i such that both:
   a. Finish[i] = false
   b. Need_i <= Work
   If no such i exists, go to step 4.
3. Work = Work + Allocation_i
   Finish[i] = true
   Go to step 2.
4. If Finish[i] == true for all i, then the system is in a safe state.

As an example, let there be 5 processes P0 through P4 and 3 resource types R0 (10 instances), R1 (5 instances), and R2 (7 instances).

Table 10.2: Initial State

      Allocation    Max         Available    Need
      R0 R1 R2      R0 R1 R2    R0 R1 R2     R0 R1 R2
P0    0  1  0       7  5  3     3  3  2      7  4  3
P1    2  0  0       3  2  2                  1  2  2
P2    3  0  2       9  0  2                  6  0  0
P3    2  1  1       2  2  2                  0  1  1
P4    0  0  2       4  3  3                  4  3  1

The system is in a safe state, since the sequence <P1, P3, P4, P2, P0> satisfies the safety criteria. Suppose that P1 requests (1, 0, 2). Is it safe to honor this request?

Table 10.3: State at the New Request

      Allocation    Max         Available
      R0 R1 R2      R0 R1 R2    R0 R1 R2
P0    0  1  0       7  5  3     3  3  2
P1    2  0  0       3  2  2
P2    3  0  2       9  0  2
P3    2  1  1       2  2  2
P4    0  0  2       4  3  3

Now let us find the new state that the system would be in if this request were granted. If the new state is safe, then it is safe to grant this request; otherwise, it is not safe to honor it (the safety algorithm decides). The Banker's Algorithm proceeds as follows:
1. Find a row in the Need matrix which is less than or equal to the Available vector. If such a row exists, then the process represented by that row may complete with those additional resources. If no such row exists, eventual deadlock is possible. You want to double-check that granting these resources to the process for the chosen row will result in a safe state.
2. Looking ahead, pretend that that process has acquired all its needed resources, executed, terminated, and returned its resources to the Available vector. Now the value of the Available vector should be greater than or equal to the value it was previously.
3. Repeat steps 1 and 2 until all the processes have successfully reached pretended termination (this implies that the initial state was safe), or deadlock is reached (this implies the initial state was unsafe).

A code sketch of this check follows.
This is the new state that the system would be in if P1's request is honored:

Table 10.4: State after P1's Request Is Honored

      Allocation    Max         Available
      R0 R1 R2      R0 R1 R2    R0 R1 R2
P0    0  1  0       7  5  3     2  3  0
P1    3  0  2       3  2  2
P2    3  0  2       9  0  2
P3    2  1  1       2  2  2
P4    0  0  2       4  3  3

The sequence <P1, P3, P4, P0, P2> satisfies the safety requirement, so it is safe to honor P1's request.

10.3.3 Deadlock Detection and Recovery

Often neither deadlock avoidance nor deadlock prevention may be used. Instead, deadlock detection and recovery are used, by employing an algorithm that tracks resource allocation and process states, and rolls back and restarts one or more of the processes in order to remove the deadlock. Detecting a deadlock that has already occurred is easily possible, since the resources that each process has locked and/or currently requested are known to the resource scheduler or OS. Detecting the possibility of a deadlock before it occurs is much more difficult and is, in fact, generally undecidable, because the halting problem can be rephrased as a deadlock scenario. However, in specific environments, using specific means of locking resources, deadlock detection may be decidable. In the general case, it is not possible to distinguish between algorithms that are merely waiting for a very unlikely set of circumstances to occur and algorithms that will never finish because of deadlock.

Once a deadlock has been detected, one of the four conditions must be invalidated to remove the deadlock. Generally, the system acts to remove the circular wait, because making a system suddenly pre-emptive with respect to resources, or making a resource suddenly sharable, is usually impractical. Because resources are generally not dynamic, the easiest way to break such a cycle is to terminate a process. Usually, the process is chosen at random, but if more is known about the processes, that information can be used. For example, the largest or smallest process can be chosen, or the
one that has been waiting the longest. Such discrimination is based on assumptions about the system workload.

Some systems facilitate deadlock recovery by implementing checkpointing and rollback. Checkpointing is saving enough state of a process so that the process can be restarted at the point in the computation where the checkpoint was taken. Auto-saving file edits is a form of checkpointing. Checkpointing costs depend on the underlying algorithm. Very simple algorithms (like linear primality testing) can be checkpointed with a few words of data. More complicated processes may have to save all the process state and memory.

Checkpoints are taken less frequently than deadlock is checked for. If a deadlock is detected, one or more processes are restarted from their last checkpoint. The process of restarting a process from a checkpoint is called rollback. The hope is that the resource requests will not interleave again to produce deadlock. Deadlock recovery is generally used when deadlocks are rare, and the cost of recovery (process termination or rollback) is low. Process checkpointing can also be used to improve reliability (long-running computations), assist in process migration (Sprite, Mach), or reduce start-up costs.

If a system does eventually slip into a deadlock, measures must be taken to break the deadlock so that execution may proceed. There are two solutions for recovery from deadlock:
1. Process termination
2. Resource pre-emption

Process Termination

When deadlock does occur, it may be necessary to bring the system down, or at least manually kill a number of processes, but even that is not an extreme solution in most situations.

Abort all deadlocked processes: This method clearly will break the deadlock cycle, but at great expense, since these processes may have computed for a long time, and the results of these partial computations must be discarded and probably recomputed later.
Abort one process at a time until the deadlock cycle is eliminated: This method incurs considerable overhead, since after each process is aborted, a deadlock-detection algorithm must be invoked to determine whether any processes are still deadlocked.

Resource Pre-emption

Resource pre-emption is a technique used to break a deadlock in a computer system. In other words, it is a technique that an operating system uses to break up a situation in which every resource is used up by a variety of processes, each of which is waiting on more resources. To eliminate deadlocks using resource pre-emption, we successively take away some resources from processes and give these resources to other processes until the deadlock is broken; this usually occurs when one process is given enough resources to be able to complete, which then frees up all of the resources it was using.

Resources, in terms of resource pre-emption, refer to anything that might be of importance in a computer: memory, a particular piece of data, CPU time, and use of devices such as your hard drive or your printer, and so forth. Managing these resources is a fundamental part of what your operating system does to keep your computer running. If resource pre-emption is used to deal with deadlocks, three issues must be considered:

Selection of a victim

Which processes can have resources taken away from them most easily? The goal here is to minimize cost; if a resource can be taken away from one process much more easily than an identical resource can be taken away from another process, we select as victim the process that we will have the easiest time removing the resource from.

Rollback

If we pre-empt a resource from a process, what should be done with that process? If a resource is taken away, the process obviously cannot continue in its current state; it would be much as if I chopped off one of your legs and expected you to continue to walk normally. One thing we can do is roll the process back to a safe state: one where the resource hasn't yet been taken up by the process.
Unfortunately, most operating systems don't have the capability to store states of processes as they progress normally, and it is often impossible to determine a safe state given just the current state. The solution then is a total rollback: aborting the process and starting from scratch. A total rollback is usually a last-ditch resort, only usable if a deadlock can't be resolved without one.

Starvation

This happens when the same process is chosen again and again as a victim, ensuring that that process can never finish. To prevent starvation, an operating system usually limits the number of times a specific process can be chosen as a victim.

Most modern operating systems attempt resource pre-emption as a way to solve deadlocks. The alternative method of killing off processes often means that a much greater amount of computational work is lost; it is not a preferable method of dealing with deadlocks, simply because of the lost time. Implementing resource pre-emption is often one of the trickiest parts of writing an operating system. Many operating systems use a very simple method of selecting processes for pre-emption: the one with the lowest priority receives a total rollback, then on up the chain until the highest-priority process can run. Other, more complex schemes exist, but most operating systems in wide use employ some form of the method just described to break up deadlocks.

Resource pre-emption is an integral part of the operating system that runs your computer. Every time you start an application, resource pre-emption has to decide what program gets to use the memory and CPU of your computer. It is a vital task that your operating system deals with for you.

10.3.4 Ignore Deadlock

In the Ostrich Algorithm it is hoped that deadlock doesn't happen. In general, this is a reasonable strategy. Deadlock is unlikely to occur very often; a system can run for years without deadlock occurring. If the operating system has a deadlock prevention or detection system in place, this will have a negative impact on performance (slow the system down)
because whenever a process or thread requests a resource, the system will have to check whether granting this request could cause a potential deadlock situation. If deadlock does occur, it may be necessary to bring the system down, or at least manually kill a number of processes, but even that is not an extreme solution in most situations.

10.4 SUMMARY

A deadlock is a situation where a group of processes is permanently blocked as a result of each process having acquired a subset of the resources needed for its completion and waiting for release of the remaining resources held by others in the same group, thus making it impossible for any of the processes to proceed. In a multiprogramming environment, more than one process exists in the system, competing for the resources. Based on the criteria and algorithms employed by the system, one of them is selected at a time, is granted the requisite resources and is executed, while other candidate processes wait for their turn. Deadlocks can occur in concurrent environments as a result of uncontrolled granting of system resources to requesting processes.

A deadlock situation can arise if four conditions (the Coffman conditions) hold simultaneously in a system: mutual exclusion, hold and wait, no pre-emption, and circular wait. Resources of a computer system whose allocation is subject to deadlocks can be broadly categorized into two classes: reusable and consumable resources. Deadlocks can be expressed more clearly using a directed graph, called a system resource-allocation graph or precedence graph. This graph has its vertices so arranged that they depict the before-after relationship between the processes.

Deadlock prevention takes measures so that the system does not go into the deadlock condition in the first place. Deadlock avoidance attempts to assure that a request will not lead to deadlock. Detection is a mechanism to identify a deadlock situation. If a system does eventually slip into a deadlock, recovery attempts to break the deadlock so that execution may proceed. The Banker's algorithm can be employed to determine whether a particular process state is safe from deadlock or not.
10.5 KEYWORDS

Deadlock Avoidance: It attempts to assure that a request will not lead to deadlock.
Deadlock Prevention: It is a measure taken so that the system does not go into the deadlock condition in the first place.
Deadlock: A situation wherein execution of processes halts because of cyclic waiting for the required resources.
Detection: It is a mechanism to identify a deadlock situation.
Contiguous Allocation Method: It is a method which requires each file to occupy a set of contiguous addresses on the disk.
Critical Resource: A resource shared with constraints on its use (e.g., memory, files, printers, etc.).

10.6 LEARNING ACTIVITY

1. Is it possible to have a deadlock involving only one single-threaded process? Explain your answer.
___________________________________________________________________________
____________________________________________________________________

10.7 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. Explain the different deadlock strategies.
2. What are Resource Allocation Graphs?
3. How can we prevent a deadlock from happening?
4. What are the several ways to address the problem of deadlock in an operating system?
5. How can a deadlock be prevented?

Long Questions
1. Explain process termination.
2. Can a process be allowed to request multiple resources simultaneously in a system where deadlock is avoided? Discuss why or why not.
3. How are deadlock situations avoided and prevented so that no systems are locked by deadlock?
4. Determine whether there is a deadlock in the above situation.
5. Explain how the Banker's algorithm is run by the operating system whenever a process requests resources.

B. Multiple Choice Questions
1. What is a reusable resource?
a. that can be used by one process at a time and is not depleted by that use
b. that can be used by more than one process at a time
c. that can be shared between various threads
d. None of these

2. Which of the following conditions is required for deadlock to be possible?
a. mutual exclusion
b. a process may hold allocated resources while awaiting assignment of other resources
c. no resource can be forcibly removed from a process holding it
d. All of these

3. A system is in a safe state if
a. the system can allocate resources to each process in some order and still avoid a deadlock
b. there exists a safe sequence
c. both (a) and (b)
d. None of these

4. The circular wait condition can be prevented by
a. defining a linear ordering of resource types
b. using threads
c. using pipes
d. All of these

5. Which one of the following is a deadlock avoidance algorithm?
a. banker's algorithm
b. round-robin algorithm
c. elevator algorithm
d. karn's algorithm

6. What is the drawback of the banker's algorithm?
a. in advance, processes rarely know how much resource they will need
b. the number of processes changes as time progresses
c. a resource once available can disappear
d. All of these

7. For an effective operating system, when should we check for deadlock?
a. every time a resource request is made
b. at fixed time intervals
c. both (a) and (b)
d. None of these

8. A problem encountered in multitasking, when a process is perpetually denied necessary resources, is called
a. deadlock
b. starvation
c. inversion
d. aging

9. Which one of the following is a visual (mathematical) way to determine the occurrence of deadlock?
a. resource allocation graph
b. starvation graph
c. inversion graph
d. None of these

10. To avoid deadlock,
a. there must be a fixed number of resources to allocate
b. resource allocation must be done only once
c. all deadlocked processes must be aborted
d. inversion technique can be used

Answers
1 a, 2 d, 3 c, 4 a, 5 a, 6 d, 7 c, 8 b, 9 a, 10 a.
10.8 REFERENCES

A. Silberschatz, P.B. Galvin, G. Gagne (2002). Operating System Concepts, 6th Ed., Addison Wesley Publishing Co., Boston.
H.M. Deitel (1990). An Introduction to Operating Systems, Addison Wesley Publishing Co., Boston.
D.M. Dhamdhere (2002). Operating Systems. Tata McGraw Hill, New Delhi.
A.S. Tanenbaum (2000). Operating Systems: Design and Implementation, Prentice Hall of India, New Delhi.
Nutt (2005). Operating Systems, 3rd Edition, Pearson Education, Delhi.
UNIT 11: MEMORY MANAGEMENT STRATEGIES

Structure
11.0 Learning Objectives
11.1 Introduction
11.2 Multiprogramming
11.3 Multiprogramming with Fixed Partition
11.4 Multiprogramming with Variable Partition
11.5 Virtual Memory
11.6 Summary
11.7 Keywords
11.8 Learning Activity
11.9 Unit End Questions
11.10 References

11.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
Describe multiprogramming
Explain multiprogramming with fixed partitions
Explain multiprogramming with variable partitions
Describe virtual memory

11.1 INTRODUCTION

In a multiprogramming system, one or more programs can be loaded into main memory for execution. Only one program or process at a time can have the CPU for executing its instructions, while the other programs wait for their turn. The main goal of a multiprogramming system is to overcome the underutilization of the CPU and primary memory.
The main objective of multiprogramming is to manage the entire resources of the system. The primary components of a multiprogramming system are the command processor, the file system, the I/O control system, and the transient area. A multiprogramming operating system is therefore designed on the principle that the transient area is sub-divided into parts to store individual programs, with resource-management routines attached to the basic functions of the operating system.

In a multiprogramming system, multiple users can perform their tasks concurrently, with their programs stored in main memory. The CPU can deliver time to several programs rather than sitting idle while one of them is engaged in I/O operations. When one program is waiting for an I/O transfer, another program is always ready to use the processor, so CPU time can be shared among various processes. Executing all jobs in the same time frame is not what defines multiprogramming; rather, multiple jobs are present for the processor, and part of one process is executed, then a segment of another, and so on. A real-life example: a user can use MS Excel, download apps, transfer data from one point to another, run the Firefox or Google Chrome browser, and more, all at the same time.

11.2 MULTIPROGRAMMING

A multiprogramming operating system has the ability to execute multiple programs using only one processor. In a multiprogramming operating system, if a single program has to wait for an I/O transfer, the other programs are always ready for CPU utilization, so multiple jobs can share CPU time. The jobs are not, however, defined to execute at the same instant. A program in the process of execution is known as a "process", "job" or "task". The concurrent execution of programs helps to improve system resource utilization as well as system throughput compared with serial and batch processing systems.

Advantages of a Multiprogramming Operating System
It increases CPU utilization, and the CPU never remains idle.
Short jobs are completed faster than long jobs.
Multiple users can use a multiprogramming system at once.
It can help to execute multiple tasks in a single application at the same time.
It can help to improve turnaround time for short jobs.
It reduces the total read time needed to execute a job.
A multiprogramming system helps to optimize the total job throughput of the computer.
A multiprogramming system can respond faster, as its tasks run in parallel.

Disadvantages of a Multiprogramming System
Memory management is required, because all jobs are stored in main memory.
If the system carries a massive load of jobs, then long jobs will face long waiting times.
It is a harder task to manage all the processes and jobs.
It is highly complex and sophisticated.

11.3 MULTIPROGRAMMING WITH FIXED PARTITION

In operating systems, memory management is the function responsible for allocating and managing the computer's main memory. The memory management function keeps track of the status of each memory location, either allocated or free, to ensure effective and efficient use of primary memory.

There are two memory management techniques: contiguous and non-contiguous. In the contiguous technique, an executing process must be loaded entirely in main memory. The contiguous technique can be divided into:
1. Fixed (or static) partitioning
2. Variable (or dynamic) partitioning

Fixed Partitioning: This is the oldest and simplest technique used to put more than one process in main memory. In this partitioning, the number of (non-overlapping) partitions in RAM is fixed, but the size of each partition may or may not be the same. As this is contiguous allocation, no spanning is allowed. Here partitions are made before execution or during system configuration.
Figure 11.1: Fixed memory partitions with separate queues for each partition, and fixed memory partitions with a single input queue

Figure 11.2: Fixed partition scheme

As illustrated in the figure above, the first process consumes only 1MB of the 4MB partition in main memory. Hence, the internal fragmentation in the first block is (4-1) = 3MB. The sum of the internal fragmentation in every block is (4-1) + (8-7) + (8-7) + (16-14) = 3+1+1+2 = 7MB.
Suppose process P5 of size 7MB arrives. This process cannot be accommodated, in spite of the available free space, because of contiguous allocation (spanning is not allowed). Hence, the 7MB becomes part of external fragmentation.

Advantages of Fixed Partitioning
1. Easy to implement: The algorithms needed to implement fixed partitioning are easy to implement. The scheme simply requires putting a process into a certain partition, without worrying about the emergence of internal and external fragmentation.
2. Little OS overhead: Fixed partitioning requires little excess or indirect computational power.

Disadvantages of Fixed Partitioning
1. Internal fragmentation: Main memory use is inefficient. Any program, no matter how small, occupies an entire partition. This causes internal fragmentation.
2. External fragmentation: The total unused space (as stated above) of the various partitions cannot be used to load a process, even though the space is available, because it is not contiguous (spanning is not allowed).
3. Limit on process size: A process of size greater than the size of the largest partition in main memory cannot be accommodated. The partition size cannot be varied according to the size of the incoming process. Hence, a process of size 32MB in the example stated above is invalid.
4. Limit on the degree of multiprogramming: Partitions in main memory are made before execution or during system configuration, and main memory is divided into a fixed number of partitions. If there are k partitions in RAM and n processes, then the condition n <= k must be fulfilled; a number of processes greater than the number of partitions in RAM is invalid in fixed partitioning. (A small code sketch of this scheme follows.)
11.4 MULTIPROGRAMMING WITH VARIABLE PARTITION

Variable partitioning is also part of the contiguous allocation technique. It is used to alleviate the problems faced by fixed partitioning. In contrast with fixed partitioning, partitions are not made before execution or during system configuration.

Various features associated with variable partitioning:
1. Initially RAM is empty, and partitions are made during run-time according to each process's need, instead of during system configuration.
2. The size of a partition will be equal to the size of the incoming process.
3. The partition size varies according to the need of the process, so that internal fragmentation can be avoided and efficient utilization of RAM ensured.
4. The number of partitions in RAM is not fixed and depends on the number of incoming processes and the size of main memory.

Figure 11.3: Variable Partition

Advantages of Variable Partitioning
1. No internal fragmentation:
In variable partitioning, space in main memory is allocated strictly according to the need of the process, hence there is no internal fragmentation. There will be no unused space left in a partition.
2. No restriction on the degree of multiprogramming: A greater number of processes can be accommodated, due to the absence of internal fragmentation. Processes can be loaded as long as free memory remains.
3. No limitation on the size of the process: In fixed partitioning, a process with size greater than the size of the largest partition could not be loaded, and the process could not be divided, as division is invalid in the contiguous allocation technique. Here, in variable partitioning, the process size cannot be restricted, since the partition size is decided according to the process size.

Disadvantages of Variable Partitioning
1. Difficult implementation: Implementing variable partitioning is difficult compared with fixed partitioning, as it involves allocation of memory during run-time rather than during system configuration.
2. External fragmentation: There will be external fragmentation in spite of the absence of internal fragmentation. For example, suppose in the above example that process P1 (2MB) and process P3 (1MB) complete their execution. Two spaces are left, i.e., 2MB and 1MB. Suppose process P5 of size 3MB comes in; the empty space in memory cannot be allocated to it, as no spanning is allowed in contiguous allocation. The rule says that a process must be contiguously present in main memory to be executed. Hence it results in external fragmentation.
Figure 11.4: Dynamic Partition

Now P5 of size 3MB cannot be accommodated, in spite of the required total space being available, because in contiguous allocation no spanning is allowed.

11.5 VIRTUAL MEMORY

The various memory-management strategies share a common goal: to keep as many processes in memory as possible simultaneously, so as to allow multiprogramming. However, these strategies tend to require that an entire process be in memory before the process can execute. Obviously, because of the limited size of the physical memory available in a system, these schemes are not very optimal. The largest process (program) that can be executed at one time is limited by the size of physical memory. For instance, suppose 16MB of RAM (physical memory) is available in the system. Assuming that the operating system itself occupies 4MB, no process larger than 12MB can be executed on this system. The total number of processes that can be executed simultaneously is also limited to a total space of 12MB in this case.

This limitation may be overcome by another memory management technique: the virtual memory system. Virtual memory is a technique that permits the execution of processes that may not be completely resident in memory. The main advantage of this scheme is that programs can be larger than physical memory. Further, it presents main memory as an extremely large, uniform array of storage, separating logical memory as viewed by the user from physical memory. It, as a matter of fact, augments the main memory with secondary storage devices (like hard disks). This technique frees programmers from concern over memory storage limitations. Virtual memory is not easy to implement, however, and may substantially
decrease performance if it is used carelessly. In this lesson, we discuss virtual memory in the form of demand paging, and examine its complexity and cost.

The memory-management algorithms discussed so far are necessary because of one basic requirement: the instructions being executed must be in physical memory. The first approach to meeting this requirement is to place the entire logical address space in physical memory. Overlays and dynamic loading can help ease this restriction, but they generally require special precautions and extra effort by the programmer. This seems both necessary and reasonable, but it is also unfortunate, since it limits the size of a program to the size of physical memory. In fact, an examination of real programs shows us that, in many cases, the entire program is not needed for execution. For instance:

Programs often have code to handle unusual error conditions. Since these errors seldom, if ever, occur in practice, this code is almost never executed.

Arrays, lists, and tables are often allocated more memory than they actually need. An array may be declared 100 by 100 elements, even though it is seldom larger than 10 by 10 elements at execution time. An assembler symbol table may have room for 4,000 symbols, although the average program has fewer than 300 symbols.

Not all subroutines are always executed. Certain options and features of a program may be used rarely. For instance, the routines on U.S. government computers that balance the budget have not been used in years.

This holds even in those cases where the entire program may eventually be needed (as is the case with overlays, for example). The ability to execute a program that is only partially in memory would have many benefits:
1. A program would no longer be constrained by the amount of physical memory that is available. Users would be able to write programs for an extremely large virtual address space, simplifying the programming task. The users do not have to worry about the amount of memory available for their programs.
2. Because each user program could take less physical memory, more programs could be run at the same time, with a corresponding increase in CPU utilization and, consequently, throughput, but with no increase in response time or turnaround time.
3. Less I/O would be needed to load or swap each user program into memory, so each user program would run faster.

Thus, running a program that is not entirely in memory would benefit both the system and the user. Virtual memory is the separation of user logical memory from physical memory. This separation allows an extremely large virtual memory to be provided for programmers when only a smaller physical memory is available (Figure 11.5). Virtual memory makes the task of programming much easier, because the programmer no longer needs to worry about the amount of physical memory available, or about what code can be placed in overlays, but can concentrate instead on the problem to be programmed. On systems which support virtual memory, overlays have therefore virtually disappeared.

Virtual memory is commonly implemented by demand paging. It can also be implemented in a segmentation system. Several systems provide a paged segmentation scheme, where segments are broken into pages. Thus, the user view is segmentation, but the operating system can implement this view with demand paging. Demand segmentation can also be used to provide virtual memory. Burroughs' computer systems have used demand segmentation, and the IBM OS/2 operating system also uses demand segmentation. However, segment-replacement algorithms are more complex than page-replacement algorithms, because segments have variable sizes.

Figure 11.5: Virtual Memory Larger than Physical Memory (virtual pages 0 to n mapped through a memory map onto physical memory)
11.6 SUMMARY

Sharing the processor when two or more programs reside in memory at the same time is referred to as multiprogramming. Multiprogramming assumes a single shared processor. Multiprogramming increases CPU utilization by organizing jobs so that the CPU always has one to execute.

The earliest, and one of the simplest, techniques which can be used to load more than one process into main memory is fixed partitioning, a form of contiguous memory allocation. In this technique, the main memory is divided into partitions of equal or different sizes. The operating system always resides in the first partition, while the other partitions can be used to store user processes. The memory is assigned to the processes in a contiguous way.

Dynamic partitioning tries to overcome the problems caused by fixed partitioning. In this technique, the partition size is not declared initially; it is declared at the time of process loading. The first partition is reserved for the operating system. The remaining space is divided into parts, and the size of each partition is equal to the size of the process it holds. The partition size thus varies according to the need of the process, so that internal fragmentation can be avoided.

Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory. The addresses a program may use to reference memory are distinguished from the addresses the memory system uses to identify physical storage sites, and program-generated addresses are translated automatically to the corresponding machine addresses. The size of virtual storage is limited by the addressing scheme of the computer system and by the amount of secondary memory available, not by the actual number of main storage locations. Virtual memory is a technique that is implemented using both hardware and software. It maps memory addresses used by a program, called virtual addresses, into physical addresses in computer memory.
Multiprocessor System: It is a computer system that has an array of processors.

Multiprogramming: The rapid switching of the CPU back and forth among processes is called multiprogramming.

Batch Processing: A mode of data processing in which a number of tasks are lined up and the entire task set is submitted to the computer in one go.

11.8 LEARNING ACTIVITY

1. Consider a logical address space of 64 pages of 1,024 words each, mapped onto a physical memory of 32 frames.
a. How many bits are there in the logical address?
b. How many bits are there in the physical address?
___________________________________________________________________________
____________________________________________________________________

11.9 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. Define multiprogramming system.
2. What is virtual memory?
3. What are the three major activities of an operating system with regard to memory management?
4. What are the three major activities of an operating system with regard to secondary storage management?
5. List the two partitioning methods.

Long Questions
1. Briefly explain multiprogramming with a suitable diagram.
2. Explain partitioned allocation in memory management.
3. Describe the fixed partitioning scheme.
4. Discuss the variable partitioning scheme.
5. Compare and contrast the fixed and variable partitioning schemes.

B. Multiple Choice Questions

1. A technique that allows more than one program to be ready for execution and provides the ability to switch from one process to another is called
a. multitasking
b. multiprocessing
c. multithreading
d. multiprogramming

2. Multiprogramming is mainly accomplished by:
a. OS
b. Software
c. Hardware
d. Program

3. The technique that increases the system's productivity is
a. single-programming
b. multiprocessing
c. multitasking
d. multiprogramming

4. Which of the following memory units can the processor access most rapidly?
a. Main memory
b. Virtual memory
c. Cache memory
d. Read-only memory

5. When memory is divided into several fixed-sized partitions, each partition may contain ________.
a. exactly one process
b. at least one process
c. multiple processes at once
d. three processes at once

6. In fixed-size partitioning, the degree of multiprogramming is bounded by ___________
a. the number of partitions
b. the CPU utilization
c. the memory sizes
d. All of these

7. In almost all modern multiprogramming systems, the principal operation of memory management involves a sophisticated scheme known as ………………….
a. memory partitioning
b. virtual memory
c. real memory
d. memory organization
8. …………….. is based on the use of one or both of two basic techniques: segmentation and paging.
a. memory partitioning
b. virtual memory
c. real memory
d. memory organization

9. A process may be loaded into a partition of equal or greater size in ………………. of memory.
a. Fixed partitioning
b. Simple paging
c. Virtual memory paging
d. Simple segmentation

10. Because of virtual memory, memory can be shared among ____________
a. processes
b. threads
c. instructions
d. None of these

Answers
1 d, 2 a, 3 d, 4 c, 5 a, 6 a, 7 b, 8 b, 9 a, 10 a.

11.10 REFERENCES

A. Silberschatz, P.B. Galvin, G. Gagne (2002). Operating System Concepts, 6th Ed., Addison Wesley Publishing Co., Boston.
H.M. Deitel (1990). An Introduction to Operating Systems, Addison Wesley Publishing Co., Boston.

D.M. Dhamdhere (2002). Operating Systems, Tata McGraw Hill, New Delhi.

A.S. Tanenbaum (2000). Operating Systems: Design and Implementation, Prentice Hall of India, New Delhi.

G. Nutt (2005). Operating Systems, 3rd Ed., Pearson Education, Delhi.
UNIT - 12: PAGING

Structure
12.0 Learning Objectives
12.1 Introduction
12.2 Paging
12.3 Demand Paging
12.4 Design and Implementation Issues
12.5 Inverted Page Table
12.6 Summary
12.7 Keywords
12.8 Learning Activity
12.9 Unit End Questions
12.10 References

12.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
Describe paging
Outline demand paging
Discuss implementation issues in paging
Explain the inverted page table

12.1 INTRODUCTION

Paging is a storage mechanism that allows the OS to retrieve processes from secondary storage into main memory in the form of pages. In the paging method, main memory is divided into small fixed-size blocks of physical memory, called frames. The size of a frame is kept the same as that of a page so as to maximize utilization of main memory and avoid external fragmentation. Paging is used for faster access to data, and it is a logical concept.
Paging is protected by the insertion of an additional bit, called the valid/invalid bit, into each page-table entry. Memory protection in paging is achieved by associating protection bits with each page. These bits are stored with each page-table entry and specify the protection on the corresponding page.

The segmentation method works much like paging; the only difference between the two is that segments are of variable length, whereas in the paging method pages are always of a fixed size. A program segment includes the program's main function, data structures, utility functions, and so on. The OS maintains a segment map table for every process, together with a list of free memory blocks along with their sizes, segment numbers, and memory locations in main memory or virtual memory.

12.2 PAGING

A possible solution to the external fragmentation problem is to permit the logical address space of a process to be non-contiguous, thus allowing a process to be allocated physical memory wherever the latter is available. One way of implementing this solution is through the use of a paging scheme. Paging avoids the considerable problem of fitting varying-sized memory chunks onto the backing store, from which most of the previous memory-management schemes suffered. When some code fragments or data residing in main memory need to be swapped out, space must be found on the backing store. The fragmentation problems discussed in connection with main memory are also prevalent with the backing store, except that access is much slower, so compaction is impossible. Because of its advantages over the previous methods, paging in its various forms is commonly used in many operating systems.

Physical memory is broken into fixed-sized blocks called frames. Logical memory is also broken into blocks of the same size, called pages. When a process is to be executed, its pages are loaded into any available memory frames from the backing store. The backing store is divided into fixed-sized blocks that are of the same size as the memory frames.
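As a rough illustration of how a logical address is split into a page number and an offset and then mapped through the page table, here is a small C sketch. The page size, the table contents, and the translate helper are assumptions made for the example, not part of any particular system.

#include <stdio.h>

#define PAGE_SIZE 1024u   /* words per page and per frame (2^10) */
#define NUM_PAGES 64u     /* pages in the logical address space  */

/* page_table[p] holds the frame number currently backing page p */
static unsigned page_table[NUM_PAGES];

/* Split a logical address into (page, offset) and map it to a
   physical address: frame * PAGE_SIZE + offset. */
static unsigned translate(unsigned logical)
{
    unsigned page   = logical / PAGE_SIZE;   /* upper bits of the address */
    unsigned offset = logical % PAGE_SIZE;   /* lower bits, unchanged     */
    return page_table[page] * PAGE_SIZE + offset;
}

int main(void)
{
    page_table[3] = 7;   /* assume page 3 was loaded into frame 7 */

    unsigned logical = 3 * PAGE_SIZE + 25;   /* word 25 of page 3 */
    printf("logical %u -> physical %u\n", logical, translate(logical));
    return 0;
}

Because PAGE_SIZE here is 1,024 = 2^10, the offset occupies the low 10 bits of the address, so in hardware the division and modulo reduce to a shift and a mask; only the page number is looked up in the table, and the offset passes through unchanged.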