

MCA613_System Programming and Operating System

Published by Teamlease Edtech Ltd (Amita Chitroda), 2020-10-23 10:20:48



Deadlocks 143

8.3.1 Deadlock Detection Algorithm for Single-Instance Resources

• This algorithm uses the wait-for graph.
• The wait-for graph is derived from the resource allocation graph by removing the resource nodes and collapsing the appropriate edges.
• A deadlock exists in the system if and only if the wait-for graph contains a cycle.
• An algorithm to detect a cycle in a graph requires on the order of n^2 operations, where n is the number of vertices in the graph.

Fig. 8.3: Resource Allocation Graph and Wait-for Graph

8.3.2 Deadlock Detection Algorithm for Multiple-Instance Resources

This algorithm is suitable when there are multiple instances of each resource type. Consider the following set of data structures:

Variable    Meaning
n           Total number of processes in the system
m           The number of resource types
Available   A vector of length m indicating the number of available resources of each type. Available[j] = K means there are K instances of resource type Rj available.
Request     An n * m matrix indicating the current request of each process. Request[i, j] = K means process Pi is requesting K instances of resource type Rj.
Allocation  An n * m matrix defining the number of resources currently allocated to each process. Allocation[i, j] = K means Pi is currently allocated K instances of resource Rj.

CU IDOL SELF LEARNING MATERIAL (SLM)
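Returning to Section 8.3.1: the cycle test on a wait-for graph can be sketched with a simple depth-first search. This is an illustrative sketch, not the book's code; the two graphs below are hypothetical examples, not Fig. 8.3.

```python
def has_cycle(wait_for):
    """Detect a cycle in a wait-for graph given as {process: [processes it waits for]}."""
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on current DFS path / done
    color = {p: WHITE for p in wait_for}

    def dfs(p):
        color[p] = GRAY
        for q in wait_for.get(p, []):
            if color.get(q, WHITE) == GRAY:            # back edge: cycle found
                return True
            if color.get(q, WHITE) == WHITE and dfs(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and dfs(p) for p in wait_for)

# Hypothetical wait-for graphs
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": ["P1"]}))  # True: deadlock
print(has_cycle({"P1": ["P2"], "P2": ["P3"], "P3": []}))      # False: no deadlock
```

A cycle in the wait-for graph corresponds exactly to a set of processes each waiting on the next, i.e., a deadlock.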

Steps:

Step 1: Let Work and Finish be two vectors of length m and n respectively. Initialize Work = Available and Finish[i] = false for all i = 1 to n.

Step 2: Find an i such that Finish[i] = false and Requesti <= Work. If no such i exists, go to step 4; otherwise go to step 3 to update the available (Work) value.

Step 3: Work = Work + Allocationi, Finish[i] = true. Go to step 2 for further checking.

Step 4: If Finish[i] = false for some i = 1 to n, then the system is in deadlock. In this case each process Pi whose Finish[i] variable is false is deadlocked.

This operation requires m * n^2 operations to detect whether the system is in a deadlock state.

Example: Consider the following snapshot of a system with 5 processes and 3 types of resources. Detect whether a deadlock exists.

Table 8.1: Processes with Allocation, Request and Availability for Deadlock Detection

P     Allocation   Request    Available
      A  B  C      A  B  C    A  B  C
P0    0  1  0      0  0  0    0  0  0
P1    2  0  0      2  0  2
P2    3  0  3      0  0  0
P3    2  1  1      1  0  0
P4    0  0  2      0  0  2

Solution:
Initialize for i = 0 to 4:
Work = Available = 0,0,0
Set all Finish[i] = false initially.

Find an i such that Finish[i] = false and Requesti <= Work. Apply the deadlock detection algorithm to check whether a sequence exists in which all processes can finish. If such a sequence exists, no deadlock is detected.

Table 8.2: Process Sequence When No Deadlock Is Detected

Pi    Requesti <= Work    Y/N   Work = Work + Allocationi       Finish[i]
P0    0,0,0 <= 0,0,0      Y     Work = 0,0,0 + 0,1,0 = 0,1,0    True
P1    2,0,2 <= 0,1,0      N                                     False
P2    0,0,0 <= 0,1,0      Y     Work = 0,1,0 + 3,0,3 = 3,1,3    True
P3    1,0,0 <= 3,1,3      Y     Work = 3,1,3 + 2,1,1 = 5,2,4    True
P4    0,0,2 <= 5,2,4      Y     Work = 5,2,4 + 0,0,2 = 5,2,6    True
P1    2,0,2 <= 5,2,6      Y     Work = 5,2,6 + 2,0,0 = 5,2,8    True

Deadlock is not detected; the completion sequence is {P0, P2, P3, P4, P1}.

Now suppose process P2 additionally demands one instance of resource type C in the existing situation. Detect whether a deadlock now exists. The new snapshot is:

Table 8.3: Processes with Allocation, Request and Availability for Deadlock

P     Allocation   Request    Available
      A  B  C      A  B  C    A  B  C
P0    0  1  0      0  0  0    0  0  0
P1    2  0  0      2  0  2
P2    3  0  3      0  0  1
P3    2  1  1      1  0  0
P4    0  0  2      0  0  2

Again initialize for i = 0 to 4: set all Finish[i] = false and Work = Available = 0,0,0.

Step 1: Check each process's request against the available number of resources.

For process P0:
Request0 <= Work? 0,0,0 <= 0,0,0 ...... Yes. Set Finish[0] = True.
P0 can complete its execution and release its allocated resources, so the available resources become Work = Work + Allocation0 = 0,0,0 + 0,1,0 = 0,1,0.

For process P1:
Request1 <= Work? 2,0,2 <= 0,1,0 ...... No. Finish[1] remains False.
P1 cannot complete its execution because the requested resources are not available, so P1 must wait and will not release its allocated resources. No change in the Work value.

For process P2:
Request2 <= Work? 0,0,1 <= 0,1,0 ...... No. Finish[2] remains False.
P2 cannot complete its execution because the requested resources are not available, so P2 must wait and will not release its allocated resources. No change in the Work value.

For process P3:
Request3 <= Work? 1,0,0 <= 0,1,0 ...... No. Finish[3] remains False.
P3 cannot complete its execution because the requested resources are not available, so P3 must wait and will not release its allocated resources. No change in the Work value.

For process P4:
Request4 <= Work? 0,0,2 <= 0,1,0 ...... No. Finish[4] remains False.
P4 cannot complete its execution because the requested resources are not available, so P4 must wait and will not release its allocated resources. No change in the Work value.

Checking P1 again:
Request1 <= Work? 2,0,2 <= 0,1,0 ...... No. Finish[1] remains False. No waiting process can make progress, so the algorithm terminates.
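The full detection procedure traced above can be sketched in Python (an illustrative sketch, not the book's code). It reproduces both results: no deadlock for the Table 8.1 snapshot, and a deadlock of P1-P4 once P2 requests one more instance of C.

```python
def detect_deadlock(available, allocation, request):
    """Deadlock detection for multiple resource instances (Section 8.3.2).
    Returns (deadlocked process indices, completion order of finished processes)."""
    n, m = len(allocation), len(available)
    work = list(available)                 # Work = Available
    finish = [False] * n
    order = []
    progressed = True
    while progressed:                      # repeat passes until no process can proceed
        progressed = False
        for i in range(n):
            if not finish[i] and all(request[i][j] <= work[j] for j in range(m)):
                for j in range(m):         # Pi can finish; reclaim its allocation
                    work[j] += allocation[i][j]
                finish[i] = True
                order.append(i)
                progressed = True
    deadlocked = [i for i in range(n) if not finish[i]]
    return deadlocked, order

# Data from Table 8.1
alloc = [[0,1,0], [2,0,0], [3,0,3], [2,1,1], [0,0,2]]
req   = [[0,0,0], [2,0,2], [0,0,0], [1,0,0], [0,0,2]]
dead, seq = detect_deadlock([0,0,0], alloc, req)
print(dead, seq)  # [] [0, 2, 3, 4, 1] -> no deadlock; order {P0, P2, P3, P4, P1}

# Table 8.3: P2 now also requests one instance of C
req2 = [[0,0,0], [2,0,2], [0,0,1], [1,0,0], [0,0,2]]
dead2, _ = detect_deadlock([0,0,0], alloc, req2)
print(dead2)      # [1, 2, 3, 4] -> P1, P2, P3, P4 are deadlocked
```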

Table 8.4: Deadlock is detected — processes P1, P2, P3 and P4 are deadlocked

[B] Deadlock Recovery

Once a deadlock is detected by the system, it should be recovered from to avoid exceptions. There are two solutions for deadlock recovery:

1. Abort one or more processes to break the circular wait.
2. Resource preemption: forcibly preempt resources one by one from processes so that they can be made available. This raises three issues:
• Selecting a victim
• Roll-back
• Starvation

B.1] Abort one or more processes to break the circular wait

This can be implemented either by aborting all deadlocked processes at once or by aborting them one at a time. Aborting all deadlocked processes at once breaks the cycle instantly, but this method is very expensive: some processes may have completed more than 90% of their execution, and aborting them all at once forces such processes to start again from scratch.

Another way is to abort one process at a time until the deadlock cycle is eliminated. But this method incurs considerable overhead, because after each process is aborted, the deadlock detection algorithm must be invoked again to determine whether any processes are still deadlocked.

By aborting a process, the system can reclaim all the resources allocated to the terminated process and add them to the list of available resources to fulfil the needs of other processes.

Aborting processes one by one to break the circular wait requires selecting a process from the set of deadlocked processes. The selection of the process to be terminated depends on many parameters, listed below:

(a) The priority of the process.
(b) How much time the selected process will take to complete the assigned task, and how long it has already computed.
(c) How many and what types of resources the process has used.
(d) The remaining resource need, i.e., how many more resources the process will require to complete the assigned task.
(e) The type of process: interactive or batch.
(f) How many further processes would need to be terminated as a result of aborting the selected process.

B.2] Resource Preemption

In this method, some resources are preempted from one or more of the deadlocked processes and allocated to other processes. This is repeated with multiple processes until the deadlock cycle is broken. Preemption of resources requires considering three different issues for deadlock recovery:

• Selecting a victim
• Roll-back
• Starvation

Selecting a victim

• The victim is the process selected for preemption (forceful termination).
• The selection is based on the following parameters:
  • The cost.
  • How long the resources have been associated with the process.

• How much time is left for the process to complete.
• The number of resources the process is currently holding, and how many more it requires.

Roll-Back

• When a process is preempted from its resources, it cannot continue with its normal execution.
• At this stage such a process must be rolled back to some safe state and then restarted from that state.
• But since it is difficult to determine a safe state for such a process, for simplicity the system performs a total roll-back.
• Total roll-back means aborting the process and then restarting it.

Starvation

• Preemption of resources from a process may lead to starvation.
• The system must take care that resources are not always preempted from the same process; otherwise that victim process will have to wait a very long time to complete its execution and may starve.

8.5 Deadlock Avoidance

Deadlock avoidance overcomes the drawback of deadlock prevention, and is a simpler model to apply. Here each process declares its maximum need of resources of each type required to complete its execution.

• Avoidance is possible only if the system knows the requirements of each process in advance, i.e., before starting the allocation.
• For this, each process must declare the maximum number of resources of each type that it may require.

• A deadlock avoidance algorithm dynamically examines the resource-allocation state before each allocation to ensure that a circular-wait condition can never arise.
• The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes.
• The algorithm checks whether the system is in a safe state or an unsafe state.
• A system is in a safe state if the system can allocate resources to each process up to its maximum demand in some order.
• A system is in a safe state if and only if there exists a safe sequence.
• A safe sequence is an ordering of the processes that satisfies the needs of all processes.
• If no such sequence exists, the system is said to be in an unsafe state.
• An unsafe state may lead to deadlock, but not always; a safe state, however, is never a deadlock state.

What is a Safe Sequence?

A sequence of processes P1, P2, ..., Pn is a safe sequence for the current allocation state if, for each process Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i. In this case Pi can wait until all Pj have finished. When they have finished, Pi can obtain all of its required resources, complete its task, and return all of its allocated resources to the list of available resources. When Pi terminates, Pi+1 can obtain all of its resources, complete its task, and so on.

There are two deadlock avoidance algorithms:
1. Resource Allocation Algorithm
2. Banker's Algorithm
   (a) Resource Request Algorithm
   (b) Safety Algorithm

1. Resource Allocation Algorithm (Suitable for Single Instances of Resources)

The resource allocation algorithm is applicable when there is a single instance of each resource type, not multiple instances. Here it is assumed that there is exactly one instance of each resource type. The algorithm is applied using the resource allocation graph.

• The resource allocation graph consists of three kinds of edges:

1. Request Edge (Pi → Rj): process Pi is requesting resource Rj.
2. Assignment Edge (Rj → Pi): resource Rj is allocated to process Pi.
3. Claim Edge (Pi ⇢ Rj): Pi claims Rj; it is represented with a dashed line. When, in the near future, a request actually comes from Pi for Rj, the claim edge is converted to a request edge.

• Deadlock can be avoided by never allowing a cycle to form in the resource allocation graph. With no cycle, the system is in a safe state. If granting an allocation would create a cycle in the graph, that allocation would put the system in an unsafe state.

Example: Assume that P1 and P2 are two processes and that both require resources R1 and R2 for execution. At some point in time R1 is assigned to P1, and P2 makes a request for R1. Further, R2 is claimed by both P1 and P2 for execution. Currently R2 is free. If the system now allocates R2 to P2, a cycle will be formed in the resource allocation graph (P1 → R2 → P2 → R1 → P1). This would bring the system to an unsafe state.

But if the system assigns R2 to process P1 instead, no cycle is formed in the graph, and the system remains in a safe state. P1 can then finish its execution with resources R1 and R2; afterwards R1 and R2 become free and can be assigned to P2 for execution.

2. Banker's Algorithm (Suitable for Multiple Instances of Resources)

Banker's algorithm is used to avoid deadlock when there are multiple instances of resources. It is implemented as a two-stage algorithm.

In the first stage the system checks the resource request received from a process, verifying its validity against the declared maximum demand and remaining need of the requesting process. When a request for resources arrives, the resources are allocated tentatively (not actually), and the resulting resource-allocation state is examined with the safety algorithm.

The second stage is the safety algorithm. It checks for a safe sequence in the resource-allocation state observed after the tentative allocation.

(a) Resource Request Algorithm
(b) Safety Algorithm

To understand this algorithm, consider the following set of data structures:

A] General Set of Data Structures

Variable    Meaning
n           Total number of processes in the system
m           The number of resource types
Available   A vector of length m indicating the number of available resources of each type. Available[j] = K means there are K instances of resource type Rj available.
Max         An n * m matrix defining the maximum demand of each process. Max[i, j] = K means process Pi may request at most K instances of resource Rj.
Allocation  An n * m matrix defining the number of resources currently allocated to each process. Allocation[i, j] = K means Pi is currently allocated K instances of resource Rj.

Need        An n * m matrix indicating the remaining resource requirement of each process for completing its execution. Need[i, j] = K means process Pi requires K more instances of resource type Rj. Need[i, j] = Max[i, j] - Allocation[i, j].

The need of a process for resources to complete its execution is always computed dynamically by the system, by subtracting the currently allocated number of resources from its maximum demand.

B] For the Resource Request Algorithm, consider the following variables:

Requesti    The request vector for process Pi. Requesti[j] = K means process Pi wants K instances of resource type Rj.
Needi       The need vector for process Pi.

Resource Request Algorithm Steps

Whenever such a request is made by Pi, the system takes the following actions:

1. Calculate the need of each process for each resource type (Needi = Maxi - Allocationi).
2. Check whether Requesti <= Needi. If yes, go to step 3; otherwise raise an error, as the process has exceeded its maximum claim.
3. Check whether Requesti <= Available. If yes, go to step 4; otherwise the resources are not available, so process Pi must wait.
4. If the generated request does not exceed the maximum need and the resources are available as per the need, the system tentatively allocates the resources by modifying the state:
   Available = Available - Requesti
   Allocationi = Allocationi + Requesti
   Needi = Needi - Requesti
5. After making changes in the current data set, the system calls the safety algorithm to check whether the system state after this allocation of resources is safe or unsafe.

At this stage, if the resource-allocation state is safe after fulfilling the request generated by the process, then the transaction is completed. Process Pi is allocated with its required

resources. But if the resulting allocation state is unsafe, the system makes process Pi wait for its request, and the old state and old data sets are restored.

Safety Algorithm

Steps involved in the safety algorithm:

1. Calculate the need of each process for each resource type (Needi = Maxi - Allocationi).
2. Assume that:
   Work = a vector of length m.
   Finish = a vector of length n.
   The Finish variable is used to record each process's state. Finish[i] = true means process Pi has finished its execution, so its resources can be released. Finish[i] = false indicates that the process has not finished and may require some more resources.
3. Initialize:
   Work = Available
   Finish[i] = false for i = 0 to n - 1.
4. Find an i (starting with the first process) such that:
   (a) Finish[i] = false, and
   (b) Needi <= Work.
   If such an i is found, go to step 5; otherwise go to step 6.
5. Work = Work + Allocationi; Finish[i] = true. Add the i-th process to the safe sequence and go back to step 4 to check the next process in sequence.
6. If Finish[i] = true for all values of i, then the system state is called a safe state.

Example of Banker's Algorithm to Check the Safe State

Consider a system with five processes and three types of resources A, B and C. A has 10 instances, B has 5 and C has 7. At some time T0 the system snapshot is as follows:

Table 8.5: Allocation Matrix

P     Allocation   Max        Available
      A  B  C      A  B  C    A  B  C
P0    0  1  0      7  5  3    3  3  2
P1    2  0  0      3  2  2
P2    3  0  2      9  0  2
P3    2  1  1      2  2  2
P4    0  0  2      4  3  3

Table 8.6: Need Matrix (Needi = Maxi - Allocationi, i.e., currently allocated resources subtracted from the maximum demand)

P     Need
      A  B  C
P0    7  4  3
P1    1  2  2
P2    6  0  0
P3    0  1  1
P4    4  3  1

Step 1: The need of each process is calculated for A, B and C.

Step 2: Initialize Work = Available = 3,3,2 and, initially, Finish[i] = false for all i.

Step 3: For i = 0 to 4, start with process P0:
Need0 <= Work? 7,4,3 <= 3,3,2 ...... No, so Finish[0] stays false.
Check P1: Need1 <= Work? 1,2,2 <= 3,3,2 ...... Yes ...... select P1.

Step 4: Fulfil the need of P1 by assigning the desired resources:
Work = Work + Allocation1 = 3,3,2 + 2,0,0 = 5,3,2
Finish[1] = true.
Select process P1 for the safe sequence and go back to step 3 for similar checking of the remaining processes, i.e., P2, P3 and P4.

Finally, when Finish[i] becomes true for all processes, they form a safe sequence.

Check process P2: Need2 <= Work? 6,0,0 <= 5,3,2 ...... No.
Check process P3: Need3 <= Work? 0,1,1 <= 5,3,2 ...... Yes ...... select P3.
Work = Work + Allocation3 = 5,3,2 + 2,1,1 = 7,4,3
Check process P4: Need4 <= Work? 4,3,1 <= 7,4,3 ...... Yes ...... select P4.
Work = Work + Allocation4 = 7,4,3 + 0,0,2 = 7,4,5
Check process P0: Need0 <= Work? 7,4,3 <= 7,4,5 ...... Yes ...... select P0.
Work = Work + Allocation0 = 7,4,5 + 0,1,0 = 7,5,5
Check process P2: Need2 <= Work? 6,0,0 <= 7,5,5 ...... Yes ...... select P2.
Work = Work + Allocation2 = 7,5,5 + 3,0,2 = 10,5,7

Table 8.7: Safe sequence is {P1, P3, P4, P0, P2}

8.6 Deadlock Prevention

Deadlock prevention ensures that one of the four necessary conditions for deadlock cannot hold:

1. Mutual exclusion
2. Hold and wait
3. No preemption of resources
4. Circular wait
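The two stages of the Banker's algorithm traced above (Tables 8.5-8.7) can be sketched together in Python. This is an illustrative sketch, not the book's code; the request by P1 for (1, 0, 2) in the usage line is a hypothetical example, not from the text.

```python
def is_safe(available, max_demand, allocation):
    """Safety algorithm: return a safe sequence of process indices, or None."""
    n, m = len(allocation), len(available)
    need = [[max_demand[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]                    # Need = Max - Allocation
    work, finish, seq = list(available), [False] * n, []
    progressed = True
    while progressed:                             # repeat passes over the processes
        progressed = False
        for i in range(n):
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                for j in range(m):                # Pi finishes, releasing its allocation
                    work[j] += allocation[i][j]
                finish[i] = True
                seq.append(i)
                progressed = True
    return seq if all(finish) else None

def request_resources(i, request, available, max_demand, allocation):
    """Resource request algorithm: grant the request, or make Pi wait."""
    m = len(available)
    need = [max_demand[i][j] - allocation[i][j] for j in range(m)]
    if any(request[j] > need[j] for j in range(m)):              # step 2
        raise ValueError("process exceeded its maximum claim")
    if any(request[j] > available[j] for j in range(m)):         # step 3
        return "wait"
    # Step 4: tentative allocation; step 5: safety check on the new state.
    new_avail = [available[j] - request[j] for j in range(m)]
    new_alloc = [row[:] for row in allocation]
    new_alloc[i] = [allocation[i][j] + request[j] for j in range(m)]
    if is_safe(new_avail, max_demand, new_alloc) is None:
        return "wait"              # unsafe: old state is kept, Pi must wait
    return new_avail, new_alloc    # safe: the allocation is granted

# Snapshot from Tables 8.5/8.6
alloc = [[0,1,0], [2,0,0], [3,0,2], [2,1,1], [0,0,2]]
maxd  = [[7,5,3], [3,2,2], [9,0,2], [2,2,2], [4,3,3]]
print(is_safe([3,3,2], maxd, alloc))  # [1, 3, 4, 0, 2] -> {P1, P3, P4, P0, P2}
```

A hypothetical request by P1 for (1, 0, 2) can then be screened with `request_resources(1, [1, 0, 2], [3, 3, 2], maxd, alloc)`, which grants it because the resulting state is still safe.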

Mutual Exclusion

• The mutual-exclusion condition exists for non-sharable resources, because non-sharable resources cannot be accessed by several processes at the same time. This condition can be avoided by the use of sharable resources: sharable resources can be accessed by more than one process at a time, so processes need not wait for them.

Hold and Wait

• To ensure that the hold-and-wait condition never occurs, the system must guarantee that whenever a process requests a resource, it does not hold any other resources. One protocol requires that a process be allocated all of its resources before it starts execution.
• Another approach allows a process to request resources only when it holds none, i.e., before putting up a new request, the process must release all the resources it currently holds.
• The disadvantages of these two approaches are: (i) resource utilization may be very low, and (ii) starvation is possible.

No Preemption

• To prevent deadlock, this condition should not hold. To ensure that it does not, the following protocol can be used:
• If a process holding some resources requests another resource (or resources) that cannot be allocated to it immediately, the process may have to wait. In that case the system asks the requesting process to release all its resources voluntarily; all such resources are preempted and released implicitly, and the preempted resources are added to the list of available resources. Such a process can restart only when it can regain its old resources as well as the newly requested ones.

Circular Wait

• By imposing a total ordering of all resources and requiring that they be requested in increasing order, we can ensure that the circular-wait condition can never hold.

Possible drawbacks of deadlock prevention are:
1. Low device utilization
2. Reduced system throughput

8.7 Summary

A deadlock state occurs when two or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes.

There are three principal methods for dealing with deadlocks:

• Use some protocol to prevent or avoid deadlocks, ensuring that the system will never enter a deadlock state.
• Allow the system to enter a deadlock state, detect it, and then recover.
• Ignore the problem altogether and pretend that deadlocks never occur in the system.

The third solution is the one used by most operating systems, including UNIX and Windows.

A deadlock can occur only if four necessary conditions hold simultaneously in the system: mutual exclusion, hold and wait, no preemption, and circular wait. To prevent deadlocks, we can ensure that at least one of the necessary conditions never holds.

A method for avoiding deadlocks that is less stringent than the prevention algorithms requires that the operating system have a priori information on how each process will utilize system resources.

8.8 Key Words/Abbreviations

• Process: A process is basically a program in execution. The execution of a process must progress in a sequential fashion.
• Resources: Operating system resources are the physical or virtual components of limited availability within a computer system. Every device connected to a computer system is a resource, and every internal system component is a resource. Virtual system resources include files, network connections and memory areas.

8.9 Learning Activity

1. Explain the Safety algorithm.
-----------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------
2. Explain the Banker's algorithm.
-----------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------

8.10 Unit End Questions (MCQ and Descriptive)

A. Descriptive Type Questions

1. Explain deadlock and its four conditions.
2. Explain deadlock modeling.
3. Explain the deadlock detection algorithm.
4. Explain the deadlock detection algorithm for multiple-instance resources.
5. Explain deadlock recovery.
6. Explain deadlock avoidance.
7. Explain the resource allocation algorithm.
8. Explain the Banker's algorithm to check for a safe state.
9. Explain deadlock prevention in detail.

B. Multiple Choice/Objective Type Questions

1. A system is in the safe state if ____________.
(a) the system can allocate resources to each process in some order and still avoid a deadlock
(b) there exists a safe sequence
(c) All of the mentioned
(d) None of the mentioned

2. The circular wait condition can be prevented by ____________.
(a) defining a linear ordering of resource types
(b) using threads
(c) using pipes
(d) All of the mentioned

3. Which one of the following is a deadlock avoidance algorithm?
(a) Banker's algorithm
(b) Round-robin algorithm
(c) Elevator algorithm
(d) Karn's algorithm

4. To avoid deadlock ____________.
(a) there must be a fixed number of resources to allocate
(b) resource allocation must be done only once
(c) all deadlocked processes must be aborted
(d) inversion technique can be used

5. Which of the following conditions is required for a deadlock to be possible?
(a) Mutual exclusion
(b) A process may hold allocated resources while awaiting assignment of other resources
(c) No resource can be forcibly removed from a process holding it
(d) All of the mentioned

Answers
1. (a), 2. (a), 3. (a), 4. (a), 5. (d)

8.11 References

Reference Books
1. http://www2.latech.edu/~box/os/ch07.pdf
2. https://www.slideshare.net/sonalichauhan/operating-system-deadlock-galvin

Web Resources
1. https://www.geeksforgeeks.org/introduction-of-deadlock-in-operating-system/
2. http://www.padakuu.com/article/191-deadlock-summary
3. https://www.tutorialspoint.com/process-deadlocks-in-operating-system
4. https://www.studytonight.com/operating-system/deadlocks
5. https://www.javatpoint.com/os-deadlocks-introduction

UNIT 9 MEMORY MANAGEMENT - 1

Structure:
9.0 Learning Objectives
9.1 Introduction
9.2 Multiprogramming with Fixed Partition
9.3 Variable Partitions
9.4 Virtual Memory
9.5 Paging
9.6 Demand Paging
9.7 Design and Implementation Issues in Paging Such as Page Tables
9.8 Summary
9.9 Key Words/Abbreviations
9.10 Learning Activity
9.11 Unit End Questions (MCQ and Descriptive)
9.12 References

9.0 Learning Objectives

After studying this unit, you should be able to:
• Explain the various memory management techniques.
• Elaborate on paging, including design and implementation issues.

9.1 Introduction

In operating systems, memory management is the function responsible for managing the computer's primary memory. The memory management function keeps track of the status of each memory location, either allocated or free. It determines how memory is allocated among competing processes: which process gets memory, when it receives it, and how much it is allowed. When memory is allocated, it determines which memory locations will be assigned. It tracks when memory is freed or deallocated and updates the status. This is distinct from application memory management, which is how a process manages the memory assigned to it by the operating system.

9.2 Multiprogramming with Fixed Partition

Introduction

In operating systems, memory management is the function responsible for allocating and managing the computer's main memory. It keeps track of the status of each memory location, either allocated or free, to ensure effective and efficient use of primary memory.

There are two memory management techniques: contiguous and non-contiguous. In the contiguous technique, an executing process must be loaded entirely into main memory. The contiguous technique can be divided into:

• Fixed (or static) partitioning
• Variable (or dynamic) partitioning

Fixed partitioning is the oldest and simplest technique used to put more than one process in main memory. In this scheme the number of (non-overlapping) partitions in RAM is fixed, but the sizes of the partitions may or may not be the same. As allocation is contiguous, no spanning is allowed. Partitions are made before execution, at system configuration time.

• This method allows multiple processes to execute simultaneously.
• Memory is divided into fixed-size partitions. Sizes can be equal or unequal for different partitions.
• Generally, unequal partitions are used for better utilization.
• Each partition can accommodate exactly one process, i.e., only a single process can be placed in a partition.
• The partition boundaries are not movable.
• Whenever a program needs to be loaded into memory, a free partition big enough to hold the program is found; that partition is allocated to the program or process.
• If no free partition of the required size is available, the process must wait; such a process is put into a queue.
• There are two ways to maintain the queue:
  • Using multiple input queues (one per partition).
  • Using a single input queue.

The disadvantage of sorting the incoming jobs into separate queues becomes apparent when the queue for a large partition is empty but the queue for a small partition is full, as is the case for partitions 1 and 3 in Fig. 9.1(a). Here small jobs have to wait to get into memory, even though plenty of memory is free. An alternative organization is to maintain a single queue, as in Fig. 9.1(b): whenever a partition becomes free, the job closest to the front of the queue that fits in it is loaded into the empty partition and run.
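The single-input-queue scheme just described might be sketched as follows. This is an illustrative sketch, not from the text; the partition sizes and job names are hypothetical, and the scan here looks for the first queued job that fits any currently free partition.

```python
from collections import deque

partitions = [100, 500, 200, 300]        # hypothetical fixed partition sizes (KB)
occupied = [None] * len(partitions)      # which job (if any) each partition holds

def load_next(queue):
    """Scan the single queue front-to-back and load the first job that fits a free partition."""
    for job, size in list(queue):
        for i, cap in enumerate(partitions):
            if occupied[i] is None and size <= cap:
                occupied[i] = job
                queue.remove((job, size))
                return job, i            # cap - size is wasted: internal fragmentation
    return None                          # nothing fits: queued jobs must wait

queue = deque([("J1", 450), ("J2", 90), ("J3", 250)])
print(load_next(queue))  # ('J1', 1): J1 (450 KB) goes into the 500 KB partition
print(load_next(queue))  # ('J2', 0): J2 (90 KB) goes into the 100 KB partition
print(load_next(queue))  # ('J3', 3): J3 (250 KB) skips the busy/small partitions
```

Note how J2 leaves 10 KB unused inside its 100 KB partition; that slack is exactly the internal fragmentation discussed below.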

Fig. 9.1(a): Fixed memory partitions with a separate input queue for each partition. Fig. 9.1(b): Fixed memory partitions with a single input queue. (Both figures show partitions 1-4 above the operating system, with incoming jobs queued for entry.)

Advantages of Fixed Partitioning

1. Easy to implement: The algorithms needed to implement fixed partitioning are simple. The scheme merely puts a process into a partition, without worrying about the emergence of internal and external fragmentation.
2. Little OS overhead: Fixed partitioning requires little extra or indirect computational power.

Disadvantages of Fixed Partitioning

1. Internal fragmentation: Main-memory use is inefficient. Any program, no matter how small, occupies an entire partition; the unused space inside a partition is internal fragmentation.
2. External fragmentation: The total unused space across partitions cannot be used to load a process even though enough space is available, because the space is not contiguous (spanning is not allowed).
3. Limit on process size: A process larger than the largest partition in main memory cannot be accommodated, because the partition size cannot be varied according to the size of the incoming process.

4. Limitation on degree of multiprogramming: Partitions in main memory are made before execution, at system configuration time, and main memory is divided into a fixed number of partitions. If there are k partitions in RAM and n processes, then the condition n <= k must be fulfilled: having more processes than partitions in RAM is invalid in fixed partitioning.

9.3 Variable Partitions

Variable partitioning is also a contiguous allocation technique. It is used to alleviate the problems faced by fixed partitioning. In contrast with fixed partitioning, partitions are not made before execution or at system configuration time. Various features associated with variable partitioning:

1. Initially RAM is empty; partitions are made at run time according to each process's need, instead of at system configuration time.
2. The size of a partition equals the size of the incoming process.
3. The partition size varies according to the need of the process, so internal fragmentation is avoided and RAM is utilized efficiently.
4. The number of partitions in RAM is not fixed; it depends on the number of incoming processes and main memory's size.

• This method also allows multiple processes to execute simultaneously.
• Memory is shared among the operating system and the various simultaneously running processes.
• Memory is not divided into fixed partitions, and the number of partitions is not fixed; a process is allocated exactly as much memory as it requires.
• Initially, the entire available memory is treated as a single free partition.
• Whenever a process enters the system, a chunk of free memory big enough to fit the process is found and allocated; the remaining unoccupied space is treated as another free partition.

 If enough free memory is not available to fit the process, the process needs to wait until the required memory becomes available.

 Whenever any process terminates, it releases the space it occupied. If the released free space is contiguous to another free partition, the two free partitions are merged into a single free partition.

 Better utilisation of memory than fixed-sized partitioning.

 This method suffers from external fragmentation.

Fig. 9.2: Memory allocation changes as processes come into memory and leave it. The shaded regions are unused memory.

Advantages of Variable Partitioning

1. No Internal Fragmentation: In Variable Partitioning, space in main memory is allocated strictly according to the need of the process, hence there is no internal fragmentation. There will be no unused space left in the partition.

2. No restriction on Degree of Multiprogramming: More processes can be accommodated due to the absence of internal fragmentation. A process can be loaded as long as free memory is available.

3. No Limitation on the Size of the Process: In Fixed Partitioning, a process with a size greater than the size of the largest partition could not be loaded, and a process cannot be divided, as that is invalid in the contiguous allocation technique. Here, in Variable Partitioning, the process size is not restricted, since the partition size is decided according to the process size.

Disadvantages of Variable Partitioning

1. Difficult Implementation: Implementing Variable Partitioning is difficult compared to Fixed Partitioning, as it involves allocation of memory during run-time rather than at system configuration time.

2. External Fragmentation: There will be external fragmentation in spite of the absence of internal fragmentation. For example, suppose in the dynamic partitioning example, process P1 (2 MB) and process P3 (1 MB) complete their execution. Two free spaces are left, of 2 MB and 1 MB. Now suppose a process P5 of size 3 MB arrives. The empty spaces in memory cannot be allocated to it, as no spanning is allowed in contiguous allocation. The rule says that a process must be contiguously present in main memory to get executed. Hence it results in external fragmentation.

(Figure: memory layout with P2 = 7 MB and P4 = 5 MB still resident and holes of 2 MB and 1 MB left by P1 and P3. Partition size = process size, so there is no internal fragmentation, yet P5 of size 3 MB cannot be accommodated in spite of sufficient total free space, because no spanning is allowed.)
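The external fragmentation scenario above can be sketched in a few lines. This is an illustrative first-fit allocator over a list of free holes (the hole sizes are the ones from the example; the function name is our own):

```python
def first_fit(holes, size_mb):
    """Return the index of the first free hole large enough for the
    request, or None when no single hole fits (spanning is not allowed)."""
    for i, hole in enumerate(holes):
        if hole >= size_mb:
            return i
    return None

# Holes of 2 MB and 1 MB are left after P1 and P3 terminate.
holes = [2, 1]
print(first_fit(holes, 3))   # None: 3 MB free in total, but not contiguous
print(first_fit(holes, 2))   # 0: a 2 MB request fits in the first hole
```

Even though 3 MB is free in total, the 3 MB request fails because no single contiguous hole can hold it, which is exactly external fragmentation.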

9.4 Virtual Memory

Real memory refers to the actual memory chips that are installed in the computer. All programs actually run in this physical memory. However, it is often useful to allow the computer to think that it has memory that isn't actually there, in order to permit the use of programs that are larger than will physically fit in memory, or to allow multitasking (multiple programs running at once). This concept is called virtual memory, shown in Fig. 9.3.

Virtual memory is an imaginary memory area supported by some operating systems (for example, Windows but not DOS) in conjunction with the hardware. You can think of virtual memory as an alternate set of memory addresses. Programs use these virtual addresses rather than real addresses to store instructions and data. When the program is actually executed, the virtual addresses are converted into real memory addresses.

Fig. 9.3: Virtual Memory (the virtual memory space mapped onto the physical memory chip and the disk drive)

Purpose of Virtual Memory

Virtual memory provides the conceptual separation of user logical memory from physical memory. Thus we can have a large virtual memory on a small physical memory. The purpose of virtual memory is to enlarge the address space, the set of addresses a program can utilize. For example, as shown in Fig. 9.4, virtual memory might contain twice as many addresses as main memory. A program using all of virtual memory, therefore, would not be able to fit in main memory all at once. Nevertheless, the computer could execute such a program by copying into main memory those portions of the program needed at any given point during execution.

To facilitate copying virtual memory into real memory, the operating system divides virtual memory into pages, each of which contains a fixed number of addresses. Each page is stored on disk until it is needed. When the page is needed, the operating system copies it from disk to main memory, translating the virtual addresses into real addresses.

Fig. 9.4: Concept of Virtual Memory (pages 0 to v of virtual memory mapped through a memory map onto physical memory)

9.5 Paging

Dynamic memory partitioning suffers from external fragmentation. To overcome this problem we can use either compaction or paging. Paging allows a program to be allocated physical memory wherever it is available. In paging, physical memory is broken into fixed-size blocks called frames, and logical memory is broken into fixed-size blocks called pages. Whenever a process is to be executed, its pages are loaded from the backing store into any available memory frames. The backing store is divided into fixed blocks that are the same size as the memory frames.

The page size is dependent on the hardware. It is typically a power of 2, varying between 512 bytes and 16 MB per page depending on the system architecture.

Choosing the page size as a power of 2 makes the translation of a logical address into a physical address easy. The logical address has the following form:

Page Number | Offset

Fig. 9.5: Address Conversion from Logical to Physical

Every logical address is bound to a physical address, as shown in Fig. 9.6 and Fig. 9.7, so paging is a form of dynamic relocation. Every address generated by the CPU is divided into two parts: (1) a page number (p) and (2) a page offset (d).

Page Number: The page number is used as an index into the page table, as shown in Fig. 9.6. The page table contains the base address of each page in physical memory.

Base Address: This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.

Fig. 9.6: Example of Page Translation (the CPU issues a logical address (p, d); p indexes the page table to obtain frame number f, and (f, d) is the physical address sent to physical memory)
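The translation described above is a table lookup plus an integer split. The sketch below uses a page size of 4 bytes and the page-table values 5, 6, 1, 2 as they appear in the Fig. 9.7 example; treat these particular values as illustrative:

```python
PAGE_SIZE = 4                  # 4-byte pages, as in the Fig. 9.7 example
page_table = [5, 6, 1, 2]      # page -> frame (values assumed from the figure)

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)   # split into (p, d)
    frame = page_table[page]                    # page table lookup
    return frame * PAGE_SIZE + offset           # physical address (f, d)

# Logical address 4 is page 1, offset 0; page 1 maps to frame 6,
# so the physical address is 6 * 4 + 0 = 24.
print(translate(4))    # 24
print(translate(13))   # page 3, offset 1 -> frame 2 -> 9
```

Because the page size is a power of 2, the split can equally be done with a shift and a mask instead of a division.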

Example of Paging:

Fig. 9.7: Example of Paging (a 16-byte logical memory with 4-byte pages holding characters a to p, mapped through a page table into a 32-byte physical memory)

9.6 Demand Paging

When a page is referenced, either for code execution or data access, and that page isn't in memory, then get the page from disk and re-execute the statement.

Fig. 9.8: Demand Paging (servicing a page fault: (1) reference, (2) trap to the operating system, (3) the page is on the backing store, (4) bring in the missing page to a free frame, (5) reset the page table, (6) restart the instruction)

A demand paging system is similar to a paging system with swapping. In this method the process resides on the backing store (i.e., secondary memory or disk), as shown in Fig. 9.8. When we want to execute the process, we swap it into memory. A lazy swapper is used to swap in only a page that is needed, instead of swapping in the whole process. A page is swapped in strictly when it is needed.

We can view the process as a sequence of pages rather than one large address space. Instead of a swapper we use a pager, which is concerned with the individual pages of a process. When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. It avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

The hardware to support demand paging is the same as that required for paging and swapping:

(1) Page table

(2) Secondary memory

(3) Software needed to handle page faults

In demand paging, if we guess right and demand only those pages that are actually needed, the process will run exactly as per our expectation. But if the process tries to access a page that was not brought into memory, the access to this page is called a page fault.

Advantages:

(1) Demand paging loads only the pages that are demanded by the executing process, as opposed to loading all pages immediately.

(2) When a process is swapped out (context switch) of memory, only those pages loaded in main memory need to be swapped out from main memory to secondary storage.

(3) As there is more space in main memory, more processes can be loaded, reducing the context switching time that utilises large amounts of resources.

(4) Less loading latency occurs at program startup, as less information is accessed from secondary storage and less information is brought into main memory.

(5) It does not need extra hardware support beyond what paging needs, since the protection fault mechanism can be used to detect page faults.

Disadvantages:

(1) Individual programs face extra latency when they access a page for the first time. So demand paging may have lower performance than anticipatory paging algorithms such as prepaging, a method of remembering which pages a process used when it last executed and preloading a few of them to improve performance.

(2) Programs running on low-cost, low-power embedded systems may not have a memory management unit that supports page replacement.

(3) Memory management with page replacement algorithms becomes slightly more complex.

(4) There are possible security risks, including vulnerability to timing attacks.

9.7 Design and Implementation Issues in Paging, such as Page Tables

A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. Virtual addresses are those unique to the accessing process. Physical addresses are those unique to the CPU, i.e., RAM.

Fig. 9.9: Concept of Page Table (the valid-invalid bit and frame number in each page table entry map logical memory to physical memory)

Concept of Page Table: The virtual address is divided into a virtual page number (the high-order bits) and an offset (the low-order bits).

For example, as shown in Fig. 9.9, with a 16-bit address and a 4 KB page size, the upper 4 bits could specify one of 16 virtual pages and the lower 12 bits would then specify the byte offset (0 to 4095) within the selected page. However, a split with some other number of bits for the page is also possible. Different splits imply different page sizes.

The virtual page number is used as an index into the page table to find the entry for that virtual page. From the page table entry, the page frame number (if any) is found. The page frame number is attached to the high-order end of the offset, replacing the virtual page number, to form a physical address that can be sent to the memory.

The main use of the page table is to map virtual pages onto page frames. In mathematical terms, the page table is a function, with the virtual page number as argument and the physical frame number as result. Using the result of this function, the virtual page field in a virtual address can be replaced by the page frame field, thus forming a physical memory address.

Two major issues attached to page tables are that the page table can be extremely large and that the mapping must be fast.

Fig. 9.10: Concept of Page Table
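The 16-bit/4 KB split described above is just a shift and a mask; a minimal sketch:

```python
OFFSET_BITS = 12               # 4 KB pages -> 12 offset bits
PAGE_SIZE = 1 << OFFSET_BITS

def split(vaddr):
    """Split a 16-bit virtual address into (virtual page number, offset)."""
    return vaddr >> OFFSET_BITS, vaddr & (PAGE_SIZE - 1)

# The upper 4 bits pick one of 16 virtual pages; the lower 12 bits
# give the byte offset 0..4095 within that page.
print(split(0x2003))   # (2, 3)
print(split(0xFFFF))   # (15, 4095)
```

Changing OFFSET_BITS changes the split and therefore the page size, which is the "different splits imply different page sizes" point above.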

9.8 Summary

 Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution.

 It keeps track of each and every memory location, regardless of whether it is allocated to some process or free.

 It checks how much memory is to be allocated to processes. It decides which process will get memory at what time.

 It tracks whenever some memory gets freed or unallocated and correspondingly updates the status.

 Following are the concepts related to memory management:

1. Process Address Space

2. Static vs Dynamic Loading

3. Static vs Dynamic Linking

4. Swapping

5. Memory Allocation

6. Fragmentation

7. Paging

8. Segmentation

 There are two memory management techniques: contiguous and non-contiguous. In the contiguous technique, an executing process must be loaded entirely in main memory. The contiguous technique can be divided into:

1. Fixed (or static) partitioning

2. Variable (or dynamic) partitioning

 Virtual memory is a memory management capability of an operating system (OS) that uses hardware and software to allow a computer to compensate for physical memory

shortages by temporarily transferring data from random access memory (RAM) to disk storage.

 A computer's memory management unit (MMU) handles memory operations, including managing virtual memory. In most computers, the MMU hardware is integrated into the CPU. There are two ways in which virtual memory is handled: paged and segmented.

 Paging: In operating systems, paging is a storage mechanism used to retrieve processes from secondary storage into main memory in the form of pages. The main idea behind paging is to divide each process in the form of pages. Main memory is likewise divided in the form of frames. One page of the process is stored in one of the frames of memory. The pages can be stored at different locations of the memory, but the priority is always to find contiguous frames or holes. Pages of the process are brought into main memory only when they are required; otherwise they reside in secondary storage. Different operating systems define different frame sizes. The sizes of all frames must be equal. Since pages are mapped to frames in paging, the page size needs to be the same as the frame size.

 Demand Paging: Demand paging is a popular method of virtual memory management. In demand paging, the pages of a process which are least used get stored in secondary memory. A page is copied to main memory when its demand is made, i.e., when a page fault occurs. There are various page replacement algorithms which are used to determine the pages which will be replaced.

9.9 Key Words/Abbreviations

 Paging: Paging is a memory management mechanism that allows the physical address space of a process to be non-contiguous. Here physical memory is divided into blocks of equal size called pages. The pages belonging to a certain process are loaded into available memory frames.

 Page Table: A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses and physical addresses. A virtual address, also known as a logical address, is generated by the CPU, while a physical address is the address that actually exists in memory.

9.10 Learning Activity

1. What is the difference between static partitioning and dynamic partitioning?

-----------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------

2. What do you mean by paging and demand paging?

-----------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------

9.11 Unit End Questions (MCQ and Descriptive)

A. Descriptive Type Questions

1. Explain fixed partitions for multi-programming.

2. Explain advantages and disadvantages of fixed partitioning.

3. Explain variable partitioning in detail.

4. Explain advantages and disadvantages of variable partitioning.

5. Explain virtual memory.

6. Explain paging with the page table.

7. Explain the demand paging concept.

8. Explain advantages and disadvantages of demand paging.

9. State issues in paging.

B. Multiple Choice/Objective Type Questions

1. _________ is the function responsible for managing the computer's primary memory.

(a) Low memory (b) High memory

(c) Memory management (d) None of the mentioned

2. In contiguous memory allocation ____________.

(a) each process is contained in a single contiguous section of memory

(b) all processes are contained in a single contiguous section of memory

(c) the memory space is contiguous

(d) None of the mentioned

3. In fixed size partition, the degree of multiprogramming is bounded by ___________.

(a) the number of partitions (b) the CPU utilization

(c) the memory size (d) All of the mentioned

4. For every process there is a __________.

(a) pointer to page table (b) copy of page table

(c) page table (d) All of the mentioned

5. Virtual memory is normally implemented by ___________.

(a) buses (b) demand paging

(c) virtualization (d) All of the mentioned

Answers

1. (c), 2. (a), 3. (a), 4. (c), 5. (b)

9.12 References

Reference Books

1. https://www.researchgate.net/publication/332912311_MEMORY_MANAGEMENT_IN_COMPUTER_SYSTEM

2. https://web.cs.wpi.edu/~cs3013/c07/lectures/Section08-Memory_Management.pdf

Web Resources

1. https://en.wikipedia.org/wiki/Memory_management_(operating_systems)

2. https://searchstorage.techtarget.com/definition/virtual-memory

3. https://cs.nyu.edu/courses/spring02/V22.0202-002/lecture-09.html

4. https://www.tutorialspoint.com/operating_system/os_memory_management.htm

5. https://www.guru99.com/os-memory-management.html

UNIT 10 MEMORY MANAGEMENT - 2

Structure:

10.0 Learning Objectives

10.1 Introduction

10.2 Inverted Page Tables

10.3 Page Replacement Algorithms

10.3.1 Different Page Replacement Algorithms

10.4 Page Fault Handling

10.5 Working Set Model

10.6 Local vs. Global Allocation

10.7 Page Size

10.8 Segmentation with Paging

10.8.1 Difference between Paging and Segmentation

10.9 Summary

10.10 Key Words/Abbreviations

10.11 Learning Activity

10.12 Unit End Questions (MCQ and Descriptive)

10.13 References

10.0 Learning Objectives

After studying this unit, you should be able to:

 Apply the page replacement algorithms.

 Explain page fault handling techniques.

 Discuss the concept of segmentation.

10.1 Introduction

Memory management is the functionality of an operating system which handles or manages primary memory and moves processes back and forth between main memory and disk during execution. Memory management keeps track of each and every memory location, regardless of whether it is allocated to some process or free. It checks how much memory is to be allocated to processes. It decides which process will get memory at what time. It tracks whenever some memory gets freed or unallocated and correspondingly updates the status.

10.2 Inverted Page Tables

An alternate approach is to use the inverted page table structure, which contains one page table entry for every frame of main memory. So the number of page table entries in the inverted page table reduces to the number of frames in physical memory, and a single page table is used to represent the paging information of all the processes.

Through the inverted page table, the overhead of storing an individual page table for every process is eliminated, and only a fixed portion of memory is required to store the paging information of all the processes together. This technique is called inverted paging as the indexing is done with respect to the frame number instead of the logical page number. Each entry in the page table contains the following fields.

 Page number – It specifies the page number range of the logical address.

 Process id – An inverted page table contains the address space information of all the processes in execution. Since two different processes can have similar sets of virtual addresses, it becomes necessary in an inverted page table to store a process id for each process to identify its address space uniquely. This is done by using the combination of process id and page number. So the process id acts as an address space identifier and ensures that a virtual page of a particular process is mapped correctly to the corresponding physical frame.

 Control bits – These bits are used to store extra paging-related information. These include the valid bit, dirty bit, reference bits, protection and locking information bits.

 Chained pointer – It may be possible that two or more processes share a part of main memory. In this case, two or more logical pages map to the same page table entry, and a chaining pointer is used to map the details of these logical pages to the root page table.

 Working – The operation of an inverted page table is shown below:

Fig. 10.1: Inverted Page Table (the virtual address <process id, page number, offset> is matched by searching the table; the index i of the matching entry gives the frame number, for a physical memory of 2^m frames)
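The search that Fig. 10.1 depicts can be written out as a short sketch. The table contents, process ids, and page size below are hypothetical:

```python
PAGE_SIZE = 4096

# One entry per physical frame: (process id, page number) of the page
# held in that frame; the list index itself is the frame number.
ipt = [("P1", 0), ("P2", 0), ("P1", 1), ("P2", 3)]

def lookup(pid, page, offset):
    """Search for <pid, page>; the index i of the match is the frame number."""
    for frame, entry in enumerate(ipt):
        if entry == (pid, page):
            return frame * PAGE_SIZE + offset   # physical address <i, offset>
    raise LookupError("no matching entry: fault")

print(lookup("P1", 1, 100))   # found at frame 2 -> 2 * 4096 + 100 = 8292
```

The linear scan is what makes the lookup slow in practice, which is why real implementations hash on (process id, page number) instead.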

The virtual address generated by the CPU contains the fields <process id, page number, offset>, and each page table entry contains <process id, page number> together with the other relevant information required by the paging mechanism. When a memory reference takes place, this virtual address is matched by the memory-mapping unit: the inverted page table is searched for a matching <process id, page number> pair, and the corresponding frame number is obtained. If the match is found at the i-th entry, then <i, offset> is sent as the physical address; otherwise, if no match is found, a segmentation fault is generated.

Note: Number of entries in inverted page table = number of frames in physical address space (PAS).

Examples – The inverted page table and its variations are implemented in various systems like the PowerPC, UltraSPARC and the IA-64 architecture. An implementation of the Mach operating system on the RT-PC also uses this technique.

Advantages and Disadvantages

 Reduced memory space – Inverted page tables typically reduce the amount of memory required to store the page tables to a size bound by physical memory. The maximum number of entries is the number of page frames in physical memory.

 Longer lookup time – The inverted page table is sorted in order of frame number, but the memory lookup takes place with respect to the virtual address, so it usually takes longer to find the appropriate entry. Often these page tables are implemented using hash data structures for a faster lookup.

 Difficult shared memory implementation – As the inverted page table stores a single entry for each frame, it becomes difficult to implement shared memory in the page table. Chaining techniques are used to map more than one virtual address to an entry specified in order of frame number.

10.3 Page Replacement Algorithms

Page replacement algorithms are the techniques using which an operating system decides which memory pages to swap out and write to disk when a page of memory needs to be allocated. Paging happens whenever a page fault occurs and a free page cannot be used for allocation

purposes, because pages are not available or the number of free pages is lower than required.

A page replacement algorithm looks at the limited information about page accesses provided by the hardware, and tries to select which pages should be replaced so as to minimize the total number of page misses, while balancing this against the costs of primary storage and the processor time of the algorithm itself. There are many different page replacement algorithms. We evaluate an algorithm by running it on a particular string of memory references and computing the number of page faults.

Examples where page replacement can occur:

1. The first example is that most computers have one or more memory caches consisting of recently used 32-byte or 64-byte memory blocks. When the cache is full, some block has to be chosen for removal. This problem is precisely the same as page replacement except on a shorter time scale (it has to be done in a few nanoseconds, not milliseconds as with page replacement). The reason for the shorter time scale is that cache block misses are satisfied from main memory, which has no seek time and no rotational latency.

2. A second example is a Web server. The server can keep a certain number of heavily used Web pages in its memory cache. However, when the memory cache is full and a new page is referenced, a decision has to be made as to which Web page to evict. The considerations are similar to pages of virtual memory, except for the fact that the Web pages are never modified in the cache, so there is always a fresh copy on disk. In a virtual memory system, pages in main memory may be either clean or dirty.

10.3.1 Different Page Replacement Algorithms

When the total memory requirement in demand paging exceeds the physical memory, there is a need to replace pages in memory to free frames for new pages. For replacement of pages various techniques are used:

(a) FIFO Page Replacement

(b) Optimal Page Replacement

(c) LRU Page Replacement

(d) NRU (Not Recently Used) Page Replacement

(e) Clock Page Replacement Algorithm

10.3.1.1 First-In, First-Out (FIFO) Page Replacement Algorithm

The oldest page in main memory is the one which will be selected for replacement. It is easy to implement: keep a list, replace pages from the head and add new pages at the tail. FIFO is a low-overhead paging algorithm. The operating system maintains a list of all pages currently in memory, with the page at the head of the list the oldest one and the page at the tail the most recent arrival. On a page fault, the page at the head is removed and the new page is added to the tail of the list.

Example of FIFO: The reference string is 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2.

Fig. 10.2: FIFO Page Replacement Algorithm (F = page fault occurring after the frame allocation is initially filled)

Example-1: The reference string is 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1.

Fig. 10.3: FIFO Page Replacement Algorithm

Explanation:

1. The first three references (7, 0, 1) cause page faults and the pages are brought into the empty frames.

2. The next page is 2, and it replaces page 7, which was brought in first.

3. The next page is 0; since it is already in memory, we have no fault. The next reference, 3, replaces page 0, since 0 was the first brought in of the resident pages 0, 1, 2.

4. Next, page 0 replaces page 1, then page 4 replaces page 2 and page 2 replaces page 3, and so on, as shown in Figs. 10.3 and 10.4.

5. Continuing in this way over the whole reference string, 15 page faults occur in all.

Fig. 10.4: FIFO Page Replacement Algorithm
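FIFO is easy to simulate; the sketch below counts faults for the Example-1 string and for the reference string of the next example, where giving FIFO more frames actually produces more faults:

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement over a reference string."""
    frames, order, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:        # memory full: evict the oldest page
                frames.discard(order.popleft())
            frames.add(page)
            order.append(page)
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1], 3))  # 15
string2 = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(string2, 3), fifo_faults(string2, 4))  # 9 10
```

The `deque` plays the role of the OS's list of resident pages: the head is the oldest arrival and the tail the newest.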

Example of FIFO with Different Frames: The reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5.

With 3 frames (3 pages can be in memory at a time per process), 9 page faults occur.

Fig. 10.5: 3 Frames

With 4 frames (4 pages can be in memory at a time per process), 10 page faults occur.

Fig. 10.6: 4 Frames

Notice that giving the process more frames produced more page faults; this behaviour of FIFO is known as Belady's anomaly.

Fig. 10.7: FIFO with Total Page Faults

Drawbacks of FIFO:

 Process execution is slow.

 The rate of page faults increases.

Example-2: FIFO Page Replacement Algorithm

Optimal Page Replacement Algorithm:

The optimal page algorithm simply says that the page with the highest label should be removed. If one page will not be used for 8 million instructions and another page will not be used

for 6 million instructions, removing the former pushes the page fault that will fetch it back as far into the future as possible. The only problem with this algorithm is that it is unrealizable. At the time of the page fault, the operating system has no way of knowing when each of the pages will be referenced next. Still, by running a program on a simulator and keeping track of all page references, it is possible to implement optimal page replacement on the second run by using the page reference information collected during the first run.

Example-1:

Fig. 10.8: Optimal Page Replacement Algorithm (F = page fault occurring after the frame allocation is initially filled)

Fig. 10.9: Optimal Page Replacement Algorithm

Explanation: Fig. 10.9 shows the optimal page replacement algorithm. The first three references cause page faults and we store these pages in the first three frames. The reference to page 2 replaces page 7, because 7 is not used again for the longest period. The reference to page 3 replaces page 1. The next reference to page 4 replaces page 0, and so on for pages 0 and 1. Finally, for the last references, page 7 replaces page 2.
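Although unrealizable online, the optimal policy is easy to simulate when the whole reference string is known in advance. A sketch, run on the same reference string as Example-1:

```python
def optimal_faults(refs, nframes):
    """Count faults for optimal replacement: on a fault with memory full,
    evict the resident page whose next use lies farthest in the future
    (or that is never used again)."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
        else:
            def next_use(p):
                future = refs[i + 1:]
                return future.index(p) if p in future else len(refs)
            victim = max(frames, key=next_use)
            frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(optimal_faults(refs, 3))   # 9
```

With 3 frames this string incurs 9 page faults, the minimum any replacement policy can achieve on it, which is why the optimal count serves as a baseline for comparing FIFO and LRU.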

Example with 4 Frames

Fig. 10.10: Optimal Page Replacement Algorithm

Limitations:

1. The algorithm is difficult to implement.

2. The FIFO algorithm uses the time when a page was brought into memory, whereas optimal page replacement uses the time when a page is to be used.

Fig. 10.11: Optimal Total Page Faults

Example-2: Optimal Page Replacement

LRU (Least Recently Used) Page Replacement Algorithm:

1. A good approximation to the optimal algorithm is based on the observation that pages that have been heavily used in the last few instructions will probably be heavily used again in the next few. Conversely, pages that have not been used for ages will probably remain unused for a long time. The best way is to throw out the page that has been

unused for the longest time when a page fault occurs. This strategy is called LRU (Least Recently Used) paging.

2. As shown in Fig. 10.12, although LRU is theoretically realizable, it is not cheap. To fully implement LRU, it is necessary to maintain a linked list of all pages in memory, with the most recently used page at the front and the least recently used page at the rear.

3. The difficulty is that the list must be updated on every memory reference.

4. Finding a page in the list, deleting it, and then moving it to the front is a very time-consuming operation, even in hardware (assuming that such hardware could be built).

Fig. 10.12: LRU Page Replacement Algorithm (F = page fault occurring after the frame allocation is initially filled)

Example-1: LRU Page Replacement Algorithm

Fig. 10.13: LRU Page Replacement Algorithm

Explanation:

In the above figure, the first five diagrams are the same as those found in optimal page replacement. But in the next diagrams, the LRU page replacement algorithm, on a page fault, finds the page least recently used and replaces it with the required one. E.g., page 2 is replaced by page 4, and so on.

Fig. 10.14: LRU Page Replacement Algorithm

The major problem is how to implement LRU replacement:

Counter: Whenever a reference to a page is made, the contents of the clock register are copied to the time-of-use field in the page table entry for the page. We replace the page with the smallest time value.

Stack: Whenever a page is referenced, it is removed from the stack and put on the top. In this way, the most recently used page is always at the top of the stack.

Counter implementation:

 Every page table entry has a counter; every time the page is referenced through this entry, the clock is copied into the counter.

 When a page needs to be replaced, look at the counters to determine which page to replace.

Stack implementation: Keep a stack of page numbers in a doubly linked form:

 Page referenced: move it to the top

 Requires 6 pointers to be changed

 No search for replacement
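The stack implementation described above can be sketched directly, with a Python list standing in for the doubly linked stack (most recently used page at the end, least recently used at the front):

```python
def lru_faults(refs, nframes):
    """Count page faults for LRU replacement using the stack scheme."""
    stack, faults = [], 0
    for page in refs:
        if page in stack:
            stack.remove(page)          # page referenced: move it to the top
        else:
            faults += 1
            if len(stack) == nframes:   # full: the bottom is the LRU victim
                stack.pop(0)            # no search needed for replacement
        stack.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12
```

On this reference string with 3 frames, LRU gives 12 faults, between FIFO's 15 and the optimal 9, which matches its role as an approximation of the optimal algorithm.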

