Figure 12.1: Paging Hardware

The hardware support for paging is illustrated in Figure 12.1. Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.

Figure 12.2: Paging Model of Logical and Physical Memory

The page size (like the frame size) is defined by the hardware. The size of a page is typically a power of 2, varying between 512 bytes and 8,192 bytes per page, depending on the computer architecture. The selection of a power of 2 as a page size makes the translation of a logical address into a page number and page offset particularly easy. If the size of the logical address space is 2^m, and a page size is 2^n addressing units (bytes or words), then the high-order m – n
bits of a logical address designate the page number, and the n low-order bits designate the page offset. Thus, the logical address is as follows:

page number p (m – n bits) | page offset d (n bits)

where p is an index into the page table and d is the displacement within the page. For a concrete, although minuscule, example, consider the memory of Figure 12.3. Using a page size of 4 bytes and a physical memory of 32 bytes (8 frames), we show an example of how the user's view of memory can be mapped into physical memory. Logical address 0 is page 0, offset 0; indexing into the page table, we find that page 0 is in frame 5. Thus, logical address 0 maps to physical address 20 (= (5 x 4) + 0). Logical address 3 (page 0, offset 3) maps to physical address 23 (= (5 x 4) + 3). Logical address 4 is page 1, offset 0; according to the page table, page 1 is mapped to frame 6. Thus, logical address 4 maps to physical address 24 (= (6 x 4) + 0). Logical address 13 (page 3, offset 1) maps to physical address 9 (= (2 x 4) + 1), since page 3 is in frame 2. Notice that paging itself is a form of dynamic relocation. Every logical address is bound by the paging hardware to some physical address. The observant reader will have realized that paging is similar to using a table of base (relocation) registers, one for each frame of memory.

Figure 12.3: Paging Example for a 32-byte Memory with 4-byte Pages
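This translation is easy to reproduce in code. The following is a minimal Python sketch using the page table of Figure 12.3 (the entry for page 2 is assumed for completeness; the function and variable names are illustrative, not from the text). Because the page size is a power of 2, the page number and offset fall out of a single division.

```python
PAGE_SIZE = 4  # bytes, as in Figure 12.3

# Page table from the worked example; the entry for page 2 is an assumption.
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def translate(logical_address):
    """Split a logical address into (p, d) and map it through the page table."""
    p = logical_address // PAGE_SIZE   # page number: the high-order bits
    d = logical_address % PAGE_SIZE    # page offset: the low-order bits
    return page_table[p] * PAGE_SIZE + d

# Reproduces the figures worked out in the text:
assert translate(0) == 20    # page 0, offset 0 -> frame 5
assert translate(3) == 23    # page 0, offset 3
assert translate(4) == 24    # page 1, offset 0 -> frame 6
assert translate(13) == 9    # page 3, offset 1 -> frame 2
```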
When we use a paging scheme, we have no external fragmentation: any free frame can be allocated to a process that needs it. However, we may have some internal fragmentation. Notice that frames are allocated as units. If the memory requirements of a process do not happen to fall on page boundaries, the last frame allocated may not be completely full. For example, if pages are 2,048 bytes, a process of 72,766 bytes would need 35 pages plus 1,086 bytes. It would be allocated 36 frames, resulting in an internal fragmentation of 2,048 – 1,086 = 962 bytes. In the worst case, a process would need n pages plus one byte. It would be allocated n + 1 frames, resulting in an internal fragmentation of almost an entire frame. If process size is independent of page size, we expect internal fragmentation to average one-half page per process. This consideration suggests that small page sizes are desirable. However, there is quite a bit of overhead involved in each page-table entry, and this overhead is reduced as the size of the page increases. Also, disk I/O is more efficient when the amount of data being transferred is larger. Generally, page sizes have grown over time as processes, data sets, and main memory have become larger. Today, pages typically are either 2 or 4 kilobytes.

When a process arrives in the system to be executed, its size, expressed in pages, is examined. Each page of the process needs one frame. Thus, if the process requires n pages, there must be at least n frames available in memory. If there are n frames available, they are allocated to this arriving process. The first page of the process is loaded into one of the allocated frames, and the frame number is put in the page table for this process. The next page is loaded into another frame, and its frame number is put into the page table, and so on (Figure 12.4).

An important aspect of paging is the clear separation between the user's view of memory and the actual physical memory. The user program views memory as one single contiguous space, containing only this one program. In fact, the user program is scattered throughout physical memory, which also holds other programs. The difference between the user's view of memory and the actual physical memory is reconciled by the address-translation hardware. The logical addresses are translated into physical addresses. This mapping is hidden from the user and is controlled by the operating system. Notice that the user process by definition is
unable to access memory it does not own. It has no way of addressing memory outside of its page table, and the table includes only those pages that the process owns.

Figure 12.4: Free Frames (a) Before Allocation, and (b) After Allocation

Since the operating system is managing physical memory, it must be aware of the allocation details of physical memory: which frames are allocated, which frames are available, how many total frames there are, and so on. This information is generally kept in a data structure called a frame table. The frame table has one entry for each physical page frame, indicating whether the frame is free or allocated and, if it is allocated, to which page of which process or processes.

In addition, the operating system must be aware that user processes operate in user space, and all logical addresses must be mapped to produce physical addresses. If a user makes a system call (to do I/O, for example) and provides an address as a parameter (a buffer, for instance), that address must be mapped to produce the correct physical address. The operating system maintains a copy of the page table for each process, just as it maintains a copy of the instruction counter and register contents. This copy is used to translate logical addresses to physical addresses whenever the operating system must map a logical address to a physical address manually. It is also used by the CPU dispatcher to define the hardware
page table when a process is to be allocated the CPU. Paging, therefore, increases the context-switch time.

12.3 DEMAND PAGING

A demand-paging system is similar to the paging system discussed earlier, with the difference that it also uses swapping. Processes reside on secondary memory. When we want to execute a process, we swap it into memory. Rather than swapping the entire process into memory, however, we use a lazy swapper, which swaps a page into memory only when that page is needed. Since we are now viewing a process as a sequence of pages, rather than one large contiguous address space, the use of the term swap is not technically correct. A swapper manipulates entire processes, whereas a pager is concerned with the individual pages of a process. We shall thus use the term pager, rather than swapper, in connection with demand paging.

When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

With this scheme, we need some form of hardware support to distinguish between those pages that are in memory and those pages that are on the disk. The valid-invalid bit scheme described earlier can be used for this purpose. This time, however, when this bit is set to "valid", this value indicates that the associated page is both legal and in memory. If the bit is set to "invalid", this value indicates that the page either is not valid, or is valid but is currently on the disk rather than in memory. The page-table entry for a page that is not currently in memory is simply marked invalid, or contains the address of the page on disk.
Figure 12.5: Transfer of a Paged Memory to Contiguous Disk Space

Notice that marking a page invalid will have no effect if the process never attempts to access that page. Hence, if we guess right and only those pages that are actually needed are brought into memory, the process will run exactly as though we had brought in all pages. While the process executes and accesses pages that are memory resident, execution proceeds normally. But what happens if the process tries to use a page that was not brought into memory? Access to a page marked invalid causes a page-fault trap. The paging hardware, in translating the address through the page table, will notice that the invalid bit is set, causing a trap to the operating system. This trap is the result of the operating system's failure to bring the desired page into memory, rather than an invalid address error caused by an attempt to use an illegal memory address. We must, therefore, correct this oversight.
Figure 12.6: Page Table when some Pages are not in Main Memory

We check an internal table (usually kept with the process control block) for the process under consideration, to determine whether the reference was a valid or an invalid memory access. If the reference was invalid, we terminate the process. If it was valid, but we have not yet brought that page into memory, we now page it in. We find a free frame (by taking one from the free-frame list, for example). We schedule a disk operation to read the desired page into the newly allocated frame. When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory. We restart the instruction that was interrupted by the illegal address trap. The process can now access the page as though it had always been in memory.
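The servicing sequence just described can be condensed into a short sketch. This is a schematic Python illustration only: the free-frame list, the per-process valid-page set and read_page_from_disk are hypothetical stand-ins for the kernel structures and disk I/O the text refers to, not any real system's API.

```python
free_frames = [3, 9, 12]            # free-frame list (illustrative contents)
page_table = {}                     # page -> frame, for the faulting process
valid_pages = {0, 1, 2, 5, 6}       # pages legal for this process (from its PCB)

def read_page_from_disk(page, frame):
    """Hypothetical stand-in for scheduling and completing the disk read."""
    pass

def handle_page_fault(page):
    # 1. Check the internal table: was the reference legal at all?
    if page not in valid_pages:
        raise MemoryError("invalid reference: terminate the process")
    # 2. Find a free frame (a replacement algorithm would run if none is free).
    frame = free_frames.pop()
    # 3. Schedule a disk operation to read the desired page into that frame.
    read_page_from_disk(page, frame)
    # 4. Correct the page table: the page is now valid and in memory.
    page_table[page] = frame
    # 5. The interrupted instruction is then restarted by the dispatcher.
    return frame
```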
Figure 12.7: Steps in Handling a Page Fault

It is important to realize that, because we save the state (registers, condition code, instruction counter) of the interrupted process when the page fault occurs, we can restart the process in exactly the same place and state, except that the desired page is now in memory and is accessible. In this way, we are able to execute a process, even though portions of it are not (yet) in memory. When the process tries to access locations that are not in memory, the hardware traps to the operating system; this trap is known as a "page fault". The operating system reads the desired page into memory and restarts the process as though the page had always been in memory.

In the extreme case, we could start executing a process with no pages in memory. When the operating system sets the instruction pointer to the first instruction of the process, the process would immediately fault for that page. After this page was brought into memory in response to the page fault, the process would continue to execute, faulting as necessary until every page that it needed was actually in memory. At that point, it could execute with no more faults. This scheme is pure demand paging: never bring a page into memory until it is required.

Theoretically, some programs may access several new pages of memory with each instruction execution (one page for the instruction and many for data), possibly causing multiple page faults per instruction. This situation would result in unacceptable system performance
because of the overheads involved. Fortunately, analyses of running processes show that this behaviour is exceedingly unlikely. Programs tend to have locality of reference, which results in reasonable performance from demand paging. The hardware to support demand paging is the same as the hardware for paging and swapping:

1. Page Table: This table has the ability to mark an entry invalid through a valid-invalid bit or a special value of protection bits.

2. Secondary Memory: This memory holds those pages that are not present in main memory. The secondary memory is usually a high-speed disk. It is known as the swap device, and the section of disk used for this purpose is known as swap space or backing store.

In addition to this hardware support, considerable software is needed, as we shall see. Some additional architectural constraints must be imposed. A crucial issue is the need to be able to restart any instruction after a page fault. In most cases, this requirement is easy to meet. A page fault could occur at any memory reference. If the page fault occurs on the instruction fetch, we can restart by fetching the instruction again. If a page fault occurs while we are fetching an operand (not an instruction), then we must re-fetch the instruction, decode it again, and then fetch the operand.

As a worst-case scenario, consider a three-address instruction such as ADD the content of A to B, placing the result in C. The steps to execute this instruction would be:

1. Fetch and decode the instruction (ADD).
2. Fetch A.
3. Fetch B.
4. Add A and B.
5. Store the sum in C.

If we faulted when we tried to store in C (because C is in a page not currently in memory), we would have to bring in the desired page, correct the page table, and restart the instruction. The restart would require fetching the instruction again, decoding it again, fetching the two
operands again, and then adding again. However, there is really not much repeated work (less than one complete instruction), and the repetition is necessary only when a page fault occurs.

The major difficulty occurs when one instruction may modify several different locations. For example, consider the IBM System 360/370 MVC (move character) instruction, which can move up to 256 bytes from one location to another (possibly overlapping) location. If either block (source or destination) straddles a page boundary, a page fault might occur after the move is partially done. In addition, if the source and destination blocks overlap, the source block may have been modified, in which case we cannot simply restart the instruction. This problem can be solved in two different ways. In one solution, the microcode computes and attempts to access both ends of both blocks. If a page fault is going to occur, it will happen at this step, before anything is modified. The move can then take place, as we know that no page fault can occur, since all the relevant pages are in memory. The other solution uses temporary registers to hold the values of overwritten locations. If there is a page fault, all the old values are written back into memory before the trap occurs. This action restores memory to its state before the instruction was started, so that the instruction can be repeated.

A similar architectural problem occurs in machines that use special addressing modes, including auto-decrement and auto-increment modes (for example, the PDP-11). These addressing modes use a register as a pointer. Auto-decrement automatically decrements the register before using its contents as the operand address; auto-increment automatically increments the register after using its contents as the operand address. Thus, the instruction

MOV (R2)+, –(R3)

copies the contents of the location pointed to by register 2 into the location pointed to by register 3. Register 2 is incremented (by 2 for a word, since the PDP-11 is a byte-addressable computer) after it is used as a pointer; register 3 is decremented (by 2) before it is used as a pointer. Now consider what will happen if we get a fault when trying to store into the location pointed to by register 3. To restart the instruction, we must reset the two registers to the values they had before we started the execution of the instruction. One solution is to create a new special status register to record the register number and the amount modified for any register; this allows the operating system to "undo" the effects of a partially executed instruction that causes a page fault.
These are by no means the only architectural problems resulting from adding paging to an existing architecture to allow demand paging, but they illustrate some of the difficulties. Paging is added between the CPU and the memory in a computer system. It should be entirely transparent to the user process. Thus, people often assume that paging could be added to any system. Although this assumption is true for a non-demand-paging environment, where a page fault represents a fatal error, it is not true for demand paging, where a page fault means only that an additional page must be brought into memory and the process restarted.

Performance of Demand Paging

Demand paging can have a significant effect on the performance of a computer system. To see why, let us compute the effective access time for a demand-paged memory. The memory access time, ma, for most computer systems now ranges from 10 to 200 nanoseconds. As long as we have no page faults, the effective access time is equal to the memory access time. If, however, a page fault occurs, we must first read the relevant page from disk, and then access the desired word. Let p be the probability of a page fault (0 <= p <= 1). We would expect p to be close to zero; that is, there will be only a few page faults. The effective access time is then

effective access time = (1 – p) * ma + p * page-fault time.

To compute the effective access time, we must know how much time is needed to service a page fault. A page fault causes the following sequence to occur:

1. Trap to the operating system.
2. Save the user registers and process state on the memory stack of the process.
3. Determine that the interrupt was a page fault.
4. Check that the page reference was legal and determine the location of the page on the disk.
5. Issue a read from the disk to a free frame:
   a. Wait in a queue for the device until the read request is serviced.
   b. Wait for the device seek and/or latency time.
   c. Begin the transfer of the page to a free frame.
6. While waiting, allocate the CPU to some other user (CPU scheduling; optional).
7. Interrupt from the disk (I/O completed).
8. Save the registers and process state for the other user (if step 6 executed).
9. Determine that the interrupt was from the disk.
10. Correct the page table and other tables to show that the desired page is now in memory.
11. Wait for the CPU to be allocated to this process again.
12. Restore the user registers, process state, and new page table, and then resume the interrupted instruction.

Not all of these steps may be necessary in every case. For example, we are assuming that, in step 6, the CPU is allocated to another process while the I/O occurs. This arrangement allows multi-programming to maintain CPU utilization, but requires additional time to resume the page-fault service routine when the I/O transfer is complete. In any case, we are faced with three major components of the page-fault service time:

1. Service the page-fault interrupt.
2. Read in the page.
3. Restart the process.

The first and third tasks may be reduced, with careful coding, to several hundred instructions. These tasks may take from 1 to 100 microseconds each. The page-switch time, on the other hand, will probably be close to 24 milliseconds. A typical hard disk has an average latency of 8 milliseconds, a seek time of 15 milliseconds, and a transfer time of 1 millisecond. Thus, the total paging time would be close to 25 milliseconds, including hardware and software time. Remember also that we are looking at only the device service time. If a queue of processes is waiting for the device (other processes that have caused page faults), we have to add device queuing time as we wait for the paging device to be free to service our request, increasing the time to swap even more.

If we take an average page-fault service time of 25 milliseconds and a memory access time of 100 nanoseconds, then the effective access time in nanoseconds is

effective access time = (1 – p) * 100 + p * 25 milliseconds
= (1 – p) * 100 + p * 25,000,000
= 100 – 100p + 25,000,000p
= 100 + 24,999,900p

We see then that the effective access time is directly proportional to the page-fault rate (or probability). If one access out of 1,000 causes a page fault, the effective access time is about 25 microseconds. The computer would be slowed down by a factor of 250 because of demand paging. If we want less than 10 per cent degradation, we need

110 > 100 + 25,000,000 * p
10 > 25,000,000 * p
p < 0.0000004

That is, to keep the slowdown due to paging at a reasonable level, we can allow fewer than 1 memory access out of 2,500,000 to page-fault. As is obvious, it is important to keep the page-fault rate low in a demand-paging system. Otherwise, the effective access time increases, slowing process execution dramatically. (A numeric check of this arithmetic is sketched below.)

One additional aspect of demand paging is the handling and overall use of swap space. Disk I/O to swap space is generally faster than that to the file system. It is faster because swap space is allocated in much larger blocks, and file lookups and indirect allocation methods are not used. It is, therefore, possible for the system to gain better paging throughput by copying an entire file image into the swap space at process start-up, and then performing demand paging from the swap space. Systems with limited swap space can employ a different scheme when binary files are used. Demand pages for such files are brought directly from the file system. However, when page replacement is called for, these frames can simply be overwritten (because they were never modified), and the pages can be read in from the file system again if needed. Yet another option is initially to demand pages from the file system, but to write the pages to swap space as they are replaced. This approach will ensure that only needed pages are ever read from the file system, but all subsequent paging is done from swap space. This method appears to be a good compromise. This scheme is used in BSD UNIX.
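The arithmetic above is easy to verify numerically. The short Python sketch below (purely illustrative) evaluates the effective-access-time formula for the figures used in the text (a 100-nanosecond memory access and a 25-millisecond fault service time), recovering both the factor-of-250 slowdown and the fault-rate bound for 10 per cent degradation.

```python
MA = 100                    # memory access time, in nanoseconds
FAULT_TIME = 25_000_000     # page-fault service time: 25 ms in nanoseconds

def effective_access_time(p):
    """effective access time = (1 - p) * ma + p * page-fault time."""
    return (1 - p) * MA + p * FAULT_TIME

print(effective_access_time(1 / 1000))   # ~25,100 ns: a slowdown of ~250x
# Less than 10% degradation requires 110 > 100 + 24,999,900 * p:
print(10 / 24_999_900)                   # ~4e-07: under 1 fault per 2.5M accesses
```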
12.4 DESIGN AND IMPLEMENTATION ISSUES

Load Control

Consider the case where all processes want more memory. In such a case, the operating system would need to remove one or more processes from memory. Load control is deployed to achieve this target, i.e., to remove one or more processes from memory in order to fulfil the memory requirement of another process.

Load Control using the Working Set and Page Fault Frequency Algorithms

This approach involves determining the number of processes that are resident in main memory, i.e., the multiprogramming level. The Working Set and Page Fault Frequency algorithms implicitly perform load control, as they change the resident set after every clock cycle. The idea is to adjust the allocation so that the time between page faults equals the time required to process a page fault (a sketch of this rule appears after the lists below).

Load Control using the Clock Page Replacement Algorithm

Monitor the rate at which the pointer scans the buffer of frames. If the number of page faults is below a certain level, which results in fewer requests to advance the pointer, or if the average number of frames scanned by the pointer per request is small, then the number of processes in memory can be increased.

Reducing the number of processes in main memory:

- Remove the process with the lowest priority.
- Remove the process with the lowest probability of having a working set in main memory, i.e., the highest probability of a page fault.
- Remove the last activated process.
- Remove the process with the smallest resident working set.
- Remove the process with the largest remaining execution window.
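A minimal sketch of the page-fault-frequency rule mentioned above: grow a process's resident set when faults arrive too quickly, shrink it when they arrive too slowly. The thresholds and the structure are illustrative assumptions, not taken from any particular kernel.

```python
UPPER = 0.02    # faults per reference above which the process needs more frames
LOWER = 0.005   # rate below which frames can be reclaimed (both values assumed)

def adjust_resident_set(frames, faults, references):
    """Resize a process's frame allocation based on its recent fault rate."""
    rate = faults / max(references, 1)
    if rate > UPPER:
        return frames + 1            # thrashing risk: grant another frame
    if rate < LOWER and frames > 1:
        return frames - 1            # memory is underused: reclaim a frame
    return frames                    # fault rate is in the acceptable band
```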
Page Size

Page size affects implementation in a variety of ways, and choosing an optimal page size is very important.

General considerations — internal fragmentation:

- Smaller page size means lower internal fragmentation; larger page size means more internal fragmentation.
- Smaller page size also means a greater number of pages per process, and therefore larger page tables.

Correlation between page size and page-fault rate:

- Small page size -> a greater number of pages in memory -> fewer page faults, but a larger page table.
- Large page size -> fewer pages in memory -> a smaller page table, but more page faults.

Object-oriented techniques to reduce multiprogramming pressure:

- A greater number of modules with small-sized code.
- Deploy multithreaded applications.

Global vs. local allocation of pages:

- Local replacement policy: choose among the resident set of the process that generated the page fault.
- Global replacement policy: choose among all pages eligible for replacement in main memory.

Types of allocation:

- Fixed allocation, local scope: allocations cannot be too large or too small, as a poor initial choice cannot easily be repaired.
- Variable allocation, global scope: processes that fault a lot receive more frames; it is difficult to obtain a good replacement policy.
- Variable allocation, local scope: re-evaluate the allocation from time to time.

12.5 INVERTED PAGE TABLE

Most operating systems implement a separate page table for each process, i.e., for 'n' processes running on a multiprocessing/timesharing operating system, there are 'n' page tables stored in memory. Sometimes when a process is very large in
size and it occupies virtual memory, then with the size of the process, its page table size also increases substantially.

Example: A process of size 2 GB with:

Page size = 512 bytes
Size of page table entry = 4 bytes

Then:

Number of pages in the process = 2 GB / 512 B = 2^22
Page table size = 2^22 * 2^2 = 2^24 bytes = 16 MB

Through this example, it can be concluded that for multiple processes running simultaneously in an OS, a considerable part of memory is occupied by page tables alone. Operating systems also incorporate multilevel paging schemes, which further increase the space required for storing the page tables, and a large amount of memory is invested in storing them. The amount of memory occupied by the page tables can turn out to be a huge overhead, and this is always unacceptable, as main memory is a scarce resource. Various efforts are made to utilize the memory efficiently and to maintain a good balance between the level of multiprogramming and efficient CPU utilization.

An alternative approach is to use the Inverted Page Table structure, which consists of one page-table entry for every frame of main memory. So, the number of page-table entries in the inverted page table reduces to the number of frames in physical memory, and a single page table is used to represent the paging information of all the processes. Through the inverted page table, the overhead of storing an individual page table for every process is eliminated, and only a fixed portion of memory is required to store the paging information of all the processes together. This technique is called inverted paging, as the indexing is done with respect to the frame number instead of the logical page number. Each entry in the page table contains the following fields:

Page number – It specifies the page number range of the logical address.

Process id – An inverted page table contains the address-space information of all the processes in execution. Since two different processes can have a similar set of virtual addresses, it becomes necessary in an inverted page table to store a process id for each process, to identify its address space uniquely. This is done by using the combination of the process id and the page number. So, this process id acts as an address-space identifier and
ensures that a virtual page for a particular process is mapped correctly to the corresponding physical frame.

Control bits – These bits are used to store extra paging-related information. These include the valid bit, dirty bit, reference bits, and protection and locking information bits.

Chained pointer – It may sometimes be possible that two or more processes share a part of main memory. In this case, two or more logical pages map to the same page-table entry; a chaining pointer is then used to map the details of these logical pages to the root page table.

Working

Figure 12.8: Inverted page table

The virtual address generated by the CPU contains a process id, a page number and an offset, and each page-table entry contains a process id and page number along with the other relevant information required by the paging mechanism. When a memory reference takes place, this virtual address is matched by the memory-mapping unit: the inverted page table is searched for a matching entry. If the match is found at the i-th entry, then the frame number i, combined with the offset, is sent as the real (physical) address; otherwise, if no match is found, a page fault is generated, as the referenced page is not in main memory. (A numeric sketch of the table sizes involved, together with the lookup, follows.)

Note: Number of entries in the inverted page table = Number of frames in the physical address space (PAS)
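The sketch below first redoes the size arithmetic from the 2 GB example, then models the inverted-table lookup with a plain dictionary keyed by (process id, page number), a stand-in for the hashed structure real implementations use; all names and values are illustrative.

```python
# Per-process page table, as in the 2 GB example above
process_size = 2 * 2**30                 # 2 GB
page_size = 512                          # bytes
pte_size = 4                             # bytes per page-table entry
pages = process_size // page_size        # 2**22 pages
print(pages * pte_size)                  # 16777216 bytes = 2**24 = 16 MB per process

# Inverted page table: one entry per physical frame, shared by all processes
inverted = {(7, 0): 3, (7, 1): 5}        # (process id, page number) -> frame

def lookup(pid, page, offset, frame_size=512):
    """Return the physical address, or signal a page fault on a miss."""
    try:
        return inverted[(pid, page)] * frame_size + offset
    except KeyError:
        raise LookupError("no match: page fault, bring the page in from disk")

print(lookup(7, 1, 40))                  # frame 5 -> physical address 2600
```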
Examples – The inverted page table and its variations are implemented in various systems, such as the PowerPC, the UltraSPARC and the IA-64 architecture. An implementation of the Mach operating system on the RT-PC also uses this technique.

Advantages and Disadvantages:

Reduced memory space – Inverted page tables typically reduce the amount of memory required to store the page tables to a size bounded by physical memory. The maximum number of entries is the number of page frames in physical memory.

Longer lookup time – Inverted page tables are sorted in order of frame number, but the memory lookup takes place with respect to the virtual address, so it usually takes longer to find the appropriate entry. Often these page tables are implemented using hash data structures for a faster lookup.

Difficult shared-memory implementation – As the inverted page table stores a single entry for each frame, it becomes difficult to implement shared memory in the page tables. Chaining techniques are used to map more than one virtual address to the entry specified in order of frame number.

12.6 SUMMARY

In computer operating systems, memory paging is a memory management scheme by which a computer stores and retrieves data from secondary storage for use in main memory. In this scheme, the operating system retrieves data from secondary storage in same-size blocks called pages.

According to the concept of virtual memory, in order to execute a process, only a part of the process needs to be present in main memory, which means that only a few pages will be present in main memory at any time. However, deciding which pages need to be kept in main memory and which need to be kept in secondary memory is difficult, because we cannot say in advance which page a process will require at a particular time. To overcome this problem, a concept called demand paging is introduced. It suggests keeping all pages of the frames in secondary memory until they are
required. In other words, it says: do not load any page into main memory until it is required.

A page table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical addresses. Logical addresses are generated by the CPU for the pages of the processes, and therefore they are generally used by the processes. Physical addresses are the actual frame addresses of the memory. They are generally used by the hardware or, more specifically, by the RAM subsystems.

The inverted page table is the global page table maintained by the operating system for all the processes. In an inverted page table, the number of entries is equal to the number of frames in main memory. It can be used to overcome the drawbacks of the per-process page table, in which a space is always reserved for a page regardless of whether it is present in main memory or not; that reserved space is simply wasted if the page is not present.

12.7 KEYWORDS

Paging: Paging is a memory management scheme that eliminates the need for contiguous allocation of physical memory.

Physical Memory: It is broken into fixed-sized blocks called frames. Logical memory is also broken into blocks of the same size, called pages.

Logical Address: A logical address is the address at which an item appears to reside from the perspective of an executing application program.

Physical Address: A physical address is a memory address that is represented in the form of a binary number on the address bus circuitry, in order to enable the data bus to access a particular storage cell of main memory, or a register of a memory-mapped I/O device.

Segmentation: A memory management technique in which the memory is divided into variable-size parts. Each part is known as a segment, which can be allocated to a process.

Page Table: A page table is a data structure used by the virtual memory system to store the mapping between logical addresses and physical addresses.
12.8 LEARNING ACTIVITY

1. Under what circumstances do page faults occur? Describe the actions taken by the operating system when a page fault occurs.

___________________________________________________________________________
___________________________________________________________________________

12.9 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. What are pages and frames?
2. Define demand paging.
3. What are pages and frames?
4. What is pure demand paging?
5. List the need for an inverted page table.

Long Questions
1. List two differences between logical and physical addresses.
2. Explain the design issues of the page table.
3. Describe implementation issues in paging.
4. Explain the paging scheme for memory management and discuss the paging hardware.
5. Outline the concept of the inverted page table.

B. Multiple Choice Questions

1. Physical memory is broken into fixed-sized blocks called ________
a. frames
b. pages
c. files
d. blocks

2. Logical memory is broken into blocks of the same size called _________
a. frames
b. pages
c. backing store
d. blocks

3. Every address generated by the CPU is divided into two parts. They are ____________
a. frame bit & page number
b. page number & page offset
c. page offset & frame bit
d. frame offset & page offset

4. The __________ is used as an index into the page table.
a. frame bit
b. page number
c. page offset
d. frame offset

5. The _____ table contains the base address of each page in physical memory.
a. process
b. memory
c. page
d. frame

6. The size of a page is typically ____________
a. varied
b. power of 8
c. power of 4
d. power of 2

7. With paging there is no ________ fragmentation.
a. internal
b. external
c. either type of
d. None of these

8. Paging increases the ______ time.
a. waiting
b. execution
c. context-switch
d. All of these

9. Virtual memory is normally implemented by ________
a. demand paging
b. buses
c. virtualization
d. All of these

10. Inverted page tables are
a. Faster than regular page tables
b. Smaller than regular page tables
c. More flexible than regular page tables
d. Simple register

Answers
1-a, 2-b, 3-b, 4-b, 5-c, 6-d, 7-b, 8-c, 9-a, 10-b.

12.10 REFERENCES

A. Silberschatz, P.B. Galvin, G. Gagne (2002). Operating System Concepts, 6th Ed., Addison Wesley Publishing Co., Boston.

H.M. Deitel (1990). An Introduction to Operating Systems, Addison Wesley Publishing Co., Boston.

D.M. Dhamdhere (2002). Operating System. Tata McGraw Hill, New Delhi.

A.S. Tanenbaum (2000). Operating Systems: Design and Implementation, Prentice Hall of India, New Delhi.

Nutt (2005). Operating Systems, 3rd Edition, Pearson Education, Delhi.
UNIT - 13: PAGING 2

Structure
13.0 Learning Objectives
13.1 Introduction
13.2 Page Replacement Algorithms
13.3 Page Fault Handling
13.4 Working Set Model
13.5 Local vs. Global Allocation
13.6 Segmentation and Paging
13.7 Summary
13.8 Keywords
13.9 Learning Activity
13.10 Unit End Questions
13.11 References

13.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:

- Describe page replacement algorithms
- Discuss the methods for page fault handling
- Design a working set model
- Discuss local and global allocation schemes
- Outline segmentation

13.1 INTRODUCTION

In an operating system that uses paging for memory management, a page replacement algorithm is needed to decide which page should be replaced when a new page comes in.
Page Fault – A page fault happens when a running program accesses a memory page that is mapped into the virtual address space but not loaded in physical memory. Since actual physical memory is much smaller than virtual memory, page faults happen. In case of a page fault, the operating system might have to replace one of the existing pages with the newly needed page. Different page replacement algorithms suggest different ways to decide which page to replace. The target for all algorithms is to reduce the number of page faults.

13.2 PAGE REPLACEMENT ALGORITHMS

FIRST IN FIRST OUT (FIFO)

This is the simplest page replacement algorithm. In this algorithm, the operating system keeps track of all pages in the memory in a queue; the oldest page is at the front of the queue. When a page needs to be replaced, the page at the front of the queue is selected for removal.

Example-1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults.

Initially all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 page faults.
When 3 comes, it is already in memory so —> 0 page faults. Then 5 comes; it is not available in memory, so it replaces the oldest page slot, i.e., 1 —> 1 page fault. 6 comes; it is also not available in memory, so it replaces the oldest page slot, i.e., 3 —> 1 page fault. Finally, when 3 comes it is not available, so it replaces 0 —> 1 page fault. The total is 6 page faults.

Belady's anomaly – Belady's anomaly proves that it is possible to have more page faults when increasing the number of page frames while using the First In First Out (FIFO) page replacement algorithm. For example, if we consider the reference string 3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4 and 3 slots, we get 9 total page faults, but if we increase the slots to 4, we get 10 page faults.

OPTIMAL PAGE REPLACEMENT

In this algorithm, the page that will not be used for the longest duration of time in the future is replaced.

Example-2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults. 0 is already there so —> 0 page faults. When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future —> 1 page fault. 0 is already there so —> 0 page faults.
4 takes the place of 1 —> 1 page fault. For the rest of the reference string there are —> 0 page faults, because the pages are already available in memory. The total is 6 page faults.

Optimal page replacement is perfect, but not possible in practice, as the operating system cannot know future requests. The use of optimal page replacement is to set up a benchmark against which other replacement algorithms can be analysed.

LEAST RECENTLY USED

In this algorithm, the page that has been least recently used is replaced.

Example-3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2 with 4 page frames. Find the number of page faults.

Initially all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults. 0 is already there so —> 0 page faults. When 3 comes, it takes the place of 7 because 7 is the least recently used —> 1 page fault. 0 is already in memory so —> 0 page faults. 4 takes the place of 1 —> 1 page fault. For the rest of the reference string there are —> 0 page faults, because the pages are already available in memory. The total is 6 page faults.
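All three policies can be compared with one small simulator. The Python sketch below (the structure is ours, not from the text) counts page faults for FIFO, LRU and OPT, and reproduces the counts worked out in Examples 1, 2 and 3 and in the Belady's anomaly illustration. It can equally be applied to the comparison example that follows.

```python
def count_faults(refs, frames, policy):
    """Simulate a page replacement policy and return the number of page faults."""
    memory, faults = [], 0
    for i, page in enumerate(refs):
        if page in memory:
            if policy == "LRU":               # a hit refreshes recency
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) == frames:             # memory full: evict a victim
            if policy in ("FIFO", "LRU"):
                memory.pop(0)                 # oldest arrival / least recently used
            else:                             # OPT: evict the page whose next use
                future = refs[i + 1:]         # lies farthest ahead (or never comes)
                victim = max(memory, key=lambda q: future.index(q)
                             if q in future else len(future) + 1)
                memory.remove(victim)
        memory.append(page)
    return faults

print(count_faults([1, 3, 0, 3, 5, 6, 3], 3, "FIFO"))                      # 6
refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(count_faults(refs, 4, "OPT"), count_faults(refs, 4, "LRU"))          # 6 6
belady = [3, 2, 1, 0, 3, 2, 4, 3, 2, 1, 0, 4]
print(count_faults(belady, 3, "FIFO"), count_faults(belady, 4, "FIFO"))    # 9 10
```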
Example: Consider the reference string 4, 7, 6, 1, 7, 6, 1, 2, 7, 2. The number of frames in memory is 3. Find out the number of page faults with respect to:

1. Optimal Page Replacement Algorithm
2. FIFO Page Replacement Algorithm
3. LRU Page Replacement Algorithm

Solution:

OPTIMAL PAGE REPLACEMENT ALGORITHM
Number of Page Faults = 5

LRU PAGE REPLACEMENT ALGORITHM
Number of Page Faults in LRU = 6

FIFO PAGE REPLACEMENT ALGORITHM
Number of Page Faults in FIFO = 6

13.3 PAGE FAULT HANDLING

A page fault occurs when a program attempts to access data or code that is in its address space but is not currently located in system RAM. When a page fault occurs, the following sequence of events happens:

- The computer hardware traps to the kernel, and the program counter (PC) is saved on the stack. Current instruction state information is saved in CPU registers.
- An assembly routine is started to save the general registers and other volatile information, to keep the OS from destroying it.
- The operating system finds that a page fault has occurred and tries to find out which virtual page is needed. Sometimes a hardware register contains this required information. If not, the operating system must retrieve the PC, fetch the instruction and find out what it was doing when the fault occurred.
- Once the virtual address that caused the page fault is known, the system checks to see if the address is valid and checks that there is no protection access problem.
- If the virtual address is valid, the system checks to see if a page frame is free. If no frames are free, the page replacement algorithm is run to remove a page.
- If the frame selected is dirty, the page is scheduled for transfer to disk, a context switch takes place, the faulting process is suspended and another process is made to run until the disk transfer is completed.
- As soon as the page frame is clean, the operating system looks up the disk address where the needed page is, and schedules a disk operation to bring it in.
- When a disk interrupt indicates that the page has arrived, the page tables are updated to reflect its position, and the frame is marked as being in the normal state.
- The faulting instruction is backed up to the state it had when it began, and the PC is reset.
- The faulting process is scheduled, and the operating system returns to the routine that called it.
- The assembly routine reloads the registers and other state information, and returns to user space to continue execution.

Figure 13.1: Steps in handling a page fault

13.4 WORKING SET MODEL

Normally, if a thread takes a page fault and must wait for the page to be read from disk, the operating system runs a different thread while the I/O is occurring. Thus, page faults appear to be "free". But what happens if memory gets overcommitted? Suppose the pages being actively used by the current threads don't all fit in physical memory. Each page fault causes one of the active pages to be moved to disk, so another page fault will occur soon.
The system will spend all its time reading and writing pages, and won't get much work done. This situation is called thrashing; it was a serious problem in early demand-paging systems.

How to deal with thrashing?

- If a single process is too large for memory, there is nothing the OS can do. That process will simply thrash.
- If the problem arises because of the sum of several processes:
  - Figure out how much memory each process needs.
  - Change scheduling priorities to run processes in groups that fit comfortably in memory: must shed load.

Working sets, informal definition: the collection of pages a process is using actively, and which must thus be memory-resident to prevent this process from thrashing. If the sum of all working sets of all runnable threads exceeds the size of memory, then stop running some of the threads for a while.

Divide processes into two groups, active and inactive:

- When a process is active, its entire working set must always be in memory: never execute a thread whose working set is not resident.
- When a process becomes inactive, its working set can migrate to disk. Threads from inactive processes are never scheduled for execution.

The collection of active processes is called the balance set. The system must have a mechanism for gradually moving processes into and out of the balance set. As working sets change, the balance set must be adjusted.
How to compute working sets?

- Denning proposed a working set parameter T: all pages referenced in the last T seconds comprise the working set.
- Can extend the clock algorithm to keep an idle time for each page; pages with idle times less than T are in the working set.

Difficult questions for the working set approach:

- How long should T be (typically minutes)?
- How to handle changes in working sets?
- How to manage the balance set?
- How to account for memory shared between processes?

Page fault frequency: another approach to preventing thrashing.

- Per-process replacement: at any given time, each process is allocated a fixed number of physical page frames.
- Monitor the rate at which page faults are occurring for each process.
- If the rate gets too high for a process, assume that its memory is overcommitted; increase the size of its memory pool.
- If the rate gets too low for a process, assume that its memory pool can be reduced in size.
- If the sum of all memory pools doesn't fit in memory, deactivate some processes.

In practice, today's operating systems don't worry much about thrashing. With personal computers, users can notice thrashing and handle it themselves: typically, just buy more memory; or, manage the balance set by hand.
Thrashing was a bigger issue for timesharing machines with dozens or hundreds of users: why should I stop my processes just so you can make progress? The system had to handle thrashing automatically. Technology changes make it unreasonable to operate machines in a range where memory is even slightly overcommitted; it is better to just buy more memory.

Working-Set Model

The working set model is based on the concept of locality, and defines a working set window of length delta. Whatever pages are included in the most recent delta page references are said to be in the process's working set window, and comprise its current working set. The selection of delta is critical to the success of the working set model: if it is too small, then it does not encompass all of the pages of the current locality, and if it is too large, then it encompasses pages that are no longer being frequently accessed.

The total demand, D, is the sum of the sizes of the working sets for all processes. If D exceeds the total number of available frames, then at least one process is thrashing, because there are not enough frames available to satisfy its minimum working set. If D is significantly less than the currently available frames, then additional processes can be launched.

The hard part of the working-set model is keeping track of what pages are in the current working set, since every reference adds one to the set and removes one older page. An approximation can be made using reference bits and a timer that goes off after a set interval of memory references. For example, suppose that we set the timer to go off after every 5,000 references (by any process), and we can store two additional historical reference bits in addition to the current reference bit. Every time the timer goes off, the current reference bit is copied to one of the two historical bits, and then cleared. If any of the three bits is set, then that page was referenced within the last 15,000 references, and is considered to be in that process's reference set. Finer resolution can be achieved with more historical bits and a more frequent timer, at the expense of greater overhead.
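The working-set definition can be computed directly from a reference trace. The sketch below (illustrative Python; the window is expressed in references rather than seconds, i.e., it plays the role of delta above) returns the working set at a reference time t, showing how the set tracks the program's current locality.

```python
def working_set(refs, delta, t):
    """Pages touched in the window of the last `delta` references ending at t."""
    return set(refs[max(0, t - delta + 1): t + 1])

trace = [1, 2, 1, 3, 2, 1, 4, 4, 4, 5, 5, 5]   # an illustrative reference trace
print(working_set(trace, 4, 5))    # {1, 2, 3}: the early locality
print(working_set(trace, 4, 11))   # {4, 5}: the locality has shifted
```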
For example, suppose that we set the timer to go off after every 5000 references (by any process), and we can store two additional historical reference bits in addition to the current reference bit. Every time the timer goes off, the current reference bit is copied to one of the two historical bits, and then cleared. If any of the three bits is set, then that page was referenced within the last 15,000 references, and is considered to be in that processes reference set. Finer resolution can be achieved with more historical bits and a more frequent timer, at the expense of greater overhead. 13.5 LOCAL VS. GLOBAL ALLOCATION An important aspect of operating systems, virtual memory is implemented using demand paging. Demand paging necessitates the development of a page-replacement algorithm and a frame allocation algorithm. Frame allocation algorithms are used if you have multiple processes; it helps decide how many frames to allocate to each process. There are various constraints to the strategies for the allocation of frames: You cannot allocate more than the total number of available frames. At least a minimum number of frames should be allocated to each process. This constraint is supported by two reasons. The first reason is, as a smaller number of frames are allocated, there is an increase in the page fault ratio, decreasing the performance of the execution of the process. Secondly, there should be enough frames to hold all the different pages that any single instruction can reference. The number of frames allocated to a process can also dynamically change depending on whether you have used global replacement or local replacement for replacing pages in case of a page fault. 1. Local replacement: When a process needs a page, which is not in the memory, it can bring in the new page and allocate it a frame from its own set of allocated frames only. Advantage: The pages in memory for a particular process and the page fault ratio is affected by the paging behaviour of only that process. 284 CU IDOL SELF LEARNING MATERIAL (SLM)
Disadvantage: A low priority process may hinder a high priority process by not making its frames available to the high priority process. 2. Global replacement: When a process needs a page, which is not in the memory, it can bring in the new page and allocate it a frame from the set of all frames, even if that frame is currently allocated to some other process; that is, one process can take a frame from another. Advantage: Does not hinder the performance of processes and hence results in greater system throughput. Disadvantage: The page fault ratio of a process cannot be solely controlled by the process itself. The pages in memory for a process depends on the paging behaviour of other processes as well. 13.6 SEGMENTATION AND PAGING Page Size There are quite a few trade-offs of small versus large page sizes: Small pages waste less memory due to internal fragmentation. Large pages require smaller page tables. For disk access, the latency and seek times greatly outweigh the actual data transfer times. This makes it much faster to transfer one large page of data than two or more smaller pages containing the same amount of data. Smaller pages match locality better, because we are not bringing in data that is not really needed. Small pages generate more page faults, with attending overhead. The physical hardware may also play a part in determining page size. It is hard to determine an \"optimal\" page size for any given system. Current norms range from 4K to 4M, and tend towards larger page sizes as time passes. Paging The CPU cannot directly access storage disk, so the MMU emulates memory by mapping pages to frames that are in RAM. 285 CU IDOL SELF LEARNING MATERIAL (SLM)
Before we launch into a more detailed explanation of pages and frames, let’s define some technical terms. Page: A fixed-length contiguous block of virtual memory residing on disk. Frame: A fixed-length contiguous block located in RAM; whose sizing is identical to pages. Physical memory: The computer’s random-access memory (RAM), typically contained in DIMM cards attached to the computer’s motherboard. Virtual memory: Virtual memory is a portion of an HDD or SSD that is reserved to emulate RAM. The MMU serves up virtual memory from disk to the CPU to reduce the workload on physical memory. Virtual address: The CPU generates a virtual address for each active process. The MMU maps the virtual address to a physical location in RAM and passes the address to the bus. A virtual address space is the range of virtual addresses under CPU control. Physical address: The physical address is a location in RAM. The physical address space is the set of all physical addresses corresponding to the CPU’s virtual addresses. A physical address space is the range of physical addresses under MMU control. 286 CU IDOL SELF LEARNING MATERIAL (SLM)
Figure 13.2: Paging hardware By assigning an address to a piece of data using a “page table” between the CPU and the computer’s physical memory, a computer’s MMU enables the system to retrieve that data whenever needed. The Paging Process A page table stores the definition of each page. When an active process requests data, the MMU retrieves corresponding pages into frames located in physical memory for faster processing. The process is called paging. The MMU uses page tables to translate virtual addresses to physical ones. Each table entry indicates where a page is located: in RAM or on disk as virtual memory. Tables may have a single or multi-level page table such as different tables for applications and segments. However, constant table lookups can slow down the MMU. A memory cache called the Translation Lookaside Buffer (TLB) stores recent translations of virtual to physical addresses for rapid retrieval. Many systems have multiple TLBs, which may reside at different locations, including between the CPU and RAM, or between multiple page table levels. 287 CU IDOL SELF LEARNING MATERIAL (SLM)
Different frame sizes are available for data sets with larger or smaller pages and matching- sized frames. 4KB to 2MB are common sizes, and GB-sized frames are available in high- performance servers. An issue called hidden fragmentation used to be a problem in older Windows deployments (95, 98, and Me). The problem was internal (or hidden) fragmentation. Unlike the serious external fragmentation of segmenting, internal fragmentation occurred if every frame is not the exact size of the page size. However, this is not an issue in modern Windows OS Segmentation The process known as segmentation is a virtual process that creates address spaces of various sizes in a computer system, called segments. Each segment is a different virtual address space that directly corresponds to process objects. When a process executes, segmentation assigns related data into segments for faster processing. The segmentation function maintains a segment table that includes physical addresses of the segment, size, and other data. Figure 13.3: Segmentation hardware Segmentation speeds up a computer’s information retrieval by assigning related data into a “segment table” between the CPU and the physical memory. 288 CU IDOL SELF LEARNING MATERIAL (SLM)
The Segmentation Process

Each segment stores the process's primary function, data structures, and utilities. The CPU keeps a segment map table for every process and its memory blocks, along with segment identification and memory locations.

The CPU generates virtual addresses for running processes. Segmentation translates the CPU-generated virtual addresses into physical addresses that refer to a unique physical memory location. The translation is not strictly one-to-one: different virtual addresses can map to the same physical address.

The Challenge of Fragmentation

Although segmentation is a high-speed and highly secure memory management function, external fragmentation proved to be an insurmountable challenge. Segmentation causes external fragmentation to the point that modern x86-64 servers treat it as a legacy feature, and only support it for backwards compatibility.

External fragmentation occurs when unusable memory is located outside of allocated memory blocks. The issue is that the system may have enough memory to satisfy a process request, but the available memory is not in a contiguous location. In time, the fragmentation worsens and significantly slows the segmentation process.
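External fragmentation is easy to demonstrate with a toy first-fit segment allocator: after allocations and frees leave scattered holes, total free memory can exceed a request that still cannot be satisfied, because no single hole is contiguous enough. Everything in the sketch (hole layout, sizes, function names) is illustrative.

```python
# Free holes in a 100-unit memory, as (start, size) pairs (illustrative)
holes = [(0, 30), (50, 25)]   # 55 units free in total, but split into two holes

def first_fit(size):
    """Allocate `size` contiguous units from the first hole large enough."""
    for i, (start, length) in enumerate(holes):
        if length >= size:
            holes[i] = (start + size, length - size)
            return start
    return None   # enough memory overall, yet no contiguous hole: fragmentation

print(first_fit(40))   # None: 55 units are free, but a 40-unit segment fails
print(first_fit(20))   # 0: the 30-unit hole at address 0 satisfies this request
```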
Figure 13.4: Example of segmentation

Segmented Paging

Some modern computers use a function called segmented paging. Main memory is divided into variably-sized segments, which are then divided into smaller fixed-size pages on disk. Each segment contains a page table, and there are multiple page tables per process.

Each of the tables contains information on every page of the segment, while the segment table has information about every segment. Segment tables are mapped to page tables, and page tables are mapped to individual pages within a segment.

Advantages include less memory usage, more flexibility on page sizes, simplified memory allocation, and an additional level of data access security over paging. The process does not cause external fragmentation.

Paging and Segmentation: Advantages and Disadvantages
Paging and Segmentation: Advantages and Disadvantages

Paging Advantages
- At the programmer level, paging is transparent and requires no intervention.
- No external fragmentation.
- No internal fragmentation on updated operating systems.
- Frames do not have to be contiguous.

Paging Disadvantages
- Paging causes internal fragmentation on older systems.
- Memory lookups take longer than with segmentation; TLB memory caches remedy this.

Segmentation Advantages
- No internal fragmentation.
- Segment tables consume less space than page tables, so they take up less memory.
- Average segment sizes are larger than most page sizes, which lets a segment hold more process data.
- Less processing overhead.
- It is simpler to relocate segments than to relocate contiguous address spaces on disk.

Segmentation Disadvantages
- Treated as legacy technology on x86-64 servers.
- Linux supports segmentation only on 80x86 microprocessors; its designers note that paging simplifies memory management by letting every process use the same set of linear addresses.
- Porting Linux to other architectures is problematic because of this limited use of segmentation.
- Requires programmer intervention.
- Subject to serious external fragmentation.

Key Differences: Paging and Segmentation

Size:
Paging: Pages and frames have a fixed block size, determined by the computer hardware.
Segmentation: Segments are variable in size and user-specified.

Fragmentation:
Paging: Older systems suffered internal fragmentation when a process did not fill its last allocated page; modern operating systems no longer have this problem.
Segmentation: Segmentation leads to external fragmentation.

Tables:
Paging: Page tables direct the MMU to each page's location and status. Lookups are slower than with segment tables, but a TLB memory cache accelerates them.
Segmentation: Segment tables hold each segment's identifier and related information, and lookups are faster than direct page-table lookups.

Availability:
Paging: Widely available, on CPUs and as MMU chips.
Segmentation: Windows servers may support it for backwards compatibility; Linux has very limited support.

13.7 SUMMARY

Page-replacement algorithms decide which page must be swapped out of main memory to create room for an incoming page. The Optimal page-replacement algorithm has the lowest page-fault rate of all page-replacement algorithms and never suffers from Belady's anomaly. Such an algorithm does exist and has been called the OPT or MIN page-replacement algorithm; it is simply this: replace the page that will not be used for the longest period of time. A page fault occurs when a program attempts to access data or code that is in its address space but is not currently located in system RAM. Page size refers to the size of a page, a block of stored memory; it affects the amount of memory needed and the space used when running programs. Most operating systems determine the page size when a program begins running, which allows them to calculate the most efficient use of memory while running that program.
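The fault counts these algorithms produce can be checked by direct simulation. The sketch below counts page faults for FIFO and LRU on an invented reference string with an assumed number of frames; both the string and the frame counts are illustrative.

```python
# Count page faults for FIFO and LRU on a page reference string.
# The reference string and frame counts below are illustrative only.
def count_faults(refs, frames, policy):
    memory, faults = [], 0
    for page in refs:
        if page in memory:
            if policy == "LRU":          # a hit refreshes recency under LRU
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1                      # page fault: page not resident
        if len(memory) == frames:
            memory.pop(0)                # front = oldest (FIFO) or LRU page
        memory.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for n in (3, 4):
    print(n, "frames:", {p: count_faults(refs, n, p) for p in ("FIFO", "LRU")})
```

On this particular string, FIFO incurs 9 faults with three frames but 10 with four, so adding a frame makes it worse: this is precisely Belady's anomaly, which LRU (10 faults with three frames, 8 with four) does not exhibit.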
13.8 KEYWORDS

Operating System: A set of complex programs that makes a computer operational by providing an environment in which users can use the power of the computer.
Optimal Algorithm: Replace the page that will not be used for the longest period of time.
LRU (Least Recently Used): Replace the page that has not been used for the longest period of time.
FIFO (First In First Out): A replacement algorithm that associates with each page the time when that page was brought into memory and replaces the oldest page first.
Virtual Address: The CPU generates a virtual address for each active process; the MMU maps it to a physical location in RAM and passes the address to the bus. A virtual address space is the range of virtual addresses under CPU control.
Belady's Anomaly: The phenomenon where increasing the number of page frames results in an increase in the number of page faults for a given memory access pattern.

13.9 LEARNING ACTIVITY

1. Consider the following page-replacement algorithms. Rank them on a five-point scale from "bad" to "perfect" according to their page-fault rate, and separate those that suffer from Belady's anomaly from those that do not.
a. LRU replacement
b. FIFO replacement
c. Optimal replacement
d. Second-chance replacement
___________________________________________________________________________

13.10 UNIT END QUESTIONS

A. Descriptive Questions

Short Questions
1. How is memory protected in a paged environment?
2. What is internal fragmentation?
3. What is the basic method of segmentation?
4. What is the basic approach of page replacement?
5. List the various page replacement algorithms used.

Long Questions
1. When do page faults occur? Describe the actions taken by the operating system during a page fault.
2. Why are segmentation and paging sometimes combined into one scheme?
3. Discuss a situation in which the most-frequently-used page replacement algorithm generates fewer page faults than the least-frequently-used algorithm. Also discuss the circumstances under which the opposite holds.
4. Consider the following page reference string: 1, 2, 3, 4, 2, 1, 5, 6, 2, 1, 2, 3, 7, 6, 3, 2, 1, 2, 3, 6. Determine the number of page faults that would occur for the following replacement algorithms, assuming one, two, three, four, five, six, or seven frames. Remember that all frames are initially empty, so your first unique pages each cost one fault.
a) LRU replacement
b) FIFO replacement
c) Optimal replacement
5. Differentiate between local and global page replacement algorithms.

B. Multiple Choice Questions

1. The working set model for page replacement is based on the assumption of:
a. modularity
b. locality
c. globalization
d. random access

2. A process is thrashing if:
a. it is spending more time paging than executing
b. it is spending less time paging than executing
c. page fault occurs
d. swapping cannot take place

3. Which algorithm chooses the page that has not been used for the longest period of time whenever a page must be replaced?
a. first in first out algorithm
b. additional reference bit algorithm
c. least recently used algorithm
d. counting based page replacement algorithm

4. In the FIFO page replacement algorithm, when a page must be replaced:
a. the oldest page is chosen
b. the newest page is chosen
c. a random page is chosen
d. None of these

5. Effective access time is directly proportional to:
a. page-fault rate
b. hit ratio
c. memory access time
d. None of these

6. The memory management technique in which the system stores and retrieves data from secondary storage for use in main memory is called:
a. fragmentation
b. paging
c. mapping
d. None of these

7. Using transient code _______ the size of the operating system during program execution.
a. increases
b. decreases
c. changes
d. maintains

8. In segmentation, each address is specified by ____________
a. a segment number & offset
b. an offset & value
c. a value & segment number
d. a key & value

9. In paging, the user provides only ________, which is partitioned by the hardware into ________ and ______.
a. one address, page number, offset
b. one offset, page number, address
c. page number, offset, address
d. None of these

10. Each entry in a segment table has a ____________
a. segment base
b. segment peak
c. segment value
d. None of these

Answers
1 b, 2 a, 3 c, 4 a, 5 a, 6 b, 7 c, 8 a, 9 a, 10 a.

13.11 REFERENCES

A. Silberschatz, P.B. Galvin, G. Gagne (2002). Operating System Concepts, 6th Ed., Addison-Wesley Publishing Co., Boston.
H.M. Deitel (1990). An Introduction to Operating Systems, Addison-Wesley Publishing Co., Boston.
D.M. Dhamdhere (2002). Operating Systems. Tata McGraw Hill, New Delhi.
A.S. Tanenbaum (2000). Operating Systems: Design and Implementation, Prentice Hall of India, New Delhi.
G. Nutt (2005). Operating Systems, 3rd Edition, Pearson Education, Delhi.
UNIT - 14: PHYSICAL FILE SYSTEM

Structure
14.0 Learning Objectives
14.1 Introduction
14.2 File System Organization
14.2.1 File System Time Sharing Operating System
14.3 File Operations
14.3.1 Symbolic Link
14.4 Access Methods
14.4.1 Sequential Access
14.4.2 Direct Access
14.4.3 Other Access Methods
14.5 Consistency Semantics
14.6 Directory Structure Organization
14.6.1 Single-level Directory
14.6.2 Two-level Directory
14.6.3 Tree-structured Directories
14.6.4 Acyclic-graph Directories
14.6.5 General Graph Directory
14.7 File Protection
14.7.1 Protection in Computer System
14.7.2 Access-Matrix Model of Protection
14.7.3 Access Hierarchies
14.7.4 Access Lists
14.8 Summary
14.9 Keywords
14.10 Learning Activity
14.11 Unit End Questions
14.12 References

14.0 LEARNING OBJECTIVES

After studying this unit, you will be able to:
Describe file system organization
Perform file operations
Analyze access methods
Exercise consistency semantics
Define directory structure organization
Discuss file protection

14.1 INTRODUCTION

Another part of the operating system is the file manager. While the memory manager is responsible for maintaining primary memory, the file manager is responsible for maintaining secondary storage. Each file is a named collection of data stored on a device. The file manager implements this abstraction and provides directories for organizing files. It also provides a spectrum of commands: to read and write the contents of a file, to set the file's read/write position, to set and use the protection mechanism, to change ownership, to list the files in a directory, and to remove a file. The file manager's protection mechanism lets machine users administer how processes executing on behalf of different users may access the information in files. File protection is a fundamental property of files because it allows different people to store their information on a shared computer, confident that the information can be kept confidential. In addition to these functions, the file manager provides users a logical way to organize files in secondary storage.
14.2 FILE SYSTEM ORGANIZATION

The most familiar file systems make use of an underlying data storage device that offers access to an array of fixed-size blocks, sometimes called sectors, generally 512 bytes each. The file system software is responsible for organizing these sectors into files and directories, and for keeping track of which sectors belong to which file and which are unused. Most file systems address data in fixed-size units called "clusters" or "blocks", each containing a certain number of disk sectors (usually 1–64); this is the smallest logical amount of disk space that can be allocated to hold a file.

However, file systems need not make use of a storage device at all. A file system can be used to organize and represent access to any data, whether it is stored or dynamically generated. Whether or not the file system has an underlying storage device, file systems typically have directories that associate file names with files, usually by connecting the file name to an index into a file allocation table of some sort, such as the FAT in an MS-DOS file system or an inode in a Unix-like file system. Directory structures may be flat, or may allow hierarchies in which directories contain subdirectories. In some file systems file names are structured, with special syntax for filename extensions and version numbers; in others, file names are simple strings and per-file metadata is stored elsewhere.

Other bookkeeping information is typically associated with each file within a file system. The length of the data contained in a file may be stored as the number of blocks allocated for the file or as an exact byte count. The time the file was last modified may be stored as the file's timestamp. Some file systems also store the file creation time, the time it was last accessed, and the time the file's metadata was changed. (Note that many early PC operating systems did not keep track of file times.) Other information can include the file's device type, its owner user-ID and group-ID, and its access-permission settings.

The hierarchical file system was an early research interest of Dennis Ritchie of UNIX fame; previous implementations were restricted to only a few levels, notably the IBM implementations, even of their early databases like IMS. After the success of UNIX, Ritchie extended the file system concept to every object in his later operating system developments, such as Plan 9 and Inferno.