Scheduling Algorithms 93

[Fig. 6.11 shows four CPUs (a master and three slaves) sharing memory and I/O over a common bus; the master runs the OS as well as user processes, while the slaves run user processes only.]

Fig. 6.11: Symmetric Multiprocessor

6.7 Types

On a multiprocessor, the scheduler has to decide both which process to run and which central processing unit to run it on.

Timesharing

On a multiprocessor, the simplest scheduling algorithm for dealing with unrelated processes is to have a single system-wide data structure for ready processes: possibly just a list, but more likely a set of lists for the processes at different priorities.

Space Sharing

Multiprocessor scheduling can also be used when processes are related to one another. Scheduling two or more threads at the same time across multiple central processing units is called space sharing. The big advantage of space sharing is the elimination of multiprogramming, which eliminates the context-switching overhead.

Gang Scheduling

Gang scheduling has the following three parts:
1. Groups of related threads are scheduled as a unit, a gang.
2. All the members of a gang run simultaneously, on different timeshared central processing units.
3. All the gang members start and end their time slices together.

CU IDOL SELF LEARNING MATERIAL (SLM)
94 System Programming and Operating System

6.8 Performance Evaluation of the Scheduling

6.8.1 Algorithm Evaluation

How do we select a CPU-scheduling algorithm for a particular system? There are many scheduling algorithms, each with its own merits and demerits judged on different parameters. As a result, selecting an algorithm can be difficult.

The first problem is defining the criteria to be used in selecting an algorithm. Criteria are often defined in terms of CPU utilization, response time or throughput. To select an algorithm, we must first define the relative importance of these measures. Our criteria may include several measures, such as: maximize throughput such that turnaround time is (on average) linearly proportional to total execution time.

Once the selection criteria have been defined, we can evaluate the various algorithms under consideration. The following evaluation methods are commonly used:

1. Deterministic Modelling
2. Queuing Models
3. Simulation

6.8.2 Evaluation of Process Scheduling Algorithms

The first thing we need to decide is how we will evaluate the algorithms. To do this we need to decide on the relative importance of the factors listed above (fairness, efficiency, response time, turnaround and throughput). Only once we have decided on an evaluation method can we carry out the evaluation.

Deterministic Modelling

This evaluation method takes a predetermined workload and evaluates each algorithm using that workload. Assume we are presented with the following processes, which all arrive at time zero.
Table 6.5: Processes and Burst Times

Process    Burst Time
P1         9
P2         33
P3         2
P4         5
P5         14

Which of the following algorithms will perform best on this workload: First Come First Served (FCFS), non-preemptive Shortest Job First (SJF) or Round Robin (RR)? Assume a quantum of 8 milliseconds. Before looking at the answers, try to calculate the figures for each algorithm.

The advantage of deterministic modelling is that it is exact and fast to compute. The disadvantage is that it is applicable only to the workload used in the test. As an example, use the above workload but assume P1 has a burst time of only 8 milliseconds. What does this do to the average waiting time? Of course, the workload might be typical and scale up, but generally deterministic modelling is too specific and requires too much knowledge about the workload.

Queuing Models

Another method of evaluating scheduling algorithms is to use queuing theory. Using data from real processes we can arrive at a probability distribution for the length of a CPU burst and the I/O times for a process, and then generate these times with that distribution. We can also generate arrival times for processes (an arrival-time distribution).

If we define a queue for the CPU and a queue for each I/O device, we can test the various scheduling algorithms using queuing theory. Knowing the arrival rates and the service rates, we can calculate figures such as average queue length, average wait time, CPU utilization etc. One useful formula is Little's Formula.
n = λw

where
n is the average queue length,
λ is the average arrival rate for new processes (e.g., five per second), and
w is the average waiting time in the queue.

Knowing two of these values, we can calculate the third. For example, if we know that eight processes arrive every second and there are normally sixteen processes in the queue, we can compute that the average waiting time per process is two seconds.

The main disadvantage of using queuing models is that it is not always easy to define realistic distributions, so we have to make assumptions. As a result, the model is only an approximation of what actually happens.

Simulations

Rather than using queuing models, we can simulate a computer. A variable representing a clock is incremented; at each increment the state of the simulation is updated. Statistics are gathered at each clock tick so that system performance can be analysed.

The data to drive the simulation can be generated in the same way as for the queuing model, although this leads to similar problems. Alternatively, we can use trace data. This is data collected from real processes on real machines which is fed into the simulation. It can often provide good results and good comparisons over a range of scheduling algorithms. However, simulations can take a long time to run and to implement, and the trace data may be difficult to collect and may require large amounts of storage.

Implementation

The best way to compare algorithms is to implement them on real machines. This gives the best results but has a number of disadvantages.
It is expensive, as the algorithm has to be written and then implemented on real hardware. If typical workloads are to be monitored, the scheduling algorithm must be used in a live situation, and users may not be happy with an environment that is constantly changing. Moreover, if we find a scheduling algorithm that performs well, there is no guarantee that this will continue if the workload or environment changes.

Solutions to Performance: Deterministic Modelling

For FCFS the processes would be executed in the following order, with the following wait times:

Table 6.6: Processes with Burst Times and Wait Times

Process    Burst Time    Wait Time
P1         9             0
P2         33            9
P3         2             42
P4         5             44
P5         14            49

Therefore, the average waiting time is (0 + 9 + 42 + 44 + 49)/5 = 28.80 milliseconds.

For SJF (non-preemptive) the processes would run as follows:

Table 6.7: Processes with Burst Times and Wait Times

Process    Burst Time    Wait Time
P3         2             0
P4         5             2
P1         9             7
P5         14            16
P2         33            30

The average waiting time is (0 + 2 + 7 + 16 + 30)/5 = 11 milliseconds.
For RR the jobs would execute as follows:

Table 6.8: Round Robin Execution Order

Process    CPU Time
P1         8
P2         8
P3         2
P4         5
P5         8
P1         1
P2         8
P5         6
P2         8
P2         8
P2         1

The waiting time for each process is as follows:

P1: 0 + 23 = 23
P2: 8 + 16 + 6 = 30
P3: 16
P4: 18
P5: 23 + 9 = 32

Therefore, the average waiting time is (23 + 30 + 16 + 18 + 32)/5 = 23.80 milliseconds.

6.9 Summary

CPU scheduling is a mechanism which allows one process to use the CPU while the execution of another process is on hold (in the waiting state) due to the unavailability of some resource such as I/O, thereby making full use of the CPU. The aim of CPU scheduling is to make the system efficient, fast and fair.
CPU scheduling decisions may take place under the following four circumstances:

1. When a process switches from the running state to the waiting state (for an I/O request, or an invocation of wait for the termination of a child process).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, on completion of I/O).
4. When a process terminates.

In circumstances 1 and 4, there is no choice in terms of scheduling: a new process (if one exists in the ready queue) must be selected for execution. There is a choice, however, in circumstances 2 and 3. When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive; otherwise the scheduling scheme is preemptive.

There are many different criteria to consider when choosing the "best" scheduling algorithm:

CPU Utilization
Throughput
Turnaround Time
Waiting Time
Load Average
Response Time

Scheduling Algorithms

To decide which process to execute first and which to execute last so as to achieve maximum CPU utilization, computer scientists have defined the following algorithms:

1. First Come First Serve (FCFS) Scheduling
2. Shortest-Job-First (SJF) Scheduling
3. Priority Scheduling
4. Round Robin (RR) Scheduling
5. Multilevel Queue Scheduling
6. Multilevel Feedback Queue Scheduling

Multiprocessor Scheduling

In multiprocessor scheduling, there are multiple CPUs which share the load so that various processes run simultaneously. In general, multiprocessor scheduling is more complex than single-processor scheduling. When the processors are identical, any process can be run on any processor at any time.

6.10 Key Words/Abbreviations

Key Words

Preemptive: Under a preemptive scheduling policy, a low-priority process has to suspend its execution if a high-priority process is waiting in the same queue for execution.

Non-preemptive: Under a non-preemptive scheduling policy, processes are executed on a first come, first served basis; the next process is executed only when the currently running process finishes its execution.

Abbreviations

FCFS – First Come First Serve
SJF – Shortest Job First
SRTF – Shortest Remaining Time First
LJF – Longest Job First
LRTF – Longest Remaining Time First
RR – Round Robin
6.11 Learning Activity

1. What is the need of scheduling?
-----------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------

2. What do you mean by multiprocessor scheduling?
-----------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------

6.12 Unit End Questions (MCQ and Descriptive)

A. Descriptive Type Questions

1. Explain preemptive and non-preemptive scheduling.
2. Explain the different types of schedulers.
3. Explain the FCFS scheduling algorithm in detail.
4. Explain the SJF scheduling algorithm in detail.
5. Explain the Round Robin scheduling algorithm in detail.
6. Explain multiprocessor scheduling and its types.
7. Explain the evaluation of scheduling algorithms and their performance.

B. Multiple Choice/Objective Type Questions

1. What is scheduling?
(a) Allowing a job to use the processor
(b) Making proper use of the processor
(c) All of the mentioned
(d) None of the mentioned

2. The strategy of allowing processes that are logically runnable to be temporarily suspended is called ____________.
(a) non-preemptive scheduling
(b) preemptive scheduling
(c) shortest job first
(d) first come first served
3. Which scheduling algorithm allocates the CPU first to the process that requests the CPU first?
(a) First-come, first-served scheduling
(b) Shortest job scheduling
(c) Priority scheduling
(d) None of the mentioned

4. We want to keep the CPU as busy as possible. This criterion is referred to as:
(a) Throughput
(b) Response time
(c) CPU utilization
(d) None of the mentioned

5. Which of the following algorithms tends to minimize the process flow time?
(a) First come First served
(b) Earliest Deadline First
(c) Longest Job First
(d) Shortest Job First

Answers
1. (a), 2. (b), 3. (a), 4. (c), 5. (d)

6.13 References

Reference Books
1. https://peer.asee.org/introducing-cpu-scheduling-algorithms-the-photocopier-scenario.pdf
2. https://web.cs.wpi.edu/~cs3013/c07/lectures/Section05-Scheduling.pdf

Web Resources
1. https://www.includehelp.com/operating-systems/process-scheduling-in-operating-system.aspx
2. https://www.studytonight.com/operating-system/cpu-scheduling
3. https://www.javatpoint.com/os-scheduling-algorithms
4. https://www.guru99.com/cpu-scheduling-algorithms.html
5. https://www.geeksforgeeks.org/cpu-scheduling-in-operating-systems/
UNIT 7
INTER-PROCESS COMMUNICATION AND SYNCHRONIZATION

Structure:
7.0 Learning Objectives
7.1 Introduction
7.2 Definition
7.3 Shared Memory System
7.4 Message Passing
7.5 Critical Section
7.6 Mutual Exclusion
7.7 Semaphores
7.8 Summary
7.9 Key Words/Abbreviations
7.10 Learning Activity
7.11 Unit End Questions (MCQ and Descriptive)
7.12 References

7.0 Learning Objectives

After studying this unit, you should be able to:
Explain the inter-process communication concept.
Analyse the shared memory system concept.
Discuss mutual exclusion and semaphores.
7.1 Introduction

Inter-process communication (IPC) is a mechanism that involves communication of one process with another process. It may take place:
Between related processes originating from a single process, such as parent and child processes.
Between unrelated processes, i.e., two or more different processes.

Synchronization is a mechanism which helps processes use shared memory resources safely in an operating system. Most modern computers are multi-tasking, so they can run more than one process at the same time.

7.2 Definition

Processes executing concurrently in the OS may be either:
(a) Independent processes
(b) Co-operating processes

A process is independent if it cannot affect or be affected by the other processes executing in the system. A process is co-operating if it can affect or be affected by other processes executing in the system.

There are several reasons for providing an environment that allows process co-operation:

1. Information Sharing: Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment that allows concurrent access to such information.
2. Computation Speedup: If we want a particular task to run faster, we must break it into subtasks, each of which executes in parallel with the others. Speedup can be achieved only if the computer has multiple processing elements.
3. Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.
4. Convenience: Even an individual user may work on many tasks at the same time. For instance, a user may be editing, printing and compiling in parallel.

Co-operating processes require an inter-process communication (IPC) mechanism that allows them to exchange data and information. There are two fundamental models of IPC:
(a) Shared memory
(b) Message passing

1. Message passing provides a mechanism that allows processes to communicate and synchronize their actions without sharing the same address space, and is particularly useful where the communicating processes may reside on different computers connected by a network.
2. A message passing facility provides at least two operations: send(message) and receive(message).
3. Messages sent by a process can be of either fixed or variable size.
4. If only fixed-size messages can be sent, the system-level implementation is straightforward, but this restriction makes the task of programming more difficult.
5. Variable-sized messages require a more complex system-level implementation, but the programming task becomes simpler.
6. If processes P and Q want to communicate, they send messages to and receive messages from each other, so a communication link must exist between them. This link can be implemented in a variety of ways.

Here are several methods for logically implementing a link and the send()/receive() operations:
Direct or indirect communication
Synchronous or asynchronous communication
Automatic or explicit buffering

Direct or Indirect Communication: Under direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send() and receive() primitives are defined as:
1. send(P, message) – Send a message to process P.
2. receive(Q, message) – Receive a message from process Q.

A communication link in this scheme has the following properties:

1. A link is established automatically between every pair of processes that want to communicate. The processes need to know only each other's identity to communicate.
2. A link is associated with exactly two processes.
3. Between each pair of processes there exists exactly one link.

This scheme exhibits symmetry in addressing, i.e., both the sender and the receiver processes must name the other to communicate. A variant of this scheme employs asymmetry in addressing: only the sender names the recipient; the recipient is not required to name the sender. Here the send() and receive() primitives are defined as follows:

1. send(P, msg) – Send a message to process P.
2. receive(id, msg) – Receive a message from any process; the variable id is set to the name of the process with which communication has taken place.

The disadvantage of both of these schemes (symmetric and asymmetric) is the limited modularity of the resulting process definitions. Changing the identifier of a process may necessitate examining all other process definitions: all references to the old identifier must be found so that they can be modified to the new identifier. Any such hard-coding technique, where identifiers must be explicitly stated, is less desirable than a technique involving indirection.

Indirect Communication

With indirect communication, the messages are sent to and received from mailboxes, or ports. A mailbox can be viewed abstractly as an object into which messages can be placed by processes and from which messages can be removed.
In this scheme, a process can communicate with some other process via a number of different mailboxes. Two processes can communicate only if they have a shared mailbox. The send() and receive() primitives are defined as follows:

1. send(A, msg) – Send a message to mailbox A.
2. receive(A, msg) – Receive a message from mailbox A.

In this scheme a communication link has the following properties:

1. A link is established between a pair of processes only if both members of the pair have a shared mailbox.
2. A link may be associated with more than two processes.
3. Between each pair of communicating processes, there may be a number of different links, with each link corresponding to one mailbox.
4. A link may be unidirectional or bidirectional.

7.3 Shared Memory System

In the shared memory model shown in Fig. 7.1, the co-operating processes share a region of memory for sharing information. Some operating systems use a supervisor call to create a shared memory space. Similarly, some operating systems use the file system to create a RAM disk, which is a virtual disk created in RAM. The shared files are stored on the RAM disk to share information between processes; since the RAM disk lives in memory, the shared files are actually stored in memory. A process can share information by writing and reading data at the shared memory location or on the RAM disk.

In the message passing model, also shown in Fig. 7.1, data is shared by sending and receiving messages between co-operating processes. The message passing mechanism is easier to implement than shared memory, but it is suited to exchanging smaller amounts of data. In message passing, data is exchanged between processes through the kernel of the operating system using system calls. Message passing is particularly useful in a distributed environment where the communicating processes may reside on different computers connected by a network.
Fig. 7.1: Inter-process Communication Model

For example, a chat program used on the internet could be designed so that chat participants communicate with each other by exchanging messages. It must be noted that the message passing technique is slower than the shared memory technique.

Fig. 7.2: Message Format
A message contains the following information:
Header of the message, which identifies the sending and receiving processes
Block of data
Pointer to the block of data
Some control information about the process

[Fig. 7.3 shows two processes, A and B, exchanging messages over time: A calls send() and the data is transmitted to B's receive(); B then calls reply() and the reply data is transmitted back to A.]

Fig. 7.3: Send and Receive Message

Typically, inter-process communication is based on ports associated with processes. A port represents a queue of messages. Ports are controlled and managed by the kernel, and the processes communicate with each other through the kernel. In the message passing mechanism, two operations are performed: sending a message and receiving a message. The functions send() and receive() are used to implement these operations, as shown in Fig. 7.3.

Suppose P1 and P2 want to communicate with each other. A communication link must be created between them to send and receive messages. The communication link can be created in different ways. The most important methods are:
Direct model
Indirect model
Buffering
Methods to Implement Inter-process Communication

Inter-process communication (IPC) is a set of interfaces, usually programmed, that allows a programmer to communicate between a series of processes and thereby run programs concurrently in an operating system. A number of methods are used for inter-process communication:

1. Pipes: A pipe allows the flow of data in one direction only. Data from the output end is buffered until the input process, which must share a common origin with the sender, receives it.
2. Named Pipes: A named pipe is a pipe with a specific name, so it can be used by processes that do not share a common origin. An example is a FIFO, a pipe that is first given a name; data written to it is read in first-in, first-out order.
3. Message Queuing: This allows messages to be passed between processes using either a single queue or several message queues. The queues are managed by the system kernel and accessed through an application program interface (API).
4. Semaphores: These are used to solve problems associated with synchronization and to avoid race conditions. They are integer values greater than or equal to zero.
5. Shared Memory: This allows the interchange of data through a defined area of memory. A semaphore value typically has to be obtained before the data in shared memory can be accessed.
6. Sockets: This method is mostly used to communicate over a network, between a client and a server. It allows for a standard connection which is independent of the computer and the operating system.

Facility Provided by Inter-process Communication (IPC)

1. IPC provides a mechanism that allows processes to communicate and to synchronize their actions without sharing the same address space.
2. IPC is particularly useful in a distributed environment where the communicating processes may reside on different computers connected by a network. An example is a chat program used on the World Wide Web.
IPC is best provided by a message-passing system, and message systems can be defined in many ways.
7.4 Message Passing

The function of a message system is to allow processes to communicate with one another without the need to resort to shared data. We have already seen message passing used as a method of communication in microkernels. In this scheme, services are provided as ordinary user processes; that is, the services operate outside of the kernel, and communication among the user processes is accomplished through the passing of messages.

An IPC facility provides at least two operations: send(message) and receive(message). Messages sent by a process can be of either fixed or variable size.

Fixed-Sized Messages: When fixed-sized messages are sent, the system-level implementation is straightforward, but the task of programming becomes more difficult.

Variable-Sized Messages: Variable-sized messages need a more complex system-level implementation, but the programming task becomes simpler.

Example: If processes X and Y want to communicate, they must send messages to and receive messages from each other, so a link must exist between the two processes. There are a variety of ways to implement this link. The physical implementation (such as shared memory, a hardware bus, or a network) is not of much concern here; the logical implementation is more important. There are several methods for logically implementing a link and the send/receive operations:

1. Direct or indirect communication
2. Symmetric or asymmetric communication
3. Automatic or explicit buffering
4. Send by copy or send by reference
5. Fixed-sized or variable-sized messages

We look at each of these types of message systems next.

Naming: Processes that want to communicate can use either direct or indirect communication to refer to each other.

Direct Communication: With direct communication, each process that wants to communicate must explicitly name the recipient or sender of the communication. In this scheme, the send and receive primitives are defined as:

1. send(X, message) — Send a message to process X.
2. receive(Y, message) — Receive a message from process Y.

Properties of the Communication Link: A link is established automatically between every pair of processes that want to communicate; the processes need to know only each other's identity to communicate.

1. A link is associated with exactly two processes.
2. Exactly one link exists between each pair of processes.

The addressing of sender and receiver may be symmetric or asymmetric.

Symmetry in Addressing: Symmetry in addressing means that both the sender and the receiver processes must name the other to communicate:

1. send(X, message) — Send a message to process X.
2. receive(Y, message) — Receive a message from process Y.
Asymmetry in Addressing: Another scheme employs asymmetry in addressing. Only the sender names the recipient; the recipient is not required to name the sender. The send and receive primitives are defined as follows:

1. send(X, message) — Send a message to process X.
2. receive(id, message) — Receive a message from any process; the variable id is set to the name of the process with which communication has taken place.

Disadvantages of both the symmetric and asymmetric schemes:

1. The limited modularity of the resulting process definitions.
2. Changing the name of a process may necessitate examining all other process definitions.
3. All references to the old name must be found, so that they can be modified to the new name.
4. This situation is not desirable from the viewpoint of separate compilation.

Indirect Communication: In indirect communication, the messages are sent to and received from mailboxes, or ports. A mailbox can be viewed as an object into which messages can be placed by processes and from which messages can be removed. Each mailbox has a unique identification.

In indirect communication, a process can communicate with some other process via a number of different mailboxes. Two processes can communicate only if they share a mailbox. The send and receive primitives are defined as follows:

1. send(P, message) — Send a message to mailbox P.
2. receive(P, message) — Receive a message from mailbox P.

Properties of the Communication Link:

1. A link is established between a pair of processes only if both members of the pair have a shared mailbox.
2. A link may be associated with more than two processes.
3. A number of different links may exist between each pair of communicating processes, with each link corresponding to one mailbox.

Example: Let processes X, Y and Z all share mailbox P. Process X sends a message to P, while Y and Z each execute a receive from P. Which process will receive the message sent by X? The answer depends on the scheme that we choose:

(A) Allow a link to be associated with at most two processes.
(B) Allow at most one process at a time to execute a receive operation.
(C) Allow the system to select arbitrarily which process will receive the message (that is, either Y or Z, but not both, will receive the message). The system may identify the receiver to the sender.

Synchronization: Communication between processes takes place by calls to send and receive primitives, and there are different design options for implementing each primitive. Message passing may be either blocking or non-blocking, also known as synchronous and asynchronous.

Blocking send: The sending process is blocked until the message is received by the receiving process or by the mailbox.
Non-blocking send: The sending process sends the message and resumes operation.
Blocking receive: The receiver blocks until a message is available.
Non-blocking receive: The receiver retrieves either a valid message or a null.

Different combinations of send and receive are possible. When both the send and the receive are blocking, we have a rendezvous between the sender and the receiver.
7.5 Critical Section

Consider n processes all competing to use some shared data. Each process has a code segment, called the critical section, in which the shared data is accessed. The problem is to ensure that when one process is executing in its critical section, no other process is allowed to execute in its critical section.

Solution to the Critical-Section Problem

1. Mutual Exclusion: If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.
2. Progress: If no process is executing in its critical section and there exist some processes that wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely.
3. Bounded Waiting: A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

We assume that each process executes at a non-zero speed, but we make no assumption concerning the relative speeds of the n processes.

Initial Attempts to Solve the Problem

Consider only two processes, P0 and P1. The general structure of process Pi (with Pj the other process) is:

do {
    /* entry section */
    /* critical section */
    /* exit section */
    /* remainder section */
} while (1);

The processes may share some common variables to synchronize their actions.
Two-Process Solutions
In this section, we restrict our attention to algorithms that are applicable to only two processes at a time. The processes are numbered P0 and P1. For convenience, when presenting Pi we use Pj to denote the other process; that is, j = 1 - i.

Algorithm 1
Shared variables:

int turn;   // initially turn = 0

turn == i means Pi can enter its critical section.

Process Pi:

do {
    while (turn != i)
        ;
        critical section
    turn = j;
        remainder section
} while (1);

This algorithm satisfies mutual exclusion, but not progress. Our first approach is to let the processes share a common integer variable turn, initialized to 0 (or 1). If turn == i, then process Pi is allowed to execute in its critical section. This solution ensures that only one process at a time can be in its critical section. However, it does not satisfy the progress requirement, since it requires strict alternation of processes in the execution of the critical section. For example, if turn == 0 and P1 is ready to enter its critical section, P1 cannot do so, even though P0 may be in its remainder section.

Algorithm 2
Shared variables:

boolean flag[2];
// initially flag[0] = flag[1] = false

flag[i] == true means Pi is ready to enter its critical section.

Process Pi:

do {
    flag[i] = true;
    while (flag[j])
        ;
        critical section
    flag[i] = false;
        remainder section
} while (1);

This algorithm satisfies mutual exclusion, but not the progress requirement. The problem with algorithm 1 is that it does not retain sufficient information about the state of each process; it remembers only which process is allowed to enter its critical section. To remedy this problem, we replace the variable turn with the following array:

boolean flag[2];

The elements of the array are initialized to false. If flag[i] is true, this value indicates that Pi is ready to enter the critical section. In this algorithm, process Pi first sets flag[i] to true, signalling that it is ready to enter its critical section. Then, Pi checks to verify that process Pj is not also ready to enter its critical section. If Pj were ready, then Pi would wait until Pj had indicated that it no longer needed to be in the critical section (that is, until flag[j] was false). At this point, Pi would enter the critical section. On exiting the critical section, Pi would set flag[i] to false, allowing the other process to enter its critical section.

In this solution, the mutual-exclusion requirement is satisfied. Unfortunately, the progress requirement is not met. To illustrate this problem, consider the following execution sequence:

T0: P0 sets flag[0] = true
T1: P1 sets flag[1] = true

Now P0 and P1 are looping forever in their respective while statements.
Algorithm 3
This algorithm combines the shared variables of algorithms 1 and 2.

Process Pi:

do {
    flag[i] = true;
    turn = j;
    while (flag[j] && turn == j)
        ;
        critical section
    flag[i] = false;
        remainder section
} while (1);

This algorithm meets all three requirements; it solves the critical-section problem for two processes. By combining the key ideas of algorithm 1 and algorithm 2, we obtain a correct solution to the critical-section problem, where all three requirements are met. The processes share two variables:

boolean flag[2];
int turn;

Initially flag[0] = flag[1] = false, and the value of turn is immaterial (but is either 0 or 1). To enter the critical section, process Pi first sets flag[i] to true and then sets turn to the value j, thereby asserting that if the other process wishes to enter the critical section it can do so. If both processes try to enter at the same time, turn will be set to both i and j at roughly the same time. Only one of these assignments will last; the other will occur, but will be overwritten immediately. The eventual value of turn decides which of the two processes is allowed to enter its critical section first.

We now prove that this solution is correct. We need to show that:
1. Mutual exclusion is preserved,
2. The progress requirement is satisfied,
3. The bounded-waiting requirement is met.
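As a sketch, algorithm 3 (Peterson's algorithm) can be transcribed directly into Python threads. This relies on CPython's interpreter serializing the simple reads and writes below; a C version on modern hardware would additionally need memory barriers.

```python
import threading

# Algorithm 3 transcribed for two Python threads.
# flag[i] says "thread i wants to enter"; turn breaks ties.
flag = [False, False]
turn = 0
count = 0
N = 10_000

def worker(i):
    global turn, count
    j = 1 - i
    for _ in range(N):
        flag[i] = True              # entry section
        turn = j
        while flag[j] and turn == j:
            pass                    # busy-wait until it is safe to enter
        count += 1                  # critical section
        flag[i] = False             # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(count)   # 20000: no increment was lost, so mutual exclusion held
```

If the entry and exit sections are removed, the unprotected count += 1 can lose updates under preemption, which is exactly the race the algorithm exists to prevent.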
To prove property 1, we note that each Pi enters its critical section only if either flag[j] == false or turn == i. Also note that, if both processes were executing in their critical sections at the same time, then flag[0] == flag[1] == true. These two observations imply that P0 and P1 could not have successfully executed their while statements at about the same time, since the value of turn can be either 0 or 1, but cannot be both. Hence, one of the processes, say Pj, must have successfully executed the while statement, whereas Pi had to execute at least one additional statement ("turn = j"). However, since at that time flag[j] == true and turn == j, and this condition will persist as long as Pj is in its critical section, the result follows: mutual exclusion is preserved.

To prove properties 2 and 3, we note that a process Pi can be prevented from entering the critical section only if it is stuck in the while loop with the condition flag[j] == true and turn == j; this loop is the only one possible. If Pj is not ready to enter the critical section, then flag[j] == false, and Pi can enter its critical section. If Pj has set flag[j] to true and is also executing in its while statement, then either turn == i or turn == j. If turn == i, then Pi will enter the critical section. If turn == j, then Pj will enter the critical section. However, once Pj exits its critical section, it will reset flag[j] to false, allowing Pi to enter its critical section. If Pj resets flag[j] to true, it must also set turn to i. Thus, since Pi does not change the value of the variable turn while executing the while statement, Pi will enter the critical section (progress) after at most one entry by Pj (bounded waiting).

Synchronization Hardware
Many machines provide an instruction that tests and modifies the content of a word atomically:

boolean TestAndSet(boolean &target)
{
    boolean rv = target;
    target = true;
    return rv;
}
Mutual Exclusion with Test-and-Set
Shared data:

boolean lock = false;

Process Pi:

do {
    while (TestAndSet(lock))
        ;
        critical section
    lock = false;
        remainder section
}

Some machines instead provide an instruction that atomically swaps two variables:

void Swap(boolean &a, boolean &b)
{
    boolean temp = a;
    a = b;
    b = temp;
}

Mutual Exclusion with Swap
Shared data (initialized to false):

boolean lock;

Process Pi (key is a local variable):

do {
    key = true;
    while (key == true)
        Swap(lock, key);
        critical section
    lock = false;
        remainder section
}
Discuss the Requirements of Mutual Exclusion:
1. Mutual exclusion must be enforced: only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object.
2. A process which halts in its non-critical section must do so without interfering with other processes.
3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation can be allowed.
4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay.
5. No assumptions are made about relative process speeds or the number of processes. A process remains inside its critical section for a finite time only.

Describe Dekker's Algorithm

var flag: array [0..1] of boolean;
    turn: 0..1;

procedure P0;
begin
    repeat
        flag[0] := true;
        while flag[1] do
            if turn = 1 then
            begin
                flag[0] := false;
                while turn = 1 do { nothing };
                flag[0] := true
            end;
        <critical section>
        turn := 1;
        flag[0] := false;
        <remainder>
    forever
end;
procedure P1;
begin
    repeat
        flag[1] := true;
        while flag[0] do
            if turn = 0 then
            begin
                flag[1] := false;
                while turn = 0 do { nothing };
                flag[1] := true
            end;
        <critical section>
        turn := 0;
        flag[1] := false;
        <remainder>
    forever
end;

7.6 Mutual Exclusion

Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of a common resource, such as a global variable, by pieces of computer code called critical sections. Examples of such resources are fine-grained flags, counters or queues, used to communicate between code that runs concurrently, such as an application and its interrupt handlers. The problem is acute because a thread can be stopped or started at any time. To illustrate: suppose a section of code is mutating a piece of data over several program steps, when another thread, perhaps triggered by some unpredictable event, starts executing. If this second thread reads from the same piece of data, the data, in the process of being overwritten, is in an inconsistent and unpredictable state. If the second thread tries overwriting that data, the ensuing state will probably be unrecoverable. These critical sections of code accessing shared
data must, therefore, be protected, so that other processes which read from or write to the chunk of data are excluded from running. A mutex is also a common name for a program object that negotiates mutual exclusion among threads, also called a lock.

7.7 Semaphores

A semaphore is a synchronization tool that does not require busy waiting. A semaphore S is an integer variable that can only be accessed via two indivisible (atomic) operations:

wait(S):
    while (S <= 0)
        ;   // no-op
    S--;

signal(S):
    S++;

Critical Section of n Processes
Shared data:

semaphore mutex;   // initially mutex = 1

Process Pi:

do {
    wait(mutex);
        critical section
    signal(mutex);
        remainder section
} while (1);
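The n-process scheme above maps directly onto a counting semaphore initialized to 1; a small sketch with Python's threading.Semaphore:

```python
import threading

mutex = threading.Semaphore(1)   # initially mutex = 1
count = 0

def process():
    global count
    for _ in range(10_000):
        mutex.acquire()          # wait(mutex)
        count += 1               # critical section
        mutex.release()          # signal(mutex)

threads = [threading.Thread(target=process) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(count)   # 50000
```

Because acquire() blocks when the semaphore's value is 0, at most one of the five threads is in the critical section at any instant, so no increment is lost.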
Semaphore Implementation
Define a semaphore as a record:

typedef struct {
    int value;
    struct process *L;
} semaphore;

Assume two simple operations: block suspends the process that invokes it; wakeup(P) resumes the execution of a blocked process P. The semaphore operations are now defined as:

wait(S):
    S.value--;
    if (S.value < 0) {
        add this process to S.L;
        block;
    }

signal(S):
    S.value++;
    if (S.value <= 0) {
        remove a process P from S.L;
        wakeup(P);
    }

Semaphore as a General Synchronization Tool
To execute statement B in Pj only after statement A has executed in Pi, use a semaphore flag initialized to 0:

    Pi              Pj
    A               wait(flag);
    signal(flag);   B
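The A-before-B ordering shown above can be sketched the same way: a semaphore initialized to 0 forces Pj to wait until Pi has signalled (the pi/pj names are illustrative):

```python
import threading

flag = threading.Semaphore(0)   # initially 0: Pj cannot proceed yet
events = []

def pi():
    events.append("A")   # statement A
    flag.release()       # signal(flag)

def pj():
    flag.acquire()       # wait(flag): blocks until Pi signals
    events.append("B")   # statement B runs only after A

tj = threading.Thread(target=pj)
ti = threading.Thread(target=pi)
tj.start()   # start Pj first to show that it really waits
ti.start()
tj.join(); ti.join()
print(events)   # ['A', 'B']
```

Even though Pj is started first, the order of events is fixed: B can only follow A.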
Two Types of Semaphores
Counting semaphore: the integer value can range over an unrestricted domain.
Binary semaphore: the integer value can range only between 0 and 1; it can be simpler to implement.
A counting semaphore S can be implemented using binary semaphores.

Implementing S as a Binary Semaphore
Data structures:

binary-semaphore S1, S2;
int C;

Initialization:

S1 = 1
S2 = 0
C = initial value of semaphore S

wait operation:

wait(S1);
C--;
if (C < 0) {
    signal(S1);
    wait(S2);
}
signal(S1);

signal operation:

wait(S1);
C++;
if (C <= 0)
    signal(S2);
else
    signal(S1);
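The construction above can be sketched in Python, with threading.Semaphore(1) and threading.Semaphore(0) playing S1 and S2. Note the "baton passing" in signal: when a waiter must be woken, S1 is handed to it rather than released, and the woken waiter performs the final signal(S1).

```python
import threading

class CountingSemaphore:
    """Counting semaphore built from two binary semaphores, following the
    scheme above: S1 guards the counter C, S2 queues blocked waiters."""
    def __init__(self, initial):
        self.s1 = threading.Semaphore(1)   # S1 = 1
        self.s2 = threading.Semaphore(0)   # S2 = 0
        self.c = initial                   # C = initial value of S

    def wait(self):
        self.s1.acquire()                  # wait(S1)
        self.c -= 1
        if self.c < 0:
            self.s1.release()              # signal(S1)
            self.s2.acquire()              # wait(S2): block here
        self.s1.release()                  # signal(S1)

    def signal(self):
        self.s1.acquire()                  # wait(S1)
        self.c += 1
        if self.c <= 0:
            self.s2.release()              # signal(S2): hand S1 to a waiter
        else:
            self.s1.release()              # signal(S1)

sem = CountingSemaphore(0)
done = []

def waiter():
    sem.wait()
    done.append(1)

workers = [threading.Thread(target=waiter) for _ in range(3)]
for w in workers:
    w.start()
for _ in range(3):
    sem.signal()
for w in workers:
    w.join()
print(len(done))   # 3: every blocked waiter was eventually released
```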
Classical Problems of Synchronization
Bounded-Buffer Problem
Readers and Writers Problem
Dining-Philosophers Problem

Bounded-Buffer Problem
Shared data:

semaphore full, empty, mutex;

Initially: full = 0, empty = n, mutex = 1

Producer process:

do {
    ...
    produce an item in nextp
    ...
    wait(empty);
    wait(mutex);
    ...
    add nextp to buffer
    ...
    signal(mutex);
    signal(full);
} while (1);

Consumer process:

do {
    wait(full);
    wait(mutex);
    ...
    remove an item from buffer to nextc
    ...
    signal(mutex);
    signal(empty);
    ...
    consume the item in nextc
    ...
} while (1);

Readers-Writers Problem
Shared data:

semaphore mutex, wrt;
int readcount;

Initially mutex = 1, wrt = 1, readcount = 0

Writer process:

wait(wrt);
...
writing is performed
...
signal(wrt);

Reader process:

wait(mutex);
readcount++;
if (readcount == 1)
    wait(wrt);
signal(mutex);
...
reading is performed
...
wait(mutex);
readcount--;
if (readcount == 0)
    signal(wrt);
signal(mutex);

Dining-Philosophers Problem

Fig. 7.4: Dining-Philosophers Problem

Consider five philosophers who spend their lives thinking and eating. The philosophers share a common circular table surrounded by five chairs, each belonging to one philosopher. In the center of the table is a bowl of rice, and the table is laid with five single chopsticks. When a philosopher thinks, she does not interact with her colleagues. From time to time, a philosopher gets hungry and tries to pick up the two chopsticks that are closest to her (the chopsticks that are between her and her left and right neighbours). A philosopher may pick up only one chopstick at a time. Obviously, she cannot pick up a chopstick that is already in the hand of a neighbour. When a hungry philosopher has both her chopsticks at the same time, she eats without releasing them. When she is finished eating, she puts down both of her chopsticks and starts thinking again. The dining-philosophers problem is considered a classic synchronization problem.

Shared data:

semaphore chopstick[5];

Initially all values are 1
Philosopher i:

do {
    wait(chopstick[i]);
    wait(chopstick[(i+1) % 5]);
    ...
    eat
    ...
    signal(chopstick[i]);
    signal(chopstick[(i+1) % 5]);
    ...
    think
    ...
} while (1);

Critical Regions
A critical region is a high-level synchronization construct. A shared variable v of type T is declared as:

v: shared T;

Variable v is accessed only inside the statement:

region v when B do S;

where B is a boolean expression. While statement S is being executed, no other process can access variable v. Regions referring to the same shared variable exclude each other in time. When a process tries to execute the region statement, the boolean expression B is evaluated. If B is true, statement S is executed. If it is false, the process is delayed until B becomes true and no other process is in the region associated with v.
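The region v when B do S construct can be sketched with a condition variable: take the lock, wait until B holds, run S, then wake the other regions so they can re-test their guards. As a demo, a two-slot bounded buffer serves as the shared variable (the Region class and all names are illustrative):

```python
import threading

class Region:
    """Sketch of `region v when B do S`: run S under a lock, but only once
    the guard B holds; every exit wakes the others so they re-test B."""
    def __init__(self):
        self._cond = threading.Condition()

    def run(self, guard, body):
        with self._cond:
            while not guard():
                self._cond.wait()
            body()
            self._cond.notify_all()   # leaving the region: let others re-test

# Demo: a two-slot bounded buffer as the shared variable v.
n = 2
buf = {"pool": [None] * n, "count": 0, "in": 0, "out": 0}
region = Region()
items = []

def produce(item):
    def body():
        buf["pool"][buf["in"]] = item
        buf["in"] = (buf["in"] + 1) % n
        buf["count"] += 1
    region.run(lambda: buf["count"] < n, body)   # region buf when (count < n)

def consume():
    def body():
        items.append(buf["pool"][buf["out"]])
        buf["out"] = (buf["out"] + 1) % n
        buf["count"] -= 1
    region.run(lambda: buf["count"] > 0, body)   # region buf when (count > 0)

prod = threading.Thread(target=lambda: [produce(i) for i in range(5)])
cons = threading.Thread(target=lambda: [consume() for _ in range(5)])
prod.start(); cons.start()
prod.join(); cons.join()
print(items)   # [0, 1, 2, 3, 4]
```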
Example: Bounded Buffer
Shared data:

struct buffer {
    int pool[n];
    int count, in, out;
}

Producer process (inserts nextp into the shared buffer):

region buffer when (count < n) {
    pool[in] = nextp;
    in = (in+1) % n;
    count++;
}

Consumer process (removes an item from the shared buffer and puts it in nextc):

region buffer when (count > 0) {
    nextc = pool[out];
    out = (out+1) % n;
    count--;
}

Implementation of region x when B do S
Associate with the shared variable x the following variables:

semaphore mutex, first-delay, second-delay;
int first-count, second-count;

Mutually exclusive access to the critical section is provided by mutex.
If a process cannot enter the critical section because the boolean expression B is false, it initially waits on the first-delay semaphore; it is moved to the second-delay semaphore before it is allowed to reevaluate B. We keep track of the number of processes waiting on first-delay and second-delay with first-count and second-count, respectively. The algorithm assumes a FIFO ordering in the queuing of processes for a semaphore. For an arbitrary queuing discipline, a more complicated implementation is required.

Monitors
A monitor is a high-level synchronization construct that allows the safe sharing of an abstract data type among concurrent processes.

monitor monitor-name
{
    shared variable declarations
    procedure body P1 (…) {
        ...
    }
    procedure body P2 (…) {
        ...
    }
    procedure body Pn (…) {
        ...
    }
    {
        initialization code
    }
}

To allow a process to wait within the monitor, a condition variable must be declared, as:

condition x, y;

A condition variable can only be used with the operations wait and signal.
The operation

x.wait();

means that the process invoking this operation is suspended until another process invokes

x.signal();

The x.signal operation resumes exactly one suspended process. If no process is suspended, then the signal operation has no effect.

Fig. 7.5: Monitor With Condition Variables

Fig. 7.6: Dining Philosophers Example
monitor dp
{
    enum {thinking, hungry, eating} state[5];
    condition self[5];

    void pickup(int i) {
        state[i] = hungry;
        test(i);
        if (state[i] != eating)
            self[i].wait();
    }

    void putdown(int i) {
        state[i] = thinking;
        // test left and right neighbours
        test((i+4) % 5);
        test((i+1) % 5);
    }

    void test(int i) {
        if ((state[(i + 4) % 5] != eating) &&
            (state[i] == hungry) &&
            (state[(i + 1) % 5] != eating)) {
            state[i] = eating;
            self[i].signal();
        }
    }

    void init() {
        for (int i = 0; i < 5; i++)
            state[i] = thinking;
    }
}
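The dp monitor translates to Python as follows. The monitor's implicit lock becomes an explicit Lock shared by five Conditions (one per self[i]); since Python conditions have signal-and-continue semantics rather than Hoare semantics, the if in pickup becomes a while. The class and driver names are illustrative.

```python
import threading

THINKING, HUNGRY, EATING = 0, 1, 2

class DiningPhilosophers:
    """The dp monitor above, with the monitor lock made explicit."""
    def __init__(self):
        self.lock = threading.Lock()
        self.state = [THINKING] * 5
        self.cond = [threading.Condition(self.lock) for _ in range(5)]

    def pickup(self, i):
        with self.lock:
            self.state[i] = HUNGRY
            self._test(i)
            while self.state[i] != EATING:   # self[i].wait(), re-tested
                self.cond[i].wait()

    def putdown(self, i):
        with self.lock:
            self.state[i] = THINKING
            self._test((i + 4) % 5)   # left neighbour
            self._test((i + 1) % 5)   # right neighbour

    def _test(self, i):
        if (self.state[(i + 4) % 5] != EATING and
                self.state[i] == HUNGRY and
                self.state[(i + 1) % 5] != EATING):
            self.state[i] = EATING
            self.cond[i].notify()     # self[i].signal()

dp = DiningPhilosophers()
meals = [0] * 5

def philosopher(i):
    for _ in range(50):
        dp.pickup(i)
        meals[i] += 1     # eat
        dp.putdown(i)

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)   # [50, 50, 50, 50, 50]
```

No philosopher picks up a single chopstick: _test grants both at once or none, so the circular-wait deadlock of the naive semaphore solution cannot occur.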
Monitor Implementation Using Semaphores
Variables:

semaphore mutex;   // (initially = 1)
semaphore next;    // (initially = 0)
int next-count = 0;

Each external procedure F will be replaced by:

wait(mutex);
    ...
    body of F;
    ...
if (next-count > 0)
    signal(next);
else
    signal(mutex);

Mutual exclusion within a monitor is thus ensured. For each condition variable x, we have:

semaphore x-sem;   // (initially = 0)
int x-count = 0;

The operation x.wait can be implemented as:

x-count++;
if (next-count > 0)
    signal(next);
else
    signal(mutex);
wait(x-sem);
x-count--;

The operation x.signal can be implemented as:

if (x-count > 0) {
    next-count++;
    signal(x-sem);
    wait(next);
    next-count--;
}

Conditional-wait construct: x.wait(c);
c is an integer expression evaluated when the wait operation is executed.
The value of c (a priority number) is stored with the name of the process that is suspended. When x.signal is executed, the process with the smallest associated priority number is resumed next.

Two conditions must be checked to establish the correctness of the system:
1. User processes must always make their calls on the monitor in a correct sequence.
2. We must ensure that an uncooperative process does not ignore the mutual-exclusion gateway provided by the monitor and try to access the shared resource directly, without using the access protocols.

7.8 Summary

Inter-process Communication (IPC) is a mechanism for processes to synchronize and communicate with each other.

Why Inter-process Communication?
Cooperating processes frequently need to communicate with each other to ensure work is done correctly. Sometimes, the correctness of the results depends on the executing sequence of cooperating processes; in this scenario, we need to enforce synchronization to make sure we get the correct results. Sometimes the execution of one process may require the result of another process; in this scenario, we need a communication mechanism for those processes to talk to each other.

Two Approaches for IPC
There are two approaches that can be used for IPC: one is shared memory, and the other is message passing.

Process synchronization means sharing system resources by processes in such a way that concurrent access to shared data is handled, thereby minimizing the chance of inconsistent data. Maintaining data consistency demands mechanisms to ensure synchronized execution of cooperating processes.
7.9 Key Words/Abbreviations

Semaphores: Semaphores are integer variables that are used to solve the critical-section problem by means of two atomic operations, wait and signal, that are used for process synchronization.

Mutual Exclusion: Mutual exclusion is a concurrency-control property which is introduced to prevent race conditions. It is the requirement that a process cannot enter its critical section while another concurrent process is currently present or executing in its critical section; i.e., only one process is allowed to execute in the critical section at any given instance of time.

7.10 Learning Activity

1. Explain direct communication.
-----------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------
2. What are the solutions for the critical-section problem?
-----------------------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------------------

7.11 Unit End Questions (MCQ and Descriptive)

A. Descriptive Type Questions
1. Explain inter-process communication.
2. Explain different methods to implement inter-process communication.
3. Explain direct and indirect communication.
4. Explain shared memory in detail.
5. Explain message passing.
6. Explain critical section.
7. Write a note on semaphores and state their types.
8. Explain the two-process solution for the critical-section problem.
9. Explain the synchronization hardware algorithm for the critical-section problem.
10. Explain mutual exclusion with the test-and-set algorithm for the critical-section problem.
11. Explain mutual exclusion with the swap algorithm for the critical-section problem.
12. Explain the implementation of a binary semaphore.
13. Explain mutual exclusion.
14. Explain the bounded-buffer producer and consumer problem.
15. Explain the readers-writers problem.
16. Explain the dining-philosophers problem to solve mutual exclusion.
17. Explain monitor implementation using semaphores.
18. What are the requirements of mutual exclusion?
19. Explain the message format and what information a message contains.
20. Explain Dekker's algorithm.

B. Multiple Choice/Objective Type Questions
1. In __________ communication each process that wants to communicate must explicitly name the recipient or sender of the communication.
(a) direct (b) indirect (c) written (d) extra
2. Each process has a code segment, called __________.
(a) programming (b) critical section (c) process (d) synchronization
3. __________ is the process in which communication between processes takes place by calls to send and receive primitives.
(a) Unlock (b) Lock (c) Synchronization (d) Semaphores
4. A mutex is also a common name for a program object that negotiates mutual exclusion among threads, also called a ___________.
(a) synchronization (b) unlock (c) lock (d) semaphores
5. ___________ is a synchronization tool that does not require busy waiting.
(a) Synchronization (b) Unlock (c) Lock (d) Semaphores

Answers
1. (a), 2. (b), 3. (c), 4. (c), 5. (d)

7.12 References

Reference Books
1. https://www.researchgate.net/publication/265992935_Shared_Memory_Synchronization
2. https://www.unf.edu/public/cop4610/ree/Notes/PPT/PPT8E/CH%2005%20-OS8e.pdf

Web Resources
1. https://www.geeksforgeeks.org/inter-process-communication-ipc/
2. https://kelvin.ink/2018/10/27/IPC_and_Synchronization/
3. https://www.tutorialspoint.com/semaphores-in-operating-system
4. https://webservices.ignou.ac.in/virtualcampus/adit/course/cst101/block4/unit2/cst101-bl4-u2-04.htm
5. https://www.includehelp.com/operating-systems/process-synchronization-in-operating-system-and-inter-process-communication.aspx
UNIT 8 DEADLOCKS

Structure:
8.0 Learning Objectives
8.1 Introduction
8.2 Conditions
8.3 Modeling
8.4 Detection and Recovery
8.5 Deadlock Avoidance
8.6 Deadlock Prevention
8.7 Summary
8.8 Key Words/Abbreviations
8.9 Learning Activity
8.10 Unit End Questions (MCQ and Descriptive)
8.11 References

8.0 Learning Objectives

After studying this unit, you should be able to:
Explain the concept of deadlocks.
Analyse deadlock detection and recovery techniques.
Discuss how deadlock can be prevented.
8.1 Introduction

Deadlock is a situation where a set of processes is blocked because each process is holding a resource and waiting for another resource acquired by some other process. Consider an example where two trains are coming toward each other on the same track: once they are in front of each other, neither train can move. A similar situation occurs in operating systems when two or more processes hold some resources and wait for resources held by the other(s). For example, in the figure below, process 1 is holding resource 1 and waiting for resource 2, which is acquired by process 2, while process 2 is waiting for resource 1.

Fig. 8.1: Circular Wait

8.2 Conditions

There are four necessary conditions for the occurrence of deadlock. Processes will be in a deadlock state if and only if all four of the following conditions hold:
1. Mutual Exclusion
2. Hold and Wait
3. No Preemption
4. Circular Wait
Mutual Exclusion: At least one resource must be in non-sharable mode, i.e., at a time only one process can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

Hold and Wait: A process must be holding at least one resource and waiting to acquire additional resources that are currently held by other processes.

No Preemption: Resources cannot be preempted, i.e., a resource can be released by a process only after its completion.

Circular Wait: A set of waiting processes {P0, P1, …, Pn} must exist such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, and so on, with Pn waiting for a resource that is held by P0.

8.3 Modeling

A computer scientist named Holt showed how the four conditions for deadlock (as described above) can be modeled using directed graphs. These graphs are shown below:

Fig. 8.2: Resource Allocation Graphs

In the above resource allocation graphs, the parts (A), (B), and (C) represent:

Figure | Represents
(A) | Holding a resource
(B) | Requesting a resource
(C) | Deadlock
According to Holt, these graphs have the following two types of nodes:
Processes: shown in the graph as circles
Resources: shown in the graph as squares

Here, an arc from a square (resource node) to a circle (process node) means that the resource has previously been requested by, granted to, and is currently held by, that process. As you can see from the figure given above, resource R is currently assigned to process A in part (A). An arc from a process to a resource means that the process is currently blocked waiting for that resource. In part (B), process B is waiting for resource S. And in part (C) we see a deadlock: process C is waiting for resource T, which is currently held by process D, and process D is not about to release resource T because it is waiting for resource U, held by C. Here both processes will wait forever.

8.4 Detection and Recovery

[A] Deadlock Detection Algorithm
Deadlock prevention and deadlock avoidance ensure that deadlock never occurs. If deadlock cannot be prevented, then the system should detect the deadlock and recover from it. To implement deadlock detection:
(a) The system must maintain information about the current allocation of resources to different processes, along with any outstanding resource allocation requests.
(b) The system must provide an algorithm to determine whether the system has entered a deadlock state:
    a deadlock detection algorithm for single-instance resources;
    a deadlock detection algorithm for multiple-instance resources.
(c) The system must provide a recovery algorithm.
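For the single-instance case, detection becomes concrete by collapsing the resource-allocation graph into a wait-for graph, with an edge p -> q meaning "process p waits for a resource held by process q"; deadlock detection then reduces to finding a cycle. A sketch (the dict encoding of the graph is illustrative):

```python
def has_deadlock(wait_for):
    """Depth-first search for a cycle in the wait-for graph."""
    WHITE, GRAY, BLACK = 0, 1, 2            # unvisited / in progress / done
    color = {}

    def visit(p):
        color[p] = GRAY
        for q in wait_for.get(p, ()):
            c = color.get(q, WHITE)
            if c == GRAY:                   # back edge: a cycle, hence deadlock
                return True
            if c == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color.get(p, WHITE) == WHITE and visit(p) for p in wait_for)

# Fig. 8.1: process 1 waits for process 2 and vice versa -> deadlock.
print(has_deadlock({1: {2}, 2: {1}}))     # True
# Process 2 waits for nothing and can run to completion -> no deadlock.
print(has_deadlock({1: {2}, 2: set()}))   # False
```

For multiple-instance resources a cycle is necessary but not sufficient, which is why that case needs the separate matrix-based algorithm mentioned above.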