Many-to-One

Many user-level threads are mapped to a single kernel thread. This model is used on systems that do not support kernel threads. The entire process will block if a thread makes a blocking system call. Also, because only one thread can access the kernel at a time, multiple threads are unable to run in parallel on multiprocessors. Green threads — a thread library available for Solaris 2 — uses this model. In addition, user-level thread libraries implemented on operating systems that do not support kernel threads use the many-to-one model.

Fig. 3.3: Many-to-One Model

One-to-One

Each user-level thread maps to a kernel thread. Examples: Windows 95/98/NT/2000, OS/2. The one-to-one model maps each user thread to a kernel thread. It provides more concurrency than the many-to-one model by allowing another thread to run when a thread makes a blocking system call; it also allows multiple threads to run in parallel on multiprocessors. The only drawback to this model is that creating a user thread requires creating the corresponding kernel thread. Because the overhead of creating kernel threads can burden the performance of an
application, most implementations of this model restrict the number of threads supported by the system.

Fig. 3.4: One-to-One Model

Many-to-Many Model

This model allows many user-level threads to be mapped to many kernel threads, and allows the operating system to create a sufficient number of kernel threads. Examples: Solaris 2, Windows NT/2000 with the ThreadFiber package.

Fig. 3.5: Many-to-Many Model

The many-to-many model multiplexes many user-level threads to a smaller or equal number of kernel threads. The number of kernel threads may be specific to either a particular application or a particular machine.
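As a concrete illustration of these models, the sketch below creates a few threads with the POSIX Pthreads library; on systems such as Linux, each pthread is backed by its own kernel thread, i.e., the one-to-one model. This example is not from the original text: the worker function and thread count are arbitrary illustrative choices (compile with gcc -pthread).

#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4                   /* arbitrary number of worker threads */

/* Worker executed by each thread; the argument carries its index. */
static void *worker(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];

    /* On a one-to-one implementation, each pthread_create() also
       creates a kernel thread that can be scheduled independently. */
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, worker, (void *)i);

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);  /* wait for all workers to finish */

    return 0;
}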
Threading Issues

Semantics of fork() and exec() System Calls

The fork system call is used to create a separate, duplicate process. In a multithreaded program, the semantics of the fork and exec system calls change. If one thread in a program calls fork, does the new process duplicate all threads, or is the new process single-threaded? Some UNIX systems have chosen to have two versions of fork: one that duplicates all threads and another that duplicates only the thread that invoked the fork system call. The exec system call typically works in the same way. That is, if a thread invokes the exec system call, the program specified in the parameter to exec will replace the entire process — including all threads and LWPs.

Which of the two versions of fork to use depends upon the application. If exec is called immediately after forking, then duplicating all threads is unnecessary, as the program specified in the parameters to exec will replace the process. In this instance, duplicating only the calling thread is appropriate. If, however, the separate process does not call exec after forking, the separate process should duplicate all threads.
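The fork-then-exec pattern just described can be sketched as follows. This is an illustrative example only; the program being exec'ed (ls) is an arbitrary choice. Because exec replaces the entire process image, a fork variant that duplicates only the calling thread would suffice in a multithreaded parent.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();              /* create a separate, duplicate process */

    if (pid < 0) {
        perror("fork");
        exit(1);
    } else if (pid == 0) {
        /* Child: exec replaces the whole process image (all threads),
           so duplicating only the forking thread would have been enough. */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");            /* reached only if exec fails */
        exit(1);
    } else {
        waitpid(pid, NULL, 0);       /* parent waits for the child */
    }
    return 0;
}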
Thread Cancellation

Thread cancellation is the task of terminating a thread before it has completed. For example, if multiple threads are concurrently searching through a database and one thread returns the result, the remaining threads might be cancelled. Another situation might occur when a user presses a button on a web browser that stops a web page from loading any further. Often a web page is loaded in a separate thread; when a user presses the stop button, the thread loading the page is cancelled.

A thread that is to be cancelled is often referred to as the target thread. Cancellation of a target thread may occur in two different scenarios:

1. Asynchronous cancellation: One thread immediately terminates the target thread.

2. Deferred cancellation: The target thread can periodically check if it should terminate, allowing the target thread an opportunity to terminate itself in an orderly fashion.

The difficulty with cancellation occurs in situations where resources have been allocated to a cancelled thread, or where a thread is cancelled while in the middle of updating data it shares with other threads. This becomes especially troublesome with asynchronous cancellation. The operating system will often reclaim system resources from a cancelled thread, but often will not reclaim all resources. Therefore, cancelling a thread asynchronously may not free a necessary system-wide resource.

Alternatively, deferred cancellation works by one thread indicating that a target thread is to be cancelled. Cancellation occurs only when the target thread checks whether it should be cancelled or not. This allows a thread to check whether it should be cancelled at a point when it can be cancelled safely. Pthreads refers to such points as cancellation points.

Most operating systems allow a process or thread to be cancelled asynchronously. The Pthread API, however, provides deferred cancellation. This means that an operating system implementing the Pthread API will allow deferred cancellation.
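Deferred cancellation can be sketched with the Pthreads API as below; the loop body stands in for real work, and the one-second delay is an arbitrary choice. The target thread reaches an explicit cancellation point via pthread_testcancel(), so it is only ever cancelled at a safe spot.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Target thread: loops doing work and periodically checks
   whether a cancellation request is pending. */
static void *target(void *arg)
{
    int old;
    (void)arg;
    pthread_setcanceltype(PTHREAD_CANCEL_DEFERRED, &old); /* the default */
    for (;;) {
        /* ... perform one unit of work ... */
        pthread_testcancel();        /* explicit cancellation point */
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, target, NULL);
    sleep(1);                        /* let the target run for a while */
    pthread_cancel(t);               /* request (deferred) cancellation */
    pthread_join(t, NULL);           /* completes at a cancellation point */
    puts("target thread cancelled");
    return 0;
}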
Signal Handling

A signal is used in UNIX systems to notify a process that a particular event has occurred. A signal may be received either synchronously or asynchronously, depending upon the source of and the reason for the event being signalled. Whether a signal is synchronous or asynchronous, all signals follow the same pattern: a signal is generated by the occurrence of a particular event; a generated signal is delivered to a process; once delivered, the signal must be handled.

Examples of synchronous signals include an illegal memory access or division by zero. If a running program performs either of these actions, a signal is generated. Synchronous signals are delivered to the same process that performed the operation causing the signal (hence the reason they are considered synchronous).

When a signal is generated by an event external to a running process, that process receives the signal asynchronously. Examples of such signals include terminating a process with specific keystrokes (such as <Control><C>) or having a timer expire. Typically, an asynchronous signal is sent to another process.

Every signal may be handled by one of two possible handlers: a default signal handler or a user-defined signal handler. Every signal has a default signal handler that is run by the kernel when handling the signal. This default action may be overridden by a user-defined signal handler function.

Handling signals in single-threaded programs is straightforward; signals are always delivered to the process. However, delivering signals is more complicated in multithreaded programs, as a process may have several threads. Where, then, should a signal be delivered? In general, the following options exist:

1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
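For instance, a process can override the default handler for SIGINT, the signal generated by the <Control><C> keystroke, with a user-defined handler. A minimal sketch:

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t got_sigint = 0;

/* User-defined handler: overrides the default action for SIGINT.
   Only async-signal-safe work belongs inside a handler. */
static void on_sigint(int signo)
{
    (void)signo;
    got_sigint = 1;
}

int main(void)
{
    struct sigaction sa = {0};
    sa.sa_handler = on_sigint;
    sigaction(SIGINT, &sa, NULL);    /* install the user-defined handler */

    while (!got_sigint)
        pause();                     /* wait until a signal is delivered */

    printf("caught SIGINT; exiting cleanly\n");
    return 0;
}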
Thread Pools

Earlier, we described the scenario of multithreading a web server. In this situation, whenever the server receives a request, it creates a separate thread to service the request. Whereas creating a separate thread is certainly superior to creating a separate process, a multithreaded server nonetheless has potential problems. The first concerns the amount of time required to create the thread prior to servicing the request, compounded with the fact that this thread will be discarded once it has completed its work. The second issue is more problematic: if we allow every concurrent request to be serviced in a new thread, we have placed no bound on the number of threads concurrently active in the system. Unlimited threads could exhaust system resources, such as CPU time or memory. One solution to this issue is to use thread pools.

The general idea behind a thread pool is to create a number of threads at process startup and place them into a pool, where they sit and wait for work. When a server receives a request, it awakens a thread from this pool — if one is available — passing it the request to service. Once the thread completes its service, it returns to the pool to await more work. If the pool contains no available thread, the server waits until one becomes free. In particular, the benefits of thread pools are:

1. It is usually faster to service a request with an existing thread than to wait for a new thread to be created.
2. A thread pool limits the number of threads that exist at any one point. This is particularly important on systems that cannot support a large number of concurrent threads.

The number of threads in the pool can be set heuristically based upon factors such as the number of CPUs in the system, the amount of physical memory, and the expected number of concurrent client requests. More sophisticated thread-pool architectures can dynamically adjust the number of threads in the pool according to usage patterns. Such architectures provide the further benefit of having a smaller pool — thereby consuming less memory — when the load on the system is low.

Thread-specific Data

Threads belonging to a process share the data of the process. Indeed, this sharing of data provides one of the benefits of multithreaded programming. However, each thread might need its own copy of certain data in some circumstances. We will call such data thread-specific data. For example, in a transaction-processing system, we might service each transaction in a separate thread. Furthermore, each transaction may be assigned a unique identifier. To associate each thread with its unique identifier, we could use thread-specific data. Most thread libraries — including Win32 and Pthreads — provide some form of support for thread-specific data.
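With Pthreads, thread-specific data is typically managed through keys. The sketch below gives each thread its own transaction identifier, along the lines of the transaction-processing example above; the names and values are illustrative.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t txn_key;        /* one key; a distinct value per thread */

static void *service_transaction(void *arg)
{
    /* Each thread stores its own copy of the identifier under the key. */
    long *txn_id = malloc(sizeof *txn_id);
    *txn_id = (long)arg;
    pthread_setspecific(txn_key, txn_id);

    /* ... later, anywhere in this thread ... */
    long *mine = pthread_getspecific(txn_key);
    printf("servicing transaction %ld\n", *mine);
    return NULL;
}

int main(void)
{
    pthread_t t[3];
    pthread_key_create(&txn_key, free);  /* destructor runs at thread exit */

    for (long i = 0; i < 3; i++)
        pthread_create(&t[i], NULL, service_transaction, (void *)(i + 100));
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}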
3.6 Real Time

A real time system is used when there are rigid time requirements on the operation of a processor or the flow of data, and thus it is often used as a control device in a dedicated application. A real time operating system has well-defined, fixed time constraints. Processing must be done within the defined constraints, or the system will fail. A real time system is considered to function correctly only if it returns the correct result within its time constraints.

Types of Real Time System

1. Hard real time system: A hard real time system guarantees that critical tasks complete on time. This goal requires that all delays in the system be bounded, from the retrieval of stored data to the time it takes the operating system to finish any request made of it. Secondary storage of any sort is limited or missing. Most advanced operating system features are absent too, since they tend to separate the user further from the hardware, and that separation results in uncertainty about the amount of time an operation will take. Virtual memory is almost never found in real time systems. Therefore, hard real time systems conflict with the operation of time-sharing systems, and the two cannot be mixed.

2. Soft real time system: A less restrictive type of real time system is a soft real time system, where a critical real time task gets priority over other tasks and retains that priority until it completes. As in hard real time systems, kernel delays need to be bounded: a real time task cannot be kept waiting indefinitely for the kernel to run it. Soft real time systems, however, have more limited utility than hard real time systems. Given their lack of deadline support, they are risky to use for industrial control and robotics. There are several areas in which they are useful, including multimedia, virtual reality, and advanced scientific projects.

3.7 Embedded

An embedded operating system (OS) is a specialized operating system designed to perform a specific task for a device that is not a computer. An embedded operating system's main job is to
run the code that allows the device to do its job. The embedded OS also makes the device's hardware accessible to the software that is running on top of the OS.

An embedded system is a computer that supports a machine. Examples include computers in cars, traffic lights, digital televisions, ATMs, airplane controls, point of sale (POS) terminals, digital cameras, GPS navigation systems, elevators, digital media receivers and smart meters, among many other possibilities. We find embedded systems everywhere around us in our daily life. Embedded systems are specially designed computer systems that essentially contain software and hardware for performing specific tasks. Mobile phones, laptops, cameras, washing machines, ATMs and hair straighteners are examples of embedded systems, and embedded systems also have many medical applications.

Characteristics of Embedded Operating Systems

The main characteristics of embedded operating systems are as follows:

Direct use of interrupts
Reactive operation
Real-time operation
Streamlined protection mechanisms
I/O device flexibility
Configurability

Embedded vs. Non-embedded

An embedded operating system is integrated into a device and, depending on the device, may reside on a chip. Embedded operating systems tend to be limited in scope with regard to what they can do. In contrast, a non-embedded operating system tends to run from a hard disk or an SSD. Non-embedded operating systems such as Windows 10 or Mac OS tend to be user-configurable and upgradable, and they are usually designed for general-purpose use.
3.8 Distributed

Distributed systems are systems in which the processors do not share memory or a clock; instead, each processor has its own local memory. The processors communicate with one another through various communication lines, such as high-speed buses or telephone lines. These systems are usually referred to as loosely coupled systems or distributed systems. The processors in a distributed system may provide a file system interface where clients can create, update, read and delete files, e.g., a web server that delivers files to clients running web browsers.

Peer to Peer Computing

In this model, the clients and the servers are not distinguished from one another; instead, all nodes within the system are considered peers, and each may act as either a client or a server, depending on whether it is requesting or providing a service. Peer-to-peer systems provide an advantage over client-server systems: in a client-server system, the server is always a bottleneck, but in a peer-to-peer system, services are provided by several nodes distributed throughout the network.

In a peer-to-peer system, a node must first join the network of peers. Once a node has joined the network, it can begin providing services to, and requesting services from, other nodes in the network. Services are provided in the following two ways:

1. When a node joins a network, it registers its service with a centralized lookup service on the network. Any node desiring a specific service first contacts this centralized lookup service to determine which node provides the service. The remainder of the communication takes place between the client and the service provider.

2. A peer acting as a client must first discover what node provides a desired service by broadcasting a request for the service to all other nodes in the network. The node (or nodes) providing that service responds to the peer making the request. To support this approach, a discovery protocol must be provided that allows peers to discover services provided by other peers in the network.
3.9 Clustered

Like parallel systems, clustered systems gather together multiple CPUs to accomplish computational work. Clustered systems differ from parallel systems, however, in that they are composed of two or more individual systems coupled together. The definition of the term clustered is not concrete; the generally accepted definition is that clustered computers share storage and are closely linked via LAN networking.

Clustering is usually performed to provide high availability. A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others. If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the application(s) that were running on the failed machine. The failed machine can remain down, but the users and clients of the application see only a brief interruption of service.

Asymmetric Clustering: In this mode, one machine is in hot standby mode while the other is running the applications. The hot standby host (machine) does nothing but monitor the active server. If that server fails, the hot standby host becomes the active server.

Symmetric Clustering: In this mode, two or more hosts are running applications, and they are monitoring each other. This mode is obviously more efficient, as it uses all of the available hardware.

Parallel Clustering: Parallel clusters allow multiple hosts to access the same data on the shared storage. Because most operating systems lack support for simultaneous data access by multiple hosts, parallel clusters are usually accomplished by special versions of software and special releases of applications.

Clustered technology is rapidly changing. Clustered system usage and features should expand greatly as Storage Area Networks (SANs) become widespread. SANs allow easy attachment of multiple hosts to multiple storage units. Current clusters are usually limited to two or four hosts due to the complexity of connecting the hosts to shared storage.
3.10 Summary

Batch Operating System: Some computer processes are very lengthy and time-consuming. To speed up processing, jobs with similar needs are batched together and run as a group. The user of a batch operating system never directly interacts with the computer. In this type of OS, every user prepares his or her job on an offline device like a punch card and submits it to the computer operator.

Multi-tasking/Time-sharing Operating Systems: A time-sharing operating system enables people located at different terminals (shells) to use a single computer system at the same time. The processor time (CPU) which is shared among multiple users is termed time sharing.

Real Time OS: In a real time operating system, the time interval to process and respond to inputs is very small. Examples: military software systems, space software systems.

Distributed Operating System: Distributed systems use many processors located in different machines to provide very fast computation to their users.

Network Operating System: A network operating system runs on a server. It provides the capability to manage data, users, groups, security, applications, and other networking functions.

3.11 Key Words/Abbreviations

Embedded: An embedded system is a combination of computer hardware and software, either fixed in capability or programmable, designed for a specific function or functions within a larger system.
Debugging: Debugging is the process of finding and resolving defects in software or hardware that prevent correct operation.

CPU: Central Processing Unit

OS: Operating System

LAN: Local Area Network

I/O: Input/Output

3.12 Learning Activity

1. Differentiate between embedded and non-embedded systems.

-----------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------

2. What is a thread?

-----------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------

3.13 Unit End Questions (MCQ and Descriptive)

A. Descriptive Type Questions

1. Explain the batch mainframe operating system in detail.
2. Explain in detail the time-sharing multiprocessing operating system.
3. Explain the multiprogramming operating system.
4. Explain threads and the types of threads.
5. Write a note on multithreading.
6. Explain real time operating systems.
7. Explain embedded operating systems.
8. Explain distributed operating systems.
9. Explain clustered operating systems.
B. Multiple Choice/Objective Type Questions

1. The benefits of multithreading can be greatly increased in a ___________ architecture.
(a) multithreading (b) multiprocessor (c) multi-user (d) peer to peer

2. A pool of jobs on disk allows the OS to select which job to run next, to ___________ CPU utilization.
(a) stable (b) perform (c) increase (d) decrease

3. ___________ threads are supported directly by the operating system.
(a) Kernel (b) Shell (c) OS (d) Multi

4. The ___________ model maps each user thread to a kernel thread.
(a) one-to-many (b) many-to-many (c) many-to-one (d) one-to-one

5. A ___________ system guarantees that critical tasks complete on time.
(a) soft real time (b) hard real time (c) real time (d) embedded

Answers: 1. (b), 2. (c), 3. (a), 4. (d), 5. (b)

3.14 References

Reference Books

1. http://www.cs.nchu.edu.tw/~hwtseng/OS/os.pdf
2. http://dinus.ac.id/repository/docs/ajar/Operating_System.pdf
Web Resources

1. https://www.includehelp.com/operating-systems/introduction-to-operating-system-and-their-types.aspx
2. https://www.guru99.com/operating-system-tutorial.html
UNIT 4 OPERATING SYSTEM COMPONENTS

Structure:
4.0 Learning Objectives
4.1 Introduction
4.2 Process Management Component
4.3 Memory Management Component
4.4 I/O Management Component
4.5 File Management Component
4.6 Protection System
4.7 Networking Management Component
4.8 Command Interpreter
4.9 Summary
4.10 Key Words/Abbreviations
4.11 Learning Activity
4.12 Unit End Questions (MCQ and Descriptive)
4.13 References

4.0 Learning Objectives

After studying this unit, you should be able to:

Explain the various components of operating systems.
Discuss the management of processes, memory and I/O devices.

Explain the various networking components and the protection of systems from unauthorized users.

4.1 Introduction

From the virtual machine (also resource management) point of view, these components reflect the services made available by the OS.

1. Process Management: A process is a program in execution; there are numerous processes to choose from in a multiprogrammed system.
   Process creation/deletion (bookkeeping)
   Process suspension/resumption (scheduling, system vs. user)
   Process synchronization
   Process communication
   Deadlock handling

2. Memory Management:
   Maintain bookkeeping information
   Map processes to memory locations
   Allocate/deallocate memory space as requested/required

3. I/O Device Management:
   Disk management functions such as free space management, storage allocation, fragmentation removal, head scheduling
   Consistent, convenient software-to-I/O-device interface through buffering/caching, custom drivers for each device
4. File System: Built on top of disk management
   File creation/deletion
   Support for hierarchical file systems
   Update/retrieval operations: read, write, append, seek
   Mapping of files to secondary storage

5. Protection: Controlling access to the system
   Resources — CPU cycles, memory, files, devices
   Users — authentication, communication
   Mechanisms, not policies

6. Network Management: Often built on top of the file system
   TCP/IP, IPX, IPng
   Connection/routing strategies
   "Circuit" management — circuit, message, packet switching
   Communication mechanism
   Data/process migration

7. Network Services (Distributed Computing): Built on top of networking
   Email, messaging (GroupWise)
   FTP, gopher, WWW
   Distributed file systems — NFS, AFS, LAN Manager
   Name service — DNS, YP, NIS
   Replication — gossip, ISIS
   Security — Kerberos

8. User Interface
   Character-oriented shell — sh, csh, command.com (user replaceable)
   GUI — X, Windows 95
4.2 Process Management Component

A program does nothing unless its instructions are executed by the CPU. A program in execution is a process.

1. A process needs certain resources, including CPU time, memory, files, and I/O, to accomplish its task.
2. The resources are either given to the process when it is created or allocated to it while it is running.
3. Various initialization data (input) may be passed along, e.g., a filename to open a file.
4. When the process terminates, the OS will reclaim any reusable resources.
5. A program itself is not a process; a program is a passive entity, such as the contents of a file stored on disk, whereas a process is an active entity.
6. The execution of such a process must be sequential. The CPU executes one instruction of the process after another until the process completes.
7. A multithreaded process has multiple program counters, each pointing to the next instruction to execute for a given thread.
8. The OS is responsible for the following activities in connection with process management:
(a) Creating and deleting both user and system processes.
(b) Suspending and resuming processes.
(c) Providing mechanisms for process synchronization.
(d) Providing mechanisms for process communication.
(e) Providing mechanisms for deadlock handling.

4.3 Memory Management Component

1. Main memory is a large array of words or bytes, ranging in size from hundreds of thousands to billions. Each word or byte has its own address.
2. Main memory is a repository of quickly accessible data shared by the CPU and I/O devices.
3. The main memory is generally the only large storage device that the CPU is able to address and access directly.
4. Instructions must be in memory for the CPU to execute them. For a program to be executed, it must be mapped to absolute addresses and loaded into main memory. As the program executes, it accesses program instructions and data from memory by generating these absolute addresses. When the program terminates, its memory space becomes available, and the next program can be loaded and executed.
5. The OS is responsible for the following activities in connection with memory management:
(a) Keeping track of which parts of memory are currently being used and by whom.
(b) Deciding which processes and data to move into and out of memory.
(c) Allocating and de-allocating memory space as needed.

4.4 I/O Management Component

1. One of the purposes of an OS is to hide the peculiarities of specific hardware devices from the users.
2. The peculiarities of I/O devices are hidden from the bulk of the OS itself by the I/O subsystem.
3. The I/O subsystem consists of several components:
(a) A memory management component that includes buffering, caching and spooling.
(b) A general device driver interface.
(c) Drivers for specific hardware devices.

4.5 File Management Component

A file is a collection of related information defined by its creator. A file represents programs and data.

1. Computers can store information on several different types of physical media. Magnetic disk, optical disk and magnetic tape are the most common.
2. Each of these media has its own characteristics and physical organization. Each medium is controlled by a device, such as a disk drive or tape drive, that also has its own properties, including access speed, capacity, data transfer rate, and access method.
3. Files are normally organized into directories to make them easier to use.
4. When multiple users have access to files, it may be desirable to control by whom and in what ways files may be accessed.

The OS is responsible for the following activities in connection with file management:
(a) Creating and deleting files.
(b) Creating and deleting directories to organize files.
(c) Supporting primitives for manipulating files and directories.
(d) Mapping files onto secondary storage.
(e) Backing up files on stable (non-volatile) storage media.

4.6 Protection System

1. Protection refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.
2. The protection mechanism must:
   Distinguish between authorized and unauthorized usage.
   Specify the controls to be imposed.
   Provide a means of enforcement.

4.7 Networking Management Component

The processors interact with each other via communication lines called a network. The communication-network design must consider routing and connection techniques, and the problems of contention and security. Presently, most operating systems support a range of networking protocols, hardware, and applications for using them. This implies that computers running different operating systems can take part in a common network for sharing resources
such as computing, data, printers, and scanners, making use of either wired or wireless connections.

The networking component is often built on top of the file system:
1. TCP/IP, IPX, IPng
2. Connection/routing strategies
3. "Circuit" management — circuit, message, packet switching
4. Communication mechanism
5. Data/process migration

4.8 Command Interpreter

Users can interface with the OS through the command interpreter. It is a command line interface, which allows users to directly enter commands that are to be performed by the OS.

1. Some operating systems include the command interpreter in the kernel; others, like Windows XP and Unix, treat the command interpreter as a special program that is running when a job is initiated.
2. On systems with multiple command interpreters to choose from, the interpreters are known as shells, as on Unix and Linux.
3. The main function of the command interpreter is to get and execute the next user-specified command, such as delete, print, copy, execute and so on. There are two ways in which these commands can be implemented:
(i) The command interpreter itself contains the code to execute the command. For example, a command to delete a file may cause the command interpreter to jump to a section of its code that sets up the parameters and makes the appropriate system call.
(ii) Another approach, used by Unix, is to implement most commands through system programs.
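The second (Unix-style) approach can be illustrated with a minimal command loop: the interpreter itself only reads a line and splits it into words, then forks and execs a separate system program to carry the command out. This skeleton is illustrative only (no built-in commands, no quoting or redirection).

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    char line[256];

    for (;;) {
        printf("mysh> ");                    /* prompt */
        if (!fgets(line, sizeof line, stdin))
            break;                           /* EOF ends the shell */
        line[strcspn(line, "\n")] = '\0';
        if (line[0] == '\0')
            continue;

        /* Split the line into whitespace-separated arguments. */
        char *argv[16];
        int argc = 0;
        for (char *tok = strtok(line, " "); tok && argc < 15;
             tok = strtok(NULL, " "))
            argv[argc++] = tok;
        argv[argc] = NULL;

        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            continue;
        }
        if (pid == 0) {                      /* child runs the system program */
            execvp(argv[0], argv);
            perror(argv[0]);                 /* exec failed */
            exit(1);
        }
        waitpid(pid, NULL, 0);               /* shell waits for completion */
    }
    return 0;
}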
4.9 Summary

An operating system is a large and complex system that can only be created by partitioning it into small pieces. An operating system comprises various components such as file, process, memory and I/O device management.

A file is a collection of related information defined by its creator.

The process management component is responsible for managing the many processes that are running simultaneously on the operating system.

I/O device management is one of the important uses of an operating system; it helps to hide the variations of specific hardware devices from the user.

Network management is the process of administering and managing computer networks.

The memory management process is conducted by using a sequence of reads or writes of certain memory addresses.

Secondary-storage management supports the most important task of a computer system, which is to execute programs.

Security management includes protecting the various processes in an operating system from each other's activities. The operating system checks the capability of a program to read, write, create, and delete files.

4.10 Key Words/Abbreviations

Routing: Routing refers to establishing the routes that data packets take on their way to a particular destination.

Process Migration: Process migration refers to the mobility of executing (or suspended) processes in a distributed computing environment.

TCP/IP: Transmission Control Protocol/Internet Protocol.
4.11 Learning Activity

1. What do you mean by a command interpreter?

-----------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------

2. List and explain the components of an operating system.

-----------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------

4.12 Unit End Questions (MCQ and Descriptive)

A. Descriptive Type Questions

1. Explain the process management component in detail.
2. Explain the memory management component in detail.
3. Explain the I/O management component in detail.
4. Explain in detail the file management component.
5. Explain in detail the protection system.
6. Explain in detail the networking management component.
7. Write a note on the command interpreter.

B. Multiple Choice/Objective Type Questions

1. A program in execution is a _________.
(a) thread (b) command (c) file (d) process

2. A _________ is a collection of related information defined by its creator.
(a) thread (b) command (c) file (d) process
3. __________ refers to a mechanism for controlling access by programs, processes, or users to both system and user resources.
(a) Protection (b) Network (c) Process (d) Program

4. Users can interface with the OS with the technique of the _________.
(a) memory management (b) command interpreter (c) security management (d) network management

5. The processors interact with each other via communication lines called a _________.
(a) network (b) storage (c) security (d) program

Answers: 1. (d), 2. (c), 3. (a), 4. (b), 5. (a)

4.13 References

Reference Books

1. https://www.tutorialspoint.com/operating_system/operating_system_tutorial.pdf
2. http://www.crectirupati.com/sites/default/files/lecture_notes/Operating%20Systems%20Lecture%20Notes.pdf

Web Resources

1. https://phoenix.goucher.edu/~kelliher/cs42/sep11.html
2. https://www.guru99.com/components-of-operating-system.html
3. https://www.coursera.org/lecture/technical-support-fundamentals/components-of-an-operating-system-dMmYy
UNIT 5 PROCESS SCHEDULING

Structure:
5.0 Learning Objectives
5.1 Introduction
5.2 Definition
5.3 Scheduling Objectives
5.4 Types of Schedulers
5.5 Scheduling Criteria
5.5.1 CPU Utilization
5.5.2 Throughput
5.5.3 Turnaround Time
5.5.4 Waiting Time
5.5.5 Response Time
5.6 Summary
5.7 Key Words/Abbreviations
5.8 Learning Activity
5.9 Unit End Questions (MCQ and Descriptive)
5.10 References
5.0 Learning Objectives

After studying this unit, you should be able to:

Elaborate the various scheduling objectives.
Discuss the various schedulers.
Explain the various scheduling criteria.

5.1 Introduction

Process scheduling is a task of the operating system that schedules processes in their different states (ready, running, waiting) according to their priorities. This task is very useful in maintaining the computer system. Process scheduling allocates the time interval of each process in which the process is to be executed by the central processing unit (CPU). Process scheduling is very important in multiprogramming and multitasking operating systems, where multiple processes execute simultaneously.

5.2 Definition

Process scheduling is the activity of the process manager that handles the removal of the running process from the CPU and the selection of another process on the basis of a particular strategy. Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into executable memory at a time, and the loaded processes share the CPU using time multiplexing.

5.3 Scheduling Objectives

The objective of multiprogramming is to have some process running at all times, so as to maximize CPU utilization. The objective of time-sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. A uniprocessor
system can have only one running process. If more processes exist, the rest must wait until the CPU is free and can be rescheduled.

Scheduling Queues

As processes enter the system, they are put into a job queue. This queue consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list. A ready queue header contains pointers to the first and final PCBs in the list. We extend each PCB to include a pointer field that points to the next PCB in the ready queue.

The operating system also has other queues. When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue.

In the figure below, each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system. A new process is initially put in the ready queue. It waits in the ready queue until it is selected for execution (or dispatched). Once the process is assigned to the CPU and is executing, one of several events could occur:

The process could issue an I/O request, and then be placed in an I/O queue.
The process could create a new sub-process and wait for its termination.
The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

Process Scheduling Queues

1. Job queue – set of all processes in the system.
2. Ready queue – set of all processes residing in main memory, ready and waiting to execute.
3. Device queues – sets of processes waiting for an I/O device.
4. Processes migrate between the various queues.
Fig. 5.1: Representation of Process Scheduling (the ready queue and various I/O device queues)

Fig. 5.2: Queuing Diagram of Process Scheduling
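As described above, the ready queue is commonly kept as a linked list of PCBs with pointers to the first and final entries. A minimal sketch of that structure (the field names are illustrative; a real PCB holds registers, state, accounting information, and more):

#include <stddef.h>

struct pcb {
    int pid;
    struct pcb *next;       /* pointer to the next PCB in the queue */
};

struct ready_queue {
    struct pcb *head;       /* first PCB: next to be dispatched */
    struct pcb *tail;       /* final PCB: newly ready processes link here */
};

/* A process that becomes ready is linked onto the tail. */
static void enqueue(struct ready_queue *q, struct pcb *p)
{
    p->next = NULL;
    if (q->tail)
        q->tail->next = p;
    else
        q->head = p;
    q->tail = p;
}

/* The dispatcher removes the PCB at the head of the queue. */
static struct pcb *dequeue(struct ready_queue *q)
{
    struct pcb *p = q->head;
    if (p) {
        q->head = p->next;
        if (!q->head)
            q->tail = NULL;
    }
    return p;
}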
5.4 Types of Schedulers

Operating systems may feature up to three distinct types of schedulers:

(1) A long-term scheduler (also known as an admission scheduler or high-level scheduler),
(2) A mid-term or medium-term scheduler, and
(3) A short-term scheduler (also known as a dispatcher).

The names suggest the relative frequency with which these functions are performed.

Long-term Scheduler

The long-term, or admission, scheduler decides which jobs or processes are to be admitted to the ready queue; that is, when an attempt is made to execute a program, its admission to the set of currently executing processes is either authorized or delayed by the long-term scheduler. Thus this scheduler dictates what processes are to run on a system and the degree of concurrency to be supported at any one time, i.e., whether a high or low number of processes are to be executed concurrently, and how the split between I/O-intensive and CPU-intensive processes is to be handled. In modern operating systems, this is used to make sure that real time processes get enough CPU time to finish their tasks; without proper real time scheduling, modern GUI interfaces would seem sluggish.

Long-term scheduling is also important in large-scale systems such as batch processing systems, computer clusters, supercomputers and render farms. In these cases, special-purpose job scheduler software is typically used to assist these functions, in addition to any underlying admission scheduling support in the operating system.

Mid-term Scheduler

The mid-term scheduler, present in all systems with virtual memory, temporarily removes processes from main memory and places them on secondary memory (such as a disk drive) or vice versa. This is commonly referred to as "swapping out" or "swapping in" (also, less accurately, as "paging out" or "paging in").
The mid-term scheduler may decide to swap out, for example:

(1) a process which has not been active for some time, or
(2) a process which has a low priority, or
(3) a process which is page faulting frequently, or
(4) a process which is taking up a large amount of memory,

in order to free up main memory for other processes, swapping the process back in later when more memory is available, or when the process has been unblocked and is no longer waiting for a resource.

In many systems today (those that support mapping virtual address space to secondary storage other than the swap file), the mid-term scheduler may actually perform the role of the long-term scheduler, by treating binaries as "swapped out processes" upon their execution. In this way, when a segment of the binary is required, it can be swapped in on demand, or "lazy loaded".

Short-term Scheduler

The short-term scheduler (also known as the dispatcher) decides which of the ready, in-memory processes is to be executed (allocated a CPU) next, following a clock interrupt, an I/O interrupt, an operating system call or another form of signal. Thus the short-term scheduler makes scheduling decisions much more frequently than the long-term or mid-term schedulers; a scheduling decision will at a minimum have to be made after every time slice, and these are very short. This scheduler can be preemptive, implying that it is capable of forcibly removing processes from a CPU when it decides to allocate that CPU to another process, or non-preemptive (also known as "voluntary" or "co-operative"), in which case the scheduler is unable to "force" processes off the CPU.

The scheduler selects from among the processes in memory that are ready to execute, and allocates the CPU to one of them. The ready queue is not necessarily a first-in, first-out (FIFO) queue. As we shall see when we consider the various scheduling algorithms, a ready queue may be implemented as a FIFO queue, a priority queue, a tree, or simply an unordered linked list.
Conceptually, however, all the processes in the ready queue are lined up waiting for a chance to run on the CPU. The records in the queues are generally the process control blocks (PCBs) of the processes.

Types of Scheduling

Given below are the circumstances under which CPU scheduling takes place:

(1) When a process switches from the running state to the waiting state (for example, an I/O request, or invocation of wait for the termination of one of the child processes).
(2) When a process switches from the running state to the ready state (for example, when an interrupt occurs).
(3) When a process switches from the waiting state to the ready state (for example, completion of I/O).
(4) When a process terminates.

Hence there are two types of scheduling: non-preemptive scheduling and preemptive scheduling.

Non-preemptive Scheduling

In circumstances 1 and 4, there is no choice in terms of scheduling. When scheduling takes place only under circumstances 1 and 4, we say the scheduling scheme is non-preemptive. Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state. This scheduling method is used by the Microsoft Windows 3.1 and Apple Macintosh operating systems. It is the only method that can be used on certain hardware platforms, because it does not require the special hardware (for example, a timer) needed for preemptive scheduling.

Example of a non-preemptive scheduling algorithm: (1) First-come, first-served.
Preemptive Scheduling

There is a choice, however, in circumstances 2 and 3; when scheduling takes place under these circumstances as well, the scheduling scheme is preemptive, and a new process (if one exists in the ready queue) must be selected for execution.

But preemptive scheduling incurs a cost. Consider the case of two processes sharing data. One may be in the midst of updating the data when it is preempted and the second process is run. The second process may try to read the data, which are currently in an inconsistent state. New mechanisms are thus needed to coordinate access to shared data.

Preemption also has an effect on the design of the operating-system kernel. During the processing of a system call, the kernel may be busy with an activity on behalf of a process. Such activities may involve changing important kernel data (for instance, I/O queues). What happens if the process is preempted in the middle of these changes, and the kernel (or the device driver) needs to read or modify the same structure?

Examples of preemptive scheduling algorithms: (1) Shortest-job-first, (2) Round robin, (3) Priority-based scheduling, (4) Multi-level queue.

5.5 Scheduling Criteria

Since there are many CPU-scheduling algorithms with different properties, one algorithm may favour one class of processes over another. In choosing which algorithm to use in a particular situation, we must consider the properties of the various algorithms. There are many criteria for comparing CPU-scheduling algorithms, and the characteristics used for comparison can make a big difference in the determination of the best algorithm. The criteria include the following:

(1) CPU Utilization
(2) Throughput
(3) Turnaround Time
(4) Waiting Time
(5) Response Time

5.5.1 CPU Utilization

We want to keep the CPU as busy as possible. CPU utilization may range from 0 to 100 per cent. In a real system, it should range from 40 per cent (for a lightly loaded system) to 90 per cent (for a heavily used system).

5.5.2 Throughput

If the CPU is busy executing processes, then work is being done. One measure of work is the number of processes completed per time unit, called throughput. For long processes, this rate may be 1 process per hour; for short transactions, throughput might be 10 processes per second.

5.5.3 Turnaround Time

From the point of view of a particular process, the important criterion is how long it takes to execute that process. The interval from the time of submission of a process to the time of completion is the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.

5.5.4 Waiting Time

The CPU-scheduling algorithm does not affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process spends waiting in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.

5.5.5 Response Time

In an interactive system, turnaround time may not be the best criterion. Often, a process can produce some output fairly early and can continue computing new results while previous results are being output to the user. Thus, another measure is the time from the submission of a request until the first response is produced. This measure, called response time, is the amount of time it takes to start responding, not the time it takes to output the response. The turnaround time is generally limited by the speed of the output device.
5.6 Summary

CPU scheduling is the task of selecting a waiting process from the ready queue and allocating the CPU to it. The CPU is allocated to the selected process by the dispatcher. First-come, first-served (FCFS) scheduling is the simplest scheduling algorithm, but it can cause short processes to wait for very long processes. Shortest-job-first (SJF) scheduling is provably optimal, providing the shortest average waiting time. Implementing SJF scheduling is difficult, however, because predicting the length of the next CPU burst is difficult. The SJF algorithm is a special case of the general priority scheduling algorithm, which simply allocates the CPU to the highest-priority process. Both priority and SJF scheduling may suffer from starvation. Aging is a technique to prevent starvation.

Round-robin (RR) scheduling is more appropriate for a time-shared (interactive) system. RR scheduling allocates the CPU to the first process in the ready queue for q time units, where q is the time quantum. After q time units, if the process has not relinquished the CPU, it is preempted, and the process is put at the tail of the ready queue. The major problem is the selection of the time quantum. If the quantum is too large, RR scheduling degenerates to FCFS scheduling; if the quantum is too small, scheduling overhead in the form of context-switch time becomes excessive.

5.7 Key Words/Abbreviations

Scheduling: Scheduling is a method that is used to distribute valuable computing resources, usually processor time, bandwidth and memory, to the various processes, threads, data flows and applications that need them. Scheduling is done to balance the load on the system, ensure equal distribution of resources, and give some prioritization according to set rules. This ensures that a computer system is able to serve all requests and achieve a certain quality of service.

Preemptive: Preemptive scheduling is the scheduling that is done when a process changes from the running state to the ready state or from the waiting state to the ready state. In this, the resources are allocated to execute the process for a certain period.
Non-preemptive: Non-preemptive scheduling is used when a process terminates, or a process switches from the running to the waiting state. In this scheduling, once the resources (CPU cycles) are allocated to a process, the process holds the CPU till it gets terminated or it reaches a waiting state.

PCB: Process Control Block.

5.8 Learning Activity

1. Explain scheduling queues.

-----------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------

2. Explain CPU utilization.

-----------------------------------------------------------------------------------------------------------

-----------------------------------------------------------------------------------------------------------

5.9 Unit End Questions (MCQ and Descriptive)

A. Descriptive Type Questions

1. Explain process scheduling.
2. Explain process scheduling queues.
3. Explain different types of schedulers.
4. Explain preemptive and non-preemptive scheduling.
5. Explain different scheduling criteria for CPU scheduling.

B. Multiple Choice/Objective Type Questions

1. When processes enter the system, they are put into a __________.
(a) device queue (b) job queue (c) ready queue (d) queue
2. _____________ are the sets of processes waiting for an I/O device.
(a) Job queues (b) Queues (c) Ready queues (d) Device queues

3. When a process switches from the running state to the ________ state.
(a) ready (b) initializing (c) waiting (d) terminated

4. When a process switches from the waiting state to the ________ state.
(a) ready (b) waiting (c) initializing (d) terminated

5. The __________ scheduler, present in all systems with virtual memory, temporarily removes processes from main memory and places them on secondary memory (such as a disk drive) or vice versa.
(a) mid-term (b) short-term (c) long-term (d) first-term

Answers: 1. (b), 2. (d), 3. (c), 4. (a), 5. (a)

5.10 References

Reference Books

1. http://www2.latech.edu/~box/os/ch05.pdf
2. https://www.cse.huji.ac.il/course/2010/os/notes/os-notes.pdf

Web Resources

1. https://www.includehelp.com/operating-systems/process-scheduling-in-operating-system.aspx
2. https://www.tutorialspoint.com/operating_system/os_process_scheduling.htm
3. http://www.padakuu.com/article/187-summary-of-cpu-scheduling
4. https://www.studytonight.com/operating-system/process-scheduling
5. https://www.geeksforgeeks.org/process-schedulers-in-operating-system/
UNIT 6 SCHEDULING ALGORITHMS

Structure:
6.0 Learning Objectives
6.1 Introduction
6.2 Preemptive and Non-preemptive
6.3 FCFS
6.4 SJF
6.5 RR
6.6 Multiprocessor Scheduling
6.6.1 Multiprocessor Operating System
6.6.2 Multiprocessing Scheduling
6.7 Types
6.8 Performance Evaluation of the Scheduling
6.8.1 Algorithm Evaluation
6.8.2 Evaluation of Process Scheduling Algorithms
6.9 Summary
6.10 Key Words/Abbreviations
6.11 Learning Activity
6.12 Unit End Questions (MCQ and Descriptive)
6.13 References
6.0 Learning Objectives

After studying this unit, you should be able to:

Explain the various types of scheduling.
Describe various CPU scheduling algorithms.
Solve numerical problems based upon various scheduling algorithms.

6.1 Introduction

Process scheduling is a task of the operating system that schedules processes in their different states (ready, running, waiting) according to their priorities. This task is very useful in maintaining the computer system. Process scheduling allocates the time interval of each process in which the process is to be executed by the central processing unit (CPU). Process scheduling is an essential part of multiprogramming operating systems. Such operating systems allow more than one process to be loaded into the executable memory at a time, and the loaded processes share the CPU using time multiplexing.

6.2 Preemptive and Non-preemptive

Explain the CPU scheduler. What do you mean by long-term and short-term scheduling? State the difference between them.

CPU Scheduler

A process migrates between the various scheduling queues throughout its lifetime. The OS must select processes from these queues in some fashion. The appropriate scheduler carries out the selection process.

1. The long-term scheduler (job scheduler) selects processes from the job pool and loads them into memory for execution.
2. The short-term scheduler (CPU scheduler) selects from among the processes that are ready to execute, and allocates the CPU to one of them.
3. The primary difference between these schedulers is the frequency of their execution.
4. The short-term scheduler must select a new process for the CPU quite frequently; it executes at least once every 100 milliseconds. Because of the short duration of time between executions, the short-term scheduler must be very fast.
5. The long-term scheduler executes much less frequently. The long-term scheduler controls the degree of multiprogramming. It is important that the long-term scheduler make a careful selection, as a process can be either I/O bound or CPU bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process is one that generates I/O requests infrequently, using more of its time doing computation than an I/O-bound process does.
6. If all processes are CPU bound, the I/O waiting queues will almost always be empty, devices will go unused, and the system will be unbalanced. The system with the best performance will have a combination of CPU-bound and I/O-bound processes.
7. Some systems use an intermediate level of scheduling called the medium-term scheduler.

Schedulers

1. Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue.
2. Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU.

Addition of Medium-term Scheduling

Fig. 6.1: Swap In/Swap Out of Processes Using the Medium-term Scheduler
1. The short-term scheduler is invoked very frequently (milliseconds) and must therefore be fast.
2. The long-term scheduler is invoked very infrequently (seconds, minutes) and may therefore be slow.
3. The long-term scheduler controls the degree of multiprogramming.
4. Processes can be described as either:
I/O-bound process: spends more time doing I/O than computations; many short CPU bursts.
CPU-bound process: spends more time doing computations; few very long CPU bursts.

Write a short note on preemptive and non-preemptive scheduling.

Preemptive scheduling: CPU scheduling decisions may take place under the following four circumstances:
1. When a process switches from the running state to the waiting state (for example, as the result of an I/O request or an invocation of wait for the termination of one of the child processes).
2. When a process switches from the running state to the ready state (for example, when an interrupt occurs).
3. When a process switches from the waiting state to the ready state (for example, at completion of I/O).
4. When a process terminates.

For circumstances 1 and 4, there is no choice in terms of scheduling: a new process must be selected for execution. There is a choice for circumstances 2 and 3. When scheduling takes place only under circumstances 1 and 4, we say that the scheduling scheme is non-preemptive or cooperative; otherwise, it is preemptive. Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU, either by terminating or by switching to the waiting state.
Disadvantages:
1. Preemptive scheduling incurs a cost associated with access to shared data. Consider the case of two processes that share data. While one is updating the data, it is preempted so that the second process can run; the second process then tries to read the data, which are in an inconsistent state. In such situations we need mechanisms to coordinate access to shared data.
2. Preemption also affects the design of the OS kernel. During the processing of a system call, the kernel may be busy with an activity on behalf of a process. Such an activity may involve changing important kernel data, such as an I/O request queue. What happens if the process is preempted in the middle of these changes and the kernel needs to read or modify the same structure?

6.3 FCFS

The simplest CPU scheduling algorithm is the first-come, first-served (FCFS) scheduling algorithm. With this scheme, the process that requests the CPU first is allocated the CPU first. The implementation of the FCFS policy is easily managed with a FIFO queue. When a process enters the ready queue, its PCB is linked onto the tail of the queue. When the CPU is free, it is allocated to the process at the head of the queue, and the running process is then removed from the queue. The code for FCFS scheduling is simple to write and understand.

The average waiting time under the FCFS policy, however, is often quite long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Table 6.1: Processes with Burst Time
Process    Burst Time
P1         24
P2         3
P3         3
If the processes arrive in the order P1, P2, P3, and are served in FCFS order, we get the result shown in the following Gantt chart:

P1 | P2 | P3
0    24   27   30

Fig. 6.2: Gantt Chart for FCFS

The waiting time is 0 milliseconds for P1, 24 milliseconds for P2, and 27 milliseconds for P3. Thus, the average waiting time is (0 + 24 + 27)/3 = 17 milliseconds. If the processes arrive in the order P2, P3, P1, the results are as follows:

P2 | P3 | P1
0    3    6    30

Fig. 6.3: Gantt Chart when the Processes Arrive in the Order P2, P3, P1

The average waiting time is now (0 + 3 + 6)/3 = 3 milliseconds. This reduction is substantial. Thus, the average waiting time under an FCFS policy is generally not minimal and may vary substantially if the processes' CPU burst times vary greatly.

In addition, consider the performance of FCFS scheduling in a dynamic situation. Assume we have one CPU-bound process and many I/O-bound processes. As the processes flow around the system, the following scenario may result. The CPU-bound process will get and hold the CPU. During this time, all the other processes will finish their I/O and move into the ready queue, waiting for the CPU. While they wait in the ready queue, the I/O devices are idle. Eventually, the CPU-bound process finishes its CPU burst and moves to an I/O device. All the I/O-bound processes, which have short CPU bursts, execute quickly and move back to the I/O queues. At this point the CPU sits idle. The CPU-bound process will then move back to the ready queue and be allocated the CPU. Again, all the I/O-bound processes end up waiting in the ready queue until the CPU-bound process is done. There is a convoy effect as all the other processes wait for the one big process to get off the CPU. This effect results in lower CPU and device utilization than might be possible if the shorter processes were allowed to go first.

The FCFS scheduling algorithm is non-preemptive. Once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O. The FCFS algorithm is thus particularly troublesome for time-sharing systems, where it is important that each user get a share of the CPU at regular intervals. It would be disastrous to allow one process to keep the CPU for an extended period.
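The FIFO-queue mechanics described above translate directly into a few lines of code. Below is a minimal sketch in Python (the fcfs helper is our own illustration, not an OS interface), using the burst times from Table 6.1:

```python
from collections import deque

def fcfs(processes):
    """First-come, first-served for processes that all arrive at time 0.

    processes: list of (name, burst_time) pairs in arrival order.
    Returns a dict mapping each process name to its waiting time.
    """
    ready_queue = deque(processes)           # PCBs linked onto the tail
    clock = 0
    waiting = {}
    while ready_queue:
        name, burst = ready_queue.popleft()  # CPU given to the head of the queue
        waiting[name] = clock                # time spent waiting before running
        clock += burst                       # runs to completion (non-preemptive)
    return waiting

waits = fcfs([("P1", 24), ("P2", 3), ("P3", 3)])
print(waits)                             # {'P1': 0, 'P2': 24, 'P3': 27}
print(sum(waits.values()) / len(waits))  # 17.0 ms, matching the text
```

Feeding the same function the arrival order P2, P3, P1 instead yields waits of 0, 3, and 6 ms, reproducing the 3 ms average of Fig. 6.3.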
6.4 SJF

A different approach to CPU scheduling is the shortest-job-first (SJF) scheduling algorithm. This algorithm associates with each process the length of the process's next CPU burst. When the CPU is available, it is assigned to the process that has the smallest next CPU burst. If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie. This algorithm is also called the shortest-next-CPU-burst algorithm, since scheduling depends on the length of the next CPU burst of a process rather than on its total length. Consider the following set of processes, with the length of the CPU burst given in milliseconds:

Table 6.2: Processes and Burst Time
Process    Burst Time
P1         6
P2         8
P3         7
P4         3

Using SJF scheduling, we would schedule these processes according to the following Gantt chart:

P4 | P1 | P3 | P2
0    3    9    16   24

Fig. 6.4: Gantt Chart for SJF

The waiting time is 3 milliseconds for process P1, 16 milliseconds for process P2, 9 milliseconds for process P3, and 0 milliseconds for process P4. Thus, the average waiting time is (0 + 3 + 9 + 16)/4 = 7 milliseconds. If we were using the FCFS algorithm, the average waiting time would be 10.25 milliseconds.
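Because every process here is ready at time 0, non-preemptive SJF amounts to ordering the processes by their next CPU burst. A minimal sketch under that assumption (the sjf helper is our own illustration):

```python
def sjf(processes):
    """Non-preemptive SJF for processes that are all ready at time 0.

    processes: list of (name, next_burst) pairs in arrival order.
    Returns a dict mapping each process name to its waiting time.
    """
    waiting = {}
    clock = 0
    # Pick the smallest next CPU burst each time. Python's sort is stable,
    # so ties are broken in FCFS (arrival) order, matching the tie-break rule.
    for name, burst in sorted(processes, key=lambda p: p[1]):
        waiting[name] = clock
        clock += burst
    return waiting

waits = sjf([("P1", 6), ("P2", 8), ("P3", 7), ("P4", 3)])
print(waits)                             # {'P4': 0, 'P1': 3, 'P3': 9, 'P2': 16}
print(sum(waits.values()) / len(waits))  # 7.0 ms, matching the text
```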
The SJF scheduling algorithm is provably optimal, in that it gives the minimum average waiting time for a given set of processes. Moving a short process before a long one decreases the waiting time of the short process more than it increases the waiting time of the long process; consequently, the average waiting time decreases.

Disadvantage: The real difficulty with this algorithm is knowing the length of the next CPU request. For long-term (job) scheduling in a batch system, we can use as the length the process time limit that a user specifies when submitting the job. Users are thus motivated to estimate the process time limit accurately, since a lower value may mean faster response. SJF scheduling is therefore frequently used in long-term scheduling.

Although the SJF algorithm is optimal, it cannot be implemented at the level of short-term CPU scheduling, as there is no way to know the length of the next CPU burst. One approach is to try to approximate SJF scheduling: we may not know the length of the next CPU burst, but we may be able to predict its value. The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts. Let tn be the length of the nth CPU burst, and let τn+1 be the predicted value for the next CPU burst. Then, for α, 0 ≤ α ≤ 1, define:

τn+1 = α·tn + (1 – α)·τn

This formula defines an exponential average. The value tn contains our most recent information, while τn stores the past history (the previous prediction). The parameter α controls the relative weight of recent and past history in our prediction: with α = 0 the most recent burst has no effect, and with α = 1 only the most recent burst matters.
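To see the exponential average in action, here is a minimal sketch (the helper name, the initial guess tau0 = 10, and the sample burst lengths are made-up values for illustration):

```python
def predict_next_burst(bursts, alpha=0.5, tau0=10.0):
    """Predict the next CPU burst by exponential averaging.

    bursts: measured burst lengths t0, t1, ..., tn (most recent last).
    Repeatedly applies tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n.
    """
    tau = tau0
    for t in bursts:
        tau = alpha * t + (1 - alpha) * tau
    return tau

# With alpha = 0.5, each prediction is the average of the last measured
# burst and the previous prediction: 10 -> 8 -> 7 -> 6.5
print(predict_next_burst([6, 6, 6]))  # 6.5
```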
The SJF algorithm can be preemptive or non-preemptive. The choice arises when a new process arrives at the ready queue while a previous process is still executing. The next CPU burst of the newly arrived process may be shorter than what is left of the currently executing process. A preemptive SJF algorithm will preempt the currently executing process, whereas a non-preemptive SJF algorithm will allow the currently executing process to finish its CPU burst. Preemptive SJF scheduling is also called shortest-remaining-time-first scheduling. Consider the following four processes, with the length of the CPU burst given in milliseconds:

Table 6.3: Processes with Arrival Time and Burst Time
Process    Arrival Time    Burst Time
P1         0               8
P2         1               4
P3         2               9
P4         3               5

If the processes arrive at the ready queue at the times shown and need the indicated burst times, then the resulting preemptive SJF schedule is as depicted in the following Gantt chart:

P1 | P2 | P4 | P1 | P3
0    1    5    10   17   26

Fig. 6.5: Gantt Chart for Preemptive SJF

Process P1 is started at time 0, since it is the only process in the queue. Process P2 arrives at time 1. The remaining time for process P1 (7 milliseconds) is larger than the time required by process P2 (4 milliseconds), so process P1 is preempted and process P2 is scheduled. The average waiting time for this example is ((10 – 1) + (1 – 1) + (17 – 2) + (5 – 3))/4 = 26/4 = 6.5 milliseconds.

Non-preemptive SJF scheduling:

P1 | P2 | P4 | P3
0    8    12   17   26

Fig. 6.6: Gantt Chart for Non-preemptive SJF

Process P1 is started at time 0, since it is the only process in the queue, so its waiting time is 0. Process P2 arrives at time 1 but must wait until P1 finishes at time 8, so it waits 8 – 1 = 7 milliseconds. Process P4 runs next because its 5-millisecond burst is shorter than P3's 9-millisecond burst; having arrived at time 3, it starts at time 12 after waiting 12 – 3 = 9 milliseconds. Finally, P3, which arrived at time 2, starts at time 17 after waiting 17 – 2 = 15 milliseconds. Hence the average waiting time for non-preemptive SJF is (0 + (8 – 1) + (17 – 2) + (12 – 3))/4 = 31/4 = 7.75 milliseconds.
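A tick-by-tick simulation makes the preemptive behaviour easy to verify against Fig. 6.5. This is a sketch only (the srtf helper and its tuple format are our own), using the processes of Table 6.3:

```python
def srtf(processes):
    """Preemptive SJF (shortest remaining time first), one tick at a time.

    processes: list of (name, arrival_time, burst_time) tuples.
    Returns a dict mapping each name to its waiting time,
    computed as completion - arrival - burst.
    """
    arrival = {name: at for name, at, _ in processes}
    burst = {name: bt for name, _, bt in processes}
    remaining = dict(burst)
    waiting = {}
    clock = 0
    while remaining:
        # Re-decide every tick: the arrived process with the least
        # remaining time gets the CPU, so new arrivals can preempt.
        ready = [n for n in remaining if arrival[n] <= clock]
        if not ready:                    # CPU idles until the next arrival
            clock += 1
            continue
        current = min(ready, key=lambda n: remaining[n])
        remaining[current] -= 1
        clock += 1
        if remaining[current] == 0:
            waiting[current] = clock - arrival[current] - burst[current]
            del remaining[current]
    return waiting

waits = srtf([("P1", 0, 8), ("P2", 1, 4), ("P3", 2, 9), ("P4", 3, 5)])
print(waits)                             # {'P2': 0, 'P4': 2, 'P1': 9, 'P3': 15}
print(sum(waits.values()) / len(waits))  # 6.5 ms, matching the text
```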
6.5 RR

The round-robin (RR) scheduling algorithm is designed especially for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. A small unit of time, called a time quantum or time slice, is defined; a time quantum is generally from 10 to 100 milliseconds. The ready queue is treated as a circular queue. The CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.

To implement RR scheduling, we keep the ready queue as a FIFO queue of processes. New processes are added to the tail of the ready queue. The CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process. One of two things will then happen. The process may have a CPU burst of less than 1 time quantum; in this case the process itself will release the CPU voluntarily, and the scheduler will proceed to the next process in the ready queue. Otherwise, if the CPU burst of the currently running process is longer than 1 time quantum, the timer will go off and cause an interrupt to the OS. A context switch will be executed, and the process will be put at the tail of the ready queue. The CPU scheduler will then select the next process from the ready queue.

The average waiting time under the RR policy is often long. Consider the following set of processes that arrive at time 0, with the length of the CPU burst given in milliseconds:

Table 6.4: Processes and Burst Time
Process    Burst Time
P1         24
P2         3
P3         3

If we use a time quantum of 4 milliseconds, then process P1 gets the first 4 milliseconds. Since it requires another 20 milliseconds, it is preempted after the first time quantum, and the CPU is given to the next process in the queue, process P2. Since process P2 does not need 4 milliseconds, it quits before its time quantum expires. The CPU is then given to the next process, process P3. Once each process has received 1 time quantum, the CPU is returned to process P1 for an additional time quantum. The resulting RR schedule is:

P1 | P2 | P3 | P1 | P1 | P1 | P1 | P1
0    4    7    10   14   18   22   26   30

Fig. 6.7: Gantt Chart for RR

The average waiting time is 17/3 = 5.66 milliseconds. In the RR scheduling algorithm, no process is allocated the CPU for more than 1 time quantum in a row unless it is the only runnable process. If a process's CPU burst exceeds 1 time quantum, that process is preempted and put back in the ready queue. The RR scheduling algorithm is thus preemptive.
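The circular-queue behaviour can likewise be captured in a short simulation. A minimal sketch (the round_robin helper is our own illustration), reproducing Fig. 6.7 for a 4 ms quantum:

```python
from collections import deque

def round_robin(processes, quantum):
    """Round robin for processes that all arrive at time 0.

    processes: list of (name, burst_time) pairs in arrival order.
    Returns a dict mapping each process name to its waiting time.
    """
    queue = deque(processes)
    total_burst = dict(processes)
    clock = 0
    waiting = {}
    while queue:
        name, left = queue.popleft()
        run = min(quantum, left)     # one quantum, or less if the burst ends
        clock += run
        if left > run:
            queue.append((name, left - run))  # preempted: back to the tail
        else:
            # All arrivals are at time 0, so waiting = completion - burst.
            waiting[name] = clock - total_burst[name]
    return waiting

waits = round_robin([("P1", 24), ("P2", 3), ("P3", 3)], quantum=4)
print(waits)                             # {'P2': 4, 'P3': 7, 'P1': 6}
print(sum(waits.values()) / len(waits))  # 17/3 = 5.66... ms, matching the text
```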
6.6 Multiprocessor Scheduling

6.6.1 Multiprocessor Operating System

A multiprocessor system consists of several processors that share memory; there is more than one processor in the system. A reason to use a multiprocessor is that the computational load on a single processor can be very high even when little input/output or other work is required. This type of operating system is also more reliable, since even if one processor goes down, the others can continue to work. Such a system is relatively cheap because only the processors are duplicated, while other devices, such as input/output units and memory, are shared.

In a multiprocessor system, all the processors operate under a single operating system. The multiplicity of processors, and how the processors work together, is transparent to the user: the user does not know on which processor a process runs. A process may be divided into several smaller processes that work independently on different processors. A system can be both multiprogrammed, by having multiple programs running at the same time, and multiprocessing, by having more than one physical processor. In the following diagram, there is more than one CPU and the CPUs share a common memory.

[Figure: two CPUs connected to a shared main memory, each with its own I/O processors and I/O units.]
Fig. 6.8: Common Memory Shared by Multiple CPUs
6.6.2 Multiprocessing Scheduling

In multiprocessor scheduling, there are multiple CPUs that share the load so that various processes run simultaneously. In general, multiprocessor scheduling is more complex than single-processor scheduling. When the processors are identical, we can run any process on any processor at any time. The multiple CPUs in the system are in close communication and share a common bus, memory, and other peripheral devices, so we say the system is tightly coupled. These systems are used when we want to process a bulk amount of data, mainly in applications such as satellite control and weather forecasting.

Multiprocessing systems work on the concept of the symmetric multiprocessing model, in which each processor runs an identical copy of the operating system and these copies communicate with each other. With the help of such a system we can save money, because peripherals, power supplies, and other devices are shared.
The most important advantage is that we can do more work in a short period of time. If one processor fails in a multiprocessor system, the whole system does not halt; only the speed of the system is reduced. The overall performance of the multiprocessing system is managed by the operating system, which assigns different tasks to the different processors in the system. In a multiprocessing system, a process is broken into threads that can run independently, and such systems allow the threads to run on more than one processor simultaneously. Because the various processes run in parallel, these systems are called parallel processors; parallel processing is the ability of the CPUs to run various processes simultaneously. In a multiprocessing system, resources are shared dynamically among the various processors.

A multiprocessor operating system is a kind of regular OS: it handles many system calls at the same time, does memory management, and provides file management and input/output device management. In addition, there are some extra features that a multiprocessor OS provides:
Process synchronization
Resource management
Scheduling

There are various organizations of multiprocessor operating systems:

1. Each CPU has its own OS: In this type of organization, there are many central processing units in the system, and each CPU has its own private operating system; memory is shared among all the processors, and the input/output system is also shared. The whole system is connected by a single bus.

[Figure: four CPUs, each with its own private OS, connected by a single bus to shared memory (holding each CPU's private data and the OS code) and shared I/O.]
Fig. 6.9: Each CPU with Its Own OS
2. Master-slave multiprocessor: In this type of multiprocessor model, there is a single data structure that keeps track of the ready processes. One central processing unit works as the master and the other central processing units work as slaves. All the processors are handled by a single processor, called the master server. The master server runs the operating system processes and the slave servers run the user processes. The memory and input/output devices are shared among all the processors, and all the processors are connected to a common bus. Because this arrangement is simple and reduces data sharing, it is called asymmetric multiprocessing.

[Figure: four CPUs on a common bus with shared memory and I/O; CPU 1 is the master and runs the OS, while CPUs 2, 3, and 4 are slaves running user processes.]
Fig. 6.10: Master-Slave Multiprocessor

3. Symmetric multiprocessor: Symmetric multiprocessing (SMP) is the third model. In this model, there is one copy of the OS in memory, but any central processing unit can run it. When a system call is made, the central processing unit on which it was made traps to the kernel and then processes that system call. This model balances processes and memory dynamically. Each processor is self-scheduling: scheduling proceeds by having the scheduler for each processor examine the ready queue and select a process to execute. All the processes may be in a common ready queue, or each processor may have its own private queue of ready processes.
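To make the two ready-queue designs concrete, here is a toy sketch of self-scheduling CPUs with private per-CPU queues (entirely our own construction, not any real kernel's interface):

```python
from collections import deque

class SMPScheduler:
    """Toy model of SMP self-scheduling with per-CPU private ready queues.

    With one common ready queue, the same picking code applies, but the
    shared queue must then be protected by a lock, because several
    self-scheduling CPUs would remove processes from it concurrently.
    """
    def __init__(self, num_cpus):
        self.queues = [deque() for _ in range(num_cpus)]
        self.next_cpu = 0

    def admit(self, process):
        # Spread new processes round-robin over the CPUs to balance load.
        self.queues[self.next_cpu].append(process)
        self.next_cpu = (self.next_cpu + 1) % len(self.queues)

    def pick_next(self, cpu_id):
        # Each CPU schedules itself from its own private queue.
        q = self.queues[cpu_id]
        return q.popleft() if q else None

smp = SMPScheduler(num_cpus=2)
for p in ("P1", "P2", "P3", "P4"):
    smp.admit(p)
print(smp.pick_next(0), smp.pick_next(1))  # P1 P2
```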