
MASTER OF COMPUTER APPLICATIONS SYSTEM PROGRAMMING AND OPERATING SYSTEM MCA613 Prof. Kiran Gurbani

CHANDIGARH UNIVERSITY Institute of Distance and Online Learning Course Development Committee Chairman Prof. (Dr.) R.S. Bawa Vice Chancellor, Chandigarh University, Punjab Advisors Prof. (Dr.) Bharat Bhushan, Director, IGNOU Prof. (Dr.) Majulika Srivastava, Director, CIQA, IGNOU Programme Coordinators & Editing Team Master of Business Administration (MBA) Bachelor of Business Administration (BBA) Co-ordinator - Prof. Pragya Sharma Co-ordinator - Dr. Rupali Arora Master of Computer Applications (MCA) Bachelor of Computer Applications (BCA) Co-ordinator - Dr. Deepti Rani Sindhu Co-ordinator - Dr. Raju Kumar Master of Commerce (M.Com.) Bachelor of Commerce (B.Com.) Co-ordinator - Dr. Shashi Singhal Co-ordinator - Dr. Minakshi Garg Master of Arts (Psychology) Bachelor of Science (Travel & TourismManagement) Co-ordinator - Dr. Samerjeet Kaur Co-ordinator - Dr. Shikha Sharma Master of Arts (English) Bachelor of Arts (General) Co-ordinator - Dr. Ashita Chadha Co-ordinator - Ms. Neeraj Gohlan Master of Arts (Mass Communication and Bachelor of Arts (Mass Communication and Journalism) Journalism) Co-ordinator - Dr. Chanchal Sachdeva Suri Co-ordinator - Dr. Kamaljit Kaur Academic and Administrative Management Prof. (Dr.) Pranveer Singh Satvat Prof. (Dr.) S.S. Sehgal Pro VC (Academic) Registrar Prof. (Dr.) H. Nagaraja Udupa Prof. (Dr.) Shiv Kumar Tripathi Director – (IDOL) Executive Director – USB © No part of this publication should be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording and/or otherwise without the prior written permission of the author and the publisher. SLM SPECIALLY PREPARED FOR CU IDOL STUDENTS Printed and Published by: Himalaya Publishing House Pvt. Ltd., E-mail: [email protected], Website: www.himpub.com For: CHANDIGARH UNIVERSITY Institute of Distance and Online Learning CU IDOL SELF LEARNING MATERIAL (SLM)

System Programming and Operating System Course Code: MCA613 Credits: 3 Course Objectives:  To provide knowledge of working of operating system.  To practice scheduling algorithms and memory management techniques  To analyze the importance of application of linkers, loaders and Software tools in system programming. Syllabus Unit 1 - Introduction to System Software: Machine Structure, Machine Language, Language Translators, Assemblers, Compilers, Interpreters, Linkers and Loaders. Components of System Software. Unit 2 - Basics of Operating Systems: Definition, Generations of Operating Systems, Types of Operating Systems. Unit 3 - Operating Systems: Batch Mainframe, Time Sharing, Multiprocessing, Multiprogramming, Multithreading, Real Time, Embedded, Distributed, Clustered. Unit 4 - Operating System Components: Process Management Component, Memory Management Component, I/O Management Component, File Management Component, Protection System, Networking Management Component, Command Interpreter. Unit 5 - Process Scheduling: Definition, Scheduling Objectives, Types of Schedulers, Scheduling Criteria, CPU Utilization, Throughput, Turnaround Time, Waiting Time, Response Time. Unit 6 - Scheduling Algorithms: Preemptive and Non-preemptive, FCFS, SJF, RR, Multiprocessor Scheduling, Types, Performance Evaluation of the Scheduling. Unit 7 - Inter-process Communication and Synchronization: Definition, Shared Memory System, Message Passing, Critical Section, Mutual Exclusion, Semaphores. CU IDOL SELF LEARNING MATERIAL (SLM)

Unit 8 - Deadlocks: Conditions, Modeling, Detection and Recovery, Deadlock Avoidance, Deadlock Prevention. Unit 9 - Memory Management: Multiprogramming with Fixed Partition, Variable Partitions, Virtual Memory, Paging, Demand Paging, Design and Implementation Issues in Paging Such as Page Tables. Unit 10 - Memory Management: Inverted Page Tables, Page Replacement Algorithms, Page Fault Handling, Working Set Model, Local vs. Global Allocation, Page Size, Segmentation with Paging. Unit 11 - File Systems: Concept, Access Methods, File System Structure, Directory Structure, Allocation Methods, Free Space Management, File Sharing, Protection and Recovery. Text Books: 1. Peterson, J.L., Silberschatz, A. (1983).Operating System Concepts. New Delhi: Addison Wesley. 2. Tanenbaum, A.S. (2001). Operating System. New Delhi: PHI. 3. Donavan J. (1993). System Programming. New York: Tata McGraw Hill. 4. Dhamdhere D.M. (2007). System Programming and Operating System. New Delhi: Tata McGraw Hill. Reference Books: 1. Brinch, Hansen (2005). Operating System Principles. Delhi: PHI. 2. Willams S. (2000). Operating System. Delhi: PHI. 3. Beck L. (1996). System Software. Boston: Addison Wesley Publication. CU IDOL SELF LEARNING MATERIAL (SLM)

CONTENTS
Unit 1: Introduction to System Software 1 - 17
Unit 2: Basics of Operating Systems 18 - 33
Unit 3: Operating Systems 34 - 56
Unit 4: Operating System Components 57 - 66
Unit 5: Process Scheduling 67 - 78
Unit 6: Scheduling Algorithms 79 - 102
Unit 7: Inter-process Communication and Synchronization 103 - 138
Unit 8: Deadlocks 139 - 160
Unit 9: Memory Management - 1 161 - 180
Unit 10: Memory Management - 2 181 - 211
Unit 11: File Systems 212 - 251
CU IDOL SELF LEARNING MATERIAL (SLM)


UNIT 1 INTRODUCTION TO SYSTEM SOFTWARE Structure: 1.0 Learning Objectives 1.1 Introduction 1.2 Machine Structure Machine Language 1.2.1 General Machine Structure 1.2.2 Machine Language 1.2.3 Machine Language vs Assembly Language 1.3 Language Translators 1.3.1 Assemblers 1.3.2 Compilers 1.3.3 Interpreters 1.3.4 Linkers and Loaders 1.4 Components of System Software 1.5 Summary 1.6 Key Words/Abbreviations 1.7 Learning Activity 1.8 Unit End Questions (MCQ and Descriptive) 1.9 References

2 System Programming and Operating System
1.0 Learning Objectives
After studying this unit, you should be able to:
 Analyse computer languages such as Machine Language, Assembly Language and High Level Language.
 Explain language translators: Assemblers, Compilers, Interpreters, Linkers and Loaders.
 Explain the various components of system software.
1.1 Introduction
System programming involves designing and writing computer programs that allow the computer hardware to interface with the programmer and the user, leading to the effective execution of application software on the computer system. Typical system programs include the operating system and firmware, programming tools such as compilers, assemblers, I/O routines, interpreters, schedulers, loaders and linkers, as well as the runtime libraries of the computer programming languages.
1.2 Machine Structure Machine Language
1.2.1 General Machine Structure
All conventional modern computers are based upon the concept of the stored-program computer, the model that was proposed by John von Neumann.
CU IDOL SELF LEARNING MATERIAL (SLM)

Introduction to System Software 3
Fig. 1.1: General Machine Structure (block diagram: the CPU with its instruction interpreter, Location Counter (LC), Instruction Register (IR), Working Registers (WR) and General Registers (GR); memory with the Memory Address Register (MAR), Memory Buffer Register (MBR) and memory controller; I/O channels and other CPUs, if any)
The components of a general machine are as follows:
(a) Instruction interpreter: A group of electronic circuits that performs the intent of the instruction fetched from memory.
(b) Location counter: The LC, otherwise called the program counter (PC) or instruction counter (IC), is a hardware memory device which denotes the location of the current instruction being executed.
(c) Instruction register: A copy of the current instruction (the content of the memory location addressed by the LC) is stored in the IR.
(d) Working registers are the memory devices that serve as a "scratch pad" for the instruction interpreter.
(e) General registers are used by programmers as storage locations and for special functions.
CU IDOL SELF LEARNING MATERIAL (SLM)

4 System Programming and Operating System
(f) Memory address register (MAR) contains the address of the memory location that is to be read from or stored into.
(g) Memory buffer register (MBR) contains a copy of the content of the memory location whose address is stored in the MAR. The primary interface between the memory and the CPU is through the memory buffer register.
(h) Memory controller is a hardware device whose work is to transfer the content of the MBR to the core memory location whose address is stored in the MAR.
(i) I/O channels may be thought of as separate computers which interpret special instructions for inputting and outputting information from the memory.
1.2.2 Machine Language
Machine language, or machine code, is a low-level language comprised of binary digits (ones and zeros). High-level languages, such as Swift and C++, must be compiled into machine language before the code is run on a computer.
Since computers are digital devices, they only recognize binary data. Every program, video, image, and character of text is represented in binary. This binary data, or machine code, is processed as input by the CPU. The resulting output is sent to the operating system or an application, which displays the data visually. For example, the ASCII value for the letter "A" is 01000001 in machine code, but this data is displayed as "A" on the screen. An image may have thousands or even millions of binary values that determine the color of each pixel.
While machine code is comprised of 1s and 0s, different processor architectures use different machine code. For example, a PowerPC processor, which has a RISC architecture, requires different code than an Intel x86 processor, which has a CISC architecture. A compiler must compile high-level source code for the correct processor architecture in order for a program to run correctly.
1.2.3 Machine Language vs Assembly Language
Machine language and assembly language are both low-level languages, but machine code is below assembly in the hierarchy of computer languages. Assembly language includes
CU IDOL SELF LEARNING MATERIAL (SLM)

Introduction to System Software 5
human-readable commands, such as mov, add, and sub, while machine language does not contain any words or even letters. Some developers manually write assembly language to optimize a program, but they do not write machine code. Only developers who write software compilers need to worry about machine language.
NOTE: While machine code is technically comprised of binary data, it may also be represented in hexadecimal values. For example, the letter "Z," which is 01011010 in binary, may be displayed as 5A in hexadecimal code.
1.3 Language Translators
 Computers only understand machine code (binary); this is an issue because programmers prefer to use a variety of high- and low-level programming languages instead.
 To get around the issue, the high-level and low-level program code (source code) needs to pass through a translator.
 A translator will convert the source code into machine code (object code).
 The following are the several types of translator programs, each able to perform different tasks.
(a) Assemblers (b) Compilers (c) Interpreters (d) Linkers (e) Loaders
1.3.1 Assemblers
The Assembler is used to translate a program written in Assembly language into machine code. The source program, which contains assembly language instructions, is the input to the assembler. The output generated by the assembler is the object code, or machine code, understandable by the computer.
CU IDOL SELF LEARNING MATERIAL (SLM)
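The binary and hexadecimal representations used in Sections 1.2.2 and 1.2.3 can be made concrete with a few lines of C. The sketch below prints the bit pattern and the hexadecimal value of the ASCII characters "A" and "Z"; the helper name print_bits is purely an illustrative choice.

```c
#include <stdio.h>

/* Print the 8-bit binary pattern of a character, most significant bit first. */
static void print_bits(unsigned char c)
{
    for (int bit = 7; bit >= 0; bit--)
        putchar(((c >> bit) & 1) ? '1' : '0');
}

int main(void)
{
    unsigned char letters[] = { 'A', 'Z' };

    for (int i = 0; i < 2; i++) {
        printf("'%c' = ", letters[i]);
        print_bits(letters[i]);                           /* e.g. 'A' -> 01000001 */
        printf(" (binary) = %02X (hex)\n", letters[i]);   /* e.g. 'Z' -> 5A       */
    }
    return 0;
}
```

Running it prints 'A' = 01000001 (41 in hex) and 'Z' = 01011010 (5A in hex), matching the NOTE above.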

6 System Programming and Operating System
Source Code (Assembly Language) → Assembler → Object Code (Machine Language)
Fig. 1.2: Assembler Working
 Basic functions of an Assembler
 Translate mnemonic opcodes to machine language.
 Convert symbolic operands to their machine addresses.
 Build machine instructions in the proper format.
 Convert data constants into machine representation.
 Error checking is provided.
 Changes can be quickly and easily incorporated with a reassembly.
 Variables are represented by symbolic names, not as memory locations.
 Assembly language statements are written one per line. An assembly language program thus consists of a sequence of assembly language statements, where each statement contains a mnemonic (a small sketch of this translation follows this section).
 Advantages
 Reduced errors
 Faster translation times
 Changes could be made easier and faster
 Addresses are symbolic, not absolute
 Easy to remember.
 Disadvantages
 Assembly language is unique to a specific type of computer
 Programs are not portable to other computers
 Many instructions are required to achieve small tasks
 The programmer requires knowledge of the processor architecture and instruction set.
CU IDOL SELF LEARNING MATERIAL (SLM)
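As a rough illustration of the first two functions listed above (translating mnemonic opcodes and building instructions in the proper format), the sketch below assembles statements of a tiny, invented instruction set into an opcode byte followed by an operand byte. The mnemonics LOAD/ADD/STORE/HALT and their opcode values are hypothetical and chosen only for this example; a real assembler would also maintain a symbol table to resolve symbolic operands.

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical one-byte opcodes for a toy instruction set (illustrative only). */
struct opcode { const char *mnemonic; unsigned char code; };
static const struct opcode optab[] = {
    { "LOAD",  0x01 },
    { "ADD",   0x02 },
    { "STORE", 0x03 },
    { "HALT",  0xFF },
};

/* Translate one "MNEMONIC operand" statement into machine bytes. */
static int assemble(const char *stmt, unsigned char out[2])
{
    char mnem[16];
    unsigned operand = 0;
    int n = sscanf(stmt, "%15s %u", mnem, &operand);

    if (n < 1)
        return 0;                              /* empty statement          */
    for (size_t i = 0; i < sizeof optab / sizeof optab[0]; i++) {
        if (strcmp(mnem, optab[i].mnemonic) == 0) {
            out[0] = optab[i].code;            /* opcode byte              */
            out[1] = (unsigned char)operand;   /* operand byte, if present */
            return (n == 2) ? 2 : 1;
        }
    }
    return 0;                                  /* unknown mnemonic: error  */
}

int main(void)
{
    const char *program[] = { "LOAD 10", "ADD 11", "STORE 12", "HALT" };

    for (int i = 0; i < 4; i++) {
        unsigned char bytes[2];
        int len = assemble(program[i], bytes);
        printf("%-8s ->", program[i]);
        for (int b = 0; b < len; b++)
            printf(" %02X", bytes[b]);
        printf("\n");
    }
    return 0;
}
```

Each source statement maps to a short machine-code sequence (for example, "LOAD 10" becomes 01 0A), which is the essence of the one-statement-to-one-instruction translation described above.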

Introduction to System Software 7
1.3.2 Compilers
Compilers are used to translate a program written in a high-level language into machine code (object code). Once compiled (all in one go), the translated program file can then be directly used by the computer and is independently executable.
Compiling may take some time, but the translated program can be used again and again without the need for recompilation. An error report is often produced after the full program has been translated. Errors in the program code may cause a computer to crash. These errors can only be fixed by changing the original source code and compiling the program again.
 Advantages
 The whole program is validated at compile time, so translation errors are caught before the program runs.
 The executable file is optimized by the compiler, so it runs faster.
 Users do not have to run the program on the same machine on which it was created.
 Disadvantages
 Execution cannot begin until the whole program has been compiled, so it is slow to get started.
 It is not easy to debug, as errors are reported only at the end of compilation.
 It is hardware specific: the object code works only for a specific machine language and architecture.
1.3.3 Interpreters
Source Program + Input → Interpreter → Output
Fig. 1.3: Working of Interpreter
 Interpreter programs are able to read, translate and execute one statement at a time from a high-level language program.
CU IDOL SELF LEARNING MATERIAL (SLM)

8 System Programming and Operating System
 The interpreter stops when a line of code is reached that contains an error.
 Interpreters are often used during the development of a program. They make debugging easier as each line of code is analysed and checked before execution.
 Interpreted programs will launch immediately, but your program may run slower than a compiled file.
 No executable file is produced. The program is interpreted again from scratch every time you launch it.
Difference between Compiler, Interpreter and Assembler
Compiler:
 Translates high-level languages into machine code.
 An executable file of machine code is produced (object code).
 Compiled programs no longer need the compiler.
 An error report is produced once the entire program is compiled; these errors may cause your program to crash.
 Compiling may be slow, but the resulting program code will run quickly (directly on the processor).
 One high-level language statement may be several lines of machine code when compiled.
Interpreter:
 Temporarily executes high-level languages, one statement at a time.
 No executable file of machine code is produced (no object code).
 Interpreted programs cannot be used without the interpreter.
 An error message is produced immediately (and the program stops at that point).
 Interpreted code is run through the interpreter (IDE), so it may be slow, e.g. to execute program loops.
Assembler:
 Translates low-level assembly code into machine code.
 An executable file of machine code is produced (object code).
 Assembled programs no longer need the assembler.
 One low-level language statement is usually translated into one machine code instruction.
1.3.4 Linkers and Loaders
Linkers
A linker is a program in a system which links the object modules of a program into a single object file. It performs the process of linking. Linkers are also called link editors.
CU IDOL SELF LEARNING MATERIAL (SLM)

Introduction to System Software 9
Linking is the process of collecting and combining pieces of code and data into a single file. The linker also links particular modules into the system library. It takes object modules from the assembler as input and forms an executable file as output for the loader.
Linking is performed both at compile time, when the source code is translated into machine code, and at load time, when the program is loaded into memory by the loader. Linking is performed as the last step in compiling a program.
Source code → Compiler → Assembler → Object code → Linker → Executable file → Loader
Linking is of Two Types
1. Static Linking – It is performed during the compilation of the source program; that is, linking is performed before execution. It takes a collection of relocatable object files and command-line arguments and generates a fully linked object file that can be loaded and run. A static linker performs two major tasks:
 Symbol resolution – It associates each symbol reference with exactly one symbol definition. Every symbol has a predefined task.
 Relocation – It relocates code and data sections and modifies symbol references so that they point to the relocated memory locations.
The static linker copies all library routines used in the program into the executable image. As a result, it requires more memory space; but because it does not require the presence of the library on the system when the program is run, it is faster and more portable, with fewer chances of failure or error.
2. Dynamic Linking – Dynamic linking is performed at run time. This linking is accomplished by placing the name of a shareable library in the executable image. There are more chances of error and failure, but it requires less memory space, as multiple programs can share a single copy of the library.
CU IDOL SELF LEARNING MATERIAL (SLM)

10 System Programming and Operating System
With dynamic linking we can also perform code sharing: when the same object is used a number of times in a program, instead of linking the same object into the program again and again, each module shares the information of that object with the other modules that use it. The shared library needed for the linking is stored in virtual memory to save RAM. In this kind of linking we can also relocate the code for the smooth running of the program, but not all of the code is relocatable; the addresses are fixed at run time.
Loaders
A loader is a program that performs the functions of a linker program and then immediately schedules the resulting executable program for some kind of action. In other words, a loader accepts the object program, prepares it for execution by the computer and then initiates the execution. It is not necessary for the loader to save a program as an executable file.
Fig. 1.4: An input program (card deck) is translated by the assembler from a source program into an object program (on tape), which is loaded into main memory by the loader for execution.
The functions performed by a loader are as follows:
1. Memory allocation: allocates space in memory for the program.
2. Linking: resolves symbolic references between the different objects.
3. Relocation: adjusts all the address-dependent locations, such as address constants, in order to correspond to the allocated space.
4. Loading: places the instructions and data into memory.
CU IDOL SELF LEARNING MATERIAL (SLM)
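Dynamic linking and run-time loading, as described above, can be observed on a POSIX system through the dlopen/dlsym interface. The hedged sketch below loads the shared math library at run time and resolves the cos symbol from it; the library name libm.so.6 is typical of Linux systems and may differ elsewhere, so this is an illustration rather than a portable recipe.

```c
#include <stdio.h>
#include <dlfcn.h>   /* dlopen, dlsym, dlclose (link with -ldl on older systems) */

int main(void)
{
    /* Load the shared library at run time instead of linking it statically. */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the address of the cos() routine inside the shared library. */
    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (!cosine) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));   /* prints 1.000000 */

    dlclose(handle);                          /* unload when no longer needed */
    return 0;
}
```

Because only the library's name is recorded in the executable image and the code itself is shared, several running programs can use one in-memory copy of the library, which is exactly the memory saving attributed to dynamic linking above.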

Introduction to System Software 11
Functions of Loader: The loader is responsible for activities such as allocation, linking, relocation and loading.
(a) It allocates space for the program in memory by calculating the size of the program. This activity is called allocation.
(b) It resolves the symbolic references (code/data) between the object modules by assigning all the user subroutine and library subroutine addresses. This activity is called linking.
(c) There are some address-dependent locations in the program, such as address constants, which must be adjusted according to the allocated space. This activity, done by the loader, is called relocation.
(d) Finally, it places all the machine instructions and data of the corresponding programs and subroutines into memory. The program thus becomes ready for execution; this activity is called loading.
1.4 Components of System Software
Software is generally divided into two types: system software that keeps everything working, and application software that allows a user to accomplish some task (even if that task is playing solitaire). In this module, we will look primarily at system software. Application software and a third category, malware, will be discussed in following modules.
System software has the task of making your computer a usable system. All application programs work with the system software to accomplish their tasks. System software has three components: the operating system, system utilities (OS helpers), and drivers. As can be seen in Fig. 1.5, the OS interacts with hardware through drivers.
CU IDOL SELF LEARNING MATERIAL (SLM)

12 System Programming and Operating System
It consists of the following components:
Fig. 1.5: Layered Structure of OS (User → Application Programs → System Software → Device Drivers → Hardware Devices)
 Device Driver: This is a computer program that allows higher-level computer programs to interact with the computer hardware. A device driver simplifies programming as it acts as a translator between a hardware device and the applications that use it.
 Operating System: An operating system manages computer hardware and provides services for the execution of application software. It consists of programs and data. Examples of operating systems for computers are Linux, Microsoft Windows, OS X and Unix.
 Server: A server is a program that operates as a socket listener in computer networking. A server computer is a computer, or series of computers, that links other computers and often provides essential services across a network, either to private users inside a large organization or to public users via the internet.
 Utility Software: Utility software is used to manage the computer hardware and application software and performs small tasks. Some examples of utility software are system utilities, virus scanners and disk defragmenters.
CU IDOL SELF LEARNING MATERIAL (SLM)

Introduction to System Software 13
 Windowing System: A windowing system supports the implementation of window managers and provides basic support for graphics hardware and pointing devices such as mice, and keyboards. It is a component of the graphical user interface.
1.5 Summary
 Machine Structure in system software consists of –
1. Instruction Interpreter: The instruction interpreter hardware is a group of circuits that perform the operation specified by the instructions fetched from memory.
2. Location Counter: The Location Counter, also called the Program/Instruction Counter, points to the current instruction being executed.
3. Working Registers: Working registers are called "scratch pads" because they are used to store temporary values during calculation.
The CPU interfaces with memory through the MAR and MBR. The MAR (Memory Address Register) holds the address of a memory location. The MBR (Memory Buffer Register) holds a copy of the content of the memory location whose address is given by the MAR. The memory controller is used for data transfer between the MBR and the memory location specified by the MAR.
 Machine language is the lowest-level programming language (except for computers that utilize programmable microcode). Machine languages are the only languages understood by computers.
 A translator is a programming language processor that converts a computer program from one language to another. It takes a program written in source code and converts it into machine code. It discovers and identifies errors during translation.
CU IDOL SELF LEARNING MATERIAL (SLM)

14 System Programming and Operating System  Different type of language translators are : 1. Assembler 2. Compiler 3. Interpreter 4. Linker 5. Loader  Systems software carries out middleman tasks to ensure communication between other software and hardware to allow harmonious coexistence with the user.  Systems software can be categorized under the following: 1. Operating system 2. Device driver 3. Firmware 4. Translator 5. Utility 1.6 Key Words/Abbreviations  Assembler: An assembler is a type of computer program that interprets software programs written in assembly language into machine language, code and instructions that can be executed by a computer.  Compiler: A compiler is a computer program (or a set of programs) that transforms source code written in a programming language (the source language) into another computer language (the target language), with the latter often having a binary form known as object code.  Interpreter: Interpreter is a program that executes instructions written in a high-level language. There are two ways to run programs written in a high-level language. The most common is to compile the program; the other method is to pass the program through an interpreter. CU IDOL SELF LEARNING MATERIAL (SLM)

Introduction to System Software 15  Linker: Linker is a program in a system which helps to link a object modules of program into a single object file. It performs the process of linking.  Loader: A loader is a program used by an operating system to load programs from a secondary to main memory so as to be executed. 1.7 Learning Activity 1. Explain the difference between machine language and assembly language. ----------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------- 2. What are the two types of linking? Explain each in detail. ----------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------- 1.8 Unit End Questions (MCQ and Descriptive) A. Descriptive Types Questions 1. Define the following terms: (a) Machine language (b) Assembly language 2. Explain the general machine structure with neat diagram. 3. Explain the difference between machine language and assembly language. 4. What do you mean by language translator and its types? 5. Write a short note on (a) Assembler (b) Compiler (c) Interpreter 6. What are the advantages and disadvantages of assembler? CU IDOL SELF LEARNING MATERIAL (SLM)

16 System Programming and Operating System 7. Explain difference between compiler, assembler and interpreter. 8. What do you mean by linking and explain its types. 9. What are the functions of loader? 10. What is System Software? What are the components of System Software? B. Multiple Choice/Objective Type Questions 1. ___________ contain a copy of the content of the memory location whose address is stored in MAR. (a) MAR (b) MBR (c) General register (d) Working register 2. ___________ translates high-level languages into machine code. (a) Compiler (b) Assembler (c) Linker (d) Loader 3. __________ is performed during the compilation of source program. (a) Allocation (b) Dynamic linking (c) Static linking (d) Relocation 4. _________ places the instructions and data into memory. (a) Allocation (b) Relocation (c) Linking (d) Loading 5. __________ is used to manage the computer hardware and application software and performs small tasks. (a) Utility software (b) Operating system (c) Windowing system (d) Device driver Answers 1. (b), 2. (c), 3. (c), 4. (d), 5. (a) CU IDOL SELF LEARNING MATERIAL (SLM)

Introduction to System Software 17 1.9 References Reference Books 1. https://shraddhasshinde.files.wordpress.com/2017/12/spos-by-dhamdhere.pdf 2. http://ebooks.lpude.in/computer_application/mca/term_4/DCAP507_SYSTEM_ SOFTWARE.pdf Web Resources 1. http://www.tgpcet.com/CSE-NOTES/4/SP.pdf 2. https://www.geeksforgeeks.org/language-processors-assembler-compiler-and-interpreter/ 3. https://www.quora.com/What-are-the-components-of-system-software 4. https://teachcomputerscience.com/translators/ 5. https://www.webopedia.com/TERM/I/interpreter.html CU IDOL SELF LEARNING MATERIAL (SLM)

UNIT 2 BASICS OF OPERATING SYSTEMS Structure: 2.0 Learning Objectives 2.1 Introduction 2.2 Definition 2.3 Generations of Operating Systems 2.3.1 The First Generation (1945 - 1955): Vacuum Tubes and Plug Boards 2.3.2 The Second Generation (1955 - 1965): Transistors and Batch Systems 2.3.3 The Third Generation (1965 - 1980): Integrated Circuits and Multiprogramming 2.3.4 The Fourth Generation (1980 - Present): Personal Computers 2.4 Types of Operating Systems 2.4.1 Simple Batch Systems 2.4.2 Multiprogramming Batch Systems 2.4.3 Multiprocessor Systems 2.4.4 Desktop Systems 2.4.5 Distributed Operating System 2.4.6 Clustered Systems 2.4.7 Real Time Operating System 2.4.8 Handheld Systems 2.5 Summary

Basics of Operating Systems 19
2.6 Key Words/Abbreviations
2.7 Learning Activity
2.8 Unit End Questions (MCQ and Descriptive)
2.9 References
2.0 Learning Objectives
After studying this unit, you should be able to:
 Explain the basics of Operating Systems.
 Discuss the various generations of Operating Systems.
 Analyse the various types of Operating Systems.
2.1 Introduction
An Operating System (OS) is system software that manages computer hardware and software resources, and provides common services for computer programs. Other specialized classes of operating systems, such as embedded and real-time systems, exist for many applications.
2.2 Definition
An Operating System (OS) is software which acts as an interface between the end user and the computer hardware. Every computer must have at least one OS to run other programs. An application like Chrome, MS Word, a game, etc. needs some environment in which it will run and perform its task. The OS helps you to communicate with the computer without knowing how to speak the computer's language. It is not possible for the user to use any computer or mobile device without an operating system.
CU IDOL SELF LEARNING MATERIAL (SLM)

20 System Programming and Operating System
Fig. 2.1: Interaction between User and Hardware (User 1, User 2, ... User n → Application Software and System Software → Operating System → Hardware: CPU, RAM, I/O)
Features of Operating System
Here is a list of commonly found, important features of an Operating System (the program-execution feature is sketched in code after this list):
 Protected and supervisor mode
 Allows disk access and file systems
 Device drivers
 Networking
 Security
 Program Execution
 Memory management
 Virtual Memory
 Multitasking
 Handling I/O operations
 Manipulation of the file system
 Error Detection and handling
 Resource allocation
 Information and Resource Protection
CU IDOL SELF LEARNING MATERIAL (SLM)
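One of the features listed above, program execution, is a service the operating system exposes to programs through system calls. On a POSIX system this can be sketched with fork(), execvp() and wait(): a parent process asks the OS to create a child process and to load a new program into it. The program launched here (/bin/ls) is simply an illustrative choice.

```c
#include <stdio.h>
#include <unistd.h>     /* fork, execvp */
#include <sys/wait.h>   /* waitpid, WEXITSTATUS */

int main(void)
{
    pid_t pid = fork();              /* ask the OS to create a new process */

    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {
        /* Child: ask the OS to replace this process image with a new program. */
        char *argv[] = { "ls", "-l", NULL };
        execvp("/bin/ls", argv);
        perror("execvp");            /* reached only if the exec failed */
        _exit(127);
    }

    /* Parent: wait until the OS reports that the child has finished. */
    int status = 0;
    waitpid(pid, &status, 0);
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```

Every step here (process creation, loading the program, waiting for completion) is carried out by the operating system on behalf of the user program, which is what the feature list above summarizes as program execution.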

Basics of Operating Systems 21
Fig. 2.2: Layered structure of OS (User → Application → Operating System → Hardware)
2.3 Generations of Operating Systems
Operating Systems have evolved over the years. So, their evolution through the years can be mapped using generations of operating systems. There are four generations of operating systems. These can be described as follows:
The First Generation (1945-1955): Vacuum Tubes and Plug Boards
The Second Generation (1955-1965): Transistors and Batch System
The Third Generation (1965-1980): Integrated Circuits and Multiprogramming
The Fourth Generation (1980-Current): Personal Computers
Fig. 2.3: Operating System Generations
CU IDOL SELF LEARNING MATERIAL (SLM)

22 System Programming and Operating System 2.3.1 The First Generation (1945 - 1955): Vacuum Tubes and Plug Boards Digital computers were not constructed until the Second World War. Calculating engines with mechanical relays were built at that time. However, the mechanical relays were very slow and were later replaced with vacuum tubes. These machines were enormous but were still very slow. These early computers were designed, built and maintained by a single group of people. Programming languages were unknown and there were no operating systems so all the programming was done in machine language. All the problems were simple numerical calculations. By the 1950s punch cards were introduced and this improved the computer system. Instead of using plugboards, programs were written on cards and read into the system. 2.3.2 The Second Generation (1955 - 1965): Transistors and Batch Systems Transistors led to the development of the computer systems that could be manufactured and sold to paying customers. These machines were known as mainframes and were locked in air- conditioned computer rooms with staff to operate them. The Batch System was introduced to reduce the wasted time in the computer. A tray full of jobs was collected in the input room and read into the magnetic tape. After that, the tape was rewound and mounted on a tape drive. Then the batch operating system was loaded in which read the first job from the tape and ran it. The output was written on the second tape. After the whole batch was done, the input and output tapes were removed and the output tape was printed. 2.3.3 The Third Generation (1965 - 1980): Integrated Circuits and Multiprogramming Until the 1960s, there were two types of computer systems i.e. the scientific and the commercial computers. These were combined by IBM in the System/360. This used integrated circuits and provided a major price and performance advantage over the second generation systems. The third generation operating systems also introduced multiprogramming. This meant that the processor was not idle while a job was completing its I/O operation. Another job was scheduled on the processor so that its time would not be wasted. CU IDOL SELF LEARNING MATERIAL (SLM)

Basics of Operating Systems 23
2.3.4 The Fourth Generation (1980 - Present): Personal Computers
Personal Computers were easy to create with the development of large-scale integrated circuits. These were chips containing thousands of transistors on a square centimeter of silicon. Because of these, microcomputers were much cheaper than minicomputers, and that made it possible for a single individual to own one of them.
The advent of personal computers also led to the growth of networks. This created network operating systems and distributed operating systems. The users were aware of a network while using a network operating system and could log in to remote machines and copy files from one machine to another.
2.4 Types of Operating Systems
Following are some of the most widely used types of Operating System.
1. Simple Batch System
2. Multiprogramming Batch System
3. Multiprocessor System
4. Desktop System
5. Distributed Operating System
6. Clustered System
7. Realtime Operating System
8. Handheld System
2.4.1 Simple Batch Systems
Early computers were enormous machines run from a console. The common input devices were card readers and tape drives. The common outputs were line printers, tape drives and card punches. The users of such systems did not interact directly with the computer systems. Rather, the user prepared a job, which consisted of the program, the data, and some control information about the nature of the job (control cards), and submitted it to the computer operator.
CU IDOL SELF LEARNING MATERIAL (SLM)

24 System Programming and Operating System
The job would be in the form of punch cards. The output consisted of the result of the program as well as a dump of memory and registers in case of program error.
The OS in these early computers was fairly simple. Its major task was to transfer control automatically from one job to another. The OS was always resident in the memory. To speed up processing, jobs with similar needs were batched together and were run through the computer as a group. Thus programmers would leave their programs with the operator. The operator would sort the programs into batches with similar requirements and, as the computer became available, would run each batch. The output from each job would be sent back to the appropriate programmer.
Fig. 2.4: Memory layout for a simple batch system (Operating System | User Program Area)
Disadvantages of Simple Batch Systems
1. No interaction between user and computer.
2. No mechanism to prioritize the processes.
2.4.2 Multiprogramming Batch Systems
 In this, the operating system picks up and begins to execute one of the jobs from memory.
 Once this job needs an I/O operation, the operating system switches to another job (CPU and OS always busy).
 Jobs in the memory are always fewer than the number of jobs on disk (the Job Pool).
 If several jobs are ready to run at the same time, then the system chooses which one to run through the process of CPU Scheduling.
 In a non-multiprogrammed system, there are moments when the CPU sits idle and does not do any work.
 In a multiprogramming system, the CPU will never be idle and keeps on processing.
CU IDOL SELF LEARNING MATERIAL (SLM)

Basics of Operating Systems 25
Time Sharing Systems are very similar to Multiprogramming batch systems. In fact, time sharing systems are an extension of multiprogramming systems. In time sharing systems the prime focus is on minimizing the response time, while in multiprogramming the prime focus is to maximize the CPU usage.
Fig. 2.5: CPU Scheduling (memory from 0 to 512k holding the Operating System and Jobs 1-4)
2.4.3 Multiprocessor Systems
A Multiprocessor system consists of several processors that share a common physical memory. A multiprocessor system provides higher computing power and speed. In a multiprocessor system all processors operate under a single operating system. The multiplicity of the processors, and how they act together, is transparent to the others.
Advantages of Multiprocessor Systems
1. Enhanced performance.
2. Execution of several tasks by different processors concurrently increases the system's throughput without speeding up the execution of a single task.
3. If possible, the system divides a task into many subtasks, and these subtasks can then be executed in parallel on different processors, thereby speeding up the execution of a single task (a small sketch of this idea follows this list).
CU IDOL SELF LEARNING MATERIAL (SLM)
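Point 3 above, splitting one task into subtasks that run in parallel, can be sketched with POSIX threads. The example below divides the summation of an array between two worker threads; on a multiprocessor, the operating system can schedule the two workers on different processors at the same time. The thread count and array size are arbitrary, illustrative choices (compile with cc -pthread).

```c
#include <stdio.h>
#include <pthread.h>

#define N       1000000
#define WORKERS 2

static long data[N];

struct slice { int begin; int end; long partial; };

/* Each worker sums its own slice of the array independently of the others. */
static void *sum_slice(void *arg)
{
    struct slice *s = (struct slice *)arg;
    s->partial = 0;
    for (int i = s->begin; i < s->end; i++)
        s->partial += data[i];
    return NULL;
}

int main(void)
{
    pthread_t tid[WORKERS];
    struct slice slices[WORKERS];

    for (int i = 0; i < N; i++)
        data[i] = 1;                          /* so the expected total is N  */

    /* Divide the task into subtasks and hand each one to a thread. */
    for (int w = 0; w < WORKERS; w++) {
        slices[w].begin = w * (N / WORKERS);
        slices[w].end   = (w + 1) * (N / WORKERS);
        pthread_create(&tid[w], NULL, sum_slice, &slices[w]);
    }

    long total = 0;
    for (int w = 0; w < WORKERS; w++) {
        pthread_join(tid[w], NULL);           /* wait for each subtask       */
        total += slices[w].partial;
    }

    printf("total = %ld (expected %d)\n", total, N);
    return 0;
}
```

The same division of work underlies the multithreading discussion in Unit 3: whether the units of execution are processes on separate processors or threads within one process, the gain comes from keeping several execution units busy at once.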

26 System Programming and Operating System 2.4.4 Desktop Systems Earlier, CPUs and PCs lacked the features needed to protect an operating system from user programs. PC operating systems, therefore, were neither multiuser nor multitasking. However, the goals of these operating systems have changed with time; instead of maximizing CPU and peripheral utilization, the systems opt for maximizing user convenience and responsiveness. These systems are called Desktop Systems and include PCs running Microsoft Windows and the Apple Macintosh. Operating systems for these computers have benefited in several ways from the development of operating systems for mainframes. Microcomputers were immediately able to adopt some of the technology developed for larger operating systems. On the other hand, the hardware costs for microcomputers are sufficiently low that individuals have sole use of the computer, and CPU utilization is no longer a prime concern. Thus, some of the design decisions made in operating systems for mainframes may not be appropriate for smaller systems. 2.4.5 Distributed Operating System The motivation behind developing distributed operating systems is the availability of powerful and inexpensive microprocessors and advances in communication technology. These advancements in technology have made it possible to design and develop distributed systems comprising of many computers that are inter connected by communication networks. The main benefit of distributed systems is its low price/performance ratio. Advantages Distributed Operating System 1. As there are multiple systems involved, user at one site can utilize the resources of systems at other sites for resource-intensive tasks. 2. Fast processing. 3. Less load on the Host Machine. Types of Distributed Operating Systems Following are the two types of distributed operating systems used: 1. Client-Server Systems 2. Peer-to-Peer Systems CU IDOL SELF LEARNING MATERIAL (SLM)

Basics of Operating Systems 27
Client-Server Systems
Centralized systems today act as server systems to satisfy requests generated by client systems. The general structure of a client-server system is depicted in the figure below:
Fig. 2.6: Client Server Architecture (several Clients connected through a Network to a Server)
Server systems can be broadly categorized as Compute Servers and File Servers.
 Compute Server systems provide an interface to which clients can send requests to perform an action, in response to which they execute the action and send back results to the client.
 File Server systems provide a file-system interface where clients can create, update, read, and delete files.
Peer-to-Peer Systems
The growth of computer networks – especially the Internet and World Wide Web (WWW) – has had a profound influence on the recent development of operating systems. When PCs were introduced in the 1970s, they were designed for personal use and were generally considered standalone computers. With the beginning of widespread public use of the Internet in the 1990s for electronic mail and FTP, many PCs became connected to computer networks.
In contrast to the tightly coupled systems, the computer networks used in these applications consist of a collection of processors that do not share memory or a clock. Instead, each processor has its own local memory. The processors communicate with one another through various communication lines, such as high-speed buses or telephone lines. These systems are usually referred to as loosely coupled systems (or distributed systems). The general structure of a peer-to-peer system is depicted in the figure below:
CU IDOL SELF LEARNING MATERIAL (SLM)

28 System Programming and Operating System
Fig. 2.7: Peer-to-Peer Architecture
2.4.6 Clustered Systems
 Like parallel systems, clustered systems gather together multiple CPUs to accomplish computational work.
 Clustered systems differ from parallel systems, however, in that they are composed of two or more individual systems coupled together.
 The definition of the term clustered is not concrete; the generally accepted definition is that clustered computers share storage and are closely linked via LAN networking.
 Clustering is usually performed to provide high availability.
 A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others. If the monitored machine fails, the monitoring machine can take ownership of its storage, and restart the application(s) that were running on the failed machine. The failed machine can remain down, but the users and clients of the application would only see a brief interruption of service.
 Asymmetric Clustering – In this, one machine is in hot standby mode while the other is running the applications. The hot standby host (machine) does nothing but monitor the active server. If that server fails, the hot standby host becomes the active server.
 Symmetric Clustering – In this, two or more hosts are running applications, and they are monitoring each other. This mode is obviously more efficient, as it uses all of the available hardware.
 Parallel Clustering – Parallel clusters allow multiple hosts to access the same data on the shared storage.
CU IDOL SELF LEARNING MATERIAL (SLM)

Basics of Operating Systems 29
Because most operating systems lack support for this simultaneous data access by multiple hosts, parallel clusters are usually accomplished by special versions of the software and special releases of applications.
Clustered technology is rapidly changing. Clustered systems' usage and features should expand greatly as Storage Area Networks (SANs) become more widespread. SANs allow easy attachment of multiple hosts to multiple storage units. Current clusters are usually limited to two or four hosts due to the complexity of connecting the hosts to shared storage.
2.4.7 Real Time Operating System
It is defined as an operating system known to give a maximum time for each of the critical operations that it performs, like OS calls and interrupt handling.
Real-Time Operating Systems which guarantee the maximum time for critical operations and complete them on time are referred to as Hard Real-Time Operating Systems. Real-time operating systems that can only guarantee a maximum of the time, i.e. the critical task will get priority over other tasks but with no assurance of completing it in a defined time, are referred to as Soft Real-Time Operating Systems.
2.4.8 Handheld Systems
Handheld systems include Personal Digital Assistants (PDAs), such as Palm-Pilots, or Cellular Telephones with connectivity to a network such as the Internet. They are usually of limited size, due to which most handheld devices have a small amount of memory, include slow processors, and feature small display screens.
 Many handheld devices have between 512 KB and 8 MB of memory. As a result, the operating system and applications must manage memory efficiently. This includes returning all allocated memory back to the memory manager once the memory is no longer being used.
 Currently, many handheld devices do not use virtual memory techniques, thus forcing program developers to work within the confines of limited physical memory.
 Processors for most handheld devices often run at a fraction of the speed of a processor in a PC. Faster processors require more power. To include a faster processor in a handheld device would require a larger battery that would have to be replaced more frequently.
CU IDOL SELF LEARNING MATERIAL (SLM)

30 System Programming and Operating System
 The last issue confronting program designers for handheld devices is the small display screens typically available. One approach for displaying the content in web pages is web clipping, where only a small subset of a web page is delivered and displayed on the handheld device.
Some handheld devices may use wireless technology such as BlueTooth, allowing remote access to e-mail and web browsing. Cellular telephones with connectivity to the Internet fall into this category. Their use continues to expand as network connections become more available and other options such as cameras and MP3 players expand their utility.
2.5 Summary
An operating system (OS) is a set of programs that control the execution of application programs and act as an intermediary between a user of a computer and the computer hardware. An OS is software that manages the computer hardware as well as providing an environment for application programs to run. Examples of OS are: Windows, Windows/NT, OS/2 and MacOS.
Operating systems have been evolving through the years. The following table shows the history of OS.
Table 2.1: History of OS
Generations | Year | Electronic devices used | Types of OS and devices
First | 1945 - 55 | Vacuum tubes | Plug boards
Second | 1955 - 1965 | Transistors | Batch systems
Third | 1965 - 1980 | Integrated Circuit (IC) | Multiprogramming
Fourth | Since 1980 | Large Scale Integration | PC
Following are some of the most widely used types of Operating system.
1. Simple Batch System.
2. Multiprogramming Batch System.
CU IDOL SELF LEARNING MATERIAL (SLM)

Basics of Operating Systems 31 3. Multiprocessor System. 4. Desktop System. 5. Distributed Operating System. 6. Clustered System. 7. Realtime Operating System. 8. Handheld System. 2.6 Key Words/Abbreviations  System Software: System Software are installed on the computer when operating system is installed. System Software is used for operating computer hardware  Application Software: Application software are installed according to user's requirements. Application Software is used by user to perform specific task. 2.7 Learning Activity 1. What is the need of operating system? ----------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------- 2. Explain the features of operating system. ----------------------------------------------------------------------------------------------------------- ----------------------------------------------------------------------------------------------------------- 2.8 Unit End Questions (MCQ and Descriptive) A. Descriptive Types Questions 1. Define Operating System with its features. 2. Explain different generations of Operating Systems. 3. Explain different types of Operating Systems. CU IDOL SELF LEARNING MATERIAL (SLM)

32 System Programming and Operating System B. Multiple Choice/Objective Type Questions 1. ___________ provides the means of proper use of the hardware in the operations of the computer system, it is similar to government. (a) Operating System (b) Vacuum tubes (c) Multiprogramming (d) Working register 2. Microsoft Windows is ___________ (a) graphic program (b) word Processor (c) operating system (d) None of the above 3. Which of the following is real time operating system? (a) Lynx (b) MS DOS (c) Windows XP (d) Process Control 4. Linux is an _________ operating system. (a) Open source (b) Windows (c) Microsoft (d) Mac 5. __________ are very similar to Multiprogramming batch systems. (a) Time sharing systems (b) Simple batch system (c) Handheld system (d) Desktop system Answers 1. (a), 2. (c), 3. (d), 4. (a), 5. (a) 2.9 References Reference Books 1. http://www.uobabylon.edu.iq/download/M.S%202013-2014/Operating_System_ Concepts,_8th_Edition%5BA4%5D.pdf 2. http://cp2060.pbworks.com/f/Operating+System+Fundamentals.pdf CU IDOL SELF LEARNING MATERIAL (SLM)

Basics of Operating Systems 33 Web Resources 1. https://www.tutorialspoint.com/operating-system-generations 2. https://www.researchgate.net/publication/283778784_Introduction_to_Operating_System 3. https://www.studytonight.com/operating-system/types-of-os 4. https://www.tutorialspoint.com/operating_system/index.htm CU IDOL SELF LEARNING MATERIAL (SLM)

UNIT 3 OPERATING SYSTEMS Structure: 3.0 Learning Objectives 3.1 Introduction 3.2 Batch Mainframe 3.3 Time Sharing Multiprocessing 3.4 Multiprogramming 3.5 Multithreading 3.6 Real Time 3.7 Embedded 3.8 Distributed 3.9 Clustered 3.10 Summary 3.11 Key Words/Abbreviations 3.12 Learning Activity 3.13 Unit End Questions (MCQ and Descriptive) 3.14 References

Operating Systems 35
3.0 Learning Objectives
After studying this unit, you should be able to:
 Discuss the various types of Operating Systems, starting from Mainframe to Distributed Systems.
 Differentiate between Multiprogramming, Multiprocessing and Multithreading.
 Analyse Time Sharing and Real Time Operating Systems.
3.1 Introduction
Many different types of operating systems have been developed to date, and they have steadily improved in terms of their capabilities. Modern operating systems allow multiple users to carry out multiple tasks simultaneously. Based on their capabilities and the types of applications supported, operating systems can be divided into the following six major categories.
1. Batch processing operating systems
2. Multi-user operating systems
3. Multitasking operating systems
4. Real time operating systems
5. Multiprocessor operating systems
6. Embedded operating systems
Batch Processing Operating System
The batch processing operating system is capable of executing one job at a time. In a batch processing operating system, the jobs are combined in the form of batches and these batches are given to the system as input data. The jobs in a batch are processed on a first come, first served basis. After the execution of one job, the operating system fetches the next job from the input data.
CU IDOL SELF LEARNING MATERIAL (SLM)
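The first come, first served behaviour of a batch system can be simulated in a few lines of C: jobs are taken from the queue in arrival order, and each one runs to completion before the next begins, with no user interaction while it runs. The job names and run times below are invented purely for illustration.

```c
#include <stdio.h>

struct job { const char *name; int burst; };    /* burst = run time units */

int main(void)
{
    /* Jobs listed in arrival (submission) order, as in a card-deck batch. */
    struct job batch[] = { {"JOB1", 5}, {"JOB2", 3}, {"JOB3", 8} };
    int n = (int)(sizeof batch / sizeof batch[0]);
    int clock = 0;

    for (int i = 0; i < n; i++) {
        printf("t=%2d  start  %s\n", clock, batch[i].name);
        clock += batch[i].burst;                /* the job runs to completion */
        printf("t=%2d  finish %s (turnaround %d)\n",
               clock, batch[i].name, clock);    /* all jobs submitted at t=0  */
    }
    return 0;
}
```

JOB2, although short, must wait for JOB1 to finish before it can even start, which reflects the lack of prioritization noted for simple batch systems in Unit 2.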

36 System Programming and Operating System
Multi-user Operating System
The multi-user operating system allows a system to be used by multiple users. In other words, a multi-user operating system allows a number of users to work simultaneously on a single computer system.
Multitasking Operating System
The multitasking operating system allows a user to perform multiple tasks at the same time on a single computer system. Multitasking operating systems are also known as multiprocessing operating systems and multiprogramming operating systems.
Real Time Operating System
The real time operating system is similar to the multitasking operating system. However, these operating systems are specially designed to handle real time applications. Real time applications are those applications which have to execute within a specific period of time. Therefore, time is a major constraint for these applications. Examples of real time applications are robots, machine learning, etc.
Multiprocessor Operating System
The multiprocessor operating system allows the computer system to use more than one CPU in a single system for executing multiple processes at a time. A computer system having multiple CPUs processes faster than a system which contains a single CPU.
Embedded Operating System
The embedded operating system is similar to the real time operating system. This operating system is installed on an embedded computer system, which is primarily used to perform computational tasks in electronic devices.
3.2 Batch Mainframe
Early computers were enormous machines run from a console. The common input devices were card readers and tape drives. The common output devices were line printers, tape drives and card punches. The users of such systems did not interact directly with the computer systems. Rather, the user prepared a job which consisted of the program, the data, and some control information about
CU IDOL SELF LEARNING MATERIAL (SLM)

Operating Systems 37 the nature of the job (control cards) and submitted it to the computer operator. The job would be in the form of punch cards. The output consisted of the result of the program as well as dump of memory and registers in case of program error. The OS in these early computers was fairly simple. Its major task was to transfer control automatically from one job to another. The OS was always (resident) in the memory. To speed up processing jobs with similar needs were batched together and were run through the computer as a group. Thus programmers would leave their programs with the operator. The operator would sort the programs into batches with similar requirements and as the computer became available, would run each batch. The output from each job would be sent back to the appropriate programmer. Thus, a batch operating system reads a stream of separate jobs from a card reader each with its own controls that predefine what the job does. When the job is complete its output is usually printed on the line printer. Operating System User Program Area Fig. 3.1: Memory layout for a simple batch system  The definitive feature of a batch system is the lack of interaction between the user and the job while that job is executing. The job is prepared and submitted and at some later time, the output appears.  The delay between job submission and job completion called as turnaround time may result from the amount of computing needed or from the delays before the OS starts to process the job. In this execution environment the CPU is often idle. This idleness occurs because the speeds of the mechanical I/O devices are intrinsically slower than those of electronic devices. The CU IDOL SELF LEARNING MATERIAL (SLM)

38 System Programming and Operating System
introduction of disk technology has helped in this regard: rather than the cards being read from the card reader directly into memory and the job then being processed, cards are read directly from the card reader onto the disk. The location of the card images is recorded in a table kept by the OS. When a job is executed, the OS satisfies its requests for card reader input by reading from the disk. Similarly, when the job requests the printer to output a line, that line is copied into a system buffer and is written to the disk. When the job is completed, the output is actually printed. This form of processing is called spooling (simultaneous peripheral operation on line).
Spooling uses the disk as a huge buffer for reading as far ahead as possible on input devices and for storing output files until the output devices are able to accept them. Spooling is also used for processing data at remote sites. The CPU sends the data via communication paths to a remote printer. The remote processing is done at its own speed, with no CPU intervention. The CPU just needs to be notified when the processing is complete so that it can spool the next batch of data.
Spooling overlaps the I/O of one job with the computation of other jobs. Even in a simple system, the spooler may be reading the input of one job while printing the output of a different job. During this time, still another job can be executed, reading its cards from disk and printing its output lines onto the disk. Thus spooling can keep both the CPU and the I/O devices working at much higher rates.
CU IDOL SELF LEARNING MATERIAL (SLM)
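The idea of using the disk as a buffer between a fast CPU and a slow printer can be sketched with a simple FIFO queue: the running job appends output lines to the spool, and the printer drains them whenever it is ready. In a real system the spool lives in disk files managed by the operating system; the small in-memory ring buffer below is only an illustration of the queueing behaviour.

```c
#include <stdio.h>
#include <string.h>

#define SPOOL_SLOTS 8
#define LINE_LEN    64

/* A tiny FIFO "spool": the job enqueues output lines, the printer dequeues them. */
static char spool[SPOOL_SLOTS][LINE_LEN];
static int head = 0, tail = 0, count = 0;

static int spool_write(const char *line)       /* called by the running job    */
{
    if (count == SPOOL_SLOTS)
        return 0;                              /* spool full: caller must wait */
    strncpy(spool[tail], line, LINE_LEN - 1);
    spool[tail][LINE_LEN - 1] = '\0';
    tail = (tail + 1) % SPOOL_SLOTS;
    count++;
    return 1;
}

static int spool_print(void)                   /* called when the printer is ready */
{
    if (count == 0)
        return 0;                              /* nothing buffered             */
    printf("PRINTER: %s\n", spool[head]);
    head = (head + 1) % SPOOL_SLOTS;
    count--;
    return 1;
}

int main(void)
{
    /* The job produces output faster than the printer consumes it. */
    spool_write("JOB1: line 1");
    spool_write("JOB1: line 2");
    spool_write("JOB1: line 3");

    while (spool_print())                      /* the printer catches up later */
        ;
    return 0;
}
```

Because the job only writes into the buffer, it can continue computing while the printer works through the backlog, which is the overlap of I/O and computation described above.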

3.3 Time Sharing Multiprocessing

Multiprogrammed batch systems provide an environment in which the various system resources are utilized effectively. There are, however, some difficulties with a batch system from the user's point of view. Since the user cannot interact with a job while it is executing, the user must set up the control cards to handle all possible outcomes. In a multi-step job, subsequent steps may depend on the results of earlier ones; the running of a program may depend on its successful compilation. Another difficulty is that programs must be debugged statically, from snapshot dumps: a programmer cannot modify a program as it executes to study its behaviour.

Time-sharing, or multitasking, is a logical extension of multiprogramming. Multiple jobs are executed by the CPU switching among them, but the switches occur so frequently that the user may interact with each program while it is running.

3.4 Multiprogramming

Spooling provides an important data structure: a job pool. Spooling will generally result in several jobs already waiting on disk, ready to run. A pool of jobs on disk allows the OS to select which job to run next in order to increase CPU utilization. When several jobs are on a direct-access device such as a disk, job scheduling becomes possible. The most important aspect of job scheduling is the ability to multiprogram. A single user cannot keep either the CPU or the I/O devices busy at all times. Multiprogramming increases CPU utilization by organizing jobs such that the CPU always has one to execute.

The operating system keeps several jobs in memory at a time. This set of jobs is a subset of the jobs kept in the job pool. The OS picks and begins to execute one of the jobs in memory. Eventually the job may have to wait for some task, such as a tape to be mounted or an I/O operation to complete. In a non-multiprogrammed system the CPU would sit idle, but in a multiprogrammed system the OS simply switches to and executes another job. When that job needs to wait, the CPU is switched to yet another job, and so on. Eventually the first job finishes waiting and gets the CPU back. As long as there is always some job to execute, the CPU is never idle.

Multiprogramming is the first instance where the OS must make decisions for the users. All the jobs that enter the system are kept in the job pool, which consists of all processes residing on mass storage awaiting allocation of main memory. If several jobs are ready to be brought into memory and there is not enough room for all of them, the system must choose among them; making this decision is job scheduling. When the OS selects a job from the job pool, it loads that job into memory for execution. Having several jobs in memory at the same time requires some form of memory management. If several jobs are ready to run at the same time, the system must choose among them; making that decision is CPU scheduling.
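A toy simulation can make this switching behaviour concrete. The sketch below is an illustrative assumption rather than anything from the text: it keeps a small in-memory "job pool", and whenever the running job blocks for I/O the dispatcher simply picks another ready job, so the CPU stays busy as long as any job is ready.

/*
 * Toy multiprogramming simulation (illustrative only). Each "job" needs
 * some CPU ticks and periodically blocks for I/O; the dispatcher always
 * switches to another ready job instead of letting the CPU sit idle.
 */
#include <stdio.h>

enum state { READY, WAITING, DONE };

struct job {
    int id;
    int cpu_left;    /* CPU ticks still needed          */
    int io_left;     /* ticks remaining in current I/O  */
    enum state st;
};

int main(void)
{
    struct job pool[3] = {
        {1, 6, 0, READY}, {2, 4, 0, READY}, {3, 5, 0, READY}
    };
    int done = 0, tick = 0;

    while (done < 3) {
        int ran = 0;
        /* Dispatcher: run the first READY job for one tick. */
        for (int i = 0; i < 3; i++) {
            if (pool[i].st == READY) {
                printf("tick %2d: CPU runs job %d\n", tick, pool[i].id);
                if (--pool[i].cpu_left == 0) {
                    pool[i].st = DONE;
                    done++;
                } else if (pool[i].cpu_left % 2 == 0) {
                    pool[i].st = WAITING;   /* job starts an I/O request */
                    pool[i].io_left = 3;
                }
                ran = 1;
                break;
            }
        }
        if (!ran)
            printf("tick %2d: CPU idle (all jobs waiting for I/O)\n", tick);

        /* I/O devices make progress independently of the CPU. */
        for (int i = 0; i < 3; i++)
            if (pool[i].st == WAITING && --pool[i].io_left == 0)
                pool[i].st = READY;
        tick++;
    }
    return 0;
}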

3.5 Multithreading

A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization; it comprises a thread ID, a program counter, a register set, and a stack. It shares with the other threads belonging to the same process its code section, data section, and other operating-system resources, such as open files and signals. A traditional (or heavyweight) process has a single thread of control. If a process has multiple threads of control, it can perform more than one task at a time. A web browser might have one thread display images or text while another thread retrieves data from the network. A word processor may have one thread for displaying graphics, another thread for reading keystrokes from the user, and a third thread for performing spelling and grammar checking in the background.

Fig. 3.2: Single-threaded and multithreaded processes (code, data and open files are shared by all threads of a process, while each thread has its own registers and stack)

Benefits

1. Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user. For instance, a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.

2. Resource Sharing: Threads share the memory and the resources of the process to which they belong. The benefit of sharing code and data is that it allows an application to have several different threads of activity all within the same address space.

3. Economy: Allocating memory and resources for process creation is costly. Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads. It can be difficult to gauge empirically the difference in overhead for creating and maintaining a process rather than a thread, but in general it is much more time-consuming to create and manage processes than threads. In Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching is about five times slower.

4. Utilization of MP Architectures: The benefits of multithreading can be greatly increased in a multiprocessor architecture, where each thread may be running in parallel on a different processor. A single-threaded process can run on only one CPU, no matter how many are available, so multithreading on a multi-CPU machine increases concurrency. On a single-processor architecture, the CPU generally moves between threads so quickly as to create an illusion of parallelism, but in reality only one thread is running at a time.
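The POSIX Pthreads library mentioned below can be used to make these ideas concrete. The following sketch is an illustrative example, not taken from the text: one process creates two threads that both read the same global buffer, showing that the data section is shared, while each thread runs with its own stack and registers; pthread_join then waits for each to finish.

/*
 * Pthreads sketch (illustrative only): one process, two threads. Both
 * threads read the same global buffer, demonstrating that the data
 * section is shared, while each runs on its own stack.
 * Compile with:  gcc demo.c -pthread
 */
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static const char document[] = "threads share code data and open files";
static int chars, words;                  /* shared data section */

static void *count_chars(void *arg)
{
    (void)arg;
    chars = (int)strlen(document);
    return NULL;
}

static void *count_words(void *arg)
{
    (void)arg;
    for (size_t i = 0; document[i] != '\0'; i++)
        if (document[i] == ' ')
            words++;
    words++;                              /* last word has no trailing space */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;

    pthread_create(&t1, NULL, count_chars, NULL);  /* cheap: no new address space */
    pthread_create(&t2, NULL, count_words, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("%d characters, %d words\n", chars, words);
    return 0;
}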

User Threads

 Thread management is done by a user-level threads library.

User threads are supported above the kernel and are implemented by a thread library at the user level. The library provides support for thread creation, scheduling, and management with no support from the kernel. Because the kernel is unaware of user-level threads, all thread creation and scheduling are done in user space without the need for kernel intervention. Therefore, user-level threads are generally fast to create and manage; they have drawbacks, however. For instance, if the kernel is single-threaded, then any user-level thread performing a blocking system call will cause the entire process to block, even if other threads are available to run within the application.

 Examples
- POSIX Pthreads
- Mach C-threads
- Solaris threads

Kernel Threads

 Supported directly by the kernel.

Kernel threads are supported directly by the operating system: the kernel performs thread creation, scheduling, and management in kernel space. Because thread management is done by the operating system, kernel threads are generally slower to create and manage than user threads. However, since the kernel is managing the threads, if a thread performs a blocking system call, the kernel can schedule another thread in the application (this behaviour is illustrated in the sketch following the multithreading models below). Also, in a multiprocessor environment, the kernel can schedule threads on different processors.

 Examples
- Windows 95/98/NT/2000
- Solaris
- Tru64 UNIX
- BeOS
- Linux

Multithreading Models
 Many-to-One
 One-to-One
 Many-to-Many
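The practical difference between the two approaches shows up around blocking calls. The sketch below is again only an illustration (on Linux, Pthreads are implemented as kernel-level threads): one thread blocks in sleep(), standing in for a blocking system call, while a second thread keeps making progress; a purely user-level library running on a single kernel thread could not guarantee this.

/*
 * Sketch of kernel-level thread behaviour (illustrative only): one thread
 * blocks in a system call while the other continues to run, so the
 * process as a whole is not blocked.
 * Compile with:  gcc block_demo.c -pthread
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *blocker(void *arg)
{
    (void)arg;
    puts("blocker: entering a blocking call (sleep)");
    sleep(2);                     /* stand-in for a blocking system call */
    puts("blocker: woke up");
    return NULL;
}

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 1; i <= 4; i++) {
        printf("worker: still running, step %d\n", i);
        usleep(300000);           /* keeps printing while the other thread blocks */
    }
    return NULL;
}

int main(void)
{
    pthread_t b, w;
    pthread_create(&b, NULL, blocker, NULL);
    pthread_create(&w, NULL, worker, NULL);
    pthread_join(b, NULL);
    pthread_join(w, NULL);
    return 0;
}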

