Foundation Course on Information Technology Outsourcing

UNIT - 2: INTRODUCTION TO OPERATING SYSTEM AND HARDWARE BASICS

Structure
2.0 Learning Objectives
2.1 Introduction
2.2 Hardware Basics
2.2.1 Components of a Computer System
2.2.2 Memory Subsystem
2.2.3 Input-Output Subsystem
2.3 Operating System
2.3.1 Memory Management
2.3.2 Process Management
2.3.3 File Management
2.4 Summary
2.5 Glossary
2.6 References

2.0 Learning Objectives
After studying this unit, you will be able to:
• Explain the hardware basics of a computer system
• Explain the operating system

2.1 Introduction
An Operating System (OS) is software that acts as an interface between computer hardware components and the user. Every computer system must have at least one operating system to run other programs. Applications like browsers, MS Office, Notepad, games, etc., need an environment in which to run and perform their tasks. The OS helps you to communicate with the computer without knowing how to speak the computer's language. It is not possible for the user to use any computer or mobile device without an operating system. An operating system acts as an
intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs conveniently and efficiently. An operating system is software that manages computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering with the proper operation of the system.

Hardware Basics of Computer System

Functions of Operating System - An operating system performs the following functions:
• Convenience: An OS makes a computer more convenient to use.
• Efficiency: An OS allows the computer system resources to be used efficiently.
• Ability to Evolve: An OS should be constructed in such a way as to permit the effective development, testing, and introduction of new system functions without interfering with service.
• Throughput: An OS should be constructed so that it can give maximum throughput (number of tasks per unit time).

Major Functionalities of Operating System:
➢ Resource Management: When multiple users access the system in parallel, the OS works as the resource manager; its responsibility is to share the hardware resources among the users. This decreases the load on the system.
➢ Process Management: This includes various tasks such as scheduling and termination of processes. The OS manages many tasks at a time. CPU scheduling takes place here, meaning the tasks are ordered by one of the many algorithms used for scheduling.
➢ Storage Management: The file system mechanism is used for the management of storage. NTFS, CIFS, NFS, etc. are some file systems. All the data is stored on the various tracks of hard disks, and all of it is managed by the storage manager, which includes the hard disk.
➢ Memory Management: This refers to the management of primary memory. The operating system has to keep track of how much memory has been used, and by whom. It has to decide which process needs memory space and how much. The OS also has to allocate and deallocate memory space.
➢ Security/Privacy Management: Privacy is also provided by the operating system by means of passwords, so that unauthorized applications cannot access programs or data. For example, Windows uses Kerberos authentication to prevent unauthorized access to data.

2.2 Hardware Basics
An operating system has to work closely with the hardware system that acts as its foundation. The operating system needs certain services that can only be provided by the hardware. In order to fully understand the Linux operating system, you need to understand the basics of the underlying hardware. This chapter gives a brief introduction to that hardware: the modern PC.
When the ``Popular Electronics'' magazine for January 1975 was printed with an illustration of the Altair 8800 on its front cover, a revolution started. The Altair 8800, named after the destination of an early Star Trek episode, could be assembled by home electronics enthusiasts for a mere $397. With its Intel 8080 processor and 256 bytes of memory but no screen or keyboard, it was puny by today's standards. Its inventor, Ed Roberts, coined the term ``personal computer'' to describe his new invention, but the term PC is now used to refer to almost any computer that you can pick up without needing help.
By this definition, even some of the very powerful Alpha AXP systems are PCs.
Enthusiastic hackers saw the Altair's potential and started to write software and build hardware for it. To these early pioneers it represented freedom; freedom from the huge batch processing mainframe systems run and guarded by an elite priesthood. Overnight fortunes were made by college dropouts fascinated by this new phenomenon, a computer that you could have at home on your kitchen table. A lot of hardware appeared, all different to some degree, and software hackers were happy to write software for these new machines. Paradoxically, it was IBM who firmly cast the mould of the modern PC by announcing the IBM PC in 1981 and shipping it to customers early in 1982. With its Intel 8088 processor, 64K of memory (expandable to 256K), two floppy disks, and an 80-character by 25-line Colour Graphics Adapter (CGA), it was not very powerful by today's standards, but it sold well. It was followed, in 1983, by the
IBM PC-XT, which had the luxury of a 10Mbyte hard drive. It was not long before IBM PC clones were being produced by a host of companies such as Compaq, and the architecture of the PC became a de-facto standard. This de-facto standard helped a multitude of hardware companies to compete in a growing market which, happily for consumers, kept prices low. Many of the system architectural features of these early PCs have carried over into the modern PC. For example, even the most powerful Intel Pentium Pro based system starts running in the Intel 8086's addressing mode. When Linus Torvalds started writing what was to become Linux, he picked the most plentiful and reasonably priced hardware, an Intel 80386 PC.

2.2.1 Components of a Computer System

Components of a Computer System

Looking at a PC from the outside, the most obvious components are a system box, a keyboard, a mouse and a video monitor. On the front of the system box are some buttons, a little display showing some numbers and a floppy drive. Most systems
these days have a CD ROM, and if you feel that you have to protect your data, then there will also be a tape drive for backups. These devices are collectively known as the peripherals.

The CPU
The CPU, or rather microprocessor, is the heart of any computer system. The microprocessor calculates, performs logical operations, and manages data flows by reading instructions from memory and then executing them. In the early days of computing, the functional components of the microprocessor were separate (and physically large) units. This is when the term Central Processing Unit was coined. The modern microprocessor combines these components onto an integrated circuit etched onto a very small piece of silicon. The terms CPU, microprocessor and processor are all used interchangeably in this book.
Microprocessors operate on binary data; that is, data composed of ones and zeros. These ones and zeros correspond to electrical switches being either on or off. Just as 42 is a decimal number meaning ``4 10s and 2 units'', a binary number is a series of binary digits each one representing a power of 2. In this context, a power means the number of times that a number is multiplied by itself. 10 to the power 1 (10^1) is 10, 10 to the power 2 (10^2) is 10x10, 10^3 is 10x10x10 and so on. Binary 0001 is decimal 1, binary 0010 is decimal 2, binary 0011 is 3, binary 0100 is 4, and so on. So, 42 decimal is 101010 binary (2 + 8 + 32, or 2^1 + 2^3 + 2^5).
Rather than using binary to represent numbers in computer programs, another base, hexadecimal, is usually used. In this base, each digit represents a power of 16. As decimal numbers only go from 0 to 9, the numbers 10 to 15 are represented as a single digit by the letters A, B, C, D, E, and F. For example, hexadecimal E is decimal 14 and hexadecimal 2A is decimal 42 (two 16s + 10).
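These conversions between decimal, binary and hexadecimal can be checked directly with Python's built-in base-conversion functions:

```python
# Checking the base conversions discussed above.
n = 42
print(bin(n))            # '0b101010' -> 2**1 + 2**3 + 2**5
print(hex(n))            # '0x2a'    -> two 16s plus 10
print(0x2A)              # a hexadecimal literal; prints 42
print(int("101010", 2))  # parse a binary string back to decimal: 42
```

The `0b` and `0x` prefixes mark binary and hexadecimal literals respectively, the latter matching the C-style notation used in this unit.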
Using the C programming language notation (as I do throughout this book), hexadecimal numbers are prefaced by ``0x''; hexadecimal 2A is written as 0x2A.
Microprocessors can perform arithmetic operations such as add, multiply and divide, and logical operations such as ``is X greater than Y?''.
The processor's execution is governed by an external clock. This clock, the system clock, generates regular clock pulses to the processor and, at each clock pulse, the processor does some work. For example, a processor could execute an instruction for every clock pulse. A processor's speed is described in terms of the rate at which the system clock ticks. A 100MHz processor will receive 100,000,000 clock ticks every second. It is misleading to describe the power of a CPU by its clock rate, as different processors perform different amounts of work per clock tick. However, all things being equal, a faster clock speed means a more powerful processor. The instructions executed by the processor are very simple; for example,
``read the contents of memory at location X into register Y''. Registers are the microprocessor's internal storage, used for storing data and performing operations on it. The operations performed may cause the processor to stop what it is doing and jump to another instruction somewhere else in memory. These tiny building blocks give the modern microprocessor almost limitless power as it can execute millions or even billions of instructions a second.
The instructions have to be fetched from memory as they are executed. Instructions may themselves reference data within memory, and that data must be fetched from memory and saved there when appropriate.
The size, number and type of registers within a microprocessor are entirely dependent on its type. An Intel 80486 processor has a different register set to an Alpha AXP processor; for a start, the Intel's registers are 32 bits wide and the Alpha AXP's are 64 bits wide. In general, though, any given processor will have a number of general-purpose registers and a smaller number of dedicated registers. Most processors have the following special-purpose, dedicated registers:

Program Counter (PC)
This register contains the address of the next instruction to be executed. The contents of the PC are automatically incremented each time an instruction is fetched.

Stack Pointer (SP)
Processors have to have access to large amounts of external read/write random access memory (RAM), which facilitates temporary storage of data. The stack is a way of easily saving and restoring temporary values in external memory. Usually, processors have special instructions which allow you to push values onto the stack and to pop them off again later. The stack works on a last in, first out (LIFO) basis. In other words, if you push two values, x and y, onto a stack and then pop a value off of the stack, you will get back the value of y.
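This LIFO behaviour can be demonstrated with a Python list, whose append and pop operations mirror the processor's push and pop instructions:

```python
# A Python list used as a LIFO stack, mirroring push/pop.
stack = []
stack.append("x")   # push x
stack.append("y")   # push y

print(stack.pop())  # 'y' -- the last value pushed comes off first
print(stack.pop())  # 'x'
```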
Some processors' stacks grow upwards towards the top of memory whilst others grow downwards towards the bottom, or base, of memory. Some processors support both types, for example, ARM.

Processor Status (PS)
Instructions may yield results; for example, ``is the content of register X greater than the content of register Y?'' will yield true or false as a result. The PS register holds this and other information about the current state of the processor. For example, most processors have at least two modes of operation, kernel (or supervisor) and user. The PS register would hold information identifying the current mode.

Memory
All systems have a memory hierarchy with memory at different speeds and sizes at different points in the hierarchy. The fastest memory is known as cache memory and
is what it sounds like - memory that is used to temporarily hold, or cache, contents of the main memory. This sort of memory is very fast but expensive; therefore most processors have a small amount of on-chip cache memory and more system-based (on-board) cache memory. Some processors have one cache to contain both instructions and data, but others have two, one for instructions and the other for data. The Alpha AXP processor has two internal memory caches; one for data (the D-Cache) and one for instructions (the I-Cache). The external cache (or B-Cache) mixes the two together. Finally, there is the main memory, which relative to the external cache memory is very slow. Relative to the on-CPU cache, main memory is positively crawling.
The cache and main memories must be kept in step (coherent). In other words, if a word of main memory is held in one or more locations in the cache, then the system must make sure that the contents of cache and memory are the same. The job of cache coherency is done partially by the hardware and partially by the operating system. This is also true for a number of major system tasks where the hardware and software must cooperate closely to achieve their aims.

Buses
The individual components of the system board are interconnected by multiple connection systems known as buses. The system bus is divided into three logical functions: the address bus, the data bus and the control bus. The address bus specifies the memory locations (addresses) for the data transfers. The data bus holds the data transferred. The data bus is bidirectional; it allows data to be read into the CPU and written from the CPU. The control bus contains various lines used to route timing and control signals throughout the system. Many flavours of bus exist; for example, ISA and PCI buses are popular ways of connecting peripherals to the system.
Controllers and Peripherals
Peripherals are real devices, such as graphics cards or disks, controlled by controller chips on the system board or on cards plugged into it. The IDE disks are controlled by the IDE controller chip and the SCSI disks by the SCSI disk controller chips, and so on. These controllers are connected to the CPU and to each other by a variety of buses. Most systems built now use PCI and ISA buses to connect together the main system components. The controllers are processors like the CPU itself; they can be viewed as intelligent helpers to the CPU. The CPU is in overall control of the system.
All controllers are different, but they usually have registers that control them. Software running on the CPU must be able to read and write those controlling registers. One register might contain a status describing an error. Another might be used for control purposes: changing the mode of the controller. Each controller on a bus can be
individually addressed by the CPU; this is so that the software device driver can write to its registers and thus control it. The IDE ribbon is a good example, as it gives you the ability to access each drive on the bus separately. Another good example is the PCI bus, which allows each device (for example, a graphics card) to be accessed independently.

Address Spaces
The system bus connects the CPU with the main memory and is separate from the buses connecting the CPU with the system's hardware peripherals. Collectively, the memory space that the hardware peripherals exist in is known as I/O space. I/O space may itself be further subdivided, but we will not worry too much about that for the moment. The CPU can access both the system space memory and the I/O space memory, whereas the controllers themselves can only access system memory indirectly, and then only with the help of the CPU. From the point of view of the device, say the floppy disk controller, it will see only the address space that its control registers are in (ISA), and not the system memory. Typically, a CPU will have separate instructions for accessing the memory and I/O space. For example, there might be an instruction that means ``read a byte from I/O address 0x3f0 into register X''. This is exactly how the CPU controls the system's hardware peripherals: by reading and writing to their registers in I/O space. Where in I/O space the common peripherals (IDE controller, serial port, floppy disk controller and so on) have their registers has been set by convention over the years as the PC architecture has developed. The I/O space address 0x3f8 happens to be the address of one of the serial port's (COM1) control registers. There are times when controllers need to read or write large amounts of data directly to or from system memory; for example, when user data is being written to the hard disk.
In this case, Direct Memory Access (DMA) controllers are used to allow hardware peripherals to directly access system memory, but this access is under the strict control and supervision of the CPU.

Timers
All operating systems need to know the time, and so the modern PC includes a special peripheral called the Real Time Clock (RTC). This provides two things: a reliable time of day and an accurate timing interval. The RTC has its own battery so that it continues to run even when the PC is not powered on; this is how your PC always ``knows'' the correct date and time. The interval timer allows the operating system to accurately schedule essential work.
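The register-based control of peripherals described in this section can be modelled as a toy simulation in Python. The register names, addresses and status bits below are invented for illustration and do not correspond to any real controller:

```python
# A toy model of a device controller exposing status and control
# registers, and a "driver" that controls it only through them.

class ToyController:
    STATUS = 0x0   # read-only: bit 0 = busy (layout is illustrative)
    CONTROL = 0x1  # writable: bit 0 = start operation

    def __init__(self):
        self.registers = {self.STATUS: 0, self.CONTROL: 0}

    def read(self, reg):
        return self.registers[reg]

    def write(self, reg, value):
        self.registers[reg] = value
        if reg == self.CONTROL and value & 0x1:
            # Starting an operation sets the busy bit in STATUS.
            self.registers[self.STATUS] |= 0x1

# The driver starts an operation, then polls the status register.
dev = ToyController()
dev.write(ToyController.CONTROL, 0x1)
busy = dev.read(ToyController.STATUS) & 0x1
print("busy" if busy else "idle")   # busy
```

A real driver would read and write addresses in I/O space (or memory-mapped registers) rather than a Python dictionary, but the pattern of controlling a device purely through its registers is the same.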
2.2.2 Memory Subsystem
The memory subsystem is made up of hardware and software components. The following figure shows a conceptual layout of the hardware components of a memory subsystem.

Memory Subsystem

Main memory is the primary storage area of the computer system. It is in main memory that programs and the data they use are stored while the programs are executing. Data are also stored on slower peripheral devices such as disks and magnetic tape; programs are stored here as well when they are not executing. The input/output processor (IOP) regulates the flow of data between main memory and peripheral devices. The actual transfer of data between the main memory and the CPU or the peripheral devices is handled by the memory controller.

Memory Structure and Access
Program Segmentation - Program compilation converts the symbolic form of a program into object code. The object code is placed into a code file, in the form of code segments. This process of code segmentation is performed by the compilers. For each code segment, the compiler generates a sequence of 52 bits called a code segment descriptor (descriptors are discussed next, under "Descriptors"). The segment descriptors are maintained in a dictionary and kept in the code file. Thus, a code file
contains both code segments and a segment dictionary. When a program is executed, the segment dictionary is read into memory and placed in a segment dictionary stack. This stack is never executed; it is used only to hold segment descriptors. Data items from the program are also maintained as separate areas in memory. For example, an array in ALGOL or an item at the 01 level in COBOL is managed as a data segment. Each data segment is "pointed" to by a data descriptor. Data descriptors are maintained in the program's stack (the process stack). These data descriptors are built as the program executes stack-building code that was generated by the compiler as it scanned the declarations of the program (such as the Data Division in COBOL).

Descriptors - Descriptors contain all the information needed by the processor and the operating system to access memory for the process, and are said to "point" to the segments they define. Two types of descriptors are available: data descriptors and code descriptors. Both data and code descriptors are composed of 52 bits, broken into several fields of varying length.

Accessing Memory - For any process or program, the processor accesses data arrays or code through the descriptors. If the descriptor indicates that the information you want is present in main memory, the processor can obtain the information and continue processing. If the descriptor indicates that the information you want is not present in main memory, the processor generates an interrupt for system service.

2.2.3 Input-Output Subsystem
The I/O subsystem of a computer provides an efficient mode of communication between the central system and the outside environment. It handles all the input-output operations of the computer system.

Peripheral Devices - Input or output devices that are connected to a computer are called peripheral devices.
These devices are designed to read information into or out of the memory unit upon command from the CPU and are considered to be part of the computer system. These devices are also called peripherals. For example, keyboards, display units and printers are common peripheral devices.
There are three types of peripherals:
1. Input peripherals: Allow user input from the outside world to the computer. Example: Keyboard, Mouse, etc.
2. Output peripherals: Allow information output from the computer to the outside world. Example: Printer, Monitor, etc.
3. Input-Output peripherals: Allow both input (from the outside world to the computer) as well as output (from the computer to the outside world). Example: Touch screen, etc.
Interfaces - An interface is a shared boundary between two separate components of the computer system, which can be used to attach two or more components to the system for communication purposes.
There are two types of interfaces:
• CPU Interface
• I/O Interface

Input-Output Interface - Peripherals connected to a computer need special communication links for interfacing with the CPU. In a computer system, there are special hardware components between the CPU and peripherals to control or manage the input-output transfers. These components are called input-output interface units because they provide communication links between the processor bus and peripherals. They provide a method for transferring information between the internal system and input-output devices.

I/O Interface

The input/output interface is required because there exist many differences between the central computer and each peripheral while transferring information. Some major differences are:
1. Peripherals are electromechanical and electromagnetic devices, and their manner of operation is different from the operation of the CPU and memory, which are electronic devices. Therefore, a conversion of signal values may be required.
2. The data transfer rate of peripherals is usually slower than the transfer rate of the CPU, and consequently a synchronisation mechanism is needed.
3. Data codes and formats in peripherals differ from the word format in the CPU and memory.
4. The operating modes of peripherals differ from each other, and each must be controlled so as not to disturb the operation of other peripherals connected to the CPU.
These differences are resolved through an input-output interface. An input-output interface (interface unit) contains various components, each of which performs one or more vital functions for the smooth transfer of information between the CPU and peripherals.

Input/Output Channels - A channel is an independent hardware component that coordinates all I/O to a set of controllers. Computer systems that use I/O channels have special hardware components that handle all I/O operations. Channels use separate, independent and low-cost processors for their functioning, which are called channel processors. Channel processors are simple, but contain sufficient memory to handle all I/O tasks. When an I/O transfer is complete or an error is detected, the channel controller communicates with the CPU using an interrupt and informs the CPU about the error or the task completion. Each channel supports one or more controllers or devices. Channel programs contain a list of commands for the channel itself and for the various connected controllers or devices. Once the operating system has prepared a list of I/O commands, it executes a single I/O machine instruction to initiate the channel program; the channel then assumes control of the I/O operations until they are completed.

2.3 Operating System
An operating system acts as an intermediary between the user of a computer and the computer hardware. The purpose of an operating system is to provide an environment in which a user can execute programs conveniently and efficiently. An operating system is software that manages computer hardware. The hardware must provide appropriate mechanisms to ensure the correct operation of the computer system and to prevent user programs from interfering with the proper operation of the system.
2.3.1 Memory Management
The main memory is central to the operation of a modern computer. Main memory is a large array of words or bytes, ranging in size from hundreds of thousands to billions. Main memory is a repository of rapidly available information shared by the CPU and I/O devices. Main memory is the place where programs and information are kept when the processor is actively utilizing them. Main memory is associated with the processor, so moving instructions and information into and out of the processor is
extremely fast. Main memory is also known as RAM (Random Access Memory). This memory is volatile; RAM loses its data when a power interruption occurs.
In a multiprogramming computer, the operating system resides in a part of memory and the rest is used by multiple processes. The task of subdividing the memory among different processes is called memory management. Memory management is a method in the operating system to manage operations between main memory and disk during process execution. The main aim of memory management is to achieve efficient utilization of memory.
Memory management is required to:
• Allocate and de-allocate memory before and after process execution.
• Keep track of the memory space used by processes.
• Minimize fragmentation issues.
• Ensure proper utilization of main memory.
• Maintain data integrity while executing a process.

Memory Management

Logical and Physical Address Space
Logical address space: An address generated by the CPU is known as a "Logical Address". It is also known as a virtual address. Logical address space can be defined as the size of the process. A logical address can be changed.
Physical address space: An address seen by the memory unit (i.e., the one loaded into the memory address register of the memory) is commonly known as a "Physical Address". A physical address is also known as a real address. The set of all physical
addresses corresponding to these logical addresses is known as the physical address space. A physical address is computed by the MMU. The run-time mapping from virtual to physical addresses is done by a hardware device called the Memory Management Unit (MMU). The physical address always remains constant.

Static and Dynamic Loading
Loading a process into the main memory is done by a loader. There are two different types of loading:
Static loading: Loading the entire program into a fixed address. It requires more memory space.
Dynamic loading: Without dynamic loading, the entire program and all data of a process must be in physical memory for the process to execute, so the size of a process is limited to the size of physical memory. To gain proper memory utilization, dynamic loading is used. In dynamic loading, a routine is not loaded until it is called. All routines reside on disk in a relocatable load format. One of the advantages of dynamic loading is that an unused routine is never loaded. This loading is useful when a large amount of code is needed but only used infrequently.

Static and Dynamic Linking
To perform a linking task, a linker is used. A linker is a program that takes one or more object files generated by a compiler and combines them into a single executable file.
Static linking: In static linking, the linker combines all necessary program modules into a single executable program, so there is no runtime dependency. Some operating systems support only static linking, in which system language libraries are treated like any other object module.
Dynamic linking: The basic concept of dynamic linking is similar to dynamic loading. In dynamic linking, a "stub" is included for each appropriate library routine reference. A stub is a small piece of code. When the stub is executed, it checks whether the needed routine is already in memory or not. If it is not available, the program loads the routine into memory.
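The MMU's run-time mapping from logical to physical addresses can be sketched in Python, assuming the simplest scheme: a relocation (base) register plus a limit register. Real MMUs use paging or segmentation, and the addresses below are invented for illustration:

```python
# A minimal sketch of logical-to-physical address translation using a
# relocation (base) register and a limit register.

def translate(logical_addr, base, limit):
    """Return the physical address for a CPU-generated logical address."""
    if logical_addr < 0 or logical_addr >= limit:
        # Out-of-range access: the hardware traps to the operating system.
        raise MemoryError("addressing error: trap to OS")
    return base + logical_addr

# A process loaded at physical address 14000 with a 3000-byte logical
# address space: logical address 346 maps to physical address 14346.
print(translate(346, base=14000, limit=3000))  # 14346
```

Note how the process only ever sees logical addresses starting at 0; relocating the process is just a matter of changing the base register, which is why the logical address can change while the physical layout stays under OS control.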
Swapping: When a process is executed, it must reside in main memory. Swapping is the process of temporarily moving a process from main memory to secondary memory; main memory is fast compared to secondary memory. Swapping allows more processes to be run than can fit into memory at one time. The major part of swapping time is transfer time, and the total transfer time is directly proportional to the amount of memory swapped. Swapping is also known as roll-out, roll-in because, if a higher-priority process arrives and wants service, the memory manager can swap out the lower
priority process and then load and execute the higher-priority process. After finishing the higher-priority work, the lower-priority process is swapped back into memory and continues its execution.

Swapping

Contiguous Memory Allocation: The main memory must accommodate both the operating system and the various user processes. Therefore, the allocation of memory becomes an important task in the operating system. The memory is usually divided into two partitions: one for the resident operating system and one for the user processes. We normally need several user processes to reside in memory simultaneously. Therefore, we need to consider how to allocate available memory to the processes that are in the input queue waiting to be brought into memory. In contiguous memory allocation, each process is contained in a single contiguous segment of memory.

Contiguous Memory Allocation
Memory allocation: To gain proper memory utilization, memory must be allocated in an efficient manner. One of the simplest methods for allocating memory is to divide memory into several fixed-sized partitions, where each partition contains exactly one process. Thus, the degree of multiprogramming is determined by the number of partitions.
Fixed partition allocation: In this method, a process is selected from the input queue and loaded into a free partition. When the process terminates, the partition becomes available for other processes.
Variable partition allocation: In this method, the operating system maintains a table that indicates which parts of memory are available and which are occupied by processes. Initially, all memory is available for user processes and is considered one large block of available memory. This available memory is known as a "hole". When a process arrives and needs memory, we search for a hole that is large enough to store the process. If the requirement is fulfilled, we allocate memory to the process; otherwise, the rest is kept available to satisfy future requests.
While allocating memory, a dynamic storage allocation problem sometimes occurs, which concerns how to satisfy a request of size n from a list of free holes. There are some solutions to this problem:
First fit: In the first fit, the first available free hole that fulfils the requirement of the process is allocated.

First Fit

Here, in this diagram, the 40 KB memory block is the first available free hole that can store process A (size 25 KB), because the first two blocks do not have sufficient memory space.
Best fit: In the best fit, allocate the smallest hole that is big enough for the process's requirements. For this, we search the entire list, unless the list is ordered by size.
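As a sketch, these hole-selection strategies can be written as simple searches over a list of free hole sizes; the worst-fit variant, described below, takes the largest hole instead. The hole sizes here are invented to echo the 25 KB example in the diagrams:

```python
# First, best and worst fit over a list of free hole sizes (in KB).
# Each function returns the index of the chosen hole, or None if no
# hole is large enough.

def first_fit(holes, size):
    for i, h in enumerate(holes):
        if h >= size:
            return i          # first hole that fits
    return None

def best_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return min(fits, key=lambda i: holes[i]) if fits else None   # smallest fit

def worst_fit(holes, size):
    fits = [i for i, h in enumerate(holes) if h >= size]
    return max(fits, key=lambda i: holes[i]) if fits else None   # largest fit

holes = [10, 15, 40, 60, 25]    # free holes for a 25 KB request
print(first_fit(holes, 25))     # 2 -> the 40 KB hole (first that fits)
print(best_fit(holes, 25))      # 4 -> the 25 KB hole, no leftover
print(worst_fit(holes, 25))     # 3 -> the 60 KB hole, largest leftover
```

Best fit scans the whole list but leaves the smallest leftover hole, which is why its memory utilization is the best of the three.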
Best Fit

Here, we first traverse the complete list and find that the last hole, 25 KB, is the most suitable hole for Process A (size 25 KB). Of these techniques, best fit gives the highest memory utilization.

Worst fit: Allocate the largest available hole to the process. This method produces the largest leftover hole.

Fragmentation: As processes are loaded into and removed from memory, the free space is broken into small holes. These holes cannot be assigned to new processes, either because they are not combined or because they do not fulfill a process's memory requirement. To achieve a higher degree of multiprogramming, we must reduce this waste of memory. There are two types of fragmentation:

Internal fragmentation: Internal fragmentation occurs when the memory block allocated to a process is larger than its requested size; the unused leftover space inside the block is wasted. Example: Suppose fixed partitioning is used and free blocks of 3 MB, 6 MB, and 7 MB exist in memory. A new process P4 of size 2 MB arrives and demands a block of memory. It gets the 3 MB block, but 1 MB of that block is wasted and cannot be allocated to any other process. This is internal fragmentation.

External fragmentation: In external fragmentation, free memory blocks exist, but we cannot assign them to a process because the blocks are not contiguous. Example (continuing the scenario above): Three processes P1, P2, and P3 arrive with sizes 2 MB, 4 MB, and 7 MB and are allocated the 3 MB, 6 MB, and 7 MB blocks respectively. After allocation, P1 and P2 leave 1 MB and 2 MB unused.
Suppose a new process P4 arrives and demands a 3 MB block of memory. A total of 3 MB is free, but we cannot assign it because the free space is not contiguous. This is external fragmentation. Both the first-fit and best-fit strategies for memory
allocation are affected by external fragmentation. Worst fit wastes memory in its own way: in the Worst Fit figure shown earlier, Process A (size 25 KB) is allocated the largest available memory block, 60 KB, and this inefficient memory utilization is the major issue with that strategy. To overcome external fragmentation, compaction is used: all free memory space is combined into one large block, which other processes can then use effectively. Another possible solution is to allow the logical address space of a process to be noncontiguous, permitting the process to be allocated physical memory wherever it is available.

Paging: Paging is a memory-management scheme that eliminates the need for contiguous allocation of physical memory; it permits the physical address space of a process to be non-contiguous.
• Logical address (virtual address), represented in bits: an address generated by the CPU.
• Logical address space (virtual address space), represented in words or bytes: the set of all logical addresses generated by a program.
• Physical address, represented in bits: an address actually available on a memory unit.
• Physical address space, represented in words or bytes: the set of all physical addresses corresponding to the logical addresses.

Paging

2.3.2 Process Management
A program does nothing unless its instructions are executed by a CPU; a program in execution is called a process. To accomplish its task, a process needs computer resources, and more than one process may require the same resource at the same time. Therefore, the operating system has to
manage all the processes and resources in a convenient and efficient way. Some resources may need to be used by only one process at a time to maintain consistency; otherwise, the system can become inconsistent and deadlock may occur. The operating system is responsible for the following activities in connection with process management:
1. Scheduling processes and threads on the CPUs.
2. Creating and deleting both user and system processes.
3. Suspending and resuming processes.
4. Providing mechanisms for process synchronization.
5. Providing mechanisms for process communication.

2.3.3 File Management
File management is one of the basic and important functions of an operating system. The operating system manages all the files of the computer system, whatever their extensions. A file is a collection of specific information stored in the memory of a computer system. File management is the process of manipulating files in the computer system; it includes creating, modifying, and deleting files. The following are some of the tasks performed by the file management function of an operating system:
1. It helps to create new files in the computer system and place them at specific locations.
2. It helps in easily and quickly locating these files in the computer system.
3. It makes the process of sharing files among different users easy and user-friendly.
4. It stores files in separate folders known as directories. These directories help users to search for files quickly or to manage files according to their types or uses.
5. It helps the user to modify the data of files or to modify the names of files in the directories.
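A minimal sketch of these file-management tasks (create, locate, modify, rename, delete, organized in directories) using Python's standard library. The directory and file names are invented for the example, and a temporary directory stands in for the root directory.

```python
# Sketch of basic file-management operations using pathlib
# (requires Python 3.8+ for Path.rename returning the new path).
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())          # stand-in for a root directory
reports = root / "reports"               # a subdirectory (a "folder")
reports.mkdir()

f = reports / "notes.txt"
f.write_text("draft")                    # create a file at a specific location
found = list(root.rglob("notes.txt"))    # locate the file in the hierarchy
print(len(found))                        # 1

f.write_text("final")                    # modify the file's data
renamed = f.rename(reports / "notes-final.txt")  # modify the file's name
print(renamed.name)                      # notes-final.txt

renamed.unlink()                         # delete the file
reports.rmdir()                          # remove the now-empty directory
root.rmdir()
```

Each step corresponds to one of the numbered tasks above; the OS's file-management layer is what actually carries out these requests on disk.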
File Management

The figure above shows the general hierarchy of storage in an operating system. The root directory is at the highest level of the hierarchical structure and includes all the subdirectories in which files are stored. A subdirectory is a directory present inside another directory in the file storage system. This directory-based storage ensures better organization of files in the memory of the computer system. The file management function of the operating system (OS) is based on the following concepts:
➢ File attributes: the characteristics of a file, such as type, date of last modification, size, and location on disk. File attributes help the user understand the value and location of a file; they are among the most important features, as they describe all the information regarding a particular file.
➢ File operations: the tasks that can be performed on a file, such as opening and closing it.
➢ File access permissions: the access permissions related to a file, such as read and write.
➢ File systems: the logical method of file storage in a computer system. Commonly used file systems include FAT and NTFS.
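The file-attribute and access-permission concepts above can be illustrated with Python's standard `os.stat` and `os.access` calls, here applied to a temporary file created just for the demonstration.

```python
# Reading file attributes kept by the operating system.
import os
import tempfile

fd, path = tempfile.mkstemp(suffix=".txt")
os.write(fd, b"hello")                 # write 5 bytes
os.close(fd)

info = os.stat(path)                   # the attribute record kept by the OS
print(info.st_size)                    # 5 -> file size in bytes
print(info.st_mtime > 0)               # True -> date of last modification is recorded
print(os.access(path, os.R_OK))        # True -> read permission is granted
os.remove(path)
```

`os.stat` returns the same kind of attribute record (size, timestamps, location on disk) that file managers display, and `os.access` consults the file's access permissions before any read or write is attempted.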
2.4 Summary
• Computer hardware includes the physical parts of a computer, such as the case, central processing unit (CPU), monitor, mouse, keyboard, and data storage.
• Computer hardware is the set of physical components that a computer system requires to function.
• A computer operating system (OS) is system software that manages computer hardware and software resources and provides common services for computer programs.

2.5 Glossary
• Boot: To start up a computer. A cold boot restarts the computer after the power is turned off; a warm boot restarts the computer without turning off the power.
• Cache: A small memory storage area that a computer can use to re-access data quickly instead of re-reading it from the original source, such as a hard drive.
• Chip: A tiny wafer of silicon containing miniature electric circuits that can store millions of bits of information.
• Cursor: A moving position indicator displayed on a computer monitor that shows the operator where the next action or operation will take place.

2.6 References
• https://www.crucial.com/articles/pc-builders/what-is-computer-hardware
• https://web.stanford.edu/class/cs101/hardware-1.html
• https://www.techtarget.com/searchnetworking/definition/hardware
• https://www.guru99.com/operating-system-tutorial.html
• https://www.britannica.com/technology/operating-system