Programmed I/O: The performance of the system is severely degraded.
Interrupt Initiated I/O: The performance of the system is enhanced to some extent.
Table 12.2: Difference between programmed and interrupt initiated I/O

12.4 DMA CONTROLLER AND IOP
The DMA controller is a hardware device that allows I/O devices to access memory directly, with less participation of the processor. The DMA controller uses the usual interface circuits to communicate with the CPU and the I/O devices. Figure 12.1 below shows the block diagram of the DMA controller. The unit communicates with the CPU through the data bus and control lines. The CPU selects a register within the DMA controller through the address bus by enabling the DMA select (DS) and register select (RS) inputs. The RD (read) and WR (write) inputs are bidirectional. When the BG (bus grant) input is 0, the CPU can communicate with the DMA registers. When the BG (bus grant) input is 1, the CPU has relinquished the buses and the DMA controller can communicate directly with the memory.

DMA controller registers
The DMA controller has three registers, as follows.
Address register – It contains the address to specify the desired location in memory.
Word count register – It contains the number of words to be transferred.
Control register – It specifies the transfer mode.
All registers in the DMA controller appear to the CPU as I/O interface registers. Therefore, the CPU can both read and write into the DMA registers under program control via the data bus.
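To make the register description concrete, here is a minimal sketch in C that models the three registers as a plain structure. The register widths, the mode values and the function names are illustrative assumptions, not taken from any particular DMA chip.

#include <stdint.h>

enum dma_mode { DMA_READ, DMA_WRITE };          /* transfer mode kept in the control register */

struct dma_controller {
    uint16_t address;      /* address register: memory location for the transfer     */
    uint16_t word_count;   /* word count register: number of words to be transferred */
    uint16_t control;      /* control register: transfer mode and start/status bits  */
};

/* The CPU programs these registers over the data bus before starting a transfer. */
static void dma_program(struct dma_controller *dma, uint16_t start_address,
                        uint16_t words, enum dma_mode mode)
{
    dma->address    = start_address;
    dma->word_count = words;
    dma->control    = (uint16_t)mode;           /* illustrative encoding of the mode */
}

int main(void)
{
    struct dma_controller dma;
    dma_program(&dma, 0x2000, 128, DMA_READ);   /* e.g. read 128 words starting at 0x2000 */
    return 0;
}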
Figure 12.1: Block diagram of DMA controller registers

Explanation
The CPU initializes the DMA controller by sending the following information through the data bus:
The starting address of the memory block where the data is available (for read) or where data are to be stored (for write).
The word count, which is the number of words in the memory block to be read or written.
Control bits to define the mode of transfer, such as read or write.
A control bit to begin the DMA transfer.

The execution of a computer program requires the synchronous working of more than one component of a computer: for example, processors providing the necessary control information and addresses, and buses transferring information and data between memory and I/O devices. The interesting part of the system is the way it handles the transfer of information among the processor, memory and I/O devices. Usually, the processor controls the whole process of transferring data, right from initiating the transfer to the storage of data at the destination. This adds load on the processor, and most of the time it stays in the idle state, thus decreasing the efficiency of the system. To speed up the transfer of data between I/O devices and memory, the DMA controller acts as a station master: it transfers data with minimal intervention of the processor.
What is a DMA Controller?
The term DMA stands for direct memory access. The hardware device used for direct memory access is called the DMA controller. The DMA controller is a control unit, part of the I/O device's interface circuit, which can transfer blocks of data between I/O devices and main memory with minimal intervention from the processor.

DMA Controller Diagram in Computer Architecture
The DMA controller provides an interface between the bus and the input-output devices. Although it transfers data without the intervention of the processor, it is controlled by the processor. The processor initiates the DMA controller by sending the starting address, the number of words in the data block and the direction of transfer of the data, i.e. from I/O devices to the memory or from main memory to I/O devices. More than one external device can be connected to the DMA controller.

Figure 12.2: DMA controller diagram

DMA in Computer Architecture
The DMA controller contains an address unit for generating addresses and selecting the I/O device for transfer. It also contains a control unit and a data count for keeping count of the number of blocks transferred and indicating the direction of transfer of the data. When the transfer is completed, the DMA controller informs the processor by raising an interrupt. The typical block diagram of the DMA controller is shown in the figure below.
Figure 12.3: Typical block diagram of DMA controller

Working of DMA Controller
The DMA controller has to share the bus with the processor to make the data transfer. The device that holds the bus at a given time is called the bus master. When a transfer from an I/O device to the memory or vice versa has to be made, the processor stops the execution of the current program, increments the program counter, moves data over the stack and then sends a DMA select signal to the DMA controller over the address bus.

If the DMA controller is free, it requests control of the bus from the processor by raising the bus request signal. The processor grants the bus to the controller by raising the bus grant signal; now the DMA controller is the bus master. The processor initiates the DMA controller by sending the memory addresses, the number of blocks of data to be transferred and the direction of data transfer. After assigning the data transfer task to the DMA controller, instead of waiting idly until completion of the data transfer, the processor resumes the execution of the program after retrieving instructions from the stack.
Figure 12.4: Transfer of data in computer by DMA controller

The DMA controller now has full control of the buses and can interact directly with memory and I/O devices independent of the CPU. It makes the data transfer according to the control instructions received from the processor. After completion of the data transfer, it disables the bus request signal and the CPU disables the bus grant signal, thereby moving control of the buses back to the CPU.

When an I/O device wants to initiate a transfer, it sends a DMA request signal to the DMA controller, which the controller acknowledges if it is free. The controller then requests the processor for the bus by raising the bus request signal. After receiving the bus grant signal it transfers the data from the device. For an n-channelled DMA controller, n external devices can be connected.

The DMA transfers the data in three modes, which include the following.
Burst Mode: In this mode the DMA hands over the buses to the CPU only after completion of the whole data transfer. Meanwhile, if the CPU requires the bus it has to stay idle and wait for the data transfer to finish.
Cycle Stealing Mode: In this mode, the DMA gives control of the buses back to the CPU after the transfer of every byte. It continuously issues a request for bus control, makes the transfer of one byte and returns the bus. Because of this, the CPU does not have to wait a long time if it needs the bus for a higher priority task.
Transparent Mode: Here, the DMA transfers data only when the CPU is executing instructions which do not require the use of the buses.

8237 DMA Controller
The 8237 has 4 I/O channels along with the flexibility of increasing the number of channels.
Each channel can be programmed individually and has a 64k address and data capability. The timing control block, the program command control block and the priority encoder block are the three main blocks of the 8237A. The internal timing and external control signals are driven by the timing control block. Various commands given by the microprocessor to the DMA are decoded by the program command control block. Which channel has to be given the highest priority is decided by the priority encoder block.

The 8237A has 27 internal registers. The 8237A operates in two cycles, the Idle cycle and the Active cycle, where each cycle contains 7 separate states composed of one clock period each. S0 is the first state, where the controller has requested the bus and is waiting for the acknowledgment from the processor. S1, S2, S3 and S4 are called the working states of the 8237A, where the actual transfer of data takes place. If more time is needed for the transfer, wait states SW are added between these states.

For memory-to-memory transfer, a read-from-memory and a write-to-memory transfer have to be made. Eight states are required for a single transfer. The first four states, with subscripts S11, S12, S13, S14, do the read-from-memory transfer and the next four, S21, S22, S23, S24, do the write-to-memory transfer.

The DMA goes into the idle state when no channel is requesting service and performs the SI state. SI is an inactive state in which the DMA remains inactive until it receives a request. In this state the DMA is in the program condition, where the processor can program the DMA. When the DMA is in the idle state and a channel requests service, it outputs an HRQ signal to the processor and enters the Active state, where it can start the transfer of data either by burst mode, cycle stealing mode or transparent mode.
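The difference between burst mode and cycle stealing mode can be illustrated with a small simulation sketch. The print statements simply stand in for bus ownership; the block size and function names are made up for the illustration and are not specific to the 8237.

#include <stdio.h>

/* Burst mode: the DMA keeps the bus for the whole block, so the CPU waits. */
static void burst_transfer(int nbytes)
{
    printf("DMA: request bus\n");
    for (int i = 0; i < nbytes; i++)
        printf("DMA: transfer byte %d (CPU is waiting)\n", i);
    printf("DMA: release bus after the whole block\n");
}

/* Cycle stealing: the DMA takes the bus for one byte at a time and returns it. */
static void cycle_stealing_transfer(int nbytes)
{
    for (int i = 0; i < nbytes; i++) {
        printf("DMA: request bus\n");
        printf("DMA: transfer byte %d\n", i);
        printf("DMA: release bus (CPU may use it now)\n");
    }
}

int main(void)
{
    burst_transfer(4);
    cycle_stealing_transfer(4);
    return 0;
}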
Figure 12.5: 8237 pin diagram

8257 DMA Controller
When paired with a single Intel 8212 I/O port device, the 8257 DMA controller forms a complete 4-channel DMA controller. Upon receiving a transfer request, the 8257 controller:
Acquires control over the system bus from the processor.
Acknowledges the peripheral connected to the highest priority channel.
Moves the least significant bits of the memory address over the address lines A0-A7 of the system bus.
Drives the most significant 8 bits of the memory address to the 8212 I/O port through the data lines.
Generates the appropriate control signals for the transfer of data between the peripherals and the addressed memory locations.
When the specified number of bytes has been transferred, the controller informs the CPU of the end of the transfer by activating the terminal count (TC) output.

For each channel the 8257 contains two 16-bit registers: 1) the DMA address register and 2) the terminal count register, which should be initialized before a channel is enabled. The address of the first memory location to be accessed is loaded in the DMA address register. The lower order 14 bits of the value loaded in the terminal count register indicate the number of DMA cycles minus one before the activation of the terminal count output. The type of operation for a channel is indicated by the most significant two bits of the terminal count register.
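As a small worked example of the terminal count register layout just described, the sketch below packs (number of DMA cycles - 1) into the lower 14 bits and a 2-bit operation type into the upper bits. The operation codes used here are placeholders for illustration and are not quoted from the 8257 datasheet.

#include <stdint.h>
#include <stdio.h>

enum op_type { OP_VERIFY = 0, OP_WRITE = 1, OP_READ = 2 };   /* placeholder 2-bit codes */

static uint16_t make_terminal_count(uint16_t dma_cycles, enum op_type op)
{
    uint16_t count = (uint16_t)((dma_cycles - 1) & 0x3FFF);  /* lower 14 bits: cycles - 1 */
    return (uint16_t)(((uint16_t)op << 14) | count);         /* upper 2 bits: operation   */
}

int main(void)
{
    uint16_t tc = make_terminal_count(512, OP_READ);         /* 512 DMA cycles, read operation */
    printf("terminal count register = 0x%04X\n", tc);
    return 0;
}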
Figure 12.6: 8257 pin diagram

Advantages and Disadvantages of DMA Controller
The advantages and disadvantages of the DMA controller include the following.
Advantages
DMA speeds up memory operations by bypassing the involvement of the CPU.
The work overload on the CPU decreases.
For each transfer, only a few clock cycles are required.
Disadvantages
A cache coherence problem can be seen when DMA is used for data transfer.
It increases the price of the system.

The DMA (Direct Memory Access) controller is used in graphics cards, network cards, sound cards and so on. DMA is also used for intra-chip transfer in multi-core processors. Operating in one of its three modes, DMA can considerably reduce the load on the processor. Which of the DMA modes have you worked with? Which mode do you consider more effective?

IOP
The DMA mode of data transfer reduces the CPU's overhead in handling I/O operations. It also allows parallelism between CPU and I/O operations. Such parallelism is necessary to avoid wastage of valuable CPU time while handling I/O devices whose speeds are much slower compared to the CPU. The concept of DMA operation can be extended to relieve the CPU further from getting involved with the execution of I/O operations. This gives rise to the development of a special-purpose processor called the Input-Output Processor (IOP) or I/O channel.
The Input Output Processor (IOP) is just like a CPU that handles the details of I/O operations. It is equipped with more facilities than those available in a typical DMA controller. The IOP can fetch and execute its own instructions, which are specifically designed to characterize I/O transfers. In addition to the I/O-related tasks, it can perform other processing tasks like arithmetic, logic, branching and code translation. The main memory unit takes the pivotal role. It communicates with the processor by means of DMA. The block diagram of the IOP is shown below.

Figure 12.7: IOP

The Input Output Processor is a specialized processor which loads and stores data into memory along with the execution of I/O instructions. It acts as an interface between the system and the devices. It carries out a sequence of events to execute I/O operations and then stores the results into memory.

Advantages
The I/O devices can directly access the main memory without the intervention of the processor in IOP-based systems.
It is used to address the problems that arise in the direct memory access method.
12.4.1 DIFFERENCE BETWEEN DMA CONTROLLER AND IOP
Here we will discuss the difference between I/O program controlled transfer and DMA transfer.

Sl.No. | I/O Program Controlled Transfer | DMA Transfer
1. | It is software-controlled data transfer. | It is hardware-controlled data transfer.
2. | Data transfer speed is slow. | Data transfer speed is fast.
3. | The CPU is involved in the complete transfer. | The CPU is not involved in the complete transfer.
4. | Extra hardware is not required. | A DMA controller is required for data transfer.
5. | Data is routed through the processor during the data transfer. | Data is not routed through the processor during the data transfer.
6. | Used for small data transfers. | Used for large data transfers.
Table 12.3: IOP and DMA

12.5 SUMMARY
In program-controlled I/O, the processor program controls the complete data transfer, so a transfer can take place only when an I/O transfer instruction is executed. In most cases it is required to check whether the device is ready for the data transfer. Usually, the transfer is to and from a CPU register and a peripheral. Here, the CPU constantly monitors the peripheral: until the I/O unit indicates that it is ready for the transfer, the CPU waits and stays in a loop. This is time-consuming, as it keeps the CPU busy needlessly.
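The busy-wait loop just described can be sketched in C as follows. A plain structure stands in for a memory-mapped device interface, so the register layout and the ready bit are purely illustrative.

#include <stdint.h>
#include <stdio.h>

struct device_interface {
    volatile uint8_t status;   /* bit 0 = ready flag (illustrative layout) */
    volatile uint8_t data;     /* data register */
};

static uint8_t programmed_io_read_byte(struct device_interface *dev)
{
    while ((dev->status & 0x01) == 0)
        ;                      /* the CPU stays in this loop until the device reports ready */
    return dev->data;          /* transfer one byte to a CPU register */
}

int main(void)
{
    struct device_interface dev = { .status = 0x01, .data = 'A' };  /* pretend the device is ready */
    printf("received: %c\n", programmed_io_read_byte(&dev));
    return 0;
}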
To overcome the disadvantage of programmed I/O, i.e. keeping the CPU busy needlessly, interrupt-driven I/O is used. In this approach, a peripheral sends an interrupt signal to the CPU whenever it is ready to transfer data. This means that the I/O data transfer is initiated by the external I/O device. The processor stops the execution of the current program and transfers control to the interrupt service routine when interrupted. The interrupt service routine then performs the data transfer. After the completion of the data transfer, it returns control to the main program at the point where it was interrupted.

DMA transfer is used for large data transfers. Here, a memory bus is used by the interface to transfer data in and out of a memory unit. The CPU provides the starting address and the number of bytes to be transferred to the interface to initiate the transfer; after that it proceeds to execute other tasks. DMA requests a memory cycle through the memory bus when the transfer is made. DMA transfers the data directly into the memory when the request is granted by the memory controller. To allow direct memory transfer (I/O), the CPU delays its own memory access operation. So, DMA allows I/O devices to directly access memory with less intervention of the CPU.

DMA stands for Direct Memory Access. As the name implies, DMA facilitates data transfer between I/O and memory directly, instead of involving the CPU as in the other two cases of I/O data transfer. DMA is a faster, bulk data transfer technique. The system bus is common between the CPU, memory, DMA and maybe a few I/O controllers. At any instant, the system bus can be used for communication between any two members only; further, at a time, a resource can be used by only one entity. Thus, when DMA communicates with memory, the CPU has to hold off from using the system bus. However, the CPU may carry out other machine cycles internally using the ALU and other resources. The DMA controller (DMAC) is the special hardware which manages these functions.

The CPU delegates the responsibility of data transfer to the DMA by sending the following details:
i. The device on which the I/O is to be carried out
ii. The command (read/write) to be carried out
iii. The starting address of the memory location for the data transfer
iv. The length of the data transfer (byte count)
The first two items are given to the device controller, while the last two are stored in the channel register in the DMAC. The I/O controller initiates the necessary actions with the device and requests the DMAC when it is ready with data.
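A minimal sketch of this delegation step is given below; the structures and function names are hypothetical and only mirror the four items listed above, not a real driver interface. The bus handoff that follows the delegation is described next.

#include <stdint.h>
#include <stdio.h>

enum io_command { IO_READ, IO_WRITE };

struct device_controller { int device_id; enum io_command command; };      /* items (i) and (ii)   */
struct dmac_channel      { uint32_t start_address; uint32_t byte_count; }; /* items (iii) and (iv) */

static void delegate_transfer(struct device_controller *dev, struct dmac_channel *ch,
                              int device_id, enum io_command cmd,
                              uint32_t start_address, uint32_t byte_count)
{
    dev->device_id = device_id;        /* which device the I/O is carried out on */
    dev->command   = cmd;              /* read or write                          */
    ch->start_address = start_address; /* stored in the DMAC channel register    */
    ch->byte_count    = byte_count;    /* decremented as bytes are transferred   */
    /* After this, the CPU continues with other work; the I/O controller and the
       DMAC complete the transfer and interrupt the CPU when the count reaches 0. */
}

int main(void)
{
    struct device_controller dev;
    struct dmac_channel ch;
    delegate_transfer(&dev, &ch, 3, IO_READ, 0x00010000u, 4096u);
    printf("channel programmed: address=0x%08X, byte count=%u\n",
           (unsigned)ch.start_address, (unsigned)ch.byte_count);
    return 0;
}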
The DMAC raises the HOLD signal to the CPU, conveying its intention to take the system bus for the data transfer. At the end of the current machine cycle, the CPU disengages itself from the system bus. The CPU then responds with a HOLD ACKNOWLEDGE signal to the DMAC, indicating that the system bus is available for use. The DMAC places on the address bus the memory address at which the data transfer is to take place. A read or write signal is then generated by the DMAC, and the I/O device either generates or latches the data. Then the DMA transfers the data to memory. A register is used as a byte count and is decremented for each byte transferred to memory. The memory address is incremented by the number of bytes transferred, generating the new memory address for the next memory cycle. When the byte count reaches zero, the DMAC generates an interrupt to the CPU. As part of the interrupt service routine, the CPU collects the status of the data transfer.

12.6 KEYWORDS
Programmed I/O - In this mode the data transfer is initiated by the instructions written in a computer program.
Interrupt Initiated I/O - This mode uses an interrupt facility and special commands to inform the interface to issue the interrupt command when data becomes available and the interface is ready for the data transfer.
Address Register – It contains the address to specify the desired location in memory.
Word Count Register – It contains the number of words to be transferred.
DMA Controller - The DMA controller is a hardware device that allows I/O devices to directly access memory with less participation of the processor.

12.7 LEARNING ACTIVITY
1. Conduct a session on programmed I/O.
___________________________________________________________________________
___________________________________________________________________________
2. Create a session for DMA controller.
___________________________________________________________________________
___________________________________________________________________________
12.8 UNIT END QUESTIONS
A. Descriptive Questions
Short Questions
1. Define I/O.
2. Write about I/O data transfer.
3. What is an I/O controller?
4. What is meant by PIO modes?
5. What is an IOP?
Long Questions
1. List and explain programmed I/O.
2. Explain the concept of interrupt initiated I/O.
3. Discuss the DMA controller.
4. Explain the IOP.
5. Write the difference between the DMA controller and the IOP.
B. Multiple Choice Questions
1. How does DMA differ from the interrupt mode?
a. The involvement of the processor for the operation
b. The method of accessing the I/O devices
c. The amount of data transfer possible
d. None of these
2. DMA transfers are performed by a control circuit called the
a. Device interface
b. DMA controller
c. Data controller
d. Overlooker
3. In DMA transfers, the required signals and addresses are given by the
a. Processor
b. Device drivers
c. DMA controllers
d. The program itself
4. After the completion of the DMA transfer, the processor is notified by
a. Acknowledge signal
b. Interrupt signal
c. WMFC signal
d. None of these
5. How many registers does the DMA controller have?
a. 4
b. 2
c. 3
d. 1
Answers
1-d, 2-b, 3-c, 4-b, 5-c

12.9 REFERENCES
Reference Books
Stallings, William (2012). Computer Organization and Architecture (9th ed.). Pearson.
"Physical Address Extension — PAE Memory and Windows". Microsoft Windows Hardware Development Central. 2005. Retrieved 2008-04-07.
Corbet, Jonathan (December 8, 2005). "Memory copies in hardware". LWN.net.
Textbook References
Osborne, Adam (1980). An Introduction to Microcomputers: Volume 1: Basic Concepts (2nd ed.). Osborne McGraw Hill. pp. 5–64 through 5–93. ISBN 0931988349.
"Intel 8237 & 8237-2 Datasheet" (PDF). JKbox RC702 subsite. Retrieved 20 April 2019.
"DMA Fundamentals on Various PC Platforms". National Instruments, pages 6 & 7. Universidad Nacional de la Plata, Argentina. Retrieved 20 April 2019.
Website
https://witscad.com/course/computer-architecture/chapter/io-data-transfer
https://www.elprocus.com/direct-memory-access-dma-in-computer-architecture/
https://www.geeksforgeeks.org/introduction-of-input-output-processor/
UNIT – 13: SYNCHRONIZATION

STRUCTURE
13.0 Learning Objectives
13.1 Introduction
13.2 Synchronization
13.3 Synchronous Data Transfer
13.4 Asynchronous Data Transfer
13.5 Summary
13.6 Keywords
13.7 Learning Activity
13.8 Unit End Questions
13.9 References

13.0 LEARNING OBJECTIVES
After studying this unit, you will be able to:
Explain synchronization.
Describe synchronous data transfer.
Illustrate asynchronous data transfer.

13.1 INTRODUCTION
Time-keeping and synchronization of clocks is a critical problem in long-distance ocean navigation. Before radio navigation and satellite-based navigation, navigators required accurate time in conjunction with astronomical observations to determine how far east or west their vessel had travelled. The invention of an accurate marine chronometer revolutionized marine navigation. By the end of the 19th century, important ports provided time signals in the form of a signal gun, flag, or dropping time ball so that mariners could check and correct their chronometers for error.

Synchronization was important in the operation of 19th-century railways, these being the first major means of transport fast enough for differences in local mean time between nearby towns to be noticeable. Each line handled the problem by synchronizing all its stations to headquarters as a standard railway time. In some territories, companies shared a single railroad track and needed to avoid collisions.
The need for strict timekeeping led the companies to settle on one standard, and civil authorities eventually abandoned local mean time in favour of railway time.

In electrical engineering terms, for digital logic and data transfer, a synchronous circuit requires a clock signal. A clock signal simply signals the start or end of some time period, often measured in microseconds or nanoseconds, that has an arbitrary relationship to any other system of measurement of the passage of minutes, hours, and days. In a different sense, electronic systems are sometimes synchronized to make events at points far apart appear simultaneous or near-simultaneous from a certain perspective. Timekeeping technologies such as the GPS satellites and the Network Time Protocol (NTP) provide real-time access to a close approximation to the UTC timescale and are used for many terrestrial synchronization applications of this kind. In computer science (especially parallel computing), synchronization is the coordination of simultaneous threads or processes to complete a task with the correct runtime order and no unexpected race conditions.

Synchronization of movement is defined as similar movements between two or more people who are temporally aligned. This is different from mimicry, which occurs after a short delay. Line dance and military step are examples. Muscular bonding is the idea that moving in time evokes particular emotions. This sparked some of the first research into movement synchronization and its effects on human emotion. In groups, synchronization of movement has been shown to increase conformity, cooperation and trust. In dyads, groups of two people, synchronization has been demonstrated to increase affiliation, self-esteem, compassion and altruistic behaviour and to increase rapport. During arguments, synchrony between the arguing pair has been noted to decrease; however, it is not clear whether this is due to the change in emotion or other factors. There is evidence to show that movement synchronization requires other people to cause its beneficial effects, as the effect on affiliation does not occur when one of the dyad is synchronizing their movements to something outside the dyad. This is known as interpersonal synchrony.

There has been dispute regarding the true effect of synchrony in these studies. Research in this area detailing the positive effects of synchrony has attributed this to synchrony alone; however, many of the experiments incorporate a shared intention to achieve synchrony. Indeed, the Reinforcement of Cooperation Model suggests that perception of synchrony leads to reinforcement that cooperation is occurring, which leads to the pro-social effects of synchrony. More research is required to separate the effect of intentionality from the beneficial effect of synchrony.
13.2 SYNCHRONIZATION
In this section, we will cover the concept of process synchronization in an operating system. Process synchronization was introduced to handle problems that arise when multiple processes execute concurrently. Processes are categorized into two types on the basis of synchronization, as given below:
Independent Process
Cooperative Process

Independent Processes
Two processes are said to be independent if the execution of one process does not affect the execution of another process.

Cooperative Processes
Two processes are said to be cooperative if the execution of one process affects the execution of another process. These processes need to be synchronized so that the order of execution can be guaranteed.

Process Synchronization
It is the task of coordinating the execution of processes in such a way that no two processes can have access to the same shared data and resources at the same time. It is a procedure used in order to preserve the appropriate order of execution of cooperative processes. There are various synchronization mechanisms to synchronize processes. Process synchronization is mainly needed in a multi-process system when multiple processes are running together and more than one process tries to gain access to the same shared resource or data at the same time.

Race Condition
When more than one process is executing the same code or accessing the same memory or a shared variable, there is a possibility that the output or the value of the shared variable is wrong; the processes effectively race against each other, and this condition is commonly known as a race condition. Several processes access and manipulate the same data concurrently, so the outcome depends on the particular order in which the accesses to the data take place. A short illustration follows.
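The sketch below shows the race condition described above with two POSIX threads incrementing the same shared variable without any synchronization; the final value depends on how their accesses interleave. The iteration count is arbitrary, and the program should be compiled with -pthread.

#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;        /* shared variable with no protection */

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++)
        shared_counter++;              /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);   /* often less than 200000: updates were lost */
    return 0;
}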
This condition is mainly a situation that may occur inside the critical section. A race condition in the critical section happens when the result of multiple thread executions differs according to the order in which the threads execute. Race conditions in critical sections can be avoided if the critical section is treated as an atomic instruction. Proper thread synchronization using locks or atomic variables can also prevent race conditions.

Critical Section Problem
A critical section is a code segment that accesses shared variables and has to be executed as an atomic action. It means that in a group of cooperating processes, at a given point of time, only one process must be executing its critical section. If any other process also wants to execute its critical section, it must wait until the first one finishes. The entry to the critical section is mainly handled by the wait() function, while the exit from the critical section is controlled by the signal() function.

Figure 13.1: Critical section problem

Entry Section
In this section the process requests entry into its critical section.
Exit Section
This section follows the critical section.

The Solution to the Critical Section Problem
A solution to the critical section problem must satisfy the following three conditions:
Mutual Exclusion
Out of a group of cooperating processes, only one process can be in its critical section at a given point of time.
Progress
If no process is in its critical section, and if one or more threads want to execute their critical section, then any one of these threads must be allowed to get into its critical section.
Bounded Waiting
After a process makes a request to get into its critical section, there is a limit on how many other processes can get into their critical sections before this process's request is granted. After the limit is reached, the system must grant the process permission to get into its critical section.

Solutions for the Critical Section
The critical section plays an important role in process synchronization, so the problem must be solved. Some widely used methods to solve the critical section problem are as follows:

Peterson's Solution
This is a widely used, software-based solution to the critical section problem. It was developed by the computer scientist Peterson, which is why it is named Peterson's solution. With this solution, whenever one process is executing in its critical section, the other process executes only the rest of its code, and vice versa. This method also ensures that only a single process can run in the critical section at a specific time.

This solution preserves all three conditions:
Mutual exclusion is ensured, as at any time only one process can access the critical section.
Progress is also ensured, as a process that is outside the critical section is unable to block other processes from entering the critical section.
Bounded waiting is assured, as every process gets a fair chance to enter the critical section.
Figure 13.2: Peterson's solution

The above shows the structure of process Pi in Peterson's solution. Suppose there are N processes (P1, P2, ... PN) and at some point of time every process requires to enter the critical section. A FLAG array of size N is maintained, which is by default false. Whenever a process requires to enter the critical section, it has to set its flag to true. For example, if Pi wants to enter it will set FLAG[i] = TRUE. Another variable, called TURN, is used to indicate the process number whose turn it is to enter the critical section. The process that enters the critical section, while exiting, changes TURN to another number from the list of processes that are ready. For example, if the turn is 3 then P3 enters the critical section and, while exiting, sets turn = 4; therefore P4 breaks out of its wait loop.
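A minimal two-process sketch of Peterson's solution in C is given below. It follows the classic two-process formulation, in which each process sets TURN to the other process on entry (the description above outlines an N-process variant where TURN is updated on exit); C11 sequentially consistent atomics stand in for the strict memory ordering the algorithm assumes. Compile with -pthread.

#include <stdatomic.h>
#include <stdbool.h>
#include <pthread.h>
#include <stdio.h>

static atomic_bool flag[2];           /* FLAG[i]: process i wants to enter its critical section */
static atomic_int  turn;              /* TURN: which process must yield                         */
static long shared_counter = 0;       /* data protected by the critical section                 */

static void *worker(void *arg)
{
    int i = (int)(long)arg;           /* this process's index: 0 or 1 */
    int other = 1 - i;

    for (int k = 0; k < 100000; k++) {
        atomic_store(&flag[i], true);                 /* entry section */
        atomic_store(&turn, other);
        while (atomic_load(&flag[other]) && atomic_load(&turn) == other)
            ;                                         /* busy-wait until it is safe to enter */

        shared_counter++;                             /* critical section */

        atomic_store(&flag[i], false);                /* exit section */
    }
    return NULL;
}

int main(void)
{
    pthread_t t0, t1;
    pthread_create(&t0, NULL, worker, (void *)0L);
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_join(t0, NULL);
    pthread_join(t1, NULL);
    printf("counter = %ld (expected 200000)\n", shared_counter);
    return 0;
}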
Synchronization Hardware
Many systems provide hardware support for critical section code. The critical section problem could be solved easily in a single-processor environment if we could disallow interrupts from occurring while a shared variable or resource is being modified. In this manner, we could be sure that the current sequence of instructions would be allowed to execute in order without pre-emption. Unfortunately, this solution is not feasible in a multiprocessor environment. Disabling interrupts in a multiprocessor environment can be time-consuming, as the message has to be passed to all the processors. This message transmission lag delays the entry of threads into the critical section, and the system efficiency decreases.

Mutex Locks
As the synchronization hardware solution is not easy to implement for everyone, a strict software approach called mutex locks was introduced. In this approach, in the entry section of the code, a LOCK is acquired over the critical resources modified and used inside the critical section, and in the exit section that LOCK is released. As the resource is locked while a process executes its critical section, no other process can access it.
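A minimal sketch of the mutex-lock approach using POSIX threads is shown below: the LOCK is acquired in the entry section and released in the exit section, so only one thread manipulates the shared data at a time. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long shared_counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* entry section: acquire the LOCK */
        shared_counter++;              /* critical section                */
        pthread_mutex_unlock(&lock);   /* exit section: release the LOCK  */
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (always 200000)\n", shared_counter);
    return 0;
}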
13.3 SYNCHRONOUS DATA TRANSFER
In synchronous data transfer, the sending and receiving units are enabled with the same clock signal. It is possible between two units when each of them knows the behaviour of the other. The master performs a sequence of instructions for the data transfer in a predefined order. All these actions are synchronized with the common clock. The master is designed to supply the data at a time when the slave is definitely ready for it. Usually, the master will introduce sufficient delay to take into account the slow response of the slave, without any request from the slave.

The master does not expect any acknowledgment signal from the slave when data is sent by the master to the slave. Similarly, when data from the slave is read by the master, neither does the slave inform the master that the data has been placed on the data bus, nor does the master acknowledge that the data has been read. Both the master and the slave perform their own tasks of transferring data at a designed clock period. Since both devices know the behaviour (response time) of each other, no difficulty arises. Prior to transferring data, the master must logically select the slave, either by sending the slave's address or by sending a "device select" signal to the slave. But there is no acknowledgment signal from the slave to the master when the device is selected. The timing diagram of the synchronous read operation is given below:

Figure 13.3: Timing diagram for synchronous read operation

In this timing diagram, the master first places the slave's address on the address bus and the read signal on the control line at the falling edge of the clock. The entire read operation is over in one clock period.

Advantages
The design procedure is easy.
The master does not wait for any acknowledge signal from the slave, though the master waits for a time equal to the slave's response time.
The slave does not generate an acknowledge signal, though it obeys the timing rules as per the protocol set by the master or system designer.
Disadvantages
If a slow-speed unit is connected to a common bus, it can degrade the overall rate of transfer in the system.
If the slave operates at a slow speed, the master will be idle for some time during the data transfer, and vice versa.

The term synchronous is used to describe a continuous and consistently timed transfer of data blocks. Synchronous data transmission is a data transfer method in which a continuous stream of data signals is accompanied by timing signals (generated by an electronic clock) to ensure that the transmitter and the receiver are in step (synchronized) with one another. The data is sent in blocks (called frames or packets) spaced by fixed time intervals.
Synchronous transmission modes are used when large amounts of data must be transferred very quickly from one location to the other. The speed of the synchronous connection is attained by transferring data in large blocks instead of individual characters. Synchronous transmission synchronizes the transmission speeds at both the receiving and sending ends of the transmission using clock signals built into each component. A continual stream of data is then sent between the two nodes. The data blocks are grouped and spaced in regular intervals and are preceded by special characters called syn, or synchronous idle, characters. See the following illustration.

Figure 13.4: Synchronous transmission

After the syn characters are received by the remote device, they are decoded and used to synchronize the connection. After the connection is correctly synchronized, data transmission may begin. An analogy for synchronous transmission is the transmission of a large text document. Before the document is transferred across the synchronous line, it is first broken into blocks of sentences or paragraphs. The blocks are then sent over the communication link to the remote site. The timing needed for synchronous connections is obtained from the devices located on the communication link. All devices on the synchronous link must be set to the same clocking.

The following is a list of characteristics specific to synchronous communication:
There are no gaps between characters being transmitted.
Timing is supplied by modems or other devices at each end of the connection.
Special syn characters precede the data being transmitted.
The syn characters are used between blocks of data for timing purposes.

Because there are no start and stop bits, the data transfer rate is quicker, although more errors will occur: the clocks will eventually get out of sync, the receiving device would then have the wrong timing agreed in the protocol for sending and receiving data, and some bytes could become corrupted (by losing bits). Ways to get around this problem include re-synchronization of the clocks and the use of check digits to ensure the bytes are correctly interpreted and received. Most network protocols (such as Ethernet, SONET, and Token Ring) use synchronous transmission.
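The block framing described above can be sketched as follows: a block of data is preceded by SYN (synchronous idle) characters so the receiver can lock on to the sender's timing. The two-SYN preamble and the block size are illustrative choices, not a specific protocol.

#include <stdio.h>
#include <string.h>

#define SYN 0x16   /* ASCII synchronous idle character */

/* Build a frame of the form [SYN][SYN][data block]; returns the frame length. */
static size_t build_sync_frame(const unsigned char *block, size_t len, unsigned char *frame)
{
    frame[0] = SYN;
    frame[1] = SYN;
    memcpy(frame + 2, block, len);
    return len + 2;
}

int main(void)
{
    const unsigned char block[] = "A large block of text sent as one frame";
    unsigned char frame[64];
    size_t n = build_sync_frame(block, sizeof(block) - 1, frame);
    printf("frame of %zu bytes, begins with 0x%02X 0x%02X\n", n, frame[0], frame[1]);
    return 0;
}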
13.4 ASYNCHRONOUS DATA TRANSFER
In contrast, asynchronous transmission works in spurts and must insert a start bit before each data character and a stop bit at its termination to inform the receiver where it begins and ends. The term asynchronous is used to describe the process where transmitted data is encoded with start and stop bits specifying the beginning and end of each character. An example of asynchronous transmission is shown in the following figure.

Figure 13.5: Asynchronous transmission

These additional bits provide the timing or synchronization for the connection by indicating when a complete character has been sent or received; thus, timing for each character begins with the start bit and ends with the stop bit. When gaps appear between character transmissions, the asynchronous line is said to be in a mark state. A mark is a binary 1 (or negative voltage) that is sent during periods of inactivity on the line, as shown in the following figure.

Figure 13.6: Mark (idle) bits in the data stream

When the mark state is interrupted by a positive voltage (a binary 0), the receiving system knows that data characters are going to follow. It is for this reason that the start bit, which precedes the data character, is always a space bit (binary 0) and that the stop bit, which signals the end of a character, is always a mark bit (binary 1).

The following is a list of characteristics specific to asynchronous communication:
Each character is preceded by a start bit and followed by one or more stop bits.
Gaps or spaces between characters may exist.

With asynchronous transmission, a large text document is organized into long strings of letters (or characters) that make up the words within the sentences and paragraphs.
These characters are sent over the communication link one at a time and reassembled at the remote location. In asynchronous transmission, an ASCII character is actually transmitted using 10 bits. For example, "0100 0001" would become "1 0100 0001 0". The extra bit at each end of the transmission (a one or a zero, depending on the convention) tells the receiver first that a character is coming and secondly that the character has ended. This method of transmission is used when data are sent intermittently as opposed to in a solid stream. In the previous example the start and stop bits are the outermost bits. The start and stop bits must be of opposite polarity. This allows the receiver to recognize when the second packet of information is being sent. Asynchronous transmission is commonly used for communications over telephone lines.
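A minimal sketch of the 10-bit character framing described above is given below: a start bit, the eight data bits, then a stop bit of opposite polarity. The bit ordering used (start bit first, data bits MSB first, stop bit last) is an illustrative convention, not a specific UART standard.

#include <stdio.h>

/* Frame one 8-bit character into a 10-bit pattern stored in an int array. */
static void frame_character(unsigned char ch, int bits[10])
{
    bits[0] = 0;                            /* start bit: a space (binary 0) */
    for (int i = 0; i < 8; i++)
        bits[1 + i] = (ch >> (7 - i)) & 1;  /* the eight data bits, MSB first */
    bits[9] = 1;                            /* stop bit: a mark (binary 1)    */
}

int main(void)
{
    int bits[10];
    frame_character(0x41, bits);            /* 'A' = 0100 0001 */
    for (int i = 0; i < 10; i++)
        printf("%d", bits[i]);
    printf("\n");                           /* ten bits are sent for one eight-bit character */
    return 0;
}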
Therefore, the control sequence during an asynchronous transfer depends on whether the transfer is initiated by the source or by the destination. So, while discussing each data transfer method asynchronously, you can see the control sequence in both terms when it is initiated by source or by destination. In this way, each data transfer method can be further divided into parts, source initiated and destination initiated. Asynchronous Data Transfer Methods The asynchronous data transfer between two independent units requires that control signals be transmitted between the communicating units to indicate when they send the data. Thus, the two methods can achieve the asynchronous way of data transfer. 1. Strobe Control Method The Strobe Control method of asynchronous data transfer employs a single control line to time each transfer. This control line is also known as a strobe, and it may be achieved either by source or destination, depending on which initiate the transfer. Source initiated strobe: In the below block diagram, you can see that strobe is initiated by source, and as shown in the timing diagram, the source unit first places the data on the data bus. Figure 13.7: Source initiated strobe After a brief delay to ensure that the data resolve to a stable value, the source activates a strobe pulse. The information on the data bus and strobe control signal remains in the active state for a sufficient time to allow the destination unit to receive the data. The destination unit uses a falling edge of strobe control to transfer the contents of a data bus to one of its internal registers. The source removes the data from the data bus after it disables its strobe pulse. Thus, new valid data will be available only after the strobe is enabled again. 227 CU IDOL SELF LEARNING MATERIAL (SLM)
In this case, the strobe may be a memory-write control signal from the CPU to a memory unit. The CPU places the word on the data bus and informs the memory unit, which is the destination. Destination initiated strobe: In the below block diagram, you see that the strobe initiated by destination, and in the timing diagram, the destination unit first activates the strobe pulse, informing the source to provide the data. Figure 13.8: Destination initiated strobe The source unit responds by placing the requested binary information on the data bus. The data must be valid and remain on the bus long enough for the destination unit to accept it. The falling edge of the strobe pulse can use again to trigger a destination register. The destination unit then disables the strobe. Finally, and source removes the data from the data bus after a determined time interval. In this case, the strobe may be a memory read control from the CPU to a memory unit. The CPU initiates the read operation to inform the memory, which is a source unit, to place the selected word into the data bus. 2. Handshaking Method The strobe method has the disadvantage that the source unit that initiates the transfer has no way of knowing whether the destination has received the data that was placed in the bus. Similarly, a destination unit that initiates the transfer has no way of knowing whether the source unit has placed data on the bus. So this problem is solved by the handshaking method. The handshaking method introduces a second control signal line that replays the unit that initiates the transfer. 228 CU IDOL SELF LEARNING MATERIAL (SLM)
In this method, one control line is in the same direction as the data flow in the bus from the source to the destination. The source unit uses it to inform the destination unit whether there are valid data in the bus. The other control line is in the other direction from the destination to the source. This is because the destination unit uses it to inform the source whether it can accept data. And in it also, the sequence of control depends on the unit that initiates the transfer. So it means the sequence of control depends on whether the transfer is initiated by source and destination. Source initiated handshaking: In the below block diagram, you can see that two handshaking lines are \"data valid\", which is generated by the source unit, and \"data accepted\", generated by the destination unit. Figure 13.9: Source initiated handshaking The timing diagram shows the timing relationship of the exchange of signals between the two units. The source initiates a transfer by placing data on the bus and enabling its data valid signal. The destination unit then activates the data accepted signal after it accepts the data from the bus.The source unit then disables its valid data signal, which invalidates the data on the bus.After this, the destination unit disables its data accepted signal, and the system goes into its initial state. The source unit does not send the next data item until after the destination unit shows readiness to accept new data by disabling the data accepted signal. 229 CU IDOL SELF LEARNING MATERIAL (SLM)
This sequence of events described in its sequence diagram, which shows the above sequence in which the system is present at any given time. Destination initiated handshaking: In the below block diagram, you see that the two handshaking lines are \"data valid\", generated by the source unit, and \"ready for data\" generated by the destination unit. Note that the name of signal data accepted generated by the destination unit has been changed to ready for data to reflect its new meaning. Figure 13.10: Destination initiated handshaking The destination transfer is initiated, so the source unit does not place data on the data bus until it receives a ready data signal from the destination unit. After that, the handshaking process is the same as that of the source initiated.The sequence of events is shown in its sequence diagram, and the timing relationship between signals is shown in its timing diagram. Therefore, the sequence of events in both cases would be identical. Advantages of Asynchronous Data Transfer Asynchronous Data Transfer in computer organization has the following advantages, such as: It is more flexible, and devices can exchange information at their own pace. In addition, individual data characters can complete themselves so that even if one packet is corrupted, its predecessors and successors will not be affected. 230 CU IDOL SELF LEARNING MATERIAL (SLM)
It does not require complex processes by the receiving device. Furthermore, it means that inconsistency in data transfer does not result in a big crisis since the device can keep up with the data stream. It also makes asynchronous transfers suitable for applications where character data is generated irregularly. Disadvantages of Asynchronous Data Transfer There are also some disadvantages of using asynchronous data for transfer in computer organization, such as: The success of these transmissions depends on the start bits and their recognition. Unfortunately, this can be easily susceptible to line interference, causing these bits to be corrupted or distorted. A large portion of the transmitted data is used to control and identify header bits and thus carries no helpful information related to the transmitted data. This invariably means that more data packets need to be sent. 13.5 SUMMARY Synchronization is the coordination of events to operate a system in unison. For example, the conductor of an orchestra keeps the orchestra synchronized or in time. Systems that operate with all parts in synchrony are said to be synchronous or in sync—and those that are not are asynchronous. Today, time synchronization can occur between systems around the world through satellite navigation signals and other time and frequency transfer techniques. Time-keeping and synchronization of clocks is a critical problem in long-distance ocean navigation. Before radio navigation and satellite-based navigation, navigators required accurate time in conjunction with astronomical observations to determine how far east or west their vessel travelled. The invention of an accurate marine chronometer revolutionized marine navigation. By the end of the 19th century, important ports provided time signals in the form of a signal gun, flag, or dropping time ball so that mariners could check and correct their chronometers for error. Synchronization was important in the operation of 19th-century railways, these being the first major means of transport fast enough for differences in local mean time between nearby towns to be noticeable. Each line handled the problem by synchronizing all its stations to headquarters as a standard railway time. In some territories, companies shared a single railroad track and needed to avoid collisions. The need for strict timekeeping led the companies to settle on one standard, and civil authorities eventually abandoned local mean time in favour of railway time. 231 CU IDOL SELF LEARNING MATERIAL (SLM)
In electrical engineering terms, for digital logic and data transfer, a synchronous circuit requires a clock signal. A clock signal simply signals the start or end of some time period, often measured in microseconds or nanosecond that has an arbitrary relationship to any other system of measurement of the passage of minutes, hours, and days. In a different sense, electronic systems are sometimes synchronized to make events at points far apart appear simultaneous or near-simultaneous from a certain perspective. Timekeeping technologies such as the GPS satellites and Network Time Protocol (NTP) provide real-time access to a close approximation to the UTC timescale and are used for many terrestrial synchronization applications of this kind. In computer science (especially parallel computing), synchronization is the coordination of simultaneous threads or processes to complete a task with correct runtime order and no unexpected race conditions; see synchronization (computer science) for details. Synchronization of multiple interacting dynamical systems can occur when the systems are autonomous oscillators. Poincare phase oscillators are model systems that can interact and partially synchronize within random or regular networks. In the case of global synchronization of phase oscillators, an abrupt transition from unsynchronized to full synchronization takes place when the coupling strength exceeds a critical threshold. This is known as the Kuramoto model phase transition. Synchronization is an emergent property that occurs in a broad range of dynamical systems, including neural signalling, the beating of the heart and the synchronization of fire-fly light waves. Synchronization of movement is defined as similar movements between two or more people who are temporally aligned. This is different from mimicry, which occurs after a short delay. Line dance and military step are examples. Muscular bonding is the idea that moving in time evokes particular emotions. This sparked some of the first research into movement synchronization and its effects on human emotion. In groups, synchronization of movement has been shown to increase conformity, cooperation and trust. In dyads, groups of two people, synchronization has been demonstrated to increase affiliation, self-esteem, compassion and altruistic behaviour and increase rapport. During arguments, synchrony between the arguing pair has been noted to decrease, however it is not clear whether this is due to the change in emotion or other factors. There is evidence to show that movement synchronization requires other people to cause its beneficial effects, as the effect on affiliation does not occur when one of the dyad is synchronizing their movements to something outside the dyad. This is known as interpersonal synchrony. 232 CU IDOL SELF LEARNING MATERIAL (SLM)
There has been dispute regarding the true effect of synchrony in these studies. Research in this area detailing the positive effects of synchrony, have attributed this to synchrony alone; however, many of the experiments incorporate a shared intention to achieve synchrony. Indeed, the Reinforcement of Cooperation Model suggests that perception of synchrony leads to reinforcement that cooperation is occurring, which leads to the pro-social effects of synchrony. More research is required to separate the effect of intentionality from the beneficial effect of synchrony. 13.6 KEYWORDS Independent Processes - Two processes are said to be independent if the execution of one process does not affect the execution of another process. Cooperative Processes - Two processes are said to be cooperative if the execution of one process affects the execution of another process. These processes need to be synchronized so that the order of execution can be guaranteed. Process Synchronization - It is the task phenomenon of coordinating the execution of processes in such a way that no two processes can have access to the same shared data and resources. Strobe Control - A strobe pulse is supplied by one unit to indicate to the other unit when the transfer has to occur. Handshaking - This method is commonly used to accompany each data item being transferred with a control signal that indicates data in the bus. The unit receiving the data item responds with another signal to acknowledge receipt of the data. 13.7 LEARNING ACTIVITY 1. Conduct a session for synchronization. ___________________________________________________________________________ ___________________________________________________________________________ 2. Create a survey on asynchronous and give feedback. ___________________________________________________________________________ ___________________________________________________________________________ 13.8 UNIT END QUESTIONS A. Descriptive Questions Short Questions 1. Definesynchronization? 233 CU IDOL SELF LEARNING MATERIAL (SLM)
2. Write about Independent Process? 3. What is Cooperative Process? 4. What is meant by Race Condition? 5. What is Critical Section Problem? Long Questions 1. List and Explain Synchronous Data Transfer. 2. Explain the concept of Data Transfer. 3. Discuss about Asynchronous Data Transfer. 4. Explain Synchronous transmission. 5. List about asynchronous transmission. B. Multiple Choice Questions 1. What is the following condition called as:If a process is executing in its critical section, then no other processes can be executing in their critical section? a. Mutual exclusion b. Critical exclusion c. Synchronous exclusion d. Asynchronous exclusion 2. Which one of the following is a synchronization tool? a. Thread b. Pipe c. Semaphore d. Socket 3. Select the right option for the statement, a semaphore is a shared integer variable. a. That cannot drop below zero b. That cannot be more than zero c. That cannot drop below one d. That cannot be more than one 4. Which is called when high priority task is indirectly pre-empted by medium priority task effectively inverting the relative priority of the two tasks, the scenario? a. Priority inversion 234 CU IDOL SELF LEARNING MATERIAL (SLM)
b. Priority removal c. Priority exchange d. Priority modification 5. How do Process synchronization can be done on? a. Hardware level b. Software level c. Both a and b d. None of these Answers 1-a, 2-c, 3-a, 4-a, 5-c 13.9 REFERENCES Reference Book Nolte, David (2015). Introduction to Modern Dynamics: Chaos, Networks, Space and Time. Oxford University Press. \"Sync or sink? Interpersonal synchrony impacts self-esteem\". Frontiers in Psychology. Synchrony and Cooperation – PubMed – Search Results\". Retrieved 2 February 2017 Textbook References Dong, Ping; Dai, Xianchi; Wyer, Robert S. (2015). \"Actors conform, observers react: the effects of behavioural synchrony on conformity\". Journal of Personality and Social Psychology. 108 (1): Valdesolo, Piercarlo; Desteno, David (1 April 2011). \"Synchrony and the social tuning of compassion\". Emotion. 11 (2): 262–266. Vacharkulksemsuk, Tanya; Fredrickson, Barbara L. (2012). \"Strangers in sync: Achieving embodied rapport through shared movements\". Journal of Experimental Social Psychology. 48 (1): 399–402. Website https://www.geeksforgeeks.org/introduction-of-process-synchronization/ https://www.studytonight.com/operating-system/process-synchronization 235 CU IDOL SELF LEARNING MATERIAL (SLM)
https://www.tutorialspoint.com/parallel_computer_architecture/parallel_computer_arc hitecture_cache_coherence_synchronization.htm 236 CU IDOL SELF LEARNING MATERIAL (SLM)
Search
Read the Text Version
- 1
- 2
- 3
- 4
- 5
- 6
- 7
- 8
- 9
- 10
- 11
- 12
- 13
- 14
- 15
- 16
- 17
- 18
- 19
- 20
- 21
- 22
- 23
- 24
- 25
- 26
- 27
- 28
- 29
- 30
- 31
- 32
- 33
- 34
- 35
- 36
- 37
- 38
- 39
- 40
- 41
- 42
- 43
- 44
- 45
- 46
- 47
- 48
- 49
- 50
- 51
- 52
- 53
- 54
- 55
- 56
- 57
- 58
- 59
- 60
- 61
- 62
- 63
- 64
- 65
- 66
- 67
- 68
- 69
- 70
- 71
- 72
- 73
- 74
- 75
- 76
- 77
- 78
- 79
- 80
- 81
- 82
- 83
- 84
- 85
- 86
- 87
- 88
- 89
- 90
- 91
- 92
- 93
- 94
- 95
- 96
- 97
- 98
- 99
- 100
- 101
- 102
- 103
- 104
- 105
- 106
- 107
- 108
- 109
- 110
- 111
- 112
- 113
- 114
- 115
- 116
- 117
- 118
- 119
- 120
- 121
- 122
- 123
- 124
- 125
- 126
- 127
- 128
- 129
- 130
- 131
- 132
- 133
- 134
- 135
- 136
- 137
- 138
- 139
- 140
- 141
- 142
- 143
- 144
- 145
- 146
- 147
- 148
- 149
- 150
- 151
- 152
- 153
- 154
- 155
- 156
- 157
- 158
- 159
- 160
- 161
- 162
- 163
- 164
- 165
- 166
- 167
- 168
- 169
- 170
- 171
- 172
- 173
- 174
- 175
- 176
- 177
- 178
- 179
- 180
- 181
- 182
- 183
- 184
- 185
- 186
- 187
- 188
- 189
- 190
- 191
- 192
- 193
- 194
- 195
- 196
- 197
- 198
- 199
- 200
- 201
- 202
- 203
- 204
- 205
- 206
- 207
- 208
- 209
- 210
- 211
- 212
- 213
- 214
- 215
- 216
- 217
- 218
- 219
- 220
- 221
- 222
- 223
- 224
- 225
- 226
- 227
- 228
- 229
- 230
- 231
- 232
- 233
- 234
- 235
- 236