Computer Networks – Unit – 2 The Data Link Layer

The data link layer is the second layer, directly above the physical layer. It is responsible for maintaining the data link between two hosts or nodes. The design issues of the data link layer are:

1. Services provided to the network layer – The data link layer acts as a service interface to the network layer. The principal service is transferring data from the network layer on the sending machine to the network layer on the destination machine.
2. Frame synchronization – The source machine sends data in the form of blocks called frames to the destination machine. The start and end of each frame must be identifiable so that the frame can be recognized by the destination machine.
3. Flow control – Flow control prevents the sender from overwhelming the receiver. The source machine must not send data frames at a rate faster than the capacity of the destination machine to accept them.
4. Error control – Error control deals with errors introduced during transmission, and also prevents duplication of frames. The errors introduced during transmission from the source to the destination machine must be detected and corrected at the destination machine.

Functionality of Data-link Layer
The data link layer performs many tasks on behalf of the upper layers. These are:
• Framing: The data link layer takes packets from the network layer and encapsulates them into frames. Then, it sends each frame bit-by-bit on the hardware. At the receiver's end, the data link layer picks up signals from the hardware and assembles them into frames.
• Addressing: The data link layer provides a layer-2 hardware addressing mechanism. The hardware address is assumed to be unique on the link. It is encoded into the hardware at the time of manufacturing.
• Synchronization: When data frames are sent on the link, both machines must be synchronized for the transfer to take place.
• Error Control: Sometimes signals encounter problems in transit and bits get flipped.
These errors are detected, and the data link layer attempts to recover the actual data bits. It also provides an error reporting mechanism to the sender.
• Flow Control: Stations on the same link may have different speeds or capacities. The data link layer provides flow control, which enables both machines to exchange data at a rate both can handle.
• Multi-Access: When hosts on a shared link try to transfer data, there is a high probability of collision. The data link layer provides mechanisms such as CSMA/CD to let multiple systems access the shared medium.
Error Detection and Correction
There are many causes, such as noise and cross-talk, which may corrupt data during transmission. The upper layers work on a generalized view of the network architecture and are not aware of the actual hardware data processing. Hence, the upper layers expect error-free transmission between systems. Most applications would not function as expected if they received erroneous data. Applications such as voice and video are less affected and may still function well despite some errors.
The data link layer uses error control mechanisms to ensure that frames (data bit streams) are transmitted with a certain level of accuracy. But to understand how errors are controlled, it is essential to know what types of errors may occur.
Types of Errors
Errors can be of three types, namely single bit errors, multiple bit errors, and burst errors.
• Single bit error − In the received frame, only one bit has been corrupted, i.e. either changed from 0 to 1 or from 1 to 0.
• Multiple bit error − In the received frame, more than one bit is corrupted; the corrupted bits need not be consecutive.
• Burst error − In the received frame, more than one consecutive bit is corrupted.
Error Control
Error control can be done in two ways:
• Error detection − Error detection involves checking whether any error has occurred. The number of error bits and the type of error do not matter.
• Error correction − Error correction involves ascertaining the exact number of corrupted bits and the location of the corrupted bits.
For both error detection and error correction, the sender needs to send some additional bits along with the data bits. The receiver performs the necessary checks based upon the additional redundant bits. If it finds that the data is free from errors, it removes the redundant bits before passing the message to the upper layers.
Error Detection Techniques
There are four main techniques for detecting errors in frames: Parity Check, Two-dimensional Parity Check, Checksum, and Cyclic Redundancy Check (CRC).
Parity Check
The parity check is done by adding an extra bit, called the parity bit, to the data to make the number of 1s either even (in case of even parity) or odd (in case of odd parity). While creating a frame, the sender counts the number of 1s in it and adds the parity bit in the following way.
In case of even parity: if the number of 1s is even, the parity bit value is 0; if the number of 1s is odd, the parity bit value is 1.
In case of odd parity: if the number of 1s is odd, the parity bit value is 0; if the number of 1s is even, the parity bit value is 1.
On receiving a frame, the receiver counts the number of 1s in it. In case of even parity check, if the count of 1s is even, the frame is accepted; otherwise, it is rejected. A similar rule is adopted for the odd parity check.
The parity check is suitable for single bit error detection only.
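As a quick illustration (a Python sketch, not part of the original notes), the even and odd parity rules above can be written as:

```python
def parity_bit(bits, even=True):
    """Return the parity bit for a string of '0'/'1' characters."""
    ones = bits.count('1')
    if even:
        # even parity: total number of 1s (data + parity) must be even
        return '0' if ones % 2 == 0 else '1'
    # odd parity: total number of 1s must be odd
    return '1' if ones % 2 == 0 else '0'

def check_even_parity(frame):
    """Accept the frame only if the count of 1s (including parity) is even."""
    return frame.count('1') % 2 == 0

good = "1011" + parity_bit("1011")   # three 1s -> parity 1 -> "10111"
bad = "0" + good[1:]                 # flip the first bit -> "00111"
print(check_even_parity(good))       # True
print(check_even_parity(bad))        # False: single bit error detected
```

Note that flipping two bits of `good` would again pass the check, which is why the parity check is limited to single bit error detection.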
Two-dimensional Parity Check
Parity check bits are calculated for each row, which is equivalent to a simple parity check bit. Parity check bits are also calculated for all columns, and both are sent along with the data. At the receiving end, these are compared with the parity bits calculated on the received data.
Checksum
• In the checksum error detection scheme, the data is divided into k segments, each of m bits.
• At the sender's end, the segments are added using 1's complement arithmetic to get the sum. The sum is complemented to get the checksum.
• The checksum segment is sent along with the data segments.
• At the receiver's end, all received segments are added using 1's complement arithmetic to get the sum. The sum is complemented.
• If the result is zero, the received data is accepted; otherwise it is discarded.
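The 1's complement checksum steps above can be sketched in Python. The segment width (4 bits) and the segment values are illustrative choices, not from the notes:

```python
def ones_complement_sum(segments, m):
    """Add m-bit segments using 1's-complement (end-around carry) arithmetic."""
    mask = (1 << m) - 1
    total = 0
    for seg in segments:
        total += seg
        # fold any carry out of the m-bit word back into the sum
        while total > mask:
            total = (total & mask) + (total >> m)
    return total

def make_checksum(segments, m):
    # sender: complement of the 1's-complement sum
    return ones_complement_sum(segments, m) ^ ((1 << m) - 1)

def verify(segments, checksum, m):
    # receiver: sum of all segments plus checksum, complemented, must be zero
    s = ones_complement_sum(segments + [checksum], m)
    return (s ^ ((1 << m) - 1)) == 0

data = [0b1001, 0b1010, 0b0011]      # three 4-bit segments
c = make_checksum(data, 4)
print(bin(c))                        # 0b1000
print(verify(data, c, 4))            # True
```

A corrupted segment, e.g. `0b1110` in place of `0b1010`, makes `verify` return False.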
Cyclic Redundancy Check (CRC)
Cyclic Redundancy Check (CRC) involves binary division of the data bits being sent by a predetermined divisor agreed upon by the communicating systems. The divisor is generated using polynomials.
• Here, the sender performs binary (modulo-2) division of the data segment by the divisor. It then appends the remainder, called the CRC bits, to the end of the data segment. This makes the resulting data unit exactly divisible by the divisor.
• The receiver divides the incoming data unit by the divisor. If there is no remainder, the data unit is assumed to be correct and is accepted. Otherwise, the data is understood to be corrupted and is therefore rejected.
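The modulo-2 division behind CRC can be sketched as follows; the data word and divisor (x^3 + x^2 + 1) here are illustrative choices, not taken from the notes:

```python
def crc_remainder(data_bits, divisor):
    """Modulo-2 (XOR) division; returns the CRC remainder as a bit string."""
    n = len(divisor) - 1
    dividend = list(data_bits + '0' * n)   # sender appends n zero bits
    for i in range(len(data_bits)):
        if dividend[i] == '1':
            for j, d in enumerate(divisor):
                dividend[i + j] = str(int(dividend[i + j]) ^ int(d))
    return ''.join(dividend[-n:])

def crc_check(codeword, divisor):
    """Receiver: divide the whole codeword; a zero remainder means accept."""
    bits = list(codeword)
    for i in range(len(codeword) - len(divisor) + 1):
        if bits[i] == '1':
            for j, d in enumerate(divisor):
                bits[i + j] = str(int(bits[i + j]) ^ int(d))
    return all(b == '0' for b in bits)

data, divisor = "100100", "1101"       # divisor 1101 = x^3 + x^2 + 1
crc = crc_remainder(data, divisor)
print(crc)                             # 001
print(crc_check(data + crc, divisor))  # True: exactly divisible
```

Changing any bit of the codeword `100100001` makes `crc_check` return False.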
Error Correction Techniques
Error correction techniques find out the exact number of corrupted bits as well as their locations. There are two principal approaches:
• Backward Error Correction (Retransmission) − If the receiver detects an error in the incoming frame, it requests the sender to retransmit the frame. It is a relatively simple technique. But it can be used efficiently only where retransmission is not expensive, as in fiber optics, and the time for retransmission is low relative to the requirements of the application.
• Forward Error Correction − If the receiver detects an error in the incoming frame, it executes an error-correcting code that recovers the actual frame. This saves the bandwidth required for retransmission. It is essential in real-time systems. However, if there are too many errors, the frames need to be retransmitted.
The four main error correction codes are:
• Hamming Codes
• Binary Convolutional Codes
• Reed–Solomon Codes
• Low-Density Parity-Check Codes
Hamming Codes:
Hamming code is a set of error-correction codes that can be used to detect and correct errors that can occur when data is moved or stored from the sender to the receiver. It is a technique developed by R.W. Hamming for error correction.
Redundant bits –
Redundant bits are extra binary bits that are generated and added to the information-carrying bits of a data transfer to ensure that no bits are lost during the transfer. The number of redundant bits r for m data bits is the smallest r satisfying:
2^r ≥ m + r + 1
where r = number of redundant bits, m = number of data bits.
Suppose the number of data bits is 7. Then r = 4, since 2^4 = 16 ≥ 7 + 4 + 1 = 12 (while 2^3 = 8 < 11). Thus, the number of redundant bits = 4.
Parity bits –
A parity bit is a bit appended to a group of binary bits to make the total number of 1's in the data even or odd. Parity bits are used for error detection. There are two types of parity bits:
1. Even parity bit: In the case of even parity, for a given set of bits, the number of 1's is counted. If that count is odd, the parity bit value is set to 1, making the total count of 1's even. If the total number of 1's is already even, the parity bit's value is 0.
2. Odd parity bit: In the case of odd parity, for a given set of bits, the number of 1's is counted. If that count is even, the parity bit value is set to 1, making the total count of 1's odd. If the total number of 1's is already odd, the parity bit's value is 0.
General Algorithm of Hamming code –
The Hamming code is simply the use of extra parity bits to allow the identification of an error.
1. Write the bit positions starting from 1 in binary form (1, 10, 11, 100, etc).
2. All the bit positions that are a power of 2 are marked as parity bits (1, 2, 4, 8, etc).
3. All the other bit positions are marked as data bits.
4.
Each data bit is included in a unique set of parity bits, as determined by its bit position in binary form.
a. Parity bit 1 covers all the bit positions whose binary representation includes a 1 in the least significant position (1, 3, 5, 7, 9, 11, etc).
b. Parity bit 2 covers all the bit positions whose binary representation includes a 1 in the second position from the least significant bit (2, 3, 6, 7, 10, 11, etc).
c. Parity bit 4 covers all the bit positions whose binary representation includes a 1 in the third position from
the least significant bit (4–7, 12–15, 20–23, etc).
d. Parity bit 8 covers all the bit positions whose binary representation includes a 1 in the fourth position from the least significant bit (8–15, 24–31, 40–47, etc).
e. In general, each parity bit covers all bits where the bitwise AND of the parity position and the bit position is non-zero.
5. Since we check for even parity, set a parity bit to 1 if the total number of ones in the positions it checks is odd.
6. Set a parity bit to 0 if the total number of ones in the positions it checks is even.
Determining the position of redundant bits –
These redundant bits are placed at the positions which correspond to powers of 2. As in the above example:
1. The number of data bits = 7
2. The number of redundant bits = 4
3. The total number of bits = 11
4. The redundant bits are placed at positions corresponding to powers of 2: 1, 2, 4, and 8
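The formula 2^r ≥ m + r + 1 can be checked with a short loop. This helper is a sketch, not part of the notes:

```python
def redundant_bits(m):
    """Smallest r with 2**r >= m + r + 1 (number of Hamming parity bits
    needed for m data bits)."""
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in (4, 7, 11):
    print(m, redundant_bits(m))   # 4 -> 3, 7 -> 4, 11 -> 4
```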
Suppose the data to be transmitted is 1011001. The data bits fill positions 11, 10, 9, 7, 6, 5, 3 (from left to right), and the redundant bits R8, R4, R2, R1 occupy positions 8, 4, 2, 1:
Position:  11 10  9  8  7  6  5  4  3  2  1
Bit:        1  0  1 R8  1  0  0 R4  1 R2 R1
Determining the Parity bits –
1. The R1 bit is calculated using a parity check on all the bit positions whose binary representation includes a 1 in the least significant position.
R1: bits 1, 3, 5, 7, 9, 11
To find the redundant bit R1, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R1 is an even number, the value of R1 (the parity bit's value) = 0.
2. The R2 bit is calculated using a parity check on all the bit positions whose binary representation includes a 1 in the second position from the least significant bit.
R2: bits 2, 3, 6, 7, 10, 11
To find the redundant bit R2, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R2 is odd, the value of R2 (the parity bit's value) = 1.
3. The R4 bit is calculated using a parity check on all the bit positions whose binary representation includes a 1 in the third position from the least significant bit.
R4: bits 4, 5, 6, 7
To find the redundant bit R4, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R4 is odd, the value of R4 (the parity bit's value) = 1.
4. The R8 bit is calculated using a parity check on all the bit positions whose binary representation includes a 1 in the fourth position from the least significant bit.
R8: bits 8, 9, 10, 11
To find the redundant bit R8, we check for even parity. Since the total number of 1's in all the bit positions corresponding to R8 is an even number, the value of R8 (the parity bit's value) = 0.
Thus, the data transferred is 10101001110 (positions 11 down to 1).
Error detection and correction –
Suppose in the above example the 6th bit is changed from 0 to 1 during data transmission. Recomputing the parity checks then gives a new set of parity values:
The parity checks give the binary number 0110, whose decimal representation is 6. Thus, bit 6 contains the error. To correct the error, the 6th bit is changed from 1 to 0.
Hamming Codes: Example:
D7 D6 D5 P4 D3 P2 P1
 1  1  0  0  1  1  0
 7  6  5  4  3  2  1
• 7-bit Hamming code
• Data bits = 4
• Parity bits = 3, placed at positions 2^N: 2^0 = 1, 2^1 = 2, and 2^2 = 4 (N = 0, 1, 2; total length n = 7)
• P1 => D3, D5, D7
• P2 => D3, D6, D7
• P4 => D5, D6, D7
Example: data bits on the sender side = 1101, with even parity.
Parity bits P1, P2, and P4 are calculated, added to the data bits, and sent to the receiver:
P1 = parity over 1, 0, 1 (two 1's = even number of 1's) = 0
P2 = parity over 1, 1, 1 (three 1's = odd number of 1's) = 1
P4 = parity over 0, 1, 1 (two 1's = even number of 1's) = 0
1100110 = data sent
Due to noise in the channel, the data received = 1000110
On the receiver side, received = 1000110:
D7 D6 D5 P4 D3 P2 P1
 1  0  0  0  1  1  0
 7  6  5  4  3  2  1
Check P1 (covers P1, D3, D5, D7 = 0, 1, 0, 1): even number of 1's, so check bit C1 = 0
Check P2 (covers P2, D3, D6, D7 = 1, 1, 0, 1): odd number of 1's, so check bit C2 = 1
Check P4 (covers P4, D5, D6, D7 = 0, 0, 0, 1): odd number of 1's, so check bit C4 = 1
Error correction using Hamming codes:
C4 C2 C1 = 1 1 0, with weights 4, 2, 1: (1×4) + (1×2) + (0×1) = 6, so the 6th bit is in error.
Example: The 7-bit Hamming code word received by a receiver is 1011011. Assuming even parity, state whether the received code is correct or wrong. If wrong, locate the bit and correct it.
D7 D6 D5 P4 D3 P2 P1
 1  0  1  1  0  1  1
 7  6  5  4  3  2  1
• P1 => D3, D5, D7
• P2 => D3, D6, D7
• P4 => D5, D6, D7
Check P1 (covers P1, D3, D5, D7 = 1, 0, 1, 1): odd number of 1's, so C1 = 1
Check P2 (covers P2, D3, D6, D7 = 1, 0, 0, 1): even number of 1's, so C2 = 0
Check P4 (covers P4, D5, D6, D7 = 1, 1, 0, 1): odd number of 1's, so C4 = 1
C4 C2 C1 = 1 0 1: (1×4) + (0×2) + (1×1) = 5, so the 5th bit is in error. After correcting it, the data = 1001011.
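The two worked examples above can be reproduced with a short sketch of the even-parity (7,4) code (the position layout D7 D6 D5 P4 D3 P2 P1 follows the notes; the code itself is not part of them):

```python
def hamming74_encode(d7, d6, d5, d3):
    """Even-parity (7,4) Hamming code; positions 7..1 = D7 D6 D5 P4 D3 P2 P1."""
    p1 = d7 ^ d5 ^ d3
    p2 = d7 ^ d6 ^ d3
    p4 = d7 ^ d6 ^ d5
    return [d7, d6, d5, p4, d3, p2, p1]

def hamming74_correct(code):
    """Recompute the checks; the syndrome C4 C2 C1 is the 1-based error position."""
    d7, d6, d5, p4, d3, p2, p1 = code
    c1 = p1 ^ d3 ^ d5 ^ d7
    c2 = p2 ^ d3 ^ d6 ^ d7
    c4 = p4 ^ d5 ^ d6 ^ d7
    syndrome = c4 * 4 + c2 * 2 + c1
    if syndrome:
        code = code[:]
        code[7 - syndrome] ^= 1      # positions are numbered 7..1 left to right
    return code, syndrome

sent = hamming74_encode(1, 1, 0, 1)      # data 1101 -> 1100110
received = [1, 0, 0, 0, 1, 1, 0]         # bit 6 flipped in the channel
fixed, pos = hamming74_correct(received)
print(sent)    # [1, 1, 0, 0, 1, 1, 0]
print(pos)     # 6
```

Running `hamming74_correct` on the second example, 1011011, likewise reports position 5 and yields 1001011.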
Binary Convolutional Codes:
In convolutional codes, the message comprises data streams of arbitrary length, and a sequence of output bits is generated by the sliding application of Boolean functions to the data stream.
Convolutional codes were first introduced by Elias in 1955. After that, there was much interim research by many mathematicians. In 1967, Viterbi developed an algorithm for maximum-likelihood decoding, called the Viterbi algorithm, which led to modern convolutional codes.
Encoding by Convolutional Codes
For generating a convolutional code, the information is passed sequentially through a linear finite-state shift register. The shift register comprises K stages, each holding k bits, together with n Boolean function generators. A convolutional code can be represented as (n, k, K) where:
• k is the number of bits shifted into the encoder at one time. Generally, k = 1.
• n is the number of encoder output bits corresponding to the k information bits.
• The code rate, Rc = k/n.
• The encoder memory, measured as the length K of the shift register, is the constraint length.
• Each n-bit output is a function of the present input bits and the contents of the shift register.
• The state of the encoder is given by the value of the previous (K − 1) input bits (for k = 1).
Example of Generating a Convolutional Code
Let us consider a convolutional encoder with k = 1, n = 2 and K = 3. The code rate, Rc = k/n = 1/2. The input string is streamed from right to left into the encoder. When the first bit, 1, is streamed in, the contents of the encoder will be −
When the next bit, 1, is streamed in, the contents of the encoder change again, and likewise for the following bits 0 and 1; at each step the encoder emits n = 2 output bits computed from the current register contents. (The step-by-step register diagrams from the original figures are not reproduced here.)
For the binary convolutional encoder given in the example −
The set of inputs = {0, 1}
The set of outputs = {00, 10, 11}
The set of states = {00, 01, 10, 11}
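Since the encoder diagrams are not reproduced, here is a sketch of a rate-1/2, K = 3 encoder. The generator taps (111 and 101, a common textbook pair) are an assumption and need not match the original figure:

```python
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    """Rate-1/2, K=3 convolutional encoder. Each input bit produces two
    output bits, computed over the window (current bit, s1, s2).
    Taps g1=111, g2=101 are an illustrative assumption."""
    s1 = s2 = 0                        # (K-1) = 2 memory bits = encoder state
    out = []
    for b in bits:
        window = (b, s1, s2)
        out.append(sum(x * t for x, t in zip(window, g1)) % 2)
        out.append(sum(x * t for x, t in zip(window, g2)) % 2)
        s1, s2 = b, s1                 # shift the register
    return out

print(conv_encode([1, 1, 0, 1]))   # [1, 1, 0, 1, 0, 1, 0, 0]
```

Four input bits yield eight output bits, matching the code rate Rc = 1/2.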
Reed–Solomon Codes:
Reed–Solomon codes are a group of error-correcting codes that were introduced by Irving S. Reed and Gustave Solomon in 1960. Reed–Solomon codes are block-based error correcting codes with a wide range of applications in digital communications and storage. Reed–Solomon codes are used to correct errors in many systems including:
• Storage devices (including tape, Compact Disc, DVD, barcodes, etc)
• Wireless or mobile communications (including cellular telephones, microwave links, etc)
• Satellite communications
• Digital television / DVB
• High-speed modems such as ADSL, xDSL, etc.
A Reed–Solomon encoder accepts a block of data and adds redundant bits (parity bits) before transmitting it over noisy channels. On receiving the data, a decoder corrects the errors depending upon the code characteristics.
Low-Density Parity-Check Codes:
A low-density parity check (LDPC) code is a linear error-correcting block code, suitable for error correction in large block sizes transmitted via very noisy channels. LDPC was developed by Robert G. Gallager in his doctoral dissertation at the Massachusetts Institute of Technology in 1960. So, these codes are also known as Gallager codes.
Encoding by Low-Density Parity Check Codes
A low-density parity check (LDPC) code is specified by a parity-check matrix containing mostly 0s and a low density of 1s. The rows of the matrix represent the equations and the columns represent the bits in the codeword, i.e. code symbols.
An LDPC code is represented by (n, j, k), where n is the block length, j is the number of 1s in each column, and k is the number of 1s in each row, holding the following properties −
• j is a small fixed number of 1's in each column, where j ≥ 3
• k is a small fixed number of 1's in each row, where k > j.
Example 1 − Parity Check Matrix of a Hamming Code
The parity check matrix of a Hamming code with n = 7 has 4 information bits followed by 3 even parity bits; the check digits lie on a diagonal of 1s. (The matrix and its parity equations from the original figure are not reproduced here.)
Framing in Data Link Layer
• In the physical layer, data transmission involves the synchronized transmission of bits from the source to the destination. The data link layer packs these bits into frames.
• The data link layer takes the packets from the network layer and encapsulates them into frames.
• Frames are the units of digital transmission, particularly in computer networks and telecommunications.
A frame has the following parts −
• Frame Header − It contains the source and the destination addresses of the frame.
• Payload field − It contains the message to be delivered.
• Trailer − It contains the error detection and error correction bits.
• Flag − It marks the beginning and end of the frame.
Problems in Framing:
• Detecting the start of a frame: When a frame is transmitted, every station must be able to detect it. Stations detect frames by looking for a special sequence of bits that marks the beginning of the frame, i.e. the SFD (Starting Frame Delimiter).
• How stations detect a frame: Every station listens to the link for the SFD pattern through a sequential circuit. If the SFD is detected, the sequential circuit alerts the station. The station then checks the destination address to accept or reject the frame.
• Detecting the end of a frame: when to stop reading the frame, marked by the EFD (End Frame Delimiter).
Types of Framing:
Fixed-sized Framing
• Here the size of the frame is fixed, so the frame length itself acts as the delimiter of the frame. Consequently, it does not require additional boundary bits to identify the start and end of the frame.
• Example − ATM cells (Asynchronous Transfer Mode, 53-byte cells).
Variable-sized Framing
• Here, the size of each frame to be transmitted may be different. So additional mechanisms are needed to mark the end of one frame and the beginning of the next frame.
• It is used in local area networks.
Two ways to define frame delimiters in variable-sized framing are −
• Length Field − Here, a length field is used that gives the size of the frame. It is used in Ethernet (IEEE 802.3).
• End Delimiter (ED) − Here, a pattern is used as a delimiter to determine the end of the frame. It is used in Token Rings. If the pattern occurs in the message itself, two approaches are used to avoid the situation −
Byte Stuffing − A byte is stuffed into the message to differentiate the data from the delimiter. This is also called character-oriented framing, used when frames consist of characters. If the data contains the ED, a byte is stuffed into the data to differentiate it from the ED. Let ED = '$': if the data contains '$' anywhere, it can be escaped using a '\O' character; if the data contains '\O$', then use '\O\O\O$' ($ is escaped using \O, and \O is itself escaped using \O).
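The byte stuffing idea above can be sketched as follows. For concreteness, the escape byte is shown as a backslash, standing in for the '\O' character in the text; both choices are illustrative:

```python
FLAG, ESC = '$', '\\'     # delimiter and escape byte (illustrative choices)

def byte_stuff(data):
    out = []
    for ch in data:
        if ch in (FLAG, ESC):
            out.append(ESC)           # escape any byte that looks special
        out.append(ch)
    return ''.join(out)

def byte_unstuff(stuffed):
    out, i = [], 0
    while i < len(stuffed):
        if stuffed[i] == ESC:
            i += 1                    # drop the escape, keep the next byte
        out.append(stuffed[i])
        i += 1
    return ''.join(out)

payload = 'ab$c\\d'                   # contains both '$' and '\'
frame = FLAG + byte_stuff(payload) + FLAG
print(frame)                          # $ab\$c\\d$
print(byte_unstuff(frame[1:-1]) == payload)   # True
```

The receiver strips the flags, then removes each escape byte to recover the original payload.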
Bit Stuffing − A pattern of bits is stuffed into the message to differentiate it from the delimiter. This is also called bit-oriented framing.
Let ED = 01111 and data = 01111:
–> The sender stuffs a bit to break the pattern, i.e. inserts a 0 within the run of 1s, giving data = 011101.
–> The receiver receives the frame.
–> If the data contains 011101, the receiver removes the stuffed 0 and reads the data as 01111.
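The bit stuffing rule can be sketched generically: insert a 0 after every run of consecutive 1s one shorter than the flag's run. HDLC uses a run of 5; the ED = 01111 example above corresponds to a run of 3. This sketch is not part of the notes:

```python
def bit_stuff(bits, run=3):
    """Insert a 0 after every `run` consecutive 1s so the flag pattern
    can never appear inside the payload."""
    out, ones = [], 0
    for b in bits:
        out.append(b)
        ones = ones + 1 if b == '1' else 0
        if ones == run:
            out.append('0')
            ones = 0
    return ''.join(out)

def bit_unstuff(bits, run=3):
    out, ones, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        ones = ones + 1 if bits[i] == '1' else 0
        if ones == run:
            i += 1                    # skip the stuffed 0
            ones = 0
        i += 1
    return ''.join(out)

print(bit_stuff('01111'))             # 011101, matching the example above
print(bit_unstuff('011101'))          # 01111
```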
Data Flow Control:
• Flow control is a technique that ensures the proper flow of data from sender to receiver. It is essential because the sender may transmit data at a very fast rate, while the receiver may be unable to receive and process it in time. This happens when the receiver carries a very high load of traffic compared to the sender, or when the receiver has less processing power than the sender.
• Flow control is basically a technique that allows two stations working and processing at different speeds to communicate with one another. Flow control in the data link layer simply restricts and coordinates the number of frames or the amount of data the sender can send before it waits for an acknowledgment from the receiver.
Feedback-based Flow Control:
• In this technique, the sender transmits data or frames to the receiver, and the receiver then sends feedback to the sender, permitting it to send more data or telling the sender how the receiver is doing. This means that the sender transmits new data or frames only after it has received acknowledgments from the receiver.
Rate-based Flow Control:
• In this technique, when the sender transfers data faster than the receiver can receive it, a built-in mechanism in the protocol limits the overall rate at which data is transmitted by the sender, without any feedback or acknowledgment from the receiver.
Techniques of Flow Control in Data Link Layer:
1. Stop-and-Wait Flow Control:
This method is the simplest form of flow control. In this method, the message is broken down into multiple frames, and the receiver indicates its readiness to receive each frame of data. The sender sends the next frame only when an acknowledgment is received. This process continues until the sender transmits an EOT (End of Transmission) frame. In this method, only one frame can be in transmission at a time. It leads to inefficiency, i.e. low productivity, if the propagation delay is much longer than the transmission delay.
Advantages –
• This method is very simple, and each frame is checked and acknowledged.
• It can also be used on noisy channels.
• This method is also very accurate.
Disadvantages –
• This method is fairly slow.
• Only one packet or frame can be sent at a time.
• It is very inefficient and makes the transmission process slow.
1.1. Error control – Stop-and-Wait ARQ (Automatic Repeat Request)
Stop-and-Wait ARQ is also known as the alternating bit protocol. It is one of the simplest flow and error control techniques. This mechanism is generally required in telecommunications to transmit data between two connected devices. The receiver indicates its readiness to receive data for each frame. The sender sends a data packet to the receiver, then stops and waits for an ACK (acknowledgment) from the receiver. If the ACK does not arrive within a given time period, i.e. the time-out, the sender resends the frame and waits for the ACK again. But if the sender receives the ACK, it transmits the next data packet to the receiver and then again waits for the ACK from the receiver. This stop-and-wait process continues until the sender has no more data frames or packets to send.
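The stop-and-wait ARQ behaviour, with its alternating sequence bit and timeout retransmission, can be sketched as a toy simulation. The loss model and seed are illustrative, not part of the notes:

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=1):
    """Toy stop-and-wait ARQ over a lossy channel: the sender resends a
    frame (carrying its alternating 0/1 sequence bit) until its ACK arrives."""
    random.seed(seed)
    delivered, seq, attempts = [], 0, 0
    for frame in frames:
        while True:
            attempts += 1
            if random.random() < loss_rate:
                continue                     # frame or ACK lost -> timeout, resend
            delivered.append((seq, frame))   # receiver accepts and ACKs
            seq ^= 1                         # alternate the sequence bit
            break
    return delivered, attempts

delivered, attempts = stop_and_wait(['f1', 'f2', 'f3'])
print(delivered)
print(attempts >= 3)   # True: at least one attempt per frame
```

The sequence bit lets the receiver discard a duplicate caused by a lost ACK rather than deliver it twice.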
2. Sliding Window Flow Control:
This protocol improves the efficiency of the stop-and-wait protocol by allowing multiple frames to be transmitted before receiving an acknowledgment. The working principle of this protocol can be described as follows −
• Both the sender and the receiver have finite-sized buffers called windows. The sender and the receiver agree upon the number of frames to be sent based upon the buffer size.
• The sender sends multiple frames in a sequence without waiting for acknowledgment. When its sending window is filled, it waits for an acknowledgment. On receiving an acknowledgment, it advances the window and transmits the next frames, according to the number of acknowledgments received.
Advantages –
• It performs much better than stop-and-wait flow control.
• This method increases efficiency.
• Multiple frames can be sent one after another.
Disadvantages –
• The main issue is complexity at the sender and receiver due to the transfer of multiple frames.
• The receiver might receive data frames or packets out of sequence.
Types of Sliding Window Protocol
The sliding window protocol has two types:
1. Go-Back-N ARQ
2. Selective Repeat ARQ
Go-Back-N ARQ
Go-Back-N ARQ is also known as Go-Back-N Automatic Repeat Request. It is a data link layer protocol that uses the sliding window method. In this, if any frame is corrupted or lost, all subsequent frames have to be sent again.
The size of the sender window is N in this protocol. For example, in Go-Back-8, the size of the sender window is 8. The receiver window size is always 1. If the receiver receives a corrupted frame, it discards it; the receiver does not accept a corrupted frame. When the timer expires, the sender sends the correct frame again. The design of the Go-Back-N ARQ protocol is shown below.
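Since the protocol figures are not reproduced here, a toy model (not the full timer-based protocol) may help illustrate the go-back behaviour: when a frame is lost, transmission restarts from that frame:

```python
def go_back_n(frames, window=4, lost=frozenset({2})):
    """Toy Go-Back-N sender: when frame i is lost, it and every later
    frame already in flight are retransmitted (receiver window = 1).
    `lost` holds frame numbers that get lost once; timers are omitted."""
    sent_log, base = [], 0
    lost = set(lost)
    while base < len(frames):
        # send everything currently allowed by the window
        for i in range(base, min(base + window, len(frames))):
            sent_log.append(i)
            if i in lost:
                lost.discard(i)       # it will get through on retransmission
                break
        else:
            base = min(base + window, len(frames))
            continue
        base = i                      # go back: resend from the lost frame
    return sent_log

print(go_back_n(list(range(6))))      # [0, 1, 2, 2, 3, 4, 5]
```

With frame 2 lost, the sender transmits 0, 1, 2, detects the loss, and resends from frame 2 onward.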
The example of Go-Back-N ARQ is shown below in the figure.
Selective Repeat ARQ
Selective Repeat ARQ is also known as Selective Repeat Automatic Repeat Request. It is a data link layer protocol that uses the sliding window method. The Go-Back-N ARQ protocol works well if it has few errors, but if there are many errors in the frames, a lot of bandwidth is lost in resending them. So, we use the Selective Repeat ARQ protocol. In this protocol, the size of the sender window is always equal to the size of the receiver window, and the size of the sliding window is always greater than 1.
If the receiver receives a corrupt frame, it does not directly discard it. It sends a negative acknowledgment to the sender, and the sender resends that frame as soon as it receives the negative acknowledgment. There is no waiting for any time-out to resend that frame. The design of the Selective Repeat ARQ protocol is shown below.
The example of the Selective Repeat ARQ protocol is shown below in the figure.
Difference between Go-Back-N ARQ and Selective Repeat ARQ

Go-Back-N ARQ | Selective Repeat ARQ
If a frame is corrupted or lost, all subsequent frames have to be sent again. | Only the frame that is corrupted or lost is sent again.
If it has a high error rate, it wastes a lot of bandwidth. | Little bandwidth is lost.
It is less complex. | It is more complex because it has to do sorting and searching as well, and it also requires more storage.
It does not require sorting. | Sorting is done to get the frames in the correct order.
It does not require searching. | The search operation is performed.
It is used more. | It is used less because it is more complex.

The Medium Access Control Sublayer:
The medium access control (MAC) sublayer is a sublayer of the data link layer of the Open Systems Interconnection (OSI) reference model for data transmission. It is responsible for flow control and multiplexing of the transmission medium. It controls the transmission of data packets via remotely shared channels, and it sends data over the network interface card.
MAC Layer in the OSI Model
The Open Systems Interconnection (OSI) model is a layered networking framework that conceptualizes how communications should be done between heterogeneous systems. The data link layer is the second lowest layer. It is divided into two sublayers −
• The logical link control (LLC) sublayer
• The medium access control (MAC) sublayer
The following diagram depicts the position of the MAC layer –
Functions of MAC Layer
• It provides an abstraction of the physical layer to the LLC and upper layers of the OSI network.
• It is responsible for encapsulating frames so that they are suitable for transmission via the physical medium.
• It resolves the addressing of the source station as well as the destination station, or groups of destination stations.
• It performs multiple access resolution when more than one data frame is to be transmitted. It determines the channel access methods for transmission.
• It also performs collision resolution and initiates retransmission in case of collisions.
• It generates the frame check sequences and thus contributes to protection against transmission errors.
MAC Addresses
A MAC address, or media access control address, is a unique identifier allotted to a network interface controller (NIC) of a device. It is used as a network address for data transmission within a network segment like Ethernet, Wi-Fi, and Bluetooth.
A MAC address is assigned to a network adapter at the time of manufacturing. It is hardwired or hard-coded in the network interface card (NIC). A MAC address consists of six groups of two hexadecimal digits, separated by hyphens, colons, or no separators. An example of a MAC address is 00:0A:89:5B:F0:11.
Channel Allocation Problem:
When more than one user desires to access a shared network channel, an algorithm is deployed for channel allocation among the competing users. The network channel may be a single cable or optical fiber connecting multiple nodes, or a portion of the wireless spectrum. Channel allocation algorithms allocate the channels and bandwidths to the users, who may be base stations, access points or terminal equipment.
Channel Allocation Schemes
Channel allocation may be done using two schemes −
• Static Channel Allocation
• Dynamic Channel Allocation
Static Channel Allocation
In the static channel allocation scheme, a fixed portion of the frequency channel is allotted to each user.
For N competing users, the bandwidth is divided into N channels using frequency division multiplexing (FDM), and each portion is assigned to one user. This scheme is also referred to as fixed channel allocation or fixed channel assignment. In this allocation scheme, there is no interference between the users since each user is assigned a fixed channel. However, it is not suitable for a large number of users with variable bandwidth requirements.
Dynamic Channel Allocation
In the dynamic channel allocation scheme, frequency bands are not permanently assigned to the users. Instead, channels are allotted to users dynamically as needed, from a central pool. The allocation is done considering a number of parameters so that transmission interference is minimized. This allocation scheme optimizes bandwidth usage and results in faster transmissions. Dynamic channel allocation is further divided into centralized and distributed allocation.
Multiple Access Protocols:
1. Random Access Protocols: In these, all stations have the same priority, that is, no station has more priority than another. Any station can send data depending on the medium's state (idle or busy). They have two features:
1. There is no fixed time for sending data.
2. There is no fixed sequence of stations sending data.
ALOHA Random Access Protocol
It was designed for wireless LAN (Local Area Network) but can also be used on any shared medium to transmit data. Using this method, any station can transmit data across the network whenever a data frame is available for transmission.
Aloha Rules
1. Any station can transmit data on the channel at any time.
2. It does not require any carrier sensing.
3. Collisions can take place, and data frames may be lost during transmission when multiple stations transmit.
4. Frames are acknowledged in Aloha; there is no collision detection, so lost frames are discovered through missing acknowledgments.
5. It requires retransmission of data after some random amount of time.
Pure Aloha Whenever data is available for sending over the channel, pure Aloha transmits it. In pure Aloha, each station transmits data on the channel without checking whether the channel is idle or not, so collisions may occur and data frames can be lost. After transmitting a data frame, the station waits for the receiver's acknowledgment. If the acknowledgment does not arrive within the specified time, the station assumes the frame has been lost or destroyed, waits for a random amount of time, called the backoff time (Tb), and retransmits the frame, repeating until the data is successfully delivered to the receiver. 1. The total vulnerable time of pure Aloha is 2 * Tfr. 2. Maximum throughput occurs when G = 1/2, giving S = 18.4%. 3. The probability of successful transmission of a data frame is S = G * e^(-2G).
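The throughput expression above can be checked numerically. A minimal Python sketch (not part of the original notes; the function name is our own) evaluates S = G·e^(−2G) and confirms the 18.4% peak:

```python
import math

def pure_aloha_throughput(G: float) -> float:
    """Throughput S = G * e^(-2G): the fraction of frame times that
    carry a successful frame, for an offered load of G frames per
    frame time."""
    return G * math.exp(-2 * G)

# Maximum throughput occurs at G = 1/2, about 18.4 %.
print(f"S(0.5) = {pure_aloha_throughput(0.5):.4f}")
```

Evaluating the function over a grid of G values shows the curve rising to its maximum at G = 0.5 and falling off for heavier load, as collisions dominate.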
Slotted Aloha Slotted Aloha was designed to improve on pure Aloha's efficiency, because pure Aloha has a very high probability of frame collision. In slotted Aloha, the shared channel is divided into fixed time intervals called slots. If a station wants to send a frame on the shared channel, the frame can only be sent at the beginning of a slot, and only one frame is allowed per slot. If a station misses the beginning of a slot, it must wait until the beginning of the next slot. A collision is still possible when two or more stations try to send a frame at the beginning of the same time slot. 1. Maximum throughput occurs in slotted Aloha when G = 1, giving S = 36.8%. 2. The probability of successfully transmitting a data frame in slotted Aloha is S = G * e^(-G). 3. The total vulnerable time in slotted Aloha is Tfr. CSMA (Carrier Sense Multiple Access) CSMA is a media access protocol that senses the traffic on the channel (idle or busy) before transmitting data. If the channel is idle, the station can send data on the channel; otherwise, it must wait until the channel becomes idle. This reduces the chances of a collision on the transmission medium. CSMA Access Modes: • 1-Persistent: In the 1-persistent mode of CSMA, each node first senses the shared channel, and if the channel is idle, it immediately sends the data. Otherwise it must
wait, continuously monitoring the channel, and transmit the frame as soon as the channel becomes idle. • Non-Persistent: In this access mode of CSMA, before transmitting data each node senses the channel, and if the channel is idle, it immediately sends the data. Otherwise, the station waits for a random time (it does not sense continuously), and when the channel is then found to be idle, it transmits the frame. • P-Persistent: This mode is a combination of the 1-persistent and non-persistent modes. In p-persistent mode, each node senses the channel, and if the channel is idle, it sends a frame with probability p. With probability q = 1 - p, it defers and waits for the next time slot before trying again. • O-Persistent: The O-persistent method defines a transmission order (priority) among the stations before transmission on the shared channel. If the channel is found to be idle, each station waits for its assigned turn to transmit the data.
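The p-persistent decision rule above fits in a few lines of Python. This is an illustrative sketch, not code from any standard; the function name and return values are our own:

```python
import random

def p_persistent_decision(channel_idle: bool, p: float, rng=random) -> str:
    """One decision step of p-persistent CSMA (an illustrative sketch).

    Returns 'transmit', 'defer' (wait one slot, probability q = 1 - p),
    or 'sense_again' (channel busy: keep sensing)."""
    if not channel_idle:
        return "sense_again"
    if rng.random() < p:       # transmit with probability p
        return "transmit"
    return "defer"             # otherwise wait for the next slot

random.seed(42)
# With p = 0.3, roughly 30 % of idle-channel decisions are 'transmit'.
sample = [p_persistent_decision(True, 0.3) for _ in range(1000)]
print(sample.count("transmit"))   # close to 300
```

Setting p = 1 recovers 1-persistent behaviour, which shows why 1-persistent CSMA collides whenever two stations are waiting on the same busy channel.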
CSMA/CD It is a carrier sense multiple access / collision detection network protocol for transmitting data frames. The CSMA/CD protocol works within the medium access control layer. A station first senses the shared channel before broadcasting a frame, and if the channel is idle, it transmits the frame while monitoring whether the transmission succeeds. If the frame is successfully received, the station can send its next frame. If a collision is detected, the station sends a jam/stop signal on the shared channel to terminate the data transmission. It then waits a random time before attempting to send the frame again. CSMA/CA It is a carrier sense multiple access / collision avoidance network protocol for transmitting data frames. It also works within the medium access control layer. When a station sends a data frame on the channel, it listens to the channel to check whether the transmission is clear. If the station receives back only its own signal, the data frame has been successfully transmitted to the receiver. If it receives two signals (its own and another that has collided with it), a collision of frames has occurred on the shared channel; the sender thus detects the collision from the signal it receives back. The following methods are used in CSMA/CA to avoid collisions: Interframe space: In this method, the station waits for the channel to become idle, and once it finds the channel idle, it does not immediately send the data. Instead, it waits for a period of time called the Interframe Space, or IFS. The IFS time is often used to define the priority of the station. Contention window: In the contention window method, the available time is divided into slots. When the station/sender is ready to transmit a data frame, it chooses a random number of slots as its wait time.
If the channel becomes busy during the wait, the station does not restart the entire process; it merely pauses the timer and resumes it when the channel becomes idle again, sending the data when the timer expires. Acknowledgment: In the acknowledgment method, the sender retransmits the data frame if an acknowledgment is not received within a timeout. Controlled Access Protocols In controlled access, stations reduce data frame collisions on the shared channel by seeking information from one another to determine which station has the right to send. Only one node is allowed to send at a time, avoiding collisions of messages on the shared medium. The three controlled-access methods are: 1. Reservation 2. Polling 3. Token Passing
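The contention-window behaviour just described (pick a random slot, pause while the channel is busy, enlarge the window after a failed attempt) is commonly realized as binary exponential backoff. A minimal sketch, assuming a doubling window capped at 1024 slots, typical defaults that are not stated in the text:

```python
import random

def next_contention_wait(cw: int, collided: bool, cw_max: int = 1024,
                         rng=random) -> tuple[int, int]:
    """One step of the contention-window procedure (a sketch):
    pick a random wait (in slots) from the current window, and double
    the window after a failed attempt, capped at cw_max.
    Returns (slots_to_wait, new_window)."""
    wait = rng.randrange(cw)                     # random slot in [0, cw)
    new_cw = min(cw * 2, cw_max) if collided else cw
    return wait, new_cw

cw = 16
wait, cw = next_contention_wait(cw, collided=True)   # window grows to 32
print(wait, cw)
```

Doubling the window after each collision spreads retries out over time, so repeated collisions between the same stations become progressively less likely.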
Reservation • In the reservation method, a station needs to make a reservation before sending data. • The time line has two kinds of periods: 1. Reservation intervals of fixed length 2. Data transmission periods of variable-length frames. • If there are N stations, the reservation interval is divided into N slots, and each station has one slot. • Suppose station 1 has a frame to send; it transmits a 1 bit during slot 1. No other station is allowed to transmit during this slot. • In general, the i-th station may announce that it has a frame to send by inserting a 1 bit into the i-th slot. After all N slots have been checked, each station knows which stations wish to transmit. • The stations which have reserved their slots transfer their frames in that order. • After the data transmission period, the next reservation interval begins. • Since everyone agrees on who goes next, there are never any collisions. The following figure shows a situation with five stations and a five-slot reservation frame. In the first interval, only stations 1, 3, and 4 have made reservations. In the second interval, only station 1 has made a reservation. Polling • The polling process is similar to the roll-call performed in a class. Just like the teacher, a controller sends a message to each node in turn. • One node acts as the primary station (controller) and the others are secondary stations. All data exchanges must be made through the controller. • The message sent by the controller contains the address of the node being selected for granting access. • Although all nodes receive the message, only the addressed one responds to it and sends data, if it has any. If there is no data, a “poll reject” (NAK) message is usually sent back. • Problems include the high overhead of the polling messages and the high dependence on the reliability of the controller.
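The bit-map reservation interval described above can be sketched as a short Python function; the function name is our own:

```python
def reservation_order(requests):
    """Given one reservation interval as a list of N bits (one per
    station), return the station numbers that will transmit, in slot
    order. Station i sets bit i if it has a frame queued -- a sketch
    of the bit-map reservation scheme."""
    return [i + 1 for i, bit in enumerate(requests) if bit]

# Five stations; 1, 3 and 4 reserved (the figure's first interval):
print(reservation_order([1, 0, 1, 1, 0]))  # [1, 3, 4]
```

Because every station sees the same bitmap, all stations independently compute the same transmission order, which is why no collisions can occur.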
Efficiency Let Tpoll be the time for polling and Tt be the time required for transmission of data. Then, Efficiency = Tt/(Tt + Tpoll) Token Passing • In the token passing scheme, the stations are logically connected to each other in the form of a ring, and access by the stations is governed by a token. • A token is a special bit pattern or small message which circulates from one station to the next in some predefined order. • In a token ring, the token is passed from one station to the adjacent station in the ring, whereas in a token bus, each station uses the bus to send the token to the next station in some predefined order. • In both cases, the token represents permission to send. If a station has a frame queued for transmission when it receives the token, it can send that frame before passing the token to the next station. If it has no queued frame, it simply passes the token along. • After sending a frame, each station must wait for all N stations (including itself) to send the token to their neighbors and for the other N − 1 stations to send a frame, if they have one. • Problems such as duplication of the token, loss of the token, insertion of a new station, and removal of a station need to be tackled for correct and reliable operation of this scheme.
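One circulation of the token, with each station sending at most one queued frame before passing the token on, can be sketched as follows. This is a simplification with names of our own choosing; real token rings also handle token loss and station management:

```python
from collections import deque

def token_ring_round(queues):
    """One full circulation of the token around the ring: each station,
    on receiving the token, sends at most one queued frame and then
    passes the token to its neighbour. Returns the frames sent this
    round, in ring order."""
    sent = []
    for station, q in enumerate(queues, start=1):
        if q:                                  # a frame is queued: send it,
            sent.append((station, q.popleft()))
        # ...then pass the token on (implicit in moving to the next station)
    return sent

queues = [deque(["A1"]), deque(), deque(["C1", "C2"])]
print(token_ring_round(queues))  # [(1, 'A1'), (3, 'C1')]
print(token_ring_round(queues))  # [(3, 'C2')]
```

Each round gives every station exactly one sending opportunity, which is the round-robin fairness property the performance formulas below rely on.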
Performance The performance of a token ring can be evaluated by two parameters: 1. Delay, a measure of the time between when a packet is ready and when it is delivered. The average time (delay) required to send a token to the next station is a/N. 2. Throughput, a measure of the successful traffic: S = 1/(1 + a/N) for a < 1 and S = 1/{a(1 + 1/N)} for a > 1, where N = number of stations and a = Tp/Tt (Tp = propagation delay and Tt = transmission delay). Channelization Protocols: Channelization is a method that provides multiple access, in which the available bandwidth of the link is shared in time, in frequency, or through code among the different stations. Channelization protocols are broadly classified as follows: • FDMA (Frequency-Division Multiple Access) • TDMA (Time-Division Multiple Access) • CDMA (Code-Division Multiple Access)
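The token-ring delay and throughput expressions above can be evaluated directly; a small Python sketch (function names are our own):

```python
def token_delay(a: float, N: int) -> float:
    """Average delay to pass the token to the next station: a / N."""
    return a / N

def token_throughput(a: float, N: int) -> float:
    """Throughput S = 1/(1 + a/N) for a < 1, else 1/(a(1 + 1/N)),
    where a = Tp/Tt and N is the number of stations."""
    if a < 1:
        return 1 / (1 + a / N)
    return 1 / (a * (1 + 1 / N))

print(token_throughput(0.5, 10))   # a < 1 branch
print(token_throughput(2.0, 10))   # a > 1 branch
```

For a fixed a, throughput approaches 1 (for a < 1) or 1/a (for a > 1) as the number of stations N grows, since the per-station token-passing overhead shrinks.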
1. Frequency-Division Multiple Access With this technique, the available bandwidth is divided into frequency bands, and each station is allocated a band in which to send its data. In other words, each band is reserved for a specific station and belongs to that station all the time. • Each station uses a bandpass filter to confine its transmitter's frequencies. • To prevent interference between stations, the allocated bands are separated from one another by small guard bands. • FDMA specifies a predetermined frequency band for the entire period of communication. • Continuous streams of data are easily carried with FDMA. Figure: Frequency-Division media access. Advantages of FDMA Given below are some of the benefits of using the FDMA technique: • This technique is efficient when the traffic is uniformly constant. • FDMA is algorithmically simple, with low complexity. • FDMA places no restriction on the type of baseband signal or the type of modulation. Disadvantages of FDMA • If a channel is not in use, it sits idle and its bandwidth is wasted. • The maximum flow rate per channel is fixed and small.
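A static FDMA band plan with guard bands reduces to simple arithmetic. A short sketch (the bandwidth figures are illustrative, not from the text):

```python
def fdma_bands(total_bw_hz: float, n_stations: int, guard_hz: float):
    """Split a channel of total_bw_hz among n_stations equal bands,
    separated by guard bands of guard_hz (a simplified sketch).
    Returns (start, end) frequency offsets for each station's band."""
    usable = total_bw_hz - guard_hz * (n_stations - 1)
    band = usable / n_stations
    bands, start = [], 0.0
    for _ in range(n_stations):
        bands.append((start, start + band))
        start += band + guard_hz     # skip over the guard band
    return bands

# 1 MHz shared among 4 stations with 10 kHz guard bands:
for lo, hi in fdma_bands(1_000_000, 4, 10_000):
    print(f"{lo:.0f} to {hi:.0f} Hz")
```

Note how each added station shrinks every band: the per-channel rate is fixed by the plan, which is the disadvantage listed above.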
2. Time-Division Multiple Access Time-Division Multiple Access is another method of accessing the channel in shared-medium networks. • With this technique, the stations share the bandwidth of the channel in time. • Each station is allocated a time slot during which it can send data. • Each station transmits its data in its assigned time slot. • The main problem with TDMA is achieving synchronization between the different stations. • When using TDMA, each station needs to know the beginning and the location of its slot. • If the stations are spread over a large area, propagation delays occur; guard times are used to compensate for them. • The data link layer in each station tells its physical layer to use the allocated time slot. Figure: Time-Division media access. Some examples of TDMA are: • Personal Digital Cellular (PDC) • Integrated Digital Enhanced Network (iDEN) • Universal Terrestrial Radio Access (UTRA)
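A fixed round-robin TDMA schedule reduces to slot arithmetic. A minimal sketch using integer microseconds, a convention of our own that sidesteps floating-point rounding, and ignoring guard times:

```python
def tdma_owner(t_us: int, n_stations: int, slot_us: int) -> int:
    """Which station (1..N) owns the channel at time t_us microseconds,
    under a fixed round-robin TDMA schedule with equal slots of
    slot_us microseconds (a simplified sketch without guard times)."""
    return (t_us // slot_us) % n_stations + 1

# 3 stations, 2000 microsecond slots: the channel rotates 1, 2, 3, ...
print([tdma_owner(t, 3, 2000) for t in range(0, 12000, 2000)])
# [1, 2, 3, 1, 2, 3]
```

Because every station derives the same owner from the shared clock, the scheme only works if the clocks agree, which is the synchronization problem noted above.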
3. Code-Division Multiple Access CDMA (code-division multiple access) is another technique used for channelization. • CDMA differs from FDMA because one channel occupies the entire bandwidth of the link. • CDMA differs from TDMA because all the stations can send data simultaneously; there is no timesharing. • CDMA simply means communication using different codes. • In CDMA, there is only one channel, which carries all transmissions simultaneously. • CDMA is based on coding theory: each station is assigned a code, which is a sequence of numbers called chips. • Data from different stations can be transmitted simultaneously because each uses a different code language. Advantages of CDMA Given below are some of the advantages of using the CDMA technique: • It provides high voice quality. • CDMA operates at low power levels. • The capacity of the system is higher than with TDMA and FDMA. • CDMA is more cost-effective. Ethernet: Ethernet is the traditional technology for connecting devices in a wired local area network (LAN) or wide area network (WAN), enabling them to communicate with each other via a protocol -- a set of rules or common network language. Ethernet describes how network devices can format and transmit data so other devices on the same local or campus area network segment can recognize, receive and process the information. An Ethernet cable is the physical, encased wiring over which the data travels. Connected devices accessing a geographically localized network with a cable -- that is, with a wired rather than wireless connection -- likely use Ethernet. From businesses to gamers, diverse end users depend on the benefits of Ethernet connectivity, which include reliability and security. Compared to wireless LAN (WLAN) technology, Ethernet is typically less vulnerable to disruptions. It can also offer a greater degree of network security and control than wireless
technology since devices must connect using physical cabling. This makes it difficult for outsiders to access network data or hijack bandwidth for unsanctioned devices. Advantages and disadvantages Ethernet has many benefits for users, which is why it grew so popular. However, there are a few disadvantages as well. Advantages • relatively low cost; • backward compatibility; • generally resistant to noise; • good data transfer quality; • speed; • reliability; and • data security -- common firewalls can be used. Disadvantages • It is intended for smaller, shorter distance networks. • Mobility is limited. • Use of longer cables can create crosstalk. • It does not work well with real-time or interactive applications. • Increased traffic makes the Ethernet speed go down. • Receivers do not acknowledge the reception of data packets. • When troubleshooting, it is hard to trace which specific cable or node is causing the issue. Ethernet vs. Wi-Fi Wi-Fi is the most popular type of network connection. Unlike wired connection types, such as Ethernet, it does not require a physical cable to be connected; data is transmitted through wireless signals.
Differences between Ethernet and Wi-Fi connections Ethernet connection • transmits data over a cable; • limited mobility -- a physical cable is required; • more speed, reliability and security than Wi-Fi; • consistent speed; • data encryption is not required; • lower latency; and • more complex installation process. Wi-Fi connection • transmits data through wireless signals rather than over a cable; • better mobility, as no cables are required; • not as fast, reliable or secure as Ethernet; • more convenient -- users can connect to the internet from anywhere; • inconsistent speed -- Wi-Fi is prone to signal interference; • requires data encryption; • higher latency than Ethernet; and • simpler installation process. How Ethernet works IEEE specifies in the family of standards called IEEE 802.3 that the Ethernet protocol touches both Layer 1 (physical layer) and Layer 2 (data link layer) on the Open Systems Interconnection (OSI) network protocol model. Ethernet defines two units of transmission: packet and frame. The frame includes not just the payload of data being transmitted, but also the following: • the physical media access control (MAC) addresses of both the sender and receiver; • virtual LAN (VLAN) tagging and quality of service (QoS) information; and • error correction information to detect transmission problems.
Each frame is wrapped in a packet that contains several bytes of information to establish the connection and mark where the frame starts. Engineers at Xerox first developed Ethernet in the 1970s; Ethernet initially ran over coaxial cables. Today, a typical Ethernet LAN uses special grades of twisted-pair cables or fiber optic cabling. Early Ethernet connected multiple devices into network segments through hubs -- Layer 1 devices responsible for transporting network data -- using either a daisy chain or star topology. However, if two devices that share a hub try to transmit data at the same time, the packets can collide and create connectivity problems. To alleviate these digital traffic jams, the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol, standardized by IEEE, enables devices to check whether a given line is in use before initiating new transmissions.
Types of Ethernet cables The IEEE 802.3 working group approved the first Ethernet standard in 1983. Since then, the technology has continued to evolve and embrace new media, higher transmission speeds and changes in frame content: • 802.3ac was introduced to accommodate VLAN and priority tagging. • 802.3af defines Power over Ethernet (PoE), which is crucial to most Wi-Fi and Internet Protocol (IP) telephony deployments.
• 802.11a, b, g, n, ac and ax define the equivalent of Ethernet for WLANs. • 802.3u ushered in 100BASE-T -- also known as Fast Ethernet -- with data transmission speeds of up to 100 Mbps. The term BASE-T indicates the use of twisted-pair cabling. Gigabit Ethernet offers speeds of 1,000 Mbps -- 1 gigabit or 1 billion bits per second (bps); 10 GbE reaches up to 10 Gbps, and so on. Network engineers use 100BASE-T largely to connect end-user computers, printers and other devices; to manage servers and storage; and to achieve higher speeds for network backbone segments. Over time, the typical speed of each connection tends to increase. Ethernet cables connect network devices to the appropriate routers or modems, with different cables working with different standards and speeds. For example, the Category 5 (Cat5) cable supports traditional and 100BASE-T Ethernet, the Category 5e (Cat5e) cable can handle GbE and Category 6 (Cat6) works with 10 GbE. Ethernet crossover cables, which connect two devices of the same type, also exist, enabling two computers to be connected without a switch or router between them. Wireless LANs (WLANs): Wireless LANs (WLANs) are wireless computer networks that use high-frequency radio waves instead of cables for connecting the devices within a limited area, forming a LAN (Local Area Network). Users connected by wireless LANs can move around within this limited area, such as a home, school, campus, office building or railway platform. Most WLANs are based upon the IEEE 802.11 standard, i.e. Wi-Fi. Components of WLANs The components of WLAN architecture as laid down in IEEE 802.11 are − • Stations (STA) − Stations comprise all devices and equipment that are connected to the wireless LAN. Each station has a wireless network interface controller. A station can be of two types − o Wireless Access Point (WAP or AP) o Client • Basic Service Set (BSS) − A basic service set is a group of stations communicating at the physical layer level.
BSS can be of two categories − o Infrastructure BSS o Independent BSS • Extended Service Set (ESS) − It is a set of all connected BSS.
• Distribution System (DS) − It connects the access points in an ESS. Types of WLANs WLANs, as standardized by IEEE 802.11, operate in two basic modes: infrastructure mode and ad hoc mode. • Infrastructure Mode − Mobile devices or clients connect to an access point (AP) that in turn connects via a bridge to the LAN or Internet. A client transmits frames to other clients via the AP. • Ad Hoc Mode − Clients transmit frames directly to each other in a peer-to-peer fashion. Advantages of WLANs • They provide clutter-free homes, offices and other networked places. • WLANs are scalable, i.e. devices may be added to or removed from the network with greater ease than in wired LANs. • The system is portable within the network coverage area; access to the network is not bounded by the length of cables. • Installation and setup are much easier than for wired counterparts. • The equipment and setup costs are reduced. Disadvantages of WLANs • Since radio waves are used for communications, the signals are noisier, with more interference from nearby systems. • Greater care is needed to encrypt information; WLANs are also more prone to errors, so they require greater bandwidth than wired LANs. • WLANs are slower than wired LANs.
Wireless broadband: Wireless broadband (WiBB) is a networking technology designed to provide high-speed Internet and data service through wireless networks. Wireless broadband may be delivered through wireless local area networks (WLANs) or wireless wide area networks (WWANs). Similar to other wireless services, wireless broadband can be either fixed or mobile. Features of WiBB • WiBB is similar to wired broadband service in that it connects to an Internet backbone, with the difference that it uses radio waves instead of cables for the last mile of the network. • The range of most broadband wireless access (BWA) services is around 50 km from the transmitting tower. • Download speeds provided by some wireless Internet service providers (WISPs) exceed 100 Mbps. • WiBB mostly provides asymmetrical data rates for downloads and uploads. • WiBB may also be symmetrical, i.e. have the same data rate downstream and upstream; this is mostly seen in fixed wireless networks. • Any device connected to WiBB needs to be equipped with a wireless adapter to translate data into radio signals, which are then transmitted using an antenna. Types of WiBB Fixed Wireless Broadband Fixed WiBB provides wireless Internet services for devices located in more or less fixed locations, like homes and offices. The services are comparable to those provided through digital subscriber line (DSL) or cable modem, with the difference that the mode of transmission is wireless. The two main technologies used in fixed WiBB are − • LMDS (Local Multipoint Distribution System) • MMDS (Multichannel Multipoint Distribution Service) Mobile Wireless Broadband Mobile WiBB, also called mobile broadband, provides high-speed broadband connections from mobile phone service providers, accessible from varying locations.
The locations must be within the coverage area of the mobile service provider's phone towers, and the connections are subject to the monthly service plan subscribed to by the user. Mobile broadband can be costlier due to its portability. It also generally has varying or limited speed, except in urban areas.
Bluetooth: Bluetooth is a short-range wireless technology standard used for exchanging data between fixed and mobile devices over short distances, using UHF radio waves in the ISM band from 2.402 GHz to 2.48 GHz, and for building personal area networks (PANs). It was originally conceived as a wireless alternative to RS-232 data cables. Bluetooth works on the simple principle of sending and receiving data in the form of radio waves. Every Bluetooth-enabled device has a card-like attachment known as the Bluetooth adapter. It is this Bluetooth adapter that sends and receives data. A Bluetooth adapter has a particular connection range. One adapter can detect another Bluetooth device only if the second device is present within the range of the first. When they are within range, they can set up a connection between themselves. Setting up a connection between two Bluetooth devices is known as pairing of devices. The radio-wave connection between two paired devices is used to send and receive data; the data rate is about 721 kilobits per second in early versions of the standard. There are 79 frequency channels in the 2.4 GHz band through which the devices send and receive data. When two devices are trying to pair, they search for a common frequency through which they can send and receive data. When such a frequency is discovered, the devices are "found". The connection between two devices does not hamper the connection between two other devices, because they usually use different frequency channels and hence do not overlap. In simple terms, this is the principle behind Bluetooth technology. One merit of Bluetooth technology is that it allows more than two devices to share information at the same time. When more than two electronic devices enter into the process of sending and receiving data, they form a small network like a computer network.
Such a micro-network of electronic devices is called a piconet. A piconet contains more than two devices; a single piconet can accommodate up to eight devices, one master and up to seven active slaves. One of the devices acts as the superior device, or master device. It is the master device that initiates the action, or "gives the order to begin" the action. The other devices are known as slave units; they act according to the instructions given by the master unit. A Bluetooth device can act either as the master or as a slave, depending upon the situation. A device can enter a piconet and leave a piconet. When two or more piconets are joined together, the result is termed a scatternet.