Asynchronous protocols are used primarily for low-speed data communications between PCs and other relatively small computers. Framing occurs at the byte level: each byte is preceded by a start bit (a 0 bit) and followed by a stop bit (a 1 bit). A parity bit often accompanies each character as well. Telex transmission incorporates an additional stop bit.
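The byte-level framing described above can be sketched as follows. Even parity and LSB-first bit order are assumptions for illustration, since the text does not specify either:

```python
def frame_byte(data: int) -> list[int]:
    """Frame one 8-bit character for asynchronous transmission.

    The line idles at 1; a 0 start bit marks the beginning of the
    character, followed by the data bits (LSB first, assumed), an
    even parity bit (assumed), and a 1 stop bit.
    """
    bits = [(data >> i) & 1 for i in range(8)]  # data bits, LSB first
    parity = sum(bits) % 2                      # even parity over data bits
    return [0] + bits + [parity] + [1]          # start + data + parity + stop

frame = frame_byte(ord("A"))  # 11 line bits for one 8-bit character
```

Note the overhead: 3 framing bits accompany every 8 data bits, which is why asynchronous transmission is confined to low-speed links.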
Kermit and XMODEM are asynchronous protocols that organize information into 128-byte packets; Kermit also uses CRC error control. Data can also be blocked at the application level, and VRC (Vertical Redundancy Check, a per-character parity bit) can be complemented with the additional technique of LRC (Longitudinal Redundancy Check, a per-block check character) for improved error control.
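The VRC/LRC combination can be sketched as two-dimensional parity: one VRC bit per character, plus one LRC character computed across the block. Even parity and the XOR formulation are assumptions for illustration:

```python
def vrc_bit(ch: int) -> int:
    """Even-parity check bit over one character (VRC)."""
    return bin(ch).count("1") % 2

def lrc_byte(block: bytes) -> int:
    """Longitudinal check character (LRC): even parity per bit
    column, i.e. the XOR of all characters in the block."""
    lrc = 0
    for ch in block:
        lrc ^= ch
    return lrc

block = b"HELLO"
vrc_bits = [vrc_bit(c) for c in block]  # one check bit per character
bcc = lrc_byte(block)                   # one check character per block
```

A single-bit error flips both a row (VRC) and a column (LRC) check, which is why the combination detects more error patterns than parity alone.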
Two general types of data communications protocols exist: byte-oriented and bit-oriented. While the performance characteristics of byte-oriented protocols are acceptable for many applications, bit-oriented protocols are much more appropriate for communications-intensive applications where the integrity of the transmitted data is critical.
Byte-oriented protocols communicate value strings in byte formats, generally of 8 bits per byte. Control information (e.g., start, parity, and stop bits) is embedded in the header and trailer of each byte or block of data. As byte-oriented protocols are overhead-intensive, they are found almost exclusively in older computer systems, operating at the second, or data link, layer. Byte-oriented protocols generally are asynchronous and half-duplex (HDX), operating over dial-up, two-wire circuits. Examples include Binary Synchronous Communications (Bisync, or BSC).
Bit-oriented protocols transmit information in a much larger bit stream, with opening and closing flags delimiting each frame and separating the text from the control information, which addresses control issues associated with the entire data set. Bit-oriented protocols are much less overhead-intensive; they are usually synchronous and full-duplex, and operate over dedicated, four-wire circuits. Examples include IBM's Synchronous Data Link Control (SDLC) and the ISO's High-Level Data Link Control (HDLC).
Binary Synchronous Protocol (Bisync or BSC)
Bisync was developed by IBM in 1966 as a byte-oriented protocol that frames the data with control codes that apply to the entire set of data. Bisync organizes data into blocks of up to 512 characters, which are sent over the link sequentially (one at a time). An ACK or NAK is transmitted from the receiving terminal to the transmitting device following the receipt of each block. Error control is based on a Block Check Character (BCC), which is transmitted along with the data; the receiving device independently calculates the BCC and compares the two values.
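The BCC comparison at the receiver can be sketched as follows. The XOR-based LRC used here is one common form of the BCC; Bisync can also use a CRC, depending on the character code in use:

```python
def bcc(block: bytes) -> int:
    """Block Check Character computed as the XOR of all data bytes
    (an LRC; a CRC is used with some Bisync code sets)."""
    value = 0
    for byte in block:
        value ^= byte
    return value

def receive(block: bytes, transmitted_bcc: int) -> str:
    """Recompute the BCC over the received block and compare it with
    the BCC sent by the transmitter: match -> ACK, mismatch -> NAK."""
    return "ACK" if bcc(block) == transmitted_bcc else "NAK"
```

Because each block must be acknowledged before the next is sent, a slow ACK/NAK turnaround directly limits Bisync throughput.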
The Bisync block consists of synchronizing bits, data, and control characters sent in a continuous data stream, block-by-block. The specific elements of the Bisync block, as illustrated in Figure 7.8, are as follows, in sequence [7-7], [7-8].
Figure 7.8 BSC block.
Synchronous Data Link Control (SDLC)
SDLC, developed in the mid-1970s, is at the heart of IBM's Systems Network Architecture (SNA). SDLC is a bit-oriented protocol that uses bit strings to represent characters. SDLC uses CRC error detection, in this specific case known as the Frame Check Sequence (FCS). SDLC supports high-speed transmission, and generally employs full-duplex (FDX), dedicated circuits. SDLC can work in either HDX or FDX, supports satellite transmission, and works in point-to-point or multipoint network configurations.
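The FCS can be sketched as a bitwise CRC using the CCITT generator polynomial x^16 + x^12 + x^5 + 1 with the common 0xFFFF preset. This is a simplified variant for illustration; the on-the-wire HDLC/SDLC FCS additionally reflects the bit order and complements the result:

```python
def fcs_crc16_ccitt(data: bytes) -> int:
    """CRC-16/CCITT (poly 0x1021, preset 0xFFFF), computed bit by bit.

    Each data byte is shifted into a 16-bit register; whenever the
    high bit falls out as a 1, the register is XORed with the
    generator polynomial.
    """
    poly, crc = 0x1021, 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

Any single-bit error in a frame changes the computed FCS, so the receiver detects it by recomputing the CRC and comparing against the transmitted value.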
Up to 128 frames can be sent in a string, with each frame containing up to 7 blocks, each of up to 512 characters. Each block within each frame is checked individually for errors. Errored blocks must be identified as such to the transmitting device within a given time limit, or they are assumed to have been received error-free.
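The size limits quoted above can be illustrated with a small partitioning sketch; the function name and the flat list-of-blocks representation are illustrative only, not part of SDLC itself:

```python
MAX_FRAMES_PER_STRING = 128  # frames per string, per the text
MAX_BLOCKS_PER_FRAME = 7     # blocks per frame
MAX_CHARS_PER_BLOCK = 512    # characters per block

def split_into_frames(message: bytes) -> list[list[bytes]]:
    """Partition a message into frames of up to 7 blocks, each block
    holding up to 512 characters, per the limits quoted above."""
    frame_size = MAX_BLOCKS_PER_FRAME * MAX_CHARS_PER_BLOCK
    frames = []
    for f in range(0, len(message), frame_size):
        frame = message[f:f + frame_size]
        blocks = [frame[b:b + MAX_CHARS_PER_BLOCK]
                  for b in range(0, len(frame), MAX_CHARS_PER_BLOCK)]
        frames.append(blocks)
    return frames
```

At these limits, a single string can carry up to 128 × 7 × 512 = 458,752 characters, each block checked for errors individually.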
The SDLC frame consists of synchronizing bits, data, and control characters sent in a continuous data stream, frame-by-frame. The specific elements of the SDLC frame (Figure 7.9) are as follows, in sequence [7-7].
Figure 7.9 SDLC Frame.