American (National) Standard Code for Information Interchange (ANSCII or ASCII)
ASCII was first developed in 1963 and was specifically oriented toward data processing applications. It was subsequently modified in 1967 by the American National Standards Institute (ANSI) to address changes in contemporary equipment; that version was originally known as ASCII II but is now known simply as ASCII [7-5].
ASCII employs a 7-bit coding scheme (Figure 7.5), supporting 128 (2^7) characters, which is quite satisfactory for most alphabets, punctuation characters, and so on. As ASCII was designed for use in asynchronous computer systems (non-IBM, in those days), fewer control characters were required, making a 7-bit scheme acceptable.
Figure 7.5 ASCII code example, with character framing.
As is the case with asynchronous communications in general, start and stop bits frame each character, and synchronization bits are not employed. ASCII makes use of a simple error detection scheme known as parity checking. Parity checking is relatively unreliable, with many errors going undetected and detected errors requiring retransmission, although forward error correction often is employed in contemporary systems.
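To make the framing and parity mechanics concrete, the following sketch (in Python, purely for illustration) frames a single 7-bit ASCII character with a start bit, an even parity bit, and a stop bit. The function names, the LSB-first bit order, and the even-parity convention are assumptions chosen for demonstration; real asynchronous interfaces differ in these details.

# A minimal sketch of asynchronous ASCII framing with even parity (illustrative
# conventions only: start bit, 7 data bits LSB first, parity bit, stop bit).

def frame_ascii_char(ch: str) -> list[int]:
    """Frame one 7-bit ASCII character for asynchronous transmission."""
    code = ord(ch)
    if code > 0x7F:
        raise ValueError("not a 7-bit ASCII character")
    data_bits = [(code >> i) & 1 for i in range(7)]    # LSB first
    parity = sum(data_bits) % 2                        # even parity bit
    return [0] + data_bits + [parity] + [1]            # start, data, parity, stop

def check_frame(frame: list[int]) -> str:
    """Verify framing and parity, then recover the character."""
    start, data_bits, parity, stop = frame[0], frame[1:8], frame[8], frame[9]
    if start != 0 or stop != 1:
        raise ValueError("framing error")
    if sum(data_bits) % 2 != parity:
        raise ValueError("parity error detected")      # catches single-bit errors only
    return chr(sum(b << i for i, b in enumerate(data_bits)))

frame = frame_ascii_char("A")    # 'A' = 65 = 1000001 in binary
print(frame)                     # [0, 1, 0, 0, 0, 0, 0, 1, 0, 1]
print(check_frame(frame))        # A

Note that flipping any two bits in the frame leaves the parity unchanged, which is why parity checking alone misses many errors.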
UNICODE (UNIVERSAL CODE)
UNICODE is an attempt to standardize longer and more complex coding schemes used to accommodate more complex languages such as Japanese and Chinese. In the Japanese language, for instance, even the abbreviated Kanji writing system contains well over 2,000 characters; the Hiragana and Katakana alphabets are also used, further adding to the complexity. As 7- and 8-bit coding schemes cannot accommodate such a complex alphabet, computer manufacturers traditionally have taken proprietary approaches to this problem through the use of two linked 8-bit values.
UNICODE supports 65,536 (2^16) characters, thereby accommodating the most complex alphabets; in fact, multiple alphabets can be satisfied simultaneously. Further, UNICODE standardizes the coding scheme in order that computers of disparate origin can communicate information on a standard basis. As the transfer of UNICODE data does not require code translation, speed of transfer is improved, errors are reduced, and costs are lowered.
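As a rough illustration of the 16-bit idea, the sketch below (Python, with hypothetical function names) maps characters to single 16-bit code values, which is what the original 65,536-character design describes; it is not a full treatment of modern Unicode, which has since grown beyond 16 bits.

# A minimal sketch of representing characters as 16-bit code values, in the
# spirit of the original 65,536-character UNICODE design described above.

def to_16bit_units(text: str) -> list[int]:
    """Map each character to a single 16-bit code value."""
    units = []
    for ch in text:
        cp = ord(ch)
        if cp > 0xFFFF:
            raise ValueError("outside the 16-bit range of the original design")
        units.append(cp)
    return units

# One coding scheme covers Latin, Kana, and Kanji without switching code tables.
print([hex(u) for u in to_16bit_units("Aあ漢")])    # ['0x41', '0x3042', '0x6f22']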
Supported by relatively substantial machines, UNICODE employs synchronous transmission and sophisticated error detection and correction conventions, as discussed above in connection with EBCDIC.
The formatting of the data is a critical part of a communications protocol. Data formats let the receiving device logically determine what is to be done with the data and how to go about doing it. Data formats include code type, message length, and transmission validation techniques. A data format generally involves a header, text, and a trailer (Figure 7.6).
Figure 7.6 Data format, with header, text and trailer.
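The sketch below illustrates the header/text/trailer structure in Python. The specific fields shown (a destination address, a character count, the code type, and a simple checksum in the trailer) are assumptions chosen for illustration rather than the fields of any particular protocol.

# A minimal sketch of a header/text/trailer data format, loosely following
# Figure 7.6; field names and the checksum are illustrative assumptions.

def build_message(destination: str, text: str) -> dict:
    header = {"destination": destination, "length": len(text), "code": "ASCII"}
    trailer = {"checksum": sum(text.encode("ascii")) % 256}    # validation field
    return {"header": header, "text": text, "trailer": trailer}

def validate_message(message: dict) -> bool:
    """The receiver uses the header to interpret the text and the trailer to validate it."""
    text = message["text"]
    length_ok = len(text) == message["header"]["length"]
    checksum_ok = sum(text.encode("ascii")) % 256 == message["trailer"]["checksum"]
    return length_ok and checksum_ok

message = build_message("NODE-B", "HELLO")
print(validate_message(message))    # True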
The integrity of the transmitted data is of prime importance. There are several techniques that can be employed for error detection and, ideally, correction. The three basic modes of error control are recognition and flagging, recognition and retransmission, and recognition and forward error correction.
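As one concrete example, the sketch below works through the recognition-and-retransmission mode: a simple checksum recognizes a damaged block, and the block is sent again until it arrives intact. The simulated noisy channel, the checksum, and the retry limit are all assumptions made for demonstration.

# A minimal sketch of recognition and retransmission: a detected error causes
# the block to be repeated. The noisy channel is simulated for illustration.

import random

def checksum(block: bytes) -> int:
    return sum(block) % 256

def noisy_channel(block: bytes, error_rate: float = 0.3) -> bytes:
    """Occasionally flip one bit to simulate a transmission error."""
    if random.random() < error_rate:
        i = random.randrange(len(block))
        return block[:i] + bytes([block[i] ^ 0x01]) + block[i + 1:]
    return block

def send_with_retransmission(block: bytes, max_tries: int = 5) -> bytes:
    expected = checksum(block)                 # sent alongside the block, assumed intact here
    for attempt in range(1, max_tries + 1):
        received = noisy_channel(block)
        if checksum(received) == expected:     # recognition: the check passes
            return received
        print("attempt", attempt, ": error detected, block retransmitted")
    raise RuntimeError("block could not be delivered error-free")

print(send_with_retransmission(b"PAYLOAD"))

Recognition and flagging would stop after marking the block as bad, while forward error correction would add enough redundancy for the receiver to repair the block without a repeat.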
Echo Checking
Echo checking is one of the earliest means of error detection and correction. It involves the receiving device echoing the received data back to the transmitting device. The transmitting operator can view the data, as received and echoed, making corrections as appropriate. However, errors can occur in the transmission of the echoed data as well, making this approach highly unreliable.
Echo can be characterized as very slow and overhead-intensive, as characters are transmitted one at a time; therefore, the process is bandwidth-intensive, as well. Further, the error detection and correction process is manual (human-to-machine) and decidedly unreliable. As a result, echo checking seldom is used in contemporary data communications.
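For completeness, the sketch below simulates echo checking character by character; the error model and function names are illustrative assumptions. Note in the code that the echo path is just as error-prone as the forward path, which is the weakness described above.

# A minimal sketch of echo checking: the receiver echoes each character back
# and the sender compares the echo with what was sent. Errors are simulated.

import random

def noisy(ch: str, error_rate: float = 0.1) -> str:
    """Occasionally substitute a character to simulate a line error."""
    if random.random() < error_rate:
        return chr((ord(ch) + 1) % 128)
    return ch

def send_with_echo(message: str) -> None:
    for ch in message:                 # characters travel one at a time
        received = noisy(ch)           # forward path
        echoed = noisy(received)       # return (echo) path, equally error-prone
        if echoed != ch:
            print(repr(ch), "mismatch: the operator would re-send the character")
        else:
            print(repr(ch), "echoed correctly")

send_with_echo("DATA")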