Parity Checking
Parity checking is by far the most commonly used method for error detection and correction, as it is used in asynchronous devices such as PCs. Parity involves the transmitting terminal appending one or more parity bits to the data set in order to create odd parity or even parity. In other words, an odd or even total bit value is always created, character by character or data set by data set. While less than ideal, this approach is easily implemented and offers reasonable assurance of data integrity. There are two dimensions to parity checking: vertical redundancy checking and longitudinal redundancy checking.
Vertical Redundancy Checking (VRC) entails appending a parity bit to the end of each transmitted character to create an odd or even total bit value. The receiving device executes the same mathematical process to verify that the expected total bit value was received. Logically speaking, the two devices sum the bit values vertically, as represented in Figure 7.7. While inexpensive and easily implemented in computers employing asynchronous transmission, VRC is not highly reliable; because any even number of bit errors within a character leaves the parity unchanged, such errors pass undetected, and VRC often is referred to as send and pray.
Figure 7.7 Example ASCII code with VRC and LRC parity checking.
Longitudinal Redundancy Checking (LRC), or Block Checking Character (BCC), adds another level of reliability, as data is viewed in a block or data set, as though the receiving device were viewing the data in a matrix format. This additional technique checks the total bit values of the characters on a horizontal basis, employing the same parity (odd or even) as does the vertical check (Figure 7.7). However, it is less than completely reliable, as compensating errors can still occur in nonadjacent characters. While remaining relatively inexpensive and easily implemented in devices employing asynchronous transmission, LRC/BCC adds a significant measure of reliability. Also known as a checksum, the LRC is sent as an extra character at the end of each data block [7-5].
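To make the two dimensions concrete, the following Python sketch computes even parity in both directions over a short block of 7-bit ASCII characters, much as depicted in Figure 7.7. The sample string, the helper names, and the choice of even parity are illustrative assumptions rather than details taken from the figure.

# A minimal sketch of VRC and LRC over a small block of 7-bit ASCII
# characters, assuming even parity in both dimensions (the sample data
# and helper names here are illustrative, not from the text).

def to_bits(ch, width=7):
    """7-bit ASCII representation of a character, most significant bit first."""
    return [(ord(ch) >> (width - 1 - i)) & 1 for i in range(width)]

def vrc_bit(char_bits):
    """Vertical parity bit: makes the total number of 1s in the character even."""
    return sum(char_bits) % 2

def lrc_character(rows):
    """Longitudinal parity (BCC): one even-parity bit per bit position (column)."""
    return [sum(row[i] for row in rows) % 2 for i in range(len(rows[0]))]

block = [to_bits(ch) for ch in "DATA"]

# Append a VRC bit to each character (row) ...
rows_with_vrc = [bits + [vrc_bit(bits)] for bits in block]
# ... and compute the LRC/BCC character over the columns.
bcc = lrc_character(rows_with_vrc)

for ch, row in zip("DATA", rows_with_vrc):
    print(ch, row)
print("BCC", bcc)

Cross-referencing a VRC failure (which row) against an LRC failure (which column) is what allows a single-bit error to be both detected and located, and therefore corrected.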
Block Parity
The technique of block parity improves considerably on simple parity checking. While Spiral Redundancy Checking (SRC) and interleaving improved the detection of the errors that accompanied increased transmission speeds and more complex modulation techniques, they have given way to Cyclic Redundancy Checking (CRC), which is commonly employed today.
Cyclic Redundancy Checking (CRC) validates transmission of a set of data, formatted in a block or frame, through the use of a mathematical generator polynomial known to both transmitter and receiver. The transmitting device treats the data in the block or frame as a single binary value and divides it by the generator polynomial (a 17-bit polynomial for a 16-bit CRC, a 33-bit polynomial for a 32-bit CRC); the remainder of that division is appended to the block or frame as either a 16- or 32-bit value. The receiving device executes the identical process and compares the results. The result is an integrity factor on the order of 10^-14; in other words, the probability of an undetected error is roughly 1 in 100 trillion. By way of example, at a continuous transmission speed of 1 Mbps, 10^14 bits take roughly 10^8 seconds to send, so one undetected error would be expected approximately every three years.
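As a rough illustration of the mechanics, the following Python sketch computes a 16-bit CRC by bitwise polynomial division. The particular generator polynomial (CRC-16-CCITT, x^16 + x^12 + x^5 + 1, i.e., 0x1021) and the initial value are assumptions made for the example; the text does not name a specific polynomial.

# A minimal sketch of a CRC-16 computed by bitwise polynomial division.
# The generator polynomial (CRC-16-CCITT, 0x1021) and initial value 0xFFFF
# are assumptions for illustration, not details specified by the text.

def crc16_ccitt(data: bytes, poly: int = 0x1021, crc: int = 0xFFFF) -> int:
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            if crc & 0x8000:                      # high bit set: "subtract" (XOR) the divisor
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc                                    # 16-bit remainder appended to the frame

frame = b"EXAMPLE FRAME"
print(hex(crc16_ccitt(frame)))

# The receiver runs the identical calculation over the received frame and
# compares its result with the appended value; a mismatch signals an error.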
An unerrored block or frame is ACKnowledged by the receiving device through the transmission of an ACK, whereas an errored block or frame is Negatively AcKnowledged with a NAK. A NAK prompts the transmitting device to retransmit that specific block or frame; an ACK cues the transmitting device that the next block or frame of data can be sent.
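The retransmission logic amounts to a simple stop-and-wait loop, sketched below in Python. The send_frame and wait_for_reply callables are hypothetical placeholders for the underlying link-layer primitives, and the retry limit is an arbitrary choice for the example.

# An illustrative sketch of the ACK/NAK retransmission loop described above.
# send_frame() and wait_for_reply() are hypothetical placeholders, not
# primitives named by the text.

ACK, NAK = "ACK", "NAK"

def transmit(frames, send_frame, wait_for_reply, max_retries=3):
    for frame in frames:
        for _ in range(max_retries + 1):
            send_frame(frame)
            if wait_for_reply() == ACK:           # receiver found no CRC error
                break                             # move on to the next frame
        else:
            raise RuntimeError("frame repeatedly NAKed; giving up")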
While CRC is relatively memory- and processor-intensive and therefore expensive to implement, it is easily accommodated in high-order computers which benefit from synchronous transmission techniques. As CRC ensures that data transmission is virtually error-free, it is considered mandatory in most sophisticated computer communications environments.
Forward Error Correction (FEC) involves the addition of redundant information embedded in the data set so that the receiving device can detect errors and correct them without requiring a retransmission [7-6]. The two most commonly employed techniques are Hamming and BCH (Bose, Chaudhuri, and Hocquenghem) codes.
While even more memory- and processor-intensive than CRC, FEC allows the receiving device to correct errors in transmission, thereby avoiding most requirements for retransmission of errored blocks or frames of data. As a result, FEC improves the efficiency, or throughput, of the network, reducing transmission costs in the process without sacrificing data integrity.
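The following Python sketch illustrates the Hamming approach with the classic (7,4) code, in which three parity bits protect four data bits and allow the receiver to locate and correct any single-bit error. The bit ordering (parity bits in positions 1, 2, and 4) follows the usual textbook convention and is an assumption of the sketch rather than a detail from the text.

# A minimal sketch of Hamming(7,4): 4 data bits are protected by 3 parity
# bits, allowing the receiver to locate and correct any single-bit error.

def hamming74_encode(d):                          # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]                       # covers positions 1, 3, 5, 7
    p2 = d[0] ^ d[2] ^ d[3]                       # covers positions 2, 3, 6, 7
    p3 = d[1] ^ d[2] ^ d[3]                       # covers positions 4, 5, 6, 7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def hamming74_correct(c):
    # Each syndrome bit re-checks one parity equation; together they give
    # the (1-based) position of a single flipped bit, or 0 if none.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3
    if pos:
        c = c[:]
        c[pos - 1] ^= 1                           # flip the erroneous bit
    return [c[2], c[4], c[5], c[6]]               # recovered data bits

codeword = hamming74_encode([1, 0, 1, 1])
codeword[2] ^= 1                                  # simulate a single-bit channel error
print(hamming74_correct(codeword))                # -> [1, 0, 1, 1]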
As the length of the data sets and the distances over which they travel increase, and as the likelihood of errors in transmission increases accordingly, data compression becomes sensible. Additionally, data compression reduces the bandwidth required to transmit a set of data (bandwidth = $$$). Data compression techniques can address formatting information, redundant characters, commonly used characters, and commonly used strings of characters.
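As a simple illustration of compressing redundant characters, the following Python sketch applies run-length encoding, replacing each run of repeated characters with a (count, character) pair. The encoding format and sample string are illustrative choices, not techniques prescribed by the text.

# A minimal sketch of run-length encoding, one way of compressing runs of
# redundant characters. The (count, character) output format is an
# illustrative assumption.

from itertools import groupby

def run_length_encode(text: str):
    return [(len(list(run)), ch) for ch, run in groupby(text)]

def run_length_decode(pairs):
    return "".join(ch * count for count, ch in pairs)

encoded = run_length_encode("AAAABBBCCDDDDD")
print(encoded)                     # [(4, 'A'), (3, 'B'), (2, 'C'), (5, 'D')]
print(run_length_decode(encoded))  # original string restored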