
Fundamentals of Telecommunications. Roger L. Freeman

Copyright 1999 Roger L. Freeman

Published by John Wiley & Sons, Inc.

ISBNs: 0-471-29699-6 (Hardback); 0-471-22416-2 (Electronic)

10

DATA COMMUNICATIONS

10.1 CHAPTER OBJECTIVE

Data communications is the fastest growing technology in the telecommunications arena. In the PSTN it is expected to equal or surpass voice communications in the next ten years. The widespread availability of the PC not only spurred data communications forward, but it also added a completely new direction, distributed processing. No longer are we tied to the mainframe computer; it has taken on almost a secondary role in the major scheme of things. Another major impetus in this direction is, of course, the Internet.

The IEEE (Ref. 1) defines data communications (data transmission) as “The movement of encoded information by means of communication techniques.” The objective of this chapter is to introduce the reader to the technology of the movement of encoded information. Encoded information includes alpha-numeric data, which may broadly encompass messages that have direct meaning to the human user. It also includes the movement of strictly binary sequences that have meaning to a machine, but no direct meaning to a human being.

Data communications evolved from automatic telegraphy, which was so prevalent from the 1920s through the 1960s. We start the chapter with information coding, or how we can express our alphabet and numeric symbols electrically without ambiguity. Data network performance is then covered with a review of the familiar BER. We then move on to the organization of data for transmission and introduce protocols, including electrical and logical interfaces. Enterprise networks covering LAN and WAN technology, frame relay, and ISDN (Integrated Services Digital Networks) are treated in Chapters 11 and 12. The asynchronous transfer mode (ATM) is covered in Chapter 18. The principal objective of this chapter is to stress concepts, and to leave specific details to other texts.

10.2 THE BIT: A REVIEW

The bit is often called the most elemental unit of information. The IEEE (Ref. 1) defines it as a contraction of binary digit, a unit of information represented by either a 1 or a 0. These are the same bits that were introduced in Section 2.4.3 and later applied in Chapter 6, and to a lesser extent in Chapter 7. In Chapter 6, Digital Networks, the primary purpose of those bits was to signal to the distant end the voltage level of an analog channel at some moment in time. Here we will be assembling bit groupings that will represent letters of the alphabet, numerical digits 0 through 9, punctuation, graphic symbols, or just operational bit sequences that are necessary to make the data network operate, with little or no ostensible outward meaning to us.

Table 10.1 Equivalent Binary Designations: Summary of Equivalence

    Symbol 1                                    Symbol 0
    Mark or marking                             Space or spacing
    Current on                                  Current off
    Negative voltage                            Positive voltage
    Hole (in paper tape)                        No hole (in paper tape)
    Condition Z                                 Condition A
    Tone on (amplitude modulation)              Tone off
    Low frequency (frequency shift keying)      High frequency
    Inversion of phase                          No phase inversion (differential phase shift keying)
    Reference phase                             Opposite to reference phase

Source: Ref. 2.

From old-time telegraphy the terminology has migrated to data communications. A mark is a binary 1 and a space is a binary 0. A space or 0 is represented by a positive-going voltage, and a mark or 1 is represented by a negative-going voltage. (Now I am getting confused. When I was growing up in the industry, a 1 or mark was a positive-going voltage, and so forth.)

10.3 REMOVING AMBIGUITY: BINARY CONVENTION

To remove the ambiguity among the various ways we can express a 1 and a 0, CCITT in Rec. V.1 (Ref. 2) states clearly how a 1 and a 0 are to be represented. This is summarized in Table 10.1, with several additions from other sources. Table 10.1 defines the sense of transmission so that the mark and space, the 1 and 0, respectively, will not be inverted. Inversion can take place by just changing the voltage polarity; we call this reversing the sense. Data engineers often refer to such a table as a “table of mark–space convention.”

10.4 CODING

Written information must be coded before it can be transmitted over a data network. One bit carries very little information; there are only two possibilities, the 1 and the 0. This serves well for supervisory signaling, where a telephone line can be in only one of two states: it is either idle or busy. As a minimum we would like to transmit every letter of the alphabet and the 10 basic decimal digits, plus some control characters, such as a space and hard/soft return, and some punctuation.

Suppose we join two bits together for transmission. (To a certain extent this is a review of the argument presented in Section 6.2.3.) This generates four possible bit sequences:

00 01 10 11,

or four pieces of information, and each can be assigned a meaning such as 1, 2, 3, 4, or A, B, C, D. Suppose three bits are transmitted in sequence. Now there are eight possibilities:

000 001 010 011

100 101 110 111.

We could continue this argument to sequences of four bits, and it will turn out that there are now 16 different possibilities. It becomes evident that for a binary code, the number of distinct characters available is equal to two raised to a power equal to the number of elements (bits) per character. For instance, the last example was based on a four-element code giving 16 possibilities or information characters, that is, 2⁴ = 16.

The classic example is the ASCII code, which has seven information bits per character. Therefore the number of different characters available is 2⁷ = 128. The American Standard Code for Information Interchange (ASCII) is nearly universally used worldwide. Figure 10.1 illustrates ASCII. It will be noted in the figure that there are more than 30 special bit sequences such as SOH, NAK, EOT, and so on. These are/were used for data circuit control. For a full explanation of these symbols, refer to Ref. 3.
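As a brief illustration (a Python sketch written for this discussion; Python's ord() assigns the same values to codes 0 through 127 as ASCII does), the 7-bit patterns for a few characters can be listed directly:

    # 7-bit ASCII patterns for a few printable characters.
    for ch in ["A", "Z", "0", "9", "U"]:
        code = ord(ch)
        print(f"{ch}: decimal {code:3d}, 7-bit pattern {code:07b}")

    # A few of the special control sequences mentioned above:
    print(f"SOH = {0x01:07b}, EOT = {0x04:07b}, NAK = {0x15:07b}")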

Another, yet richer, code was developed by IBM: EBCDIC (extended binary coded decimal interchange code), which uses eight information bits per character. Therefore it has 2⁸ = 256 character possibilities. This code is illustrated in Figure 10.2. It should be noted that a number of the character positions are unassigned.

Figure 10.1 American Standard Code for Information Interchange (ASCII). (From MIL-STD-188C. Updated [26].)


Figure 10.2 The extended binary-coded decimal interchange code (EBCDIC).

10.5 ERRORS IN DATA TRANSMISSION

10.5.1 Introduction

In data transmission one of the most important design goals is to minimize error rate. Error rate may be defined as the ratio of the number of bits incorrectly received to the total number of bits transmitted or to a familiar number such as 1000, 1,000,000, etc. CCITT (Ref. 6) holds with a design objective of better than one error in one million (bits transmitted). This is expressed as 1 × 10⁻⁶. Many circuits in industrialized nations provide error performance one or two orders of magnitude better than this.
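As a worked example of this definition (the counts below are invented for the illustration), four errored bits out of five million transmitted give:

    bits_in_error = 4                 # invented figure for the example
    bits_transmitted = 5_000_000      # invented figure for the example

    ber = bits_in_error / bits_transmitted
    print(f"BER = {ber:.1e}")         # 8.0e-07, better than the 1 x 10^-6 objective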

One method for minimizing the error rate would be to provide a “perfect” transmission channel, one that will introduce no errors in the transmitted information at the output of the receiver. However, that perfect channel can never be achieved. Besides improvement of the channel transmission parameters themselves, error rate can be reduced by forms of a systematic redundancy. In old-time Morse code, words on a bad circuit were often sent twice; this is redundancy in its simplest form. Of course, it took twice as long to send a message; this is not very economical if the number of useful words per minute received is compared to channel occupancy.

This illustrates the trade-off between redundancy and channel efficiency. Redundancy can be increased such that the error rate could approach zero. Meanwhile, the information transfer across the channel would also approach zero. Thus unsystematic redundancy is wasteful and merely lowers the rate of useful communication. On the other hand, maximum efficiency could be obtained in a digital transmission system if all redundancy and other code elements, such as “start” and “stop” elements, parity bits, and other “overhead” bits, were removed from the transmitted bit stream. In other words, the channel would be 100% efficient if all bits transmitted were information bits. Obviously, there is a trade-off of cost and benefits somewhere between maximum efficiency on a data circuit and systematically added redundancy.

10.5.2 Nature of Errors

In binary transmission an error is a bit that is incorrectly received. For instance, suppose a 1 is transmitted in a particular bit location and at the receiver the bit in that same location is interpreted as a 0. Bit errors occur either as single random errors or as bursts of errors.

Random errors occur when the signal-to-noise ratio deteriorates. This assumes, of course, that the noise is thermal noise. In this case noise peaks, at certain moments in time, are of sufficient level to confuse the receiver’s decision as to whether a 1 or a 0 was sent.

Burst errors are commonly caused by fading on radio circuits. Impulse noise can also cause error bursts. Impulse noise can derive from lightning, car ignitions, electrical machinery, and certain electronic power supplies, to name a few sources.

10.5.3 Error Detection and Error Correction

Error detection just identifies that a bit (or bits) has been received in error. Error correction corrects errors at a far-end receiver. Both require a certain amount of redundancy to carry out the respective function. Redundancy, in this context, means those added bits or symbols that carry out no other function than as an aid in the error-detection or error-correction process.

One of the earliest methods of error detection was the parity check. With the 7-bit ASCII code, a bit was added for parity, making it an 8-bit code. This is character parity. It is also referred to as vertical redundancy checking (VRC).

We speak of even parity and odd parity. One system or the other may be used. Either system is based on the number of marks or 1s in a 7-bit character, and the eighth bit is appended accordingly, either a 0 or a 1. Let us assume even parity and we transmit the ASCII bit sequence 1010010. There are three 1s, an odd number. Thus a 1 is appended as the eighth bit to make it an even number.

Suppose we use odd parity and transmit the same character. There is an odd number of 1s (marks), so we append a 0 to leave the total number of 1s an odd number. With odd parity, try 1000111. If you added a 1 as the eighth bit, you’d be correct.
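The parity rule just described is easy to state in code. The following Python sketch (written for this discussion, not taken from any standard) returns the eighth bit for either convention:

    def parity_bit(seven_bits: str, even: bool = True) -> str:
        """Return the parity bit to append to a 7-bit character.

        even=True  -> total number of 1s (including the parity bit) is even
        even=False -> total number of 1s is odd
        """
        ones = seven_bits.count("1")
        if even:
            return "0" if ones % 2 == 0 else "1"
        return "1" if ones % 2 == 0 else "0"

    # The worked examples from the text:
    print(parity_bit("1010010", even=True))   # three 1s -> append a 1 (even parity)
    print(parity_bit("1010010", even=False))  # three 1s -> append a 0 (odd parity)
    print(parity_bit("1000111", even=False))  # four 1s  -> append a 1 (odd parity)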

Character parity has the weakness that a lot of errors can go undetected. Suppose two bits are changed, in various combinations and locations: a 10 became a 01; a 0 became a 1 while a 1 became a 0; or two 1s became two 0s. All would get by the system undetected.

To strengthen this type of parity checking, the longitudinal redundancy check (LRC) was included as well as the VRC. This is a summing of the 1s in a vertical column of all characters, including the 1s or 0s in each eighth bit location. The sum is now appended at the end of a message frame or packet. Earlier this bit sequence representing the sum was called the block check count (BCC). Today it may consist of two or four 8-bit sequences and we call it the FCS (frame check sequence), or sometimes the CRC (cyclic redundancy check). At the distant-end receiver, the same addition is carried out and if the sum agrees with the value received, the block is accepted as error-free. If not, it then contains at least one bit error, and a request is sent to the transmit end to retransmit the block (or frame).
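A minimal sketch of the combined VRC/LRC idea follows (in Python, with invented example characters); real links differ in details such as bit order and which characters the block check covers, so this is illustrative only:

    def vrc_lrc_block(chars, even=True):
        """Append a VRC (character parity) bit to each 7-bit character, then
        compute the LRC: a column-by-column parity over the whole block,
        including the eighth (VRC) bit positions."""
        def parity(bits):
            ones = bits.count("1")
            return "0" if (ones % 2 == 0) == even else "1"

        rows = [c + parity(c) for c in chars]            # VRC appended per character
        lrc = "".join(parity("".join(col)) for col in zip(*rows))
        return rows, lrc                                 # lrc plays the role of the BCC

    rows, bcc = vrc_lrc_block(["1010010", "1000111", "0111000"])
    print(rows, bcc)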

Even with the addition of LRC, errors can slip through. In fact, no error-detection system is completely foolproof. There is another method, though, that has superior error detection properties. This is the CRC. It comes in a number of varieties.

10.5.3.1 Cyclic Redundancy Check (CRC). In very simple terms the CRC error detection technique works as follows: A data block or frame is placed in memory. We can call the frame a k-bit sequence, and it can be represented by a polynomial called G(x). Various modulo-2 arithmetic operations are carried out on G(x) and the result is divided by a known generator polynomial called P(x). (Modulo-2 arithmetic is the same as binary arithmetic but without carries or borrows.) This results in a quotient Q(x) and a remainder R(x). The remainder is appended to the frame as an FCS, and the total frame with FCS is transmitted to the distant-end receiver, where the frame is stored, then divided by the same generating polynomial P(x). The calculated remainder is compared to the received remainder (i.e., the FCS). If the values are the same, the frame is error-free. If they are not, there is at least one bit in error in the frame.

For many WAN applications the FCS is 16 bits long; on LANs it is often 32 bits long. Generally speaking, the greater the number of bits, the more powerful the CRC is for catching errors.

The following are two common generating polynomials:

1. ANSI CRC-16: X¹⁶ + X¹⁵ + X² + 1
2. CRC-CCITT: X¹⁶ + X¹² + X⁵ + 1

both producing a 16-bit FCS.

CRC-16 provides error detection of error bursts up to 16 bits in length. Additionally, 99.955% of error bursts greater than 16 bits can be detected (Ref. 4).
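The polynomial division can be carried out bit-serially with shifts and exclusive-ORs. The Python sketch below uses the CRC-CCITT generator; the initial register value, final XOR, and bit ordering are implementation choices that vary between protocols, so this is an illustration rather than a drop-in FCS generator for any particular link:

    def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0x0000) -> int:
        """Bit-serial CRC using the CRC-CCITT generator x^16 + x^12 + x^5 + 1.
        The initial value 0x0000 is used here purely for illustration."""
        crc = init
        for byte in data:
            crc ^= byte << 8                      # bring the next 8 message bits in
            for _ in range(8):
                if crc & 0x8000:                  # high bit set -> subtract (XOR) the polynomial
                    crc = ((crc << 1) ^ poly) & 0xFFFF
                else:
                    crc = (crc << 1) & 0xFFFF
        return crc                                # 16-bit remainder = the FCS

    frame = b"DATA COMMUNICATIONS"                # invented frame contents
    fcs = crc16_ccitt(frame)
    print(f"FCS = 0x{fcs:04X}")

    # Receiver side: recompute over the received frame and compare with the received FCS.
    assert crc16_ccitt(frame) == fcs              # equal remainders -> frame accepted as error-free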

10.5.3.2 Forward-Acting Error Correction (FEC). Forward-acting error correction (FEC) uses certain binary codes that are designed to be self-correcting for errors introduced by the intervening transmission media. In this form of error correction the receiving station has the ability to reconstitute messages containing errors.

The codes used in FEC can be divided into two broad classes: (1) block codes and (2) convolutional codes. In block codes information bits are taken k at a time, and c parity bits are added, checking combinations of the k information bits. A block consists of n = k + c digits. When used for the transmission of data, block codes may be systematic. A systematic code is one in which the information bits occupy the first k positions in a block and are followed by the (n − k) check digits.
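As a concrete instance of a systematic block code, consider the classic Hamming (7,4) code, with k = 4 information bits and c = 3 check digits. The text describes block codes only in general terms; the choice of this particular code is ours, for illustration:

    # Systematic (7,4) block code sketch: k = 4 information bits followed by
    # c = 3 modulo-2 check digits, n = k + c = 7 (the classic Hamming code).
    def encode_7_4(d):
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4          # each check digit covers a subset of the information bits
        p2 = d1 ^ d3 ^ d4
        p3 = d2 ^ d3 ^ d4
        return [d1, d2, d3, d4, p1, p2, p3]

    print(encode_7_4([1, 0, 1, 1]))   # 4 information bits in, 7-digit block out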

A convolution(al) code is another form of coding used for error correction. As the word “convolution” implies, this is one code wrapped around or convoluted on another. It is the convolution of an input-data stream and the response function of an encoder. The encoder is usually made up of shift registers. Modulo-2 adders are used to form check digits, each of which is a binary function of a particular subset of the information digits in the shift register.
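A minimal sketch of a rate-1/2 convolutional encoder follows: a 3-stage shift register feeding two modulo-2 adders. The particular tap connections (generators 111 and 101) are a common textbook choice, not one specified here:

    def conv_encode(bits, g1=0b111, g2=0b101):
        """Rate-1/2 convolutional encoder sketch: a shift register plus two
        modulo-2 adders (tap patterns g1 and g2), constraint length 3."""
        state = 0
        out = []
        for b in bits:
            state = ((state << 1) | b) & 0b111            # shift the new bit into the register
            for g in (g1, g2):                            # each adder taps a subset of the register
                out.append(bin(state & g).count("1") % 2) # modulo-2 sum of the tapped bits
        return out

    print(conv_encode([1, 0, 1, 1]))   # two output (check) digits per information bit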



10.5.3.3 Error Correction with a Feedback Channel. Two-way or feedback error correction is used widely today on data circuits. Such a form of error correction is called ARQ. The letter sequence ARQ derives from the old Morse and telegraph signal, “automatic repeat request.” There are three varieties of ARQ:

1. Stop-and-wait ARQ;

2. Selective or continuous ARQ; and

3. Go-back-n ARQ.

Stop-and-wait ARQ is simple to implement and may be the most economic in the short run. It works on a frame-by-frame basis. A frame is generated; it goes through CRC processing and an FCS is appended. It is transmitted to the distant end, where the frame runs through CRC processing. If no errors are found, an acknowledgment signal (ACK) is sent to the transmitter, which now proceeds to send the next frame, and so forth. If a bit error is found, a negative acknowledgment (NACK) is sent to the transmitter, which then proceeds to repeat that frame. The drawback is the waiting time of the transmitter as it idles for either the acknowledgment or the negative-acknowledgment signal; many point to this wait time as wasted time. It could be costly on high-speed circuits. However, the control software is simple and the storage requirements are minimal (i.e., only one frame).
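The stop-and-wait logic can be sketched in a few lines of Python; the toy channel, its error rate, and the frame names are invented for the illustration:

    import random

    def crc_ok(frame):            # stand-in for the real CRC/FCS check
        return "corrupted" not in frame

    def channel(frame, error_rate=0.3):
        # Toy channel: occasionally corrupts a frame (rate invented for the example).
        return frame + " corrupted" if random.random() < error_rate else frame

    def stop_and_wait(frames):
        delivered = []
        for frame in frames:                   # only one frame outstanding at a time
            while True:
                received = channel(frame)
                if crc_ok(received):           # receiver runs CRC and returns an ACK
                    delivered.append(received)
                    break                      # transmitter may now send the next frame
                # otherwise a NACK comes back and the same frame is repeated
        return delivered

    print(stop_and_wait([f"frame-{i}" for i in range(5)]))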

Selective ARQ, sometimes called continuous ARQ, eliminates the waiting. The transmit side pours out a continuous stream of contiguous frames. The receive side stores and CRC-processes as before, but it is processing a continuous stream of frames. When a frame is found in error, the receive side informs the transmit side on the return channel. The transmit side then picks that frame out of storage and places it in the transmission queue. Several points become obvious to the reader. First, there must be some way to identify frames. Second, there must be a better way to acknowledge or “negative-acknowledge.” The two problems are combined and solved by the use of send sequence numbers and receive sequence numbers. The header of a frame has bit positions for a send sequence number and a receive sequence number. The send sequence number is inserted by the transmit side, whereas the receive sequence number is inserted by the receive side. The receive sequence numbers forwarded back to the transmit side are the send sequence numbers of the frames acknowledged by the receive side. Of course, the receive side has to insert the corrected frame in its proper sequence before passing the data message to the end user.

Continuous or selective ARQ is more costly in the short run, compared with stop-and-wait ARQ. It requires more complex software and notably more storage on both sides of the link. However, there are no gaps in transmission and no time is wasted waiting for the ACK or NACK.

Go-back-n ARQ is a compromise. In this case, the receiver does not have to insert the corrected frame in its proper sequence, thus less storage is required. It works this way: When a frame is received in error, the receiver informs the transmitter to “go-back-n,” n being the number of frames back to where the errored frame was. The transmitter then repeats all n frames, from the errored frame forward. Meanwhile, the receiver has thrown out all frames from the errored frame forward. It replaces this group with the new set of n frames it received, all in proper order.
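A sketch of the go-back-n rule follows; the window size and the frames assumed to arrive in error on their first attempt are invented for the example, and retransmissions are assumed to succeed:

    def go_back_n(frames, first_try_errors, window=4):
        """Go-back-n sketch: on an errored frame the receiver discards that frame
        and everything after it, and the transmitter backs up and resends from
        the errored frame onward."""
        accepted = []
        base = 0                                            # oldest unacknowledged frame
        while base < len(frames):
            in_flight = frames[base:base + window]          # transmitter sends a window of frames
            for offset, frame in enumerate(in_flight):
                n = base + offset
                if n in first_try_errors:
                    first_try_errors.discard(n)
                    base = n                                # go back n: resend from the errored frame
                    break                                   # receiver has thrown away n and beyond
                accepted.append(frame)                      # accepted in order, no resequencing needed
            else:
                base += len(in_flight)                      # whole window acknowledged
        return accepted

    print(go_back_n([f"f{i}" for i in range(6)], {2, 4}))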


Figure 10.3 Simplified diagram illustrating a dc loop with (a) neutral keying and (b) polar keying.

10.6 dc NATURE OF DATA TRANSMISSION

10.6.1 dc Loops

Binary data are transmitted on a dc loop. More correctly, the binary data end instrument delivers to the line and receives from the line one or several dc loops. In its most basic form a dc loop consists of a switch, a dc voltage, and a termination. A pair of wires interconnects the switch and termination. The voltage source in data work is called the battery, although the device is usually electronic, deriving the dc voltage from an ac power line source. The battery is placed in the line to provide voltage(s) consistent with the type of transmission desired. A simplified drawing of a dc loop is shown in Figure 10.3a.

10.6.2 Neutral and Polar dc Transmission Systems

Older telegraph and data systems operated in the neutral mode. Nearly all present data transmission systems operate in some form of polar mode. The words “neutral” and “polar” describe the manner in which battery is applied to the dc loop. On a “neutral” loop, following the convention of Table 10.1, battery is applied during spacing (0) conditions and is switched off during marking (1). Current therefore flows in the loop when a space is sent and the loop is closed. Marking is indicated on the loop by a condition of no current. Thus we have two conditions for binary transmission, an open loop (no current flowing) and a closed loop (current flowing). Keep in mind that we could reverse this, namely, change the convention and assign marking to a condition of current flowing or closed loop, and spacing to a condition of no current or an open loop. (In fact, this was the older convention, prior to about 1960.) As mentioned, this is called “changing the sense.” Either way, a neutral loop is a dc loop circuit where one binary condition is represented by the presence of voltage and the flow of current, and the other condition is represented by the absence of voltage and current. Figure 10.3a illustrates a neutral loop.

Figure 10.4 Neutral and polar waveforms.

Polar transmission approaches the problem differently. Two battery sources are provided, one “negative” and the other “positive.” Following the convention in Table 10.1, during a condition of spacing (binary 0), a positive battery (i.e., a positive voltage) is applied to the loop, and a negative battery is applied during the marking (binary 1) condition. In a polar loop current is always flowing. For a mark or binary “1” it flows in one direction, and for a space or binary “0” it flows in the opposite direction. Figure 10.3b shows a simplified polar loop. Notice that the switch used to select the voltage is called a keying device. Figure 10.4 illustrates the two electrical waveforms.
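The two keying conventions can be compared by mapping a short bit sequence onto nominal loop voltages. The ±6-V level in this Python sketch is an arbitrary placeholder, and the polarity follows the Table 10.1 convention (space positive, mark negative):

    def neutral(bits, v=6.0):
        # Neutral keying: battery applied only during the spacing (0) condition,
        # no voltage or current during a mark (1).
        return [v if b == 0 else 0.0 for b in bits]

    def polar(bits, v=6.0):
        # Polar keying: current always flows, with opposite polarities for space and mark.
        return [+v if b == 0 else -v for b in bits]

    bits = [1, 0, 1, 1, 0, 0, 1]
    print("neutral:", neutral(bits))
    print("polar:  ", polar(bits))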

10.7 BINARY TRANSMISSION AND THE CONCEPT OF TIME

10.7.1 Introduction

As emphasized in Chapter 6, time and timing are most important factors in digital transmission. For this discussion consider a binary end instrument (e.g., a PC) sending out in series a continuous run of marks and spaces. Those readers who have some familiarity with the Morse code will recall that the spaces between dots and dashes told the operator where letters ended and where words ended. The sending device or transmitter delivers a continuous series of characters to the line, each consisting of five, six, seven, eight, or nine elements (bits) per character. A receiving device starts its print cycle when the transmitter starts sending and, if perfectly in step with the transmitter, can be expected to provide good printed copy and few, if any, errors at the receiving end.

It is obvious that when signals are generated by one machine and received by another, the speed of the receiving machine must be the same or very close to that of the transmitting machine. When the receiver is a motor-driven device, timing stability and accuracy are dependent on the accuracy and stability of the speed of rotation of the motors used. Most simple data-telegraph receivers sample at the presumed center of the signal element. It follows, therefore, that whenever a receiving device accumulates timing error of more than 50% of the period of one bit, it will print in error.

The need for some sort of synchronization is illustrated in Figure 10.5. A five-unit code is employed (i.e., a 5-bit code; in this text the unit and the bit are synonymous, while a code element, which carries out a function, may be one or more bits in duration), and the figure shows three characters transmitted sequentially. The vertical arrows are receiver sampling points, which are points in time. Receiver timing begins when the first pulse is received. If there is a 5% timing difference between the transmitter and receiver, the first sampling at the receiver will be 5% away from the center of the transmitted pulse. At the end of the tenth pulse or signal element, the receiver may sample in error. Here we mean that timing error accumulates at 5% per received signal element, and when there is a 50% accumulated error, the sampling will now be done at an incorrect bit position. The eleventh signal element will indeed be sampled in error, and all subsequent elements will be errors. If the timing error between transmitting machine and receiving machine is 2%, the cumulative error in timing would cause the receiving device to receive all characters in error after the 25th element (bit).

Figure 10.5 Five-unit synchronous bit stream with timing error.
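The arithmetic behind these figures can be checked directly. Following the counting convention used above, errors begin once the accumulated offset exceeds half a unit interval:

    import math

    def first_errored_element(timing_error_fraction):
        """Return the first signal element sampled in error when the receiver's
        clock differs from the transmitter's by the given fraction per bit and
        sampling is at the presumed bit center."""
        return math.floor(0.5 / timing_error_fraction) + 1

    print(first_errored_element(0.05))  # 5% per bit -> 11th element, as in the text
    print(first_errored_element(0.02))  # 2% per bit -> 26th element (errors after the 25th)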

10.7.2 Asynchronous and Synchronous Transmission

In the earlier days of printing telegraphy, “start–stop” transmission, or asynchronous operation, was developed to overcome the problem of synchronism. Here timing starts at the beginning of a character and stops at the end. Two signal elements are added to each character to signal the receiving device that a character has begun and ended.

For example, consider the seven-element ASCII code (see Figure 10.1) configured for start–stop operation with a stop element that is 2 bits in duration. This is illustrated in Figure 10.6. In front of a character an element called a start space is inserted, and a stop mark is inserted at the end of a character. In the figure the first character is the ASCII letter uppercase U (1010101). Here the receiving device knows (a priori) that it starts its timing 1 element (in this case a bit) after the mark-to-space transition; it counts out 8 unit intervals (bits) and looks for the stop mark to end its counting. This is a transition from space to mark. The stop mark, in this case, is two unit intervals long. It is followed by the mark-to-space transition of the next start space, whence it starts counting unit intervals up to 8. So as not to get confused, the first seven information bits are the ASCII bits, and the eighth bit is a parity bit. Even parity is the convention here.
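Assembling such a frame is straightforward. In the Python sketch below the least-significant ASCII bit is assumed to be sent first, an assumption made only for the illustration (consult Figure 10.6 for the exact line convention):

    def start_stop_frame(ch: str) -> str:
        """Assemble an asynchronous start-stop frame for one ASCII character:
        start space (0), 7 information bits, even-parity bit, 2-unit stop mark (11)."""
        code = ord(ch) & 0x7F
        info = format(code, "07b")[::-1]          # 7 ASCII bits, least-significant bit first (assumed)
        parity = str(info.count("1") % 2)         # even parity: total count of 1s made even
        return "0" + info + parity + "11"         # start space + data + parity + 2-unit stop mark

    frame = start_stop_frame("U")                 # 'U' = 1010101
    print(frame, len(frame), "unit intervals")    # 11 unit intervals in all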

Figure 10.6 An 8-unit start-stop bit stream with a 2-unit stop element.

