… it would require the fragmentation of packets into blocks proportional in size (in bits) to the weight applied to the queue. This would actually not work because it would not be possible to reassemble the packets at the far end of the link or at the destination host.

9.2.4.5 Weighted Deficit Round Robin (WDRR)

Weighted deficit round robin is somewhat similar to WRR and WFQ. However, it selects the queue to be serviced in a slightly different way from pure WFQ. If we go back to Figure 9.4, we can see that in the starting position, the gold queue has a 1 unit packet and a 4 unit packet. With WFQ, the gold queue can only transmit one packet because it does not have sufficient ‘credit’ to transmit its second packet. With WDRR, the second packet is transmitted. However, at the next round, the gold queue has a deficit of 1 before its weight of 4 is applied, because it transmitted one more unit than it was due in the first round. So on the second iteration the gold queue is only given 3 units rather than its nominal weight of 4 units.
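
To make the deficit mechanism concrete, the following Python sketch replays the gold queue example: a queue may complete one packet beyond its quantum, and the resulting negative deficit reduces its quantum on the following round. It is an illustration of the behaviour described above, not a reference implementation; the weights and the contents of the silver queue are made up for the demonstration.

from collections import deque

def wdrr_schedule(queues, weights, rounds):
    # Sketch of the behaviour described above: a queue may complete one packet
    # beyond its quantum, and the overrun is repaid as a reduced quantum on the
    # following round. Illustrative only.
    deficit = {name: 0 for name in queues}
    order = []                                  # (round, queue, packet size)
    for rnd in range(1, rounds + 1):
        for name, q in queues.items():
            deficit[name] += weights[name]      # grant this round's quantum
            while q and deficit[name] > 0:      # eligible while credit remains
                pkt = q.popleft()
                deficit[name] -= pkt            # may go negative; the deficit
                order.append((rnd, name, pkt))  # is carried to the next round
    return order

# The gold queue starts with a 1 unit and a 4 unit packet, as in the Figure 9.4
# discussion; it sends both in round 1 and is only granted 3 units in round 2.
queues = {"gold": deque([1, 4, 3]), "silver": deque([2, 2, 2])}
print(wdrr_schedule(queues, weights={"gold": 4, "silver": 2}, rounds=2))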

WDRR has one major benefit over WFQ in that it is easier to implement and, therefore, is easier to apply to very high speed interfaces. On the downside, WDRR cannot protect effectively against a badly behaved flow within a highly aggregated environment. Also, it is not able to provide quite such accurate control over latency and jitter of packets in the queues as can be provided by priority queueing, for example.

9.2.4.6 Priority Queueing

It is sometimes desirable to service one class of traffic to the possible detriment of all other traffic. For example, any traffic requiring low-latency, low-jitter service (e.g. voice or video over IP) might be placed in such a queue. In order to achieve this, one queue can be created into which the high-priority traffic is placed. If this priority queue contains any packets, it is always serviced next. One possible downside of this mechanism is that it is clearly possible to starve all the other queues of service. In order to mitigate this risk, some vendors have provided a configuration mechanism to place an upper limit on the bandwidth used by the priority queue.
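
The following sketch illustrates one way such a scheme could work. The token-bucket cap, the queue names and the round-robin fallback are illustrative assumptions rather than any particular vendor's implementation.

from collections import deque
from itertools import cycle

class PriorityScheduler:
    # Strict priority with an upper limit on the priority queue, enforced here
    # with a simple token bucket; queue layout and rates are assumptions.

    def __init__(self, cap_per_tick, bucket_depth):
        self.priority = deque()                   # e.g. voice or video over IP
        self.others = [deque(), deque()]          # e.g. 'silver' and 'bronze'
        self.rr = cycle(range(len(self.others)))  # round robin over the others
        self.cap = cap_per_tick                   # tokens added per decision
        self.depth = bucket_depth
        self.tokens = bucket_depth

    def next_packet(self):
        self.tokens = min(self.depth, self.tokens + self.cap)
        # The priority queue is always serviced next, unless it has exhausted
        # its configured share, which prevents it starving the other queues.
        if self.priority and self.tokens >= self.priority[0]:
            size = self.priority.popleft()
            self.tokens -= size
            return ("priority", size)
        for _ in range(len(self.others)):
            q = self.others[next(self.rr)]
            if q:
                return ("other", q.popleft())
        return None

# With a tight cap, priority packets are interleaved with best-effort packets
# instead of monopolising the link.
sched = PriorityScheduler(cap_per_tick=1, bucket_depth=2)
sched.priority.extend([2, 2, 2])
sched.others[0].extend([1, 1, 1])
print([sched.next_packet() for _ in range(6)])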

As with most things, no one solution is ideal for all situations. Therefore, it is often necessary to implement a combination of the above queueing techniques to provide the broad range of services required by your customers.

9.3 QoS MARKING MECHANISMS

QoS marking mechanisms are used to provide a consistent, arbitrary marker on each packet that can be used by each device in the path (Layer 2 or Layer 3) to identify the class to which that packet belongs and, therefore, to put it into the relevant queue. This is the means whereby consistent behaviour can be achieved throughout your network.

9.3.1 LAYER 2 MARKING

9.3.1.1 802.1p

Ethernet has 3 bits in the MAC layer header (carried in the 802.1Q VLAN tag), which are allocated for the specification of eight levels of QoS. Their use is defined in the IEEE standard 802.1p. This allows an Ethernet switch to prioritize traffic without having to look into the Layer 3 headers (see below). The details of the mechanism used to implement the required behaviour associated with a specific 3-bit pattern are hardware dependent. However, as with all QoS functions, the treatment of outbound packets is based on queueing.

The queueing mechanisms above can all be applied to the packets transiting Ethernet switches. The 802.1p bits are simply used to classify packets and put them into the relevant queue.
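
As an illustration of this classification step, the sketch below reads the 3-bit priority code point from the 802.1Q tag of an Ethernet frame and maps it to a queue. The mapping from priority values to queues is an arbitrary example, not a standard assignment.

def dot1p_queue(frame: bytes, pcp_to_queue=None):
    # Classify an 802.1Q-tagged Ethernet frame on its 3-bit priority code
    # point (the 802.1p bits). The PCP-to-queue mapping is an arbitrary example.
    if len(frame) < 16 or frame[12:14] != b"\x81\x00":   # 0x8100 = 802.1Q TPID
        return 0                                         # untagged: default queue
    tci = int.from_bytes(frame[14:16], "big")            # tag control information
    pcp = tci >> 13                                      # top 3 bits of the TCI
    mapping = pcp_to_queue or {0: 0, 1: 0, 2: 1, 3: 1, 4: 2, 5: 2, 6: 3, 7: 3}
    return mapping[pcp]

# A tagged frame carrying PCP 5 (often used for voice) lands in queue 2 here.
header = bytes(12) + b"\x81\x00" + ((5 << 13) | 100).to_bytes(2, "big")
print(dot1p_queue(header))   # -> 2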

9.3.1.2 ATM

ATM has highly developed QoS functionality, which has been integrated into the protocol from the outset. ATM provides for a number of qualities of service, which are signalled during the set-up of each VC and strictly enforced by switches in the path. The qualities of service available are described as follows:

Class A—Constant Bit Rate (CBR) is a stream of data at a constant rate, typically of a nature which is intolerant of delay, jitter and cell loss. CBR is defined in the form of a sustained cell rate (SCR) and a peak cell rate (PCR). The SCR is a guaranteed capacity and the PCR is a burst capacity, which can be used should the bandwidth be available. An example application of this class is TDM traffic being carried over ATM.

Class B—Variable Bit Rate—Real Time (VBR-RT) is a stream of data which is bursty in nature but intolerant of jitter and cell loss, and especially intolerant of delay. An example application of this is video conferencing.

Class C—Variable Bit Rate—Non-Real Time (VBR-NRT) is a stream of data which is bursty in nature but more tolerant of delay. An example application of this is video replay (e.g. training on demand).

Class D—Available Bit Rate (ABR) and Unspecified Bit Rate (UBR). These are both ‘best effort’ classes. UBR is subject to cell loss and the discarding of complete packets. ABR provides a minimum cell rate and low cell loss but no other guarantees. Neither ABR nor UBR provides any latency or jitter guarantees. An example application of these classes is lower-priority data traffic (e.g. best effort Internet traffic).

9.3.1.3 Frame Relay

In common with ATM, Frame Relay has highly developed QoS functionality. Frame Relay also uses the concept of permanent virtual circuits (PVC) and switched virtual circuits (SVC).

Frame Relay uses the concept of a Committed Information Rate (CIR), which is analogous to the SCR in ATM, and a burst rate, which is analogous to the PCR in ATM. Together these provide a guaranteed bandwidth and a maximum rate up to which traffic can burst for brief periods.
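
The sketch below illustrates the kind of per-interval decision this implies, drawing on the standard Frame Relay notions of a committed burst (Bc), an excess burst (Be) and the discard eligible (DE) bit, none of which are spelled out above. It is an illustration, not any particular switch's algorithm.

def fr_policer(frame_bits, sent_committed, sent_excess, bc, be):
    # Per-interval (Tc) decision for one frame: within the committed burst Bc
    # the frame is forwarded; within the excess burst Be it is forwarded with
    # the DE (discard eligible) bit set; beyond that it is dropped. The
    # counters reset every Tc = Bc / CIR seconds. Illustrative sketch only.
    if sent_committed + frame_bits <= bc:
        return "forward", sent_committed + frame_bits, sent_excess
    if sent_excess + frame_bits <= be:
        return "forward, set DE", sent_committed, sent_excess + frame_bits
    return "drop", sent_committed, sent_excess

# Example: a CIR of 64 kbit/s measured over Tc = 1 s gives Bc = 64 000 bits;
# assume an excess burst Be of 32 000 bits. An 8000-bit frame arriving after
# 60 000 committed bits have already been sent is forwarded with DE set.
print(fr_policer(8000, sent_committed=60000, sent_excess=0, bc=64000, be=32000))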

9.3.2 LAYER 3 QoS

IP provides one byte in the header of every packet that can be used to identify the Type of Service (ToS) of that packet. The ToS byte is divided up as shown in Figure 9.5.

Figure 9.5 The IP Type of Service Byte (fields: IP precedence, Delay, Throughput, Reliability and Explicit Congestion Notification)

9.3.2.1 IP Precedence

The first three bits of the ToS byte (bits 0–2) are used to indicate IP precedence. This provides the means to identify eight classes of service. Bit 3 is used to represent normal/low delay, bit 4 to represent normal/high throughput and bit 5 to represent normal/high reliability. Bits 6 and 7 are unused and should be set to zero. Very few applications actually make use of bits 3 to 5 for the purposes for which they were reserved.

The IP precedence bits are possibly the most widely used Layer 3 QoS marking mechanism.
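
Since the precedence value occupies the first three bits of the byte, extracting it is a simple shift. The sketch below assumes the bit numbering used in the text, in which bit 0 is the most significant bit of the byte.

def ip_precedence(tos_byte: int) -> int:
    # Bits 0-2 in this numbering are the three most significant bits of the
    # byte, so a right shift by five recovers the precedence value 0-7.
    return (tos_byte >> 5) & 0b111

print(ip_precedence(0b10100000))   # -> 5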

9.3.2.2 Differentiated Services (Diffserv)

Since so few applications make use of bits 3 through 5 of the IP ToS byte, the Differentiated Services working group suggested using those three bits in addition to the three precedence bits (0–2) to identify up to 64 classes of service, known as Diffserv Code Points (DSCP) (see Figure 9.6).

Figure 9.6 The DiffServ Code Point (fields: the six-bit DiffServ Code Point and the two-bit Explicit Congestion Notification)
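
Extracting the DSCP is equally simple: it is the six most significant bits of the former ToS byte, with the remaining two bits left for Explicit Congestion Notification. The sketch below also shows the backward-compatible ‘class selector’ relationship between precedence and DSCP; the specific code point values quoted (CS5 = 40, EF = 46) are standard Diffserv values rather than something defined in the text above.

def dscp(tos_byte: int) -> int:
    # The six most significant bits of the former ToS byte form the DSCP; the
    # two least significant bits are left for Explicit Congestion Notification.
    return (tos_byte >> 2) & 0b111111

# The 'class selector' code points place the old precedence value in the top
# three bits of the DSCP, so CS5 = 5 << 3 = 40, while the Expedited Forwarding
# (EF) code point is 46.
print(dscp(0b10111000))   # -> 46 (EF)
print(5 << 3)             # -> 40 (CS5)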