the responder will also record the IP address and MAC address of the requester in its own mapping table. This mechanism raises yet another problem. Since ARP requests are broadcast, they are received by every node on the segment, irrespective of whether the recipient holds the requested IP address. As the number of nodes on the network grows, so does the number of ARP messages and, with them, the number of unnecessary packets arriving at each node. In large Ethernet networks with a very flat hierarchy, this can easily become a significant burden. It is generally held that single segments of more than one thousand nodes are vulnerable to disruption due to the sheer number of ARPs sent during normal operation. With sufficient hosts, ARP traffic can grow to the point where it consumes a significant proportion of the available bandwidth in ‘ARP storms’.
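The behaviour described above can be captured in a minimal sketch. It is purely illustrative: a real ARP implementation also ages cache entries, handles gratuitous ARP and queues packets awaiting resolution, and the classes below are inventions for the example.

```python
# Minimal model of ARP resolution with caching on a single broadcast segment.
# Illustrative only: no cache ageing, no gratuitous ARP, no pending-packet queue.

class Host:
    def __init__(self, ip, mac):
        self.ip = ip
        self.mac = mac
        self.arp_cache = {}        # ip -> mac mappings learned so far
        self.broadcasts_seen = 0   # ARP requests this host had to examine

    def resolve(self, target_ip, segment):
        """Return the MAC for target_ip, broadcasting an ARP request if needed."""
        if target_ip in self.arp_cache:
            return self.arp_cache[target_ip]          # answered from cache
        for host in segment:                          # broadcast: every node receives it
            if host is self:
                continue
            host.broadcasts_seen += 1
            if host.ip == target_ip:
                host.arp_cache[self.ip] = self.mac    # responder caches the requester too
                self.arp_cache[target_ip] = host.mac
        return self.arp_cache.get(target_ip)


segment = [Host(f"10.0.0.{i}", f"00:00:00:00:00:{i:02x}") for i in range(1, 101)]
a, b = segment[0], segment[1]
a.resolve(b.ip, segment)                        # one request, 99 hosts interrupted
print(sum(h.broadcasts_seen for h in segment))  # 99
print(b.arp_cache[a.ip])                        # responder learned the requester's MAC
print(a.resolve(b.ip, segment))                 # answered from cache, no further broadcast
```

Scaling the segment up makes the point of the text obvious: every additional host both generates more requests and has to examine everyone else's.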

2.2.2 MTU

Ethernet was initially constrained to a maximum MTU of 1500 bytes, but there are now implementations with maximum MTUs in excess of 9000 bytes, known as jumbo frames. Jumbo frames bring both benefits and disadvantages. The advantages are fairly obvious. If the large MTU is available right through the core, then your network devices do not have to perform any fragmentation. Within large networks that carry large numbers of route prefixes in the core routing protocols, this can also significantly improve the efficiency of the routing protocol exchanges, since more information can be packed into each protocol message.

The main disadvantage of jumbo frames is fragmentation, which occurs if the MTU along the path is not consistently large and path MTU discovery is, for whatever reason, unsuccessful.
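As a back-of-the-envelope illustration of what fragmentation costs, the sketch below counts the IPv4 fragments generated when a jumbo-sized datagram has to cross a standard Ethernet hop. It assumes a plain 20-byte IPv4 header with no options; in practice a router may instead drop the packet and return an ICMP ‘fragmentation needed’ message if the DF bit is set.

```python
import math

IPV4_HEADER = 20   # bytes, assuming no IP options

def ipv4_fragments(datagram_len, link_mtu):
    """Fragment count and total bytes on the wire when an IPv4 datagram
    is fragmented to fit a smaller link MTU."""
    payload = datagram_len - IPV4_HEADER
    max_frag_payload = (link_mtu - IPV4_HEADER) // 8 * 8   # offsets are in 8-byte units
    frags = math.ceil(payload / max_frag_payload)
    return frags, payload + frags * IPV4_HEADER

# A 9000-byte datagram crossing a standard 1500-byte Ethernet hop:
print(ipv4_fragments(9000, 1500))   # (7, 9120): 7 packets instead of 1
```

The saving from jumbo frames is the mirror image: the same data carried end to end at a 9000-byte MTU needs one header and one forwarding decision per hop where a 1500-byte MTU needs seven.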

2.3 ASYNCHRONOUS TRANSFER MODE (ATM)

ATM is a versatile transport protocol, which has been used extensively by service providers as a flexible high-speed transport with excellent traffic engineering and QoS functionality. ATM is not like the other media described here, because it is itself carried over another transport medium, e.g. SONET, E3, etc. In this respect, ATM is more like PPP or HDLC (i.e. a Layer 2 protocol) than the other media, which operate at Layer 1. However, as with the other transport media, ATM requires IP packets to be encapsulated in a sub-protocol. In the case of ATM, this encapsulation layer is the ATM Adaptation Layer 5 (AAL5). This, along with the fixed cell size and the associated padding of incompletely filled cells, can make ATM exceptionally inefficient. For example, a 64-byte IP packet plus the 8-byte AAL5 trailer occupies two 48-byte cell payloads, the second of which contains 24 bytes of padding. Add the two 5-byte ATM cell headers and you have 42 bytes of overhead to carry a 64-byte packet, roughly 40% of the 106 bytes transmitted. An extreme case is a 49-byte IP packet, which spills just one byte into a second cell and results in 57 bytes of overhead (39 bytes of padding, the 8-byte AAL5 trailer and two 5-byte cell headers) for the carriage of 49 bytes of data, more than half of the bytes transmitted (see Figure 2.1).

Figure 2.1 Overhead associated with the transport of IP packets in ATM cells

On average, ATM tends to suffer from around 20–25% overhead. When considering transcontinental multi-megabit circuits, a loss of 20–25% of the capacity is extremely expensive.
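The per-packet arithmetic above is easy to reproduce. The short sketch below assumes the classic AAL5 encapsulation of IP over ATM: an 8-byte CPCS trailer, padding up to a multiple of the 48-byte cell payload, and a 5-byte header on every 53-byte cell. (LLC/SNAP encapsulation, where used, adds a further 8 bytes per packet, which the sketch ignores.)

```python
import math

CELL_PAYLOAD = 48    # bytes of payload per ATM cell
CELL_HEADER = 5      # bytes of header per ATM cell
AAL5_TRAILER = 8     # bytes of AAL5 CPCS trailer

def atm_overhead(packet_len):
    """Cells, total overhead bytes and overhead fraction for one IP packet over AAL5."""
    cells = math.ceil((packet_len + AAL5_TRAILER) / CELL_PAYLOAD)
    padding = cells * CELL_PAYLOAD - packet_len - AAL5_TRAILER
    overhead = padding + AAL5_TRAILER + cells * CELL_HEADER
    wire_bytes = cells * (CELL_PAYLOAD + CELL_HEADER)
    return cells, overhead, overhead / wire_bytes

for size in (40, 49, 64, 576, 1500):
    cells, overhead, fraction = atm_overhead(size)
    print(f"{size:5d}-byte packet: {cells:3d} cells, "
          f"{overhead:3d} bytes overhead ({fraction:.0%} of the wire)")

# 64-byte packet:    2 cells,  42 bytes overhead (~40%)
# 49-byte packet:    2 cells,  57 bytes overhead (~54%)
# 1500-byte packet: 32 cells, 196 bytes overhead (~12%)
```

Running it shows the overhead ranging from roughly 12% for full-sized 1500-byte packets to more than 50% for small packets that just overflow a single cell, which is how the 20–25% average arises for typical traffic mixes.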

ATM suffers from one other limitation, which constrains its long-term scalability. The process of splitting IP packets into 48-byte chunks for insertion into ATM cells, and then reassembling the complete IP packet from those chunks, is known as Segmentation And Reassembly (SAR). This is a computationally expensive function. The expense of building a module with sufficient processing power and memory means it is financially unrealistic to create a SAR for widespread use that operates at greater than 622 Mbps. This means that it is not possible to have a single flow of data exceeding 622 Mbps. While that might seem like an extremely large flow of data, when considering macro flows between two major hubs on a large service provider’s network, it is not excessive.
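To see why SAR is so expensive, consider the cell rate it must sustain. The sketch below simply divides nominal SONET/SDH line rates by the 53-byte cell size, so it slightly overstates the true cell rate by ignoring SONET framing overhead, but the order of magnitude is what matters.

```python
CELL_BITS = 53 * 8   # every ATM cell is 53 bytes on the wire

# Nominal SONET/SDH line rates in Mbps.
line_rates = {"OC-3/STM-1": 155.52, "OC-12/STM-4": 622.08, "OC-48/STM-16": 2488.32}

for name, mbps in line_rates.items():
    cells_per_sec = mbps * 1_000_000 / CELL_BITS
    print(f"{name:13s}: ~{cells_per_sec / 1e6:.2f} million cells/second to segment or reassemble")

# OC-12/STM-4 : ~1.47 million cells/second
# OC-48/STM-16: ~5.87 million cells/second, i.e. a cell roughly every 170 ns
```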

Given all these downsides, you might be wondering why anyone in their right mind would choose to use ATM as a transport for IP packets. There must have been some reasons why, in the mid to late 1990s, many of the largest service providers in the world relied upon ATM in their backbones. There were, of course, excellent reasons, not least of which was scalability! Prior to ATM, many large ISPs had used multiple DS-3s and Frame Relay switches to create an overlay network. However, as the flows of data grew, it became necessary to run more and more parallel links between each pair of hubs. At the time, DS-3 was the largest interface available on Frame Relay switches. ATM was the obvious next step for service providers in need of greater capacity on individual links, since ATM switches had interfaces running at OC-3 and OC-12 and could be used in a familiar overlay scheme. ATM also has great qualities for traffic engineering. This allowed ISPs to make better use of expensive bandwidth, and to efficiently reroute traffic around failures and bottlenecks.

However, as networks inevitably continued growing, the SAR limitation became significant. With the largest available SAR being 622 Mbps, it was necessary to connect routers to a switch with several links in order to carry sufficient traffic. In the late 1990s, the largest service providers started building new core networks using Packet over SONET and MPLS. This combination provided many (but not all) of the benefits of ATM without the constraint of requiring SAR.

While ATM is certainly not a scalable solution in the core of the larger, global service providers, it remains a highly effective (although not particularly efficient) transport medium for small to medium-sized ISPs and for medium to large enterprises with moderately large networks.

2.4 PACKET OVER SONET (POS)

In this classification, we include not only Packet over SONET/SDH but also Packet over wavelength and Packet over dark fibre, which also use SONET/SDH framing. POS has been widely used since the late 1990s for Wide Area circuits, particularly in service providers’ backbones. POS is highly efficient in comparison to ATM as a transport for IP packets. Rather than ATM and AAL5 encapsulation, POS encapsulates IP within HDLC, or within PPP itself encapsulated in HDLC framing. An OC-3 circuit running POS can transport around 148 Mbps of IP data out of 155 Mbps, compared to around 120 Mbps of IP data on an identical circuit running ATM. This vastly improved efficiency made POS extremely attractive to operators contemplating using STM-16/OC-48 (2.5 Gbps) circuits and above. While ATM switches were capable of supporting STM-16 circuits between themselves, the constraints on SARs meant that a router could still only be connected to the ATM switch at a maximum of STM-4/OC-12. In addition, the prospect of losing 20% of 2.5 Gbps as pure overhead was extremely unpalatable; losing 20% of an STM-64/OC-192 was considered totally intolerable.
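The figures quoted above can be approximated with a simple model. The SONET payload capacity and per-packet POS framing overhead used below (roughly 149.76 Mbps of SPE payload on an OC-3 and about 9 bytes of PPP/HDLC framing per packet) are assumptions for the sketch; the exact values vary with the FCS length and the traffic mix.

```python
import math

OC3_PAYLOAD_MBPS = 149.76   # approximate SONET SPE payload on an OC-3 (assumption)
POS_FRAMING = 9             # approx. PPP-in-HDLC bytes per packet: flag, address,
                            # control, protocol and FCS (assumption)

def pos_goodput(pkt):
    """Approximate IP Mbps carried on an OC-3 for a stream of pkt-byte packets over POS."""
    return OC3_PAYLOAD_MBPS * pkt / (pkt + POS_FRAMING)

def atm_goodput(pkt):
    """Approximate IP Mbps carried on the same OC-3 when the packets ride AAL5/ATM."""
    cells = math.ceil((pkt + 8) / 48)                 # 8-byte AAL5 trailer, 48-byte payloads
    return OC3_PAYLOAD_MBPS * pkt / (cells * 53)      # 53 bytes on the wire per cell

for pkt in (40, 576, 1500):
    print(f"{pkt:5d}-byte packets: POS ~{pos_goodput(pkt):5.1f} Mbps, "
          f"ATM ~{atm_goodput(pkt):5.1f} Mbps")

# 1500-byte packets: POS ~148.9 Mbps, ATM ~132.4 Mbps
# 40-byte packets:   POS ~122.3 Mbps, ATM ~113.0 Mbps
# With a typical mix of packet sizes the ATM figure drops towards the ~120 Mbps quoted above.
```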

However, the flip side was that POS lacked the traffic engineering and QoS functionality available with ATM. It was only with the advent of MPLS that the lack of traffic engineering (and, more recently, the lack of QoS functionality) was overcome to a certain degree. This removed one of the major objections of some of the engineers at the largest ISPs to using POS and paved the way for the use of STM-16 and STM-64 circuits.

2.5 SRP/RPR AND DPT

Spatial Reuse Protocol (SRP) was originally developed by Cisco in the late 1990s. It is a resilient, ring-based MAC protocol, which can use a variety of Layer 1 media but, almost invariably, is currently implemented over SONET/SDH framing. The protocol has been documented in an informational RFC (RFC 2892) and was taken up by the IEEE as the basis for Resilient Packet Ring (RPR), standardized as 802.17. Dynamic Packet Transport (DPT) is Cisco’s implementation of SRP/RPR.

DPT/RPR is based upon a dual, counter-rotating ring. This provides the basis for the efficient (re)use of bandwidth between various points on the ring. Each node on the ring learns the topology of the ring by listening for Topology Discovery (TD) packets. The TD packets identify the ring on which they were transmitted. This, along with the list of MAC addresses, allows hosts to identify whether there has been a wrap of the ring (see below for a further explanation).

SRP can use mechanisms associated with the Layer 1 functionality (e.g. Loss of Signal (LOS), Loss of Light (LOL) with SONET/SDH) to identify the failure of links and nodes. However, since it is not constrained to media with this functionality built in, it is necessary to include a keepalive function. In the absence of any data to send, a router will transmit keepalives to its neighbour.

As can be seen from Figure 2.2, it is possible for several pairs of nodes to communicate at full line rate, simultaneously, without any interference. However, this relies upon the paths used by the communicating pairs not sharing any span of the ring. For example, R1 and R2 can communicate with each other while R3 and R4 communicate with each other. However, if R1 and R4 wanted to communicate and R2 and R3 also wanted to communicate, those flows would have to share the bandwidth on the span between R2 and R3.
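A simplified model of spatial reuse, considering only one of the two rings and ignoring wrapping and the fairness algorithm, is sketched below; the node names follow Figure 2.2.

```python
# Simplified spatial-reuse model for one ring of an 8-node SRP/RPR ring.
# Ignores the second (counter-rotating) ring, wrapping and the fairness algorithm.

NODES = ["R1", "R2", "R3", "R4", "R5", "R6", "R7", "R8"]

def spans(src, dst):
    """Set of ring spans a packet crosses travelling the short way from src to dst."""
    i, j = NODES.index(src), NODES.index(dst)
    n = len(NODES)
    clockwise = (j - i) % n
    step = 1 if clockwise <= n - clockwise else -1      # pick the shorter direction
    hops = min(clockwise, n - clockwise)
    used = set()
    for _ in range(hops):
        nxt = (i + step) % n
        used.add(frozenset((NODES[i], NODES[nxt])))     # a span is an unordered pair
        i = nxt
    return used

def concurrent_at_line_rate(flow_a, flow_b):
    """Two flows can both run at full line rate only if they share no span."""
    return not (spans(*flow_a) & spans(*flow_b))

print(concurrent_at_line_rate(("R1", "R2"), ("R3", "R4")))   # True: disjoint spans
print(concurrent_at_line_rate(("R1", "R4"), ("R2", "R3")))   # False: both need the R2-R3 span
```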

SRP uses some Cisco-patented algorithms to ensure fairness of access to the ring. These prevent transit traffic on the ring from starving a particular node of the capacity to insert its own traffic onto the ring, and vice versa. A full description of the algorithms is included in RFC 2892.

 

Figure 2.2 Intact SRP dual counter-rotating rings (nodes R1 to R8 arranged around the ring)