An Analysis of Dynamic Routing Protocols in

Highly Distributed Data Networks

by

Andrew Steven Kessler

B.S., State University of New York, 1989



A thesis submitted to the

Faculty of the Graduate School of the

University of Colorado in partial fulfillment

of the requirement for the degree of

Master of Science

Interdisciplinary Telecommunications Department

1992





This thesis for the Master of Science degree by

Andrew Steven Kessler

has been approved for the

Interdisciplinary Telecommunications Department

by


__________________________________

Gerald Mitchell


__________________________________

Robert Mercer


__________________________________

Jon Sauer


Date: December 21, 1992


Contents

CHAPTER

I. INTRODUCTION

II. ANALYSIS OF SEVERAL ROUTING PROTOCOLS

III. OVERALL ANALYSIS

IV. SUMMARY AND CONCLUSIONS

BIBLIOGRAPHY

GLOSSARY


TABLES

Table

1. TOS field codes

FIGURES

Figure

1. Network Mesh

2. BGP-IGP

3. Range of Variance


CHAPTER I

INTRODUCTION

A network has been described as "a chain of inter-connected people or operations." Spy novels often use the term network to describe a team of separate individuals communicating and passing information between each other on some covert mission. It is fitting that the first electronic inter-connected communication systems would be called networks.

The Telegraph

The first useful electronic telecommunication device was the telegraph. In 1837 Samuel Morse perfected his use of an electromagnet as a signaling device. It took until 1844 to get the funding and support to lay the first long distance telegraph line from Washington D.C. to Baltimore. On May 24th Morse demonstrated the device with the first words to be transmitted - "What hath God wrought". It wasn't long before a web of telegraph cables crisscrossed the continent and then the globe. Instantaneous communication was possible for the first time. The electronic global consciousness was born.

By the mid-1870s, a number of businesses found they could not operate without a telegraph. Railroads, newspapers, and any business that operated in a nonlocal territory all depended on the machines. As the telegraph became indispensable to the business community, it began to enter the business infrastructure.

The Telephone

The telegraph would soon be replaced by another invention - the telephone. Invented by Alexander Graham Bell in 1876, the telephone quickly became the accepted method of long distance communication. The telephone, as Bell invented it, is really nothing more than a microphone and a speaker on each end of a pair of wires. In order to speak with another person, a set of wires would have to be run between each pair of telephones. It has been shown that this approach, called a "mesh" network, would quickly require many more lines than telephones. The equation that describes the progression of the number of lines is n(n-1)/2, where n is the number of nodes or telephones. At this rate, 10 telephones would require 45 separate wires, and 100 would require 4950! It wasn't long before the idea of a wire center emerged, where all the cables are arranged in a star.
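
The growth is easy to verify; a short Python sketch of the formula above (illustrative only):

    # Point-to-point lines needed to fully mesh n telephones: n(n-1)/2.
    def mesh_lines(n):
        return n * (n - 1) // 2

    for n in (10, 100, 1000):
        print(n, "telephones need", mesh_lines(n), "lines")
    # 10 -> 45, 100 -> 4950, 1000 -> 499500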

In a star network, all the telephones are connected to a central point or hub where the wires are terminated on a patch panel. In these early days of the telephone, routing of telephone calls was done by hand. Someone would literally ring up the operator and tell them who they wanted to call. The operator would then manually place a pair of wires between the two phone sets, and the call was connected. In some respects, this manual system of routing calls worked better than any automatic system ever could. If a call came in for the Mayor of a small town, the operator might know that the Mayor was out at the barber and would ring there, instead of trying his office and finding him not in. This could be considered the first example of dynamic routing.

Eventually the operator was replaced by automatic switches that could connect calls without any human intervention. Almon Strowger, an undertaker from Kansas City, invented the first mechanical switch in 1892. The switch was designed so well that many of them are still in use today. Although the switches did enable people to call one another with less effort than ever before, the switches soon had the same problem that the users originally had. The goal was that anyone who subscribed to any phone service would be able to call anyone else, wherever they might be. Therefore, all the switches would have to be connected together. If all the switches were connected directly with one another, we would run into the n(n-1)/2 problem again. The Bell system solved this problem by setting up a hierarchical telephone network.

In a hierarchical system, the network is divided into different layers of authority. Each layer serves a larger geographic area than the one below it. The Bell system implemented a five level hierarchy - local office, toll office, primary office, sectional office, and a regional office. Telephones are physically connected only to a local office, which is also called a central office or an end office. If a number called is in the local exchange, the central office either makes the connection itself or switches the call to the appropriate local office. If the number indicates a long distance call, the local office switches the call to a toll office, which searches for the lowest-level unoccupied path to the call's destination. Typically, if that path is busy or if there is no direct connection to another toll office serving the destination phone, it switches the call up one level to a primary office. In this way the call will move up through the hierarchy as far as necessary until a path is completed to the destination.

A hierarchical network has some advantages. The most obvious is an easily recognizable address. Just by looking at a phone number, you can immediately tell by the area code in exactly which area of the country the phone is located. With some more knowledge of the location of the various exchanges, you can even tell in which part of the area the phone is located. This is due to the strict mapping of geographic areas to area codes and exchanges.

The main advantage of a hierarchical network can also be a disadvantage. In today's society, the population is much more mobile than ever before. More than 40 million Americans, close to one in six, changed their place of residence in the past 12 months. Some people would like their phone number to follow them throughout their life. Unfortunately, phone numbers are tied to a place, not a person. A number with a 312 area code cannot be moved to Colorado. Usually a customer cannot even keep the same exchange when moving within an area code without incurring a large expense.

The solution to these types of problems might be to add a layer to the network addressing scheme. Under a layered approach, every user would be assigned a phone number for life, say with an area code of 700. They would also be assigned another number, the same one they have now, that we could call the physical address. The user could then call an automated system and change their 700 number to forward or attach to a different physical address. In this way, people could always call the 700 number, and it would ring at the latest address. This type of layered approach would also work for temporary moves. If someone was going to be working in Dallas for a week, they could change their address and all their calls would ring there.
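
In essence, the scheme amounts to a table mapping each lifetime number onto a current physical number. The following sketch is purely hypothetical; the numbers and function names are invented for illustration and do not describe any deployed system:

    # Hypothetical directory mapping lifetime "700" numbers to current
    # physical numbers; the numbers and names here are invented.
    directory = {"700-555-0100": "303-555-0142"}

    def move(lifetime_number, new_physical_number):
        # The subscriber dials an automated system and updates the binding.
        directory[lifetime_number] = new_physical_number

    def route_call(lifetime_number):
        # Every call setup performs one lookup in the directory.
        return directory[lifetime_number]

    move("700-555-0100", "214-555-0199")   # a temporary move to Dallas
    print(route_call("700-555-0100"))      # -> 214-555-0199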

The downside to this concept is that it would require a centralized database to facilitate the addressing. Whenever a call is made, there would have to be a lookup in the database to see where to send the call. If there was only one copy of the database, traffic would congest around it as thousands of calls would be requesting information. If there were several copies of the database, there would have to be a mechanism to guarantee that they would all be kept up to date. Stability would also be an issue. A backup system would have to be installed to insure that if anything happened to the master database, the backup would come on line immediately. Possibly the address servers could be divided into regions, each one serving a particular range of the 700 area code. This would lower the strain on each of them, and might be a workable solution.

The Computer

The computer was invented to be a computational engine, but soon evolved to be the most revolutionary telecommunications device to date. The first commercial computers introduced were huge mainframe computers. All information was directly entered into the computer, either by punch cards, or later by a CRT. Only certain people were allowed physical access to the computer. The users had to submit their work to the operators, who would enter their job in a batch format. This is the concept of centralized processing and centralized management.

The next major change was the advent of local terminals. Terminals connected to the host machine, and allowed users for the first time to have interactive access with the computer. Users were able to enter and interact with the computer themselves, but all the processing was still done in the mainframe. This still fits in the category of centralized processing, although access to the computer was now distributed.

Finally, with the mini and micro-computer revolution, sophisticated processing could now be done on the desktop. The proliferation of small autonomous computers has radically changed the paradigm of computer connectivity. The issue before was how to connect the terminal back to the mainframe. The terminal was "dumb". Everything that appeared on the screen was created and arranged by the mainframe. Today, when we are connecting computers together, they need to communicate as peers. The processing can now be shared between the computers. This is the concept of distributed processing.

Distributed processing places much greater demands on the network. Instead of just simple screen updates, we now have disk and printer sharing, as well as higher level applications such as electronic mail. Instead of basically one protocol, terminal emulation, to facilitate screen updates, there are now hundreds of different protocols at all levels of the network protocol stack.

In a centralized environment, the data is always travelling between the terminal and the mainframe. In a distributed environment, any two nodes in a mesh network of computers can have information to share. There is not a clear straight path between the host and the terminal. The source and destination of each "packet" of information can be any node on the network. In a small, static and stable network, where the path between computers is usually fixed, this does not present much of a problem. Paths can be statically entered into a route table on each node on the network, so that the right path can be looked up for any particular destination.

However, this solution is not scalable. It might be manageable for a few hundred machines at a single site, but as soon as you get into thousands of computers spread over the country or the world, it quickly becomes unmanageable. In order to minimize confusion, there must be a coordinated effort to insure that each name and address is unique. The issue becomes larger than any one computer or any particular site. The network itself has to be intelligent. From the user's perspective, we know what we want to send and who we want to receive it. We simply submit it to the network, and it should get there. Issues of what has changed on the network since the last communication with this host, or what is currently the best path to take, should not concern the user.

The best analogy is the phone system. When someone picks up the phone to make a call, they know the destination address (phone number), and dial it without any concern of how the call will be completed. The network takes care of routing the call through the correct central offices or trunks to reach the destination user's central office and ringing the phone. Whether or not a major trunk was down between Chicago and New York and the call had to be re-routed through Dallas is completely transparent to the user.

The problem is similar to that of a child trying to find his way through a restaurant to a particular table. From an adult's point of view, it is pretty straight-forward. The adult can see the destination, and can follow a reasonably straight path there, avoiding any obvious obstacles such as a serving tray. The child can only see what is directly in front of him, and needs to be guided in the right direction, without the benefit of being able to see over tables as to what is coming up.

The case of a computer on a network is more like the child than the adult. A computer is only connected to its closest neighbors, or, on a broadcast network, its closest bridges. Any information it gets from the rest of the network, it must get through its neighbors. Dynamically passing information around the network between the nodes can give computers an "adult's" view of the network. This is the concept behind routing protocols.

The subject of using routing protocols on a large scale did not become an issue until there were large networks that required them. Large scale networking began in the research labs at Cal Tech, Cambridge University, Xerox PARC, and Berkeley (among others) in the late 1960s and early 1970s. The first major United States national network, the ARPANET, was funded by the Department of Defense to link universities, government labs, and key DoD industries. The initial networks used dedicated telephone circuits, Digital PDP-11 computers, and BBN (Bolt, Beranek and Newman) software to form virtual circuits between any two nodes on the network. The National Science Foundation became the primary source of funding for the network. In the late 1980s the ARPANET was removed from the authority of the military and is now loosely administered by the Network Information Center (NIC) at the Stanford Research Institute (SRI). Today, what was once the ARPANET is called the NSFnet or is more popularly known as the Internet. The Internet often gets confused with the generic term "internet", which refers to the networking of networks or inter-networking. Whenever a reference is made to the successor of the ARPANET, Internet will be spelled with a capital I; otherwise, a lower case i will be used. This thesis is primarily concerned with routing protocols at the internet level.

As the size of networks grows, organizations will move towards decentralized control of the network. Decentralization of control will ultimately lead to conflicts in implementation at many different levels. The resolution of these conflicts and fault isolation must fall upon the network itself. Many large corporations and universities have already built networks that are growing faster than they can maintain them. Some managers of local area networks (LANs) have the power to change the configuration of their file servers, workstations, and sometimes bridges. Changes made by LAN managers to their network configuration can have catastrophic effects on the entire internet, if not done properly. Dynamic routing protocols can isolate problems and keep critical information flowing.

The purpose of this thesis is to present a persuasive argument to separate large distributed computer networks into smaller autonomous groups. Each autonomous group would be responsible for maintaining its section of the network. Assuming that the majority of traffic is usually local to any one network, overhead would be reduced significantly, since any routing algorithm would only have to track routes in the local area instead of the entire internet. When traffic is destined for a machine that is not within the area defined by the autonomous groups, it would be passed to a border gateway, which would have more complete information about the rest of the internet. Any disturbance on a network would be automatically isolated from the rest of the internet. These reasons, along with a thorough background on routing protocols, form a convincing argument to deploy border gateways and divide large internets into smaller autonomous groups.


CHAPTER II

ANALYSIS OF SEVERAL ROUTING PROTOCOLS

Interior Gateway Protocols

Routing protocols are broadly divided into two classes, interior gateway protocols (IGPs), and exterior gateway protocols (EGPs). IGPs are intended for use within a single set of networks, either under one set of management or closely coordinated managements. The accepted term for a network under one set of management is an "autonomous system". Collections of autonomous systems are connected by EGPs. An IGP is designed to keep track of considerable detail about network topology. Priority in designing an IGP is placed on producing optimal routes and responding to changes quickly. In contrast, priority in designing an EGP is on stability and administrative controls. Often it is sufficient for an EGP to produce a reasonable route, rather than the optimal route. An EGP is intended to protect one system of networks against errors or intentional misrepresentation by other systems. This has become a more serious issue as networking moves towards decentralization of control. In this section we will analyze several major routing protocols to understand how they accomplish these goals.

Routing Information Protocol

Routing Information Protocol (RIP) was originally developed by Xerox as part of the Xerox Network Systems (XNS) architecture. The program "routed", distributed with the 4.3 Berkeley Software Distribution of Unix, is loosely based on the same protocol. There are several other implementations of what is supposed to be the same protocol. Unfortunately, these implementations disagree in various details. The discussion that follows includes features from a combination of various implementations. However, the discussion is general enough to apply to any of the various implementations.

RIP was intended to be used with the IP (Internet Protocol) based Internet. IP is a part of the TCP/IP (Transmission Control Protocol/Internet Protocol) protocol suite that was developed collectively by many groups to serve as the transmission protocol on the Internet. TCP/IP has been implemented on virtually every computer system to allow connectivity between all types of computer platforms.

The Internet is organized into a number of networks connected by gateways. The networks may be either point-to-point links or more complex networks on a broadcast system, such as Ethernet. Hosts and gateways are presented with IP datagrams addressed to some host. Routing is the method by which the host or gateway determines where to send the packet. The simplest case is when the destination is directly connected to the host or gateway, and the packet can be forwarded immediately. The more interesting case is when the destination is not directly reachable. In this case the host or gateway attempts to send the packet to a gateway that is nearer to the destination. The goal of the routing protocol is to supply the information necessary to do the routing.
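
The forwarding decision itself can be sketched in a few lines. The network numbers and table layout below are invented for illustration, not taken from any particular implementation:

    # Illustrative forwarding decision for a host or gateway.
    connected = {"192.0.2.0"}                       # directly attached networks
    routes = {"198.51.100.0": "192.0.2.1"}          # destination -> next-hop gateway

    def forward(destination):
        if destination in connected:
            return "deliver directly"
        gateway = routes.get(destination)
        if gateway is None:
            return "no route (drop, or hand to a default route)"
        return "forward to gateway " + gateway

    print(forward("198.51.100.0"))    # forward to gateway 192.0.2.1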

The routing algorithm stores the information about the network in a database commonly called a routing table. This information is passed between neighboring gateways by update messages to keep each other informed about the status of the network. Each entry in the database typically contains:

- the address of the destination (a network, or possibly an individual host)
- the metric, or total cost, of the route to that destination
- the address of the next gateway along the path
- timers and flags used to age and manage the entry

These items are defined in such a way that there is no formal distinction between networks and hosts. RIP will work equally well for networks, hosts or any network entity.

RIP is one of a class of algorithms known as "distance vector algorithms". Distance vector algorithms use information about the internet to determine the best route to every possible network. In the case of RIP, we use a metric. The metric for a route is simply a number, which might represent the distance a packet has to travel when it follows the route. The metric does not have to be the actual distance in miles or feet. It is often a simple count of the number of gateways the packet goes through along the path; this is also called the "hop count". If leased lines are used, it could represent the dollar cost of sending traffic along the path. The only critical issue for the algorithm is that the metric must be derived by adding up numbers associated with each network that the packet passes through. This is commonly referred to as the "cost" for each segment.

The core of the distance vector algorithm was developed by Richard Bellman and Lester Ford. The distance vector algorithm is often called the Bellman-Ford algorithm. It has been interpreted as follows:

Let D(i,j) represent the metric of the best route from gateway i to network j. It should be defined for every gateway and network. d(i,j) represents the costs of the individual steps. Formally, let d(i,j) represent the cost of going directly from gateway i to gateway j. It is infinite if i and j are not immediate neighbors, and 1 otherwise. (Note that d(i,i) is infinite; that is, we do not consider there to be a connection from a node to itself.) Since costs are additive, it is easy to show that the best metric must be described by:

D(i,j) = min over all neighbors k of [ d(i,k) + D(k,j) ]

The best routes start by going from i to those neighbors k for which d(i,k) + D(k,j) has the minimum value. This can be shown by induction on the number of steps in the routes.

A simple algorithm can compute the metric based on this principle. Directly connected networks are always assigned metric 0. For other networks, gateway i gets its neighbors k to send it their estimates of the distance to the destination j. When i gets the estimates from k, it adds d(i,k) to each of the numbers. This is simply the cost of traversing the network between i and k. Periodically, i compares the values from all of its neighbors and picks the smallest, that is to say, it picks the minimum distance route.

Above is the theoretical description of RIP. In actual implementations, RIP does not store information about every route in order to compute an explicit minimum cost. It computes the cost incrementally. Whenever RIP sees a route with a better (smaller) metric than the one in its table, it replaces it with the new one. In this way, the gateway will pick the shortest route from knowledge of several paths. However, there must be some way to increase the metric. There might be a case where the first estimate was too low. The solution is that when we receive a larger metric for a route from the same gateway as the one defined for our current path, we update the route anyway and accept the larger metric. For example, let's say we have an entry in our database to a network using gateway G with metric M. If we receive an update from another gateway besides G, we would only accept it if the new metric is better than M. But if we receive an update from G itself, we will always accept the new metric. This process will produce the same result as computing an explicit minimum, without the overhead.
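
A minimal sketch of this acceptance rule follows (simplified; a real implementation such as routed also keeps timers and flags per entry):

    # Simplified RIP acceptance rule; table maps destination ->
    # (gateway we learned it from, metric).
    INFINITY = 16
    table = {}

    def process_update(dest, from_gateway, advertised_metric, link_cost=1):
        metric = min(advertised_metric + link_cost, INFINITY)
        entry = table.get(dest)
        if entry is None or metric < entry[1]:
            table[dest] = (from_gateway, metric)    # strictly better: take it
        elif entry[0] == from_gateway:
            table[dest] = (from_gateway, metric)    # same gateway: always accept,
                                                    # even if the metric got worse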

This system of incremental updates creates a new problem. RIP, as presented so far, depends on information from its neighbors to update its routes. If a gateway were to crash, it would not be able to send out a message to cancel its route. In order to handle this situation, implementations are designed to time out routes. Every gateway sends an update message every 30 seconds. Suppose the current route for network N uses gateway G. If we don't receive an update from gateway G in six update periods (180 seconds) we can assume that either gateway G has crashed or that the connection to G is down. At this point, we mark the route as invalid. We will then wait to hear about another route to N, and replace our invalid route with a valid one. The reasoning behind waiting six update periods is that packets are occasionally lost by the network. Therefore, it would not be a good idea to invalidate a route based on a single missed packet. The numbers 30 and 180 are the default values for RIP, but can be changed depending on the speed and reliability of the network. These times are referred to as the "update time" and "route timeout period". The route timeout period should be at least (M + 1) * UT, where M is the number of packets you want to be able to survive losing, and UT is the update time.
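
With the defaults, the timer arithmetic works out as follows (a sketch, using M = 5 tolerated losses to reproduce the 180-second default):

    # Timer arithmetic from the text: a route is kept for (M + 1)
    # update times, i.e. 180 seconds with the defaults below.
    UPDATE_TIME = 30              # UT: seconds between scheduled updates
    TOLERATED_LOSSES = 5          # M: consecutive updates we can afford to lose
    ROUTE_TIMEOUT = (TOLERATED_LOSSES + 1) * UPDATE_TIME    # 180 seconds

    def route_is_stale(seconds_since_last_update):
        return seconds_since_last_update >= ROUTE_TIMEOUT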

RIP marks a route as invalid by showing the metric for that route as very large. The object is to pick a number that is larger than the largest valid metric we could expect. RIP was not designed for a network with a radius larger than 15, hence infinity is traditionally set at 16. Following this principle, when a connection to a network vanishes, the host will wait until the route timeout period has expired, and then set the metric for the route to 16. Eventually all the routes to the vanished networks will converge to 16. Hosts that are one hop away from the original neighbors would have a metric of 17, but for simplicity's sake RIP sets anything greater than 16 to just 16. In order to illustrate how convergence takes place we will look at an example.

Consider the following network:

Figure 1

The "Network Cloud" represents an arbitrary network mesh that connects gateway I and gateway J. We will assign it a cost of 10. The best path from network A to network B is through gateway H. This is true even for hosts on the far side of gateway I and before the cloud, where user Z is marked in the diagram. For those hosts it is only 2 hops to network B through gateway I as opposed to 11 hops through the Network Cloud. In this configuration, both gateway I and gateway G would stabilize on a route to network B as using gateway H with a hop count of 1.

Now suppose the link from gateway H to network B fails. The gateways should switch immediately to the route through gateway I; instead, the following scenario develops. Gateway H will time out the route and set the metric to 16. In the next round of updates, gateway H will send out the new information about the route. Unfortunately, vestiges of the old route still exist in the system. When gateway G receives the update from gateway H, it is possible for it to also receive an update from gateway I, which was sent before gateway I had the chance to process the new information from gateway H. This update from gateway I will incorrectly claim that it has a route to network B with one hop. Gateways G and H will accept this information and add to their routing tables a route with a metric of 2 (one more than gateway I). On the next series of updates, gateway I will hear the metric of 2 from gateways G and H, and then advertise a route with a metric of 3. This will continue until the gateways finally converge on 16, or in this case a metric of 12 through gateway I. This phenomenon is called "counting to infinity".
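
The count is easy to reproduce in a toy loop. The sketch below ignores timing details; each pass stands for one exchange of updates between the gateways holding the stale route:

    # Toy trace of counting to infinity. The bogus route starts at
    # metric 2 and grows by one hop per update exchange until it is
    # no better than the real path through the Network Cloud.
    metric, rounds = 2, 0         # 2: the first bogus metric G and H accept
    while metric < 12:            # 12: cost of the real path via the cloud
        rounds += 1
        metric += 1
    print(rounds, "extra update rounds before the real route wins")
    # With no alternate path at all, the count runs all the way to 16.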

This problem is one of mutual deception. Gateway G and Gateway I are advertising information about a route that doesn't exist. They keep the misconception alive by lying to each other. The best way to prevent this is by a policy called "split horizon". It basically states that a host will never advertise information about a network back out the interface that the information originally came in on. For example, gateway I was receiving information about the route through gateway H on its interface with network A. It should never re-transmit that same information back on the interface to A. At the very least it would be redundant. The truth is that it doesn't have anything additional to contribute to the situation so it should just leave that route out of its update (but it could include information about its alternate route through the Network Cloud). According to this rule, gateway G should never send out a routing update at all. It does not have any alternate route that the other gateways don't already know about.

There is still another problem. At the same time that the new updates are going out, hosts that haven't received the latest information are sending out their scheduled updates with the old information. These machines are spreading bad rumors. This just prolongs the inevitable. To improve this situation, we have two policies - flash updates and update hold-downs. With flash updates, whenever a gateway sees an increase in a metric, it immediately sends out an update without waiting for the update time to expire. This causes a cascade of triggered updates. The new information should be quickly propagated throughout the network. Unfortunately, regular updates are going out at the same time. The worst case would be a gateway that has just received the flash update, then receives a normal update from a host that hasn't gotten the word. This could reinstate a defunct route for another round of updates. With update hold-down, after a gateway has processed a flash update, it will refuse to process any information for the network that it has just declared inaccessible. This hold-down lasts for a fixed period of time. It should at least be long enough for the flash update to propagate through the network. Usually one update time would be enough, but again since we have to allow for some lost packets it is better to set the hold-down to a multiple of update times.

A combination of split horizon, flash updates, and update hold-downs usually will be sufficient to get rid of erroneous routes quickly. However, networks that change segment costs dynamically will not remain stable under these rules alone. Some implementations use real time analysis of network load and modify the cost appropriately. Systems such as this require more complex programming to obtain stability.

Occasionally the need arises to split traffic load between two paths. It can sometimes be advantageous to have two parallel paths at half the capacity rather than one path at double the capacity. The way RIP was designed, it cannot do any type of load balancing. It was intended to pick the best path between points. Load balancing is a more complex technique. In its present form RIP cannot perform load balancing or measure network load dynamically.

Interior Gateway Routing Protocol

Interior Gateway Routing Protocol (IGRP) was developed by Cisco Systems to operate on large networks with complex topology and segments having different bandwidth and delay characteristics. Like RIP, IGRP is a distance-vector protocol based on the Bellman-Ford algorithm. In all of these protocols, gateways exchange routing information with adjacent gateways. These routing packets contain a summary of information about the rest of the network. It can be shown mathematically that all of the computers are working together on an optimization problem by a distributed algorithm. Each gateway only needs to solve part of the problem, and it only has to receive a portion of the total data.

The core of the IGRP algorithm is very similar to RIP. It finds the best path by computing the minimum of the costs of each individual network segment. IGRP differs from RIP in three main respects. First, instead of using a simple metric, IGRP uses a vector of metrics based on several criteria. Second, IGRP splits traffic between several paths that fall within certain parameters. Third, IGRP employs sophisticated techniques to maintain stability in situations where the topology is changing.

The composite metric is based on the topological delay time, the bandwidth of the narrowest bandwidth segment of the path, the channel occupancy of the path, and the reliability of the path. Topological delay time is a physical description of the amount of time it would take to traverse that path, similar to propagation delay. The bandwidth, measured in bits per second, is that of the slowest link in the path. Channel occupancy specifies how busy the channel actually is by indicating how much bandwidth is currently in use. This is measured dynamically, and will change as the network load changes. Reliability is the current error rate. It is computed from the percentage of packets that arrive at the destination intact. It also is dynamically measured.

The metric is computed from the following formula:

[(K1 / Be) + (K2 * Dc)] r

Where:

K1, K2 = constants

Be = unloaded path bandwidth x (1 - channel occupancy)

Dc = topological delay

r = reliability

The formula is designed so that the path with the smallest metric will be the best path. Where there are multiple paths, traffic will be divided proportionately between the paths in accordance with the metric. For example, if a route has a metric of 1 and an alternate route has a metric of 3, three times as much traffic will flow over the path with a metric of one. This is assuming that both paths fall within a specified range of tolerance.
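
A sketch of the computation follows. The formula is the one given above; the constants and inputs are illustrative values only:

    # The composite metric from the text: [(K1 / Be) + (K2 * Dc)] * r.
    def composite_metric(k1, k2, unloaded_bw, occupancy, delay, r):
        be = unloaded_bw * (1.0 - occupancy)    # effective bandwidth
        return ((k1 / be) + (k2 * delay)) * r

    # Weighting toward bandwidth (large K1) vs. toward delay (large K2):
    m = composite_metric(k1=1.0e7, k2=0.0, unloaded_bw=19200,
                         occupancy=0.5, delay=1.0, r=1.0)
    print(m)    # smaller result = better path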

The constants K1 and K2 are used so that the metric can be weighted towards bandwidth or delay. Certain interactive applications, such as terminal emulation, are very delay-sensitive. Other applications, such as large file transfers, demand high bandwidth. Through use of this metric, IGRP allows the network administrators to custom-tailor the network routing to meet their individual needs. Ideally, every packet could include a "type of service" field which would indicate what type of data it is and determine how it would be treated.

Moreover, by accounting for bandwidth in the composite metric, IGRP manages network load in a way RIP cannot. With RIP, each network segment is given a hop count regardless of the bandwidth of the link. IGRP will correctly weight each link by its channel capacity and its current load. Bandwidth limitations do not accumulate in the same way delays do; the composite metric accounts for this special property.

IGRP further identifies each update as belonging to a particular autonomous system. As mentioned previously, an autonomous system is a collection of networks that are managed by one organization. Tagging each update as being associated with a particular autonomous system would allow several routing systems to be operating on the same network simultaneously. It works especially well in places where several organizations converge. Updates that occur near the border gateways would not be confused with updates from neighboring networks. Since each update is labelled, gateways could be configured only to listen to updates from the right system. This would allow each group to run their own routing protocol. It should be noted however that this is not a complete security solution. Any gateway could be configured to listen to or broadcast to any autonomous system. Nevertheless, it is still a useful tool in implementing routing policies where there is a reasonable degree of trust between the administrators.

Many routing protocols have the concept of a default route to a distant network. It would be impractical for every host to contain an entry for every outlying network. Instead there are boundary gateways assigned that have more complete information. The local gateway would pass traffic destined for an unknown network to one of these boundary gateways via a default route. Some of these protocols, such as RIP, pass this information around as if it were a real route. This causes problems when a connection to a distant network goes down and the protocol is fixed to send to a particular default route. IGRP handles default routes better. IGRP flags real routes as candidates for default routes; this can best be thought of as turning on a bit associated with each route that marks it as a candidate default route. Periodically IGRP scans all the candidate default routes and chooses the one with the lowest metric as the actual default route. This approach is more flexible in that it dynamically adjusts the metric of the default route as it does for any route.

IGRP uses the techniques of split horizon, flash updates, and update hold-downs in a manner similar to RIP. IGRP also uses a different approach, called poison reverse. Poison reverse explicitly marks a network as unreachable, instead of just giving it a large metric. In order to remove vestiges of the old network, a cascade of flash updates is triggered, propagating the new information throughout the network. It is a more stable solution since it avoids the process of timing out the route.

Load balancing is the process of using several paths simultaneously to achieve greater effective bandwidth. Instead of picking a single path with the smallest metric, traffic is split among several paths with metrics falling into a specified range. The most common use of this technique is to set up redundant parallel paths between two networks. This increases throughput and reliability. IGRP accomplishes load balancing by creating a concept called variance. A variance V is specified by the network administrator. All paths with minimal composite metric M are kept. In addition all paths with a metric less than V x M are kept. Traffic is distributed between the paths in inverse proportion to the composite metrics.

The use of variance can create some problems. Variance is designed to allow routers to use parallel paths of different speeds. For example, there might be a 9.6 Kbps line running in parallel with a 19.2 Kbps line, for redundancy. If the variance is set to 1 (the default), only the best path will be used, as long as the 19.2 Kbps line maintains reasonable reliability; however, if several paths have the same metric, traffic will be shared between them even with a variance of one. By raising the variance, we can allow traffic to be split between the best route and other routes that are nearly as good. With a large enough variance, traffic will be split between the two lines. The danger is that with a large enough variance, paths will be used that aren't just slower, but are actually in the wrong direction. This could be corrected with an additional routing rule that prevents traffic from being sent upstream, which would otherwise cause routing loops. In general a variance of 1 is used, except in specific situations where parallel paths need to be used. In this case, the variance should be carefully set to provide the correct results.
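
The selection and weighting rule can be sketched as follows (illustrative only; a real implementation applies further sanity checks):

    # Variance-based path selection: keep every path whose metric is
    # no worse than V times the best, then split traffic in inverse
    # proportion to the metrics.
    def usable_paths(paths, variance):
        best = min(metric for _, metric in paths)
        kept = [(name, metric) for name, metric in paths
                if metric <= variance * best]
        total = sum(1.0 / metric for _, metric in kept)
        return [(name, (1.0 / metric) / total) for name, metric in kept]

    lines = [("19.2 Kbps line", 1.0), ("9.6 Kbps line", 2.0)]
    print(usable_paths(lines, variance=1))    # fast line only
    print(usable_paths(lines, variance=2))    # 2/3 vs. 1/3 split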

Routing Table Maintenance Protocol

Routing Table Maintenance Protocol (RTMP) was developed by Apple Computer as part of their seven layer protocol suite - AppleTalk. AppleTalk was designed to be a general purpose network system that provides a seamless extension of the user's computer. A key goal was the ability for a user to plug a computing device into a network system and use it immediately, without a complicated configuration process. This has been called "plug and play" capability. The designers were also careful to avoid any type of centralized control, to eliminate the possibility of a single point of failure, and maintain the user's personal control over network resources. The result is a dynamic routing protocol with computers and routers arranged in a peer-to-peer architecture.

There has been one major upgrade to AppleTalk since its introduction. When the new version was released, it was called Phase 2, and the original flavor was named Phase 1. In this thesis, any discussion involving AppleTalk will pertain to Phase 2, although many of the points and issues apply equally to both.

RTMP is a distance vector protocol. Like RIP and IGRP, RTMP maintains a routing table by passing updates between routers on the internet. The distance between networks through Internet Routers (Apple's name for a gateway) is measured in hops in the same manner as RIP. The algorithm could be expanded to use more complex metrics, such as transmission capacity or traffic load of the individual links, but the current implementation only takes hops into account.

When a router powers up, it checks each of its ports. Usually, a port would be defined in the configuration of the router with a port descriptor. The port descriptor consists of four fields: a flag indicating whether the port is connected to a network, the port number, the port node address (the network number and node ID for the port on that particular network), and the port network number range (the network number range for the network connected to the port). The router can be programmed to listen to the network, and set up its configuration accordingly. When the router sends out a message assigning the network number, it is said to be "seeding" the network. Problems can arise when routers conflict with different network numbers for the same network. Careful planning is required to insure that all the routers seed the network correctly.

After the router has determined which networks are directly connected, it creates a routing table. Each entry in the routing table consists of the network range, the distance to the network in hops, the port which packets must be forwarded out to reach the distant network, the port number of the next internet router, and the entry state of the connection. The entry state can be one of three values - good, suspect, or bad. Periodically, routers broadcast packets to one another. Each packet basically consists of a picture of the routing table. The information for each entry is represented in pairs of network range and distance called routing tuples.

The routers are operating on two timers - the send-RTMP timer and the entry validity timer. Each time the send-RTMP timer expires, the router broadcasts, through each of its ports, its routing table in the form of RTMP Data packets. The default value for this timer is 10 seconds. RTMP uses split horizon for stability. It will not rebroadcast information about a route through the same port the information came in on.

The entry validity timer is used for aging of the routing table. Each entry in the routing table corresponding to a remote network is the result of an RTMP Data packet that was received by the router from a neighboring Internet Router closer to the remote network. RTMP considers this entry to be valid only for a limited time, called the entry validity time. Before starting the validity timer, the router goes through the routing table and changes the state of every good entry to suspect. Every entry must be revalidated from a new RTMP Data packet before the timer expires. If part of the network is down, the revalidation packet will not get through. The validity timer will expire, and the router will not have received confirmation of the route. At that time, the entry's state is changed from suspect to bad. During this period, any valid alternate route would be accepted in place of the bad route. However, if no new route is discovered, the bad entry will be deleted when the validity timer expires twice more.
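
The aging cycle can be summarized as a small state machine. The sketch below simplifies the behavior described above:

    # Sketch of the entry aging cycle: good -> suspect -> bad -> deleted.
    GOOD, SUSPECT, BAD = "good", "suspect", "bad"

    class Entry:
        def __init__(self):
            self.state = GOOD
            self.bad_expiries = 0

    def validity_period_starts(entry):
        if entry.state == GOOD:
            entry.state = SUSPECT       # must be revalidated this period

    def tuple_received(entry):
        entry.state = GOOD              # a fresh RTMP Data packet confirms it
        entry.bad_expiries = 0

    def validity_timer_expires(entry):
        if entry.state == SUSPECT:
            entry.state = BAD           # no confirmation arrived in time
        elif entry.state == BAD:
            entry.bad_expiries += 1
            if entry.bad_expiries >= 2:
                return "delete"         # bad through two more expirations
        return entry.state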

This aging process would normally remove lost routes in finite time, but for greater stability, RTMP uses the notify-neighbor technique. Notify-neighbor is a method of identifying entries whose state is bad when sending RTMP Data packets. Bad entries are identified in RTMP Data packets by a tuple distance of 31. This is similar to RIP's metric of 16 for a bad route. The neighboring routers would then pass on the news in their next update. It would be better still if RTMP had implemented some type of flash updates. Forcing the information about the bad route to cascade through the system would be a more stable solution.

RTMP has some more fat to trim. Although split horizon is used, every RTMP Data packet includes a list of all the appropriate networks and their routes, whether there has been a change or not. On a large network with several hundred sub-nets, this can create quite a traffic problem. Every router will broadcast a list of 300 networks every 10 seconds, with nothing new to report. RTMP could be modified to just broadcast changes in the network, thereby reducing unnecessary traffic. However, this would require a major overhaul of the protocol.

Open Shortest Path First Protocol

Open Shortest Path First (OSPF) was developed by the OSPF Working Group of the Internet Engineering Task Force. OSPF is designed to be used as an interior gateway protocol, which means it is to be used internal to a single autonomous system (AS). OSPF is different from RIP and IGRP in that it uses link-state or shortest path first (SPF) technology instead of the distance-vector algorithm. Routers pass individual link state advertisements (LSAs), which describe pieces of the OSPF routing domain, forming the link state database. Every router on the network has an identical copy of the database. A reliable flooding algorithm maintains synchronization of the link state databases on each router. Using this link state database, each router builds a routing table by calculating a shortest-path tree, with the calculating router itself at the root. This approach has some interesting tradeoffs with the distance vector algorithm, which we will discuss in depth in chapter III.

When a router process starts up, it initializes all of the data structures that it will need to store the link status information. Then, it waits to hear from the lower level protocols that all of its interfaces are up and running. At this stage, the router uses the OSPF Hello protocol to find out who its neighboring routers are. On a broadcast or multicast network, this can be done dynamically, by broadcasting to the address ALLSPFRouters. Any OSPF router on the network will answer with its router number and address. On a non-broadcast network, other routers have to be discovered statically through the configuration information set up in the router. On broadcast networks, such as ethernet, the Hello protocol elects a designated router. The designated router will be the only local router that transmits and receives routing updates from other networks. All other routers on the same network will transmit and receive information from the designated router.

After the list of neighbors has been determined, the adjacencies are formed. In an OSPF routing domain, certain routers are considered to be adjacent. For example, any two routers connected by a serial link are considered to be adjacent. On multiple access networks, any routers directly connected to the network are considered to be adjacent. As an adjacency is formed, the two routers synchronize their link state databases by exchanging database summaries in the form of OSPF Database Exchange packets. They maintain this synchronization through the flooding algorithm.

These adjacencies control distribution of link status advertisements. On a broadcast network, the designated router decides which routers are adjacent. All link status information is spread along adjacencies. A backup designated router is assigned in the adjacency set up process. This is to insure an immediate switch-over if the designated router were to fail. The backup router has all the same information as the designated router, and should be able to assume the role promptly.

The database is formed from a collection of the link state advertisements from the network. Each router calculates a shortest path tree from the information in the database, with itself at the root. This tree is then used as a basis for the routing table.
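
The calculation is the classic Dijkstra shortest-path-first algorithm. A minimal sketch over a toy link state database follows (the graph is illustrative):

    # Classic Dijkstra SPF over a toy link state database:
    # graph maps each router to {neighbor: link cost}.
    import heapq

    def spf_tree(graph, root):
        dist, parent = {root: 0}, {root: None}
        heap = [(0, root)]
        while heap:
            d, node = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                          # stale heap entry
            for neighbor, cost in graph[node].items():
                nd = d + cost
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor] = nd
                    parent[neighbor] = node       # tree edge toward the root
                    heapq.heappush(heap, (nd, neighbor))
        return dist, parent

    graph = {"A": {"B": 1, "C": 5}, "B": {"A": 1, "C": 2}, "C": {"A": 5, "B": 2}}
    print(spf_tree(graph, "A"))    # A reaches C through B at total cost 3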

Each LSA contains a router ID and a unique sequence number. The router ID is used to determine which router the LSA originated from. The sequence number is used to determine the relative age of two link state advertisements that were produced from the same router.
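
Freshness comparison then reduces to a simple rule. The sketch below is simplified; real OSPF sequence-number arithmetic also handles wraparound, ages, and checksums:

    # Simplified freshness rule for two LSAs from the same router.
    def newer_lsa(a, b):
        assert a["router_id"] == b["router_id"]
        return a if a["seq"] > b["seq"] else b

    old = {"router_id": 1, "seq": 7, "links": []}
    new = {"router_id": 1, "seq": 8, "links": []}
    print(newer_lsa(old, new)["seq"])    # 8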

As in IGRP, OSPF link state advertisements are marked as to which autonomous system they belong. OSPF differs in that it goes one step further. OSPF allows for groups of contiguous networks to be grouped together. Such a group, together with all the routers that have interfaces on any of the included networks, is called an area. Each area runs a separate copy of the SPF routing algorithm. This means that each area has its own topological database and corresponding graph. The topology of an area is invisible to any router or host outside the area. This isolation of knowledge allows the protocol to effect a major reduction in routing traffic as compared with treating the entire autonomous system as one OSPF routing domain. Essentially, we have introduced a two layer hierarchy. Now routing will take place on two levels, depending on whether the source and destination addresses of a packet reside in the same area. Routers attached to more than one area act as area border routers.

OSPF is designed to interact well with a router speaking an EGP to another autonomous system. Boundary routers are set up on the periphery of the autonomous system, just as boundary routers are set up for areas. Sometimes, the router that is physically connected to the other autonomous system is not participating in the OSPF routing domain. This requires another router to advertise the path to the external network. All packets destined for the external network would converge on the advertising router, so they could in turn be sent to the router with the actual physical connection. A nice feature of OSPF is that it allows a "forwarding address" to be set up for its external advertisements. The route can then point to the proper gateway directly, saving a hop.

The "forward address" field enables internal routers to serve as "route servers" for external routes. An internal router could gain information about external routes through a combination of static configuration and communication with an EGP router. It then can advertise itself as an AS boundary router and supply information for the rest of the network as to where the proper AS exit points are for particular external networks.

In most networks, there are some areas that have only a single entry and exit point. These are called "stub" areas. There is no reason to flood these stub areas with information about the AS exit points, or any information about the rest of the network for that matter. Any packet destined for a network outside the stub area can be handled by the gateway at the point of entry. OSPF has provisions for configuring an area as a stub area. Basically, the routers within the stub area are statically configured with a default exit route. This can dramatically reduce the size of the link state database for routers within the area.

Another feature of OSPF is that it can use the Type of Service (TOS) field in IP packets to route traffic accordingly. The TOS field has been standardized for IP to enable software to attempt to minimize delay, maximize throughput, maximize reliability or minimize monetary cost. It is a common situation for many of these attributes to be mutually exclusive. In order to differentiate routes based on TOS, each interface must be assigned a separate cost for each TOS. The OSPF routing process then uses this information to build a separate shortest-path tree for each type of service. Each packet can be routed by the type of service requested. Type of Service is explained in greater detail in chapter III.
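
Building on the spf_tree sketch above, per-TOS routing amounts to running the same calculation once per type of service, assuming each interface has been assigned a separate cost for each TOS:

    # Per-TOS routing: one shortest-path tree per type of service.
    # Reuses the spf_tree sketch from the OSPF discussion above.
    def trees_by_tos(graphs_by_tos, root):
        # graphs_by_tos: TOS value -> graph built with that TOS's costs
        return {tos: spf_tree(graph, root)
                for tos, graph in graphs_by_tos.items()}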


Exterior Gateway Protocols

Border Gateway Protocol

Border Gateway Protocol (BGP) was developed by the BGP Working Group of the Internet Engineering Task Force. BGP was designed to replace EGP, the original exterior gateway protocol that is still being used widely in the Defense Data Network and the National Science Foundation Network. BGP is an exterior gateway routing protocol intended to be used to communicate between different autonomous systems. The fundamental structure of BGP, as with any EGP, is the same as an IGP, except that a stronger focus is placed on policy and administrative controls rather than immediate responsiveness toward topology changes.

BGP is used to advertise routing information about networks internal to its own autonomous system to other ASs. BGP is also used to advertise routes for external networks to routers internal to its autonomous system. Routers inside a particular AS are referred to as internal neighbors, and routers outside the AS are referred to as external neighbors. The routers speaking BGP have to communicate with internal routers speaking an IGP. Within one AS, there can be more than one router speaking BGP, and more than one border gateway to external ASs. BGP allows for the BGP speaker and the border gateway to be physically separate machines. The only restriction is that the BGP speakers and the border gateways share the same physical network. Indirect speakers are handled in much the same way that OSPF handles "forwarding" addresses: a route is set up pointing externally bound packets to the physical gateway.

A major feature of BGP is that it allows the AS to be configured to carry transit traffic from one AS to another. Transit traffic is defined as traffic that does not originate or terminate in the local AS. Traffic that does originate or terminate in a local AS is called local traffic. Depending on how an AS is physically connected and configured, it can fall into one of three categories:

a. A stub AS, which has a connection to only one other AS and therefore carries only local traffic.

b. A multihomed AS, which has connections to more than one other AS, but refuses to carry transit traffic.

c. A transit AS, which has connections to more than one other AS and is designed, under certain policy restrictions, to carry both local and transit traffic.

The way routing information flows through ASs can be described by the following diagram:

Figure 2

As packets hit the border gateway destined for an internal network, there has to be some cooperation between BGP and the IGP. This is also true for transit traffic. Traffic that enters on one border gateway must be routed to the other side of the AS so that it can leave through the appropriate exit gateway. The simplest way to accomplish this is by using some type of encapsulation. The first border gateway would encapsulate the packet and ship it off to the other border gateway, which would in turn unzip the envelope and put the packet out toward the next gateway. This approach requires that the IGP can forward data directly to the border gateway, and that there is some way to recognize the encapsulated packet at the other end.

Further coordination between the IGP and BGP exists in the timing of advertising routes. There is a time lag between when the border gateway receives new routing information, which has originated from another border gateway within the same AS, and the moment when the IGP is capable of routing packets to that border gateway. During this time window, either incorrect routing can occur or routing "black holes" may appear. This scenario can be prevented by requiring that the IGP converge on a route to the proper border gateway, for a particular external network, before advertising that route to other ASs.

The major goals of policy-based routing arise from political, security, or economic considerations. For example, if an AS refuses to carry traffic for another AS, BGP can enforce a policy prohibiting it. BGP can enforce policy in the following situations:

a. A multihomed AS can refuse to act as a transit AS for other ASs. It simply will not advertise routes to networks other than those directly connected to it.

b. A multihomed AS can become a transit AS for a restricted set of adjacent ASs. This can be accomplished by advertising its routing information only to this set of ASs.

c. An AS can favor or disfavor the use of certain ASs for carrying transit traffic from itself.

d. An AS can use a variety of metrics to compare routes through different ASs, including diameter, link speed, capacity, tendency to become congested, and quality of operation. Information about these qualities could be determined by methods other than BGP.

For the most part, any BGP policy should be able to control announcements to adjacent ASs and preferably extend down to particular networks. The granularity of extending down to individual users, in most cases, would not be useful. Also, BGP implementations should be able to ignore routes with certain ASs in the path. This could also be accomplished by raising the relative weight of that AS to infinity.
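
Both of these controls reduce to simple filters. In the sketch below, the AS numbers are invented for illustration:

    # Sketch of two policy filters; the AS numbers are illustrative.
    REFUSE_TRANSIT_THROUGH = {666}     # never use routes crossing these ASs
    DO_NOT_ANNOUNCE_TO = {100}         # never advertise routes to these ASs

    def accept_route(as_path):
        # Ignore any route whose AS path traverses a banned AS.
        return not (set(as_path) & REFUSE_TRANSIT_THROUGH)

    def may_announce(neighbor_as):
        return neighbor_as not in DO_NOT_ANNOUNCE_TO

    print(accept_route([200, 666, 300]))    # False: banned AS in the path
    print(may_announce(100))                # False: policy forbids it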

BGP uses an incremental update system. In each update period, only changes in the routing information since the last update are sent, so that complete refreshes of the entire routing table are not required. This keeps the network overhead down to a minimum.

BGP uses TCP (Transmission Control Protocol) as its reliable transmission protocol. This allowed the designers to avoid reinventing the wheel when developing this protocol; the designers did not have to implement fragmentation, retransmission, acknowledgements, and sequencing. The choice of TCP as a transport protocol also allows any authentication scheme used by TCP to be used in addition to the BGP authentication mechanism.


CHAPTER III

OVERALL ANALYSIS

Each routing protocol has some strong points, as well as room for improvement. This section will discuss various features of the protocols, and compare and contrast how each has succeeded or failed in design. These features are representative of basic principles of routing protocols that will be critical as the evolution of networking technology continues.


Information flows

A basic problem stems from routers passing around information about networks that they do not know about firsthand. A network must keep track of where the information flows, and who knows what. RIP, IGRP, and RTMP all use a form of split horizon to minimize confusion. The basic procedure is to keep track of which interface each entry came in on, and not to rebroadcast the information out that port. This practice keeps old or incorrect information from being propagated. OSPF has no need for something exactly like split horizon, since each LSA is marked with the router it comes from. However, OSPF does take care not to send information where it is not required. For example, OSPF sends no information into stub areas other than the address of the exit gateway. RIP could achieve the same functionality by setting up each host in the stub area with a static route. The RIP daemon (routed) would then be run in quiet mode, meaning that it would not broadcast any RIP update packets.
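
The split horizon rule itself is easy to state in code. A minimal sketch, assuming each routing table entry is tagged with the interface it was learned on:

    # Sketch of split horizon: never re-advertise a route out the same
    # interface it was learned on.

    def routes_to_advertise(routing_table, out_interface):
        """routing_table maps network -> (metric, interface learned on)."""
        return {net: metric
                for net, (metric, learned_on) in routing_table.items()
                if learned_on != out_interface}

    table = {"net-a": (2, "eth0"), "net-b": (1, "eth1")}
    print(routes_to_advertise(table, "eth0"))    # only net-b goes out eth0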

Load Balancing

Load balancing is the process of splitting traffic between several paths simultaneously to achieve greater effective bandwidth. RIP cannot do load balancing by the very nature of how it works. A distance vector protocol scans the routing table for the route with the lowest metric and sends off the packet; traffic will only ever be sent over the path with the lowest metric. IGRP overcomes this limitation with variance. By specifying a "fudge factor," called the variance, routes can be selected from a range of acceptable metrics. The metric of the best available route, M, is multiplied by the variance factor V. Any route whose metric falls between M and V x M is an acceptable route.
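
The acceptance test reduces to a single comparison. A sketch, with hypothetical next-hop names:

    # Sketch of IGRP-style variance: accept any route whose metric falls
    # between M, the best available metric, and V x M.

    def acceptable_routes(candidates, variance):
        """candidates maps next hop -> metric; variance is the factor V."""
        best = min(candidates.values())
        return {hop: metric for hop, metric in candidates.items()
                if metric <= variance * best}

    candidates = {"gw-1": 100, "gw-2": 180, "gw-3": 400}
    print(acceptable_routes(candidates, 2.0))    # gw-1 and gw-2 qualify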


Figure 3 demonstrates the range of acceptable routes. As was discussed in the section on IGRP, this approach has some problems. With a large enough variance, it is possible that routes will be picked that actually point in the wrong direction. This could cause routing loops and network instability. IGRP could add a requirement to the forwarding code to prevent traffic from being sent upstream.

OSPF does support load balancing over equal-cost paths, which handles the case of two parallel paths. The OSPF specification leaves the details of forwarding up to the implementation. Some implementations use the routes in round-robin fashion packet by packet; some use round robin on a per-connection basis; others use only one route. It is important to note, however, that whichever method is chosen, it will not affect interoperability.
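
The per-packet round-robin variant could be as simple as the following sketch:

    # Sketch of per-packet round robin over equal-cost OSPF paths.
    import itertools

    equal_cost_paths = ["serial0", "serial1"]
    next_path = itertools.cycle(equal_cost_paths)

    for packet in ("p1", "p2", "p3", "p4"):
        print(packet, "->", next(next_path))     # alternates between paths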

Overhead issues

Usually, when people talk about the overhead associated with a network protocol, they are speaking of factors that affect the utilization of the bandwidth. RTMP has a major failing in this area. Each RTMP Data packet contains a list of every network, even if there is nothing new to report. This is mainly caused by RTMP's theory of operation: RTMP assumes that if it hasn't heard from a distant network within the entry validity time, the network must be disconnected. Not much could be done about this without a major rewrite of the protocol.

OSPF limits traffic by having only one machine on each network act as a designated router. This keeps the majority of the routing traffic local to the individual networks. OSPF also divides large networks into areas, thereby reducing traffic by localizing it to smaller areas.

There is also another type of overhead, namely memory and CPU utilization. Distance vector protocols keep only a small amount of information on any particular host or router; the status of the network is distributed throughout the network. Shortest path first technology maintains information about the entire network at each host. Although OSPF breaks the network into areas to reduce the size of the link state database, the memory for all the data structures and the CPU cycles to calculate the shortest-path tree are considerable. The saving grace of this approach is that the price of RAM and the available power of CPUs have improved tremendously in recent years. The benefits of having smarter routers with greater speed and power are beginning to become evident. This trend fits in well with the paradigm of distributed control.

Type of Service

Paths through extremely large networks, especially between networks, vary widely in the quality of service they can provide. Some paths are more reliable than others. Some impose high call setup or per-packet charges, while others do no usage-based charging at all. Throughput and delay also vary widely. This often forces tradeoffs: the path that supplies the highest throughput may not be the one with the lowest delay or the most reasonable price. Therefore, the optimal path for traffic to follow through the network may depend on the needs of the application and its user.

Since the upper level protocols have no knowledge of the network themselves, all they can do is pass hints down to the lower layers about the requirements of any particular packet. This feature has been called "type of service." The Internet Engineering Task Force (IETF) has standardized Type of Service (TOS) for IP packets. The design they decided on uses a four-bit field. The meanings of the bits are listed in Table 1.

Table 1. TOS field codes

    TOS field      Meaning
    ---------      ----------------------
      1000         minimize delay
      0100         maximize throughput
      0010         maximize reliability
      0001         minimize monetary cost
      0000         normal service

When a router wants to forward a packet, it looks up the destination network address in the routing table, collecting a set of candidate routes. If the set is empty, a "host unreachable" message is sent back to the application. If the set is not empty, the router looks through the list to see which route best fits the TOS requested. Every route should have a TOS associated with it. Routes learned from routing protocols that support TOS will be assigned the appropriate value; routes acquired from protocols that do not support TOS will be assigned the default, 0000. Static routes must be assigned a TOS by the network administrator. If multiple types of service are requested, the protocol picks the route according to a priority list defined in the implementation.

Asynchronous Transfer Mode (ATM), an emerging network standard, is designed to use the type of service field. ATM will have enough bandwidth to handle video, voice, and data simultaneously. Due to the diversity of information it can carry, there are great differences in the demands each type of data places on the network. Video, for example, is extremely delay-sensitive. Each video packet would be labelled as such, and it would receive the highest priority for forwarding at the routers.

There was considerable debate over exactly what the TOS value for minimizing monetary cost should mean. Recently, there has been a tremendous increase in commercial networks initiating usage-based charging. Some people felt that the TOS field 0001 should mean "must not cost money." This was ultimately rejected for several reasons. First, it requires a particular level of service instead of minimizing an attribute. When a user asks to minimize delay, the result still might not be what he considers "low" delay; the routing protocol is designed to pick a route, not guarantee a particular level of service. Second, it would require special code in both hosts and routers. Finally, since use of the TOS field is determined by the network administrator of a particular system, a user requesting a free path might not get one if the packet passes through a routing domain that has not implemented type of service.

There were three different proposals for routing based on the TOS field, called Strong TOS, Weak TOS, and Very Weak TOS. Strong TOS requires that a route be used only if it exactly matches the type of service requested; if there is no route with the required TOS, the packet is discarded. Weak TOS is like Strong TOS, except that if there is no route with the requested TOS, the default (0000) is used; if there is no route with either the requested TOS or the default, the packet is discarded. Very Weak TOS is like Weak TOS, except that the route with the numerically smallest TOS is used if there is no route with either the requested TOS or the default. The IETF has adopted Weak TOS.
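
The adopted rule can be stated precisely in a few lines. A sketch of Weak TOS route selection, assuming each candidate route carries a TOS value as described above:

    # Sketch of Weak TOS: prefer an exact TOS match, fall back to the
    # default (0000), and otherwise discard the packet.

    DEFAULT_TOS = 0b0000

    def select_route(candidates, requested_tos):
        """candidates is a list of (tos, next hop) pairs for a destination."""
        for tos, hop in candidates:
            if tos == requested_tos:
                return hop                        # exact match
        for tos, hop in candidates:
            if tos == DEFAULT_TOS:
                return hop                        # fall back to the default
        return None                               # no route: discard packet

    routes = [(0b0000, "gw-default"), (0b1000, "gw-lowdelay")]
    print(select_route(routes, 0b1000))   # gw-lowdelay
    print(select_route(routes, 0b0100))   # gw-default (no throughput route)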

It is permissible to change type of service in the middle of a connection if the nature of the traffic changes. An example is the Simple Mail Transfer Protocol (SMTP). SMTP first sets up a connection and verifies the sender and recipient using short command messages, then transfers the body of the message using a bulk data transfer. The command messages could be sent with a TOS that minimizes delay, and the body of the message with a TOS that maximizes throughput. File Transfer Protocol (FTP) could make a similar distinction between interactive user commands and the actual file transfers.
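
On a Unix host, an application can pass this hint to the network layer with a socket option. The sketch below assumes a Berkeley sockets interface; the IP_TOS option and the exact constants are operating-system dependent, and the values shown are the TOS codes from Table 1 shifted into their position within the TOS byte:

    # Sketch: an SMTP-like client switching type of service mid-connection.
    # 0x10 = minimize delay, 0x08 = maximize throughput.
    import socket

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x10)   # command phase
    # ... connect and exchange the short command messages ...
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, 0x08)   # bulk transfer
    # ... send the body of the message ...
    s.close()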

By the same logic, both ends of a connection could be using different types of service. For example, the sending side of a mass data transfer should request that the throughput be maximized. The receiving side might request that delay be minimized, since it would only be sending back small acknowledgement packets.

Triggered Updates

An interesting phenomenon is caused by flash updates. As discussed in chapter II, flash updates are used to propagate new information around the network quickly, so that old information will be purged by a cascade of flash updates. On a broadcast network such as Ethernet, if several routers are connected to the same network, they will all receive an update at the same time. Each will respond the same way, broadcasting a flash update of its own. When they all try to send out the message at the same time, there will be collisions, and they will have to rebroadcast again, causing a broadcast storm. The situation is sometimes compounded by organizations purchasing exactly the same hardware for many of their hosts or routers, running the same software, so the responses to a flash update come at almost exactly the same time. This scenario can be prevented by requiring each router to wait a small random amount of time before sending out a flash update, preventing the updates from flooding the network at the same instant. This approach is called triggered updates.
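
The random delay amounts to only a few lines of code. A sketch:

    # Sketch of a triggered update: wait a small random interval before
    # broadcasting, so routers on the same wire do not all transmit at once.
    import random
    import time

    def send_triggered_update(broadcast, max_delay=5.0):
        time.sleep(random.uniform(0, max_delay))  # each router picks its own
        broadcast()

    send_triggered_update(lambda: print("flash update sent"))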

Regional Accountability

The division of control between autonomous systems is accomplished by the border gateway protocol. Routing information is passed between ASs according to the various policy controls in the configuration. With coordination between administrations, controls could be set up to allow access for particular services, such as electronic mail, while not allowing complete access to the entire network. This would allow organizations to share a limited amount of information services without opening up their entire network. Eventually, systems from completely different commercial or governmental organizations could be seamlessly connected, according to mutually agreed upon policies. Today, Internet gateways provide a clumsy mechanism for passing electronic mail between organizations. In the future, any application could be allowed or denied access between autonomous systems.

In addition to controlling network access, BGP can isolate a routing aberration to one part of the network. Each geographical area or organizational group would be responsible for maintaining its own network; this is the concept of regional accountability. Erroneous network information, whether produced by mistake or intentionally, would be limited to one autonomous system. The type of route flapping that was demonstrated in chapter II with RIP would not affect the entire internet. Even the routing loops that might be created by improper use of variance in IGRP would not expand to consume the entire network universe. Regional accountability is a key concept in building a network of different autonomous systems.


CHAPTER IV

SUMMARY AND CONCLUSIONS

Many different people have developed routing protocols, with completely different approaches to the problem. The design goals of routing protocols were influenced by motivating factors including stability, flexibility, robustness, security, economic prudence, and political righteousness. As networks grow larger, the need for decentralization of control becomes more apparent. It is ironic that as the network becomes more decentralized, the planning of the network must be more centralized, for it is now possible for people to make configuration or implementation decisions that adversely affect people they did not even know existed.

More people every day are becoming dependent on the connectivity that networks provide. The architecture and design of the networking system have become the foundation on which powerful tools can be built. The computer has become a part of the business infrastructure to the same degree as the telephone. Most people today could not even imagine a job without a computer on their desk, and it is all but inconceivable that any computer would not be connected to a network. As the number of nodes increases and networks become arbitrarily complex, the potential for network anarchy grows. The stability and versatility of routing protocols will keep order through chaos and change.

Unfortunately, it looks like things are going to get more complex before they get any simpler. Maintaining a reasonable level of connectivity will become more complicated and more critical. Routing protocols represent the highest level of this connectivity. They can be configured to ensure that people remain connected, or to keep intruders out. The dynamic nature of the evolving network must be matched by the routing protocol. There is still much work to be done; to date, no routing protocol meets all the requirements to everyone's satisfaction.

Decentralization of control in networking is a major driving factor in network architecture and design. Sophisticated applications that require significant network resources are now being deployed on a small scale. These applications must run on a network specifically optimized for the application. The managers in charge of these distributed applications need the power to alter the configuration of their portion of the network, and these changes should not jeopardize the stability of the internet as a whole. To ensure that local changes will not have undesirable effects, firewalls will have to be installed to prevent the escalation of small problems. Hundreds of people having the power to modify the network creates the potential for complete chaos. Distributed control is inevitable with the availability of inexpensive, powerful distributed applications. Managers of these systems must have complete control over their local environment. Exterior gateway protocols can provide the necessary firewall.

There is a considerable amount of information that must be processed for each network in the internet. To reduce this overhead, we must somehow reduce the number of networks any one router has to consider. Since the number of networks is increasing at an ever faster rate, the only way to reduce the overhead is to add another layer of hierarchy. Computer networks are starting to see the same problem the telephone system had as its users increased. As more people were connected with phone service, connectivity between central offices became too much of a burden. The Bell System solved this problem by installing a five-layer hierarchy of telephone connectivity. Now is the time to set up a hierarchy of network control with routing protocols.

Large internets maintained by corporations and universities should be divided into smaller areas, with managers for each segment. By the same logic, internets in a large geographical area could be put together under a regional management. This is already being done on the Internet by groups such as Colorado SuperNet, which coordinates routing throughout the state. Dividing up existing networks belonging to a single organization (which could be further divided by departments) is a new concept and is not being done currently; extending regional accountability down to a department has never been possible until recently. Today, with the availability of BGP on routers from many manufacturers, division of network control can be a reality instead of just a theory.

There will be other benefits associated with deploying EGPs beyond regional accountability. The routing tables maintained dynamically in the routers must have information about every network in the routing domain. Dividing the internet into smaller routing domains will reduce the size of the routing tables, and each routing update will be smaller, since there will be fewer networks to include in the report. Flash updates will have to reach fewer routers than before, greatly decreasing the time to convergence. With the decrease in the size of the routing information, CPU utilization on the router will be more effective, allowing faster selection of appropriate routes. More available CPU power will also allow widespread use of sophisticated routing options such as type of service, a step toward the goal of carrying diverse classes of traffic.

Deployment of an EGP and small autonomous groups is a major improvement in every respect. The principles of autonomous systems, regional accountability, and Exterior Gateway Protocols represent a core of universal features that will have to be used to meet the demands of the distributed networks of the future. It is these principles that will be the vehicles to move networking into the next phase. It will not be long before even a small network is a complex, interconnected mesh of heterogeneous devices. We must have the technology to manage the network, and to take advantage of it. Border gateways and small autonomous groups will allow for order through chaos. These core principles will be critical to the success of emerging technologies and the evolution of the communications industry.


Bibliography

Almquist, P., "Type of Service in the Internet Protocol Suite", San Francisco, RFC 1349, July 1992

Comer, Douglas, Internetworking with TCP/IP, Vol I: Principles, Protocols, and Architecture, Englewood Cliffs, New Jersey, Prentice-Hall, 1991

Comer, D., Stevens, D., Internetworking with TCP/IP, Vol II: Design, Implementation, and Internals, Englewood Cliffs, New Jersey, Prentice-Hall, 1991

Constable, George, Understanding Computers: Communications, Alexandria, Virginia, Time-Life Books, 1990

Freeman, Roger, Telecommunication System Engineering, New York, John Wiley & Sons, 1989

Graham, Knuth, Patashnik, Concrete Mathematics, Reading, Mass., Addison-Wesley Publishing Co., 1989

Hedrick, Charles, "Routing Information Protocol", Rutgers University, RFC-1058, June 1988

Hedrick, Charles, "An Introduction to IGRP", Rutgers University, 22 Aug 1991

Kamman, Alan, "The Evolution of Telecommunications Technology", Telecommunications: Global Networking Supplement, April 1989

Krol, E., "The Hitchhikers Guide to the Internet", University of Illinois, 25 Aug 1987

Linden, Fabian, "Getting a Move on", Across the Board, December 1991

Lougheed, K., Rekhter, Y., "A Border Gateway Protocol 3 (BGP-3)", Cisco Systems, IBM, October 1991

Mabee, Carleton, The American Leonardo - A Life of Samuel F. B. Morse, New York, Alfred Knopf, 1943

Moy, John, editor, "OSPF Protocol Analysis", Westborough, MA, RFC 1245, July 1991

Moy, John, editor, "Experience with the OSPF Protocol", Westborough, MA, RFC 1246, July 1991

Moy, John, editor, "OSPF Version 2", Westborough, MA, RFC 1247, July 1991

Nemeth, Evi, Snyder, Garth, Seebass, Scott, UNIX System Administration Handbook, Englewood Cliffs, NJ, Prentice-Hall, 1989

Rekhter, Y., "Application of the Border Gateway Protocol in the Internet", IBM Corp., RFC 1164, October 1991

Sidhu, Gursharan, Inside AppleTalk, Second Edition, Reading, MA, Addison-Wesley, May 1990

Tanenbaum, Andrew, Computer Networks, New Jersey, Prentice-Hall Inc., 1988


Glossary

AS (autonomous system) A collection of networks, either under one set of management or closely coordinated managements.

ATM (Asynchronous Transfer Mode) An emerging network standard with enormous bandwidth potential.

BGP (Border Gateway Protocol) An EGP routing protocol developed by the IETF to improve upon the first widely used EGP, simply called EGP.

daemon A process in Unix that performs a particular task. Daemons are usually started at boot time and continue to run as long as the system is up. The term comes from Greek mythology, where daemons were guardian spirits that protected people from afar.

EGP (Exterior Gateway Protocol) A class of routing protocols used to communicate between different autonomous systems. EGP is also the name of the first widely used implementation of an EGP.

IGP (Interior Gateway Protocol) A class of routing protocols used to communicate between routers and hosts within a single autonomous system.

IGRP (Interior Gateway Routing Protocol) An IGP routing protocol developed by Cisco Systems that uses distance vector technology.

IP (Internet Protocol) A layer 3 (network layer) protocol making up part of the TCP/IP suite.

LSA (link state advertisement) The name of a routing update in OSPF, which describes the local state of a router or network.

OSPF (Open Shortest Path First) An IGP routing protocol developed by the IETF that uses shortest path first technology.

RIP (Routing Information Protocol) The first widely used IGP, based on distance vector technology.

RTMP (Routing Table Maintenance Protocol) An IGP routing protocol developed by Apple Computer to work seamlessly with the Apple operating system.

SPF (shortest path first) A class of routing algorithms that bases its routing table on a shortest-path tree of the network.

TOS (Type of Service) A reserved field in IP packets that helps determine what class of network service each packet should receive.