
Wednesday, March 12, 2014

Configuring Debian Wheezy Servers as Routers for a Four-Office Virtual Network

A previous article described installing Debian Wheezy servers on Oracle VirtualBox VMs.  This article continues setting up a four-office test environment by deploying Debian Wheezy with Quagga routing software.  These routers are the nervous system of the virtual network, connecting the offices together and to the Internet over redundant links.

Quagga on Debian

Quagga is a routing package whose command structure is similar to Cisco's IOS.  Forked from the Zebra project, Quagga continues to add features.  By default, it is accessed through local telnet sessions to specific ports for each supported routing protocol.  An integrated vty shell is available if so compiled, but this article will use the more familiar process of editing separate configuration files for different daemons.  For this example, Open Shortest Path First (OSPF) is utilized for its fast convergence and rapid recovery from faults.

Installation is simple:  "apt-get install quagga."  A few configuration steps are required before it will work.  First, edit the /etc/quagga/daemons file, which contains a series of statements defining which daemons are active; for our configuration, set "zebra" and "ospfd" to "yes."
  • zebra=yes
  • bgpd=no
  • ospfd=yes
  • ospf6d=no
  • ripd=no
  • ripngd=no
  • isisd=no
  • babeld=no
Each active daemon must also have a configuration file present in the /etc/quagga/ directory owned by user and group quagga with permissions 640.  Create "zebra.conf" and "ospfd.conf" files with the statements:
  • password ########
  • enable password ########
The daemons may then be started with "service quagga start."
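One way to script these steps -- a sketch only, assuming a root shell and the stock Debian Wheezy package layout -- is shown below.  Substitute real passwords for the ######## placeholders.

    # enable the zebra and ospfd daemons
    sed -i 's/^zebra=no/zebra=yes/; s/^ospfd=no/ospfd=yes/' /etc/quagga/daemons

    # create minimal config files with the required ownership and permissions
    for f in zebra.conf ospfd.conf; do
        printf 'password ########\nenable password ########\n' > /etc/quagga/$f
        chown quagga:quagga /etc/quagga/$f
        chmod 640 /etc/quagga/$f
    done

    # start the routing daemons
    service quagga start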

The first daemon to configure is zebra -- a name inherited from the original open-source project from which Quagga was forked.  Start by establishing a telnet session to port 2601, where the IOS-like login and syntax are apparent.  Enable password encryption ("service password-encryption") and assign access passwords.  You may configure the interfaces with IP addresses, but it is not necessary because the daemons pick them up automatically.
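A first session with zebra might run roughly as follows (a sketch; the hostname is illustrative and the passwords are whatever was placed in zebra.conf):

    telnet localhost 2601
      (log in with the vty password, then:)
    enable
    configure terminal
    hostname coudersport-zebra
    service password-encryption
    password ########
    enable password ########
    end
    write file
    quit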

OSPF is configured over a telnet session to port 2604.  The configurations used in this model are discussed in detail below.

Quagga injects its routes into the Linux kernel routing table. Notice some of the differences between the Quagga OSPF route metrics (costs) and those in the Linux kernel: under Quagga, directly connected interfaces have a default cost of 10, but in the Linux kernel they are 0.  There are other important differences.  For instance, you may not assign a device IP address through Quagga.  To mimic a dedicated router loopback address, create a separate loopback alias on the Linux host (in /etc/network/interfaces) and give it a routable IP address.  Thus, the Coudersport Router is configured with lo:10 at address 10.0.0.254.
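For example, the Coudersport loopback alias can be defined in /etc/network/interfaces with a stanza along these lines (a sketch; the /32 mask is an assumption -- any mask that keeps the address routable will do):

    auto lo:10
    iface lo:10 inet static
        address 10.0.0.254
        netmask 255.255.255.255

Bring it up with "ifup lo:10".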

OSPF WAN Model Overview

The model network consists of:
  1. Coudersport – The Host Laptop upon which VirtualBox runs.  It acts as a proxy-firewall Internet gateway router connected to the Internet by a Wireless LAN device and provided with three "public" Host-Only VirtualBox NICs (vboxnet0 through 2).
  2. Philadelphia – A WAN Router with three "public" interfaces and one private interface for a subnetted Class A network.
  3. Harrisburg – A WAN Router with three "public" interfaces and one private interface for a subnetted Class A network.
  4. Pittsburgh – A WAN Router with three "public" interfaces and one private interface for a subnetted Class A network.
  5. A backbone (Area 0) network, consisting of point-to-point connections between the above "public" interfaces.
The rationale for this design is that it mimics a live WAN without the additional "public" routers and VPN connections a real deployment would require.  The point-to-point links on the Area 0 backbone behave like the desired WAN topology while avoiding that overhead.

Initial WAN Router Configurations

Each router is configured as an Area router that summarizes the private routes it serves.  The basic OSPF configuration is quite simple:
  • ospf router-id 10.0.0.254 (10.64.0.254, 10.128.0.254, 10.192.0.254) 
  • network 10.0.0.0/10 area 10.0.0.0 (10.64.0.0/10 area 10.64.0.0, etc.) 
  • network 172.16.0.0/16 area 0.0.0.0 
  • area 0.0.0.0 range 172.16.0.0/16 
  • area 10.0.0.0 range 10.0.0.0/10 (area 10.64.0.0 range 10.64.0.0/10, etc.)
This configuration sets the router ID, defines the private network served and its area number, identifies the backbone network to which the "public" interfaces are connected, and summarizes the routes for the connected networks.  More on that later.
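Put together, the ospfd.conf block for, say, the Philadelphia router might look like the following sketch (only the router ID, network and range statements change on the other routers):

    router ospf
     ospf router-id 10.64.0.254
     network 10.64.0.0/10 area 10.64.0.0
     network 172.16.0.0/16 area 0.0.0.0
     area 0.0.0.0 range 172.16.0.0/16
     area 10.64.0.0 range 10.64.0.0/10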
 
The Coudersport proxy gateway, connected to the Internet, contains two additional lines:
  • passive-interface wlan0
  • default-information originate metric 1
The first statement prevents OSPF update announcements on the public, Internet-connected interface.  The second statement instructs OSPF to announce that this device is a default gateway -- a route to networks unknown to the routing process on other routers.
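Putting it all together, the Coudersport ospfd.conf block might read as follows (a sketch; wlan0 is the Internet-facing interface described above):

    router ospf
     ospf router-id 10.0.0.254
     network 10.0.0.0/10 area 10.0.0.0
     network 172.16.0.0/16 area 0.0.0.0
     area 0.0.0.0 range 172.16.0.0/16
     area 10.0.0.0 range 10.0.0.0/10
     passive-interface wlan0
     default-information originate metric 1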

ABRs, ASBRs and Route Summarization

The backbone network -- Area 0 or Area 0.0.0.0 -- forms the core of the network.  All other areas are connected to the core, either directly or through virtual-links.  In this model, the core is an isolated network (not connected to any external systems).

Philadelphia, Pittsburgh and Harrisburg act as Area Border Routers (ABRs) -- routers configured with one or more areas and connected to the backbone.  Coudersport, with its Internet connection, acts as an Autonomous System Border Router (ASBR) -- a router that also connects the OSPF domain to routes learned outside it (e.g. via BGP).  In this case, it simply advertises a default gateway.

Only ABRs and ASBRs may provide route summaries.  For instance, the statement "area 10.64.0.0 range 10.64.0.0/10" on the Philadelphia router advertises a summary of every network in this range, rather than each individual network as a separate entry.  Thus, the summary includes networks 10.64.0.0/24, 10.64.1.0/24, 10.64.2.0/24... 10.127.255.0/24 -- 16,384 24-bit (254-host) networks.  This drastically reduces overhead for routing tables and updates between areas. The four routers are each connected to the backbone Area 0 and provide route summaries to the other areas.

Basic OSPF Router Operation

Although simple, the configuration described above is also fully functional.  It provides a redundant mesh network, in which links -- not routers -- may fail and communications continue uninterrupted.  The simplicity of this bare-bones configuration is demonstrated in the video below.  Each router directs traffic directly to adjacent areas -- it lacks any traffic shaping or preferential behavior.  It works, but only for simple configurations in which "all things are equal."  The video initially depicts only the Coudersport Router operating and follows changes in the routing tables as each other router is started up and exchanges link state information with the rest of the network.




OSPF Operation and Updates

The video above displays a series of routers starting up while being monitored by a Nagios/NagVis server.  That's fine, but there is a lot more happening.  Each router running a link-state protocol keeps track of three sets of information in tables. The information is:

  • Its immediate neighbor routers.
  • All the other routers in the network, or in its area of the network, and their attached networks.
  • The best paths to each destination.
And the tables are:
  • OSPF neighbor table = adjacency database
  • OSPF topology table = OSPF topology database = LSDB
  • Routing table = forwarding database
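Each of these tables can be inspected from the Quagga vty -- the first two through the ospfd session (port 2604), the last through zebra (port 2601):
  • show ip ospf neighbor -- the adjacency database
  • show ip ospf database -- the topology database (LSDB)
  • show ip route -- the routing/forwarding table as installed in the kernel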
During normal operation, OSPF routers merely keep track of their neighboring routers; they exchange link-state information only when there is a topology change.  A defined process then efficiently propagates the topology change among routers.  That process is:
  1. When a link changes state, the device that detected the change creates a link-state advertisement (LSA) concerning that link.
  2. The LSA propagates to all neighboring devices using a special multicast address (224.0.0.5).
  3. Each routing device stores the LSA and forwards it to all neighboring devices in the same area.
  4. This flooding of the LSA ensures that all routing devices can update their databases and then update their routing tables to reflect the new topology.
  5. The LSDB is used to calculate the best paths through the network.
  6. Link-state routers find the best paths to a destination by applying Dijkstra’s algorithm, also known as SPF, against the LSDB to build the SPF tree.
  7. Each router selects the best paths from its SPF tree and places them in its routing table.
To summarize, OSPF routers carry out four basic tasks:
  • Neighbor discovery, to form adjacencies
  • Flooding link-state information, to facilitate LSDBs being built in each router
  • Running SPF to calculate the shortest path to all known destinations
  • Populating the routing table with the best routes to all known destinations

Using OSPF Interface Costs to Modify Routes

The model configuration above is simple because traffic takes the shortest route as defined by the number of hops -- and direct routes are available as long as all the links are active.  That may not be desirable.  Suppose there is "a reason" for Philadelphia, Harrisburg and Pittsburgh traffic to route preferentially through the Philadelphia - Coudersport link, through the Pittsburgh - Coudersport link if that fails, and as a last resort through the Harrisburg - Coudersport link.  One way to achieve that is by assigning administrative costs to interfaces.

Using Quagga, telnet to localhost on port 2604 (ospfd) and issue the "configure terminal" command.  Go to the "interface" set of commands and assign a cost with the command "ip ospf cost #" to prioritize traffic.  The greater the cost, the less preferred a path through that interface; costs are additive along a path.
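For example, setting a cost of 100 on one of Coudersport's backbone interfaces might look like the sketch below (eth1 is a hypothetical interface name -- substitute the interface that faces Pittsburgh):

    telnet localhost 2604
      (log in and enter enable mode, then:)
    configure terminal
    interface eth1
     ip ospf cost 100
    end
    write file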

To achieve the results described above, we will use the following interface costs:
  • Coudersport Router: 100 on the Pittsburgh interface, 200 on the Harrisburg interface
  • Pittsburgh Router: 100 on the Coudersport interface
  • Harrisburg Router: 200 on the Coudersport interface
Once these costs are assigned, the network will perform as desired.  The video below shows how it works.

Using OSPF Interface Bandwidth to Modify Routes

Assigning administrative costs to interfaces may work for small networks and simple configurations, but not for larger, dynamically changing ones.  It quickly becomes a time-consuming, cumbersome and error-prone administrative task.  A more viable option is to assign bandwidth values to interfaces and let OSPF calculate costs from them.

The cost is calculated by dividing a reference bandwidth by the value of bandwidth assigned to an interface.  Thus, the greater the bandwidth, the lower the cost.  If you choose to override the default reference bandwidth used for the calculation using the "auto-cost reference-bandwidth" command, make sure to do so on ALL routers and using the same value.  If different reference bandwidths are used, inappropriate cost calculations result and faster interfaces may end up with higher costs than slower ones.

The default reference bandwidth is 100 Mb/s, but that is no longer a desirable value.  The minimum calculated cost is 1, so Gigabit Ethernet and Fast Ethernet ports would both have a cost of 1.  Override the value with "auto-cost reference-bandwidth 1000" if Gigabit Ethernet is the fastest speed on the network.
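In Quagga, the reference bandwidth is set under "router ospf" in ospfd.conf (or from the vty) on every router; a sketch:

    router ospf
     auto-cost reference-bandwidth 1000

With that reference, a 36,000 kb/s link works out to a cost of roughly 27 (1,000 Mb/s ÷ 36 Mb/s), while a 1,500 kb/s T-1 works out to roughly 666.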

For this model, we will assign bandwidths to all of the "public" interfaces in the backbone Area 0 (a configuration sketch follows the list):

  • Coudersport - Philadelphia: 36,000 kb/s (T-3)
  • Coudersport - Harrisburg: 1,500 kb/s (T-1)
  • Coudersport - Pittsburgh: 18,000 kb/s (1/2 T-3 or 12 x T-1)
  • Harrisburg - Philadelphia: 72,000 kb/s (2 x T-3)
  • Harrisburg - Pittsburgh: 72,000 kb/s (2 x T-3)
  • Philadelphia - Pittsburgh: 36,000 kb/s (T-3)
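These values are assigned as interface statements through the zebra daemon (telnet port 2601), since the bandwidth belongs to the interface rather than to OSPF.  A sketch for the Coudersport zebra.conf, with hypothetical interface names eth0 through eth2 for the Philadelphia, Harrisburg and Pittsburgh links:

    interface eth0
     bandwidth 36000
    interface eth1
     bandwidth 1500
    interface eth2
     bandwidth 18000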

The results are interesting.  Philadelphia and Harrisburg still preferentially route to the Internet through the Philadelphia - Coudersport link, and through Pittsburgh if that fails.  Pittsburgh, however, routes through the Pittsburgh - Coudersport link.  Thus, we distribute Internet traffic over two links instead of one.  The same result can be achieved with administrative interface costs, but doing so for a large network would be a burdensome administrative task.  That does not mean interface administrative costs are not useful -- they override bandwidth cost calculations when assigned and may be used for fine tuning -- but the bandwidth calculation is automated and effective.

The video below demonstrates configuring bandwidth costs and how routes change as interfaces fail.

Merely setting bandwidths in Quagga does not actually set interface speeds.  This model's routers are built with Intel PRO/1000 desktop adapters on the virtual machines, so they operate at Gigabit Ethernet speed.  To accurately reflect WAN performance, the link speeds must be reduced to much lower values.  Changing the adapter speed through the Linux driver options limits the choice to Gigabit Ethernet, Fast Ethernet and Ethernet speeds -- 1,000, 100 and 10 Mb/s, respectively -- and is not granular enough to model WANs.  Another option is to write iptables rules to limit traffic; that works well and can also be used to configure firewall QoS features.  But a simpler solution is to install the package "wondershaper."  It is not feature-rich, but it does everything we need for this model -- it limits link throughput to whatever value we want.  From the man page:

  • wondershaper [ interface ] [ downlink ] [ uplink ]
  • Configures the wondershaper on the specified interface, given the specified downlink speed in kilobits per second, and the specified uplink speed in kilobits per second.
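So, for example, shaping the Coudersport end of the 1,500 kb/s T-1 link to Harrisburg might look like this (eth1 again being a hypothetical interface name):

    wondershaper eth1 1500 1500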

That's all we need to configure realistic WAN link speeds for the model.


NagVis - Zabbix Video Demonstration

Finally, a brief demonstration of the routing protocols in operation during WAN link failures.



