
Tuesday, February 17, 2015

GNS3 - VirtualBox Part 11: Debian/Ubuntu Bonded NIC Layer 2 Switches

This article demonstrates how to configure Debian/Ubuntu VirtualBox guests to operate as Layer 2 switches with bonded NICs, aggregating several adapters into higher-speed logical interfaces.


Introduction

There have been many implementations of adapter bonding -- often vendor-specific and proprietary; these implementations are not germane to this article.  Over time, published standards have replaced proprietary ones.  Linux supports seven different bonding types and a Linux Bonding HOW-TO document is available at kernel.org.

This article uses Mode 4, IEEE 802.3ad Dynamic link aggregation -- a common implementation that requires switch support and is widely supported by vendors.

Bonding Modes

The mode option specifies one of the bonding policies. The default is balance-rr (round robin). Possible values are:

Balance-rr or 0

Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.

Active-Backup or 1

Active-backup policy: Only one slave in the bond is active. A different slave becomes active if, and only if, the active slave fails. The bond's MAC address is externally visible on only one port (network adapter) to avoid confusing the switch. This mode provides fault tolerance.

Balance-XOR or 2

XOR policy: Transmit based on the selected transmit hash policy. The default policy is a simple [(source MAC address XOR'd with destination MAC address XOR packet type ID) modulo slave count]. Alternate transmit policies may be selected via the xmit_hash_policy option. This mode provides load balancing and fault tolerance. 

Broadcast or 3

Broadcast policy: transmits everything on all slave interfaces. This mode provides fault tolerance.

802.3ad or 4

IEEE 802.3ad Dynamic link aggregation. Creates aggregation groups that share the same speed and duplex settings. Utilizes all slaves in the active aggregator according to the 802.3ad specification. Slave selection for outgoing traffic is done according to the transmit hash policy, which may be changed from the default simple XOR policy via the xmit_hash_policy option. Most switches will require some type of configuration to enable 802.3ad mode. This mode provides load balancing and fault tolerance.

Balance-TLB or 5

Adaptive transmit load balancing: channel bonding that does not require any special switch support. In tlb_dynamic_lb=1 mode, the outgoing traffic is distributed according to the current load (computed relative to the speed) on each slave. In tlb_dynamic_lb=0 mode, load balancing based on current load is disabled and the load is distributed only using the hash distribution. Incoming traffic is received by the current slave. If the receiving slave fails, another slave takes over the MAC address of the failed receiving slave.

Balance-ALB or 6

Adaptive load balancing: includes balance-tlb plus receive load balancing (rlb) for IPV4 traffic, and does not require any special switch support. The receive load is distributed sequentially (round robin) among the group of highest speed slaves in the bond. When a link is reconnected or a new slave joins the bond the receive traffic is redistributed among all active slaves in the bond.
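
Once a bond is up (configured later in this article), the active policy may be confirmed through the bonding driver's sysfs interface; a minimal check, assuming a bond named bond0:
cat /sys/class/net/bond0/bonding/mode
The output lists the mode name and number, for example "802.3ad 4" for the mode used in this article.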


Installing ifenslave and Modifying Configuration Files

The first step is to install the ifenslave package (apt-get install ifenslave).  The package includes the commands and kernel module support.  Boot time kernel module loading requires adding a line -- bonding -- to /etc/modules thus:
# /etc/modules: kernel modules to load at boot time.
#
# This file contains the names of kernel modules that should be loaded
# at boot time, one per line. Lines beginning with "#" are ignored.
# Parameters can be specified after the module name.

lp
rtc
bonding
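
To load the bonding module immediately without rebooting, it may also be loaded and verified by hand:
sudo modprobe bonding
lsmod | grep bonding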


A new file -- /etc/modprobe.d/modules.conf -- is required for additional configuration.  While many configuration options may later be added to the /etc/network/interfaces file, they may also be placed in this one.

alias bond0 bonding
options bonding mode=4 miimon=100 downdelay=200 updelay=200 max_bonds=2
alias bond1 bonding
options bonding mode=4 miimon=100 downdelay=200 updelay=200 max_bonds=2

The illustrated options are not mandatory, but advisable.  The miimon option specifies the interval (in milliseconds) at which MII Link Monitoring occurs; the default value is 0, which disables MII Link Monitoring.  The downdelay and updelay options specify the delay (in milliseconds) between MII Link Monitoring detecting a state change and the change being applied; each must be a multiple of the miimon value and will be rounded automatically if it is not.  Another important option is max_bonds=#.  Its default value is 1, which allows (but does not automatically create) a bond0 interface.  If you plan to add more than one bonded interface, you will need to specify max_bonds=# with a larger value.
Additional configuration options are detailed in the kernel.org Linux Bonding HOW-TO document.
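
As an alternative to module options, the ifenslave package's ifupdown hooks accept per-bond bond-* stanzas directly in /etc/network/interfaces.  A sketch for bond0 (stanza names per the ifenslave hooks; verify against your ifenslave version):
auto bond0
iface bond0 inet manual
bond-slaves eth0 eth1
bond-mode 802.3ad
bond-miimon 100
bond-downdelay 200
bond-updelay 200
This article, however, keeps the module-option approach shown above.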

Configuring from the Command Line Interface

Bonded interfaces may be configured from the command line or added to the /etc/rc.local file.  The following commands utilize iproute2, ifenslave and the deprecated bridge-utils packages:
ifenslave bond0 eth0 eth1
ifenslave bond1 eth2 eth3 eth4

ip link set bond0 up
ip link set bond1 up
ip link add dev br0 type bridge
ip link set dev bond0 master br0
ip link set dev bond1 master br0
ip addr add 10.64.0.4/255.255.255.0 dev br0
ip route add default via 10.64.0.1
ip link set dev br0 up
brctl stp br0 on
It is not necessary to set the individual Ethernet adapters to "up" when using bonded interfaces.  However, if unbonded Ethernet adapters are to be used in the bridge, they must be set to "up" thus:
ip link set dev eth5 up
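
To undo the command-line configuration (for example, before moving to the /etc/network/interfaces approach below), the same tools reverse the steps; a sketch:
ip link set dev br0 down
ip link del dev br0
ip link set bond0 down
ip link set bond1 down
ifenslave -d bond0 eth0 eth1
ifenslave -d bond1 eth2 eth3 eth4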

Configuring /etc/network/interfaces

Alternatively, bonded Ethernet interfaces may be configured in the /etc/network/interfaces file, allowing the ifupdown package to provide high-level configuration.  The following configuration creates a bridge (br0) with a static IP address, two bonds (bond0 = eth0 and eth1; bond1 = eth2, eth3 and eth4) and three Ethernet interfaces (eth5, eth6 and eth7) that are members of bridge br0.

auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
pre-up ifenslave bond0 eth0 eth1
post-up ip link set dev bond0 master br0
pre-down ip link set dev bond0 nomaster
post-down ifenslave -d bond0 eth0 eth1

auto bond1
iface bond1 inet manual
pre-up ifenslave bond1 eth2 eth3 eth4
post-up ip link set dev bond1 master br0
pre-down ip link set dev bond1 nomaster
post-down ifenslave -d bond1 eth2 eth3 eth4
 

iface eth5 inet manual

iface eth6 inet manual

iface eth7 inet manual

auto br0
iface br0 inet static
address 10.64.0.4
netmask 255.255.255.0
gateway 10.64.0.1
dns-nameservers 192.168.1.1 8.8.8.8 4.4.4.4

bridge_stp on
bridge_waitport 0
bridge_fd 0
bridge_ports bond0 bond1 eth5 eth6 eth7
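
Before moving on to the testing commands below, a quick sanity check of the slave and port assignments is available through sysfs once the interfaces are up; for example:
cat /sys/class/net/bond0/bonding/slaves
cat /sys/class/net/bond1/bonding/slaves
ls /sys/class/net/br0/brif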

Testing the Switches

The Legacy ifconfig Command

This command is of limited utility for bonded interfaces.  Note that it lists the bond and Ethernet interfaces as up and whether they are masters (bonds) or slaves (Ethernet).  It does not specify details of master-slave relationships.
bond0     Link encap:Ethernet  HWaddr 08:00:27:1c:e8:ec 
          inet6 addr: fe80::a00:27ff:fe1c:e8ec/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:420 errors:0 dropped:3 overruns:0 frame:0
          TX packets:576 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:46060 (46.0 KB)  TX bytes:52141 (52.1 KB)

bond1     Link encap:Ethernet  HWaddr 08:00:27:83:45:d7 
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:72 errors:0 dropped:2 overruns:0 frame:0
          TX packets:473 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:8736 (8.7 KB)  TX bytes:38259 (38.2 KB)

br0       Link encap:Ethernet  HWaddr 08:00:27:1c:e8:ec 
          inet addr:10.64.0.4  Bcast:10.64.0.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe1c:e8ec/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:367 errors:0 dropped:1 overruns:0 frame:0
          TX packets:212 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:34606 (34.6 KB)  TX bytes:27137 (27.1 KB)

eth0      Link encap:Ethernet  HWaddr 08:00:27:1c:e8:ec 
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:88 errors:0 dropped:0 overruns:0 frame:0
          TX packets:32 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:8736 (8.7 KB)  TX bytes:3604 (3.6 KB)

...

eth4      Link encap:Ethernet  HWaddr 08:00:27:83:45:d7 
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:26 errors:0 dropped:2 overruns:0 frame:0
          TX packets:38 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3032 (3.0 KB)  TX bytes:6982 (6.9 KB)

The ip addr Command

This command -- part of the newer iproute2 package -- provides detailed information about interface states and master-slave relationships.  For instance, Ethernet interfaces eth0 and eth1 list bond0 as their master, and bond0 in turn lists br0 as its master.
2: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 08:00:27:1c:e8:ec brd ff:ff:ff:ff:ff:ff
3: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master bond0 state UP group default qlen 1000
    link/ether 08:00:27:1c:e8:ec brd ff:ff:ff:ff:ff:ff
...
9: eth7: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 08:00:27:81:dd:5a brd ff:ff:ff:ff:ff:ff
10: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default
    link/ether 08:00:27:1c:e8:ec brd ff:ff:ff:ff:ff:ff
    inet6 fe80::a00:27ff:fe1c:e8ec/64 scope link
       valid_lft forever preferred_lft forever
11: bond1: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default
    link/ether 08:00:27:83:45:d7 brd ff:ff:ff:ff:ff:ff
12: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 08:00:27:1c:e8:ec brd ff:ff:ff:ff:ff:ff
    inet 10.64.0.4/24 brd 10.64.0.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe1c:e8ec/64 scope link
       valid_lft forever preferred_lft forever

Listing /proc/net/bonding Files

Each bond has its own file in the /proc/net/bonding directory (for example, /proc/net/bonding/bond0) containing bond-specific information; the listing below is for a different switch whose bond uses Ethernet adapters eth2, eth3 and eth4.  Notice that the bond Transmit Hash Policy is the default, layer2.  Also notice that the default LACP rate -- slow -- applies.  LACP is the protocol that negotiates bundling between compliant switches.  The default interval is slow (30 seconds), while fast (1 second) must be manually specified.
Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2 (0)
MII Status: up
MII Polling Interval (ms): 100
Up Delay (ms): 200
Down Delay (ms): 200

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
    Aggregator ID: 1
    Number of ports: 3
    Actor Key: 17
    Partner Key: 17
    Partner Mac Address: 08:00:27:74:2e:0d

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:83:45:d7
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:1d:37:ae
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth4
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 08:00:27:7e:25:d0
Aggregator ID: 1
Slave queue ID: 0
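
If the one-second LACP interval is preferred, it may be requested with the bonding module's lacp_rate parameter (the peer switch must agree); a sketch, extending the options line used earlier in this article:
options bonding mode=4 miimon=100 downdelay=200 updelay=200 max_bonds=2 lacp_rate=1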

The Legacy brctl Commands

The legacy bridge-utils package provides three commands -- brctl show, brctl showmacs and brctl showstp -- that give an overview of basic bridge configuration and operation.

brctl show <bridge_name>

brctl show br0
bridge name     bridge id           STP enabled     interfaces
br0             8000.0800271ce8ec   yes             bond0
                                                    bond1

brctl showmacs <bridge_name>

brctl showmacs br0
port no    mac addr        is local?    ageing timer
  1    08:00:27:1c:e8:ec    yes           0.00
  2    08:00:27:83:45:d7    yes           0.00
  1    ca:02:10:48:00:06    no           0.00

brctl showstp <bridge_name>

brctl showstp br0
br0
 bridge id              8000.0800271ce8ec
 designated root        8000.0800271ce8ec
 root port                 0                     path cost                  0
 max age                  20.00                  bridge max age            20.00
 hello time                2.00                  bridge hello time          2.00
 forward delay             2.00                  bridge forward delay       2.00
 ageing time             300.00
 hello timer               0.10                  tcn timer                  0.00
 topology change timer     0.00                  gc timer                 141.52
 flags


bond0 (1)
 port id                8001                     state                forwarding
 designated root        8000.0800271ce8ec        path cost                  4
 designated bridge      8000.0800271ce8ec        message age timer          0.00
 designated port        8001                     forward delay timer        0.00
 designated cost           0                     hold timer                 0.00
 flags

bond1 (2)
 port id                8002                     state                forwarding
 designated root        8000.0800271ce8ec        path cost                100
 designated bridge      8000.0800271ce8ec        message age timer          0.00
 designated port        8002                     forward delay timer        0.00
 designated cost           0                     hold timer                 0.00
 flags
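
Because bridge-utils is deprecated, similar information is also available from the iproute2 bridge command; for example (exact filtering options vary by iproute2 version):
bridge link show
bridge fdb show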

GNS3 Configuration

The GNS3 topology depicted at the top of this article includes three Linux switches -- PHL-Core, PHL-Servers and PHL-Storage.  PHL-Core is connected to two routers over bridged Ethernet ports eth0 and eth1; it is connected to PHL-Servers over the bridged two-adapter bond0 interface (eth2 and eth3 to eth0 and eth1, respectively).  PHL-Servers is connected to PHL-Storage over the bridged three-adapter bond1 interface (eth2, eth3 and eth4 to eth0, eth1 and eth2, respectively).  The /etc/network/interfaces files for the three switches are below.

PHL-Core Configuration

auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
pre-up ifenslave bond0 eth2 eth3
post-up ip link set dev bond0 master br0
pre-down ip link set dev bond0 nomaster
post-down ifenslave -d bond0 eth2 eth3

auto br0
iface br0 inet static
address 10.64.0.2
netmask 255.255.255.0
gateway 10.64.0.1
dns-nameservers 192.168.1.1 8.8.8.8 4.4.4.4
bridge_stp on
bridge_waitport 0
bridge_fd 0
bridge_ports eth0 eth1 bond0

iface eth0 inet manual

iface eth1 inet manual

PHL-Servers Configuration

auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
pre-up ifenslave bond0 eth0 eth1
post-up ip link set dev bond0 master br0
pre-down ip link set dev bond0 nomaster
post-down ifenslave -d bond0 eth0 eth1

auto bond1
iface bond1 inet manual
pre-up ifenslave bond1 eth2 eth3 eth4
post-up ip link set dev bond1 master br0
pre-down ip link set dev bond1 nomaster
post-down ifenslave -d bond1 eth2 eth3 eth4

auto br0
iface br0 inet static
address 10.64.0.4
netmask 255.255.255.0
gateway 10.64.0.1
dns-nameservers 192.168.1.1 8.8.8.8 4.4.4.4
bridge_stp on

bridge_waitport 0
bridge_fd 0
bridge_ports bond0 bond1

PHL-Storage Configuration

auto lo
iface lo inet loopback

auto bond0
iface bond0 inet manual
pre-up ifenslave bond0 eth0 eth1 eth2
post-up ip link set dev bond0 master br0
pre-down ip link set dev bond0 nomaster
post-down ifenslave -d bond0 eth0 eth1 eth2

auto br0
iface br0 inet static
address 10.64.0.5
netmask 255.255.255.0
gateway 10.64.0.1
dns-nameservers 192.168.1.1 8.8.8.8 4.4.4.4
bridge_stp on

bridge_waitport 0
bridge_fd 0
bridge_ports bond0


Sunday, February 15, 2015

GNS3 - VirtualBox Part 10: Debian/Ubuntu bridge-utils/iproute2 Layer 2 Switches

Debian and Ubuntu Linux provide Layer 2 bridges with the older (and deprecated) bridge-utils package and newer iproute2 package.  This article demonstrates implementing bridges using both.

Introduction

Default Debian/Ubuntu installations include the iproute2 package.  The "ip" family of commands is more stable than the commands it supersedes, such as arp, ifconfig and route.  The iproute2 package also supersedes two older Linux bridge packages: bridge-utils and vlan.  However, the author finds configuring networking in the /etc/network/interfaces file simpler using the bridge-utils package; definitions that would require scripting with iproute2 need only minimal statements with bridge-utils.  Thus, this article incorporates both iproute2 and bridge-utils commands to configure bridges.

Initial Configuration

The initial VirtualBox installation uses Ubuntu Server 14.04 with eight NICs, only one of which is configured.  The /etc/network/interfaces file consists of:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
address 10.64.0.2
netmask 255.255.255.0
gateway 10.64.0.1
dns-nameservers 192.168.1.1 8.8.8.8 4.4.4.4
This configures interface eth0 with an IP address, gateway and name servers, the first of which is the local wireless LAN router.  A Cisco 7206 router has two Gigabit Ethernet interfaces, one (10.64.0.1) for the Ubuntu switch and the other (172.16.0.2) for a connection to a tun/tap device on the host laptop.  The host laptop and the Cisco 7206 router use OSPF to manage routing among the virtual environment, the local wireless LAN (192.168.1.0/24) and the default gateway.

Configuring the Layer 2 Switch

As mentioned above, the author adds the bridge-utils package (sudo apt-get install bridge-utils).  The bridge-utils package provides several statements that are used in the /etc/network/interfaces file.  The iproute2 commands create the bridge and bridge-utils commands configure it at boot time.  The following is an annotated /etc/network/interfaces file:
auto lo
iface lo inet loopback

auto br0
iface br0 inet static
address 10.64.0.2
netmask 255.255.255.0
gateway 10.64.0.1
dns-nameservers 192.168.1.1 8.8.8.8 4.4.4.4
bridge_stp on #bridge-utils command to enable Spanning Tree Protocol
bridge_waitport 0 #bridge-utils command to set immediate availability
bridge_fd 0 #bridge-utils command to set no forwarding delay
bridge_ports eth0 eth1 eth2 eth3 eth4 eth5 eth6 eth7 #bridge-utils command to add ports to bridge
pre-up ip link add br0 type bridge && ip link set dev br0 up #iproute2 command to create and start the bridge interface

iface eth0 inet manual
iface eth1 inet manual
iface eth2 inet manual
iface eth3 inet manual
iface eth4 inet manual
iface eth5 inet manual
iface eth6 inet manual
iface eth7 inet manual
As previously noted, bridge-utils has been superseded by iproute2.  However, the simple configuration used here is stable.  More complex configurations may not be stable using bridge-utils commands and are likely more reliable using iproute2 scripts.
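
For reference, a sketch of the same bridge expressed with iproute2 statements instead of bridge_ports (only two ports shown; the remaining ports follow the same pattern):
auto br0
iface br0 inet static
address 10.64.0.2
netmask 255.255.255.0
gateway 10.64.0.1
pre-up ip link add br0 type bridge
post-up ip link set dev eth0 up && ip link set dev eth0 master br0
post-up ip link set dev eth1 up && ip link set dev eth1 master br0
post-down ip link del dev br0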

The bridge may be checked using several commands:
switch@phl-core:~$ ip addr
...
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master br0 state UP group default qlen 1000
    link/ether 08:00:27:82:81:29 brd ff:ff:ff:ff:ff:ff
...
10: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
    link/ether 08:00:27:1d:86:04 brd ff:ff:ff:ff:ff:ff
    inet 10.64.0.2/24 brd 10.64.0.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::6854:53ff:fe4c:271/64 scope link
       valid_lft forever preferred_lft forever

This indicates eth0 (and the other seven NICs, omitted for brevity) are up and configured only with MAC addresses.  The bridge is up and configured with both MAC and IP interfaces.

switch@phl-core:~$ ip route
default via 10.64.0.1 dev br0
10.64.0.0/24 dev br0  proto kernel  scope link  src 10.64.0.2
This indicates all Layer 3 (IP) traffic is through the bridge interface.
switch@phl-core:~$ arp -n
Address                  HWtype  HWaddress           Flags Mask            Iface
10.64.0.1                ether   ca:02:10:48:00:06   C                     br0
switch@phl-core:~$ brctl showmacs br0
port no    mac addr        is local?    ageing timer
  7    08:00:27:1d:86:04    yes           0.00
  6    08:00:27:20:e5:3a    yes           0.00
  3    08:00:27:43:d0:28    yes           0.00
  1    08:00:27:82:81:29    yes           0.00
  2    08:00:27:9d:d2:0e    yes           0.00
  4    08:00:27:d9:9c:bb    yes           0.00
  8    08:00:27:f4:a0:8f    yes           0.00
  5    08:00:27:fd:e4:c1    yes           0.00
  1    ca:02:10:48:00:06    no           0.00

These two commands -- arp and brctl showmacs -- show MAC addresses of other devices known by the switch and the entire MAC address table (local and external).  In this case, the switch has entries for all of its local Ethernet devices and the Cisco 7206 router to which it is attached.

Adding Additional Switches

The original device used in this article (VirtualBox machine and Ubuntu host name "switch") is cloned to create additional switches.  Once a clone is working, you may change its /etc/hosts and /etc/hostname files to reflect its working name, in this case phl-core, phl-servers or phl-storage.  The only change required in the /etc/network/interfaces file is the IP address of the bridge:  10.64.0.2 for phl-core, 10.64.0.4 for phl-servers and 10.64.0.5 for phl-storage.  Importantly, the default gateway for all switches is 10.64.0.1 -- the Cisco 7206 router.  Even though traffic from phl-servers and phl-storage must pass through phl-core, no Layer 3 (IP) processing is required because phl-core forwards Ethernet frames using only MAC addresses at Layer 2.
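
On Ubuntu 14.04 the rename itself is a matter of editing the two files and rebooting (or resetting the running hostname); a sketch for the phl-servers clone, assuming the clone still carries the name "switch":
sudo sed -i 's/switch/phl-servers/g' /etc/hostname /etc/hosts
sudo hostname phl-servers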

As a check, issue the command "traceroute 8.8.8.8."  Traceroute operates at Layer 3 (IP); since phl-core operates at Layer 2, it does not appear in the trace, and the first IP hop from phl-servers and phl-storage is the Cisco 7206 router's 10.64.0.1 interface.

Ubuntu switches may be added throughout the GNS3 topology, replacing the native (and functionally limited) GNS3 switches.

Spanning Tree Protocol

Spanning Tree Protocol (STP) is a method of detecting bridge loops -- Physical Layer 1 cabling connections in which multiple paths to the same MAC address exist.  Bridge loops continuously forward frames and can eventually build to so much traffic that the network becomes congested and unreliable.  STP detects these loops and automatically disables ports to logically eliminate them.

In the above switch topology, phl-core is connected to phl-servers (eth2 - eth0) and phl-storage (eth4 - eth0) using two cables.  There is no direct connection between phl-servers and phl-storage, so traffic between the two must pass through phl-core.  The "brctl showstp" command on phl-core will list both eth2 and eth4 as "forwarding," that is, operational.

If we add a cable linking phl-servers to phl-storage (eth1 - eth1), there is now a bridge loop.  For example, traffic from phl-core has two paths to eth0 on phl-servers:  one direct (eth2 - eth0) and the other through phl-storage (eth4 - eth0) and then from phl-storage to phl-servers (eth1 - eth1).  STP detects this loop and automatically shuts down a port.  The "brctl showstp" command on phl-core will now list eth2 as "blocking" and eth4 as "forwarding."  STP does not need to shut down any port on phl-servers and phl-storage; eth0 and eth1 on both remain in the "forwarding" state.  However, all traffic from phl-servers directly to phl-core is blocked and must pass through phl-storage instead.

This case illustrates how STP is an automated protocol whose decisions may result in suboptimal topologies.  The goal of directly connecting the server switch to the storage switch was achieved, but traffic from the servers to client networks no longer passes directly to the core; instead, it passes indirectly through the storage switch.
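
STP's choices can be influenced rather than simply accepted; bridge-utils allows adjusting port path costs and bridge priorities so that the preferred link is the one left forwarding.  A hedged example (the port and values depend on which bridge is root and on the desired topology):
sudo brctl setpathcost br0 eth2 2
sudo brctl setbridgeprio br0 4096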

Saturday, February 14, 2015

GNS3 - VirtualBox Part 9: Adding More Than Four NICs to VirtualBox VMs


The VirtualBox GUI manages up to four NICs.  You may add up to eight NICs using the VBoxManage command line interface (CLI).  This article demonstrates how to add and configure eight NICs for a VirtualBox/GNS3 guest.

Introduction

As illustrated above, the VirtualBox GUI only supports four NICs (Adapters 1-4).  This is adequate for servers, but Linux also supports Layer 2 switching and Layer 3 routing and is a viable option for networking devices in virtualized environments such as Xen and VMware.  VirtualBox is not an enterprise option, but it is a useful sandbox.  However, four interfaces are not adequate for configuring bonded interfaces and larger switches.


Although the GUI only supports four interfaces, VirtualBox supports up to eight.  If you open the .vbox configuration files in a VM guest's directory, there are a series of sections for Adapter Slots 0-3 beginning with:

<Adapter slot="0" enabled="true" MACAddress="080027FB40D9" cable="true" speed="0" type="82540EM">
There are also a series of sections for Adapter Slots 4-7 beginning with:
<Adapter slot="4" enabled="false" MACAddress="0800272DB7A5" cable="false" speed="0" type="82540EM">
The additional adapters are there, but disabled and not "cabled," or connected to a virtual network.

VBoxManage / GNS3 NIC Configuration

The VBoxManage CLI utility configures adapters 5-8 (in Slots 4-7) and may also be used to modify any existing NICs.  This utility is invoked as the user -- NOT using sudo.  Documentation of the different VirtualBox networking types is available as Chapter 6, and full documentation of the VBoxManage CLI as Chapter 8, in Oracle's documentation pages.  GNS3 uses the UDP Tunnel (Generic) type, demonstrated here with Intel PRO/1000 Desktop adapters (type 82545EM to VirtualBox).


The format of all commands is:
$VBoxManage modifyvm '<machine name>' --<option> <setting>
The options include:
  1. --nic# generic
  2. --nicpromisc# allow-all
  3. --nictype# 82545EM
  4. --cableconnected# on
  5. --nicgenericdrv# UDPTunnel
  6. --nicproperty# dest=127.0.0.1
where # = 1-8.  You may optionally set --nicproperty# sport=##### and --nicproperty# dport=##### to manually configure each NIC's source and destination UDP port numbers; however, VirtualBox will assign them automatically, and it is easier (and more reliable) to let the application do so.
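
Settings applied with VBoxManage modifyvm may be reviewed without opening the GUI; for example, for the machine named Switch used in the script below:
VBoxManage showvminfo 'Switch' | grep 'NIC'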

VBoxManage Script for Adding Eight NICs

As described above, configuring eight NICs from the command line requires 48 commands.  This is time-consuming and error-prone.  The following script -- vbox_nic.sh -- may be edited by replacing the machine name 'Switch' as required.
VBoxManage modifyvm 'Switch' --nic1 generic
VBoxManage modifyvm 'Switch' --nicpromisc1 allow-all
VBoxManage modifyvm 'Switch' --nictype1 82545EM
VBoxManage modifyvm 'Switch' --cableconnected1 on
VBoxManage modifyvm 'Switch' --nicgenericdrv1 UDPTunnel
VBoxManage modifyvm 'Switch' --nicproperty1 dest=127.0.0.1
VBoxManage modifyvm 'Switch' --nic2 generic
VBoxManage modifyvm 'Switch' --nicpromisc2 allow-all
VBoxManage modifyvm 'Switch' --nictype2 82545EM
VBoxManage modifyvm 'Switch' --cableconnected2 on
VBoxManage modifyvm 'Switch' --nicgenericdrv2 UDPTunnel
VBoxManage modifyvm 'Switch' --nicproperty2 dest=127.0.0.1
VBoxManage modifyvm 'Switch' --nic3 generic
VBoxManage modifyvm 'Switch' --nicpromisc3 allow-all
VBoxManage modifyvm 'Switch' --nictype3 82545EM
VBoxManage modifyvm 'Switch' --cableconnected3 on
VBoxManage modifyvm 'Switch' --nicgenericdrv3 UDPTunnel
VBoxManage modifyvm 'Switch' --nicproperty3 dest=127.0.0.1
VBoxManage modifyvm 'Switch' --nic4 generic
VBoxManage modifyvm 'Switch' --nicpromisc4 allow-all
VBoxManage modifyvm 'Switch' --nictype4 82545EM
VBoxManage modifyvm 'Switch' --cableconnected4 on
VBoxManage modifyvm 'Switch' --nicgenericdrv4 UDPTunnel
VBoxManage modifyvm 'Switch' --nicproperty4 dest=127.0.0.1
VBoxManage modifyvm 'Switch' --nic5 generic
VBoxManage modifyvm 'Switch' --nicpromisc5 allow-all
VBoxManage modifyvm 'Switch' --nictype5 82545EM
VBoxManage modifyvm 'Switch' --cableconnected5 on
VBoxManage modifyvm 'Switch' --nicgenericdrv5 UDPTunnel
VBoxManage modifyvm 'Switch' --nicproperty5 dest=127.0.0.1
VBoxManage modifyvm 'Switch' --nic6 generic
VBoxManage modifyvm 'Switch' --nicpromisc6 allow-all
VBoxManage modifyvm 'Switch' --nictype6 82545EM
VBoxManage modifyvm 'Switch' --cableconnected6 on
VBoxManage modifyvm 'Switch' --nicgenericdrv6 UDPTunnel
VBoxManage modifyvm 'Switch' --nicproperty6 dest=127.0.0.1
VBoxManage modifyvm 'Switch' --nic7 generic
VBoxManage modifyvm 'Switch' --nicpromisc7 allow-all
VBoxManage modifyvm 'Switch' --nictype7 82545EM
VBoxManage modifyvm 'Switch' --cableconnected7 on
VBoxManage modifyvm 'Switch' --nicgenericdrv7 UDPTunnel
VBoxManage modifyvm 'Switch' --nicproperty7 dest=127.0.0.1
VBoxManage modifyvm 'Switch' --nic8 generic
VBoxManage modifyvm 'Switch' --nicpromisc8 allow-all
VBoxManage modifyvm 'Switch' --nictype8 82545EM
VBoxManage modifyvm 'Switch' --cableconnected8 on
VBoxManage modifyvm 'Switch' --nicgenericdrv8 UDPTunnel
VBoxManage modifyvm 'Switch' --nicproperty8 dest=127.0.0.1
Once the script is run, the additional adapters will appear in the Network section of the VirtualBox Manager GUI, but will not be available as a tab under the machine's settings.  When booted, the adapters are available to the machine.
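
As an alternative to the long listing above, the same 48 commands can be generated with a short shell loop; a minimal sketch, assuming the same machine name 'Switch':
#!/bin/bash
VM='Switch'
for i in $(seq 1 8); do
    VBoxManage modifyvm "$VM" --nic$i generic
    VBoxManage modifyvm "$VM" --nicpromisc$i allow-all
    VBoxManage modifyvm "$VM" --nictype$i 82545EM
    VBoxManage modifyvm "$VM" --cableconnected$i on
    VBoxManage modifyvm "$VM" --nicgenericdrv$i UDPTunnel
    VBoxManage modifyvm "$VM" --nicproperty$i dest=127.0.0.1
done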