In my last article I talked about VMM integration with VMware vSphere and Cisco Application Centric Infrastructure (ACI). I only covered using the traditional vSphere Distributed Switch, but there is another option: the Cisco AVS (Application Virtual Switch). For those familiar with the Nexus 1000V, the AVS is not totally dissimilar; it's a customized version of it, though with a different installation procedure than the N1KV had in the past.
What does the AVS do for you?
The AVS essentially gives you a virtual leaf. Rather than relying on a physical leaf, which carries a Tunnel End Point (TEP) address, you install Cisco AVS on a vSphere cluster to create a VTEP on each VMware host. This is done via the installation of VIBs on the physical hosts. We now have the ability to do VXLAN tunneling with our vSphere infrastructure. Of course, we had that ability with the traditional VDS, but only over one hop. That meant the ESXi host had to be attached directly either to a physical leaf in our ACI fabric, or to an FEX (Fabric Extender) connected to a leaf, since the FEX doesn't count as a hop.
Create a VMM Domain
The article mentioned above goes over how to create a VMM domain to connect your APIC (Application Policy Infrastructure Controller) to vCenter, so I'll only cover it briefly here.
- Login to the APIC GUI
- Click on VM Networking
- Click on Policies underneath that
- Right click on VM Provider VMware and select Create vCenter Domain
- Give it a name
- **Make sure to select Cisco AVS as the virtual switch type**
- Choose your Switching Preference
- Choose your encapsulation preference, likely VXLAN
- Associate an AEP (Attachable Access Entity Profile)
- Specify a multicast address and Multicast Address Pool
- Enter your vCenter credentials
- Add the vCenter controller, entering a Data Center name that matches the Data Center object within your vCenter
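The same VMM domain can also be created programmatically by POSTing XML to the APIC REST API. The sketch below is a minimal, hedged example: the object classes (`vmmProvP`, `vmmDomP`, `vmmCtrlrP`, `vmmUsrAccP`) come from the APIC management information model, but the exact attribute set for an AVS domain (e.g. the `mode` value) can vary by APIC release, and every name and address below is a placeholder.

```python
import xml.etree.ElementTree as ET

def build_vmm_domain_payload(domain, vcenter_ip, datacenter, user, password):
    """Build the XML body for creating a vCenter VMM domain on the APIC.

    Object classes are from the APIC object model; the mode attribute for
    an AVS-backed domain and all values here are illustrative assumptions.
    """
    prov = ET.Element("vmmProvP", vendor="VMware")
    dom = ET.SubElement(prov, "vmmDomP", name=domain, mode="n1kv")  # AVS mode (assumption)
    # vCenter login used by the APIC to push the switch configuration.
    ET.SubElement(dom, "vmmUsrAccP", name=domain + "-creds", usr=user, pwd=password)
    # rootContName must match the Data Center object name in vCenter.
    ET.SubElement(dom, "vmmCtrlrP", name="vcenter01",
                  hostOrIp=vcenter_ip, rootContName=datacenter)
    return ET.tostring(prov, encoding="unicode")

if __name__ == "__main__":
    body = build_vmm_domain_payload("AVS-Dom", "192.0.2.10", "DC1",
                                    "administrator@vsphere.local", "secret")
    # A real deployment would POST this to https://<apic>/api/mo/uni/vmmp-VMware.xml
    # after authenticating, e.g. with the requests library.
    print(body)
```

This is the same information the GUI wizard collects, just expressed as the objects the APIC stores internally.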
Installation of the Cisco AVS
The installation is really pretty easy, but admittedly not entirely straightforward; this should be corrected in near-future releases. Let's start with the basics. You'll need Enterprise Plus VMware licensing in order to run any kind of distributed virtual switch, including the AVS. Beyond that, nothing more than the standard virtual switch (installed by default with ESXi) needs to be running in your vSphere cluster.
Installing Virtual Switch Update Manager
The easiest way to install AVS is to use the Virtual Switch Update Manager, or VSUM. This is an OVA that you may deploy in your vSphere environment. This OVA will ask you to provide networking and host information in a wizard. Once it’s deployed you will see the VSUM client listed in Home>>Inventories as shown in the figure below.
We can now click on the Cisco Virtual Switch Update Manager to install the AVS itself.
Installing the AVS
In previous versions of the N1KV it was often necessary to install the VIBs manually on the ESXi hosts in order to get the N1KV going. However, we can now do it all through the VSUM client.
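For reference, the manual route that VSUM automates boils down to copying the AVS VIB to each host and installing it with `esxcli`. Below is a hedged sketch that just assembles that command (the VIB path is a hypothetical placeholder; the actual file name varies by AVS release):

```python
import shlex

def vib_install_cmd(vib_path, dry_run=True):
    """Build the esxcli command that installs a VIB on an ESXi host.

    `vib_path` is a placeholder for the AVS VIB file; `--dry-run` lets you
    preview the install without changing the host.
    """
    cmd = ["esxcli", "software", "vib", "install", "-v", vib_path]
    if dry_run:
        cmd.append("--dry-run")
    return " ".join(shlex.quote(part) for part in cmd)

if __name__ == "__main__":
    # You would run this on the ESXi shell (or via SSH), per host.
    print(vib_install_cmd("/tmp/CiscoAVS.vib"))
```

With VSUM doing this per host for you, the manual step is only worth knowing for troubleshooting.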
- In the vSphere Web Client click on Home and then the Home tab to show Inventories
- Double click on the Cisco Virtual Switch Update Manager to open the VSUM client
- Click on the Nexus 1000V button.
Note: This is a bit counter-intuitive currently. This will be fixed in future releases, though, and we will then click on the Cisco AVS button to do the configuration.
- Click on the Configuration Link
- On the right side, under Choose an Available Data Center, select the vSphere Data Center object to which your AVS will be linked.
- On the right side, under Choose an Associated Distributed Switch, choose the AVS that you specified during the creation of the VMM integration on the APIC.
- Then click Manage
At this point it becomes similar to configuring a traditional VDS. We need to add hosts and physical NICs to our AVS to enable the networking.
- Click on the Cisco AVS button.
- Click on the Add Host – AVS Tab.
- Click the Target Version pull-down menu to select the current version of AVS.
- Then click the Show Host button.
- Choose the cluster from which you'd like to add hosts.
- Put a check next to the hosts to add them.
- Click the Suggest button to see applicable physical NICs that may be added from these hosts.
- Put a check next to the PNICs that you'd like to add to the AVS.
- Click Finish at the bottom.
This will take you back to the initial Target Version screen. No success message is displayed, but you do not need to go through the process again; just verify that your AVS is listed in your Networking inventory.
Where are my portgroups?
In the case of ACI, portgroups are added for you automatically. When you create an EPG (End Point Group) and associate it with a VMM domain, it shows up as a portgroup in your AVS. You can then manually change the network settings of the VMs you'd like to put into those portgroups, allowing the policy for these VMs to be managed automatically through ACI.
To remove a portgroup, detach the VMs from it and then remove the EPG via the APIC.
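Under the covers, that EPG-to-portgroup mapping comes from associating the EPG with the VMM domain on the APIC. Here is a hedged sketch of the REST call that creates the association (`fvRsDomAtt` is the relevant object class; the tenant, application profile, and EPG names are placeholders):

```python
def build_epg_vmm_association(tenant, app_profile, epg, vmm_domain):
    """Return the APIC REST path and XML body that attach an EPG to a
    VMM domain; the APIC then pushes a matching portgroup to the AVS.

    All names passed in are illustrative placeholders.
    """
    path = "/api/mo/uni/tn-{0}/ap-{1}/epg-{2}.xml".format(tenant, app_profile, epg)
    body = '<fvRsDomAtt tDn="uni/vmmp-VMware/dom-{0}"/>'.format(vmm_domain)
    return path, body

if __name__ == "__main__":
    # POST body to https://<apic> + path after authenticating.
    path, body = build_epg_vmm_association("Tenant1", "App1", "Web", "AVS-Dom")
    print(path)
    print(body)
```

Deleting that same `fvRsDomAtt` object (or the EPG itself) is what removes the portgroup again, which is why the VMs must be detached first.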
Trust but Verify
You may verify through the vSphere Web Client that you've created the AVS properly, that portgroups were added automatically through the APIC, and that your VMs landed where you expect.
- Click on Home
- Click on Network in the Inventories pane
- Expand the tree and ensure the AVS is there
- Expand the AVS folder to ensure the portgroups are there
- Click on the Virtual Machines tab to ensure your VMs are in the proper portgroups
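If you'd rather script that check, a hedged pyVmomi sketch can pull the AVS portgroup names from vCenter and compare them against the EPGs you expect. The portgroup naming format, the credentials, and the EPG name below are all assumptions, not confirmed specifics:

```python
def check_epg_portgroups(dvs_portgroups, expected_names):
    """Return the expected EPG-derived portgroup names that are missing
    from the switch. `dvs_portgroups` is the list of portgroup names
    pulled from vCenter; the APIC derives names from tenant/app/EPG
    (the exact format may vary by release).
    """
    present = set(dvs_portgroups)
    return [name for name in expected_names if name not in present]

if __name__ == "__main__":
    # Hedged sketch: requires `pip install pyvmomi`, real vCenter
    # credentials, and possibly SSL-verification handling for
    # self-signed certificates. All names here are placeholders.
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local", pwd="secret")
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.DistributedVirtualSwitch], True)
        for dvs in view.view:
            names = [pg.name for pg in dvs.portgroup]
            missing = check_epg_portgroups(names, ["Tenant1|App1|Web"])
            print(dvs.name, "missing portgroups:", missing)
    finally:
        Disconnect(si)
```

An empty "missing" list for your AVS is the scripted equivalent of the GUI walk-through above.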
The Cisco AVS is a really useful solution. There are a few caveats, such as the inability to use some service insertion device packages (like the Cisco ASAv) with the AVS, but these features will come with time. The AVS is a great way to extend and even migrate your current ACI fabric into other parts of your network as well.
The AVS is currently only meant for use with ACI and the Nexus 1000V will continue to have more features added for non-ACI fabrics.
As always, if you have any questions about AVS or ACI in general, please feel free to leave comments or tweet me @Malhoit.