With today's network adapter speeds, several traffic flows can share the same physical links. Adapters at 10 Gb/s, 25 Gb/s, or even 100 Gb/s are readily available, so there is no longer a reason to dedicate a network adapter to a specific kind of traffic. Thanks to converged networking, we can deploy VMware ESXi nodes with fewer network adapters, which brings flexibility and software-oriented network management. Once the VLANs are configured on the physical switches, you just have to create port groups from vCenter. In this topic, I'll show you how to deploy a converged network in vSphere 6.5. I leverage the vNetwork Distributed Switch, which makes it possible to deploy a consistent network configuration across nodes.
Network configuration overview
To write this topic, I worked on two VMware ESXi 6.5 nodes. Each node has two network adapters (1 Gb/s), and each adapter is plugged into a separate physical switch. The switch ports are configured in trunk mode, with VLANs 50, 51, 52, 53, and 54 allowed. VLAN 50 is untagged. I haven't configured LACP.
The following networks will be configured:
- Management – VLAN 50 (untagged) – 10.10.50.0/24: used to manage the ESXi nodes
- vMotion – VLAN 51 – 10.10.51.0/24: used for vMotion traffic
- iSCSI – VLAN 52 – 10.10.52.0/24: network dedicated to iSCSI
- DEV – VLAN 53 – 10.10.53.0/24: test VMs will be connected to this network
- PROD – VLAN 54 – 10.10.54.0/24: production VMs will be connected to this network
I'll call the vNetwork Distributed Switch (vDS) vDS-CNA-1G. To implement this design, I need:
- 1x vNetwork Distributed Switch
- 2x uplinks
- 5x distributed port groups
So, let’s go 🙂
vNetwork Distributed Switch creation
To create a distributed switch, open the vSphere Web Client and navigate to the Networking menu (in the navigator). Right-click your datacenter and select Distributed Switch | New Distributed Switch.
Then specify a name for the distributed switch. I call mine vDS-CNA-1G.
Next, choose a distributed switch version. Depending on the version, you get access to more features. I choose the latest version: Distributed switch: 6.5.0.
Next, you can specify the number of uplinks. In this example, only two uplinks are required, but I leave the default value of 4. You can also choose whether to enable Network I/O Control (NIOC), a feature that provides QoS management. Finally, I choose to create a default port group called Management. This port group will contain the VMKernel adapters used to manage the ESXi nodes.
Once you have reviewed the settings, click Finish to create the vNetwork Distributed Switch.
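If you prefer to script this step, the same switch can be created through the vSphere API. Below is a minimal pyVmomi sketch; the vCenter address, the credentials, and the datacenter name ("Lab") are placeholders for this example, not values from the lab above:

```python
import ssl
from pyVim.connect import SmartConnect
from pyVmomi import vim

# Connect to vCenter (hypothetical address and credentials)
si = SmartConnect(host="vcenter.lab.local",
                  user="administrator@vsphere.local", pwd="...",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

# Find the datacenter (assuming a single datacenter named "Lab")
dc = [e for e in content.rootFolder.childEntity
      if isinstance(e, vim.Datacenter) and e.name == "Lab"][0]

# Build the creation spec: name, 4 uplinks, and switch version 6.5.0
spec = vim.DistributedVirtualSwitch.CreateSpec()
spec.configSpec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    name="vDS-CNA-1G",
    uplinkPortPolicy=vim.DistributedVirtualSwitch.NameArrayUplinkPortPolicy(
        uplinkPortName=["Uplink 1", "Uplink 2", "Uplink 3", "Uplink 4"]))
spec.productInfo = vim.dvs.ProductSpec(version="6.5.0")

task = dc.networkFolder.CreateDVS_Task(spec)  # asynchronous vCenter task
```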
Now that the vDS is created, we can add hosts to it.
Add hosts to vNetwork distributed switch
To add hosts to the vDS, click the Add and Manage Hosts icon (at the top of the vDS summary page), then choose Add hosts.
Next, click New hosts and add each host you want.
Check the following tasks:
- Manage physical adapters: association of physical network adapters with uplinks
- Manage VMKernel adapters: management of VMKernel adapters (host virtual NICs)
On the next screen, for each node, assign the physical network adapters to uplinks. In this example, I added vmnic0 of both nodes to Uplink 1 and vmnic1 to Uplink 2.
When you deploy ESXi, a vSwitch0 is created by default with one VMKernel adapter for management; this vSwitch is a standard switch. To move this VMKernel adapter to the vDS without losing connectivity, we can reassign it to the vDS. To do this, select the VMKernel adapters and click Assign port group, then select the Management port group.
The next screen presents the impact of the network configuration. Once you have reviewed the impact, click Next to add the hosts to the vDS.
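For reference, here is a hedged pyVmomi sketch of the same "add host" operation. It assumes `si` is connected as in the previous snippet and that `host` (a vim.HostSystem) and `dvs` (the vDS-CNA-1G object) have already been retrieved from the inventory:

```python
from pyVmomi import vim

# Declare the host as a new member of the vDS and hand over its physical NICs;
# without an explicit uplinkPortKey, vCenter assigns free uplink ports in order
member = vim.dvs.HostMember.ConfigSpec(
    operation="add",
    host=host,
    backing=vim.dvs.HostMember.PnicBacking(
        pnicSpec=[vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic0"),
                  vim.dvs.HostMember.PnicSpec(pnicDevice="vmnic1")]))

cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,  # required for any reconfiguration
    host=[member])

task = dvs.ReconfigureDvs_Task(cfg)
# The vmk0 migration done by "Assign port group" in the wizard maps to
# HostNetworkSystem.UpdateVirtualNic() on each host.
```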
Add additional distributed port groups
Now that the hosts are associated with the vDS, we can add more distributed port groups. In this section, I add a distributed port group for vMotion. From the vDS summary pane, click the New Distributed Port Group icon (at the top of the pane) and give a name to the distributed port group.
On the next screen, you can configure the port binding and port allocation (you can find more information about port binding in this topic). The recommended port binding for general use is Static binding. I set the number of ports to 8, but because I configure the port allocation to Elastic, the number of ports increases or decreases as needed. To finish, I set the VLAN ID to 51.
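The same port group can be created with pyVmomi. This is a minimal sketch, assuming `dvs` references vDS-CNA-1G as in the earlier snippets:

```python
from pyVmomi import vim

pg_spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec(
    name="vMotion",
    type="earlyBinding",   # "earlyBinding" is the API name for Static binding
    numPorts=8,
    autoExpand=True,       # Elastic port allocation
    defaultPortConfig=vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy(
        vlan=vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(
            vlanId=51, inherited=False)))

task = dvs.AddDVPortgroup_Task([pg_spec])
```

Static binding with elastic allocation mirrors what the wizard does: ports are pre-assigned to VMs, but the pool grows automatically so the initial value of 8 is not a hard limit.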
Add additional VMKernel adapters
Now that the distributed port group is created, we can add a VMKernel adapter to it. Click Add and Manage Hosts from the vDS summary pane, then select Manage host networking.
Next, click Attached hosts and select the hosts you want.
On the next screen, check only Manage VMKernel adapters.
Then click New adapter.
In the Select an existing network area, click Browse and choose vMotion.
On the next screen, select the vMotion service. This way, vMotion traffic will use this VMKernel adapter.
To finish, specify the TCP/IP settings and click Finish.
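Here is a hedged pyVmomi sketch of this VMKernel step, assuming `host` is the vim.HostSystem, `dvs` the switch, and `pg` the vMotion distributed port group object; the IP address is a per-host assumption, not a value prescribed by the design:

```python
from pyVmomi import vim

# Connect the new vmk to a port of the vMotion distributed port group
port = vim.dvs.PortConnection(switchUuid=dvs.uuid, portgroupKey=pg.key)
spec = vim.host.VirtualNic.Specification(
    distributedVirtualPort=port,
    ip=vim.host.IpConfig(dhcp=False,
                         ipAddress="10.10.51.11",   # hypothetical per-host address
                         subnetMask="255.255.255.0"))

# The portgroup argument stays empty because the vmk lives on a distributed port
vmk = host.configManager.networkSystem.AddVirtualNic("", spec)

# Tag the new adapter for the vMotion service
host.configManager.virtualNicManager.SelectVnicForNicType("vmotion", vmk)
```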
When this configuration is finished, the vDS schema looks like this:
So we have two port groups and two uplinks. In this configuration, the management and vMotion traffic are converged. Note that the Management network has no VLAN ID because I've left VLAN 50 untagged on the physical switches.
Final result
By repeating the above steps, I have created the remaining distributed port groups. I have not yet created the iSCSI VMKernel adapters (that will be for an upcoming topic about storage :)), but you get the idea. If you compare the schema below with the one from the network overview, they are very similar.
The final job concerns QoS: guaranteeing enough bandwidth for specific traffic such as vMotion. You can manage this QoS thanks to Network I/O Control (NIOC).
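As a starting point for scripting that last step, here is a hedged pyVmomi sketch that enables NIOC and raises the vMotion shares. As before, `dvs` is assumed to be the vDS object, and the share values are purely illustrative, not a tuning recommendation:

```python
from pyVmomi import vim

dvs.EnableNetworkResourceManagement(enable=True)  # turn NIOC on

# Reuse the current allocation objects, raise the vMotion shares,
# then push everything back in a single reconfiguration
traffic = dvs.config.infrastructureTrafficResourceConfig
for res in traffic:
    if res.key == "vmotion":
        res.allocationInfo.shares = vim.SharesInfo(shares=100, level="high")

cfg = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec(
    configVersion=dvs.config.configVersion,
    infrastructureTrafficResourceConfig=traffic)
task = dvs.ReconfigureDvs_Task(cfg)
```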
Hi, does using LACP in VMware 5.5 and later really increase speed and throughput? Let's say I have 4 uplinks (4 NICs), I configure LACP on the Cisco side, and I also do the VMware configuration.
Will a VM have more than a 1Gb link? The transfer speed is usually 100MB/s to 130MB/s, but will I get almost 400MB/s when copying files or during any transfer activity?
Thanks
Hi,
Network teaming doesn't increase the speed of a single flow. If you add four links to the LACP bundle, four VMs will simply be able to deliver 1Gbps simultaneously, but a single VM can't use the four links at the same time.
How do we do this in VMware Workstation?
Assume that there are no network switches.