
vSphere 5 Standard vSwitch Multi NIC vMotion Setup

vSphere 5 introduces Multi NIC vMotion, which allows vMotion traffic to be pushed down more than one NIC simultaneously. In previous releases of vSphere, only a single NIC was ever used for vMotion, even if multiple NICs were part of the network teaming policy.

To configure Multi NIC vMotion, simply follow the below steps:

Step 1. Under the Configuration tab of a host, click Networking and then select Add Networking. Select the VMkernel connection type and click Next, then select the network adapters you want to use for vMotion and click Next.

Step 2. When presented with the below screen, enter a network label for this first VMkernel port, for example “vMotion-1”, check the box “Use this port group for vMotion” and click Next.

Step 3. On the next screen, enter the IP address and subnet mask for this first VMkernel port, click Next, then click Finish. This creates the vSwitch with the first port group and assigns the two chosen network adapters to the switch.

Step 4. When the vSwitch has finished building, click the Properties button for the switch and you will be presented with the below screen. Click the Add button and create a second VMkernel port for vMotion as you did in the previous steps.

Step 5. Once you have created your second VMkernel port, go back to the vSwitch properties window, select the first vMotion VMkernel port and click Edit.

Step 6. Click the NIC Teaming tab, then under Failover Order, check the box “Override switch failover order”. The NIC teaming options will now become available as shown below.

Step 7. In the NIC teaming policy editor, ensure only one NIC sits under Active Adapters and any others are placed under Standby Adapters. Once completed, click OK and the changes will apply. This forces the VMkernel port to use only a single NIC in the network team.

Step 8. Follow steps 5 to 7 for the second VMkernel port, remembering to invert the NIC teaming policy from what you set for the first VMkernel port. For example, if vMotion-1 has vmnic1 as active and vmnic2 as standby, vMotion-2 should have vmnic2 as active and vmnic1 as standby.

The configuration of Multi NIC vMotion is now complete and you should see much faster vMotion operations. To confirm your setup is working correctly, examine the performance statistics for the network adapters you chose during the setup and issue a vMotion operation. You should see simultaneous activity across all NICs.
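If you prefer to script the same configuration, below is a minimal PowerCLI sketch of the steps above. The host name, switch name, NIC names (vmnic1 and vmnic2), port group names and IP addresses are placeholders for illustration, so adjust them to match your own environment.

# Build the vSwitch with both uplinks (all names and addresses below are examples)
$vmhost = Get-VMHost -Name "hostname"
$vs = New-VirtualSwitch -VMHost $vmhost -Name "vSwitch1" -Nic vmnic1,vmnic2

# Create two vMotion-enabled VMkernel ports on the new switch
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs -PortGroup "vMotion-1" -IP 192.168.10.11 -SubnetMask 255.255.255.0 -VMotionEnabled $true
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vs -PortGroup "vMotion-2" -IP 192.168.10.12 -SubnetMask 255.255.255.0 -VMotionEnabled $true

# Invert the active/standby order on each port group so each VMkernel port sticks to one NIC
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-1" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic1 -MakeNicStandby vmnic2
Get-VirtualPortGroup -VMHost $vmhost -Name "vMotion-2" | Get-NicTeamingPolicy | Set-NicTeamingPolicy -MakeNicActive vmnic2 -MakeNicStandby vmnic1

Running the same script against each new host keeps the vMotion configuration consistent across the cluster.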

Set Datastore Multipathing Policy Automatically with PowerCLI

You may find that when you add a new host to your vSphere environment and it discovers the datastores presented to it, the multipathing policy is automatically set to MRU (Most Recently Used). For best performance it’s recommended to change this policy to Round Robin, however doing this by hand means adjusting each datastore on each host, which is a very manual and tedious task.

To speed this process up, below you can find a PowerCLI command that will set all datastores on a specific host to Round Robin.

Get-Cluster "clustername" | Get-VMHost -Name "hostname" | Get-ScsiLun -LunType "Disk" | Set-ScsiLun -MultipathPolicy "RoundRobin"

Equally you can use the same command for setting your datastores to MRU as shown below.

Get-Cluster "clustername" | Get-VMHost -Name "hostname" | Get-ScsiLun -LunType "Disk" | Set-ScsiLun -MultipathPolicy "MostRecentlyUsed"
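If you want to change every host in the cluster in one pass, the same pipeline can be run without naming an individual host. This is a sketch using the same placeholder cluster name as above.

Get-Cluster "clustername" | Get-VMHost | Get-ScsiLun -LunType "Disk" | Set-ScsiLun -MultipathPolicy "RoundRobin"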

VMware View 5.1 Storage Accelerator

With the release of VMware View 5.1 a number of new features have been introduced, particularly around storage performance. For me, the key feature is the View Storage Accelerator, or you may hear it referred to as Host Caching.

Without going into intricate detail, the storage accelerator creates an additional file within the datastores for the replica disk and each linked clone OS disk. This file is known as a digest file and is used as a caching mechanism. The digest file increases performance by minimising the number of read requests that go back to the replica disk: commonly read blocks from the replica disk are cached via the OS disk digest file, which can be read directly and so acts as a cache. This can help minimise boot and logon storms within your View environment.

Enabling the Storage Accelerator is relatively simple and can be achieved by following the below steps.

Step 1. Log in to your VMware View administrator console and, under View Configuration, click Servers. Locate your vCenter server and click Edit.

Step 2. Click the Host Caching tab and check the Enable host caching for View check box. This enables the caching capability and allows you to specify the size of the cache for all hosts or for individual hosts. The minimum cache size is 100MB with a maximum of 2GB. Click OK to apply the change.

Step 3. Locate a linked clone pool you want to enable the storage accelerator on and Edit the settings. Click on the far right tab named Advanced Storage and check the Use host caching check box. Click OK to apply the settings.

The storage accelerator feature is now enabled for this linked clone pool and will require a recompose to generate the necessary digest files. Hopefully you will now see a performance increase on read-intensive operations.
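If you want to confirm the host-level side of the change, the storage accelerator relies on the host's content based read cache (CBRC). As a hedged check, assuming the CBRC advanced settings used by View 5.1 and a placeholder host name, you can query them with PowerCLI:

# Check whether the host-side cache has been enabled and how much memory is reserved for it
Get-VMHost -Name "hostname" | Get-AdvancedSetting -Name CBRC.Enable
Get-VMHost -Name "hostname" | Get-AdvancedSetting -Name CBRC.DCacheMemReserved

CBRC.Enable should report true once host caching is switched on, and CBRC.DCacheMemReserved should reflect the cache size you set in the administrator console.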

Thanks for reading.