vSAN (VMware Virtual SAN)

Overview

vSAN, formerly called VMware Virtual SAN, is an example of Software-Defined Storage. VMware recognised the potential of its server virtualisation philosophy and extended it to include data storage. Virtual SAN is built into the VMware hypervisor; it sits directly in the I/O data path and so can deliver better performance than a virtual appliance or an external device, without much CPU overhead. The software that manages and controls the storage is pre-installed on the hypervisor, which allows it to share out the storage resource in the same way it shares out CPU and memory.

The vSAN is managed from the vSphere Web Client and, because it is located in the hypervisor, it integrates with all the VMware goodies, including vMotion, HA, Distributed Resource Scheduler, VMware vCenter Site Recovery Manager and VMware vRealize Automation. vSAN is an example of a hyper-converged infrastructure (HCI) product.

Managing vSAN does not require any specialized skillset as it can be managed end-to-end through the familiar vSphere Web Client and vCenter Server instances.

Physical Disk Management

Like any virtualisation product, VMware splits up the physical storage into logical pools of capacity that can be shared out flexibly among the hosted VMs. VMware refers to this storage virtualization as the Virtual Data Plane.

You have two choices for data storage: all-flash, or a hybrid of flash and magnetic disk. An all-flash solution still offers tiering, with two levels: a high-performance, write-intensive, high-endurance caching tier for writes, and a read-intensive, durable, cost-effective flash tier for data persistence. With an SSD/disk hybrid solution, every write I/O goes to SSD first.
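That hybrid write path can be pictured with a toy model (a simplified illustration only, not vSAN's actual caching algorithm; the class and method names are invented):

```python
class HybridDiskGroup:
    """Toy model of a hybrid disk group: every write lands on the SSD
    cache first and is destaged to the magnetic capacity tier later."""

    def __init__(self):
        self.ssd_cache = {}       # write buffer / read cache (flash)
        self.capacity_tier = {}   # persistent store (magnetic disk)

    def write(self, block, data):
        # Every write I/O goes to SSD first.
        self.ssd_cache[block] = data

    def destage(self):
        # Later, buffered writes are flushed down to the capacity tier.
        self.capacity_tier.update(self.ssd_cache)
        self.ssd_cache.clear()

    def read(self, block):
        # Reads are served from the flash cache when possible,
        # otherwise from the capacity tier.
        if block in self.ssd_cache:
            return self.ssd_cache[block]
        return self.capacity_tier.get(block)

dg = HybridDiskGroup()
dg.write(0, b"hello")
print(dg.read(0))   # served from the SSD cache before destage
dg.destage()
print(dg.read(0))   # still readable, now from the capacity tier
```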

A vSphere host does not have to contribute storage to the vSAN cluster, but if it does it requires a disk controller. This can be a SAS or SATA host bus adapter (HBA) or a RAID controller. However, the RAID controller must either just deliver plain RAID0 striping, or preferably be running in pass-through mode, where the disks are not in any RAID format but are presented as JBOD (just a bunch of disks). vSAN looks after data resilience by taking copies of entire virtual disks; the number of copies is controlled by policies and can be set differently for individual VMs.
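The copy count follows directly from the policy. A minimal sketch, assuming the default mirroring behaviour (the function names are illustrative, not part of any VMware API):

```python
def replicas_needed(failures_to_tolerate):
    """With mirroring, surviving N concurrent failures requires
    N + 1 full copies of the virtual disk."""
    if failures_to_tolerate < 0:
        raise ValueError("failures to tolerate must be non-negative")
    return failures_to_tolerate + 1

def min_hosts(failures_to_tolerate):
    # Copies plus witness components must retain a majority after
    # N failures, which works out to 2N + 1 hosts.
    return 2 * failures_to_tolerate + 1

# Default policy of tolerating one failure:
# two full copies spread across three hosts.
print(replicas_needed(1), min_hosts(1))  # 2 3
```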

   

Accelerate DB2 Write with zHyperWrite and "EADM™ by Improving DB2 Logs Volumes Response Time:

Scalability

The vSAN architecture means that scaling is elastic and non-disruptive. Capacity and performance can be scaled together by adding a new host to the cluster (scale-out), or they can be scaled independently by merely adding new drives to existing hosts (scale-up). Ready Node servers, pre-built by third-party suppliers, can just be plugged into a vSAN, allowing almost instant scale-out. You can add more SSD for performance or more hard drives for capacity.

System Requirements and Limits

Each hardware host must have at least a 1 Gb Ethernet or a 10 Gb Ethernet capable network adapter. 10 Gb is recommended, and is required for an all-flash architecture. It must have a SATA/SAS HBA or RAID controller, and at least one SSD and one HDD for each capacity-contributing node. The usual minimum cluster size is three hosts, as this configuration enables the cluster to meet the lowest availability requirement of tolerating at least one host, disk, or network failure. However, it is possible to install a two-node cluster in branches or remote offices.
The software must be VMware vCenter Server 6.0 and one of: VMware vSphere 6.0, VMware vSphere with Operations Management 6.0, or VMware vCloud Suite 6.0.

Virtual SAN will support up to 64 nodes per cluster and up to 200 virtual machines per host. The maximum virtual disk size is 62 TB. Each host can hold between one and five disk groups.
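Those maximums can be captured in a small validation helper (an illustrative sketch; the constants simply restate the limits above, and the functions are invented, not a VMware API):

```python
# Published maximums restated from the text above (vSAN 6.x era).
MAX_NODES_PER_CLUSTER = 64
MAX_VMS_PER_HOST = 200
MAX_DISK_GROUPS_PER_HOST = 5
MAX_VIRTUAL_DISK_TB = 62

def validate_host(vm_count, disk_groups):
    """Check a single host against the per-host maximums."""
    return (vm_count <= MAX_VMS_PER_HOST
            and 1 <= disk_groups <= MAX_DISK_GROUPS_PER_HOST)

def validate_cluster(node_count):
    # Two-node clusters are possible for remote offices; three is
    # the usual minimum for availability.
    return 2 <= node_count <= MAX_NODES_PER_CLUSTER

print(validate_cluster(64), validate_host(200, 5))  # True True
print(validate_cluster(65))                         # False
```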

On each vSphere host, a VMkernel port for Virtual SAN communication must be created. A new VMkernel virtual adapter type has been added to vSphere 5.5 for Virtual SAN.

The VMkernel port is labeled Virtual SAN traffic. This new interface is used for host intra-cluster communications as well as for read and write operations whenever a vSphere host in the cluster is the owner of a particular virtual machine, but the actual data blocks making up that virtual machine's objects are located on a remote host in the cluster.

Hosts can be compute-only, but it is not recommended to have too many dedicated compute servers, as it is best to spread the storage workload across a lot of servers.

Policies

Traditional storage delivers add-ons like snapshots and replication at hardware level, and the storage manager has to look at the business requirements of applications and work out how to apply those requirements to the hosting hardware. Software-Defined Storage as implemented by Virtual SAN uses the Virtual Data Plane (VDP) to handle these requirements, so the administrator works with the applications and the VDP works out how to apply them to the underlying hardware. This means that all the VMware extras like compression, replication, snapshots, de-duplication, availability, migration and data mobility are available and can be configured differently for each individual VM.

The VDP also allows you to define service-level policies for each VM for things like availability and performance. What this means is:

  • Availability: you can specify how many host, network, disk or rack failures to tolerate in a Virtual SAN cluster when setting the storage policy for each VM. The VDP then translates this into how many copies of the VM are stored, and where, to meet those policies.
  • Performance: you can set policies at individual VM level that dictate what percentage of your read I/O you can expect to come from SSD.

VMware refers to this as the Policy-Driven Control Plane, and you can program the policies using public APIs, and with scripting and cloud automation tools.
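One way to picture such a per-VM policy is as a simple record that a control plane consumes (a hypothetical sketch: the field names are invented for illustration and are not the real SPBM rule names):

```python
from dataclasses import dataclass

@dataclass
class VmStoragePolicy:
    """Hypothetical per-VM service-level policy record."""
    name: str
    failures_to_tolerate: int = 1      # availability: failures to survive
    flash_read_cache_pct: float = 0.0  # performance: % of reads from SSD

# A stricter "gold" policy and a default "silver" policy.
gold = VmStoragePolicy("gold", failures_to_tolerate=2,
                       flash_read_cache_pct=10.0)
silver = VmStoragePolicy("silver")  # defaults: tolerate one failure
print(gold.failures_to_tolerate, silver.failures_to_tolerate)  # 2 1
```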

Basic VSAN Setup and Configuration

VSAN appears as a plugin within VMware, and if you are familiar with vSphere you will find it easy to set up and configure. There is a setup wizard to help you add the hosts that have storage attached to them to the VSAN cluster. The process goes like this:

  • select the hosts you wish to connect to the VSAN network
  • click on the 'add network' icon
  • select the connection type
  • select the distributed port group
  • select port properties
  • If the default is not suitable then select the IP settings
  • select 'finish' to complete the wizard and add the VSAN network to the hosts

Once the hosts are added to the network you activate the VSAN and then decide how much of the storage to add to the VSAN.

  • select the cluster
  • click the 'manage' tab
  • click the 'general' option under Virtual SAN
  • click 'edit', and then turn on Virtual SAN.

You will now see a prompt asking whether you want to configure VSAN automatically or manually. If you take the automatic option, all available disks will be claimed by VSAN. Otherwise, click manual; you will then have to create your disk groups yourself through the disk management tab. To do this, select a host and then select the 'create disk group' icon. You must add at least one SSD and one hard drive to a disk group, and at least three of the hosts need to have disk groups created.
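Those two rules are easy to express as checks (a sketch for a hybrid setup; the functions are illustrative only, not part of any VMware API):

```python
def valid_disk_group(ssd_count, hdd_count):
    """A hybrid disk group needs at least one cache SSD
    and at least one capacity HDD."""
    return ssd_count >= 1 and hdd_count >= 1

def cluster_storage_ready(disk_groups_per_host):
    """At least three hosts must contribute a disk group."""
    contributing = sum(1 for n in disk_groups_per_host if n > 0)
    return contributing >= 3

# Three hosts contribute storage, one is compute-only: still valid.
print(valid_disk_group(1, 2))               # True
print(cluster_storage_ready([1, 1, 1, 0]))  # True
print(cluster_storage_ready([1, 1, 0, 0]))  # False
```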

Once you create the disk groups the VSAN datastore will be available and will show the combined storage capacity of all the drives. At this point your VSAN is complete and ready to use.
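The advertised datastore size is simply the sum of the capacity-tier drives across the cluster; as a one-line sketch (an illustration, with the caveat that this is raw capacity, before replication overhead):

```python
def vsan_datastore_capacity_gb(capacity_drives_gb):
    """Combined raw capacity of all capacity-tier drives in the
    cluster (cache devices are excluded). Usable space is lower,
    depending on the replication policy."""
    return sum(capacity_drives_gb)

# Four 2 TB capacity drives across the cluster -> 8000 GB raw.
print(vsan_datastore_capacity_gb([2000, 2000, 2000, 2000]))  # 8000
```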

Creating a new disk group

Assuming your vSAN cluster is in manual mode, you would use these steps to create a new disk group.

  • Navigate to 'cluster object' in the Inventory
  • Select the 'Management' tab
  • Select 'Disk Groups'
  • Select the host that you want to create the disk group on
  • Click on the disk group icon with the green '+' sign
  • Select one cache device and one or more capacity devices, which would be Flash and Spinning disk for a hybrid system, or different flash types for an all flash system
