Storage Spaces

For most companies, data is their lifeblood, and service has to be available 24 hours a day, every day. Hardware fails, and can cause both loss of service and loss of data. Server clustering can protect your service from the loss of a server, and hardware RAID can protect you from disk failure. Microsoft does server clustering but does not sell hardware, so it brought out Storage Spaces, which works a bit like RAID but is implemented in software. You usually need at least three physical drives, which you group together into a storage pool; from the pool you then create virtual volumes called Storage Spaces. These Storage Spaces contain extra copies of your data, so if one of the physical drives fails, you still have enough copies to read all your data. As a bonus, your data is striped over several disks, which improves performance.
One important restriction of Storage Spaces is that you cannot host your Windows operating system on it.

There are three major ways to use Storage Spaces: on a Windows 10 PC, on a single stand-alone server, or as Storage Spaces Direct on a Windows cluster. All three are covered below, after a summary of the resilience levels they share.

Resilience Levels

Storage Spaces has three basic levels of resilience, or the ability to cope with hardware failure.

Simple Resilience, equivalent to RAID 0, just stripes data across physical disks. It requires at least one physical disk and can improve performance, but it has no resilience: if you lose a disk, you lose all your data. Simple Resilience has no capacity overhead. It is only suitable for temporary data that you can afford to lose, data that can be recreated easily if lost, or applications that provide their own resilience.

Mirror Resilience, roughly equivalent to RAID 1, stores either two or three copies of the data across multiple physical disks. A two-way mirror needs at least two physical disks; you can lose one disk and still have the data intact on the other. You will also lose at least half of your raw disk capacity. As the data is striped between the disks, there is also a performance benefit.
A three-way mirror needs at least five physical disks, but it will protect you from two simultaneous disk failures.
Mirror Resilience uses dirty region tracking (DRT) to track in-flight updates to the disks in the pool. If the system crashes, the mirrors might not be consistent; when the spaces are brought back online, DRT is used to synchronise the disks again.
Mirror Resilience is the most widely used Storage Spaces implementation.

Parity Resilience, equivalent to RAID 5, stripes data and parity information across physical disks. It uses less capacity than Mirror Resilience but still stores enough redundant data to survive a disk crash. Parity Resilience uses journaling to prevent data corruption if an unplanned shutdown occurs.
It requires at least three physical disks to protect from single disk failure.
It is best used for workloads that are highly sequential, such as archive or backup.
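If you manage Storage Spaces from PowerShell rather than the GUI, these three levels correspond to the resiliency settings exposed by the Storage module. A minimal sketch, assuming a pool called 'Pool1' already exists:

  # List the resiliency settings (Simple, Mirror, Parity) the pool supports,
  # with the copy count and the number of disk failures each can survive
  Get-StoragePool -FriendlyName "Pool1" | Get-ResiliencySetting |
      Select-Object Name, NumberOfDataCopies, PhysicalDiskRedundancy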

Storage Spaces on a Windows 10 PC

To configure Storage Spaces on a PC you need at least two extra physical drives, in addition to the drive where Windows is installed. These drives can be SAS, SATA, or even USB, and can be hard drives or solid-state drives. Once you have your drives installed, go to

Control Panel > System and Security > Storage Spaces

Select 'Create a new pool and storage space'.
Select the drives you want to add to the new storage space, and then select 'Create pool'.
Give the drive a name and letter, and then choose a layout. See above for the meaning of Simple, Two-way mirror, Three-way mirror, and Parity.
Enter the maximum size the storage space can reach, and then select 'Create storage space'.
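The same steps can be scripted with the Storage cmdlets. A hedged sketch; the pool name 'HomePool', space name 'HomeSpace', size and drive letter are all placeholders to change for your setup:

  # Find the disks that are eligible to be pooled
  $disks = Get-PhysicalDisk -CanPool $true

  # Create the pool on the local Windows storage subsystem
  New-StoragePool -FriendlyName "HomePool" `
      -StorageSubSystemFriendlyName "Windows Storage*" `
      -PhysicalDisks $disks

  # Create a two-way mirrored space and format it in one step
  New-Volume -StoragePoolFriendlyName "HomePool" -FriendlyName "HomeSpace" `
      -ResiliencySettingName Mirror -Size 500GB -FileSystem NTFS -DriveLetter E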

Once you have Storage Spaces configured and working, you can add extra drives for more capacity. When you add a drive to your storage pool, you will see a check box labelled 'Optimize'. Make sure that box is checked, as it will optimise your existing data by spreading some of it over the new drive.
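Adding a drive and rebalancing can also be done from PowerShell; a sketch, again assuming the pool is called 'HomePool':

  # Add any newly installed, poolable disks to the existing pool
  Add-PhysicalDisk -StoragePoolFriendlyName "HomePool" `
      -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

  # Rebalance the existing data across all the disks, including the new one
  Optimize-StoragePool -FriendlyName "HomePool"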
If you want to remove an existing drive from a storage pool, first make sure you have enough spare capacity in the pool to hold its data. If there is, go to the Storage Spaces screen above, then select

Change settings > Physical drives

which will list all the drives in your pool. Pick out the one you want to remove then select

Prepare for removal > Prepare for removal

Windows will now move all the data off the drive, which could take a long time; several hours if you are removing a big drive. If you are doing this on a laptop, leave it plugged in, and change the Power and Sleep settings so the PC does not go to sleep while plugged in. Once all the data is migrated, the drive status will change to 'Ready to remove', so select

Remove > Remove drive
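If you would rather script the removal, the PowerShell flow is to retire the disk, repair the virtual disks so the data is rebuilt elsewhere, and then remove the disk. A sketch; the disk friendly name shown is a placeholder:

  # Mark the outgoing disk as retired so no new data lands on it
  Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -Usage Retired

  # Rebuild the retired disk's data onto the remaining disks (can take hours)
  Get-VirtualDisk | Repair-VirtualDisk

  # Once the repairs have finished, remove the disk from the pool
  Remove-PhysicalDisk -StoragePoolFriendlyName "HomePool" `
      -PhysicalDisks (Get-PhysicalDisk -FriendlyName "PhysicalDisk3")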

Storage Spaces on a single stand-alone server

To create a storage space, you must first create a storage pool, then a virtual disk, then a volume.

CREATE A STORAGE POOL

A storage pool is a collection of physical disks. There are restrictions on the types of disks and disk adaptors that can be used. Check with Microsoft for up-to-date requirements, but in summary:

  • Disk bus types can be SAS, SATA or USB, though USB drives are not a good idea in a server environment. iSCSI and Fibre Channel controllers are not supported.
  • Physical disks must be at least 4 GB, and must be blank and not formatted into volumes.
  • HBAs must be in non-RAID mode, with all RAID functionality disabled.
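You can check whether your disks meet these requirements with a quick PowerShell query; disks that show CanPool as True are eligible, and CannotPoolReason explains any that are not:

  # List candidate disks with the properties that matter for pooling
  Get-PhysicalDisk |
      Select-Object FriendlyName, BusType, Size, CanPool, CannotPoolReason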

The minimum number of physical disks needed depends on what type of resilience you want. See the resilience section above for details. Once you have your physical disks connected, go to

Server Manager > File and Storage Services > Storage Pools

This will show a list of available storage pools, one of which should be the Primordial pool. If you cannot see a Primordial pool, your disks do not meet the requirements for Storage Spaces. If you select the Primordial pool, you will see a list of the available physical disks.
Now select STORAGE POOLS > TASKS > New Storage Pool, which will open the 'New Storage Pool' wizard.
Follow the wizard instructions, inputting the storage pool name, the group of available physical disks that you want to use, and then select the check box next to each physical disk that you want to include in the storage pool. You can also designate one or more disks as hot spares here.
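The wizard steps map onto a couple of cmdlets if you prefer to script them. A minimal sketch, where the pool name 'ServerPool' and the hot spare's friendly name are placeholders:

  # Create the pool from all the poolable disks
  New-StoragePool -FriendlyName "ServerPool" `
      -StorageSubSystemFriendlyName "Windows Storage*" `
      -PhysicalDisks (Get-PhysicalDisk -CanPool $true)

  # Designate one of the pool disks as a hot spare
  Set-PhysicalDisk -FriendlyName "PhysicalDisk5" -Usage HotSpare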

CREATE A VIRTUAL DISK

Next, you need to create one or more virtual disks. These virtual disks are also referred to as storage spaces and just look like ordinary disks to the Windows operating system.

Go to Server Manager > File and Storage Services > Storage Pools > VIRTUAL DISKS > TASKS > New Virtual Disk and the New Virtual Disk wizard will open. Follow the dialog, selecting the storage pool and entering a name for the new virtual disk. Next you select the storage layout, which is where you configure the resilience type (simple, mirror, or parity). On the 'Specify the provisioning type' page you pick the provisioning type, which can be thin or fixed.
Thin provisioning uses storage more efficiently, as space is only allocated as it is used. This lets you allocate bigger virtual disks than you have real storage for, but you need to keep a close eye on the space they are actually using, otherwise your physical disks will run out.
With fixed provisioning you allocate all the storage capacity at the time a virtual disk is created, so the actual space used is always the same as the virtual disk size.
You can create both thin and fixed provisioned virtual disks in the same storage pool.

The next step is to specify how big you want the virtual disk to be, which you do on the 'Specify the size of the virtual disk' page. The thing is, unless you are using a Simple storage layout, your virtual disk will need more physical space than its nominal size, as there is an overhead for resilience. You have a choice here:
  • You can work out how much space your virtual disk will actually use, and check that it will fit into the storage pool.
  • You can select the 'Create the largest virtual disk possible' option; then if the size you pick is too large, Windows will reduce it so it fits.
  • You can select the 'Maximum size' option, which will create a virtual disk that uses the maximum capacity of the storage pool.
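In PowerShell, all of these choices are parameters on New-VirtualDisk. A sketch using the pool name from above; note that -Size and -UseMaximumSize are alternatives:

  # Fixed-provisioned, mirrored virtual disk of an explicit size
  New-VirtualDisk -StoragePoolFriendlyName "ServerPool" -FriendlyName "Data1" `
      -ResiliencySettingName Mirror -ProvisioningType Fixed -Size 1TB

  # Thin-provisioned parity disk; space is only allocated as data is written
  New-VirtualDisk -StoragePoolFriendlyName "ServerPool" -FriendlyName "Archive1" `
      -ResiliencySettingName Parity -ProvisioningType Thin -Size 10TB

  # Or let Windows create the largest virtual disk the pool can hold
  New-VirtualDisk -StoragePoolFriendlyName "ServerPool" -FriendlyName "Data2" `
      -ResiliencySettingName Mirror -ProvisioningType Fixed -UseMaximumSize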

CREATE A VOLUME

You can create several volumes on each virtual disk. To create a new volume, on the VIRTUAL DISKS screen right-click the virtual disk that you want to create the volume on, then select 'New Volume', which will open the 'New Volume' wizard.

Follow the wizard dialog, picking the server and the virtual disk on which you want to create the volume. Enter your volume size, and assign the volume either a drive letter or a folder path. You also select the file system you want, either NTFS or ReFS, then optionally the allocation unit size and a volume label.

Once the volume is created, you should be able to see it in Windows Explorer.
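The wizard's work can also be done with the standard disk pipeline. A sketch, assuming the virtual disk 'Data1' created above:

  # Initialise the new virtual disk, create a partition, and format it
  Get-VirtualDisk -FriendlyName "Data1" | Get-Disk |
      Initialize-Disk -PartitionStyle GPT -PassThru |
      New-Partition -AssignDriveLetter -UseMaximumSize |
      Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data"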

Storage Spaces Direct on a Windows Cluster

Clusters, Servers and Network

Storage Spaces Direct uses one or more clusters of Windows servers to host the storage. As is standard in a Windows cluster, if one node fails, its processing is swapped over to another node in the cluster. The individual servers communicate over Ethernet using the SMB3 protocol, including SMB Direct and SMB Multichannel. Microsoft recommends 10 GbE or faster with remote direct memory access (RDMA), which gives direct access from the memory of one server to the memory of another without involving either server's operating system.
The individual servers use the Windows ReFS filesystem, as it is optimised for virtualisation. ReFS also lets Storage Spaces automatically move data in real time between faster and slower storage, based on its current usage.
The storage on the individual servers is pulled together with the Cluster Shared Volumes file system, which unifies all the ReFS volumes into a single namespace. This means that every server in the cluster can access every ReFS volume in the cluster, as though they were mounted locally.
If you are using a converged deployment, you also need a Scale-Out File Server layer to provide remote file access over SMB3 to clients on the network, such as another cluster running Hyper-V.
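Storage Spaces Direct itself is enabled on top of an ordinary failover cluster. A hedged sketch, with placeholder server and cluster names:

  # Validate the nodes, create the cluster without shared storage, then enable S2D
  Test-Cluster -Node Server1, Server2, Server3, Server4 `
      -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
  New-Cluster -Name S2DCluster -Node Server1, Server2, Server3, Server4 -NoStorage
  Enable-ClusterStorageSpacesDirect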

Storage

The physical storage is attached to, and distributed over, all the servers in the cluster. Each server must have at least two NVMe-attached solid-state drives and at least four slower drives, which can be SSDs or spinning drives, SATA or SAS connected, but must sit behind a host-bus adapter (HBA) and SAS expander. All these physical drives are in JBOD or non-RAID format. All the drives are collected together into a storage pool, which is created automatically as drives of the correct type are discovered. Microsoft recommends that you take the default settings and have just one storage pool per cluster.

Although the physical disks are not configured in any RAID format, Storage Spaces itself provides fault tolerance by duplicating data between the different servers in the cluster in a similar way to RAID. The different duplication options are 'mirroring' and 'parity'.

Mirroring is similar to RAID-1, with complete copies of the data stored on different drives hosted on different servers. It can be implemented as two-way or three-way mirroring, which requires twice, or three times, as much physical hardware to store the data. However, the data is not simply replicated onto another server. Storage Spaces splits the data up into 256 MB 'slabs', then writes two or three copies of each slab out to different disks on different servers. A large file in a two-way mirror will not be written to two volumes, but will be spread over every volume in the pool, with each pair of 'mirrored' slabs on separate disks hosted on separate servers. The advantage of this is that a large file can be read in parallel from multiple volumes, and if one volume is lost, it can be quickly reconstructed by reading and rebuilding the missing data from all the other volumes in the cluster.
Storage Spaces mirroring does not use dedicated or 'hot' spare drives to rebuild a failed drive. Because the capacity is spread over all the drives in the pool, the spare capacity for a rebuild must also be spread over all the drives. If you are using 2 TB drives, you have to maintain at least 2 TB of spare capacity in your pool so that a rebuild can take place.

A parity configuration comes in two flavours, single parity and dual parity, which can be considered equivalent to RAID-5 and RAID-6. You need some expertise in maths to fully understand how these work, but in simple terms, for single parity, the data is split up into chunks and some chunks are combined to create a parity chunk. All these chunks are then written out to different disks. If you then lose one chunk, it can be recreated by manipulating the remaining chunks and the parity chunk.
Single parity can only tolerate one failure at a time and needs at least three servers with associated disks (called a Hardware Fault Domain). The extra space overhead is similar to three-way mirroring, which provides more fault tolerance, so while single parity is supported, it would be better to use three-way mirroring.
Dual parity can recover from up to two failures at once, but with better storage efficiency than a three-way mirror. It needs at least four servers, and with four servers you just need to double the amount of allocated storage, so you get the resilience of three-way mirroring for the storage overhead of two-way mirroring. The minimum storage efficiency of dual parity is 50%: to store 2 TB of data, you need 4 TB of physical storage capacity. As you add more hardware fault domains (servers with storage), the efficiency increases, up to a maximum of 80%. For example, with seven servers the storage efficiency is 66.7%, so to store 4 TB of data you need just 6 TB of physical storage capacity.
Storage Spaces Direct also introduced an advanced technique called 'local reconstruction codes' (LRC). For large disks, dual parity uses LRC to split its encoding and decoding into a few smaller groups, which reduces the overhead of making writes or recovering from failures.

The final piece in the Storage jigsaw is the Software Storage Bus. This is a software-defined storage fabric that connects all the servers together so they can see all of each other's local drives, a bit like a Software SAN. The Software Storage Bus is essential for caching, as described next.

Cache

What Microsoft calls a server-side cache is essentially a top-most disk tier, usually consisting of NVMe-connected SSDs. When you enable Storage Spaces Direct, it discovers all the available drives, then automatically selects the fastest drives as the 'cache', or top, tier. The lower tier is called the 'capacity' tier. Caching has a storage overhead which will reduce your usable capacity.
The different drive type options are:

  • All NVMe SSD; the best for performance, and if the drives are all NVMe there is no cache. NVMe is a fast SSD protocol where the drives are attached directly to the PCIe bus
  • NVMe + SSD; the NVMe drives are used as cache and the SSD drives as capacity. Writes are staged to cache, and reads are served from the SSDs unless the data has not been destaged yet
  • All SAS/SATA-attached SSD; there is no automatically configured cache, but you can configure one manually. If you run without a cache you get more usable capacity
  • NVMe + HDD; both reads and writes are cached for performance, and data is destaged to the HDD capacity drives as it ages
  • SSD + HDD; as above, both reads and writes are cached for performance. If you need a large archive capacity, you can use this option with a small number of SSDs and a lot of HDDs, which gives adequate performance at a reasonable price.

When Storage Spaces Direct destages data, it uses an algorithm to de-randomise it, so the IO pattern looks sequential even if the original writes were random. The idea is that this improves write performance to the HDDs.
It is possible to have a configuration with all three types of drive: NVMe, SSD and HDD. If you implement this, the NVMe drives become a cache for both the SSDs and the HDDs. The system will only cache writes for the SSDs, but will cache both reads and writes for the HDDs.
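The cache configuration can be inspected, and overridden if you really need to, with the cluster cmdlets. A sketch; the defaults are usually correct, so treat the Set- call as illustrative only:

  # Show how the cache is currently configured for this cluster
  Get-ClusterStorageSpacesDirect

  # Example override: cache only writes, not reads, for the SSD capacity drives
  Set-ClusterStorageSpacesDirect -CacheModeSSD WriteOnly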

Deployment options

There are two different ways to implement Storage Spaces Direct, called 'Converged' and 'Hyper-Converged'. If you are not keen on those names, then you could call the Converged option 'Disaggregated' instead.
Storage Spaces Direct uses a lot of file servers to host the physical disks. If you work in an SME, it can be quite an overhead to dedicate those servers to just managing disks. If you work for an enterprise, or as a service provider, it is a good idea to run your storage servers and your application, or 'compute', servers in separate clusters, as the two workloads can then be scaled independently. The two deployment options mean you can select the one that fits your environment best.

The Converged deployment option means running the storage and compute servers in separate clusters. This needs an extra Scale-Out File Server (SoFS) layer sitting on top of Storage Spaces Direct, to provide network-attached storage over SMB3 file shares.
The Hyper-Converged option just uses a single cluster for compute and storage, and runs applications like Hyper-V virtual machines or SQL Server databases directly on the servers providing the storage.
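For a converged deployment, the extra SoFS layer is added as a cluster role, and the CSV volumes are then exposed as SMB3 shares. A minimal sketch with placeholder names, paths and groups:

  # Add the Scale-Out File Server role to the storage cluster
  Add-ClusterScaleOutFileServerRole -Name "SOFS01"

  # Share a folder on a Cluster Shared Volume to the compute cluster over SMB3
  New-SmbShare -Name "VMStore" -Path "C:\ClusterStorage\Volume1\VMStore" `
      -FullAccess "DOMAIN\HyperVHosts"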
