Enterprise Storage Selection

What is enterprise storage? One way to define it would be: 'Enterprise storage systems can manage large volumes of data as presented by a variety of different types of server, and also support large numbers of concurrent users'. These days, the two essential server types are Windows and Linux, especially when virtualised with a product like VMware. z/OS and UNIX are still here and are essential for many companies too.

The data storage environment has changed dramatically over the last few years, with Flash devices almost replacing spinning magnetic disk. Storage Virtualisation can make it difficult to figure out exactly what sort of storage you are actually accessing, while NVMe is replacing the older connectivity protocols inside the storage devices. Then along comes Storage Class Memory (SCM), faster and more expensive than Flash, and just a proposal a couple of years ago. Now it's here and the big vendors are adopting it as a fast cache for flash disks.
The Cloud seems to be changing everything too. It is said that no-one is building data centers anymore (except Cloud providers), and if you don't build data centers you don't need disk subsystems. Of course, your Cloud provider will be hosting your data on disks of some kind, but one of the Cloud benefits is that you don't worry about that, as long as you can store and access your data. There are two dimensions to Cloud support: if you are a Cloud provider then you want your subsystem to support partitioning, so you can isolate your customers' data; as an end customer, you might want your subsystems to support the Cloud as an ultimate archive tier.

There are some new players coming onto the field and they are definitely ones to watch. Gartner ranks Pure Storage as the best Flash storage provider, and Huawei are entering the scene with their ES3000 V5 NVMe SSD. Non-Volatile Memory Express (NVMe) is providing a step change in performance, and all the storage vendors are updating their product ranges to include NVMe as a storage tier, or even producing all-NVMe storage arrays. NVMe runs over PCI Express, and NVMe over Fabrics extends it across RDMA and Fibre Channel transports, so it can support much higher bandwidths than SATA or SAS.
The links below will take you to discussions of the enterprise products from the six big enterprise vendors: Pure Storage, EMC, HDS, IBM, HP and NetApp. The final link is to a table that compares some of their products.

Pure Storage

History

Pure Storage is the new kid on the block, founded in 2009; they started releasing products in 2011. They initially produced one of the first all-flash arrays for datacentres, the FlashArray 300 series, then added encryption, redundancy, and the ability to replace components like flash drives or RAM modules.
In 2015, Pure Storage introduced some new hardware that used 3D-NAND, and later added artificial intelligence software that automates the configuration of the storage array.

Architecture

The latest models are the Pure FlashArray//X, which were designed from scratch to work with flash storage, using the NVMe protocol to deliver very good performance. Pure uses a log structured file architecture to allocate the data. Data is initially held in NV-RAM cache, where it is deduplicated and compressed. Dedup and compression are not optional, they always happen. Pure uses a lookup data unit size of 4KB, which is smaller than other implementations. The lookup data unit alignment is 512B, and for data comparison Pure uses the matched 4KB data unit as an anchor point, extending the match comparison before and after the anchor point in 512B increments until a unique segment is found. Pure claim a 5:1 reduction ratio with compression and deduplication only, and 10:1 with thin provisioning.
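To make the anchor-and-extend idea concrete, here is a minimal Python sketch of that style of matching. It assumes a simple fingerprint index of 4KB units and byte-level access to already-stored data; the function names and data structures are invented for illustration and are not Pure's implementation.

    # Sketch of anchor-based deduplication matching (illustrative only).
    import hashlib

    UNIT = 4096   # lookup data unit size (4KB)
    STEP = 512    # alignment and extension granularity (512B)

    def unit_hash(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def extend_match(new, anchor_new, stored, anchor_stored):
        """Grow a matched 4KB anchor backwards and forwards in 512B steps."""
        start_new, start_stored = anchor_new, anchor_stored
        while (start_new >= STEP and start_stored >= STEP and
               new[start_new - STEP:start_new] == stored[start_stored - STEP:start_stored]):
            start_new -= STEP
            start_stored -= STEP
        end_new, end_stored = anchor_new + UNIT, anchor_stored + UNIT
        while (end_new + STEP <= len(new) and end_stored + STEP <= len(stored) and
               new[end_new:end_new + STEP] == stored[end_stored:end_stored + STEP]):
            end_new += STEP
            end_stored += STEP
        return start_new, end_new, start_stored, end_stored

    def find_duplicates(new, stored, index):
        """index maps unit_hash -> offset of a 4KB unit already held in 'stored'."""
        matches = []
        for off in range(0, len(new) - UNIT + 1, STEP):        # 512B alignment
            h = unit_hash(new[off:off + UNIT])
            if h in index and stored[index[h]:index[h] + UNIT] == new[off:off + UNIT]:
                matches.append(extend_match(new, off, stored, index[h]))
        return matches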

Once Pure works out the actual data to be stored, it splits the data into segments and uses a flash translation layer to map logical addresses to physical locations. The segments are distributed over the flash devices with RAID redundancy, but the original data is never overwritten; new segments are written to unused space. The RAID system used is called RAID 3D, and it calculates parity in two directions. Pure uses QLC NVMe Flash storage, and NVMe internal connectivity for speed.
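The 'never overwrite, always write to unused space' behaviour can be illustrated with a toy log-structured store in Python: every write appends a new segment and repoints the logical-to-physical map, leaving the old copy as garbage to be reclaimed later. The class and method names are invented for illustration, not the FlashArray design.

    # Toy log-structured store: writes never update data in place.
    class LogStructuredStore:
        def __init__(self):
            self.log = []     # append-only list of (logical block, data) segments
            self.map = {}     # logical block -> index into self.log

        def write(self, lba, data):
            self.log.append((lba, data))        # always write to unused space
            self.map[lba] = len(self.log) - 1   # repoint; the old copy becomes garbage

        def read(self, lba):
            return self.log[self.map[lba]][1]

        def garbage(self):
            """Segments no longer referenced by the map, to be reclaimed later."""
            live = set(self.map.values())
            return [i for i in range(len(self.log)) if i not in live]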

Models

The //C model comes as a single model, the FlashArray//C60, with three capacity points: an entry-level 366TB raw (1.3PB effective), then 878TB raw (3.2PB effective) and 1.39PB raw (5.2PB effective).

The //X models are:
//X10,  22TB raw,    73TB effective
//X20,  94TB raw,   314TB effective
//X50, 185TB raw,   663TB effective
//X70, 662TB raw, 2286TB effective
//X90, 878TB raw, 3300TB effective
Thin provisioning, encryption and snapshots are supported on all models.

Software

Pure1 storage management software is a SaaS-based provisioning, management, and monitoring solution that integrates with Pure’s proactive support. It leverages machine-learning and predictive analytics to help advise customers on optimisation and what-if situations, including capacity planning, and performance simulation.
Pure1 capabilities are built on a global predictive intelligence engine called Pure1 Meta that leverages the accumulated data from the thousands of FlashArrays currently deployed. Pure1 Meta is the AI engine within Pure1 that provides the intelligence to manage, automate, and proactively support the FlashArray. Pure1 Meta collects more than a trillion telemetry data points of performance data per day. Part of the intelligence of Pure1 comes by way of its ability to recognize usage patterns. Pure1 identifies known patterns that may affect the optimal operations of FlashArrays, and notifies other FlashArrays with similar usage patterns of the concern. This way, customers are aware of potential impacts to their arrays and can proactively take preventive measures.
Pure1 is browser-based so you can manage, monitor, and analyze your storage from anywhere with any device, including mobile devices.

Pure supports remote clustering, with ActiveCluster. This allows you to link two different data center sites up to 150 miles apart in an active-active stretch cluster with transparent failover, zero recovery point objective (RPO) and zero recovery time objective (RTO).
Pure's ActiveCluster solution includes Pure1 Cloud Mediator, a software-based third entity that monitors the link between the two sites and decides which site becomes the primary site should the link fail. Pure1 Cloud Mediator runs in the Cloud, so no extra software or hardware, with its associated maintenance, is needed. It can be used to provide rack-level active clustering inside a data center as well as linking separate data centers.
A remote third data center can also be added for asynchronous replication, which is accessible and live for replication from both of the primary arrays in the ActiveCluster.
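Conceptually, the mediator acts as a quorum witness: if the replication link between the sites fails, each array asks the mediator for permission to keep the stretched volumes online, and only one site wins, which avoids split-brain writes. The Python sketch below illustrates that idea; it is not Pure1 Cloud Mediator code and the names are invented.

    # Illustrative witness logic for an active-active pair (not Pure's code).
    class Mediator:
        def __init__(self):
            self.winner = None      # which array may keep stretched volumes online

        def request_preference(self, array_id):
            """The first array to reach the mediator after a link failure wins."""
            if self.winner is None:
                self.winner = array_id
            return self.winner == array_id

    def on_replication_link_failure(array_id, mediator):
        if mediator.request_preference(array_id):
            return "continue serving I/O"       # hosts on this site see no outage
        return "pause stretched volumes"        # losing site stops, avoiding split-brain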

VMware support

VMware is supported

z/OS support

z/OS is not supported

Dell EMC

History

EMC started out producing cache memory and developed solid state disks, memory devices that emulated spinning disks, but with much faster performance. These solid state disks were usually re-badged and sold by StorageTek.

Around 1988, EMC entered the storage market in its own name, selling Symmetrix disk subsystems with what at that time was a very large 256MB cache, fronting 24GB of RAID 1 storage. Their mosaic architecture was the first to map IBM CKD mainframe disk format onto standard FBA open systems backend disks, and as such, could claim to be the first big user of storage virtualisation. In those days, EMC developed a reputation for delivering best performance, but at a price.

In 2008, EMC became the first to use flash storage in an enterprise subsystem, for high performance applications. EMC introduced their latest addition to the Symmetrix range, the V-MAX, in April 2009.
In September 2016, Dell bought out EMC and the company is now called Dell EMC.

Architecture

The old Symms used the Direct Matrix architecture, now called Enginuity. The principle behind Direct Matrix is that all IO comes into the box from the front-end directors. These are connected to global memory cache modules, which are in turn connected to back-end directors that drive the IO down to the physical disks. This connectivity is all done by a directly connected, point-to-point fibre-channel matrix.

The new PowerMax series is almost a rebranding of V-MAX. It is all Flash, with faster CPUs, NVMe internal connectivity, and ready for SCM. PowerMax comes in two models, the 2000 and 8000. These are an upgrade from the V-MAX devices: V-Bricks are now called PowerBricks and the Hypermax OS is now called the PowerMax OS.
The 2000 can have one to two PowerBricks and the 8000 can have up to eight. The faster CPUs mean that the maximum speed jumps from 6.7 million IOPS in the old V-MAX 950F to 10 million IOPS in the PowerMax 8000. These devices are the first in the range to use deduplication and inline compression, which EMC say will deliver up to 5:1 data reduction. The maximum effective capacity stays the same as the older devices at 1PB and 4PB respectively. EMC also claims that the combination of NVMe and SCM will improve response times by 50 per cent.

V-MAX devices are still available. Their architecture builds on the older DMX architecture, but has some fundamental differences. The directors and cache are combined together into a V-MAX engine. Each V-MAX engine contains two controllers, and each controller contains host and disk ports, a CPU complex, cache memory and a Virtual Matrix interface.
The VMAX architecture is described in more detail in the VMAX Architecture section.

One interesting feature for hybrid systems is the storage tiering, based on Tier0 Flash storage, Tier1 FC drives and Tier2 SATA drives.
EMC FAST, or "fully automated storage tiering" checks for data usage patterns on files and moves them as required between Fibre Channel, SAS and flash drives to optimise cost effectiveness and performance requirements. Supported subsystems include the V-Max, the Clariion CX4 and the NS unified system.
FAST can also be configured manually to move application data to higher performing disk on selected days of the month or year. This could be useful for a monthly payroll application, for example.
EMC introduced FAST2 in August 2010, which introduced true LUN tiering and can manage data at block level.
The tiering concept has been extended further by adding a 'Cloud' layer, the EMC Cloud Array.
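As a rough illustration of what automated tiering does, the Python sketch below promotes or demotes extents between tiers according to how busy they have been. The tier names, thresholds and 24-hour IOPS metric are invented for the example; this is not EMC FAST's actual algorithm.

    # Conceptual automated storage tiering: hot extents rise, cold extents sink.
    TIERS = ["flash", "fc", "sata", "cloud"]    # fastest and dearest to slowest and cheapest

    def choose_tier(iops_last_24h):
        if iops_last_24h > 100:
            return "flash"
        if iops_last_24h > 10:
            return "fc"
        if iops_last_24h > 0.1:
            return "sata"
        return "cloud"

    def rebalance(extent_iops):
        """extent_iops maps extent id -> observed IOPS; returns planned placement."""
        return {ext: choose_tier(iops) for ext, iops in extent_iops.items()}

A monthly payroll extent, for example, would drift down to the cheaper tiers between runs, which is why the manual scheduling option mentioned above can be useful.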

Models

The PowerMax series are all flash, with 2 different models, the PowerMax 2000 and 8000. Capacities are quoted as 'effective', which assumes a 5:1 increase over usable capacity after data reduction. Storage is supplied in 'PowerBricks', which includes a PowerMax engine and 53 TB of base capacity. Flash Capacity Packs let you scale up in 13 TB increments.
The 2000 supports up to 2 PowerBricks and has an effective capacity of 1PB.
The 8000 supports up to 8 PowerBricks with an effective capacity of 4PB, and has faster engines than the VMAX 950F.

The VMAX F series are all flash, with 2 different models, the VMAX 250F and 950F. Capacities are quoted as 'effective', which assumes a 6:1 increase over usable capacity after data reduction. Storage is supplied in 'V-Bricks', each of which includes a VMAX engine and 53 TB of base capacity. Flash Capacity Packs let you scale up in 13 TB increments.
The 250F supports up to 2 V-Bricks and has an effective capacity of 1PB.
The 950F supports up to 8 V-Bricks with an effective capacity of 4PB, and has faster engines than the older 850F.

The V-MAX starts with the VMAX 100K, which supports up to 2 VMAX engines, each with 128 GB cache, and 24 to 1,560 disk drives giving a usable capacity of 500 TB. The virtual matrix bandwidth is 200GB/s.
The VMAX 200K supports up to 8 engines, each with 2048 GB cache, and up to 3,200 drives, but with a variety of different size disk and Flash drives in different RAID configurations, the total capacity is very much dependent on the configuration. The maximum formatted capacity with 3TB disk drives is 2.9 PB and maximum usable capacity in a RAID configuration is close to 2PB.
The VMAX 400K also supports up to 8 engines, but these are more powerful than the 200K engines. Each engine can support 2048 GB cache, giving a maximum cache capacity of 16 TB across 8 engines. It supports up to 2,400 drives with a formatted capacity close to 4PB, and a potential RAID5 or RAID6 usable capacity of 3.8PB. The main difference between the 200K and the 400K seems to be increased internal bandwidth, 400GB/s compared to 192 GB/s, thanks to the 384 2.7 GHz Intel® Xeon cores, and that the 400K supports 4TB drives, which is where the capacity increase comes from.

Software

DMX software includes EMC Symmetrix Management Console for defining and provisioning volumes and managing replication. The TimeFinder products are used for in-subsystem point-in-time (PIT) replication, and SRDF for remote replication. SRDF can run in full PPRC compatibility mode, and can also replicate to three sites in a star configuration.
Enginuity 5784 adds new features including SRDF/EDP (Extended Distance Protection) which is similar to cascaded SRDF except that it uses a DLDEV (DiskLess Device) for the intermediate hop.
EMC was lacking in z/OS support for some years, but they have now licensed PAV and MA software from IBM, and have provided z/OS Storage Manager to manage mainframe volumes, datasets and replication.

GDPS support is provided, except for GDPS/GM or a three site GDPS/MGM solution.

VMware support

EMC VMAX3 and EMC VMAX support VMware and VMware Virtual Volumes, but check the VMware site for an up-to-date list of VMware product names, supported devices and firmware levels.

z/OS support

EMC historically had an issue with supporting z/OS features like FlashCopy and PPRC mirroring, as the equivalent EMC features were introduced earlier and were arguably (at least by EMC) better. This became a problem when GDPS came along: while TimeFinder and SRDF worked fine, they did not work with GDPS. GDPS manages remote mirroring and site failover, but it does much more than just manage the storage; it also manages the failover of z/OS LPARs and applications. A lot of big sites use it and require that any disk purchase must be 100% GDPS compatible. EMC therefore licensed some of the IBM code to ensure good compatibility.

The EMC implementation of PPRC is called Symmetrix Compatible Peer and is built on SRDF/S code. Some minor differences are:
PPRC needs Fibre Channel path definitions between each z/OS LCU. A DS8000 uses the WWN for each FC adapter to define the links, but the VMAX does not use WWNs, it uses the serial number. This means that in the GEOPLEX LINKS definition of the GDPS Geoparm, you need to specify the link protocol as 'E', then define the links with the serial number (this was how ESCON links were defined, hence EMC uses the 'E' protocol).
Symmetrix Compatible Peer does not support cascaded PPRC, PPRC loopback configurations or Open Systems FBA disks.
For GDPS FREEZE to work correctly, the GDPS / PPRC CGROUP definitions must exactly match the SRDF GROUP definitions and link definitions in the VMAX config file.
If you use HyperSwap and FAST tiering, then the FAST performance stats are copied over when a HyperSwap is invoked, so the disk performance will be maintained.
GDPS requires small dedicated utility volumes on each LCU to manage the mirroring. These volumes should not be confused with EMC GDDR Gatekeeper volumes, they have completely different purposes.

The VMAX will also support XRC, which means that it will support 2 sites synchronously mirrored with PPRC, then a third site asynchronously mirrored with XRC.


HDS

History

HDS, now Hitachi Vantara, was always known as the company that manufactured disks that were exactly compatible with IBM, but worked a little faster and cost a little less. HDS broke that mould when they introduced the 'Lightning' range of subsystems in 2000, which was a merging of telephony cross-bar technology and storage subsystem technology. They extended and developed that architecture further with the USP (Universal Storage Platform), released in September 2004.
In September 2010 HDS released the Virtual Storage Platform (VSP), a purpose built subsystem that provides automated tiering between flash and spinning disk drives. This model was augmented in late 2015 with the VSP F range, all flash systems.
In 2019, HDS announced that they were freezing investment in their high end storage subsystems to concentrate on products with a higher profit margin, for example all Flash systems. Part of the rationale for this is that no-one actually uses the full throughput capacity of the VSP-G1500, so they see little point in developing it further at present.

The Hitachi F-Series are all flash systems, with models ranging from F350 to F1500. These systems use NVMe internally for performance. In the specs, two flash types are mentioned, which relate to the two storage module options, SSD or FMD, where FMD stands for Flash Module Device. The capacities below are shown for SSD modules; if FMD modules are installed then the capacity is reduced by half. However Hitachi builds the FMD modules in house, rather than using commodity SSD devices, and claims they give up to 3 times better random read and 5 times better random write performance than commodity SSD. An FMD has 32 parallel paths to the flash storage, at least twice as many as a standard SSD. This means that more NAND storage can be accessed, and channels can be dedicated to housekeeping work like garbage collection and wear levelling, so they do not interfere with host IO processing.
Hitachi has introduced the Hitachi Storage Virtualisation Operating System (SVOS RF) which is designed to optimise the performance of flash storage.

It is worth briefly mentioning the Hitachi N series, marketed as NAS devices, but they contain virtualised servers, network and storage, in other words, HCI systems. The N series can hold up to 6 PB of data, and use a Hybrid Flash / SAS disk type.

VSP Models

The All Flash models are
the F700 (256GB cache, 13 PB effective capacity)
the F900 (512GB cache, 17.3 PB effective capacity)
the F1500 (2048GB cache, 34.6 PB effective capacity)
The effective capacity figure assumes a 5 times improvement over raw capacity.

There are 3 hybrid disk / flash models; the G700, G900 and G1500, with raw internal disk capacities of 11.7, 14 and 6.7 Petabytes respectively. The G1500 has a lower capacity than the G900, as it is optimised for performance.

Some other VSP features are:

Thin provisioning. Disk space is just allocated as needed, up to the size of the virtual volume. When data is deleted from the virtual volume, a Zero Page Reclaim utility returns unused storage pages back to the spare pool (a minimal sketch of this follows this list).
Automatic Dynamic Rebalancing. When new physical volumes are added to the subsystem, virtual volume pages are re-striped to ensure they are still evenly spread over all the physical volumes.
Universal Virtualisation Layer. If you put some external storage behind the VSP then it is carved up and allocated to look the same as the internal storage. This means that mirroring, snapshot and replication software all work consistently for both internal and external storage.
Virtual Ports. Up to 1024 virtual FC ports can share the same physical port. Each attached server will only see its own virtual ports, which means they don't get to access each other's data. This feature allows the VSP to efficiently use the high bandwidth that is available on an individual port.
All data stored on the VSP is hardware encrypted for security.
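As promised above, here is a minimal Python sketch of thin provisioning with zero page reclaim: physical pages are only taken from the shared pool on first write, and pages that have become all zeros are handed back. The page granularity, class names and pool handling are assumptions for illustration, not Hitachi's design.

    # Thin provisioning with zero page reclaim (conceptual sketch).
    class FreePool:
        def __init__(self, free_pages):
            self.free = free_pages
        def allocate(self):
            if self.free == 0:
                raise RuntimeError("pool exhausted")
            self.free -= 1
        def release(self):
            self.free += 1

    class ThinVolume:
        def __init__(self, virtual_size, pool):
            self.virtual_size = virtual_size    # what the host sees
            self.pool = pool                    # shared free pages
            self.pages = {}                     # only pages that have been written

        def write(self, page_no, data):
            if page_no not in self.pages:
                self.pool.allocate()            # physical space taken on first write only
            self.pages[page_no] = data

        def zero_page_reclaim(self):
            """Return pages that now contain only zeros to the spare pool."""
            for page_no, data in list(self.pages.items()):
                if not any(data):               # an all-zero page is effectively unused
                    del self.pages[page_no]
                    self.pool.release()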

Software

Hitachi High Availability Manager provides non-disruptive failover between VSP and USP systems, giving instant data access at the remote site if the primary site goes down. This is aimed at non-mainframe SAN based applications.
Mainframe availability uses TrueCopy synchronous remote mirroring and Universal Replicator asynchronous mirroring, with full support for GDPS.

The Storage Command suite includes.

VMware support

The VSP systems support VMware virtual volumes through the Hitachi Storage Provider for VMware vCenter product.



IBM

History

The original IBM hard drive, the RAMAC 350, was manufactured in 1956, used 24 inch (609mm) platters, and held 5 MB. The subsystem also weighed about 1 ton. That was a bit before my time, but when I joined IT, the storage market was dominated by IBM, the mainframe was king, and the standard disk type was the IBM 3380 model K, a native CKD device which contained 1.89 GB. IBM lost its market leader position to EMC sometime in the 1990s. CKD, or Count Key Data, was based around accessing physical tracks on spinning disks. CKD is now virtualised on FBA disks.

IBM introduced the DSxxxx series in late 2004 in response to competition from EMC and HDS. They updated their internal bus architecture to increase the internal transfer speed by 200% plus over the ESxxx series, and also abandoned their SSA disk architecture for a switched FC-AL standard. The DS8800 series is essentially a follow-on from the ESS disk series, and re-uses much of the ESS microcode.
IBM is now going all-Flash, with the DS8900 range and the DS9200 Flash plus SCM

DS8900 Architecture

The DS8900 is an all-flash appliance that comes in 2 basic models, the 8910F and the 8950F. When an expansion frame is added to the 8950F, it can hold up to 2TB of cache, and up to 8PB of usable capacity when fully configured. The flash devices come in two flavours, a high performance device that holds up to 800GB, and a high capacity device that holds up to 3.84 TB.
It uses Power9 processors and supports up to 64 ports delivering 16Gb Fibre Channel.

The DS9200 series is also all Flash, but it uses the newer, faster devices. The Flash modules can be a mixture of SCM, NVMe flash and SAS Flash, thus providing three potential performance tiers. The capacity is quoted as 32TB effective with a 1.5TB cache. External connectivity supports 16/32 Gb/s Fibre Channel, and 10Gb/s Ethernet for iSCSI connections.

DSxxxx Software

The DS software includes Flashcopy for internal subsystem point-in-time data copies, IBM Total Storage DS Manager for configuration and Metro/Global mirror for continuous inter-subsystem data replication.

The older ESS subsystems supported two kinds of z/OS Flashcopy, a basic version that just copied disks, and an advanced version that copied disks and files. DS only supports the advanced Flashcopy.
Flashcopy versions include;
Multi-relationship, will support up to 12 targets;
Incremental, can refresh an old Flashcopy to bring the data to a new point-in-time without needing to recopy unchanged data (a change-tracking sketch follows this list);
Remote Mirror Flashcopy, permits dataset flash operations to a primary mirrored disk;
Inband Flashcopy commands, permits the transmission of flashcopy commands to a remote site through a Metro Mirror link;
Consistency Groups, flash a group of volumes to a consistent point-in-time. A consistency group can span multiple disk subsystems.
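The incremental option above depends on tracking what has changed since the last copy. The Python sketch below illustrates the general change-tracking idea: host writes record the affected tracks, and a refresh recopies only those tracks. It is a conceptual illustration with invented names, not the DS8000 microcode.

    # Conceptual incremental point-in-time copy using a change record.
    class IncrementalCopy:
        def __init__(self, source, target):
            self.source = source
            self.target = target
            self.target[:] = source             # initial establish copies everything
            self.changed = set()                # tracks written since the last flash

        def host_write(self, track, data):
            self.source[track] = data
            self.changed.add(track)             # remember the delta for the next refresh

        def refresh(self):
            """Bring the target to a new point in time, copying only changed tracks."""
            for track in self.changed:
                self.target[track] = self.source[track]
            self.changed.clear()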

Remote mirroring versions include;
Metro Mirror, synchronous remote mirroring up to 300km, was PPRC;
Global Copy, asynchronous remote data copy intended for data migration or backup,was PPRC-XD;
Global Mirror, asynchronous remote mirroring;
Metro/Global Mirror, three site remote replication, two sites being synchronous and the third asynchronous;
z/OS Global Mirror, z/OS host based asynchronous remote mirror, was called XRC;
z/OS Metro/Global Mirror, three site remote replication, two sites being synchronous and quite close together, the third asynchronous and remote.

VMware support

The DS8880 supports the VMware vSphere Web Client, but not VMware virtual volumes. However this may change so consult the IBM documentation for an up to date position. (the IBM FlashSystem V9000 does support VMware virtual volumes)



HP

History

HP entered the disk market with the HP 7935. They also introduced the first ever commercially produced hard drive in a 1.3 inch form factor in 1992; it had a capacity of 20 MB. HP has long produced its own range of open systems disk storage, and resells a modified version of the Hitachi VSP, called the XP8, for high end and mainframe connectivity.

HP XP8 Architecture

Because the HP XP8 is a re-badged Hitachi VSP, it has the same basic architecture. The XP8 supports z/OS mainframes, and several models of Windows, Linux and Unix servers.
It supports thin provisioning for volumes and data tiering between SSD, FMD (Hitachi Flash Module Device) and HDD disks. Smart Tier software supports basic tiering, while Real Time Smart Tier will dynamically manage data to ensure it is in the correct place for its performance requirements.

HP mainframe software includes the following products

The HP XP8 has the same open architecture as the HDS VSP and supports the same range of OEM devices, plus it supports HP MSA devices.

HPE 3PAR StoreServ

The HPE 3PAR StoreServ Storage architecture consists of a number of Controller Nodes, each of which contains CPU, cache, ASICs and host/disk connectivity. The Controller Nodes are interconnected by a high-speed, full-mesh backplane. Each controller node has a dedicated 4GB/s bi-directional link to each of the other nodes, and a total of 56 of these links form the array's full-mesh backplane. Also, each controller node may have one or more paths to hosts, either directly or over a SAN. As the Controller Nodes are clustered, the host servers can access volumes over any host-connected port, even if the physical storage for the data is connected to a different controller node. Controller node pairs are connected to dual-ported drive enclosures by a PCIe slot.

The HPE 3PAR StoreServ 20000 Storage is an enterprise flash array which can scale to 24PB on an 8-node system. The architecture is optimised for Flash, with the HPE 3PAR Gen5 ASIC for silicon-based hardware acceleration, and features inline deduplication, compression, data packing and thin provisioning.
The flash disks include HPE Memory Driven Flash, which is storage class memory (SCM), and is based on Intel Optane 3D XPoint. These 750GB modules use NVMe protocol and are used as a very high speed top tier.

3PAR Software

There are other optional software products available, such as HPE 3PAR Peer Motion for load balancing; HPE 3PAR Remote Copy; HPE 3PAR Peer Persistence, which provides a metropolitan-wide cluster of storage and hosts; and HPE 3PAR Cluster Extension Software, which integrates with the Windows OS clustering software and HPE 3PAR Remote Copy to automate failover and failback.

Models

The 3PAR StoreServ models are:
The HPE 3PAR StoreServ 20000 Storage, designated an enterprise flash array, with 24 PB usable capacity and a maximum cache of 51.6 TiB.
The HPE 3PAR StoreServ 8000 Storage, with 3 PB usable capacity and a maximum cache of 384 GiB.
The HPE 3PAR StoreServ 9000 Storage, with 6 PB usable capacity and a maximum cache of 896 GiB.

VMware support

HP products have full VMware support, but check the VMware site for an up-to-date list of VMware product names, supported devices and firmware levels.

z/OS support

The XP8 supports z/OS, but the StoreServ 3PAR range does not.


NetApp

History

NetApp was founded in 1992 and started out producing NetApp filers. A filer, or NAS device, has a built in operating system that owns a filesystem and presents data as files and directories over the network. Contrast this with the more traditional block storage approach used by IBM and EMC, where data is presented as blocks over a SAN, and the operating system on the server has to make sense of it and carve it up into filespaces.

NetApp use their own operating system to manage the filers, called Data ONTAP, which has progressively developed over the years, partly by a series of acquisitions. In June 2008 NetApp announced the Performance Acceleration Module (or PAM) to optimize the performance of workloads which carry out intensive random reads.
Data ONTAP 8.0, released at the end of 2010, introduced two major features: 64-bit support and the integration of the Spinnaker code to allow clustering of NetApp filers.
According to an IDC report in 2010, at that time NetApp was the third biggest company in the network storage industry, behind EMC and IBM.
NetApp released the EF550 Flash array device in 2013. This is an all flash storage array, with obvious performance benefits. The current (2020) all flash array, the AFF A800 2-node cluster, will hold 3.16PB raw, on NVMe SSD drives.

NetApp is positioning itself as the company for Hybrid Clouds. Their products support Public Clouds, including those supplied by Alibaba, Amazon, Google, IBM and Microsoft Azure. They also support private clouds with NetApp StorageGRID. The Cloud support allows you to automatically tier cold data to the cloud with FabricPool, and to back up and recover Cloud data with cloud-resident NetApp Data Availability Services.

Architecture

File system

Data ONTAP is an operating system, and it contains a file system called Write Anywhere File Layout (WAFL) which is proprietary to NetApp. When WAFL presents data as files, it can act as either NFS or CIFS, so it can present data to both UNIX and Windows, and share that data between them.
All Flash systems use FlashEssentials, a variant of WAFL that is optimised for Flash. It includes things like amalgamating writes to free blocks to maximise performance and increase the flash media life; a new random read I/O processing path that was designed from the ground up for flash; and inline data reduction technologies, including inline compression, inline deduplication, and inline data compaction. This means that the raw subsystem capacities quoted below can be multiplied by 4 to get the effective capacity.
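Of those inline data reduction techniques, data compaction is perhaps the least familiar: several small compressed logical blocks are packed into a single physical block instead of each taking a block of their own. The Python sketch below shows the general packing idea, assuming 4KB physical blocks; the names are invented and this is not the WAFL implementation.

    # Illustrative inline compression plus data compaction into 4KB physical blocks.
    import zlib

    PHYSICAL_BLOCK = 4096

    def compact(logical_blocks):
        """Compress each logical block, then pack the results into physical blocks."""
        physical, current, used = [], [], 0
        for block in logical_blocks:
            chunk = zlib.compress(block)        # inline compression
            if current and used + len(chunk) > PHYSICAL_BLOCK:
                physical.append(current)        # current physical block is full
                current, used = [], 0
            current.append(chunk)               # compaction: share the physical block
            used += len(chunk)
        if current:
            physical.append(current)
        return physical

Effective capacity is then roughly the raw capacity multiplied by the overall reduction ratio, so 100TB raw at 4:1 gives around 400TB effective.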

Snapshots

Snapshots are arguably the most useful feature of Data ONTAP. It is possible to take up to 255 snapshots of a given volume and up to 255,000 per controller. Snapshots are visible in a .snapshot directory in UNIX, or a ~snapshot directory in Windows. They are normally read only, though it is possible to create writeable snapshots, called FlexClones or virtual clones.

Snapshots are based at disk block level and use move-after-write techniques, based on inode pointers.
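Because data is never updated in place, taking a snapshot only means freezing a copy of the block pointers; the snapshot keeps referencing the old blocks while the active file system writes new ones. The Python sketch below illustrates that pointer-based idea; it is a simplified illustration with invented names, not WAFL's actual on-disk structure.

    # Pointer-based snapshots over a no-overwrite block store (conceptual).
    class Volume:
        def __init__(self):
            self.blocks = {}        # physical block id -> data
            self.active = {}        # logical block -> physical block id
            self.snapshots = {}     # snapshot name -> frozen pointer map
            self._next = 0

        def write(self, lbn, data):
            self.blocks[self._next] = data      # never overwrite; use a new block
            self.active[lbn] = self._next
            self._next += 1

        def snapshot(self, name):
            self.snapshots[name] = dict(self.active)   # just copy the pointers

        def read(self, lbn, snap=None):
            pointers = self.snapshots[snap] if snap else self.active
            return self.blocks[pointers[lbn]]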

SnapMirror is an extension of Snapshot and is used to replicate snapshots between 2 filers. Cascading replication, that is, snapshots of snapshots, is also possible. Snapshots can be combined with SnapVault software to get full backup and recovery capability.

SyncMirror duplicates data at RAID group, aggregate or traditional volume level between two filers. This can be extended with a MetroCluster option to provide a geo-cluster or active/active cluster between two sites up to 100 km apart.

SnapLock provides WORM (Write Once Read Many) functionality for compliance purposes. Records are given a retention period, and a volume cannot be deleted or altered until all those records have expired. A full 'Compliance' mode makes this rule absolute, while 'Enterprise' mode lets an administrator with root access override the restriction.
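A minimal sketch of that retention rule is shown below: deletion is only allowed once the retention period has expired, except that 'enterprise' mode permits a privileged override. The function and field names are invented for illustration and are not the SnapLock API.

    # Conceptual WORM retention check for 'compliance' and 'enterprise' modes.
    from datetime import datetime, timezone

    def may_delete(record_expiry, mode, admin_override=False):
        if datetime.now(timezone.utc) >= record_expiry:
            return True                 # retention period has expired
        if mode == "enterprise" and admin_override:
            return True                 # root/administrator override is allowed
        return False                    # compliance mode: never before expiry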

Models

The NetApp models are grouped into 3 series: All-Flash, Hybrid and Object stores. Detailed and up to date specifications can be found on the NetApp web site, but in general terms, the differences between the models are shown below. Each model uses in-line data reduction, which increases the raw capacity by a factor of 5-10. Data updates use redirect-on-write techniques and all have Cloud connectivity for data archiving. Replication can be provided using MetroCluster (synchronous) or SnapMirror (asynchronous) and these can be combined into a three site configuration. The all-Flash and Hybrid models come in HA pairs and more pairs can be added to form a scale out cluster. It is possible to combine all-Flash and Hybrid models in the same cluster.

All Flash models:
AFF A800 (12 HA pairs), 316PB max capacity; connectivity: NVMe/FC, FC, iSCSI, NFS, pNFS, CIFS/SMB
AFF A700 (12 HA pairs), 702PB max capacity; connectivity: FC, iSCSI, NFS, pNFS, CIFS/SMB
AFF A200 (12 HA pairs), 193PB max capacity; connectivity: FC, iSCSI, NFS, pNFS, CIFS/SMB
Hybrid models:
FAS9000, 176PB max capacity, 1TB max cache; connectivity: 12Gb SAS, 40GbE, 32Gb FC, 10GbE
FAS2650, 1.243PB max capacity, 64GB max cache; connectivity: FC, FCoE, iSCSI, NFS, pNFS, CIFS/SMB



Storage Subsystem Features table

This first table is a simplistic attempt to contrast some of the all-flash subsystems from the traditional vendors, and one new one. It's difficult to get meaningful comparisons yet, as some of these subsystems are targeted at different applications, so this should be considered an indication of what is available. NVMe systems have been selected where possible, and the EMC PowerMax 8000 is the only one here with FICON (and therefore Mainframe) support, but there are other all-flash mainframe systems out there.
The HP StoreServ 9000 is the only SAS/SSD system on the list. HP do provide NVMe storage for servers, so if they do not have an NVMe subsystem available now, then doubtless they have one in the pipeline.

All Flash Subsystems

Vendor and device: Pure Storage //X90; EMC PowerMax 8000; HDS F1500; HP 3PAR StoreServ 20000; IBM FlashSystem 9200R; NetApp AFF A700

Flash Disk Types
Pure Storage: NVMe
EMC: NVMe
HDS: NVMe
HP: SCM, HPE Memory Driven Flash
IBM: NVMe, SAS, SCM
NetApp: NVMe

Capacity (how much data can you cram into the box? Can be quoted as 'raw' capacity, 'usable' capacity once RAID overhead is calculated, and 'effective' capacity after compression. 'PiB' is multiples of 1024, PB is multiples of 1000.)
Pure Storage: 878TB native, 3.3 PB effective
EMC: 4 PB effective
HDS: 8.1PB FMD, 34.6PB SSD
HP: 6 PB raw, 15 PB usable
IBM: up to 32 PB usable on a 4-way cluster
NetApp: 702.7 PB (623.8 PiB)

Internal Connectivity (see the previous page for details of disk connectivity)
All six systems use NVMe internally.

External Connectivity (what kind of cables you can plug into the box. A good box will support a mixture of protocols.)
Pure Storage: 16/32 Gb/s FC, 10/25/40 Gb/s Ethernet, 10 Gb/s NVMe/RoCE
EMC: 32 Gb/s FC, 10 Gb/s Ethernet (iSCSI), 16 Gb/s FICON
HDS: 176 FC, 176 FICON, 176 FCoE, 88 iSCSI
HP: 10x32 Gb/s FC, 20x16 Gb/s FC, 10 GbE (iSCSI), 10 Gb/s Ethernet (FCoE)
IBM: 24x16Gb FC, 12x25GbE, 8x10GbE
NetApp: NVMe/FC, FC, FCoE, iSCSI, NFS, pNFS, SMB

Hybrid subsystems

Indications now are that while manufacturers still supply hybrid systems, they are pushed away into a corner while the vendors concentrate on all Flash systems. This in turn suggests that if a hybrid meets your needs, you ought to be able to negotiate a really good deal on one. Just don't forget to tie down the maintenance side of the contract.
The various suppliers of hybrid flash/HDD enterprise disks are contrasted below. For each factor, a short note explains why it might be important, followed by the facts for each vendor, which were correct at time of writing, April 2020. However I'd advise you to check with your salesperson for up to date details.

Vendor and device: IBM DS8886; EMC V-MAX 400K; HDS VSP G1500; HP XP8; NetApp FAS9000

Subsystem Capacities

Maximum, and maximum effective capacity (how much data can you cram into the box? Can be quoted as 'raw' capacity, 'usable' capacity once RAID overhead is calculated, and 'effective' capacity after compression.)
IBM: 5.87 PB HDD SAS disks and 614 TB Flash
EMC: usable capacity depends on RAID configuration, but is up to 4 PB
HDS: 14 PB FMD, 19.7 PB SSD, 287 PB external storage
HP: 69 PB raw, 60 PB usable, 255 PB external storage
NetApp: NAS, 14.7 PB per HA pair, max 176 PB with 12 pairs; SAN, 7.4 PB per HA pair, max 88 PB with 12 pairs

Cache size (in theory, the bigger the cache, the better the performance, as you will get a better read-hit ratio, and big writes should not flood the cache. If the cache is segmented, it is more resilient, and has more data paths through it.)
IBM: 2 TB
EMC: 16 TB
HDS: 6 TB
HP: 6 TB
NetApp: 1TB - 12TB with 12 HA pairs

Disk types

Flash Disk support (how much flash capacity can be supplied)
IBM: 200, 400, 800, 1,600 GB flash drives
EMC: 3.5" SAS drives: 800 GB, 1.6 TB; 2.5" SAS drives: as above plus 960 GB, 1.92 TB
HDS: 14-3500GB FMD, 30-1,900 GB flash drives
HP: 14-3500GB FMD, 30-1,900 GB flash drives
NetApp: 960 GB + 4 TB, 960 GB + 8 TB, 960 GB + 10 TB

RAID levels supported (see the RAID section for details)
IBM: 5, 6, 10; RAID 5 is not supported for drives bigger than 1TB
EMC: RAID 1
HDS: 1, 5, 6
HP: 1, 5, 6
NetApp: 4, 6

Connections and Connectivity

External Connectivity (how many external cables can you connect to the box, and how fast do they run. Numbers quoted are maximum for each type, and if the maximum is installed then that may mean no other port types can be installed. NetApp is for the 24 node NAS model.)
IBM: 4 and 8-port 8 Gbps or 4-port 16 Gbps Fibre Channel/IBM FICON to a max of 128 ports
EMC: 128 x 10 Gb/s SRDF; max 256 x 8/16 Gb/s combination of FC, FICON, FCoE, iSCSI
HDS: 192 x 16/32 Gb/s FC, 176 x FICON, 4 x 10 Gb/s iSCSI, 192 x FCoE
HP: 192 x 16/32 Gb/s Fibre Channel, 192 x 16 Gb FICON, 192 x 10 Gb FCoE, 96 x 10 Gb iSCSI
NetApp: 12 Gb SAS, 40 GbE, 32 Gb FC, 10 GbE

Protocol Support (what kind of cables you can plug into the box. A good box will support a mixture of protocols.)
IBM: FICON, Fibre Channel
EMC: Fibre Channel, GbE, iSCSI, FCoE, FICON, SRDF
HDS: NFS, SMB, FTP, iSCSI, HTTP to Cloud
HP: FC, FICON, FCoE, iSCSI, HTTP to Cloud
NetApp: FC, FCoE, iSCSI, NFS, pNFS, CIFS/SMB

Disk Connectivity (see the previous page for details of disk connectivity)
IBM: PCI-3 connection to an 8 Gbps FCAL backbone
EMC: PCIe Gen 3 to 6Gb/s 2 port SAS drives
HDS: NVMe, 6Gb/sec SAS
HP: NVMe, 6Gb/sec SAS
NetApp: 6Gb / 12Gb SAS

Availability features

Remote copy (do you mirror data between two sites? If so you need this. The remote mirroring section has more details.)
IBM: Global Mirror, asynchronous; Metro Mirror (PPRC), synchronous; 3 site MGM also supported
EMC: synchronous (SRDF/S) and asynchronous (SRDF/A) data replication between subsystems. SRDF/DM will migrate data between subsystems. SRDF/AR works with TimeFinder to create remote data replicas. SRDF products are all EMC to EMC. SRDF can emulate Metro Mirror and Global Mirror.
HDS: Hitachi TrueCopy, PPRC compatible and synchronous; Hitachi Universal Replicator, asynchronous copy
HP: HPE Recovery Manager, Storageworks replication
NetApp: MetroCluster (synchronous), SnapMirror (asynchronous), 3 site solution possible

Instant copy ('Instant Copy' of volumes or datasets. Can be used for instant backups, or to create test data. Some implementations require a complete new disk, and so double the storage. Some implementations work on pointers, and just need a little more storage.)
IBM: Flashcopy at volume and dataset level
EMC: TimeFinder at volume or dataset level. The BCV version requires a complete volume be supplied, the newer 'snap' version just uses pointers. EMC Compatible Flash (FlashCopy)
HDS: Shadow Image at volume level; copy on write snapshot
HP: HPE Recovery Manager, Storageworks copy software
NetApp: SnapMirror

z/OS features

GDPS support for automated site failover (see the GDPS pages for details)
IBM: Yes
EMC: Yes, including HyperSwap
HDS: Yes
HP: Yes
NetApp: N/A

PAV and MA support (Parallel Access Volume and Multiple Allegiance. See the implementation tips section for details. Used to permit multi-tasking to logical devices.)
IBM: Yes
EMC: Yes, including HyperPAV support
HDS: Yes
HP: Yes, including HyperPAV support
NetApp: N/A

Price is usually very negotiable, but make sure that the vendor quotes for a complete solution with no hidden extras. Also, make sure that you get capped capacity upgrade prices, including increased software charges, as software is usually charged by capacity tiers.


