Enterprise Storage Selection

What is enterprise storage? One way to define it would be: 'Enterprise storage systems can manage large volumes of data as presented by a variety of different types of server, and also support large numbers of concurrent users'. These days, the two essential server types are Windows and Linux, especially when virtualised with a product like VMware. z/OS and UNIX are still here and are essential for many companies too.

The data storage environment has changed dramatically over the last few years, with Flash devices replacing spinning magnetic disk. Storage virtualisation can make it difficult to figure out exactly what sort of storage you are actually accessing, while NVMe is replacing the older connectivity protocols inside the storage devices. Then along came Storage Class Memory (SCM): faster and more expensive than Flash, it was just a proposal a couple of years ago, but it is here now and the big vendors are adopting it as a fast cache in front of their Flash drives.
The Cloud seems to be changing everything too. It is said that no-one is building data centers anymore (except Cloud providers), and if you don't build data centers you don't need disk subsystems. Of course, your Cloud provider will be hosting your data on disks of some kind, but one of the Cloud benefits is that you don't worry about that, as long as you can store and access your data. There are two dimensions to Cloud support: if you are a Cloud provider, you want your subsystem to support partitioning, so you can isolate your customers' data; as an end customer, you might want your subsystems to support the Cloud as an ultimate archive tier.

There are some new players coming onto the field and they are definitely ones to watch. Gartner rates Pure Storage as a leading Flash storage provider, and Huawei are entering the scene with their ES3000 V5 NVMe SSD disk. Non-Volatile Memory Express (NVMe) is providing a step change in performance, and all the storage vendors are updating their product ranges to include NVMe as a storage tier, or even producing all-NVMe storage arrays. NVMe runs over PCI Express, and NVMe over Fabrics extends it over RDMA and Fibre Channel, supporting much higher bandwidths than SATA or SAS.
The links below will take you to discussions of the enterprise products from the six big enterprise vendors: Pure Storage, EMC, HDS, IBM, HP and NetApp. The final link is to a table that compares some of their products.

Pure Storage

History

Pure Storage is the new kid on the block, founded in 2009; they started releasing products in 2011. They initially produced one of the first all-flash arrays for datacentres, the FlashArray 300 series, then added encryption, redundancy, and the ability to replace components like flash drives or RAM modules.
In 2015, Pure Storage introduced new hardware that used 3D-NAND, and later added artificial intelligence software that automates the configuration of the storage array.

Architecture

The latest models are the Pure FlashArray//X, which was designed from scratch to work with flash storage, using the NVMe protocol to deliver very good performance. Pure uses a log structured file architecture to allocate the data. Data is initially held in NV-RAM cache, where it is deduplicated and compressed. Dedup and compression are not optional: they always happen. Pure uses a lookup data unit size of 4KB, which is smaller than other implementations. The lookup data unit alignment is 512B, and for data comparison Pure uses the matched 4KB data unit as an anchor point, then extends the match comparison before and after the anchor point in 512B increments until a unique segment is found. Pure claim a 5:1 reduction ratio with compression and deduplication only, and 10:1 with thin provisioning.
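The anchor-and-extend idea can be illustrated with a short sketch. This is conceptual Python only, not Pure's implementation; the helper names and the use of SHA-256 hashes are assumptions for illustration.

    import hashlib

    LOOKUP_UNIT = 4096   # 4KB lookup data unit
    ALIGN = 512          # 512B alignment / extension increment

    def make_index(stored):
        """Index every 512B-aligned 4KB unit of already-stored data by its hash."""
        return {hashlib.sha256(stored[i:i + LOOKUP_UNIT]).digest(): i
                for i in range(0, len(stored) - LOOKUP_UNIT + 1, ALIGN)}

    def dedup_match(stored, index, incoming, off):
        """Find a 4KB anchor match for incoming[off:], then extend it in 512B steps."""
        anchor = index.get(hashlib.sha256(incoming[off:off + LOOKUP_UNIT]).digest())
        if anchor is None:
            return None                              # no duplicate found
        s_start, i_start = anchor, off
        while s_start >= ALIGN and i_start >= ALIGN and \
              stored[s_start - ALIGN:s_start] == incoming[i_start - ALIGN:i_start]:
            s_start -= ALIGN; i_start -= ALIGN       # extend the match backwards
        s_end, i_end = anchor + LOOKUP_UNIT, off + LOOKUP_UNIT
        while s_end + ALIGN <= len(stored) and i_end + ALIGN <= len(incoming) and \
              stored[s_end:s_end + ALIGN] == incoming[i_end:i_end + ALIGN]:
            s_end += ALIGN; i_end += ALIGN           # extend the match forwards
        return (s_start, s_end), (i_start, i_end)    # duplicate span in store and input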

Once Pure works out the actual data to be stored, it splits the data into segments and uses a flash translation layer to map logical addresses to physical locations. The segments are distributed over the flash devices with RAID redundancy, but the original data is never overwritten; new segments are written to unused space. The RAID system used is called RAID 3D, and it calculates parity in two directions. Pure uses QLC NVMe Flash storage, and NVMe internal connectivity for speed.
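A minimal sketch of the general log-structured idea (illustrative Python, not Pure's code) shows why data is never overwritten in place: writes always append to unused space, and only the logical-to-physical map changes.

    class LogStructuredStore:
        """Conceptual log-structured allocation with a translation layer.
        New segments always go to unused space; the map is updated and old data
        is never overwritten in place (stale segments are reclaimed later)."""

        def __init__(self, capacity_segments):
            self.log = [None] * capacity_segments    # physical segment slots
            self.head = 0                            # next free physical location
            self.l2p = {}                            # logical address -> physical segment

        def write(self, logical_addr, segment):
            if self.head >= len(self.log):
                raise IOError("log full - garbage collection needed")
            self.log[self.head] = segment            # append to free space only
            self.l2p[logical_addr] = self.head       # remap the logical address
            self.head += 1                           # the old segment is now stale

        def read(self, logical_addr):
            return self.log[self.l2p[logical_addr]]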

Models

The //X models are:
//X10,  22 TB raw,    73 TB effective
//X20,  94 TB raw,   314 TB effective
//X50, 185 TB raw,   663 TB effective
//X70, 662 TB raw, 2,286 TB effective
//X90, 878 TB raw, 3,300 TB effective
Thin provisioning, encryption and snapshots are supported.
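Dividing the quoted effective figures by the raw figures gives an implied data-reduction ratio of roughly 3.3:1 to 3.8:1 for these models; a quick check in Python, using the figures above:

    # Implied effective:raw ratios from the quoted //X capacities (TB)
    models = {"//X10": (22, 73), "//X20": (94, 314), "//X50": (185, 663),
              "//X70": (662, 2286), "//X90": (878, 3300)}
    for name, (raw, effective) in models.items():
        print(name, round(effective / raw, 1))   # prints ratios between 3.3 and 3.8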

Software

Pure1 storage management software is a SaaS-based provisioning, management, and monitoring solution that integrates with Pure's proactive support. It uses machine learning and predictive analytics to advise customers on optimisation and what-if situations, including capacity planning and performance simulation.
Pure1 capabilities are built on a global predictive intelligence engine called Pure1 Meta, which leverages the accumulated data from the thousands of FlashArrays currently deployed. Pure1 Meta is the AI engine within Pure1 that provides the intelligence to manage, automate, and proactively support the FlashArray. It collects more than a trillion telemetry data points per day. Part of the intelligence of Pure1 comes from its ability to recognise usage patterns: Pure1 identifies known patterns that may affect the optimal operation of FlashArrays, and notifies other FlashArrays with similar usage patterns of the concern. This way, customers are aware of potential impacts to their arrays and can take preventive measures.
Pure1 is browser-based so you can manage, monitor, and analyze your storage from anywhere with any device, including mobile devices.

Pure supports remote clustering, with ActiveCluster. This allows you to link two different data center sites up to 150 miles apart in an active-active stretch cluster with transparent failover, zero recovery point objective (RPO) and zero recovery time objective (RTO).
Pure's ActiveCluster solution includes Pure1 Cloud Mediator, a software-based third entity that monitors the link between the two sites and decides which site becomes the primary site should the link fail. Pure1 Cloud Mediator runs in the Cloud, so no extra software or hardware, with its associated maintenance, is needed. It can be used to provide rack-level active clustering inside a data center as well as linking separate data centers.
A remote third data center can also be added for asynchronous replication, which is accessible and live for replication from both of the primary arrays in the ActiveCluster.
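The mediator's role described above can be pictured with a small sketch. This is a generic quorum-witness illustration, not Pure's protocol; the class and method names are invented for the example.

    import threading

    class Mediator:
        """Generic quorum witness: after a replication link failure, the first site
        to reach the mediator keeps serving I/O; the other stands down (no split brain)."""
        def __init__(self):
            self._lock = threading.Lock()
            self._winner = None

        def request_to_continue(self, site_name):
            with self._lock:
                if self._winner is None:
                    self._winner = site_name          # first site to ask wins
                return self._winner == site_name

    mediator = Mediator()
    # On loss of the inter-site link, each array races to the mediator:
    print(mediator.request_to_continue("site-A"))     # True  - site A continues
    print(mediator.request_to_continue("site-B"))     # False - site B stops serving I/O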

VMware support

VMware is supported

z/OS support

z/OS is not supported

Dell EMC

History

EMC started out producing cache memory and developed solid state disks, memory devices that emulated spinning disks, but with much faster performance. These solid state disks were usually re-badged and sold by StorageTek.

Around 1988, EMC entered the storage market in its own name, selling Symmetrix disk subsystems with what at that time was a very large 256MB cache, fronting 24GB of RAID 1 storage. Their Mosaic architecture was the first to map IBM CKD mainframe disk format onto standard FBA open systems backend disks, and as such, could claim to be the first big user of storage virtualisation. In those days, EMC developed a reputation for delivering the best performance, but at a price.

In 2008, EMC became the first vendor to use flash storage in an enterprise subsystem, for high performance applications. EMC introduced their latest addition to the Symmetrix range, the V-MAX, in April 2009.
In September 2016, Dell bought out EMC and the company is now called Dell EMC.

Architecture

The PowerMax series is all Flash and SCM, with faster CPUs and NVMe internal connectivity. PowerMax comes in two models, the 2000 and the 8000. The directors and cache are combined into a PowerMax engine. Each PowerMax engine contains two controllers, and each controller contains host and disk ports, a CPU complex, cache memory and a Virtual Matrix interface. The building blocks of the PowerMax are called PowerBricks, controlled by the PowerMax OS software, which delivers end-to-end encryption of data from the host to the PowerMax storage media.
The 2000 can have one or two PowerBricks and the 8000 can have up to eight. According to EMC, the faster CPUs mean that the maximum speed jumps to 15 million IOPS in the PowerMax 8000, with read response times of under 100 microseconds. These devices are the first in the range to use inline deduplication and compression, which EMC say will deliver up to 5:1 data reduction. The maximum effective capacity is 1.2PB and 4.5PB respectively. EMC also claims that the combination of NVMe and SCM will improve response times by 50 per cent.
The PowerMax architecture is described in more detail in the PowerMax Architecture section.

One interesting feature for hybrid systems is the storage tiering, based on Tier 0 SCM and Tier 1 Flash storage.
EMC FAST, or "fully automated storage tiering", checks for data usage patterns on files and moves them as required between SCM and flash drives to balance cost effectiveness against performance requirements. Supported subsystems include the V-Max, the Clariion CX4 and the NS unified system.
FAST can also be configured manually to move application data to higher performing disk on selected days of the month or year. This could be useful for a monthly payroll application, for example.
EMC introduced FAST2 in August 2010, which added true LUN tiering and can manage data at block level.
The tiering concept has been extended further by adding a 'Cloud' layer, the EMC Cloud Array.
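The general idea behind this kind of automated tiering can be sketched in a few lines. This is illustrative only, not the FAST algorithm; the extent names and the simple heat metric are invented.

    def retier(extents, scm_slots):
        """Very simplified tiering pass: keep the busiest extents on SCM (tier 0)
        and the rest on Flash (tier 1). 'extents' maps extent name -> recent I/O count."""
        by_heat = sorted(extents, key=extents.get, reverse=True)
        return {ext: ("SCM" if rank < scm_slots else "Flash")
                for rank, ext in enumerate(by_heat)}

    # Example: two SCM slots; the payroll extents become hot at month end
    print(retier({"payroll-1": 900, "payroll-2": 850, "logs-1": 40, "archive-1": 3}, 2))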

Models

The PowerMax series is all Flash and SCM, with two models, the PowerMax 2000 and 8000. Capacities are quoted as 'effective', which assumes a 5:1 increase over usable capacity after data reduction. Storage is supplied in 'PowerBricks', each of which includes a PowerMax engine and 53 TB of base capacity. Flash Capacity Packs let you scale up in 13 TB increments.
The 2000 supports up to two PowerBricks and has an effective capacity of 1.2PB.
The 8000 supports up to eight PowerBricks, with an effective capacity of 4.5PB.
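As a rough worked example using the quoted figures, effective capacity is simply usable capacity multiplied by the assumed 5:1 reduction; the number of Flash Capacity Packs used below to approach the 1.2 PB maximum is an illustrative assumption.

    BASE_BRICK_TB = 53    # usable capacity of a base PowerBrick (quoted above)
    PACK_TB = 13          # Flash Capacity Pack increment (quoted above)
    REDUCTION = 5         # assumed 5:1 data reduction

    def effective_tb(bricks, packs):
        """Effective capacity = usable capacity x the assumed reduction ratio."""
        return (bricks * BASE_BRICK_TB + packs * PACK_TB) * REDUCTION

    print(effective_tb(1, 0))    # 265 TB effective for a single base PowerBrick
    print(effective_tb(2, 10))   # 1180 TB - close to the 1.2 PB quoted for a full 2000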

Software

DMX software includes the EMC Symmetrix Management Console for defining and provisioning volumes and managing replication. The TimeFinder products are used for in-subsystem and point-in-time (PIT) replication, and SRDF for remote replication. SRDF can run in full PPRC compatibility mode, and can also replicate to three sites in a star configuration.
Enginuity 5784 adds new features including SRDF/EDP (Extended Distance Protection) which is similar to cascaded SRDF except that it uses a DLDEV (DiskLess Device) for the intermediate hop.
EMC was lacking in z/OS support for some years, but they have now licensed PAV and MA software from IBM, and have provided z/OS Storage Manager to manage mainframe volumes, datasets and replication.

GDPS support is provided, except for GDPS/GM or a three site GDPS/MGM solution.

VMware support

EMC VMAX3 and EMC VMAX support VMware and VMware Virtual Volumes, but check the VMware site for an up-to-date list of VMware product names, supported devices and firmware levels.

z/OS support

EMC historically had an issue with supporting z/OS features like FlashCopy and PPRC mirroring, as the equivalent EMC features were introduced earlier and were arguably (at least by EMC) better. This became a problem when GDPS came along: while TimeFinder and SRDF worked fine, they did not work with GDPS. GDPS manages remote mirroring and site failover, but it does much more than manage the storage; it also manages the failover of z/OS LPARs and applications. A lot of big sites use it and require that any disk purchase must be 100% GDPS compatible. EMC therefore licensed some of the IBM code to ensure good compatibility.

The EMC implementation of PPRC is called Symmetrix Compatible Peer and is built on SRDF/S code. Some minor differences are:
PPRC needs Fibre Channel path definitions between each z/OS LCU. A DS8000 uses the WWN of each FC adapter to define the links, but the VMAX does not use WWNs; it uses the serial number. This means that in the GEOPLEX LINKS definition of the GDPS Geoparm, you need to specify the link protocol as 'E', then define the links with the serial number (this was how ESCON links were defined, hence EMC uses the 'E' protocol).
Symmetrix Compatible Peer does not support cascaded PPRC, PPRC loopback configurations or Open Systems FBA disks.
For GDPS FREEZE to work correctly, the GDPS / PPRC CGROUP definitions must exactly match the SRDF GROUP definitions and link definitions in the VMAX config file.
If you use HyperSwap and FAST tiering, then the FAST performance stats are copied over when a HyperSwap is invoked, so disk performance will be maintained.
GDPS requires small dedicated utility volumes on each LCU to manage the mirroring. These volumes should not be confused with EMC GDDR Gatekeeper volumes, they have completely different purposes.

The VMAX will also support XRC, which means that it will support two sites synchronously mirrored with PPRC, with a third site asynchronously mirrored with XRC.


HDS

History

HDS, now Hitachi Vantara, was always known as the company that manufactured disks that were exactly compatible with IBM, but worked a little faster and cost a little less. HDS broke that mould when they introduced the 'Lightning' range of subsystems in 2000, which was a merging of telephony cross-bar technology and storage subsystem technology. They extended and developed that architecture further with the USP (Universal Storage Platform), released in September 2004.
In September 2010 HDS released the Virtual Storage Platform (VSP), a purpose-built subsystem that provides automated tiering between flash and spinning disk drives. This model was augmented in late 2015 with the VSP F range, all flash systems.

VSP 5000 series

The architecture of the VSP 5000 series is built on controller blocks; each controller block has two nodes, each with two controllers. High availability is achieved as resources can fail over across node controllers, across nodes and across controller blocks, with quad redundancy.
The base controller block contains a pair of node interconnection switches, and these provide the backbone of the new Hitachi Accelerated Fabric. These switches create data paths between all the controllers in a system, which enables performance and capacity to scale up and out for efficient use and sharing of resources across the system. It also allows the tiering of data across controller blocks for improved price-performance. Each link is a four-lane PCI Express Gen3 connection (4 GB/s). Each controller has two fabric acceleration modules, each with two ports, so four interconnect paths link the controller to four separate infrastructure switch ports.
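Taking the quoted figures at face value, each controller therefore has four paths into the fabric at 4 GB/s each; a back-of-envelope check:

    LINK_GB_S = 4              # PCI Express Gen3 x4 link, as quoted
    PATHS_PER_CONTROLLER = 4   # two fabric acceleration modules x two ports each

    print(PATHS_PER_CONTROLLER * LINK_GB_S, "GB/s of interconnect bandwidth per controller")   # 16 GB/s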

Each controller block can also include a media chassis, for storage device attachment.
The media chassis supports SAS SSD, PCIe NVMe SSD, SCM media or Hitachi's FMD flash modules. Unusually, the media chassis also supports standard HDD disk drives. Each media chassis is connected to the two nodes in the same controller block for availability, scale-up capacity and performance growth.

The VSP 5500 can grow to three controller blocks, adding nodes in multiples of two, and can support any combination of SAS, NVMe or diskless controller blocks. The 2-node VSP 5100 model can be upgraded non-disruptively to the 6-node VSP 5500.

The power of the fabric acceleration modules comes from the field-programmable gate arrays (FPGAs) that are embedded in the interconnect on the controllers. The controllers can offload processing functionality to the FPGAs, so SVOS RF 9 can make use of flash-optimised code paths. Hitachi claims that this means fewer CPU cycles are used to deliver more I/O than any other system in the storage market, with a peak of 21 million IOPS, and that applications needing the fastest possible response times for retrieving critical data can see latencies as low as 70 microseconds. Hitachi attributes these performance milestones to the Accelerated Fabric.

Models

Hitachi Virtual Storage Platform 5100
Hitachi Virtual Storage Platform 5100H
Hitachi Virtual Storage Platform 5500
Hitachi Virtual Storage Platform 5500H
The H models are hybrids and use 2.4TB 10K and 14TB 7.2K SAS HDD drives for the lowest tier.

Software

Hitachi High Availability Manager provides non-disruptive failover between VSP and USP systems, giving instant data access at the remote site if the primary site goes down. This is aimed at non-mainframe SAN based applications.
Mainframe availability uses TrueCopy synchronous remote mirroring and Universal Replicator, with full support for GDPS.

Storage management is provided by the Hitachi Storage Command suite.

VMware support

The VSP systems support VMware Virtual Volumes through the Hitachi Storage Provider for VMware vCenter product.



IBM

History

The original IBM hard drive, the RAMAC 350, was manufactured in 1956, used 24 inch (609mm) platters, and held 5 MB. The subsystem also weighed about 1 ton. That was a bit before my time, but when I joined IT, the storage market was dominated by IBM, the mainframe was king, and the standard disk type was the IBM 3380 model K, a native CKD device which contained 1.89 GB. IBM lost its market leader position to EMC sometime in the 1990s. CKD, or Count Key Data, was based around accessing physical tracks on spinning disks. CKD is now virtualised on FBA disks.

IBM introduced the DSxxxx series in late 2004 in response to competition from EMC and HDS. They updated their internal bus architecture to increase the internal transfer speed by more than 200% over the ESxxx series, and also abandoned their SSA disk architecture for a switched FC-AL standard. The DS8800 series is essentially a follow-on from the ESS disk series, and re-uses much of the ESS microcode.
IBM is now going all-Flash, with the DS8900 range and the FlashSystem 9200 combining Flash with SCM.

DS8900 Architecture

The DS8900 is an all-flash appliance that comes in two basic models, the 8910F and the 8950F. When an expansion frame is added to the 8950F, it can hold up to 2TB of cache, and up to 8PB of usable capacity when fully configured. The flash devices come in two flavours: a high performance device that holds up to 800GB, and a high capacity device that holds up to 3.84 TB.
It uses Power9 processors and supports up to 64 ports delivering 16 Gb/s Fibre Channel.

The FlashSystem 9200 series is also all Flash, but it uses the newer, faster devices. The Flash modules can be a mixture of SCM, NVMe flash and SAS Flash, thus providing three potential performance tiers. The capacity is quoted as 32TB effective with a 1.5TB cache. External connectivity support is 16/32 Gb/s Fibre Channel and 10Gb/s Ethernet for iSCSI connections.

DSxxxx Software

The DS software includes FlashCopy for internal subsystem point-in-time data copies, IBM Total Storage DS Manager for configuration, and Metro/Global Mirror for continuous inter-subsystem data replication.

The older ESS subsystems supported two kinds of z/OS FlashCopy: a basic version that just copied disks, and an advanced version that copied disks and files. DS only supports the advanced FlashCopy.
FlashCopy versions include:
Multi-relationship, which supports up to 12 targets;
Incremental, which can refresh an old FlashCopy to bring the data to a new point-in-time without needing to recopy unchanged data;
Remote Mirror FlashCopy, which permits dataset flash operations to a primary mirrored disk;
Inband FlashCopy commands, which permit the transmission of FlashCopy commands to a remote site through a Metro Mirror link;
Consistency Groups, which flash a group of volumes to a consistent point-in-time. A consistency group can span multiple disk subsystems.

Remote mirroring versions include:
Metro Mirror, synchronous remote mirroring up to 300km, formerly PPRC;
Global Copy, asynchronous remote data copy intended for data migration or backup, formerly PPRC-XD;
Global Mirror, asynchronous remote mirroring;
Metro/Global Mirror, three site remote replication, two sites being synchronous and the third asynchronous;
z/OS Global Mirror, z/OS host based asynchronous remote mirroring, formerly called XRC;
z/OS Metro/Global Mirror, three site remote replication, two sites being synchronous and quite close together, the third asynchronous and remote.

VMware support

The DS8880 supports the VMware vSphere Web Client, but not VMware Virtual Volumes. However, this may change, so consult the IBM documentation for an up-to-date position. (The IBM FlashSystem V9000 does support VMware Virtual Volumes.)



HP

History

HP entered the disk market with the HP 7935. They also introduced the first ever commercially produced 1.3 inch form factor hard drive in 1992, with a capacity of 20 MB. HP has long produced its own range of open systems disk storage, and resells a modified version of the Hitachi VSP, called the XP8, for high end and mainframe connectivity. HP announced the release of the Primera range in 2019, most likely intending it to eventually replace the 3PAR storage range.

HPE Primera 600 Storage

Controller nodes are central to the Primera 600 architecture. A single system is configured as a cluster of two or four controller nodes. The minimum system has two controller nodes, so the system will survive one node failure, but is expandable to four nodes for future growth.
The controllers are connected by a high-speed, full-mesh backplane to form an all-active cluster. In every HPE Primera storage system, each controller node has at least one dedicated link to each of the other nodes, which results in a single, highly available system with all the storage accessible from any controller node. This low-latency full-mesh backplane enables a system-wide global unified cache which is fault tolerant. The links use dedicated PCIe Gen 3 connections and run full-duplex at 8 GiB/s, driven by specialised ASICs designed to drive the data at NVMe speeds. A Primera storage system with four nodes has 16 ASICs, totaling 128 GiB/s of peak interconnect bandwidth.
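The 128 GiB/s figure follows directly from the quoted numbers, assuming four ASICs per node each driving an 8 GiB/s link:

    NODES = 4
    ASICS_PER_NODE = 4     # the ASIC 'slices' per controller node
    LINK_GIB_S = 8         # full-duplex link speed per ASIC, as quoted

    total_asics = NODES * ASICS_PER_NODE                                   # 16 ASICs in a 4-node system
    print(total_asics * LINK_GIB_S, "GiB/s peak interconnect bandwidth")   # 128 GiB/s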

There are up to four ASICs, called slices, per node. Each ASIC is a high-performance engine and is more than just a data mover: they also have a dedicated hardware offload engine to accelerate RAID parity calculations, perform inline zero detection, calculate deduplication hashes and perform the CRC checks that validate the data held on the storage drives.
One ASIC slice is dedicated to internode communication, completing the full-mesh all-active architecture.

Each controller node may have one or more paths to hosts, either directly or over a SAN via 32 Gb/s or 16 Gb/s Fibre Channel links. As the controller nodes are clustered and each can see all the storage, the Primera presents hosts with a single storage system. This means that servers can access volumes over any host-connected port, even if the physical storage for the data is connected to a different controller node.
The controller nodes can have up to 12 host ports, 8 drive enclosure ports, 40 CPU cores, and 4 HPE Primera ASICs to facilitate the massive parallelism necessary. The controller nodes support Fibre Channel (FC), iSCSI, or NVMe-oF protocols.
As every volume can be accessed from any controller node, the Primera is able to use system-wide data striping to eliminate hotspots on volumes, and also on ports, cache and processors. The system-wide striping of data over the flash drives means that I/O patterns are uniform, spreading wear evenly across the entire system.
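A toy round-robin placement illustrates the striping idea. This is a conceptual sketch, not HPE's layout algorithm; the function and its parameters are invented.

    def place_chunks(volume_id, chunk_count, drive_count):
        """Round-robin sketch: successive chunks of a volume land on successive drives,
        so I/O load and flash wear spread across the whole system rather than one drive."""
        return {chunk: (volume_id + chunk) % drive_count for chunk in range(chunk_count)}

    # Two volumes of eight chunks striped over six drives - every drive gets some chunks
    print(place_chunks(0, 8, 6))   # {0: 0, 1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 0, 7: 1}
    print(place_chunks(1, 8, 6))   # {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 0, 6: 1, 7: 2}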

Models

The Primera comprises three models: HPE Primera 630, HPE Primera 650, and HPE Primera 670. Each model is available as an all-flash version (A models) or a converged flash version (C models). Maximum cache size is 4 TiB. The raw and effective capacities of the different models are:
Primera A630: 250 TiB raw / 700 TiB effective
Primera A650: 800 TiB raw / 2,200 TiB effective
Primera A670: 1,600 TiB raw / 4,900 TiB effective
Primera C630: 250 TiB raw (SSD only) / 700 TiB effective (SSD only) / 750 TiB (HDD and SSD)
Primera C650: 800 TiB raw (SSD only) / 2,200 TiB effective (SSD only) / 2,000 TiB (HDD and SSD)
Primera C670: 1,600 TiB raw (SSD only) / 4,900 TiB effective (SSD only) / 4,000 TiB (HDD and SSD)
HP defines effective capacity by assuming a 4:1 estimated data compaction rate.


NetApp

History

NetApp was founded in 1992 and started out producing NetApp filers. A filer, or NAS device, has a built-in operating system that owns a filesystem and presents data as files and directories over the network. Contrast this with the more traditional block storage approach used by IBM and EMC, where data is presented as blocks over a SAN, and the operating system on the server has to make sense of it and carve it up into filespaces.

NetApp use their own operating system to manage the filers, called Data ONTAP, which has progressively developed over the years, partly by a series of acquisitions. In June 2008 NetApp announced the Performance Acceleration Module (or PAM) to optimize the performance of workloads which carry out intensive random reads.
Data ONTAP 8.0, released at the end of 2010, introduced two major features: 64-bit support and the integration of the Spinnaker code to allow clustering of NetApp filers.
According to an IDC report in 2010, at that time NetApp was the third biggest company in the network storage industry, behind EMC and IBM.
NetApp released the EF550 Flash array device in 2013. This is an all flash storage array, with obvious performance benefits. The current (2020) all flash array, the AFF A800 2-node cluster, will hold 3.16PB raw, on NVMe SSD drives.

NetApp is positioning itself as the company for hybrid Clouds. Their products support public Clouds, including those supplied by Alibaba, Amazon, Google, IBM and Microsoft Azure. They also support private clouds with NetApp StorageGRID. The Cloud support allows you to automatically tier cold data to the cloud with FabricPool, and to back up and recover Cloud data with cloud-resident NetApp Data Availability Services.

Architecture

File system

Data ONTAP is an operating system, and it contains a file system called Write Anywhere File Layout (WAFL) which is proprietary to NetApp. When WAFL presents data as files, it can act as either NFS or CIFS, so it can present data to both UNIX and Windows, and share that data between them.
All Flash systems use FlashEssentials, a variant of WAFL that is optimised for Flash. It includes things like amalgamating writes to free blocks to maximise performance and increase the flash media life; a new random read I/O processing path that was designed from the ground up for flash; and inline data reduction technologies, including inline compression, inline deduplication, and inline data compaction. This means that the raw subsystem capacities quoted below can be multiplied by 4 to get the effective capacity.

Snapshots

Snapshots are arguably the most useful feature of Data ONTAP. It is possible to take up to 255 snapshots of a given volume and up to 255,000 per controller. Snapshots are visible in a .snapshot directory on UNIX or ~snapshot on Windows. They are normally read only, though it is possible to create writeable snapshots, called FlexClones or virtual clones.

Snapshots work at disk block level and use redirect-on-write techniques based on inode pointers, so existing blocks are never overwritten in place.
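The concept can be sketched in a few lines of Python. This is a generic redirect-on-write illustration, not WAFL itself; the class and field names are invented.

    class Volume:
        """Conceptual redirect-on-write volume: blocks are never updated in place,
        so a snapshot is just a frozen copy of the current block-pointer map."""
        def __init__(self):
            self.blocks = {}        # physical block id -> data
            self.active = {}        # file block number -> physical block id
            self.snapshots = {}
            self._next = 0

        def write(self, fbn, data):
            self.blocks[self._next] = data    # new data goes to a new block
            self.active[fbn] = self._next     # only the live pointer map changes
            self._next += 1

        def snapshot(self, name):
            self.snapshots[name] = dict(self.active)   # freeze the pointers, no data copy

        def read(self, fbn, snap=None):
            pointers = self.snapshots[snap] if snap else self.active
            return self.blocks[pointers[fbn]]

    vol = Volume()
    vol.write(0, "original")
    vol.snapshot("daily")
    vol.write(0, "updated")
    print(vol.read(0), vol.read(0, snap="daily"))   # updated original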

SnapMirror is an extension of Snapshot and is used to replicate snapshots between two filers. Cascading replication, that is, snapshots of snapshots, is also possible. Snapshots can be combined with SnapVault software to get full backup and recovery capability.

SyncMirror duplicates data at RAID group, aggregate or traditional volume level between two filers. This can be extended with a MetroCluster option to provide a geo-cluster or active/active cluster between two sites up to 100 km apart.

Snaplock provides WORM (Write Once Read Many) functionality for compliance purposes. Records are given a retention period, and then a volume cannot be deleted or altered until all those records have expired. A full 'Compliance' mode makes this rule absolute, and 'Enterprise' mode lets an administrator with root access override the restriction.
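The retention rule amounts to a simple check, sketched below. This is illustrative Python, not NetApp's implementation; the function name and arguments are assumptions.

    from datetime import datetime

    def can_delete_volume(record_expiry_dates, mode, admin_override=False):
        """WORM retention check (illustrative): deletion is only allowed once every
        record has passed its retention date, unless enterprise mode is overridden."""
        all_expired = all(expiry <= datetime.now() for expiry in record_expiry_dates)
        if mode == "compliance":
            return all_expired                      # the rule is absolute
        if mode == "enterprise":
            return all_expired or admin_override    # root-level override is permitted
        raise ValueError("unknown SnapLock mode")

    print(can_delete_volume([datetime(2030, 1, 1)], "compliance"))         # False
    print(can_delete_volume([datetime(2030, 1, 1)], "enterprise", True))   # True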

Models

The NetApp models are grouped into three series: All-Flash, Hybrid and Object stores. Detailed and up to date specifications can be found on the NetApp web site, but in general terms, the differences between the models are shown below. Each model uses in-line data reduction, which increases effective capacity to between 5 and 10 times the raw capacity. Data updates use redirect-on-write techniques, and all models have Cloud connectivity for data archiving. Replication can be provided using MetroCluster (synchronous) or SnapMirror (asynchronous), and these can be combined into a three site configuration. The all-Flash and Hybrid models come in HA pairs, and more pairs can be added to form a scale-out cluster. It is possible to combine all-Flash and Hybrid models in the same cluster.

All Flash subsystems:
Model                    Max Capacity   Maximum SSDs   Connectivity
AFF A800 (12 HA pairs)   316PB          2880           NVMe/FC, FC, iSCSI, NFS, pNFS, CIFS/SMB, Amazon S3
AFF A700 (12 HA pairs)   702PB          5760           FC, iSCSI, NFS, pNFS, CIFS/SMB, Amazon S3
AFF A400 (12 HA pairs)   703PB          5760           FC, iSCSI, NFS, pNFS, CIFS/SMB, Amazon S3
AFF A250 (12 HA pairs)   35PB           576            FC, iSCSI, NFS, pNFS, CIFS/SMB, Amazon S3



Storage Subsystem Features table

This first table is a simplistic attempt to contrast some of the all-flash subsystems from the traditional vendors, and one new one. It is difficult to get meaningful comparisons yet, as some of these subsystems are targeted at different applications, so this should be considered an indication of what is available. NVMe systems have been selected where possible. The EMC PowerMax 8000 and the Hitachi VSP are the only ones here with FICON (and therefore mainframe) support, but there are other all-flash mainframe systems out there.
The HP StoreServ 9000 is the only SAS/SSD system on the list. HP do provide NVMe storage for servers, so if they do not have an NVMe subsystem available now, then doubtless they have one in the pipeline.

All Flash Subsystems

Notes on the comparison:
Capacity - how much data you can cram into the box. This can be quoted as 'raw' capacity, 'usable' capacity once the RAID overhead is deducted, or 'effective' capacity after compression and deduplication. 'PiB' is multiples of 1024, 'PB' is multiples of 1000.
Internal connectivity - see the previous page for details of disk connectivity. All six systems use NVMe internally.
External connectivity - the kind of cables you can plug into the box. A good box will support a mixture of protocols.

Pure Storage //X90: NVMe Flash and SCM; 878 TB raw, 3.3 PB effective; 16/32 Gb/s FC, 10/25/40 Gb/s Ethernet, 10 Gb/s NVMe/RoCE
EMC PowerMax 8000: NVMe Flash and SCM; 4.5 PB effective; 32 Gb/s FC, 10 Gb/s Ethernet (iSCSI), 16 Gb/s FICON
HDS VSP 5500: NVMe Flash and SCM; 8,106 TB (FMD) or 4,356 TB (SSD) raw; 176 FC, 176 FICON, 176 FCoE and 88 iSCSI ports
HP Primera A670: NVMe Flash; 1.6 PiB raw, 4.9 PiB usable; 48 x 32 Gb/s or 16 Gb/s FC
IBM FlashSystem 9200R: NVMe Flash, SAS Flash and SCM; up to 32 PB usable on a 4-way cluster; 24 x 16 Gb/s FC, 12 x 25 GbE, 8 x 10 GbE
NetApp AFF A700: NVMe Flash; 702.7 PB (623.8 PiB); NVMe/FC, FC, FCoE, iSCSI, NFS, pNFS, SMB
