EMC VMAX ARCHITECTURE

The VMAX architecture builds on the older DMX architecture, but has some fundamental differences. A traditional storage subsystem consists of discrete components: host adaptors that connect to the outside world, disk adaptors and enclosures that hold the data, and a large memory cache to speed up access. A VMAX packages all of these items together into an 'engine', so the architecture is 'engine based'. Each VMAX engine contains two directors, and each director contains host and disk adaptors, a CPU complex, and cache memory. The engine also includes cooling fans and redundant power supplies. If more capacity is required, the VMAX can be upgraded by simply adding another engine, up to a maximum of eight, depending on the model. Engines are connected together by a Virtual Matrix, so each engine has redundant interfaces to the Dynamic Virtual Matrix, a dual InfiniBand fabric interconnect.
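
To make the building-block idea concrete, here is a minimal Python sketch of an engine-based array. The class names and default values are illustrative only, not EMC's internal model:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Director:
        # Each director has its own host and disk adaptors, a CPU
        # complex and a share of the cache memory.
        host_adaptors: int = 2
        disk_adaptors: int = 2
        cache_gb: int = 512          # illustrative value

    @dataclass
    class Engine:
        # The self-contained building block: two directors plus
        # redundant power and cooling (not modelled here).
        directors: List[Director] = field(
            default_factory=lambda: [Director(), Director()])

    class VMaxArray:
        MAX_ENGINES = 8              # the upper limit varies by model

        def __init__(self) -> None:
            self.engines: List[Engine] = []

        def add_engine(self) -> None:
            # Capacity is added an engine at a time; the new engine
            # joins the others over the Virtual Matrix.
            if len(self.engines) >= self.MAX_ENGINES:
                raise RuntimeError("array is fully populated")
            self.engines.append(Engine())

    array = VMaxArray()
    array.add_engine()
    array.add_engine()
    print(len(array.engines), "engines,",
          sum(len(e.directors) for e in array.engines), "directors")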

The VMAX3 hypervisor enables Microsoft Windows and Linux/UNIX clients to share files over multiple protocols in NFS, CIFS, and SMB 3.0 environments, while still supporting Fibre Channel access for high-bandwidth, latency-sensitive block applications. It also allows the VMAX to achieve SRDF consistency across multiple arrays without needing an external host to manage that consistency and the cycle switching. The internal eNAS data mover runs on the VMAX3 hypervisor as a VM container, with virtual versions of the control stations and data movers. The active data movers and control stations run on different directors from their standbys to ensure the highest availability.
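
The availability rule can be sketched as below; the function and director names are hypothetical, but the constraint is the one described above:

    # Hypothetical placement rule for the embedded eNAS VMs: the
    # standby instance must run on a different director from the
    # active one, so a director failure cannot take out both.
    from typing import Dict, List

    def place_ha_pair(vm: str, directors: List[str]) -> Dict[str, str]:
        if len(directors) < 2:
            raise ValueError("HA placement needs at least two directors")
        return {f"{vm}-active": directors[0], f"{vm}-standby": directors[1]}

    print(place_ha_pair("datamover", ["director-1A", "director-1B"]))
    print(place_ha_pair("controlstation", ["director-1B", "director-2A"]))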

Schematic of a VMAX engine

VMAX models

EMC now offers three main models of VMAX3 -- the 100K, 200K and the 400K, which support a maximum of 1,440, 2,880 or 5,760 drives respectively (up to 1 PB, 2 PB or 4 PB of usable capacity). The drive count is determined by the number of Drive Array Enclosures, or DAEs, each model can have: the 100K can have two DAEs, the 200K four, and the 400K eight. The 200K can only have two DAEs if 2.5" and 3.5" drives are mixed.

The VMAX 100K is the entry model of the VMAX3 range. It can scale up to 4 VMAX3 engines with up to 96 CPU cores per array, and can be configured with up to 128 front-end ports and 1 PB of usable capacity. With a VMAX3 engine and up to 720 drives in a single rack, the system has a small footprint for its capacity. It can use FAST.X to extend VMAX3 data services to externally tiered storage such as the EMC XtremIO all-flash array or non-EMC arrays. It can optionally use EMC ProtectPoint software for direct backup to a Data Domain system, and also supports data at rest encryption.

The EMC VMAX 200K can contain up to 2 PB of data. Apart from the increased capacity, its features are similar to the 100K.

The EMC VMAX 400K offers 2.5" SAS and flash drives, and comes with 32 x 2.8 GHz Intel Xeon 12-core processors, up to 2 TB of mirrored RAM, and up to 4 PB of usable capacity. File services are embedded on the array, so it is easy to converge block, file and mainframe workloads. The VMAX 400K can scale up to 8 VMAX3 engines with up to 384 CPU cores, 256 front-end ports and up to 5,760 drives, and can be used to consolidate OLTP, mainframe, Big Data, and block/file workloads.
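
The headline maxima are just the per-engine resources multiplied out; a quick check in Python, using only the figures quoted above:

    # Sanity-check the VMAX 400K maxima quoted above: 8 engines,
    # 384 cores, 256 front-end ports and 5,760 drives.
    engines = 8
    print(384 // engines)    # 48 cores per engine
    print(256 // engines)    # 32 front-end ports per engine
    print(5760 // engines)   # 720 drives per engine, matching the
                             # 720-drive single-rack figure for the 100K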

Cache

The cache memory is mirrored between directors and, in configurations with two or more engines, between engines. Internally, the components of an engine communicate locally, so cache access within an engine is local. However, the engines must also communicate with each other to support the Enginuity global memory concept. To achieve this, the cache is virtualised, and each engine communicates with the other engines using fibre connections and RapidIO technology. When a director receives a cache request, it checks the location of the data: if it is local, the request is served at memory-bus speeds; if it is remote, the request is packaged up and sent to the remote director for processing.
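
The local-versus-remote decision can be sketched as below. The slot-ownership rule here is a stand-in for illustration; the real placement is managed by Enginuity:

    NUM_DIRECTORS = 4      # e.g. a two-engine array

    def owning_director(slot: int) -> int:
        # Stand-in ownership rule; Enginuity maintains the real map.
        return slot % NUM_DIRECTORS

    def read_slot(requester: int, slot: int) -> str:
        owner = owning_director(slot)
        if owner == requester:
            # Local slot: served over the memory bus.
            return "served locally at memory-bus speed"
        # Remote slot: package the request and ship it across the
        # Virtual Matrix to the owning director.
        return f"forwarded over the matrix to director {owner}"

    print(read_slot(0, 8))   # slot 8 -> director 0, local
    print(read_slot(0, 9))   # slot 9 -> director 1, remote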

Device Adaptors

Internally, the VMAX uses 4 Gb/s communications end to end, with support for 8 Gb/s FICON or Fibre Channel host connections, internal connectivity and Fibre Channel drives. The back-end architecture is FC-AL. To achieve this, each director within a VMAX engine contains two Back End I/O Modules and two Front End I/O Modules. The Back End I/O Modules each provide access to four Drive Enclosures through a single Quad Small Form-Factor Pluggable (QSFP) connector; the QSFP cable splits into four smaller cables, one for each enclosure. The Front End I/O Modules can be configured for Fibre Channel, iSCSI or FICON.
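
A sketch of one director's I/O complement as just described; the identifiers are invented for illustration:

    # Each director: two Back End I/O Modules, each reaching four
    # Drive Enclosures through one QSFP, and two Front End I/O
    # Modules configured for a host protocol.
    FRONT_END_PROTOCOLS = ("Fibre Channel", "iSCSI", "FICON")

    def backend_fanout(module: int) -> list:
        # One QSFP splits into four cables, one per enclosure.
        return [f"BE{module}-lane{lane} -> DAE{module * 4 + lane}"
                for lane in range(4)]

    def configure_front_end(module: int, protocol: str) -> str:
        if protocol not in FRONT_END_PROTOCOLS:
            raise ValueError(f"unsupported protocol: {protocol}")
        return f"FE{module} configured for {protocol}"

    for module in (0, 1):
        print(backend_fanout(module))
    print(configure_front_end(0, "FICON"))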

Virtual Matrix

The VMAX architecture extends the direct matrix principle used in the older Symmetrix subsystems, but the matrix is now virtual. Each engine has four virtual matrix ports, two on each director, which connect it to the other engines through two Matrix Interface Board Enclosures (MIBEs).
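
Because each director has one port to each MIBE, any single fabric can fail and a matrix path remains; a minimal sketch of that failover:

    # Each director holds two Virtual Matrix ports, one per MIBE
    # fabric, so inter-engine traffic survives a fabric failure.
    def pick_matrix_path(fabric_up: dict) -> str:
        for fabric in ("MIBE-A", "MIBE-B"):
            if fabric_up.get(fabric):
                return f"route via {fabric}"
        raise RuntimeError("no Virtual Matrix path available")

    print(pick_matrix_path({"MIBE-A": True, "MIBE-B": True}))    # normal
    print(pick_matrix_path({"MIBE-A": False, "MIBE-B": True}))   # failover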

System Bays and Storage Bays

Similar to the DMX-3 and DMX-4 arrays, the VMAX has two types of bay: system bays and storage bays.

The system bay contains all the VMAX engines, together with the system bay standby power supplies (SPS), an uninterruptible power supply (UPS), the Matrix Interface Board Enclosure (MIBE), and a server (the Service Processor) with a keyboard-video-mouse (KVM) assembly. Each system bay can support up to 720 2.5" drives, up to 360 3.5" drives, or a mix of the two.

The Symmetrix VMAX storage bay is similar to the storage bay of the DMX-3 and DMX-4 systems. It consists of eight to sixteen Drive Array Enclosures (DAEs), 48 to 240 drives, eight SPS modules, and different cabling from the DMX series. A half-populated bay holds up to 120 disk drives and a fully populated bay up to 240. Drives, link control cards (LCCs), power supplies and blower modules are all fully redundant, hot swappable, and enclosed inside the DAEs. One DAE holds 15 physical disk drives and one storage bay holds up to 16 DAEs, so a storage bay has a maximum of 240 disks (16 x 15).
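
The bay totals follow directly from the DAE arithmetic:

    DRIVES_PER_DAE = 15
    DAES_PER_BAY = 16

    print(DAES_PER_BAY * DRIVES_PER_DAE)        # 240 drives, full bay
    print(DAES_PER_BAY // 2 * DRIVES_PER_DAE)   # 120 drives, half bay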

The VMAX starts as a single-cabinet, entry-level system that can hold 120 disks. This can be extended by adding up to 10 more frames, each holding 240 disks. One of the difficulties in machine hall design is leaving room for frames to grow as cabinets are added to increase capacity, so the latest release allows the VMAX to be split into two dispersed frames, with the system bays up to 25 m apart.
