Storage Area Networks

This page is about theory: the theory behind the SAN standards and the SNIA and CIM models. If you are not a theory person, skip this page and go to the practical stuff in the rest of the SAN section.


SAN Evolution

In the past, file server storage was usually internal, known as Direct Attached Storage (DAS). This was problematic, as DAS storage was a single point of failure, could not be shared and had a limited size. As Open Systems computing developed, it needed the ability to share storage on a network, add much more capacity, mirror data between sites and provide clustering failover functionality. Server backups were made to individual tape drives attached to each server, which meant someone had to go round and change tapes each day. While this could be fixed by adding tape libraries to the local data network, that affected the performance of applications sharing the LAN, so the need became apparent for a dedicated Storage Area Network, or SAN.
SANs became a practical proposition in 1999, and were mainly used to connect a tape silo to several servers for backup. Switched Fabric SANs with auto failover followed and were used for storage consolidation, as it was then possible to connect several servers to one storage device, or to a group of them. Network Attached Storage (NAS) is also used for the same function. SANs typically used block I/O on a Fibre Channel network, while NAS used file I/O on the existing IP network. NAS was therefore cheaper to install than SAN, but lacked SAN functionality.

However, NAS and SAN did not solve all of the problems of distributed data. One of the biggest issues was that each storage vendor tended to make devices that did not work together, so they needed different management methods and, in the worst case, different SANs. This problem has been resolved to some extent by the introduction of standards: standards for storage devices, interfaces and management software. There are lots of standards bodies, which is a problem in itself, but the main ones are the Distributed Management Task Force (DMTF) and the Storage Networking Industry Association (SNIA). Some others include the SCSI Trade Association and the Fibre Channel Industry Association.

The DMTF came up with the Common Information Model (CIM), which is basically about simplifying the management of distributed systems. Quoting from the DMTF website, "CIM's common definitions enable vendors to exchange semantically rich management information between systems throughout the network." Translated into English, this means that software and hardware products that are CIM compliant can talk to each other and understand each other.
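
To make that concrete, here is a minimal sketch of a CIM client, assuming the Python pywbem library and a purely hypothetical CIMOM address and credentials. Because CIM_DiskDrive is a standard DMTF class, the same few lines should work against any CIM compliant management server, whoever made the hardware behind it.

    import pywbem

    # Hypothetical CIM Object Manager (CIMOM) endpoint and credentials.
    conn = pywbem.WBEMConnection('https://cimom.example.com:5989',
                                 ('admin', 'password'),
                                 default_namespace='root/cimv2',
                                 no_verification=True)   # skip certificate checks in this sketch

    # CIM_DiskDrive is a standard DMTF class, so the request is vendor neutral.
    for drive in conn.EnumerateInstances('CIM_DiskDrive'):
        print(drive.path)       # object path that identifies the drive
        print(drive.tomof())    # the returned properties, in MOF syntax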

SNIA took the CIM model as it applies to storage management and developed the Storage Management Initiative Specification (SMI-S). In brief, SMI-S defines persistent naming standards and discovery systems so you can find a device; communication transports so you can talk to a device; and resource locking facilities so you can share a device. The idea is that if an operating system provides a common set of SMI-S compliant interfaces, then a storage designer does not have to write a different interface for every operating system. They just design one SMI-S compliant interface, which is why all devices and operating systems should be CIM/SMI-S compliant.
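
SMI-S discovery can be illustrated in the same hedged way. Once a client can reach a device's CIMOM, it can ask which SMI-S profiles the device supports by reading the standard CIM_RegisteredProfile instances from the interop namespace; the namespace name ('interop', 'root/interop' and so on), the server address and the credentials below are all assumptions made for the sketch.

    import pywbem

    conn = pywbem.WBEMConnection('https://array.example.com:5989',
                                 ('admin', 'password'),
                                 no_verification=True)

    # Each CIM_RegisteredProfile instance advertises one supported profile,
    # for example the SNIA 'Array' or 'Masking and Mapping' profiles.
    for prof in conn.EnumerateInstances('CIM_RegisteredProfile', namespace='interop'):
        print(prof['RegisteredOrganization'], prof['RegisteredName'], prof['RegisteredVersion'])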

SNIA has models for all the different aspects of storage; the model for shared storage is shown below. While there is no need to know the intricate details of the SNIA specifications (unless you design storage subsystems), the SMI-S specification basically consists of the following:

Providers: vendor-specific software modules that implement a specific SMI-S profile, so that vendor-independent management software can manage that vendor's device through a standard CIM interface.

Profiles: a detailed description of the base set of information and capabilities that all implementations must make available to allow a client to manage a particular SAN device, such as a disk array. Profiles define the classes that a client will use to perform a particular management task in a SAN. The profile also defines the associations that describe the relationships between classes; for example, how a disk drive fits in with a disk array.

Classes: describe the properties and methods for a specific object, a disk drive for example. SMI-S mostly uses the standard CIM classes (see the sketch after the model diagram below).

SNIA Shared Storage model
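
As a final sketch, a client can also ask a CIMOM for a class definition itself, which is essentially the raw material that the profiles above are assembled from. Again, the connection details are hypothetical and the pywbem library is assumed.

    import pywbem

    conn = pywbem.WBEMConnection('https://cimom.example.com:5989',
                                 ('admin', 'password'),
                                 no_verification=True)

    # GetClass returns the class definition: every property and method that
    # instances of CIM_DiskDrive can expose.
    cls = conn.GetClass('CIM_DiskDrive', LocalOnly=False)
    print('Properties:', sorted(cls.properties.keys()))
    print('Methods:   ', sorted(cls.methods.keys()))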
