NAS Storage


DAS, or traditional Direct Attached Storage, is connected directly to a laptop or a general-purpose file server, and it is difficult to share DAS storage between machines. NAS, or Network Attached Storage, was designed to satisfy the need to easily share data between multiple users on different file servers and laptops, by storing the files in a centralised location. However, NAS is not just disk storage; it also includes an operating system and software for configuration and file mapping. The file systems reside on the NAS device, so NAS delivers files to servers.

Contrast this with a SAN, or Storage Area Network. SANs run on a private, usually fibre channel, network and connect storage devices to servers with switches. SANs transfer data in fixed-size blocks, and the file systems reside on the servers.

You can usually have a basic NAS device up and running in 20 minutes, but it is also possible to tailor a NAS array to optimise performance with databases and other applications. NAS vendors' websites usually contain several white papers that explain how to do this.
You might well see adverts for 'Networked Attached Storage' that you can plug into your laptop with a USB connector, maybe to give you your own 'personal cloud'. These devices are neither NAS nor Cloud, but are Direct Attached remote storage.

A NAS unit is usually not designed to be a general-purpose server. For example, NAS units usually do not have a keyboard or display, and are controlled and configured over the network, often using a browser. A full-featured operating system is not needed on a NAS device, so a stripped-down operating system is often used. The hardware that performs the NAS control functions is called a NAS head or NAS gateway. Clients always connect to the NAS head, as it is the NAS head that is addressable on the network. Disks, and in some cases tape drives, are attached to the NAS head for capacity. NAS heads are also sometimes called NAS appliances, based on the idea that NAS is a commodity item like a toaster or washing machine.

NAS removes the responsibility of file serving from other servers on the network. NAS devices typically provide access to files using network file sharing protocols such as NFS, SMB/CIFS, or AFP.

NAS can also be used to provide storage services for load-balanced and fault-tolerant email and web server systems. NAS devices are also available for the home consumer market, typically for storing large amounts of multimedia data. Unlike their rackmounted counterparts, these consumer appliances are generally packaged in smaller form factors.


NAS boxes have been around for a long time and are easy to install, which means that a company might now have hundreds of different NAS file systems to manage. The effort required to keep all the devices patched, and the downtime needed for patching or adding new servers, can be considerable if each NAS is managed individually. Once you fill up your first NAS box, you will buy another one, but then you need to change all your users' drive mappings to see the second box, and this is not a trivial exercise if you've got 2,000 users. Then, when that second box reaches capacity, you add a third and have to change the mappings again, which becomes a management nightmare.

One solution to this problem is to add a logical layer of intelligent switches, or maybe a single appliance, between servers and NAS boxes, and so create a global namespace that spans multiple file systems but appears to end users as a single local drive on their computers. This can also automatically balance I/O and capacity between servers, and you can increase volume sizes or move storage volumes without disrupting users. You can also migrate the data from one box to another with just a brief outage while you change the global namespace to point to the new box. Users do not even know about it, because it is all done behind the scenes.

Like SAN virtualisation, NAS virtualisation has two approaches. Some products, Acopia for instance, use switches that sit in-band between end users and file servers or NAS devices. Other products, such as NuView's StorageX software, reside on an out-of-band Wintel server and act like a postal service, directing files to the appropriate NAS device or volume within a file server. Which is best? It is possible to argue the case for either approach, but whether a device is in-band or out-of-band arguably matters less than whether it simplifies the management of all of your NAS devices.

One problem with managing multiple NAS devices is that file servers based on Windows use the Common Internet File System (CIFS) protocol to communicate over a network, while servers based on Unix or Linux use the Network File System (NFS). The consequence is that Unix and Windows environments need to be virtualised separately.


If you are upgrading your NAS storage, try to purchase enough capacity to last you for the length of your product replacement cycle, usually 4 years, then add a little more as you will certainly need it. Consider limiting the amount of data each user can store on their 'home' drive to try to control runaway data growth.
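
The sizing sum above can be sketched in a few lines of Python: compound the current usage over the replacement cycle, then add some headroom. The figures used here (10TB current usage, 25% annual growth, a 4-year cycle, 20% headroom) are illustrative assumptions, not recommendations.

```python
def required_capacity_tb(current_tb, annual_growth, years, headroom=0.20):
    """Projected usage after 'years' of compound growth, plus headroom."""
    projected = current_tb * (1 + annual_growth) ** years
    return projected * (1 + headroom)

# Example: 10TB today, growing 25% a year, over a 4-year replacement cycle
print(f"Buy at least {required_capacity_tb(10, 0.25, 4):.1f} TB")
```

Your own growth rate is best estimated from historical usage reports rather than guessed.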

Tier your data: keep active, mission-critical data on high-performance (and high-cost) drives, and inactive or less important data on cheaper, slower drives. We could define 'active' as data that has been accessed in the last 30 days. Of course, you want the data to be moved between tiers automatically, so your NAS provider should supply software that does that for you. You could also consider permanently storing less important data, like users' home drives, on cheaper disk.
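
The 30-day 'active' rule can be sketched as below. This assumes POSIX access times are reliable; in practice many systems mount with relatime or noatime, which is one reason the NAS vendor's own tiering software tracks access itself.

```python
import os
import time

ACTIVE_WINDOW_SECS = 30 * 24 * 3600  # the 30-day 'active' definition

def tier_for(path, now=None):
    """Return 'fast' for files accessed within the window, else 'cheap'."""
    now = time.time() if now is None else now
    age = now - os.stat(path).st_atime
    return "fast" if age <= ACTIVE_WINDOW_SECS else "cheap"
```

A real tiering engine would scan whole volumes and move the data as well as classify it; this only shows the classification rule.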

If you can, use thin provisioning. Thin provisioning means that the amount of storage allocated to a drive is only the amount actually being used. So, if you define a 500GB drive, the initial amount of storage allocated to that disk will be quite small, but it will grow as more data is added. The provisioning software should alert you when you are getting close to the 500GB limit, so you can consider adding more disk space.
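
The alerting logic amounts to a simple usage check. The 500GB figure matches the example in the text; the 80% alert threshold is an illustrative assumption.

```python
def thin_provision_alert(used_gb, provisioned_gb=500, threshold=0.80):
    """Return a warning once usage nears the thin-provisioned limit."""
    fraction = used_gb / provisioned_gb
    if fraction >= threshold:
        return f"WARNING: {fraction:.0%} of {provisioned_gb}GB used - consider adding disk"
    return None  # still comfortably under the limit

print(thin_provision_alert(450))
```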

Take snapshots and backups of the data. Snapshots are perfect for recovering deleted files and directories, as they are so much faster than recovering from a backup: you just look at the snapshot, then copy the needed data back to the base directory.
However, snapshots do not replace backups; if you lose the array, you lose the snapshots too. The data must be backed up to tape or a different disk to cater for loss of the array.
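
The 'copy it back from the snapshot' step can be sketched as below. On NetApp NFS exports, snapshots typically appear under a hidden '.snapshot' directory at the root of the share; the snapshot name and file path in the example are assumptions for illustration.

```python
import shutil
from pathlib import Path

def restore_from_snapshot(share_root, snapshot_name, relative_path):
    """Copy one file from a read-only snapshot back into the live file system."""
    src = Path(share_root) / ".snapshot" / snapshot_name / relative_path
    dst = Path(share_root) / relative_path
    dst.parent.mkdir(parents=True, exist_ok=True)  # recreate deleted directories
    shutil.copy2(src, dst)  # copy2 preserves timestamps
    return dst

# e.g. restore_from_snapshot("/mnt/share001", "hourly.1", "docs/report.txt")
```

Over SMB the same data is usually reached through the Windows 'Previous Versions' tab instead.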

Configuring a NetApp NAS server

A NetApp storage controller can support both file and block protocols concurrently, with block storage delivered as SAN LUNs and NAS file services delivered over NFS and SMB/CIFS. However, before we go any further, we should define some NetApp storage controller terminology.

A NetApp storage controller or 'Node' is a hardware device with a processor, RAM, and NVRAM which connects to a combination of SATA, SAS, or SSD disk drives. Nodes can be grouped together into clusters, so if one node fails all storage and services can fail over to another node. It is also possible to transparently move data between nodes for work balancing or maintenance.
Every cluster must have at least one storage virtual machine (SVM), which owns storage volumes and logical interfaces (LIFs). LIFs communicate with the network over either physical Ethernet ports or Fibre Channel target ports. You create logical disks or CIFS shares inside an SVM, and these are mapped to a Windows host.
The point behind an SVM is that its data volumes and LIFs are usually dedicated to the SVM, so they can only be accessed through the Windows share mapping, which means they are secure.

NetApp supplies a number of ways of managing storage, including the GUI-based 'OnCommand System Manager', and you can use any SSH client on a Windows server to run NetApp CLI commands. As always, I'll describe how to manage with the CLI rather than the GUI. There are two commands that you can use to create an SVM: 'vserver setup' and 'vserver create'. SVMs used to be called vservers and, just to confuse you, they are still called vservers in the commands.
'vserver setup' starts up a wizard that takes you step by step through the process, generates the appropriate commands and runs them for you. Once the command completes, you have a fully configured SVM that you can immediately access from a Windows client, to create and access files.
'vserver create' gives you more options on the type of volumes you can create, but you need to configure and run a lot of different commands to get to the end result. It is good to know what is happening in the background, especially if anything goes wrong and you need to fix it. So, I will have a go at listing out these commands, but this should be taken as a guide rather than a definitive process.

First, you must create an aggregate (a collection of physical disks), which must be online and have sufficient space for the SVM root volume. The first command below lists existing aggregates, as you may already have a suitable one. Assuming you need to create a new aggregate, the second command creates a 64-bit aggregate called aggr_01_15k, with 10 drives, the default RAID configuration, and all drives spinning at 15K RPM.
In all these commands below I've added illustrative names in italics. You need to substitute your own names, and it is worth working out a good set of naming standards before you start, as you will probably have a lot of SVMs and LIFs.

storage aggregate show
aggr create aggr_01_15k -R 15000 10

There are other things to check before you can create an SVM, mainly to do with networking connectivity and IP address setup; check the NetApp documentation for details. Assuming everything is in place, we will create a new SVM called test_cifs with the CIFS NAS protocol enabled, using the aggregate you just created and the default IPspace.

vserver create -vserver test_cifs -rootvolume lctest -aggregate aggr_01_15k -rootvolume-security-style ntfs -language C.UTF-8

Next, create LIFs, which are basically an IP address associated with a physical or logical port. NetApp recommends creating at least four LIFs per SVM: two data LIFs, one management LIF, and one intercluster LIF for intercluster replication.

network interface create -vserver test_cifs -lif lif_cifs_D1 -role data -data-protocol cifs -home-node node-4 -home-port e1c -address -netmask -firewall-policy data -auto-revert true

If you want to add more LIFs, then run the command again with different names.

Now, configure DNS using the first command below, then check it is OK with the second command. I'm assuming the IP addresses of the domain servers are and

vserver services name-service dns create -vserver test_cifs -domains lascon.co.uk -name-servers, -state enabled

vserver services name-service dns show -vserver test_cifs.lascon.co.uk
            Vserver: test_cifs.lascon.co.uk
            Domains: lascon.co.uk
       Name Servers:,
 Enable/Disable DNS: enabled
     Timeout (secs): 2
   Maximum Attempts: 1

The cluster time must match the time on the AD domain controllers to within five minutes, so you need to configure time services using the command below. The command assumes the external NTP server is called ntp.production.com. The final command checks the result.

cluster time-service ntp server create -server ntp.production.com

cluster time-service ntp server show

Now create the SMB server in an AD domain, but first check that SMB/CIFS is licensed on your cluster. The final command checks the create command worked correctly.

system license show -package cifs

vserver cifs create -vserver test_cifs.lascon.co.uk -cifs-server smb_server01 -domain lascon.co.uk

vserver cifs show -vserver test_cifs
                          Vserver: test_cifs.lascon.co.uk
         CIFS Server NetBIOS Name: SMB_SERVER01
    NetBIOS Domain/Workgroup Name: EXAMPLE
      Fully Qualified Domain Name: LASCON.CO.UK
Default Site Used by LIFs Without Site Membership:
             Authentication Style: domain
CIFS Server Administrative Status: up
          CIFS Server Description: -
          List of NetBIOS Aliases: -

Log into your DNS server and create both forward and reverse lookup entries to map your SMB server name to the IP address of the data LIF(s).

Now you need to configure the data storage on your SMB server. You create a volume and, optionally, qtrees. Qtrees may be a little old fashioned, but one big advantage is that you can create up to 4,995 qtrees per internal volume, each as a special subdirectory of the root directory. For example, you might find this useful if you are creating individual 'home' directories for your users.
The volume you create must include a junction path for its data to be made available to clients. The command to do this is:

volume create -vserver test_cifs -volume vol1 -aggregate aggr_01_15k -size 500GB -security-style ntfs -junction-path /vol1

This creates a 500GB volume from the aggregate we created earlier, mounted directly under the root as vol1.
If you wanted to create qtrees, the command to create a qtree called 'Home001' would look something like:

volume qtree create -vserver test_cifs -volume vol1 -qtree Home001 -security-style ntfs
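
If you are creating home directories for many users, you could generate one qtree create command per user rather than typing them out. The sketch below follows the Home001-style naming and the SVM and volume names from the examples above; the user list is hypothetical.

```python
def qtree_commands(users, vserver="test_cifs", volume="vol1"):
    """Map each user to a Home-numbered qtree and its create command."""
    cmds = {}
    for i, user in enumerate(users, start=1):
        qtree = f"Home{i:03d}"  # Home001, Home002, ...
        cmds[user] = (
            f"volume qtree create -vserver {vserver} -volume {volume} "
            f"-qtree {qtree} -security-style ntfs"
        )
    return cmds

for cmd in qtree_commands(["alice", "bob"]).values():
    print(cmd)
```

You would then paste the generated commands into your SSH session, or feed them to the controller in a script.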

Now you must create an SMB share for client access. The share points to a directory path which corresponds to the junction path for the volume that was created above. The first command below creates the share; the second command checks that the first command worked.

vserver cifs share create -vserver test_cifs -share-name share001 -path vol1
vserver cifs share show -share-name share001

Log in to a Windows client and map a drive using \\SMB_Server_Name\Share_Name, for example: \\test_cifs.lascon.co.uk\share001, then check all is working by creating and deleting a file.

This process describes creating a test CIFS share and, as it stands, anyone could map to this share and have full access to all of the data. Before putting a share like this into production, you would want to create access control lists (ACLs) to control the level of access to the share for users and groups. Users in the 'Administrators' group get full access by default, so let's assume we have another AD group called 'Systest', and we want them to have 'Change' access.
You would first delete the default share ACL that gives full control to Everyone, then define a new ACL.

vserver cifs share access-control delete -vserver test_cifs -share share001 -user-or-group everyone

vserver cifs share access-control create -vserver test_cifs -share share001 -user-or-group Systest -permission Change

After that, you would probably want to configure file permissions using the Security tab that you see if you right-click on the drive mapping within Windows.


Lascon Updates

I retired 2 years ago, and so I'm out of touch with the latest in the data storage world. The Lascon site has not been updated since July 2021, and probably will not get updated very much again. The site hosting is paid up until early 2023 when it will almost certainly disappear.
Lascon Storage was conceived in 2000, and technology has changed massively over those 22 years. It's been fun, but I guess it's time to call it a day. Thanks to all my readers in that time. I hope you managed to find something useful in there.
All the best
