- Windows File Systems
- Windows ReFS
- Windows NTFS
- Windows DFS
- Windows CIFS
- Virtual Disk Services
- Volume Shadow Copy Service
- Removable Storage System
- Windows Volume Management
- Windows System State
CIFS, or Common Internet File System, was not really a file system, but a messaging protocol that allowed different computers, potentially running different operating systems, to safely share remote files. The original internet was read-only, with access managed using one-way protocols like HTTP and FTP, but as the internet became more interactive a new read/write interface was needed. This is where CIFS came in, as it was designed to enable all applications, not just web browsers, to open and share files securely across the internet.
However, as the internet continued to evolve and require larger data transfers and faster speeds, CIFS was unable to keep up. CIFS was based on an old IBM protocol called Server Message Block (SMB). Microsoft rewrote SMB and released it as SMB 2.0 with Windows Server 2008, then enhanced it again as SMB 3.0 in Windows Server 2012.
Windows has its roots in home PCs, but Microsoft really wanted to extend it into a serious operating system capable of running important, server-based business applications. To achieve this, Windows had to be improved to work with networked storage, cope with temporary network glitches, survive the loss of a server, and scale as applications grow. Improving SMB/CIFS was part of making all that happen. Some of the improvements introduced with SMB 3.0 were:
SMB Multichannel allows a client to query the server's network configuration and discover the type, speed and IP address of every NIC on the server. The client can then use that information to automatically select the best combination of paths to use. If Windows can use more than one path, it can survive the failure of a network path, as long as there is at least one surviving path.
SMB Multichannel can also combine the bandwidth from multiple network adaptors, provided that both client and server are running SMB 3.0. Two 10GbE NICs, for example, would be combined to achieve up to 20Gbps throughput. If one of the cables on the client is pulled out, SMB will instantly detect that situation and move all the items queued on the failed NIC to the surviving one. This is completely transparent, except that you will be working at only 10Gbps after the failure. If a cable is pulled on the server side, it takes a few seconds longer, as the server needs a TCP/IP timeout to work out that the interface was lost. This will delay a few packets for a few seconds before they are requeued to the other interface. Microsoft had to make several changes to make automatic failover effective, including finding a new way to handle TCP/IP timeouts, so that failover is faster.
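The path-selection and bandwidth-combination behaviour described above can be sketched in a few lines. This is an illustrative model only, not the Windows implementation: the NIC records and function names are invented, but the logic (pick the fastest usable paths, sum their bandwidth, requeue onto survivors when a link drops) matches the description.

```python
# Toy model of SMB Multichannel path selection (names invented for illustration).

def pick_paths(nics, max_paths=4):
    """Return the usable NICs, fastest first, up to max_paths."""
    usable = [n for n in nics if n["up"]]
    return sorted(usable, key=lambda n: n["speed_gbps"], reverse=True)[:max_paths]

def aggregate_bandwidth(paths):
    """Combined throughput ceiling across the selected paths, in Gbps."""
    return sum(n["speed_gbps"] for n in paths)

server_nics = [
    {"name": "NIC1", "speed_gbps": 10, "up": True},
    {"name": "NIC2", "speed_gbps": 10, "up": True},
]

paths = pick_paths(server_nics)
print(aggregate_bandwidth(paths))   # 20 -- two 10GbE NICs combined

# Simulate pulling the cable on NIC1: traffic continues on the survivor.
server_nics[0]["up"] = False
paths = pick_paths(server_nics)
print(aggregate_bandwidth(paths))   # 10 -- transparent, but at half the bandwidth
```

The key point the sketch captures is that the client, not the administrator, discovers the available interfaces and reacts to a failure by re-selecting paths.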
SMB Transparent Failover covers two cases. Planned failover allows you to transparently fail applications over to another cluster node so you can do maintenance work on the original node. Unplanned failover means that when an SMB file server cluster node fails, applications are automatically and transparently failed over to a surviving node. This is known as SMB Persistence, or Continuously Available file shares, and it is now the default for any share in a file server cluster.
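A minimal sketch of the unplanned-failover idea, with an invented cluster class for illustration: work queued against a failed node is moved to a survivor, so the application sees no error, only a brief delay.

```python
from collections import deque

class FileServerCluster:
    """Toy model (not a real API): per-node request queues with failover."""

    def __init__(self, nodes):
        self.queues = {node: deque() for node in nodes}
        self.active = nodes[0]

    def submit(self, request):
        self.queues[self.active].append(request)

    def fail_node(self, node):
        """Requeue everything from a failed node onto a surviving one."""
        survivor = next(n for n in self.queues if n != node)
        self.queues[survivor].extend(self.queues.pop(node))
        if self.active == node:
            self.active = survivor

cluster = FileServerCluster(["node-a", "node-b"])
cluster.submit("WRITE block 1")
cluster.submit("WRITE block 2")
cluster.fail_node("node-a")
print(list(cluster.queues["node-b"]))  # both writes survive on node-b
```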
The way that Windows uses drive letters to refer to disks is restrictive and a bit archaic. Hyper-V over SMB, introduced with Windows Server 2012, allows the location of a virtual machine to be specified as a Universal Naming Convention (UNC) path rather than a drive letter and directory. This improves scalability, as more than 26 drives or file shares can be allocated, and also allows the location to be mapped to a service name rather than a physical server.
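To see the difference in practice, Python's standard-library `pathlib` can parse Windows UNC paths even on a non-Windows host. The server and share names below are made up; the point is that the "drive" component becomes a named share rather than a letter.

```python
from pathlib import PureWindowsPath

# A UNC path addresses a share by name instead of a drive letter.
# "fs-cluster" and "vms" are invented example names.
vm_path = PureWindowsPath(r"\\fs-cluster\vms\web01\web01.vhdx")

print(vm_path.drive)  # \\fs-cluster\vms  -- the "drive" is the named share
print(vm_path.name)   # web01.vhdx
```

Because the anchor is a name, it can point at a clustered file service rather than one physical machine, which is what makes the service-name mapping possible.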
There may be occasions in a clustered configuration where you want a share to be visible to all the nodes in a cluster. SMB Scale-Out is an SMB 3.0 feature that provides that functionality. It works in an active/active configuration and uses Cluster Shared Volumes (CSV), special volumes that appear on every cluster node simultaneously, so you don't have to worry about which cluster node can access the data, because they all can. CSV volumes appear under the C:\ClusterStorage path, which means you don't need a drive letter for every cluster disk.
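A quick illustration of why the CSV namespace removes the drive-letter ceiling: every cluster disk is just a numbered directory under one root, so forty volumes are as easy to address as two. The volume numbering and helper below are for illustration only.

```python
from pathlib import PureWindowsPath

def csv_path(volume_number):
    """Build the CSV-style path for a numbered cluster volume (illustrative)."""
    return PureWindowsPath(r"C:\ClusterStorage") / f"Volume{volume_number}"

# 40 cluster disks: impossible with 26 drive letters, trivial under CSV.
volumes = [csv_path(i) for i in range(1, 41)]
print(volumes[0])     # C:\ClusterStorage\Volume1
print(len(volumes))   # 40
```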
Recent solid-state disk arrays need a network that can handle low-latency, high-throughput traffic. SMB Direct is a feature that supports network adaptors with Remote Direct Memory Access (RDMA) capability. These adaptors can run at full speed with very low latency, while using very little CPU. For workloads such as Hyper-V or Microsoft SQL Server, this enables a remote file server to resemble local storage.
A good data transfer system must be able to cope with both small random read/write I/O and large sequential data transfers. SMB 3.0 has been optimised on both the client and server sides for small random read/write I/O, and large Maximum Transmission Unit (MTU) support is turned on by default, which significantly enhances performance on large sequential transfers. You can track how well these are working by using the SMB Performance Counters, which provide the management reporting required to track file share utilisation, including throughput, latency and IOPS. The counters are managed through the Performance Monitor tool and cover both the client and server ends of the SMB 3.0 connection, which is useful for troubleshooting performance issues.
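The three figures those counters report are related by simple arithmetic, which this toy calculation makes explicit. The sample numbers are invented, not real counter output, but the relationships (throughput = bytes/interval, IOPS = operations/interval, latency = time-in-I/O per operation) are the standard definitions.

```python
# Invented sample data for one 10-second monitoring interval.
bytes_transferred = 512 * 1024 * 1024   # 512 MiB moved in the interval
io_count = 65_536                       # I/O operations completed
interval_seconds = 10.0
total_io_time = 40.0                    # summed seconds spent inside I/O

throughput_mib_s = bytes_transferred / (1024 * 1024) / interval_seconds
iops = io_count / interval_seconds
avg_latency_ms = total_io_time / io_count * 1000

print(throughput_mib_s)              # 51.2 MiB/s
print(iops)                          # 6553.6 operations/s
print(round(avg_latency_ms, 2))      # average latency in ms
```

Note that high throughput with few large I/Os and high IOPS with many small ones can produce the same byte count, which is why all three counters are needed to characterise a workload.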
Once your data leaves your client and travels over an untrusted network, there is a danger that it can be intercepted by a 'man in the middle' attack. Provided both client and server are configured for SMB 3.0, SMB Encryption allows data travelling between them to be encrypted across the network. It may be configured on a per-share basis, or for the entire file server.
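The per-share versus whole-server policy can be summarised as a small decision function. This is a sketch of the policy logic only, with invented field names, not the SMB negotiation itself: a session gets encryption when both ends speak at least SMB 3.0 and either the server-wide flag or the share's own flag is set.

```python
def session_encrypted(client_dialect, server_dialect,
                      server_encrypt_all, share_flags, share):
    """Return True if this session's traffic would be encrypted (toy policy)."""
    if client_dialect < (3, 0) or server_dialect < (3, 0):
        return False  # pre-3.0 dialects cannot negotiate SMB Encryption
    return server_encrypt_all or share_flags.get(share, False)

# Invented example shares: only "finance" opts in per-share.
share_flags = {"finance": True, "public": False}

print(session_encrypted((3, 0), (3, 0), False, share_flags, "finance"))  # True
print(session_encrypted((3, 0), (3, 0), False, share_flags, "public"))   # False
print(session_encrypted((2, 1), (3, 0), True,  share_flags, "finance"))  # False
```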