Imagine this: you have blazingly fast flash storage and a very fast CPU, but the pipe connecting them is slow, so your system does not run as fast as it should. NVMe to the rescue!
Non-Volatile Memory Express, or NVMe, is an open-standards protocol designed specifically for communication between servers and non-volatile memory, or flash storage. Previously, servers usually communicated with mechanical storage devices using the SCSI protocol over SATA or SAS buses, and when early SSDs arrived, SATA was most often used. However, as SSDs became mainstream, it became obvious that SATA was not up to the job, as it was designed to work with spinning hard disk drives. Performance could be improved by connecting SSDs directly to the PCIe bus, but there was no standard specification to follow.
A large consortium of vendors developed NVMe to fill this gap, at first specifically to connect SSD storage via PCIe connectors on a motherboard. NVMe removed the last of the storage performance bottlenecks, at least within a server rack.
NVMe manages to do this by using a leaner command set, so it requires fewer than half the CPU instructions of SCSI or SATA. It also has an almost unlimited queue depth for parallel processing, as it supports 65,535 I/O queues, each supporting 64,000 commands.
By contrast, SAS and SATA devices support just one queue, with 256 commands for SCSI and 32 commands for SATA. SAS and SATA were designed for spinning hard drives, which are restricted by the read head to transferring only one unit of data at a time. An SSD has no such hardware restriction and can retrieve many pieces of data simultaneously. NVMe typically uses a four-lane (x4) PCIe link to exploit this parallelism.
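To get a feel for how lopsided that comparison is, here is a small Python sketch that multiplies out the queue figures quoted above. (These are the protocol limits from the text; real devices and drivers expose far fewer queues in practice.)

```python
# Aggregate outstanding-command capacity implied by each protocol's queue
# model, using the limits quoted in the text (not what real hardware ships).
protocols = {
    "NVMe":     {"queues": 65_535, "depth": 64_000},
    "SCSI/SAS": {"queues": 1,      "depth": 256},
    "SATA":     {"queues": 1,      "depth": 32},
}

for name, p in protocols.items():
    capacity = p["queues"] * p["depth"]
    print(f"{name}: {p['queues']} queue(s) x {p['depth']} commands "
          f"= {capacity:,} outstanding commands")
```

Even allowing for the fact that no controller ever runs anywhere near the NVMe ceiling, the gap between one queue of 32 commands and tens of thousands of deep queues is what lets an SSD's internal parallelism actually be used.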
So what does this mean in real terms? To put some approximate numbers on it: a spinning hard drive transfers data at around 200 MB/s, an SSD over SATA at around 550 MB/s, and an SSD over NVMe at around 3 GB/s, more than five times faster than an SSD over SATA.
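Turning those throughput figures into wall-clock time makes the difference concrete. A quick sketch, using the approximate sequential rates quoted above:

```python
def transfer_seconds(size_gb: float, throughput_mbps: float) -> float:
    """Time in seconds to move size_gb gigabytes at throughput_mbps MB/s."""
    return size_gb * 1000 / throughput_mbps

# Approximate sequential throughputs from the text (MB/s)
media = {"HDD": 200, "SATA SSD": 550, "NVMe SSD": 3000}

for name, mbps in media.items():
    print(f"100 GB over {name}: {transfer_seconds(100, mbps):.0f} s")
# HDD takes ~500 s, SATA SSD ~182 s, NVMe SSD ~33 s
```

Copying 100 GB drops from minutes on a hard drive to about half a minute over NVMe, which is why the interface, not the flash itself, was the bottleneck.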
NVMe supports 3D XPoint technology as well as NAND flash SSDs.
As mentioned, the original NVMe specification was for components inside a chassis, but it quickly became evident that it needed to be extended to run over networks to support external attachment. The NVMe over Fabrics (NVMe-oF) specification was published on June 5, 2016, with a design goal of adding no more than 10 microseconds of latency for communication between an NVMe host computer and a network-connected NVMe storage device, compared with an NVMe storage device on the local computer's internal PCIe bus.
The NVMe-oF specification was then extended across various network fabrics, including Fibre Channel (FC), Ethernet and InfiniBand using RDMA. One of the main differences between NVMe-oF and NVMe is the methodology for transmitting and receiving commands and responses. NVMe is designed for local use and maps commands and responses to a computer's shared memory via PCIe. By contrast, NVMe over Fabrics employs a message-based system to communicate between the host computer and target storage device.
Another transport was added in 2018: NVMe-oF over TCP. This is important, as TCP is well understood and you can use existing TCP/IP routers and switches. In performance terms, NVMe over TCP lies between SCSI and NVMe over FC or RDMA. RDMA stands for remote direct memory access, and it is arguably the best choice for high performance.
NVMe-oF over Fibre Channel will probably be the first network transport to gain wide acceptance, since it is more mature than NVMe-oF over TCP. Like TCP, Fibre Channel can use existing routers and switches, though some minor software changes might be needed. Other transports will be adopted as demand dictates. I imagine that hyperconverged and software-defined storage will most likely use direct-attached NVMe over PCIe, while enterprise storage will use Fibre Channel, and distributed storage, TCP. Where really high performance is vital, RDMA will be used. But whichever transport is used, NVMe and NVMe-oF will eventually replace traditional SCSI-based storage.
If you are a gamer shopping for a new PC, look for one with NVMe. You will notice the difference. If your existing PC is not too old, consider upgrading it to NVMe if the cost case stacks up.
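If you are curious whether a Linux machine is already using NVMe, the kernel's device naming gives it away: NVMe namespaces show up under /sys/block as nvme&lt;controller&gt;n&lt;namespace&gt; (for example, nvme0n1), while SATA/SAS devices appear as sdX. A minimal sketch (the helper names are my own):

```python
import os

def is_nvme(device_name: str) -> bool:
    """Heuristic: the Linux kernel names NVMe namespaces nvme<ctrl>n<ns>."""
    return device_name.startswith("nvme")

def list_block_devices(sys_block: str = "/sys/block") -> dict:
    """Map each block device on a Linux host to 'NVMe' or 'other'."""
    devices = {}
    if os.path.isdir(sys_block):
        for name in sorted(os.listdir(sys_block)):
            devices[name] = "NVMe" if is_nvme(name) else "other"
    return devices

if __name__ == "__main__":
    for dev, kind in list_block_devices().items():
        print(f"{dev}: {kind}")
```

On most distributions, `lsblk -d -o NAME,TRAN` will show much the same information, with the transport column reading "nvme" or "sata".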
The position within the datacentre is a bit more complicated. While I'm convinced that NVMe is the future, NVMe-based PCIe SSDs are currently more expensive than SAS- and SATA-based SSDs of equivalent capacity, and it would be difficult to justify swapping out older flash storage anyway. So the question you will need to ask is: do your applications need the level of performance that NVMe PCIe SSDs provide right now? NVMe is definitely more than a nice-to-have storage technology, but it may be one for the next upgrade cycle.