Data Center Strategy: All-Flash NVMe Architecture Solutions

Posted on May 5, 2021 by rawee.k

The challenge of meeting the performance demands of on-prem data center applications and services – including high-performance computing (HPC), cloud computing, SQL/NoSQL databases, virtualization (VMs/containers), AI/ML and data analytics – has pushed organizations to explore all-flash NVMe solutions. IT departments generally migrate from traditional SATA or SAS storage to NVMe in the data center to support cloud and mission-critical applications that demand high performance and low latency. Unfortunately, while cloud hyperscalers have heavily adopted NVMe, traditional IT departments are still largely using SSDs with SATA and SAS interfaces.

Is now the time to replace legacy storage with NVMe SSDs and NVMe over Fabrics (NVMe-oF) technologies? First, let’s examine how you can achieve improved performance and efficiency by sharing your NVMe investment across servers.

Researching flash-based solutions to achieve new levels of performance and efficiency? Here’s what data center administrators need to know to evaluate NVMe flash storage solutions. Questions? Drop us an email.

NVMe Solution Architecture

To enable an end-to-end NVMe solution architecture, there are four components to consider: solid-state drives (SSDs), the controller, storage pools and the network fabric.

Solid State Drives (SSDs) – Flash-based NVMe SSDs, such as those from Western Digital, are designed to deliver consistent response times with low latency. They also provide dual ports, giving you redundant paths to all resources. Western Digital NVMe SSD capacities range from 1.6TB to 15.36TB and can be paired with high-density disk expansion units.

NVMe Controller – Early flash storage devices were connected via SATA or SAS – protocols that were developed decades ago for HDDs. SATA- and SAS-connected flash storage provided huge performance gains over HDDs and is still widely used in data infrastructure. Yet, as speeds increased – on CPUs, backplanes, DRAM, and networks – the SATA and SAS protocols began to limit the performance of flash devices. SATA and SAS were designed around HDD characteristics, such as rotational delay and head seek times, that add unnecessary complexity for flash-based media.

Developed beginning in early 2008 as a much more efficient (faster) protocol than SATA or SAS, NVMe was designed to take full advantage of flash storage performance, assuming flash memory – not spinning disk – was the storage target. NVMe provides a divided 12-lane highway versus the two-lane country road of SAS and SATA controllers. With many more I/O lanes than SAS or SATA, NVMe delivers extremely high performance and can be connected to a 100 Gbit/s Ethernet link without requiring a switch.
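The "more lanes" analogy comes down to command queuing. As a rough sketch, the figures below compare the theoretical command concurrency allowed by the AHCI (SATA) and NVMe specifications; real drives typically expose far fewer queues than the spec maximum.

```python
# Theoretical outstanding-command limits per the AHCI and NVMe specs.

# AHCI (SATA): a single command queue, up to 32 outstanding commands
ahci_queues = 1
ahci_queue_depth = 32

# NVMe: up to 65,535 I/O queues, each up to 65,536 commands deep
nvme_queues = 65_535
nvme_queue_depth = 65_536

ahci_concurrency = ahci_queues * ahci_queue_depth
nvme_concurrency = nvme_queues * nvme_queue_depth

print(f"AHCI/SATA max outstanding commands: {ahci_concurrency}")
print(f"NVMe max outstanding commands: {nvme_concurrency:,}")
```

The deep, parallel queue structure is what lets NVMe keep many CPU cores issuing I/O simultaneously instead of serializing on one queue.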

Devices using the NVMe protocol communicate with CPUs, DRAM, and other components over the PCIe electrical interface. Western Digital vertically integrates its in-house NVMe RapidFlex controller, firmware and 96-layer 3D TLC NAND technology. Used in a six-controller configuration, this provides sub-500-nanosecond latency for projected platform performance of up to 13 million IOPS.

Storage Pools – NVMe is often used in servers to connect a flash drive to the PCIe bus as direct-attached storage (DAS), giving the server a more efficient way to use flash media. However, this creates a situation where NVMe SSDs sit underutilized while other servers could benefit from the additional flash. For example, in a hyperconverged (HCI) system, all of those drives are landlocked inside a single chassis.
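The cost of that landlocked flash adds up quickly. A toy illustration (the server count and per-server usage figures below are hypothetical) of how much capacity can be stranded in per-server DAS silos:

```python
# Hypothetical fleet: 10 servers, each with one 15.36TB NVMe SSD as DAS.
servers = 10
tb_per_server = 15.36
# Hypothetical per-server usage; uneven demand is what strands capacity.
used_tb = [4.0, 12.0, 2.5, 9.0, 1.0, 14.0, 3.0, 6.5, 5.0, 8.0]

total_capacity = servers * tb_per_server
total_used = sum(used_tb)
stranded = total_capacity - total_used

print(f"Total flash: {total_capacity:.1f} TB, used: {total_used:.1f} TB")
print(f"Stranded in DAS silos: {stranded:.1f} TB "
      f"({stranded / total_capacity:.0%} of the investment)")
```

Pooling the same drives behind a fabric lets busy servers draw on capacity that idle servers would otherwise strand.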

**Introducing NVMe over Fabrics** – Previously, the challenge with NVMe was that a flash device was not accessible to any system beyond the server it was attached to. However, the NVMe protocol is not limited to connecting flash drives; it can also be used as a networking protocol. Used in this context, NVMe-oF enables any-to-any connections among attached elements – distinguishing a fabric from a network, which may restrict the possible connections.

With an NVMe “fabric,” any server can access any SSD with no dedicated connections needed. Data center operators can now create a high-performance JBOF (just a bunch of flash) DAS network with latencies that rival legacy SATA or SAS expansion units. Simply connect the JBOF to the switch in the server cabinet, and it is accessible by every server.
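The any-to-any property can be sketched with a toy model: under DAS each server is paired with one drive, while over a fabric the reachable paths are the full cross product of servers and SSDs. The server and drive names below are hypothetical.

```python
# Toy model: DAS pairings vs. any-to-any fabric paths.
servers = ["app01", "app02", "db01"]
jbof_ssds = ["nvme0", "nvme1", "nvme2", "nvme3"]

# DAS: each server sees only its own fixed drive
das_paths = list(zip(servers, jbof_ssds))

# Fabric: any server can reach any SSD through the switch
fabric_paths = [(s, d) for s in servers for d in jbof_ssds]

print(f"DAS paths: {len(das_paths)}, fabric paths: {len(fabric_paths)}")
```

Even in this tiny example, the fabric quadruples the number of usable server-to-drive paths, which is what allows capacity and performance to be shared rather than stranded.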

Next, how do you get all the performance potential out of an all-flash NVMe architecture solution with NVMe over Fabrics?

Choosing an all-flash NVMe Architecture Solution

Low latency and high transfer rates are of little benefit if the target application cannot consume them. While these systems can generate IOPS in the millions, the reality is that very few workloads require that level of performance. However, there is an emerging class of workloads that can take advantage of all the performance and low latency of an end-to-end NVMe system.

If you’d like to learn more about how to improve your data center infrastructure utilization by up to 90% while minimizing its footprint, give us a call at (888) 828-7646, email us, or book time on our calendar to speak. We’ve helped organizations of all sizes deploy composable solutions for just about every IT budget.