Why Composable Infrastructure is the Future of the Data Center

Posted on December 12, 2020 by rawee.k

The IT infrastructure shift from the client-server era to private cloud applications and services hosted in static-architecture data centers has been driven by the digital transformation of line-of-business applications. With this need for always-on, anytime-anywhere access to data, hosted applications have become more varied in what they do – from supporting IoT rollouts and content delivery at the edge to artificial intelligence training and more. These complex compute, storage, graphics, and networking configuration requirements have led to an expansion of the data center.

Traditionally, this explosive application growth has led to the overprovisioning and underutilization of server components and connected devices, not to mention a burgeoning resource, management, and cost problem. As data center administrators struggle to keep up, total utilization goes down while the total cost of an organization’s data center goes up.

Composable Infrastructure: What’s Prompting the Need for Change?

Because hosted application loads are growing at a staggering pace, data center managers often overprovision resources to compensate for unknowns. Always provisioning to meet both current and future business demand means an organization’s data center grows continually, across generation after generation of hardware. As the number of private cloud hosted applications outstrips the data center’s ability to host them, total cost of ownership increases while total utilization and efficiency decrease.

Hosted Application Loads are Growing Faster than Technology

As applications get larger and more complex, more diverse hardware configurations are necessary to meet modern IT service-level requirements. Each hosted application needs dedicated processing, memory, storage capacity, and network bandwidth to run its workloads or processes, each with different dataset locations and different availability and redundancy service levels.
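
To make that diversity concrete, here is a rough sketch of what per-application resource profiles might look like. The workload names and figures are hypothetical illustrations, not sizing guidance.

```python
# Hypothetical per-application resource profiles (illustrative numbers only),
# showing how differently hosted workloads consume the same classes of hardware.
resource_profiles = {
    "iot-ingest":    dict(cpu_cores=8,  ram_gb=32,  gpus=0, storage_tb=20, net_gbps=25),
    "edge-cdn":      dict(cpu_cores=16, ram_gb=64,  gpus=0, storage_tb=50, net_gbps=100),
    "ai-training":   dict(cpu_cores=64, ram_gb=512, gpus=8, storage_tb=10, net_gbps=100),
    "oltp-database": dict(cpu_cores=32, ram_gb=256, gpus=0, storage_tb=5,  net_gbps=25),
}

# No single server configuration fits all four profiles well, which is what
# pushes static data centers toward overprovisioning.
for name, profile in resource_profiles.items():
    print(f"{name:>14}: {profile}")
```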

Hardware Innovation’s Hitting a Wall

Driven by data analytics, artificial intelligence, and machine-learning workloads, the computational tasks of data center hosted applications have become more complex, and their hardware requirements more varied. From storage services to database transactions, the growing demand has overloaded the IT assets coming into the data center.

Previously, thanks to explosive hardware innovation, the availability of faster computing dictated IT purchases and when IT assets would be EOL’d (reach end-of-life). However, those kinds of performance gains and power savings are no longer being realized.

Ending of Moore’s Law of Computing – Previously, the doubling of transistor density (and with it compute performance) roughly every two years from one Intel or AMD server processor generation to the next meant a data center could add servers with newer CPUs and see nearly twice the compute performance. Older servers could be EOL’d as applications were migrated to the newer ones.

Ending of Dennard Scaling – Dennard scaling held that power density stays constant as transistors shrink, so CPU clock speeds could keep rising without heat problems (e.g., from 500 MHz CPUs to 1 GHz CPUs from one generation to the next). Further down the engineering line, however, leakage current grew as transistors shrank and clock speeds rose, and the expected power scaling stopped.
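
For a sense of why clock speeds stalled, here is a minimal sketch of the classic Dennard-scaling arithmetic. The scaling factor and parameters are the textbook idealization, not vendor data.

```python
# A minimal sketch of ideal Dennard scaling (textbook idealization, not vendor data).

def dynamic_power(c, v, f):
    """Dynamic switching power of a transistor: P = C * V^2 * f."""
    return c * v ** 2 * f

k = 1.4  # one process shrink scales linear dimensions by 1/k

# Baseline transistor parameters in arbitrary illustrative units.
p0 = dynamic_power(1.0, 1.0, 1.0)

# Ideal Dennard scaling: capacitance and voltage shrink by 1/k, frequency rises by k.
p1 = dynamic_power(1.0 / k, 1.0 / k, 1.0 * k)

# Per-transistor power drops by ~1/k^2 while transistor density rises by k^2,
# so power per unit of chip area stays roughly constant -- until leakage current
# stops voltage from scaling down, and the whole bargain breaks.
print(p1 / p0)            # ~0.51, i.e. 1/k^2
print((p1 / p0) * k ** 2) # ~1.0, i.e. constant power density
```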

Limitations of Amdahl’s Law for Multi-Core Parallel Computing – Next, cores were added to CPUs to increase the computational power of a single processor, and those gains also hit a limit. According to Amdahl’s law, you can only parallelize an application so far before the portion of work that cannot be parallelized bounds the overall speedup, no matter how many cores are available.
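
A quick sketch of Amdahl’s law makes that ceiling concrete. The 90% parallel fraction below is an illustrative assumption, not a measurement.

```python
# A minimal sketch of Amdahl's law (illustrative figures, not benchmark data).

def amdahl_speedup(parallel_fraction, cores):
    """Maximum speedup when parallel_fraction of the work can spread across cores."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / cores)

# Even if 90% of an application parallelizes perfectly, the remaining 10% of
# serial work caps the speedup at 10x, no matter how many cores are added.
for cores in (2, 8, 32, 128, 1024):
    print(f"{cores:>4} cores -> {amdahl_speedup(0.90, cores):.2f}x")
# Roughly: 1.82x, 4.71x, 7.80x, 9.34x, 9.91x
```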

Only Domain Specific Architecture Innovations Remain

What hardware innovations remain to optimize data center hosted applications? Domain-specific architectures, specifically next-generation graphics, networking, and storage accelerators, enable the even more complex hardware configurations necessary to optimize performance.

  • GPUs (graphics processing units) – Dedicated processors that take on graphical and highly parallel compute workloads from the CPU
  • Networking accelerators – Offloading the networking stack onto interface controllers that can create a fast path for particular applications
  • FPGA-based storage accelerators – Sit at the storage level and improve the I/O path, making them ideal for optimizing database performance
  • Flash devices & network bandwidth outpacing DDR memory – I/O subsystems are consuming a larger share of DDR memory for I/O processing

While these hardware innovations improve the data center’s ability to host artificial intelligence and machine-learning applications, they also extend its growth. With compute, networking, and storage performance no longer improving dramatically from one generation to the next, overprovisioning is no longer a sustainable response.

What’s Needed in a New Approach?

As data center administrators try to stay ahead of the curve of ever-expanding applications, flexibility and cost are driving alternative infrastructure strategies.

  • Reduce Overprovisioning – Increase utilization by hosting more applications on the hardware that is already deployed
  • Flexible Management – Disaggregated solutions simplify management and make processes easier to automate, e.g., SAN/NAS storage management on a small number of systems rather than storage management on every server
  • Scalable Elasticity – Changing the unit of allocation from the server level to the application level, with the ability to spin up (and wind down) services as requirements change

As new generations of hardware assets come into the data center, how can organizations get more utilization and efficiency out of each system? The solution is to allow data center administrators to layer hosted applications by composing system designs from available hardware resources.


The Solution: Composable Infrastructure

In a composable infrastructure environment, physical resources are logically pooled so that data center administrators don’t have to manually configure hardware to support a specific software application. Treating hardware as a pool of available resources brings flexibility and responsiveness, and makes management processes significantly easier to automate.

A composable infrastructure framework treats your physical servers, network storage, GPUs, FPGAs, and network switches as services. Depending on the hardware configuration each hosted application workload requires (much as VMs are sized to their host in a client-server environment), data center managers can compose an infrastructure environment from available hardware resources for optimum performance and efficiency. When the project or process is completed, the hardware is returned to the resource pool.
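
As a minimal sketch of the compose-and-release idea, the example below assumes a hypothetical resource-pool API rather than any specific vendor’s composable framework; the class, method names, and capacities are invented for illustration.

```python
# A hypothetical resource pool: compose a logical system per workload, then
# return the hardware to the pool when the project is done.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    cpus: int
    gpus: int
    storage_tb: int
    allocations: dict = field(default_factory=dict)

    def compose(self, name, cpus, gpus=0, storage_tb=0):
        """Carve a logical system for one workload out of the shared pool."""
        if cpus > self.cpus or gpus > self.gpus or storage_tb > self.storage_tb:
            raise RuntimeError(f"pool cannot satisfy request for {name}")
        self.cpus -= cpus
        self.gpus -= gpus
        self.storage_tb -= storage_tb
        self.allocations[name] = (cpus, gpus, storage_tb)

    def release(self, name):
        """Return a workload's hardware to the pool when the project ends."""
        cpus, gpus, storage_tb = self.allocations.pop(name)
        self.cpus += cpus
        self.gpus += gpus
        self.storage_tb += storage_tb

pool = ResourcePool(cpus=256, gpus=16, storage_tb=500)
pool.compose("ai-training", cpus=64, gpus=8, storage_tb=50)  # GPU-heavy job
pool.compose("database", cpus=32, storage_tb=200)            # I/O-heavy job
pool.release("ai-training")        # hardware goes back to the pool for reuse
print(pool.cpus, pool.gpus, pool.storage_tb)  # 224 16 250
```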

We’ve Already Been Moving toward Disaggregation

Composable infrastructure may seem like a radical departure from how data centers have previously treated hardware resources and from the way administrators build their IT infrastructures. But the concept of disaggregating IT resources is not a new one in the data center; in fact, it already exists.

With software-defined networking, data center admins became able to create virtual networks dynamically and automatically, removing the previous limitations of hardware constraints. Software-defined storage likewise allocates (and reallocates) storage capacity, extending SAN/NAS and NVMe-over-Fabrics to scale-out environments. And eventually, new technologies, including fabric-attached persistent memory and storage accelerators, will be added to the disaggregation mix.

Choosing the Right Composable Infrastructure Solution

If you’d like to learn more about how composable infrastructure management solutions can enable your IT organization to push data center utilization up to 90% while minimizing its footprint, give us a call at (888) 828-7646, email us, or book a time on our calendar to speak. We’ve helped organizations of all sizes deploy composability solutions for just about every IT budget.