What is Computational Storage?

Moore’s Law, the observation that has fueled technological progress for the past 50 years, is gradually coming to an end. Processor performance no longer doubles at its historical pace, so the traditional approach of relying on ever more powerful central processing units (CPUs) to handle growing workloads is no longer viable.

In today’s computing systems, data is stored in separate storage devices, such as hard disk drives (HDDs) or solid-state drives (SSDs), and is processed by the CPU. This architecture presents several challenges. Data has to be transferred from storage to the CPU, consuming valuable bandwidth and CPU cycles. In addition, the CPU is responsible for auxiliary work beyond the application itself, such as compression, encryption, and database operations.

This is where computational storage comes into play. Computational storage offloads some of the processing tasks from the CPU to other components, such as network interface cards (NICs), storage arrays, and SSDs, which have built-in processing capabilities. By moving specific functions closer to the data, computational storage reduces the need for data movement and minimizes the strain on the CPU.


How Computational Storage Works

To better understand how computational storage works, let’s take a closer look at different components of a computing system:

NICs with Offload Capabilities

Network interface cards (NICs) can be equipped with additional processing power, memory, and application-specific integrated circuits (ASICs) to handle tasks like compression and encryption. These NICs, also known as Smart NICs or Data Processing Units (DPUs), enable offloading specific functions from the CPU.
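To make the idea concrete, here is a minimal Python sketch of the control flow when compression is offloaded to a SmartNIC/DPU. The `HypotheticalDpu` class and its `compress` method are made up for illustration; real DPUs are driven through vendor-specific SDKs (NVIDIA DOCA, for example) with very different interfaces, and `zlib` stands in for the NIC's hardware compression engine.

```python
# Sketch: offloading compression to a SmartNIC/DPU vs. doing it on the host CPU.
# HypotheticalDpu is a stand-in, not a real vendor API.

import zlib  # used both as the host-side path and as a stand-in for the NIC engine


class HypotheticalDpu:
    """Stand-in for a vendor SDK handle to a SmartNIC/DPU."""

    def __init__(self, available: bool):
        self.available = available

    def compress(self, payload: bytes) -> bytes:
        # On real hardware this would submit a job to the NIC's compression
        # engine and return the result via DMA; here zlib simulates it.
        return zlib.compress(payload)


def send_compressed(dpu: HypotheticalDpu, payload: bytes) -> bytes:
    """Compress on the DPU when possible, otherwise spend host CPU cycles."""
    if dpu.available:
        return dpu.compress(payload)   # offloaded: host CPU stays free
    return zlib.compress(payload)      # fallback: host CPU does the work


if __name__ == "__main__":
    data = b"example payload " * 1024
    nic = HypotheticalDpu(available=True)
    print(len(send_compressed(nic, data)), "bytes after compression")
```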


Storage Arrays with Processing Power

Traditional storage arrays consist of HDDs or SSDs plus controllers that manage and store data. Modern storage arrays, however, embed full server-class hardware, with Intel or AMD processors inside the controllers. By using that processing power, computational storage can offload tasks like data reduction, encryption, and even analytics, improving the performance and efficiency of the overall system.
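The sketch below illustrates the "run the data service inside the array" idea with an in-array deduplication pass. `ArrayController` is a stand-in class written for this article, not a real product API; vendors expose comparable data-reduction services through their own management interfaces.

```python
# Sketch: data reduction (dedup) running on the array controller's CPU,
# so duplicate blocks never need to be shipped to, or processed by, the host.

from dataclasses import dataclass, field


@dataclass
class ArrayController:
    """Simulates a controller with its own processor running data services."""
    blocks: dict = field(default_factory=dict)

    def write(self, lba: int, data: bytes) -> None:
        self.blocks[lba] = data

    def deduplicate(self) -> int:
        """In-array data reduction: returns the number of duplicate blocks found."""
        unique = {}
        for lba, data in self.blocks.items():
            unique.setdefault(data, []).append(lba)
        return len(self.blocks) - len(unique)


controller = ArrayController()
controller.write(0, b"A" * 4096)
controller.write(1, b"A" * 4096)   # duplicate of block 0
controller.write(2, b"B" * 4096)
print("blocks reclaimed by in-array dedup:", controller.deduplicate())
```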

SSDs with Processing Capabilities

Solid-state drives (SSDs) have evolved beyond simple data storage devices. They now come with additional compute power, such as ARM cores and ASICs or field-programmable gate arrays (FPGAs). These components enable SSDs to perform tasks like data reduction and filtering, allowing for faster data processing and reducing the burden on the CPU.
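A simple way to picture this is predicate pushdown: instead of reading every record to the host and filtering it there, the host ships the filter to the drive and receives only the matches. The `ComputationalSSD` class below is hypothetical, written only to contrast the two paths; real computational SSDs load and invoke programs through vendor SDKs and the emerging NVMe computational-programs work, not a Python call.

```python
# Sketch: filtering on a computational SSD vs. filtering on the host CPU.
# ComputationalSSD is a stand-in, not a real drive API.

class ComputationalSSD:
    """Stand-in for a drive with on-board ARM cores or an FPGA."""

    def __init__(self, records):
        self.records = records  # data living on the drive

    def filter(self, predicate):
        # Runs "on the drive": only matching records cross the bus.
        return [r for r in self.records if predicate(r)]


drive = ComputationalSSD(records=[{"id": i, "temp": 20 + i % 15} for i in range(1_000)])

# Conventional path: read everything, filter on the host CPU.
host_hits = [r for r in drive.records if r["temp"] > 30]

# Computational-storage path: ship the predicate, receive only the hits.
device_hits = drive.filter(lambda r: r["temp"] > 30)

assert host_hits == device_hits
print(f"{len(device_hits)} records returned instead of {len(drive.records)}")
```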

Computational storage takes advantage of the processing capabilities within these components, distributing the processing load and reducing the reliance on the CPU.

The Benefits of Computational Storage

Computational storage offers several benefits that enhance the performance and efficiency of computing systems:

  • Reduced Data Movement: By processing data closer to where it is stored, computational storage minimizes the need for data movement, reducing bandwidth consumption and improving overall system performance (a rough back-of-envelope illustration follows this list).

  • Optimized CPU Usage: Offloading specific processing tasks from the CPU to other components allows the CPU to focus on critical operations, improving its efficiency and enabling it to handle more demanding workloads.

  • Improved System Scalability: Computational storage allows for more efficient scaling of processing capabilities. Instead of relying solely on CPU upgrades, computational storage components can be added or upgraded to meet specific workload requirements.

  • Faster Data Processing: By performing tasks like data reduction, encryption, and filtering closer to the data, computational storage enables faster processing and more efficient data analysis.
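
The back-of-envelope sketch below puts rough numbers on the reduced-data-movement point. All figures are invented for illustration, not measurements: a 10 TiB scan, a query that ultimately needs 0.1% of the data, and an ~8 GiB/s effective bus.

```python
# Back-of-envelope: bus time spent moving data, host-side vs. device-side filtering.
# All numbers are illustrative assumptions, not benchmark results.

dataset_bytes = 10 * 2**40     # 10 TiB scanned for a query (assumed)
selectivity = 0.001            # the query needs 0.1% of it (assumed)
bus_bandwidth = 8 * 2**30      # ~8 GiB/s effective PCIe/NVMe path (assumed)

# Conventional path: every byte crosses the bus to the CPU.
conventional_transfer = dataset_bytes
# Computational-storage path: the device filters, only results cross the bus.
offloaded_transfer = dataset_bytes * selectivity

print(f"conventional: {conventional_transfer / bus_bandwidth:.0f} s of bus time")
print(f"offloaded:    {offloaded_transfer / bus_bandwidth:.1f} s of bus time")
```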


FAQs

Q: Where can computational storage be implemented?

A: Computational storage can be implemented in multiple components of a computing system, including network interface cards (NICs), storage arrays, and solid-state drives (SSDs). Each component offers unique benefits for specific offloading purposes.

Q: What are the standards for computational storage?

A: Organizations such as OCP (Open Compute Project), SNIA (Storage Networking Industry Association), and the NVMe (Non-Volatile Memory Express) consortium are working on standards for computational storage. These efforts aim to ensure compatibility and interoperability across different applications and devices.

Conclusion

While Moore’s Law may be coming to an end, the rise of computational storage presents an exciting future for optimizing computing systems. By leveraging the processing capabilities within components like NICs, storage arrays, and SSDs, computational storage reduces data movement, improves CPU efficiency, and enhances system performance. As industry standards continue to evolve, computational storage will play a crucial role in meeting the demands of modern computing applications.

Thank you for joining us on this exploration of computational storage. If you enjoyed this article and want to learn more, please visit Techal.org for further insights and technology updates.
