Overcome the AI Bottleneck with the Right Storage Architecture

Delivering AI insights at the right speed depends on the performance of the data center. In many cases, enterprise infrastructure is not set up in a way that is optimal for AI. Learn how putting the proper storage infrastructure in place can help meet the demands of AI-based models.

  • August 22, 2023 | Author: Melanie McMullen

Organizations everywhere are accelerating their use of AI. Gartner research reports that, globally, 40 percent of organizations already have thousands of AI models in production, and those models have passed an approval process and met quality, accuracy, or value targets.

However, Gartner also notes that organizations encounter challenges when moving AI models from pilot to production, estimating that 46 percent of AI projects never see the light of day. One reason is that many organizations have gone straight to experimentation and deployment of models at scale without first putting the underlying foundational elements in place, explains George Dragatsis, CTO and Director of Technical Sales for Hitachi Vantara ANZ.

He notes that data preparation problems are common, adding in a recent article that for AI adopters in Australia, “It’s become a rite of passage to step back and clean up their source data before it gets ingested into a model.” He adds that an area of performance optimization that should receive more attention is the suitability of the data infrastructure to support an organization’s AI ambitions.

Data Access for AI Models

IT teams are accustomed to configuring infrastructure to drive consistent, stable performance of applications. But AI is a different world, and organizations need to adjust how data is stored and accessed by AI models.

To support AI platforms and use cases, IT first needs to ensure that its storage technology can meet the demands of AI-based models throughout the entire data lifecycle, including the model-training phase. Dragatsis notes that AI platforms can suffer when they lack access to the data they need at any given time.

“Ideally, AI modelling work is underpinned by a scale-out architecture where storage and traffic capacity can be easily increased, and where multiple compute nodes can connect as a single scale-out cluster,” he explains. This enables workloads to run in parallel, so compute nodes can be expanded as needed.
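The parallelism Dragatsis describes can be pictured with a small sketch: multiple workers stand in for compute nodes in a scale-out cluster, each pulling its own dataset shard from shared storage at the same time. The shard layout and the `read_shard` helper are illustrative assumptions, not part of any specific product.

```python
# Illustrative sketch only: workers stand in for compute nodes reading
# dataset shards in parallel from scale-out storage. read_shard() and
# the shard numbering are hypothetical.
from concurrent.futures import ThreadPoolExecutor

def read_shard(shard_id: int) -> list[int]:
    """Stand-in for a node-local read of one dataset shard."""
    return list(range(shard_id * 4, shard_id * 4 + 4))

def load_dataset(num_shards: int, workers: int = 4) -> list[int]:
    # Each worker plays the role of one compute node; because the
    # storage scales out, the reads can proceed concurrently and the
    # results are merged for training.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        shards = pool.map(read_shard, range(num_shards))
    return [item for shard in shards for item in shard]

print(load_dataset(3))  # prints [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
```

Adding a shard (or a worker) requires no restructuring of the code, which is the property a scale-out architecture aims for: capacity and throughput grow by adding nodes rather than by resizing a single system.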

Optimizing Storage for AI

The choice of a data storage model for AI modeling and development depends on various factors, such as the nature of the data, the size of the dataset, the type of AI models in development, and the specific use case.

According to Hitachi Vantara, two storage architecture approaches can help overcome storage-related AI bottlenecks:

  • Adopting scale-out, software-defined block storage with a native data plane, which enables data to move between scale-up and scale-out storage.
  • Deploying a scale-out file system that supports object-based storage.
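To make the second approach concrete, the sketch below shows the kind of interface object-based storage exposes: data is addressed by key rather than by file path, and "directories" are just key prefixes. This is a generic, in-memory illustration; the `ObjectStore` class and its method names are assumptions, not Hitachi Vantara's API.

```python
# Minimal in-memory sketch of an object-storage interface (put/get/list
# by key prefix). Purely illustrative; the class and method names are
# assumptions, not any vendor's actual API.
class ObjectStore:
    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]

    def list(self, prefix: str) -> list[str]:
        # The namespace is flat: "directories" are only key prefixes,
        # which is how training data is typically organized as objects.
        return sorted(k for k in self._objects if k.startswith(prefix))

store = ObjectStore()
store.put("datasets/train/shard-0.bin", b"...")
store.put("datasets/train/shard-1.bin", b"...")
store.put("datasets/val/shard-0.bin", b"...")
print(store.list("datasets/train/"))
```

Because objects are addressed by key, an AI pipeline can enumerate exactly the training shards it needs with a single prefix listing, without traversing a directory hierarchy.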

Dragatsis adds that these approaches promise the best financial return for AI adopters, as they reduce the labor needed to maintain the systems. They also remove the brittleness of traditional data movement, allowing AI models to be fed data seamlessly, which ultimately yields the best results.

Learn more about the Hitachi Vantara storage platforms portfolio.


Image Credit: Hitachi Vantara
