Dell releases open source suite Omnia to handle AI, analytics workloads



Dell today announced the release of Omnia, an open source software package aimed at simplifying the deployment and management of AI and other compute-intensive workloads. Developed at Dell’s High Performance Computing (HPC) and AI Innovation Lab in collaboration with Intel and Arizona State University (ASU), Omnia automates the provisioning and management of HPC, AI, and data analytics workloads to create a single pool of hardware resources.

The release of Omnia comes as enterprises turn to AI during the health crisis to drive innovation. According to a Statista survey, 41.2% of enterprises say they are competing on data and analytics, though only 24% say they have built data-driven organizations. Meanwhile, 451 Research reports that 95% of those surveyed for its recent study consider AI technologies to be important to their digital transformation efforts.

Dell describes Omnia as a set of Ansible playbooks that speed the deployment of converged workloads with containers and Slurm, along with library frameworks, services, and applications. Ansible, which is maintained by Red Hat, handles configuration management and application deployment, while Slurm is a Linux job scheduler used by many of the world’s supercomputers and computer clusters.
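For readers unfamiliar with the workflow, an Ansible-driven deployment generally comes down to pointing a playbook at an inventory of hosts. The Python sketch below illustrates that general pattern only; it is not Omnia’s actual tooling, and the playbook and inventory file names are assumptions.

```python
# Minimal sketch of driving an Ansible playbook from Python.
# "omnia.yml" and "inventory.ini" are placeholder names, not
# necessarily the files shipped by the Omnia project.
import subprocess


def run_playbook(playbook: str, inventory: str) -> None:
    """Invoke ansible-playbook against an inventory of cluster nodes."""
    subprocess.run(
        ["ansible-playbook", playbook, "-i", inventory],
        check=True,  # raise if provisioning fails
    )


if __name__ == "__main__":
    run_playbook("omnia.yml", "inventory.ini")
```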


Omnia automatically imprints software configurations onto servers, specifically networked Linux servers, based on the use case at hand. For example, these might be HPC simulations, neural networks for AI, or in-memory graphics processing for data analytics. Dell claims Omnia can cut deployment time from weeks to minutes.
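Conceptually, that imprinting amounts to mapping a use case onto a software stack before the deployment runs. The sketch below is a loose illustration of that idea; the use-case names and package lists are assumptions for illustration, not Omnia’s internal logic or configuration data.

```python
# Loose illustration of mapping a use case to a software stack.
# The use-case names and package lists are assumptions, not
# Omnia's actual configuration data.
USE_CASE_STACKS = {
    "hpc_simulation": ["slurm", "openmpi", "lustre-client"],
    "ai_training": ["kubernetes", "kubeflow", "nvidia-container-toolkit"],
    "data_analytics": ["kubernetes", "spark", "jupyterhub"],
}


def stack_for(use_case: str) -> list[str]:
    """Return the software components to install for a given use case."""
    try:
        return USE_CASE_STACKS[use_case]
    except KeyError:
        raise ValueError(f"Unknown use case: {use_case}") from None


if __name__ == "__main__":
    print(stack_for("ai_training"))
```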

“As AI with HPC and data analytics converge, storage and networking configurations have remained in silos, making it challenging for IT teams to provide required resources for shifting demands,” Peter Manca, senior VP at Dell Technologies, said in a press release. “With Dell’s Omnia open source software, teams can dramatically simplify the management of advanced computing workloads, helping them speed research and innovation.”

Omnia can build clusters that use Slurm or Kubernetes for workload management, and it aims to leverage existing projects rather than reinvent the wheel. The software automates the cluster deployment process, beginning with provisioning the operating system onto servers, and can then install Kubernetes, Slurm, or both, along with additional drivers, services, libraries, and applications.
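To give a sense of what “Slurm or Kubernetes for workload management” looks like on a provisioned cluster, the hedged Python sketch below submits a batch job through Slurm’s sbatch and lists Kubernetes nodes with kubectl. It assumes both command-line tools are installed and on PATH, and the job script name is a placeholder.

```python
# Hedged sketch: exercising a cluster after provisioning.
# Assumes the standard `sbatch` (Slurm) and `kubectl` (Kubernetes)
# CLIs are available; "train_model.sh" is a placeholder job script.
import subprocess


def submit_slurm_job(script: str) -> str:
    """Submit a batch script to Slurm and return sbatch's output."""
    result = subprocess.run(
        ["sbatch", script], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()  # e.g. "Submitted batch job 12345"


def list_kubernetes_nodes() -> str:
    """List the nodes Kubernetes currently manages."""
    result = subprocess.run(
        ["kubectl", "get", "nodes"], capture_output=True, text=True, check=True
    )
    return result.stdout


if __name__ == "__main__":
    print(submit_slurm_job("train_model.sh"))
    print(list_kubernetes_nodes())
```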

“Engineers from ASU and Dell Technologies worked together on Omnia’s creation,” Douglas Jennewein, ASU senior director of research computing, said in a statement. “It’s been a rewarding effort working on code that will simplify the deployment and management of these complex mixed workloads, at ASU and for the entire advanced computing industry.”

In a related announcement today, Dell said it is expanding its HPC on demand offering to support VMware environments, including VMware Cloud Foundation, VMware Cloud Director, and VMware vRealize Operations. Beyond this, the company now offers Nvidia A30 and A10 Tensor Core GPUs as options for its Dell EMC PowerEdge R750, R750xa, and R7525 servers.


