The generative AI (GenAI) offensive continued yesterday with VMware Inc. and NVIDIA announcing an expansion of a partnership the two said is designed to “ready the hundreds of thousands of enterprises that run on VMware’s cloud infrastructure for the era of generative AI.”

The focal point of the launch, which took place at the VMware Explore 2023 end user and partner conference being held in Las Vegas this week, is a platform the two companies said will feature GenAI software and “accelerated computing” from NVIDIA, built on VMware Cloud Foundation and optimized for AI.

They said in a release that “to achieve business benefits faster, enterprises are seeking to streamline development, testing and deployment of generative AI applications. McKinsey estimates that generative AI could add up to US$4.4 trillion annually to the global economy.”

In projecting that figure, which was released in June, McKinsey stated that “the era of generative AI is just beginning, and fully realizing the enormous benefits of the technology will take time. But business leaders should begin implementing generative AI use cases as soon as possible rather than waiting on the sidelines as the performance gap between laggards and early adopters will widen quickly.

“The competitive advantage will go to the organizations that are first to use generative AI to accelerate their business priorities, innovations, and company growth. Put simply, business leaders need to immerse themselves now in generative AI (anyone who can ask questions can use the technology) and be prepared to learn continuously.”

The new platform, to be called VMware Private AI Foundation with NVIDIA, will, the release stated, “enable enterprises to customize models and run generative AI applications, including intelligent chatbots, assistants, search, and summarization.”

It will be built on VMware Cloud Foundation and NVIDIA AI Enterprise software. Krish Prasad, senior vice president and general manager of VMware’s cloud infrastructure business group, wrote in a blog released soon after the formal announcement that, while “generative AI is a transformative technology, enterprises face daunting challenges in its deployment.”

These include, he said:

Privacy: Enterprise data and IP are private and critically valuable when training large language models to serve the organization’s specific needs. This data needs to be protected to prevent leakage outside the organizational boundary.
Choice: Enterprises need to be able to choose the large language model (LLM) that best fits their generative AI journey and organizational needs. Access to a broad ecosystem and a variety of choices is essential.
Cost: Generative AI models are complex and costly to architect since they are rapidly evolving with new vendors, SaaS components, and bleeding-edge AI software being continuously launched and deployed.
Performance: Demand on the infrastructure surges during model testing and when data queries are executed. Large language models, by their very nature, usually require managing enormous data sets. As a result, these models can place significant demands on infrastructure, leading to performance issues.
Compliance: Organizations in different industries have different compliance needs that enterprise solutions, including generative AI, must meet. Access control and audit readiness are vital aspects to consider.

Designed to combat all five challenges, the platform, the two companies said, will feature “NVIDIA NeMo, an end-to-end, cloud-native framework included in NVIDIA AI Enterprise – the operating system of the NVIDIA AI platform – that allows enterprises to build, customize and deploy generative AI models virtually anywhere. NeMo combines customization frameworks, guardrail toolkits, data curation tools and pretrained models to offer enterprises an easy, cost-effective and fast way to adopt generative AI.

“With (it), VMware Private AI Foundation with NVIDIA will enable enterprises to pull in their own data to build and run custom generative AI models on VMware’s hybrid cloud infrastructure.”
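
The release does not include technical detail beyond that description, but for readers unfamiliar with the workflow it refers to, the sketch below illustrates the general pattern of loading a pretrained open-source LLM and running inference locally, so that prompts containing private data never leave the organization’s own infrastructure. It uses the Hugging Face Transformers library and the small, publicly available gpt2 checkpoint purely as illustrative stand-ins; this is not the NeMo or VMware Private AI Foundation tooling, which was not detailed in the announcement.

```python
# Illustrative sketch only: run a pretrained open-source LLM on local hardware so that
# prompts containing private data stay inside the organization. The "gpt2" checkpoint
# and the Hugging Face Transformers library are stand-ins, not components of the
# NeMo / VMware Private AI Foundation stack described in the announcement.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; an enterprise deployment would use a far larger model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# A prompt that might reference internal data; it is processed entirely locally.
prompt = "Summarize the following internal incident report: ..."
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a continuation; the sampling parameters are arbitrary illustrative values.
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The joint platform’s pitch, per the release, is to run this class of workload at scale on VMware Cloud Foundation with NVIDIA acceleration and NeMo’s customization tooling, keeping the models adjacent to where the enterprise’s data already lives.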

In a keynote speech Tuesday where he was joined by NVIDIA founder and chief executive officer (CEO) Jensen Huang, VMware CEO Raghu Raghuram likened what is currently taking place on the GenAI front to the arrival of the personal computer 40 years ago, or the arrival of the internet, both of which sparked a “whole new wave of application innovation.”

In the release, he stated that “customer data is everywhere – in their data centres, at the edge, and in their clouds. Together with NVIDIA, we will empower enterprises to run their generative AI workloads adjacent to their data with confidence, while addressing their corporate data privacy, security and control concerns.”

At a press conference held Monday to announce the launch, Paul Turner, vice president of product management, vSphere at VMware, said that this is “a very new space, it’s difficult for people to actually understand what’s the most important applications and services and toolkits that they need, and how do they get those prescribed toolkits as easily as possible. And that’s one of the things that we see as two companies, we want to make that very simple for our customers.”

They will also not be going it alone. It was announced Tuesday that the platform will be supported by Dell Technologies, Hewlett Packard Enterprise and Lenovo, “which will be among the first to offer systems that supercharge enterprise LLM customization and inference workloads with NVIDIA L40S GPUs, NVIDIA BlueField-3 DPUs and NVIDIA ConnectX-7 SmartNICs.”

Asked about pricing and availability during a Q&A session with media and analysts, Turner said pricing has yet to be determined and that general availability of the platform is expected early next year.
