Nvidia aims to simplify machine learning development this week with the latest release of its AI Enterprise suite, which includes a low-code toolkit for machine learning workloads.
The update also extends support for Red Hat OpenShift, Domino Data Lab’s ML operations platform, and Azure’s NVads A10 v5-series virtual machines.
Nvidia pitches AI Enterprise, introduced last summer, as a one-stop shop for developing and deploying enterprise workloads on its GPUs, whether on-premises or in the cloud.
The suite is a collection of tools and frameworks developed or certified by Nvidia to make building AI/ML applications more accessible to businesses of all sizes. Over the past year, the chipmaker has rolled out support for a variety of popular computing frameworks and platforms, like VMware’s vSphere.
The latest release, version 2.1, introduces low-code support in the form of Nvidia's TAO Toolkit.
Low code is the idea of abstracting away the complexity of hand-coding an application (in this case, AI speech and vision workloads) by requiring little or no code in the process. Nvidia's TAO Toolkit, for example, includes REST API support, weight import, TensorBoard integration, and several pre-trained models, all designed to simplify the process of assembling an application.
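To give a flavor of what driving such a toolkit over REST might look like, the sketch below assembles a request payload for a hypothetical fine-tuning job. The endpoint, field names, and model identifier are all invented for illustration; they are not Nvidia's actual TAO API.

```python
import json

# Hypothetical payload for a low-code fine-tuning request.
# None of these field names come from Nvidia's real REST API; they
# only illustrate the shape of a "little or no code" workflow:
# pick a pre-trained model, point it at data, and submit a job.
payload = {
    "pretrained_model": "speech_to_text_en",     # invented model name
    "dataset_uri": "s3://example-bucket/train",  # invented location
    "epochs": 10,
    "export_format": "tensorrt",
}

# A real client would POST this to the service, e.g. with requests:
#   requests.post("https://tao.example.com/v1/jobs", json=payload)
body = json.dumps(payload)
print(body)
```

The point of the low-code pitch is that the model architecture and training loop live behind the API; the caller only supplies data and a handful of knobs.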
Along with low-code features, the release also includes the latest version of Nvidia RAPIDS (22.04), a suite of open-source software libraries and APIs for data science applications running on GPUs.
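RAPIDS' cuDF library deliberately mirrors the pandas DataFrame API, so CPU-bound code like the pandas sketch below moves to the GPU largely by swapping the import for cuDF (assuming a supported Nvidia GPU and a RAPIDS install). The column names and data here are made up for illustration.

```python
import pandas as pd  # with RAPIDS installed, cuDF exposes roughly the
                     # same DataFrame API, running on the GPU instead

# Toy transaction table; in cuDF the same operations are GPU-accelerated.
df = pd.DataFrame({
    "merchant": ["a", "b", "a", "c", "b", "a"],
    "amount": [10.0, 25.5, 3.2, 99.0, 12.0, 7.5],
})

# Group-by aggregation, written identically in pandas and cuDF.
totals = df.groupby("merchant")["amount"].sum().sort_index()
print(totals)
```

Keeping the API pandas-shaped is the whole appeal: existing data science code ports with minimal rewriting.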
Version 2.1 also sees the chipmaker certify these tools and workloads for use with a variety of software and cloud platforms.
For those migrating to containerized and cloud-native frameworks, the update adds official support for running Nvidia workloads on Red Hat’s popular OpenShift Kubernetes platform in the public cloud.
Red Hat's container platform is the latest application environment to be certified, following VMware's vSphere integration last year. Domino Data Lab's MLOps service also received Nvidia's blessing this week. The company's platform provides tools to orchestrate GPU-accelerated servers for virtualized AI/ML workloads.
And, as should come as no surprise, the green team has certified Microsoft Azure's latest generation of Nvidia-based GPU instances, introduced in March. The instances are powered by the chipmaker's A10 accelerator, which can be split into up to six fractional GPUs using time slicing.
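For a sense of how fractional GPUs via time slicing are expressed elsewhere in the stack, the config sketch below follows the documented time-slicing schema of Nvidia's Kubernetes device plugin; it is not Azure's own vGPU partitioning mechanism, and the `replicas: 6` value simply echoes the six-way split mentioned above.

```yaml
# Sketch of an Nvidia k8s-device-plugin config enabling time slicing:
# each physical GPU is advertised as six schedulable nvidia.com/gpu
# resources that share the device by taking turns on it.
version: v1
sharing:
  timeSlicing:
    resources:
      - name: nvidia.com/gpu
        replicas: 6
```

Time slicing trades isolation for density: the six tenants share the GPU's memory and take turns executing, rather than getting hardware-partitioned slices.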
In addition to the Nvidia AI Enterprise updates, the company has also introduced three new labs to its LaunchPad service, which provides enterprises with short-term access to its AI/ML software and hardware for proof-of-concept and testing purposes.
The latest labs include multi-node training for image classification on vSphere with Tanzu, VMware's Kubernetes platform; fraud detection using an XGBoost model and Triton, Nvidia's inference server; and object-detection modeling using the TAO Toolkit and DeepStream, the chipmaker's streaming analytics service. ®