INFINITIX AI-Stack Leads with One-Click NVIDIA MIG Partitioning/Restoration
NVIDIA's latest GPU server, the NVIDIA DGX A100, and GPU workstation, the NVIDIA DGX Station A100, together with the NVIDIA A100 and A30 GPUs, support a feature called NVIDIA Multi-Instance GPU (MIG). In the NVIDIA Ampere architecture, MIG mode allows a single A100 GPU to be partitioned into as many as seven isolated instances, so one A100 can run up to seven AI or HPC workloads of different sizes at the same time. This is particularly useful for AI inference, which typically does not need all the performance a modern GPU can deliver.
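Each MIG instance appears to software as an independent GPU, so a workload can be pinned to a single instance through CUDA_VISIBLE_DEVICES. The following Python sketch illustrates the idea under simple assumptions: an A100 node with MIG already enabled, a driver recent enough to accept MIG UUIDs, and a hypothetical infer.py workload script. It is an illustration only, not part of AI-Stack.

```python
# Illustrative sketch: run one inference job per MIG instance on an A100.
# Assumes MIG is already enabled and the driver accepts MIG UUIDs in
# CUDA_VISIBLE_DEVICES; "infer.py" is a hypothetical workload script.
import os
import subprocess

# `nvidia-smi -L` lists each MIG device with its UUID when MIG mode is on.
listing = subprocess.run(
    ["nvidia-smi", "-L"], capture_output=True, text=True, check=True
).stdout

# MIG lines look like: "  MIG 1g.5gb Device 0: (UUID: MIG-xxxxxxxx-...)"
mig_uuids = [
    line.split("UUID:")[1].strip(" )")
    for line in listing.splitlines()
    if "MIG" in line and "UUID:" in line
]

# Pin one job to each instance, so up to seven jobs share a single A100.
for uuid in mig_uuids:
    subprocess.Popen(
        ["python", "infer.py"],
        env={**os.environ, "CUDA_VISIBLE_DEVICES": uuid},
    )
```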
However, MIG mode is supported only on Linux systems with CUDA 11 / R450, and partitioning and restoration must be configured through command-line operations that involve a series of steps and configuration logic. For most users, the MIG partitioning process NVIDIA provides is still time-consuming and complex. INFINITIX's AI computing management platform, AI-Stack, leads the industry with one-click partitioning and restoration of NVIDIA A100 MIG, giving administrators automated MIG configuration (1g.5gb) and restoration. In short, it provides partitioning and restoration functions for A100 nodes, so administrators can complete the otherwise complex A100 MIG partitioning or restoration process with a single click in a web-based user interface.
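For a sense of what that one click replaces, the sketch below chains the standard nvidia-smi MIG commands to carve an A100 into seven 1g.5gb instances and later restore it. It is not AI-Stack's own code, merely the manual workflow the platform automates; it assumes a Linux host with CUDA 11 / R450 drivers and root privileges.

```python
# Sketch of the manual MIG workflow that a one-click partition/restore feature
# automates. NOT AI-Stack's internal code; it simply strings together the
# standard nvidia-smi MIG commands (Linux, CUDA 11 / R450, run as root).
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

def partition_1g5gb(gpu_index: int = 0):
    """Enable MIG on one A100 and carve it into seven 1g.5gb instances."""
    # Enabling MIG mode requires the GPU to be idle and, on some systems,
    # a GPU reset before the change takes effect.
    run(["nvidia-smi", "-i", str(gpu_index), "-mig", "1"])
    # Create seven GPU instances with the 1g.5gb profile and a compute
    # instance on each (-C).
    run(["nvidia-smi", "mig", "-i", str(gpu_index),
         "-cgi", ",".join(["1g.5gb"] * 7), "-C"])

def restore(gpu_index: int = 0):
    """Tear down all MIG instances and return the GPU to a whole A100."""
    run(["nvidia-smi", "mig", "-i", str(gpu_index), "-dci"])   # destroy compute instances
    run(["nvidia-smi", "mig", "-i", str(gpu_index), "-dgi"])   # destroy GPU instances
    run(["nvidia-smi", "-i", str(gpu_index), "-mig", "0"])     # disable MIG mode

if __name__ == "__main__":
    partition_1g5gb(0)
    # ... run workloads on the seven instances ...
    restore(0)
```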
AI-Stack is software that optimizes GPU usage and makes the NVIDIA DGX Station A100 easy to use. For anyone purchasing an NVIDIA DGX A100, DGX Station A100, A100, or A30, AI-Stack is indispensable. It can be installed and operational within just 3 hours, letting you maximize GPU utilization at any time. With MIG, AI model development and deployment can be accelerated. Combined with AI-Stack's Batch Job deep learning training mode, training tasks can be packaged as shell scripts and executed on the platform: the platform automatically creates the containers, executes the batch-job content, and deletes the containers when training completes, saving container-creation waiting time and eliminating the manual steps of launching tasks and cleaning up containers afterward.
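The batch-job pattern itself is straightforward, as the minimal sketch below shows using the Docker SDK for Python: launch a GPU container, run the packaged training script, and remove the container automatically when it exits. This is a generic illustration of the pattern, not AI-Stack's API; the container image and train.sh script are examples, and the host is assumed to have the NVIDIA Container Toolkit installed.

```python
# Minimal sketch of the batch-job pattern: create a GPU container, run a
# packaged training script, and remove the container when it finishes.
# Generic illustration only; not AI-Stack's API. Image name and train.sh
# are example placeholders.
import docker

client = docker.from_env()

logs = client.containers.run(
    image="nvcr.io/nvidia/pytorch:21.07-py3",      # example deep-learning image
    command="bash /workspace/train.sh",            # training task packaged as a shell script
    volumes={"/data/project": {"bind": "/workspace", "mode": "rw"}},
    device_requests=[                              # request GPU access from the NVIDIA runtime
        docker.types.DeviceRequest(count=-1, capabilities=[["gpu"]])
    ],
    remove=True,                                   # delete the container as soon as training ends
    detach=False,                                  # block until the job finishes, then return logs
)
print(logs.decode())
```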
INFINITIX AI-Stack is one of the few domestic software brands distributed by leading IT distributor Zero One Technology. Since its launch, the product has been adopted by many top universities in Taiwan, including National Chengchi University, National Cheng Kung University, National Chiao Tung University, National Taipei University of Technology, and National Yunlin University of Science and Technology. INFINITIX has also won over the public sector and state-owned enterprises: the Central Weather Bureau, Kinmen County Government, Pingtung County Government, and Taiwan Power Company have all become AI-Stack users, with the platform assisting them in AI model development, training, deployment, and testing. Furthermore, many semiconductor design companies working on AI-on-the-Chip have found that AI-Stack lets them easily deploy AI models into chips, accelerating product design and development. End users include Himax Technologies, Realtek Semiconductor, Silicon Integrated Systems, and Kneron. AI-Stack is currently the most popular AI PaaS-GPU management platform in the Taiwanese market and the best choice for customers looking to improve NVIDIA GPU utilization.
INFINITIX focuses on solving the challenges that virtualization, containerization, microservices, edge computing, hybrid cloud management, heterogeneous IT environments, and artificial intelligence bring to enterprises. It integrates heterogeneous cloud management with popular open-source deep learning frameworks and development tool environments, delivering AI-Stack as a one-stop AI machine learning cloud platform solution.