Unlocking Maximum Heterogeneous GPU Utilization the Cloud-Native Way: Leveraging the Power of HAMi
Xiao Zhang, Yu Yin
Chinese Session · 2025-07-25 14:00 GMT+8 (Room: JingMing Hall)

With AI's growing popularity, Kubernetes has become the de facto AI infrastructure. However, the increasing number of clusters with diverse AI devices (e.g., NVIDIA, Intel, Huawei Ascend, Hygon, Metax, Cambricon, Mthreads, Iluvatar) presents a major challenge. AI devices are expensive: how can resource utilization be improved? How can these devices be integrated cleanly with Kubernetes clusters? Managing heterogeneous AI devices consistently, supporting flexible scheduling policies, and providing observability all raise challenges. The HAMi project was born for this purpose. This session includes:
- How K8s manages heterogeneous AI devices (unified scheduling, observability)
- How to improve device utilization with GPU sharing (see the sketch after this list)
- How to ensure the QoS of high-priority tasks in GPU-sharing scenarios
- Flexible GPU scheduling strategies (NUMA affinity/anti-affinity, binpack/spread, etc.)
- Integration with other projects (such as Volcano, scheduler-plugins, etc.)
- Real-world case studies from production-level users.
- Remaining challenges and roadmap
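To make the GPU-sharing idea concrete, below is a minimal sketch (in Go, using the standard Kubernetes API types) of a Pod that requests a fractional GPU. The extended resource names `nvidia.com/gpu`, `nvidia.com/gpumem` (MiB), and `nvidia.com/gpucores` (percent) follow HAMi's NVIDIA vGPU conventions as documented upstream, but the exact names, units, and the container image used here are assumptions that should be checked against the HAMi release you deploy.

```go
// Sketch: build a GPU-sharing Pod spec and print it as YAML.
// Assumes HAMi's device plugin and scheduler are installed on the cluster
// and expose the nvidia.com/gpumem and nvidia.com/gpucores resources.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "gpu-share-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "cuda-workload",
				Image: "nvidia/cuda:12.4.0-base-ubuntu22.04", // illustrative image
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{
						// One vGPU slice of a physical card.
						"nvidia.com/gpu": resource.MustParse("1"),
						// Cap device memory visible to this container (MiB).
						"nvidia.com/gpumem": resource.MustParse("4096"),
						// Cap the compute share for this container (percent).
						"nvidia.com/gpucores": resource.MustParse("30"),
					},
				},
			}},
		},
	}

	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

Applying the printed manifest with `kubectl apply` would let several such Pods land on the same physical GPU, with HAMi enforcing the per-container memory and compute limits at runtime.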
Speakers:
Xiao Zhang: dynamia.ai founder, a cloud-native enthusiast and community maintainer, focusing on AI infrastructure.
Xiao Zhang is the founder of dynamia.ai (focusing on infrastructure, AI, multi-cluster management, cluster lifecycle management (LCM), and the Open Container Initiative (OCI)). He is also an active community contributor and a cloud-native enthusiast. He is a member of the Kubernetes Special Interest Groups (kubernetes-sigs) and serves as a maintainer of the Karmada, kubean, and cloudtty projects. He is also a co-author and maintainer of the CNCF HAMi project. His GitHub ID is wawa0210.
Yu Yin: Product Owner @dynamia.ai | Open Source Maintainer @HAMi | Driving GPU Virtualization & AI Infra Innovation on Kubernetes
Yu Yin is the Product Lead at dynamia.ai and a core maintainer of HAMi, an open-source project for GPU virtualization and heterogeneous computing on Kubernetes. With hands-on experience in building AI infrastructure, Yu focuses on enabling scalable GPU sharing, device pooling, and intelligent scheduling for multi-architecture environments. He has helped enterprise users across logistics, telecom, and finance adopt heterogeneous resource management in production. Yu is also an active advocate for open-source adoption in China and leads the HAMi community's internationalization efforts.