GrAI Matter Labs (GML), a pioneer of brain-inspired ultra-low latency edge AI computing, has become a founding member of the MLCommons group, an open engineering consortium. Alongside companies like NVIDIA, Intel, and Qualcomm, GML will contribute its expertise to define the machine learning (ML) benchmarks for the industry.
One of MLCommons’ objectives is to build a common set of benchmarks that enables the ML field to measure system performance for both training and inference, from edge devices to cloud services.
The non-profit organization is a consortium of researchers from leading universities in the United States, along with ML industry leaders such as Baidu, Intel, Google, and others.
GML’s first AI chip, GrAI One, delivers superior performance for edge AI workloads and is uniquely positioned to offer the lowest-latency ML solution at the lowest power. Our customers are test-driving GrAI One to accelerate ML workloads at the edge.
GrAI One is the only silicon chip today that can leverage time-based sparsity with a dataflow architecture for real-world edge applications. For closed-loop system applications, it is critical that organizations consider the system-level implications of AI compute times. We are therefore honored to contribute our knowledge and expertise to MLCommons’ benchmarking work, to further the impact of machine learning and broaden access to it across the industry.
Read more about the MLCommons launch in their press release.