Fastest Edge AI Processing per Watt

Sparsity-Enabled Dataflow Architecture

Learn More

GrAI Matter Labs

Brain-inspired, low-latency computing packaged for plug-and-play deployments

< 1 ms Inferences¹

Performs just the essential computations efficiently

¹ ResNet-50 at batch size 1

Scalability

With a digital neuromorphic implementation

Easy Deployment

Fully programmable with standard frameworks

Our Product


The world’s first sparsity-enabled AI processor optimized for ultra-low latency and low power processing at the edge.

GrAI One drastically reduces application latency; for example, it cuts the end-to-end latency of deep learning networks such as PilotNet to the order of milliseconds. The GrAI One chip is based on GML's innovative NeuronFlow™ technology, which combines the dynamic dataflow paradigm with sparse computing to produce massively parallel in-network processing.
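
To make the idea concrete, here is a minimal, hypothetical sketch in NumPy of event-driven sparse processing: a layer only recomputes the outputs affected by inputs that changed since the previous time step. The class name, threshold, and sizes are illustrative assumptions, not GML's NeuronFlow API.

```python
import numpy as np

# Hypothetical sketch of event-driven sparse dataflow processing.
# Not GML's actual implementation; names and the threshold are made up.

class SparseDataflowLayer:
    def __init__(self, weights, threshold=0.05):
        self.weights = weights                        # shape: (out_features, in_features)
        self.threshold = threshold                    # minimum change that counts as an event
        self.last_input = np.zeros(weights.shape[1])  # input state from the previous step
        self.output = np.zeros(weights.shape[0])      # running output state

    def step(self, new_input):
        # Only inputs that changed beyond the threshold generate events.
        delta = new_input - self.last_input
        active = np.abs(delta) > self.threshold
        # Update the outputs incrementally from the active inputs only,
        # instead of redoing the full dense matrix-vector product.
        self.output += self.weights[:, active] @ delta[active]
        self.last_input[active] = new_input[active]
        return self.output, int(active.sum())

rng = np.random.default_rng(0)
layer = SparseDataflowLayer(rng.standard_normal((64, 128)))
frame = rng.standard_normal(128)
layer.step(frame)                                     # first frame: almost every input is an event
_, events = layer.step(frame + 0.01 * rng.standard_normal(128))
print(f"events on the second frame: {events} / 128")  # most inputs barely changed, so few events
```

When successive inputs change little, as they typically do in sensor streams, most of the dense arithmetic is skipped; that is the intuition behind the latency and power claims.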

Learn more

Our Technology


GrAI Matter Labs uses a brain-inspired neural network architecture to overcome the limitations of traditional von Neumann machines and application processors. Our implementation is the only system that leverages sparsity end to end: a fully programmable approach that is easy to deploy and delivers ultra-low latency at power levels that were not feasible before.
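
As a back-of-the-envelope illustration of the end-to-end sparsity argument (with made-up layer sizes, and no relation to GrAI hardware), the snippet below counts how many multiply-accumulates a small ReLU network actually needs once zero activations are skipped at every layer:

```python
import numpy as np

# Illustrative only: estimate how much work activation sparsity removes
# in a tiny ReLU MLP when zero inputs are skipped at every layer.

rng = np.random.default_rng(1)
layer_sizes = [256, 512, 512, 10]                 # hypothetical network
weights = [rng.standard_normal((m, n)) * 0.05
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

x = np.maximum(rng.standard_normal(layer_sizes[0]), 0)   # sparse-ish input
dense_macs = 0
needed_macs = 0
for w in weights:
    nonzero = np.count_nonzero(x)
    dense_macs += w.shape[0] * w.shape[1]         # MACs a dense engine would perform
    needed_macs += w.shape[0] * nonzero           # MACs left after skipping zero inputs
    x = np.maximum(w @ x, 0)                      # ReLU keeps activations sparse

print(f"fraction of MACs actually needed: {needed_macs / dense_macs:.2f}")
```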

Learn more

Markets we serve

Drones

Industrial Automation

AR/VR

Surveillance

Robots

Micro-Edge Servers

Our Leadership
Ingolf Held, CEO
Menno Lindwer, VP Engineering
Jonathan Tapson, Chief Scientist
Mahesh Makhijani, VP Marketing & Biz Dev
Orlando Moreira, Chief Architect
Remi Poittevin, Head of R&D, AI Systems & Applications
Edwin Van Dalen, VP Engineering

Unleash the lowest latency per Watt for your edge AI devices

Contact Us