TinyML: Engineers Unite to Standardize AI for Ultra‑Low‑Power Systems

SUNNYVALE, Calif. – Almost 200 engineers and researchers gathered to launch a community dedicated to advancing TinyML, the emerging field that brings deep learning to ultra‑low‑power systems.
“There’s no shortage of innovative ideas,” said Ian Bratt, a machine‑learning fellow at Arm, opening the session.
Progress in the field had plateaued four years ago, Bratt explained, but the arrival of new floating‑point formats and compression techniques has reignited excitement. Even so, a significant gap remains between cutting‑edge research and commercially viable solutions.
The software ecosystem feels like a wild west, fragmented and dominated by corporate giants such as Amazon, Google, and Facebook, each pushing its own framework. The challenge for hardware engineers is to deliver products that are widely usable, he added.
An engineer from STMicroelectronics echoed this sentiment. “I just realized there are at least four AI compilers, and the next‑generation chips will fall outside the traditional embedded designer’s toolkit. Stabilizing software interfaces and investing in interoperability—through a standards committee—are essential,” he said.
Pete Warden, co‑chair of the TinyML group and technical lead for TensorFlow Lite, cautioned that software standards may still be premature. “Researchers constantly revise operations, architectures, weights, compression, formats, and quantization. The semantics are in flux, and we must keep pace,” he said.
Warden added that accelerators unable to execute general‑purpose computations will become obsolete within a few years, because they cannot adapt to new operations or activation functions. New operations are likely to emerge within just two years, he warned.
A Microsoft AI researcher concurred. “We are still far from where we should be, and we won’t get there in a year or two. This was the reason Microsoft invested in FPGAs to accelerate its Azure cloud services,” he said. “Building the right abstraction layers is crucial for hardware innovation. An open‑source hardware accelerator could accelerate progress,” he added.
Bratt suggested that a compliance standard might be the first step, ensuring researchers a consistent experience from edge to cloud.
Princeton professor Naveen Verma, whose work focuses on AI processors‑in‑memory, emphasized the need for robust functional specifications at every level. “If we can define clear specs at sufficient levels, it will open pathways to other layers, and this community is uniquely positioned to define them,” he said.