
TinyML Community Targets 1‑mW AI for Edge Devices

When the TinyML group convened its inaugural meeting, the first order of business was a simple question: What exactly is TinyML?

TinyML is a community of engineers dedicated to making machine learning (ML) run on ultra‑low‑power devices. The initial gathering focused on whether ML can be implemented on microcontrollers and whether specialized low‑power ML processors are necessary.

Evgeni Gousev of Qualcomm AI Research defined TinyML as ML techniques that consume 1 mW or less. He called 1 mW the “magic number” for always‑on applications in smartphones.

“While cloud ML dominates the conversation, ML on the device is becoming increasingly sophisticated,” Gousev noted. “But 90 % of the data originates in the real world. How do we fuse all the cameras, IMUs, and other sensors and run ML locally?”

“TinyML is going to be big, and there is an urgent need to build the entire ecosystem—applications, software, tools, algorithms, hardware, ASICs, devices, fabs, and more,” Gousev added.

Google engineer Nat Jefferies presents at the first TinyML meetup (Image: TinyML)

TensorFlow Lite

Google engineer Daniel Situnayake introduced TensorFlow Lite, a lightweight version of TensorFlow tailored for edge devices, including microcontrollers.

“TensorFlow Lite has served mobile phones for years, and we’re now excited to bring it to even smaller devices,” he said.

After training a model in TensorFlow, engineers convert it to TensorFlow Lite, which shrinks the model, applies quantisation, and otherwise optimises it to fit comfortably on the target hardware.
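The quantisation step can be pictured with a toy version of the affine int8 scheme that TensorFlow Lite applies to weights during post-training quantisation. This is an illustrative sketch only, not the converter's actual API (in practice the conversion is done through `tf.lite.TFLiteConverter`); the example weights are invented.

```python
# Minimal sketch of affine int8 quantisation: floats are mapped onto
# signed 8-bit integers via a scale and zero point, shrinking storage
# per weight from 4 bytes to 1.

def quantise(weights, num_bits=8):
    """Map float weights onto signed integers with a scale and zero point."""
    qmin, qmax = -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1
    lo, hi = min(weights), max(weights)
    lo, hi = min(lo, 0.0), max(hi, 0.0)   # range must include zero exactly
    scale = (hi - lo) / (qmax - qmin)
    zero_point = round(qmin - lo / scale)
    q = [max(qmin, min(qmax, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantise(q, scale, zero_point):
    """Recover approximate float values from the quantised integers."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.2, 0.0, 0.5, 2.3]
q, scale, zp = quantise(weights)
recovered = dequantise(q, scale, zp)   # each value within one scale step
```

The reconstruction error is bounded by the scale, which is why quantised models usually lose little accuracy while fitting in a fraction of the memory.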

One power‑saving strategy he described is chaining models together.

“Picture a cascade of classifiers: a tiny, low‑power model first detects whether sound is present, then a slightly larger model determines if it’s human speech, and finally a deeper network activates only when those conditions are met,” Situnayake explained. “By waking the energy‑intensive model only when necessary, we achieve substantial energy savings.”
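The cascade Situnayake describes can be sketched in a few lines: a tiny always-on stage gates progressively larger models, so the most expensive one runs only when the cheaper stages agree it is needed. The stage functions and thresholds below are invented stand-ins, not real models.

```python
# Sketch of a power-saving classifier cascade. Call counts show how
# rarely the expensive final stage actually runs.

calls = {"detect": 0, "speech": 0, "deep": 0}

def sound_detected(frame):
    """Stage 1: crude energy detector, cheap enough to run constantly."""
    calls["detect"] += 1
    return sum(s * s for s in frame) / len(frame) > 0.01

def looks_like_speech(frame):
    """Stage 2: stand-in for a small speech/non-speech classifier."""
    calls["speech"] += 1
    return max(frame) - min(frame) > 0.5

def deep_model(frame):
    """Stage 3: stand-in for the energy-hungry keyword model."""
    calls["deep"] += 1
    return "keyword"

def cascade(frame):
    if not sound_detected(frame):
        return None              # silence: only the tiny stage ran
    if not looks_like_speech(frame):
        return None              # noise: two stages ran
    return deep_model(frame)     # speech: all three stages ran

silence = [0.0] * 16
noise = [0.15, -0.15] * 8
speech = [0.4, -0.4] * 8
results = [cascade(f) for f in (silence, noise, speech)]
```

Across the three frames, the deep model runs exactly once; in a real always-on deployment, where silence dominates, the savings compound accordingly.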

Cascading machine learning models can help save power (Image: Google)

Nat Jefferies, a member of Google’s “TensorFlow Lite for microcontrollers” team, highlighted the growing demand for strict energy budgets in consumer devices that feature sophisticated sensors but still rely on long‑lasting batteries or energy harvesting.

“TinyML—deep learning on microcontrollers—is the key,” he said. “It lets us process sensor data locally with minimal CPU cycles and sensor reads, avoiding the energy cost of off‑chip transmission. TinyML can distil raw sensor data into a few bytes, then transmit only that compressed payload.”
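The payload reduction Jefferies describes can be made concrete: rather than transmitting raw samples off-chip, the device runs inference locally and sends only the result. The label set, packet layout, and stand-in classifier below are invented for illustration.

```python
# Sketch of distilling sensor data into a few bytes: one second of
# 16-bit audio at 16 kHz is 32,000 bytes, but the inference result
# packs into a 3-byte payload (1-byte label id + 2-byte confidence).

import struct

RAW_SAMPLES = 16000            # one second of 16 kHz audio
raw_bytes = RAW_SAMPLES * 2    # cost of shipping the raw 16-bit samples

LABELS = {"silence": 0, "speech": 1, "glass_break": 2}

def classify(samples):
    """Stand-in for an on-device model returning (label, confidence)."""
    return "speech", 0.93

def compress_event(samples):
    """Pack the inference result into a tiny fixed-size payload."""
    label, conf = classify(samples)
    return struct.pack("<BH", LABELS[label], int(conf * 65535))

payload = compress_event([0] * RAW_SAMPLES)   # 3 bytes instead of 32,000
```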

He cited a recent Google challenge that attracted impressive submissions for 250 kB models capable of person detection, confirming the viability of this approach.

“We can now shrink TensorFlow models to the point where they fit on microcontrollers, making this an opportune moment to jump into the space,” Jefferies concluded.

Google’s roadmap for TensorFlow Lite on microcontrollers includes open‑sourcing demo projects, collaborating with chip vendors to optimise kernels, reducing memory usage to run more complex models, and expanding platform support—SparkFun Edge is the first board, with Arduino and Mbed support on the horizon.

Specialist Devices

Martin Croome, VP Business Development at GreenWaves Technologies, made the case for specialized low‑power ML processors.

“We need a clearer focus on ultra‑low‑power ML from both algorithmic and hardware perspectives,” Croome said.

GreenWaves’ RISC‑V application processor, the GAP8, targets inference in edge devices with milliwatt-level active power consumption and ultra‑low standby currents. The chip is designed for battery‑powered and energy‑harvesting devices.

GreenWaves’ ultra‑low‑power machine learning accelerator features nine RISC‑V cores (Image: GreenWaves Technologies)

To keep consumption low, the GAP8 employs parallelism not for speed but to enable a lower clock frequency and voltage. The chip’s hardware accelerator can perform a 5×5 convolution on 16‑bit data in a single cycle. Because convolutional neural networks (CNNs) use fixed‑size inputs and weights, memory management can be determined at compile time, further reducing runtime overhead.
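The operation the accelerator performs can be sketched in software. Because the input, kernel, and output dimensions of a CNN layer are fixed, every buffer size below is a constant known before the loop runs, which is the property that lets memory management be settled at compile time. The dimensions and data here are illustrative, not GAP8 specifics.

```python
# Sketch of a fixed-size 5x5 "valid" convolution on 16-bit data. The
# innermost 25-multiply-accumulate window is what the GAP8's hardware
# accelerator computes in a single cycle.

IN_H = IN_W = 8                  # input feature map dimensions (fixed)
K = 5                            # 5x5 kernel (fixed)
OUT_H = OUT_W = IN_H - K + 1     # output size is known up front: 4x4

def conv5x5(image, kernel):
    """Slide the 5x5 kernel over the image, accumulating each window."""
    out = [[0] * OUT_W for _ in range(OUT_H)]
    for oy in range(OUT_H):
        for ox in range(OUT_W):
            acc = 0
            for ky in range(K):
                for kx in range(K):
                    acc += image[oy + ky][ox + kx] * kernel[ky][kx]
            out[oy][ox] = acc
    return out

image = [[1] * IN_W for _ in range(IN_H)]
kernel = [[1] * K for _ in range(K)]
result = conv5x5(image, kernel)  # every output is 25 (sum of 25 ones)
```

Since `OUT_H`, `OUT_W`, and the window size are all compile-time constants, a toolchain can allocate every buffer statically and skip runtime memory management entirely.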

Balancing specialization with flexibility is a challenge. “AI evolves rapidly. What’s cutting‑edge today may be obsolete tomorrow,” Croome warned. “We must avoid over‑specialising, so we remain agile and can adapt to future innovations.”

The GAP8 has been in sampling for a year; production will begin this month, with volume shipments slated for the end of Q3.

TinyML meetups are held on the last Thursday of every month in the Bay Area and welcome participants from industry and academia alike.

