Hands‑On Machine‑Learning with the reTerminal: Edge Impulse & ARM NN Demo Guide
4 GB RAM, 32 GB eMMC Raspberry Pi Compute Module 4 development board with a 5‑inch touchscreen and a wealth of interfaces.
Story
Previous posts explored TinyML on the Wio Terminal, a Cortex‑M4F board with a rugged LCD. Seeed Studio has now expanded the concept with the reTerminal – a Raspberry Pi 4 Compute Module‑based board featuring an integrated display.
I unboxed the reTerminal, recorded a brief demo video, and am writing this companion article to walk you through environment setup and machine‑learning demonstrations.
Specifications
The reTerminal is powered by a Raspberry Pi Compute Module 4 featuring a quad‑core Cortex‑A72 CPU at 1.5 GHz, 4 GB of RAM, and 32 GB eMMC storage. These specs deliver rapid boot times and a responsive user experience. Its 5‑inch IPS capacitive touch display offers 1280 × 720 resolution, while built‑in peripherals include an accelerometer, real‑time clock, buzzer, four tactile buttons, four status LEDs, and a light sensor. Connectivity options comprise dual‑band 2.4 GHz/5 GHz Wi‑Fi, Bluetooth 5.0 BLE, and a Gigabit Ethernet port.
Power requirements mirror those of a Raspberry Pi 4: a standard 5 V/2 A adapter is sufficient, but the manufacturer recommends a 5 V/4 A supply for maximum stability, especially when additional peripherals are attached. In our demos, the 5 V/2 A adapter worked reliably, yet we advise erring on the side of caution with a 5 V/4 A unit.
The board ships with a 32‑bit Raspbian OS pre‑loaded with all necessary drivers. For machine‑learning workloads, a 64‑bit Raspbian image – also supplied by Seeed Studio – offers a noticeable performance boost.
The touchscreen’s on‑screen keyboard appears automatically when text entry is required and can be disabled in settings. While Raspbian is not a mobile‑optimized OS, the interface is usable for larger UI elements. For a more touch‑friendly experience, consider exploring experimental OS options such as Ubuntu Touch.
Launching a sample Qt application demonstrates the board’s ability to display sensor data and control interfaces. With appropriately sized touch targets, the 5‑inch screen proves very responsive.
Edge Impulse Object Detection
Edge Impulse’s new Linux deployment support lets you train and run models directly on the reTerminal. Capture images with a camera attached, train the model in the cloud, and deploy it locally with the edge‑impulse‑linux‑runner.
Installation is straightforward:
curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
npm config set user root && sudo npm install edge-impulse-linux -g --unsafe-perm
Once the Edge Impulse CLI is installed, connect a camera – for instance, a USB webcam. If you prefer the Raspberry Pi camera module, enable it via raspi-config.
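Before collecting data it can help to confirm that Linux actually sees the camera. The following sketch is my own illustration (not part of the original guide): it lists Video4Linux device nodes, which is where both USB webcams and the Pi camera module appear once their drivers are loaded.

```python
# Illustrative sanity check, not from the original guide:
# list Video4Linux device nodes (/dev/video*), where USB webcams
# and the Pi camera module show up once detected by the kernel.
import glob

devices = sorted(glob.glob("/dev/video*"))
if devices:
    print("Camera device(s) found:", ", ".join(devices))
else:
    print("No /dev/video* nodes - check the cable or enable the camera in raspi-config")
```

If the list is empty for a Pi camera module, re-run raspi-config and reboot before retrying.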
Prior to data collection, ensure that the project’s Dashboard > Project info > Labeling method is set to Bounding boxes (object detection). Capture at least 100 images per class. While you can upload raw images, annotations must be performed manually in the Data Acquisition tab.
After collecting and labeling your dataset, create an Impulse by selecting the Image processing block and Object Detection (Images) learning block. Because a modest dataset cannot train a large network from scratch, we fine‑tune a pre‑trained MobileNetV2‑SSD backbone. Default values for epochs, learning rate, and confidence threshold generally suffice; the detection pipeline is not exposed in Expert mode, so these parameters remain fixed.
Training runs on the CPU and can take a while, depending on dataset size. During this time, a beverage of your choice is recommended.
When training completes, launch the edge‑impulse‑linux‑runner to download the model and start live inference:
edge-impulse-linux-runner
The runner automatically prepares the model and streams results to a browser. In the terminal, you’ll see a line similar to:
Want to see a feed of the camera and live classification in your browser? Go to https://192.168.1.19:4912
Click the link to view the live camera feed and inference output. The MobileNetV2‑SSD backbone delivers roughly 2 FPS (about 500 ms per frame). Because inference isn’t performed on every frame, bounding boxes may persist briefly after an object disappears.
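The lingering boxes come from how such viewers are typically built: the display loop redraws the most recent detections until they go stale. A minimal sketch of that idea follows; the class and the 0.5 s threshold are my own illustration, not Edge Impulse code.

```python
import time

class DetectionCache:
    """Redraw the last detections until they are older than max_age_s.

    Illustrative only: mimics why boxes linger when inference runs at
    ~2 FPS while the display refreshes far more often.
    """

    def __init__(self, max_age_s=0.5):
        self.max_age_s = max_age_s
        self.boxes = []
        self.stamp = 0.0

    def update(self, boxes):
        # Called whenever a new inference result arrives (~every 500 ms here).
        self.boxes = boxes
        self.stamp = time.monotonic()

    def current(self):
        # Called every display frame; returns [] once the result expires.
        if time.monotonic() - self.stamp > self.max_age_s:
            return []
        return self.boxes

cache = DetectionCache(max_age_s=0.5)
cache.update([(10, 20, 50, 60)])  # one bounding box from the model
print(cache.current())            # still fresh, so it is redrawn
```

Shrinking max_age_s makes boxes vanish faster at the cost of flicker between inference results.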
Edge Impulse’s Linux support is still evolving, so expect continued performance improvements and smoother data upload workflows in the near future.
ARM NN Accelerated Inference
While the Raspberry Pi 4 lacks a dedicated ML accelerator, you can exceed real‑time inference rates by:
- Choosing a lightweight model architecture.
- Utilizing all four CPU cores and the ARM NEON SIMD extensions for parallel processing.
The Arm NN runtime further boosts performance by mapping the model’s operators onto NEON‑optimized kernels.
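The multi‑core point can be illustrated generically in Python. This is a toy sketch of fanning per‑frame work out across four workers, in the spirit of what Arm NN does internally with NEON kernels; nothing here uses the real Arm NN API.

```python
# Toy illustration of the multi-core idea: spread per-frame work
# across four workers, analogous to using all four Cortex-A72 cores.
# Not Arm NN code - the worker function is a stand-in.
from concurrent.futures import ThreadPoolExecutor

def preprocess(frame_id):
    # Stand-in for per-frame work (resize, normalize, ...).
    return frame_id * frame_id

frames = range(8)
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(preprocess, frames))
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49]
```

In real inference runtimes the equivalent knob is usually a thread-count option; the speedup depends on how well the operators parallelize.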
Source: reTerminal Machine Learning Demos (Edge Impulse and Arm NN)