Soft Robots Detect Human Touch via Camera and Shadows—A Low-Cost, Touchless Solution
Researchers have created a low-cost method for soft, deformable robots to detect a range of physical interactions, from pats to punches to hugs, without relying on touch at all. Instead, a USB camera located inside the robot captures the shadow movements of hand gestures on the robot’s skin and classifies them with machine-learning software.
Touch is an important mode of communication for most organisms but has been virtually absent from human-robot interaction. One reason is that full-body touch sensing has traditionally required a massive number of sensors, making it impractical to implement.
The ShadowSense technology originated through work to develop inflatable robots that could guide people to safety during emergency evacuations. Such a robot would need to be able to communicate with humans in extreme conditions and environments. Imagine a robot physically leading someone down a noisy, smoke-filled corridor by detecting the pressure of the person’s hand.
Rather than installing a large number of contact sensors — which would add weight and complex wiring to the robot and would be difficult to embed in a deforming skin — the team took a counterintuitive approach. In order to gauge touch, they looked to sight. By placing a camera inside the robot, they can infer how the person is touching it and what the person’s intent is just by looking at the shadow images.
The prototype robot consists of a soft, inflatable bladder of nylon skin that is stretched around a cylindrical skeleton, roughly four feet in height, that is mounted on a mobile base. Under the robot’s skin is a USB camera that connects to a laptop. The researchers developed a neural-network-based algorithm that uses previously recorded training data to distinguish among six touch gestures — touching with a palm, punching, touching with two hands, hugging, pointing, and not touching at all — with an accuracy of 87.5 to 96%, depending on the lighting.
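The paper does not publish the classifier itself, but the pipeline it describes, camera frames of shadows in, one of six gesture labels out, can be sketched with a toy stand-in. The sketch below is purely illustrative: it replaces the researchers' trained neural network with a simple nearest-centroid rule over coarse intensity features, and all function names and data are hypothetical.

```python
# Toy sketch of shadow-gesture classification (illustrative only).
# A frame is a grayscale image: a list of rows of 0-255 ints.
# The real system uses a trained neural network; here we pool each
# frame into a coarse feature grid and pick the nearest class centroid.

GESTURES = ["palm", "punch", "two_hands", "hug", "point", "no_touch"]

def pool_features(frame, grid=4):
    """Downsample a frame into a grid x grid vector of mean intensities."""
    h, w = len(frame), len(frame[0])
    feats = []
    for gy in range(grid):
        for gx in range(grid):
            vals = [frame[y][x]
                    for y in range(gy * h // grid, (gy + 1) * h // grid)
                    for x in range(gx * w // grid, (gx + 1) * w // grid)]
            feats.append(sum(vals) / len(vals))
    return feats

def train_centroids(labeled_frames):
    """Average the feature vectors of each gesture's training frames."""
    sums, counts = {}, {}
    for label, frame in labeled_frames:
        f = pool_features(frame)
        if label not in sums:
            sums[label] = [0.0] * len(f)
            counts[label] = 0
        sums[label] = [a + b for a, b in zip(sums[label], f)]
        counts[label] += 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def classify(frame, centroids):
    """Return the gesture whose centroid is closest to the frame's features."""
    f = pool_features(frame)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))
```

In practice a convolutional network trained on recorded shadow images, as the researchers describe, handles the lighting variation that drives the reported 87.5 to 96% accuracy range; a centroid rule like this would not.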
The robot can be programmed to respond to certain touches and gestures such as rolling away or issuing a message through a loudspeaker. And the robot’s skin has the potential to be turned into an interactive screen. By collecting enough data, a robot could be trained to recognize an even wider vocabulary of interactions, custom-tailored to fit the robot’s task.
The robot doesn’t even have to be a robot. ShadowSense technology can be incorporated into other materials, such as balloons, turning them into touch-sensitive devices. In the future, the researchers will try using optical devices such as lenses and mirrors to enable additional form factors.
ShadowSense also offers privacy. If the robot can only see a person in the form of their shadow, it can detect what the person is doing without capturing high-fidelity images of their appearance. The skin thus acts as a physical filter, offering both privacy protection and psychological comfort.
The ability to physically interact and understand a person’s movements and moods could ultimately be just as important to the person as it is to the robot.