
AI Empowers Robots to Identify 90+ Objects by Touch

Humans are good at associating the appearance and material properties of objects across multiple modalities. When we see a pair of scissors, we can imagine what our fingers would feel touching the metal surface, and we can picture it in our mind – not just its identity, but also its size, shape, and proportions.

The perception of robots, on the other hand, isn’t inherently multi-modal. Although existing robots equipped with advanced cameras are capable of distinguishing between two different objects, vision alone can often prove inadequate, especially in the presence of occlusion and poor lighting conditions.

Now, researchers at the University of California, Berkeley have developed a method that allows a robotic manipulator to learn human-like multi-modal associations. It uses both visual and tactile observations and determines whether the two correspond to the same object.

What Exactly Did They Do?

The research team employed high-resolution touch sensing via two GelSight sensors (one attached to each of the robot’s fingers) and convolutional neural networks (CNNs) for multi-modal association.

These sensors generate readings by means of a camera integrated with an elastomer gel, which records indentations in the gel created by contact with objects. These readings are then fed to CNNs for data processing.

Researchers trained these CNNs to take in the tactile readings from the sensors and an object image from a camera, and to determine whether the two inputs represent the same object. To perform instance recognition, they combined the robot’s tactile readings with the visual observation of the query object.
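The matching setup described above – one branch for the tactile reading, one for the object image, and a yes/no decision about whether they belong together – can be sketched as a small embed-and-compare model. Everything below (the input sizes, the dense projections standing in for the actual CNN branches, and the similarity threshold) is illustrative, not the paper’s architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the two network branches: each maps its flattened
# input (a tactile reading or an object image) to a 64-d embedding.
# In the actual work these are trained CNNs; random dense projections
# keep the sketch short.
W_tactile = rng.standard_normal((64, 32 * 32))
W_visual = rng.standard_normal((64, 32 * 32))

def embed(W, x):
    """Project an input vector and L2-normalize the embedding."""
    e = W @ x
    return e / np.linalg.norm(e)

def same_object(tactile, image, threshold=0.0):
    """Predict whether the tactile reading and the image come from the
    same object: cosine similarity of the two embeddings vs. a threshold."""
    score = embed(W_tactile, tactile) @ embed(W_visual, image)
    return score > threshold
```

In a trained version of this idea, the two branches are optimized so that matching tactile/image pairs score above the threshold and mismatched pairs score below it; instance recognition then amounts to comparing a new tactile reading against the embeddings of candidate object images.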

Reference: arXiv:1903.03591 | UC Berkeley

They used NVIDIA GeForce GTX 1080 and TITAN X GPUs with a CUDA-accelerated deep learning framework to train and test the CNN for multi-modal association on more than 33,000 images.

Robot (left) consisting of two GelSight tactile sensors (one on each finger) and a frontal RGB camera | Examples of tactile observations (middle) and object images (right) corresponding to a single object | Courtesy of researchers 

The results demonstrate that it’s possible to recognize object instances from tactile readings alone, including instances that were never used in training. In fact, the CNN outperformed some human volunteers as well as alternative methods.

What’s Next?

So far, researchers have only considered individual grasps. In the next study, they will use multiple tactile interactions to obtain a more complete picture of the query object.


The team also plans to extend the system to robotic warehouses, where robots would look at product images and then retrieve the corresponding items by feeling for them on shelves. The method could likewise be applied to robots in a home environment, letting them retrieve objects from hard-to-reach spots.

