
NVIDIA Deploys AI to Enhance Camera Clarity for Autonomous Vehicles

Dozens of companies are working on autonomous vehicle technology, and they all approach the engineering challenges in different ways. To mimic the human ability to see, the technology mainly relies on three basic sensing elements: radar, cameras, and lidar.

However, factors like rain, snow, and other blockages can degrade camera vision. This hinders the perception system's ability to make sense of its surroundings and to validate the data coming in from its sensors.

To detect invalid sensor data as early as possible in the processing pipeline, before it reaches downstream modules, researchers at NVIDIA have developed an AI model that evaluates a camera's ability to see clearly.

This model uses a deep neural network, named ClearSightNet, to identify the root causes of blockages, occlusions, and reductions in visibility. It is designed to:

  1. Reason across a broad range of possible causes of reduced camera visibility.
  2. Provide actionable data.
  3. Run on multiple cameras with low computational overhead.

How It Works

The network segments the camera image into two categories: regions associated with occlusion and regions with reduced visibility.

Source: NVIDIA | YouTube

Occlusion covers the portions of the camera's field of view that are blocked by opaque matter (like snow, mud, or dust) or that contain no usable data (for example, pixels saturated by direct sunlight). In these regions, perception is entirely impaired.

Reduced visibility covers the portions that are partially obscured by fog, glare, or heavy rain. In such regions, decisions made by downstream algorithms should be marked with lower confidence.

The left side shows the input image, while the right side shows the same image overlaid with the neural network's output mask. Nearly 84 percent of the image pixels are affected by partial or complete occlusion.

To show these regions, ClearSightNet overlays a mask on the input video or image in real time. Reduced-visibility regions are marked in green, and completely occluded regions are marked in red. The network also reports what fraction of the input is affected by reduced visibility or occlusion.
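The two-class mask and the affected-area percentage described above can be sketched in a few lines. ClearSightNet's actual output format is not public, so the class encoding and the overlay routine below are illustrative assumptions, not NVIDIA's implementation:

```python
import numpy as np

# Hypothetical per-pixel class labels; the real network's output encoding
# is not documented publicly, so these indices are assumptions.
CLEAR, REDUCED, OCCLUDED = 0, 1, 2

def summarize_mask(mask: np.ndarray) -> dict:
    """Report the fraction of pixels with reduced visibility or full occlusion."""
    total = mask.size
    reduced = np.count_nonzero(mask == REDUCED) / total
    occluded = np.count_nonzero(mask == OCCLUDED) / total
    return {"reduced": reduced, "occluded": occluded, "affected": reduced + occluded}

def overlay(image: np.ndarray, mask: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Tint reduced-visibility pixels green and occluded pixels red."""
    out = image.astype(np.float32).copy()
    green = np.array([0.0, 255.0, 0.0], dtype=np.float32)
    red = np.array([255.0, 0.0, 0.0], dtype=np.float32)
    out[mask == REDUCED] = (1 - alpha) * out[mask == REDUCED] + alpha * green
    out[mask == OCCLUDED] = (1 - alpha) * out[mask == OCCLUDED] + alpha * red
    return out.astype(np.uint8)
```

Feeding a mask where 20 percent of pixels are reduced-visibility and 10 percent are occluded yields an "affected" fraction of 0.3, mirroring the aggregate percentage the network displays.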

This data can be used in several ways. A self-driving car, for example, can decline to engage autonomous features when visibility is too low, and alert the driver to clean the windshield or camera lens. Vehicles can also use the network's output to judge how much the camera-based perception can be trusted.
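The decision logic suggested above can be sketched as a simple threshold policy. The limits here are hypothetical placeholders for illustration; NVIDIA does not publish the thresholds a production system would use:

```python
# Illustrative thresholds (assumed, not from NVIDIA) for acting on the
# occlusion / reduced-visibility fractions reported by a model like ClearSightNet.
OCCLUSION_LIMIT = 0.10  # above this, too much of the view is fully blocked
REDUCED_LIMIT = 0.30    # above this, downstream results should be downgraded

def camera_health_action(occluded: float, reduced: float) -> str:
    """Map mask-coverage fractions to a coarse vehicle-level response."""
    if occluded > OCCLUSION_LIMIT:
        # E.g. hold off autonomous features and prompt lens/windshield cleaning.
        return "disable-autonomy-and-alert-driver"
    if reduced > REDUCED_LIMIT:
        # Perception keeps running, but its outputs carry lower confidence.
        return "lower-confidence"
    return "nominal"
```

For the example frame above, with roughly 84 percent of pixels affected, such a policy would immediately alert the driver rather than rely on camera perception.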

The team plans to further improve ClearSightNet to provide end-to-end evaluation and more detailed information about camera visibility, enabling greater control over how autonomous vehicle features are deployed.

Read: Nvidia AI Can Convert 30fps Videos To 240fps

As for performance, the current ClearSightNet network runs in about 1.3 milliseconds per frame on Xavier's integrated GPU and about 0.7 milliseconds on a discrete GPU. It is already available in NVIDIA DRIVE Software 9.0.

