
Enhancing Autonomous Vehicles with Advanced Acoustic Sensing

In high‑stakes traffic scenarios, seconds can mean the difference between life and death. While human drivers instinctively rely on both sight and hearing, most autonomous systems prioritize visual perception alone. Avelabs, a Cairo‑based AI and sensor firm, has introduced AutoHears, an end‑to‑end acoustic sensing solution that endows vehicles with a reliable sense of hearing.

“Vision is our most important sense when evaluating the environment,” said Amr Abdelsabour, director of product management at Avelabs, in a panel session at this year’s AutoSens Brussels. “But as drivers we also depend on sound – a siren from behind or a horn at a blind intersection can be heard before it is seen.”

At AutoSens, Avelabs unveiled AutoHears, a compact acoustic sensor platform that detects, classifies, and localises sounds in real time. The system – comprising custom‑built microphones, a mechanical enclosure, and proprietary software – is designed to recognise emergency‑vehicle sirens, obscured hazards outside the field of view, natural‑disaster cues (e.g., rockslides), and safety‑critical events such as nearby collisions, gunshots, or explosions. It also offers vehicle self‑diagnostics and basic speech recognition.

**EE Times Europe:** *Could you describe the types of sounds that AutoHears can and can’t detect?*

**Amr Abdelsabour:** We began with vehicle‑borne noises (tires, engine, brakes) and standard horn and siren patterns from around the world. Those classes are fully validated. We’re now extending the library to include natural‑disaster sounds and collision signatures, which are still in development.

**EE Times Europe:** *AutoHears detects sounds from all angles. Are there any physical limitations?*

**Abdelsabour:** The system captures sound from every direction and even through walls or other obstructions. Because acoustic sensing is relative to ambient noise, a quiet environment allows detection of faint sounds like bicycle wheels or footsteps. In a loud setting, only the dominant sounds – such as a nearby siren – will be captured. We are quantifying these limits to provide clear performance guarantees.

**EE Times Europe:** *What about sound classification?*

**Abdelsabour:** Classification blends rule‑based algorithms for standardised sounds (sirens, horns) with machine‑learning models for complex, non‑standard vehicle noises. AutoHears deploys both approaches, selecting the method that best matches the target sound class.

**EE Times Europe:** *How does audio data fuse with visual and radar inputs?*

**Abdelsabour:** Human drivers use hearing to supplement vision; AutoHears follows the same principle. The raw acoustic stream is forwarded to the vehicle’s domain controller, where it is fused with camera and radar data. For example, a radar can pinpoint a vehicle’s distance, a camera can identify it as a car, and AutoHears can confirm the presence of a horn or siren, delivering a comprehensive situational picture.

**EE Times Europe:** *Why did you build an integrated hardware‑software system?*

**Abdelsabour:** Accurate localisation relies on precise microphone geometry and signal timing. Off‑the‑shelf microphones simply don’t meet these requirements. By designing the sensor stack ourselves, we control microphone count, spacing, and placement, ensuring the time‑difference‑of‑arrival calculations needed for sub‑degree localisation.

**EE Times Europe:** *Can you share details on the sensor and processing platform?*

**Abdelsabour:** The sensor operates in a centralised architecture: raw audio is streamed to the vehicle’s domain controller, where we run our algorithms on commercial‑off‑the‑shelf SoCs such as Xilinx FPGAs or TI ADAS TDA chips. The platform is fully adaptable; the only requirement is a compatible controller.

**EE Times Europe:** *What does “hardware‑dependent” mean for AutoHears?*

**Abdelsabour:** Feature sets can be customised. A single‑microphone configuration gives directionality; adding more microphones unlocks distance estimation. Similarly, higher‑resolution localisation demands more processing power, which the chosen controller must support. Custom drivers are supplied to integrate with the customer’s software stack.

**EE Times Europe:** *Where are you in development, and when can it hit the road?*

**Abdelsabour:** AutoHears is in the product‑development phase. We have validated the core concept, completed demos, and are now moving into public‑road testing and certification. Production readiness will follow successful validation and regulatory approval.

**EE Times Europe:** *Do you have early customers?*

**Abdelsabour:** Though we announced the product at AutoSens last September, conversations are already underway with several OEMs and fleet operators interested in field testing. These partnerships will provide critical data for further refinement.
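The split Abdelsabour describes – rule‑based algorithms for standardised sounds, machine learning for the rest – can be illustrated with a toy rule‑based siren check: a siren’s dominant frequency stays within a characteristic band while sweeping up and down over time. This is a minimal sketch, not Avelabs’ method; the band limits, frame length, and thresholds are illustrative assumptions, and real siren patterns vary by region.

```python
import numpy as np

def looks_like_siren(signal, fs, frame_len=2048, band=(500.0, 1800.0)):
    """Rule-based check: does the dominant frequency sweep within a siren band?

    Tracks the spectral peak frame by frame; flags 'siren-like' if the peak
    stays inside the band and actually varies over time (i.e., it sweeps).
    Thresholds are illustrative only.
    """
    peaks = []
    for start in range(0, len(signal) - frame_len, frame_len):
        frame = signal[start:start + frame_len] * np.hanning(frame_len)
        spectrum = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, 1.0 / fs)
        peaks.append(freqs[np.argmax(spectrum)])  # dominant frequency of frame
    peaks = np.array(peaks)
    in_band = (peaks > band[0]) & (peaks < band[1])
    return bool(in_band.mean() > 0.8 and peaks.std() > 50.0)
```

A steady tone or broadband noise fails the sweep test, while a wailing two‑second chirp between roughly 700 Hz and 1500 Hz passes. Non‑standard sounds with no such fixed signature are where the interview’s machine‑learning models would take over.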
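The time‑difference‑of‑arrival calculation Abdelsabour credits for sub‑degree localisation can be sketched for the simplest case of two microphones: cross‑correlating the channels yields the inter‑microphone delay, and a far‑field geometric model converts that delay to a bearing. The function name, microphone spacing, and sample rate below are illustrative assumptions; a production array would use more microphones and far more careful calibration.

```python
import numpy as np

def estimate_direction(sig_left, sig_right, fs, mic_spacing, c=343.0):
    """Estimate source bearing (degrees) from the TDOA between two microphones.

    Cross-correlation locates the sample lag at which the channels best align;
    the far-field model tdoa = (d / c) * sin(theta) then gives the angle.
    Positive angles mean the source is closer to the right microphone.
    """
    # Peak of the full cross-correlation gives the lag of left relative to right.
    corr = np.correlate(sig_left, sig_right, mode="full")
    lag = np.argmax(corr) - (len(sig_right) - 1)
    tdoa = lag / fs  # seconds; positive if the left mic hears the source later
    sin_theta = np.clip(tdoa * c / mic_spacing, -1.0, 1.0)
    return float(np.degrees(np.arcsin(sin_theta)))
```

With two microphones this resolves only a bearing (with a front/back ambiguity); as the interview notes, adding microphones is what unlocks distance estimation, and angular resolution is bounded by microphone spacing and sample rate – hence the emphasis on controlling the sensor geometry in hardware.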
