Integrating Microsoft Kinect with Raspberry Pi for Real‑Time Human Detection on the Sonbi Robot
A. Objective
Our goal was to embed a full‑featured software stack on the Sonbi robot’s Raspberry Pi, connect a Microsoft Kinect, and enable the robot to greet people by waving its arms when it detects a human standing in front of the sensor.
B. Hardware System
- Raspberry Pi with 8 GB flash storage
- Pololu Maestro 24‑channel servo controller
- Microsoft Kinect (RGB camera, depth sensor, IR emitter, multi‑mic array, accelerometer)
- ATX 500 W power supply
- Prototyping boards, wiring, and mechanical components
Raspberry Pi Specifications
- 700 MHz ARM1176JZF‑S core processor
- 512 MB SDRAM
- MicroUSB 5 V power input
- Ethernet, HDMI, and two USB ports for peripherals
- Runs Raspbian OS – extensive community support
Pololu Maestro 24
- 24 independent servo channels
- Pulse rate up to 333 Hz
- Script memory up to 8 KB
- Up to 1.5 A per channel
- USB or power‑header input options
- Scriptable via native API or custom firmware
Connecting Raspberry Pi to Pololu
- Simple TTL serial wiring: power, ground, Tx‑Rx, Rx‑Tx
- Free the Pi’s default serial console by editing /etc/inittab and /boot/cmdline.txt so the UART is available for the Maestro
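The serial wiring above can be exercised with a short sketch using the Maestro's documented compact protocol (command byte 0x84, "Set Target", with the target given in quarter-microseconds split into two 7-bit bytes). The port path /dev/ttyAMA0, the baud rate, and the channel number are assumptions for illustration; adjust them to your wiring and Maestro settings. Sending requires pyserial.

```python
# Sketch: command one Maestro servo channel from the Pi's UART via the
# Pololu compact protocol. Port path and channel number are illustrative.

def set_target_packet(channel, target_us):
    """Build a Set Target command: 0x84, channel, then the target in
    quarter-microseconds as two 7-bit bytes (low byte first)."""
    quarter_us = target_us * 4
    return bytes([0x84, channel, quarter_us & 0x7F, (quarter_us >> 7) & 0x7F])

def send_target(channel, target_us, port="/dev/ttyAMA0"):
    """Write the command over the Pi's UART (needs pyserial and wiring)."""
    import serial
    with serial.Serial(port, 9600, timeout=1) as conn:
        conn.write(set_target_packet(channel, target_us))

# example: send_target(0, 1500) centers the servo on channel 0
```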
Microsoft Kinect Features
- RGB camera (1280×960 resolution) for color imagery
- IR emitter and depth sensor for 3‑D distance mapping
- Four‑mic array for audio capture and sound‑source localization
- 3‑axis accelerometer (±2 g) for orientation detection
- Motorized tilt range: ±27° vertically
- 30 fps frame rate
C. Integrating Kinect with Raspberry Pi
Because the Kinect’s official drivers target Windows, installing the required open‑source libraries on a Linux‑based Raspberry Pi is time‑consuming but essential. The following steps outline the process.
1. Install Kinect Drivers
- Choose between Libfreenect (user‑space driver) or OpenNI (high‑level framework). We installed both for maximum flexibility.
- Libfreenect supports RGB/depth images, motor control, accelerometer, and LED; audio support is experimental.
- OpenNI provides cross‑platform SDK support for depth, audio, and skeleton tracking.
2. Build Libfreenect
sudo apt-get install git-core cmake pkg-config build-essential libusb-1.0-0-dev
git clone https://github.com/OpenKinect/libfreenect
cd libfreenect
mkdir build
cd build
cmake -L ..
make
sudo make install
# alternatively, cmake --build . can replace the make step
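With libfreenect (and its Python wrapper) installed, a depth frame can be pulled as sketched below. The raw‑to‑meters conversion uses a widely circulated empirical approximation for the Kinect's 11‑bit depth values, not a calibrated model; treat the constants as an assumption.

```python
# Sketch: grab one depth frame via libfreenect's Python wrapper and convert
# raw 11-bit readings to meters with a common empirical approximation.

def raw_depth_to_meters(raw):
    """Approximate distance in meters for a raw Kinect depth value (0-2047)."""
    if raw >= 2047:          # 2047 marks "no reading"
        return float("inf")
    return 1.0 / (raw * -0.0030711016 + 3.3309495161)

def grab_depth_frame():
    """Fetch a single depth frame (requires the freenect module and hardware)."""
    import freenect
    depth, _timestamp = freenect.sync_get_depth()
    return depth
```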
3. Build OpenNI
sudo apt-get install g++ python libusb-1.0-0-dev freeglut3-dev
# from the root of the unpacked OpenNI source tree:
cd Platform/Linux/CreateRedist
./RedistMaker
# run the installer from the Redist output directory it generates:
sudo ./install.sh
# optional: make mono_wrapper && make mono_samples
D. Software Stack and Runtime
On startup the robot runs a script, bootscript_sonbi.sh, which launches the face‑detection module:
python facedetect.py --cascade=face.xml 0
The script streams the Kinect’s RGB feed, detects faces in real time using the pre‑trained Haar cascade (face.xml), and signals the Sonbi executable to activate the servo motors for a friendly wave.
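For reference, the detection step can be sketched as below. facedetect.py itself is the stock OpenCV sample; the size threshold in should_wave and the cascade path are illustrative assumptions, not part of the original script.

```python
# Sketch of the detection loop's decision logic. Threshold and cascade
# path are illustrative; the real pipeline uses OpenCV's facedetect.py.

def should_wave(faces, min_side=60):
    """Trigger the greeting only when a detected face box is reasonably
    large, i.e. the person is standing close enough to the Kinect."""
    return any(w >= min_side and h >= min_side for (_x, _y, w, h) in faces)

def detect_faces(frame, cascade_path="face.xml"):
    """Run Haar-cascade detection on one RGB frame (requires OpenCV)."""
    import cv2
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(cascade_path)
    return cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=4)
```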
E. Summary of Actions
When a human is detected, the Kinect captures a depth‑aware image, the detection algorithm confirms a face, and the Pololu Maestro commands the Sonbi robot’s arm servos to perform a welcoming wave. This integration demonstrates the feasibility of using consumer hardware for interactive robotics.
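The action sequence above can be sketched as a simple polling loop with a cooldown, so the robot does not wave continuously at the same person. The detect and wave_gesture callables stand in for the real Kinect/OpenCV and Maestro code; the poll and cooldown counts are illustrative assumptions.

```python
# Sketch of the top-level greeter loop: poll the detector, wave on a hit,
# then skip a few polls so the greeting is not repeated back-to-back.

def run_greeter(detect, wave_gesture, polls, cooldown_polls=3):
    """Poll `detect` a fixed number of times; on detection run
    `wave_gesture` and wait out `cooldown_polls` polls. Returns the
    number of waves performed."""
    waves = 0
    cooldown = 0
    for _ in range(polls):
        if cooldown > 0:
            cooldown -= 1
            continue
        if detect():
            wave_gesture()
            waves += 1
            cooldown = cooldown_polls
    return waves
```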