Real‑Time Face Recognition on Raspberry Pi – End‑to‑End Guide
This tutorial walks you through building a real‑time face‑recognition system on a Raspberry Pi using OpenCV and a Pi‑Cam. From hardware setup to training the recognizer, every step is presented with clear code snippets and practical tips.
The project relies on the open‑source OpenCV library, which is optimized for real‑time performance. While we’ll focus on a Raspberry Pi running Raspbian, the same code runs on macOS and Windows with only minor adjustments.
3 Phases
A complete face‑recognition workflow can be broken into three distinct stages:
- Face Detection & Data Gathering
- Training the Recognizer
- Real‑time Face Recognition
Step 1: Bill of Materials
Essential components:
- Raspberry Pi 3 – $32.00
- 5‑MPx 1080p OV5647 Mini Camera Module – $13.00
Step 2: Installing OpenCV 3
We recommend following Adrian Rosebrock’s guide for a clean OpenCV 3 installation on Raspbian Stretch. After the tutorial, activate the dedicated Python virtual environment:
source ~/.profile
workon cv
Once inside the environment, verify the Python version and OpenCV installation:
python
>>> import cv2
>>> cv2.__version__
You should see a version number like 3.3.0 or newer.
Step 3: Testing Your Camera
If the camera is not detected, enable the driver with:
sudo modprobe bcm2835-v4l2
The following Python snippet opens the Pi‑Cam stream, displays it in color and grayscale, and flips it vertically (adjust if your camera orientation differs):
import numpy as np
import cv2

cap = cv2.VideoCapture(0)
cap.set(3, 640)  # set frame width
cap.set(4, 480)  # set frame height

while True:
    ret, frame = cap.read()
    frame = cv2.flip(frame, -1)  # flip vertically; remove if your camera is mounted upright
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    cv2.imshow('frame', frame)
    cv2.imshow('gray', gray)
    if (cv2.waitKey(30) & 0xff) == 27:  # press ESC to quit
        break

cap.release()
cv2.destroyAllWindows()
Run it with python simpleCamTest.py (available on the project’s GitHub). Press ESC to exit.
Step 4: Face Detection
OpenCV’s Haar Cascade classifier is a lightweight, machine‑learning approach for detecting faces. Download the pre‑trained XML file (haarcascade_frontalface_default.xml) and place it in a Cascades folder. The following script demonstrates real‑time face detection with bounding boxes:
import cv2

faceCascade = cv2.CascadeClassifier('Cascades/haarcascade_frontalface_default.xml')
cap = cv2.VideoCapture(0)
cap.set(3, 640)  # set frame width
cap.set(4, 480)  # set frame height

while True:
    ret, img = cap.read()
    img = cv2.flip(img, -1)  # flip vertically; remove if your camera is mounted upright
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(20, 20)
    )
    for (x, y, w, h) in faces:
        # draw a blue bounding box around each detected face
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    cv2.imshow('video', img)
    if (cv2.waitKey(30) & 0xff) == 27:  # press ESC to quit
        break

cap.release()
cv2.destroyAllWindows()
The detectMultiScale parameters control sensitivity: scaleFactor reduces image size each iteration, minNeighbors filters false positives, and minSize sets the smallest detectable face.
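To build some intuition for scaleFactor, the sketch below (plain Python, no OpenCV required; pyramid_sizes is a name made up for this illustration) lists the effective detection-window sizes the cascade scans between minSize and the frame height. A larger scaleFactor means fewer pyramid steps, so detection is faster but coarser:

```python
def pyramid_sizes(min_size=20, max_size=480, scale_factor=1.2):
    """Effective detection-window sizes: each step grows by scale_factor
    (equivalently, the image is shrunk by scale_factor per pass)."""
    sizes = []
    size = float(min_size)
    while size <= max_size:
        sizes.append(round(size))
        size *= scale_factor
    return sizes

print(pyramid_sizes(20, 480, 1.2))  # many fine-grained steps
print(pyramid_sizes(20, 480, 1.5))  # fewer, coarser steps
```

On a Pi, nudging scaleFactor up is one of the cheapest ways to buy back frame rate if detection feels sluggish.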
Additional Examples
Explore more detection options on the project’s GitHub:
- faceEyeDetection.py
- faceSmileDetection.py
- faceSmileEyeDetection.py
Step 5: Data Gathering
Collecting a labeled dataset is essential for training. Create a project folder (e.g., FacialRecognitionProject) and a dataset subdirectory:
mkdir FacialRecognitionProject
cd FacialRecognitionProject
mkdir dataset
Download the Haar Cascade file into the project root. Then run 01_face_dataset.py to capture 30 face images per user:
import cv2

cam = cv2.VideoCapture(0)
cam.set(3, 640)  # set frame width
cam.set(4, 480)  # set frame height
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

face_id = input('\nEnter user ID (numeric) and press <Return> → ')
print('\n[INFO] Initializing face capture. Look at the camera…')
count = 0

while True:
    ret, img = cam.read()
    img = cv2.flip(img, -1)  # flip vertically; remove if your camera is mounted upright
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, 1.3, 5)
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        count += 1
        # save the cropped grayscale face as dataset/User.{id}.{count}.jpg
        cv2.imwrite(f'dataset/User.{face_id}.{count}.jpg', gray[y:y+h, x:x+w])
    cv2.imshow('image', img)
    k = cv2.waitKey(100) & 0xff
    if k == 27:        # press ESC to quit early
        break
    elif count >= 30:  # stop once 30 samples have been captured
        break

cam.release()
cv2.destroyAllWindows()
print('\n[INFO] Exiting Program and cleaning up.')
Each image is named User.{id}.{count}.jpg (e.g., User.1.4.jpg). Adjust the sample count as needed.
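Before training, it is worth confirming that every user actually has the expected number of samples on disk. The small helper below (a utility of our own, not one of the tutorial scripts) counts images per user ID by parsing the User.{id}.{count}.jpg names:

```python
import os
from collections import Counter

def samples_per_user(dataset_dir='dataset'):
    """Return {user_id: number_of_images} for files named User.{id}.{count}.jpg."""
    counts = Counter()
    for name in os.listdir(dataset_dir):
        parts = name.split('.')
        # expect exactly: 'User', numeric id, sample count, 'jpg'
        if len(parts) == 4 and parts[0] == 'User' and parts[1].isdigit():
            counts[int(parts[1])] += 1
    return dict(counts)

if __name__ == '__main__' and os.path.isdir('dataset'):
    for user_id, n in sorted(samples_per_user().items()):
        print(f'User {user_id}: {n} samples')
```

If a user comes up short (for example, because faces were only detected intermittently), simply rerun the capture script for that ID.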
Step 6: Trainer
Using the collected images, train an OpenCV face recognizer. The resulting model is saved as a .yml file in a trainer folder. Refer to the full training script in the project repository for implementation details.
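As a sketch of what that trainer does, the core of an LBPH training script looks roughly like the following. It assumes the opencv-contrib package (which provides the cv2.face module) and Pillow for image loading; id_from_filename and train are helper names chosen for this illustration, not from the repository:

```python
import os

def id_from_filename(path):
    """Dataset images are named User.{id}.{count}.jpg -> return the numeric id."""
    return int(os.path.split(path)[-1].split('.')[1])

def train(dataset_dir='dataset', model_path='trainer/trainer.yml'):
    # heavy imports kept local so the filename helper stays dependency-free
    import cv2
    import numpy as np
    from PIL import Image

    recognizer = cv2.face.LBPHFaceRecognizer_create()
    detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

    samples, ids = [], []
    for name in os.listdir(dataset_dir):
        path = os.path.join(dataset_dir, name)
        gray = np.array(Image.open(path).convert('L'), 'uint8')
        # re-detect the face so only the face region feeds the recognizer
        for (x, y, w, h) in detector.detectMultiScale(gray):
            samples.append(gray[y:y+h, x:x+w])
            ids.append(id_from_filename(path))

    recognizer.train(samples, np.array(ids))
    os.makedirs(os.path.dirname(model_path), exist_ok=True)
    recognizer.write(model_path)  # older OpenCV 3.x builds use recognizer.save()
    print(f'[INFO] {len(set(ids))} user(s) trained.')

if __name__ == '__main__' and os.path.isdir('dataset'):
    train()
```

Training 30 samples per user takes only a few seconds on a Pi 3; the resulting trainer/trainer.yml is what the recognition script loads in the final phase.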
Read More Detail: Real‑Time Face Recognition – End‑to‑End Project