Advanced Curved Lane Detection with OpenCV
Introduction
Lane markings are the backbone of any autonomous driving system, guiding vehicles safely through traffic. Building on my previous straight‑lane detection project, I have developed a robust curved‑lane detection pipeline that performs reliably under challenging conditions such as shadows, occlusions, and uneven road surfaces.
The complete image‑processing workflow is implemented in Python using OpenCV:
- Distortion correction
- Perspective warp
- Sobel filtering
- Histogram peak detection
- Sliding‑window search
- Polynomial curve fitting
- Overlay of detected lane
- Real‑time video processing
Limitations of Previous System
The earlier model was limited to straight lanes and struggled with curves, shadows, and partial occlusions. The new pipeline addresses these shortcomings by incorporating edge‑based filtering and a dynamic sliding‑window approach.
Distortion Correction
Camera lenses introduce radial distortion that can skew measurements. By calibrating the camera against a known pattern—typically an asymmetric checkerboard—we generate a distortion model that corrects the entire dataset.
The calibration procedure involves capturing multiple images of the checkerboard, detecting corners with cv2.findChessboardCorners(), and computing the camera matrix and distortion coefficients using cv2.calibrateCamera(). The corrected images are produced with cv2.undistort().
Below is a side‑by‑side comparison of a checkerboard image before and after distortion correction, demonstrating the subtle but critical improvement.
Here is the calibration code used:
import glob
import pickle

import cv2
import numpy as np

def undistort_img():
    # Prepare object points (0,0,0) … (8,5,0) for a 9x6 checkerboard
    obj_pts = np.zeros((6*9, 3), np.float32)
    obj_pts[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

    objpoints = []  # 3D points in real-world space
    imgpoints = []  # 2D points in the image plane

    images = glob.glob('camera_cal/*.jpg')
    for fname in images:
        img = cv2.imread(fname)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
        if ret:
            objpoints.append(obj_pts)
            imgpoints.append(corners)

    img_size = (img.shape[1], img.shape[0])
    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)
    dst = cv2.undistort(img, mtx, dist, None, mtx)

    # Save the calibration so subsequent frames can be corrected without recalibrating
    dist_pickle = {'mtx': mtx, 'dist': dist}
    pickle.dump(dist_pickle, open('camera_cal/cal_pickle.p', 'wb'))
    return dst

def undistort(img, cal_dir='camera_cal/cal_pickle.p'):
    with open(cal_dir, 'rb') as f:
        file = pickle.load(f)
    mtx, dist = file['mtx'], file['dist']
    return cv2.undistort(img, mtx, dist, None, mtx)

undistort_img()
img = cv2.imread('camera_cal/calibration1.jpg')
dst = undistort(img)
The functions are also available in the accompanying Jupyter notebook under the Code section.
Below is an example of distortion correction applied to a driving scene. Even though the visual difference is subtle, accurate geometry is critical for downstream processing.
Perspective Warp
Detecting curved lanes directly in camera space is challenging. By transforming the image to a bird's‑eye view, we can assume the lane lies on a flat plane and fit a polynomial with high precision.
The transformation is performed with cv2.getPerspectiveTransform() and cv2.warpPerspective(). The source and destination points are chosen to preserve lane geometry while maximizing visibility.
def perspective_warp(img, dst_size=(1280, 720),
                     src=np.float32([(0.43, 0.65), (0.58, 0.65), (0.1, 1), (1, 1)]),
                     dst=np.float32([(0, 0), (1, 0), (0, 1), (1, 1)])):
    img_size = np.float32([img.shape[1], img.shape[0]])
    src = src * img_size             # Scale fractional source points to pixel coordinates
    dst = dst * np.float32(dst_size)
    M = cv2.getPerspectiveTransform(src, dst)
    return cv2.warpPerspective(img, M, dst_size)
Sobel Filtering
Color filtering alone can misclassify light concrete as lane markings. Instead, we employ gradient‑based edge detection. The Sobel operator computes horizontal gradients in the HLS color space, highlighting high‑contrast lane edges while suppressing the road surface.
The resulting binary mask is robust to lighting variations and is used as input for the sliding‑window search.
Histogram Peak Detection
Before applying the sliding window, we identify the approximate horizontal positions of the lane lines by computing a histogram of the binary image along the x‑axis. The highest peaks on each side correspond to the left and right lane base points.
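The base-point search described above can be sketched as follows, summing the lower half of the warped binary image column-wise and taking the strongest peak on each side of the midpoint:

```python
import numpy as np

def lane_base_points(binary_warped):
    # Sum the lower half of the image along each column; lane pixels
    # stack vertically, so lane bases appear as histogram peaks
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    left_base = np.argmax(histogram[:midpoint])
    right_base = np.argmax(histogram[midpoint:]) + midpoint
    return left_base, right_base
```
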
Sliding Window Search
Using the base points, the sliding‑window algorithm iteratively collects pixel indices that belong to each lane. It then fits a second‑degree polynomial to each lane, enabling accurate curvature estimation and vehicle position relative to the lane center.
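A compact sketch of the sliding-window search for one lane line, with illustrative window parameters (window count, margin, and pixel threshold are assumptions, not the article's exact values):

```python
import numpy as np

def sliding_window_fit(binary_warped, base_x, nwindows=9, margin=100, minpix=50):
    # Coordinates of all nonzero (candidate lane) pixels
    nonzero_y, nonzero_x = binary_warped.nonzero()
    window_height = binary_warped.shape[0] // nwindows
    current_x = base_x
    lane_inds = []
    for window in range(nwindows):
        # Window bounds, stepping upward from the bottom of the image
        win_y_low = binary_warped.shape[0] - (window + 1) * window_height
        win_y_high = binary_warped.shape[0] - window * window_height
        win_x_low, win_x_high = current_x - margin, current_x + margin
        # Indices of nonzero pixels inside this window
        good = ((nonzero_y >= win_y_low) & (nonzero_y < win_y_high) &
                (nonzero_x >= win_x_low) & (nonzero_x < win_x_high)).nonzero()[0]
        lane_inds.append(good)
        # Re-center the next window on the mean x of the pixels found
        if len(good) > minpix:
            current_x = int(nonzero_x[good].mean())
    lane_inds = np.concatenate(lane_inds)
    # Fit x as a second-degree polynomial in y: x = a*y^2 + b*y + c
    return np.polyfit(nonzero_y[lane_inds], nonzero_x[lane_inds], 2)
```

Fitting x as a function of y (rather than the reverse) is the standard choice here, since lane lines in the warped view are near-vertical and would not be functions of x.
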
The final overlay displays the detected lane area, curvature, and vehicle offset in real time.
Read more: Curved Lane Detection