OpenCV Surveillance in Python
About the project: An OpenCV-based surveillance system is a classic application of computer vision, focusing on real-time motion detection and tracking.
Here we have a single Python file that captures video from your webcam, establishes a baseline, detects significant movement, and draws bounding boxes around the detected motion.
This code implements a basic frame differencing technique. It captures a static background image after a brief stabilization period, then compares every new frame against that background. Any significant differences are highlighted as motion with a bounding box and logged to the console.
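The core idea — subtract a fixed background from the current frame, then threshold the difference — can be sketched with plain NumPy on a toy 4x4 image (the real script uses cv2.absdiff and cv2.threshold, which do the same thing efficiently):

```python
import numpy as np

# Hypothetical 4x4 grayscale frames (pixel values 0-255)
background = np.zeros((4, 4), dtype=np.uint8)
current = background.copy()
current[1:3, 1:3] = 200  # a bright "object" appears in the scene

# Frame differencing: absolute difference, then a binary threshold
delta = np.abs(current.astype(np.int16) - background.astype(np.int16)).astype(np.uint8)
mask = (delta > 25).astype(np.uint8) * 255  # mirrors cv2.threshold(delta, 25, 255, THRESH_BINARY)

print(int(mask.sum() // 255))  # number of "motion" pixels → 4
```

Pixels where the scene changed by more than the threshold (25) become white in the mask; everything else stays black. The script then looks for large white regions and treats them as motion.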
Project Level: Advanced
This Python project uses the OpenCV library to turn your webcam feed into a simple surveillance system. It works by establishing a baseline image and then detecting and highlighting any significant change (motion) in subsequent frames.
Prerequisites
You need the opencv-python and numpy libraries installed.
pip install opencv-python numpy
The Code (surveillance_system.py)
This single script contains all the logic for video capture, background subtraction, motion detection, and drawing the bounding boxes.
Project Structure
This project consists of a single Python script. You can name it surveillance_system.py.
Copy the following code into your surveillance_system.py file. The code is well-commented to help you understand each part of the surveillance system's functionality.
# surveillance_system.py
import cv2
import numpy as np
import datetime
import time

# --- Configuration ---
# 0 is usually the default built-in camera
CAMERA_INDEX = 0
MIN_CONTOUR_AREA = 1000  # Minimum size for a motion contour to be considered relevant (adjust as needed)

# Initialize the video capture object (webcam)
cap = cv2.VideoCapture(CAMERA_INDEX)
if not cap.isOpened():
    print("Error: Could not open webcam.")
    exit()

# Variable to hold the reference background frame.
# This frame is set after a few seconds of camera initialization.
first_frame = None

# Timestamp for when the system started
start_time = time.time()
WAIT_TIME_SECONDS = 2  # Wait time to stabilize the camera feed before setting first_frame

print(f"[INFO] Initializing camera... Waiting {WAIT_TIME_SECONDS} seconds to set background.")

# Function to log a simple alert to the console
def log_alert(message):
    """Logs a motion detection event with a timestamp."""
    timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    print(f"[{timestamp}] ALERT: {message}")

try:
    while True:
        # Read a frame from the video stream
        ret, frame = cap.read()
        if not ret:
            log_alert("Lost video feed.")
            break

        # Default status for this frame
        motion_status = "No Motion"

        # 1. Pre-process the frame
        # Convert to grayscale
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Apply Gaussian blur to smooth the image and suppress noise
        gray = cv2.GaussianBlur(gray, (21, 21), 0)

        # 2. Set the initial reference frame
        # Wait for the camera to stabilize before capturing the background
        if time.time() - start_time < WAIT_TIME_SECONDS:
            # Display a stabilization message on the screen
            cv2.putText(frame, "STATUS: Stabilizing...", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
            # Keep the background unset while the feed is still stabilizing
            first_frame = None
        elif first_frame is None:
            # Once stable, capture the first frame as the background
            first_frame = gray
            print("[INFO] Background frame set. Starting motion detection.")
        else:
            # 3. Motion Detection Core Logic
            # Absolute difference between the current frame and the background
            frame_delta = cv2.absdiff(first_frame, gray)

            # Apply a threshold to the difference image:
            # pixels with intensity > 25 become white (255), others black (0)
            thresh = cv2.threshold(frame_delta, 25, 255, cv2.THRESH_BINARY)[1]

            # Dilate the thresholded image to fill in holes and gaps (makes contours smoother)
            thresh = cv2.dilate(thresh, None, iterations=2)

            # Find contours (outlines) of the white regions (motion)
            contours, _ = cv2.findContours(thresh.copy(), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

            # 4. Analyze and Draw Bounding Boxes
            for contour in contours:
                # If the contour area is too small, ignore it (likely noise)
                if cv2.contourArea(contour) < MIN_CONTOUR_AREA:
                    continue

                # Motion detected!
                motion_status = "Motion Detected"
                log_alert(f"Motion detected with area: {cv2.contourArea(contour):.0f}")

                # Calculate the bounding box for the contour and draw it on the original frame
                (x, y, w, h) = cv2.boundingRect(contour)
                cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

            # 5. Display the status on the video feed
            color = (0, 255, 0) if motion_status == "Motion Detected" else (255, 255, 0)
            cv2.putText(frame, f"STATUS: {motion_status}", (10, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, color, 2)

            # Display the processed thresholded image (for debugging/visualization)
            cv2.imshow("Thresholded Image (Motion Area)", thresh)

        # Timestamp overlay on every frame
        cv2.putText(frame, datetime.datetime.now().strftime("%A %d %B %Y %H:%M:%S"),
                    (10, frame.shape[0] - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 255), 1)

        # Display the main live feed
        cv2.imshow("Live Feed", frame)

        # Check for 'q' key press to quit the loop
        key = cv2.waitKey(1) & 0xFF
        if key == ord('q'):
            break
finally:
    # Cleanup: release the camera and close all OpenCV windows
    cap.release()
    cv2.destroyAllWindows()
    print("\n[INFO] Surveillance system terminated.")
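The cv2.dilate step in the script merges fragmented motion regions so that a single moving object produces one contour instead of many. Conceptually, one pass of 3x3 binary dilation works like this naive pure-NumPy sketch (for illustration only; OpenCV's version is optimized and what the script actually uses):

```python
import numpy as np

def dilate3x3(mask):
    """Naive one-pass 3x3 binary dilation: a pixel becomes 1 if it or any neighbor is 1."""
    padded = np.pad(mask, 1)
    h, w = mask.shape
    out = np.zeros_like(mask)
    for dy in range(3):
        for dx in range(3):
            out |= padded[dy:dy + h, dx:dx + w]
    return out

mask = np.zeros((5, 5), dtype=np.uint8)
mask[1, 1] = 1
mask[1, 3] = 1  # two nearby motion pixels with a one-pixel gap between them

dilated = dilate3x3(mask)
print(dilated[1, 2])  # → 1: the gap is filled, merging the two regions into one blob
```

Running more iterations (the script uses iterations=2) grows the white regions further, which is why small holes inside a moving object disappear before contour detection.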
How to Run
- Save the Python code above as surveillance_system.py.
- Run the script from your terminal:
python surveillance_system.py
- The program will open two windows:
- Live Feed: The main window showing your camera with green bounding boxes around any motion.
- Thresholded Image: This debug window shows the processed frames, where white areas indicate detected motion.
- Press the 'q' key to exit the application.
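For reference, cv2.boundingRect returns the tight axis-aligned box (x, y, width, height) around a contour. The same box can be derived from any binary mask with plain NumPy, which is handy for understanding what the green rectangles represent (toy example, not part of the script):

```python
import numpy as np

mask = np.zeros((6, 8), dtype=np.uint8)
mask[2:5, 3:7] = 255  # a rectangular "motion" blob

# Bounding box from the coordinates of the non-zero pixels
ys, xs = np.nonzero(mask)
x, y = xs.min(), ys.min()
w, h = xs.max() - x + 1, ys.max() - y + 1
print(x, y, w, h)  # → 3 2 4 3
```

In the script, any blob whose contour area is below MIN_CONTOUR_AREA (1000 pixels by default) is skipped, so you can raise or lower that constant to make the detector less or more sensitive.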