Face Recognition System with Python
About the project: This project provides a simple, two-part system for real-time face recognition. It uses your webcam to first collect images of a person's face and then to recognize that person in a live video feed.
A face recognition system is a great project to build with Python. It's an excellent way to see how computer vision libraries can be used to solve real-world problems.
This is a multi-step process involving a data-collection (training) phase and a recognition phase, so a complete solution requires more than a single file.
Project Level: Advanced
Prerequisites
Before you begin, you need to install the required Python libraries. Open your terminal or command prompt and run the following command:
pip install opencv-python face_recognition numpy
- opencv-python: Used for capturing video from the webcam and drawing graphics.
- face_recognition: A powerful and easy-to-use library for face detection and recognition.
- numpy: A fundamental library for numerical operations, used here by face_recognition.
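If you want to confirm the installation worked before moving on, a quick sanity check like the one below can help. This is an optional snippet, not one of the project files; it only imports the libraries and prints the versions where that is straightforward.

# check_setup.py (optional helper, not part of the project)
import cv2
import numpy

print("OpenCV version:", cv2.__version__)
print("NumPy version:", numpy.__version__)

try:
    # face_recognition depends on dlib, which is the most common install problem
    import face_recognition
    print("face_recognition imported successfully.")
except ImportError as exc:
    print("face_recognition is not installed correctly:", exc)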
Project Structure
- Create a new directory for your project. Inside this main directory, create a folder named dataset. This is where all the images of the people you want to recognize will be stored.
your-project-directory/
├── dataset/
├── 01_collect_data.py
└── 02_recognize_faces.py
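You can create the dataset folder by hand or with a one-line Python snippet like the one below; either way works, and the collection script in Step 1 also creates a per-person subfolder inside dataset automatically.

import os

# Create the top-level dataset folder if it does not already exist
os.makedirs("dataset", exist_ok=True)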
Step 1: Data Collection (01_collect_data.py)
This script captures a series of images of a person's face from your webcam. These images will be used to train the recognition system.
Copy the following code into 01_collect_data.py:
# 01_collect_data.py
import cv2
import os
import time


def collect_images():
    """
    Captures a series of face images from the webcam for a new person.
    """
    person_name = input("Enter the name of the person: ").strip()
    if not person_name:
        print("Invalid name. Please try again.")
        return

    # Create a new directory for this person's images inside the dataset folder
    dataset_path = os.path.join("dataset", person_name)
    os.makedirs(dataset_path, exist_ok=True)

    # Initialize the webcam
    camera = cv2.VideoCapture(0)

    # Check if the camera is opened successfully
    if not camera.isOpened():
        print("Error: Could not open camera.")
        return

    # Initialize a face detector (using Haar Cascade)
    face_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')
    if face_cascade.empty():
        print("Error loading face cascade classifier.")
        return

    print("\n[INFO] Starting video stream...")
    print("[INFO] Look at the camera. Capturing images...")
    print("[INFO] Press 'q' to quit.")

    count = 0
    start_time = time.time()

    # Capture images for 10 seconds
    while (time.time() - start_time) < 10:
        ret, frame = camera.read()
        if not ret:
            break

        gray_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = face_cascade.detectMultiScale(gray_frame, scaleFactor=1.3, minNeighbors=5)

        for (x, y, w, h) in faces:
            # Draw a rectangle around the detected face
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

            # Save the cropped face image
            face_image = gray_frame[y:y + h, x:x + w]
            image_path = os.path.join(dataset_path, f"{count}.jpg")
            cv2.imwrite(image_path, face_image)
            count += 1

        # Display a counter on the screen
        cv2.putText(frame, f"Images Captured: {count}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)

        # Display the live video feed
        cv2.imshow("Capture Face", frame)

        # Exit on 'q' key press
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    print(f"\n[INFO] Done capturing {count} images for {person_name}.")
    camera.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    collect_images()
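After a capture run, it can be useful to confirm how many images were actually saved for each person. The short snippet below is an optional check (not one of the project files) that simply counts the .jpg files in each subfolder of dataset.

import os

dataset_path = "dataset"
for person_name in sorted(os.listdir(dataset_path)):
    person_dir = os.path.join(dataset_path, person_name)
    if os.path.isdir(person_dir):
        # Count the captured face crops saved by 01_collect_data.py
        images = [f for f in os.listdir(person_dir) if f.lower().endswith(".jpg")]
        print(f"{person_name}: {len(images)} images")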
Step 2: Face Recognition (02_recognize_faces.py)
This script loads the images you collected, encodes them, and then uses a real-time webcam feed to recognize faces.
Copy the following code into 02_recognize_faces.py:
# 02_recognize_faces.py
import face_recognition
import cv2
import os


def recognize_faces():
    """
    Loads face encodings from the dataset and performs real-time face recognition
    using the webcam.
    """
    print("[INFO] Loading face encodings...")
    known_face_encodings = []
    known_face_names = []

    # Get the path to the dataset directory
    dataset_path = "dataset"
    if not os.path.exists(dataset_path):
        print("Error: 'dataset' directory not found. Please run 01_collect_data.py first.")
        return

    # Iterate through each person's directory in the dataset
    for person_name in os.listdir(dataset_path):
        person_dir = os.path.join(dataset_path, person_name)
        if os.path.isdir(person_dir):
            for filename in os.listdir(person_dir):
                if filename.lower().endswith(('.png', '.jpg', '.jpeg')):
                    image_path = os.path.join(person_dir, filename)
                    image = face_recognition.load_image_file(image_path)

                    # Get face encodings for each image
                    face_encodings = face_recognition.face_encodings(image)
                    if face_encodings:
                        known_face_encodings.append(face_encodings[0])
                        known_face_names.append(person_name)

    print(f"[INFO] Loaded {len(known_face_names)} face encodings.")

    # Initialize the webcam
    video_capture = cv2.VideoCapture(0)
    if not video_capture.isOpened():
        print("Error: Could not open camera.")
        return

    print("\n[INFO] Starting video stream...")
    print("[INFO] Press 'q' to quit.")

    while True:
        ret, frame = video_capture.read()
        if not ret:
            break

        # Convert the image from BGR color (OpenCV) to RGB (face_recognition)
        rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

        # Find all face locations and encodings in the current frame
        face_locations = face_recognition.face_locations(rgb_frame)
        face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)

        for (top, right, bottom, left), face_encoding in zip(face_locations, face_encodings):
            # Compare the face with the known faces
            matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
            name = "Unknown"

            # Find the best match
            face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
            best_match_index = -1
            if face_distances.size > 0:
                best_match_index = face_distances.argmin()

            if best_match_index != -1 and matches[best_match_index]:
                name = known_face_names[best_match_index]

            # Draw a box and label around the recognized face
            cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
            cv2.rectangle(frame, (left, bottom - 35), (right, bottom), (0, 255, 0), cv2.FILLED)
            cv2.putText(frame, name, (left + 6, bottom - 6), cv2.FONT_HERSHEY_DUPLEX, 0.8, (255, 255, 255), 1)

        # Display the resulting image
        cv2.imshow('Video', frame)

        # Exit on 'q' key press
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    video_capture.release()
    cv2.destroyAllWindows()


if __name__ == "__main__":
    recognize_faces()
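If the system mixes up people who look alike, you can make matching stricter. face_recognition's compare_faces accepts a tolerance parameter (the default is 0.6; lower values are stricter), and the standalone sketch below shows the idea with two image files. The file paths are placeholders; swap in images from your own dataset.

import face_recognition

# Placeholder paths; replace them with images you actually have
known_image = face_recognition.load_image_file("dataset/alice/0.jpg")
unknown_image = face_recognition.load_image_file("test_photo.jpg")

known_encoding = face_recognition.face_encodings(known_image)[0]
unknown_encoding = face_recognition.face_encodings(unknown_image)[0]

# tolerance=0.5 is stricter than the default 0.6, so fewer false positives
result = face_recognition.compare_faces([known_encoding], unknown_encoding, tolerance=0.5)
print("Match:", result[0])

Passing the same tolerance argument to the compare_faces call inside 02_recognize_faces.py has the same effect on the live feed.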
How to Run
Now you are ready to run the project. Follow these steps:
- Place both files (01_collect_data.py and 02_recognize_faces.py) in your project directory.
- Open your terminal in the project directory.
- First, run the data collection script to create a dataset for the person you want to recognize. Follow the on-screen instructions.
python 01_collect_data.py
- Once the data is collected, run the recognition script to see the system in action.
python 02_recognize_faces.py