Facial Emotion Recognition in 3 Steps Using Raspberry Pi


Using OpenCV, TensorFlow, and Keras on a Raspberry Pi for emotion recognition, your mood becomes fully visible.

A facial expression recognition system can be used in various applications to study or analyze human emotions. Many companies are implementing facial expression recognition systems to study the depression levels of employees, and game companies can apply them to record players’ satisfaction during gameplay.

Below is a guide on how to implement emotion recognition using a pre-trained model to recognize facial expressions from real-time video streams on Raspberry Pi 4.

Steps to Execute Facial Expression Recognition on Raspberry Pi

To implement emotion recognition on Raspberry Pi, only three steps are needed.

Step 1: Detect faces in the input video stream.

Step 2: Find the region of interest (ROI) of the face.

Step 3: Use the facial expression recognition model to predict human emotions.

This project uses six categories: “Anger”, “Fear”, “Happiness”, “Neutral”, “Sadness”, and “Surprise”. Every predicted image is assigned to one of these categories.

Components Required for Facial Expression Recognition

This project involves very little hardware: only a Raspberry Pi 4 and the Pi camera module are needed, along with OpenCV installed on the Raspberry Pi. OpenCV is used for digital image processing; its most common applications are object detection, facial recognition, and people counting.

Install OpenCV on Raspberry Pi 4

Before installing OpenCV and other dependencies, the Raspberry Pi needs to be fully updated. Use the following command to update the Raspberry Pi to the latest version:
sudo apt-get update
Then use the following commands to install the required dependencies for OpenCV on Raspberry Pi.
sudo apt-get install libhdf5-dev -y
sudo apt-get install libhdf5-serial-dev -y
sudo apt-get install libatlas-base-dev -y
sudo apt-get install libjasper-dev -y
sudo apt-get install libqtgui4 -y
sudo apt-get install libqt4-test -y
After that, use the command below to install OpenCV on Raspberry Pi.
pip3 install opencv-contrib-python==4.1.0.25
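
Note that some of the packages above (libjasper-dev, libqtgui4, libqt4-test) are only available on older, Buster-era Raspberry Pi OS releases. To confirm the installation succeeded, a quick version check from the terminal helps; it should print 4.1.0 for the version pinned above:
python3 -c "import cv2; print(cv2.__version__)"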

Install TensorFlow and Keras on Raspberry Pi 4

Before installing TensorFlow and Keras, install the required libraries mentioned below.

sudo apt-get install python3-numpy
sudo apt-get install libblas-dev
sudo apt-get install liblapack-dev
sudo apt-get install python3-dev
sudo apt-get install libatlas-base-dev
sudo apt-get install gfortran
sudo apt-get install python3-setuptools
sudo apt-get install python3-scipy
sudo apt-get update
sudo apt-get install python3-h5py
The TensorFlow and Keras libraries can then be installed with pip from the terminal (use the pip3 command to make sure the packages go into the Python 3 environment).
pip3 install tensorflow
pip3 install keras
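
To verify that both libraries installed correctly, print their versions from the terminal (this simply checks that the packages import cleanly):
python3 -c "import tensorflow as tf, keras; print(tf.__version__, keras.__version__)"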

Programming Raspberry Pi for Facial Expression Recognition

To get the complete emotion recognition code, reply “emotion recognition” to the “Darwin Says” WeChat account, or download it from the following link:

https://drive.google.com/file/d/1HYqKyhi_wm26AIGMvdHAFR8r6lGHXwiE/view

The important parts of the code are explained below for better understanding. The downloaded project folder contains a subfolder (Haarcascades), a Python file (emotion1.py), and a model (ferjj.h5).

Start the code by importing the important packages mentioned below.

Note: TensorFlow API is used here to import the Keras library.

from tensorflow.keras import Sequential
from tensorflow.keras.models import load_model
import cv2
import numpy as np
from tensorflow.keras.preprocessing.image import img_to_array
Next, use the load_model() function imported from Keras to load the pre-trained model ferjj.h5 (provided in the project folder). In the next lines, create a dictionary of labels and assign them to the 6 classes.
classifier = load_model('./ferjj.h5') # Load the pre-trained model provided in the project folder
# We have 6 labels for the model
class_labels = {0: 'Angry', 1: 'Fear', 2: 'Happy', 3: 'Neutral', 4: 'Sad', 5: 'Surprise'}
classes = list(class_labels.values())
# print(class_labels)

Now, the path to the Haarcascade classifier is provided using the CascadeClassifier() function from the OpenCV library.

face_classifier = cv2.CascadeClassifier('./Haarcascades/haarcascade_frontalface_default.xml')
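
If the relative path to the Haarcascades folder fails (for example, when the script is launched from a different working directory), OpenCV's bundled cascade files can be used instead; this is an alternative, not the project's original setup:
face_classifier = cv2.CascadeClassifier(cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')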

The text_on_detected_boxes() function can be used to design the output labels for detected faces. The parameters of text_on_detected_boxes() already have their default values, which can be changed as needed.

# This function is for designing the overlay text on the predicted image boxes.
def text_on_detected_boxes(text,text_x,text_y,image,font_scale = 1,
                           font = cv2.FONT_HERSHEY_SIMPLEX,
                           FONT_COLOR = (0, 0, 0),
                           FONT_THICKNESS = 2,
                           rectangle_bgr = (0, 255, 0)):
    # Measure the label text so the background box fits it
    (text_width, text_height) = cv2.getTextSize(text, font, fontScale=font_scale, thickness=2)[0]
    # Draw a filled rectangle as the label background
    box_coords = ((text_x - 10, text_y + 4), (text_x + text_width + 10, text_y - text_height - 5))
    cv2.rectangle(image, box_coords[0], box_coords[1], rectangle_bgr, cv2.FILLED)
    # Overlay the label text on top of the rectangle
    cv2.putText(image, text, (text_x, text_y), font, fontScale=font_scale, color=FONT_COLOR, thickness=FONT_THICKNESS)
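
Here is a minimal usage sketch, not part of the original script ('sample.jpg' is just a placeholder file name):
image = cv2.imread('sample.jpg') # Placeholder path; replace with your own image
text_on_detected_boxes('Happy', 60, 60, image) # Draw a green label box with the text "Happy"
cv2.imshow('Label demo', image)
cv2.waitKey(0)
cv2.destroyAllWindows()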

Testing Our Facial Expression Recognition on Images:

In the face_detector_image(img) function, the cvtColor() function is used to convert the input image to grayscale before face detection.

Then extract the region of interest (ROI) of the face from the image. This function returns three important values: the ROIs of the detected faces, their coordinates, and the original image. A rectangle is drawn on each detected face. The code to convert the image to grayscale and draw a box around our ROI is as follows:

def face_detector_image(img):
    gray = cv2.cvtColor(img.copy(), cv2.COLOR_BGR2GRAY) # Convert the image into GrayScale image
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0: # No faces detected; return empty lists so the caller's loop is skipped
        return [], [], img
    allfaces = []
    rects = []
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_gray = cv2.resize(roi_gray, (48, 48), interpolation=cv2.INTER_AREA)
        allfaces.append(roi_gray)
        rects.append((x, w, y, h))
    return rects, allfaces, img
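
As a quick sanity check, the function can be called directly on a test image; this snippet is not part of the original script, and 'test.jpg' is only a placeholder:
img = cv2.imread('test.jpg') # Placeholder path; replace with your own image
rects, faces, annotated = face_detector_image(img)
print("Detected faces:", len(faces)) # Number of face ROIs found
cv2.imshow('Detected Faces', annotated)
cv2.waitKey(0)
cv2.destroyAllWindows()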

In this part of the program, the model is applied using the ROI values. The first two lines under the function are used to get the input image and pass it to the face_detector_image(img) function, as described in the previous section.

def emotionImage(imgPath):
    img = cv2.imread(imgPath)
    rects, faces, image = face_detector_image(img)
    i = 0
    for face in faces:
        roi = face.astype("float") / 255.0
        roi = img_to_array(roi)
        roi = np.expand_dims(roi, axis=0)
        # make a prediction on the ROI, then lookup the class
        preds = classifier.predict(roi)[0]
        label = class_labels[preds.argmax()]
        label_position = (rects[i][0] + int((rects[i][1] / 2)), abs(rects[i][2] - 10))
        i += 1 # Move on to the coordinates of the next detected face
        # Overlay our detected emotion on the picture
        text_on_detected_boxes(label, label_position[0],label_position[1], image)
    cv2.imshow("Emotion Detector", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()
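
The preprocessing inside the loop above reshapes each ROI into the 4-D batch format Keras expects. A minimal sketch of the shape flow, using a zero array as a stand-in for a real face ROI:
roi = np.zeros((48, 48), np.uint8) # Stand-in for a detected 48x48 face ROI
roi = roi.astype("float") / 255.0  # Scale pixel values to [0, 1]
roi = img_to_array(roi)            # (48, 48) -> (48, 48, 1)
roi = np.expand_dims(roi, axis=0)  # (48, 48, 1) -> (1, 48, 48, 1), a batch of one
print(roi.shape)                   # The 4-D input shape the model expects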

After prediction, the output results are displayed along with the detected faces. The output results are shown in our previously created class_labels. The function text_on_detected_boxes() is used to design the labels on the detected faces. The imshow() function is used to display the window.

Facial Expression Recognition on Video Streams:

The face_detector_video(img) function is used to detect faces on video streams. We provide the input frame as an image to this function. This function returns the coordinates of the detected faces, the region of interest (ROI) of the face, and the original frame. The rectangle() function is used to draw an overlapping rectangle on the detected face.

def face_detector_video(img):
    # Convert image to grayscale
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = face_classifier.detectMultiScale(gray, 1.3, 5)
    if len(faces) == 0: # No faces detected
        return (0, 0, 0, 0), np.zeros((48, 48), np.uint8), img
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), thickness=2)
        roi_gray = gray[y:y + h, x:x + w]
    roi_gray = cv2.resize(roi_gray, (48, 48), interpolation=cv2.INTER_AREA)
    return (x, w, y, h), roi_gray, img

In this section, our model is applied to recognize expressions on the video stream and display the prediction output on it in real time.

In the first two lines, a frame is read from the input video stream and passed to the face_detector_video(frame) function. The classifier's predict() function then predicts the expression of the detected face, and the matching class_labels entry is assigned to it. Finally, imshow() displays the window with the recognized expression on each face.
def emotionVideo(cap):
    while True:
        ret, frame = cap.read()
        rect, face, image = face_detector_video(frame)
        if np.sum([face]) != 0.0:
            roi = face.astype("float") / 255.0
            roi = img_to_array(roi)
            roi = np.expand_dims(roi, axis=0)
            # make a prediction on the ROI, then lookup the class
            preds = classifier.predict(roi)[0]
            label = class_labels[preds.argmax()]
            label_position = (rect[0] + rect[1]//50, rect[2] + rect[3]//50)
            text_on_detected_boxes(label, label_position[0], label_position[1], image) # This helper can be reused in other OpenCV projects.
            fps = cap.get(cv2.CAP_PROP_FPS)
            cv2.putText(image, str(fps),(5, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        else:
            cv2.putText(image, "No Face Found", (5, 40), cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
        cv2.imshow('All', image)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    cap.release()
    cv2.destroyAllWindows()

This is the main functionality of the code. In the main function, you can use the emotionVideo() function and the emotionImage() function. If you want to use facial expression recognition on an image, just comment out the first two lines of the main function and uncomment the remaining two lines. However, make sure to provide the path of the input image in the IMAGE_PATH variable.

if __name__ == '__main__':
    camera = cv2.VideoCapture(0) # If you are using a USB Camera then change to 1 instead of 0.
    emotionVideo(camera)
    # IMAGE_PATH = "provide the image path"
    # emotionImage(IMAGE_PATH) # If you are using this on an image please provide the path
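
With the camera connected, the script can be launched from inside the project folder (emotion1.py is the file name from the downloaded project):
python3 emotion1.py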

Testing Our Facial Expression Recognition System on Raspberry Pi

Before starting the Python script, connect the Raspberry Pi camera module to the Pi's CSI camera connector.

Now, check that the Pi camera is working properly (a quick test capture, shown below, can confirm this). Then start the Python script, and a window containing the video feed will pop up. Once the Pi detects an expression, it is displayed in a green box on the video feed.
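
For the camera test, a quick still capture from the terminal will do (raspistill belongs to the legacy camera stack; newer Raspberry Pi OS releases use libcamera-still instead):
raspistill -o test_capture.jpg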


Original link:
https://circuitdigest.com/microcontroller-projects/raspberry-pi-based-emotion-recognition-using-opencv-tensorflow-and-keras
Project Author: JOYDIP DUTTA