Build Your Personal Perception Terminal with Raspberry Pi

Hello everyone, it's been a while since I shared something fun. This time I'd like to introduce a small gadget I built that lets you link your phone with a Raspberry Pi for real-time monitoring.

First, we use a Raspberry Pi 4B fitted with a camera and running Raspberry Pi OS. The seller usually supplies the system image when you buy the board; just follow their tutorial to flash it to the SD card.


(Purchased Raspberry Pi)


(Entering the system interface)

Then the key step is installing the Linux build of OpenCV. Doing this by hand is relatively painful, so I simply used a system image with OpenCV pre-installed and flashed it onto the Raspberry Pi; other software such as Miniconda you can install yourself. Then run the following test: if example1.png shows up on the desktop, the remaining steps should go smoothly!
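Before running the capture test below, you can quickly confirm that the pre-installed OpenCV is visible to Python (the exact version printed will depend on the image you flashed):

import cv2
# Print the OpenCV version to confirm the installation works
print(cv2.__version__)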

import cv2
import numpy as np
cap = cv2.VideoCapture(0)  # 0 selects the default camera (the sample code in the pre-installed image omits this argument, but it should be passed)
f, frame = cap.read()  # Grab one frame at this moment; f is the success flag
cv2.imwrite("example1.png", frame)  # Save the captured content as a png image
cap.release()  # Close the camera
print('Capture complete')

If we want the system to notice when the scene captured by the camera changes (for example, to catch a pedestrian passing by), the simplest idea is to take a photo every second and compare the frame from one second ago with the current one, computing an image-similarity score (for example a perceptual hash, or the block-wise histogram comparison used below). If the similarity falls below 0.8, we judge that someone has passed or the scene has changed~
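If you want to go the perceptual-hash route, here is a minimal sketch. It assumes opencv-contrib-python is installed (that package provides the cv2.img_hash module) and folds the 64-bit average-hash Hamming distance into a 0-1 similarity; the function name hash_similarity is just for illustration:

import cv2

def hash_similarity(file1, file2):
    hasher = cv2.img_hash.AverageHash_create()
    h1 = hasher.compute(cv2.imread(file1))  # 64-bit average hash (8 bytes)
    h2 = hasher.compute(cv2.imread(file2))
    dist = hasher.compare(h1, h2)  # Hamming distance, 0..64
    return 1.0 - dist / 64.0       # convert to a 0-1 similarity score

# e.g. hash_similarity("example1.png", "example2.png") < 0.8  ->  scene changed

The main loop in this article instead uses the block-wise histogram comparison below, which already returns a similarity between 0 and 1: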

def compare_image(self, file_image1, file_image2, size=(256, 256), part_size=(64, 64)):
    '''
    'file_image1' and 'file_image2' are the file paths passed in.
    You can create 'image1' and 'image2' Image objects using 'Image.open(path)'.
    'size' resets the image object's size, default size is 256 * 256.
    'part_size' defines the size of the split image. Default size is 64*64.
    The return value is the similarity between 'image1' and 'image2'. The higher the similarity, the closer the images are; a similarity of 1.0 indicates the images are identical.
    '''
    image1 = Image.open(file_image1)
    image2 = Image.open(file_image2)
    img1 = image1.resize(size).convert("RGB")
    sub_image1 = self.split_image(img1, part_size)
    img2 = image2.resize(size).convert("RGB")
    sub_image2 = self.split_image(img2, part_size)
    sub_data = 0
    for im1, im2 in zip(sub_image1, sub_image2):
        sub_data += self.calculate(im1, im2)
    x = size[0] / part_size[0]
    y = size[1] / part_size[1]
    pre = round((sub_data / (x * y)), 6)
    # print(str(pre * 100) + '%')
    print('Compare the image result is: ' + str(pre))
    return pre

(Write the comparison code)
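Note that compare_image is a method of the CompareImage class given in the full listing at the end of the article (it relies on the class's split_image and calculate helpers). A minimal usage sketch, assuming the two captures already exist in the working directory:

comparer = CompareImage()
similarity = comparer.compare_image("example1.png", "example2.png")
if similarity < 0.8:
    print("Scene changed")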

We then wrap all of this in a while loop so the system runs continuously. However, computing a score by itself is not enough; to make the gadget smarter we need to push the result to a mobile device. For that we use the SMTP email service, so the Pi can report automatically. Here I use a NetEase 163 mailbox; please look up how to obtain its SMTP authorization code yourself.


Construct an email request:

import smtplib
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart

def sendEmail(content, title):
    sender = '[email protected]'  # My 163 email
    receivers = ['XXXX']  # Receiving emails, can be set to your QQ email or other email
    message_content = MIMEText(content, 'plain', 'utf-8')  # Content, format, encoding
    message = MIMEMultipart('related')
    message['Subject'] = 'Camera Recognition Detection'
    message['From'] = "{}".format(sender)
    message['To'] = ",".join(receivers)
    # message['Subject'] = title
    # Third-party SMTP service
    mail_host = "smtp.163.com"  # SMTP server
    mail_user = ""  # Username
    mail_pass = ""  # Your smtp key
    img = MIMEImage(open('example1.png', 'rb').read(), _subtype='octet-stream')
    img.add_header('Content-Disposition', 'attachment', filename='example1.png')
    message.attach(img)
    message.attach(message_content)
    try:
        smtpObj = smtplib.SMTP_SSL(mail_host, 465)  # Enable SSL sending, port is usually 465
        smtpObj.login(mail_user, mail_pass)  # Login verification
        smtpObj.sendmail(sender, receivers, message.as_string())  # Send
        print("Mail has been sent successfully.")
    except smtplib.SMTPException as e:
        print(e)
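
Putting the pieces together, the monitoring loop captures two frames one second apart, compares them, and sends an email when the similarity drops below the threshold. This is only a condensed sketch of the full listing at the end of the article (the helper name monitor_once is mine):

import cv2, time

def monitor_once(threshold=0.8):
    # Capture two frames roughly one second apart
    for name in ("example1.png", "example2.png"):
        cap = cv2.VideoCapture(0)
        ok, frame = cap.read()
        if ok:
            cv2.imwrite(name, frame)
        cap.release()
        time.sleep(1)
    # Compare the two frames and alert if the scene changed
    similarity = CompareImage().compare_image("example1.png", "example2.png")
    if similarity < threshold:
        sendEmail(str(similarity), 'Camera Recognition Detection')

while True:
    monitor_once()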

Test results:


(Phone received mail)

There are other fun things you can do as well, such as running edge detection on the images directly on the Raspberry Pi:


import cv2

# Read the image from file
image = cv2.imread('/home/pi/Desktop/example2.png', cv2.IMREAD_COLOR)
# Convert the image to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
# Run the Canny algorithm for edge detection
edges = cv2.Canny(gray, threshold1=30, threshold2=100)
# Save the edge detection result
cv2.imwrite('edges_image.jpg', edges)
# Display the original image and the edge detection result
cv2.imshow('Original', image)
cv2.imshow('Edges', edges)
# Wait for a key press, then close the windows
cv2.waitKey(0)
cv2.destroyAllWindows()

Additionally, we can deploy yolo-tiny for object detection and recognition~ That way the information returned to the phone can be much richer, and we can even call large-model APIs for scene description (see: Urban Subtitle, teaching you how to use images for cross-modal reasoning about spatial knowledge). In the future we will also integrate algorithms for estimating face age, emotion, ethnicity, and gender, which will be shared in upcoming tutorials.

Data acquisition is a crucial step in any scientific experiment. Going forward, we plan to develop urban perception micro-stations that integrate a wider variety of sensors to collect multi-source information such as sound, heat, and light, and to deploy them on vehicles to gather first-hand urban data. We welcome everyone to keep following us.


(Purchased some small sensors)

In the next issue I will continue sharing techniques and methods for working with Weibo data, teaching you how to build your own social-media dataset of public events. Here are some preliminary posts:

Thoughts and practices on obtaining Weibo check-in data

Sharing of urban Weibo check-in data & address decoding and correction tutorial

Sharing of Weibo data with geographical coordinates in Beijing & methods for data acquisition and scientific research questions

Beyond Weibo, data collection from Xiaohongshu, data, code, and ideas

Finally, here is all the code shared in this article:

# -*- coding: UTF-8 -*-
__author__ = 'zy'
__time__ = '2020/5/24 20:21'
from PIL import Image, ImageFile
import cv2, time
def compare_image_with_hash(image_file1, image_file2, max_dif=0):
    """
    Compare two images by average (perceptual) hash.
    max_dif: maximum allowed Hamming distance; the smaller the value, the stricter
    the match (0 means the hashes must be identical).
    Requires the cv2.img_hash module from opencv-contrib-python.
    Note: this helper is an alternative to the histogram-based CompareImage class
    and is not called in the main loop below.
    """
    ImageFile.LOAD_TRUNCATED_IMAGES = True  # tolerate truncated files when PIL opens images
    hash_1 = cv2.img_hash.averageHash(cv2.imread(image_file1))
    hash_2 = cv2.img_hash.averageHash(cv2.imread(image_file2))
    # Hamming distance between the two 64-bit hashes (0..64)
    dif = cv2.norm(hash_1, hash_2, cv2.NORM_HAMMING)
    print(hash_1, hash_2, dif)
    return dif <= max_dif
class CompareImage():
    def calculate(self, image1, image2):
        # Compare two sub-images by their color histograms, returning a 0-1 similarity
        g = image1.histogram()
        s = image2.histogram()
        assert len(g) == len(s), "Histogram lengths differ"
        data = []
        for index in range(0, len(g)):
            if g[index] != s[index]:
                data.append(1 - abs(g[index] - s[index]) / max(g[index], s[index]))
            else:
                data.append(1)
        return sum(data) / len(g)
    def split_image(self, image, part_size):
        # Split the image into part_size tiles (the image size must be divisible by part_size)
        pw, ph = part_size
        w, h = image.size
        sub_image_list = []
        assert w % pw == 0 and h % ph == 0, "Image size must be divisible by part_size"
        for i in range(0, w, pw):
            for j in range(0, h, ph):
                sub_image = image.crop((i, j, i + pw, j + ph)).copy()
                sub_image_list.append(sub_image)
        return sub_image_list
    def compare_image(self, file_image1, file_image2, size=(256, 256), part_size=(64, 64)):
        '''
        'file_image1' and 'file_image2' are the file paths passed in.
        You can create 'image1' and 'image2' Image objects using 'Image.open(path)'.
        'size' resets the image object's size, default size is 256 * 256.
        'part_size' defines the size of the split image. Default size is 64*64.
        The return value is the similarity between 'image1' and 'image2'. The higher the similarity, the closer the images are; a similarity of 1.0 indicates the images are identical.
        '''
        image1 = Image.open(file_image1)
        image2 = Image.open(file_image2)
        img1 = image1.resize(size).convert("RGB")
        sub_image1 = self.split_image(img1, part_size)
        img2 = image2.resize(size).convert("RGB")
        sub_image2 = self.split_image(img2, part_size)
        sub_data = 0
        for im1, im2 in zip(sub_image1, sub_image2):
            sub_data += self.calculate(im1, im2)
        x = size[0] / part_size[0]
        y = size[1] / part_size[1]
        pre = round((sub_data / (x * y)), 6)
        # print(str(pre * 100) + '%')
        print('Compare the image result is: ' + str(pre))
        return pre
import smtplib, time
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
from email.mime.multipart import MIMEMultipart
def sendEmail(content,title):
    sender = '[email protected]'  # My 163 email
    receivers = ['XXXX']  # Receiving emails, can be set to your QQ email or other email
    message_content = MIMEText(content, 'plain', 'utf-8')  # Content, format, encoding
    message = MIMEMultipart('related')
    message['Subject'] = 'Camera Recognition Detection'
    message['From'] = "{}".format(sender)
    message['To'] = ",".join(receivers)
    # message['Subject'] = title
    # Third-party SMTP service
    mail_host = "smtp.163.com"  # SMTP server
    mail_user = ""  # Username
    mail_pass = ""  # Your smtp key
    img = MIMEImage(open('example1.png', 'rb').read(), _subtype='octet-stream')
    img.add_header('Content-Disposition', 'attachment', filename='example1.png')
    message.attach(img)
    message.attach(message_content)
    try:
        smtpObj = smtplib.SMTP_SSL(mail_host, 465)  # Enable SSL sending, port is usually 465
        smtpObj.login(mail_user, mail_pass)  # Login verification
        smtpObj.sendmail(sender, receivers, message.as_string())  # Send
        print("Mail has been sent successfully.")
    except smtplib.SMTPException as e:
        print(e)
if __name__=='__main__':
    while True:
        cap = cv2.VideoCapture(0)  # 0 selects the default camera (pass it explicitly)
        f, frame = cap.read()  # Grab one frame at this moment; f is the success flag
        cv2.imwrite("example1.png", frame)  # Save the captured content as a png image
        cap.release()  # Close the camera
        print('Capture complete')
        time.sleep(1)
        cap = cv2.VideoCapture(0)  # 0 selects the default camera (pass it explicitly)
        f, frame = cap.read()  # Grab one frame at this moment; f is the success flag
        cv2.imwrite("example2.png", frame)  # Save the captured content as a png image
        cap.release()  # Close the camera
        compare_image = CompareImage()
        result = compare_image.compare_image("example1.png", "example2.png")
        print(result)
        if result < 0.8:
            # Note: sendEmail currently hard-codes the subject; the second argument is unused
            sendEmail(str(result), 'Camera Recognition Detection')
