This article shows how to write a pedestrian counting program with OpenCV, Python, and Ubidots, and explains the code in detail. It covers:
- Application Requirements
- Coding – 8 Sections
- Testing
- Create Your Own Dashboard
- Results Display
1. Application Requirements
- Any embedded Linux system running an Ubuntu-derived distribution
- Python 3 or higher installed on the operating system
- OpenCV 3.0 or higher installed on the OS. If you are using Ubuntu or one of its derivatives, follow the official installation tutorial or run:
pip install opencv-contrib-python
- NumPy installed according to the official guide, or via:
pip install numpy
- imutils installed (pip install imutils)
- requests installed (pip install requests)
2. Coding
from imutils.object_detection import non_max_suppression
import numpy as np
import imutils
import cv2
import base64  # required by convert_to_base64()
import requests
import time
import argparse
URL_EDUCATIONAL = "http://things.ubidots.com"
URL_INDUSTRIAL = "http://industrial.api.ubidots.com"
INDUSTRIAL_USER = True  # Set this to False if you are an educational user
TOKEN = "...."  # Put your Ubidots TOKEN here
DEVICE = "detector"  # Device where the result will be stored
VARIABLE = "people"  # Variable where the result will be stored

# OpenCV pre-trained SVM with HOG people features
HOGCV = cv2.HOGDescriptor()
HOGCV.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
cv2.HOGDescriptor_getDefaultPeopleDetector() loads OpenCV's pre-trained pedestrian detection model (an SVM trained on HOG features) and hands its coefficients to the HOG descriptor so it can evaluate detections.
def detector(image):
    '''
    @image is a numpy array
    '''
    image = imutils.resize(image, width=min(400, image.shape[1]))
    clone = image.copy()

    (rects, weights) = HOGCV.detectMultiScale(image, winStride=(8, 8),
                                              padding=(32, 32), scale=1.05)

    # Applies non-max suppression from the imutils package to discard
    # overlapping boxes
    rects = np.array([[x, y, x + w, y + h] for (x, y, w, h) in rects])
    result = non_max_suppression(rects, probs=None, overlapThresh=0.65)

    return result
Image credit: https://www.pyimagesearch.com
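To see what the non-max suppression step inside detector() does, here is a minimal sketch with invented box coordinates (not taken from the article): heavily overlapping rectangles around the same person are collapsed into a single detection.

# Minimal sketch with invented coordinates showing how non_max_suppression
# collapses heavily overlapping boxes into a single detection.
import numpy as np
from imutils.object_detection import non_max_suppression

boxes = np.array([
    [10, 10, 110, 210],   # detection of person A
    [12, 14, 112, 208],   # near-duplicate detection of person A
    [200, 50, 300, 250],  # detection of a second person
])
picked = non_max_suppression(boxes, probs=None, overlapThresh=0.65)
print(len(picked))  # 2: the two overlapping boxes are merged into one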
def localDetect(image_path):
    result = []
    image = cv2.imread(image_path)
    if image is None:
        print("[ERROR] Could not read your local image")
        return result
    print("[INFO] Detecting people")
    result = detector(image)

    # Shows the result
    for (xA, yA, xB, yB) in result:
        cv2.rectangle(image, (xA, yA), (xB, yB), (0, 255, 0), 2)

    cv2.imshow("result", image)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

    return (result, image)
def cameraDetect(token, device, variable, sample_time=5):
    cap = cv2.VideoCapture(0)
    init = time.time()

    # Allowed sample time for Ubidots is 1 dot/second
    if sample_time < 1:
        sample_time = 1

    while(True):
        # Capture frame-by-frame
        ret, frame = cap.read()
        frame = imutils.resize(frame, width=min(400, frame.shape[1]))
        result = detector(frame.copy())

        # Shows the result
        for (xA, yA, xB, yB) in result:
            cv2.rectangle(frame, (xA, yA), (xB, yB), (0, 255, 0), 2)
        cv2.imshow('frame', frame)

        # Sends results
        if time.time() - init >= sample_time:
            print("[INFO] Sending actual frame results")
            # Converts the image to base 64 and adds it to the context
            b64 = convert_to_base64(frame)
            context = {"image": b64}
            sendToUbidots(token, device, variable,
                          len(result), context=context)
            init = time.time()

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # When everything is done, release the capture
    cap.release()
    cv2.destroyAllWindows()
def convert_to_base64(image):
    image = imutils.resize(image, width=400)
    img_str = cv2.imencode('.png', image)[1].tobytes()
    b64 = base64.b64encode(img_str)
    return b64.decode('utf-8')
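If you want to confirm that the string produced by convert_to_base64() really encodes a valid image, a quick round-trip check like the sketch below can be used (the decode_from_base64 helper is hypothetical and not part of the original script):

# Sanity-check sketch: decode the base64 string back into an image.
import base64
import numpy as np
import cv2

def decode_from_base64(b64_string):
    # Reverse of convert_to_base64: base64 -> bytes -> numpy buffer -> image
    img_bytes = base64.b64decode(b64_string)
    buffer = np.frombuffer(img_bytes, dtype=np.uint8)
    return cv2.imdecode(buffer, cv2.IMREAD_COLOR)

# Example usage (assumes 'frame' is a BGR image captured with cv2):
# restored = decode_from_base64(convert_to_base64(frame))
# print(restored.shape)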
def detectPeople(args):
    image_path = args["image"]
    camera = True if str(args["camera"]) == 'true' else False

    # Routine to read a local image
    if image_path is not None and not camera:
        print("[INFO] Image path provided, attempting to read image")
        (result, image) = localDetect(image_path)
        print("[INFO] Sending results")
        # Converts the image to base 64 and adds it to the context
        b64 = convert_to_base64(image)
        context = {"image": b64}

        # Sends the result
        req = sendToUbidots(TOKEN, DEVICE, VARIABLE,
                            len(result), context=context)
        if req.status_code >= 400:
            print("[ERROR] Could not send data to Ubidots")
            return req

    # Routine to read images from the webcam
    if camera:
        print("[INFO] Reading camera images")
        cameraDetect(TOKEN, DEVICE, VARIABLE)
def buildPayload(variable, value, context):
    return {variable: {"value": value, "context": context}}
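For reference, buildPayload() nests the value and its context under the variable name, which is the JSON body format Ubidots expects. The snippet below is illustrative only (the base64 string is truncated for readability):

# Illustrative only: what buildPayload() returns for a sample count of 3.
buildPayload("people", 3, {"image": "iVBORw0KGgo..."})
# -> {"people": {"value": 3, "context": {"image": "iVBORw0KGgo..."}}}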
def sendToUbidots(token, device, variable, value, context={}, industrial=INDUSTRIAL_USER):
    # Builds the endpoint
    url = URL_INDUSTRIAL if industrial else URL_EDUCATIONAL
    url = "{}/api/v1.6/devices/{}".format(url, device)

    payload = buildPayload(variable, value, context)
    headers = {"X-Auth-Token": token, "Content-Type": "application/json"}

    attempts = 0
    status = 400

    while status >= 400 and attempts <= 5:
        req = requests.post(url=url, headers=headers, json=payload)
        status = req.status_code
        attempts += 1
        time.sleep(1)

    return req
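As a quick orientation, a call such as the sketch below (using the constants defined at the top of the script and an assumed count of 2) would POST to the industrial endpoint built from the format string above:

# Illustrative only: with DEVICE = "detector", this request goes to
# http://industrial.api.ubidots.com/api/v1.6/devices/detector
# with body {"people": {"value": 2, "context": {}}}.
req = sendToUbidots(TOKEN, DEVICE, VARIABLE, 2)
print(req.status_code)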
def argsParser():
    ap = argparse.ArgumentParser()
    ap.add_argument("-i", "--image", default=None,
                    help="path to the image test file")
    ap.add_argument("-c", "--camera", default=False,
                    help="Set as true if you wish to use the camera")
    args = vars(ap.parse_args())
    return args
Section 7 nearly completes the code. The argsParser() function simply parses the parameters passed to the script from the terminal and returns them as a dictionary. The parser takes two parameters:
- image: the path to an image file on your system
- camera: if set to 'true', this parameter makes the script call the cameraDetect() method
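For example, this is a sketch of the dictionary argsParser() returns for the two sample invocations used in the Testing section:

# python peopleCounter.py -i dataset/image_1.png
#   -> {"image": "dataset/image_1.png", "camera": False}
# python peopleCounter.py -c true
#   -> {"image": None, "camera": "true"}  # detectPeople() compares this string with 'true'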
Section 8:
def main():
    args = argsParser()
    detectPeople(args)


if __name__ == '__main__':
    main()
3. Testing

To test with a local image, run:
python peopleCounter.py -i PATH_TO_IMAGE_FILE
for example:
python peopleCounter.py -i dataset/image_1.png
To count people from the webcam instead, run:
python peopleCounter.py -c true
4. Create Your Own Dashboard
- Canvas Widget Examples
- Canvas Widget Introductory Demo
- Canvas: Creating a Real Time Widget
HTML
<img id="img" width="400px" height="auto"/>
JS
var socket;
var srv = "industrial.ubidots.com:443";
// var srv = "app.ubidots.com:443" // Uncomment this line if you are an educational user
var VAR_ID = "5ab402dabbddbd3476d85967"; // Put your var Id here
var TOKEN = ""; // Put your token here

$( document ).ready(function() {

    function renderImage(imageBase64){
        if (!imageBase64) return;
        $('#img').attr('src', 'data:image/png;base64, ' + imageBase64);
    }

    // Function to retrieve the last value; it runs only once
    function getDataFromVariable(variable, token, callback) {
        var url = 'https://things.ubidots.com/api/v1.6/variables/' + variable + '/values';
        var headers = {
            'X-Auth-Token': token,
            'Content-Type': 'application/json'
        };

        $.ajax({
            url: url,
            method: 'GET',
            headers: headers,
            data : {
                page_size: 1
            },
            success: function (res) {
                if (res.results.length > 0){
                    renderImage(res.results[0].context.image);
                }
                callback();
            }
        });
    }

    // Implements the connection to the server
    socket = io.connect("https://"+ srv, {path: '/notifications'});
    var subscribedVars = [];

    // Function to publish the variable ID
    var subscribeVariable = function (variable, callback) {
        // Publishes the variable ID that wishes to listen
        socket.emit('rt/variables/id/last_value', {
            variable: variable
        });
        // Listens for changes
        socket.on('rt/variables/' + variable + '/last_value', callback);
        subscribedVars.push(variable);
    };

    // Function to unsubscribe from listening
    var unSubscribeVariable = function (variable) {
        socket.emit('unsub/rt/variables/id/last_value', {
            variable: variable
        });
        var pst = subscribedVars.indexOf(variable);
        if (pst !== -1){
            subscribedVars.splice(pst, 1);
        }
    };

    var connectSocket = function (){
        // Implements the socket connection
        socket.on('connect', function(){
            console.log('connect');
            socket.emit('authentication', {token: TOKEN});
        });
        window.addEventListener('online', function () {
            console.log('online');
            socket.emit('authentication', {token: TOKEN});
        });
        socket.on('authenticated', function () {
            console.log('authenticated');
            subscribedVars.forEach(function (variable_id) {
                socket.emit('rt/variables/id/last_value', { variable: variable_id });
            });
        });
    }

    /* Main Routine */
    getDataFromVariable(VAR_ID, TOKEN, function(){
        connectSocket();
    });

    connectSocket();

    // Subscribe Variable with your own code.
    subscribeVariable(VAR_ID, function(value){
        var parsedValue = JSON.parse(value);
        console.log(parsedValue);
        renderImage(parsedValue.context.image);
    })
});
Third-party libraries
- https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js
- https://iot.cdnedge.bluemix.net/ind/static/js/libs/socket.io/socket.io.min.js
Once you save your widget, you should see a result similar to the one below:
5. Results Display
About the Author: Jose García
UIS Electronic Engineer, Ubuntu user, Bucaramanga native, programmer, sometimes bored, wants to travel the world but has little hope of achieving this dream. Hardware and software developer @Ubidots
Original Title:
People Counting with OpenCV, Python & Ubidots
Original Link:
Translator Profile: Wu Zhendong, Master’s degree in Computer Science and Decision from the University of Lorraine, France. Currently engaged in artificial intelligence and big data-related work, striving to become a data scientist for life. From Jinan, Shandong, can’t operate an excavator, but can write Java, Python, and PPT.