Detecting objects and transferring images over MQTT

In this article, we will learn how to capture and save an image every five seconds, detect objects in it using OpenCV and the YOLO object detector, convert the image to a byte array, and publish it over MQTT. A second, remote device will subscribe to the topic, receive the image, and save it as a JPG.

We will be using the YOLOv3 algorithm and a free public MQTT broker (broker.emqx.io).
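Before diving in, it is worth seeing why this works at all: a JPG file is just a sequence of bytes, and an MQTT payload is also just bytes, so the image can travel over the wire unchanged. A minimal sketch of that round trip, with no broker involved (the file name demo.jpg is just a placeholder):

```python
def round_trip(payload: bytes, path: str) -> bytes:
    """Simulate the receiver: write the payload to disk, then read it back."""
    with open(path, "wb") as f:
        f.write(payload)
    with open(path, "rb") as f:
        return f.read()

# any bytes stand in for a captured JPG in this sketch
original = b"\xff\xd8\xff\xe0 fake jpeg bytes \xff\xd9"
assert round_trip(original, "demo.jpg") == original
```

This is exactly what happens end to end below: the sender reads the processed JPG as bytes and publishes them, and the receiver writes the payload straight back to a .jpg file.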

Before we proceed, I assume that you have basic knowledge of the following:
· OpenCV
· NumPy
· MQTT

Let us first write the Python script for our first device, which will act as the surveillance system.

Installation:
· pip install opencv-python
· pip install numpy
· pip install paho-mqtt

If you still have issues installing OpenCV, follow the articles below:
· Installing on Windows
· Installing on Ubuntu

We will also need to download a few files: the pre-trained YOLOv3 weights, the configuration file, and the class names file. Download them from the links below.

(Make sure the three downloaded files are saved in the same directory as sendImage.py.)
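A missing weights, config, or names file will otherwise only surface later as an opaque OpenCV error, so a quick up-front check can save some debugging. A small sketch (the three file names match the ones sendImage.py loads):

```python
import os

REQUIRED_FILES = ["yolov3.weights", "yolov3.cfg", "coco.names"]

def missing_files(paths):
    """Return the subset of paths that do not exist on disk."""
    return [p for p in paths if not os.path.isfile(p)]

missing = missing_files(REQUIRED_FILES)
if missing:
    print("Missing files:", ", ".join(missing))
```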

Create a new file in the IDE and save the file as sendImage.py

Start by importing the required modules:

import paho.mqtt.client as mqtt
import paho.mqtt.publish as publish
import cv2
import numpy as np
import time

We will now initialize variables for the broker and the image

broker = "broker.emqx.io"
port = 1883
timelive = 60
image_name = "capture.jpg"

Our first function will capture and save the image, then call the process_image() function:

def save_image():
    # the index passed to cv2.VideoCapture can be 0, 1, 2, ... depending on your device
    videoCaptureObject = cv2.VideoCapture(0)
    ret, frame = videoCaptureObject.read()
    videoCaptureObject.release()
    # only save and process the frame if the capture actually succeeded
    if ret:
        cv2.imwrite(image_name, frame)
        process_image()

The process_image() function is where all the magic happens.

def process_image():
    boxes = []
    confs = []
    class_ids = []

    # load the YOLOv3 weights and configuration file using the OpenCV dnn module
    net = cv2.dnn.readNet("yolov3.weights", "yolov3.cfg")

    # store all the trained object names from the coco.names file in the list names
    with open("coco.names", "r") as n:
        names = [line.strip() for line in n.readlines()]

    # names of the output layers to be computed by the forward pass
    output_layers = net.getUnconnectedOutLayersNames()
    colors = np.random.uniform(0, 255, size=(len(names), 3))

    # read the image from the image_name variable
    # (the same image that was saved by the save_image function)
    image = cv2.imread(image_name)
    height, width, channels = image.shape

    # use blobFromImage to preprocess the image for the network
    blob = cv2.dnn.blobFromImage(image, scalefactor=0.00392, size=(160, 160), mean=(0, 0, 0))
    net.setInput(blob)

    # each detection contains the x/y centre coordinates and width/height of the
    # bounding box (relative to the image size), followed by a score for every
    # class in coco.names; the predicted object is the class with the highest score
    outputs = net.forward(output_layers)
    for output in outputs:
        for check in output:
            # scores holds the confidence for each corresponding class
            scores = check[5:]

            # np.argmax() gets the index of the highest-scoring class, which
            # maps to the class name at the same index in the names list
            class_id = np.argmax(scores)
            conf = scores[class_id]
            # keep predictions with a confidence of more than 40%
            if conf > 0.4:
                center_x = int(check[0] * width)
                center_y = int(check[1] * height)
                w = int(check[2] * width)
                h = int(check[3] * height)
                x = int(center_x - w / 2)
                y = int(center_y - h / 2)
                boxes.append([x, y, w, h])
                confs.append(float(conf))
                class_ids.append(class_id)

    # draw bounding boxes and labels, removing duplicate detections of the
    # same object with non-maximum suppression
    indexes = cv2.dnn.NMSBoxes(boxes, confs, 0.5, 0.5)
    font = cv2.FONT_HERSHEY_PLAIN
    for i in range(len(boxes)):
        if i in indexes:
            x, y, w, h = boxes[i]
            label = str(names[class_ids[i]])
            color = colors[class_ids[i]]
            cv2.rectangle(image, (x, y), (x + w, y + h), color, 2)
            cv2.putText(image, label, (x, y - 5), font, 1, color, 1)

    # resize (to 220% of the original) and save the image
    width = int(image.shape[1] * 220 / 100)
    height = int(image.shape[0] * 220 / 100)
    dim = (width, height)
    resized = cv2.resize(image, dim, interpolation=cv2.INTER_AREA)
    cv2.imwrite('processed.jpg', resized)

    # read the processed image back and convert it to a byte array
    with open("processed.jpg", "rb") as f:
        byteArr = f.read()

    # topic to publish on
    TOPIC = "IMAGE"

    # publish the byte array on the IMAGE topic; publish.single opens (and
    # closes) its own connection to the broker, so no separate client is needed
    publish.single(TOPIC, byteArr, hostname=broker, port=port, keepalive=timelive)
    print("Published")

Finally, start things off by calling the save_image() function every 5 seconds:

while True:
    save_image()
    time.sleep(5)

Let us now write the second script, on a second device, which will be used to download and view the detected-objects image received over MQTT.

Installation:
· pip install paho-mqtt

Create a new file in the IDE and save the file as receiveImage.py

Start with importing the required modules

import paho.mqtt.client as mqtt

Initialize variables for the broker

broker = "broker.emqx.io"
port = 1883
timelive = 60

Connect to the broker with the on_connect() function and subscribe to the topic IMAGE

def on_connect(client, userdata, flags, rc):
    print("Connected with result code " + str(rc))
    # subscribe to the topic IMAGE, the same topic that was used to
    # publish the image on the previous device
    client.subscribe("IMAGE")

Write the on_message() function to save the file as soon as a payload is received:

def on_message(client, userdata, msg):
    # create/open the jpg file [detected_objects.jpg] and write the received payload
    with open('detected_objects.jpg', "wb") as f:
        f.write(msg.payload)

The root function connects to the broker and listens for incoming messages by looping forever:

def mqtt_sub():
    client = mqtt.Client()
    # register the callbacks before connecting so on_connect fires reliably
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect(broker, port, timelive)
    client.loop_forever()

Final line of code to start the script

mqtt_sub()

Every 5 seconds, a refreshed JPG named 'detected_objects.jpg' will appear in the same directory as the script file.
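To actually watch the feed on the receiving device, one option is to poll the file's modification time and redisplay the image whenever the subscriber rewrites it (assuming OpenCV is also installed on that device for display). A sketch of the change-detection part, which needs only the standard library:

```python
import os

def file_changed(path, last_mtime):
    """Return (changed, mtime): whether path was modified after last_mtime."""
    if not os.path.isfile(path):
        return False, last_mtime
    mtime = os.path.getmtime(path)
    return mtime > last_mtime, mtime

# usage sketch, run alongside receiveImage.py:
# last = 0.0
# while True:
#     changed, last = file_changed("detected_objects.jpg", last)
#     if changed:
#         image = cv2.imread("detected_objects.jpg")
#         cv2.imshow("detected objects", image)
#         cv2.waitKey(1)
#     time.sleep(1)
```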

And that is it: we have created a surveillance system that captures an image every 5 seconds, detects objects in it, and sends the image over the internet with MQTT.
