Detection via RT-Thread and ROS

This is the last document in the series on using RT-Thread with ROS to build a camera car. It brings together the previous content to realize a car that can detect targets.

This document builds on almost everything covered previously, so it is recommended to familiarize yourself with the earlier documents first. If you are already familiar with them, you will find this document quite short, since it rests entirely on that foundation.

Content that should be familiar by now:

  • Understand how a CNN works

  • Train your own object detection model using Darknet

  • Use rosserial to establish a connection between RT-Thread and ROS

  • Publish image data using ROS

The following sections introduce how to feed the image data published by ROS into Darknet for target detection.

What we will use below is a ROS package, which is also open source:

# Initialize the workspace
$ mkdir -p catkin_workspace/src
$ cd catkin_workspace/src
$ catkin_init_workspace

# Download the source code
$ git clone --recursive https://github.com/leggedrobotics/darknet_ros.git

In addition to the source code, we also need to download some trained neural network weights and put them in the following directory:

$ catkin_workspace/src/darknet_ros/darknet_ros/yolo_network_config/weights/
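If you prefer to fetch the weights manually, the pre-trained YOLO weights published on the official Darknet site can be downloaded straight into that directory. The file names below (yolov2.weights, yolov3.weights) are assumptions based on the default configurations shipped with darknet_ros; adjust them to match the config you intend to run:

# Download pre-trained YOLO weights into the weights directory (file names assumed)
$ cd catkin_workspace/src/darknet_ros/darknet_ros/yolo_network_config/weights/
$ wget https://pjreddie.com/media/files/yolov2.weights
$ wget https://pjreddie.com/media/files/yolov3.weights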

If the download from overseas servers is too slow, a domestic CDN acceleration mirror can also be used.

Once the source code and weights are downloaded, we are ready to compile.

To ensure that Darknet can obtain the camera data, we need to tell it where the camera images are published. Modify this file:

$ catkin_workspace/src/darknet_ros/darknet_ros/config/ros.yaml

Change the topic below to the topic on which your images are published. For example, mine is published on /usb_cam/image_raw:

camera_reading:
    topic: /usb_cam/image_raw
    queue_size: 1

Then you can compile the package in the catkin_workspace directory:

$ catkin_make
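As a side note, if detection later turns out to be slow, rebuilding with compiler optimizations enabled usually helps; the darknet_ros documentation suggests a Release build:

# Rebuild with optimizations for faster detection (run from catkin_workspace)
$ catkin_make -DCMAKE_BUILD_TYPE=Release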

If everything goes well, the compilation completes without much extra work on your part. After compiling, remember to update the environment variables so that the package can be launched later.

$ source devel/setup.bash

Before performing target detection, we start the ROS master:

$ roscore

Then start a camera node:

$ roslaunch usb_cam usb_cam-test.launch

In this way, you can see the camera data in real time. It does not matter where the camera is; it can be on the car or on the computer. This is the beauty of ROS: as long as a node publishes the camera messages, ROS can process them no matter where the camera is.
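Before starting Darknet, it is worth confirming that the image topic is actually being published and that its name matches the one configured in ros.yaml. A quick check with the standard rostopic tool looks like this (using the /usb_cam/image_raw topic from this example):

# List the image topics currently available
$ rostopic list | grep image
# Check that frames are arriving at a reasonable rate
$ rostopic hz /usb_cam/image_raw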

Next we start the Darknet node:

$ roslaunch darknet_ros darknet_ros.launch

In the following picture, you can see two video streams: the left one is the unprocessed real-time image, and the right one shows the result of running target detection.

RT-Thread, as a real-time operating system, is responsible for control, while Linux provides a rich set of software packages for running algorithms. The two complement each other's strengths and work well together.
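The detection results are not only visualized; darknet_ros also publishes them as ROS topics, so another node can consume the bounding boxes and, for example, turn them into motion commands forwarded to the RT-Thread car over rosserial. The topic names below (/darknet_ros/bounding_boxes and /darknet_ros/detection_image) are the package's defaults; check ros.yaml if your configuration differs:

# Print the detected bounding boxes as they are published
$ rostopic echo /darknet_ros/bounding_boxes
# View the annotated detection image stream
$ rosrun image_view image_view image:=/darknet_ros/detection_image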

Darknet ROS: https://github.com/leggedrobotics/darknet_ros


Assoc. Prof. Wiroon Sriborrirux, Founder of Advance Innovation Center (AIC) and Bangsaen Design House (BDH), Electrical Engineering Department, Faculty of Engineering, Burapha University