# Detection via RT-Thread and ROS

### [Introduction](https://www.rt-thread.org/document/site/#/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/object-detection?id=%e5%bc%95%e8%a8%80) <a href="#yin-yan" id="yin-yan"></a>

This is the final document in the series on building a camera car with RT-Thread and ROS. It brings together the previous content to build a car that can detect targets.

![img](https://www.rt-thread.org/document/site/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/figures/06-01.png)

This document builds on almost all of the previous documents, so it is recommended to review them first. If you are already familiar with that content, you will find this document very short, since it rests on the foundation laid earlier.

Content that should be familiar by now:

* Understand how a CNN works
* Train your own object detection model using Darknet
* Use rosserial to establish a connection between RT-Thread and ROS
* Publish image data using ROS

The following sections explain how to feed the image data published by ROS into Darknet for object detection.

### [1. Darknet ROS](https://www.rt-thread.org/document/site/#/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/object-detection?id=_1darknet-ros) <a href="#id-1darknet-ros" id="id-1darknet-ros"></a>

#### [1.1 Get the source code](https://www.rt-thread.org/document/site/#/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/object-detection?id=_11-%e8%8e%b7%e5%8f%96%e6%ba%90%e7%a0%81) <a href="#id-11-huo-qu-yuan-ma" id="id-11-huo-qu-yuan-ma"></a>

What we will use below is actually an open-source ROS package:

```
# Initialize the workspace
$ mkdir catkin_workspace
$ cd catkin_workspace/src
$ catkin_init_workspace

# Download the source code
$ git clone --recursive https://github.com/leggedrobotics/darknet_ros.git
```

In addition to the source code, we also need to download some trained neural network weights and put them in the following directory:

```
catkin_workspace/src/darknet_ros/darknet_ros/yolo_network_config/weights/
```

If the download from the original source is too slow, the weights are also available from a CDN mirror in mainland China:

* yolov2-tiny.weights: <https://wuhanshare-1252843818.cos.ap-guangzhou.myqcloud.com/yolov2-tiny.weights>
* yolov2.weights: <https://wuhanshare-1252843818.cos.ap-guangzhou.myqcloud.com/yolov2.weights>
* yolov3.weights: <https://wuhanshare-1252843818.cos.ap-guangzhou.myqcloud.com/yolov3.weights>
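If you would rather script the download than fetch each file by hand, the following is a minimal sketch using only Python's standard library. The URLs are the mirror links listed above, and the destination directory is the one darknet_ros expects; nothing here is part of the darknet_ros package itself.

```python
import os
import urllib.request

# Mirror URLs listed in this document.
WEIGHT_URLS = [
    "https://wuhanshare-1252843818.cos.ap-guangzhou.myqcloud.com/yolov2-tiny.weights",
    "https://wuhanshare-1252843818.cos.ap-guangzhou.myqcloud.com/yolov2.weights",
    "https://wuhanshare-1252843818.cos.ap-guangzhou.myqcloud.com/yolov3.weights",
]

# Directory darknet_ros expects the weights in (relative to the workspace root).
WEIGHTS_DIR = "catkin_workspace/src/darknet_ros/darknet_ros/yolo_network_config/weights"


def weight_path(url, weights_dir=WEIGHTS_DIR):
    """Destination path for a weight file, derived from the URL's basename."""
    return os.path.join(weights_dir, url.rsplit("/", 1)[-1])


def fetch_weights(urls=WEIGHT_URLS, weights_dir=WEIGHTS_DIR):
    """Download any weight files that are not already present."""
    os.makedirs(weights_dir, exist_ok=True)
    for url in urls:
        dest = weight_path(url, weights_dir)
        if not os.path.exists(dest):  # skip files downloaded earlier
            urllib.request.urlretrieve(url, dest)
```

Run `fetch_weights()` from the workspace root; already-downloaded files are skipped, so interrupted downloads can simply be retried.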

Once the source code and weights are in place, we are ready to compile.

#### [1.2 Compile source code](https://www.rt-thread.org/document/site/#/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/object-detection?id=_12-%e7%bc%96%e8%af%91%e6%ba%90%e7%a0%81) <a href="#id-12-bian-yi-yuan-ma" id="id-12-bian-yi-yuan-ma"></a>

In order to ensure that Darknet can obtain the camera data, we need to tell it where the camera information is published. Modify this file:

```
catkin_workspace/src/darknet_ros/darknet_ros/config/ros.yaml
```

Change the topic below to the one on which your camera publishes images. For example, mine publishes to /usb\_cam/image\_raw:

```
camera_reading:
    topic: /usb_cam/image_raw
    queue_size: 1
```

Then you can compile the package in the catkin\_workspace directory:

```
$ catkin_make
```

If everything goes well, the build finishes without errors; there is not much more to do. After compiling, remember to source the workspace so that the package can be started normally later:

```
$ source devel/setup.bash
```

#### [1.3 Object Detection](https://www.rt-thread.org/document/site/#/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/object-detection?id=_13-%e7%9b%ae%e6%a0%87%e6%a3%80%e6%b5%8b) <a href="#id-13-mu-biao-jian-ce" id="id-13-mu-biao-jian-ce"></a>

Before performing object detection, we start the ROS master:

```
$ roscore
```

Then start a camera node:

```
$ roslaunch usb_cam usb_cam-test.launch
```

You should now see the camera feed in real time. It doesn't matter where the camera is: it can be on the car or on the computer. This is the beauty of ROS: as long as some node publishes the camera messages, any other node can process them, no matter where the camera physically sits:

![img](https://www.rt-thread.org/document/site/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/figures/06-02.png)

Next we start the Darknet node:

```
$ roslaunch darknet_ros darknet_ros.launch
```

In the picture below you can see two video streams: the left one is the unprocessed real-time image, and the right one shows the result of running object detection:

![img](https://www.rt-thread.org/document/site/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/figures/06-03.png)
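Besides the annotated image, the darknet_ros package also publishes the detections themselves as bounding-box messages, which is what a car controller would actually consume. The sketch below is ROS-free so it can run anywhere: each detection is a plain dict whose keys mirror the fields of darknet_ros's bounding-box message (`Class`, `probability`, `xmin`, `ymin`, `xmax`, `ymax`); treat those field names as an assumption to verify against the message definition in your checkout.

```python
def filter_detections(boxes, min_prob=0.5, classes=None):
    """Keep detections above a confidence threshold, optionally by class.

    Each box is a dict mirroring the darknet_ros bounding-box fields
    (assumed here): {"Class": str, "probability": float,
                     "xmin": int, "ymin": int, "xmax": int, "ymax": int}
    """
    return [
        b for b in boxes
        if b["probability"] >= min_prob
        and (classes is None or b["Class"] in classes)
    ]


def box_center(box):
    """Pixel center of a bounding box; useful for steering toward a target."""
    return ((box["xmin"] + box["xmax"]) / 2.0,
            (box["ymin"] + box["ymax"]) / 2.0)
```

In a real node, the same logic would sit in a subscriber callback: filter out low-confidence boxes, then use the center of the chosen box to decide whether the car should turn left or right.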

### [2. Summary](https://www.rt-thread.org/document/site/#/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/object-detection?id=_2%e6%80%bb%e7%bb%93) <a href="#id-2-zong-jie" id="id-2-zong-jie"></a>

RT-Thread is responsible for control as a real-time operating system, while Linux is responsible for providing a rich set of software packages to run algorithms. The combination of the two complements each other's strengths and works well together.

### [3. References](https://www.rt-thread.org/document/site/#/rt-thread-version/rt-thread-standard/tutorial/smart-car/object-detection/object-detection?id=_3%e5%8f%82%e8%80%83%e6%96%87%e7%8c%ae) <a href="#id-3-can-kao-wen-xian" id="id-3-can-kao-wen-xian"></a>

Darknet ROS: <https://github.com/leggedrobotics/darknet_ros>
