Control the car using RT-Thread

RT-Thread connects to ROS to control the car

This document introduces how to connect RT-Thread with ROS to build a remote-controlled car with a camera.

The RT-Thread side of the code was already covered in the previous document: RT-Thread connects to ROS to control the car. Building on that, we only need to modify the ROS code.

Here is a diagram of the entire system, so you can try building a car like this yourself:

This is what the actual car looks like:

The following steps are all performed on a computer with ROS installed

The installation of ROS has been introduced before, so I won’t repeat it here. Let’s create a new workspace first:

$ mkdir -p telebot_ws/src && cd telebot_ws/src
$ catkin_init_workspace

Still in the src directory, let's create a new ROS package:

$ catkin_create_pkg telebot rospy

Now we can build the workspace. Go back to the telebot_ws directory and run:

$ catkin_make
$ source devel/setup.bash
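
Remember to source devel/setup.bash in every new terminal you open later; otherwise rosrun will not find the telebot package.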

We first create a node that monitors keyboard input and publishes the received keys to the topic /keys (communication between ROS nodes is done by publishing and subscribing to topics). Create a new file key_publisher.py in the telebot_ws/src/telebot/src directory:

#!/usr/bin/python

# Import packages
import sys, select, tty, termios
import rospy
from std_msgs.msg import String

if __name__ == '__main__':
    # Initialize the node and the /keys publisher
    key_pub = rospy.Publisher('keys', String, queue_size=1)
    rospy.init_node("keyboard_driver")
    rate = rospy.Rate(100)

    # Put the terminal into cbreak mode so keys are read one at a time
    old_attr = termios.tcgetattr(sys.stdin)
    tty.setcbreak(sys.stdin.fileno())
    print("Publishing keystrokes. Press Ctrl-C to exit ...")

    # Poll the keyboard and publish each key press
    while not rospy.is_shutdown():
        if select.select([sys.stdin], [], [], 0)[0] == [sys.stdin]:
            # Publish the key that was read
            key_pub.publish(sys.stdin.read(1))
        rate.sleep()

    # Restore the original terminal settings
    termios.tcsetattr(sys.stdin, termios.TCSADRAIN, old_attr)

The code above is only about 20 lines and is commented, so I won't go through it in detail. Let's make the file executable:

$ chmod u+x key_publisher.py
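
Before running any nodes, make sure a ROS master is running in a separate terminal:

$ roscore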

You can start the node:

$ rosrun telebot key_publisher.py

In another terminal we can now see a /keys topic that continuously outputs the pressed keys:

$ rostopic echo /keys
data: "w"
---
data: "a"
---
data: "s"
---

Now there is a node publishing our key messages. The next step is to convert these key messages into motion commands for the robot, i.e. publish them to /cmd_vel. Create a new file keys_to_twist_with_ramps.py in the telebot_ws/src/telebot/src directory:

#!/usr/bin/python

# Import packages
import rospy
import math
from std_msgs.msg import String
from geometry_msgs.msg import Twist

# Keyboard-to-velocity mapping: w/x for forward/backward, a/d for turning, s to stop
key_mapping = { 'w': [ 0, 1], 'x': [ 0, -1],
                'a': [ -1, 0], 'd': [1,  0],
                's': [ 0, 0] }
g_twist_pub = None
g_target_twist = None
g_last_twist = None
g_last_twist_send_time = None
g_vel_scales = [0.1, 0.1] # default to very slow
g_vel_ramps = [1, 1] # units: meters per second^2

# Ramp the velocity to avoid sudden jumps
def ramped_vel(v_prev, v_target, t_prev, t_now, ramp_rate):
  # compute maximum velocity step
  step = ramp_rate * (t_now - t_prev).to_sec()
  sign = 1.0 if (v_target > v_prev) else -1.0
  error = math.fabs(v_target - v_prev)
  if error < step: # we can get there within this timestep. we're done.
    return v_target
  else:
    return v_prev + sign * step  # take a step towards the target

def ramped_twist(prev, target, t_prev, t_now, ramps):
  tw = Twist()
  tw.angular.z = ramped_vel(prev.angular.z, target.angular.z, t_prev,
                            t_now, ramps[0])
  tw.linear.x = ramped_vel(prev.linear.x, target.linear.x, t_prev,
                           t_now, ramps[1])
  return tw

# Publish the control command to /cmd_vel
def send_twist():
  global g_last_twist_send_time, g_target_twist, g_last_twist,\
         g_vel_scales, g_vel_ramps, g_twist_pub
  t_now = rospy.Time.now()
  g_last_twist = ramped_twist(g_last_twist, g_target_twist,
                              g_last_twist_send_time, t_now, g_vel_ramps)
  g_last_twist_send_time = t_now
  g_twist_pub.publish(g_last_twist)

# Callback for the /keys subscription
def keys_cb(msg):
  global g_target_twist, g_last_twist, g_vel_scales
  if len(msg.data) == 0 or msg.data[0] not in key_mapping:
    return # unknown key.
  vels = key_mapping[msg.data[0]]
  g_target_twist.angular.z = vels[0] * g_vel_scales[0]
  g_target_twist.linear.x  = vels[1] * g_vel_scales[1]

# Fetch the velocity/acceleration scale parameters passed in
def fetch_param(name, default):
  if rospy.has_param(name):
    return rospy.get_param(name)
  else:
    print("parameter [%s] not defined. Defaulting to %.3f" % (name, default))
    return default

if __name__ == '__main__':
  rospy.init_node('keys_to_twist')
  g_last_twist_send_time = rospy.Time.now()
  g_twist_pub = rospy.Publisher('cmd_vel', Twist, queue_size=1)
  rospy.Subscriber('keys', String, keys_cb)
  g_target_twist = Twist() # initializes to zero
  g_last_twist = Twist()
  g_vel_scales[0] = fetch_param('~angular_scale', 0.1)
  g_vel_scales[1] = fetch_param('~linear_scale', 0.1)
  g_vel_ramps[0] = fetch_param('~angular_accel', 1.0)
  g_vel_ramps[1] = fetch_param('~linear_accel', 1.0)

  rate = rospy.Rate(20)
  while not rospy.is_shutdown():
    send_twist()
    rate.sleep()
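
To get a feel for the ramping: with ~linear_accel set to 1.0 m/s² and the 20 Hz publish loop above, each call to send_twist() can change the linear velocity by at most 1.0 × 0.05 = 0.05 m/s, so accelerating from a standstill to 1 m/s takes about one second instead of happening instantly.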

Similarly, we add executable permissions to this file:

$ chmod u+x keys_to_twist_with_ramps.py

You can start the node:

$ rosrun telebot keys_to_twist_with_ramps.py _linear_scale:=1.0 _angular_scale:=0.8 _linear_accel:=1.0 _angular_accel:=0.8

The parameters passed in above set the scaling of the linear and angular velocity and acceleration. Now we can see a /cmd_vel topic that outputs the target velocity of the car:

$ rostopic echo /cmd_vel
linear:
  x: 1.0
  y: 0.0
  z: 0.0
angular:
  x: 0.0
  y: 0.0
  z: 0.0
---
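
You can also check the publishing rate, which should be about 20 Hz to match the rospy.Rate(20) loop in the node:

$ rostopic hz /cmd_vel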

Now the car can start, stop and turn smoothly according to our key presses. The next step is to add a remote camera to it.

Before connecting the car's camera, there is one very important point. The previous operations were all performed on the computer, but from now on the ARM development board on the car will act as the ROS master (roscore runs on the board). On the computer, set the following environment variable, replacing the IP address with the actual IP address of the ARM board on the car:

$ export ROS_MASTER_URI=http://your.armbian_ros.ip.address:11311
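
Depending on your network setup, you may also need to tell ROS which address each machine advertises so the two hosts can reach each other; the address below is just a placeholder for your computer's own IP:

$ export ROS_IP=your.computer.ip.address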

The following steps are all performed on the ARM development board on the car, with ROS installed

Let’s create a new workspace:

$ mkdir -p telebot_ws/src && cd telebot_ws/src
$ catkin_init_workspace

Still in the src directory, let's create a new ROS package:

$ catkin_create_pkg telebot_image roscpp cv_bridge image_transport

Now we can build the workspace. Go back to the telebot_ws directory and run:

$ catkin_make
$ source devel/setup.bash

The code for publishing camera images is only about 30 lines. Create a new file my_publisher.cpp in the telebot_ws/src/telebot_image/src directory:

#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <opencv2/opencv.hpp>
#include <cv_bridge/cv_bridge.h>

int main(int argc, char** argv)
{
  // Initialize the node and advertise the camera/image topic
  ros::init(argc, argv, "video_transp");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Publisher pub = it.advertise("camera/image", 1);

  // Open the first camera device (/dev/video0)
  cv::VideoCapture cap(0);
  cv::Mat frame;
  sensor_msgs::ImagePtr frame_msg;

  // Publish at 10 frames per second
  ros::Rate rate(10);

  while(ros::ok())
  {
    // Grab a frame, convert it to a ROS image message and publish it
    cap >> frame;
    if (!frame.empty())
    {
      frame_msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", frame).toImageMsg();
      pub.publish(frame_msg);
      cv::waitKey(1);
    }
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}

Before compiling, you need to install the OpenCV development environment, because we use OpenCV to capture the camera frames and then publish them with the ROS image_transport library. This is the CMakeLists.txt in the telebot_ws/src/telebot_image directory:

cmake_minimum_required(VERSION 2.8.3)
project(telebot_image)

find_package(catkin REQUIRED COMPONENTS
  cv_bridge
  image_transport
)

catkin_package(
#  INCLUDE_DIRS include
#  LIBRARIES my_image_transport
#  CATKIN_DEPENDS cv_bridge image_transport
#  DEPENDS system_lib
)

include_directories(
  ${catkin_INCLUDE_DIRS}
)

find_package(OpenCV)
include_directories(include ${OpenCV_INCLUDE_DIRS})
#build my_publisher and my_subscriber
add_executable(my_publisher src/my_publisher.cpp)
target_link_libraries(my_publisher ${catkin_LIBRARIES} ${OpenCV_LIBS})

We compile the project in the telebot_ws directory:

$ catkin_make

Now the camera node can be started; the images are published on the /camera/image topic:

$ rosrun telebot_image my_publisher
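
Now that the computer and the board share the same ROS master, you can check from either machine that frames arrive at roughly 10 Hz, matching the ros::Rate(10) in the code:

$ rostopic hz /camera/image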

The following steps are again performed on the computer with ROS installed

Subscribing to and viewing the camera images is very simple:

$ rosrun image_view image_view image:=/camera/image

You can then see the pictures from the car's camera on your computer.
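
Alternatively, if the rqt image plugins are installed, rqt_image_view lets you pick the image topic from a drop-down list instead:

$ rqt_image_view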

If you already have a car that can be controlled by ROS, you only need the roughly 30 lines of code in Part 3 for image publishing: use the OpenCV library to capture the camera frames and publish them with the ROS library.

Of course, if you just want to see the car's camera on your computer, there are many other ways that don't even require writing code. The advantage of publishing the image data with ROS is that we can then process the acquired images, for example for object detection, which will be introduced in subsequent documents.

