
RUB21

This repository is a workspace for building an autonomous racing car in the EUFS simulation environment in ROS.

Contents

  1. Install Prerequisites
  2. Compiling
  3. Running with the GUI
  4. Additional sensors
  5. Known issues

1. Install Prerequisites

  • Install ROS dependencies:
rosdep install -i --from-path src/
  • Install Python dependencies:
pip install -r eufs_gazebo/requirements.txt

2. Compiling

Navigate to your workspace and build the simulation:

cd [your-catkin-workspace]
catkin build

To enable ROS to find the EUFS packages, you also need to run:

source ./devel/setup.bash

Note: source needs to be run in each new terminal you open. You can also include it in your .bashrc file, as shown below.
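
For example, assuming your workspace is at ~/catkin_ws (adjust the path to your own workspace):

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc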

3. Running with the GUI

Now you can finally run our kickass simulation!!

roslaunch eufs_launcher eufs_launcher.launch

You should see something like this:

[Screenshot: the full GUI]

You can select different tracks from the dropdown menu, then launch the simulation with the top-leftmost Launch button.

The bottom-left button will read in the selected image and turn it into a track, launching it immediately. The bottom-middle button will generate a random track image as rand.png; you can see the image in eufs_gazebo/randgen_imgs. If you want to launch it, use the bottom-left button on rand.png.

The bottom-left button is sensitive to a parameter called "noise": randomly placed objects to the side of the track that the car's sensors may pick up, mimicking real-world 'noise' from the environment. By default this is off, but you can drag the slider to adjust it to whatever level you desire.

If you don't have a good computer, stick to the Small Straights random generation preset, or perhaps Bezier if your computer is very slow.
(Bezier tracks forgo realism for speed, whereas Small Straights keeps the track realistic, just smaller.)

An additional feature of the GUI is the ConversionTools. Since the generator creates .png files, the launcher requires .launch files, and important perception data is often stored in .csv format, the GUI has a converter that lets you freely convert between these file formats. By default, converted files have a suffix appended (usually _CT) to prevent accidental overwriting of important files. This can be turned off by checking the suffix box; the conversion process is fairly lossless, so if a file is accidentally overwritten, it will likely behave exactly the same way as the old file did.

A full manual of how to use the GUI is available here.

4. Additional sensors

Additional sensors for testing are available via the ros-kinetic-robotnik-sensor package. Some of them are already defined in eufs_description/robots/eufs.urdf.xacro; you can simply comment them in and attach them appropriately to the car.

The car's default sensor suite:

  • VLP16 lidar
  • ZED Stereo camera
  • IMU
  • GPS
  • odometry

An easy way to control the car is via

roslaunch ros_can_sim rqt_ros_can_sim.launch

YOLOv3-ROS

Development Environment

  • Ubuntu 16.04 / 18.04
  • ROS Kinetic / Melodic
  • OpenCV

Real-time Cone Detection With ROS

  • In progress

YOLOv3_ROS object detection

Prerequisites

To download the prerequisites for this package (except for ROS itself), navigate to the package folder and run:

$ cd yolov3_pytorch_ros
$ sudo pip install -r requirements.txt

Installation

Navigate to your catkin workspace and run:

$ catkin_make --pkg yolov3_pytorch_ros

Basic Usage

  1. First, make sure to put your weights in the models folder. To train on custom objects, please refer to the original YOLO page. As an example, to download pre-trained weights from the COCO data set, go into the models folder and run:
wget http://pjreddie.com/media/files/yolov3.weights
  2. Modify the parameters in the launch file and launch it. You will need to change the image_topic parameter to match your camera, and the weights_name, config_name and classes_name parameters depending on what you are trying to do (see the example after the parameter list below).

Start the yolov3_pytorch_ros node:

$ roslaunch yolov3_pytorch_ros detector.launch

Node parameters

  • image_topic (string)

    Subscribed camera topic.

  • weights_name (string)

    Weights to be used from the models folder.

  • config_name (string)

    The name of the configuration file in the config folder. Use yolov3.cfg for YOLOv3, yolov3-tiny.cfg for tiny YOLOv3, and yolov3-voc.cfg for YOLOv3-VOC.

  • classes_name (string)

    The name of the file for the detected classes in the classes folder. Use coco.names for COCO, and voc.names for VOC.

  • publish_image (bool)

    Set to true to get the camera image along with the detected bounding boxes, or false otherwise.

  • detected_objects_topic (string)

    Published topic with the detected bounding boxes.

  • detections_image_topic (string)

    Published topic with the detected bounding boxes on top of the image.

  • confidence (float)

    Confidence threshold for detected objects.
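
Assuming detector.launch exposes these parameters as roslaunch arguments (an assumption; check the launch file for the actual arg names), individual values can be overridden from the command line:

$ roslaunch yolov3_pytorch_ros detector.launch image_topic:=/camera/color/image_raw confidence:=0.6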

Subscribed topics

  • image_topic (sensor_msgs::Image)

    Subscribed camera topic.

Published topics

  • detected_objects_topic (yolov3_pytorch_ros::BoundingBoxes)

    Published topic with the detected bounding boxes.

  • detections_image_topic (sensor_msgs::Image)

    Published topic with the detected bounding boxes on top of the image (only published if publish_image is set to true).
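
To consume the detections from another node, here is a minimal rospy subscriber sketch. The field names (bounding_boxes, Class, probability) are assumptions based on common BoundingBoxes message definitions; check the package's msg/ folder for the actual fields.

# Minimal sketch: log every detected box from the detector output.
import rospy
from yolov3_pytorch_ros.msg import BoundingBoxes

def on_detections(msg):
    # bounding_boxes, Class and probability are assumed field names.
    for box in msg.bounding_boxes:
        rospy.loginfo("%s (%.2f)", box.Class, box.probability)

rospy.init_node("detection_listener")
# The topic name must match the detected_objects_topic parameter.
rospy.Subscriber("/detected_objects", BoundingBoxes, on_detections)
rospy.spin()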


  • Installing Realsense-ros

    1. Create a catkin workspace
    mkdir -p ~/catkin_ws/src
    cd ~/catkin_ws/src/
    2. Download the realsense-ros package
    git clone https://github.com/IntelRealSense/realsense-ros.git
    cd realsense-ros/
    git checkout `git tag | sort -V | grep -P "^\d+\.\d+\.\d+" | tail -1`
    cd ..
    3. Download ddynamic_reconfigure
    git clone -b kinetic-devel https://github.com/pal-robotics/ddynamic_reconfigure.git
    4. Package installation
    catkin_init_workspace
    cd ..
    catkin_make clean
    catkin_make -DCATKIN_ENABLE_TESTING=False -DCMAKE_BUILD_TYPE=Release
    catkin_make install
    echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
    source ~/.bashrc
    5. Run the D435 node
    roslaunch realsense2_camera rs_camera.launch
    6. Run rviz for testing
    rosrun rviz rviz
    Add > Image to view the raw RGB image

How to train (to detect your custom objects)

Training YOLOv3:

Download the darknet source code

git clone https://github.com/pjreddie/darknet
cd darknet

vim Makefile
...
GPU=1 # set to 0 if not building with a GPU
CUDNN=1 # set to 0 if not using cuDNN
OPENCV=0
OPENMP=0
DEBUG=0

make
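
Alternatively, instead of editing in vim, the flags can be flipped non-interactively before running make (a sketch assuming the stock Makefile defaults of GPU=0 and CUDNN=0):

sed -i 's/^GPU=0/GPU=1/' Makefile
sed -i 's/^CUDNN=0/CUDNN=1/' Makefile
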
0. Create folder yolov3
mkdir yolov3
cd yolov3
mkdir JPEGImages labels backup cfg 

The resulting yolov3 directory layout:

├── JPEGImages
│ ├── object-00001.jpg
│ └── object-00002.jpg
│ ...
├── labels
│ ├── object-00001.txt
│ └── object-00002.txt
│ ...
├── backup
│ ├── yolov3-object.backup
│ └── yolov3-object_20000.weights
│ ...
├── cfg
│ ├── obj.data
│ ├── yolo-obj.cfg
│ └── obj.names
├── obj_train.txt
└── obj_test.txt

1. Create file yolo-obj.cfg with the same content as in yolov3.cfg (or copy yolov3.cfg to yolo-obj.cfg) and:
  • change line batch to batch=64

  • change line subdivisions to subdivisions=8

  • change line max_batches to (classes*2000, but not less than 4000), e.g. max_batches=6000 if you train for 3 classes

  • change line steps to 80% and 90% of max_batches, e.g. steps=4800,5400

  • change line classes=80 to your number of object classes in each of the 3 [yolo] layers:

    • cfg/yolov3.cfg#L610

    • cfg/yolov3.cfg#L696

    • cfg/yolov3.cfg#L783

      [convolutional]
      ...
      filters = 24 #3*(classes + 5)
      [yolo]
      ...
      classes=3
  • change filters=255 to filters=3x(classes + 5) in each of the 3 [convolutional] layers directly before a [yolo] layer

    • cfg/yolov3.cfg#L603
    • cfg/yolov3.cfg#L689
    • cfg/yolov3.cfg#L776

So if classes=1, it should be filters=18; if classes=2, write filters=21.

(Do not write the formula filters=(classes + 5)x3 into the cfg file; write the computed number.)
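
As a quick sanity check of the filters arithmetic (a throwaway Python snippet):

for classes in (1, 2, 3):
    print(classes, 3 * (classes + 5))  # prints 18, 21, 24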

2. Create file obj.names in the directory path_to/yolov3/cfg/, with the object names, each on a new line:
person
car
cat
dog
3. Create file obj.data in the directory path_to/yolov3/cfg/, containing (where classes = the number of object classes):
classes= 3
train  = /home/cai/workspace/yolov3/obj_train.txt
valid  = /home/cai/workspace/yolov3/obj_test.txt
names = /home/cai/workspace/yolov3/cfg/obj.names
backup = /home/cai/workspace/yolov3/backup/
4. Put image-files (.jpg) of your objects in the directory path_to/yolov3/JPEGImages
5. Label each object in the images of your dataset. LabelImg (https://github.com/tzutalin/labelImg) is a graphical image annotation tool.

It creates a .txt file for each .jpg image file, in the same directory and with the same name but a .txt extension. Each file lists the object number and object coordinates on that image, one object per line:

<object-class> <x_center> <y_center> <width> <height>

Where:

  • <object-class> - integer object number from 0 to (classes-1)
  • <x_center> <y_center> <width> <height> - float values relative to the width and height of the image, in the range (0.0, 1.0]
  • for example: <x> = <absolute_x> / <image_width> or <height> = <absolute_height> / <image_height>
  • attention: <x_center> <y_center> are the center of the rectangle (not the top-left corner)

For example, for img1.jpg an img1.txt will be created containing:

1 0.716797 0.395833 0.216406 0.147222
0 0.687109 0.379167 0.255469 0.158333
1 0.420312 0.395833 0.140625 0.166667
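
If you generate labels yourself rather than with LabelImg, the conversion follows directly from the definitions above; a minimal Python sketch (the box coordinates below are hypothetical):

# Convert an absolute pixel box to a YOLO label line.
def to_yolo_label(obj_class, x_min, y_min, x_max, y_max, img_w, img_h):
    x_center = (x_min + x_max) / 2.0 / img_w   # box center, normalized
    y_center = (y_min + y_max) / 2.0 / img_h
    width = (x_max - x_min) / float(img_w)     # box size, normalized
    height = (y_max - y_min) / float(img_h)
    return "%d %.6f %.6f %.6f %.6f" % (obj_class, x_center, y_center, width, height)

# e.g. a class-1 box spanning (389, 208)-(528, 261) in a 640x360 image:
print(to_yolo_label(1, 389, 208, 528, 261, 640, 360))
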
6. Create files obj_train.txt and obj_test.txt in the directory path_to/yolov3/, with the filenames of your images, one filename per line, for example:
path_to/yolov3/JPEGImages/img1.jpg
path_to/yolov3/JPEGImages/img2.jpg
path_to/yolov3/JPEGImages/img3.jpg
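
These list files can also be generated with a short script; a sketch assuming a 90/10 train/test split (the paths are placeholders, as above):

# Split the image list into obj_train.txt and obj_test.txt (90/10).
import glob, random

images = sorted(glob.glob("path_to/yolov3/JPEGImages/*.jpg"))
random.seed(0)  # reproducible split
random.shuffle(images)
split = int(0.9 * len(images))

with open("path_to/yolov3/obj_train.txt", "w") as f:
    f.write("\n".join(images[:split]) + "\n")
with open("path_to/yolov3/obj_test.txt", "w") as f:
    f.write("\n".join(images[split:]) + "\n")
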
7. Download pre-trained weights for the convolutional layers (154 MB) from https://pjreddie.com/media/files/darknet53.conv.74 and put the file in the directory path_to/darknet/:
wget https://pjreddie.com/media/files/darknet53.conv.74
8. Start training by using the command line:
./darknet detector train [path to .data file] [path to .cfg file] [path to pre-trained weights, darknet53.conv.74]

To also save the training output to a log for later visualization:
./darknet detector train path_to/yolov3/cfg/obj.data path_to/yolov3/cfg/yolov3.cfg darknet53.conv.74 2>&1 | tee visualization/train_yolov3.log
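
If training is interrupted, it can be resumed from the latest checkpoint in the backup directory by passing that file instead of the pre-trained weights (the checkpoint name follows the tree above):

./darknet detector train path_to/yolov3/cfg/obj.data path_to/yolov3/cfg/yolov3.cfg path_to/yolov3/backup/yolov3-object.backup
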
9. Start testing by using the command line:
./darknet detector test path_to/yolov3/cfg/obj.data path_to/yolov3/cfg/yolov3.cfg path_to/yolov3/backup/yolov3_final.weights path_to/yolov3/test/test_img.jpg