LiDAR laser distributions

Here are a few LiDAR laser distributions and properties:

LiDAR Model    vtop (deg)   vbottom (deg)   hRes @ 10 Hz (deg)
VLS-128          15           -25              0.2
HDL-64S2          2           -24.33           0.16
HDL-32E          10.64        -30.67           0.16
VLP-32c          15           -25              0.2
VLP-32MR         15           -25              0.2
Pandar40         15           -25              0.2
Pandar64         15           -25              0.2
PandarQT         52.121       -52.121          0.15
RS-LiDAR-32      15           -25              0.2
OS0-32           45           -45              0.35
OS0-64           45           -45              0.35
OS1-16           16.6         -16.6            0.35
OS1-64           16.6         -16.6            0.35

You can see the laser distribution in the following visualizations, where the red line marks the 0-degree line.

(Per-model laser distribution images for VLS128, HDL64, HDL32, VLP32c, Pandar40, Pandar64, PandarQT, RS32, OS032, OS064, and OS164 are not reproduced here.)

How to use webviz in Docker

  1. Clone the webviz project (https://github.com/cruise-automation/webviz):

git clone git@github.com:cruise-automation/webviz.git

  2. Build the image and name it.

The Docker image contains only the basic dependencies; no code or compiled resources. A minimal sketch of this step is shown below.
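Assuming the repository ships a Dockerfile at its root (verify this in the project before relying on it), the build could look like the following; the image name webviz-dev is arbitrary:

# build the image from the cloned repository and tag it
cd webviz
docker build -t webviz-dev .
cd ..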

  3. Run a container in interactive mode with the recently created image, mount the webviz code, and attach it to the host network.

  4. Once inside the container, go to the mounted resource /webviz, install dependencies, build, and run the server. A sketch of steps 3 and 4 follows.
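A minimal sketch of these two steps, assuming the image was tagged webviz-dev as above; the exact npm script that builds and serves webviz is defined in the project's package.json, so check the README for its name:

# step 3: interactive container, code mounted at /webviz, host networking
docker run -it --rm --network=host -v "$(pwd)/webviz:/webviz" webviz-dev bash

# step 4, inside the container:
cd /webviz
npm install   # install dependencies
# then run the project's build/dev-server script from package.json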

  5. You’ll see a message indicating the webviz server URL: ℹ 「wds」: Project is running at http://localhost:8081/

  6. Open your browser (Chrome appears to be the only fully supported one), and direct it to that URL, adding app at the end: http://localhost:8081/app/

  7. Drag and drop a rosbag into the browser’s window. You can also run rosbridge to communicate with the host roscore: roslaunch rosbridge_server rosbridge_websocket.launch

  8. Select the topics to be displayed. Be sure to select the correct coordinate frame.

Running GitLab runners locally

Summary

  1. Install Docker.
  2. Set up the GitLab repository and install gitlab-runner.
  3. Prepare or get the .gitlab-ci.yml to test.
  4. Execute a runner on the desired job.

Docker

Install docker

https://docs.docker.com/install/linux/docker-ce/ubuntu/

Don’t forget the post-installation steps.

https://docs.docker.com/install/linux/linux-postinstall/
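In particular, to run Docker without sudo (one of the documented post-install steps), add your user to the docker group:

sudo usermod -aG docker $USER
# log out and log back in for the group change to take effect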

Set up the repository and install gitlab-runner

https://docs.gitlab.com/runner/install/linux-repository.html#installing-the-runner

Prepare or get gitlab-ci.yml

Suppose the following script:

image: "ruby:2.5"

before_script:

  - apt-get update -qq && apt-get install -y -qq sqlite3 libsqlite3-dev nodejs

  - ruby -v

  - which ruby

  - gem install bundler --no-document

  - bundle install --jobs $(nproc)  "${FLAGS[@]}"

rspec:

  script:

    - bundle exec rspec

rubocop:

  script:

    - bundle exec rubocop

There are two jobs defined:

  • rspec
  • rubocop

Execute the runner on a job

In a terminal, execute:

gitlab-runner exec docker rspec

This executes the job rspec in a clean Docker container.

To define a timeout, use:

--timeout TIME

To pass environment variables:

--env VAR=VALUE
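Combining both options, an invocation might look like this (the timeout and variable values are illustrative):

gitlab-runner exec docker --timeout 3600 --env DB_HOST=localhost rspec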

Git creating features

Creating features from a fork

Important note

Don’t forget to FIRST create an ISSUE upstream describing the fix/feature.

Summary

These are the steps involved:

  1. Fork in Github.
  2. Clone fork locally.
  3. Set upstream.
  4. Create features/fixes in local branches.
  5. Sign commits.
  6. Push to your fork.
  7. Create PR upstream.

Fork to your own account/organization

Simple step: just press the Fork button in the top right of the GitHub repository, and select where to fork.

Clone your fork locally

Clone the default branch in the current directory:
git clone --recursive https://github.com/YOUR_ACCOUNT/YOUR_FORK.git

If you wish to clone a different branch:
git clone --recursive --branch BRANCH_NAME https://github.com/YOUR_ACCOUNT/YOUR_FORK.git

If you wish to clone to a directory with a different name:
git clone --recursive https://github.com/YOUR_ACCOUNT/YOUR_FORK.git DESIRED_DIR_NAME

Set upstream repository

Set upstream (original source):
git remote add upstream https://github.com/ORIGINAL_ORGANIZATION/ORIGINAL_REPOSITORY.git

Confirm remotes:
git remote -v

> origin    https://github.com/YOUR_ACCOUNT/YOUR_FORK.git (fetch)
> origin    https://github.com/YOUR_ACCOUNT/YOUR_FORK.git (push)
> upstream  https://github.com/ORIGINAL_ORGANIZATION/ORIGINAL_REPOSITORY.git (fetch)
> upstream  https://github.com/ORIGINAL_ORGANIZATION/ORIGINAL_REPOSITORY.git (push)

In this example:
origin represents your fork repo.
upstream represents the original repository your fork came from.

NOTE: Names other than origin or upstream can be used. Just be careful to use the same names when pulling/pushing commits.

Create features/fixes

In the Autoware case, please always create new features from the master branch.

Sign commits

GPG Sign

Set your GPG keys following GitHub article: https://help.github.com/en/articles/managing-commit-signature-verification
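Once the key exists, a minimal configuration sketch looks like this; YOUR_KEY_ID is a placeholder (list your key IDs with gpg --list-secret-keys --keyid-format LONG):

git config --global user.signingkey YOUR_KEY_ID   # placeholder key ID
git config --global commit.gpgsign true           # sign all commits by default
git commit -S -m "your message"                   # or sign a single commit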

Signoff commits

Signoff your commits using git commit -s (https://git-scm.com/docs/git-commit#Documentation/git-commit.txt–s)

If you have an older git version, the -s flag might not be available. You can either update git from source, or use a PPA (taken from https://unix.stackexchange.com/a/170831):

sudo add-apt-repository ppa:git-core/ppa -y
sudo apt-get update
sudo apt-get install git -y
git --version

Update fork

git checkout master
git fetch upstream
git merge upstream/master

Create feature branch

git checkout -b feature/awesome_stuff

Push to your fork

git push origin feature/awesome_stuff

Once finished

Create the PR from the GitHub website, and target the master branch.

Deeplab v3 Test

  1. Create Python Env $ mkdir ~/tf_ros && cd ~/tf_ros $ virtualenv --system-site-packages -p python2 tensor_ros_deeplab
  2. Activate Environment source tensor_ros_deeplab/bin/activate
  3. Install tensorflow GPU pip install --upgrade tensorflow_gpu-1.12.0-cp27-none-linux_x86_64.whl
  4. Clone TensorFlow’s Models repo containing Deeplab: $ git clone https://github.com/tensorflow/models
  5. Move to the Deeplab dir: cd ~/tf_ros/models/research/deeplab
  6. Download a pretrained model: the model zoo at https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/model_zoo.md lists the checkpoints; wget the tarball of the model you want (e.g. deeplabv3_mnv2_dm05_pascal_trainaug), not the listing page itself.
  7. Use sample script for a single image: https://github.com/tensorflow/models/blob/master/research/deeplab/deeplab_demo.ipynb
  8. Change the MODEL var to the path of the previously downloaded model: MODEL = DeepLabModel("~/tf_ros/models/research/deeplab/deeplabv3_mnv2_dm05_pascal_trainaug_2018_10_01.tar.gz")
  9. To use a local image instead of a URL change the run_visualization function to:
def run_visualization(image_path):
  """Inferences DeepLab model and visualizes result."""
  try:
    original_im = Image.open(image_path)
  except IOError:
    print('Cannot read image. Please check path: ' + image_path)
    return

  print('running deeplab on image %s...' % image_path)
  resized_im, seg_map = MODEL.run(original_im)

  vis_segmentation(resized_im, seg_map)
  10. Finally, save and run the script: python script_name.py

Yolo3 training notes

Yolo3 custom training notes

To train a model, YOLO training code expects:

  • Images
  • Labels
  • NAMES File
  • CFG file
  • train.txt file
  • test.txt file
  • DATA file
  • Pretrained weights (optional)

Images and Labels

The images and labels should be located in the same directory. Each image and label is related to its counterpart by filename.
e.g., for image 001.jpg, the corresponding label should be named 001.txt.

Label format

The file containing the labels is a plain text file with one bounding box per line, one line per object. The columns are separated by spaces, in the following format:

classID x y width height

x, y, width and height are normalized to values from 0 to 1 (pixel values divided by the image width or height).
x and y correspond to the coordinates of the center of the bounding box.
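For instance, a hypothetical 001.txt with two objects (all values illustrative) would look like:

0 0.512 0.430 0.250 0.180
2 0.115 0.772 0.080 0.210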

YOLO includes the following Python helper function to easily achieve that:

def convert(size, box):
    # size: (image_width, image_height) in pixels
    # box: (xmin, xmax, ymin, ymax) in pixels
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1  # box center, x
    y = (box[2] + box[3])/2.0 - 1  # box center, y
    w = box[1] - box[0]            # box width
    h = box[3] - box[2]            # box height
    x = x*dw                       # normalize everything to [0, 1]
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)

Names file

This file contains the label string for each class. The first line corresponds to class 0, the second line to class 1, and so on.
e.g., contents of classes.names:

classA
classB
classC

This would create the following relationship:

Class ID (labels)   Class identifier
0                   classA
1                   classB
2                   classC

CFG file

This file is a darknet configuration file. To simplify the explanation, these are the modifications required for training, according to:

GPU memory available:

[net]
# Training
batch=64   # number of images moved to GPU memory on each batch
...

Number of classes

The number of classes should be set on each of the [yolo] sections in the CFG file.

[yolo]
...
classes=NUM_CLASSES   # must match the number of classes in the names file
...

Number of Filters

Before each [yolo] section, the number of filters in the [convolutional] layer should also be updated to match the following formula:

filters=(classes + 5) * 3

For instance, for 3 classes, filters = (3 + 5) * 3 = 24:

[convolutional]
size=1
stride=1
pad=1
filters=24
activation=linear

[yolo]
classes= 3

train.txt file

This plain text file lists each of the images that will be used for training. Each line should include the absolute path to the image.
e.g.

/home/test/images/train/001.jpg
/home/test/images/train/002.jpg
/home/test/images/train/003.jpg

test.txt file

In the same way as the train.txt file, this text file contains the paths to the images used for testing, one per line.
e.g.

/home/user/dataset/images/test/001.jpg
/home/user/dataset/images/test/002.jpg
/home/user/dataset/images/test/003.jpg
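A hypothetical way to generate both lists with standard shell tools (the directories follow the examples above):

find /home/user/dataset/images/train -name '*.jpg' | sort > /home/user/dataset/train.txt
find /home/user/dataset/images/test  -name '*.jpg' | sort > /home/user/dataset/test.txt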

DATA file

This plain text file summarizes the dataset using the following format (backup is the directory where darknet saves weight snapshots during training):

classes= 20
train  = /home/user/dataset/train.txt
valid  = /home/user/dataset/test.txt
names = /home/user/dataset/classes.names
backup = /home/user/dataset/backup

Training

./darknet detector train file.data file.cfg darknet53.conv.74
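Here darknet53.conv.74 is the optional pretrained weights file. Since darknet writes snapshots to the backup directory from the DATA file, training can be resumed from one of them; a sketch using the example file names above:

# resume from the snapshot darknet periodically writes to the backup directory
./darknet detector train file.data file.cfg /home/user/dataset/backup/file.backup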

TensorFlow and ROS Kinetic with Python 3.5 Natively

Setup

Native setup, no venv

  1. Create a working directory:
    $ mkdir ~/py3_tf_ros && cd ~/py3_tf_ros
  2. Install Python 3.5: $ sudo apt-get install python3-dev python3-yaml python3-setuptools

  3. Install rospkg for Python 3:

$ git clone https://github.com/ros/rospkg.git
$ cd rospkg && sudo python3 setup.py install && cd ..
  4. Install catkin_pkg for Python 3:
$ git clone https://github.com/ros-infrastructure/catkin_pkg.git
$ cd catkin_pkg && sudo python3 setup.py install && cd ..
  5. Install catkin for Python 3:
$ git clone https://github.com/ros/catkin.git
$ cd catkin && sudo python3 setup.py install && cd ..
  6. Install OpenCV for Python 3: pip3 install opencv-python

  7. Download the desired TensorFlow version.

  8. Set up the NVIDIA drivers, CUDA and CUDNN according to the TensorFlow version.

  9. Install the downloaded TensorFlow package: pip3 install --user --upgrade tensorflow-package.whl

  10. Check that the symbolic link /usr/local/cuda corresponds to the CUDA version required by TensorFlow (if there are several CUDA versions installed on the system).

  11. Test TensorFlow: python3 -c "import tensorflow as tf; print(tf.__version__)" This should display the version 1.XX.YY you selected.

  12. It is now possible to import ROS modules and import tensorflow in the same script.

  13. If the cv2 package is also required:

import sys

# ROS Kinetic puts its Python 2.7 packages on sys.path, which shadows the
# Python 3 cv2 module, so drop that entry before importing cv2
sys.path.remove('/opt/ros/kinetic/lib/python2.7/dist-packages')
import cv2

# restore the ROS path so ROS imports keep working
sys.path.append('/opt/ros/kinetic/lib/python2.7/dist-packages')
import rospy

TensorFlow and ROS Kinetic with Python 2.7

  1. Create a working directory $ mkdir ~/tf_ros && cd ~/tf_ros
  2. Download and Install an NVIDIA Driver >= 384.x (https://www.nvidia.co.jp/Download/index.aspx). We recommend the .run installers; please check our other post on this topic.

  3. Download and Install CUDA 9.0 (https://developer.nvidia.com/cuda-90-download-archive)

  4. Download and Install CUDNN for CUDA 9.0 (https://developer.nvidia.com/rdp/cudnn-download; https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux)

  5. Download TensorFlow for Python 2.7 $ wget https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.1-cp27-none-linux_x86_64.whl

  6. Install Python VirtualEnv $ sudo apt-get install python-virtualenv

  7. Create Python 2.7 virtual environment. $ virtualenv --system-site-packages -p python2 p2tf_venv (p2tf_venv can be any name)

  8. Activate environment $ source p2tf_venv/bin/activate

  9. Install TensorFlow: (p2tf_venv) $ pip install --upgrade tensorflow_gpu-1.10.1-cp27-none-linux_x86_64.whl

  10. Make sure LD_LIBRARY_PATH contains the CUDA 9 libraries, and PATH the CUDA 9 binaries.

$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH && export PATH=/usr/local/cuda/bin:$PATH
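You can then verify the install from inside the virtual environment; this should print 1.10.1:

python -c "import tensorflow as tf; print(tf.__version__)"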

Running Dreamview (Apollo 3.0) for pre-Haswell processors

Problem overview

The Apollo Docker image includes a version of PCL that was built with the FMA instruction set (fused multiply-add). On any Intel processor older than Haswell (and perhaps even some Broadwell processors), this causes an illegal instruction and prevents Dreamview from starting. In previous versions of Apollo this was difficult to troubleshoot since no error details were provided; in fact, Dreamview would be reported as running, yet localhost:8888 could not be accessed. Since 3.0, however, Dreamview reports the following:

dreamview: ERROR (spawn error)

Note: this can probably be caused by many other issues. However, if you are running a pre-Haswell processor, the solution provided here is likely to work. You can use gdb to troubleshoot spawn errors.

The solution

You have to build PCL within the docker to overcome this problem.

cd /apollo
git clone https://github.com/PointCloudLibrary/pcl.git
cd pcl
git checkout -b pcl-custom

Now you need to add the following to /apollo/pcl/CMakeLists.txt:

  • Below the line set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "possible configurations" FORCE), add the following:
if (CMAKE_VERSION VERSION_LESS "3.1")
    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11")
    message("Build with c++11 support")
else ()
    set (CMAKE_CXX_STANDARD 11)
endif ()

Build PCL with the default options.

cd /apollo/pcl
mkdir build
cd build
cmake ..
make

Replace the current PCL libraries with the new ones you just made (you can backup first).

sudo mkdir -p /usr/local/lib/pcl.backup
sudo mv /usr/local/lib/libpcl* /usr/local/lib/pcl.backup
sudo cp -a lib/* /usr/local/lib/
sudo ldconfig

Now you have to re-build Apollo.

cd /apollo
./apollo.sh clean
./apollo.sh build_gpu

And hopefully you can now start and access Dreamview with the usual ./scripts/bootstrap.sh.

How to setup industrial_ci locally and test Autoware

industrial_ci will build, install and test every package in an isolated manner inside a clean Docker image. In this way, missing dependencies (system and other packages) can be easily spotted and fixed before publishing a package. This eases the deployment of Autoware (or any ROS package).

Running locally instead of on the cloud (travis-ci) speeds up the build time.

Autoware and industrial_ci require two different catkin workspaces.

Requirements:

  • Docker installed and working.
  • Direct connection to the Internet (No proxy). See below for proxy instructions.

Instructions:

  1. Install catkin tools:$ sudo apt-get install python-catkin-tools.
  2. Clone Autoware (if you don’t have it already): $ git clone https://github.com/CPFL/Autoware (if you wish to test a specific branch, change to that branch or use -b).
  3. Create a directory to hold a new workspace at the same level as Autoware, and subdirectory src. (In this example catkin_ws and base dir being home ~). ~/$ mkdir -p catkin_ws/src && cd catkin_ws/src.
  4. Initialize that workspace running catkin_init_workspace inside src of catkin_ws. ~/catkin_ws/src$ catkin_init_workspace
  5. Clone industrial_ci inside the catkin_ws/src. ~/catkin_ws/src$ git clone https://github.com/ros-industrial/industrial_ci.
  6. The directory structure should look as follows:
~
├── Autoware
│   ├── ros
│   │   └── src
│   └── .travis.yml
└── catkin_ws
    └── src
        └── industrial_ci
  7. Go to catkin_ws and build industrial_ci: ~/catkin_ws$ catkin config --install && catkin b industrial_ci && source install/setup.bash
  8. Once finished, move to the Autoware directory: ~/catkin_ws$ cd ~/Autoware
  9. Run industrial_ci:
  • Using rosrun industrial_ci run_ci ROS_DISTRO=kinetic ROS_REPO=ros or rosrun industrial_ci run_ci ROS_DISTRO=indigo ROS_REPO=ros. This method manually specifies the distribution and repository sources.
  • ~/Autoware$ rosrun industrial_ci run_travis . This will parse the .travis.yml and run in a similar fashion to travis-ci.

For more detailed info: https://github.com/ros-industrial/industrial_ci/blob/master/doc/index.rst#run-industrial-ci-on-local-host

How to run behind a proxy

Configure Docker to use the proxy (from https://stackoverflow.com/questions/26550360/docker-ubuntu-behind-proxy and https://docs.docker.com/config/daemon/systemd/#httphttps-proxy):

Ubuntu 14.04

Edit the file /etc/default/docker, go to the proxy section, and change the values:

# If you need Docker to use an HTTP proxy, it can also be specified here.
export http_proxy="http://proxy.address:port"
export https_proxy="https://proxy.address:port"

Execute in a terminal sudo service docker restart.

Ubuntu 16.04

  1. Create the config container directory: $ sudo mkdir -p /etc/systemd/system/docker.service.d
  2. Create the http-proxy.conf file inside it: $ sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf

  3. Paste the following text and edit it with your proxy values:

[Service]
Environment="HTTP_PROXY=http://proxy.address:port"
Environment="HTTPS_PROXY=https://proxy.address:port"

  4. Save the file.
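Per the Docker documentation linked above, flush the systemd changes and restart the daemon so the proxy settings take effect:

sudo systemctl daemon-reload
sudo systemctl restart docker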

Modifications to industrial_ci

  1. Add the following lines in ~/catkin_ws/src/industrial_ci/industrial_ci/src/docker.sh at line 217:

Original: