TensorFlow and ROS Kinetic with Python 3.5 Natively

Setup

Native setup, no venv
  1. Create a working directory
    $ mkdir ~/py3_tf_ros && cd ~/py3_tf_ros

  2. Install Python 3.5
    $ sudo apt-get install python3-dev python3-yaml python3-setuptools

  3. Install rospkg for Python 3
    $ git clone git://github.com/ros/rospkg.git
    $ cd rospkg && sudo python3 setup.py install && cd ..

  4. Install catkin_pkg for Python 3
    $ git clone git://github.com/ros-infrastructure/catkin_pkg.git
    $ cd catkin_pkg && sudo python3 setup.py install && cd ..

  5. Install catkin for Python 3
    $ git clone git://github.com/ros/catkin.git
    $ cd catkin && sudo python3 setup.py install && cd ..

  6. Install OpenCV for Python 3
    $ pip3 install opencv-python

  7. Download the desired TensorFlow version.

  8. Set up the NVIDIA driver, CUDA and cuDNN according to that TensorFlow version.

  9. Install the downloaded TensorFlow package
    $ pip3 install --user --upgrade tensorflow-package.whl

  10. Check that the symbolic link /usr/local/cuda points to the CUDA version required by TensorFlow (if several CUDA versions are installed on the system).

  11. Test TensorFlow
    $ python3 -c "import tensorflow as tf; print(tf.__version__)"
    This should display the version 1.XX.YY you selected.

  12. It is now possible to import ros and import tensorflow in the same Python 3 script.

  13. If the cv2 package is also required, hide the ROS Python 2 path while importing it:

import sys

sys.path.remove('/opt/ros/kinetic/lib/python2.7/dist-packages')
import cv2
sys.path.append('/opt/ros/kinetic/lib/python2.7/dist-packages')
import ros
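
As an end-to-end check, the following Python 3 sketch combines the three imports (it assumes the paths above and uses rospy as the ROS client library; adjust to whichever ROS module you actually need):

#!/usr/bin/env python3
import sys

import tensorflow as tf

# The pip-installed cv2 must be imported while the ROS Python 2 path is hidden,
# otherwise the cv2 bundled with ROS shadows it.
ros_path = '/opt/ros/kinetic/lib/python2.7/dist-packages'
if ros_path in sys.path:
    sys.path.remove(ros_path)
import cv2
sys.path.append(ros_path)

import rospy

print('TensorFlow %s, OpenCV %s' % (tf.__version__, cv2.__version__))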

TensorFlow and ROS Kinetic with Python 2.7

  1. Create a working directory
    $ mkdir ~/tf_ros && cd ~/tf_ros

  2. Download and Install an NVIDIA Driver >= 384.x (https://www.nvidia.co.jp/Download/index.aspx). We recommend RUN files; please check our other post on this topic.

  3. Download and Install CUDA 9.0 (https://developer.nvidia.com/cuda-90-download-archive)

  4. Download and Install CUDNN for CUDA 9.0 (https://developer.nvidia.com/rdp/cudnn-download; https://docs.nvidia.com/deeplearning/sdk/cudnn-install/index.html#installlinux)

  5. Download TensorFlow for Python 2.7
    $ wget https://storage.googleapis.com/tensorflow/linux/gpu/tensorflow_gpu-1.10.1-cp27-none-linux_x86_64.whl

  6. Install Python VirtualEnv
    $ sudo apt-get install python-virtualenv

  7. Create Python 2.7 virtual environment.
    $ virtualenv --system-site-packages -p python2 p2tf_venv
    (p2tf_venv can be any name)

  8. Activate environment
    $ source p2tf_venv/bin/activate

  9. Install TensorFlow
    (p2tf_venv) $ pip install --upgrade tensorflow_gpu-1.10.1-cp27-none-linux_x86_64.whl

  10. Make sure the CUDA 9 libraries and binaries are in LD_LIBRARY_PATH and PATH:

$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH && export PATH=/usr/local/cuda/bin:$PATH

  11. Check that the ROS libraries are in LD_LIBRARY_PATH as well.
    $ echo $LD_LIBRARY_PATH
    It should contain /opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu

  12. Check that the Python ROS libraries are also included in PYTHONPATH.
    $ echo $PYTHONPATH
    It should contain /opt/ros/kinetic/lib/python2.7/dist-packages.

  13. Test that your installation is correct.
    (p2tf_venv) $ python -c "import tensorflow as tf; print(tf.__version__)"
    It should print the TensorFlow version installed, such as 1.10.1.

  14. Ready to go!
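
As an optional sanity check from inside the activated virtualenv, the following sketch (assuming TensorFlow 1.x, where tf.test.is_gpu_available() is available) confirms that both the GPU and the ROS Python libraries are visible:

import rospy
import tensorflow as tf

print('TensorFlow version: %s' % tf.__version__)
print('GPU available: %s' % tf.test.is_gpu_available())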

Running Dreamview (Apollo 3.0) for pre-Haswell processors

Problem overview

The Apollo docker includes a version of PCL which was built with the FMA instruction set (fused multiply-add). For any Intel processor older than Haswell (and perhaps even some Broadwell processors), this causes an illegal instruction and prevents Dreamview from starting. In previous versions of Apollo this was difficult to troubleshoot since no error details were provided – in fact, Dreamview was reported to be running, but localhost:8888 could not be accessed. Since 3.0, however, Dreamview reports the following:
dreamview: ERROR (spawn error)
Note: this error can be caused by many other issues. However, if you are running a pre-Haswell processor, the solution provided here is likely to work. You can use gdb to troubleshoot other spawn errors.

The solution

You have to build PCL within the docker to overcome this problem.

cd /apollo
git clone https://github.com/PointCloudLibrary/pcl.git
cd pcl
git checkout -b pcl-custom

Now you need to edit /apollo/pcl/CMakeLists.txt:
– Below the line set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "possible configurations" FORCE), add the following:

if (CMAKE_VERSION VERSION_LESS "3.1")
    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11")
    message("Build with c++11 support")
else ()
    set (CMAKE_CXX_STANDARD 11)
endif ()

Build PCL with the default options.

cd /apollo/pcl
mkdir build
cd build
cmake ..
make

Replace the current PCL libraries with the new ones you just made (you can backup first).

sudo mkdir -p /usr/local/lib/pcl.backup
sudo mv /usr/local/lib/libpcl* /usr/local/lib/pcl.backup
sudo cp -a lib/* /usr/local/lib/
sudo ldconfig

Now you have to re-build Apollo.

cd /apollo
./apollo.sh clean
./apollo.sh build_gpu

And hopefully you can now start and access Dreamview with the usual ./scripts/bootstrap.sh.

How to setup industrial_ci locally and test Autoware

industrial_ci will build, install and test every package in an isolated manner inside a clean Docker image. In this way, missing dependencies (system packages and other ROS packages) can easily be spotted and fixed before a package is published. This eases the deployment of Autoware (or any ROS package).

Running locally instead of on the cloud (travis-ci) speeds up the build time.

Autoware and industrial_ci require two different catkin workspaces.

Requirements:

  • Docker installed and working.
  • Direct connection to the Internet (No proxy). See below for proxy instructions.

Instructions:

  1. Install catkin tools: $ sudo apt-get install python-catkin-tools
  2. Clone Autoware (if you don’t have it already): $ git clone https://github.com/CPFL/Autoware (if you wish to test a specific branch, check it out or clone with -b).
  3. Create a directory to hold a new workspace at the same level as Autoware, with a src subdirectory (in this example catkin_ws, with the base directory being home ~).
    ~/$ mkdir -p catkin_ws/src && cd catkin_ws/src
  4. Initialize that workspace by running catkin_init_workspace inside catkin_ws/src.
    ~/catkin_ws/src$ catkin_init_workspace
  5. Clone industrial_ci inside catkin_ws/src.
    ~/catkin_ws/src$ git clone https://github.com/ros-industrial/industrial_ci
  6. The directory structure should look as follows:
~
├── Autoware
│   ├── ros
│   │   └── src
│   └── .travis.yml
└── catkin_ws
    └── src
        └── industrial_ci
  7. Go to catkin_ws and build industrial_ci.
    ~/catkin_ws$ catkin config --install && catkin b industrial_ci && source install/setup.bash
  8. Once finished, move to the Autoware directory: ~/catkin_ws$ cd ~/Autoware
  9. Run industrial_ci in either of two ways:

– Using rosrun industrial_ci run_ci ROS_DISTRO=kinetic ROS_REPO=ros or rosrun industrial_ci run_ci ROS_DISTRO=indigo ROS_REPO=ros. This method manually specifies the distribution and the repository sources.
– Using ~/Autoware$ rosrun industrial_ci run_travis . This will parse the .travis.yml and run in a similar fashion to travis-ci.

For more detailed info: https://github.com/ros-industrial/industrial_ci/blob/master/doc/index.rst#run-industrial-ci-on-local-host

How to run behind a proxy

Configure your docker to use proxy (from https://stackoverflow.com/questions/26550360/docker-ubuntu-behind-proxy and https://docs.docker.com/config/daemon/systemd/#httphttps-proxy):

Ubuntu 14.04

Edit the file /etc/default/docker, go to the proxy section, and change the values:

# If you need Docker to use an HTTP proxy, it can also be specified here.
export http_proxy="http://proxy.address:port"
export https_proxy="https://proxy.address:port"

Execute in a terminal sudo service docker restart.

Ubuntu 16.04

  1. Create config container directory:
    $ sudo mkdir -p /etc/systemd/system/docker.service.d

  2. Create the http-proxy.conf file inside it:
    $ sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf

  3. Paste the following text and edit with your proxy values:

[Service]
Environment="HTTP_PROXY=http://proxy.address:port"
Environment="HTTPS_PROXY=https://proxy.address:port"
  4. Save the file, then reload systemd and restart Docker:
    $ sudo systemctl daemon-reload && sudo systemctl restart docker

Modifications to industrial_ci

  1. Add the following lines in ~/catkin_ws/src/industrial_ci/industrial_ci/src/docker.sh at line 217:

Original:

FROM $DOCKER_BASE_IMAGE
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections

Change to:

FROM $DOCKER_BASE_IMAGE
ENV http_proxy "http://proxy.address:port"
ENV https_proxy "https://proxy.address:port"
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
  2. Change line 80 of ~/catkin_ws/src/industrial_ci/industrial_ci/src/tests/source_test.sh:

Original:

# Setup rosdep
rosdep --version
if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
    sudo rosdep init
fi

Change to:

# Setup rosdep
rosdep --version
if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
    rosdep init
fi
  3. In ~/catkin_ws/src/industrial_ci/industrial_ci/src/tests/source_test.sh, at line 56:

Original:

ici_time_start setup_apt

sudo apt-get update -qq

# If more DEBs needed during preparation, define ADDITIONAL_DEBS variable where you list the name of DEB(S, delimitted by whitespace)
if [ "$ADDITIONAL_DEBS" ]; then
    sudo apt-get install -qq -y $ADDITIONAL_DEBS || error "One or more additional deb installation is failed. Exiting."
fi
source /opt/ros/$ROS_DISTRO/setup.bash

ici_time_end  # setup_apt

if [ "$CCACHE_DIR" ]; then
    ici_time_start setup_ccache
    sudo apt-get install -qq -y ccache || error "Could not install ccache. Exiting."
    export PATH="/usr/lib/ccache:$PATH"
    ici_time_end  # setup_ccache
fi

ici_time_start setup_rosdep

Change to:

ici_time_start setup_apt

apt-get update -qq

# If more DEBs needed during preparation, define ADDITIONAL_DEBS variable where you list the name of DEB(S, delimitted by whitespace)
if [ "$ADDITIONAL_DEBS" ]; then
    apt-get install -qq -y $ADDITIONAL_DEBS || error "One or more additional deb installation is failed. Exiting."
fi
source /opt/ros/$ROS_DISTRO/setup.bash

ici_time_end  # setup_apt

if [ "$CCACHE_DIR" ]; then
    ici_time_start setup_ccache
    apt-get install -qq -y ccache || error "Could not install ccache. Exiting."
    export PATH="/usr/lib/ccache:$PATH"
    ici_time_end  # setup_ccache
fi

ici_time_start setup_rosdep
  4. In the same way, change line 70 of ~/catkin_ws/src/industrial_ci/industrial_ci/src/tests/abi_check.sh:

Original:

    # Setup rosdep
    rosdep --version
    #if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
        sudo rosdep init
    #fi

Change to:

    # Setup rosdep
    rosdep --version
    #if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
        rosdep init
    #fi

Compile again with catkin, then you can run industrial_ci as indicated above.

Apollo messages

This post provides an overview of the messages used by Apollo, as captured from the demo rosbag for testing the perception module.

To do:

  • Investigate protobuf message definitions for variable types
  • Investigate contents of standard ROS type messages

Standard message types

sensor_msgs/Image
sensor_msgs/PointCloud2
tf2_msgs/TFMessage

These messages use the standard ROS message format for their respective types.

Custom message types

Apollo uses a custom version of ROS which replaces the msg message description language with Real Time Publish Subscribe (RTPS) and Google protobuf messaging for publishing and receiving the following message types.

pb_msgs/Chassis
pb_msgs/ContiRadar
pb_msgs/EpochObservation
pb_msgs/GnssBestPose
pb_msgs/GnssEphemeris
pb_msgs/GnssStatus
pb_msgs/Gps
pb_msgs/Imu
pb_msgs/InsStat
pb_msgs/LocalizationEstimate

Messages using these types cannot be interpreted using rosbag tools as only the header is compatible. However, their content can be dumped as text using read_messages().

Message topics

The demo rosbag contains the following topics:

topics: /apollo/canbus/chassis 5851 msgs : pb_msgs/Chassis
/apollo/localization/pose 5856 msgs : pb_msgs/LocalizationEstimate
/apollo/sensor/camera/traffic/image_long 471 msgs : sensor_msgs/Image
/apollo/sensor/camera/traffic/image_short 469 msgs : sensor_msgs/Image
/apollo/sensor/conti_radar 789 msgs : pb_msgs/ContiRadar
/apollo/sensor/gnss/best_pose 59 msgs : pb_msgs/GnssBestPose
/apollo/sensor/gnss/corrected_imu 5838 msgs : pb_msgs/Imu
/apollo/sensor/gnss/gnss_status 59 msgs : pb_msgs/GnssStatus
/apollo/sensor/gnss/imu 11630 msgs : pb_msgs/Imu
/apollo/sensor/gnss/ins_stat 118 msgs : pb_msgs/InsStat
/apollo/sensor/gnss/odometry 5848 msgs : pb_msgs/Gps
/apollo/sensor/gnss/rtk_eph 49 msgs : pb_msgs/GnssEphemeris
/apollo/sensor/gnss/rtk_obs 352 msgs : pb_msgs/EpochObservation
/apollo/sensor/velodyne64/compensator/PointCloud2 587 msgs : sensor_msgs/PointCloud2
/tf 11740 msgs : tf2_msgs/TFMessage
/tf_static 1 msg : tf2_msgs/TFMessage
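
Such a listing can also be reproduced programmatically with the rosbag Python API (a sketch, using the same bag file as the dump script below):

import rosbag

bag = rosbag.Bag('../apollo_data/2018-01-03-19-37-16.bag', 'r')
info = bag.get_type_and_topic_info()
for topic, topic_info in sorted(info.topics.items()):
    print('%s  %d msgs : %s' % (topic, topic_info.message_count, topic_info.msg_type))
bag.close()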

Example messages

All custom messages were read using the following script, within the Apollo docker:

import rosbag

bag = rosbag.Bag('../apollo_data/2018-01-03-19-37-16.bag', 'r')

read_topic = '/apollo/canbus/chassis'

counter = 0

# Dump each message on the chosen topic to its own text file.
for topic, msg, t in bag.read_messages(topics=[read_topic]):
    out_file = 'messages/' + str(counter) + '.txt'
    with open(out_file, 'w') as f:
        f.write(str(msg))
    counter += 1

bag.close()

Note that this will not work outside of the Apollo docker – the files will be empty.

pb_msgs/Chassis

Of custom protobuf type pb_msgs/Chassis.

engine_started: true
engine_rpm: 0.0
speed_mps: 0.261111110449
odometer_m: 0.0
fuel_range_m: 0
throttle_percentage: 14.9950408936
brake_percentage: 20.5966281891
steering_percentage: 4.36170196533
steering_torque_nm: -0.75
parking_brake: false
driving_mode: COMPLETE_AUTO_DRIVE
error_code: NO_ERROR
gear_location: GEAR_DRIVE
header {
    timestamp_sec: 1513807824.58
    module_name: "chassis"
    sequence_num: 96620
}
signal {
    turn_signal: TURN_NONE
    horn: false
}
chassis_gps {
    latitude: 37.416912
    longitude: -122.016053333
    gps_valid: true
    year: 17
    month: 12
    day: 20
    hours: 22
    minutes: 10
    seconds: 23
    compass_direction: 270.0
    pdop: 0.8
    is_gps_fault: false
    is_inferred: false
    altitude: -42.5
    heading: 285.75
    hdop: 0.4
    vdop: 0.6
    quality: FIX_3D
    num_satellites: 20
    gps_speed: 0.89408
}

/apollo/localization/pose

Of custom protobuf type pb_msgs/LocalizationEstimate.

header {
timestamp_sec: 1513807826.05
module_name: "localization"
sequence_num: 96460
}
pose {
    position {
    x: 587068.814494
    y: 4141577.34872
    z: -31.0619329279
    }
    orientation {
        qx: 0.0354450020653
        qy: 0.0137914670665
        qz: -0.608585615062
        qw: -0.792576177036
    }
    linear_velocity {
        x: -1.08479686092
        y: 0.30964034124
        z: -0.00187507107555
    }
    linear_acceleration {
        x: -2.10530393576
        y: 0.553837321635
        z: 0.170289232445
    }
    angular_velocity {
        x: 0.00630864843147
        y: 0.0111583669994
        z: 0.0146261464379
    }
    heading: 2.88124066533
    linear_acceleration_vrf {
        x: -0.0137881469155
        y: 2.15869303259
        z: 0.328470914294
    }
    angular_velocity_vrf {
        x: 0.0120972352232
        y: -0.00428235807315
        z: 0.0146133729179
    }
    euler_angles {
        x: 0.0213395674976
        y: 0.0730372234802
        z: -3.40194464185
    }
}
measurement_time: 1513807826.04

/apollo/sensor/camera/traffic/image_long

This is a standard sensor_msgs/Image message.

/apollo/sensor/camera/traffic/image_short

This is a standard sensor_msgs/Image message.

/apollo/sensor/conti_radar

Of custom protobuf type pb_msgs/ContiRadar.

header {
timestamp_sec: 1513807824.52
module_name: "conti_radar"
sequence_num: 12971
}
contiobs {
header {
timestamp_sec: 1513807824.52
module_name: "conti_radar"
sequence_num: 12971
}
clusterortrack: false
obstacle_id: 0
longitude_dist: 107.6
lateral_dist: -17.2
longitude_vel: -0.25
lateral_vel: -0.75
rcs: 7.5
dynprop: 1
longitude_dist_rms: 0.371
lateral_dist_rms: 0.616
longitude_vel_rms: 0.371
lateral_vel_rms: 0.616
probexist: 1.0
meas_state: 2
longitude_accel: 0.23
lateral_accel: 0.0
oritation_angle: 0.0
longitude_accel_rms: 0.794
lateral_accel_rms: 0.005
oritation_angle_rms: 1.909
length: 2.8
width: 2.4
obstacle_class: 1
}
...[99 more contiobs]...
object_list_status {
nof_objects: 100
meas_counter: 8464
interface_version: 0
}

/apollo/sensor/gnss/best_pose

Of custom protobuf type pb_msgs/GnssBestPose.

header {
timestamp_sec: 1513807825.02
}
measurement_time: 1197843043.0
sol_status: SOL_COMPUTED
sol_type: NARROW_INT
latitude: 37.4169108497
longitude: -122.016059063
height_msl: 2.09512365051
undulation: -32.0999984741
datum_id: WGS84
latitude_std_dev: 0.0114300707355
longitude_std_dev: 0.00970683153719
height_std_dev: 0.0248824004084
base_station_id: "0"
differential_age: 2.0
solution_age: 0.0
num_sats_tracked: 13
num_sats_in_solution: 13
num_sats_l1: 13
num_sats_multi: 11
extended_solution_status: 33
galileo_beidou_used_mask: 0
gps_glonass_used_mask: 51

/apollo/sensor/gnss/corrected_imu

Of custom protobuf type pb_msgs/Imu.

This appears to be empty, with just a timestamp header.

/apollo/sensor/gnss/gnss_status

Of custom protobuf type pb_msgs/GnssStatus.

header {
timestamp_sec: 1513807826.02
}
solution_completed: true
solution_status: 0
position_type: 50
num_sats: 14

/apollo/sensor/gnss/imu

Of custom protobuf type pb_msgs/Imu.

header {
timestamp_sec: 1513807824.58
}
measurement_time: 1197843042.56
measurement_span: 0.00499999988824
linear_acceleration {
x: -0.172816216946
y: -0.864528119564
z: 9.75685194135
}
angular_velocity {
x: -0.000550057197804
y: 0.00203638196634
z: 0.00155888550527
}

/apollo/sensor/gnss/ins_stat

Of custom protobuf type pb_msgs/InsStat.

header {
timestamp_sec: 1513807826.0
}
ins_status: 3
pos_type: 56

/apollo/sensor/gnss/odometry

Of custom protobuf type pb_msgs/Gps.

header {
timestamp_sec: 1513807824.58
}
localization {
position {
x: 587069.287353
y: 4141577.22403
z: -31.0546750054
}
orientation {
qx: 0.0399296504032
qy: 0.0164343412444
qz: -0.606054063971
qw: -0.79425059458
}
linear_velocity {
x: -0.231635539107
y: 0.0795322332148
z: 0.00147877897123
}
}

/apollo/sensor/gnss/rtk_eph

Of custom protobuf type pb_msgs/GnssEphemeris.

gnss_type: GPS_SYS
keppler_orbit {
gnss_type: GPS_SYS
sat_prn: 23
gnss_time_type: GPS_TIME
week_num: 1980
af0: -0.000219020526856
af1: 0.0
af2: 0.0
iode: 43.0
deltan: 5.23450375238e-09
m0: -1.97123659751
e: 0.0118623136077
roota: 5153.61836624
toe: 345600.0
toc: 345600.0
cic: 1.30385160446e-07
crc: 266.5
cis: -8.56816768646e-08
crs: -9.28125
cuc: -7.13393092155e-07
cus: 5.16884028912e-06
omega0: 2.20992318653
omega: -2.40973031377
i0: 0.943533454733
omegadot: -8.16355433052e-09
idot: -2.10723063176e-11
accuracy: 2
health: 0
tgd: -2.04890966415e-08
iodc: 43.0
}
glonass_orbit {
gnss_type: GLO_SYS
slot_prn: 10
gnss_time_type: GLO_TIME
toe: 339318.0
frequency_no: -7
week_num: 1980
week_second_s: 339318.0
tk: 3990.0
clock_offset: 1.41123309731e-05
clock_drift: 9.09494701773e-13
health: 0
position_x: -16965363.2812
position_y: -15829665.5273
position_z: -10698784.1797
velocity_x: -994.89402771
velocity_y: -1087.91637421
velocity_z: 3184.79061127
accelerate_x: -2.79396772385e-06
accelerate_y: -3.72529029846e-06
accelerate_z: -1.86264514923e-06
infor_age: 0.0
}

/apollo/sensor/gnss/rtk_obs

Of custom protobuf type pb_msgs/EpochObservation.

receiver_id: 0
gnss_time_type: GPS_TIME
gnss_week: 1980
gnss_second_s: 339042.6
sat_obs_num: 13
sat_obs {
sat_prn: 31
sat_sys: GPS_SYS
band_obs_num: 2
band_obs {
band_id: GPS_L1
frequency_value: 0.0
pseudo_type: CORSE_CODE
pseudo_range: 22104170.8773
carrier_phase: 116158209.251
loss_lock_index: 173
doppler: -2880.74853516
snr: 173.0
}
band_obs {
band_id: GPS_L2
frequency_value: 0.0
pseudo_type: CORSE_CODE
pseudo_range: 22104174.5591
carrier_phase: 90512897.445
loss_lock_index: 140
doppler: -2244.73510742
snr: 140.0
}
}

…[13 more sat_obs]…

/apollo/sensor/velodyne64/compensator/PointCloud2

This is a standard sensor_msgs/PointCloud2 message.

/tf

This is a standard tf2_msgs/TFMessage message. The messages alternate between frame_id: world to child_frame_id: localization transform messages and frame_id: world to child_frame_id: novatel transform messages.

/tf_static

This is a standard tf2_msgs/TFMessage message, for the frame_id: novatel to velodyne64 transform.

Setup of Allied Vision Camera with VimbaSDK and ROS node

How to set up and use the avt_vimba_camera node

Assuming an x86_64 system and a GiGE PoE camera

http://wiki.ros.org/avt_vimba_camera

Driver Setup

  1. Download the Vimba SDK from Allied Vision website https://www.alliedvision.com/en/products/software.html
  2. Extract the contents of the file
  3. Go to Vimba_2_1/VimbaGigETL/
  4. Execute sudo ./Install.sh. This will add a couple of files inside /etc/profile.d
    • VimbaGigETL_64bit.sh
    • VimbaGigETL_32bit.sh
  5. The SDK will ask you to logout and login again. If you don’t wish to do so, execute source /etc/profile.d/VimbaGigETL_64bit.sh
  6. Connect the camera.
  7. You’re ready to use the camera.

Change Camera IP Address

Initially the camera runs in DHCP mode. If you wish to change this:

  1. Go to Vimba_2_1/Tools/Viewer/Bin/x86_64bit/
  2. Execute sudo -E ./VimbaViewer
  3. Right Click the camera and Press Open CONFIG
  4. Go to the Tree and Open the GiGE root and the configuration subnode
  5. Change IP Configuration Mode to Persistent
  6. Move to the Persistent tree and change the Persistent IP Address to the desired value.
  7. Finally Click on IP Configuration Apply and click the Execute button
  8. Close the CONFIG MODE window
  9. After a few moments, the camera will appear showing the selected IP.

View Image Stream from Camera

  1. Go to Vimba_2_1/Tools/Viewer/Bin/x86_64bit/
  2. Execute sudo -E ./VimbaViewer
  3. Right Click the camera while in the viewer
  4. Select Open FULL ACCESS
  5. A new Window will appear, press the Blue Play button

ROS Node setup

  1. Once the driver is working, install the node using sudo apt-get install ros-xxxx-avt-vimba-camera, where xxxx represents your ROS distro.
  2. From Autoware launch in a sourced terminal roslaunch runtime_manager avt_camera.launch guid:=SERIALNUMBER or roslaunch runtime_manager avt_camera.launch ip:=xxx.xxx.xxx.xxx

The serial number or IP address is the one previously obtained from the configuration tool.

Dynamic configuration

The avt_vimba_camera package supports dynamic configuration.

Once the node is running execute rosrun dynamic_reconfigure dynparam get /avt_camera to get all the supported parameters.

To change one execute:
rosrun dynamic_reconfigure dynparam set /avt_camera exposure_auto_max 500000

This will update the maximum auto exposure time to 500 ms.
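
The same change can be made from Python through the dynamic_reconfigure client API (a sketch; the node name /avt_camera and the exposure_auto_max parameter are taken from the commands above):

#!/usr/bin/env python
import rospy
from dynamic_reconfigure.client import Client

rospy.init_node('avt_camera_config', anonymous=True)

client = Client('/avt_camera', timeout=5)
print(client.get_configuration())  # dump all currently supported parameters
client.update_configuration({'exposure_auto_max': 500000})  # value in microseconds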

For extra details on dynamic_reconfigure check: http://wiki.ros.org/dynamic_reconfigure

Continental ARS 308-21 SSAO Radar setup

This is a short setup guide for the Continental ARS 308-21 SSAO radar node. This guide covers the following topics:

Contents

  1. Requirements and the hardware setup
  2. A brief introduction to CAN bus communication using Linux
  3. Radar configuration
  4. Receiving CAN messages on ROS
  5. Using the Autoware RADAR Node


1. Requirements

Hardware

  1. Continental ARS 308-21 SSAO Radar
  2. 12V DC power supply
  3. CAN interface device
  4. CAN adaptor*

Software

  • Ubuntu 14.04 or above
  • ROS
  • ROS package: socketcan_interface (http://wiki.ros.org/socketcan_interface)

*: The adaptor is needed for termination of the CAN bus. From the technical documentation of the Continental ARS 308-2C/-21:

“Since no termination resistors are included in the radar sensor ARS 308-2C and ARS 308-21, two 120 Ohm terminal resistors have to be connected to the network (separately or integrated in the CAN interface of the corresponding unit).”


Hardware setup

1. Connect the devices as shown below.

Fig 1. Hardware setup.

2. Turn on the power supply. The device should start working, with audible operation.

2. A brief introduction to CAN bus communication using Linux

There are various ways to communicate with a CAN bus. For Linux, SocketCAN is one of the most widely used CAN drivers: it is open source, comes with the kernel, and can be used with many devices. If you prefer a different vendor's driver, please refer to that driver's manual for communicating via the CAN bus.

First, load the drivers. The device sends messages at a specific bitrate; if this is not matched, the stream will not be synchronized. Therefore the CAN link must be configured (via ip link) with the bitrate of the device. The bitrate for this device is fixed at 500 kbit/s and cannot be changed. Below is an example snippet for setting up the CAN device can1:

$ sudo modprobe can_dev
$ sudo modprobe can
$ sudo modprobe can_raw
$ sudo ip link set can1 type can bitrate 500000
$ sudo ifconfig can1 up

Now the connection between the computer and the sensor is established.

To inspect the information sent by the device, a user-friendly tool package called can-utils can be used with SocketCAN to access the messages via the driver.

Install can-utils:

$ sudo apt-get install can-utils

Display the CAN messages from can1:

$ candump can1

A stream of CAN messages should be received at this point. An example of the message stream is shown below.

The CAN messages sent by the device have to be converted into meaningful information. CAN messages carry identifiers (headers) that identify the content of each message. The identifiers and the contents of the messages sent by the radar are shown below.

0x300 and 0x301 are the input signals. The ego-vehicle speed and yaw rate can be sent to the device. If this information is provided, the radar will return detected objects’ positions and speeds relative to the ego-vehicle. If this information is not sent, the radar will assume that it is stationary.

0x600, 0x701 and 0x702 are the output messages of the radar. The structure of these messages is given below:

Figure 2. Message structure of 0x600
Figure 3. Message structure of 0x701. The physical meanings are also given.

3. Configuring the radar

The radar must first be configured to provide tracked object information. This device does not transmit raw data from the radar scans. Instead, its microcontroller reads the raw sensing data and detects/tracks objects with its own algorithm (this algorithm is not accessible). The sensor sends the detected/tracked object information through the CAN bus.

The default behavior of the device is to send detected objects (not tracked). In order to receive tracked object messages, 0x60A, 0x60B and 0x60C, a configuration message has to be sent. The following command sends the configuration message for receiving tracked object messages using can-utils:

$ cansend can1 200#0832000200000000 

Now 0x60A, 0x60B and 0x60C messages can be received instead of 0x600, 0x701 and 0x702. We can check this by dumping the CAN stream on the terminal screen with the following command again:

$ candump can1 

The stream should include 0x60A, 0x60B and 0x60C messages now.
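
If you prefer to do this from Python instead of can-utils, the python-can package (an assumption, not part of the original setup; install a recent version with pip) provides the same functionality over SocketCAN:

import can

# Open the SocketCAN interface configured earlier (can1 at 500 kbit/s).
bus = can.interface.Bus(channel='can1', bustype='socketcan')

# Equivalent of: cansend can1 200#0832000200000000
config = can.Message(arbitration_id=0x200,
                     data=bytearray.fromhex('0832000200000000'),
                     is_extended_id=False)
bus.send(config)

# Read back a few frames; tracked-object messages should now use IDs 0x60A-0x60C.
for _ in range(20):
    frame = bus.recv(timeout=1.0)
    if frame is not None:
        print(frame)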

4. Receiving CAN messages on ROS

The ROS package socketcan_interface is needed to receive CAN messages in ROS. This package works on top of SocketCAN.

Install socketcan_interface;

$ sudo apt-get install ros-kinetic-socketcan-interface 

Test the communication in ROS:

$ rosrun socketcan_interface socketcan_dump can1

This should display the received messages in ROS. An example is shown below.

With socketcan_interface, a driver for this device can be developed. However, a ready-made CAN driver called ros_canopen already exists in ROS. Install this package with the following command:

$ sudo apt-get install ros-kinetic-ros-canopen

This package will be used to publish the CAN messages received from the device in the ROS environment.

The socketcan_to_topic node in the socketcan_bridge package can be used to publish topics from the CAN stream. First, start a ROS core and then launch this node with the name of the CAN port as a parameter (e.g. can1).

$ roscore
$ rosrun socketcan_bridge socketcan_to_topic_node _can_device:="can1"

This will publish a topic called “received_messages”. Check the messages with the following command:

$ rostopic echo /received_messages

This should show the received messages. We are interested in the “id” and “data” fields. An example of the received_messages is shown below.
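
To consume these frames in your own node, subscribe to received_messages, which carries can_msgs/Frame messages (a minimal sketch):

#!/usr/bin/env python
import rospy
from can_msgs.msg import Frame

def on_frame(frame):
    # "id" is the CAN identifier; "data" holds the payload, of which dlc bytes are valid.
    payload = ' '.join('%02X' % b for b in bytearray(frame.data)[:frame.dlc])
    rospy.loginfo('id=0x%03X dlc=%d data=[%s]', frame.id, frame.dlc, payload)

rospy.init_node('can_frame_listener')
rospy.Subscriber('received_messages', Frame, on_frame)
rospy.spin()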

Running Apollo 2.0 – GPU

This is a brief guide to getting Apollo 2.0 up and running. It is based on the Apollo README with additional setup for the Perception modules.

Prerequisites

  • Ubuntu 16.04 (also works on 14.04).
  • Nvidia GPU. Install the drivers as described here. You don’t need CUDA installed (it’s included in the Apollo docker). On 16.04 you will need a new-ish version – the below is tested using 390.25. The Apollo recommended 275.39 will not work on 16.04, but will work on 14.04. However, as this requires a newer GCC version that breaks the build system, it is much easier to go straight to the 390.25 driver.

Download code and Docker image

  1. Get the code:
    git clone https://github.com/ApolloAuto/apollo.git
  2. If you don’t have Docker already:
    ./apollo/docker/scripts/install_docker.sh
  3. Then log out and log back in again.
  4. Pull the docker image. The dev_start.sh script downloads the docker image (or updates it if already downloaded) and starts the container.
    cd apollo/
    ./docker/scripts/dev_start.sh

Install Nvidia graphics drivers in the Docker image

  1. Check which driver you are using (in host) with nvidia-smi.
  2. First, we need to enter the container with root privileges so we can install the matching graphics drivers.
    docker exec -it apollo_dev /bin/bash
    wget http://us.download.nvidia.com/XFree86/Linux-x86_64/***.**/NVIDIA-Linux-x86_64-***.**.run

    where ***.** is the driver version running on your host system.
    Note: Disregard the Apollo instructions to upgrade to GCC 4.9. Not only is it unnecessary with newer versions of the Nvidia drivers, but it will make the build fail. Stick with GCC 4.8.4, which comes in the Docker image.
  3. Now install the drivers:
    chmod +x NVIDIA-Linux-x86_64-***.**.run
    ./NVIDIA-Linux-x86_64-***.**.run -a --skip-module-unload --no-kernel-module --no-opengl-files

    Hit ‘enter’ to go with the default choices where prompted. Once done, check that the driver is working with nvidia-smi.
  4. To create a new image with your changes, check what the container ID of your image is (on the host):
    docker ps -l
  5. Use the resulting container ID with the following command to create a new image (on the host):
    docker commit CONTAINER_ID apolloauto/apollo:NEW_DOCKER_IMAGE_TAG
    where CONTAINER_ID is the container ID you found before, and NEW_DOCKER_IMAGE_TAG is the name you choose for your Apollo GPU image.

Build Apollo in your new Docker image

  1. To get into your new docker image, use the following:
    ./docker/scripts/dev_start.sh -l -t NEW_DOCKER_IMAGE_TAG
    ./docker/scripts/dev_into.sh
  2. Now you should be able to build the GPU version of Apollo:
    ./apollo.sh clean
    ./apollo.sh build_gpu

Run Apollo!

  1. From within the docker image, start Apollo:
    scripts/bootstrap.sh
  2. Check that Dreamview is running at http://localhost:8888.
  3. Set up in Dreamview by selecting the setup mode, vehicle, and map in the top right. For the sample data rosbag, select “Standard”, “Mkz8” and “Sunnyvale Big Loop”.
  4. Start the rosbag in the docker container with rosbag play path/to/rosbag.bag.
  5. Once you see the vehicle moving in Dreamview, pause the rosbag with the space bar.
  6. Wait a few seconds for the perception, prediction and traffic light modules to load.
  7. Resume playing the rosbag with the spacebar.

Once rosbag playback is complete, to play it again you have to first shut down with scripts/bootstrap.sh stop and then repeat the above from step 1 (otherwise the time discrepancy stops the modules from working).

Rosbag Record from Rosbag Play, timestamps out of sync

When recording data from a previously recorded rosbag instead of live sensor data, the clock can become a problem.
rosbag record writes the clock at the time the new rosbag is created, but the original message timestamps are not updated, so the clock in the new rosbag and the topic timestamps end up out of sync.

To fix this, add /clock to the list of recorded topics when recording the new rosbag. This keeps the clock of the original rosbag instead of creating a new one.

Example:

<launch>

  <param name="use_sim_time" value="true" />

  <node pkg="rosbag" type="play" name="rosbagplay" output="screen" required="true" args="--clock /PATH/TO/ROSBAGTOPLAY"/>

  <node pkg="rosbag" type="record" name="rosbagrecord" output="screen" required="true" args="/clock LISTOFTOPICS -O /PATH/TO/OUTPUTFILE"/>

</launch>
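
To see how far the bag's receive times drift from the message header stamps (the symptom described above), a quick check can be done with the rosbag Python API (a sketch; /PATH/TO/OUTPUTFILE is the placeholder from the launch file):

import rosbag

bag = rosbag.Bag('/PATH/TO/OUTPUTFILE', 'r')
for topic, msg, t in bag.read_messages():
    if hasattr(msg, 'header'):
        drift = t.to_sec() - msg.header.stamp.to_sec()
        print('%s: bag time - header stamp = %.3f s' % (topic, drift))
bag.close()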


Traffic Light recognition

Prerequisites:
– Vector Map
– NDT working
– Calibration publisher
– TF between camera and localizer

Traffic light recognition is split into two parts:
1. feat_proj finds the ROIs of the traffic signals in the current camera FOV.
2. region_tlr checks each ROI and publishes the result; it also publishes the /tlr_superimpose_image image with the traffic lights overlaid.
2a. region_tlr_ssd is a deep-learning-based alternative detector.

Launch Feature Projection

roslaunch road_wizard feat_proj.launch camera_id:=/camera0

Launch HSV classifier

roslaunch road_wizard traffic_light_recognition.launch camera_id:=/camera0 image_src:=/image_XXXX

SSD Classifier

roslaunch road_wizard traffic_light_recognition_ssd.launch camera_id:=/camera0 image_src:=/image_XXXX network_definition_file:=/PATH_TO_NETWORK_DEFINITION/deploy.prototxt pretrained_model_file:=/PATH_TO_MODEL/Autoware_tlr_SSD_.caffemodel use_gpu:=true gpu_device_id:=0