Running Apollo 2.0 – GPU

This is a brief guide to getting Apollo 2.0 up and running. It is based on the Apollo README with additional setup for the Perception modules.

Prerequisites

  • Ubuntu 16.04 (also works on 14.04).
  • Nvidia GPU. Install the drivers as described here. You don’t need CUDA installed on the host (it’s included in the Apollo docker). On 16.04 you will need a fairly recent driver version – the steps below are tested with 390.25. The Apollo-recommended 275.39 driver will not work on 16.04 (it will work on 14.04), and as that route requires a newer GCC version that breaks the build system, it is much easier to go straight to the 390.25 driver.

Download code and Docker image

  1. Get the code:
    git clone https://github.com/ApolloAuto/apollo.git
  2. If you don’t have Docker already:
    ./apollo/docker/scripts/install_docker.sh
  3. Then log out and log back in again.
  4. Pull the docker image. The dev_start.sh script downloads the docker image (or updates it if already downloaded) and starts the container.
    cd apollo/
    ./docker/scripts/dev_start.sh

Install Nvidia graphics drivers in the Docker image

  1. Check which driver you are using (in host) with nvidia-smi.
  2. First we need to enter the container with root privileges so we can install the matching graphics drivers:
    docker exec -it apollo_dev /bin/bash
    wget http://us.download.nvidia.com/XFree86/Linux-x86_64/***.**/NVIDIA-Linux-x86_64-***.**.run

    where ***.** is the driver version running on your host system.
    Note: Disregard the Apollo instructions to upgrade to GCC 4.9. Not only is it unnecessary with newer versions of the Nvidia drivers, it will make the build fail. Stick with GCC 4.8.4, which ships in the Docker image.
  3. Now install the drivers:
    chmod +x NVIDIA-Linux-x86_64-***.**.run
    ./NVIDIA-Linux-x86_64-***.**.run -a --skip-module-unload --no-kernel-module --no-opengl-files

    Hit ‘enter’ to go with the default choices where prompted. Once done, check that the driver is working with nvidia-smi.
  4. To create a new image with your changes, check what the container ID of your image is (on the host):
    docker ps -l
  5. Use the resulting container ID with the following command to create a new image (on the host):
    docker commit CONTAINER_ID apolloauto/apollo:NEW_DOCKER_IMAGE_TAG
    where CONTAINER_ID is the container ID you found before, and NEW_DOCKER_IMAGE_TAG is the name you choose for your Apollo GPU image.
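
Putting steps 1–5 together, a typical session looks like the sketch below, using the 390.25 driver from the prerequisites and a hypothetical NEW_DOCKER_IMAGE_TAG of apollo-gpu; substitute your own host driver version and tag:

# On the host: find the exact driver version to match
nvidia-smi --query-gpu=driver_version --format=csv,noheader

# Inside the container (docker exec -it apollo_dev /bin/bash):
wget http://us.download.nvidia.com/XFree86/Linux-x86_64/390.25/NVIDIA-Linux-x86_64-390.25.run
chmod +x NVIDIA-Linux-x86_64-390.25.run
./NVIDIA-Linux-x86_64-390.25.run -a --skip-module-unload --no-kernel-module --no-opengl-files

# Back on the host: bake the change into a new image
docker ps -l                                    # note the CONTAINER_ID
docker commit CONTAINER_ID apolloauto/apollo:apollo-gpu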

Build Apollo in your new Docker image

  1. To get into your new docker image, use the following:
    ./docker/scripts/dev_start.sh -l -t NEW_DOCKER_IMAGE_TAG
    ./docker/scripts/dev_into.sh
  2. Now you should be able to build the GPU version of Apollo:
    ./apollo.sh clean
    ./apollo.sh build_gpu

Run Apollo!

  1. From within the docker image, start Apollo:
    scripts/bootstrap.sh
  2. Check that Dreamview is running at http://localhost:8888.
  3. Set up in Dreamview by selecting the setup mode, vehicle, and map in the top right. For the sample data rosbag, select “Standard”, “Mkz8” and “Sunnyvale Big Loop”.
  4. Start the rosbag in the docker container with rosbag play path/to/rosbag.bag.
  5. Once you see the vehicle moving in Dreamview, pause the rosbag with the space bar.
  6. Wait a few seconds for the perception, prediction and traffic light modules to load.
  7. Resume playing the rosbag with the spacebar.

Once the rosbag has finished playing, to play it again you must first shut down with scripts/bootstrap.sh stop and then repeat the steps above from step 1 (otherwise the time discrepancy stops the modules from working).
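
For reference, a complete session from inside the container looks something like the sketch below (the rosbag path is a placeholder for wherever your sample data lives):

scripts/bootstrap.sh                 # start Dreamview and the modules
rosbag play /path/to/rosbag.bag      # pause/resume with the space bar
# drive around in Dreamview at http://localhost:8888 ...
scripts/bootstrap.sh stop            # required before replaying the bag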

Rosbag Record from Rosbag Play, timestamps out of sync

When recording a new rosbag from a previously recorded rosbag instead of from live sensor data, the clock can become a problem.
rosbag record stamps the clock with the time at which the new rosbag is being created, but the original message timestamps are not updated, causing the clock in the new rosbag and the topic timestamps to be out of sync.

To fix this, add /clock to the list of recorded topics when recording the new rosbag. This keeps the clock of the original rosbag instead of creating a new one.

Example:

<launch>

  <param name="use_sim_true" value="true" />

  <node pkg="rosbag" type="play" name="rosbagplay" output="screen" required="true" args="--clock /PATH/TO/ROSBAGTOPLAY"/>

  <node pkg="rosbag" type="record" name="rosbagrecord" output="screen" required="true" args="/clock LISTOFTOPICS -O /PATH/TO/OUTPUTFILE"/>

</launch>
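
The same can be done directly from a terminal instead of a launch file (a minimal sketch; topics and paths are placeholders as above):

rosparam set use_sim_time true
rosbag play --clock /PATH/TO/ROSBAGTOPLAY &
rosbag record /clock LISTOFTOPICS -O /PATH/TO/OUTPUTFILE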


Traffic Light recognition

Pre-requisites:
– Vector Map
– NDT working
– Calibration publisher
– Tf between camera and localizer

Traffic light recognition is split into two parts:
1. feat_proj finds the ROIs of the traffic signals in the current camera FOV.
2. region_tlr checks each ROI and publishes the result; it also publishes the /tlr_superimpose_image image with the traffic lights overlaid.
2a. region_tlr_ssd is a deep-learning-based detector that can be used instead.

Launch Feature Projection

roslaunch road_wizard feat_proj.launch camera_id:=/camera0

Launch HSV classifier

roslaunch road_wizard traffic_light_recognition.launch camera_id:=/camera0 image_src:=/image_XXXX

SSD Classifier

roslaunch road_wizard traffic_light_recognition_ssd.launch camera_id:=/camera0 image_src:=/image_XXXX network_definition_file:=/PATH_TO_NETWORK_DEFINITION/deploy.prototxt pretrained_model_file:=/PATH_TO_MODEL/Autoware_tlr_SSD_.caffemodel use_gpu:=true gpu_device_id:=0
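
After launching either classifier, a quick sanity check is to watch the output topics. The names below are the usual road_wizard ones (only /tlr_superimpose_image is mentioned above; the others are assumptions that may differ in your version):

rostopic hz /roi_signal              # ROIs projected by feat_proj
rostopic echo /light_color           # recognition result from region_tlr
rostopic hz /tlr_superimpose_image   # overlay image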

How to install SSD Caffe for Autoware

Caffe Prerequisites

  1. sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libhdf5-serial-dev protobuf-compiler
  2. sudo apt-get install libgflags-dev libgoogle-glog-dev liblmdb-dev
  3. sudo apt-get install libatlas-base-dev

Clone SSD fork of Caffe

  1. Go to your home directory
  2. Clone the code: git clone https://github.com/weiliu89/caffe.git ssdcaffe
  3. Move inside the directory cd ssdcaffe
  4. Checkout compatible API version git checkout 5365d0dccacd18e65f10e840eab28eb65ce0cda7
  5. Create config file cp Makefile.config.example Makefile.config
  6. Start building make
  7. Once completed, execute make distribute (steps 2–7 are condensed into a shell sketch after this list).
  8. Compile Autoware; CMake will detect SSD Caffe and compile the SSD nodes.
  9. To test, download the object detection models from:
    http://ertl.jp/~amonrroy/ssd_models/ssd500.zip
    http://ertl.jp/~amonrroy/ssd_models/ssd300.zip
    The 300 model will run faster but won’t give good results at farther distances. In contrast, the 500 model requires more computing power but will detect objects at lower resolutions (i.e. farther away).
  10. In Autoware’s RTM, use the [app] button next to ssd_unc in the Computing tab to select the correct image input source and the model paths.
  11. Launch the node and play a rosbag with image data.
  12. In Rviz add the ImageViewer Panel
  13. Select the Image topic and the Object Rect topic
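
As referenced in step 7, here is the build portion (steps 2–7) condensed into one shell sketch:

cd ~
git clone https://github.com/weiliu89/caffe.git ssdcaffe
cd ssdcaffe
git checkout 5365d0dccacd18e65f10e840eab28eb65ce0cda7
cp Makefile.config.example Makefile.config
make -j"$(nproc)"
make distribute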

How to setup Nvidia Drivers and CUDA in Ubuntu

Disabling Nouveau, if required (login loop or low-res mode); otherwise skip to the next section

  1. Confirm nouveau is loaded

lsmod | grep -i nouveau

If it is loaded, you’ll see the text nouveau in the 4th column:

video XXXX Y nouveau

  2. If nouveau is loaded, blacklist it. Create the file blacklist-nouveau.conf in /etc/modprobe.d/:

sudo nano /etc/modprobe.d/blacklist-nouveau.conf

And add the following text

blacklist nouveau
options nouveau modeset=0
  3. Execute sudo update-initramfs -u

  4. Restart
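
The whole procedure can also be scripted in one shot (requires sudo; note the reboot at the end):

sudo tee /etc/modprobe.d/blacklist-nouveau.conf > /dev/null <<'EOF'
blacklist nouveau
options nouveau modeset=0
EOF
sudo update-initramfs -u
sudo reboot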

NVIDIA Driver Setup

  1. Download the RUN file from NVidia’s website
    https://www.geforce.com/drivers
    You’ll have a file named similarly to NVIDIA-Linux-x86_64-XXX.YY.run

  2. Assign execution permissions chmod +x NVIDIA-Linux-x86_64-XXX.YY.run

  3. Move to a virtual console by pressing Ctrl+Alt+F1 and log in

  4. Terminate the X server by executing sudo service lightdm stop

  5. Run the installer sudo ./NVIDIA-Linux-x86_64-XXX.YY.run
    (If you are running on a laptop run instead sudo ./NVIDIA-Linux-x86_64-XXX.YY.run --no-opengl-files)

  6. Follow the instructions in the wizard. At the end, do not allow the wizard to modify the X configuration.

  7. Once back in the console, execute sudo service lightdm start. The GUI should be displayed. Log in.

  8. To confirm everything is set up, run nvidia-smi in a terminal

CUDA Setup

  1. Download the CUDA Installation RUN File from https://developer.nvidia.com/cuda-downloads
    You’ll have a file named similarly to cuda_X.0.YY.Z_linux.run

  2. Assign execution permissions chmod +x cuda_X.0.YY.Z_linux.run

  3. Run the installer sudo ./cuda_X.0.YY.Z_linux.run

  4. Follow the instructions on screen. DO NOT install the NVIDIA driver included. Install the CUDA Samples in your home directory.

  5. Once finished, confirm everything is OK: go to your home directory and execute cd NVIDIA_CUDA-X.Y_Samples/1_Utilities/deviceQuery, matching X.Y to your CUDA version (e.g. for CUDA 9.0: cd NVIDIA_CUDA-9.0_Samples/1_Utilities/deviceQuery)

  6. Compile the sample by running make

  7. Run the sample with ./deviceQuery; you should see the details of your GPU(s) and CUDA setup.
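
A successful run ends with a Result = PASS line. For later builds you will also want CUDA’s binaries and libraries on your paths (assuming the default install prefix /usr/local/cuda; add these lines to your ~/.bashrc):

export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH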

Autoware Full Stack for Autonomous Driving

Velodyne – BaseLink TF

roslaunch runtime_manager setup_tf.launch x:=1.2 y:=0.0 z:=2.0 yaw:=0.0 pitch:=0.0 roll:=0.0 frame_id:=/base_link child_frame_id:=/velodyne period_in_ms:=10

Robot Model

roslaunch model_publisher vehicle_model.launch

PCD Map

rosrun map_file points_map_loader noupdate PCD_FILES_SEPARATED_BY_SPACES

VectorMap

rosrun map_file vector_map_loader CSV_FILES_SEPARATED_BY_SPACES

World-Map TF

roslaunch world_map_tf.launch

Example launch file (From Moriyama data)

<launch>
  <!-- world map tf -->
  <node pkg="tf" type="static_transform_publisher" name="world_to_map" args="14771 84757 -39 0 0 0 /world /map 10" />
</launch>

roslaunch PATH/TO/MAP_WORLD_TF.launch

Voxel Grid Filter

roslaunch points_downsampler points_downsample.launch node_name:=voxel_grid_filter

Ground Filter

roslaunch points_preprocessor ring_ground_filter.launch node_name:=ring_ground_filter point_topic:=/points_raw

NMEA to Pose (if GNSS is available)

roslaunch gnss_localizer nmea2tfpose.launch plane:=7

NDT Matching

roslaunch ndt_localizer ndt_matching.launch use_openmp:=False use_gpu:=False get_height:=False

Mission Planning

rosrun lane_planner lane_rule

rosrun lane_planner lane_stop

roslaunch lane_planner lane_select.launch enablePlannerDynamicSwitch:=False

roslaunch astar_planner obstacle_avoid.launch avoidance:=False avoid_distance:=13 avoid_velocity_limit_mps:=4

roslaunch autoware_connector vel_pose_connect.launch topic_pose_stamped:=/ndt_pose topic_twist_stamped:=/estimate_twist sim_mode:=False

roslaunch astar_planner velocity_set.launch use_crosswalk_detection:=False enable_multiple_crosswalk_detection:=False points_topic:=points_no_ground enablePlannerDynamicSwitch:=False

roslaunch waypoint_maker waypoint_loader.launch multi_lane_csv:=/path/to/saved_waypoints.csv decelerate:=1

roslaunch waypoint_follower pure_pursuit.launch is_linear_interpolation:=True publishes_for_steering_robot:=False

roslaunch waypoint_follower twist_filter.launch
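
Rather than pasting each command by hand, the sequence above can be backgrounded from a single script (a rough sketch built from the commands above; run it from an Autoware-sourced shell and fill in your own map and waypoint paths):

#!/bin/bash
roslaunch runtime_manager setup_tf.launch x:=1.2 y:=0.0 z:=2.0 yaw:=0.0 pitch:=0.0 roll:=0.0 frame_id:=/base_link child_frame_id:=/velodyne period_in_ms:=10 &
roslaunch model_publisher vehicle_model.launch &
rosrun map_file points_map_loader noupdate PCD_FILES_SEPARATED_BY_SPACES &
rosrun map_file vector_map_loader CSV_FILES_SEPARATED_BY_SPACES &
roslaunch points_downsampler points_downsample.launch node_name:=voxel_grid_filter &
roslaunch points_preprocessor ring_ground_filter.launch node_name:=ring_ground_filter point_topic:=/points_raw &
roslaunch ndt_localizer ndt_matching.launch use_openmp:=False use_gpu:=False get_height:=False &
wait   # the mission planning nodes above can be appended in the same way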

How to setup IMU XSense MTI

Check which USB device was assigned to the IMU using dmesg:

[ 8808.219908] usb 3-3: new full-speed USB device number 28 using xhci_hcd
[ 8808.237513] usb 3-3: New USB device found, idVendor=2639, idProduct=0013
[ 8808.237522] usb 3-3: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[ 8808.237527] usb 3-3: Product: MTi-300 AHRS
[ 8808.237531] usb 3-3: Manufacturer: Xsens
[ 8808.237534] usb 3-3: SerialNumber: 03700715
[ 8808.265957] usbcore: registered new interface driver usbserial
[ 8808.265982] usbcore: registered new interface driver usbserial_generic
[ 8808.265999] usbserial: USB Serial support registered for generic
[ 8808.268037] usbcore: registered new interface driver xsens_mt
[ 8808.268048] usbserial: USB Serial support registered for xsens_mt
[ 8808.268063] xsens_mt 3-3:1.1: xsens_mt converter detected
[ 8808.268112] usb 3-3: xsens_mt converter now attached to ttyUSB0
  1. Change the permissions of the device: chmod a+rw /dev/ttyUSB0 (it is probably ttyUSB0; change it according to your setup)
  2. In an Autoware sourced terminal execute: rosrun xsens_driver mtdevice.py -m 2 -f 100
    (this configures the IMU to publish raw data from the sensor at 100Hz)
    To publish data execute (in a sourced terminal):
    rosrun xsens_driver mtnode.py _device:=/dev/ttyUSB0 _baudrate:=115200
  3. Confirm data is actually arriving by running rostopic echo /imu_raw in a different terminal
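
To avoid redoing the chmod after every replug, you can optionally install a udev rule keyed on the vendor/product IDs from the dmesg output above (a sketch; pick a MODE appropriate for your setup):

echo 'SUBSYSTEM=="tty", ATTRS{idVendor}=="2639", ATTRS{idProduct}=="0013", MODE="0666"' | sudo tee /etc/udev/rules.d/99-xsens.rules
sudo udevadm control --reload-rules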

How to launch NDT Localization

Prerequisites

Before starting make sure you have

  1. PCD Map (.pcd)
  2. Vector Map (.csv)
  3. TF File (.launch)

How to start localization

  1. Launch Autoware’s Runtime Manager
  2. Go to Setup Tab
  3. Click on TF and Vehicle Model buttons

    This will create the transformation from the localizer frame (Velodyne) to the base_link frame (the car’s tires)
  4. Go to Map tab
  5. Click the ref button on the PointCloud section
  6. Select ALL the PCD Files that form the map, then click Open
  7. Click the Point Cloud button to the left. A bar below will show the progress of the load. Wait until it’s complete.

    Do the same for the Vector Map, but this time select all the CSV files.
    Finally, load the TF for the map.
  8. Go to Simulation Tab
  9. Click the Ref Button and load a ROSBAG.
  10. Click Play and, once the ROSBAG has started playing, immediately press Pause

    This step is required to set the time source to simulation time instead of real time.
  11. If your rosbag contains /velodyne_packets instead of /points_raw, go to the Sensing tab and launch the LiDAR node corresponding to your sensor to decode the packets into /points_raw

    The corresponding calibration YAML files are located in ${AUTOWARE_PATH}/ros/src/sensing/drivers/lidar/packages/velodyne/velodyne_pointcloud/params/
    Select the correct one depending on the sensor.
  12. In the Sensing tab, inside the Points Downsampler section, click on voxel_grid_filter
  13. Go to Computing tab and click on the [app] button next to ndt_matching inside the Localization section. Make sure the Initial Pos is selected.
  14. Click on the ndt_matching checkbox.
  15. Launch RVIZ using the button below Runtime Manager, and load the default.rviz configuration file located in ${AUTOWARE_PATH}/ros/src/.config/rviz.
  16. In RVIZ click on the 2D Pose Estimate button, located in the top bar
  17. Click on the map at the initial position to start localization, and drag to give the initial orientation.
  18. If the initial position and pose are correct, the car model should now be shown in the correct position.
    If the model starts spinning, try giving a new initial position and pose.
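
If localization does not start, a quick terminal checklist of the topics involved in the steps above helps narrow things down (/filtered_points is the usual voxel_grid_filter output name, an assumption that may differ in your version):

rostopic hz /points_raw          # raw (or relayed) LiDAR points
rostopic hz /filtered_points     # downsampled points fed to NDT
rostopic echo -n 1 /ndt_pose     # pose published by ndt_matching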

Building Apollo outside of Docker

WARNING – this will probably mess up your system.
At some point I had an unrelated crash, so I restarted my computer, and now it is stuck in a boot loop. So don’t attempt this until I have updated this guide with a safe version!

I will probably try this again in a VM, assuming graphics drivers will work. Apollo is extremely fussy about library versions so be warned!

Current status: bricked my OS, but at the last attempt it built until hitting some ROS dependency issues, plus some linker problems probably caused by Apollo wanting some archaic glibc version.

You must not have Boost or PCL installed before attempting this, as Apollo requires specific versions. You’ll also need to make sure you have the correct version of GCC (4.8) and probably glibc (TODO).

sudo apt-get install gcc-4.8 g++-4.8 g++-4.8-multilib gcc-4.8-multilib
sudo update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-4.8 50
sudo update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-4.8 50
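
Confirm the switch took effect before going further:

gcc --version | head -n 1    # should report 4.8.x
g++ --version | head -n 1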

Now you need to build Boost.

wget http://downloads.sourceforge.net/project/boost/boost/1.54.0/boost_1_54_0.tar.gz
tar -zxvf boost_1_54_0.tar.gz
cd boost_1_54_0

You will also need to modify the Boost source code to get the threading library to compile with the old GCC version – see here.
Then you can build Boost as normal.

./bootstrap.sh --prefix=/usr/local
sudo ./b2 --with=all -j 4

Now get some dependencies:

sudo apt-get install -y build-essential apt-utils curl debconf-utils doxygen lcov libcurl4-openssl-dev libfreetype6-dev lsof python-pip python-matplotlib python-scipy python-software-properties realpath software-properties-common unzip vim nano wget zip cppcheck libgtest-dev git bc apt-transport-https shellcheck cmake gdb psmisc python-empy librosconsole0d librosconsole-dev libtf-conversions0d
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get install -y oracle-java8-installer

Install Bazel (again, very fussy with which version):

wget https://github.com/bazelbuild/bazel/releases/download/0.5.3/bazel_0.5.3-linux-x86_64.deb
dpkg -i bazel_0.5.3-linux-x86_64.deb
sudo apt-get install -f

At this point I did some cleanup.

sudo apt-get clean autoclean
sudo apt-get autoremove -y
sudo rm -fr /var/lib/apt/lists/*

You have to build a specific version of protobuf. Get yourself a coffee ready:

wget https://github.com/google/protobuf/releases/download/v3.3.0/protobuf-cpp-3.3.0.tar.gz
tar xzf protobuf-cpp-3.3.0.tar.gz
cd protobuf-3.3.0/
./configure --prefix=/usr
make
sudo make install
sudo chmod 755 /usr/bin/protoc

Moving on, install Node.js (again, a specific version is required here):

wget https://github.com/tj/n/archive/v2.1.0.tar.gz
tar xzf v2.1.0.tar.gz
cd n-2.1.0/
sudo make install
sudo n 8.0.0

Create a file called py27_requirements.txt and insert the following:

glog
protobuf == 3.1
python-gflags
flask
flask-socketio
gevent
requests >= 2.18
simplejson
PyYAML
catkin_pkg
rospkg

Now install the corresponding Python packages:

pip install -r py27_requirements.txt

I think the following might have to be done earlier, and might also get the wrong version of ROS given current build errors. Investigation needed (TODO):

curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | sudo apt-key add -
echo "deb https://dl.yarnpkg.com/debian/ stable main" | sudo tee /etc/apt/sources.list.d/yarn.list
sudo apt-get update && sudo apt-get install -y yarn

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
sudo apt-key adv --keyserver hkp://ha.pool.sks-keyservers.net:80 --recv-key 421C365BD9FF1F717815A3895523BAEEB01FA116

And some more pesky libraries:

sudo apt-get install -y libbz2-dev libconsole-bridge-dev liblog4cxx10-dev libeigen3-dev liblz4-dev libpoco-dev libproj-dev libtinyxml-dev libyaml-cpp-dev sip-dev uuid-dev zlib1g-dev

sudo apt-get install -y libatlas-base-dev libflann-dev libhdf5-serial-dev libicu-dev libleveldb-dev liblmdb-dev libopencv-dev libopenni-dev libqhull-dev libsnappy-dev libvtk5-dev libvtk5-qt4-dev mpi-default-dev

sudo apt-get update && sudo apt-get install -y --force-yes libglfw3 libglfw3-dev freeglut3-dev

You have to build glew to get the right version:

wget https://github.com/nigels-com/glew/releases/download/glew-2.0.0/glew-2.0.0.zip
unzip glew-2.0.0.zip
cd glew-2.0.0/
make
sudo make install

Same kind of thing for PCL:

wget https://github.com/PointCloudLibrary/pcl/archive/pcl-1.7.2.tar.gz
tar xzf pcl-1.7.2.tar.gz
cd pcl-pcl-1.7.2 && mkdir build && cd build
cmake ..
make
sudo make install

And the big bad CUDA (you need CUDA 8.0):

sudo apt-get install -y libgflags-dev libgoogle-glog-dev
wget https://developer.nvidia.com/compute/cuda/8.0/Prod2/local_installers/cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb
sudo dpkg -i cuda-repo-ubuntu1604-8-0-local-ga2_8.0.61-1_amd64-deb
sudo apt-key add /var/cuda-repo-8-0-local-ga2/7fa2af80.pub
sudo apt-get update
sudo apt-get install -y cuda-8-0

Next up is Caffe:

wget https://github.com/BVLC/caffe/archive/rc5.zip
unzip rc5.zip
cd caffe-rc5/
cp Makefile.config.example Makefile.config

In the file Makefile.config you have to uncomment the line WITH_PYTHON_LAYER := 1 before continuing.
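
On Ubuntu 16.04 you will most likely also need to point Makefile.config at the serial HDF5 layout (a commonly needed edit to go with the symlinks below; adjust the paths if your system differs):

INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include /usr/include/hdf5/serial
LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib /usr/lib/x86_64-linux-gnu/hdf5/serial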

sudo ln -s /usr/lib/x86_64-linux-gnu/libhdf5_serial.so.10.1.0 /usr/lib/x86_64-linux-gnu/libhdf5.so
sudo ln -s /usr/lib/x86_64-linux-gnu/libhdf5_serial_hl.so.10.0.2 /usr/lib/x86_64-linux-gnu/libhdf5_hl.so
cd python
for req in $(cat requirements.txt); do pip install --user $req; done
cd ..
make all
make test
make runtest
make distribute

Now you can get to the actual Apollo part.

wget https://github.com/ApolloAuto/apollo/archive/master.zip
unzip master.zip
cd apollo-master
./apollo.sh build_gpu # will fail
source bazel-apollo/external/ros/setup.bash
./apollo.sh build_gpu

You might need to grab the docker image before doing the Apollo build – I will test if it works without. I think this is because it grabs a special, Apollo-specific version of ROS. You can see why they recommend you keep within the Docker!

How to create a map using NDT Mapping

  1. Go to Autoware/ros directory
  2. Run Autoware using the ./run command
  3. Go to Simulation tab and Load a ROSBAG
  4. Click Play and immediately PAUSE
  5. Click Computing tab and select ndt_mapping
  6. Click RViz button at the bottom
  7. In RViz, click the File menu, then Open Config, and select the ndt_mapping.rviz visualization template located in Autoware/src/.config/rviz
  8. ndt_mapping will read from /points_raw
    IF the pointcloud is being published in a different topic, use the relay tool in a new terminal window
    rosrun topic_tools relay /front/velodyne_points /points_raw
    This will forward the topic /front/velodyne_points to /points_raw
  9. Go back to the Simulation tab and click Pause again to resume playback and start mapping
  10. The mapping process can be watched in RViz
  11. Once the desired area is mapped, click the [app] button next to ndt_mapping
  12. Select the desired output path using the Ref button
  13. Press the PCD OUTPUT button to generate the file.
  14. Uncheck the ndt_mapping node to stop.


How to verify Map

  1. Select the Map tab in the Runtime Manager and click on the Ref button
  2. Select the recently created file
  3. Click on the PointCloud button and wait until the progress bar reaches Loading… 100%
  4. Open RVIZ, Click the ADD button
  5. Select the By Topic Tab
  6. Double Click on /points_map PointCloud2
  7. The map will be displayed (remember to set the frame to map)