How to use webviz in Docker

  1. Clone the webviz project:

git clone

  2. Build the image, and name it.

The docker image contains only the basic dependencies. No code or compiled resources.

  3. Run a container in interactive mode with the recently created image, mount the webviz code, and attach it to the host network.

  4. Once inside the container, go to the mounted resource /webviz, install dependencies, build, and run the server.

  5. You'll see a message indicating the webviz server URL:
    ℹ 「wds」: Project is running at http://localhost:8081/

  6. Open your browser (Chrome seems to be the only fully supported browser), and direct it to that URL, adding app at the end.

  7. Drag and drop a rosbag into the browser's window.
    You can also execute rosbridge to communicate with the host roscore: roslaunch rosbridge_server rosbridge_websocket.launch

  8. Select the topics to be displayed. Be sure to select the correct coordinate frame.

Running Gitlab runners locally



  1. Install Docker.
  2. Set up a Gitlab repository and install gitlab-runner.
  3. Prepare or get the .gitlab-ci.yml to test.
  4. Execute a runner on the desired job.


Install Docker

Don't forget the post-setup steps.

Set up repository and install gitlab-runner

Prepare or get .gitlab-ci.yml

Suppose the following script:

image: "ruby:2.5"

before_script:
  - apt-get update -qq && apt-get install -y -qq sqlite3 libsqlite3-dev nodejs
  - ruby -v
  - which ruby
  - gem install bundler --no-document
  - bundle install --jobs $(nproc)  "${FLAGS[@]}"

rspec:
  script:
    - bundle exec rspec

rubocop:
  script:
    - bundle exec rubocop

There are two jobs defined:

  • rspec
  • rubocop

Execute the runner on a job

In a terminal, execute:

gitlab-runner exec docker rspec

This will execute the job rspec in a clean Docker container.

To define a timeout, use:

--timeout TIME

To pass environment variables, use --env "VARIABLE=value".


Git creating features

Creating features from a fork

Important note

Don't forget FIRST to create an ISSUE in upstream describing the fix/feature.


These are the steps involved:

  1. Fork in Github.
  2. Clone fork locally.
  3. Set upstream.
  4. Create features/fixes in local branches.
  5. Sign commits.
  6. Push to your fork.
  7. Create PR upstream.

Fork to your own account/organization

Simple step: just press the Fork button at the top right of the GitHub repository, and select where to fork.

Clone your fork locally

Clone the default branch in the current directory:
git clone --recursive

If you wish to clone a different branch:
git clone --recursive --branch BRANCH_NAME

If you wish to clone to a directory with a different name:
git clone --recursive DESIRED_DIR_NAME

Set upstream repository

Set upstream (original source):
git remote add upstream

Confirm remotes:
git remote -v

> origin (fetch)
> origin (push)
> upstream (fetch)
> upstream (push)

In this example:
origin represents your fork.
upstream represents the original repository you forked from.

NOTE: Names other than origin or upstream can be used. Just be careful to follow the same naming when pulling/pushing commits.
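The remote layout described above can be reproduced and checked locally. A minimal sketch (the repository URLs are placeholders, not taken from this post):

```shell
# Sketch: wire up origin (your fork) and upstream (the original repo).
# URLs below are placeholders for illustration only.
git init demo
cd demo
git remote add origin https://github.com/you/project.git
git remote add upstream https://github.com/original/project.git
git remote -v   # lists origin and upstream, each with (fetch) and (push)
```

Running `git remote -v` afterwards prints the four lines shown above, confirming both remotes are registered.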

Create features/fixes

In the Autoware case, please always create new features from the master branch.

Sign commits

GPG Sign

Set up your GPG keys following the GitHub article.

Signoff commits

Signoff your commits using git commit -s.

If you have an older git version, the -s flag might not be available. You can either update git from source, or use a PPA:

sudo add-apt-repository ppa:git-core/ppa -y
sudo apt-get update
sudo apt-get install git -y
git --version

Update fork

git checkout master
git fetch upstream
git merge upstream/master

Create feature branch

git checkout -b feature/awesome_stuff

Push to your fork

git push origin feature/awesome_stuff

Once finished

Create a PR from the GitHub website, and target the master branch.

Deeplab v3 Test

  1. Create Python Env
    $ mkdir ~/tf_ros && cd ~/tf_ros
    $ virtualenv --system-site-packages -p python2 tensor_ros_deeplab
  2. Activate Environment
    source tensor_ros_deeplab/bin/activate
  3. Install tensorflow GPU
    pip install --upgrade tensorflow_gpu-1.12.0-cp27-none-linux_x86_64.whl
  4. Clone TensorFlow's Models repo containing Deeplab:
    $ git clone
  5. Move to the Deeplab dir.
  6. Download a pretrained model:
    $ wget
  7. Use sample script for a single image:
  8. Change the MODEL var to the path of the previously downloaded model:
    MODEL = DeepLabModel("~/tf_ros/models/research/deeplab/deeplabv3_mnv2_dm05_pascal_trainaug_2018_10_01.tar.gz")
  9. To use a local image instead of a URL change the run_visualization function to:
def run_visualization(image_path):
  """Inferences DeepLab model and visualizes result."""
  try:
    original_im = Image.open(image_path)
  except IOError:
    print('Cannot read image. Please check path: ' + image_path)
    return

  print('running deeplab on image %s...' % image_path)
  resized_im, seg_map = MODEL.run(original_im)

  vis_segmentation(resized_im, seg_map)
  10. Finally, save and run the script.

Yolo3 training notes

Yolo3 custom training notes

To train a model, YOLO training code expects:
* Images
* Labels
* NAMES File
* CFG file
* train.txt file
* test.txt file
* DATA file
* Pretrained weights (optional)

Images and Labels

The images and labels should be located in the same directory. Each image and label is related to its counterpart by filename.
For image 001.jpg, the corresponding label should be named 001.txt.
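That filename pairing is easy to derive programmatically. A small sketch (the helper name is mine, not part of YOLO):

```python
import os

# Hypothetical helper: derive the label filename YOLO expects for an image.
def label_path(image_path):
    root, _ext = os.path.splitext(image_path)
    return root + ".txt"

print(label_path("dataset/001.jpg"))  # dataset/001.txt
```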

Label format

The file containing the labels is a plain text file. Each line contains a bounding box for one object. The columns are separated by spaces, in the following format:

classID x y width height

The x, y, width and height values should be normalized to the range 0 to 1, relative to the image width and height.
x and y correspond to the coordinates of the center of the bounding box.

Yolo includes the following python helper function to easily achieve that:

def convert(size, box):
    # size: (image_width, image_height) in pixels
    # box: (xmin, xmax, ymin, ymax) in pixels
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    x = x*dw
    w = w*dw
    y = y*dh
    h = h*dh
    return (x,y,w,h)
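As a quick check of the helper, here is a hypothetical example (the image size and box values are made up): a 640x480 image with a box spanning x 100-300 and y 200-400 pixels. The function is repeated so the snippet runs standalone.

```python
# convert() as given above, repeated so this snippet is self-contained.
def convert(size, box):
    # size: (image_width, image_height); box: (xmin, xmax, ymin, ymax)
    dw = 1./(size[0])
    dh = 1./(size[1])
    x = (box[0] + box[1])/2.0 - 1
    y = (box[2] + box[3])/2.0 - 1
    w = box[1] - box[0]
    h = box[3] - box[2]
    return (x*dw, y*dh, w*dw, h*dh)

# Made-up box on a made-up 640x480 image.
x, y, w, h = convert((640, 480), (100, 300, 200, 400))
print("0 %f %f %f %f" % (x, y, w, h))  # a complete label line for class 0
```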

Names file

This file contains the label string for each class. The first line corresponds to class 0, the second line to class 1, and so on.
e.g. Contents of classes.names:

classA
classB
classC

This would create the following relationship:

Class ID (labels)  Class identifier
0                  classA
1                  classB
2                  classC
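In code, the class ID is simply the zero-based line index. A sketch using the example classes.names contents above:

```python
# classes.names contents from the example above, inlined.
names_text = "classA\nclassB\nclassC\n"

class_names = names_text.splitlines()   # line i -> class ID i
print(class_names[0])                   # classA (class 0)

# Reverse lookup (class name -> ID), handy when writing label files.
name_to_id = {name: i for i, name in enumerate(class_names)}
print(name_to_id["classC"])             # 2
```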

CFG file

This file is a darknet configuration file. To simplify the explanation:

Modifications required to train, according to:

GPU memory available:

# Training
 batch=64 #Number of images to move to GPU memory on each batch.

Number of classes

The number of classes should be set on each of the [yolo] sections in the CFG file.

classes=NUM_CLASSES (e.g. 1, 2, 3, 4). The number should match the number of entries in the names file.

Number of Filters

Before each [yolo] section, the number of filters in the [convolutional] layer should also be updated to match the following formula:

filters=(classes + 5) * 3

For instance, for 3 classes:

classes=3
filters=24
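The filters arithmetic can be sanity-checked in a couple of lines (3 is the number of anchor boxes predicted per scale in YOLOv3, and the 5 covers the 4 box coordinates plus the objectness score):

```python
def yolo_filters(num_classes, anchors_per_scale=3):
    # Each predicted box carries 4 coordinates + 1 objectness + class scores.
    return (num_classes + 5) * anchors_per_scale

print(yolo_filters(3))   # 24
print(yolo_filters(20))  # 75 (e.g. the 20 VOC classes)
```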

train.txt file

This plain text file lists each of the images that will be used for training. Each line should include the absolute path to the image.
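A sketch of generating such a file (the directory layout and filenames are made up; any listing method works):

```python
import os
import tempfile

# Throwaway directory with two empty files standing in for a dataset.
img_dir = tempfile.mkdtemp()
for name in ("001.jpg", "002.jpg"):
    open(os.path.join(img_dir, name), "w").close()

# train.txt: one absolute image path per line.
lines = sorted(
    os.path.abspath(os.path.join(img_dir, f))
    for f in os.listdir(img_dir)
    if f.endswith(".jpg")
)
with open(os.path.join(img_dir, "train.txt"), "w") as out:
    out.write("\n".join(lines) + "\n")

print(len(lines))  # 2
```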


test.txt file

In the same way as the train.txt file, this text file contains the paths to the images used for testing, one per line.


DATA file

This plain text file summarizes the dataset using the following format:

classes= 20
train  = /home/user/dataset/train.txt
valid  = /home/user/dataset/test.txt
names = /home/user/dataset/classes.names
backup = /home/user/dataset/backup
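The format is plain key = value pairs. A sketch of reading it into a dict (the file contents are the example above, inlined so the snippet runs standalone):

```python
# The example .data contents, inlined for a self-contained sketch.
data_text = """\
classes= 20
train  = /home/user/dataset/train.txt
valid  = /home/user/dataset/test.txt
names = /home/user/dataset/classes.names
backup = /home/user/dataset/backup
"""

options = {}
for line in data_text.splitlines():
    if "=" in line:
        key, value = line.split("=", 1)
        options[key.strip()] = value.strip()

print(options["names"])         # /home/user/dataset/classes.names
print(int(options["classes"]))  # 20
```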


Training can then be started with:

./darknet detector train file.data file.cfg darknet53.conv.74

TensorFlow and ROS Kinetic with Python 3.5 Natively


Native setup, no venv.

  1. Create working directory
    $ mkdir ~/py3_tf_ros && cd ~/py3_tf_ros

  2. Install Python 3.5 packages
    $ sudo apt-get install python3-dev python3-yaml python3-setuptools

  3. Install rospkg for Python 3

$ git clone git://
$ cd rospkg && sudo python3 setup.py install && cd ..

  4. Install catkin_pkg for Python 3

$ git clone git://
$ cd catkin_pkg && sudo python3 setup.py install && cd ..

  5. Install catkin for Python 3

$ git clone git://
$ cd catkin && sudo python3 setup.py install && cd ..
  6. Install OpenCV for Python 3
    pip3 install opencv-python

  7. Download the desired TensorFlow version

  8. Set up the Nvidia Drivers, CUDA and CUDNN according to the TensorFlow version.

  9. Install the downloaded TensorFlow package
    pip3 install --user --upgrade tensorflow-package.whl

  10. Check that the symbolic link /usr/local/cuda corresponds to the CUDA version required by TensorFlow (if there are several CUDA versions installed in the system).

  11. Test TensorFlow
    python3 -c "import tensorflow as tf; print(tf.__version__)"
    This should display the version 1.XX.YY you selected.

  12. It is now possible to import ros and import tensorflow in the same script.

  13. If the cv2 package is also required, import cv2 before importing ros:

import cv2
import ros

TensorFlow and ROS Kinetic with Python 2.7


  1. Create a working directory
    $ mkdir ~/tf_ros && cd ~/tf_ros

  2. Download and Install an NVIDIA Driver >= 384.x. We recommend RUN files; please check our other post on this topic.

  3. Download and Install CUDA 9.0

  4. Download and Install CUDNN for CUDA 9.0

  5. Download TensorFlow for Python 2.7
    $ wget

  6. Install Python VirtualEnv
    $ sudo apt-get install python-virtualenv

  7. Create Python 2.7 virtual environment.
    $ virtualenv --system-site-packages -p python2 p2tf_venv
    (p2tf_venv can be any name)

  8. Activate environment
    $ source p2tf_venv/bin/activate

  9. Install TensorFlow
    (p2tf_venv) $ pip install --upgrade tensorflow_gpu-1.10.1-cp27-none-linux_x86_64.whl

  10. Make sure the CUDA 9 libraries and binaries are in LD_LIBRARY_PATH and PATH.

$ export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH && export PATH=/usr/local/cuda/bin:$PATH

  11. Check the ROS libraries are in the LD_LIBRARY_PATH as well.
    $ echo $LD_LIBRARY_PATH
    It should contain /opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu

  12. Check that Python ROS libraries are also included in the $PYTHONPATH.
    $ echo $PYTHONPATH
    It should contain /opt/ros/kinetic/lib/python2.7/dist-packages.

  13. Test your installation is correct.
    (p2tf_venv) $ python -c "import tensorflow as tf; print(tf.__version__)"
    It should print the TensorFlow version installed, such as 1.10.1.

  14. Ready to go!

Running Dreamview (Apollo 3.0) for pre-Haswell processors

Problem overview

The Apollo docker includes a version of PCL which was built with the FMA instruction set (fused multiply-add). On any Intel processor older than Haswell (and perhaps even some Broadwell processors), this causes an illegal instruction and prevents Dreamview from starting. In previous versions of Apollo this was difficult to troubleshoot since no error details were provided - in fact, Dreamview would be reported to be running but localhost:8888 could not be accessed. Since 3.0, however, Dreamview reports the following:
dreamview: ERROR (spawn error)
Note: this can probably be caused by many other issues. However, if you are running a pre-Haswell processor, the solution provided here is likely to work. You can use gdb to troubleshoot spawn errors.

The solution

You have to build PCL within the docker to overcome this problem.

cd /apollo
git clone
cd pcl
git checkout -b pcl-custom

Now you need to add the following to /apollo/pcl/CMakeLists.txt:
- Below the line set(CMAKE_CONFIGURATION_TYPES "Debug;Release" CACHE STRING "possible configurations" FORCE), add the following:

    set (CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=gnu++11")
    message("Build with c++11 support")
else ()
endif ()

Build PCL with the default options.

cd /apollo/pcl
mkdir build
cd build
cmake ..
make

Replace the current PCL libraries with the new ones you just made (you can backup first).

sudo mkdir -p /usr/local/lib/pcl.backup
sudo mv /usr/local/lib/libpcl* /usr/local/lib/pcl.backup
sudo cp -a lib/* /usr/local/lib/
sudo ldconfig

Now you have to re-build Apollo.

cd /apollo
./ clean
./ build_gpu

And hopefully you can now start and access Dreamview with the usual ./scripts/

How to setup industrial_ci locally and test Autoware

industrial_ci will build, install and test every package in an isolated manner inside a clean Docker image. In this way, missing dependencies (system and other packages) can be easily spotted and fixed before publishing a package. This eases the deployment of Autoware (or any ROS package).

Running locally instead of on the cloud (travis-ci) speeds up the build time.

Autoware and industrial_ci require two different catkin workspaces.


  • Docker installed and working.
  • Direct connection to the Internet (No proxy). See below for proxy instructions.


  1. Install catkin tools:$ sudo apt-get install python-catkin-tools.
  2. Clone Autoware (if you don't have it already): $ git clone (if you wish to test a specific branch, change to that branch or use -b).
  3. Create a directory to hold a new workspace at the same level as Autoware, and subdirectory src. (In this example catkin_ws and base dir being home ~).
    ~/$ mkdir -p catkin_ws/src && cd catkin_ws/src.
  4. Initialize that workspace running catkin_init_workspace inside src of catkin_ws.
    ~/catkin_ws/src$ catkin_init_workspace
  5. Clone industrial_ci inside the catkin_ws/src.
    ~/catkin_ws/src$ git clone
  6. The directory structure should look as follows:
├── Autoware
│   ├── ros
│   │   └── src
│   └── .travis.yml
├── catkin_ws
          └── src
               └── industrial_ci
  7. Go to catkin_ws and build industrial_ci. ~/catkin_ws$ catkin config --install && catkin b industrial_ci && source install/setup.bash.
  8. Once finished, move to the Autoware directory: ~/catkin_ws$ cd ~/Autoware.
  9. Run industrial_ci:

- Using rosrun industrial_ci run_ci ROS_DISTRO=kinetic ROS_REPO=ros or rosrun industrial_ci run_ci ROS_DISTRO=indigo ROS_REPO=ros. This method manually specifies the distribution and repository sources.
- ~/Autoware$ rosrun industrial_ci run_travis . This will parse the .travis.yml and run in a similar fashion to travis-ci.

For more detailed info:

How to run behind a proxy

Configure your Docker daemon to use the proxy:

Ubuntu 14.04

Edit the file /etc/default/docker, go to the proxy section, and change the values:

# If you need Docker to use an HTTP proxy, it can also be specified here.
export http_proxy="http://proxy.address:port"
export https_proxy="https://proxy.address:port"

Execute in a terminal sudo service docker restart.

Ubuntu 16.04

  1. Create config container directory:
    $ sudo mkdir -p /etc/systemd/system/docker.service.d

  2. Create the http-conf.d file inside:
    $ sudo nano /etc/systemd/system/docker.service.d/http-proxy.conf

  3. Paste the following text and edit with your proxy values:

[Service]
Environment="HTTP_PROXY=http://proxy.address:port"
Environment="HTTPS_PROXY=https://proxy.address:port"

  4. Save the file, then reload and restart Docker:
    $ sudo systemctl daemon-reload && sudo systemctl restart docker

Modifications to industrial_ci

  1. Add the following lines in ~/catkin_ws/src/industrial_ci/industrial_ci/src/ at line 217:


RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections

Change to:

ENV http_proxy "http://proxy.address:port"
ENV https_proxy "https://proxy.address:port"
RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
  2. Change line 80 of ~/catkin_ws/src/industrial_ci/industrial_ci/src/tests/


# Setup rosdep
rosdep --version
if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
    sudo rosdep init

Change to:

# Setup rosdep
rosdep --version
if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
    rosdep init
  3. In ~/catkin_ws/src/industrial_ci/industrial_ci/src/tests/ line 56:


ici_time_start setup_apt

sudo apt-get update -qq

# If more DEBs needed during preparation, define ADDITIONAL_DEBS variable where you list the name of DEB(S, delimitted by whitespace)
if [ "$ADDITIONAL_DEBS" ]; then
    sudo apt-get install -qq -y $ADDITIONAL_DEBS || error "One or more additional deb installation is failed. Exiting."
source /opt/ros/$ROS_DISTRO/setup.bash

ici_time_end  # setup_apt

if [ "$CCACHE_DIR" ]; then
    ici_time_start setup_ccache
    sudo apt-get install -qq -y ccache || error "Could not install ccache. Exiting."
    export PATH="/usr/lib/ccache:$PATH"
    ici_time_end  # setup_ccache

ici_time_start setup_rosdep

Change to:

ici_time_start setup_apt

apt-get update -qq

# If more DEBs needed during preparation, define ADDITIONAL_DEBS variable where you list the name of DEB(S, delimitted by whitespace)
if [ "$ADDITIONAL_DEBS" ]; then
    apt-get install -qq -y $ADDITIONAL_DEBS || error "One or more additional deb installation is failed. Exiting."
source /opt/ros/$ROS_DISTRO/setup.bash

ici_time_end  # setup_apt

if [ "$CCACHE_DIR" ]; then
    ici_time_start setup_ccache
    apt-get install -qq -y ccache || error "Could not install ccache. Exiting."
    export PATH="/usr/lib/ccache:$PATH"
    ici_time_end  # setup_ccache

ici_time_start setup_rosdep
  4. In the same way, change line 70 of ~/catkin_ws/src/industrial_ci/industrial_ci/src/tests/


    # Setup rosdep
    rosdep --version
    #if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
        sudo rosdep init

Change to:

    # Setup rosdep
    rosdep --version
    #if ! [ -d /etc/ros/rosdep/sources.list.d ]; then
        rosdep init

Compile again with catkin, then you can run industrial_ci as indicated above.

Apollo messages

This post provides an overview of the messages used by Apollo, as captured from the demo rosbag for testing the perception module.

To do:

  • Investigate protobuf message definitions for variable types
  • Investigate contents of standard ROS type messages

Standard message types


These messages use the standard ROS message format for their respective types.

Custom message types

Apollo uses a custom version of ROS which replaces the msg message description language with Real Time Publish Subscribe (RTPS) transport and Google protobuf messages for publishing and receiving the following messages.


Messages using these types cannot be interpreted using rosbag tools as only the header is compatible. However, their content can be dumped as text using read_messages().

Message topics

The demo rosbag contains the following topics:

topics: /apollo/canbus/chassis 5851 msgs : pb_msgs/Chassis
/apollo/localization/pose 5856 msgs : pb_msgs/LocalizationEstimate
/apollo/sensor/camera/traffic/image_long 471 msgs : sensor_msgs/Image
/apollo/sensor/camera/traffic/image_short 469 msgs : sensor_msgs/Image
/apollo/sensor/conti_radar 789 msgs : pb_msgs/ContiRadar
/apollo/sensor/gnss/best_pose 59 msgs : pb_msgs/GnssBestPose
/apollo/sensor/gnss/corrected_imu 5838 msgs : pb_msgs/Imu
/apollo/sensor/gnss/gnss_status 59 msgs : pb_msgs/GnssStatus
/apollo/sensor/gnss/imu 11630 msgs : pb_msgs/Imu
/apollo/sensor/gnss/ins_stat 118 msgs : pb_msgs/InsStat
/apollo/sensor/gnss/odometry 5848 msgs : pb_msgs/Gps
/apollo/sensor/gnss/rtk_eph 49 msgs : pb_msgs/GnssEphemeris
/apollo/sensor/gnss/rtk_obs 352 msgs : pb_msgs/EpochObservation
/apollo/sensor/velodyne64/compensator/PointCloud2 587 msgs : sensor_msgs/PointCloud2
/tf 11740 msgs : tf2_msgs/TFMessage
/tf_static 1 msg : tf2_msgs/TFMessage

Example messages

All custom messages were read using the following script, within the Apollo docker:

import rosbag

bag = rosbag.Bag('../apollo_data/2018-01-03-19-37-16.bag', 'r')

read_topic = '/apollo/canbus/chassis'

counter = 0

for topic, msg, t in bag.read_messages():
    if topic == read_topic:
        out_file = 'messages/' + str(counter) + '.txt'
        f = file(out_file, 'w')
        f.write(str(msg))
        f.close()
        counter = counter + 1

bag.close()

Note that this will not work outside of the Apollo docker - the files will be empty.


Of custom protobuf type pb_msgs/Chassis.

engine_started: true
engine_rpm: 0.0
speed_mps: 0.261111110449
odometer_m: 0.0
fuel_range_m: 0
throttle_percentage: 14.9950408936
brake_percentage: 20.5966281891
steering_percentage: 4.36170196533
steering_torque_nm: -0.75
parking_brake: false
error_code: NO_ERROR
gear_location: GEAR_DRIVE
header {
    timestamp_sec: 1513807824.58
    module_name: "chassis"
    sequence_num: 96620
signal {
    turn_signal: TURN_NONE
    horn: false
chassis_gps {
    latitude: 37.416912
    longitude: -122.016053333
    gps_valid: true
    year: 17
    month: 12
    day: 20
    hours: 22
    minutes: 10
    seconds: 23
    compass_direction: 270.0
    pdop: 0.8
    is_gps_fault: false
    is_inferred: false
    altitude: -42.5
    heading: 285.75
    hdop: 0.4
    vdop: 0.6
    quality: FIX_3D
    num_satellites: 20
    gps_speed: 0.89408


Of custom protobuf type pb_msgs/LocalizationEstimate.

header {
timestamp_sec: 1513807826.05
module_name: "localization"
sequence_num: 96460
pose {
    position {
    x: 587068.814494
    y: 4141577.34872
    z: -31.0619329279
    orientation {
        qx: 0.0354450020653
        qy: 0.0137914670665
        qz: -0.608585615062
        qw: -0.792576177036
    linear_velocity {
        x: -1.08479686092
        y: 0.30964034124
        z: -0.00187507107555
    linear_acceleration {
        x: -2.10530393576
        y: 0.553837321635
        z: 0.170289232445
    angular_velocity {
        x: 0.00630864843147
        y: 0.0111583669994
        z: 0.0146261464379
    heading: 2.88124066533
    linear_acceleration_vrf {
        x: -0.0137881469155
        y: 2.15869303259
        z: 0.328470914294
    angular_velocity_vrf {
        x: 0.0120972352232
        y: -0.00428235807315
        z: 0.0146133729179
    euler_angles {
        x: 0.0213395674976
        y: 0.0730372234802
        z: -3.40194464185
measurement_time: 1513807826.04


This is a standard sensor_msgs/Image message (topic /apollo/sensor/camera/traffic/image_long).


This is a standard sensor_msgs/Image message (topic /apollo/sensor/camera/traffic/image_short).


Of custom protobuf type pb_msgs/ContiRadar.

header {
timestamp_sec: 1513807824.52
module_name: "conti_radar"
sequence_num: 12971
contiobs {
header {
timestamp_sec: 1513807824.52
module_name: "conti_radar"
sequence_num: 12971
clusterortrack: false
obstacle_id: 0
longitude_dist: 107.6
lateral_dist: -17.2
longitude_vel: -0.25
lateral_vel: -0.75
rcs: 7.5
dynprop: 1
longitude_dist_rms: 0.371
lateral_dist_rms: 0.616
longitude_vel_rms: 0.371
lateral_vel_rms: 0.616
probexist: 1.0
meas_state: 2
longitude_accel: 0.23
lateral_accel: 0.0
oritation_angle: 0.0
longitude_accel_rms: 0.794
lateral_accel_rms: 0.005
oritation_angle_rms: 1.909
length: 2.8
width: 2.4
obstacle_class: 1
...[99 more `contiobs`]...
object_list_status {
nof_objects: 100
meas_counter: 8464
interface_version: 0

Of custom protobuf type `pb_msgs/GnssBestPose`.

header {
timestamp_sec: 1513807825.02
measurement_time: 1197843043.0
sol_status: SOL_COMPUTED
sol_type: NARROW_INT
latitude: 37.4169108497
longitude: -122.016059063
height_msl: 2.09512365051
undulation: -32.0999984741
datum_id: WGS84
latitude_std_dev: 0.0114300707355
longitude_std_dev: 0.00970683153719
height_std_dev: 0.0248824004084
base_station_id: "0"
differential_age: 2.0
solution_age: 0.0
num_sats_tracked: 13
num_sats_in_solution: 13
num_sats_l1: 13
num_sats_multi: 11
extended_solution_status: 33
galileo_beidou_used_mask: 0
gps_glonass_used_mask: 51


Of custom protobuf type pb_msgs/Imu (topic /apollo/sensor/gnss/corrected_imu).

This appears to be empty, with just a timestamp header.


Of custom protobuf type pb_msgs/GnssStatus.

header {
timestamp_sec: 1513807826.02
solution_completed: true
solution_status: 0
position_type: 50
num_sats: 14


Of custom protobuf type pb_msgs/Imu (topic /apollo/sensor/gnss/imu).

header {
timestamp_sec: 1513807824.58
measurement_time: 1197843042.56
measurement_span: 0.00499999988824
linear_acceleration {
x: -0.172816216946
y: -0.864528119564
z: 9.75685194135
angular_velocity {
x: -0.000550057197804
y: 0.00203638196634
z: 0.00155888550527


Of custom protobuf type pb_msgs/InsStat.

header {
timestamp_sec: 1513807826.0
ins_status: 3
pos_type: 56


Of custom protobuf type pb_msgs/Gps.

header {
timestamp_sec: 1513807824.58
localization {
position {
x: 587069.287353
y: 4141577.22403
z: -31.0546750054
orientation {
qx: 0.0399296504032
qy: 0.0164343412444
qz: -0.606054063971
qw: -0.79425059458
linear_velocity {
x: -0.231635539107
y: 0.0795322332148
z: 0.00147877897123


Of custom protobuf type pb_msgs/GnssEphemeris.

gnss_type: GPS_SYS
keppler_orbit {
gnss_type: GPS_SYS
sat_prn: 23
gnss_time_type: GPS_TIME
week_num: 1980
af0: -0.000219020526856
af1: 0.0
af2: 0.0
iode: 43.0
deltan: 5.23450375238e-09
m0: -1.97123659751
e: 0.0118623136077
roota: 5153.61836624
toe: 345600.0
toc: 345600.0
cic: 1.30385160446e-07
crc: 266.5
cis: -8.56816768646e-08
crs: -9.28125
cuc: -7.13393092155e-07
cus: 5.16884028912e-06
omega0: 2.20992318653
omega: -2.40973031377
i0: 0.943533454733
omegadot: -8.16355433052e-09
idot: -2.10723063176e-11
accuracy: 2
health: 0
tgd: -2.04890966415e-08
iodc: 43.0
glonass_orbit {
gnss_type: GLO_SYS
slot_prn: 10
gnss_time_type: GLO_TIME
toe: 339318.0
frequency_no: -7
week_num: 1980
week_second_s: 339318.0
tk: 3990.0
clock_offset: 1.41123309731e-05
clock_drift: 9.09494701773e-13
health: 0
position_x: -16965363.2812
position_y: -15829665.5273
position_z: -10698784.1797
velocity_x: -994.89402771
velocity_y: -1087.91637421
velocity_z: 3184.79061127
accelerate_x: -2.79396772385e-06
accelerate_y: -3.72529029846e-06
accelerate_z: -1.86264514923e-06
infor_age: 0.0


Of custom protobuf type pb_msgs/EpochObservation.

receiver_id: 0
gnss_time_type: GPS_TIME
gnss_week: 1980
gnss_second_s: 339042.6
sat_obs_num: 13
sat_obs {
sat_prn: 31
sat_sys: GPS_SYS
band_obs_num: 2
band_obs {
band_id: GPS_L1
frequency_value: 0.0
pseudo_type: CORSE_CODE
pseudo_range: 22104170.8773
carrier_phase: 116158209.251
loss_lock_index: 173
doppler: -2880.74853516
snr: 173.0
band_obs {
band_id: GPS_L2
frequency_value: 0.0
pseudo_type: CORSE_CODE
pseudo_range: 22104174.5591
carrier_phase: 90512897.445
loss_lock_index: 140
doppler: -2244.73510742
snr: 140.0

...[13 more sat_obs]...


This is a standard sensor_msgs/PointCloud2 message.


This is a standard tf2_msgs/TFMessage message. The messages alternate between frame_id: world to child_frame_id: localization transform messages and frame_id: world to child_frame_id: novatel transform messages.


This is a standard tf2_msgs/TFMessage message, for the frame_id: novatel to velodyne64 transform.